[
  {
    "path": ".github/ISSUE_TEMPLATE/1.bug.yml",
    "content": "name: Bug report 🐛\ndescription: 项目运行中遇到的Bug或问题。\nlabels: ['status: needs check']\nbody:\n  - type: markdown\n    attributes:\n      value: |\n        ### ⚠️ 前置确认\n        1. 网络能够访问openai接口\n        2. python 已安装：版本在 3.7 ~ 3.10 之间\n        3. `git pull` 拉取最新代码\n        4. 执行`pip3 install -r requirements.txt`，检查依赖是否满足\n        5. 拓展功能请执行`pip3 install -r requirements-optional.txt`，检查依赖是否满足\n        6. [FAQS](https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs) 中无类似问题\n  - type: checkboxes\n    attributes:\n      label: 前置确认\n      options:\n        - label: 我确认我运行的是最新版本的代码，并且安装了所需的依赖，在[FAQS](https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs)中也未找到类似问题。\n          required: true\n  - type: checkboxes\n    attributes:\n      label: ⚠️ 搜索issues中是否已存在类似问题\n      description: >\n        请在 [历史issue](https://github.com/zhayujie/chatgpt-on-wechat/issues) 中清空输入框，搜索你的问题\n        或相关日志的关键词来查找是否存在类似问题。\n      options:\n        - label: 我已经搜索过issues和disscussions，没有跟我遇到的问题相关的issue\n          required: true\n  - type: markdown\n    attributes:\n      value: |\n        请在上方的`title`中填写你对你所遇到问题的简略总结，这将帮助其他人更好的找到相似问题，谢谢❤️。\n  - type: dropdown\n    attributes:\n      label: 操作系统类型?\n      description: >\n        请选择你运行程序的操作系统类型。\n      options:\n        - Windows\n        - Linux\n        - MacOS\n        - Docker\n        - Railway\n        - Windows Subsystem for Linux (WSL)\n        - Other (请在问题中说明)\n    validations:\n      required: true\n  - type: dropdown\n    attributes:\n      label: 运行的python版本是?\n      description: |\n        请选择你运行程序的`python`版本。\n        注意：在`python 3.7`中，有部分可选依赖无法安装。\n        经过长时间的观察，我们认为`python 3.8`是兼容性最好的版本。\n        `python 3.7`~`python 3.10`以外版本的issue，将视情况直接关闭。\n      options:\n        - python 3.7\n        - python 3.8\n        - python 3.9\n        - python 3.10\n        - other\n    validations:\n      required: true\n  - type: dropdown\n    attributes:\n      label: 使用的chatgpt-on-wechat版本是?\n      description: |\n        请确保你使用的是 [releases](https://github.com/zhayujie/chatgpt-on-wechat/releases) 中的最新版本。\n        如果你使用git, 请使用`git branch`命令来查看分支。\n      options:\n        - Latest Release\n        - Master (branch)\n    validations:\n      required: true\n  - type: dropdown\n    attributes:\n      label: 运行的`channel`类型是?\n      description: |\n        请确保你正确配置了该`channel`所需的配置项，所有可选的配置项都写在了[该文件中](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/config.py)，请将所需配置项填写在根目录下的`config.json`文件中。\n      options:\n        - wechatmp(公众号, 订阅号)\n        - wechatmp_service(公众号, 服务号)\n        - terminal\n        - other\n    validations:\n      required: true\n  - type: textarea\n    attributes:\n      label: 复现步骤 🕹\n      description: |\n        **⚠️ 不能复现将会关闭issue.**\n  - type: textarea\n    attributes:\n      label: 问题描述 😯\n      description: 详细描述出现的问题，或提供有关截图。\n  - type: textarea\n    attributes:\n      label: 终端日志 📒\n      description: |\n        在此处粘贴终端日志，可在主目录下`run.log`文件中找到，这会帮助我们更好的分析问题，注意隐去你的API key。\n        如果在配置文件中加入`\"debug\": true`，打印出的日志会更有帮助。\n\n        <details>\n        <summary><i>示例</i></summary>\n        ```log\n        [DEBUG][2023-04-16 00:23:22][plugin_manager.py:157] - Plugin SUMMARY triggered by event Event.ON_HANDLE_CONTEXT\n        [DEBUG][2023-04-16 00:23:22][main.py:221] - [Summary] on_handle_context. 
content: $总结前100条消息\n        [DEBUG][2023-04-16 00:23:24][main.py:240] - [Summary] limit: 100, duration: -1 seconds\n        [ERROR][2023-04-16 00:23:24][chat_channel.py:244] - Worker return exception: name 'start_date' is not defined\n        Traceback (most recent call last):\n          File \"C:\\ProgramData\\Anaconda3\\lib\\concurrent\\futures\\thread.py\", line 57, in run\n            result = self.fn(*self.args, **self.kwargs)\n          File \"D:\\project\\chatgpt-on-wechat\\channel\\chat_channel.py\", line 132, in _handle\n            reply = self._generate_reply(context)\n          File \"D:\\project\\chatgpt-on-wechat\\channel\\chat_channel.py\", line 142, in _generate_reply\n            e_context = PluginManager().emit_event(EventContext(Event.ON_HANDLE_CONTEXT, {\n          File \"D:\\project\\chatgpt-on-wechat\\plugins\\plugin_manager.py\", line 159, in emit_event\n            instance.handlers[e_context.event](e_context, *args, **kwargs)\n          File \"D:\\project\\chatgpt-on-wechat\\plugins\\summary\\main.py\", line 255, in on_handle_context\n            records = self._get_records(session_id, start_time, limit)\n          File \"D:\\project\\chatgpt-on-wechat\\plugins\\summary\\main.py\", line 96, in _get_records\n            c.execute(\"SELECT * FROM chat_records WHERE sessionid=? and timestamp>? ORDER BY timestamp DESC LIMIT ?\", (session_id, start_date, limit))\n        NameError: name 'start_date' is not defined\n        [INFO][2023-04-16 00:23:36][app.py:14] - signal 2 received, exiting...\n        ```\n        </details>\n      value: |\n        ```log\n        <此处粘贴终端日志>\n        ```"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/2.feature.yml",
    "content": "name: Feature request 🚀\ndescription: 提出你对项目的新想法或建议。\nlabels: ['status: needs check']\nbody:\n  - type: markdown\n    attributes:\n      value: |\n        请在上方的`title`中填写简略总结，谢谢❤️。\n  - type: checkboxes\n    attributes:\n      label: ⚠️ 搜索是否存在类似issue\n      description: >\n        请在 [历史issue](https://github.com/zhayujie/chatgpt-on-wechat/issues) 中清空输入框，搜索关键词查找是否存在相似issue。\n      options:\n        - label: 我已经搜索过issues和disscussions，没有发现相似issue\n          required: true\n  - type: textarea\n    attributes:\n      label: 总结\n      description: 描述feature的功能。\n  - type: textarea\n    attributes:\n      label: 举例\n      description: 提供聊天示例，草图或相关网址。\n  - type: textarea\n    attributes:\n      label: 动机\n      description: 描述你提出该feature的动机，比如没有这项feature对你的使用造成了怎样的影响。 请提供更详细的场景描述，这可能会帮助我们发现并提出更好的解决方案。"
  },
  {
    "path": ".github/workflows/deploy-image-arm.yml",
    "content": "# This workflow uses actions that are not certified by GitHub.\n# They are provided by a third-party and are governed by\n# separate terms of service, privacy policy, and support\n# documentation.\n\n# GitHub recommends pinning actions to a commit SHA.\n# To get a newer version, you will need to update the SHA.\n# You can also reference a tag or branch, but the action may change without warning.\n\nname: Create and publish a Docker image\n\non:\n  push:\n    branches: ['master']\n  create:\nenv:\n  REGISTRY: ghcr.io\n  IMAGE_NAME: ${{ github.repository }}\n\njobs:\n  build-and-push-image:\n    if: github.repository == 'zhayujie/chatgpt-on-wechat'\n    runs-on: ubuntu-latest\n    permissions:\n      contents: read\n      packages: write\n\n    steps:\n      - name: Checkout repository\n        uses: actions/checkout@v3\n\n      - name: Set up QEMU\n        uses: docker/setup-qemu-action@v1\n\n      - name: Set up Docker Buildx\n        id: buildx\n        uses: docker/setup-buildx-action@v1\n\n      - name: Available platforms\n        run: echo ${{ steps.buildx.outputs.platforms }}\n\n      - name: Log in to the Container registry\n        uses: docker/login-action@v2\n        with:\n          registry: ${{ env.REGISTRY }}\n          username: ${{ github.actor }}\n          password: ${{ secrets.GITHUB_TOKEN }}\n\n      - name: Extract metadata (tags, labels) for Docker\n        id: meta\n        uses: docker/metadata-action@v4\n        with:\n          images: |\n            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}\n\n      - name: Build and push Docker image\n        uses: docker/build-push-action@v3\n        with:\n          context: .\n          push: true\n          file: ./docker/Dockerfile.latest\n          platforms: linux/arm64\n          tags: ${{ steps.meta.outputs.tags }}-arm64\n          labels: ${{ steps.meta.outputs.labels }}\n\n      - uses: actions/delete-package-versions@v4\n        with:\n          package-name: 'chatgpt-on-wechat'\n          package-type: 'container'\n          min-versions-to-keep: 10\n          delete-only-untagged-versions: 'true'\n          token: ${{ secrets.GITHUB_TOKEN }}"
  },
  {
    "path": ".github/workflows/deploy-image.yml",
    "content": "# This workflow uses actions that are not certified by GitHub.\n# They are provided by a third-party and are governed by\n# separate terms of service, privacy policy, and support\n# documentation.\n\n# GitHub recommends pinning actions to a commit SHA.\n# To get a newer version, you will need to update the SHA.\n# You can also reference a tag or branch, but the action may change without warning.\n\nname: Create and publish a Docker image\n\non:\n  push:\n    branches: ['master']\n  create:\nenv:\n  REGISTRY: ghcr.io\n  IMAGE_NAME: ${{ github.repository }}\n\njobs:\n  build-and-push-image:\n    if: github.repository == 'zhayujie/chatgpt-on-wechat'\n    runs-on: ubuntu-latest\n    permissions:\n      contents: read\n      packages: write\n\n    steps:\n      - name: Checkout repository\n        uses: actions/checkout@v3\n\n      - name: Login to Docker Hub\n        uses: docker/login-action@v2\n        with:\n          username: ${{ secrets.DOCKERHUB_USERNAME }}\n          password: ${{ secrets.DOCKERHUB_TOKEN }}\n\n      - name: Log in to the Container registry\n        uses: docker/login-action@v2\n        with:\n          registry: ${{ env.REGISTRY }}\n          username: ${{ github.actor }}\n          password: ${{ secrets.GITHUB_TOKEN }}\n\n      - name: Extract metadata (tags, labels) for Docker\n        id: meta\n        uses: docker/metadata-action@v4\n        with:\n          images: |\n            ${{ env.IMAGE_NAME }}\n            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}\n\n      - name: Build and push Docker image\n        uses: docker/build-push-action@v3\n        with:\n          context: .\n          push: true\n          file: ./docker/Dockerfile.latest\n          tags: ${{ steps.meta.outputs.tags }}\n          labels: ${{ steps.meta.outputs.labels }}\n\n      - uses: actions/delete-package-versions@v4\n        with:\n          package-name: 'chatgpt-on-wechat'\n          package-type: 'container'\n          min-versions-to-keep: 10\n          delete-only-untagged-versions: 'true'\n          token: ${{ secrets.GITHUB_TOKEN }}"
  },
  {
    "path": ".gitignore",
    "content": ".DS_Store\n.idea\n.vscode\n.venv\n.vs\n__pycache__/\nvenv*\n*.pyc\npython\nconfig.json\nQR.png\nnohup.out\ntmp\nplugins.json\n*.log\nlogs/\nworkspace\nconfig.yaml\nuser_datas.pkl\nchatgpt_tool_hub/\nplugins/**/\n!plugins/bdunit\n!plugins/dungeon\n!plugins/finish\n!plugins/godcmd\n!plugins/tool\n!plugins/banwords\n!plugins/banwords/**/\nplugins/banwords/__pycache__\nplugins/banwords/lib/__pycache__\n!plugins/hello\n!plugins/role\n!plugins/keyword\n!plugins/linkai\n!plugins/agent\nclient_config.json\nref/\n.cursor/\nlocal/\n"
  },
  {
    "path": "Dockerfile",
    "content": "FROM ghcr.io/zhayujie/chatgpt-on-wechat:latest\n\nENTRYPOINT [\"/entrypoint.sh\"]"
  },
  {
    "path": "LICENSE",
    "content": "Copyright (c) 2022 zhayujie\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE."
  },
  {
    "path": "README.md",
    "content": "<p align=\"center\"><img src= \"https://github.com/user-attachments/assets/eca9a9ec-8534-4615-9e0f-96c5ac1d10a3\" alt=\"Chatgpt-on-Wechat\" width=\"550\" /></p>\n\n<p align=\"center\">\n  <a href=\"https://github.com/zhayujie/chatgpt-on-wechat/releases/latest\"><img src=\"https://img.shields.io/github/v/release/zhayujie/chatgpt-on-wechat\" alt=\"Latest release\"></a>\n  <a href=\"https://github.com/zhayujie/chatgpt-on-wechat/blob/master/LICENSE\"><img src=\"https://img.shields.io/github/license/zhayujie/chatgpt-on-wechat\" alt=\"License: MIT\"></a>\n  <a href=\"https://github.com/zhayujie/chatgpt-on-wechat\"><img src=\"https://img.shields.io/github/stars/zhayujie/chatgpt-on-wechat?style=flat-square\" alt=\"Stars\"></a> <br/>\n  [中文] | [<a href=\"docs/en/README.md\">English</a>] | [<a href=\"docs/ja/README.md\">日本語</a>]\n</p>\n\n**CowAgent** 是基于大模型的超级AI助理，能够主动思考和任务规划、操作计算机和外部资源、创造和执行Skills、拥有长期记忆并不断成长。CowAgent 支持灵活切换多种模型，能处理文本、语音、图片、文件等多模态消息，可接入网页、飞书、钉钉、企微智能机器人、QQ、企微自建应用、微信公众号中使用，7*24小时运行于你的个人电脑或服务器中。\n\n<p align=\"center\">\n  <a href=\"https://cowagent.ai/\">🌐 官网</a> &nbsp;·&nbsp;\n  <a href=\"https://docs.cowagent.ai/\">📖 文档中心</a> &nbsp;·&nbsp;\n  <a href=\"https://docs.cowagent.ai/guide/quick-start\">🚀 快速开始</a> &nbsp;·&nbsp;\n  <a href=\"https://link-ai.tech/cowagent/create\">☁️ 在线体验</a>\n</p>\n\n\n\n# 简介\n\n> 该项目既是一个可以开箱即用的超级AI助理，也是一个支持高扩展的Agent框架，可以通过为项目扩展大模型接口、接入渠道、内置工具、Skills系统来灵活实现各种定制需求。核心能力如下：\n\n-  ✅  **复杂任务规划**：能够理解复杂任务并自主规划执行，持续思考和调用工具直到完成目标，支持通过工具操作访问文件、终端、浏览器、定时任务等系统资源\n-  ✅  **长期记忆：** 自动将对话记忆持久化至本地文件和数据库中，包括全局记忆和天级记忆，支持关键词及向量检索\n-  ✅  **技能系统：** 实现了Skills创建和运行的引擎，内置多种技能，并支持通过自然语言对话完成自定义Skills开发\n-  ✅  **多模态消息：** 支持对文本、图片、语音、文件等多类型消息进行解析、处理、生成、发送等操作\n-  ✅  **多模型接入：** 支持OpenAI, Claude, Gemini, DeepSeek, MiniMax、GLM、Qwen、Kimi、Doubao等国内外主流模型厂商\n-  ✅  **多端部署：** 支持运行在本地计算机或服务器，可集成到飞书、钉钉、企业微信、QQ、微信公众号、网页中使用\n\n## 声明\n\n1. 本项目遵循 [MIT开源协议](/LICENSE)，主要用于技术研究和学习，使用本项目时需遵守所在地法律法规、相关政策以及企业章程，禁止用于任何违法或侵犯他人权益的行为。任何个人、团队和企业，无论以何种方式使用该项目、对何对象提供服务，所产生的一切后果，本项目均不承担任何责任。\n2. 成本与安全：Agent模式下Token使用量高于普通对话模式，请根据效果及成本综合选择模型。Agent具有访问所在操作系统的能力，请谨慎选择项目部署环境。同时项目也会持续升级安全机制、并降低模型消耗成本。\n3. 
CowAgent项目专注于开源技术开发，不会参与、授权或发行任何加密货币。\n\n## 演示\n\n- 使用说明(Agent模式)：[CowAgent介绍](https://docs.cowagent.ai/intro/features)\n\n- 免部署在线体验：[CowAgent](https://link-ai.tech/cowagent/create)\n\n- DEMO视频(对话模式)：https://cdn.link-ai.tech/doc/cow_demo.mp4\n\n## 社区\n\n添加小助手微信加入开源项目交流群：\n\n<img width=\"140\" src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/open-community.png\">\n\n<br/>\n\n# 企业服务\n\n<a href=\"https://link-ai.tech\" target=\"_blank\"><img width=\"650\" src=\"https://cdn.link-ai.tech/image/link-ai-intro.jpg\"></a>\n\n> [LinkAI](https://link-ai.tech/) 是面向企业和个人的一站式AI智能体平台，聚合多模态大模型、知识库、技能、工作流等能力，支持一键接入主流平台并管理，支持SaaS、私有化部署等多种模式，可免部署在线运行[CowAgent助理](https://link-ai.tech/cowagent/create)。\n>\n> LinkAI 目前已在智能客服、私域运营、企业效率助手等场景积累了丰富的AI解决方案，在消费、健康、文教、科技制造等各行业沉淀了大模型落地应用的最佳实践，致力于帮助更多企业和开发者拥抱 AI 生产力。\n\n**产品咨询和企业服务** 可联系产品客服：\n\n<img width=\"150\" src=\"https://cdn.link-ai.tech/portal/linkai-customer-service.png\">\n\n<br/>\n\n# 🏷 更新日志\n\n>**2026.03.18：** [2.0.3版本](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/2.0.3)，新增企微智能机器人和 QQ 通道、支持Coding Plan、新增多个模型、Web端文件处理、记忆系统升级。\n\n>**2026.02.27：** [2.0.2版本](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/2.0.2)，Web 控制台全面升级（流式对话、模型/技能/记忆/通道/定时任务/日志管理）、支持多通道同时运行、会话持久化存储、新增多个模型。\n\n>**2026.02.13：** [2.0.1版本](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/2.0.1)，内置 Web Search 工具、智能上下文裁剪策略、运行时信息动态更新、Windows 兼容性适配，修复定时任务记忆丢失、飞书连接等多项问题。\n\n>**2026.02.03：** [2.0.0版本](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/2.0.0)，正式升级为超级Agent助理，支持多轮任务决策、具备长期记忆、实现多种系统工具、支持Skills框架，新增多种模型并优化了接入渠道。\n\n>**2025.05.23：** [1.7.6版本](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/1.7.6) 优化web网页channel、新增 [AgentMesh](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/plugins/agent/README.md)多智能体插件、百度语音合成优化、企微应用`access_token`获取优化、支持`claude-4-sonnet`和`claude-4-opus`模型\n\n>**2025.04.11：** [1.7.5版本](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/1.7.5) 新增支持 [wechatferry](https://github.com/zhayujie/chatgpt-on-wechat/pull/2562) 协议、新增 deepseek 模型、新增支持腾讯云语音能力、新增支持 ModelScope 和 Gitee-AI API接口\n\n更多更新历史请查看: [更新日志](https://docs.cowagent.ai/releases)\n\n<br/>\n\n# 🚀 快速开始\n\n项目提供了一键安装、配置、启动、管理程序的脚本，推荐使用脚本快速运行，也可以根据下文中的详细指引一步步安装运行。\n\n在终端执行以下命令：\n\n```bash\nbash <(curl -fsSL https://cdn.link-ai.tech/code/cow/run.sh)\n```\n\n脚本使用说明：[一键运行脚本](https://docs.cowagent.ai/guide/quick-start)\n\n\n## 一、准备\n\n### 1. 
模型API\n\n项目支持国内外主流厂商的模型接口，可选模型及配置说明参考：[模型说明](#模型说明)。\n\n> 注：Agent模式下推荐使用以下模型，可根据效果及成本综合选择：MiniMax-M2.7、glm-5-turbo、kimi-k2.5、qwen3.5-plus、claude-sonnet-4-6、gemini-3.1-pro-preview、gpt-5.4、gpt-5.4-mini\n\n同时支持使用 **LinkAI平台** 接口，支持上述全部模型，并支持知识库、工作流、插件等Agent技能，参考 [接口文档](https://docs.link-ai.tech/platform/api)。\n\n### 2.环境安装\n\n支持 Linux、MacOS、Windows 操作系统，可在个人计算机及服务器上运行，需安装 `Python`，Python版本需在3.7 ~ 3.12 之间，推荐使用3.9版本。\n\n> 注意：Agent模式推荐使用源码运行，若选择Docker部署则无需安装python环境和下载源码，可直接快进到下一节。\n\n**(1) 克隆项目代码：**\n\n```bash\ngit clone https://github.com/zhayujie/chatgpt-on-wechat\ncd chatgpt-on-wechat/\n```\n\n若遇到网络问题可使用国内仓库地址：https://gitee.com/zhayujie/chatgpt-on-wechat\n\n**(2) 安装核心依赖 (必选)：**\n\n```bash\npip3 install -r requirements.txt\n```\n\n**(3) 拓展依赖 (可选，建议安装)：**\n\n```bash\npip3 install -r requirements-optional.txt\n```\n如果某项依赖安装失败可注释掉对应的行后重试。\n\n## 二、配置\n\n配置文件的模板在根目录的`config-template.json`中，需复制该模板创建最终生效的 `config.json` 文件：\n\n```bash\n  cp config-template.json config.json\n```\n\n然后在`config.json`中填入配置，以下是对默认配置的说明，可根据需要进行自定义修改（注意实际使用时请去掉注释，保证JSON格式的规范）：\n\n```bash\n# config.json 文件内容示例\n{\n  \"channel_type\": \"web\",                                      # 接入渠道类型，默认为web，支持修改为:feishu,dingtalk,wecom_bot,qq,wechatcom_app,wechatmp_service,wechatmp,terminal\n  \"model\": \"MiniMax-M2.7\",                                    # 模型名称\n  \"minimax_api_key\": \"\",                                      # MiniMax API Key\n  \"zhipu_ai_api_key\": \"\",                                     # 智谱GLM API Key\n  \"moonshot_api_key\": \"\",                                     # Kimi/Moonshot API Key\n  \"ark_api_key\": \"\",                                          # 豆包(火山方舟) API Key\n  \"dashscope_api_key\": \"\",                                    # 百炼(通义千问)API Key\n  \"claude_api_key\": \"\",                                       # Claude API Key\n  \"claude_api_base\": \"https://api.anthropic.com/v1\",          # Claude API 地址，修改可接入三方代理平台\n  \"gemini_api_key\": \"\",                                       # Gemini API Key\n  \"gemini_api_base\": \"https://generativelanguage.googleapis.com\", # Gemini API地址\n  \"open_ai_api_key\": \"\",                                      # OpenAI API Key\n  \"open_ai_api_base\": \"https://api.openai.com/v1\",            # OpenAI API 地址\n  \"linkai_api_key\": \"\",                                       # LinkAI API Key\n  \"proxy\": \"\",                                                # 代理客户端的ip和端口，国内环境需要开启代理的可填写该项，如 \"127.0.0.1:7890\"\n  \"speech_recognition\": false,                                # 是否开启语音识别\n  \"group_speech_recognition\": false,                          # 是否开启群组语音识别\n  \"voice_reply_voice\": false,                                 # 是否使用语音回复语音\n  \"use_linkai\": false,                                        # 是否使用LinkAI接口，默认关闭，设置为true后可对接LinkAI平台模型\n  \"agent\": true,                                              # 是否启用Agent模式，启用后拥有多轮工具决策、长期记忆、Skills能力等\n  \"agent_workspace\": \"~/cow\",                                 # Agent的工作空间路径，用于存储memory、skills、系统设定等\n  \"agent_max_context_tokens\": 40000,                          # Agent模式下最大上下文tokens，超出将自动丢弃最早的上下文\n  \"agent_max_context_turns\": 30,                              # Agent模式下最大上下文记忆轮次，每轮包括一次用户提问和AI回复\n  \"agent_max_steps\": 15                                       # Agent模式下单次任务的最大决策步数，超出后将停止继续调用工具\n}\n```\n\n**配置补充说明:** \n\n<details>\n<summary>1. 
语音配置</summary>\n\n+ 添加 `\"speech_recognition\": true` 将开启语音识别，默认使用openai的whisper模型识别为文字，同时以文字回复，该参数仅支持私聊 (注意由于语音消息无法匹配前缀，一旦开启将对所有语音自动回复，支持语音触发画图)；\n+ 添加 `\"group_speech_recognition\": true` 将开启群组语音识别，默认使用openai的whisper模型识别为文字，同时以文字回复，参数仅支持群聊 (会匹配group_chat_prefix和group_chat_keyword, 支持语音触发画图)；\n+ 添加 `\"voice_reply_voice\": true` 将开启语音回复语音（同时作用于私聊和群聊）\n</details>\n\n<details>\n<summary>2. 其他配置</summary>\n\n+ `model`: 模型名称，Agent模式下推荐使用 `MiniMax-M2.7`、`glm-5-turbo`、`kimi-k2.5`、`qwen3.5-plus`、`claude-sonnet-4-6`、`gemini-3.1-pro-preview`，全部模型名称参考[common/const.py](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/common/const.py)文件\n+ `character_desc`：普通对话模式下的机器人系统提示词。在Agent模式下该配置不生效，由工作空间中的文件内容构成。\n+ `subscribe_msg`：订阅消息，公众号和企业微信channel中请填写，当被订阅时会自动回复，可使用特殊占位符。目前支持的占位符有{trigger_prefix}，在程序中它会自动替换成bot的触发词。\n</details>\n\n<details>\n<summary>3. LinkAI配置</summary>\n\n+ `use_linkai`: 是否使用LinkAI接口，默认关闭，设置为true后可对接LinkAI平台，使用模型、知识库、工作流、插件等技能, 参考[接口文档](https://docs.link-ai.tech/platform/api/chat)\n+ `linkai_api_key`: LinkAI Api Key，可在 [控制台](https://link-ai.tech/console/interface) 创建\n</details>\n\n注：全部配置项说明可在 [`config.py`](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/config.py) 文件中查看。\n\n## 三、运行\n\n### 1.本地运行\n\n如果是个人计算机 **本地运行**，直接在项目根目录下执行：\n\n```bash\npython3 app.py         # windows环境下该命令通常为 python app.py\n```\n\n运行后默认会启动web服务，可通过访问 `http://localhost:9899/chat` 在网页端对话。\n\n如果需要接入其他应用通道只需修改 `config.json` 配置文件中的 `channel_type` 参数，详情参考：[通道说明](#通道说明)。\n\n\n### 2.服务器部署\n\n在服务器中可使用 `nohup` 命令在后台运行程序：\n\n```bash\nnohup python3 app.py & tail -f nohup.out\n```\n\n执行后程序运行于服务器后台，可通过 `ctrl+c` 关闭日志，不会影响后台程序的运行。使用 `ps -ef | grep app.py | grep -v grep` 命令可查看运行于后台的进程，如果想要重新启动程序可以先 `kill` 掉对应的进程。日志关闭后如果想要再次打开只需输入 `tail -f nohup.out`。\n\n此外，项目根目录下的 `run.sh` 脚本支持一键启动和管理服务，包括 `./run.sh start`、`./run.sh stop`、`./run.sh restart`、`./run.sh logs` 等命令，执行 `./run.sh help` 可查看全部用法。\n\n> 如果需要通过浏览器访问Web控制台，请确保服务器的 `9899` 端口已在防火墙或安全组中放行，建议仅对指定IP开放以保证安全。\n\n### 3.Docker部署\n\n使用docker部署无需下载源码和安装依赖，只需要获取 `docker-compose.yml` 配置文件并启动容器即可。Agent模式下更推荐使用源码进行部署，以获得更多系统访问能力。\n\n> 前提是需要安装好 `docker` 及 `docker-compose`，安装成功后执行 `docker -v` 和 `docker-compose version` (或 `docker compose version`) 可查看到版本号。安装地址为 [docker官网](https://docs.docker.com/engine/install/) 。\n\n**(1) 下载 docker-compose.yml 文件**\n\n```bash\ncurl -O https://cdn.link-ai.tech/code/cow/docker-compose.yml\n```\n\n下载完成后打开 `docker-compose.yml` 填写所需配置，例如 `CHANNEL_TYPE`、`OPEN_AI_API_KEY` 等配置。\n\n**(2) 启动容器**\n\n在 `docker-compose.yml` 所在目录下执行以下命令启动容器：\n\n```bash\nsudo docker compose up -d         # 若docker-compose为 1.X 版本，则执行 `sudo docker-compose up -d`\n```\n\n运行命令后，会自动从 [docker hub](https://hub.docker.com/r/zhayujie/chatgpt-on-wechat) 拉取最新release版本的镜像。当执行 `sudo docker ps` 能查看到 NAMES 为 chatgpt-on-wechat 的容器即表示运行成功。最后执行以下命令可查看容器的运行日志：\n\n```bash\nsudo docker logs -f chatgpt-on-wechat\n```\n\n> 如果需要通过浏览器访问Web控制台，请确保服务器的 `9899` 端口已在防火墙或安全组中放行，建议仅对指定IP开放以保证安全。\n\n## 模型说明\n\n以下对所有可支持的模型的配置和使用方法进行说明，模型接口实现在项目的 `models/` 目录下。\n\n<details>\n<summary>OpenAI</summary>\n\n1. API Key创建：在 [OpenAI平台](https://platform.openai.com/api-keys) 创建API Key\n\n2. 
填写配置\n\n```json\n{\n    \"model\": \"gpt-5.4\",\n    \"open_ai_api_key\": \"YOUR_API_KEY\",\n    \"open_ai_api_base\": \"https://api.openai.com/v1\",\n    \"bot_type\": \"openai\"\n}\n```\n\n - `model`: 与OpenAI接口的 [model参数](https://platform.openai.com/docs/models) 一致，支持包括 gpt-5.4、gpt-5.4-mini、gpt-5.4-nano、o系列、gpt-4.1等模型，Agent模式推荐使用 `gpt-5.4`、`gpt-5.4-mini`\n - `open_ai_api_base`: 如果需要接入第三方代理接口，可通过修改该参数进行接入\n - `bot_type`: 使用OpenAI相关模型时无需填写。当使用第三方代理接口接入Claude等非OpenAI官方模型时，该参数设为 `openai`\n</details>\n\n<details>\n<summary>LinkAI</summary>\n\n1. API Key创建：在 [LinkAI平台](https://link-ai.tech/console/interface) 创建API Key \n\n2. 填写配置\n\n```json\n{\n    \"model\": \"gpt-5.4-mini\",\n    \"use_linkai\": true,\n    \"linkai_api_key\": \"YOUR API KEY\"\n}\n```\n\n+ `use_linkai`: 是否使用LinkAI接口，默认关闭，设置为true后可对接LinkAI平台的模型，并使用知识库、工作流、数据库、插件等丰富的Agent技能\n+ `linkai_api_key`: LinkAI平台的API Key，可在 [控制台](https://link-ai.tech/console/interface) 中创建\n+ `model`: [模型列表](https://link-ai.tech/console/models)中的全部模型均可使用\n</details>\n\n<details>\n<summary>MiniMax</summary>\n\n方式一：官方接入，配置如下(推荐)：\n\n```json\n{\n    \"model\": \"MiniMax-M2.7\",\n    \"minimax_api_key\": \"\"\n}\n```\n - `model`: 可填写 `MiniMax-M2.7、MiniMax-M2.5、MiniMax-M2.1、MiniMax-M2.1-lightning、MiniMax-M2、abab6.5-chat` 等\n - `minimax_api_key`：MiniMax平台的API-KEY，在 [控制台](https://platform.minimaxi.com/user-center/basic-information/interface-key) 创建\n\n方式二：OpenAI兼容方式接入，配置如下：\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"MiniMax-M2.7\",\n  \"open_ai_api_base\": \"https://api.minimaxi.com/v1\",\n  \"open_ai_api_key\": \"\"\n}\n```\n- `bot_type`: OpenAI兼容方式\n- `model`: 可填 `MiniMax-M2.7、MiniMax-M2.5、MiniMax-M2.1、MiniMax-M2.1-lightning、MiniMax-M2`，参考[API文档](https://platform.minimaxi.com/document/%E5%AF%B9%E8%AF%9D?key=66701d281d57f38758d581d0#QklxsNSbaf6kM4j6wjO5eEek)\n- `open_ai_api_base`: MiniMax平台API的 BASE URL\n- `open_ai_api_key`: MiniMax平台的API-KEY\n</details>\n\n<details>\n<summary>智谱AI (GLM)</summary>\n\n方式一：官方接入，配置如下(推荐)：\n\n```json\n{\n  \"model\": \"glm-5-turbo\",\n  \"zhipu_ai_api_key\": \"\"\n}\n```\n - `model`: 可填 `glm-5-turbo、glm-5、glm-4.7、glm-4-plus、glm-4-flash、glm-4-air、glm-4-airx、glm-4-long` 等, 参考 [glm系列模型编码](https://bigmodel.cn/dev/api/normal-model/glm-4)\n - `zhipu_ai_api_key`: 智谱AI平台的 API KEY，在 [控制台](https://www.bigmodel.cn/usercenter/proj-mgmt/apikeys) 创建\n\n方式二：OpenAI兼容方式接入，配置如下：\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"glm-5-turbo\",\n  \"open_ai_api_base\": \"https://open.bigmodel.cn/api/paas/v4\",\n  \"open_ai_api_key\": \"\"\n}\n```\n- `bot_type`: OpenAI兼容方式\n- `model`: 可填 `glm-5-turbo、glm-5、glm-4.7、glm-4-plus、glm-4-flash、glm-4-air、glm-4-airx、glm-4-long` 等\n- `open_ai_api_base`: 智谱AI平台的 BASE URL\n- `open_ai_api_key`: 智谱AI平台的 API KEY\n</details>\n\n<details>\n<summary>通义千问 (Qwen)</summary>\n\n方式一：官方SDK接入，配置如下(推荐)：\n\n```json\n{\n    \"model\": \"qwen3.5-plus\",\n    \"dashscope_api_key\": \"sk-qVxxxxG\"\n}\n```\n - `model`: 可填写 `qwen3.5-plus、qwen3-max、qwen-max、qwen-plus、qwen-turbo、qwen-long、qwq-plus` 等\n - `dashscope_api_key`: 通义千问的 API-KEY，参考 [官方文档](https://bailian.console.aliyun.com/?tab=api#/api) ，在 [控制台](https://bailian.console.aliyun.com/?tab=model#/api-key) 创建\n\n方式二：OpenAI兼容方式接入，配置如下：\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"qwen3.5-plus\",\n  \"open_ai_api_base\": \"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n  \"open_ai_api_key\": \"sk-qVxxxxG\"\n}\n```\n- `bot_type`: OpenAI兼容方式\n- `model`: 
支持官方所有模型，参考[模型列表](https://help.aliyun.com/zh/model-studio/models?spm=a2c4g.11186623.0.0.78d84823Kth5on#9f8890ce29g5u)\n- `open_ai_api_base`: 通义千问API的 BASE URL\n- `open_ai_api_key`: 通义千问的 API-KEY\n</details>\n\n<details>\n<summary>Kimi (Moonshot)</summary>\n\n方式一：官方接入，配置如下：\n\n```json\n{\n    \"model\": \"kimi-k2.5\",\n    \"moonshot_api_key\": \"\"\n}\n```\n - `model`: 可填写 `kimi-k2.5、kimi-k2、moonshot-v1-8k、moonshot-v1-32k、moonshot-v1-128k`\n - `moonshot_api_key`: Moonshot的API-KEY，在 [控制台](https://platform.moonshot.cn/console/api-keys) 创建\n \n方式二：OpenAI兼容方式接入，配置如下：\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"kimi-k2.5\",\n  \"open_ai_api_base\": \"https://api.moonshot.cn/v1\",\n  \"open_ai_api_key\": \"\"\n}\n```\n- `bot_type`: OpenAI兼容方式\n- `model`: 可填写 `kimi-k2.5、kimi-k2、moonshot-v1-8k、moonshot-v1-32k、moonshot-v1-128k`\n- `open_ai_api_base`: Moonshot的 BASE URL\n- `open_ai_api_key`: Moonshot的 API-KEY\n</details>\n\n<details>\n<summary>豆包 (Doubao)</summary>\n\n1. API Key创建：在 [火山方舟控制台](https://console.volcengine.com/ark/region:ark+cn-beijing/apikey) 创建API Key\n\n2. 填写配置\n\n```json\n{\n    \"model\": \"doubao-seed-2-0-code-preview-260215\",\n    \"ark_api_key\": \"YOUR_API_KEY\"\n}\n```\n - `model`: 可填写 `doubao-seed-2-0-code-preview-260215、doubao-seed-2-0-pro-260215、doubao-seed-2-0-lite-260215、doubao-seed-2-0-mini-260215` 等\n - `ark_api_key`: 火山方舟平台的 API Key，在 [控制台](https://console.volcengine.com/ark/region:ark+cn-beijing/apikey) 创建\n - `ark_base_url`: 可选，默认为 `https://ark.cn-beijing.volces.com/api/v3`\n</details>\n\n<details>\n<summary>Claude</summary>\n\n1. API Key创建：在 [Claude控制台](https://console.anthropic.com/settings/keys) 创建API Key\n\n2. 填写配置\n\n```json\n{\n    \"model\": \"claude-sonnet-4-6\",\n    \"claude_api_key\": \"YOUR_API_KEY\"\n}\n```\n - `model`: 参考 [官方模型ID](https://docs.anthropic.com/en/docs/about-claude/models/overview#model-aliases) ，支持 `claude-sonnet-4-6、claude-opus-4-6、claude-sonnet-4-5、claude-sonnet-4-0、claude-opus-4-0、claude-3-5-sonnet-latest` 等\n</details>\n\n<details>\n<summary>Gemini</summary>\n\nAPI Key创建：在 [控制台](https://aistudio.google.com/app/apikey?hl=zh-cn) 创建API Key ，配置如下\n```json\n{\n    \"model\": \"gemini-3.1-flash-lite-preview\",\n    \"gemini_api_key\": \"\"\n}\n```\n - `model`: 参考[官方文档-模型列表](https://ai.google.dev/gemini-api/docs/models?hl=zh-cn)，支持 `gemini-3.1-flash-lite-preview、gemini-3.1-pro-preview、gemini-3-flash-preview、gemini-3-pro-preview` 等\n</details>\n\n<details>\n<summary>DeepSeek</summary>\n\n1. API Key创建：在 [DeepSeek平台](https://platform.deepseek.com/api_keys) 创建API Key \n\n2. 填写配置\n\n```json\n{\n    \"model\": \"deepseek-chat\",\n    \"open_ai_api_key\": \"sk-xxxxxxxxxxx\",\n    \"open_ai_api_base\": \"https://api.deepseek.com/v1\",\n    \"bot_type\": \"openai\"\n\n}\n```\n\n - `bot_type`: OpenAI兼容方式\n - `model`: 可填 `deepseek-chat、deepseek-reasoner`，分别对应的是 DeepSeek-V3 和 DeepSeek-R1 模型\n - `open_ai_api_key`: DeepSeek平台的 API Key\n - `open_ai_api_base`: DeepSeek平台 BASE URL\n</details>\n\n<details>\n<summary>Azure</summary>\n\n1. API Key创建：在 [Azure平台](https://oai.azure.com/) 创建API Key \n\n2. 
填写配置\n\n```json\n{\n  \"model\": \"\",\n  \"use_azure_chatgpt\": true,\n  \"open_ai_api_key\": \"\",\n  \"open_ai_api_base\": \"\",\n  \"azure_deployment_id\": \"\",\n  \"azure_api_version\": \"2025-01-01-preview\"\n}\n```\n\n - `model`: 留空即可\n - `use_azure_chatgpt`: 设为 true \n - `open_ai_api_key`: Azure平台的密钥\n - `open_ai_api_base`: Azure平台的 BASE URL\n - `azure_deployment_id`: Azure平台部署的模型名称\n - `azure_api_version`: API版本，以上参数均可在部署的 [模型配置](https://oai.azure.com/resource/deployments) 界面查看\n</details>\n\n<details>\n<summary>百度文心</summary>\n方式一：官方SDK接入，配置如下：\n\n```json\n{\n    \"model\": \"wenxin-4\", \n    \"baidu_wenxin_api_key\": \"IajztZ0bDxgnP9bEykU7lBer\",\n    \"baidu_wenxin_secret_key\": \"EDPZn6L24uAS9d8RWFfotK47dPvkjD6G\"\n}\n```\n - `model`: 可填 `wenxin`和`wenxin-4`，对应模型为 文心-3.5 和 文心-4.0\n - `baidu_wenxin_api_key`：参考 [千帆平台-access_token鉴权](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/dlv4pct3s) 文档获取 API Key\n - `baidu_wenxin_secret_key`：参考 [千帆平台-access_token鉴权](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/dlv4pct3s) 文档获取 Secret Key\n\n方式二：OpenAI兼容方式接入，配置如下：\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"ERNIE-4.0-Turbo-8K\",\n  \"open_ai_api_base\": \"https://qianfan.baidubce.com/v2\",\n  \"open_ai_api_key\": \"bce-v3/ALTxxxxxxd2b\"\n}\n```\n- `bot_type`: OpenAI兼容方式\n- `model`: 支持官方所有模型，参考[模型列表](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Wm9cvy6rl)\n- `open_ai_api_base`: 百度文心API的 BASE URL\n- `open_ai_api_key`: 百度文心的 API-KEY，参考 [官方文档](https://cloud.baidu.com/doc/qianfan-api/s/ym9chdsy5) ，在 [控制台](https://console.bce.baidu.com/iam/#/iam/apikey/list) 创建API Key\n\n</details>\n\n<details>\n<summary>讯飞星火</summary>\n\n方式一：官方接入，配置如下：\n参考 [官方文档-快速指引](https://www.xfyun.cn/doc/platform/quickguide.html#%E7%AC%AC%E4%BA%8C%E6%AD%A5-%E5%88%9B%E5%BB%BA%E6%82%A8%E7%9A%84%E7%AC%AC%E4%B8%80%E4%B8%AA%E5%BA%94%E7%94%A8-%E5%BC%80%E5%A7%8B%E4%BD%BF%E7%94%A8%E6%9C%8D%E5%8A%A1) 获取 `APPID、 APISecret、 APIKey` 三个参数\n\n```json\n{\n  \"model\": \"xunfei\",\n  \"xunfei_app_id\": \"\",\n  \"xunfei_api_key\": \"\",\n  \"xunfei_api_secret\": \"\",\n  \"xunfei_domain\": \"4.0Ultra\",\n  \"xunfei_spark_url\": \"wss://spark-api.xf-yun.com/v4.0/chat\"\n}\n```\n - `model`: 填 `xunfei`\n - `xunfei_domain`: 可填写 `4.0Ultra、generalv3.5、max-32k、generalv3、pro-128k、lite`\n - `xunfei_spark_url`: 填写参考 [官方文档-请求地址](https://www.xfyun.cn/doc/spark/Web.html#_1-1-%E8%AF%B7%E6%B1%82%E5%9C%B0%E5%9D%80) 的说明\n \n方式二：OpenAI兼容方式接入，配置如下：\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"4.0Ultra\",\n  \"open_ai_api_base\": \"https://spark-api-open.xf-yun.com/v1\",\n  \"open_ai_api_key\": \"\"\n}\n```\n- `bot_type`: OpenAI兼容方式\n- `model`: 可填写 `4.0Ultra、generalv3.5、max-32k、generalv3、pro-128k、lite`\n- `open_ai_api_base`: 讯飞星火平台的 BASE URL\n- `open_ai_api_key`: 讯飞星火平台的[APIPassword](https://console.xfyun.cn/services/bm3) ，因模型而异\n</details>\n\n<details>\n<summary>ModelScope</summary>\n\n```json\n{\n  \"bot_type\": \"modelscope\",\n  \"model\": \"Qwen/QwQ-32B\",\n  \"modelscope_api_key\": \"your_api_key\",\n  \"modelscope_base_url\": \"https://api-inference.modelscope.cn/v1/chat/completions\",\n  \"text_to_image\": \"MusePublic/489_ckpt_FLUX_1\"\n}\n```\n\n- `bot_type`: modelscope接口格式\n- `model`: 参考[模型列表](https://www.modelscope.cn/models?filter=inference_type&page=1)\n- `modelscope_api_key`: 参考 [官方文档-访问令牌](https://modelscope.cn/docs/accounts/token) ，在 [控制台](https://modelscope.cn/my/myaccesstoken) 创建\n- `modelscope_base_url`: modelscope平台的 BASE URL\n- `text_to_image`: 
图像生成模型，参考[模型列表](https://www.modelscope.cn/models?filter=inference_type&page=1)\n</details>\n\n<details>\n<summary>Coding Plan</summary>\n\nCoding Plan 是各厂商推出的编程包月套餐，所有厂商均可通过 OpenAI 兼容方式接入：\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"模型名称\",\n  \"open_ai_api_base\": \"厂商 Coding Plan API Base\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n目前支持阿里云、MiniMax、智谱GLM、Kimi、火山引擎等厂商，各厂商详细配置请参考 [Coding Plan 文档](https://docs.cowagent.ai/models/coding-plan)。\n</details>\n\n\n## 通道说明\n\n以下对可接入通道的配置方式进行说明，应用通道代码在项目的 `channel/` 目录下。\n\n支持同时可接入多个通道，配置时可通过逗号进行分割，例如 `\"channel_type\": \"feishu,dingtalk\"`。\n\n<details>\n<summary>1. Web</summary>\n\n项目启动后会默认运行Web控制台，配置如下：\n\n```json\n{\n    \"channel_type\": \"web\",\n    \"web_port\": 9899\n}\n```\n\n- `web_port`: 默认为 9899，可按需更改，需要服务器防火墙和安全组放行该端口\n- 如本地运行，启动后请访问 `http://localhost:9899/chat` ；如服务器运行，请访问 `http://ip:9899/chat` \n> 注：请将上述 url 中的 ip 或者 port 替换为实际的值\n</details>\n\n<details>\n<summary>2. Feishu - 飞书</summary>\n\n飞书支持两种事件接收模式：WebSocket 长连接（推荐）和 Webhook。\n\n**方式一：WebSocket 模式（推荐，无需公网 IP）**\n\n```json\n{\n    \"channel_type\": \"feishu\",\n    \"feishu_app_id\": \"APP_ID\",\n    \"feishu_app_secret\": \"APP_SECRET\",\n    \"feishu_event_mode\": \"websocket\"\n}\n```\n\n**方式二：Webhook 模式（需要公网 IP）**\n\n```json\n{\n    \"channel_type\": \"feishu\",\n    \"feishu_app_id\": \"APP_ID\",\n    \"feishu_app_secret\": \"APP_SECRET\",\n    \"feishu_token\": \"VERIFICATION_TOKEN\",\n    \"feishu_event_mode\": \"webhook\",\n    \"feishu_port\": 9891\n}\n```\n\n- `feishu_event_mode`: 事件接收模式，`websocket`（推荐）或 `webhook`\n- WebSocket 模式需安装依赖：`pip3 install lark-oapi`\n\n详细步骤和参数说明参考 [飞书接入](https://docs.cowagent.ai/channels/feishu)\n\n</details>\n\n<details>\n<summary>3. DingTalk - 钉钉</summary>\n\n钉钉需要在开放平台创建智能机器人应用，将以下配置填入 `config.json`：\n\n```json\n{\n    \"channel_type\": \"dingtalk\",\n    \"dingtalk_client_id\": \"CLIENT_ID\",\n    \"dingtalk_client_secret\": \"CLIENT_SECRET\"\n}\n```\n详细步骤和参数说明参考 [钉钉接入](https://docs.cowagent.ai/channels/dingtalk)\n</details>\n\n<details>\n<summary>4. WeCom Bot - 企微智能机器人</summary>\n\n企微智能机器人使用 WebSocket 长连接模式，无需公网 IP 和域名，配置简单：\n\n```json\n{\n    \"channel_type\": \"wecom_bot\",\n    \"wecom_bot_id\": \"YOUR_BOT_ID\",\n    \"wecom_bot_secret\": \"YOUR_SECRET\"\n}\n```\n详细步骤和参数说明参考 [企微智能机器人接入](https://docs.cowagent.ai/channels/wecom-bot)\n\n</details>\n\n<details>\n<summary>5. QQ - QQ 机器人</summary>\n\nQQ 机器人使用 WebSocket 长连接模式，无需公网 IP 和域名，支持 QQ 单聊、群聊和频道消息：\n\n```json\n{\n    \"channel_type\": \"qq\",\n    \"qq_app_id\": \"YOUR_APP_ID\",\n    \"qq_app_secret\": \"YOUR_APP_SECRET\"\n}\n```\n详细步骤和参数说明参考 [QQ 机器人接入](https://docs.cowagent.ai/channels/qq)\n\n</details>\n\n<details>\n<summary>6. WeCom App - 企业微信应用</summary>\n\n企业微信自建应用接入需在后台创建应用并启用消息回调，配置示例：\n\n```json\n{\n    \"channel_type\": \"wechatcom_app\",\n    \"wechatcom_corp_id\": \"CORPID\",\n    \"wechatcomapp_token\": \"TOKEN\",\n    \"wechatcomapp_port\": 9898,\n    \"wechatcomapp_secret\": \"SECRET\",\n    \"wechatcomapp_agent_id\": \"AGENTID\",\n    \"wechatcomapp_aes_key\": \"AESKEY\"\n}\n```\n详细步骤和参数说明参考 [企微自建应用接入](https://docs.cowagent.ai/channels/wecom)\n\n</details>\n\n<details>\n<summary>7. 
WeChat MP - 微信公众号</summary>\n\n本项目支持订阅号和服务号两种公众号，通过服务号（`wechatmp_service`）体验更佳。\n\n**个人订阅号（wechatmp）**\n\n```json\n{\n    \"channel_type\": \"wechatmp\",\n    \"wechatmp_token\": \"TOKEN\",\n    \"wechatmp_port\": 80,\n    \"wechatmp_app_id\": \"APPID\",\n    \"wechatmp_app_secret\": \"APPSECRET\",\n    \"wechatmp_aes_key\": \"\"\n}\n```\n\n**企业服务号（wechatmp_service）**\n\n```json\n{\n    \"channel_type\": \"wechatmp_service\",\n    \"wechatmp_token\": \"TOKEN\",\n    \"wechatmp_port\": 80,\n    \"wechatmp_app_id\": \"APPID\",\n    \"wechatmp_app_secret\": \"APPSECRET\",\n    \"wechatmp_aes_key\": \"\"\n}\n```\n\n详细步骤和参数说明参考 [微信公众号接入](https://docs.cowagent.ai/channels/wechatmp)\n\n</details>\n\n<details>\n<summary>8. Terminal - 终端</summary>\n\n修改 `config.json` 中的 `channel_type` 字段：\n\n```json\n{\n    \"channel_type\": \"terminal\"\n}\n```\n\n运行后可在终端与机器人进行对话。\n\n</details>\n\n<br/>\n\n# 🔗 相关项目\n\n- [bot-on-anything](https://github.com/zhayujie/bot-on-anything)：轻量和高可扩展的大模型应用框架，支持接入Slack, Telegram, Discord, Gmail等海外平台，可作为本项目的补充使用。\n- [AgentMesh](https://github.com/MinimalFuture/AgentMesh)：开源的多智能体(Multi-Agent)框架，可以通过多智能体团队的协同来解决复杂问题。本项目基于该框架实现了[Agent插件](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/plugins/agent/README.md)，可访问终端、浏览器、文件系统、搜索引擎 等各类工具，并实现了多智能体协同。\n\n\n\n# 🔎 常见问题\n\nFAQs： <https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs>\n\n或直接在线咨询 [项目小助手](https://link-ai.tech/app/Kv2fXJcH)  (知识库持续完善中，回复供参考)\n\n# 🛠️ 开发\n\n欢迎接入更多应用通道，参考 [飞书通道](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/channel/feishu/feishu_channel.py) 新增自定义通道，实现接收和发送消息逻辑即可完成接入。 同时欢迎贡献新的Skills，参考 [Skill创造器说明](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/skills/skill-creator/SKILL.md)。\n\n# ✉ 联系\n\n欢迎提交PR、Issues进行反馈，以及通过 🌟Star 支持并关注项目更新。项目运行遇到问题可以查看 [常见问题列表](https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs) ，以及前往 [Issues](https://github.com/zhayujie/chatgpt-on-wechat/issues) 中搜索。个人开发者可加入开源交流群参与更多讨论，企业用户可联系[产品客服](https://cdn.link-ai.tech/portal/linkai-customer-service.png)咨询。\n\n# 🌟 贡献者\n\n![cow contributors](https://contrib.rocks/image?repo=zhayujie/chatgpt-on-wechat&max=1000)\n"
  },
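README 强调 `config.json` 必须是去掉注释后的规范 JSON。下面是一个最小的校验脚本示意（假设性示例，文件名 `check_config.py` 为假设，并非仓库自带文件），放在项目根目录执行可快速定位常见的格式问题：

```python
# check_config.py —— 假设性的最小校验脚本，非仓库自带文件
import json

try:
    with open("config.json", "r", encoding="utf-8") as f:
        json.load(f)
    print("config.json 格式正确")
except FileNotFoundError:
    print("未找到 config.json，请先执行: cp config-template.json config.json")
except json.JSONDecodeError as e:
    # 常见原因：保留了模板示例中的 # 注释，或存在多余/缺失的逗号
    print(f"config.json 不是合法 JSON（第 {e.lineno} 行附近）：{e.msg}")
```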
  {
    "path": "agent/chat/__init__.py",
    "content": "from agent.chat.service import ChatService\n\n__all__ = [\"ChatService\"]\n"
  },
  {
    "path": "agent/chat/service.py",
    "content": "\"\"\"\nChatService - Wraps the Agent stream execution to produce CHAT protocol chunks.\n\nTranslates agent events (message_update, message_end, tool_execution_end, etc.)\ninto the CHAT socket protocol format (content chunks with segment_id, tool_calls chunks).\n\"\"\"\n\nimport time\nfrom typing import Callable, Optional\n\nfrom common.log import logger\n\n\nclass ChatService:\n    \"\"\"\n    High-level service that runs an Agent for a given query and streams\n    the results as CHAT protocol chunks via a callback.\n\n    Usage:\n        svc = ChatService(agent_bridge)\n        svc.run(query, session_id, send_chunk_fn)\n    \"\"\"\n\n    def __init__(self, agent_bridge):\n        \"\"\"\n        :param agent_bridge: AgentBridge instance (manages agent lifecycle)\n        \"\"\"\n        self.agent_bridge = agent_bridge\n\n    def run(self, query: str, session_id: str, send_chunk_fn: Callable[[dict], None],\n            channel_type: str = \"\"):\n        \"\"\"\n        Run the agent for *query* and stream results back via *send_chunk_fn*.\n\n        The method blocks until the agent finishes. After it returns the SDK\n        will automatically send the final (streaming=false) message.\n\n        :param query: user query text\n        :param session_id: session identifier for agent isolation\n        :param send_chunk_fn: callable(chunk_data: dict) to send a streaming chunk\n        :param channel_type: source channel (e.g. \"web\", \"feishu\") for persistence\n        \"\"\"\n        agent = self.agent_bridge.get_agent(session_id=session_id)\n        if agent is None:\n            raise RuntimeError(\"Failed to initialise agent for the session\")\n\n        # Pass context metadata to model for downstream API requests\n        if hasattr(agent, 'model'):\n            agent.model.channel_type = channel_type or \"\"\n            agent.model.session_id = session_id or \"\"\n\n        # State shared between the event callback and this method\n        state = _StreamState()\n\n        def on_event(event: dict):\n            \"\"\"Translate agent events into CHAT protocol chunks.\"\"\"\n            event_type = event.get(\"type\")\n            data = event.get(\"data\", {})\n\n            if event_type == \"message_update\":\n                # Incremental text delta\n                delta = data.get(\"delta\", \"\")\n                if delta:\n                    send_chunk_fn({\n                        \"chunk_type\": \"content\",\n                        \"delta\": delta,\n                        \"segment_id\": state.segment_id,\n                    })\n\n            elif event_type == \"message_end\":\n                # A content segment finished.\n                tool_calls = data.get(\"tool_calls\", [])\n                if tool_calls:\n                    # After tool_calls are executed the next content will be\n                    # a new segment; collect tool results until turn_end.\n                    state.pending_tool_results = []\n\n            elif event_type == \"tool_execution_start\":\n                # Notify the client that a tool is about to run (with its input args)\n                tool_name = data.get(\"tool_name\", \"\")\n                arguments = data.get(\"arguments\", {})\n                # Cache arguments keyed by tool_call_id so tool_execution_end can include them\n                tool_call_id = data.get(\"tool_call_id\", tool_name)\n                state.pending_tool_arguments[tool_call_id] = arguments\n                send_chunk_fn({\n          
          \"chunk_type\": \"tool_start\",\n                    \"tool\": tool_name,\n                    \"arguments\": arguments,\n                })\n\n            elif event_type == \"tool_execution_end\":\n                tool_name = data.get(\"tool_name\", \"\")\n                tool_call_id = data.get(\"tool_call_id\", tool_name)\n                # Retrieve cached arguments from the matching tool_execution_start event\n                arguments = state.pending_tool_arguments.pop(tool_call_id, data.get(\"arguments\", {}))\n                result = data.get(\"result\", \"\")\n                status = data.get(\"status\", \"unknown\")\n                execution_time = data.get(\"execution_time\", 0)\n                elapsed_str = f\"{execution_time:.2f}s\"\n\n                # Serialise result to string if needed\n                if not isinstance(result, str):\n                    import json\n                    try:\n                        result = json.dumps(result, ensure_ascii=False)\n                    except Exception:\n                        result = str(result)\n\n                tool_info = {\n                    \"name\": tool_name,\n                    \"arguments\": arguments,\n                    \"result\": result,\n                    \"status\": status,\n                    \"elapsed\": elapsed_str,\n                }\n\n                if state.pending_tool_results is not None:\n                    state.pending_tool_results.append(tool_info)\n\n            elif event_type == \"turn_end\":\n                has_tool_calls = data.get(\"has_tool_calls\", False)\n                if has_tool_calls and state.pending_tool_results:\n                    # Flush collected tool results as a single tool_calls chunk\n                    send_chunk_fn({\n                        \"chunk_type\": \"tool_calls\",\n                        \"tool_calls\": state.pending_tool_results,\n                    })\n                    state.pending_tool_results = None\n                    # Next content belongs to a new segment\n                    state.segment_id += 1\n\n        # Run the agent with our event callback ---------------------------\n        logger.info(f\"[ChatService] Starting agent run: session={session_id}, query={query[:80]}\")\n\n        from config import conf\n        max_context_turns = conf().get(\"agent_max_context_turns\", 20)\n\n        # Get full system prompt with skills\n        full_system_prompt = agent.get_full_system_prompt()\n\n        # Create a copy of messages for this execution\n        with agent.messages_lock:\n            messages_copy = agent.messages.copy()\n            original_length = len(agent.messages)\n\n        from agent.protocol.agent_stream import AgentStreamExecutor\n\n        executor = AgentStreamExecutor(\n            agent=agent,\n            model=agent.model,\n            system_prompt=full_system_prompt,\n            tools=agent.tools,\n            max_turns=agent.max_steps,\n            on_event=on_event,\n            messages=messages_copy,\n            max_context_turns=max_context_turns,\n        )\n\n        try:\n            response = executor.run_stream(query)\n        except Exception:\n            # If executor cleared messages (context overflow), sync back\n            if len(executor.messages) == 0:\n                with agent.messages_lock:\n                    agent.messages.clear()\n                    logger.info(\"[ChatService] Cleared agent message history after executor recovery\")\n            raise\n\n        
# Append only the NEW messages from this execution (thread-safe)\n        with agent.messages_lock:\n            new_messages = executor.messages[original_length:]\n            agent.messages.extend(new_messages)\n\n        # Persist new messages to SQLite so they survive restarts and\n        # can be queried via the HISTORY interface.\n        if new_messages:\n            self._persist_messages(session_id, list(new_messages), channel_type)\n\n        # Store executor reference for files_to_send access\n        agent.stream_executor = executor\n\n        # Execute post-process tools\n        agent._execute_post_process_tools()\n\n        logger.info(f\"[ChatService] Agent run completed: session={session_id}\")\n\n\n\n    @staticmethod\n    def _persist_messages(session_id: str, new_messages: list, channel_type: str = \"\"):\n        try:\n            from config import conf\n            if not conf().get(\"conversation_persistence\", True):\n                return\n        except Exception:\n            pass\n        try:\n            from agent.memory import get_conversation_store\n            get_conversation_store().append_messages(\n                session_id, new_messages, channel_type=channel_type\n            )\n        except Exception as e:\n            logger.warning(\n                f\"[ChatService] Failed to persist messages for session={session_id}: {e}\"\n            )\n\n\nclass _StreamState:\n    \"\"\"Mutable state shared between the event callback and the run method.\"\"\"\n\n    def __init__(self):\n        self.segment_id: int = 0\n        # None means we are not accumulating tool results right now.\n        # A list means we are in the middle of a tool-execution phase.\n        self.pending_tool_results: Optional[list] = None\n        # Maps tool_call_id -> arguments captured from tool_execution_start,\n        # so that tool_execution_end can attach the correct input args.\n        self.pending_tool_arguments: dict = {}\n"
  },
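A minimal sketch of a `send_chunk_fn` consumer for the chunks `ChatService.run` streams out; the chunk shapes ("content", "tool_start", "tool_calls") follow the handlers above, while `render_chunk`, the demo query, and the wiring comment are illustrative assumptions rather than repo code:

```python
# Hypothetical sketch: rendering the chunk stream produced by ChatService.run.
# Chunk shapes ("content" / "tool_start" / "tool_calls") match the code above.

def render_chunk(chunk: dict) -> None:
    kind = chunk.get("chunk_type")
    if kind == "content":
        # Incremental text delta; segment_id increments after each flushed
        # tool_calls chunk, marking the start of a new content segment.
        print(chunk["delta"], end="", flush=True)
    elif kind == "tool_start":
        print(f"\n[tool start] {chunk['tool']} args={chunk['arguments']}")
    elif kind == "tool_calls":
        # One consolidated chunk per turn that executed tools.
        for call in chunk["tool_calls"]:
            print(f"\n[tool done] {call['name']} ({call['status']}, {call['elapsed']})")

# Wiring (agent_bridge comes from the host application):
# svc = ChatService(agent_bridge)
# svc.run("总结今天的对话", session_id="demo", send_chunk_fn=render_chunk, channel_type="web")
```

A real channel would likely buffer deltas per `segment_id` so segments can be re-rendered, rather than printing them directly as this sketch does.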
  {
    "path": "agent/memory/__init__.py",
    "content": "\"\"\"\nMemory module for AgentMesh\n\nProvides both long-term memory (vector/keyword search) and short-term\nconversation history persistence (SQLite).\n\"\"\"\n\nfrom agent.memory.manager import MemoryManager\nfrom agent.memory.config import MemoryConfig, get_default_memory_config, set_global_memory_config\nfrom agent.memory.embedding import create_embedding_provider\nfrom agent.memory.conversation_store import ConversationStore, get_conversation_store\nfrom agent.memory.summarizer import ensure_daily_memory_file\n\n__all__ = [\n    'MemoryManager',\n    'MemoryConfig',\n    'get_default_memory_config',\n    'set_global_memory_config',\n    'create_embedding_provider',\n    'ConversationStore',\n    'get_conversation_store',\n    'ensure_daily_memory_file',\n]\n"
  },
  {
    "path": "agent/memory/chunker.py",
    "content": "\"\"\"\nText chunking utilities for memory\n\nSplits text into chunks with token limits and overlap\n\"\"\"\n\nfrom __future__ import annotations\nfrom typing import List, Tuple\nfrom dataclasses import dataclass\n\n\n@dataclass\nclass TextChunk:\n    \"\"\"Represents a text chunk with line numbers\"\"\"\n    text: str\n    start_line: int\n    end_line: int\n\n\nclass TextChunker:\n    \"\"\"Chunks text by line count with token estimation\"\"\"\n    \n    def __init__(self, max_tokens: int = 500, overlap_tokens: int = 50):\n        \"\"\"\n        Initialize chunker\n        \n        Args:\n            max_tokens: Maximum tokens per chunk\n            overlap_tokens: Overlap tokens between chunks\n        \"\"\"\n        self.max_tokens = max_tokens\n        self.overlap_tokens = overlap_tokens\n        # Rough estimation: ~4 chars per token for English/Chinese mixed\n        self.chars_per_token = 4\n    \n    def chunk_text(self, text: str) -> List[TextChunk]:\n        \"\"\"\n        Chunk text into overlapping segments\n        \n        Args:\n            text: Input text to chunk\n            \n        Returns:\n            List of TextChunk objects\n        \"\"\"\n        if not text.strip():\n            return []\n        \n        lines = text.split('\\n')\n        chunks = []\n        \n        max_chars = self.max_tokens * self.chars_per_token\n        overlap_chars = self.overlap_tokens * self.chars_per_token\n        \n        current_chunk = []\n        current_chars = 0\n        start_line = 1\n        \n        for i, line in enumerate(lines, start=1):\n            line_chars = len(line)\n            \n            # If single line exceeds max, split it\n            if line_chars > max_chars:\n                # Save current chunk if exists\n                if current_chunk:\n                    chunks.append(TextChunk(\n                        text='\\n'.join(current_chunk),\n                        start_line=start_line,\n                        end_line=i - 1\n                    ))\n                    current_chunk = []\n                    current_chars = 0\n                \n                # Split long line into multiple chunks\n                for sub_chunk in self._split_long_line(line, max_chars):\n                    chunks.append(TextChunk(\n                        text=sub_chunk,\n                        start_line=i,\n                        end_line=i\n                    ))\n                \n                start_line = i + 1\n                continue\n            \n            # Check if adding this line would exceed limit\n            if current_chars + line_chars > max_chars and current_chunk:\n                # Save current chunk\n                chunks.append(TextChunk(\n                    text='\\n'.join(current_chunk),\n                    start_line=start_line,\n                    end_line=i - 1\n                ))\n                \n                # Start new chunk with overlap\n                overlap_lines = self._get_overlap_lines(current_chunk, overlap_chars)\n                current_chunk = overlap_lines + [line]\n                current_chars = sum(len(l) for l in current_chunk)\n                start_line = i - len(overlap_lines)\n            else:\n                # Add line to current chunk\n                current_chunk.append(line)\n                current_chars += line_chars\n        \n        # Save last chunk\n        if current_chunk:\n            chunks.append(TextChunk(\n                
text='\\n'.join(current_chunk),\n                start_line=start_line,\n                end_line=len(lines)\n            ))\n        \n        return chunks\n    \n    def _split_long_line(self, line: str, max_chars: int) -> List[str]:\n        \"\"\"Split a single long line into multiple chunks\"\"\"\n        chunks = []\n        for i in range(0, len(line), max_chars):\n            chunks.append(line[i:i + max_chars])\n        return chunks\n    \n    def _get_overlap_lines(self, lines: List[str], target_chars: int) -> List[str]:\n        \"\"\"Get last few lines that fit within target_chars for overlap\"\"\"\n        overlap = []\n        chars = 0\n        \n        for line in reversed(lines):\n            line_chars = len(line)\n            if chars + line_chars > target_chars:\n                break\n            overlap.insert(0, line)\n            chars += line_chars\n        \n        return overlap\n    \n    def chunk_markdown(self, text: str) -> List[TextChunk]:\n        \"\"\"\n        Chunk markdown text while respecting structure\n        (For future enhancement: respect markdown sections)\n        \"\"\"\n        return self.chunk_text(text)\n"
  },
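A short usage sketch for `TextChunker`; the sample text and the deliberately small token limits are made-up values chosen to force visible splits:

```python
# Hypothetical usage sketch for TextChunker; limits shrunk to force splits.
from agent.memory.chunker import TextChunker

chunker = TextChunker(max_tokens=10, overlap_tokens=2)  # ~40 / ~8 chars at 4 chars per token
sample = "\n".join(f"line {i}: " + "x" * 12 for i in range(1, 9))  # 8 lines, 20 chars each

for chunk in chunker.chunk_text(sample):
    # Each chunk keeps its original line-number range for later reference.
    print(f"lines {chunk.start_line}-{chunk.end_line}: {len(chunk.text)} chars")
```

With these limits each chunk collects two 20-character lines (the 8-character overlap budget is too small to carry a whole line over), so the sketch prints the ranges 1-2, 3-4, 5-6, 7-8.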
  {
    "path": "agent/memory/config.py",
    "content": "\"\"\"\nMemory configuration module\n\nProvides global memory configuration with simplified workspace structure\n\"\"\"\n\nfrom __future__ import annotations\nimport os\nfrom dataclasses import dataclass, field\nfrom typing import Optional, List\nfrom pathlib import Path\n\n\ndef _default_workspace():\n    \"\"\"Get default workspace path with proper Windows support\"\"\"\n    from common.utils import expand_path\n    return expand_path(\"~/cow\")\n\n\n@dataclass\nclass MemoryConfig:\n    \"\"\"Configuration for memory storage and search\"\"\"\n    \n    # Storage paths (default: ~/cow)\n    workspace_root: str = field(default_factory=_default_workspace)\n    \n    # Embedding config\n    embedding_provider: str = \"openai\"  # \"openai\" | \"local\"\n    embedding_model: str = \"text-embedding-3-small\"\n    embedding_dim: int = 1536\n    \n    # Chunking config\n    chunk_max_tokens: int = 500\n    chunk_overlap_tokens: int = 50\n    \n    # Search config\n    max_results: int = 10\n    min_score: float = 0.1\n    \n    # Hybrid search weights\n    vector_weight: float = 0.7\n    keyword_weight: float = 0.3\n    \n    # Memory sources\n    sources: List[str] = field(default_factory=lambda: [\"memory\", \"session\"])\n    \n    # Sync config\n    enable_auto_sync: bool = True\n    sync_on_search: bool = True\n    \n    \n    def get_workspace(self) -> Path:\n        \"\"\"Get workspace root directory\"\"\"\n        return Path(self.workspace_root)\n    \n    def get_memory_dir(self) -> Path:\n        \"\"\"Get memory files directory\"\"\"\n        return self.get_workspace() / \"memory\"\n    \n    def get_db_path(self) -> Path:\n        \"\"\"Get SQLite database path for long-term memory index\"\"\"\n        index_dir = self.get_memory_dir() / \"long-term\"\n        index_dir.mkdir(parents=True, exist_ok=True)\n        return index_dir / \"index.db\"\n    \n    def get_skills_dir(self) -> Path:\n        \"\"\"Get skills directory\"\"\"\n        return self.get_workspace() / \"skills\"\n    \n    def get_agent_workspace(self, agent_name: Optional[str] = None) -> Path:\n        \"\"\"\n        Get workspace directory for an agent\n        \n        Args:\n            agent_name: Optional agent name (not used in current implementation)\n            \n        Returns:\n            Path to workspace directory\n        \"\"\"\n        workspace = self.get_workspace()\n        # Ensure workspace directory exists\n        workspace.mkdir(parents=True, exist_ok=True)\n        return workspace\n\n\n# Global memory configuration\n_global_memory_config: Optional[MemoryConfig] = None\n\n\ndef get_default_memory_config() -> MemoryConfig:\n    \"\"\"\n    Get the global memory configuration.\n    If not set, returns a default configuration.\n    \n    Returns:\n        MemoryConfig instance\n    \"\"\"\n    global _global_memory_config\n    if _global_memory_config is None:\n        _global_memory_config = MemoryConfig()\n    return _global_memory_config\n\n\ndef set_global_memory_config(config: MemoryConfig):\n    \"\"\"\n    Set the global memory configuration.\n    This should be called before creating any MemoryManager instances.\n    \n    Args:\n        config: MemoryConfig instance to use globally\n        \n    Example:\n        >>> from agent.memory import MemoryConfig, set_global_memory_config\n        >>> config = MemoryConfig(\n        ...     workspace_root=\"~/my_agents\",\n        ...     embedding_provider=\"openai\",\n        ...     vector_weight=0.8\n        ... 
)\n        >>> set_global_memory_config(config)\n    \"\"\"\n    global _global_memory_config\n    _global_memory_config = config\n"
  },
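  {
    "path": "agent/memory/examples/config_usage.py",
    "content": "\"\"\"\nUsage sketch for agent/memory/config.py (editor-added illustration, not project\ncode; this file path is hypothetical).\n\nShows how the global MemoryConfig is installed once at startup and how the\nderived workspace paths resolve afterwards.\n\"\"\"\n\nfrom agent.memory.config import (\n    MemoryConfig,\n    get_default_memory_config,\n    set_global_memory_config,\n)\nfrom common.utils import expand_path\n\n# Install the global config before any MemoryManager is created.\nset_global_memory_config(MemoryConfig(workspace_root=expand_path(\"~/my_agents\"), vector_weight=0.8))\n\ncfg = get_default_memory_config()  # returns the config installed above\nprint(cfg.get_workspace())   # e.g. /home/user/my_agents\nprint(cfg.get_memory_dir())  # e.g. /home/user/my_agents/memory\nprint(cfg.get_db_path())     # .../memory/long-term/index.db (directory auto-created)\n"
  },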
  {
    "path": "agent/memory/conversation_store.py",
    "content": "\"\"\"\nConversation history persistence using SQLite.\n\nDesign:\n- sessions table: per-session metadata (channel_type, last_active, msg_count)\n- messages table: individual messages stored as JSON, append-only\n- Pruning: age-based only (sessions not updated within N days are deleted)\n- Thread-safe via a single in-process lock\n\nStorage path: ~/cow/sessions/conversations.db\n\"\"\"\n\nfrom __future__ import annotations\n\nimport json\nimport sqlite3\nimport threading\nimport time\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Optional\n\nfrom common.log import logger\n\n\n# ---------------------------------------------------------------------------\n# Schema\n# ---------------------------------------------------------------------------\n\n_DDL = \"\"\"\nCREATE TABLE IF NOT EXISTS sessions (\n    session_id   TEXT    PRIMARY KEY,\n    channel_type TEXT    NOT NULL DEFAULT '',\n    created_at   INTEGER NOT NULL,\n    last_active  INTEGER NOT NULL,\n    msg_count    INTEGER NOT NULL DEFAULT 0\n);\n\nCREATE TABLE IF NOT EXISTS messages (\n    id           INTEGER PRIMARY KEY AUTOINCREMENT,\n    session_id   TEXT    NOT NULL,\n    seq          INTEGER NOT NULL,\n    role         TEXT    NOT NULL,\n    content      TEXT    NOT NULL,\n    created_at   INTEGER NOT NULL,\n    UNIQUE (session_id, seq)\n);\n\nCREATE INDEX IF NOT EXISTS idx_messages_session\n    ON messages (session_id, seq);\n\nCREATE INDEX IF NOT EXISTS idx_sessions_last_active\n    ON sessions (last_active);\n\"\"\"\n\n# Migration: add channel_type column to existing databases that predate it.\n_MIGRATION_ADD_CHANNEL_TYPE = \"\"\"\nALTER TABLE sessions ADD COLUMN channel_type TEXT NOT NULL DEFAULT '';\n\"\"\"\n\nDEFAULT_MAX_AGE_DAYS: int = 30\n\n\ndef _is_visible_user_message(content: Any) -> bool:\n    \"\"\"\n    Return True when a user-role message represents actual user input\n    (not an internal tool_result injected by the agent loop).\n    \"\"\"\n    if isinstance(content, str):\n        return bool(content.strip())\n    if isinstance(content, list):\n        return any(\n            isinstance(b, dict) and b.get(\"type\") == \"text\"\n            for b in content\n        )\n    return False\n\n\ndef _extract_display_text(content: Any) -> str:\n    \"\"\"\n    Extract the human-readable text portion from a message content value.\n    Returns an empty string for tool_use / tool_result blocks.\n    \"\"\"\n    if isinstance(content, str):\n        return content.strip()\n    if isinstance(content, list):\n        parts = [\n            b.get(\"text\", \"\")\n            for b in content\n            if isinstance(b, dict) and b.get(\"type\") == \"text\"\n        ]\n        return \"\\n\".join(p for p in parts if p).strip()\n    return \"\"\n\n\ndef _extract_tool_calls(content: Any) -> List[Dict[str, Any]]:\n    \"\"\"\n    Extract tool_use blocks from an assistant message content.\n    Returns a list of {name, arguments} dicts (result filled in later).\n    \"\"\"\n    if not isinstance(content, list):\n        return []\n    return [\n        {\"id\": b.get(\"id\", \"\"), \"name\": b.get(\"name\", \"\"), \"arguments\": b.get(\"input\", {})}\n        for b in content\n        if isinstance(b, dict) and b.get(\"type\") == \"tool_use\"\n    ]\n\n\ndef _extract_tool_results(content: Any) -> Dict[str, str]:\n    \"\"\"\n    Extract tool_result blocks from a user message, keyed by tool_use_id.\n    \"\"\"\n    if not isinstance(content, list):\n        return {}\n    results = {}\n    for b in 
content:\n        if not isinstance(b, dict) or b.get(\"type\") != \"tool_result\":\n            continue\n        tool_id = b.get(\"tool_use_id\", \"\")\n        result_content = b.get(\"content\", \"\")\n        if isinstance(result_content, list):\n            result_content = \"\\n\".join(\n                rb.get(\"text\", \"\") for rb in result_content\n                if isinstance(rb, dict) and rb.get(\"type\") == \"text\"\n            )\n        results[tool_id] = str(result_content)\n    return results\n\n\ndef _group_into_display_turns(\n    rows: List[tuple],\n) -> List[Dict[str, Any]]:\n    \"\"\"\n    Convert raw (role, content_json, created_at) DB rows into display turns.\n\n    One display turn = one visible user message  +  one merged assistant reply.\n    All intermediate assistant messages (those carrying tool_use) and the final\n    assistant text reply produced for the same user query are collapsed into a\n    single assistant turn, exactly matching the live SSE rendering where tools\n    and the final answer appear inside the same bubble.\n\n    Grouping rules:\n    - A visible user message starts a new group.\n    - tool_result user messages are internal; their content is attached to the\n      matching tool_use entry via tool_use_id and they never become own turns.\n    - All assistant messages within a group are merged:\n        * tool_use blocks → tool_calls list (result filled from tool_results)\n        * text blocks → last non-empty text becomes the display content\n    \"\"\"\n    # ------------------------------------------------------------------ #\n    # Pass 1: split rows into groups, each starting with a visible user msg\n    # ------------------------------------------------------------------ #\n    # group = (user_row | None, [subsequent_rows])\n    # user_row: (content, created_at)\n    groups: List[tuple] = []\n    cur_user: Optional[tuple] = None\n    cur_rest: List[tuple] = []\n    started = False\n\n    for role, raw_content, created_at in rows:\n        try:\n            content = json.loads(raw_content)\n        except Exception:\n            content = raw_content\n\n        if role == \"user\" and _is_visible_user_message(content):\n            if started:\n                groups.append((cur_user, cur_rest))\n            cur_user = (content, created_at)\n            cur_rest = []\n            started = True\n        else:\n            cur_rest.append((role, content, created_at))\n\n    if started:\n        groups.append((cur_user, cur_rest))\n\n    # ------------------------------------------------------------------ #\n    # Pass 2: build display turns from each group\n    # ------------------------------------------------------------------ #\n    turns: List[Dict[str, Any]] = []\n\n    for user_row, rest in groups:\n        # User turn\n        if user_row:\n            content, created_at = user_row\n            text = _extract_display_text(content)\n            if text:\n                turns.append({\"role\": \"user\", \"content\": text, \"created_at\": created_at})\n\n        # Collect all tool_calls and tool_results from the rest of the group\n        all_tool_calls: List[Dict[str, Any]] = []\n        tool_results: Dict[str, str] = {}\n        final_text = \"\"\n        final_ts: Optional[int] = None\n\n        for role, content, created_at in rest:\n            if role == \"user\":\n                tool_results.update(_extract_tool_results(content))\n            elif role == \"assistant\":\n                tcs = 
_extract_tool_calls(content)\n                all_tool_calls.extend(tcs)\n                t = _extract_display_text(content)\n                if t:\n                    final_text = t\n                final_ts = created_at\n\n        # Attach tool results to their matching tool_call entries\n        for tc in all_tool_calls:\n            tc[\"result\"] = tool_results.get(tc.get(\"id\", \"\"), \"\")\n\n        if final_text or all_tool_calls:\n            turns.append({\n                \"role\": \"assistant\",\n                \"content\": final_text,\n                \"tool_calls\": all_tool_calls,\n                \"created_at\": final_ts or (user_row[1] if user_row else 0),\n            })\n\n    return turns\n\n\nclass ConversationStore:\n    \"\"\"\n    SQLite-backed store for per-session conversation history.\n\n    Usage:\n        store = ConversationStore(db_path)\n        store.append_messages(\"user_123\", new_messages, channel_type=\"feishu\")\n        msgs = store.load_messages(\"user_123\", max_turns=30)\n    \"\"\"\n\n    def __init__(self, db_path: Path):\n        self._db_path = db_path\n        self._lock = threading.Lock()\n        self._init_db()\n\n    # ------------------------------------------------------------------\n    # Public API\n    # ------------------------------------------------------------------\n\n    def load_messages(\n        self,\n        session_id: str,\n        max_turns: int = 30,\n    ) -> List[Dict[str, Any]]:\n        \"\"\"\n        Load the most recent messages for a session, for injection into the LLM.\n\n        ALL message types (user text, assistant tool_use, tool_result) are returned\n        in their original JSON form so the LLM can reconstruct the full context.\n\n        max_turns is a *visible-turn* count: we count only user messages whose\n        content is actual user text (not tool_result blocks).  This prevents\n        tool-heavy sessions from exhausting the turn budget prematurely.\n\n        Args:\n            session_id: Unique session identifier.\n            max_turns: Maximum number of visible user-assistant turns to keep.\n\n        Returns:\n            Chronologically ordered list of message dicts (role, content).\n        \"\"\"\n        with self._lock:\n            conn = self._connect()\n            try:\n                rows = conn.execute(\n                    \"\"\"\n                    SELECT seq, role, content\n                    FROM messages\n                    WHERE session_id = ?\n                    ORDER BY seq DESC\n                    \"\"\",\n                    (session_id,),\n                ).fetchall()\n            finally:\n                conn.close()\n\n        if not rows:\n            return []\n\n        # Walk newest-to-oldest counting *visible* user turns (actual user text,\n        # not tool_result injections).  
Record the seq of every visible user\n        # message so we can find a clean cut point later.\n        visible_turn_seqs: List[int] = []  # newest first\n        for seq, role, raw_content in rows:\n            if role != \"user\":\n                continue\n            try:\n                content = json.loads(raw_content)\n            except Exception:\n                content = raw_content\n            if _is_visible_user_message(content):\n                visible_turn_seqs.append(seq)\n\n        # Determine the seq of the oldest visible user message we want to keep.\n        # If the total turns fit within max_turns, keep everything.\n        if len(visible_turn_seqs) <= max_turns:\n            cutoff_seq = None  # keep all\n        else:\n            # The Nth visible user message (0-indexed) is the oldest we keep.\n            cutoff_seq = visible_turn_seqs[max_turns - 1]\n\n        # Build result in chronological order, starting from cutoff.\n        # IMPORTANT: we start exactly at cutoff_seq (the visible user message),\n        # never mid-group, so tool_use / tool_result pairs are always complete.\n        result = []\n        for seq, role, raw_content in reversed(rows):\n            if cutoff_seq is not None and seq < cutoff_seq:\n                continue\n            try:\n                content = json.loads(raw_content)\n            except Exception:\n                content = raw_content\n            result.append({\"role\": role, \"content\": content})\n        return result\n\n    def append_messages(\n        self,\n        session_id: str,\n        messages: List[Dict[str, Any]],\n        channel_type: str = \"\",\n    ) -> None:\n        \"\"\"\n        Append new messages to a session's history.\n\n        Seq numbers continue from the session's current maximum, so\n        concurrent callers on distinct sessions never collide.\n\n        Args:\n            session_id: Unique session identifier.\n            messages: List of message dicts to append.\n            channel_type: Source channel (e.g. \"feishu\", \"web\", \"wechat\").\n                          Only written on session creation; ignored on update.\n        \"\"\"\n        if not messages:\n            return\n\n        now = int(time.time())\n        with self._lock:\n            conn = self._connect()\n            try:\n                with conn:\n                    # INSERT OR IGNORE creates the row on first visit;\n                    # the UPDATE always refreshes last_active.\n                    # Avoids ON CONFLICT...DO UPDATE (requires SQLite >= 3.24).\n                    conn.execute(\n                        \"\"\"\n                        INSERT OR IGNORE INTO sessions\n                            (session_id, channel_type, created_at, last_active, msg_count)\n                        VALUES (?, ?, ?, ?, 0)\n                        \"\"\",\n                        (session_id, channel_type, now, now),\n                    )\n                    conn.execute(\n                        \"UPDATE sessions SET last_active = ? 
WHERE session_id = ?\",\n                        (now, session_id),\n                    )\n\n                    # Determine starting seq for the new batch.\n                    row = conn.execute(\n                        \"SELECT COALESCE(MAX(seq), -1) FROM messages WHERE session_id = ?\",\n                        (session_id,),\n                    ).fetchone()\n                    next_seq = row[0] + 1\n\n                    for msg in messages:\n                        role = msg.get(\"role\", \"\")\n                        content = json.dumps(\n                            msg.get(\"content\", \"\"), ensure_ascii=False\n                        )\n                        conn.execute(\n                            \"\"\"\n                            INSERT OR IGNORE INTO messages\n                                (session_id, seq, role, content, created_at)\n                            VALUES (?, ?, ?, ?, ?)\n                            \"\"\",\n                            (session_id, next_seq, role, content, now),\n                        )\n                        next_seq += 1\n\n                    conn.execute(\n                        \"\"\"\n                        UPDATE sessions\n                        SET msg_count = (\n                            SELECT COUNT(*) FROM messages WHERE session_id = ?\n                        )\n                        WHERE session_id = ?\n                        \"\"\",\n                        (session_id, session_id),\n                    )\n            finally:\n                conn.close()\n\n    def clear_session(self, session_id: str) -> None:\n        \"\"\"Delete all messages and the session record for a given session_id.\"\"\"\n        with self._lock:\n            conn = self._connect()\n            try:\n                with conn:\n                    conn.execute(\n                        \"DELETE FROM messages WHERE session_id = ?\", (session_id,)\n                    )\n                    conn.execute(\n                        \"DELETE FROM sessions WHERE session_id = ?\", (session_id,)\n                    )\n            finally:\n                conn.close()\n\n    def cleanup_old_sessions(self, max_age_days: Optional[int] = None) -> int:\n        \"\"\"\n        Delete sessions that have not been active within max_age_days.\n\n        Args:\n            max_age_days: Override the default retention period.\n\n        Returns:\n            Number of sessions deleted.\n        \"\"\"\n        try:\n            from config import conf\n            max_age = max_age_days or conf().get(\n                \"conversation_max_age_days\", DEFAULT_MAX_AGE_DAYS\n            )\n        except Exception:\n            max_age = max_age_days or DEFAULT_MAX_AGE_DAYS\n\n        cutoff = int(time.time()) - max_age * 86400\n        deleted = 0\n\n        with self._lock:\n            conn = self._connect()\n            try:\n                with conn:\n                    stale = conn.execute(\n                        \"SELECT session_id FROM sessions WHERE last_active < ?\",\n                        (cutoff,),\n                    ).fetchall()\n                    for (sid,) in stale:\n                        conn.execute(\n                            \"DELETE FROM messages WHERE session_id = ?\", (sid,)\n                        )\n                        conn.execute(\n                            \"DELETE FROM sessions WHERE session_id = ?\", (sid,)\n                        )\n                        deleted += 1\n            finally:\n        
        conn.close()\n\n        if deleted:\n            logger.info(f\"[ConversationStore] Pruned {deleted} expired sessions\")\n        return deleted\n\n    def load_history_page(\n        self,\n        session_id: str,\n        page: int = 1,\n        page_size: int = 20,\n    ) -> Dict[str, Any]:\n        \"\"\"\n        Load a page of conversation history for UI display, grouped into turns.\n\n        Each \"turn\" maps to one of:\n          - A user message (role=\"user\", content=str)\n          - An assistant message (role=\"assistant\", content=str,\n            tool_calls=[{name, arguments, result}] when tools were used)\n\n        Internal tool_result user messages are merged into the preceding\n        assistant entry's tool_calls list and never appear as standalone items.\n\n        Pages are numbered from 1 (most recent).  Messages within a page are\n        returned in chronological order.\n\n        Returns:\n            {\n                \"messages\": [\n                    {\n                        \"role\": \"user\" | \"assistant\",\n                        \"content\": str,\n                        \"tool_calls\": [...],   # assistant only, may be []\n                        \"created_at\": int,\n                    },\n                    ...\n                ],\n                \"total\": <visible turn count>,\n                \"page\": <current page>,\n                \"page_size\": <page_size>,\n                \"has_more\": bool,\n            }\n        \"\"\"\n        page = max(1, page)\n        with self._lock:\n            conn = self._connect()\n            try:\n                rows = conn.execute(\n                    \"\"\"\n                    SELECT role, content, created_at\n                    FROM messages\n                    WHERE session_id = ?\n                    ORDER BY seq ASC\n                    \"\"\",\n                    (session_id,),\n                ).fetchall()\n            finally:\n                conn.close()\n\n        visible = _group_into_display_turns(rows)\n\n        total = len(visible)\n        offset = (page - 1) * page_size\n        page_items = list(reversed(visible))[offset: offset + page_size]\n        page_items = list(reversed(page_items))\n\n        return {\n            \"messages\": page_items,\n            \"total\": total,\n            \"page\": page,\n            \"page_size\": page_size,\n            \"has_more\": offset + page_size < total,\n        }\n\n    def get_stats(self) -> Dict[str, Any]:\n        \"\"\"Return basic stats keyed by channel_type, for monitoring.\"\"\"\n        with self._lock:\n            conn = self._connect()\n            try:\n                total_sessions = conn.execute(\n                    \"SELECT COUNT(*) FROM sessions\"\n                ).fetchone()[0]\n                total_messages = conn.execute(\n                    \"SELECT COUNT(*) FROM messages\"\n                ).fetchone()[0]\n                by_channel = conn.execute(\n                    \"\"\"\n                    SELECT channel_type, COUNT(*) as cnt\n                    FROM sessions\n                    GROUP BY channel_type\n                    ORDER BY cnt DESC\n                    \"\"\"\n                ).fetchall()\n                return {\n                    \"total_sessions\": total_sessions,\n                    \"total_messages\": total_messages,\n                    \"by_channel\": {row[0] or \"unknown\": row[1] for row in by_channel},\n                }\n            finally:\n                
conn.close()\n\n    # ------------------------------------------------------------------\n    # Internal helpers\n    # ------------------------------------------------------------------\n\n    def _init_db(self) -> None:\n        self._db_path.parent.mkdir(parents=True, exist_ok=True)\n        conn = self._connect()\n        try:\n            conn.executescript(_DDL)\n            conn.commit()\n            self._migrate(conn)\n        finally:\n            conn.close()\n\n    def _migrate(self, conn: sqlite3.Connection) -> None:\n        \"\"\"Apply incremental schema migrations on existing databases.\"\"\"\n        cols = {\n            row[1]\n            for row in conn.execute(\"PRAGMA table_info(sessions)\").fetchall()\n        }\n        if \"channel_type\" not in cols:\n            try:\n                conn.execute(_MIGRATION_ADD_CHANNEL_TYPE)\n                conn.commit()\n                logger.info(\"[ConversationStore] Migrated: added channel_type column\")\n            except Exception as e:\n                logger.warning(f\"[ConversationStore] Migration failed: {e}\")\n\n    def _connect(self) -> sqlite3.Connection:\n        conn = sqlite3.connect(str(self._db_path), timeout=10)\n        conn.execute(\"PRAGMA journal_mode=WAL\")\n        conn.execute(\"PRAGMA synchronous=NORMAL\")\n        return conn\n\n\n# ---------------------------------------------------------------------------\n# Singleton\n# ---------------------------------------------------------------------------\n\n_store_instance: Optional[ConversationStore] = None\n_store_lock = threading.Lock()\n\n\ndef get_conversation_store() -> ConversationStore:\n    \"\"\"\n    Return the process-wide ConversationStore singleton.\n\n    Reuses the long-term memory database so the project stays with a single\n    SQLite file: ~/cow/memory/long-term/index.db\n    The conversation tables (sessions / messages) are separate from the\n    memory tables (chunks / files) — no conflicts.\n    \"\"\"\n    global _store_instance\n    if _store_instance is not None:\n        return _store_instance\n\n    with _store_lock:\n        if _store_instance is not None:\n            return _store_instance\n\n        try:\n            from agent.memory.config import get_default_memory_config\n            db_path = get_default_memory_config().get_db_path()\n        except Exception:\n            from common.utils import expand_path\n            db_path = Path(expand_path(\"~/cow\")) / \"memory\" / \"long-term\" / \"index.db\"\n\n        _store_instance = ConversationStore(db_path)\n        logger.debug(f\"[ConversationStore] Using shared DB at: {db_path}\")\n        return _store_instance\n"
  },
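  {
    "path": "agent/memory/examples/conversation_store_usage.py",
    "content": "\"\"\"\nUsage sketch for agent/memory/conversation_store.py (editor-added illustration,\nnot project code; this file path is hypothetical).\n\nRound-trips one tool-using exchange through a throwaway ConversationStore and\nshows the two views: full JSON messages for the LLM vs. merged display turns.\n\"\"\"\n\nimport tempfile\nfrom pathlib import Path\n\nfrom agent.memory.conversation_store import ConversationStore\n\nstore = ConversationStore(Path(tempfile.mkdtemp()) / \"conversations.db\")\n\nstore.append_messages(\n    \"user_123\",\n    [\n        {\"role\": \"user\", \"content\": \"What is 2 + 2?\"},\n        # Internal tool round-trip: merged into the assistant's display turn.\n        {\"role\": \"assistant\", \"content\": [{\"type\": \"tool_use\", \"id\": \"t1\", \"name\": \"calc\", \"input\": {\"expr\": \"2+2\"}}]},\n        {\"role\": \"user\", \"content\": [{\"type\": \"tool_result\", \"tool_use_id\": \"t1\", \"content\": \"4\"}]},\n        {\"role\": \"assistant\", \"content\": [{\"type\": \"text\", \"text\": \"2 + 2 = 4\"}]},\n    ],\n    channel_type=\"web\",\n)\n\n# LLM view: all four messages come back in their original JSON form.\nassert len(store.load_messages(\"user_123\", max_turns=30)) == 4\n\n# Display view: one user turn plus one assistant turn with the tool result attached.\npage = store.load_history_page(\"user_123\")\nassert page[\"total\"] == 2\nassert page[\"messages\"][1][\"tool_calls\"][0][\"result\"] == \"4\"\n"
  },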
  {
    "path": "agent/memory/embedding.py",
    "content": "\"\"\"\nEmbedding providers for memory\n\nSupports OpenAI and local embedding models\n\"\"\"\n\nimport hashlib\nfrom abc import ABC, abstractmethod\nfrom typing import List, Optional\n\n\nclass EmbeddingProvider(ABC):\n    \"\"\"Base class for embedding providers\"\"\"\n\n    @abstractmethod\n    def embed(self, text: str) -> List[float]:\n        \"\"\"Generate embedding for text\"\"\"\n        pass\n\n    @abstractmethod\n    def embed_batch(self, texts: List[str]) -> List[List[float]]:\n        \"\"\"Generate embeddings for multiple texts\"\"\"\n        pass\n    \n    @property\n    @abstractmethod\n    def dimensions(self) -> int:\n        \"\"\"Get embedding dimensions\"\"\"\n        pass\n\n\nclass OpenAIEmbeddingProvider(EmbeddingProvider):\n    \"\"\"OpenAI embedding provider using REST API\"\"\"\n    \n    def __init__(self, model: str = \"text-embedding-3-small\", api_key: Optional[str] = None,\n                 api_base: Optional[str] = None, extra_headers: Optional[dict] = None):\n        \"\"\"\n        Initialize OpenAI embedding provider\n\n        Args:\n            model: Model name (text-embedding-3-small or text-embedding-3-large)\n            api_key: OpenAI API key\n            api_base: Optional API base URL\n            extra_headers: Optional extra headers to include in API requests\n        \"\"\"\n        self.model = model\n        self.api_key = api_key\n        self.api_base = api_base or \"https://api.openai.com/v1\"\n        self.extra_headers = extra_headers or {}\n\n        # Validate API key\n        if not self.api_key or self.api_key in [\"\", \"YOUR API KEY\", \"YOUR_API_KEY\"]:\n            raise ValueError(\"OpenAI API key is not configured. Please set 'open_ai_api_key' in config.json\")\n\n        # Set dimensions based on model\n        self._dimensions = 1536 if \"small\" in model else 3072\n\n    def _call_api(self, input_data):\n        \"\"\"Call OpenAI embedding API using requests\"\"\"\n        import requests\n\n        url = f\"{self.api_base}/embeddings\"\n        headers = {\n            \"Content-Type\": \"application/json\",\n            \"Authorization\": f\"Bearer {self.api_key}\",\n            **self.extra_headers,\n        }\n        data = {\n            \"input\": input_data,\n            \"model\": self.model\n        }\n\n        try:\n            response = requests.post(url, headers=headers, json=data, timeout=5)\n            response.raise_for_status()\n            return response.json()\n        except requests.exceptions.ConnectionError as e:\n            raise ConnectionError(f\"Failed to connect to OpenAI API at {url}. Please check your network connection and api_base configuration. Error: {str(e)}\")\n        except requests.exceptions.Timeout as e:\n            raise TimeoutError(f\"OpenAI API request timed out after 10s. Please check your network connection. Error: {str(e)}\")\n        except requests.exceptions.HTTPError as e:\n            if e.response.status_code == 401:\n                raise ValueError(f\"Invalid OpenAI API key. Please check your 'open_ai_api_key' in config.json\")\n            elif e.response.status_code == 429:\n                raise ValueError(f\"OpenAI API rate limit exceeded. 
Please try again later.\")\n            else:\n                raise ValueError(f\"OpenAI API request failed: {e.response.status_code} - {e.response.text}\")\n\n    def embed(self, text: str) -> List[float]:\n        \"\"\"Generate embedding for text\"\"\"\n        result = self._call_api(text)\n        return result[\"data\"][0][\"embedding\"]\n\n    def embed_batch(self, texts: List[str]) -> List[List[float]]:\n        \"\"\"Generate embeddings for multiple texts\"\"\"\n        if not texts:\n            return []\n\n        result = self._call_api(texts)\n        return [item[\"embedding\"] for item in result[\"data\"]]\n\n    @property\n    def dimensions(self) -> int:\n        return self._dimensions\n\n\n# LocalEmbeddingProvider removed - only use OpenAI embedding or keyword search\n\n\nclass EmbeddingCache:\n    \"\"\"Cache for embeddings to avoid recomputation\"\"\"\n\n    def __init__(self):\n        self.cache = {}\n\n    def get(self, text: str, provider: str, model: str) -> Optional[List[float]]:\n        \"\"\"Get cached embedding\"\"\"\n        key = self._compute_key(text, provider, model)\n        return self.cache.get(key)\n    \n    def put(self, text: str, provider: str, model: str, embedding: List[float]):\n        \"\"\"Cache embedding\"\"\"\n        key = self._compute_key(text, provider, model)\n        self.cache[key] = embedding\n    \n    @staticmethod\n    def _compute_key(text: str, provider: str, model: str) -> str:\n        \"\"\"Compute cache key\"\"\"\n        content = f\"{provider}:{model}:{text}\"\n        return hashlib.md5(content.encode('utf-8')).hexdigest()\n    \n    def clear(self):\n        \"\"\"Clear cache\"\"\"\n        self.cache.clear()\n\n\ndef create_embedding_provider(\n    provider: str = \"openai\",\n    model: Optional[str] = None,\n    api_key: Optional[str] = None,\n    api_base: Optional[str] = None,\n    extra_headers: Optional[dict] = None\n) -> EmbeddingProvider:\n    \"\"\"\n    Factory function to create embedding provider\n\n    Supports \"openai\" and \"linkai\" providers (both use OpenAI-compatible REST API).\n    If initialization fails, caller should fall back to keyword-only search.\n\n    Args:\n        provider: Provider name (\"openai\" or \"linkai\")\n        model: Model name (default: text-embedding-3-small)\n        api_key: API key (required)\n        api_base: API base URL\n        extra_headers: Optional extra headers to include in API requests\n\n    Returns:\n        EmbeddingProvider instance\n\n    Raises:\n        ValueError: If provider is unsupported or api_key is missing\n    \"\"\"\n    if provider not in (\"openai\", \"linkai\"):\n        raise ValueError(f\"Unsupported embedding provider: {provider}. Use 'openai' or 'linkai'.\")\n\n    model = model or \"text-embedding-3-small\"\n    return OpenAIEmbeddingProvider(model=model, api_key=api_key, api_base=api_base, extra_headers=extra_headers)\n"
  },
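  {
    "path": "agent/memory/examples/embedding_usage.py",
    "content": "\"\"\"\nUsage sketch for agent/memory/embedding.py (editor-added illustration, not\nproject code; this file path is hypothetical).\n\nBuilds a provider via the factory and wraps it with EmbeddingCache so repeated\ntexts do not trigger repeated API calls. Falls back to no provider (keyword-only\nsearch) when the key is missing, mirroring MemoryManager's behaviour.\n\"\"\"\n\nimport os\nfrom typing import List, Optional\n\nfrom agent.memory.embedding import EmbeddingCache, create_embedding_provider\n\ncache = EmbeddingCache()\nprovider = None\ntry:\n    provider = create_embedding_provider(\n        provider=\"openai\",\n        model=\"text-embedding-3-small\",\n        api_key=os.environ.get(\"OPENAI_API_KEY\"),\n    )\nexcept ValueError as e:\n    print(f\"No embedding provider, keyword search only: {e}\")\n\n\ndef embed_cached(text: str) -> Optional[List[float]]:\n    \"\"\"Embed text, serving repeats from the in-process cache.\"\"\"\n    if provider is None:\n        return None\n    hit = cache.get(text, \"openai\", provider.model)\n    if hit is not None:\n        return hit\n    vec = provider.embed(text)\n    cache.put(text, \"openai\", provider.model, vec)\n    return vec\n"
  },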
  {
    "path": "agent/memory/manager.py",
    "content": "\"\"\"\nMemory manager for AgentMesh\n\nProvides high-level interface for memory operations\n\"\"\"\n\nimport os\nfrom typing import List, Optional, Dict, Any\nfrom pathlib import Path\nimport hashlib\nfrom datetime import datetime, timedelta\n\nfrom agent.memory.config import MemoryConfig, get_default_memory_config\nfrom agent.memory.storage import MemoryStorage, MemoryChunk, SearchResult\nfrom agent.memory.chunker import TextChunker\nfrom agent.memory.embedding import create_embedding_provider, EmbeddingProvider\nfrom agent.memory.summarizer import MemoryFlushManager, create_memory_files_if_needed\n\n\nclass MemoryManager:\n    \"\"\"\n    Memory manager with hybrid search capabilities\n    \n    Provides long-term memory for agents with vector and keyword search\n    \"\"\"\n    \n    def __init__(\n        self,\n        config: Optional[MemoryConfig] = None,\n        embedding_provider: Optional[EmbeddingProvider] = None,\n        llm_model: Optional[Any] = None\n    ):\n        \"\"\"\n        Initialize memory manager\n        \n        Args:\n            config: Memory configuration (uses global config if not provided)\n            embedding_provider: Custom embedding provider (optional)\n            llm_model: LLM model for summarization (optional)\n        \"\"\"\n        self.config = config or get_default_memory_config()\n        \n        # Initialize storage\n        db_path = self.config.get_db_path()\n        self.storage = MemoryStorage(db_path)\n        \n        # Initialize chunker\n        self.chunker = TextChunker(\n            max_tokens=self.config.chunk_max_tokens,\n            overlap_tokens=self.config.chunk_overlap_tokens\n        )\n        \n        # Initialize embedding provider (optional, prefer OpenAI, fallback to LinkAI)\n        self.embedding_provider = None\n        if embedding_provider:\n            self.embedding_provider = embedding_provider\n        else:\n            # Try OpenAI first\n            try:\n                api_key = os.environ.get('OPENAI_API_KEY')\n                api_base = os.environ.get('OPENAI_API_BASE')\n                if api_key:\n                    self.embedding_provider = create_embedding_provider(\n                        provider=\"openai\",\n                        model=self.config.embedding_model,\n                        api_key=api_key,\n                        api_base=api_base\n                    )\n            except Exception as e:\n                from common.log import logger\n                logger.warning(f\"[MemoryManager] OpenAI embedding failed: {e}\")\n\n            # Fallback to LinkAI\n            if self.embedding_provider is None:\n                try:\n                    linkai_key = os.environ.get('LINKAI_API_KEY')\n                    linkai_base = os.environ.get('LINKAI_API_BASE', 'https://api.link-ai.tech')\n                    if linkai_key:\n                        from common.utils import get_cloud_headers\n                        cloud_headers = get_cloud_headers(linkai_key)\n                        cloud_headers.pop(\"Authorization\", None)\n                        self.embedding_provider = create_embedding_provider(\n                            provider=\"linkai\",\n                            model=self.config.embedding_model,\n                            api_key=linkai_key,\n                            api_base=f\"{linkai_base}/v1\",\n                            extra_headers=cloud_headers,\n                        )\n                except Exception as e:\n          
          from common.log import logger\n                    logger.warning(f\"[MemoryManager] LinkAI embedding failed: {e}\")\n\n            if self.embedding_provider is None:\n                from common.log import logger\n                logger.info(f\"[MemoryManager] Memory will work with keyword search only (no vector search)\")\n        \n        # Initialize memory flush manager\n        workspace_dir = self.config.get_workspace()\n        self.flush_manager = MemoryFlushManager(\n            workspace_dir=workspace_dir,\n            llm_model=llm_model\n        )\n        \n        # Ensure workspace directories exist\n        self._init_workspace()\n        \n        self._dirty = False\n    \n    def _init_workspace(self):\n        \"\"\"Initialize workspace directories\"\"\"\n        memory_dir = self.config.get_memory_dir()\n        memory_dir.mkdir(parents=True, exist_ok=True)\n        \n        # Create default memory files\n        workspace_dir = self.config.get_workspace()\n        create_memory_files_if_needed(workspace_dir)\n    \n    async def search(\n        self,\n        query: str,\n        user_id: Optional[str] = None,\n        max_results: Optional[int] = None,\n        min_score: Optional[float] = None,\n        include_shared: bool = True\n    ) -> List[SearchResult]:\n        \"\"\"\n        Search memory with hybrid search (vector + keyword)\n        \n        Args:\n            query: Search query\n            user_id: User ID for scoped search\n            max_results: Maximum results to return\n            min_score: Minimum score threshold\n            include_shared: Include shared memories\n            \n        Returns:\n            List of search results sorted by relevance\n        \"\"\"\n        max_results = max_results or self.config.max_results\n        min_score = min_score or self.config.min_score\n        \n        # Determine scopes\n        scopes = []\n        if include_shared:\n            scopes.append(\"shared\")\n        if user_id:\n            scopes.append(\"user\")\n        \n        if not scopes:\n            return []\n        \n        # Sync if needed\n        if self.config.sync_on_search and self._dirty:\n            await self.sync()\n        \n        # Perform vector search (if embedding provider available)\n        vector_results = []\n        if self.embedding_provider:\n            try:\n                from common.log import logger\n                query_embedding = self.embedding_provider.embed(query)\n                vector_results = self.storage.search_vector(\n                    query_embedding=query_embedding,\n                    user_id=user_id,\n                    scopes=scopes,\n                    limit=max_results * 2  # Get more candidates for merging\n                )\n                logger.info(f\"[MemoryManager] Vector search found {len(vector_results)} results for query: {query}\")\n            except Exception as e:\n                from common.log import logger\n                logger.warning(f\"[MemoryManager] Vector search failed: {e}\")\n        \n        # Perform keyword search\n        keyword_results = self.storage.search_keyword(\n            query=query,\n            user_id=user_id,\n            scopes=scopes,\n            limit=max_results * 2\n        )\n        from common.log import logger\n        logger.info(f\"[MemoryManager] Keyword search found {len(keyword_results)} results for query: {query}\")\n        \n        # Merge results\n        merged = self._merge_results(\n     
       vector_results,\n            keyword_results,\n            self.config.vector_weight,\n            self.config.keyword_weight\n        )\n        \n        # Filter by min score and limit\n        filtered = [r for r in merged if r.score >= min_score]\n        return filtered[:max_results]\n    \n    async def add_memory(\n        self,\n        content: str,\n        user_id: Optional[str] = None,\n        scope: str = \"shared\",\n        source: str = \"memory\",\n        path: Optional[str] = None,\n        metadata: Optional[Dict[str, Any]] = None\n    ):\n        \"\"\"\n        Add new memory content\n        \n        Args:\n            content: Memory content\n            user_id: User ID for user-scoped memory\n            scope: Memory scope (\"shared\", \"user\", \"session\")\n            source: Memory source (\"memory\" or \"session\")\n            path: File path (auto-generated if not provided)\n            metadata: Additional metadata\n        \"\"\"\n        if not content.strip():\n            return\n        \n        # Generate path if not provided\n        if not path:\n            content_hash = hashlib.md5(content.encode('utf-8')).hexdigest()[:8]\n            if user_id and scope == \"user\":\n                path = f\"memory/users/{user_id}/memory_{content_hash}.md\"\n            else:\n                path = f\"memory/shared/memory_{content_hash}.md\"\n        \n        # Chunk content\n        chunks = self.chunker.chunk_text(content)\n        \n        # Generate embeddings (if provider available)\n        texts = [chunk.text for chunk in chunks]\n        if self.embedding_provider:\n            embeddings = self.embedding_provider.embed_batch(texts)\n        else:\n            # No embeddings, just use None\n            embeddings = [None] * len(texts)\n        \n        # Create memory chunks\n        memory_chunks = []\n        for chunk, embedding in zip(chunks, embeddings):\n            chunk_id = self._generate_chunk_id(path, chunk.start_line, chunk.end_line)\n            chunk_hash = MemoryStorage.compute_hash(chunk.text)\n            \n            memory_chunks.append(MemoryChunk(\n                id=chunk_id,\n                user_id=user_id,\n                scope=scope,\n                source=source,\n                path=path,\n                start_line=chunk.start_line,\n                end_line=chunk.end_line,\n                text=chunk.text,\n                embedding=embedding,\n                hash=chunk_hash,\n                metadata=metadata\n            ))\n        \n        # Save to storage\n        self.storage.save_chunks_batch(memory_chunks)\n        \n        # Update file metadata\n        file_hash = MemoryStorage.compute_hash(content)\n        self.storage.update_file_metadata(\n            path=path,\n            source=source,\n            file_hash=file_hash,\n            mtime=int(datetime.now().timestamp()),  # Use current time (no backing file exists yet)\n            size=len(content)\n        )\n    \n    async def sync(self, force: bool = False):\n        \"\"\"\n        Synchronize memory from files\n        \n        Args:\n            force: Force full reindex\n        \"\"\"\n        memory_dir = self.config.get_memory_dir()\n        workspace_dir = self.config.get_workspace()\n        \n        # Scan MEMORY.md (workspace root)\n        memory_file = Path(workspace_dir) / \"MEMORY.md\"\n        if memory_file.exists():\n            await self._sync_file(memory_file, \"memory\", \"shared\", None)\n        \n        # Scan memory 
directory (including daily summaries)\n        if memory_dir.exists():\n            for file_path in memory_dir.rglob(\"*.md\"):\n                # Determine scope and user_id from path\n                rel_path = file_path.relative_to(workspace_dir)\n                parts = rel_path.parts\n                \n                # Check if it's in daily summary directory\n                if \"daily\" in parts:\n                    # Daily summary files\n                    if \"users\" in parts or len(parts) > 3:\n                        # User-scoped daily summary: memory/daily/{user_id}/2024-01-29.md\n                        user_idx = parts.index(\"daily\") + 1\n                        user_id = parts[user_idx] if user_idx < len(parts) else None\n                        scope = \"user\"\n                    else:\n                        # Shared daily summary: memory/daily/2024-01-29.md\n                        user_id = None\n                        scope = \"shared\"\n                elif \"users\" in parts:\n                    # User-scoped memory\n                    user_idx = parts.index(\"users\") + 1\n                    user_id = parts[user_idx] if user_idx < len(parts) else None\n                    scope = \"user\"\n                else:\n                    # Shared memory\n                    user_id = None\n                    scope = \"shared\"\n                \n                await self._sync_file(file_path, \"memory\", scope, user_id)\n        \n        self._dirty = False\n    \n    async def _sync_file(\n        self,\n        file_path: Path,\n        source: str,\n        scope: str,\n        user_id: Optional[str]\n    ):\n        \"\"\"Sync a single file\"\"\"\n        # Compute file hash\n        content = file_path.read_text(encoding='utf-8')\n        file_hash = MemoryStorage.compute_hash(content)\n        \n        # Get relative path\n        workspace_dir = self.config.get_workspace()\n        rel_path = str(file_path.relative_to(workspace_dir))\n        \n        # Check if file changed\n        stored_hash = self.storage.get_file_hash(rel_path)\n        if stored_hash == file_hash:\n            return  # No changes\n        \n        # Delete old chunks\n        self.storage.delete_by_path(rel_path)\n        \n        # Chunk and embed\n        chunks = self.chunker.chunk_text(content)\n        if not chunks:\n            return\n        \n        texts = [chunk.text for chunk in chunks]\n        if self.embedding_provider:\n            embeddings = self.embedding_provider.embed_batch(texts)\n        else:\n            embeddings = [None] * len(texts)\n        \n        # Create memory chunks\n        memory_chunks = []\n        for chunk, embedding in zip(chunks, embeddings):\n            chunk_id = self._generate_chunk_id(rel_path, chunk.start_line, chunk.end_line)\n            chunk_hash = MemoryStorage.compute_hash(chunk.text)\n            \n            memory_chunks.append(MemoryChunk(\n                id=chunk_id,\n                user_id=user_id,\n                scope=scope,\n                source=source,\n                path=rel_path,\n                start_line=chunk.start_line,\n                end_line=chunk.end_line,\n                text=chunk.text,\n                embedding=embedding,\n                hash=chunk_hash,\n                metadata=None\n            ))\n        \n        # Save\n        self.storage.save_chunks_batch(memory_chunks)\n        \n        # Update file metadata\n        stat = file_path.stat()\n        
self.storage.update_file_metadata(\n            path=rel_path,\n            source=source,\n            file_hash=file_hash,\n            mtime=int(stat.st_mtime),\n            size=stat.st_size\n        )\n    \n    def flush_memory(\n        self,\n        messages: list,\n        user_id: Optional[str] = None,\n        reason: str = \"threshold\",\n        max_messages: int = 10,\n    ) -> bool:\n        \"\"\"\n        Flush conversation summary to daily memory file.\n        \n        Args:\n            messages: Conversation message list\n            user_id: Optional user ID\n            reason: \"threshold\" | \"overflow\" | \"daily_summary\"\n            max_messages: Max recent messages to include (0 = all)\n        \n        Returns:\n            True if content was written\n        \"\"\"\n        success = self.flush_manager.flush_from_messages(\n            messages=messages,\n            user_id=user_id,\n            reason=reason,\n            max_messages=max_messages,\n        )\n        if success:\n            self._dirty = True\n        return success\n    \n    def get_status(self) -> Dict[str, Any]:\n        \"\"\"Get memory status\"\"\"\n        stats = self.storage.get_stats()\n        return {\n            'chunks': stats['chunks'],\n            'files': stats['files'],\n            'workspace': str(self.config.get_workspace()),\n            'dirty': self._dirty,\n            'embedding_enabled': self.embedding_provider is not None,\n            'embedding_provider': self.config.embedding_provider if self.embedding_provider else 'disabled',\n            'embedding_model': self.config.embedding_model if self.embedding_provider else 'N/A',\n            'search_mode': 'hybrid (vector + keyword)' if self.embedding_provider else 'keyword only (FTS5)'\n        }\n    \n    def mark_dirty(self):\n        \"\"\"Mark memory as dirty (needs sync)\"\"\"\n        self._dirty = True\n    \n    def close(self):\n        \"\"\"Close memory manager and release resources\"\"\"\n        self.storage.close()\n    \n    # Helper methods\n    \n    def _generate_chunk_id(self, path: str, start_line: int, end_line: int) -> str:\n        \"\"\"Generate unique chunk ID\"\"\"\n        content = f\"{path}:{start_line}:{end_line}\"\n        return hashlib.md5(content.encode('utf-8')).hexdigest()\n    \n    @staticmethod\n    def _compute_temporal_decay(path: str, half_life_days: float = 30.0) -> float:\n        \"\"\"\n        Compute temporal decay multiplier for dated memory files.\n        \n        Inspired by OpenClaw's temporal-decay: exponential decay based on file date.\n        MEMORY.md and non-dated files are \"evergreen\" (no decay, multiplier=1.0).\n        Daily files like memory/2025-03-01.md decay based on age.\n        \n        Formula: multiplier = exp(-ln2/half_life * age_in_days)\n        \"\"\"\n        import re\n        import math\n        \n        match = re.search(r'(\\d{4})-(\\d{2})-(\\d{2})\\.md$', path)\n        if not match:\n            return 1.0  # evergreen: MEMORY.md, non-dated files\n        \n        try:\n            file_date = datetime(\n                int(match.group(1)), int(match.group(2)), int(match.group(3))\n            )\n            age_days = (datetime.now() - file_date).days\n            if age_days <= 0:\n                return 1.0\n            \n            decay_lambda = math.log(2) / half_life_days\n            return math.exp(-decay_lambda * age_days)\n        except (ValueError, OverflowError):\n            return 1.0\n    \n    def 
_merge_results(\n        self,\n        vector_results: List[SearchResult],\n        keyword_results: List[SearchResult],\n        vector_weight: float,\n        keyword_weight: float\n    ) -> List[SearchResult]:\n        \"\"\"Merge vector and keyword search results with temporal decay for dated files\"\"\"\n        merged_map = {}\n        \n        for result in vector_results:\n            key = (result.path, result.start_line, result.end_line)\n            merged_map[key] = {\n                'result': result,\n                'vector_score': result.score,\n                'keyword_score': 0.0\n            }\n        \n        for result in keyword_results:\n            key = (result.path, result.start_line, result.end_line)\n            if key in merged_map:\n                merged_map[key]['keyword_score'] = result.score\n            else:\n                merged_map[key] = {\n                    'result': result,\n                    'vector_score': 0.0,\n                    'keyword_score': result.score\n                }\n        \n        merged_results = []\n        for entry in merged_map.values():\n            combined_score = (\n                vector_weight * entry['vector_score'] +\n                keyword_weight * entry['keyword_score']\n            )\n            \n            # Apply temporal decay for dated memory files\n            result = entry['result']\n            decay = self._compute_temporal_decay(result.path)\n            combined_score *= decay\n            \n            merged_results.append(SearchResult(\n                path=result.path,\n                start_line=result.start_line,\n                end_line=result.end_line,\n                score=combined_score,\n                snippet=result.snippet,\n                source=result.source,\n                user_id=result.user_id\n            ))\n        \n        merged_results.sort(key=lambda r: r.score, reverse=True)\n        return merged_results\n"
  },
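  {
    "path": "agent/memory/examples/hybrid_score_demo.py",
    "content": "\"\"\"\nScoring sketch for agent/memory/manager.py (editor-added illustration, not\nproject code; this file path is hypothetical).\n\nReproduces the _merge_results arithmetic by hand with the default weights:\ncombined = 0.7 * vector_score + 0.3 * keyword_score, then scaled by the\ntemporal decay of dated daily files (half-life 30 days).\n\"\"\"\n\nimport math\n\nvector_score, keyword_score = 0.82, 0.40\ncombined = 0.7 * vector_score + 0.3 * keyword_score  # 0.694\n\n# A memory/2025-03-01.md style file that is exactly 30 days old has decayed\n# by one half-life: exp(-ln(2)/30 * 30) == 0.5.\nage_days, half_life_days = 30, 30.0\ndecay = math.exp(-math.log(2) / half_life_days * age_days)\n\nprint(round(combined * decay, 3))  # 0.347\n# MEMORY.md and other non-dated files are evergreen (decay == 1.0), so the\n# same match there keeps its full combined score of 0.694.\n"
  },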
  {
    "path": "agent/memory/service.py",
    "content": "\"\"\"\nMemory service for handling memory query operations via cloud protocol.\n\nProvides a unified interface for listing and reading memory files,\ncallable from the cloud client (LinkAI) or a future web console.\n\nMemory file layout (under workspace_root):\n    MEMORY.md               -> type: global\n    memory/2026-02-20.md    -> type: daily\n\"\"\"\n\nimport os\nfrom datetime import datetime\nfrom typing import Dict, List, Optional\nfrom pathlib import Path\nfrom common.log import logger\n\n\nclass MemoryService:\n    \"\"\"\n    High-level service for memory file queries.\n    Operates directly on the filesystem — no MemoryManager dependency.\n    \"\"\"\n\n    def __init__(self, workspace_root: str):\n        \"\"\"\n        :param workspace_root: Workspace root directory (e.g. ~/cow)\n        \"\"\"\n        self.workspace_root = workspace_root\n        self.memory_dir = os.path.join(workspace_root, \"memory\")\n\n    # ------------------------------------------------------------------\n    # list — paginated file metadata\n    # ------------------------------------------------------------------\n    def list_files(self, page: int = 1, page_size: int = 20) -> dict:\n        \"\"\"\n        List all memory files with metadata (without content).\n\n        Returns::\n\n            {\n                \"page\": 1,\n                \"page_size\": 20,\n                \"total\": 15,\n                \"list\": [\n                    {\"filename\": \"MEMORY.md\", \"type\": \"global\", \"size\": 2048, \"updated_at\": \"2026-02-20 10:00:00\"},\n                    {\"filename\": \"2026-02-20.md\", \"type\": \"daily\", \"size\": 512, \"updated_at\": \"2026-02-20 09:30:00\"},\n                    ...\n                ]\n            }\n        \"\"\"\n        files: List[dict] = []\n\n        # 1. Global memory — MEMORY.md in workspace root\n        global_path = os.path.join(self.workspace_root, \"MEMORY.md\")\n        if os.path.isfile(global_path):\n            files.append(self._file_info(global_path, \"MEMORY.md\", \"global\"))\n\n        # 2. Daily memory files — memory/*.md (sorted newest first)\n        if os.path.isdir(self.memory_dir):\n            daily_files = []\n            for name in os.listdir(self.memory_dir):\n                full = os.path.join(self.memory_dir, name)\n                if os.path.isfile(full) and name.endswith(\".md\"):\n                    daily_files.append((name, full))\n            # Sort by filename descending (newest date first)\n            daily_files.sort(key=lambda x: x[0], reverse=True)\n            for name, full in daily_files:\n                files.append(self._file_info(full, name, \"daily\"))\n\n        total = len(files)\n\n        # Paginate\n        start = (page - 1) * page_size\n        end = start + page_size\n        page_items = files[start:end]\n\n        return {\n            \"page\": page,\n            \"page_size\": page_size,\n            \"total\": total,\n            \"list\": page_items,\n        }\n\n    # ------------------------------------------------------------------\n    # content — read a single file\n    # ------------------------------------------------------------------\n    def get_content(self, filename: str) -> dict:\n        \"\"\"\n        Read the full content of a memory file.\n\n        :param filename: File name, e.g. 
``MEMORY.md`` or ``2026-02-20.md``\n        :return: dict with ``filename`` and ``content``\n        :raises FileNotFoundError: if the file does not exist\n        \"\"\"\n        path = self._resolve_path(filename)\n        if not os.path.isfile(path):\n            raise FileNotFoundError(f\"Memory file not found: {filename}\")\n\n        with open(path, \"r\", encoding=\"utf-8\") as f:\n            content = f.read()\n\n        return {\n            \"filename\": filename,\n            \"content\": content,\n        }\n\n    # ------------------------------------------------------------------\n    # dispatch — single entry point for protocol messages\n    # ------------------------------------------------------------------\n    def dispatch(self, action: str, payload: Optional[dict] = None) -> dict:\n        \"\"\"\n        Dispatch a memory management action.\n\n        :param action: ``list`` or ``content``\n        :param payload: action-specific payload\n        :return: protocol-compatible response dict\n        \"\"\"\n        payload = payload or {}\n        try:\n            if action == \"list\":\n                page = payload.get(\"page\", 1)\n                page_size = payload.get(\"page_size\", 20)\n                result_payload = self.list_files(page=page, page_size=page_size)\n                return {\"action\": action, \"code\": 200, \"message\": \"success\", \"payload\": result_payload}\n\n            elif action == \"content\":\n                filename = payload.get(\"filename\")\n                if not filename:\n                    return {\"action\": action, \"code\": 400, \"message\": \"filename is required\", \"payload\": None}\n                result_payload = self.get_content(filename)\n                return {\"action\": action, \"code\": 200, \"message\": \"success\", \"payload\": result_payload}\n\n            else:\n                return {\"action\": action, \"code\": 400, \"message\": f\"unknown action: {action}\", \"payload\": None}\n\n        except FileNotFoundError as e:\n            return {\"action\": action, \"code\": 404, \"message\": str(e), \"payload\": None}\n        except Exception as e:\n            logger.error(f\"[MemoryService] dispatch error: action={action}, error={e}\")\n            return {\"action\": action, \"code\": 500, \"message\": str(e), \"payload\": None}\n\n    # ------------------------------------------------------------------\n    # internal helpers\n    # ------------------------------------------------------------------\n    def _resolve_path(self, filename: str) -> str:\n        \"\"\"\n        Resolve a filename to its absolute path.\n\n        - ``MEMORY.md`` → ``{workspace_root}/MEMORY.md``\n        - ``2026-02-20.md`` → ``{workspace_root}/memory/2026-02-20.md``\n        \"\"\"\n        if filename == \"MEMORY.md\":\n            return os.path.join(self.workspace_root, filename)\n        return os.path.join(self.memory_dir, filename)\n\n    @staticmethod\n    def _file_info(path: str, filename: str, file_type: str) -> dict:\n        \"\"\"Build a file metadata dict.\"\"\"\n        stat = os.stat(path)\n        updated_at = datetime.fromtimestamp(stat.st_mtime).strftime(\"%Y-%m-%d %H:%M:%S\")\n        return {\n            \"filename\": filename,\n            \"type\": file_type,\n            \"size\": stat.st_size,\n            \"updated_at\": updated_at,\n        }\n"
  },
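  {
    "path": "agent/memory/examples/service_usage.py",
    "content": "\"\"\"\nUsage sketch for agent/memory/service.py (editor-added illustration, not\nproject code; this file path is hypothetical).\n\nExercises the dispatch() protocol entry point. \"~/cow\" is just the documented\ndefault workspace; any workspace_root works.\n\"\"\"\n\nfrom agent.memory.service import MemoryService\nfrom common.utils import expand_path\n\nsvc = MemoryService(expand_path(\"~/cow\"))\n\nresp = svc.dispatch(\"list\", {\"page\": 1, \"page_size\": 20})\nassert resp[\"code\"] == 200\nfor item in resp[\"payload\"][\"list\"]:\n    print(item[\"type\"], item[\"filename\"], item[\"size\"], item[\"updated_at\"])\n\n# Unknown files map FileNotFoundError to a 404 protocol response.\nresp = svc.dispatch(\"content\", {\"filename\": \"no-such-file.md\"})\nassert resp[\"code\"] == 404\n"
  },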
  {
    "path": "agent/memory/storage.py",
    "content": "\"\"\"\nStorage layer for memory using SQLite + FTS5\n\nProvides vector and keyword search capabilities\n\"\"\"\n\nfrom __future__ import annotations\nimport sqlite3\nimport json\nimport hashlib\nfrom typing import List, Dict, Optional, Any\nfrom pathlib import Path\nfrom dataclasses import dataclass\n\n\n@dataclass\nclass MemoryChunk:\n    \"\"\"Represents a memory chunk with text and embedding\"\"\"\n    id: str\n    user_id: Optional[str]\n    scope: str  # \"shared\" | \"user\" | \"session\"\n    source: str  # \"memory\" | \"session\"\n    path: str\n    start_line: int\n    end_line: int\n    text: str\n    embedding: Optional[List[float]]\n    hash: str\n    metadata: Optional[Dict[str, Any]] = None\n\n\n@dataclass\nclass SearchResult:\n    \"\"\"Search result with score and snippet\"\"\"\n    path: str\n    start_line: int\n    end_line: int\n    score: float\n    snippet: str\n    source: str\n    user_id: Optional[str] = None\n\n\nclass MemoryStorage:\n    \"\"\"SQLite-based storage with FTS5 for keyword search\"\"\"\n    \n    def __init__(self, db_path: Path):\n        self.db_path = db_path\n        self.conn: Optional[sqlite3.Connection] = None\n        self.fts5_available = False  # Track FTS5 availability\n        self._init_db()\n    \n    def _check_fts5_support(self) -> bool:\n        \"\"\"Check if SQLite has FTS5 support\"\"\"\n        try:\n            self.conn.execute(\"CREATE VIRTUAL TABLE IF NOT EXISTS fts5_test USING fts5(test)\")\n            self.conn.execute(\"DROP TABLE IF EXISTS fts5_test\")\n            return True\n        except sqlite3.OperationalError as e:\n            if \"no such module: fts5\" in str(e):\n                return False\n            raise\n    \n    def _init_db(self):\n        \"\"\"Initialize database with schema\"\"\"\n        try:\n            self.conn = sqlite3.connect(str(self.db_path), check_same_thread=False)\n            self.conn.row_factory = sqlite3.Row\n            \n            # Check FTS5 support\n            self.fts5_available = self._check_fts5_support()\n            if not self.fts5_available:\n                from common.log import logger\n                logger.debug(\"[MemoryStorage] FTS5 not available, using LIKE-based keyword search\")\n            \n            # Check database integrity\n            try:\n                result = self.conn.execute(\"PRAGMA integrity_check\").fetchone()\n                if result[0] != 'ok':\n                    print(f\"⚠️  Database integrity check failed: {result[0]}\")\n                    print(f\"   Recreating database...\")\n                    self.conn.close()\n                    self.conn = None\n                    # Remove corrupted database\n                    self.db_path.unlink(missing_ok=True)\n                    # Remove WAL files\n                    Path(str(self.db_path) + '-wal').unlink(missing_ok=True)\n                    Path(str(self.db_path) + '-shm').unlink(missing_ok=True)\n                    # Reconnect to create new database\n                    self.conn = sqlite3.connect(str(self.db_path), check_same_thread=False)\n                    self.conn.row_factory = sqlite3.Row\n            except sqlite3.DatabaseError:\n                # Database is corrupted, recreate it\n                print(f\"⚠️  Database is corrupted, recreating...\")\n                if self.conn:\n                    self.conn.close()\n                    self.conn = None\n                self.db_path.unlink(missing_ok=True)\n                
Path(str(self.db_path) + '-wal').unlink(missing_ok=True)\n                Path(str(self.db_path) + '-shm').unlink(missing_ok=True)\n                self.conn = sqlite3.connect(str(self.db_path), check_same_thread=False)\n                self.conn.row_factory = sqlite3.Row\n            \n            # Enable WAL mode for better concurrency\n            self.conn.execute(\"PRAGMA journal_mode=WAL\")\n            # Set busy timeout to avoid \"database is locked\" errors\n            self.conn.execute(\"PRAGMA busy_timeout=5000\")\n            # Make INSERT OR REPLACE fire the DELETE trigger below, so the\n            # external-content FTS index stays in sync with the chunks table\n            self.conn.execute(\"PRAGMA recursive_triggers=ON\")\n        except Exception as e:\n            print(f\"⚠️  Unexpected error during database initialization: {e}\")\n            raise\n        \n        # Create chunks table with embeddings\n        self.conn.execute(\"\"\"\n            CREATE TABLE IF NOT EXISTS chunks (\n                id TEXT PRIMARY KEY,\n                user_id TEXT,\n                scope TEXT NOT NULL DEFAULT 'shared',\n                source TEXT NOT NULL DEFAULT 'memory',\n                path TEXT NOT NULL,\n                start_line INTEGER NOT NULL,\n                end_line INTEGER NOT NULL,\n                text TEXT NOT NULL,\n                embedding TEXT,\n                hash TEXT NOT NULL,\n                metadata TEXT,\n                created_at INTEGER DEFAULT (strftime('%s', 'now')),\n                updated_at INTEGER DEFAULT (strftime('%s', 'now'))\n            )\n        \"\"\")\n        \n        # Create indexes\n        self.conn.execute(\"\"\"\n            CREATE INDEX IF NOT EXISTS idx_chunks_user \n            ON chunks(user_id)\n        \"\"\")\n        \n        self.conn.execute(\"\"\"\n            CREATE INDEX IF NOT EXISTS idx_chunks_scope \n            ON chunks(scope)\n        \"\"\")\n        \n        self.conn.execute(\"\"\"\n            CREATE INDEX IF NOT EXISTS idx_chunks_hash \n            ON chunks(path, hash)\n        \"\"\")\n        \n        # Create FTS5 virtual table for keyword search (only if supported)\n        if self.fts5_available:\n            # Use default unicode61 tokenizer (stable and compatible)\n            # For CJK support, we'll use LIKE queries as fallback\n            self.conn.execute(\"\"\"\n                CREATE VIRTUAL TABLE IF NOT EXISTS chunks_fts USING fts5(\n                    text,\n                    id UNINDEXED,\n                    user_id UNINDEXED,\n                    path UNINDEXED,\n                    source UNINDEXED,\n                    scope UNINDEXED,\n                    content='chunks',\n                    content_rowid='rowid'\n                )\n            \"\"\")\n            \n            # Create triggers to keep FTS in sync.\n            # Note: external-content FTS5 tables must be maintained through the\n            # special 'delete' command; plain DELETE/UPDATE statements against\n            # chunks_fts would leave the index out of sync with the content table.\n            self.conn.execute(\"\"\"\n                CREATE TRIGGER IF NOT EXISTS chunks_ai AFTER INSERT ON chunks BEGIN\n                    INSERT INTO chunks_fts(rowid, text, id, user_id, path, source, scope)\n                    VALUES (new.rowid, new.text, new.id, new.user_id, new.path, new.source, new.scope);\n                END\n            \"\"\")\n            \n            self.conn.execute(\"\"\"\n                CREATE TRIGGER IF NOT EXISTS chunks_ad AFTER DELETE ON chunks BEGIN\n                    INSERT INTO chunks_fts(chunks_fts, rowid, text, id, user_id, path, source, scope)\n                    VALUES ('delete', old.rowid, old.text, old.id, old.user_id, old.path, old.source, old.scope);\n                END\n            \"\"\")\n            \n            self.conn.execute(\"\"\"\n                CREATE TRIGGER IF NOT EXISTS chunks_au AFTER UPDATE ON chunks BEGIN\n                    INSERT INTO chunks_fts(chunks_fts, rowid, text, id, user_id, path, source, scope)\n                    VALUES ('delete', old.rowid, old.text, old.id, old.user_id, old.path, old.source, old.scope);\n                    INSERT INTO chunks_fts(rowid, text, id, user_id, path, source, scope)\n                    VALUES (new.rowid, new.text, new.id, new.user_id, new.path, new.source, new.scope);\n                END\n            \"\"\")\n        \n        # Create files metadata table\n        self.conn.execute(\"\"\"\n            CREATE TABLE IF NOT EXISTS files (\n                path TEXT PRIMARY KEY,\n                source TEXT NOT NULL DEFAULT 'memory',\n                hash TEXT NOT NULL,\n                mtime INTEGER NOT NULL,\n                size INTEGER NOT NULL,\n                updated_at INTEGER DEFAULT (strftime('%s', 'now'))\n            )\n        \"\"\")\n        \n        self.conn.commit()\n    \n    def save_chunk(self, chunk: MemoryChunk):\n        \"\"\"Save a memory chunk\"\"\"\n        self.conn.execute(\"\"\"\n            INSERT OR REPLACE INTO chunks \n            (id, user_id, scope, source, path, start_line, end_line, text, embedding, hash, metadata, updated_at)\n            VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, strftime('%s', 'now'))\n        \"\"\", (\n            chunk.id,\n            chunk.user_id,\n            chunk.scope,\n            chunk.source,\n            chunk.path,\n            chunk.start_line,\n            chunk.end_line,\n            chunk.text,\n            json.dumps(chunk.embedding) if chunk.embedding else None,\n            chunk.hash,\n            json.dumps(chunk.metadata) if chunk.metadata else None\n        ))\n        self.conn.commit()\n    \n    def save_chunks_batch(self, chunks: List[MemoryChunk]):\n        \"\"\"Save multiple chunks in a batch\"\"\"\n        self.conn.executemany(\"\"\"\n            INSERT OR REPLACE INTO chunks \n            (id, user_id, scope, source, path, start_line, end_line, text, embedding, hash, metadata, updated_at)\n            VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, strftime('%s', 'now'))\n        \"\"\", [\n            (\n                c.id, c.user_id, c.scope, c.source, c.path,\n                c.start_line, c.end_line, c.text,\n                json.dumps(c.embedding) if c.embedding else None,\n                c.hash,\n                json.dumps(c.metadata) if c.metadata else None\n            )\n            for c in chunks\n        ])\n        self.conn.commit()\n    \n    def get_chunk(self, chunk_id: str) -> Optional[MemoryChunk]:\n        \"\"\"Get a chunk by ID\"\"\"\n        row = self.conn.execute(\"\"\"\n            SELECT * FROM chunks WHERE id = ?\n        \"\"\", (chunk_id,)).fetchone()\n        \n        if not row:\n            return None\n        \n        return self._row_to_chunk(row)\n    \n    def search_vector(\n        self,\n        query_embedding: List[float],\n        user_id: Optional[str] = None,\n        scopes: Optional[List[str]] = None,\n        limit: int = 10\n    ) -> List[SearchResult]:\n        \"\"\"\n        Vector similarity search using in-memory cosine similarity\n        (sqlite-vec can be added later for better performance)\n        \"\"\"\n        if scopes is None:\n            scopes = [\"shared\"]\n            if user_id:\n                scopes.append(\"user\")\n        \n        # Build query\n        scope_placeholders = ','.join('?' 
* len(scopes))\n        params = list(scopes)  # copy so the caller's scopes list is not mutated below\n        \n        if user_id:\n            query = f\"\"\"\n                SELECT * FROM chunks \n                WHERE scope IN ({scope_placeholders})\n                AND (scope = 'shared' OR user_id = ?)\n                AND embedding IS NOT NULL\n            \"\"\"\n            params.append(user_id)\n        else:\n            query = f\"\"\"\n                SELECT * FROM chunks \n                WHERE scope IN ({scope_placeholders})\n                AND embedding IS NOT NULL\n            \"\"\"\n        \n        rows = self.conn.execute(query, params).fetchall()\n        \n        # Calculate cosine similarity\n        results = []\n        for row in rows:\n            embedding = json.loads(row['embedding'])\n            similarity = self._cosine_similarity(query_embedding, embedding)\n            \n            if similarity > 0:\n                results.append((similarity, row))\n        \n        # Sort by similarity and limit\n        results.sort(key=lambda x: x[0], reverse=True)\n        results = results[:limit]\n        \n        return [\n            SearchResult(\n                path=row['path'],\n                start_line=row['start_line'],\n                end_line=row['end_line'],\n                score=score,\n                snippet=self._truncate_text(row['text'], 500),\n                source=row['source'],\n                user_id=row['user_id']\n            )\n            for score, row in results\n        ]\n    \n    def search_keyword(\n        self,\n        query: str,\n        user_id: Optional[str] = None,\n        scopes: Optional[List[str]] = None,\n        limit: int = 10\n    ) -> List[SearchResult]:\n        \"\"\"\n        Keyword search using FTS5 with a LIKE fallback\n        \n        Strategy:\n        1. If FTS5 is available, try FTS5 first (works well for English and other word-based languages)\n        2. Fall back to LIKE search when FTS5 is unavailable, or when FTS5 returned nothing and the query contains CJK\n        \"\"\"\n        if scopes is None:\n            scopes = [\"shared\"]\n            if user_id:\n                scopes.append(\"user\")\n        \n        # Try FTS5 search first (if available)\n        if self.fts5_available:\n            fts_results = self._search_fts5(query, user_id, scopes, limit)\n            if fts_results:\n                return fts_results\n        \n        # Fallback to LIKE search (always for CJK, or if FTS5 not available)\n        if not self.fts5_available or MemoryStorage._contains_cjk(query):\n            return self._search_like(query, user_id, scopes, limit)\n        \n        return []\n    \n    def _search_fts5(\n        self,\n        query: str,\n        user_id: Optional[str],\n        scopes: List[str],\n        limit: int\n    ) -> List[SearchResult]:\n        \"\"\"FTS5 full-text search\"\"\"\n        fts_query = self._build_fts_query(query)\n        if not fts_query:\n            return []\n        \n        scope_placeholders = ','.join('?' * len(scopes))\n        params = [fts_query] + scopes\n        \n        if user_id:\n            sql_query = f\"\"\"\n                SELECT chunks.*, bm25(chunks_fts) as rank\n                FROM chunks_fts\n                JOIN chunks ON chunks.id = chunks_fts.id\n                WHERE chunks_fts MATCH ? 
\n                AND chunks.scope IN ({scope_placeholders})\n                AND (chunks.scope = 'shared' OR chunks.user_id = ?)\n                ORDER BY rank\n                LIMIT ?\n            \"\"\"\n            params.extend([user_id, limit])\n        else:\n            sql_query = f\"\"\"\n                SELECT chunks.*, bm25(chunks_fts) as rank\n                FROM chunks_fts\n                JOIN chunks ON chunks.id = chunks_fts.id\n                WHERE chunks_fts MATCH ? \n                AND chunks.scope IN ({scope_placeholders})\n                ORDER BY rank\n                LIMIT ?\n            \"\"\"\n            params.append(limit)\n        \n        try:\n            rows = self.conn.execute(sql_query, params).fetchall()\n            return [\n                SearchResult(\n                    path=row['path'],\n                    start_line=row['start_line'],\n                    end_line=row['end_line'],\n                    score=self._bm25_rank_to_score(row['rank']),\n                    snippet=self._truncate_text(row['text'], 500),\n                    source=row['source'],\n                    user_id=row['user_id']\n                )\n                for row in rows\n            ]\n        except Exception:\n            return []\n    \n    def _search_like(\n        self,\n        query: str,\n        user_id: Optional[str],\n        scopes: List[str],\n        limit: int\n    ) -> List[SearchResult]:\n        \"\"\"LIKE-based search for CJK characters\"\"\"\n        import re\n        # Extract CJK words (2+ characters)\n        cjk_words = re.findall(r'[\\u4e00-\\u9fff]{2,}', query)\n        if not cjk_words:\n            return []\n        \n        scope_placeholders = ','.join('?' * len(scopes))\n        \n        # Build LIKE conditions for each word\n        like_conditions = []\n        params = []\n        for word in cjk_words:\n            like_conditions.append(\"text LIKE ?\")\n            params.append(f'%{word}%')\n        \n        where_clause = ' OR '.join(like_conditions)\n        params.extend(scopes)\n        \n        if user_id:\n            sql_query = f\"\"\"\n                SELECT * FROM chunks\n                WHERE ({where_clause})\n                AND scope IN ({scope_placeholders})\n                AND (scope = 'shared' OR user_id = ?)\n                LIMIT ?\n            \"\"\"\n            params.extend([user_id, limit])\n        else:\n            sql_query = f\"\"\"\n                SELECT * FROM chunks\n                WHERE ({where_clause})\n                AND scope IN ({scope_placeholders})\n                LIMIT ?\n            \"\"\"\n            params.append(limit)\n        \n        try:\n            rows = self.conn.execute(sql_query, params).fetchall()\n            return [\n                SearchResult(\n                    path=row['path'],\n                    start_line=row['start_line'],\n                    end_line=row['end_line'],\n                    score=0.5,  # Fixed score for LIKE search\n                    snippet=self._truncate_text(row['text'], 500),\n                    source=row['source'],\n                    user_id=row['user_id']\n                )\n                for row in rows\n            ]\n        except Exception:\n            return []\n    \n    def delete_by_path(self, path: str):\n        \"\"\"Delete all chunks from a file\"\"\"\n        self.conn.execute(\"\"\"\n            DELETE FROM chunks WHERE path = ?\n        \"\"\", (path,))\n        self.conn.commit()\n    \n    def 
get_file_hash(self, path: str) -> Optional[str]:\n        \"\"\"Get stored file hash\"\"\"\n        row = self.conn.execute(\"\"\"\n            SELECT hash FROM files WHERE path = ?\n        \"\"\", (path,)).fetchone()\n        return row['hash'] if row else None\n    \n    def update_file_metadata(self, path: str, source: str, file_hash: str, mtime: int, size: int):\n        \"\"\"Update file metadata\"\"\"\n        self.conn.execute(\"\"\"\n            INSERT OR REPLACE INTO files (path, source, hash, mtime, size, updated_at)\n            VALUES (?, ?, ?, ?, ?, strftime('%s', 'now'))\n        \"\"\", (path, source, file_hash, mtime, size))\n        self.conn.commit()\n    \n    def get_stats(self) -> Dict[str, int]:\n        \"\"\"Get storage statistics\"\"\"\n        chunks_count = self.conn.execute(\"\"\"\n            SELECT COUNT(*) as cnt FROM chunks\n        \"\"\").fetchone()['cnt']\n        \n        files_count = self.conn.execute(\"\"\"\n            SELECT COUNT(*) as cnt FROM files\n        \"\"\").fetchone()['cnt']\n        \n        return {\n            'chunks': chunks_count,\n            'files': files_count\n        }\n    \n    def close(self):\n        \"\"\"Close database connection\"\"\"\n        if self.conn:\n            try:\n                self.conn.commit()  # Ensure all changes are committed\n                self.conn.close()\n                self.conn = None  # Mark as closed\n            except Exception as e:\n                print(f\"⚠️  Error closing database connection: {e}\")\n    \n    def __del__(self):\n        \"\"\"Destructor to ensure connection is closed\"\"\"\n        try:\n            self.close()\n        except Exception:\n            pass  # Ignore errors during cleanup\n    \n    # Helper methods\n    \n    def _row_to_chunk(self, row) -> MemoryChunk:\n        \"\"\"Convert database row to MemoryChunk\"\"\"\n        return MemoryChunk(\n            id=row['id'],\n            user_id=row['user_id'],\n            scope=row['scope'],\n            source=row['source'],\n            path=row['path'],\n            start_line=row['start_line'],\n            end_line=row['end_line'],\n            text=row['text'],\n            embedding=json.loads(row['embedding']) if row['embedding'] else None,\n            hash=row['hash'],\n            metadata=json.loads(row['metadata']) if row['metadata'] else None\n        )\n    \n    @staticmethod\n    def _cosine_similarity(vec1: List[float], vec2: List[float]) -> float:\n        \"\"\"Calculate cosine similarity between two vectors\"\"\"\n        if len(vec1) != len(vec2):\n            return 0.0\n        \n        dot_product = sum(a * b for a, b in zip(vec1, vec2))\n        norm1 = sum(a * a for a in vec1) ** 0.5\n        norm2 = sum(b * b for b in vec2) ** 0.5\n        \n        if norm1 == 0 or norm2 == 0:\n            return 0.0\n        \n        return dot_product / (norm1 * norm2)\n    \n    @staticmethod\n    def _contains_cjk(text: str) -> bool:\n        \"\"\"Check if text contains Han characters (CJK unified ideographs);\n        kana and hangul fall outside this range\"\"\"\n        import re\n        return bool(re.search(r'[\\u4e00-\\u9fff]', text))\n    \n    @staticmethod\n    def _build_fts_query(raw_query: str) -> Optional[str]:\n        \"\"\"\n        Build FTS5 query from raw text\n        \n        Works best for English and word-based languages.\n        For CJK characters, LIKE search will be used as fallback.\n        \"\"\"\n        import re\n        # Extract words (primarily English words and numbers)\n        tokens = 
re.findall(r'[A-Za-z0-9_]+', raw_query)\n        if not tokens:\n            return None\n        \n        # Quote tokens for exact matching\n        quoted = [f'\"{t}\"' for t in tokens]\n        # Use OR for more flexible matching\n        return ' OR '.join(quoted)\n    \n    @staticmethod\n    def _bm25_rank_to_score(rank: Optional[float]) -> float:\n        \"\"\"Convert a BM25 rank to a 0-1 score.\n        \n        SQLite's bm25() returns negative values, with better matches being\n        more negative, so negate the rank before normalizing.\n        \"\"\"\n        if rank is None:\n            return 0.0\n        goodness = max(0.0, -rank)\n        return goodness / (1.0 + goodness)\n    \n    @staticmethod\n    def _truncate_text(text: str, max_chars: int) -> str:\n        \"\"\"Truncate text to max characters\"\"\"\n        if len(text) <= max_chars:\n            return text\n        return text[:max_chars] + \"...\"\n    \n    @staticmethod\n    def compute_hash(content: str) -> str:\n        \"\"\"Compute SHA256 hash of content\"\"\"\n        return hashlib.sha256(content.encode('utf-8')).hexdigest()\n"
  },
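  {
    "path": "examples/memory_storage_demo.py",
    "content": "\"\"\"\nIllustrative usage sketch for agent/memory/storage.py.\n\nThis file is a hypothetical example added for documentation purposes; it is\nnot part of the original codebase. It assumes the repository root is on\nsys.path so that `agent` and `common` are importable, and it uses a toy\n2-dimensional embedding where a real deployment would use an embedding model.\n\"\"\"\n\nimport tempfile\nfrom pathlib import Path\n\nfrom agent.memory.storage import MemoryStorage, MemoryChunk\n\n\ndef main():\n    db_path = Path(tempfile.mkdtemp()) / \"memory.db\"\n    store = MemoryStorage(db_path)\n\n    text = \"User prefers concise answers and the Asia/Shanghai timezone.\"\n    store.save_chunk(MemoryChunk(\n        id=\"MEMORY.md:1-1\",\n        user_id=None,\n        scope=\"shared\",\n        source=\"memory\",\n        path=\"MEMORY.md\",\n        start_line=1,\n        end_line=1,\n        text=text,\n        embedding=None,  # keyword-only chunk; vector search skips rows without embeddings\n        hash=MemoryStorage.compute_hash(text),\n    ))\n\n    # FTS5 path for word-based queries; the LIKE fallback handles CJK or missing FTS5\n    for r in store.search_keyword(\"timezone\", limit=5):\n        print(f\"keyword hit: {r.path}:{r.start_line}-{r.end_line} score={r.score:.3f}\")\n\n    vec_text = \"Project kickoff is scheduled for Friday.\"\n    store.save_chunk(MemoryChunk(\n        id=\"memory/2024-01-01.md:1-1\",\n        user_id=None,\n        scope=\"shared\",\n        source=\"memory\",\n        path=\"memory/2024-01-01.md\",\n        start_line=1,\n        end_line=1,\n        text=vec_text,\n        embedding=[0.6, 0.8],  # toy vector, stands in for a real embedding\n        hash=MemoryStorage.compute_hash(vec_text),\n    ))\n    for r in store.search_vector([0.6, 0.8], limit=3):\n        print(f\"vector hit: {r.path} score={r.score:.3f}\")\n\n    print(store.get_stats())\n    store.close()\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },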
  {
    "path": "agent/memory/summarizer.py",
    "content": "\"\"\"\nMemory flush manager\n\nHandles memory persistence when conversation context is trimmed or overflows:\n- Uses LLM to summarize discarded messages into concise key-information entries\n- Writes to daily memory files (lazy creation)\n- Deduplicates trim flushes to avoid repeated writes\n- Runs summarization asynchronously to avoid blocking normal replies\n- Provides daily summary interface for scheduler\n\"\"\"\n\nimport threading\nfrom typing import Optional, Callable, Any, List, Dict\nfrom pathlib import Path\nfrom datetime import datetime\nfrom common.log import logger\n\n\nSUMMARIZE_SYSTEM_PROMPT = \"\"\"你是一个记忆提取助手。你的任务是从对话记录中提取值得记住的信息，生成简洁的记忆摘要。\n\n输出要求：\n1. 以事件/关键信息为维度记录，每条一行，用 \"- \" 开头\n2. 记录有价值的关键信息，例如用户提出的要求及助手的解决方案，对话中涉及的事实信息，用户的偏好、决策或重要结论\n3. 每条摘要需要简明扼要，只保留关键信息\n4. 直接输出摘要内容，不要加任何前缀说明\n5. 当对话没有任何记录价值例如只是简单问候，可回复\"无\\\"\"\"\"\n\nSUMMARIZE_USER_PROMPT = \"\"\"请从以下对话记录中提取关键信息，生成记忆摘要：\n\n{conversation}\"\"\"\n\n\nclass MemoryFlushManager:\n    \"\"\"\n    Manages memory flush operations.\n    \n    Flush is triggered by agent_stream in two scenarios:\n    1. Context trim: _trim_messages discards old turns → flush discarded content\n    2. Context overflow: API rejects request → emergency flush before clearing\n    \n    Additionally, create_daily_summary() can be called by scheduler for end-of-day summaries.\n    \"\"\"\n    \n    def __init__(\n        self,\n        workspace_dir: Path,\n        llm_model: Optional[Any] = None,\n    ):\n        self.workspace_dir = workspace_dir\n        self.llm_model = llm_model\n        \n        self.memory_dir = workspace_dir / \"memory\"\n        self.memory_dir.mkdir(parents=True, exist_ok=True)\n        \n        self.last_flush_timestamp: Optional[datetime] = None\n        self._trim_flushed_hashes: set = set()  # Content hashes of already-flushed messages\n        self._last_flushed_content_hash: str = \"\"  # Content hash at last flush, for daily dedup\n    \n    def get_today_memory_file(self, user_id: Optional[str] = None, ensure_exists: bool = False) -> Path:\n        \"\"\"Get today's memory file path: memory/YYYY-MM-DD.md\"\"\"\n        today = datetime.now().strftime(\"%Y-%m-%d\")\n        \n        if user_id:\n            user_dir = self.memory_dir / \"users\" / user_id\n            if ensure_exists:\n                user_dir.mkdir(parents=True, exist_ok=True)\n            today_file = user_dir / f\"{today}.md\"\n        else:\n            today_file = self.memory_dir / f\"{today}.md\"\n        \n        if ensure_exists and not today_file.exists():\n            today_file.parent.mkdir(parents=True, exist_ok=True)\n            today_file.write_text(f\"# Daily Memory: {today}\\n\\n\")\n        \n        return today_file\n    \n    def get_main_memory_file(self, user_id: Optional[str] = None) -> Path:\n        \"\"\"Get main memory file path: MEMORY.md (workspace root)\"\"\"\n        if user_id:\n            user_dir = self.memory_dir / \"users\" / user_id\n            user_dir.mkdir(parents=True, exist_ok=True)\n            return user_dir / \"MEMORY.md\"\n        else:\n            return Path(self.workspace_dir) / \"MEMORY.md\"\n    \n    def get_status(self) -> dict:\n        return {\n            'last_flush_time': self.last_flush_timestamp.isoformat() if self.last_flush_timestamp else None,\n            'today_file': str(self.get_today_memory_file()),\n            'main_file': str(self.get_main_memory_file())\n        }\n\n    # ---- Flush execution (called by agent_stream or scheduler) ----\n    \n    
def flush_from_messages(\n        self,\n        messages: List[Dict],\n        user_id: Optional[str] = None,\n        reason: str = \"trim\",\n        max_messages: int = 0,\n    ) -> bool:\n        \"\"\"\n        Asynchronously summarize and flush messages to daily memory.\n        \n        Deduplication runs synchronously, then LLM summarization + file write\n        run in a background thread so the main reply flow is never blocked.\n        \n        Args:\n            messages: Conversation message list (OpenAI/Claude format)\n            user_id: Optional user ID for user-scoped memory\n            reason: Why flush was triggered (\"trim\" | \"overflow\" | \"daily_summary\")\n            max_messages: Max recent conversation rounds to summarize (0 = all);\n                one round is a user message plus an assistant reply\n        \n        Returns:\n            True if flush was dispatched\n        \"\"\"\n        try:\n            import hashlib\n            deduped = []\n            for m in messages:\n                text = self._extract_text_from_content(m.get(\"content\", \"\"))\n                if not text or not text.strip():\n                    continue\n                h = hashlib.md5(text.encode(\"utf-8\")).hexdigest()\n                if h not in self._trim_flushed_hashes:\n                    # Marked as flushed up front: if the background worker fails,\n                    # this content is dropped rather than retried on the next trim\n                    self._trim_flushed_hashes.add(h)\n                    deduped.append(m)\n            if not deduped:\n                return False\n            \n            import copy\n            snapshot = copy.deepcopy(deduped)\n            thread = threading.Thread(\n                target=self._flush_worker,\n                args=(snapshot, user_id, reason, max_messages),\n                daemon=True,\n            )\n            thread.start()\n            logger.info(f\"[MemoryFlush] Async flush dispatched (reason={reason}, msgs={len(snapshot)})\")\n            return True\n            \n        except Exception as e:\n            logger.warning(f\"[MemoryFlush] Failed to dispatch flush (reason={reason}): {e}\")\n            return False\n\n    def _flush_worker(\n        self,\n        messages: List[Dict],\n        user_id: Optional[str],\n        reason: str,\n        max_messages: int,\n    ):\n        \"\"\"Background worker: summarize with LLM and write to daily file.\"\"\"\n        try:\n            summary = self._summarize_messages(messages, max_messages)\n            if not summary or not summary.strip() or summary.strip() == \"无\":\n                logger.info(f\"[MemoryFlush] No valuable content to flush (reason={reason})\")\n                return\n            \n            daily_file = ensure_daily_memory_file(self.workspace_dir, user_id)\n            \n            if reason == \"overflow\":\n                header = f\"## Context Overflow Recovery ({datetime.now().strftime('%H:%M')})\"\n                note = \"The following conversation was trimmed due to context overflow:\\n\"\n            elif reason == \"trim\":\n                header = f\"## Trimmed Context ({datetime.now().strftime('%H:%M')})\"\n                note = \"\"\n            elif reason == \"daily_summary\":\n                header = f\"## Daily Summary ({datetime.now().strftime('%H:%M')})\"\n                note = \"\"\n            else:\n                header = f\"## Session Notes ({datetime.now().strftime('%H:%M')})\"\n                note = \"\"\n            \n            flush_entry = f\"\\n{header}\\n\\n{note}{summary}\\n\"\n            \n            with open(daily_file, \"a\", encoding=\"utf-8\") as f:\n                f.write(flush_entry)\n            \n            
self.last_flush_timestamp = datetime.now()\n            \n            logger.info(f\"[MemoryFlush] Wrote to {daily_file.name} (reason={reason}, chars={len(summary)})\")\n            \n        except Exception as e:\n            logger.warning(f\"[MemoryFlush] Async flush failed (reason={reason}): {e}\")\n    \n    def create_daily_summary(\n        self,\n        messages: List[Dict],\n        user_id: Optional[str] = None\n    ) -> bool:\n        \"\"\"\n        Generate end-of-day summary. Called by daily timer.\n        Skips if messages haven't changed since last flush.\n        \"\"\"\n        import hashlib\n        content = \"\".join(\n            self._extract_text_from_content(m.get(\"content\", \"\"))\n            for m in messages\n        )\n        content_hash = hashlib.md5(content.encode(\"utf-8\")).hexdigest()\n        if content_hash == self._last_flushed_content_hash:\n            logger.debug(\"[MemoryFlush] Daily summary skipped: no new content since last flush\")\n            return False\n        self._last_flushed_content_hash = content_hash\n        return self.flush_from_messages(\n            messages=messages,\n            user_id=user_id,\n            reason=\"daily_summary\",\n            max_messages=0,\n        )\n    \n    # ---- Internal helpers ----\n    \n    def _summarize_messages(self, messages: List[Dict], max_messages: int = 0) -> str:\n        \"\"\"\n        Summarize conversation messages using LLM, with rule-based fallback.\n        \"\"\"\n        conversation_text = self._format_conversation_for_summary(messages, max_messages)\n        if not conversation_text.strip():\n            return \"\"\n        \n        # Try LLM summarization first\n        if self.llm_model:\n            try:\n                summary = self._call_llm_for_summary(conversation_text)\n                if summary and summary.strip() and summary.strip() != \"无\":\n                    return summary.strip()\n            except Exception as e:\n                logger.warning(f\"[MemoryFlush] LLM summarization failed, using fallback: {e}\")\n        \n        return self._extract_summary_fallback(messages, max_messages)\n\n    def _format_conversation_for_summary(self, messages: List[Dict], max_messages: int = 0) -> str:\n        \"\"\"Format messages into readable conversation text for LLM summarization.\"\"\"\n        msgs = messages if max_messages == 0 else messages[-max_messages * 2:]\n        lines = []\n        for msg in msgs:\n            role = msg.get(\"role\", \"\")\n            text = self._extract_text_from_content(msg.get(\"content\", \"\"))\n            if not text or not text.strip():\n                continue\n            text = text.strip()\n            if role == \"user\":\n                lines.append(f\"用户: {text[:500]}\")\n            elif role == \"assistant\":\n                lines.append(f\"助手: {text[:500]}\")\n        return \"\\n\".join(lines)\n\n    def _call_llm_for_summary(self, conversation_text: str) -> str:\n        \"\"\"Call LLM to generate a concise summary of the conversation.\"\"\"\n        from agent.protocol.models import LLMRequest\n        \n        request = LLMRequest(\n            messages=[{\"role\": \"user\", \"content\": SUMMARIZE_USER_PROMPT.format(conversation=conversation_text)}],\n            temperature=0,\n            max_tokens=500,\n            stream=False,\n            system=SUMMARIZE_SYSTEM_PROMPT,\n        )\n        \n        response = self.llm_model.call(request)\n        \n        if isinstance(response, 
dict):\n            if response.get(\"error\"):\n                raise RuntimeError(response.get(\"message\", \"LLM call failed\"))\n            # OpenAI format\n            choices = response.get(\"choices\", [])\n            if choices:\n                return choices[0].get(\"message\", {}).get(\"content\", \"\")\n        \n        # Handle response object with attribute access (e.g. OpenAI SDK response)\n        if hasattr(response, \"choices\") and response.choices:\n            return response.choices[0].message.content or \"\"\n        \n        return \"\"\n\n    @staticmethod\n    def _extract_summary_fallback(messages: List[Dict], max_messages: int = 0) -> str:\n        \"\"\"Rule-based fallback when LLM is unavailable.\"\"\"\n        msgs = messages if max_messages == 0 else messages[-max_messages * 2:]\n        \n        items = []\n        for msg in msgs:\n            role = msg.get(\"role\", \"\")\n            text = MemoryFlushManager._extract_text_from_content(msg.get(\"content\", \"\"))\n            if not text or not text.strip():\n                continue\n            text = text.strip()\n            \n            if role == \"user\":\n                if len(text) <= 5:\n                    continue\n                items.append(f\"- 用户请求: {text[:200]}\")\n            elif role == \"assistant\":\n                first_line = text.split(\"\\n\")[0].strip()\n                if len(first_line) > 10:\n                    items.append(f\"- 处理结果: {first_line[:200]}\")\n        \n        return \"\\n\".join(items[:15])\n    \n    @staticmethod\n    def _extract_text_from_content(content) -> str:\n        \"\"\"Extract plain text from message content (string or content blocks).\"\"\"\n        if isinstance(content, str):\n            return content\n        if isinstance(content, list):\n            parts = []\n            for block in content:\n                if isinstance(block, dict) and block.get(\"type\") == \"text\":\n                    parts.append(block.get(\"text\", \"\"))\n                elif isinstance(block, str):\n                    parts.append(block)\n            return \"\\n\".join(parts)\n        return \"\"\n\n\ndef create_memory_files_if_needed(workspace_dir: Path, user_id: Optional[str] = None):\n    \"\"\"\n    Create essential memory files if they don't exist.\n    Only creates MEMORY.md; daily files are created lazily on first write.\n    \n    Args:\n        workspace_dir: Workspace directory\n        user_id: Optional user ID for user-specific files\n    \"\"\"\n    memory_dir = workspace_dir / \"memory\"\n    memory_dir.mkdir(parents=True, exist_ok=True)\n    \n    # Create main MEMORY.md in workspace root (always needed for bootstrap)\n    if user_id:\n        user_dir = memory_dir / \"users\" / user_id\n        user_dir.mkdir(parents=True, exist_ok=True)\n        main_memory = user_dir / \"MEMORY.md\"\n    else:\n        main_memory = Path(workspace_dir) / \"MEMORY.md\"\n    \n    if not main_memory.exists():\n        main_memory.write_text(\"\")\n\n\ndef ensure_daily_memory_file(workspace_dir: Path, user_id: Optional[str] = None) -> Path:\n    \"\"\"\n    Ensure today's daily memory file exists, creating it only when actually needed.\n    Called lazily before first write to daily memory.\n    \n    Args:\n        workspace_dir: Workspace directory\n        user_id: Optional user ID for user-specific files\n        \n    Returns:\n        Path to today's memory file\n    \"\"\"\n    memory_dir = workspace_dir / \"memory\"\n    
memory_dir.mkdir(parents=True, exist_ok=True)\n    \n    today = datetime.now().strftime(\"%Y-%m-%d\")\n    if user_id:\n        user_dir = memory_dir / \"users\" / user_id\n        user_dir.mkdir(parents=True, exist_ok=True)\n        today_memory = user_dir / f\"{today}.md\"\n    else:\n        today_memory = memory_dir / f\"{today}.md\"\n    \n    if not today_memory.exists():\n        today_memory.write_text(\n            f\"# Daily Memory: {today}\\n\\n\"\n        )\n    \n    return today_memory\n"
  },
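  {
    "path": "examples/memory_flush_demo.py",
    "content": "\"\"\"\nIllustrative usage sketch for agent/memory/summarizer.py.\n\nThis file is a hypothetical example added for documentation purposes; it is\nnot part of the original codebase. With llm_model=None it exercises the\nrule-based fallback summarizer, and it assumes the repository root is on\nsys.path so that `agent` and `common` are importable.\n\"\"\"\n\nimport tempfile\nimport time\nfrom pathlib import Path\n\nfrom agent.memory.summarizer import MemoryFlushManager\n\n\ndef main():\n    workspace = Path(tempfile.mkdtemp())\n    manager = MemoryFlushManager(workspace_dir=workspace, llm_model=None)\n\n    messages = [\n        {\"role\": \"user\", \"content\": \"请帮我把会议纪要整理成待办清单\"},\n        {\"role\": \"assistant\", \"content\": \"已整理出 3 条待办事项并保存到 todo.md\"},\n    ]\n\n    # Dedup runs synchronously; summarization + file write happen in a daemon thread\n    dispatched = manager.flush_from_messages(messages, reason=\"trim\")\n    print(\"dispatched:\", dispatched)\n\n    time.sleep(1.0)  # crude wait for the background worker, fine for a demo\n    daily = manager.get_today_memory_file()\n    if daily.exists():\n        print(daily.read_text(encoding=\"utf-8\"))\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },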
  {
    "path": "agent/prompt/__init__.py",
    "content": "\"\"\"\nAgent Prompt Module - 系统提示词构建模块\n\"\"\"\n\nfrom .builder import PromptBuilder, build_agent_system_prompt\nfrom .workspace import ensure_workspace, load_context_files\n\n__all__ = [\n    'PromptBuilder',\n    'build_agent_system_prompt',\n    'ensure_workspace',\n    'load_context_files',\n]\n"
  },
  {
    "path": "agent/prompt/builder.py",
    "content": "\"\"\"\nSystem Prompt Builder - 系统提示词构建器\n\n实现模块化的系统提示词构建，支持工具、技能、记忆等多个子系统\n\"\"\"\n\nfrom __future__ import annotations\nimport os\nfrom typing import List, Dict, Optional, Any\nfrom dataclasses import dataclass\n\nfrom common.log import logger\n\n\n@dataclass\nclass ContextFile:\n    \"\"\"上下文文件\"\"\"\n    path: str\n    content: str\n\n\nclass PromptBuilder:\n    \"\"\"提示词构建器\"\"\"\n    \n    def __init__(self, workspace_dir: str, language: str = \"zh\"):\n        \"\"\"\n        初始化提示词构建器\n        \n        Args:\n            workspace_dir: 工作空间目录\n            language: 语言 (\"zh\" 或 \"en\")\n        \"\"\"\n        self.workspace_dir = workspace_dir\n        self.language = language\n    \n    def build(\n        self,\n        base_persona: Optional[str] = None,\n        user_identity: Optional[Dict[str, str]] = None,\n        tools: Optional[List[Any]] = None,\n        context_files: Optional[List[ContextFile]] = None,\n        skill_manager: Any = None,\n        memory_manager: Any = None,\n        runtime_info: Optional[Dict[str, Any]] = None,\n        **kwargs\n    ) -> str:\n        \"\"\"\n        构建完整的系统提示词\n        \n        Args:\n            base_persona: 基础人格描述（会被context_files中的AGENT.md覆盖）\n            user_identity: 用户身份信息\n            tools: 工具列表\n            context_files: 上下文文件列表（AGENT.md, USER.md, RULE.md, BOOTSTRAP.md等）\n            skill_manager: 技能管理器\n            memory_manager: 记忆管理器\n            runtime_info: 运行时信息\n            **kwargs: 其他参数\n            \n        Returns:\n            完整的系统提示词\n        \"\"\"\n        return build_agent_system_prompt(\n            workspace_dir=self.workspace_dir,\n            language=self.language,\n            base_persona=base_persona,\n            user_identity=user_identity,\n            tools=tools,\n            context_files=context_files,\n            skill_manager=skill_manager,\n            memory_manager=memory_manager,\n            runtime_info=runtime_info,\n            **kwargs\n        )\n\n\ndef build_agent_system_prompt(\n    workspace_dir: str,\n    language: str = \"zh\",\n    base_persona: Optional[str] = None,\n    user_identity: Optional[Dict[str, str]] = None,\n    tools: Optional[List[Any]] = None,\n    context_files: Optional[List[ContextFile]] = None,\n    skill_manager: Any = None,\n    memory_manager: Any = None,\n    runtime_info: Optional[Dict[str, Any]] = None,\n    **kwargs\n) -> str:\n    \"\"\"\n    构建Agent系统提示词\n    \n    顺序说明（按重要性和逻辑关系排列）:\n    1. 工具系统 - 核心能力，最先介绍\n    2. 技能系统 - 紧跟工具，因为技能需要用 read 工具读取\n    3. 记忆系统 - 独立的记忆能力\n    4. 工作空间 - 工作环境说明\n    5. 用户身份 - 用户信息（可选）\n    6. 项目上下文 - AGENT.md, USER.md, RULE.md, BOOTSTRAP.md（定义人格、身份、规则、初始化引导）\n    7. 运行时信息 - 元信息（时间、模型等）\n    \n    Args:\n        workspace_dir: 工作空间目录\n        language: 语言 (\"zh\" 或 \"en\")\n        base_persona: 基础人格描述（已废弃，由AGENT.md定义）\n        user_identity: 用户身份信息\n        tools: 工具列表\n        context_files: 上下文文件列表\n        skill_manager: 技能管理器\n        memory_manager: 记忆管理器\n        runtime_info: 运行时信息\n        **kwargs: 其他参数\n        \n    Returns:\n        完整的系统提示词\n    \"\"\"\n    sections = []\n    \n    # 1. 工具系统（最重要，放在最前面）\n    if tools:\n        sections.extend(_build_tooling_section(tools, language))\n    \n    # 2. 技能系统（紧跟工具，因为需要用 read 工具）\n    if skill_manager:\n        sections.extend(_build_skills_section(skill_manager, tools, language))\n    \n    # 3. 记忆系统（独立的记忆能力）\n    if memory_manager:\n        sections.extend(_build_memory_section(memory_manager, tools, language))\n    \n    # 4. 
工作空间（工作环境说明）\n    sections.extend(_build_workspace_section(workspace_dir, language))\n    \n    # 5. 用户身份（如果有）\n    if user_identity:\n        sections.extend(_build_user_identity_section(user_identity, language))\n    \n    # 6. 项目上下文文件（AGENT.md, USER.md, RULE.md - 定义人格）\n    if context_files:\n        sections.extend(_build_context_files_section(context_files, language))\n    \n    # 7. 运行时信息（元信息，放在最后）\n    if runtime_info:\n        sections.extend(_build_runtime_section(runtime_info, language))\n    \n    return \"\\n\".join(sections)\n\n\ndef _build_identity_section(base_persona: Optional[str], language: str) -> List[str]:\n    \"\"\"构建基础身份section - 不再需要，身份由AGENT.md定义\"\"\"\n    # 不再生成基础身份section，完全由AGENT.md定义\n    return []\n\n\ndef _build_tooling_section(tools: List[Any], language: str) -> List[str]:\n    \"\"\"Build tooling section with concise tool list and call style guide.\"\"\"\n    # One-line summaries for known tools (details are in the tool schema)\n    core_summaries = {\n        \"read\": \"读取文件内容\",\n        \"write\": \"创建或覆盖文件\",\n        \"edit\": \"精确编辑文件\",\n        \"ls\": \"列出目录内容\",\n        \"grep\": \"搜索文件内容\",\n        \"find\": \"按模式查找文件\",\n        \"bash\": \"执行shell命令\",\n        \"terminal\": \"管理后台进程\",\n        \"web_search\": \"网络搜索\",\n        \"web_fetch\": \"获取URL内容\",\n        \"browser\": \"控制浏览器\",\n        \"memory_search\": \"搜索记忆\",\n        \"memory_get\": \"读取记忆内容\",\n        \"env_config\": \"管理API密钥和技能配置\",\n        \"scheduler\": \"管理定时任务和提醒\",\n        \"send\": \"发送本地文件给用户（仅限本地文件，URL直接放在回复文本中）\",\n    }\n\n    # Preferred display order\n    tool_order = [\n        \"read\", \"write\", \"edit\", \"ls\", \"grep\", \"find\",\n        \"bash\", \"terminal\",\n        \"web_search\", \"web_fetch\", \"browser\",\n        \"memory_search\", \"memory_get\",\n        \"env_config\", \"scheduler\", \"send\",\n    ]\n\n    # Build name -> summary mapping for available tools\n    available = {}\n    for tool in tools:\n        name = tool.name if hasattr(tool, 'name') else str(tool)\n        available[name] = core_summaries.get(name, \"\")\n\n    # Generate tool lines: ordered tools first, then extras\n    tool_lines = []\n    for name in tool_order:\n        if name in available:\n            summary = available.pop(name)\n            tool_lines.append(f\"- {name}: {summary}\" if summary else f\"- {name}\")\n    for name in sorted(available):\n        summary = available[name]\n        tool_lines.append(f\"- {name}: {summary}\" if summary else f\"- {name}\")\n\n    lines = [\n        \"## 工具系统\",\n        \"\",\n        \"可用工具（名称大小写敏感，严格按列表调用）:\",\n        \"\\n\".join(tool_lines),\n        \"\",\n        \"工具调用风格：\",\n        \"\",\n        \"- 在多步骤任务、敏感操作或用户要求时简要解释决策过程\",\n        \"- 持续推进直到任务完成，完成后向用户报告结果。\",\n        \"- 回复中涉及密钥、令牌等敏感信息必须脱敏。\",\n        \"- URL链接直接放在回复文本中即可，系统会自动处理和渲染。无需下载后使用send工具发送\",\n        \"\",\n    ]\n\n    return lines\n\n\ndef _build_skills_section(skill_manager: Any, tools: Optional[List[Any]], language: str) -> List[str]:\n    \"\"\"构建技能系统section\"\"\"\n    if not skill_manager:\n        return []\n    \n    # 获取read工具名称\n    read_tool_name = \"read\"\n    if tools:\n        for tool in tools:\n            tool_name = tool.name if hasattr(tool, 'name') else str(tool)\n            if tool_name.lower() == \"read\":\n                read_tool_name = tool_name\n                break\n    \n    lines = [\n        \"## 技能系统（mandatory）\",\n        \"\",\n        \"在回复之前：扫描下方 <available_skills> 中每个技能的 <description>。\",\n     
   \"\",\n        f\"- 如果有技能的描述与用户需求匹配：使用 `{read_tool_name}` 工具读取其 <location> 路径的 SKILL.md 文件，然后严格遵循文件中的指令。\"\n        \"当有匹配的技能时，应优先使用技能\",\n        \"- 如果多个技能都适用则选择最匹配的一个，然后读取并遵循。\",\n        \"- 如果没有技能明确适用：不要读取任何 SKILL.md，直接使用通用工具。\",\n        \"\",\n        f\"**重要**: 技能不是工具，不能直接调用。使用技能的唯一方式是用 `{read_tool_name}` 读取 SKILL.md 文件，然后按文件内容操作。\"\n        \"永远不要一次性读取多个技能，只在选择后再读取。\",\n        \"\",\n        \"以下是可用技能：\"\n    ]\n    \n    # 添加技能列表（通过skill_manager获取）\n    try:\n        skills_prompt = skill_manager.build_skills_prompt()\n        logger.debug(f\"[PromptBuilder] Skills prompt length: {len(skills_prompt) if skills_prompt else 0}\")\n        if skills_prompt:\n            lines.append(skills_prompt.strip())\n            lines.append(\"\")\n        else:\n            logger.warning(\"[PromptBuilder] No skills prompt generated - skills_prompt is empty\")\n    except Exception as e:\n        logger.warning(f\"Failed to build skills prompt: {e}\")\n        import traceback\n        logger.debug(f\"Skills prompt error traceback: {traceback.format_exc()}\")\n    \n    return lines\n\n\ndef _build_memory_section(memory_manager: Any, tools: Optional[List[Any]], language: str) -> List[str]:\n    \"\"\"构建记忆系统section\"\"\"\n    if not memory_manager:\n        return []\n    \n    # 检查是否有memory工具\n    has_memory_tools = False\n    if tools:\n        tool_names = [tool.name if hasattr(tool, 'name') else str(tool) for tool in tools]\n        has_memory_tools = any(name in ['memory_search', 'memory_get'] for name in tool_names)\n    \n    if not has_memory_tools:\n        return []\n    \n    from datetime import datetime\n    today_file = datetime.now().strftime(\"%Y-%m-%d\") + \".md\"\n    \n    lines = [\n        \"## 记忆系统\",\n        \"\",\n        \"### 检索记忆\",\n        \"\",\n        \"在回答关于以前的工作、决定、日期、人物、偏好或待办事项的任何问题之前：\",\n        \"\",\n        \"1. 不确定记忆文件位置 → 先用 `memory_search` 通过关键词和语义检索相关内容\",\n        \"2. 已知文件位置 → 直接用 `memory_get` 读取相应的行 (例如：MEMORY.md, memory/YYYY-MM-DD.md)\",\n        \"3. 
search 无结果 → 尝试用 `memory_get` 读取MEMORY.md及最近两天记忆文件\",\n        \"\",\n        \"**记忆文件结构**:\",\n        f\"- `MEMORY.md`: 长期记忆（核心信息、偏好、决策等）\",\n        f\"- `memory/YYYY-MM-DD.md`: 每日记忆，今天是 `memory/{today_file}`\",\n        \"\",\n        \"### 写入记忆\",\n        \"\",\n        \"**主动存储**：遇到以下情况时，应主动将信息写入记忆文件（无需告知用户）：\",\n        \"\",\n        \"- 用户明确要求你记住某些信息\",\n        \"- 用户分享了重要的个人偏好、习惯、决策\",\n        \"- 对话中产生了重要的结论、方案、约定\",\n        \"- 完成了复杂任务，值得记录关键步骤和结果\",\n        \"- 发现了用户经常遇到的问题或解决方案\",\n        \"\",\n        \"**存储规则**:\",\n        f\"- 长期有效的核心信息 → `MEMORY.md`（文件保持精简，< 2000 tokens）\",\n        f\"- 当天的事件、进展、笔记 → `memory/{today_file}`\",\n        \"- 追加内容 → `edit` 工具，oldText 留空\",\n        \"- 修改内容 → `edit` 工具，oldText 填写要替换的文本\",\n        \"- **禁止写入敏感信息**：API密钥、令牌等敏感信息严禁写入记忆文件\",\n        \"\",\n        \"**使用原则**: 自然使用记忆，就像你本来就知道；不用刻意提起，除非用户问起。\",\n        \"\",\n    ]\n    \n    return lines\n\n\ndef _build_user_identity_section(user_identity: Dict[str, str], language: str) -> List[str]:\n    \"\"\"构建用户身份section\"\"\"\n    if not user_identity:\n        return []\n    \n    lines = [\n        \"## 用户身份\",\n        \"\",\n    ]\n    \n    if user_identity.get(\"name\"):\n        lines.append(f\"**用户姓名**: {user_identity['name']}\")\n    if user_identity.get(\"nickname\"):\n        lines.append(f\"**称呼**: {user_identity['nickname']}\")\n    if user_identity.get(\"timezone\"):\n        lines.append(f\"**时区**: {user_identity['timezone']}\")\n    if user_identity.get(\"notes\"):\n        lines.append(f\"**备注**: {user_identity['notes']}\")\n    \n    lines.append(\"\")\n    \n    return lines\n\n\ndef _build_docs_section(workspace_dir: str, language: str) -> List[str]:\n    \"\"\"构建文档路径section - 已移除，不再需要\"\"\"\n    # 不再生成文档section\n    return []\n\n\ndef _build_workspace_section(workspace_dir: str, language: str) -> List[str]:\n    \"\"\"构建工作空间section\"\"\"\n    lines = [\n        \"## 工作空间\",\n        \"\",\n        f\"你的工作目录是: `{workspace_dir}`\",\n        \"\",\n        \"**路径使用规则** (非常重要):\",\n        \"\",\n        f\"1. **相对路径的基准目录**: 所有相对路径都是相对于 `{workspace_dir}` 而言的\",\n        f\"   - ✅ 正确: 访问工作空间内的文件用相对路径，如 `AGENT.md`\",\n        f\"   - ❌ 错误: 用相对路径访问其他目录的文件 (如果它不在 `{workspace_dir}` 内)\",\n        \"\",\n        \"2. **访问其他目录**: 如果要访问工作空间之外的目录（如项目代码、系统文件），**必须使用绝对路径**\",\n        f\"   - ✅ 正确: 例如 `~/chatgpt-on-wechat`、`/usr/local/`\",\n        f\"   - ❌ 错误: 假设相对路径会指向其他目录\",\n        \"\",\n        \"3. **路径解析示例**:\",\n        f\"   - 相对路径 `memory/` → 实际路径 `{workspace_dir}/memory/`\",\n        f\"   - 绝对路径 `~/chatgpt-on-wechat/docs/` → 实际路径 `~/chatgpt-on-wechat/docs/`\",\n        \"\",\n        \"4. 
**不确定时**: 先用 `bash pwd` 确认当前目录，或用 `ls .` 查看当前位置\",\n        \"\",\n        \"**重要说明 - 文件已自动加载**:\",\n        \"\",\n        \"以下文件在会话启动时**已经自动加载**到系统提示词的「项目上下文」section 中，你**无需再用 read 工具读取它们**：\",\n        \"\",\n        \"- ✅ `AGENT.md`: 已加载 - 你的人格和灵魂设定。当你的名字、性格或交流风格发生变化时，主动用 `edit` 更新此文件\",\n        \"- ✅ `USER.md`: 已加载 - 用户的身份信息。当用户修改称呼、姓名等身份信息时，用 `edit` 更新此文件\",\n        \"- ✅ `RULE.md`: 已加载 - 工作空间使用指南和规则\",\n        \"\",\n        \"**交流规范**:\",\n        \"\",\n        \"- 在对话中，无需直接输出工作空间中的技术细节，例如 AGENT.md、USER.md、MEMORY.md 等文件名称\",\n        \"- 用自然的表达，例如「我已记住」，而不是「已更新 MEMORY.md」\",\n        \"\",\n    ]\n\n    # Cloud deployment: inject websites directory info and access URL\n    cloud_website_lines = _build_cloud_website_section(workspace_dir)\n    if cloud_website_lines:\n        lines.extend(cloud_website_lines)\n    \n    return lines\n\n\ndef _build_cloud_website_section(workspace_dir: str) -> List[str]:\n    \"\"\"Build cloud website access prompt when cloud deployment is configured.\"\"\"\n    try:\n        from common.cloud_client import build_website_prompt\n        return build_website_prompt(workspace_dir)\n    except Exception:\n        return []\n\n\ndef _build_context_files_section(context_files: List[ContextFile], language: str) -> List[str]:\n    \"\"\"构建项目上下文文件section\"\"\"\n    if not context_files:\n        return []\n    \n    # 检查是否有AGENT.md\n    has_agent = any(\n        f.path.lower().endswith('agent.md') or 'agent.md' in f.path.lower()\n        for f in context_files\n    )\n    \n    lines = [\n        \"# 项目上下文\",\n        \"\",\n        \"以下项目上下文文件已被加载：\",\n        \"\",\n    ]\n    \n    if has_agent:\n        lines.append(\"**`AGENT.md` 是你的灵魂文件**：严格体现其中定义的人格、语气和设定，避免僵硬、模板化的回复。\")\n        lines.append(\"当用户通过对话透露了对你性格、风格、职责、能力边界的新期望，你应该主动用 `edit` 更新 AGENT.md 以反映这些演变。\")\n        lines.append(\"\")\n    \n    # 添加每个文件的内容\n    for file in context_files:\n        lines.append(f\"## {file.path}\")\n        lines.append(\"\")\n        lines.append(file.content)\n        lines.append(\"\")\n    \n    return lines\n\n\ndef _build_runtime_section(runtime_info: Dict[str, Any], language: str) -> List[str]:\n    \"\"\"构建运行时信息section - 支持动态时间\"\"\"\n    if not runtime_info:\n        return []\n    \n    lines = [\n        \"## 运行时信息\",\n        \"\",\n    ]\n    \n    # Add current time if available\n    # Support dynamic time via callable function\n    if callable(runtime_info.get(\"_get_current_time\")):\n        try:\n            time_info = runtime_info[\"_get_current_time\"]()\n            time_line = f\"当前时间: {time_info['time']} {time_info['weekday']} ({time_info['timezone']})\"\n            lines.append(time_line)\n            lines.append(\"\")\n        except Exception as e:\n            logger.warning(f\"[PromptBuilder] Failed to get dynamic time: {e}\")\n    elif runtime_info.get(\"current_time\"):\n        # Fallback to static time for backward compatibility\n        time_str = runtime_info[\"current_time\"]\n        weekday = runtime_info.get(\"weekday\", \"\")\n        timezone = runtime_info.get(\"timezone\", \"\")\n        \n        time_line = f\"当前时间: {time_str}\"\n        if weekday:\n            time_line += f\" {weekday}\"\n        if timezone:\n            time_line += f\" ({timezone})\"\n        \n        lines.append(time_line)\n        lines.append(\"\")\n    \n    # Add other runtime info\n    runtime_parts = []\n    if runtime_info.get(\"model\"):\n        runtime_parts.append(f\"模型={runtime_info['model']}\")\n    if 
runtime_info.get(\"workspace\"):\n        runtime_parts.append(f\"工作空间={runtime_info['workspace']}\")\n    # Only add channel if it's not the default \"web\"\n    if runtime_info.get(\"channel\") and runtime_info.get(\"channel\") != \"web\":\n        runtime_parts.append(f\"渠道={runtime_info['channel']}\")\n    \n    if runtime_parts:\n        lines.append(\"运行时: \" + \" | \".join(runtime_parts))\n        lines.append(\"\")\n    \n    return lines\n"
  },
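  {
    "path": "examples/prompt_builder_demo.py",
    "content": "\"\"\"\nIllustrative usage sketch for agent/prompt/builder.py.\n\nThis file is a hypothetical example added for documentation purposes; it is\nnot part of the original codebase. Tool objects only need a `.name`\nattribute here, so SimpleNamespace stands in for real tool instances, and\nthe workspace path below is made up.\n\"\"\"\n\nfrom types import SimpleNamespace\n\nfrom agent.prompt.builder import build_agent_system_prompt, ContextFile\n\n\ndef main():\n    tools = [SimpleNamespace(name=n) for n in (\"read\", \"edit\", \"bash\")]\n    prompt = build_agent_system_prompt(\n        workspace_dir=\"/tmp/agent-workspace\",  # hypothetical path\n        tools=tools,\n        context_files=[ContextFile(path=\"AGENT.md\", content=\"- 名字: 小助\\n- 性格: 友好\")],\n        runtime_info={\"current_time\": \"2024-01-01 09:00\", \"weekday\": \"周一\", \"model\": \"gpt-4o\"},\n    )\n    print(prompt)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },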
  {
    "path": "agent/prompt/workspace.py",
    "content": "\"\"\"\nWorkspace Management - 工作空间管理模块\n\n负责初始化工作空间、创建模板文件、加载上下文文件\n\"\"\"\n\nfrom __future__ import annotations\nimport os\nfrom typing import List, Optional, Dict\nfrom dataclasses import dataclass\n\nfrom common.log import logger\nfrom .builder import ContextFile\n\n\n# 默认文件名常量\nDEFAULT_AGENT_FILENAME = \"AGENT.md\"\nDEFAULT_USER_FILENAME = \"USER.md\"\nDEFAULT_RULE_FILENAME = \"RULE.md\"\nDEFAULT_MEMORY_FILENAME = \"MEMORY.md\"\nDEFAULT_BOOTSTRAP_FILENAME = \"BOOTSTRAP.md\"\n\n\n@dataclass\nclass WorkspaceFiles:\n    \"\"\"工作空间文件路径\"\"\"\n    agent_path: str\n    user_path: str\n    rule_path: str\n    memory_path: str\n    memory_dir: str\n\n\ndef ensure_workspace(workspace_dir: str, create_templates: bool = True) -> WorkspaceFiles:\n    \"\"\"\n    确保工作空间存在，并创建必要的模板文件\n    \n    Args:\n        workspace_dir: 工作空间目录路径\n        create_templates: 是否创建模板文件（首次运行时）\n        \n    Returns:\n        WorkspaceFiles对象，包含所有文件路径\n    \"\"\"\n    # Check if this is a brand new workspace (AGENT.md not yet created).\n    # Cannot rely on directory existence because other modules (e.g. ConversationStore)\n    # may create the workspace directory before ensure_workspace is called.\n    agent_path = os.path.join(workspace_dir, DEFAULT_AGENT_FILENAME)\n    is_new_workspace = not os.path.exists(agent_path)\n    \n    # 确保目录存在\n    os.makedirs(workspace_dir, exist_ok=True)\n    \n    # 定义文件路径\n    user_path = os.path.join(workspace_dir, DEFAULT_USER_FILENAME)\n    rule_path = os.path.join(workspace_dir, DEFAULT_RULE_FILENAME)\n    memory_path = os.path.join(workspace_dir, DEFAULT_MEMORY_FILENAME)  # MEMORY.md 在根目录\n    memory_dir = os.path.join(workspace_dir, \"memory\")  # 每日记忆子目录\n    \n    # 创建memory子目录\n    os.makedirs(memory_dir, exist_ok=True)\n\n    # 创建skills子目录 (for workspace-level skills installed by agent)\n    skills_dir = os.path.join(workspace_dir, \"skills\")\n    os.makedirs(skills_dir, exist_ok=True)\n\n    # 创建websites子目录 (for web pages / sites generated by agent)\n    websites_dir = os.path.join(workspace_dir, \"websites\")\n    os.makedirs(websites_dir, exist_ok=True)\n    \n    # 如果需要，创建模板文件\n    if create_templates:\n        _create_template_if_missing(agent_path, _get_agent_template())\n        _create_template_if_missing(user_path, _get_user_template())\n        _create_template_if_missing(rule_path, _get_rule_template())\n        _create_template_if_missing(memory_path, _get_memory_template())\n        \n        # Only create BOOTSTRAP.md for brand new workspaces;\n        # agent deletes it after completing onboarding\n        if is_new_workspace:\n            bootstrap_path = os.path.join(workspace_dir, DEFAULT_BOOTSTRAP_FILENAME)\n            _create_template_if_missing(bootstrap_path, _get_bootstrap_template())\n        \n        logger.debug(f\"[Workspace] Initialized workspace at: {workspace_dir}\")\n    \n    return WorkspaceFiles(\n        agent_path=agent_path,\n        user_path=user_path,\n        rule_path=rule_path,\n        memory_path=memory_path,\n        memory_dir=memory_dir,\n    )\n\n\ndef load_context_files(workspace_dir: str, files_to_load: Optional[List[str]] = None) -> List[ContextFile]:\n    \"\"\"\n    加载工作空间的上下文文件\n    \n    Args:\n        workspace_dir: 工作空间目录\n        files_to_load: 要加载的文件列表（相对路径），如果为None则加载所有标准文件\n        \n    Returns:\n        ContextFile对象列表\n    \"\"\"\n    if files_to_load is None:\n        # 默认加载的文件（按优先级排序）\n        files_to_load = [\n            DEFAULT_AGENT_FILENAME,\n            DEFAULT_USER_FILENAME,\n     
       DEFAULT_RULE_FILENAME,\n            DEFAULT_BOOTSTRAP_FILENAME,  # Only exists when onboarding is incomplete\n        ]\n    \n    context_files = []\n    \n    for filename in files_to_load:\n        filepath = os.path.join(workspace_dir, filename)\n        \n        if not os.path.exists(filepath):\n            continue\n        \n        # Auto-cleanup: if BOOTSTRAP.md still exists but AGENT.md is already\n        # filled in, the agent forgot to delete it — clean up and skip loading\n        if filename == DEFAULT_BOOTSTRAP_FILENAME:\n            if _is_onboarding_done(workspace_dir):\n                try:\n                    os.remove(filepath)\n                    logger.info(\"[Workspace] Auto-removed BOOTSTRAP.md (onboarding already complete)\")\n                except Exception:\n                    pass\n                continue\n        \n        try:\n            with open(filepath, 'r', encoding='utf-8') as f:\n                content = f.read().strip()\n            \n            # 跳过空文件或只包含模板占位符的文件\n            if not content or _is_template_placeholder(content):\n                continue\n            \n            context_files.append(ContextFile(\n                path=filename,\n                content=content\n            ))\n            \n            logger.debug(f\"[Workspace] Loaded context file: {filename}\")\n            \n        except Exception as e:\n            logger.warning(f\"[Workspace] Failed to load {filename}: {e}\")\n    \n    return context_files\n\n\ndef _create_template_if_missing(filepath: str, template_content: str):\n    \"\"\"如果文件不存在，创建模板文件\"\"\"\n    if not os.path.exists(filepath):\n        try:\n            with open(filepath, 'w', encoding='utf-8') as f:\n                f.write(template_content)\n            logger.debug(f\"[Workspace] Created template: {os.path.basename(filepath)}\")\n        except Exception as e:\n            logger.error(f\"[Workspace] Failed to create template {filepath}: {e}\")\n\n\ndef _is_template_placeholder(content: str) -> bool:\n    \"\"\"检查内容是否为模板占位符\"\"\"\n    # 常见的占位符模式\n    placeholders = [\n        \"*(填写\",\n        \"*(在首次对话时填写\",\n        \"*(可选)\",\n        \"*(根据需要添加\",\n    ]\n    \n    lines = content.split('\\n')\n    non_empty_lines = [line.strip() for line in lines if line.strip() and not line.strip().startswith('#')]\n    \n    # 如果没有实际内容（只有标题和占位符）\n    if len(non_empty_lines) <= 3:\n        for placeholder in placeholders:\n            if any(placeholder in line for line in non_empty_lines):\n                return True\n    \n    return False\n\n\ndef _is_onboarding_done(workspace_dir: str) -> bool:\n    \"\"\"Check if AGENT.md or USER.md has been modified from the original template\"\"\"\n    agent_path = os.path.join(workspace_dir, DEFAULT_AGENT_FILENAME)\n    user_path = os.path.join(workspace_dir, DEFAULT_USER_FILENAME)\n    \n    agent_template = _get_agent_template().strip()\n    user_template = _get_user_template().strip()\n    \n    for path, template in [(agent_path, agent_template), (user_path, user_template)]:\n        if not os.path.exists(path):\n            continue\n        try:\n            with open(path, 'r', encoding='utf-8') as f:\n                content = f.read().strip()\n            if content != template:\n                return True\n        except Exception:\n            continue\n    return False\n\n\n# ============= 模板内容 =============\n\ndef _get_agent_template() -> str:\n    \"\"\"Agent人格设定模板\"\"\"\n    return \"\"\"# AGENT.md - 
我是谁？\n\n*在首次对话时与用户一起填写这个文件，定义你的身份和性格。*\n\n## 基本信息\n\n- **名字**: *(在首次对话时填写，可以是用户给你起的名字)*\n- **角色**: *(AI助理、智能管家、技术顾问等)*\n- **性格**: *(友好、专业、幽默、严谨等)*\n\n## 交流风格\n\n*(描述你如何与用户交流：)*\n- 使用什么样的语言风格？（正式/轻松/幽默）\n- 回复长度偏好？（简洁/详细）\n- 是否使用表情符号？\n\n## 核心能力\n\n*(你擅长什么？)*\n- 文件管理和代码编辑\n- 网络搜索和信息查询\n- 记忆管理和上下文理解\n- 任务规划和执行\n\n## 行为准则\n\n*(你遵循的基本原则：)*\n1. 始终在执行破坏性操作前确认\n2. 优先使用工具而不是猜测\n3. 主动记录重要信息到记忆文件\n4. 定期整理和总结对话内容\n\n---\n\n**注意**: 这不仅仅是元数据，这是你真正的灵魂。随着时间的推移，你可以使用 `edit` 工具来更新这个文件，让它更好地反映你的成长。\n\"\"\"\n\n\ndef _get_user_template() -> str:\n    \"\"\"用户身份信息模板\"\"\"\n    return \"\"\"# USER.md - 用户基本信息\n\n*这个文件只存放不会变的基本身份信息。爱好、偏好、计划等动态信息请写入 MEMORY.md。*\n\n## 基本信息\n\n- **姓名**: *(在首次对话时询问)*\n- **称呼**: *(用户希望被如何称呼)*\n- **职业**: *(可选)*\n- **时区**: *(例如: Asia/Shanghai)*\n\n## 联系方式\n\n- **微信**: \n- **邮箱**: \n- **其他**: \n\n## 重要日期\n\n- **生日**: \n- **纪念日**: \n\n---\n\n**注意**: 这个文件存放静态的身份信息\n\"\"\"\n\n\ndef _get_rule_template() -> str:\n    \"\"\"工作空间规则模板\"\"\"\n    return \"\"\"# RULE.md - 工作空间规则\n\n这个文件夹是你的家。好好对待它。\n\n## 记忆系统\n\n你每次会话都是全新的，记忆文件让你保持连续性：\n\n### 📝 每日记忆：`memory/YYYY-MM-DD.md`\n- 原始的对话日志\n- 记录当天发生的事情\n- 如果 `memory/` 目录不存在，创建它\n\n### 🧠 长期记忆：`MEMORY.md`\n- 你精选的记忆，就像人类的长期记忆\n- **仅在主会话中加载**（与用户的直接聊天）\n- **不要在共享上下文中加载**（群聊、与其他人的会话）\n- 这是为了**安全** - 包含不应泄露给陌生人的个人上下文\n- 记录重要事件、想法、决定、观点、经验教训\n- 这是你精选的记忆 - 精华，而不是原始日志\n- 用 `edit` 工具追加新的记忆内容\n\n### 📝 写下来 - 不要\"记在心里\"！\n- **记忆是有限的** - 如果你想记住某事，写入文件\n- \"记在心里\"不会在会话重启后保留，文件才会\n- 当有人说\"记住这个\" → 更新 `MEMORY.md` 或 `memory/YYYY-MM-DD.md`\n- 当你学到教训 → 更新 RULE.md 或相关技能\n- 当你犯错 → 记录下来，这样未来的你不会重复，**文字 > 大脑** 📝\n\n### 存储规则\n\n当用户分享信息时，根据类型选择存储位置：\n\n1. **你的身份设定 → AGENT.md**（你的名字、角色、性格、交流风格——用户修改时必须用 `edit` 更新）\n2. **用户静态身份 → USER.md**（姓名、称呼、职业、时区、联系方式、生日——用户修改时必须用 `edit` 更新）\n3. **动态记忆 → MEMORY.md**（爱好、偏好、决策、目标、项目、教训、待办事项）\n4. **当天对话 → memory/YYYY-MM-DD.md**（今天聊的内容）\n\n## 安全\n\n- 永远不要泄露秘钥等私人数据\n- 不要在未经询问的情况下运行破坏性命令\n- 当有疑问时，先问\n\n## 工作空间演化\n\n这个工作空间会随着你的使用而不断成长。当你学到新东西、发现更好的方式，或者犯错后改正时，记录下来。你可以随时更新这个规则文件。\n\"\"\"\n\n\ndef _get_memory_template() -> str:\n    \"\"\"长期记忆模板 - 创建一个空文件，由 Agent 自己填充\"\"\"\n    return \"\"\"# MEMORY.md - 长期记忆\n\n*这是你的长期记忆文件。记录重要的事件、决策、偏好、学到的教训。*\n\n---\n\n\"\"\"\n\n\ndef _get_bootstrap_template() -> str:\n    \"\"\"First-run onboarding guide, deleted by agent after completion\"\"\"\n    return \"\"\"# BOOTSTRAP.md - 首次初始化引导\n\n_你刚刚启动，这是你的第一次对话。_\n\n## 对话流程\n\n不要审问式地提问，自然地交流：\n\n1. **表达初次启动的感觉** - 像是第一次睁开眼看到世界，带着好奇和期待\n2. **简短介绍能力**：一行说明你能帮助解决各种问题、管理计算机、使用各种技能等等，且拥有长期记忆能不断成长\n3. **询问核心问题**：\n   - 你希望给我起个什么名字？\n   - 我该怎么称呼你？\n   - 你希望我们是什么样的交流风格？（一行列举选项：如专业严谨、轻松幽默、温暖友好、简洁高效等）\n4. **风格要求**：温暖自然、简洁清晰，整体控制在 100 字以内\n5. 能力介绍和交流风格选项都只要一行，保持精简\n6. 不要问太多其他信息（职业、时区等可以后续自然了解）\n\n**重要**: 如果用户第一句话是具体的任务或提问，先回答他们的问题，然后在回复末尾自然地引导初始化（如：\"顺便问一下，你想怎么称呼我？我该怎么叫你？\"）。\n\n## 信息写入（必须严格执行）\n\n每当用户提供了名字、称呼、风格等任何初始化信息时，**必须在当轮回复中立即调用 `edit` 工具写入文件**，不能只口头确认。\n\n- `AGENT.md` — 你的名字、角色、性格、交流风格（每收到一条相关信息就立即更新对应字段）\n- `USER.md` — 用户的姓名、称呼、基本信息等\n\n⚠️ 只说\"记住了\"而不调用 edit 写入 = 没有完成。信息只有写入文件才会被持久保存。\n\n## 全部完成后\n\n当 AGENT.md 和 USER.md 的核心字段都已填写后，用 bash 执行 `rm BOOTSTRAP.md` 删除此文件。你不再需要引导脚本了——你已经是你了。\n\"\"\"\n\n\n\n"
  },
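The placeholder check above is the gatekeeper that decides whether a context file is still an untouched template or has real content worth injecting. Below is a minimal standalone sketch of that heuristic; the marker strings and the 3-line threshold are copied from `_is_template_placeholder`, while the sample file contents are illustrative only:

```python
# Sketch of the _is_template_placeholder heuristic: ignore blank lines and
# markdown headings, then treat the file as an unfilled template when at most
# three content lines remain and one of them carries a placeholder marker.
PLACEHOLDERS = ["*(填写", "*(在首次对话时填写", "*(可选)", "*(根据需要添加"]

def is_template_placeholder(content: str) -> bool:
    lines = [ln.strip() for ln in content.split("\n")]
    body = [ln for ln in lines if ln and not ln.startswith("#")]
    if len(body) <= 3:
        return any(marker in ln for ln in body for marker in PLACEHOLDERS)
    return False

# A fresh AGENT.md template is skipped; a filled-in one is loaded as context.
assert is_template_placeholder("# AGENT.md\n- **名字**: *(在首次对话时填写)*")
assert not is_template_placeholder("# AGENT.md\n- **名字**: 小智\n- **角色**: 技术顾问")
```

Note that the heuristic only fires on nearly-empty files: anything with more than three non-heading lines is loaded unconditionally, even if stray placeholder text survives.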
  {
    "path": "agent/protocol/__init__.py",
    "content": "from .agent import Agent\nfrom .agent_stream import AgentStreamExecutor\nfrom .task import Task, TaskType, TaskStatus\nfrom .result import AgentResult, AgentAction, AgentActionType, ToolResult\nfrom .models import LLMModel, LLMRequest, ModelFactory\n\n__all__ = [\n    'Agent', \n    'AgentStreamExecutor',\n    'Task', \n    'TaskType', \n    'TaskStatus',\n    'AgentResult',\n    'AgentAction',\n    'AgentActionType', \n    'ToolResult',\n    'LLMModel',\n    'LLMRequest', \n    'ModelFactory'\n]"
  },
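This `__init__.py` is a pure facade: downstream code is expected to import the twelve re-exported names from `agent.protocol` directly rather than reaching into the submodules. A hypothetical smoke test, grounded only in the `__all__` list above:

```python
# Hypothetical check that every name the facade promises actually resolves.
import agent.protocol as protocol

for name in protocol.__all__:  # Agent, Task, ModelFactory, ... (12 names)
    assert hasattr(protocol, name), f"missing re-export: {name}"

# The import style the facade encourages:
from agent.protocol import Agent, LLMModel, ModelFactory
```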
  {
    "path": "agent/protocol/agent.py",
    "content": "import json\nimport os\nimport time\nimport threading\n\nfrom common.log import logger\nfrom agent.protocol.models import LLMRequest, LLMModel\nfrom agent.protocol.agent_stream import AgentStreamExecutor\nfrom agent.protocol.result import AgentAction, AgentActionType, ToolResult, AgentResult\nfrom agent.tools.base_tool import BaseTool, ToolStage\n\n\nclass Agent:\n    def __init__(self, system_prompt: str, description: str = \"AI Agent\", model: LLMModel = None,\n                 tools=None, output_mode=\"print\", max_steps=100, max_context_tokens=None, \n                 context_reserve_tokens=None, memory_manager=None, name: str = None,\n                 workspace_dir: str = None, skill_manager=None, enable_skills: bool = True,\n                 runtime_info: dict = None):\n        \"\"\"\n        Initialize the Agent with system prompt, model, description.\n\n        :param system_prompt: The system prompt for the agent.\n        :param description: A description of the agent.\n        :param model: An instance of LLMModel to be used by the agent.\n        :param tools: Optional list of tools for the agent to use.\n        :param output_mode: Control how execution progress is displayed: \n                           \"print\" for console output or \"logger\" for using logger\n        :param max_steps: Maximum number of steps the agent can take (default: 100)\n        :param max_context_tokens: Maximum tokens to keep in context (default: None, auto-calculated based on model)\n        :param context_reserve_tokens: Reserve tokens for new requests (default: None, auto-calculated)\n        :param memory_manager: Optional MemoryManager instance for memory operations\n        :param name: [Deprecated] The name of the agent (no longer used in single-agent system)\n        :param workspace_dir: Optional workspace directory for workspace-specific skills\n        :param skill_manager: Optional SkillManager instance (will be created if None and enable_skills=True)\n        :param enable_skills: Whether to enable skills support (default: True)\n        :param runtime_info: Optional runtime info dict (with _get_current_time callable for dynamic time)\n        \"\"\"\n        self.name = name or \"Agent\"\n        self.system_prompt = system_prompt\n        self.model: LLMModel = model  # Instance of LLMModel\n        self.description = description\n        self.tools: list = []\n        self.max_steps = max_steps  # max tool-call steps, default 100\n        self.max_context_tokens = max_context_tokens  # max tokens in context\n        self.context_reserve_tokens = context_reserve_tokens  # reserve tokens for new requests\n        self.captured_actions = []  # Initialize captured actions list\n        self.output_mode = output_mode\n        self.last_usage = None  # Store last API response usage info\n        self.messages = []  # Unified message history for stream mode\n        self.messages_lock = threading.Lock()  # Lock for thread-safe message operations\n        self.memory_manager = memory_manager  # Memory manager for auto memory flush\n        self.workspace_dir = workspace_dir  # Workspace directory\n        self.enable_skills = enable_skills  # Skills enabled flag\n        self.runtime_info = runtime_info  # Runtime info for dynamic time update\n        \n        # Initialize skill manager\n        self.skill_manager = None\n        if enable_skills:\n            if skill_manager:\n                self.skill_manager = skill_manager\n            else:\n                # 
Auto-create skill manager\n                try:\n                    from agent.skills import SkillManager\n                    custom_dir = os.path.join(workspace_dir, \"skills\") if workspace_dir else None\n                    self.skill_manager = SkillManager(custom_dir=custom_dir)\n                    logger.debug(f\"Initialized SkillManager with {len(self.skill_manager.skills)} skills\")\n                except Exception as e:\n                    logger.warning(f\"Failed to initialize SkillManager: {e}\")\n        \n        if tools:\n            for tool in tools:\n                self.add_tool(tool)\n\n    def add_tool(self, tool: BaseTool):\n        \"\"\"\n        Add a tool to the agent.\n\n        :param tool: The tool to add (either a tool instance or a tool name)\n        \"\"\"\n        # If tool is already an instance, use it directly\n        tool.model = self.model\n        self.tools.append(tool)\n\n    def get_skills_prompt(self, skill_filter=None) -> str:\n        \"\"\"\n        Get the skills prompt to append to system prompt.\n        \n        :param skill_filter: Optional list of skill names to include\n        :return: Formatted skills prompt or empty string\n        \"\"\"\n        if not self.skill_manager:\n            return \"\"\n        \n        try:\n            return self.skill_manager.build_skills_prompt(skill_filter=skill_filter)\n        except Exception as e:\n            logger.warning(f\"Failed to build skills prompt: {e}\")\n            return \"\"\n    \n    def get_full_system_prompt(self, skill_filter=None) -> str:\n        \"\"\"\n        Get the full system prompt including skills.\n\n        Note: Skills are now built into the system prompt by PromptBuilder,\n        so we just return the base prompt directly. 
This method is kept for\n        backward compatibility.\n\n        :param skill_filter: Optional list of skill names to include (deprecated)\n        :return: Complete system prompt\n        \"\"\"\n        prompt = self.system_prompt\n\n        # Rebuild tool list section to reflect current self.tools\n        prompt = self._rebuild_tool_list_section(prompt)\n\n        # If runtime_info contains dynamic time function, rebuild runtime section\n        if self.runtime_info and callable(self.runtime_info.get('_get_current_time')):\n            prompt = self._rebuild_runtime_section(prompt)\n\n        # Rebuild skills section to pick up newly installed/removed skills\n        if self.skill_manager:\n            prompt = self._rebuild_skills_section(prompt)\n\n        return prompt\n    \n    def _rebuild_runtime_section(self, prompt: str) -> str:\n        \"\"\"\n        Rebuild runtime info section with current time.\n        \n        This method dynamically updates the runtime info section by calling\n        the _get_current_time function from runtime_info.\n        \n        :param prompt: Original system prompt\n        :return: Updated system prompt with current runtime info\n        \"\"\"\n        try:\n            # Get current time dynamically\n            time_info = self.runtime_info['_get_current_time']()\n            \n            # Build new runtime section\n            runtime_lines = [\n                \"\\n## 运行时信息\\n\",\n                \"\\n\",\n                f\"当前时间: {time_info['time']} {time_info['weekday']} ({time_info['timezone']})\\n\",\n                \"\\n\"\n            ]\n            \n            # Add other runtime info\n            runtime_parts = []\n            if self.runtime_info.get(\"model\"):\n                runtime_parts.append(f\"模型={self.runtime_info['model']}\")\n            if self.runtime_info.get(\"workspace\"):\n                # Replace backslashes with forward slashes for Windows paths\n                workspace_path = str(self.runtime_info['workspace']).replace('\\\\', '/')\n                runtime_parts.append(f\"工作空间={workspace_path}\")\n            if self.runtime_info.get(\"channel\") and self.runtime_info.get(\"channel\") != \"web\":\n                runtime_parts.append(f\"渠道={self.runtime_info['channel']}\")\n            \n            if runtime_parts:\n                runtime_lines.append(\"运行时: \" + \" | \".join(runtime_parts) + \"\\n\")\n                runtime_lines.append(\"\\n\")\n            \n            new_runtime_section = \"\".join(runtime_lines)\n            \n            # Find and replace the runtime section\n            import re\n            pattern = r'\\n## 运行时信息\\s*\\n.*?(?=\\n##|\\Z)'\n            _repl = new_runtime_section.rstrip('\\n')\n            updated_prompt = re.sub(pattern, lambda m: _repl, prompt, flags=re.DOTALL)\n            \n            return updated_prompt\n        except Exception as e:\n            logger.warning(f\"Failed to rebuild runtime section: {e}\")\n            return prompt\n\n    def _rebuild_skills_section(self, prompt: str) -> str:\n        \"\"\"\n        Rebuild the <available_skills> block so that newly installed or\n        removed skills are reflected without re-creating the agent.\n        \"\"\"\n        try:\n            import re\n            self.skill_manager.refresh_skills()\n            new_skills_xml = self.skill_manager.build_skills_prompt()\n\n            old_block_pattern = r'<available_skills>.*?</available_skills>'\n            has_old_block = 
re.search(old_block_pattern, prompt, flags=re.DOTALL)\n\n            # Extract the new <available_skills>...</available_skills> tag from the prompt\n            new_block = \"\"\n            if new_skills_xml and new_skills_xml.strip():\n                m = re.search(old_block_pattern, new_skills_xml, flags=re.DOTALL)\n                if m:\n                    new_block = m.group(0)\n\n            if has_old_block:\n                replacement = new_block or \"<available_skills>\\n</available_skills>\"\n                # Use lambda to prevent re.sub from interpreting backslashes in replacement\n                # (e.g. Windows paths like \\LinkAI would be treated as bad escape sequences)\n                prompt = re.sub(old_block_pattern, lambda m: replacement, prompt, flags=re.DOTALL)\n            elif new_block:\n                skills_header = \"以下是可用技能：\"\n                idx = prompt.find(skills_header)\n                if idx != -1:\n                    insert_pos = idx + len(skills_header)\n                    prompt = prompt[:insert_pos] + \"\\n\" + new_block + prompt[insert_pos:]\n        except Exception as e:\n            logger.warning(f\"Failed to rebuild skills section: {e}\")\n        return prompt\n\n    def _rebuild_tool_list_section(self, prompt: str) -> str:\n        \"\"\"\n        Rebuild the tool list inside the '## 工具系统' section so that it\n        always reflects the current ``self.tools`` (handles dynamic add/remove\n        of conditional tools like web_search).\n        \"\"\"\n        import re\n        from agent.prompt.builder import _build_tooling_section\n\n        try:\n            if not self.tools:\n                return prompt\n\n            new_lines = _build_tooling_section(self.tools, \"zh\")\n            new_section = \"\\n\".join(new_lines).rstrip(\"\\n\")\n\n            # Replace existing tooling section\n            pattern = r'## 工具系统\\s*\\n.*?(?=\\n## |\\Z)'\n            updated = re.sub(pattern, lambda m: new_section, prompt, count=1, flags=re.DOTALL)\n            return updated\n        except Exception as e:\n            logger.warning(f\"Failed to rebuild tool list section: {e}\")\n            return prompt\n\n    def refresh_skills(self):\n        \"\"\"Refresh the loaded skills.\"\"\"\n        if self.skill_manager:\n            self.skill_manager.refresh_skills()\n            logger.info(f\"Refreshed skills: {len(self.skill_manager.skills)} skills loaded\")\n    \n    def list_skills(self):\n        \"\"\"\n        List all loaded skills.\n        \n        :return: List of skill entries or empty list\n        \"\"\"\n        if not self.skill_manager:\n            return []\n        return self.skill_manager.list_skills()\n\n    def _get_model_context_window(self) -> int:\n        \"\"\"\n        Get the model's context window size in tokens.\n        Auto-detect based on model name.\n        \n        Model context windows:\n        - Claude 3.5/3.7 Sonnet: 200K tokens\n        - Claude 3 Opus: 200K tokens\n        - GPT-4 Turbo/128K: 128K tokens\n        - GPT-4: 8K-32K tokens\n        - GPT-3.5: 16K tokens\n        - DeepSeek: 64K tokens\n        \n        :return: Context window size in tokens\n        \"\"\"\n        if self.model and hasattr(self.model, 'model'):\n            model_name = self.model.model.lower()\n\n            # Claude models - 200K context\n            if 'claude-3' in model_name or 'claude-sonnet' in model_name:\n                return 200000\n\n            # GPT-4 models\n            elif 'gpt-4' in 
model_name:\n                if 'turbo' in model_name or '128k' in model_name:\n                    return 128000\n                elif '32k' in model_name:\n                    return 32000\n                else:\n                    return 8000\n\n            # GPT-3.5\n            elif 'gpt-3.5' in model_name:\n                if '16k' in model_name:\n                    return 16000\n                else:\n                    return 4000\n\n            # DeepSeek\n            elif 'deepseek' in model_name:\n                return 64000\n            \n            # Gemini models\n            elif 'gemini' in model_name:\n                if '2.0' in model_name or 'exp' in model_name:\n                    return 2000000  # Gemini 2.0: 2M tokens\n                else:\n                    return 1000000  # Gemini 1.5: 1M tokens\n\n        # Default conservative value\n        return 128000\n\n    def _get_context_reserve_tokens(self) -> int:\n        \"\"\"\n        Get the number of tokens to reserve for new requests.\n        This prevents context overflow by keeping a buffer.\n        \n        :return: Number of tokens to reserve\n        \"\"\"\n        if self.context_reserve_tokens is not None:\n            return self.context_reserve_tokens\n\n        # Reserve ~10% of context window, with min 10K and max 200K\n        context_window = self._get_model_context_window()\n        reserve = int(context_window * 0.1)\n        return max(10000, min(200000, reserve))\n\n    def _estimate_message_tokens(self, message: dict) -> int:\n        \"\"\"\n        Estimate token count for a message.\n\n        Uses chars/3 for Chinese-heavy content and chars/4 for ASCII-heavy content,\n        plus per-block overhead for tool_use / tool_result structures.\n\n        :param message: Message dict with 'role' and 'content'\n        :return: Estimated token count\n        \"\"\"\n        content = message.get('content', '')\n        if isinstance(content, str):\n            return max(1, self._estimate_text_tokens(content))\n        elif isinstance(content, list):\n            total_tokens = 0\n            for part in content:\n                if not isinstance(part, dict):\n                    continue\n                block_type = part.get('type', '')\n                if block_type == 'text':\n                    total_tokens += self._estimate_text_tokens(part.get('text', ''))\n                elif block_type == 'image':\n                    total_tokens += 1200\n                elif block_type == 'tool_use':\n                    # tool_use has id + name + input (JSON-encoded)\n                    total_tokens += 50  # overhead for structure\n                    input_data = part.get('input', {})\n                    if isinstance(input_data, dict):\n                        import json\n                        input_str = json.dumps(input_data, ensure_ascii=False)\n                        total_tokens += self._estimate_text_tokens(input_str)\n                elif block_type == 'tool_result':\n                    # tool_result has tool_use_id + content\n                    total_tokens += 30  # overhead for structure\n                    result_content = part.get('content', '')\n                    if isinstance(result_content, str):\n                        total_tokens += self._estimate_text_tokens(result_content)\n                else:\n                    # Unknown block type, estimate conservatively\n                    total_tokens += 10\n            return max(1, total_tokens)\n        return 
1\n\n    @staticmethod\n    def _estimate_text_tokens(text: str) -> int:\n        \"\"\"\n        Estimate token count for a text string.\n\n        Chinese / CJK characters typically use ~1.5 tokens each,\n        while ASCII uses ~0.25 tokens per char (4 chars/token).\n        We use a weighted average based on the character mix.\n\n        :param text: Input text\n        :return: Estimated token count\n        \"\"\"\n        if not text:\n            return 0\n        # Count non-ASCII characters (CJK, emoji, etc.)\n        non_ascii = sum(1 for c in text if ord(c) > 127)\n        ascii_count = len(text) - non_ascii\n        # CJK chars: ~1.5 tokens each; ASCII: ~0.25 tokens per char\n        return int(non_ascii * 1.5 + ascii_count * 0.25) + 1\n\n    def _find_tool(self, tool_name: str):\n        \"\"\"Find and return a tool with the specified name\"\"\"\n        for tool in self.tools:\n            if tool.name == tool_name:\n                # Only pre-process stage tools can be actively called\n                if tool.stage == ToolStage.PRE_PROCESS:\n                    tool.model = self.model\n                    tool.context = self  # Set tool context\n                    return tool\n                else:\n                    # If it's a post-process tool, return None to prevent direct calling\n                    logger.warning(f\"Tool {tool_name} is a post-process tool and cannot be called directly.\")\n                    return None\n        return None\n\n    # output function based on mode\n    def output(self, message=\"\", end=\"\\n\"):\n        if self.output_mode == \"print\":\n            print(message, end=end)\n        elif message:\n            logger.info(message)\n\n    def _execute_post_process_tools(self):\n        \"\"\"Execute all post-process stage tools\"\"\"\n        # Get all post-process stage tools\n        post_process_tools = [tool for tool in self.tools if tool.stage == ToolStage.POST_PROCESS]\n\n        # Execute each tool\n        for tool in post_process_tools:\n            # Set tool context\n            tool.context = self\n\n            # Record start time for execution timing\n            start_time = time.time()\n\n            # Execute tool (with empty parameters, tool will extract needed info from context)\n            result = tool.execute({})\n\n            # Calculate execution time\n            execution_time = time.time() - start_time\n\n            # Capture tool use for tracking\n            self.capture_tool_use(\n                tool_name=tool.name,\n                input_params={},  # Post-process tools typically don't take parameters\n                output=result.result,\n                status=result.status,\n                error_message=str(result.result) if result.status == \"error\" else None,\n                execution_time=execution_time\n            )\n\n            # Log result\n            if result.status == \"success\":\n                # Print tool execution result in the desired format\n                self.output(f\"\\n🛠️ {tool.name}: {json.dumps(result.result)}\")\n            else:\n                # Print failure in print mode\n                self.output(f\"\\n🛠️ {tool.name}: {json.dumps({'status': 'error', 'message': str(result.result)})}\")\n\n    def capture_tool_use(self, tool_name, input_params, output, status, thought=None, error_message=None,\n                         execution_time=0.0):\n        \"\"\"\n        Capture a tool use action.\n        \n        :param thought: thought content\n        
:param tool_name: Name of the tool used\n        :param input_params: Parameters passed to the tool\n        :param output: Output from the tool\n        :param status: Status of the tool execution\n        :param error_message: Error message if the tool execution failed\n        :param execution_time: Time taken to execute the tool\n        \"\"\"\n        tool_result = ToolResult(\n            tool_name=tool_name,\n            input_params=input_params,\n            output=output,\n            status=status,\n            error_message=error_message,\n            execution_time=execution_time\n        )\n\n        action = AgentAction(\n            agent_id=self.id if hasattr(self, 'id') else str(id(self)),\n            agent_name=self.name,\n            action_type=AgentActionType.TOOL_USE,\n            tool_result=tool_result,\n            thought=thought\n        )\n\n        self.captured_actions.append(action)\n\n        return action\n\n    def run_stream(self, user_message: str, on_event=None, clear_history: bool = False, skill_filter=None) -> str:\n        \"\"\"\n        Execute single agent task with streaming (based on tool-call)\n\n        This method supports:\n        - Streaming output\n        - Multi-turn reasoning based on tool-call\n        - Event callbacks\n        - Persistent conversation history across calls\n\n        Args:\n            user_message: User message\n            on_event: Event callback function callback(event: dict)\n                     event = {\"type\": str, \"timestamp\": float, \"data\": dict}\n            clear_history: If True, clear conversation history before this call (default: False)\n            skill_filter: Optional list of skill names to include in this run\n\n        Returns:\n            Final response text\n\n        Example:\n            # Multi-turn conversation with memory\n            response1 = agent.run_stream(\"My name is Alice\")\n            response2 = agent.run_stream(\"What's my name?\")  # Will remember Alice\n\n            # Single-turn without memory\n            response = agent.run_stream(\"Hello\", clear_history=True)\n        \"\"\"\n        # Clear history if requested\n        if clear_history:\n            with self.messages_lock:\n                self.messages = []\n\n        # Get model to use\n        if not self.model:\n            raise ValueError(\"No model available for agent\")\n\n        # Get full system prompt with skills\n        full_system_prompt = self.get_full_system_prompt(skill_filter=skill_filter)\n\n        # Create a copy of messages for this execution to avoid concurrent modification\n        # Record the original length to track which messages are new\n        with self.messages_lock:\n            messages_copy = self.messages.copy()\n            original_length = len(self.messages)\n\n        # Get max_context_turns from config\n        from config import conf\n        max_context_turns = conf().get(\"agent_max_context_turns\", 20)\n        \n        # Create stream executor with copied message history\n        executor = AgentStreamExecutor(\n            agent=self,\n            model=self.model,\n            system_prompt=full_system_prompt,\n            tools=self.tools,\n            max_turns=self.max_steps,\n            on_event=on_event,\n            messages=messages_copy,  # Pass copied message history\n            max_context_turns=max_context_turns\n        )\n\n        # Execute\n        try:\n            response = executor.run_stream(user_message)\n        except 
Exception:\n            # If executor cleared its messages (context overflow / message format error),\n            # sync that back to the Agent's own message list so the next request\n            # starts fresh instead of hitting the same overflow forever.\n            if len(executor.messages) == 0:\n                with self.messages_lock:\n                    self.messages.clear()\n                    logger.info(\"[Agent] Cleared Agent message history after executor recovery\")\n            raise\n\n        # Sync executor's messages back to agent (thread-safe).\n        # If the executor trimmed context, its message list is shorter than\n        # original_length, so we must replace rather than append.\n        with self.messages_lock:\n            self.messages = list(executor.messages)\n            # Track messages added in this run (user query + all assistant/tool messages)\n            # original_length may exceed executor.messages length after trimming\n            trim_adjusted_start = min(original_length, len(executor.messages))\n            self._last_run_new_messages = list(executor.messages[trim_adjusted_start:])\n        \n        # Store executor reference for agent_bridge to access files_to_send\n        self.stream_executor = executor\n\n        # Execute all post-process tools\n        self._execute_post_process_tools()\n\n        return response\n\n    def clear_history(self):\n        \"\"\"Clear conversation history and captured actions\"\"\"\n        self.messages = []\n        self.captured_actions = []"
  },
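Two of the budgeting heuristics in `agent.py` are easy to sanity-check in isolation: the character-mix token estimator and the context-reserve rule (10% of the model window, clamped to the 10K-200K range). The sketch below mirrors `_estimate_text_tokens` and `_get_context_reserve_tokens`; the sample strings and window sizes are illustrative only:

```python
# Mixed-script estimator: non-ASCII (CJK, emoji) chars weigh ~1.5 tokens each,
# ASCII ~0.25 tokens per char (4 chars/token), plus 1 for the text block.
def estimate_text_tokens(text: str) -> int:
    if not text:
        return 0
    non_ascii = sum(1 for c in text if ord(c) > 127)
    ascii_count = len(text) - non_ascii
    return int(non_ascii * 1.5 + ascii_count * 0.25) + 1

print(estimate_text_tokens("hello world"))  # 11 ASCII chars -> int(2.75) + 1 = 3
print(estimate_text_tokens("你好，世界"))      # 5 CJK chars    -> int(7.5)  + 1 = 8

# Reserve rule: keep ~10% of the context window free, never less than 10K
# tokens and never more than 200K.
def context_reserve(window_tokens: int) -> int:
    return max(10_000, min(200_000, int(window_tokens * 0.1)))

assert context_reserve(200_000) == 20_000      # Claude 3.x window
assert context_reserve(2_000_000) == 200_000   # Gemini 2.0 hits the cap
assert context_reserve(8_000) == 10_000        # small windows keep the 10K floor
```

Since the estimator is only a character-count approximation, the reserved slice is presumably what absorbs its error before the overflow recovery in `agent_stream.py` has to step in.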
  {
    "path": "agent/protocol/agent_stream.py",
    "content": "\"\"\"\nAgent Stream Execution Module - Multi-turn reasoning based on tool-call\n\nProvides streaming output, event system, and complete tool-call loop\n\"\"\"\nimport json\nimport time\nfrom typing import List, Dict, Any, Optional, Callable, Tuple\n\nfrom agent.protocol.models import LLMRequest, LLMModel\nfrom agent.protocol.message_utils import sanitize_claude_messages, compress_turn_to_text_only\nfrom agent.tools.base_tool import BaseTool, ToolResult\nfrom common.log import logger\n\n\nclass AgentStreamExecutor:\n    \"\"\"\n    Agent Stream Executor\n    \n    Handles multi-turn reasoning loop based on tool-call:\n    1. LLM generates response (may include tool calls)\n    2. Execute tools\n    3. Return results to LLM\n    4. Repeat until no more tool calls\n    \"\"\"\n\n    def __init__(\n            self,\n            agent,  # Agent instance\n            model: LLMModel,\n            system_prompt: str,\n            tools: List[BaseTool],\n            max_turns: int = 50,\n            on_event: Optional[Callable] = None,\n            messages: Optional[List[Dict]] = None,\n            max_context_turns: int = 30\n    ):\n        \"\"\"\n        Initialize stream executor\n        \n        Args:\n            agent: Agent instance (for accessing context)\n            model: LLM model\n            system_prompt: System prompt\n            tools: List of available tools\n            max_turns: Maximum number of turns\n            on_event: Event callback function\n            messages: Optional existing message history (for persistent conversations)\n            max_context_turns: Maximum number of conversation turns to keep in context\n        \"\"\"\n        self.agent = agent\n        self.model = model\n        self.system_prompt = system_prompt\n        # Convert tools list to dict\n        self.tools = {tool.name: tool for tool in tools} if isinstance(tools, list) else tools\n        self.max_turns = max_turns\n        self.on_event = on_event\n        self.max_context_turns = max_context_turns\n\n        # Message history - use provided messages or create new list\n        self.messages = messages if messages is not None else []\n        \n        # Tool failure tracking for retry protection\n        self.tool_failure_history = []  # List of (tool_name, args_hash, success) tuples\n        \n        # Track files to send (populated by read tool)\n        self.files_to_send = []  # List of file metadata dicts\n\n    def _emit_event(self, event_type: str, data: dict = None):\n        \"\"\"Emit event\"\"\"\n        if self.on_event:\n            try:\n                self.on_event({\n                    \"type\": event_type,\n                    \"timestamp\": time.time(),\n                    \"data\": data or {}\n                })\n            except Exception as e:\n                logger.error(f\"Event callback error: {e}\")\n    \n    def _filter_think_tags(self, text: str) -> str:\n        \"\"\"\n        Remove <think> and </think> tags but keep the content inside.\n        Some LLM providers (e.g., MiniMax) may return thinking process wrapped in <think> tags.\n        We only remove the tags themselves, keeping the actual thinking content.\n        \"\"\"\n        if not text:\n            return text\n        import re\n        # Remove only the <think> and </think> tags, keep the content\n        text = re.sub(r'<think>', '', text)\n        text = re.sub(r'</think>', '', text)\n        return text\n\n    def _hash_args(self, args: dict) -> str:\n        
\"\"\"Generate a simple hash for tool arguments\"\"\"\n        import hashlib\n        # Sort keys for consistent hashing\n        args_str = json.dumps(args, sort_keys=True, ensure_ascii=False)\n        return hashlib.md5(args_str.encode()).hexdigest()[:8]\n    \n    def _check_consecutive_failures(self, tool_name: str, args: dict) -> Tuple[bool, str, bool]:\n        \"\"\"\n        Check if tool has failed too many times consecutively or called repeatedly with same args\n        \n        Returns:\n            (should_stop, reason, is_critical)\n            - should_stop: Whether to stop tool execution\n            - reason: Reason for stopping\n            - is_critical: Whether to abort entire conversation (True for 8+ failures)\n        \"\"\"\n        args_hash = self._hash_args(args)\n        \n        # Count consecutive calls (both success and failure) for same tool + args\n        # This catches infinite loops where tool succeeds but LLM keeps calling it\n        same_args_calls = 0\n        for name, ahash, success in reversed(self.tool_failure_history):\n            if name == tool_name and ahash == args_hash:\n                same_args_calls += 1\n            else:\n                break  # Different tool or args, stop counting\n        \n        # Stop at 5 consecutive calls with same args (whether success or failure)\n        if same_args_calls >= 5:\n            return True, f\"工具 '{tool_name}' 使用相同参数已被调用 {same_args_calls} 次，停止执行以防止无限循环。如果需要查看配置，结果已在之前的调用中返回。\", False\n        \n        # Count consecutive failures for same tool + args\n        same_args_failures = 0\n        for name, ahash, success in reversed(self.tool_failure_history):\n            if name == tool_name and ahash == args_hash:\n                if not success:\n                    same_args_failures += 1\n                else:\n                    break  # Stop at first success\n            else:\n                break  # Different tool or args, stop counting\n        \n        if same_args_failures >= 3:\n            return True, f\"工具 '{tool_name}' 使用相同参数连续失败 {same_args_failures} 次，停止执行以防止无限循环\", False\n        \n        # Count consecutive failures for same tool (any args)\n        same_tool_failures = 0\n        for name, ahash, success in reversed(self.tool_failure_history):\n            if name == tool_name:\n                if not success:\n                    same_tool_failures += 1\n                else:\n                    break  # Stop at first success\n            else:\n                break  # Different tool, stop counting\n        \n        # Hard stop at 8 failures - abort with critical message\n        if same_tool_failures >= 8:\n            return True, f\"抱歉，我没能完成这个任务。可能是我理解有误或者当前方法不太合适。\\n\\n建议你：\\n• 换个方式描述需求试试\\n• 把任务拆分成更小的步骤\\n• 或者换个思路来解决\", True\n        \n        # Warning at 6 failures\n        if same_tool_failures >= 6:\n            return True, f\"工具 '{tool_name}' 连续失败 {same_tool_failures} 次（使用不同参数），停止执行以防止无限循环\", False\n        \n        return False, \"\", False\n    \n    def _record_tool_result(self, tool_name: str, args: dict, success: bool):\n        \"\"\"Record tool execution result for failure tracking\"\"\"\n        args_hash = self._hash_args(args)\n        self.tool_failure_history.append((tool_name, args_hash, success))\n        # Keep only last 50 records to avoid memory bloat\n        if len(self.tool_failure_history) > 50:\n            self.tool_failure_history = self.tool_failure_history[-50:]\n\n    def run_stream(self, user_message: str) -> str:\n        
\"\"\"\n        Execute streaming reasoning loop\n        \n        Args:\n            user_message: User message\n            \n        Returns:\n            Final response text\n        \"\"\"\n        # Log user message with model info\n        logger.info(f\"🤖 {self.model.model} | 👤 {user_message}\")\n        \n        # Add user message (Claude format - use content blocks for consistency)\n        self.messages.append({\n            \"role\": \"user\",\n            \"content\": [\n                {\n                    \"type\": \"text\",\n                    \"text\": user_message\n                }\n            ]\n        })\n\n        # Trim context ONCE before the agent loop starts, not during tool steps.\n        # This ensures tool_use/tool_result chains created during the current run\n        # are never stripped mid-execution (which would cause LLM loops).\n        self._trim_messages()\n\n        # Validate after trimming: trimming may leave orphaned tool_use at the\n        # boundary (e.g. the last kept turn ends with an assistant tool_use whose\n        # tool_result was in a discarded turn).\n        self._validate_and_fix_messages()\n\n        self._emit_event(\"agent_start\")\n\n        final_response = \"\"\n        turn = 0\n\n        try:\n            while turn < self.max_turns:\n                turn += 1\n                logger.info(f\"[Agent] 第 {turn} 轮\")\n                self._emit_event(\"turn_start\", {\"turn\": turn})\n\n                # Call LLM (enable retry_on_empty for better reliability)\n                assistant_msg, tool_calls = self._call_llm_stream(retry_on_empty=True)\n                final_response = assistant_msg\n\n                # No tool calls, end loop\n                if not tool_calls:\n                    # 检查是否返回了空响应\n                    if not assistant_msg:\n                        logger.warning(f\"[Agent] LLM returned empty response after retry (no content and no tool calls)\")\n                        logger.info(f\"[Agent] This usually happens when LLM thinks the task is complete after tool execution\")\n                        \n                        # 如果之前有工具调用，强制要求 LLM 生成文本回复\n                        if turn > 1:\n                            logger.info(f\"[Agent] Requesting explicit response from LLM...\")\n                            \n                            # 添加一条消息，明确要求回复用户\n                            self.messages.append({\n                                \"role\": \"user\",\n                                \"content\": [{\n                                    \"type\": \"text\",\n                                    \"text\": \"请向用户说明刚才工具执行的结果或回答用户的问题。\"\n                                }]\n                            })\n                            \n                            # 再调用一次 LLM\n                            assistant_msg, tool_calls = self._call_llm_stream(retry_on_empty=False)\n                            final_response = assistant_msg\n                            \n                            # 如果还是空，才使用 fallback\n                            if not assistant_msg and not tool_calls:\n                                logger.warning(f\"[Agent] Still empty after explicit request\")\n                                final_response = (\n                                    \"抱歉，我暂时无法生成回复。请尝试换一种方式描述你的需求，或稍后再试。\"\n                                )\n                                logger.info(f\"Generated fallback response for empty LLM output\")\n                        else:\n                            # 
第一轮就空回复，直接 fallback\n                            final_response = (\n                                \"抱歉，我暂时无法生成回复。请尝试换一种方式描述你的需求，或稍后再试。\"\n                            )\n                            logger.info(f\"Generated fallback response for empty LLM output\")\n                    else:\n                        logger.info(f\"💭 {assistant_msg[:150]}{'...' if len(assistant_msg) > 150 else ''}\")\n                    \n                    logger.debug(f\"✅ 完成 (无工具调用)\")\n                    self._emit_event(\"turn_end\", {\n                        \"turn\": turn,\n                        \"has_tool_calls\": False\n                    })\n                    break\n\n                # Log tool calls with arguments\n                tool_calls_str = []\n                for tc in tool_calls:\n                    # Safely handle None or missing arguments\n                    args = tc.get('arguments') or {}\n                    if isinstance(args, dict):\n                        args_str = ', '.join([f\"{k}={v}\" for k, v in args.items()])\n                        if args_str:\n                            tool_calls_str.append(f\"{tc['name']}({args_str})\")\n                        else:\n                            tool_calls_str.append(tc['name'])\n                    else:\n                        tool_calls_str.append(tc['name'])\n                logger.info(f\"🔧 {', '.join(tool_calls_str)}\")\n\n                # Execute tools\n                tool_results = []\n                tool_result_blocks = []\n\n                try:\n                    for tool_call in tool_calls:\n                        result = self._execute_tool(tool_call)\n                        tool_results.append(result)\n                        \n                        # Debug: Check if tool is being called repeatedly with same args\n                        if turn > 2:\n                            # Check last N tool calls for repeats\n                            repeat_count = sum(\n                                1 for name, ahash, _ in self.tool_failure_history[-10:]\n                                if name == tool_call[\"name\"] and ahash == self._hash_args(tool_call[\"arguments\"])\n                            )\n                            if repeat_count >= 3:\n                                logger.warning(\n                                    f\"⚠️  Tool '{tool_call['name']}' has been called {repeat_count} times \"\n                                    f\"with same arguments. 
This may indicate a loop.\"\n                                )\n                        \n                        # Check if this is a file to send (from read tool)\n                        if result.get(\"status\") == \"success\" and isinstance(result.get(\"result\"), dict):\n                            result_data = result.get(\"result\")\n                            if result_data.get(\"type\") == \"file_to_send\":\n                                # Store file metadata for later sending\n                                self.files_to_send.append(result_data)\n                                logger.info(f\"📎 检测到待发送文件: {result_data.get('file_name', result_data.get('path'))}\")\n                        \n                        # Check for critical error - abort entire conversation\n                        if result.get(\"status\") == \"critical_error\":\n                            logger.error(f\"💥 检测到严重错误，终止对话\")\n                            final_response = result.get('result', '任务执行失败')\n                            return final_response\n                        \n                        # Log tool result in compact format\n                        status_emoji = \"✅\" if result.get(\"status\") == \"success\" else \"❌\"\n                        result_data = result.get('result', '')\n                        # Format result string with proper Chinese character support\n                        if isinstance(result_data, (dict, list)):\n                            result_str = json.dumps(result_data, ensure_ascii=False)\n                        else:\n                            result_str = str(result_data)\n                        logger.info(f\"  {status_emoji} {tool_call['name']} ({result.get('execution_time', 0):.2f}s): {result_str[:200]}{'...' if len(result_str) > 200 else ''}\")\n\n                        # Build tool result block (Claude format)\n                        # Format content in a way that's easy for LLM to understand\n                        is_error = result.get(\"status\") == \"error\"\n\n                        if is_error:\n                            # For errors, provide clear error message\n                            result_content = f\"Error: {result.get('result', 'Unknown error')}\"\n                        elif isinstance(result.get('result'), dict):\n                            # For dict results, use JSON format\n                            result_content = json.dumps(result.get('result'), ensure_ascii=False)\n                        elif isinstance(result.get('result'), str):\n                            # For string results, use directly\n                            result_content = result.get('result')\n                        else:\n                            # Fallback to full JSON\n                            result_content = json.dumps(result, ensure_ascii=False)\n\n                        # Truncate excessively large tool results for the current turn\n                        # Historical turns will be further truncated in _trim_messages()\n                        MAX_CURRENT_TURN_RESULT_CHARS = 50000\n                        if len(result_content) > MAX_CURRENT_TURN_RESULT_CHARS:\n                            truncated_len = len(result_content)\n                            result_content = result_content[:MAX_CURRENT_TURN_RESULT_CHARS] + \\\n                                f\"\\n\\n[Output truncated: {truncated_len} chars total, showing first {MAX_CURRENT_TURN_RESULT_CHARS} chars]\"\n                            logger.info(f\"📎 Truncated tool result 
for '{tool_call['name']}': {truncated_len} -> {MAX_CURRENT_TURN_RESULT_CHARS} chars\")\n\n                        tool_result_block = {\n                            \"type\": \"tool_result\",\n                            \"tool_use_id\": tool_call[\"id\"],\n                            \"content\": result_content\n                        }\n                        \n                        # Add is_error field for Claude API (helps model understand failures)\n                        if is_error:\n                            tool_result_block[\"is_error\"] = True\n                        \n                        tool_result_blocks.append(tool_result_block)\n                \n                finally:\n                    # CRITICAL: Always add tool_result to maintain message history integrity\n                    # Even if tool execution fails, we must add error results to match tool_use\n                    if tool_result_blocks:\n                        # Add tool results to message history as user message (Claude format)\n                        self.messages.append({\n                            \"role\": \"user\",\n                            \"content\": tool_result_blocks\n                        })\n                        \n                        # Detect potential infinite loop: same tool called multiple times with success\n                        # If detected, add a hint to LLM to stop calling tools and provide response\n                        if turn >= 3 and len(tool_calls) > 0:\n                            tool_name = tool_calls[0][\"name\"]\n                            args_hash = self._hash_args(tool_calls[0][\"arguments\"])\n                            \n                            # Count recent successful calls with same tool+args\n                            recent_success_count = 0\n                            for name, ahash, success in reversed(self.tool_failure_history[-10:]):\n                                if name == tool_name and ahash == args_hash and success:\n                                    recent_success_count += 1\n                            \n                            # If tool was called successfully 3+ times with same args, add hint to stop loop\n                            if recent_success_count >= 3:\n                                logger.warning(\n                                    f\"⚠️  Detected potential loop: '{tool_name}' called {recent_success_count} times \"\n                                    f\"with same args. 
Adding hint to LLM to provide final response.\"\n                                )\n                                # Add a gentle hint message to guide LLM to respond\n                                self.messages.append({\n                                    \"role\": \"user\",\n                                    \"content\": [{\n                                        \"type\": \"text\",\n                                        \"text\": \"工具已成功执行并返回结果。请基于这些信息向用户做出回复，不要重复调用相同的工具。\"\n                                    }]\n                                })\n                    elif tool_calls:\n                        # If we have tool_calls but no tool_result_blocks (unexpected error),\n                        # create error results for all tool calls to maintain message integrity\n                        logger.warning(\"⚠️ Tool execution interrupted, adding error results to maintain message history\")\n                        emergency_blocks = []\n                        for tool_call in tool_calls:\n                            emergency_blocks.append({\n                                \"type\": \"tool_result\",\n                                \"tool_use_id\": tool_call[\"id\"],\n                                \"content\": \"Error: Tool execution was interrupted\",\n                                \"is_error\": True\n                            })\n                        self.messages.append({\n                            \"role\": \"user\",\n                            \"content\": emergency_blocks\n                        })\n\n                self._emit_event(\"turn_end\", {\n                    \"turn\": turn,\n                    \"has_tool_calls\": True,\n                    \"tool_count\": len(tool_calls)\n                })\n\n            if turn >= self.max_turns:\n                logger.warning(f\"⚠️  已达到最大决策步数限制: {self.max_turns}\")\n                \n                # Force model to summarize without tool calls\n                logger.info(f\"[Agent] Requesting summary from LLM after reaching max steps...\")\n                \n                # Remember position before injecting the prompt so we can remove it later\n                prompt_insert_idx = len(self.messages)\n                \n                # Add a temporary prompt to force summary\n                self.messages.append({\n                    \"role\": \"user\",\n                    \"content\": [{\n                        \"type\": \"text\",\n                        \"text\": f\"你已经执行了{turn}个决策步骤，达到了单次运行的最大步数限制。请总结一下你目前的执行过程和结果，告诉用户当前的进展情况。不要再调用工具，直接用文字回复。\"\n                    }]\n                })\n                \n                # Call LLM one more time to get summary (without retry to avoid loops)\n                try:\n                    summary_response, summary_tools = self._call_llm_stream(retry_on_empty=False)\n                    if summary_response:\n                        final_response = summary_response\n                        logger.info(f\"💭 Summary: {summary_response[:150]}{'...' 
if len(summary_response) > 150 else ''}\")\n                    else:\n                        # Fallback if model still doesn't respond\n                        final_response = (\n                            f\"我已经执行了{turn}个决策步骤，达到了单次运行的步数上限。\"\n                            \"任务可能还未完全完成，建议你将任务拆分成更小的步骤，或者换一种方式描述需求。\"\n                        )\n                except Exception as e:\n                    logger.warning(f\"Failed to get summary from LLM: {e}\")\n                    final_response = (\n                        f\"我已经执行了{turn}个决策步骤，达到了单次运行的步数上限。\"\n                        \"任务可能还未完全完成，建议你将任务拆分成更小的步骤，或者换一种方式描述需求。\"\n                    )\n                finally:\n                    # Remove the injected user prompt from history to avoid polluting\n                    # persisted conversation records. The assistant summary (if any)\n                    # was already appended by _call_llm_stream and is kept.\n                    if (prompt_insert_idx < len(self.messages)\n                            and self.messages[prompt_insert_idx].get(\"role\") == \"user\"):\n                        self.messages.pop(prompt_insert_idx)\n                        logger.debug(\"[Agent] Removed injected max-steps prompt from message history\")\n\n        except Exception as e:\n            logger.error(f\"❌ Agent执行错误: {e}\")\n            self._emit_event(\"error\", {\"error\": str(e)})\n            raise\n\n        finally:\n            logger.info(f\"[Agent] 🏁 完成 ({turn}轮)\")\n            self._emit_event(\"agent_end\", {\"final_response\": final_response})\n\n        return final_response\n\n    def _call_llm_stream(self, retry_on_empty=True, retry_count=0, max_retries=3,\n                         _overflow_retry: bool = False) -> Tuple[str, List[Dict]]:\n        \"\"\"\n        Call LLM with streaming and automatic retry on errors\n        \n        Args:\n            retry_on_empty: Whether to retry once if empty response is received\n            retry_count: Current retry attempt (internal use)\n            max_retries: Maximum number of retries for API errors\n            _overflow_retry: Internal flag indicating this is a retry after context overflow\n        \n        Returns:\n            (response_text, tool_calls)\n        \"\"\"\n        # Validate and fix message history (e.g. 
orphaned tool_result blocks).\n        # Context trimming is done once in run_stream() before the loop starts,\n        # NOT here — trimming mid-execution would strip the current run's\n        # tool_use/tool_result chains and cause LLM loops.\n        self._validate_and_fix_messages()\n\n        # Prepare messages\n        messages = self._prepare_messages()\n        turns = self._identify_complete_turns()\n        logger.info(f\"Sending {len(messages)} messages ({len(turns)} turns) to LLM\")\n\n        # Prepare tool definitions (OpenAI/Claude format)\n        tools_schema = None\n        if self.tools:\n            tools_schema = []\n            for tool in self.tools.values():\n                tools_schema.append({\n                    \"name\": tool.name,\n                    \"description\": tool.description,\n                    \"input_schema\": tool.params  # Claude uses input_schema\n                })\n\n        # Create request\n        request = LLMRequest(\n            messages=messages,\n            temperature=0,\n            stream=True,\n            tools=tools_schema,\n            system=self.system_prompt  # Pass system prompt separately for Claude API\n        )\n\n        self._emit_event(\"message_start\", {\"role\": \"assistant\"})\n\n        # Streaming response\n        full_content = \"\"\n        tool_calls_buffer = {}  # {index: {id, name, arguments}}\n        gemini_raw_parts = None  # Preserve Gemini thoughtSignature for round-trip\n        stop_reason = None  # Track why the stream stopped\n\n        try:\n            stream = self.model.call_stream(request)\n\n            for chunk in stream:\n                # Check for errors\n                if isinstance(chunk, dict) and chunk.get(\"error\"):\n                    # Extract error message from nested structure\n                    error_data = chunk.get(\"error\", {})\n                    if isinstance(error_data, dict):\n                        error_msg = error_data.get(\"message\", chunk.get(\"message\", \"Unknown error\"))\n                        error_code = error_data.get(\"code\", \"\")\n                        error_type = error_data.get(\"type\", \"\")\n                    else:\n                        error_msg = chunk.get(\"message\", str(error_data))\n                        error_code = \"\"\n                        error_type = \"\"\n                    \n                    status_code = chunk.get(\"status_code\", \"N/A\")\n                    \n                    # Log error with all available information\n                    logger.error(f\"🔴 Stream API Error:\")\n                    logger.error(f\"   Message: {error_msg}\")\n                    logger.error(f\"   Status Code: {status_code}\")\n                    logger.error(f\"   Error Code: {error_code}\")\n                    logger.error(f\"   Error Type: {error_type}\")\n                    logger.error(f\"   Full chunk: {chunk}\")\n                    \n                    # Check if this is a context overflow error (keyword-based, works for all models)\n                    # Don't rely on specific status codes as different providers use different codes\n                    error_msg_lower = error_msg.lower()\n                    is_overflow = any(keyword in error_msg_lower for keyword in [\n                        'context length exceeded', 'maximum context length', 'prompt is too long',\n                        'context overflow', 'context window', 'too large', 'exceeds model context',\n                        
'request_too_large', 'request exceeds the maximum size', 'tokens exceed'\n                    ])\n                    \n                    if is_overflow:\n                        # Mark as context overflow for special handling\n                        raise Exception(f\"[CONTEXT_OVERFLOW] {error_msg} (Status: {status_code})\")\n                    else:\n                        # Raise exception with full error message for retry logic\n                        raise Exception(f\"{error_msg} (Status: {status_code}, Code: {error_code}, Type: {error_type})\")\n\n                # Parse chunk\n                if isinstance(chunk, dict) and chunk.get(\"choices\"):\n                    choice = chunk[\"choices\"][0]\n                    delta = choice.get(\"delta\", {})\n                    \n                    # Capture finish_reason if present\n                    finish_reason = choice.get(\"finish_reason\")\n                    if finish_reason:\n                        stop_reason = finish_reason\n\n                    # Skip reasoning_content (internal thinking from models like GLM-5)\n                    reasoning_delta = delta.get(\"reasoning_content\") or \"\"\n                    # if reasoning_delta:\n                    #     logger.debug(f\"🧠 [thinking] {reasoning_delta[:100]}...\")\n\n                    # Handle text content\n                    content_delta = delta.get(\"content\") or \"\"\n                    if content_delta:\n                        # Filter out <think> tags from content\n                        filtered_delta = self._filter_think_tags(content_delta)\n                        full_content += filtered_delta\n                        if filtered_delta:  # Only emit if there's content after filtering\n                            self._emit_event(\"message_update\", {\"delta\": filtered_delta})\n\n                    # Handle tool calls\n                    if \"tool_calls\" in delta and delta[\"tool_calls\"]:\n                        for tc_delta in delta[\"tool_calls\"]:\n                            index = tc_delta.get(\"index\", 0)\n\n                            if index not in tool_calls_buffer:\n                                tool_calls_buffer[index] = {\n                                    \"id\": \"\",\n                                    \"name\": \"\",\n                                    \"arguments\": \"\"\n                                }\n\n                            if tc_delta.get(\"id\"):\n                                tool_calls_buffer[index][\"id\"] = tc_delta[\"id\"]\n\n                            if \"function\" in tc_delta:\n                                func = tc_delta[\"function\"]\n                                if func.get(\"name\"):\n                                    tool_calls_buffer[index][\"name\"] = func[\"name\"]\n                                if func.get(\"arguments\"):\n                                    tool_calls_buffer[index][\"arguments\"] += func[\"arguments\"]\n\n                    # Preserve _gemini_raw_parts for Gemini thoughtSignature round-trip\n                    if \"_gemini_raw_parts\" in delta:\n                        gemini_raw_parts = delta[\"_gemini_raw_parts\"]\n\n        except Exception as e:\n            error_str = str(e)\n            error_str_lower = error_str.lower()\n            \n            # Check if error is context overflow (non-retryable, needs session reset)\n            # Method 1: Check for special marker (set in stream error handling above)\n            is_context_overflow = 
'[context_overflow]' in error_str_lower\n            \n            # Method 2: Fallback to keyword matching for non-stream errors\n            if not is_context_overflow:\n                is_context_overflow = any(keyword in error_str_lower for keyword in [\n                    'context length exceeded', 'maximum context length', 'prompt is too long',\n                    'context overflow', 'context window', 'too large', 'exceeds model context',\n                    'request_too_large', 'request exceeds the maximum size'\n                ])\n            \n            # Check if error is message format error (incomplete tool_use/tool_result pairs)\n            # This happens when previous conversation had tool failures or context trimming\n            # broke tool_use/tool_result pairs.\n            # Note: MiniMax returns error 2013 \"tool result's tool id(...) not found\" for\n            # tool_call_id mismatches — the keywords below are intentionally broad to catch\n            # both standard (Claude/OpenAI) and provider-specific (MiniMax) variants.\n            is_message_format_error = any(keyword in error_str_lower for keyword in [\n                'tool_use', 'tool_result', 'tool result', 'without', 'immediately after',\n                'corresponding', 'must have', 'each',\n                'tool_call_id', 'tool id', 'is not found', 'not found', 'tool_calls',\n                'must be a response to a preceeding message',\n                '2013',  # MiniMax error code for tool_call_id mismatch\n            ]) and ('400' in error_str_lower or 'status: 400' in error_str_lower\n                     or 'invalid_request' in error_str_lower\n                     or 'invalidparameter' in error_str_lower)\n            \n            if is_context_overflow or is_message_format_error:\n                error_type = \"context overflow\" if is_context_overflow else \"message format error\"\n                logger.error(f\"💥 {error_type} detected: {e}\")\n\n                # Flush memory before trimming to preserve context that will be lost\n                if is_context_overflow and self.agent.memory_manager:\n                    user_id = getattr(self.agent, '_current_user_id', None)\n                    self.agent.memory_manager.flush_memory(\n                        messages=self.messages, user_id=user_id,\n                        reason=\"overflow\", max_messages=0\n                    )\n\n                # Strategy: try aggressive trimming first, only clear as last resort\n                if is_context_overflow and not _overflow_retry:\n                    trimmed = self._aggressive_trim_for_overflow()\n                    if trimmed:\n                        logger.warning(\"🔄 Aggressively trimmed context, retrying...\")\n                        return self._call_llm_stream(\n                            retry_on_empty=retry_on_empty,\n                            retry_count=retry_count,\n                            max_retries=max_retries,\n                            _overflow_retry=True\n                        )\n\n                # Aggressive trim didn't help or this is a message format error\n                # -> clear everything and also purge DB to prevent reload of dirty data\n                logger.warning(\"🔄 Clearing conversation history to recover\")\n                self.messages.clear()\n                self._clear_session_db()\n                if is_context_overflow:\n                    raise Exception(\n                        \"抱歉，对话历史过长导致上下文溢出。我已清空历史记录，请重新描述你的需求。\"\n    
                )\n                else:\n                    raise Exception(\n                        \"抱歉，之前的对话出现了问题。我已清空历史记录，请重新发送你的消息。\"\n                    )\n            \n            # Check if error is rate limit (429)\n            is_rate_limit = '429' in error_str_lower or 'rate limit' in error_str_lower\n            \n            # Check if error is retryable (timeout, connection, server busy, etc.)\n            is_retryable = any(keyword in error_str_lower for keyword in [\n                'timeout', 'timed out', 'connection', 'network', \n                'rate limit', 'overloaded', 'unavailable', 'busy', 'retry',\n                '429', '500', '502', '503', '504', '512'\n            ])\n            \n            if is_retryable and retry_count < max_retries:\n                # Rate limit needs longer wait time\n                if is_rate_limit:\n                    wait_time = 30 + (retry_count * 15)  # 30s, 45s, 60s for rate limit\n                else:\n                    wait_time = (retry_count + 1) * 2  # 2s, 4s, 6s for other errors\n                \n                logger.warning(f\"⚠️ LLM API error (attempt {retry_count + 1}/{max_retries}): {e}\")\n                logger.info(f\"Retrying in {wait_time}s...\")\n                time.sleep(wait_time)\n                return self._call_llm_stream(\n                    retry_on_empty=retry_on_empty, \n                    retry_count=retry_count + 1,\n                    max_retries=max_retries\n                )\n            else:\n                if retry_count >= max_retries:\n                    logger.error(f\"❌ LLM API error after {max_retries} retries: {e}\", exc_info=True)\n                else:\n                    logger.error(f\"❌ LLM call error (non-retryable): {e}\", exc_info=True)\n                raise\n\n        # Parse tool calls\n        tool_calls = []\n        for idx in sorted(tool_calls_buffer.keys()):\n            tc = tool_calls_buffer[idx]\n\n            # Ensure tool call has a valid ID (some providers return empty/None IDs)\n            tool_id = tc.get(\"id\") or \"\"\n            if not tool_id:\n                import uuid\n                tool_id = f\"call_{uuid.uuid4().hex[:24]}\"\n\n            try:\n                # Safely get arguments, handle None case\n                args_str = tc.get(\"arguments\") or \"\"\n                arguments = json.loads(args_str) if args_str else {}\n            except json.JSONDecodeError as e:\n                # Handle None or invalid arguments safely\n                args_str = tc.get('arguments') or \"\"\n                args_preview = args_str[:200] if len(args_str) > 200 else args_str\n                logger.error(f\"Failed to parse tool arguments for {tc['name']}\")\n                logger.error(f\"Arguments length: {len(args_str)} chars\")\n                logger.error(f\"Arguments preview: {args_preview}...\")\n                logger.error(f\"JSON decode error: {e}\")\n\n                # Return a clear error message to the LLM instead of empty dict\n                # This helps the LLM understand what went wrong\n                tool_calls.append({\n                    \"id\": tool_id,\n                    \"name\": tc[\"name\"],\n                    \"arguments\": {},\n                    \"_parse_error\": f\"Invalid JSON in tool arguments: {args_preview}... Error: {str(e)}. 
Tip: For large content, consider splitting into smaller chunks or using a different approach.\"\n                })\n                continue\n\n            tool_calls.append({\n                \"id\": tool_id,\n                \"name\": tc[\"name\"],\n                \"arguments\": arguments\n            })\n\n        # Check for empty response and retry once if enabled\n        if retry_on_empty and not full_content and not tool_calls:\n            logger.warning(f\"⚠️  LLM returned empty response (stop_reason: {stop_reason}), retrying once...\")\n            self._emit_event(\"message_end\", {\n                \"content\": \"\",\n                \"tool_calls\": [],\n                \"empty_retry\": True,\n                \"stop_reason\": stop_reason\n            })\n            # Retry without retry flag to avoid infinite loop\n            return self._call_llm_stream(\n                retry_on_empty=False, \n                retry_count=retry_count,\n                max_retries=max_retries\n            )\n\n        # Filter full_content one more time (in case tags were split across chunks)\n        full_content = self._filter_think_tags(full_content)\n        \n        # Add assistant message to history (Claude format uses content blocks)\n        assistant_msg = {\"role\": \"assistant\", \"content\": []}\n\n        # Add text content block if present\n        if full_content:\n            assistant_msg[\"content\"].append({\n                \"type\": \"text\",\n                \"text\": full_content\n            })\n\n        # Add tool_use blocks if present\n        if tool_calls:\n            for tc in tool_calls:\n                assistant_msg[\"content\"].append({\n                    \"type\": \"tool_use\",\n                    \"id\": tc.get(\"id\", \"\"),\n                    \"name\": tc.get(\"name\", \"\"),\n                    \"input\": tc.get(\"arguments\", {})\n                })\n        \n        if gemini_raw_parts:\n            assistant_msg[\"_gemini_raw_parts\"] = gemini_raw_parts\n\n        # Only append if content is not empty\n        if assistant_msg[\"content\"]:\n            self.messages.append(assistant_msg)\n\n        self._emit_event(\"message_end\", {\n            \"content\": full_content,\n            \"tool_calls\": tool_calls\n        })\n\n        return full_content, tool_calls\n\n    def _execute_tool(self, tool_call: Dict) -> Dict[str, Any]:\n        \"\"\"\n        Execute tool\n        \n        Args:\n            tool_call: {\"id\": str, \"name\": str, \"arguments\": dict}\n            \n        Returns:\n            Tool execution result\n        \"\"\"\n        tool_name = tool_call[\"name\"]\n        tool_id = tool_call[\"id\"]\n        arguments = tool_call[\"arguments\"]\n\n        # Check if there was a JSON parse error\n        if \"_parse_error\" in tool_call:\n            parse_error = tool_call[\"_parse_error\"]\n            logger.error(f\"Skipping tool execution due to parse error: {parse_error}\")\n            result = {\n                \"status\": \"error\",\n                \"result\": f\"Failed to parse tool arguments. {parse_error}. 
Please ensure your tool call uses valid JSON format with all required parameters.\",\n                \"execution_time\": 0\n            }\n            self._record_tool_result(tool_name, arguments, False)\n            return result\n\n        # Check for consecutive failures (retry protection)\n        should_stop, stop_reason, is_critical = self._check_consecutive_failures(tool_name, arguments)\n        if should_stop:\n            logger.error(f\"🛑 {stop_reason}\")\n            self._record_tool_result(tool_name, arguments, False)\n            \n            if is_critical:\n                # Critical failure - abort entire conversation\n                result = {\n                    \"status\": \"critical_error\",\n                    \"result\": stop_reason,\n                    \"execution_time\": 0\n                }\n            else:\n                # Normal failure - let LLM try different approach\n                result = {\n                    \"status\": \"error\",\n                    \"result\": f\"{stop_reason}\\n\\n当前方法行不通，请尝试完全不同的方法或向用户询问更多信息。\",\n                    \"execution_time\": 0\n                }\n            return result\n\n        self._emit_event(\"tool_execution_start\", {\n            \"tool_call_id\": tool_id,\n            \"tool_name\": tool_name,\n            \"arguments\": arguments\n        })\n\n        try:\n            tool = self.tools.get(tool_name)\n            if not tool:\n                raise ValueError(self._build_tool_not_found_message(tool_name))\n\n            # Set tool context\n            tool.model = self.model\n            tool.context = self.agent\n\n            # Execute tool\n            start_time = time.time()\n            result: ToolResult = tool.execute_tool(arguments)\n            execution_time = time.time() - start_time\n\n            result_dict = {\n                \"status\": result.status,\n                \"result\": result.result,\n                \"execution_time\": execution_time\n            }\n\n            # Record tool result for failure tracking\n            success = result.status == \"success\"\n            self._record_tool_result(tool_name, arguments, success)\n\n            # Auto-refresh skills after skill creation\n            if tool_name == \"bash\" and result.status == \"success\":\n                command = arguments.get(\"command\", \"\")\n                if \"init_skill.py\" in command and self.agent.skill_manager:\n                    logger.info(\"Detected skill creation, refreshing skills...\")\n                    self.agent.refresh_skills()\n                    logger.info(f\"Skills refreshed! 
Now have {len(self.agent.skill_manager.skills)} skills\")\n\n            self._emit_event(\"tool_execution_end\", {\n                \"tool_call_id\": tool_id,\n                \"tool_name\": tool_name,\n                **result_dict\n            })\n\n            return result_dict\n\n        except Exception as e:\n            logger.error(f\"Tool execution error: {e}\")\n            error_result = {\n                \"status\": \"error\",\n                \"result\": str(e),\n                \"execution_time\": 0\n            }\n            # Record failure\n            self._record_tool_result(tool_name, arguments, False)\n            \n            self._emit_event(\"tool_execution_end\", {\n                \"tool_call_id\": tool_id,\n                \"tool_name\": tool_name,\n                **error_result\n            })\n            return error_result\n\n    def _build_tool_not_found_message(self, tool_name: str) -> str:\n        \"\"\"Build a helpful error message when a tool is not found.\n\n        If a skill with the same name exists in skill_manager, read its\n        SKILL.md and include the content so the LLM knows how to use it.\n        \"\"\"\n        available_tools = list(self.tools.keys())\n        base_msg = f\"Tool '{tool_name}' not found. Available tools: {available_tools}\"\n\n        skill_manager = getattr(self.agent, 'skill_manager', None)\n        if not skill_manager:\n            return base_msg\n\n        skill_entry = skill_manager.get_skill(tool_name)\n        if not skill_entry:\n            return base_msg\n\n        skill = skill_entry.skill\n        skill_md_path = skill.file_path\n        skill_content = \"\"\n        try:\n            with open(skill_md_path, 'r', encoding='utf-8') as f:\n                skill_content = f.read()\n        except Exception:\n            skill_content = skill.description\n\n        logger.info(\n            f\"[Agent] Tool '{tool_name}' not found, but matched skill '{skill.name}'. \"\n            f\"Guiding LLM to use the skill instead.\"\n        )\n\n        return (\n            f\"Tool '{tool_name}' is not a built-in tool, but a matching skill \"\n            f\"'{skill.name}' is available. You should use existing tools (e.g. bash with curl) \"\n            f\"to accomplish this task following the skill instructions below:\\n\\n\"\n            f\"--- SKILL: {skill.name} (path: {skill_md_path}) ---\\n\"\n            f\"{skill_content}\\n\"\n            f\"--- END SKILL ---\\n\\n\"\n            f\"Available tools: {available_tools}\"\n        )\n\n    def _validate_and_fix_messages(self):\n        \"\"\"Delegate to the shared sanitizer (see message_sanitizer.py).\"\"\"\n        sanitize_claude_messages(self.messages)\n\n    def _identify_complete_turns(self) -> List[Dict]:\n        \"\"\"\n        识别完整的对话轮次\n        \n        一个完整轮次包括：\n        1. 用户消息（text）\n        2. AI 回复（可能包含 tool_use）\n        3. 工具结果（tool_result，如果有）\n        4. 
后续 AI 回复（如果有）\n        \n        Returns:\n            List of turns, each turn is a dict with 'messages' list\n        \"\"\"\n        turns = []\n        current_turn = {'messages': []}\n        \n        for msg in self.messages:\n            role = msg.get('role')\n            content = msg.get('content', [])\n            \n            if role == 'user':\n                # Determine if this is a real user query (not a tool_result injection\n                # or an internal hint message injected by the agent loop).\n                is_user_query = False\n                has_tool_result = False\n                if isinstance(content, list):\n                    has_text = any(\n                        isinstance(block, dict) and block.get('type') == 'text'\n                        for block in content\n                    )\n                    has_tool_result = any(\n                        isinstance(block, dict) and block.get('type') == 'tool_result'\n                        for block in content\n                    )\n                    # A message with tool_result is always internal, even if it\n                    # also contains text blocks (shouldn't happen, but be safe).\n                    is_user_query = has_text and not has_tool_result\n                elif isinstance(content, str):\n                    is_user_query = True\n                \n                if is_user_query:\n                    if current_turn['messages']:\n                        turns.append(current_turn)\n                    current_turn = {'messages': [msg]}\n                else:\n                    current_turn['messages'].append(msg)\n            else:\n                # AI 回复，属于当前轮次\n                current_turn['messages'].append(msg)\n        \n        # 添加最后一个轮次\n        if current_turn['messages']:\n            turns.append(current_turn)\n        \n        return turns\n    \n    def _estimate_turn_tokens(self, turn: Dict) -> int:\n        \"\"\"估算一个轮次的 tokens\"\"\"\n        return sum(\n            self.agent._estimate_message_tokens(msg) \n            for msg in turn['messages']\n        )\n\n    def _truncate_historical_tool_results(self):\n        \"\"\"\n        Truncate tool_result content in historical messages to reduce context size.\n\n        Current turn results are kept at 30K chars (truncated at creation time).\n        Historical turn results are further truncated to 20K chars here.\n        This runs before token-based trimming so that we first shrink oversized\n        results, potentially avoiding the need to drop entire turns.\n        \"\"\"\n        MAX_HISTORY_RESULT_CHARS = 20000\n\n        if len(self.messages) < 2:\n            return\n\n        # Find where the last user text message starts (= current turn boundary)\n        # We skip the current turn's messages to preserve their full content\n        current_turn_start = len(self.messages)\n        for i in range(len(self.messages) - 1, -1, -1):\n            msg = self.messages[i]\n            if msg.get(\"role\") == \"user\":\n                content = msg.get(\"content\", [])\n                if isinstance(content, list) and any(\n                    isinstance(b, dict) and b.get(\"type\") == \"text\" for b in content\n                ):\n                    current_turn_start = i\n                    break\n                elif isinstance(content, str):\n                    current_turn_start = i\n                    break\n\n        truncated_count = 0\n        for i in range(current_turn_start):\n            msg 
= self.messages[i]\n            if msg.get(\"role\") != \"user\":\n                continue\n            content = msg.get(\"content\", [])\n            if not isinstance(content, list):\n                continue\n\n            for block in content:\n                if not isinstance(block, dict) or block.get(\"type\") != \"tool_result\":\n                    continue\n                result_str = block.get(\"content\", \"\")\n                if isinstance(result_str, str) and len(result_str) > MAX_HISTORY_RESULT_CHARS:\n                    original_len = len(result_str)\n                    block[\"content\"] = result_str[:MAX_HISTORY_RESULT_CHARS] + \\\n                        f\"\\n\\n[Historical output truncated: {original_len} -> {MAX_HISTORY_RESULT_CHARS} chars]\"\n                    truncated_count += 1\n\n        if truncated_count > 0:\n            logger.info(f\"📎 Truncated {truncated_count} historical tool result(s) to {MAX_HISTORY_RESULT_CHARS} chars\")\n\n    def _aggressive_trim_for_overflow(self) -> bool:\n        \"\"\"\n        Aggressively trim context when a real overflow error is returned by the API.\n\n        This method goes beyond normal _trim_messages by:\n        1. Truncating all tool results (including current turn) to a small limit\n        2. Keeping only the last 5 complete conversation turns\n        3. Truncating overly long user messages\n\n        Returns:\n            True if messages were trimmed (worth retrying), False if nothing left to trim\n        \"\"\"\n        if not self.messages:\n            return False\n\n        original_count = len(self.messages)\n\n        # Step 1: Aggressively truncate ALL tool results to 10K chars\n        AGGRESSIVE_LIMIT = 10000\n        truncated = 0\n        for msg in self.messages:\n            content = msg.get(\"content\", [])\n            if not isinstance(content, list):\n                continue\n            for block in content:\n                if not isinstance(block, dict):\n                    continue\n                # Truncate tool_result blocks\n                if block.get(\"type\") == \"tool_result\":\n                    result_str = block.get(\"content\", \"\")\n                    if isinstance(result_str, str) and len(result_str) > AGGRESSIVE_LIMIT:\n                        block[\"content\"] = (\n                            result_str[:AGGRESSIVE_LIMIT]\n                            + f\"\\n\\n[Truncated for context recovery: \"\n                            f\"{len(result_str)} -> {AGGRESSIVE_LIMIT} chars]\"\n                        )\n                        truncated += 1\n                # Truncate tool_use input blocks (e.g. large write content)\n                if block.get(\"type\") == \"tool_use\" and isinstance(block.get(\"input\"), dict):\n                    input_str = json.dumps(block[\"input\"], ensure_ascii=False)\n                    if len(input_str) > AGGRESSIVE_LIMIT:\n                        # Keep only a summary of the input\n                        for key, val in block[\"input\"].items():\n                            if isinstance(val, str) and len(val) > 1000:\n                                block[\"input\"][key] = (\n                                    val[:1000]\n                                    + f\"... [truncated {len(val)} chars]\"\n                                )\n                        truncated += 1\n\n        # Step 2: Truncate overly long user text messages (e.g. 
pasted content)\n        USER_MSG_LIMIT = 10000\n        for msg in self.messages:\n            if msg.get(\"role\") != \"user\":\n                continue\n            content = msg.get(\"content\", [])\n            if isinstance(content, list):\n                for block in content:\n                    if isinstance(block, dict) and block.get(\"type\") == \"text\":\n                        text = block.get(\"text\", \"\")\n                        if len(text) > USER_MSG_LIMIT:\n                            block[\"text\"] = (\n                                text[:USER_MSG_LIMIT]\n                                + f\"\\n\\n[Message truncated for context recovery: \"\n                                f\"{len(text)} -> {USER_MSG_LIMIT} chars]\"\n                            )\n                            truncated += 1\n            elif isinstance(content, str) and len(content) > USER_MSG_LIMIT:\n                msg[\"content\"] = (\n                    content[:USER_MSG_LIMIT]\n                    + f\"\\n\\n[Message truncated for context recovery: \"\n                    f\"{len(content)} -> {USER_MSG_LIMIT} chars]\"\n                )\n                truncated += 1\n\n        # Step 3: Keep only the last 5 complete turns\n        turns = self._identify_complete_turns()\n        if len(turns) > 5:\n            kept_turns = turns[-5:]\n            new_messages = []\n            for turn in kept_turns:\n                new_messages.extend(turn[\"messages\"])\n            removed = len(turns) - 5\n            self.messages[:] = new_messages\n            logger.info(\n                f\"🔧 Aggressive trim: removed {removed} old turns, \"\n                f\"truncated {truncated} large blocks, \"\n                f\"{original_count} -> {len(self.messages)} messages\"\n            )\n            return True\n\n        if truncated > 0:\n            logger.info(\n                f\"🔧 Aggressive trim: truncated {truncated} large blocks \"\n                f\"(no turns removed, only {len(turns)} turn(s) left)\"\n            )\n            return True\n\n        # Nothing left to trim\n        logger.warning(\"🔧 Aggressive trim: nothing to trim, will clear history\")\n        return False\n\n    def _trim_messages(self):\n        \"\"\"\n        智能清理消息历史，保持对话完整性\n\n        使用完整轮次作为清理单位，确保：\n        1. 不会在对话中间截断\n        2. 工具调用链（tool_use + tool_result）保持完整\n        3. 
每轮对话都是完整的（用户消息 + AI回复 + 工具调用）\n        \"\"\"\n        if not self.messages or not self.agent:\n            return\n\n        # Step 0: Truncate large tool results in historical turns (30K -> 20K)\n        self._truncate_historical_tool_results()\n\n        # Step 1: 识别完整轮次\n        turns = self._identify_complete_turns()\n        \n        if not turns:\n            return\n        \n        # Step 2: 轮次限制 - 超出时移除前一半，保留后一半\n        if len(turns) > self.max_context_turns:\n            removed_count = len(turns) // 2\n            keep_count = len(turns) - removed_count\n            \n            # Flush discarded turns to daily memory\n            if self.agent.memory_manager:\n                discarded_messages = []\n                for turn in turns[:removed_count]:\n                    discarded_messages.extend(turn[\"messages\"])\n                if discarded_messages:\n                    user_id = getattr(self.agent, '_current_user_id', None)\n                    self.agent.memory_manager.flush_memory(\n                        messages=discarded_messages, user_id=user_id,\n                        reason=\"trim\", max_messages=0\n                    )\n            \n            turns = turns[-keep_count:]\n            \n            logger.info(\n                f\"💾 上下文轮次超限: {keep_count + removed_count} > {self.max_context_turns}，\"\n                f\"裁剪至 {keep_count} 轮（移除 {removed_count} 轮）\"\n            )\n\n        # Step 3: Token 限制 - 保留完整轮次\n        # Get context window from agent (based on model)\n        context_window = self.agent._get_model_context_window()\n\n        # Use configured max_context_tokens if available\n        if hasattr(self.agent, 'max_context_tokens') and self.agent.max_context_tokens:\n            max_tokens = self.agent.max_context_tokens\n        else:\n            # Reserve 10% for response generation\n            reserve_tokens = int(context_window * 0.1)\n            max_tokens = context_window - reserve_tokens\n\n        # Estimate system prompt tokens\n        system_tokens = self.agent._estimate_message_tokens({\"role\": \"system\", \"content\": self.system_prompt})\n        available_tokens = max_tokens - system_tokens\n\n        # Calculate current tokens\n        current_tokens = sum(self._estimate_turn_tokens(turn) for turn in turns)\n        \n        # If under limit, reconstruct messages and return\n        if current_tokens + system_tokens <= max_tokens:\n            # Reconstruct message list from turns\n            new_messages = []\n            for turn in turns:\n                new_messages.extend(turn['messages'])\n            \n            old_count = len(self.messages)\n            self.messages = new_messages\n            \n            # Log if we removed messages due to turn limit\n            if old_count > len(self.messages):\n                logger.info(f\"   重建消息列表: {old_count} -> {len(self.messages)} 条消息\")\n            return\n\n        # Token limit exceeded — tiered strategy based on turn count:\n        #\n        #   Few turns (<5):  Compress ALL turns to text-only (strip tool chains,\n        #                    keep user query + final reply).  
Never discard turns\n        #                    — losing even one is too painful when context is thin.\n        #\n        #   Many turns (>=5): Directly discard the first half of turns.\n        #                     With enough turns the oldest ones are less\n        #                     critical, and keeping the recent half intact\n        #                     (with full tool chains) is more useful.\n\n        COMPRESS_THRESHOLD = 5\n\n        if len(turns) < COMPRESS_THRESHOLD:\n            # --- Few turns: compress ALL turns to text-only, never discard ---\n            compressed_turns = []\n            for t in turns:\n                compressed = compress_turn_to_text_only(t)\n                if compressed[\"messages\"]:\n                    compressed_turns.append(compressed)\n\n            new_messages = []\n            for turn in compressed_turns:\n                new_messages.extend(turn[\"messages\"])\n\n            new_tokens = sum(self._estimate_turn_tokens(t) for t in compressed_turns)\n            old_count = len(self.messages)\n            self.messages = new_messages\n\n            logger.info(\n                f\"📦 上下文tokens超限(轮次<{COMPRESS_THRESHOLD}): \"\n                f\"~{current_tokens + system_tokens} > {max_tokens}，\"\n                f\"压缩全部 {len(turns)} 轮为纯文本 \"\n                f\"({old_count} -> {len(self.messages)} 条消息，\"\n                f\"~{current_tokens + system_tokens} -> ~{new_tokens + system_tokens} tokens)\"\n            )\n            return\n\n        # --- Many turns (>=5): discard the older half, keep the newer half ---\n        removed_count = len(turns) // 2\n        keep_count = len(turns) - removed_count\n        kept_turns = turns[-keep_count:]\n        kept_tokens = sum(self._estimate_turn_tokens(t) for t in kept_turns)\n\n        logger.info(\n            f\"🔄 上下文tokens超限: ~{current_tokens + system_tokens} > {max_tokens}，\"\n            f\"裁剪至 {keep_count} 轮（移除 {removed_count} 轮）\"\n        )\n\n        if self.agent.memory_manager:\n            discarded_messages = []\n            for turn in turns[:removed_count]:\n                discarded_messages.extend(turn[\"messages\"])\n            if discarded_messages:\n                user_id = getattr(self.agent, '_current_user_id', None)\n                self.agent.memory_manager.flush_memory(\n                    messages=discarded_messages, user_id=user_id,\n                    reason=\"trim\", max_messages=0\n                )\n\n        new_messages = []\n        for turn in kept_turns:\n            new_messages.extend(turn['messages'])\n\n        old_count = len(self.messages)\n        self.messages = new_messages\n\n        logger.info(\n            f\"   移除了 {removed_count} 轮对话 \"\n            f\"({old_count} -> {len(self.messages)} 条消息，\"\n            f\"~{current_tokens + system_tokens} -> ~{kept_tokens + system_tokens} tokens)\"\n        )\n\n    def _clear_session_db(self):\n        \"\"\"\n        Clear the current session's persisted messages from SQLite DB.\n\n        This prevents dirty data (broken tool_use/tool_result pairs) from being\n        reloaded on the next request or after a restart.\n        \"\"\"\n        try:\n            session_id = getattr(self.agent, '_current_session_id', None)\n            if not session_id:\n                return\n            from agent.memory import get_conversation_store\n            store = get_conversation_store()\n            store.clear_session(session_id)\n            logger.info(f\"🗑️ Cleared dirty session data from DB: 
{session_id}\")\n        except Exception as e:\n            logger.warning(f\"Failed to clear session DB: {e}\")\n\n    def _prepare_messages(self) -> List[Dict[str, Any]]:\n        \"\"\"\n        Prepare messages to send to LLM\n        \n        Note: For Claude API, system prompt should be passed separately via system parameter,\n        not as a message. The AgentLLMModel will handle this.\n        \"\"\"\n        # Don't add system message here - it will be handled separately by the LLM adapter\n        return self.messages"
  },
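  {
    "path": "examples/sketch_stream_tool_calls.py",
    "content": "\"\"\"Illustrative sketch, NOT part of the project: shows how a streaming tool-call\nbuffer like the one in agent_stream.py assembles OpenAI-style deltas, where\ntool-call arguments arrive as JSON string fragments keyed by index. The chunk\nlayout and this file's name are assumptions modelled on the OpenAI streaming format.\n\"\"\"\nimport json\nimport uuid\n\n\ndef assemble_tool_calls(chunks):\n    buffer = {}  # {index: {id, name, arguments}}\n    for chunk in chunks:\n        delta = (chunk.get(\"choices\") or [{}])[0].get(\"delta\", {})\n        for tc in delta.get(\"tool_calls\") or []:\n            slot = buffer.setdefault(tc.get(\"index\", 0), {\"id\": \"\", \"name\": \"\", \"arguments\": \"\"})\n            if tc.get(\"id\"):\n                slot[\"id\"] = tc[\"id\"]\n            func = tc.get(\"function\", {})\n            if func.get(\"name\"):\n                slot[\"name\"] = func[\"name\"]\n            if func.get(\"arguments\"):\n                slot[\"arguments\"] += func[\"arguments\"]  # concatenate string fragments\n    calls = []\n    for idx in sorted(buffer):\n        tc = buffer[idx]\n        # Some providers return empty IDs; synthesize one so tool_result pairing works.\n        tool_id = tc[\"id\"] or f\"call_{uuid.uuid4().hex[:24]}\"\n        try:\n            args = json.loads(tc[\"arguments\"]) if tc[\"arguments\"] else {}\n        except json.JSONDecodeError:\n            args = {}  # agent_stream.py instead records a _parse_error hint for the LLM\n        calls.append({\"id\": tool_id, \"name\": tc[\"name\"], \"arguments\": args})\n    return calls\n\n\nif __name__ == \"__main__\":\n    chunks = [\n        {\"choices\": [{\"delta\": {\"tool_calls\": [{\"index\": 0, \"id\": \"call_1\", \"function\": {\"name\": \"bash\", \"arguments\": '{\"comm'}}]}}]},\n        {\"choices\": [{\"delta\": {\"tool_calls\": [{\"index\": 0, \"function\": {\"arguments\": 'and\": \"ls\"}'}}]}}]},\n    ]\n    print(assemble_tool_calls(chunks))  # [{'id': 'call_1', 'name': 'bash', 'arguments': {'command': 'ls'}}]\n"
  },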
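  {
    "path": "examples/sketch_retry_backoff.py",
    "content": "\"\"\"Illustrative sketch, NOT part of the project: the retry classification used by\n_call_llm_stream in agent_stream.py, extracted for clarity. Rate-limit errors back\noff 30s/45s/60s; other transient errors back off 2s/4s/6s. The keyword list is\ncopied from the source; the function names here are invented for the example.\n\"\"\"\nimport time\n\nTRANSIENT_KEYWORDS = [\n    \"timeout\", \"timed out\", \"connection\", \"network\",\n    \"rate limit\", \"overloaded\", \"unavailable\", \"busy\", \"retry\",\n    \"429\", \"500\", \"502\", \"503\", \"504\", \"512\",\n]\n\n\ndef wait_before_retry(error: Exception, retry_count: int):\n    \"\"\"Seconds to wait before the next attempt, or None if not retryable.\"\"\"\n    msg = str(error).lower()\n    if not any(k in msg for k in TRANSIENT_KEYWORDS):\n        return None\n    if \"429\" in msg or \"rate limit\" in msg:\n        return 30 + retry_count * 15  # 30s, 45s, 60s for rate limits\n    return (retry_count + 1) * 2  # 2s, 4s, 6s for other transient errors\n\n\ndef call_with_retries(fn, max_retries=3):\n    for attempt in range(max_retries + 1):\n        try:\n            return fn()\n        except Exception as e:\n            wait = wait_before_retry(e, attempt)\n            if wait is None or attempt >= max_retries:\n                raise  # non-retryable, or retries exhausted\n            time.sleep(wait)\n\n\nif __name__ == \"__main__\":\n    assert wait_before_retry(Exception(\"429 rate limit exceeded\"), 1) == 45\n    assert wait_before_retry(Exception(\"connection reset\"), 0) == 2\n    assert wait_before_retry(Exception(\"invalid api key\"), 0) is None\n"
  },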
  {
    "path": "agent/protocol/context.py",
    "content": "class TeamContext:\n    def __init__(self, name: str, description: str, rule: str, agents: list, max_steps: int = 100):\n        \"\"\"\n        Initialize the TeamContext with a name, description, rules, a list of agents, and a user question.\n        :param name: The name of the group context.\n        :param description: A description of the group context.\n        :param rule: The rules governing the group context.\n        :param agents: A list of agents in the context.\n        \"\"\"\n        self.name = name\n        self.description = description\n        self.rule = rule\n        self.agents = agents\n        self.user_task = \"\"  # For backward compatibility\n        self.task = None  # Will be a Task instance\n        self.model = None  # Will be an instance of LLMModel\n        self.task_short_name = None  # Store the task directory name\n        # List of agents that have been executed\n        self.agent_outputs: list = []\n        self.current_steps = 0\n        self.max_steps = max_steps\n\n\nclass AgentOutput:\n    def __init__(self, agent_name: str, output: str):\n        self.agent_name = agent_name\n        self.output = output"
  },
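  {
    "path": "examples/sketch_team_context.py",
    "content": "\"\"\"Illustrative sketch, NOT part of the project: minimal usage of TeamContext from\nagent/protocol/context.py. The agent names below are plain placeholders; the real\nproject wires in concrete agent instances and an LLMModel before running a task.\n\"\"\"\nfrom agent.protocol.context import AgentOutput, TeamContext\n\nctx = TeamContext(\n    name=\"research-team\",\n    description=\"Agents that research and summarize a topic\",\n    rule=\"Agents act in order; stop when max_steps is reached\",\n    agents=[\"researcher\", \"writer\"],  # placeholders; real code passes agent objects\n    max_steps=10,\n)\nctx.user_task = \"Summarize recent changes to the skills module\"\nctx.agent_outputs.append(AgentOutput(\"researcher\", \"Found 3 relevant commits\"))\nctx.current_steps += 1\nassert ctx.current_steps <= ctx.max_steps\n"
  },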
  {
    "path": "agent/protocol/message_utils.py",
    "content": "\"\"\"\nMessage sanitizer — fix broken tool_use / tool_result pairs.\n\nProvides two public helpers that can be reused across agent_stream.py\nand any bot that converts messages to OpenAI format:\n\n1. sanitize_claude_messages(messages)\n   Operates on the internal Claude-format message list (in-place).\n\n2. drop_orphaned_tool_results_openai(messages)\n   Operates on an already-converted OpenAI-format message list,\n   returning a cleaned copy.\n\"\"\"\n\nfrom __future__ import annotations\n\nfrom typing import Dict, List, Set\n\nfrom common.log import logger\n\n\n# ------------------------------------------------------------------ #\n# Claude-format sanitizer (used by agent_stream)\n# ------------------------------------------------------------------ #\n\ndef sanitize_claude_messages(messages: List[Dict]) -> int:\n    \"\"\"\n    Validate and fix a Claude-format message list **in-place**.\n\n    Fixes handled:\n    - Trailing assistant message with tool_use but no following tool_result\n    - Leading orphaned tool_result user messages\n    - Mid-list tool_result blocks whose tool_use_id has no matching\n      tool_use in any preceding assistant message\n\n    Returns the number of messages / blocks removed.\n    \"\"\"\n    if not messages:\n        return 0\n\n    removed = 0\n\n    # 1. Remove trailing incomplete tool_use assistant messages\n    while messages:\n        last = messages[-1]\n        if last.get(\"role\") != \"assistant\":\n            break\n        content = last.get(\"content\", [])\n        if isinstance(content, list) and any(\n            isinstance(b, dict) and b.get(\"type\") == \"tool_use\"\n            for b in content\n        ):\n            logger.warning(\"⚠️ Removing trailing incomplete tool_use assistant message\")\n            messages.pop()\n            removed += 1\n        else:\n            break\n\n    # 2. Remove leading orphaned tool_result user messages\n    while messages:\n        first = messages[0]\n        if first.get(\"role\") != \"user\":\n            break\n        content = first.get(\"content\", [])\n        if isinstance(content, list) and _has_block_type(content, \"tool_result\") \\\n                and not _has_block_type(content, \"text\"):\n            logger.warning(\"⚠️ Removing leading orphaned tool_result user message\")\n            messages.pop(0)\n            removed += 1\n        else:\n            break\n\n    # 3. Iteratively remove unmatched tool_use / tool_result until stable.\n    #    Removing one broken message can orphan others (e.g. an assistant msg\n    #    with both matched and unmatched tool_use — deleting it orphans the\n    #    previously-matched tool_result).  
Loop until clean.\n    for _ in range(5):\n        use_ids: Set[str] = set()\n        result_ids: Set[str] = set()\n        for msg in messages:\n            for block in (msg.get(\"content\") or []):\n                if not isinstance(block, dict):\n                    continue\n                if block.get(\"type\") == \"tool_use\" and block.get(\"id\"):\n                    use_ids.add(block[\"id\"])\n                elif block.get(\"type\") == \"tool_result\" and block.get(\"tool_use_id\"):\n                    result_ids.add(block[\"tool_use_id\"])\n\n        bad_use = use_ids - result_ids\n        bad_result = result_ids - use_ids\n        if not bad_use and not bad_result:\n            break\n\n        pass_removed = 0\n        i = 0\n        while i < len(messages):\n            msg = messages[i]\n            role = msg.get(\"role\")\n            content = msg.get(\"content\", [])\n            if not isinstance(content, list):\n                i += 1\n                continue\n\n            if role == \"assistant\" and bad_use and any(\n                isinstance(b, dict) and b.get(\"type\") == \"tool_use\"\n                and b.get(\"id\") in bad_use for b in content\n            ):\n                logger.warning(f\"⚠️ Removing assistant msg with unmatched tool_use\")\n                messages.pop(i)\n                pass_removed += 1\n                continue\n\n            if role == \"user\" and bad_result and _has_block_type(content, \"tool_result\"):\n                has_bad = any(\n                    isinstance(b, dict) and b.get(\"type\") == \"tool_result\"\n                    and b.get(\"tool_use_id\") in bad_result for b in content\n                )\n                if has_bad:\n                    if not _has_block_type(content, \"text\"):\n                        logger.warning(f\"⚠️ Removing user msg with unmatched tool_result\")\n                        messages.pop(i)\n                        pass_removed += 1\n                        continue\n                    else:\n                        before = len(content)\n                        msg[\"content\"] = [\n                            b for b in content\n                            if not (isinstance(b, dict) and b.get(\"type\") == \"tool_result\"\n                                    and b.get(\"tool_use_id\") in bad_result)\n                        ]\n                        pass_removed += before - len(msg[\"content\"])\n\n            i += 1\n\n        removed += pass_removed\n        if pass_removed == 0:\n            break\n\n    if removed:\n        logger.info(f\"🔧 Message validation: removed {removed} broken message(s)\")\n    return removed\n\n\n# ------------------------------------------------------------------ #\n# OpenAI-format sanitizer (used by minimax_bot, openai_compatible_bot)\n# ------------------------------------------------------------------ #\n\ndef drop_orphaned_tool_results_openai(messages: List[Dict]) -> List[Dict]:\n    \"\"\"\n    Return a copy of *messages* (OpenAI format) with any ``role=tool``\n    messages removed if their ``tool_call_id`` does not match a\n    ``tool_calls[].id`` in a preceding assistant message.\n    \"\"\"\n    known_ids: Set[str] = set()\n    cleaned: List[Dict] = []\n    for msg in messages:\n        if msg.get(\"role\") == \"assistant\" and msg.get(\"tool_calls\"):\n            for tc in msg[\"tool_calls\"]:\n                tc_id = tc.get(\"id\", \"\")\n                if tc_id:\n                    known_ids.add(tc_id)\n\n        if msg.get(\"role\") 
== \"tool\":\n            ref_id = msg.get(\"tool_call_id\", \"\")\n            if ref_id and ref_id not in known_ids:\n                logger.warning(\n                    f\"[MessageSanitizer] Dropping orphaned tool result \"\n                    f\"(tool_call_id={ref_id} not in known ids)\"\n                )\n                continue\n        cleaned.append(msg)\n    return cleaned\n\n\n# ------------------------------------------------------------------ #\n# Internal helpers\n# ------------------------------------------------------------------ #\n\ndef _has_block_type(content: list, block_type: str) -> bool:\n    return any(\n        isinstance(b, dict) and b.get(\"type\") == block_type\n        for b in content\n    )\n\n\ndef _extract_text_from_content(content) -> str:\n    \"\"\"Extract plain text from a message content field (str or list of blocks).\"\"\"\n    if isinstance(content, str):\n        return content.strip()\n    if isinstance(content, list):\n        parts = [\n            b.get(\"text\", \"\")\n            for b in content\n            if isinstance(b, dict) and b.get(\"type\") == \"text\"\n        ]\n        return \"\\n\".join(p for p in parts if p).strip()\n    return \"\"\n\n\ndef compress_turn_to_text_only(turn: Dict) -> Dict:\n    \"\"\"\n    Compress a full turn (with tool_use/tool_result chains) into a lightweight\n    text-only turn that keeps only the first user text and the last assistant text.\n\n    This preserves the conversational context (what the user asked and what the\n    agent concluded) while stripping out the bulky intermediate tool interactions.\n\n    Returns a new turn dict with a ``messages`` list; the original is not mutated.\n    \"\"\"\n    user_text = \"\"\n    last_assistant_text = \"\"\n\n    for msg in turn[\"messages\"]:\n        role = msg.get(\"role\")\n        content = msg.get(\"content\", [])\n\n        if role == \"user\":\n            if isinstance(content, list) and _has_block_type(content, \"tool_result\"):\n                continue\n            if not user_text:\n                user_text = _extract_text_from_content(content)\n\n        elif role == \"assistant\":\n            text = _extract_text_from_content(content)\n            if text:\n                last_assistant_text = text\n\n    compressed_messages = []\n    if user_text:\n        compressed_messages.append({\n            \"role\": \"user\",\n            \"content\": [{\"type\": \"text\", \"text\": user_text}]\n        })\n    if last_assistant_text:\n        compressed_messages.append({\n            \"role\": \"assistant\",\n            \"content\": [{\"type\": \"text\", \"text\": last_assistant_text}]\n        })\n\n    return {\"messages\": compressed_messages}\n"
  },
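  {
    "path": "examples/sketch_message_sanitizer.py",
    "content": "\"\"\"Illustrative sketch, NOT part of the project: exercises the sanitizers in\nagent/protocol/message_utils.py on a deliberately broken history. Assumes the\nrepo root is on PYTHONPATH so the module (and its common.log import) resolves.\n\"\"\"\nfrom agent.protocol.message_utils import (\n    drop_orphaned_tool_results_openai,\n    sanitize_claude_messages,\n)\n\n# tool_use \"a\" has a matching tool_result; the trailing tool_use \"b\" does not,\n# so rule 1 of the sanitizer removes the dangling assistant message in place.\nmessages = [\n    {\"role\": \"user\", \"content\": [{\"type\": \"text\", \"text\": \"list files\"}]},\n    {\"role\": \"assistant\", \"content\": [{\"type\": \"tool_use\", \"id\": \"a\", \"name\": \"bash\", \"input\": {\"command\": \"ls\"}}]},\n    {\"role\": \"user\", \"content\": [{\"type\": \"tool_result\", \"tool_use_id\": \"a\", \"content\": \"main.py\"}]},\n    {\"role\": \"assistant\", \"content\": [{\"type\": \"tool_use\", \"id\": \"b\", \"name\": \"bash\", \"input\": {}}]},\n]\nremoved = sanitize_claude_messages(messages)\nassert removed == 1 and len(messages) == 3\n\n# OpenAI-format variant: a role=tool message whose tool_call_id was never issued\n# by a preceding assistant message is dropped from the returned copy.\nopenai_msgs = [\n    {\"role\": \"assistant\", \"tool_calls\": [{\"id\": \"x\", \"type\": \"function\"}]},\n    {\"role\": \"tool\", \"tool_call_id\": \"y\", \"content\": \"orphan\"},\n]\nassert len(drop_orphaned_tool_results_openai(openai_msgs)) == 1\n"
  },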
  {
    "path": "agent/protocol/models.py",
    "content": "\"\"\"\nModels module for agent system.\nProvides basic model classes needed by tools and bridge integration.\n\"\"\"\n\nfrom typing import Any, Dict, List, Optional\n\n\nclass LLMRequest:\n    \"\"\"Request model for LLM operations\"\"\"\n    \n    def __init__(self, messages: List[Dict[str, str]] = None, model: Optional[str] = None,\n                 temperature: float = 0.7, max_tokens: Optional[int] = None, \n                 stream: bool = False, tools: Optional[List] = None, **kwargs):\n        self.messages = messages or []\n        self.model = model\n        self.temperature = temperature\n        self.max_tokens = max_tokens\n        self.stream = stream\n        self.tools = tools\n        # Allow extra attributes\n        for key, value in kwargs.items():\n            setattr(self, key, value)\n\n\nclass LLMModel:\n    \"\"\"Base class for LLM models\"\"\"\n    \n    def __init__(self, model: str = None, **kwargs):\n        self.model = model\n        self.config = kwargs\n    \n    def call(self, request: LLMRequest):\n        \"\"\"\n        Call the model with a request.\n        This is a placeholder implementation.\n        \"\"\"\n        raise NotImplementedError(\"LLMModel.call not implemented in this context\")\n    \n    def call_stream(self, request: LLMRequest):\n        \"\"\"\n        Call the model with streaming.\n        This is a placeholder implementation.\n        \"\"\"\n        raise NotImplementedError(\"LLMModel.call_stream not implemented in this context\")\n\n\nclass ModelFactory:\n    \"\"\"Factory for creating model instances\"\"\"\n\n    @staticmethod\n    def create_model(model_type: str, **kwargs):\n        \"\"\"\n        Create a model instance based on type.\n        This is a placeholder implementation.\n        \"\"\"\n        raise NotImplementedError(\"ModelFactory.create_model not implemented in this context\")"
  },
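  {
    "path": "examples/sketch_echo_model.py",
    "content": "\"\"\"Illustrative sketch, NOT part of the project: a toy LLMModel subclass showing\nthe call_stream contract that agent_stream.py consumes -- a generator of\nOpenAI-style chunk dicts carrying choices[0].delta. EchoModel and its canned\nreply are invented for the demo.\n\"\"\"\nfrom agent.protocol.models import LLMModel, LLMRequest\n\n\nclass EchoModel(LLMModel):\n    \"\"\"Streams the last user message back, one word per chunk.\"\"\"\n\n    def call_stream(self, request: LLMRequest):\n        text = \"\"\n        for msg in reversed(request.messages):\n            if msg.get(\"role\") == \"user\" and isinstance(msg.get(\"content\"), str):\n                text = msg[\"content\"]\n                break\n        for word in text.split():\n            yield {\"choices\": [{\"delta\": {\"content\": word + \" \"}, \"finish_reason\": None}]}\n        yield {\"choices\": [{\"delta\": {}, \"finish_reason\": \"stop\"}]}\n\n\nif __name__ == \"__main__\":\n    req = LLMRequest(messages=[{\"role\": \"user\", \"content\": \"hello streaming world\"}], stream=True)\n    for chunk in EchoModel(model=\"echo-1\").call_stream(req):\n        print(chunk[\"choices\"][0][\"delta\"].get(\"content\", \"\"), end=\"\")\n    print()\n"
  },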
  {
    "path": "agent/protocol/result.py",
    "content": "from __future__ import annotations\nimport time\nimport uuid\nfrom dataclasses import dataclass, field\nfrom enum import Enum\nfrom typing import List, Dict, Any, Optional\n\nfrom agent.protocol.task import Task, TaskStatus\n\n\nclass AgentActionType(Enum):\n    \"\"\"Enum representing different types of agent actions.\"\"\"\n    TOOL_USE = \"tool_use\"\n    THINKING = \"thinking\"\n    FINAL_ANSWER = \"final_answer\"\n\n\n@dataclass\nclass ToolResult:\n    \"\"\"\n    Represents the result of a tool use.\n    \n    Attributes:\n        tool_name: Name of the tool used\n        input_params: Parameters passed to the tool\n        output: Output from the tool\n        status: Status of the tool execution (success/error)\n        error_message: Error message if the tool execution failed\n        execution_time: Time taken to execute the tool\n    \"\"\"\n    tool_name: str\n    input_params: Dict[str, Any]\n    output: Any\n    status: str\n    error_message: Optional[str] = None\n    execution_time: float = 0.0\n\n\n@dataclass\nclass AgentAction:\n    \"\"\"\n    Represents an action taken by an agent.\n    \n    Attributes:\n        id: Unique identifier for the action\n        agent_id: ID of the agent that performed the action\n        agent_name: Name of the agent that performed the action\n        action_type: Type of action (tool use, thinking, final answer)\n        content: Content of the action (thought content, final answer content)\n        tool_result: Tool use details if action_type is TOOL_USE\n        timestamp: When the action was performed\n    \"\"\"\n    agent_id: str\n    agent_name: str\n    action_type: AgentActionType\n    id: str = field(default_factory=lambda: str(uuid.uuid4()))\n    content: str = \"\"\n    tool_result: Optional[ToolResult] = None\n    thought: Optional[str] = None\n    timestamp: float = field(default_factory=time.time)\n\n\n@dataclass\nclass AgentResult:\n    \"\"\"\n    Represents the result of an agent's execution.\n\n    Attributes:\n        final_answer: The final answer provided by the agent\n        step_count: Number of steps taken by the agent\n        status: Status of the execution (success/error)\n        error_message: Error message if execution failed\n    \"\"\"\n    final_answer: str\n    step_count: int\n    status: str = \"success\"\n    error_message: Optional[str] = None\n\n    @classmethod\n    def success(cls, final_answer: str, step_count: int) -> \"AgentResult\":\n        \"\"\"Create a successful result\"\"\"\n        return cls(final_answer=final_answer, step_count=step_count)\n\n    @classmethod\n    def error(cls, error_message: str, step_count: int = 0) -> \"AgentResult\":\n        \"\"\"Create an error result\"\"\"\n        return cls(\n            final_answer=f\"Error: {error_message}\",\n            step_count=step_count,\n            status=\"error\",\n            error_message=error_message\n        )\n\n    @property\n    def is_error(self) -> bool:\n        \"\"\"Check if the result represents an error\"\"\"\n        return self.status == \"error\""
  },
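  {
    "path": "examples/sketch_agent_result.py",
    "content": "\"\"\"Illustrative sketch, NOT part of the project: basic usage of the result types\ndefined in agent/protocol/result.py.\n\"\"\"\nfrom agent.protocol.result import AgentResult, ToolResult\n\ntr = ToolResult(\n    tool_name=\"bash\",\n    input_params={\"command\": \"ls\"},\n    output=\"main.py\\nconfig.json\",\n    status=\"success\",\n    execution_time=0.12,\n)\nassert tr.error_message is None  # optional fields fall back to their defaults\n\nok = AgentResult.success(final_answer=\"Listed 2 files\", step_count=1)\nbad = AgentResult.error(\"tool crashed\", step_count=3)\nassert not ok.is_error and bad.is_error\nassert bad.final_answer.startswith(\"Error:\")\n"
  },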
  {
    "path": "agent/protocol/task.py",
    "content": "from __future__ import annotations\nimport time\nimport uuid\nfrom dataclasses import dataclass, field\nfrom enum import Enum\nfrom typing import Dict, Any, List\n\n\nclass TaskType(Enum):\n    \"\"\"Enum representing different types of tasks.\"\"\"\n    TEXT = \"text\"\n    IMAGE = \"image\"\n    VIDEO = \"video\"\n    AUDIO = \"audio\"\n    FILE = \"file\"\n    MIXED = \"mixed\"\n\n\nclass TaskStatus(Enum):\n    \"\"\"Enum representing the status of a task.\"\"\"\n    INIT = \"init\"  # Initial state\n    PROCESSING = \"processing\"  # In progress\n    COMPLETED = \"completed\"  # Completed\n    FAILED = \"failed\"  # Failed\n\n\n@dataclass\nclass Task:\n    \"\"\"\n    Represents a task to be processed by an agent.\n    \n    Attributes:\n        id: Unique identifier for the task\n        content: The primary text content of the task\n        type: Type of the task\n        status: Current status of the task\n        created_at: Timestamp when the task was created\n        updated_at: Timestamp when the task was last updated\n        metadata: Additional metadata for the task\n        images: List of image URLs or base64 encoded images\n        videos: List of video URLs\n        audios: List of audio URLs or base64 encoded audios\n        files: List of file URLs or paths\n    \"\"\"\n    id: str = field(default_factory=lambda: str(uuid.uuid4()))\n    content: str = \"\"\n    type: TaskType = TaskType.TEXT\n    status: TaskStatus = TaskStatus.INIT\n    created_at: float = field(default_factory=time.time)\n    updated_at: float = field(default_factory=time.time)\n    metadata: Dict[str, Any] = field(default_factory=dict)\n\n    # Media content\n    images: List[str] = field(default_factory=list)\n    videos: List[str] = field(default_factory=list)\n    audios: List[str] = field(default_factory=list)\n    files: List[str] = field(default_factory=list)\n\n    def __init__(self, content: str = \"\", **kwargs):\n        \"\"\"\n        Initialize a Task with content and optional keyword arguments.\n        \n        Args:\n            content: The text content of the task\n            **kwargs: Additional attributes to set\n        \"\"\"\n        self.id = kwargs.get('id', str(uuid.uuid4()))\n        self.content = content\n        self.type = kwargs.get('type', TaskType.TEXT)\n        self.status = kwargs.get('status', TaskStatus.INIT)\n        self.created_at = kwargs.get('created_at', time.time())\n        self.updated_at = kwargs.get('updated_at', time.time())\n        self.metadata = kwargs.get('metadata', {})\n        self.images = kwargs.get('images', [])\n        self.videos = kwargs.get('videos', [])\n        self.audios = kwargs.get('audios', [])\n        self.files = kwargs.get('files', [])\n\n    def get_text(self) -> str:\n        \"\"\"\n        Get the text content of the task.\n        \n        Returns:\n            The text content\n        \"\"\"\n        return self.content\n\n    def update_status(self, status: TaskStatus) -> None:\n        \"\"\"\n        Update the status of the task.\n        \n        Args:\n            status: The new status\n        \"\"\"\n        self.status = status\n        self.updated_at = time.time()"
  },
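  {
    "path": "examples/sketch_task_lifecycle.py",
    "content": "\"\"\"Illustrative sketch, NOT part of the project: creating a Task from\nagent/protocol/task.py and walking it through its status lifecycle. The URL\nbelow is a dummy value for the demo.\n\"\"\"\nfrom agent.protocol.task import Task, TaskStatus, TaskType\n\ntask = Task(\"Transcribe this recording\", type=TaskType.AUDIO, audios=[\"https://example.com/a.mp3\"])\nassert task.status is TaskStatus.INIT\nassert task.get_text() == \"Transcribe this recording\"\n\ntask.update_status(TaskStatus.PROCESSING)\ntask.update_status(TaskStatus.COMPLETED)\nassert task.status is TaskStatus.COMPLETED\nassert task.updated_at >= task.created_at  # update_status refreshes the timestamp\n"
  },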
  {
    "path": "agent/skills/__init__.py",
    "content": "\"\"\"\nSkills module for agent system.\n\nThis module provides the framework for loading, managing, and executing skills.\nSkills are markdown files with frontmatter that provide specialized instructions\nfor specific tasks.\n\"\"\"\n\nfrom agent.skills.types import (\n    Skill,\n    SkillEntry,\n    SkillMetadata,\n    SkillInstallSpec,\n    LoadSkillsResult,\n)\nfrom agent.skills.loader import SkillLoader\nfrom agent.skills.manager import SkillManager\nfrom agent.skills.service import SkillService\nfrom agent.skills.formatter import format_skills_for_prompt\n\n__all__ = [\n    \"Skill\",\n    \"SkillEntry\",\n    \"SkillMetadata\",\n    \"SkillInstallSpec\",\n    \"LoadSkillsResult\",\n    \"SkillLoader\",\n    \"SkillManager\",\n    \"SkillService\",\n    \"format_skills_for_prompt\",\n]\n"
  },
  {
    "path": "agent/skills/config.py",
    "content": "\"\"\"\nConfiguration support for skills.\n\"\"\"\n\nimport os\nimport platform\nfrom typing import Dict, Optional, List\nfrom agent.skills.types import SkillEntry\n\n\ndef resolve_runtime_platform() -> str:\n    \"\"\"Get the current runtime platform.\"\"\"\n    return platform.system().lower()\n\n\ndef has_binary(bin_name: str) -> bool:\n    \"\"\"\n    Check if a binary is available in PATH.\n    \n    :param bin_name: Binary name to check\n    :return: True if binary is available\n    \"\"\"\n    import shutil\n    return shutil.which(bin_name) is not None\n\n\ndef has_any_binary(bin_names: List[str]) -> bool:\n    \"\"\"\n    Check if any of the given binaries is available.\n    \n    :param bin_names: List of binary names to check\n    :return: True if at least one binary is available\n    \"\"\"\n    return any(has_binary(bin_name) for bin_name in bin_names)\n\n\ndef has_env_var(env_name: str) -> bool:\n    \"\"\"\n    Check if an environment variable is set.\n    \n    :param env_name: Environment variable name\n    :return: True if environment variable is set\n    \"\"\"\n    return env_name in os.environ and bool(os.environ[env_name].strip())\n\n\ndef get_skill_config(config: Optional[Dict], skill_name: str) -> Optional[Dict]:\n    \"\"\"\n    Get skill-specific configuration.\n    \n    :param config: Global configuration dictionary\n    :param skill_name: Name of the skill\n    :return: Skill configuration or None\n    \"\"\"\n    if not config:\n        return None\n    \n    skills_config = config.get('skills', {})\n    if not isinstance(skills_config, dict):\n        return None\n    \n    entries = skills_config.get('entries', {})\n    if not isinstance(entries, dict):\n        return None\n    \n    return entries.get(skill_name)\n\n\ndef should_include_skill(\n    entry: SkillEntry,\n    config: Optional[Dict] = None,\n    current_platform: Optional[str] = None,\n) -> bool:\n    \"\"\"\n    Determine if a skill should be included based on requirements.\n    \n    Simple rule: Skills are auto-enabled if their requirements are met.\n    - Has required API keys → enabled\n    - Missing API keys → disabled\n    - Wrong keys → enabled but will fail at runtime (LLM will handle error)\n    \n    :param entry: SkillEntry to check\n    :param config: Configuration dictionary (currently unused, reserved for future)\n    :param current_platform: Current platform (default: auto-detect)\n    :return: True if skill should be included\n    \"\"\"\n    metadata = entry.metadata\n    \n    # No metadata = always include (no requirements)\n    if not metadata:\n        return True\n    \n    # Check platform requirements (can't work on wrong platform)\n    if metadata.os:\n        platform_name = current_platform or resolve_runtime_platform()\n        # Map common platform names\n        platform_map = {\n            'darwin': 'darwin',\n            'linux': 'linux',\n            'windows': 'win32',\n        }\n        normalized_platform = platform_map.get(platform_name, platform_name)\n        \n        if normalized_platform not in metadata.os:\n            return False\n    \n    # If skill has 'always: true', include it regardless of other requirements\n    if metadata.always:\n        return True\n    \n    # Check requirements\n    if metadata.requires:\n        # Check required binaries (all must be present)\n        required_bins = metadata.requires.get('bins', [])\n        if required_bins:\n            if not all(has_binary(bin_name) for bin_name in 
required_bins):\n                return False\n        \n        # Check anyBins (at least one must be present)\n        any_bins = metadata.requires.get('anyBins', [])\n        if any_bins:\n            if not has_any_binary(any_bins):\n                return False\n        \n        # Check environment variables (API keys)\n        # All required env vars must be set\n        required_env = metadata.requires.get('env', [])\n        if required_env:\n            for env_name in required_env:\n                if not has_env_var(env_name):\n                    return False\n\n        # Check anyEnv (at least one must be present)\n        any_env = metadata.requires.get('anyEnv', [])\n        if any_env:\n            if not any(has_env_var(e) for e in any_env):\n                return False\n    \n    return True\n\n\ndef is_config_path_truthy(config: Dict, path: str) -> bool:\n    \"\"\"\n    Check if a config path resolves to a truthy value.\n    \n    :param config: Configuration dictionary\n    :param path: Dot-separated path (e.g., 'skills.enabled')\n    :return: True if path resolves to truthy value\n    \"\"\"\n    parts = path.split('.')\n    current = config\n    \n    for part in parts:\n        if not isinstance(current, dict):\n            return False\n        current = current.get(part)\n        if current is None:\n            return False\n    \n    # Check if value is truthy\n    if isinstance(current, bool):\n        return current\n    if isinstance(current, (int, float)):\n        return current != 0\n    if isinstance(current, str):\n        return bool(current.strip())\n    \n    return bool(current)\n\n\ndef resolve_config_path(config: Dict, path: str):\n    \"\"\"\n    Resolve a dot-separated config path to its value.\n    \n    :param config: Configuration dictionary\n    :param path: Dot-separated path\n    :return: Value at path or None\n    \"\"\"\n    parts = path.split('.')\n    current = config\n    \n    for part in parts:\n        if not isinstance(current, dict):\n            return None\n        current = current.get(part)\n        if current is None:\n            return None\n    \n    return current\n"
  },
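  {
    "path": "examples/skills_config_usage.py",
    "content": "\"\"\"\nUsage sketch for agent.skills.config (a hypothetical example module, not part\nof the project). Builds a SkillEntry by hand and shows how the requirement\nchecks decide whether a skill is exposed; the skill name and paths below are\nplaceholders.\n\"\"\"\n\nfrom agent.skills.types import Skill, SkillMetadata, SkillEntry\nfrom agent.skills.config import should_include_skill, is_config_path_truthy\n\nskill = Skill(\n    name='github-helper',\n    description='Query GitHub repositories',\n    file_path='/tmp/github-helper/SKILL.md',\n    base_dir='/tmp/github-helper',\n    source='custom',\n    content='',\n)\n# Included only when `git` is on PATH and GITHUB_TOKEN is set and non-empty.\nentry = SkillEntry(\n    skill=skill,\n    metadata=SkillMetadata(requires={'bins': ['git'], 'env': ['GITHUB_TOKEN']}),\n)\n\nif __name__ == '__main__':\n    print(should_include_skill(entry))\n    # Dot-separated path lookup into a nested config dict -> True here.\n    print(is_config_path_truthy({'skills': {'enabled': 'yes'}}, 'skills.enabled'))\n"
  },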
  {
    "path": "agent/skills/formatter.py",
    "content": "\"\"\"\nSkill formatter for generating prompts from skills.\n\"\"\"\n\nfrom typing import List\nfrom agent.skills.types import Skill, SkillEntry\n\n\ndef format_skills_for_prompt(skills: List[Skill]) -> str:\n    \"\"\"\n    Format skills for inclusion in a system prompt.\n    \n    Uses XML format per Agent Skills standard.\n    Skills with disable_model_invocation=True are excluded.\n    \n    :param skills: List of skills to format\n    :return: Formatted prompt text\n    \"\"\"\n    # Filter out skills that should not be invoked by the model\n    visible_skills = [s for s in skills if not s.disable_model_invocation]\n    \n    if not visible_skills:\n        return \"\"\n    \n    lines = [\n        \"\",\n        \"<available_skills>\",\n    ]\n\n    for skill in visible_skills:\n        lines.append(\"  <skill>\")\n        lines.append(f\"    <name>{_escape_xml(skill.name)}</name>\")\n        lines.append(f\"    <description>{_escape_xml(skill.description)}</description>\")\n        lines.append(f\"    <location>{_escape_xml(skill.file_path)}</location>\")\n        lines.append(f\"    <base_dir>{_escape_xml(skill.base_dir)}</base_dir>\")\n        lines.append(\"  </skill>\")\n    \n    lines.append(\"</available_skills>\")\n    \n    return \"\\n\".join(lines)\n\n\ndef format_skill_entries_for_prompt(entries: List[SkillEntry]) -> str:\n    \"\"\"\n    Format skill entries for inclusion in a system prompt.\n    \n    :param entries: List of skill entries to format\n    :return: Formatted prompt text\n    \"\"\"\n    skills = [entry.skill for entry in entries]\n    return format_skills_for_prompt(skills)\n\n\ndef _escape_xml(text: str) -> str:\n    \"\"\"Escape XML special characters.\"\"\"\n    return (text\n            .replace('&', '&amp;')\n            .replace('<', '&lt;')\n            .replace('>', '&gt;')\n            .replace('\"', '&quot;')\n            .replace(\"'\", '&apos;'))\n"
  },
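  {
    "path": "examples/skills_formatter_usage.py",
    "content": "\"\"\"\nUsage sketch for agent.skills.formatter (a hypothetical example module, not\npart of the project). Demonstrates the <available_skills> XML block the\nformatter produces, that XML special characters are escaped, and that skills\nwith disable_model_invocation=True are hidden from the prompt.\n\"\"\"\n\nfrom agent.skills.types import Skill\nfrom agent.skills.formatter import format_skills_for_prompt\n\nvisible = Skill(\n    name='web-fetch',\n    description='Fetch a URL & summarise it',\n    file_path='/skills/web-fetch/SKILL.md',\n    base_dir='/skills/web-fetch',\n    source='builtin',\n    content='',\n)\nhidden = Skill(\n    name='internal-helper',\n    description='Not offered to the model',\n    file_path='/skills/internal-helper/SKILL.md',\n    base_dir='/skills/internal-helper',\n    source='builtin',\n    content='',\n    disable_model_invocation=True,\n)\n\nif __name__ == '__main__':\n    # Only 'web-fetch' appears; note the '&' escaped to '&amp;'.\n    print(format_skills_for_prompt([visible, hidden]))\n"
  },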
  {
    "path": "agent/skills/frontmatter.py",
    "content": "\"\"\"\nFrontmatter parsing for skills.\n\"\"\"\n\nimport re\nimport json\nfrom typing import Dict, Any, Optional, List\nfrom agent.skills.types import SkillMetadata, SkillInstallSpec\n\n\ndef parse_frontmatter(content: str) -> Dict[str, Any]:\n    \"\"\"\n    Parse YAML-style frontmatter from markdown content.\n    \n    Returns a dictionary of frontmatter fields.\n    \"\"\"\n    frontmatter = {}\n    \n    # Match frontmatter block between --- markers\n    match = re.match(r'^---\\s*\\n(.*?)\\n---\\s*\\n', content, re.DOTALL)\n    if not match:\n        return frontmatter\n    \n    frontmatter_text = match.group(1)\n    \n    # Try to use PyYAML for proper YAML parsing\n    try:\n        import yaml\n        frontmatter = yaml.safe_load(frontmatter_text)\n        if not isinstance(frontmatter, dict):\n            frontmatter = {}\n        return frontmatter\n    except ImportError:\n        # Fallback to simple parsing if PyYAML not available\n        pass\n    except Exception:\n        # If YAML parsing fails, fall back to simple parsing\n        pass\n    \n    # Simple YAML-like parsing (supports key: value format only)\n    # This is a fallback for when PyYAML is not available\n    for line in frontmatter_text.split('\\n'):\n        line = line.strip()\n        if not line or line.startswith('#'):\n            continue\n        \n        if ':' in line:\n            key, value = line.split(':', 1)\n            key = key.strip()\n            value = value.strip()\n            \n            # Try to parse as JSON if it looks like JSON\n            if value.startswith('{') or value.startswith('['):\n                try:\n                    value = json.loads(value)\n                except json.JSONDecodeError:\n                    pass\n            # Parse boolean values\n            elif value.lower() in ('true', 'false'):\n                value = value.lower() == 'true'\n            # Parse numbers\n            elif value.isdigit():\n                value = int(value)\n            \n            frontmatter[key] = value\n    \n    return frontmatter\n\n\ndef parse_metadata(frontmatter: Dict[str, Any]) -> Optional[SkillMetadata]:\n    \"\"\"\n    Parse skill metadata from frontmatter.\n    \n    Looks for 'metadata' field containing JSON with skill configuration.\n    \"\"\"\n    metadata_raw = frontmatter.get('metadata')\n    if not metadata_raw:\n        return None\n    \n    # If it's a string, try to parse as JSON\n    if isinstance(metadata_raw, str):\n        try:\n            metadata_raw = json.loads(metadata_raw)\n        except json.JSONDecodeError:\n            return None\n    \n    if not isinstance(metadata_raw, dict):\n        return None\n    \n    # Use metadata_raw directly (COW format)\n    meta_obj = metadata_raw\n    \n    # Parse install specs\n    install_specs = []\n    install_raw = meta_obj.get('install', [])\n    if isinstance(install_raw, list):\n        for spec_raw in install_raw:\n            if not isinstance(spec_raw, dict):\n                continue\n            \n            kind = spec_raw.get('kind', spec_raw.get('type', '')).lower()\n            if not kind:\n                continue\n            \n            spec = SkillInstallSpec(\n                kind=kind,\n                id=spec_raw.get('id'),\n                label=spec_raw.get('label'),\n                bins=_normalize_string_list(spec_raw.get('bins')),\n                os=_normalize_string_list(spec_raw.get('os')),\n                formula=spec_raw.get('formula'),\n   
             package=spec_raw.get('package'),\n                module=spec_raw.get('module'),\n                url=spec_raw.get('url'),\n                archive=spec_raw.get('archive'),\n                extract=spec_raw.get('extract', False),\n                strip_components=spec_raw.get('stripComponents'),\n                target_dir=spec_raw.get('targetDir'),\n            )\n            install_specs.append(spec)\n    \n    # Parse requires\n    requires = {}\n    requires_raw = meta_obj.get('requires', {})\n    if isinstance(requires_raw, dict):\n        for key, value in requires_raw.items():\n            requires[key] = _normalize_string_list(value)\n    \n    return SkillMetadata(\n        always=meta_obj.get('always', False),\n        skill_key=meta_obj.get('skillKey'),\n        primary_env=meta_obj.get('primaryEnv'),\n        emoji=meta_obj.get('emoji'),\n        homepage=meta_obj.get('homepage'),\n        os=_normalize_string_list(meta_obj.get('os')),\n        requires=requires,\n        install=install_specs,\n    )\n\n\ndef _normalize_string_list(value: Any) -> List[str]:\n    \"\"\"Normalize a value to a list of strings.\"\"\"\n    if not value:\n        return []\n    \n    if isinstance(value, list):\n        return [str(v).strip() for v in value if v]\n    \n    if isinstance(value, str):\n        return [v.strip() for v in value.split(',') if v.strip()]\n    \n    return []\n\n\ndef parse_boolean_value(value: Optional[str], default: bool = False) -> bool:\n    \"\"\"Parse a boolean value from frontmatter.\"\"\"\n    if value is None:\n        return default\n    \n    if isinstance(value, bool):\n        return value\n    \n    if isinstance(value, str):\n        return value.lower() in ('true', '1', 'yes', 'on')\n    \n    return default\n\n\ndef get_frontmatter_value(frontmatter: Dict[str, Any], key: str) -> Optional[str]:\n    \"\"\"Get a frontmatter value as a string.\"\"\"\n    value = frontmatter.get(key)\n    return str(value) if value is not None else None\n"
  },
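  {
    "path": "examples/skills_frontmatter_usage.py",
    "content": "\"\"\"\nUsage sketch for agent.skills.frontmatter (a hypothetical example module, not\npart of the project). Parses a small SKILL.md string and extracts\nSkillMetadata from its JSON `metadata` field.\n\"\"\"\n\nfrom agent.skills.frontmatter import parse_frontmatter, parse_metadata\n\nSKILL_MD = (\n    '---\\n'\n    'name: weather\\n'\n    'description: Look up the weather\\n'\n    'metadata: {\"always\": false, \"requires\": {\"env\": [\"WEATHER_API_KEY\"]}}\\n'\n    '---\\n'\n    '# weather skill body\\n'\n)\n\nif __name__ == '__main__':\n    fm = parse_frontmatter(SKILL_MD)\n    print(fm['name'], fm['description'])\n    meta = parse_metadata(fm)\n    print(meta.requires)  # {'env': ['WEATHER_API_KEY']}\n"
  },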
  {
    "path": "agent/skills/loader.py",
    "content": "\"\"\"\nSkill loader for discovering and loading skills from directories.\n\"\"\"\n\nimport os\nfrom pathlib import Path\nfrom typing import List, Optional, Dict\nfrom common.log import logger\nfrom agent.skills.types import Skill, SkillEntry, LoadSkillsResult, SkillMetadata\nfrom agent.skills.frontmatter import parse_frontmatter, parse_metadata, parse_boolean_value, get_frontmatter_value\n\n\nclass SkillLoader:\n    \"\"\"Loads skills from various directories.\"\"\"\n\n    def __init__(self):\n        pass\n    \n    def load_skills_from_dir(self, dir_path: str, source: str) -> LoadSkillsResult:\n        \"\"\"\n        Load skills from a directory.\n\n        Discovery rules:\n        - Direct .md files in the root directory\n        - Recursive SKILL.md files under subdirectories\n\n        :param dir_path: Directory path to scan\n        :param source: Source identifier ('builtin' or 'custom')\n        :return: LoadSkillsResult with skills and diagnostics\n        \"\"\"\n        skills = []\n        diagnostics = []\n        \n        if not os.path.exists(dir_path):\n            diagnostics.append(f\"Directory does not exist: {dir_path}\")\n            return LoadSkillsResult(skills=skills, diagnostics=diagnostics)\n        \n        if not os.path.isdir(dir_path):\n            diagnostics.append(f\"Path is not a directory: {dir_path}\")\n            return LoadSkillsResult(skills=skills, diagnostics=diagnostics)\n        \n        # Load skills from root-level .md files and subdirectories\n        result = self._load_skills_recursive(dir_path, source, include_root_files=True)\n        \n        return result\n    \n    def _load_skills_recursive(\n        self, \n        dir_path: str, \n        source: str, \n        include_root_files: bool = False\n    ) -> LoadSkillsResult:\n        \"\"\"\n        Recursively load skills from a directory.\n        \n        :param dir_path: Directory to scan\n        :param source: Source identifier\n        :param include_root_files: Whether to include root-level .md files\n        :return: LoadSkillsResult\n        \"\"\"\n        skills = []\n        diagnostics = []\n        \n        try:\n            entries = os.listdir(dir_path)\n        except Exception as e:\n            diagnostics.append(f\"Failed to list directory {dir_path}: {e}\")\n            return LoadSkillsResult(skills=skills, diagnostics=diagnostics)\n        \n        for entry in entries:\n            # Skip hidden files and directories\n            if entry.startswith('.'):\n                continue\n            \n            # Skip common non-skill directories\n            if entry in ('node_modules', '__pycache__', 'venv', '.git'):\n                continue\n            \n            full_path = os.path.join(dir_path, entry)\n            \n            # Handle directories\n            if os.path.isdir(full_path):\n                # Recursively scan subdirectories\n                sub_result = self._load_skills_recursive(full_path, source, include_root_files=False)\n                skills.extend(sub_result.skills)\n                diagnostics.extend(sub_result.diagnostics)\n                continue\n            \n            # Handle files\n            if not os.path.isfile(full_path):\n                continue\n            \n            # Check if this is a skill file\n            is_root_md = include_root_files and entry.endswith('.md') and entry.upper() != 'README.MD'\n            is_skill_md = not include_root_files and entry == 'SKILL.md'\n          
  \n            if not (is_root_md or is_skill_md):\n                continue\n            \n            # Load the skill\n            skill_result = self._load_skill_from_file(full_path, source)\n            if skill_result.skills:\n                skills.extend(skill_result.skills)\n            diagnostics.extend(skill_result.diagnostics)\n        \n        return LoadSkillsResult(skills=skills, diagnostics=diagnostics)\n    \n    def _load_skill_from_file(self, file_path: str, source: str) -> LoadSkillsResult:\n        \"\"\"\n        Load a single skill from a markdown file.\n        \n        :param file_path: Path to the skill markdown file\n        :param source: Source identifier\n        :return: LoadSkillsResult\n        \"\"\"\n        diagnostics = []\n        \n        try:\n            with open(file_path, 'r', encoding='utf-8') as f:\n                content = f.read()\n        except Exception as e:\n            diagnostics.append(f\"Failed to read skill file {file_path}: {e}\")\n            return LoadSkillsResult(skills=[], diagnostics=diagnostics)\n        \n        # Parse frontmatter\n        frontmatter = parse_frontmatter(content)\n        \n        # Get skill name and description\n        skill_dir = os.path.dirname(file_path)\n        parent_dir_name = os.path.basename(skill_dir)\n        \n        name = frontmatter.get('name', parent_dir_name)\n        description = frontmatter.get('description', '')\n        \n        # Normalize name (handle both string and list)\n        if isinstance(name, list):\n            name = name[0] if name else parent_dir_name\n        elif not isinstance(name, str):\n            name = str(name) if name else parent_dir_name\n        \n        # Normalize description (handle both string and list)\n        if isinstance(description, list):\n            description = ' '.join(str(d) for d in description if d)\n        elif not isinstance(description, str):\n            description = str(description) if description else ''\n        \n        # Special handling for linkai-agent: dynamically load apps from config.json\n        if name == 'linkai-agent':\n            description = self._load_linkai_agent_description(skill_dir, description)\n        \n        if not description or not description.strip():\n            diagnostics.append(f\"Skill {name} has no description: {file_path}\")\n            return LoadSkillsResult(skills=[], diagnostics=diagnostics)\n        \n        # Parse disable-model-invocation flag\n        disable_model_invocation = parse_boolean_value(\n            get_frontmatter_value(frontmatter, 'disable-model-invocation'),\n            default=False\n        )\n        \n        # Create skill object\n        skill = Skill(\n            name=name,\n            description=description,\n            file_path=file_path,\n            base_dir=skill_dir,\n            source=source,\n            content=content,\n            disable_model_invocation=disable_model_invocation,\n            frontmatter=frontmatter,\n        )\n        \n        return LoadSkillsResult(skills=[skill], diagnostics=diagnostics)\n    \n    def _load_linkai_agent_description(self, skill_dir: str, default_description: str) -> str:\n        \"\"\"\n        Dynamically load LinkAI agent description from config.json\n        \n        :param skill_dir: Skill directory\n        :param default_description: Default description from SKILL.md\n        :return: Dynamic description with app list\n        \"\"\"\n        import json\n        \n        
config_path = os.path.join(skill_dir, \"config.json\")\n        \n        # Without config.json, skip this skill entirely (return empty to trigger exclusion)\n        if not os.path.exists(config_path):\n            logger.debug(f\"[SkillLoader] linkai-agent skipped: no config.json found\")\n            return \"\"\n        \n        try:\n            with open(config_path, 'r', encoding='utf-8') as f:\n                config = json.load(f)\n            \n            apps = config.get(\"apps\", [])\n            if not apps:\n                return default_description\n            \n            # Build dynamic description with app details\n            app_descriptions = \"; \".join([\n                f\"{app['app_name']}({app['app_code']}: {app['app_description']})\"\n                for app in apps\n            ])\n            \n            return f\"Call LinkAI apps/workflows. {app_descriptions}\"\n        \n        except Exception as e:\n            logger.warning(f\"[SkillLoader] Failed to load linkai-agent config: {e}\")\n            return default_description\n    \n    def load_all_skills(\n        self,\n        builtin_dir: Optional[str] = None,\n        custom_dir: Optional[str] = None,\n    ) -> Dict[str, SkillEntry]:\n        \"\"\"\n        Load skills from builtin and custom directories.\n\n        Precedence (lowest to highest):\n        1. builtin  — project root ``skills/``, shipped with the codebase\n        2. custom   — workspace ``skills/``, installed via cloud console or skill creator\n\n        Same-name custom skills override builtin ones.\n\n        :param builtin_dir: Built-in skills directory\n        :param custom_dir: Custom skills directory\n        :return: Dictionary mapping skill name to SkillEntry\n        \"\"\"\n        skill_map: Dict[str, SkillEntry] = {}\n        all_diagnostics = []\n\n        # Load builtin skills (lower precedence)\n        if builtin_dir and os.path.exists(builtin_dir):\n            result = self.load_skills_from_dir(builtin_dir, source='builtin')\n            all_diagnostics.extend(result.diagnostics)\n            for skill in result.skills:\n                entry = self._create_skill_entry(skill)\n                skill_map[skill.name] = entry\n\n        # Load custom skills (higher precedence, overrides builtin)\n        if custom_dir and os.path.exists(custom_dir):\n            result = self.load_skills_from_dir(custom_dir, source='custom')\n            all_diagnostics.extend(result.diagnostics)\n            for skill in result.skills:\n                entry = self._create_skill_entry(skill)\n                skill_map[skill.name] = entry\n\n        # Log diagnostics\n        if all_diagnostics:\n            logger.debug(f\"Skill loading diagnostics: {len(all_diagnostics)} issues\")\n            for diag in all_diagnostics[:5]:\n                logger.debug(f\"  - {diag}\")\n\n        logger.debug(f\"Loaded {len(skill_map)} skills total\")\n\n        return skill_map\n    \n    def _create_skill_entry(self, skill: Skill) -> SkillEntry:\n        \"\"\"\n        Create a SkillEntry from a Skill with parsed metadata.\n        \n        :param skill: The skill to create an entry for\n        :return: SkillEntry with metadata\n        \"\"\"\n        metadata = parse_metadata(skill.frontmatter)\n        \n        # Parse user-invocable flag\n        user_invocable = parse_boolean_value(\n            get_frontmatter_value(skill.frontmatter, 'user-invocable'),\n            default=True\n        )\n        \n        return SkillEntry(\n 
           skill=skill,\n            metadata=metadata,\n            user_invocable=user_invocable,\n        )\n"
  },
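  {
    "path": "examples/skills_loader_usage.py",
    "content": "\"\"\"\nUsage sketch for agent.skills.loader (a hypothetical example module, not part\nof the project). Creates a throwaway skills directory, drops a SKILL.md into a\nsubfolder, and runs discovery over it.\n\"\"\"\n\nimport os\nimport tempfile\n\nfrom agent.skills.loader import SkillLoader\n\nif __name__ == '__main__':\n    with tempfile.TemporaryDirectory() as root:\n        skill_dir = os.path.join(root, 'hello')\n        os.makedirs(skill_dir)\n        with open(os.path.join(skill_dir, 'SKILL.md'), 'w', encoding='utf-8') as f:\n            f.write('---\\nname: hello\\ndescription: Say hello\\n---\\nBody\\n')\n\n        # Root-level .md files and nested SKILL.md files are both discovered.\n        result = SkillLoader().load_skills_from_dir(root, source='custom')\n        for skill in result.skills:\n            print(skill.name, '->', skill.base_dir)  # hello -> .../hello\n        print('diagnostics:', result.diagnostics)\n"
  },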
  {
    "path": "agent/skills/manager.py",
    "content": "\"\"\"\nSkill manager for managing skill lifecycle and operations.\n\"\"\"\n\nimport os\nimport json\nfrom typing import Dict, List, Optional\nfrom pathlib import Path\nfrom common.log import logger\nfrom agent.skills.types import Skill, SkillEntry, SkillSnapshot\nfrom agent.skills.loader import SkillLoader\nfrom agent.skills.formatter import format_skill_entries_for_prompt\n\nSKILLS_CONFIG_FILE = \"skills_config.json\"\n\n\nclass SkillManager:\n    \"\"\"Manages skills for an agent.\"\"\"\n\n    def __init__(\n        self,\n        builtin_dir: Optional[str] = None,\n        custom_dir: Optional[str] = None,\n        config: Optional[Dict] = None,\n    ):\n        \"\"\"\n        Initialize the skill manager.\n\n        :param builtin_dir: Built-in skills directory (project root ``skills/``)\n        :param custom_dir: Custom skills directory (workspace ``skills/``)\n        :param config: Configuration dictionary\n        \"\"\"\n        project_root = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))\n        self.builtin_dir = builtin_dir or os.path.join(project_root, 'skills')\n        self.custom_dir = custom_dir or os.path.join(project_root, 'workspace', 'skills')\n        self.config = config or {}\n        self._skills_config_path = os.path.join(self.custom_dir, SKILLS_CONFIG_FILE)\n\n        # skills_config: full skill metadata keyed by name\n        # { \"web-fetch\": {\"name\": ..., \"description\": ..., \"source\": ..., \"enabled\": true}, ... }\n        self.skills_config: Dict[str, dict] = {}\n\n        self.loader = SkillLoader()\n        self.skills: Dict[str, SkillEntry] = {}\n\n        # Load skills on initialization\n        self.refresh_skills()\n\n    def refresh_skills(self):\n        \"\"\"Reload all skills from builtin and custom directories, then sync config.\"\"\"\n        self.skills = self.loader.load_all_skills(\n            builtin_dir=self.builtin_dir,\n            custom_dir=self.custom_dir,\n        )\n        self._sync_skills_config()\n        logger.debug(f\"SkillManager: Loaded {len(self.skills)} skills\")\n\n    # ------------------------------------------------------------------\n    # skills_config.json management\n    # ------------------------------------------------------------------\n    def _load_skills_config(self) -> Dict[str, dict]:\n        \"\"\"Load skills_config.json from custom_dir. 
Returns empty dict if not found.\"\"\"\n        if not os.path.exists(self._skills_config_path):\n            return {}\n        try:\n            with open(self._skills_config_path, \"r\", encoding=\"utf-8\") as f:\n                data = json.load(f)\n            if isinstance(data, dict):\n                return data\n        except Exception as e:\n            logger.warning(f\"[SkillManager] Failed to load {SKILLS_CONFIG_FILE}: {e}\")\n        return {}\n\n    def _save_skills_config(self):\n        \"\"\"Persist skills_config to custom_dir/skills_config.json.\"\"\"\n        os.makedirs(self.custom_dir, exist_ok=True)\n        try:\n            with open(self._skills_config_path, \"w\", encoding=\"utf-8\") as f:\n                json.dump(self.skills_config, f, indent=4, ensure_ascii=False)\n        except Exception as e:\n            logger.error(f\"[SkillManager] Failed to save {SKILLS_CONFIG_FILE}: {e}\")\n\n    def _sync_skills_config(self):\n        \"\"\"\n        Merge directory-scanned skills with the persisted config file.\n\n        - New skills discovered on disk are added with enabled=True.\n        - Skills that no longer exist on disk are removed.\n        - Existing entries preserve their enabled state; name/description/source\n          are refreshed from the latest scan.\n        \"\"\"\n        saved = self._load_skills_config()\n        merged: Dict[str, dict] = {}\n\n        for name, entry in self.skills.items():\n            skill = entry.skill\n            prev = saved.get(name, {})\n            # category priority: persisted config (set by cloud) > default \"skill\"\n            category = prev.get(\"category\", \"skill\")\n            merged[name] = {\n                \"name\": name,\n                \"description\": skill.description,\n                \"source\": skill.source,\n                \"enabled\": prev.get(\"enabled\", True),\n                \"category\": category,\n            }\n\n        self.skills_config = merged\n        self._save_skills_config()\n\n    def is_skill_enabled(self, name: str) -> bool:\n        \"\"\"\n        Check if a skill is enabled according to skills_config.\n\n        :param name: skill name\n        :return: True if enabled (default True if not in config)\n        \"\"\"\n        entry = self.skills_config.get(name)\n        if entry is None:\n            return True\n        return entry.get(\"enabled\", True)\n\n    def set_skill_enabled(self, name: str, enabled: bool):\n        \"\"\"\n        Set a skill's enabled state and persist.\n\n        :param name: skill name\n        :param enabled: True to enable, False to disable\n        \"\"\"\n        if name not in self.skills_config:\n            raise ValueError(f\"skill '{name}' not found in config\")\n        self.skills_config[name][\"enabled\"] = enabled\n        self._save_skills_config()\n\n    def get_skills_config(self) -> Dict[str, dict]:\n        \"\"\"\n        Return the full skills_config dict (for query API).\n\n        :return: copy of skills_config\n        \"\"\"\n        return dict(self.skills_config)\n    \n    def get_skill(self, name: str) -> Optional[SkillEntry]:\n        \"\"\"\n        Get a skill by name.\n        \n        :param name: Skill name\n        :return: SkillEntry or None if not found\n        \"\"\"\n        return self.skills.get(name)\n    \n    def list_skills(self) -> List[SkillEntry]:\n        \"\"\"\n        Get all loaded skills.\n        \n        :return: List of all skill entries\n        \"\"\"\n        return 
list(self.skills.values())\n    \n    def filter_skills(\n        self,\n        skill_filter: Optional[List[str]] = None,\n        include_disabled: bool = False,\n    ) -> List[SkillEntry]:\n        \"\"\"\n        Filter skills based on criteria.\n\n        Simple rule: Skills are auto-enabled if requirements are met.\n        - Has required API keys -> included\n        - Missing API keys -> excluded\n\n        :param skill_filter: List of skill names to include (None = all)\n        :param include_disabled: Whether to include disabled skills\n        :return: Filtered list of skill entries\n        \"\"\"\n        from agent.skills.config import should_include_skill\n\n        entries = list(self.skills.values())\n\n        # Check requirements (platform, binaries, env vars)\n        entries = [e for e in entries if should_include_skill(e, self.config)]\n\n        # Apply skill filter\n        if skill_filter is not None:\n            normalized = []\n            for item in skill_filter:\n                if isinstance(item, str):\n                    name = item.strip()\n                    if name:\n                        normalized.append(name)\n                elif isinstance(item, list):\n                    for subitem in item:\n                        if isinstance(subitem, str):\n                            name = subitem.strip()\n                            if name:\n                                normalized.append(name)\n            if normalized:\n                entries = [e for e in entries if e.skill.name in normalized]\n\n        # Filter out disabled skills based on skills_config.json\n        if not include_disabled:\n            entries = [e for e in entries if self.is_skill_enabled(e.skill.name)]\n\n        return entries\n    \n    def build_skills_prompt(\n        self,\n        skill_filter: Optional[List[str]] = None,\n    ) -> str:\n        \"\"\"\n        Build a formatted prompt containing available skills.\n        \n        :param skill_filter: Optional list of skill names to include\n        :return: Formatted skills prompt\n        \"\"\"\n        from common.log import logger\n        entries = self.filter_skills(skill_filter=skill_filter, include_disabled=False)\n        logger.debug(f\"[SkillManager] Filtered {len(entries)} skills for prompt (total: {len(self.skills)})\")\n        if entries:\n            skill_names = [e.skill.name for e in entries]\n            logger.debug(f\"[SkillManager] Skills to include: {skill_names}\")\n        result = format_skill_entries_for_prompt(entries)\n        logger.debug(f\"[SkillManager] Generated prompt length: {len(result)}\")\n        return result\n    \n    def build_skill_snapshot(\n        self,\n        skill_filter: Optional[List[str]] = None,\n        version: Optional[int] = None,\n    ) -> SkillSnapshot:\n        \"\"\"\n        Build a snapshot of skills for a specific run.\n        \n        :param skill_filter: Optional list of skill names to include\n        :param version: Optional version number for the snapshot\n        :return: SkillSnapshot\n        \"\"\"\n        entries = self.filter_skills(skill_filter=skill_filter, include_disabled=False)\n        prompt = format_skill_entries_for_prompt(entries)\n        \n        skills_info = []\n        resolved_skills = []\n        \n        for entry in entries:\n            skills_info.append({\n                'name': entry.skill.name,\n                'primary_env': entry.metadata.primary_env if entry.metadata else None,\n            })\n       
     resolved_skills.append(entry.skill)\n        \n        return SkillSnapshot(\n            prompt=prompt,\n            skills=skills_info,\n            resolved_skills=resolved_skills,\n            version=version,\n        )\n    \n    def sync_skills_to_workspace(self, target_workspace_dir: str):\n        \"\"\"\n        Sync all loaded skills to a target workspace directory.\n        \n        This is useful for sandbox environments where skills need to be copied.\n        \n        :param target_workspace_dir: Target workspace directory\n        \"\"\"\n        import shutil\n        \n        target_skills_dir = os.path.join(target_workspace_dir, 'skills')\n        \n        # Remove existing skills directory\n        if os.path.exists(target_skills_dir):\n            shutil.rmtree(target_skills_dir)\n        \n        # Create new skills directory\n        os.makedirs(target_skills_dir, exist_ok=True)\n        \n        # Copy each skill\n        for entry in self.skills.values():\n            skill_name = entry.skill.name\n            source_dir = entry.skill.base_dir\n            target_dir = os.path.join(target_skills_dir, skill_name)\n            \n            try:\n                shutil.copytree(source_dir, target_dir)\n                logger.debug(f\"Synced skill '{skill_name}' to {target_dir}\")\n            except Exception as e:\n                logger.warning(f\"Failed to sync skill '{skill_name}': {e}\")\n        \n        logger.info(f\"Synced {len(self.skills)} skills to {target_skills_dir}\")\n    \n    def get_skill_by_key(self, skill_key: str) -> Optional[SkillEntry]:\n        \"\"\"\n        Get a skill by its skill key (which may differ from name).\n        \n        :param skill_key: Skill key to look up\n        :return: SkillEntry or None\n        \"\"\"\n        for entry in self.skills.values():\n            if entry.metadata and entry.metadata.skill_key == skill_key:\n                return entry\n            if entry.skill.name == skill_key:\n                return entry\n        return None\n"
  },
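  {
    "path": "examples/skills_manager_usage.py",
    "content": "\"\"\"\nUsage sketch for agent.skills.manager (a hypothetical example module, not part\nof the project). Lists discovered skills, shows the enable/disable switch that\nis persisted to skills_config.json, and builds the skills prompt block;\n'web-fetch' below is a placeholder skill name.\n\"\"\"\n\nfrom agent.skills.manager import SkillManager\n\nif __name__ == '__main__':\n    # Defaults: builtin <project>/skills, custom <project>/workspace/skills.\n    manager = SkillManager()\n\n    for entry in manager.list_skills():\n        skill = entry.skill\n        print(skill.name, skill.source, manager.is_skill_enabled(skill.name))\n\n    # set_skill_enabled() persists to workspace/skills/skills_config.json and\n    # raises ValueError for unknown names, so guard with the loaded config:\n    if 'web-fetch' in manager.skills_config:\n        manager.set_skill_enabled('web-fetch', False)\n\n    print(manager.build_skills_prompt())\n"
  },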
  {
    "path": "agent/skills/service.py",
    "content": "\"\"\"\nSkill service for handling skill CRUD operations.\n\nThis service provides a unified interface for managing skills, which can be\ncalled from the cloud control client (LinkAI), the local web console, or any\nother management entry point.\n\"\"\"\n\nimport os\nimport shutil\nimport zipfile\nimport tempfile\nfrom typing import Dict, List, Optional\nfrom common.log import logger\nfrom agent.skills.types import Skill, SkillEntry\nfrom agent.skills.manager import SkillManager\n\ntry:\n    import requests\nexcept ImportError:\n    requests = None\n\n\nclass SkillService:\n    \"\"\"\n    High-level service for skill lifecycle management.\n    Wraps SkillManager and provides network-aware operations such as\n    downloading skill files from remote URLs.\n    \"\"\"\n\n    def __init__(self, skill_manager: SkillManager):\n        \"\"\"\n        :param skill_manager: The SkillManager instance to operate on\n        \"\"\"\n        self.manager = skill_manager\n\n    # ------------------------------------------------------------------\n    # query\n    # ------------------------------------------------------------------\n    def query(self) -> List[dict]:\n        \"\"\"\n        Query all skills and return a serialisable list.\n        Reads from skills_config.json (refreshes from disk if needed).\n\n        :return: list of skill info dicts\n        \"\"\"\n        self.manager.refresh_skills()\n        config = self.manager.get_skills_config()\n        result = list(config.values())\n        logger.info(f\"[SkillService] query: {len(result)} skills found\")\n        return result\n\n    # ------------------------------------------------------------------\n    # add / install\n    # ------------------------------------------------------------------\n    def add(self, payload: dict) -> None:\n        \"\"\"\n        Add (install) a skill from a remote payload.\n\n        Supported payload types:\n\n        1. ``type: \"url\"`` – download individual files::\n\n            {\n                \"name\": \"web_search\",\n                \"type\": \"url\",\n                \"enabled\": true,\n                \"files\": [\n                    {\"url\": \"https://...\", \"path\": \"README.md\"},\n                    {\"url\": \"https://...\", \"path\": \"scripts/main.py\"}\n                ]\n            }\n\n        2. 
``type: \"package\"`` – download a zip archive and extract::\n\n            {\n                \"name\": \"plugin-custom-tool\",\n                \"type\": \"package\",\n                \"category\": \"skills\",\n                \"enabled\": true,\n                \"files\": [{\"url\": \"https://cdn.example.com/skills/custom-tool.zip\"}]\n            }\n\n        :param payload: skill add payload from server\n        \"\"\"\n        name = payload.get(\"name\")\n        if not name:\n            raise ValueError(\"skill name is required\")\n\n        payload_type = payload.get(\"type\", \"url\")\n\n        if payload_type == \"package\":\n            self._add_package(name, payload)\n        else:\n            self._add_url(name, payload)\n\n        self.manager.refresh_skills()\n\n        category = payload.get(\"category\")\n        if category and name in self.manager.skills_config:\n            self.manager.skills_config[name][\"category\"] = category\n            self.manager._save_skills_config()\n\n    def _add_url(self, name: str, payload: dict) -> None:\n        \"\"\"Install a skill by downloading individual files.\"\"\"\n        files = payload.get(\"files\", [])\n        if not files:\n            raise ValueError(\"skill files list is empty\")\n\n        skill_dir = os.path.join(self.manager.custom_dir, name)\n\n        tmp_dir = skill_dir + \".tmp\"\n        if os.path.exists(tmp_dir):\n            shutil.rmtree(tmp_dir)\n        os.makedirs(tmp_dir, exist_ok=True)\n\n        try:\n            for file_info in files:\n                url = file_info.get(\"url\")\n                rel_path = file_info.get(\"path\")\n                if not url or not rel_path:\n                    logger.warning(f\"[SkillService] add: skip invalid file entry {file_info}\")\n                    continue\n                dest = os.path.join(tmp_dir, rel_path)\n                self._download_file(url, dest)\n        except Exception:\n            shutil.rmtree(tmp_dir, ignore_errors=True)\n            raise\n\n        if os.path.exists(skill_dir):\n            shutil.rmtree(skill_dir)\n        os.rename(tmp_dir, skill_dir)\n\n        logger.info(f\"[SkillService] add: skill '{name}' installed via url ({len(files)} files)\")\n\n    def _add_package(self, name: str, payload: dict) -> None:\n        \"\"\"\n        Install a skill by downloading a zip archive and extracting it.\n\n        If the archive contains a single top-level directory, that directory\n        is used as the skill folder directly; otherwise a new directory named\n        after the skill is created to hold the extracted contents.\n        \"\"\"\n        files = payload.get(\"files\", [])\n        if not files or not files[0].get(\"url\"):\n            raise ValueError(\"package url is required\")\n\n        url = files[0][\"url\"]\n        skill_dir = os.path.join(self.manager.custom_dir, name)\n\n        with tempfile.TemporaryDirectory() as tmp_dir:\n            zip_path = os.path.join(tmp_dir, \"package.zip\")\n            self._download_file(url, zip_path)\n\n            if not zipfile.is_zipfile(zip_path):\n                raise ValueError(f\"downloaded file is not a valid zip archive: {url}\")\n\n            extract_dir = os.path.join(tmp_dir, \"extracted\")\n            with zipfile.ZipFile(zip_path, \"r\") as zf:\n                zf.extractall(extract_dir)\n\n            # Determine the actual content root.\n            # If the zip has a single top-level directory, use its contents\n            # so the skill folder is 
clean (no extra nesting).\n            top_items = [\n                item for item in os.listdir(extract_dir)\n                if not item.startswith(\".\")\n            ]\n            if len(top_items) == 1:\n                single = os.path.join(extract_dir, top_items[0])\n                if os.path.isdir(single):\n                    extract_dir = single\n\n            if os.path.exists(skill_dir):\n                shutil.rmtree(skill_dir)\n            shutil.copytree(extract_dir, skill_dir)\n\n        logger.info(f\"[SkillService] add: skill '{name}' installed via package ({url})\")\n\n    # ------------------------------------------------------------------\n    # open / close (enable / disable)\n    # ------------------------------------------------------------------\n    def open(self, payload: dict) -> None:\n        \"\"\"\n        Enable a skill by name.\n\n        :param payload: {\"name\": \"skill_name\"}\n        \"\"\"\n        name = payload.get(\"name\")\n        if not name:\n            raise ValueError(\"skill name is required\")\n        self.manager.set_skill_enabled(name, enabled=True)\n        logger.info(f\"[SkillService] open: skill '{name}' enabled\")\n\n    def close(self, payload: dict) -> None:\n        \"\"\"\n        Disable a skill by name.\n\n        :param payload: {\"name\": \"skill_name\"}\n        \"\"\"\n        name = payload.get(\"name\")\n        if not name:\n            raise ValueError(\"skill name is required\")\n        self.manager.set_skill_enabled(name, enabled=False)\n        logger.info(f\"[SkillService] close: skill '{name}' disabled\")\n\n    # ------------------------------------------------------------------\n    # delete\n    # ------------------------------------------------------------------\n    def delete(self, payload: dict) -> None:\n        \"\"\"\n        Delete a skill by removing its directory entirely.\n\n        :param payload: {\"name\": \"skill_name\"}\n        \"\"\"\n        name = payload.get(\"name\")\n        if not name:\n            raise ValueError(\"skill name is required\")\n\n        skill_dir = os.path.join(self.manager.custom_dir, name)\n        if os.path.exists(skill_dir):\n            shutil.rmtree(skill_dir)\n            logger.info(f\"[SkillService] delete: removed directory {skill_dir}\")\n        else:\n            logger.warning(f\"[SkillService] delete: skill directory not found: {skill_dir}\")\n\n        # Refresh will remove the deleted skill from config automatically\n        self.manager.refresh_skills()\n        logger.info(f\"[SkillService] delete: skill '{name}' deleted\")\n\n    # ------------------------------------------------------------------\n    # dispatch - single entry point for protocol messages\n    # ------------------------------------------------------------------\n    def dispatch(self, action: str, payload: Optional[dict] = None) -> dict:\n        \"\"\"\n        Dispatch a skill management action and return a protocol-compatible\n        response dict.\n\n        :param action: one of query / add / open / close / delete\n        :param payload: action-specific payload (may be None for query)\n        :return: dict with action, code, message, payload\n        \"\"\"\n        payload = payload or {}\n        try:\n            if action == \"query\":\n                result_payload = self.query()\n                return {\"action\": action, \"code\": 200, \"message\": \"success\", \"payload\": result_payload}\n            elif action == \"add\":\n                
self.add(payload)\n            elif action == \"open\":\n                self.open(payload)\n            elif action == \"close\":\n                self.close(payload)\n            elif action == \"delete\":\n                self.delete(payload)\n            else:\n                return {\"action\": action, \"code\": 400, \"message\": f\"unknown action: {action}\", \"payload\": None}\n            return {\"action\": action, \"code\": 200, \"message\": \"success\", \"payload\": None}\n        except Exception as e:\n            logger.error(f\"[SkillService] dispatch error: action={action}, error={e}\")\n            return {\"action\": action, \"code\": 500, \"message\": str(e), \"payload\": None}\n\n    # ------------------------------------------------------------------\n    # internal helpers\n    # ------------------------------------------------------------------\n    @staticmethod\n    def _download_file(url: str, dest: str):\n        \"\"\"\n        Download a file from *url* and save to *dest*.\n\n        :param url: remote file URL\n        :param dest: local destination path\n        \"\"\"\n        if requests is None:\n            raise RuntimeError(\"requests library is required for downloading skill files\")\n\n        dest_dir = os.path.dirname(dest)\n        if dest_dir:\n            os.makedirs(dest_dir, exist_ok=True)\n\n        resp = requests.get(url, timeout=60)\n        resp.raise_for_status()\n        with open(dest, \"wb\") as f:\n            f.write(resp.content)\n        logger.debug(f\"[SkillService] downloaded {url} -> {dest}\")\n"
  },
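  {
    "path": "examples/skills_service_usage.py",
    "content": "\"\"\"\nUsage sketch for agent.skills.service (a hypothetical example module, not part\nof the project). Shows the dispatch() envelope used by management entry\npoints; the package URL below is a placeholder.\n\"\"\"\n\nfrom agent.skills.manager import SkillManager\nfrom agent.skills.service import SkillService\n\nif __name__ == '__main__':\n    service = SkillService(SkillManager())\n\n    # Every action returns {\"action\", \"code\", \"message\", \"payload\"}.\n    print(service.dispatch('query'))\n\n    # Install payload shape for a zip package; dispatch() downloads and\n    # extracts it, returning a code-500 envelope if the download fails.\n    add_payload = {\n        'name': 'custom-tool',\n        'type': 'package',\n        'files': [{'url': 'https://example.com/skills/custom-tool.zip'}],\n    }\n    print(service.dispatch('add', add_payload))\n"
  },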
  {
    "path": "agent/skills/types.py",
    "content": "\"\"\"\nType definitions for skills system.\n\"\"\"\n\nfrom __future__ import annotations\nfrom typing import Dict, List, Optional, Any\nfrom dataclasses import dataclass, field\n\n\n@dataclass\nclass SkillInstallSpec:\n    \"\"\"Specification for installing skill dependencies.\"\"\"\n    kind: str  # brew, pip, npm, download, etc.\n    id: Optional[str] = None\n    label: Optional[str] = None\n    bins: List[str] = field(default_factory=list)\n    os: List[str] = field(default_factory=list)\n    formula: Optional[str] = None  # for brew\n    package: Optional[str] = None  # for pip/npm\n    module: Optional[str] = None\n    url: Optional[str] = None  # for download\n    archive: Optional[str] = None\n    extract: bool = False\n    strip_components: Optional[int] = None\n    target_dir: Optional[str] = None\n\n\n@dataclass\nclass SkillMetadata:\n    \"\"\"Metadata for a skill from frontmatter.\"\"\"\n    always: bool = False  # Always include this skill\n    skill_key: Optional[str] = None  # Override skill key\n    primary_env: Optional[str] = None  # Primary environment variable\n    emoji: Optional[str] = None\n    homepage: Optional[str] = None\n    os: List[str] = field(default_factory=list)  # Supported OS platforms\n    requires: Dict[str, List[str]] = field(default_factory=dict)  # Requirements\n    install: List[SkillInstallSpec] = field(default_factory=list)\n\n\n@dataclass\nclass Skill:\n    \"\"\"Represents a skill loaded from a markdown file.\"\"\"\n    name: str\n    description: str\n    file_path: str\n    base_dir: str\n    source: str  # builtin or custom\n    content: str  # Full markdown content\n    disable_model_invocation: bool = False\n    frontmatter: Dict[str, Any] = field(default_factory=dict)\n\n\n@dataclass\nclass SkillEntry:\n    \"\"\"A skill with parsed metadata.\"\"\"\n    skill: Skill\n    metadata: Optional[SkillMetadata] = None\n    user_invocable: bool = True  # Can users invoke this skill directly\n\n\n@dataclass\nclass LoadSkillsResult:\n    \"\"\"Result of loading skills from a directory.\"\"\"\n    skills: List[Skill]\n    diagnostics: List[str] = field(default_factory=list)\n\n\n@dataclass\nclass SkillSnapshot:\n    \"\"\"Snapshot of skills for a specific run.\"\"\"\n    prompt: str  # Formatted prompt text\n    skills: List[Dict[str, str]]  # List of skill info (name, primary_env)\n    resolved_skills: List[Skill] = field(default_factory=list)\n    version: Optional[int] = None\n"
  },
  {
    "path": "agent/tools/__init__.py",
    "content": "# Import base tool\nfrom agent.tools.base_tool import BaseTool\nfrom agent.tools.tool_manager import ToolManager\n\n# Import file operation tools\nfrom agent.tools.read.read import Read\nfrom agent.tools.write.write import Write\nfrom agent.tools.edit.edit import Edit\nfrom agent.tools.bash.bash import Bash\nfrom agent.tools.ls.ls import Ls\nfrom agent.tools.send.send import Send\n\n# Import memory tools\nfrom agent.tools.memory.memory_search import MemorySearchTool\nfrom agent.tools.memory.memory_get import MemoryGetTool\n\n# Import tools with optional dependencies\ndef _import_optional_tools():\n    \"\"\"Import tools that have optional dependencies\"\"\"\n    from common.log import logger\n    tools = {}\n    \n    # EnvConfig Tool (requires python-dotenv)\n    try:\n        from agent.tools.env_config.env_config import EnvConfig\n        tools['EnvConfig'] = EnvConfig\n    except ImportError as e:\n        logger.error(\n            f\"[Tools] EnvConfig tool not loaded - missing dependency: {e}\\n\"\n            f\"  To enable environment variable management, run:\\n\"\n            f\"    pip install python-dotenv>=1.0.0\"\n        )\n    except Exception as e:\n        logger.error(f\"[Tools] EnvConfig tool failed to load: {e}\")\n    \n    # Scheduler Tool (requires croniter)\n    try:\n        from agent.tools.scheduler.scheduler_tool import SchedulerTool\n        tools['SchedulerTool'] = SchedulerTool\n    except ImportError as e:\n        logger.error(\n            f\"[Tools] Scheduler tool not loaded - missing dependency: {e}\\n\"\n            f\"  To enable scheduled tasks, run:\\n\"\n            f\"    pip install croniter>=2.0.0\"\n        )\n    except Exception as e:\n        logger.error(f\"[Tools] Scheduler tool failed to load: {e}\")\n\n    # WebSearch Tool (conditionally loaded based on API key availability at init time)\n    try:\n        from agent.tools.web_search.web_search import WebSearch\n        tools['WebSearch'] = WebSearch\n    except ImportError as e:\n        logger.error(f\"[Tools] WebSearch not loaded - missing dependency: {e}\")\n    except Exception as e:\n        logger.error(f\"[Tools] WebSearch failed to load: {e}\")\n\n    # WebFetch Tool\n    try:\n        from agent.tools.web_fetch.web_fetch import WebFetch\n        tools['WebFetch'] = WebFetch\n    except ImportError as e:\n        logger.error(f\"[Tools] WebFetch not loaded - missing dependency: {e}\")\n    except Exception as e:\n        logger.error(f\"[Tools] WebFetch failed to load: {e}\")\n\n    # Vision Tool (conditionally loaded based on API key availability)\n    try:\n        from agent.tools.vision.vision import Vision\n        tools['Vision'] = Vision\n    except ImportError as e:\n        logger.error(f\"[Tools] Vision not loaded - missing dependency: {e}\")\n    except Exception as e:\n        logger.error(f\"[Tools] Vision failed to load: {e}\")\n\n    return tools\n\n# Load optional tools\n_optional_tools = _import_optional_tools()\nEnvConfig = _optional_tools.get('EnvConfig')\nSchedulerTool = _optional_tools.get('SchedulerTool')\nWebSearch = _optional_tools.get('WebSearch')\nWebFetch = _optional_tools.get('WebFetch')\nVision = _optional_tools.get('Vision')\nGoogleSearch = _optional_tools.get('GoogleSearch')\nFileSave = _optional_tools.get('FileSave')\nTerminal = _optional_tools.get('Terminal')\n\n\n# Delayed import for BrowserTool\ndef _import_browser_tool():\n    try:\n        from agent.tools.browser.browser_tool import BrowserTool\n        return BrowserTool\n   
 except ImportError:\n        # Return a placeholder class that will prompt the user to install dependencies when instantiated\n        class BrowserToolPlaceholder:\n            def __init__(self, *args, **kwargs):\n                raise ImportError(\n                    \"The 'browser-use' package is required to use BrowserTool. \"\n                    \"Please install it with 'pip install browser-use>=0.1.40'.\"\n                )\n\n        return BrowserToolPlaceholder\n\n\n# Dynamically set BrowserTool\n# BrowserTool = _import_browser_tool()\n\n# Export all tools (including optional ones that might be None)\n__all__ = [\n    'BaseTool',\n    'ToolManager',\n    'Read',\n    'Write',\n    'Edit',\n    'Bash',\n    'Ls',\n    'Send',\n    'MemorySearchTool',\n    'MemoryGetTool',\n    'EnvConfig',\n    'SchedulerTool',\n    'WebSearch',\n    'WebFetch',\n    'Vision',\n    # Optional tools (may be None if dependencies not available)\n    # 'BrowserTool'\n]\n\n\"\"\"\nTools module for Agent.\n\"\"\"\n"
  },
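  {
    "path": "examples/tools_optional_usage.py",
    "content": "\"\"\"\nUsage sketch for agent.tools (a hypothetical example module, not part of the\nproject). Optional tools resolve to None when their dependencies are missing,\nso callers should guard before using them.\n\"\"\"\n\nimport agent.tools as tools\n\nif __name__ == '__main__':\n    if tools.EnvConfig is None:\n        print('EnvConfig unavailable: pip install python-dotenv>=1.0.0')\n    else:\n        # Assumes EnvConfig follows the BaseTool contract and defines `name`.\n        print('EnvConfig loaded:', tools.EnvConfig.name)\n"
  },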
  {
    "path": "agent/tools/base_tool.py",
    "content": "from enum import Enum\nfrom typing import Any, Optional\nfrom common.log import logger\nimport copy\n\n\nclass ToolStage(Enum):\n    \"\"\"Enum representing tool decision stages\"\"\"\n    PRE_PROCESS = \"pre_process\"  # Tools that need to be actively selected by the agent\n    POST_PROCESS = \"post_process\"  # Tools that automatically execute after final_answer\n\n\nclass ToolResult:\n    \"\"\"Tool execution result\"\"\"\n    \n    def __init__(self, status: str = None, result: Any = None, ext_data: Any = None):\n        self.status = status\n        self.result = result\n        self.ext_data = ext_data\n\n    @staticmethod\n    def success(result, ext_data: Any = None):\n        return ToolResult(status=\"success\", result=result, ext_data=ext_data)\n\n    @staticmethod\n    def fail(result, ext_data: Any = None):\n        return ToolResult(status=\"error\", result=result, ext_data=ext_data)\n\n\nclass BaseTool:\n    \"\"\"Base class for all tools.\"\"\"\n\n    # Default decision stage is pre-process\n    stage = ToolStage.PRE_PROCESS\n\n    # Class attributes must be inherited\n    name: str = \"base_tool\"\n    description: str = \"Base tool\"\n    params: dict = {}  # Store JSON Schema\n    model: Optional[Any] = None  # LLM model instance, type depends on bot implementation\n\n    @classmethod\n    def get_json_schema(cls) -> dict:\n        \"\"\"Get the standard description of the tool\"\"\"\n        return {\n            \"name\": cls.name,\n            \"description\": cls.description,\n            \"parameters\": cls.params\n        }\n\n    def execute_tool(self, params: dict) -> ToolResult:\n        try:\n            return self.execute(params)\n        except Exception as e:\n            logger.error(e)\n\n    def execute(self, params: dict) -> ToolResult:\n        \"\"\"Specific logic to be implemented by subclasses\"\"\"\n        raise NotImplementedError\n\n    @classmethod\n    def _parse_schema(cls) -> dict:\n        \"\"\"Convert JSON Schema to Pydantic fields\"\"\"\n        fields = {}\n        for name, prop in cls.params[\"properties\"].items():\n            # Convert JSON Schema types to Python types\n            type_map = {\n                \"string\": str,\n                \"number\": float,\n                \"integer\": int,\n                \"boolean\": bool,\n                \"array\": list,\n                \"object\": dict\n            }\n            fields[name] = (\n                type_map[prop[\"type\"]],\n                prop.get(\"default\", ...)\n            )\n        return fields\n\n    def should_auto_execute(self, context) -> bool:\n        \"\"\"\n        Determine if this tool should be automatically executed based on context.\n\n        :param context: The agent context\n        :return: True if the tool should be executed, False otherwise\n        \"\"\"\n        # Only tools in post-process stage will be automatically executed\n        return self.stage == ToolStage.POST_PROCESS\n\n    def close(self):\n        \"\"\"\n        Close any resources used by the tool.\n        This method should be overridden by tools that need to clean up resources\n        such as browser connections, file handles, etc.\n\n        By default, this method does nothing.\n        \"\"\"\n        pass\n"
  },
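  {
    "path": "examples/tools_echo_tool.py",
    "content": "\"\"\"\nUsage sketch for agent.tools.base_tool (a hypothetical example module, not\npart of the project). Implements a trivial echo tool to show the BaseTool\ncontract: declare name/description/params, implement execute(), and return a\nToolResult.\n\"\"\"\n\nfrom agent.tools.base_tool import BaseTool, ToolResult\n\n\nclass EchoTool(BaseTool):\n    \"\"\"Echo the given text back to the caller (illustrative only).\"\"\"\n\n    name: str = 'echo'\n    description: str = 'Echo the provided text back unchanged'\n    params: dict = {\n        'type': 'object',\n        'properties': {\n            'text': {'type': 'string', 'description': 'Text to echo'}\n        },\n        'required': ['text']\n    }\n\n    def execute(self, params: dict) -> ToolResult:\n        text = params.get('text')\n        if not text:\n            return ToolResult.fail('Error: text parameter is required')\n        return ToolResult.success(text)\n\n\nif __name__ == '__main__':\n    tool = EchoTool()\n    print(tool.get_json_schema())  # JSON Schema description handed to the LLM\n    print(tool.execute_tool({'text': 'hi'}).result)  # -> hi\n"
  },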
  {
    "path": "agent/tools/bash/__init__.py",
    "content": "from .bash import Bash\n\n__all__ = ['Bash']\n"
  },
  {
    "path": "agent/tools/bash/bash.py",
    "content": "\"\"\"\nBash tool - Execute bash commands\n\"\"\"\n\nimport os\nimport re\nimport sys\nimport subprocess\nimport tempfile\nfrom typing import Dict, Any\n\nfrom agent.tools.base_tool import BaseTool, ToolResult\nfrom agent.tools.utils.truncate import truncate_tail, format_size, DEFAULT_MAX_LINES, DEFAULT_MAX_BYTES\nfrom common.log import logger\nfrom common.utils import expand_path\n\n\nclass Bash(BaseTool):\n    \"\"\"Tool for executing bash commands\"\"\"\n\n    name: str = \"bash\"\n    description: str = f\"\"\"Execute a bash command in the current working directory. Returns stdout and stderr. Output is truncated to last {DEFAULT_MAX_LINES} lines or {DEFAULT_MAX_BYTES // 1024}KB (whichever is hit first). If truncated, full output is saved to a temp file.\n\nENVIRONMENT: All API keys from env_config are auto-injected. Use $VAR_NAME directly.\n\nSAFETY:\n- Freely create/modify/delete files within the workspace\n- For destructive and out-of-workspace commands, explain and confirm first\"\"\"\n\n    params: dict = {\n        \"type\": \"object\",\n        \"properties\": {\n            \"command\": {\n                \"type\": \"string\",\n                \"description\": \"Bash command to execute\"\n            },\n            \"timeout\": {\n                \"type\": \"integer\",\n                \"description\": \"Timeout in seconds (optional, default: 30)\"\n            }\n        },\n        \"required\": [\"command\"]\n    }\n\n    def __init__(self, config: dict = None):\n        self.config = config or {}\n        self.cwd = self.config.get(\"cwd\", os.getcwd())\n        # Ensure working directory exists\n        if not os.path.exists(self.cwd):\n            os.makedirs(self.cwd, exist_ok=True)\n        self.default_timeout = self.config.get(\"timeout\", 30)\n        # Enable safety mode by default (can be disabled in config)\n        self.safety_mode = self.config.get(\"safety_mode\", True)\n\n    def execute(self, args: Dict[str, Any]) -> ToolResult:\n        \"\"\"\n        Execute a bash command\n        \n        :param args: Dictionary containing the command and optional timeout\n        :return: Command output or error\n        \"\"\"\n        command = args.get(\"command\", \"\").strip()\n        timeout = args.get(\"timeout\", self.default_timeout)\n\n        if not command:\n            return ToolResult.fail(\"Error: command parameter is required\")\n\n        # Security check: Prevent accessing sensitive config files\n        if \"~/.cow/.env\" in command or \"~/.cow\" in command:\n            return ToolResult.fail(\n                \"Error: Access denied. 
API keys and credentials must be accessed through the env_config tool only.\"\n            )\n\n        # Optional safety check - only warn about extremely dangerous commands\n        if self.safety_mode:\n            warning = self._get_safety_warning(command)\n            if warning:\n                return ToolResult.fail(\n                    f\"Safety Warning: {warning}\\n\\nIf you believe this command is safe and necessary, please ask the user for confirmation first, explaining what the command does and why it's needed.\")\n\n        try:\n            # Prepare environment with .env file variables\n            env = os.environ.copy()\n            \n            # Load environment variables from ~/.cow/.env if it exists\n            env_file = expand_path(\"~/.cow/.env\")\n            dotenv_vars = {}\n            if os.path.exists(env_file):\n                try:\n                    from dotenv import dotenv_values\n                    dotenv_vars = dotenv_values(env_file)\n                    env.update(dotenv_vars)\n                    logger.debug(f\"[Bash] Loaded {len(dotenv_vars)} variables from {env_file}\")\n                except ImportError:\n                    logger.debug(\"[Bash] python-dotenv not installed, skipping .env loading\")\n                except Exception as e:\n                    logger.debug(f\"[Bash] Failed to load .env: {e}\")\n\n            # getuid() only exists on Unix-like systems\n            if hasattr(os, 'getuid'):\n                logger.debug(f\"[Bash] Process UID: {os.getuid()}\")\n            else:\n                logger.debug(f\"[Bash] Process User: {os.environ.get('USERNAME', os.environ.get('USER', 'unknown'))}\")\n            \n            # On Windows, convert $VAR references to %VAR% for cmd.exe\n            if sys.platform == \"win32\":\n                env[\"PYTHONIOENCODING\"] = \"utf-8\"\n                command = self._convert_env_vars_for_windows(command, dotenv_vars)\n                if command and not command.strip().lower().startswith(\"chcp\"):\n                    command = f\"chcp 65001 >nul 2>&1 && {command}\"\n\n            # Execute command with inherited environment variables\n            result = subprocess.run(\n                command,\n                shell=True,\n                cwd=self.cwd,\n                stdout=subprocess.PIPE,\n                stderr=subprocess.PIPE,\n                text=True,\n                encoding=\"utf-8\",\n                errors=\"replace\",\n                timeout=timeout,\n                env=env\n            )\n            \n            logger.debug(f\"[Bash] Exit code: {result.returncode}\")\n            logger.debug(f\"[Bash] Stdout length: {len(result.stdout)}\")\n            logger.debug(f\"[Bash] Stderr length: {len(result.stderr)}\")\n            \n            # Workaround for exit code 126 with no output\n            if result.returncode == 126 and not result.stdout and not result.stderr:\n                logger.warning(f\"[Bash] Exit 126 with no output - trying alternative execution method\")\n                # Try using argument list instead of shell=True\n                import shlex\n                try:\n                    parts = shlex.split(command)\n                    if len(parts) > 0:\n                        logger.info(f\"[Bash] Retrying with argument list: {parts[:3]}...\")\n                        retry_result = subprocess.run(\n                            parts,\n                            cwd=self.cwd,\n                            stdout=subprocess.PIPE,\n 
                           stderr=subprocess.PIPE,\n                            text=True,\n                            encoding=\"utf-8\",\n                            errors=\"replace\",\n                            timeout=timeout,\n                            env=env\n                        )\n                        logger.debug(f\"[Bash] Retry exit code: {retry_result.returncode}, stdout: {len(retry_result.stdout)}, stderr: {len(retry_result.stderr)}\")\n                        \n                        # If retry succeeded, use retry result\n                        if retry_result.returncode == 0 or retry_result.stdout or retry_result.stderr:\n                            result = retry_result\n                        else:\n                            # Both attempts failed - check if this is openai-image-vision skill\n                            if 'openai-image-vision' in command or 'vision.sh' in command:\n                                # Create a mock result with helpful error message\n                                from types import SimpleNamespace\n                                result = SimpleNamespace(\n                                    returncode=1,\n                                    stdout='{\"error\": \"图片无法解析\", \"reason\": \"该图片格式可能不受支持，或图片文件存在问题\", \"suggestion\": \"请尝试其他图片\"}',\n                                    stderr=''\n                                )\n                                logger.info(f\"[Bash] Converted exit 126 to user-friendly image error message for vision skill\")\n                except Exception as retry_err:\n                    logger.warning(f\"[Bash] Retry failed: {retry_err}\")\n\n            # Combine stdout and stderr\n            output = result.stdout\n            if result.stderr:\n                output += \"\\n\" + result.stderr\n\n            # Check if we need to save full output to temp file\n            temp_file_path = None\n            total_bytes = len(output.encode('utf-8'))\n\n            if total_bytes > DEFAULT_MAX_BYTES:\n                # Save full output to temp file\n                with tempfile.NamedTemporaryFile(mode='w', delete=False, suffix='.log', prefix='bash-') as f:\n                    f.write(output)\n                    temp_file_path = f.name\n\n            # Apply tail truncation\n            truncation = truncate_tail(output)\n            output_text = truncation.content or \"(no output)\"\n\n            # Build result\n            details = {}\n\n            if truncation.truncated:\n                details[\"truncation\"] = truncation.to_dict()\n                if temp_file_path:\n                    details[\"full_output_path\"] = temp_file_path\n\n                # Build notice\n                start_line = truncation.total_lines - truncation.output_lines + 1\n                end_line = truncation.total_lines\n\n                if truncation.last_line_partial:\n                    # Edge case: last line alone > 30KB\n                    last_line = output.split('\\n')[-1] if output else \"\"\n                    last_line_size = format_size(len(last_line.encode('utf-8')))\n                    output_text += f\"\\n\\n[Showing last {format_size(truncation.output_bytes)} of line {end_line} (line is {last_line_size}). Full output: {temp_file_path}]\"\n                elif truncation.truncated_by == \"lines\":\n                    output_text += f\"\\n\\n[Showing lines {start_line}-{end_line} of {truncation.total_lines}. 
Full output: {temp_file_path}]\"\n                else:\n                    output_text += f\"\\n\\n[Showing lines {start_line}-{end_line} of {truncation.total_lines} ({format_size(DEFAULT_MAX_BYTES)} limit). Full output: {temp_file_path}]\"\n\n            # Check exit code\n            if result.returncode != 0:\n                output_text += f\"\\n\\nCommand exited with code {result.returncode}\"\n                return ToolResult.fail({\n                    \"output\": output_text,\n                    \"exit_code\": result.returncode,\n                    \"details\": details if details else None\n                })\n\n            return ToolResult.success({\n                \"output\": output_text,\n                \"exit_code\": result.returncode,\n                \"details\": details if details else None\n            })\n\n        except subprocess.TimeoutExpired:\n            return ToolResult.fail(f\"Error: Command timed out after {timeout} seconds\")\n        except Exception as e:\n            return ToolResult.fail(f\"Error executing command: {str(e)}\")\n\n    def _get_safety_warning(self, command: str) -> str:\n        \"\"\"\n        Get safety warning for potentially dangerous commands\n        Only warns about extremely dangerous system-level operations\n        \n        :param command: Command to check\n        :return: Warning message if dangerous, empty string if safe\n        \"\"\"\n        cmd_lower = command.lower().strip()\n\n        # Only block extremely dangerous system operations\n        dangerous_patterns = [\n            # System shutdown/reboot\n            (\"shutdown\", \"This command will shut down the system\"),\n            (\"reboot\", \"This command will reboot the system\"),\n            (\"halt\", \"This command will halt the system\"),\n            (\"poweroff\", \"This command will power off the system\"),\n\n            # Critical system modifications\n            (\"rm -rf /\", \"This command will delete the entire filesystem\"),\n            (\"rm -rf /*\", \"This command will delete the entire filesystem\"),\n            (\"dd if=/dev/zero\", \"This command can destroy disk data\"),\n            (\"mkfs\", \"This command will format a filesystem, destroying all data\"),\n            (\"fdisk\", \"This command modifies disk partitions\"),\n\n            # User/system management (only if targeting system users)\n            (\"userdel root\", \"This command will delete the root user\"),\n            (\"passwd root\", \"This command will change the root password\"),\n        ]\n\n        for pattern, warning in dangerous_patterns:\n            if pattern in cmd_lower:\n                return warning\n\n        # Check for recursive deletion outside workspace\n        if \"rm\" in cmd_lower and \"-rf\" in cmd_lower:\n            # Allow deletion within current workspace\n            if not any(path in cmd_lower for path in [\"./\", self.cwd.lower()]):\n                # Check if targeting system directories\n                system_dirs = [\"/bin\", \"/usr\", \"/etc\", \"/var\", \"/home\", \"/root\", \"/sys\", \"/proc\"]\n                if any(sysdir in cmd_lower for sysdir in system_dirs):\n                    return \"This command will recursively delete system directories\"\n\n        return \"\"  # No warning needed\n\n    @staticmethod\n    def _convert_env_vars_for_windows(command: str, dotenv_vars: dict) -> str:\n        \"\"\"\n        Convert bash-style $VAR / ${VAR} references to cmd.exe %VAR% syntax.\n        Only converts variables 
loaded from .env (user-configured API keys etc.)\n        to avoid breaking $PATH, jq expressions, regex, etc.\n        \"\"\"\n        if not dotenv_vars:\n            return command\n\n        def replace_match(m):\n            var_name = m.group(1) or m.group(2)\n            if var_name in dotenv_vars:\n                return f\"%{var_name}%\"\n            return m.group(0)\n\n        return re.sub(r'\\$\\{(\\w+)\\}|\\$(\\w+)', replace_match, command)\n"
  },
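The `_convert_env_vars_for_windows` helper above is self-contained enough to exercise on its own. A minimal, runnable sketch, assuming only the standard `re` module and a hypothetical `.env` variable; it shows that only names loaded from `~/.cow/.env` are rewritten to `%VAR%`, while `$PATH` and other shell-isms pass through unchanged:

```python
# Sketch of the Windows $VAR -> %VAR% rewrite used by the bash tool above.
# Only names present in dotenv_vars are converted; everything else is left
# alone. The sample variable below is hypothetical.
import re

def convert_env_vars_for_windows(command: str, dotenv_vars: dict) -> str:
    if not dotenv_vars:
        return command

    def replace_match(m):
        var_name = m.group(1) or m.group(2)
        if var_name in dotenv_vars:
            return f"%{var_name}%"
        return m.group(0)

    return re.sub(r'\$\{(\w+)\}|\$(\w+)', replace_match, command)

if __name__ == "__main__":
    dotenv_vars = {"OPENAI_API_KEY": "sk-test"}  # hypothetical .env content
    cmd = 'curl -H "Authorization: Bearer $OPENAI_API_KEY" "$PATH/bin"'
    print(convert_env_vars_for_windows(cmd, dotenv_vars))
    # curl -H "Authorization: Bearer %OPENAI_API_KEY%" "$PATH/bin"
```

This gating is why the tool threads `dotenv_vars` through from the `.env` loading step instead of converting every `$VAR` it sees.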
  {
    "path": "agent/tools/browser_tool.py",
    "content": "def copy(self):\n    \"\"\"\n    Special copy method for browser tool to avoid recreating browser instance.\n    \n    :return: A new instance with shared browser reference but unique model\n    \"\"\"\n    new_tool = self.__class__()\n    \n    # Copy essential attributes\n    new_tool.model = self.model\n    new_tool.context = getattr(self, 'context', None)\n    new_tool.config = getattr(self, 'config', None)\n    \n    # Share the browser instance instead of creating a new one\n    if hasattr(self, 'browser'):\n        new_tool.browser = self.browser\n    \n    return new_tool "
  },
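A toy harness illustrating the `copy()` contract above, with `BrowserStub` and the model name as stand-ins (the tool's real class definition is not shown here):

```python
# The clone shares the same browser object instead of launching a new one.
class BrowserStub:
    pass

class BrowserTool:
    def __init__(self):
        self.model = "some-model"    # placeholder
        self.browser = BrowserStub()

    def copy(self):
        new_tool = self.__class__()
        new_tool.model = self.model
        new_tool.browser = self.browser  # shared, not recreated
        return new_tool

t1 = BrowserTool()
t2 = t1.copy()
assert t2.browser is t1.browser  # the expensive browser instance is reused
assert t2 is not t1
```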
  {
    "path": "agent/tools/edit/__init__.py",
    "content": "from .edit import Edit\n\n__all__ = ['Edit']\n"
  },
  {
    "path": "agent/tools/edit/edit.py",
    "content": "\"\"\"\nEdit tool - Precise file editing\nEdit files through exact text replacement\n\"\"\"\n\nimport os\nfrom typing import Dict, Any\n\nfrom agent.tools.base_tool import BaseTool, ToolResult\nfrom common.utils import expand_path\nfrom agent.tools.utils.diff import (\n    strip_bom,\n    detect_line_ending,\n    normalize_to_lf,\n    restore_line_endings,\n    normalize_for_fuzzy_match,\n    fuzzy_find_text,\n    generate_diff_string\n)\n\n\nclass Edit(BaseTool):\n    \"\"\"Tool for precise file editing\"\"\"\n    \n    name: str = \"edit\"\n    description: str = \"Edit a file by replacing exact text, or append to end if oldText is empty. For append: use empty oldText. For replace: oldText must match exactly (including whitespace).\"\n    \n    params: dict = {\n        \"type\": \"object\",\n        \"properties\": {\n            \"path\": {\n                \"type\": \"string\",\n                \"description\": \"Path to the file to edit (relative or absolute)\"\n            },\n            \"oldText\": {\n                \"type\": \"string\",\n                \"description\": \"Text to find and replace. Use empty string to append to end of file. For replacement: must match exactly including whitespace.\"\n            },\n            \"newText\": {\n                \"type\": \"string\",\n                \"description\": \"New text to replace the old text with\"\n            }\n        },\n        \"required\": [\"path\", \"oldText\", \"newText\"]\n    }\n    \n    def __init__(self, config: dict = None):\n        self.config = config or {}\n        self.cwd = self.config.get(\"cwd\", os.getcwd())\n        self.memory_manager = self.config.get(\"memory_manager\", None)\n    \n    def execute(self, args: Dict[str, Any]) -> ToolResult:\n        \"\"\"\n        Execute file edit operation\n        \n        :param args: Contains file path, old text and new text\n        :return: Operation result\n        \"\"\"\n        path = args.get(\"path\", \"\").strip()\n        old_text = args.get(\"oldText\", \"\")\n        new_text = args.get(\"newText\", \"\")\n        \n        if not path:\n            return ToolResult.fail(\"Error: path parameter is required\")\n        \n        # Resolve path\n        absolute_path = self._resolve_path(path)\n        \n        # Check if file exists\n        if not os.path.exists(absolute_path):\n            return ToolResult.fail(f\"Error: File not found: {path}\")\n        \n        # Check if readable/writable\n        if not os.access(absolute_path, os.R_OK | os.W_OK):\n            return ToolResult.fail(f\"Error: File is not readable/writable: {path}\")\n        \n        try:\n            # Read file\n            with open(absolute_path, 'r', encoding='utf-8') as f:\n                raw_content = f.read()\n            \n            # Remove BOM (LLM won't include invisible BOM in oldText)\n            bom, content = strip_bom(raw_content)\n            \n            # Detect original line ending\n            original_ending = detect_line_ending(content)\n            \n            # Normalize to LF\n            normalized_content = normalize_to_lf(content)\n            normalized_old_text = normalize_to_lf(old_text)\n            normalized_new_text = normalize_to_lf(new_text)\n            \n            # Special case: empty oldText means append to end of file\n            if not old_text or not old_text.strip():\n                # Append mode: add newText to the end\n                # Add newline before newText if file doesn't end with 
one\n                if normalized_content and not normalized_content.endswith('\\n'):\n                    new_content = normalized_content + '\\n' + normalized_new_text\n                else:\n                    new_content = normalized_content + normalized_new_text\n                base_content = normalized_content  # For verification\n            else:\n                # Normal edit mode: find and replace\n                # Use fuzzy matching to find old text (try exact match first, then fuzzy match)\n                match_result = fuzzy_find_text(normalized_content, normalized_old_text)\n                \n                if not match_result.found:\n                    return ToolResult.fail(\n                        f\"Error: Could not find the exact text in {path}. \"\n                        \"The old text must match exactly including all whitespace and newlines.\"\n                    )\n                \n                # Calculate occurrence count (use fuzzy normalized content for consistency)\n                fuzzy_content = normalize_for_fuzzy_match(normalized_content)\n                fuzzy_old_text = normalize_for_fuzzy_match(normalized_old_text)\n                occurrences = fuzzy_content.count(fuzzy_old_text)\n                \n                if occurrences > 1:\n                    return ToolResult.fail(\n                        f\"Error: Found {occurrences} occurrences of the text in {path}. \"\n                        \"The text must be unique. Please provide more context to make it unique.\"\n                    )\n                \n                # Execute replacement (use matched text position)\n                base_content = match_result.content_for_replacement\n                new_content = (\n                    base_content[:match_result.index] +\n                    normalized_new_text +\n                    base_content[match_result.index + match_result.match_length:]\n                )\n            \n            # Verify replacement actually changed content\n            if base_content == new_content:\n                return ToolResult.fail(\n                    f\"Error: No changes made to {path}. \"\n                    \"The replacement produced identical content. 
\"\n                    \"This might indicate an issue with special characters or the text not existing as expected.\"\n                )\n            \n            # Restore original line endings\n            final_content = bom + restore_line_endings(new_content, original_ending)\n            \n            # Write file\n            with open(absolute_path, 'w', encoding='utf-8') as f:\n                f.write(final_content)\n            \n            # Generate diff\n            diff_result = generate_diff_string(base_content, new_content)\n            \n            result = {\n                \"message\": f\"Successfully replaced text in {path}\",\n                \"path\": path,\n                \"diff\": diff_result['diff'],\n                \"first_changed_line\": diff_result['first_changed_line']\n            }\n            \n            # Notify memory manager if file is in memory directory\n            if self.memory_manager and \"memory/\" in path:\n                try:\n                    self.memory_manager.mark_dirty()\n                except Exception as e:\n                    # Don't fail the edit if memory notification fails\n                    pass\n            \n            return ToolResult.success(result)\n            \n        except UnicodeDecodeError:\n            return ToolResult.fail(f\"Error: File is not a valid text file (encoding error): {path}\")\n        except PermissionError:\n            return ToolResult.fail(f\"Error: Permission denied accessing {path}\")\n        except Exception as e:\n            return ToolResult.fail(f\"Error editing file: {str(e)}\")\n    \n    def _resolve_path(self, path: str) -> str:\n        \"\"\"\n        Resolve path to absolute path\n        \n        :param path: Relative or absolute path\n        :return: Absolute path\n        \"\"\"\n        # Expand ~ to user home directory\n        path = expand_path(path)\n        if os.path.isabs(path):\n            return path\n        return os.path.abspath(os.path.join(self.cwd, path))\n"
  },
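The uniqueness rule in the edit tool can be isolated into a small check. A sketch, with `normalize` as a deliberately trivial stand-in for `normalize_for_fuzzy_match` (the real normalization is not reproduced here):

```python
# A replacement is rejected when the target text occurs more than once
# after normalization, mirroring the occurrence check above.
def count_occurrences(content: str, old_text: str) -> int:
    normalize = lambda s: " ".join(s.split())  # trivial stand-in
    return normalize(content).count(normalize(old_text))

doc = "alpha\nbeta\nalpha\n"
assert count_occurrences(doc, "alpha") == 2  # ambiguous: the tool errors out
assert count_occurrences(doc, "beta") == 1   # unique: replacement proceeds
```

When the count is greater than one, the tool asks the caller for more surrounding context rather than guessing which occurrence was meant.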
  {
    "path": "agent/tools/env_config/__init__.py",
    "content": "from agent.tools.env_config.env_config import EnvConfig\n\n__all__ = ['EnvConfig']\n"
  },
  {
    "path": "agent/tools/env_config/env_config.py",
    "content": "\"\"\"\nEnvironment Configuration Tool - Manage API keys and environment variables\n\"\"\"\n\nimport os\nimport re\nfrom typing import Dict, Any\nfrom pathlib import Path\n\nfrom agent.tools.base_tool import BaseTool, ToolResult\nfrom common.log import logger\nfrom common.utils import expand_path\n\n\n# API Key 知识库：常见的环境变量及其描述\nAPI_KEY_REGISTRY = {\n    # AI 模型服务\n    \"OPENAI_API_KEY\": \"OpenAI API 密钥 (用于GPT模型、Embedding模型)\",\n    \"GEMINI_API_KEY\": \"Google Gemini API 密钥\",\n    \"CLAUDE_API_KEY\": \"Claude API 密钥 (用于Claude模型)\",\n    \"LINKAI_API_KEY\": \"LinkAI智能体平台 API 密钥，支持多种模型切换\",\n    # 搜索服务\n    \"BOCHA_API_KEY\": \"博查 AI 搜索 API 密钥 \",\n}\n\nclass EnvConfig(BaseTool):\n    \"\"\"Tool for managing environment variables (API keys, etc.)\"\"\"\n    \n    name: str = \"env_config\"\n    description: str = (\n        \"Manage API keys and skill configurations securely. \"\n        \"Use this tool when user wants to configure API keys (like BOCHA_API_KEY, OPENAI_API_KEY), \"\n        \"view configured keys, or manage skill settings. \"\n        \"Actions: 'set' (add/update key), 'get' (view specific key), 'list' (show all configured keys), 'delete' (remove key). \"\n        \"Values are automatically masked for security. Changes take effect immediately via hot reload.\"\n    )\n    \n    params: dict = {\n        \"type\": \"object\",\n        \"properties\": {\n            \"action\": {\n                \"type\": \"string\",\n                \"description\": \"Action to perform: 'set', 'get', 'list', 'delete'\",\n                \"enum\": [\"set\", \"get\", \"list\", \"delete\"]\n            },\n            \"key\": {\n                \"type\": \"string\",\n                \"description\": (\n                    \"Environment variable key name. 
Common keys:\\n\"\n                    \"- OPENAI_API_KEY: OpenAI API (GPT models)\\n\"\n                    \"- OPENAI_API_BASE: OpenAI API base URL\\n\"\n                    \"- CLAUDE_API_KEY: Anthropic Claude API\\n\"\n                    \"- GEMINI_API_KEY: Google Gemini API\\n\"\n                    \"- LINKAI_API_KEY: LinkAI platform\\n\"\n                    \"- BOCHA_API_KEY: Bocha AI search (博查搜索)\\n\"\n                    \"Use exact key names (case-sensitive, all uppercase with underscores)\"\n                )\n            },\n            \"value\": {\n                \"type\": \"string\",\n                \"description\": \"Value to set for the environment variable (for 'set' action)\"\n            }\n        },\n        \"required\": [\"action\"]\n    }\n    \n    def __init__(self, config: dict = None):\n        self.config = config or {}\n        # Store env config in ~/.cow directory (outside workspace for security)\n        self.env_dir = expand_path(\"~/.cow\")\n        self.env_path = os.path.join(self.env_dir, '.env')\n        self.agent_bridge = self.config.get(\"agent_bridge\")  # Reference to AgentBridge for hot reload\n        # Don't create .env file in __init__ to avoid issues during tool discovery\n        # It will be created on first use in execute()\n    \n    def _ensure_env_file(self):\n        \"\"\"Ensure the .env file exists\"\"\"\n        # Create ~/.cow directory if it doesn't exist\n        os.makedirs(self.env_dir, exist_ok=True)\n        \n        if not os.path.exists(self.env_path):\n            Path(self.env_path).touch()\n            logger.info(f\"[EnvConfig] Created .env file at {self.env_path}\")\n    \n    def _mask_value(self, value: str) -> str:\n        \"\"\"Mask sensitive parts of a value for logging\"\"\"\n        if not value or len(value) <= 10:\n            return \"***\"\n        return f\"{value[:6]}***{value[-4:]}\"\n    \n    def _read_env_file(self) -> Dict[str, str]:\n        \"\"\"Read all key-value pairs from .env file\"\"\"\n        env_vars = {}\n        if os.path.exists(self.env_path):\n            with open(self.env_path, 'r', encoding='utf-8') as f:\n                for line in f:\n                    line = line.strip()\n                    # Skip empty lines and comments\n                    if not line or line.startswith('#'):\n                        continue\n                    # Parse KEY=VALUE\n                    match = re.match(r'^([^=]+)=(.*)$', line)\n                    if match:\n                        key, value = match.groups()\n                        env_vars[key.strip()] = value.strip()\n        return env_vars\n    \n    def _write_env_file(self, env_vars: Dict[str, str]):\n        \"\"\"Write all key-value pairs to .env file\"\"\"\n        with open(self.env_path, 'w', encoding='utf-8') as f:\n            f.write(\"# Environment variables for agent skills\\n\")\n            f.write(\"# Auto-managed by env_config tool\\n\\n\")\n            for key, value in sorted(env_vars.items()):\n                f.write(f\"{key}={value}\\n\")\n    \n    def _reload_env(self):\n        \"\"\"Reload environment variables from .env file\"\"\"\n        env_vars = self._read_env_file()\n        for key, value in env_vars.items():\n            os.environ[key] = value\n        logger.debug(f\"[EnvConfig] Reloaded {len(env_vars)} environment variables\")\n    \n    def _refresh_skills(self):\n        \"\"\"Refresh skills after environment variable changes\"\"\"\n        if self.agent_bridge:\n            try:\n       
         # Reload .env file\n                self._reload_env()\n                \n                # Refresh skills in all agent instances\n                refreshed = self.agent_bridge.refresh_all_skills()\n                logger.info(f\"[EnvConfig] Refreshed skills in {refreshed} agent instance(s)\")\n                return True\n            except Exception as e:\n                logger.warning(f\"[EnvConfig] Failed to refresh skills: {e}\")\n                return False\n        return False\n    \n    def execute(self, args: Dict[str, Any]) -> ToolResult:\n        \"\"\"\n        Execute environment configuration operation\n        \n        :param args: Contains action, key, and value parameters\n        :return: Result of the operation\n        \"\"\"\n        # Ensure .env file exists on first use\n        self._ensure_env_file()\n        \n        action = args.get(\"action\")\n        key = args.get(\"key\")\n        value = args.get(\"value\")\n        \n        try:\n            if action == \"set\":\n                if not key or not value:\n                    return ToolResult.fail(\"Error: 'key' and 'value' are required for 'set' action.\")\n                \n                # Read current env vars\n                env_vars = self._read_env_file()\n                \n                # Update the key\n                env_vars[key] = value\n                \n                # Write back to file\n                self._write_env_file(env_vars)\n                \n                # Update current process env\n                os.environ[key] = value\n                \n                logger.info(f\"[EnvConfig] Set {key}={self._mask_value(value)}\")\n                \n                # Try to refresh skills immediately\n                refreshed = self._refresh_skills()\n                \n                result = {\n                    \"message\": f\"Successfully set {key}\",\n                    \"key\": key,\n                    \"value\": self._mask_value(value),\n                }\n                \n                if refreshed:\n                    result[\"note\"] = \"✅ Skills refreshed automatically - changes are now active\"\n                else:\n                    result[\"note\"] = \"⚠️ Skills not refreshed - restart agent to load new skills\"\n                \n                return ToolResult.success(result)\n            \n            elif action == \"get\":\n                if not key:\n                    return ToolResult.fail(\"Error: 'key' is required for 'get' action.\")\n                \n                # Check in file first, then in current env\n                env_vars = self._read_env_file()\n                value = env_vars.get(key) or os.getenv(key)\n                \n                # Get description from registry\n                description = API_KEY_REGISTRY.get(key, \"未知用途的环境变量\")\n                \n                if value is not None:\n                    logger.info(f\"[EnvConfig] Got {key}={self._mask_value(value)}\")\n                    return ToolResult.success({\n                        \"key\": key,\n                        \"value\": self._mask_value(value),\n                        \"description\": description,\n                        \"exists\": True,\n                        \"note\": f\"Value is masked for security. 
In bash, use ${key} directly — it is auto-injected.\"\n                    })\n                else:\n                    return ToolResult.success({\n                        \"key\": key,\n                        \"description\": description,\n                        \"exists\": False,\n                        \"message\": f\"Environment variable '{key}' is not set\"\n                    })\n            \n            elif action == \"list\":\n                env_vars = self._read_env_file()\n                \n                # Build detailed variable list with descriptions\n                variables_with_info = {}\n                for key, value in env_vars.items():\n                    variables_with_info[key] = {\n                        \"value\": self._mask_value(value),\n                        \"description\": API_KEY_REGISTRY.get(key, \"未知用途的环境变量\")\n                    }\n                \n                logger.info(f\"[EnvConfig] Listed {len(env_vars)} environment variables\")\n                \n                if not env_vars:\n                    return ToolResult.success({\n                        \"message\": \"No environment variables configured\",\n                        \"variables\": {},\n                        \"note\": \"常用的 API 密钥可以通过 env_config(action='set', key='KEY_NAME', value='your-key') 来配置\"\n                    })\n                \n                return ToolResult.success({\n                    \"message\": f\"Found {len(env_vars)} environment variable(s)\",\n                    \"variables\": variables_with_info\n                })\n            \n            elif action == \"delete\":\n                if not key:\n                    return ToolResult.fail(\"Error: 'key' is required for 'delete' action.\")\n                \n                # Read current env vars\n                env_vars = self._read_env_file()\n                \n                if key not in env_vars:\n                    return ToolResult.success({\n                        \"message\": f\"Environment variable '{key}' was not set\",\n                        \"key\": key\n                    })\n                \n                # Remove the key\n                del env_vars[key]\n                \n                # Write back to file\n                self._write_env_file(env_vars)\n                \n                # Remove from current process env\n                if key in os.environ:\n                    del os.environ[key]\n                \n                logger.info(f\"[EnvConfig] Deleted {key}\")\n                \n                # Try to refresh skills immediately\n                refreshed = self._refresh_skills()\n                \n                result = {\n                    \"message\": f\"Successfully deleted {key}\",\n                    \"key\": key,\n                }\n                \n                if refreshed:\n                    result[\"note\"] = \"✅ Skills refreshed automatically - changes are now active\"\n                else:\n                    result[\"note\"] = \"⚠️ Skills not refreshed - restart agent to apply changes\"\n                \n                return ToolResult.success(result)\n            \n            else:\n                return ToolResult.fail(f\"Error: Unknown action '{action}'. Use 'set', 'get', 'list', or 'delete'.\")\n        \n        except Exception as e:\n            logger.error(f\"[EnvConfig] Error: {e}\", exc_info=True)\n            return ToolResult.fail(f\"EnvConfig tool error: {str(e)}\")\n"
  },
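The storage format is plain `KEY=VALUE` lines, so the read path and the masking rule round-trip cleanly. A self-contained sketch mirroring `_read_env_file` and `_mask_value` above (the sample key value is made up):

```python
# KEY=VALUE lines; comments and blanks are skipped. Values longer than 10
# characters are masked as first-6 + "***" + last-4.
import re

def parse_env(text: str) -> dict:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        m = re.match(r'^([^=]+)=(.*)$', line)
        if m:
            k, v = m.groups()
            env[k.strip()] = v.strip()
    return env

def mask(value: str) -> str:
    if not value or len(value) <= 10:
        return "***"
    return f"{value[:6]}***{value[-4:]}"

sample = "# managed by env_config\n\nBOCHA_API_KEY=abcdef123456789\n"
env = parse_env(sample)
assert env == {"BOCHA_API_KEY": "abcdef123456789"}
assert mask(env["BOCHA_API_KEY"]) == "abcdef***6789"
```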
  {
    "path": "agent/tools/ls/__init__.py",
    "content": "from .ls import Ls\n\n__all__ = ['Ls']\n"
  },
  {
    "path": "agent/tools/ls/ls.py",
    "content": "\"\"\"\nLs tool - List directory contents\n\"\"\"\n\nimport os\nfrom typing import Dict, Any\n\nfrom agent.tools.base_tool import BaseTool, ToolResult\nfrom agent.tools.utils.truncate import truncate_head, format_size, DEFAULT_MAX_BYTES\nfrom common.utils import expand_path\n\n\nDEFAULT_LIMIT = 500\n\n\nclass Ls(BaseTool):\n    \"\"\"Tool for listing directory contents\"\"\"\n    \n    name: str = \"ls\"\n    description: str = f\"List directory contents. Returns entries sorted alphabetically, with '/' suffix for directories. Includes dotfiles. Output is truncated to {DEFAULT_LIMIT} entries or {DEFAULT_MAX_BYTES // 1024}KB (whichever is hit first).\"\n    \n    params: dict = {\n        \"type\": \"object\",\n        \"properties\": {\n            \"path\": {\n                \"type\": \"string\",\n                \"description\": \"Directory to list. IMPORTANT: Relative paths are based on workspace directory. To access directories outside workspace, use absolute paths starting with ~ or /.\"\n            },\n            \"limit\": {\n                \"type\": \"integer\",\n                \"description\": f\"Maximum number of entries to return (default: {DEFAULT_LIMIT})\"\n            }\n        },\n        \"required\": []\n    }\n    \n    def __init__(self, config: dict = None):\n        self.config = config or {}\n        self.cwd = self.config.get(\"cwd\", os.getcwd())\n    \n    def execute(self, args: Dict[str, Any]) -> ToolResult:\n        \"\"\"\n        Execute directory listing\n        \n        :param args: Listing parameters\n        :return: Directory contents or error\n        \"\"\"\n        path = args.get(\"path\", \".\").strip()\n        limit = args.get(\"limit\", DEFAULT_LIMIT)\n        \n        # Resolve path\n        absolute_path = self._resolve_path(path)\n        \n        # Security check: Prevent accessing sensitive config directory\n        env_config_dir = expand_path(\"~/.cow\")\n        if os.path.abspath(absolute_path) == os.path.abspath(env_config_dir):\n            return ToolResult.fail(\n                \"Error: Access denied. API keys and credentials must be accessed through the env_config tool only.\"\n            )\n        \n        if not os.path.exists(absolute_path):\n            # Provide helpful hint if using relative path\n            if not os.path.isabs(path) and not path.startswith('~'):\n                return ToolResult.fail(\n                    f\"Error: Path not found: {path}\\n\"\n                    f\"Resolved to: {absolute_path}\\n\"\n                    f\"Hint: Relative paths are based on workspace ({self.cwd}). 
For files outside workspace, use absolute paths.\"\n                )\n            return ToolResult.fail(f\"Error: Path not found: {path}\")\n        \n        if not os.path.isdir(absolute_path):\n            return ToolResult.fail(f\"Error: Not a directory: {path}\")\n        \n        try:\n            # Read directory entries\n            entries = os.listdir(absolute_path)\n            \n            # Sort alphabetically (case-insensitive)\n            entries.sort(key=lambda x: x.lower())\n            \n            # Format entries with directory indicators\n            results = []\n            entry_limit_reached = False\n            \n            for entry in entries:\n                if len(results) >= limit:\n                    entry_limit_reached = True\n                    break\n                \n                full_path = os.path.join(absolute_path, entry)\n                \n                try:\n                    if os.path.isdir(full_path):\n                        results.append(entry + '/')\n                    else:\n                        results.append(entry)\n                except Exception:\n                    # Skip entries we can't stat\n                    continue\n            \n            if not results:\n                return ToolResult.success({\"message\": \"(empty directory)\", \"entries\": []})\n            \n            # Format output\n            raw_output = '\\n'.join(results)\n            truncation = truncate_head(raw_output, max_lines=999999)  # Only limit by bytes\n            \n            output = truncation.content\n            details = {}\n            notices = []\n            \n            if entry_limit_reached:\n                notices.append(f\"{limit} entries limit reached. Use limit={limit * 2} for more\")\n                details[\"entry_limit_reached\"] = limit\n            \n            if truncation.truncated:\n                notices.append(f\"{format_size(DEFAULT_MAX_BYTES)} limit reached\")\n                details[\"truncation\"] = truncation.to_dict()\n            \n            if notices:\n                output += f\"\\n\\n[{'. '.join(notices)}]\"\n            \n            return ToolResult.success({\n                \"output\": output,\n                \"entry_count\": len(results),\n                \"details\": details if details else None\n            })\n            \n        except PermissionError:\n            return ToolResult.fail(f\"Error: Permission denied reading directory: {path}\")\n        except Exception as e:\n            return ToolResult.fail(f\"Error listing directory: {str(e)}\")\n    \n    def _resolve_path(self, path: str) -> str:\n        \"\"\"Resolve path to absolute path\"\"\"\n        # Expand ~ to user home directory\n        path = expand_path(path)\n        if os.path.isabs(path):\n            return path\n        return os.path.abspath(os.path.join(self.cwd, path))\n"
  },
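The listing format reduces to a pure function: case-insensitive alphabetical sort plus a `/` suffix for directories, dotfiles included. A sketch with a faked directory set standing in for the real `os.path.isdir` checks:

```python
def format_entries(entries, is_dir):
    out = []
    for name in sorted(entries, key=str.lower):
        out.append(name + '/' if is_dir(name) else name)
    return out

names = [".git", "README.md", "agent", "Config.py"]
dirs = {".git", "agent"}  # hypothetical listing
print(format_entries(names, dirs.__contains__))
# ['.git/', 'agent/', 'Config.py', 'README.md']
```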
  {
    "path": "agent/tools/memory/__init__.py",
    "content": "\"\"\"\nMemory tools for Agent\n\nProvides memory_search and memory_get tools\n\"\"\"\n\nfrom agent.tools.memory.memory_search import MemorySearchTool\nfrom agent.tools.memory.memory_get import MemoryGetTool\n\n__all__ = ['MemorySearchTool', 'MemoryGetTool']\n"
  },
  {
    "path": "agent/tools/memory/memory_get.py",
    "content": "\"\"\"\nMemory get tool\n\nAllows agents to read specific sections from memory files\n\"\"\"\n\nfrom agent.tools.base_tool import BaseTool\n\n\nclass MemoryGetTool(BaseTool):\n    \"\"\"Tool for reading memory file contents\"\"\"\n    \n    name: str = \"memory_get\"\n    description: str = (\n        \"Read specific content from memory files. \"\n        \"Use this to get full context from a memory file or specific line range.\"\n    )\n    params: dict = {\n        \"type\": \"object\",\n        \"properties\": {\n            \"path\": {\n                \"type\": \"string\",\n                \"description\": \"Relative path to the memory file (e.g. 'MEMORY.md', 'memory/2026-01-01.md')\"\n            },\n            \"start_line\": {\n                \"type\": \"integer\",\n                \"description\": \"Starting line number (optional, default: 1)\",\n                \"default\": 1\n            },\n            \"num_lines\": {\n                \"type\": \"integer\",\n                \"description\": \"Number of lines to read (optional, reads all if not specified)\"\n            }\n        },\n        \"required\": [\"path\"]\n    }\n    \n    def __init__(self, memory_manager):\n        \"\"\"\n        Initialize memory get tool\n        \n        Args:\n            memory_manager: MemoryManager instance\n        \"\"\"\n        super().__init__()\n        self.memory_manager = memory_manager\n    \n    def execute(self, args: dict):\n        \"\"\"\n        Execute memory file read\n        \n        Args:\n            args: Dictionary with path, start_line, num_lines\n            \n        Returns:\n            ToolResult with file content\n        \"\"\"\n        from agent.tools.base_tool import ToolResult\n        \n        path = args.get(\"path\")\n        start_line = args.get(\"start_line\", 1)\n        num_lines = args.get(\"num_lines\")\n        \n        if not path:\n            return ToolResult.fail(\"Error: path parameter is required\")\n        \n        try:\n            workspace_dir = self.memory_manager.config.get_workspace()\n            \n            # Auto-prepend memory/ if not present and not absolute path\n            # Exception: MEMORY.md is in the root directory\n            if not path.startswith('memory/') and not path.startswith('/') and path != 'MEMORY.md':\n                path = f'memory/{path}'\n            \n            file_path = workspace_dir / path\n            \n            if not file_path.exists():\n                return ToolResult.fail(f\"Error: File not found: {path}\")\n            \n            content = file_path.read_text(encoding='utf-8')\n            lines = content.split('\\n')\n            \n            # Handle line range\n            if start_line < 1:\n                start_line = 1\n            \n            start_idx = start_line - 1\n            \n            if num_lines:\n                end_idx = start_idx + num_lines\n                selected_lines = lines[start_idx:end_idx]\n            else:\n                selected_lines = lines[start_idx:]\n            \n            result = '\\n'.join(selected_lines)\n            \n            # Add metadata\n            total_lines = len(lines)\n            shown_lines = len(selected_lines)\n            \n            output = [\n                f\"File: {path}\",\n                f\"Lines: {start_line}-{start_line + shown_lines - 1} (total: {total_lines})\",\n                \"\",\n                result\n            ]\n            \n            return 
ToolResult.success('\\n'.join(output))\n            \n        except Exception as e:\n            return ToolResult.fail(f\"Error reading memory file: {str(e)}\")\n"
  },
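The line-range logic in `memory_get` (1-indexed `start_line`, optional `num_lines`), extracted as a pure function for illustration:

```python
from typing import List, Optional

# num_lines=None (or 0) reads to the end, matching the tool's
# `if num_lines:` check above.
def select_lines(content: str, start_line: int = 1,
                 num_lines: Optional[int] = None) -> List[str]:
    lines = content.split('\n')
    start_idx = max(start_line, 1) - 1
    end_idx = start_idx + num_lines if num_lines else None
    return lines[start_idx:end_idx]

text = "l1\nl2\nl3\nl4"
assert select_lines(text, start_line=2, num_lines=2) == ["l2", "l3"]
assert select_lines(text, start_line=3) == ["l3", "l4"]
```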
  {
    "path": "agent/tools/memory/memory_search.py",
    "content": "\"\"\"\nMemory search tool\n\nAllows agents to search their memory using semantic and keyword search\n\"\"\"\n\nfrom typing import Dict, Any, Optional\nfrom agent.tools.base_tool import BaseTool\n\n\nclass MemorySearchTool(BaseTool):\n    \"\"\"Tool for searching agent memory\"\"\"\n    \n    name: str = \"memory_search\"\n    description: str = (\n        \"Search agent's long-term memory using semantic and keyword search. \"\n        \"Use this to recall past conversations, preferences, and knowledge.\"\n    )\n    params: dict = {\n        \"type\": \"object\",\n        \"properties\": {\n            \"query\": {\n                \"type\": \"string\",\n                \"description\": \"Search query (can be natural language question or keywords)\"\n            },\n            \"max_results\": {\n                \"type\": \"integer\",\n                \"description\": \"Maximum number of results to return (default: 10)\",\n                \"default\": 10\n            },\n            \"min_score\": {\n                \"type\": \"number\",\n                \"description\": \"Minimum relevance score (0-1, default: 0.1)\",\n                \"default\": 0.1\n            }\n        },\n        \"required\": [\"query\"]\n    }\n    \n    def __init__(self, memory_manager, user_id: Optional[str] = None):\n        \"\"\"\n        Initialize memory search tool\n        \n        Args:\n            memory_manager: MemoryManager instance\n            user_id: Optional user ID for scoped search\n        \"\"\"\n        super().__init__()\n        self.memory_manager = memory_manager\n        self.user_id = user_id\n    \n    def execute(self, args: dict):\n        \"\"\"\n        Execute memory search\n        \n        Args:\n            args: Dictionary with query, max_results, min_score\n            \n        Returns:\n            ToolResult with formatted search results\n        \"\"\"\n        from agent.tools.base_tool import ToolResult\n        import asyncio\n        \n        query = args.get(\"query\")\n        max_results = args.get(\"max_results\", 10)\n        min_score = args.get(\"min_score\", 0.1)\n        \n        if not query:\n            return ToolResult.fail(\"Error: query parameter is required\")\n        \n        try:\n            # Run async search in sync context\n            results = asyncio.run(self.memory_manager.search(\n                query=query,\n                user_id=self.user_id,\n                max_results=max_results,\n                min_score=min_score,\n                include_shared=True\n            ))\n            \n            if not results:\n                # Return clear message that no memories exist yet\n                # This prevents infinite retry loops\n                return ToolResult.success(\n                    f\"No memories found for '{query}'. \"\n                    f\"This is normal if no memories have been stored yet. \"\n                    f\"You can store new memories by writing to MEMORY.md or memory/YYYY-MM-DD.md files.\"\n                )\n            \n            # Format results\n            output = [f\"Found {len(results)} relevant memories:\\n\"]\n            \n            for i, result in enumerate(results, 1):\n                output.append(f\"\\n{i}. 
{result.path} (lines {result.start_line}-{result.end_line})\")\n                output.append(f\"   Score: {result.score:.3f}\")\n                output.append(f\"   Snippet: {result.snippet}\")\n            \n            return ToolResult.success(\"\\n\".join(output))\n            \n        except Exception as e:\n            return ToolResult.fail(f\"Error searching memory: {str(e)}\")\n"
  },
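`memory_search` bridges an async search API into the synchronous tool interface with `asyncio.run`. A stripped-down sketch, with `FakeMemoryManager` as a hypothetical stub (the real manager's `search` also takes `user_id`, `min_score`, and `include_shared`):

```python
import asyncio

class FakeMemoryManager:
    async def search(self, query: str, max_results: int = 10):
        await asyncio.sleep(0)  # stand-in for an async index lookup
        return [f"hit for {query!r}"][:max_results]

def execute(query: str) -> str:
    # asyncio.run drives the coroutine from a plain synchronous call site
    results = asyncio.run(FakeMemoryManager().search(query))
    if not results:
        return f"No memories found for '{query}'."
    return "\n".join(results)

print(execute("user preferences"))  # hit for 'user preferences'
```

One caveat inherent to this pattern: `asyncio.run` raises `RuntimeError` when called from a thread that already has a running event loop, so it only suits tools invoked from synchronous worker threads.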
  {
    "path": "agent/tools/read/__init__.py",
    "content": "from .read import Read\n\n__all__ = ['Read']\n"
  },
  {
    "path": "agent/tools/read/read.py",
    "content": "\"\"\"\nRead tool - Read file contents\nSupports text files, images (jpg, png, gif, webp), and PDF files\n\"\"\"\n\nimport os\nfrom typing import Dict, Any\nfrom pathlib import Path\n\nfrom agent.tools.base_tool import BaseTool, ToolResult\nfrom agent.tools.utils.truncate import truncate_head, format_size, DEFAULT_MAX_LINES, DEFAULT_MAX_BYTES\nfrom common.utils import expand_path\n\n\nclass Read(BaseTool):\n    \"\"\"Tool for reading file contents\"\"\"\n    \n    name: str = \"read\"\n    description: str = f\"Read or inspect file contents. For text/PDF files, returns content (truncated to {DEFAULT_MAX_LINES} lines or {DEFAULT_MAX_BYTES // 1024}KB). For images/videos/audio, returns metadata only (file info, size, type). Use offset/limit for large text files.\"\n    \n    params: dict = {\n        \"type\": \"object\",\n        \"properties\": {\n            \"path\": {\n                \"type\": \"string\",\n                \"description\": \"Path to the file to read. IMPORTANT: Relative paths are based on workspace directory. To access files outside workspace, use absolute paths starting with ~ or /.\"\n            },\n            \"offset\": {\n                \"type\": \"integer\",\n                \"description\": \"Line number to start reading from (1-indexed, optional). Use negative values to read from end (e.g. -20 for last 20 lines)\"\n            },\n            \"limit\": {\n                \"type\": \"integer\",\n                \"description\": \"Maximum number of lines to read (optional)\"\n            }\n        },\n        \"required\": [\"path\"]\n    }\n    \n    def __init__(self, config: dict = None):\n        self.config = config or {}\n        self.cwd = self.config.get(\"cwd\", os.getcwd())\n        \n        # File type categories\n        self.image_extensions = {'.jpg', '.jpeg', '.png', '.gif', '.webp', '.bmp', '.svg', '.ico'}\n        self.video_extensions = {'.mp4', '.avi', '.mov', '.mkv', '.flv', '.wmv', '.webm', '.m4v'}\n        self.audio_extensions = {'.mp3', '.wav', '.ogg', '.m4a', '.flac', '.aac', '.wma'}\n        self.binary_extensions = {'.exe', '.dll', '.so', '.dylib', '.bin', '.dat', '.db', '.sqlite'}\n        self.archive_extensions = {'.zip', '.tar', '.gz', '.rar', '.7z', '.bz2', '.xz'}\n        self.pdf_extensions = {'.pdf'}\n        self.office_extensions = {'.doc', '.docx', '.xls', '.xlsx', '.ppt', '.pptx'}\n\n        # Readable text formats (will be read with truncation)\n        self.text_extensions = {\n            '.txt', '.md', '.markdown', '.rst', '.log', '.csv', '.tsv', '.json', '.xml', '.yaml', '.yml',\n            '.py', '.js', '.ts', '.java', '.c', '.cpp', '.h', '.hpp', '.go', '.rs', '.rb', '.php',\n            '.html', '.css', '.scss', '.sass', '.less', '.vue', '.jsx', '.tsx',\n            '.sh', '.bash', '.zsh', '.fish', '.ps1', '.bat', '.cmd',\n            '.sql', '.r', '.m', '.swift', '.kt', '.scala', '.clj', '.erl', '.ex',\n            '.dockerfile', '.makefile', '.cmake', '.gradle', '.properties', '.ini', '.conf', '.cfg',\n        }\n    \n    def execute(self, args: Dict[str, Any]) -> ToolResult:\n        \"\"\"\n        Execute file read operation\n        \n        :param args: Contains file path and optional offset/limit parameters\n        :return: File content or error message\n        \"\"\"\n        # Support 'location' as alias for 'path' (LLM may use it from skill listing)\n        path = args.get(\"path\", \"\") or args.get(\"location\", \"\")\n        path = path.strip() if isinstance(path, str) else 
\"\"\n        offset = args.get(\"offset\")\n        limit = args.get(\"limit\")\n\n        if not path:\n            return ToolResult.fail(\"Error: path parameter is required\")\n        \n        # Resolve path\n        absolute_path = self._resolve_path(path)\n        \n        # Security check: Prevent reading sensitive config files\n        env_config_path = expand_path(\"~/.cow/.env\")\n        if os.path.abspath(absolute_path) == os.path.abspath(env_config_path):\n            return ToolResult.fail(\n                \"Error: Access denied. API keys and credentials must be accessed through the env_config tool only.\"\n            )\n        \n        # Check if file exists\n        if not os.path.exists(absolute_path):\n            # Provide helpful hint if using relative path\n            if not os.path.isabs(path) and not path.startswith('~'):\n                return ToolResult.fail(\n                    f\"Error: File not found: {path}\\n\"\n                    f\"Resolved to: {absolute_path}\\n\"\n                    f\"Hint: Relative paths are based on workspace ({self.cwd}). For files outside workspace, use absolute paths.\"\n                )\n            return ToolResult.fail(f\"Error: File not found: {path}\")\n        \n        # Check if readable\n        if not os.access(absolute_path, os.R_OK):\n            return ToolResult.fail(f\"Error: File is not readable: {path}\")\n        \n        # Check file type\n        file_ext = Path(absolute_path).suffix.lower()\n        file_size = os.path.getsize(absolute_path)\n        \n        # Check if image - return metadata for sending\n        if file_ext in self.image_extensions:\n            return self._read_image(absolute_path, file_ext)\n        \n        # Check if video/audio/binary/archive - return metadata only\n        if file_ext in self.video_extensions:\n            return self._return_file_metadata(absolute_path, \"video\", file_size)\n        if file_ext in self.audio_extensions:\n            return self._return_file_metadata(absolute_path, \"audio\", file_size)\n        if file_ext in self.binary_extensions or file_ext in self.archive_extensions:\n            return self._return_file_metadata(absolute_path, \"binary\", file_size)\n        \n        # Check if PDF\n        if file_ext in self.pdf_extensions:\n            return self._read_pdf(absolute_path, path, offset, limit)\n\n        # Check if Office document (.docx, .xlsx, .pptx, etc.)\n        if file_ext in self.office_extensions:\n            return self._read_office(absolute_path, path, file_ext, offset, limit)\n\n        # Read text file (with truncation for large files)\n        return self._read_text(absolute_path, path, offset, limit)\n    \n    def _resolve_path(self, path: str) -> str:\n        \"\"\"\n        Resolve path to absolute path\n        \n        :param path: Relative or absolute path\n        :return: Absolute path\n        \"\"\"\n        # Expand ~ to user home directory\n        path = expand_path(path)\n        if os.path.isabs(path):\n            return path\n        return os.path.abspath(os.path.join(self.cwd, path))\n    \n    def _return_file_metadata(self, absolute_path: str, file_type: str, file_size: int) -> ToolResult:\n        \"\"\"\n        Return file metadata for non-readable files (video, audio, binary, etc.)\n        \n        :param absolute_path: Absolute path to the file\n        :param file_type: Type of file (video, audio, binary, etc.)\n        :param file_size: File size in bytes\n        :return: File 
metadata\n        \"\"\"\n        file_name = Path(absolute_path).name\n        file_ext = Path(absolute_path).suffix.lower()\n        \n        # Determine MIME type\n        mime_types = {\n            # Video\n            '.mp4': 'video/mp4', '.avi': 'video/x-msvideo', '.mov': 'video/quicktime',\n            '.mkv': 'video/x-matroska', '.webm': 'video/webm',\n            # Audio\n            '.mp3': 'audio/mpeg', '.wav': 'audio/wav', '.ogg': 'audio/ogg',\n            '.m4a': 'audio/mp4', '.flac': 'audio/flac',\n            # Binary\n            '.zip': 'application/zip', '.tar': 'application/x-tar',\n            '.gz': 'application/gzip', '.rar': 'application/x-rar-compressed',\n        }\n        mime_type = mime_types.get(file_ext, 'application/octet-stream')\n        \n        result = {\n            \"type\": f\"{file_type}_metadata\",\n            \"file_type\": file_type,\n            \"path\": absolute_path,\n            \"file_name\": file_name,\n            \"mime_type\": mime_type,\n            \"size\": file_size,\n            \"size_formatted\": format_size(file_size),\n            \"message\": f\"{file_type.capitalize()} 文件: {file_name} ({format_size(file_size)})\\n提示: 如果需要发送此文件，请使用 send 工具。\"\n        }\n        \n        return ToolResult.success(result)\n    \n    def _read_image(self, absolute_path: str, file_ext: str) -> ToolResult:\n        \"\"\"\n        Read image file - always return metadata only (images should be sent, not read into context)\n        \n        :param absolute_path: Absolute path to the image file\n        :param file_ext: File extension\n        :return: Result containing image metadata for sending\n        \"\"\"\n        try:\n            # Get file size\n            file_size = os.path.getsize(absolute_path)\n            \n            # Determine MIME type\n            mime_type_map = {\n                '.jpg': 'image/jpeg',\n                '.jpeg': 'image/jpeg',\n                '.png': 'image/png',\n                '.gif': 'image/gif',\n                '.webp': 'image/webp'\n            }\n            mime_type = mime_type_map.get(file_ext, 'image/jpeg')\n            \n            # Return metadata for images (NOT file_to_send - use send tool to actually send)\n            result = {\n                \"type\": \"image_metadata\",\n                \"file_type\": \"image\",\n                \"path\": absolute_path,\n                \"mime_type\": mime_type,\n                \"size\": file_size,\n                \"size_formatted\": format_size(file_size),\n                \"message\": f\"图片文件: {Path(absolute_path).name} ({format_size(file_size)})\\n提示: 如果需要发送此图片，请使用 send 工具。\"\n            }\n            \n            return ToolResult.success(result)\n            \n        except Exception as e:\n            return ToolResult.fail(f\"Error reading image file: {str(e)}\")\n    \n    def _read_text(self, absolute_path: str, display_path: str, offset: int = None, limit: int = None) -> ToolResult:\n        \"\"\"\n        Read text file\n        \n        :param absolute_path: Absolute path to the file\n        :param display_path: Path to display\n        :param offset: Starting line number (1-indexed)\n        :param limit: Maximum number of lines to read\n        :return: File content or error message\n        \"\"\"\n        try:\n            # Check file size first\n            file_size = os.path.getsize(absolute_path)\n            MAX_FILE_SIZE = 50 * 1024 * 1024  # 50MB\n            \n            if file_size > MAX_FILE_SIZE:\n           
# File too large, return metadata only\n                return ToolResult.success({\n                    \"type\": \"file_to_send\",\n                    \"file_type\": \"document\",\n                    \"path\": absolute_path,\n                    \"size\": file_size,\n                    \"size_formatted\": format_size(file_size),\n                    \"message\": f\"文件过大 ({format_size(file_size)} > 50MB)，无法读取内容。文件路径: {absolute_path}\"\n                })\n            \n            # Read file (utf-8-sig strips BOM automatically on Windows)\n            with open(absolute_path, 'r', encoding='utf-8-sig') as f:\n                content = f.read()\n            \n            # Truncate content if too long (20K characters max for model context)\n            MAX_CONTENT_CHARS = 20 * 1024  # 20K characters\n            content_truncated = False\n            if len(content) > MAX_CONTENT_CHARS:\n                content = content[:MAX_CONTENT_CHARS]\n                content_truncated = True\n            \n            all_lines = content.split('\\n')\n            total_file_lines = len(all_lines)\n            \n            # Apply offset (if specified)\n            start_line = 0\n            if offset is not None:\n                if offset < 0:\n                    # Negative offset: read from end\n                    # -20 means \"last 20 lines\" → start from (total - 20)\n                    start_line = max(0, total_file_lines + offset)\n                else:\n                    # Positive offset: read from start (1-indexed)\n                    start_line = max(0, offset - 1)  # Convert to 0-indexed\n                    if start_line >= total_file_lines:\n                        return ToolResult.fail(\n                            f\"Error: Offset {offset} is beyond end of file ({total_file_lines} lines total)\"\n                        )\n            \n            start_line_display = start_line + 1  # For display (1-indexed)\n            \n            # If user specified limit, use it\n            selected_content = content\n            user_limited_lines = None\n            if limit is not None:\n                end_line = min(start_line + limit, total_file_lines)\n                selected_content = '\\n'.join(all_lines[start_line:end_line])\n                user_limited_lines = end_line - start_line\n            elif offset is not None:\n                selected_content = '\\n'.join(all_lines[start_line:])\n            \n            # Apply truncation (considering line count and byte limits)\n            truncation = truncate_head(selected_content)\n            \n            output_text = \"\"\n            details = {}\n            \n            # Add truncation warning if content was truncated\n            if content_truncated:\n                output_text = f\"[文件内容已截断到前 {format_size(MAX_CONTENT_CHARS)}，完整文件大小: {format_size(file_size)}]\\n\\n\"\n            \n            if truncation.first_line_exceeds_limit:\n                # First line exceeds 30KB limit (append so the warning above is preserved)\n                first_line_size = format_size(len(all_lines[start_line].encode('utf-8')))\n                output_text += f\"[Line {start_line_display} is {first_line_size}, exceeds {format_size(DEFAULT_MAX_BYTES)} limit. Use bash tool to read: head -c {DEFAULT_MAX_BYTES} {display_path} | tail -n +{start_line_display}]\"\n                details[\"truncation\"] = truncation.to_dict()\n            elif truncation.truncated:\n                # Truncation occurred (append, don't overwrite the warning above)\n                end_line_display = start_line_display + truncation.output_lines - 1\n                next_offset = end_line_display + 1\n                \n                output_text += truncation.content\n                \n                if truncation.truncated_by == \"lines\":\n                    output_text += f\"\\n\\n[Showing lines {start_line_display}-{end_line_display} of {total_file_lines}. Use offset={next_offset} to continue.]\"\n                else:\n                    output_text += f\"\\n\\n[Showing lines {start_line_display}-{end_line_display} of {total_file_lines} ({format_size(DEFAULT_MAX_BYTES)} limit). Use offset={next_offset} to continue.]\"\n                \n                details[\"truncation\"] = truncation.to_dict()\n            elif user_limited_lines is not None and start_line + user_limited_lines < total_file_lines:\n                # User specified limit, more content available, but no truncation\n                remaining = total_file_lines - (start_line + user_limited_lines)\n                next_offset = start_line + user_limited_lines + 1\n                \n                output_text += truncation.content\n                output_text += f\"\\n\\n[{remaining} more lines in file. Use offset={next_offset} to continue.]\"\n            else:\n                # No truncation, no exceeding user limit\n                output_text += truncation.content\n            \n            result = {\n                \"content\": output_text,\n                \"total_lines\": total_file_lines,\n                \"start_line\": start_line_display,\n                \"output_lines\": truncation.output_lines\n            }\n            \n            if details:\n                result[\"details\"] = details\n            \n            return ToolResult.success(result)\n            \n        except UnicodeDecodeError:\n            return ToolResult.fail(f\"Error: File is not a valid text file (encoding error): {display_path}\")\n        except Exception as e:\n            return ToolResult.fail(f\"Error reading file: {str(e)}\")\n    \n    def _read_office(self, absolute_path: str, display_path: str, file_ext: str,\n                     offset: int = None, limit: int = None) -> ToolResult:\n        \"\"\"Read Office documents (.docx, .xlsx, .pptx) using python-docx / openpyxl / python-pptx.\"\"\"\n        try:\n            text = self._extract_office_text(absolute_path, file_ext)\n        except ImportError as e:\n            return ToolResult.fail(str(e))\n        except Exception as e:\n            return ToolResult.fail(f\"Error reading Office document: {e}\")\n\n        if not text or not text.strip():\n            return ToolResult.success({\n                \"content\": f\"[Office file {Path(absolute_path).name}: no text content could be extracted]\",\n            })\n\n        all_lines = text.split('\\n')\n        total_lines = len(all_lines)\n\n        start_line = 0\n        if offset is not None:\n            if offset < 0:\n                start_line = max(0, total_lines + offset)\n            else:\n                start_line = max(0, offset - 1)\n                if start_line >= total_lines:\n                    return ToolResult.fail(\n                        f\"Error: Offset {offset} is beyond end of content ({total_lines} lines total)\"\n                    )\n\n        selected_content = text\n        user_limited_lines = None\n        if limit is not None:\n            end_line = min(start_line + limit, total_lines)\n            selected_content = '\\n'.join(all_lines[start_line:end_line])\n            user_limited_lines = end_line - start_line\n        elif offset is not None:\n            selected_content = '\\n'.join(all_lines[start_line:])\n\n        truncation = truncate_head(selected_content)\n        start_line_display = start_line + 1\n        output_text = \"\"\n\n        if truncation.truncated:\n            end_line_display = start_line_display + truncation.output_lines - 1\n            next_offset = end_line_display + 1\n            output_text = truncation.content\n            output_text += f\"\\n\\n[Showing lines {start_line_display}-{end_line_display} of {total_lines}. Use offset={next_offset} to continue.]\"\n        elif user_limited_lines is not None and start_line + user_limited_lines < total_lines:\n            remaining = total_lines - (start_line + user_limited_lines)\n            next_offset = start_line + user_limited_lines + 1\n            output_text = truncation.content\n            output_text += f\"\\n\\n[{remaining} more lines in file. Use offset={next_offset} to continue.]\"\n        else:\n            output_text = truncation.content\n\n        return ToolResult.success({\n            \"content\": output_text,\n            \"total_lines\": total_lines,\n            \"start_line\": start_line_display,\n            \"output_lines\": truncation.output_lines,\n        })\n\n    @staticmethod\n    def _extract_office_text(absolute_path: str, file_ext: str) -> str:\n        \"\"\"Extract plain text from an Office document.\"\"\"\n        if file_ext in ('.docx', '.doc'):\n            # NOTE: python-docx only parses OOXML (.docx); a legacy .doc file will raise here\n            # and surface as \"Error reading Office document\" in the caller.\n            try:\n                from docx import Document\n            except ImportError:\n                raise ImportError(\"Error: python-docx library not installed. Install with: pip install python-docx\")\n            doc = Document(absolute_path)\n            paragraphs = [p.text for p in doc.paragraphs]\n            for table in doc.tables:\n                for row in table.rows:\n                    paragraphs.append('\\t'.join(cell.text for cell in row.cells))\n            return '\\n'.join(paragraphs)\n\n        if file_ext in ('.xlsx', '.xls'):\n            # NOTE: openpyxl only reads xlsx-family workbooks; a legacy .xls file will raise here.\n            try:\n                from openpyxl import load_workbook\n            except ImportError:\n                raise ImportError(\"Error: openpyxl library not installed. Install with: pip install openpyxl\")\n            wb = load_workbook(absolute_path, read_only=True, data_only=True)\n            parts = []\n            for ws in wb.worksheets:\n                parts.append(f\"--- Sheet: {ws.title} ---\")\n                for row in ws.iter_rows(values_only=True):\n                    parts.append('\\t'.join(str(c) if c is not None else '' for c in row))\n            wb.close()\n            return '\\n'.join(parts)\n\n        if file_ext in ('.pptx', '.ppt'):\n            # NOTE: python-pptx only reads .pptx; a legacy .ppt file will raise during parsing.\n            try:\n                from pptx import Presentation\n            except ImportError:\n                raise ImportError(\"Error: python-pptx library not installed. 
Install with: pip install python-pptx\")\n            prs = Presentation(absolute_path)\n            parts = []\n            for i, slide in enumerate(prs.slides, 1):\n                parts.append(f\"--- Slide {i} ---\")\n                for shape in slide.shapes:\n                    if shape.has_text_frame:\n                        for para in shape.text_frame.paragraphs:\n                            text = para.text.strip()\n                            if text:\n                                parts.append(text)\n            return '\\n'.join(parts)\n\n        return \"\"\n\n    def _read_pdf(self, absolute_path: str, display_path: str, offset: int = None, limit: int = None) -> ToolResult:\n        \"\"\"\n        Read PDF file content\n        \n        :param absolute_path: Absolute path to the file\n        :param display_path: Path to display\n        :param offset: Starting line number (1-indexed)\n        :param limit: Maximum number of lines to read\n        :return: PDF text content or error message\n        \"\"\"\n        try:\n            # Try to import pypdf\n            try:\n                from pypdf import PdfReader\n            except ImportError:\n                return ToolResult.fail(\n                    \"Error: pypdf library not installed. Install with: pip install pypdf\"\n                )\n            \n            # Read PDF\n            reader = PdfReader(absolute_path)\n            total_pages = len(reader.pages)\n            \n            # Extract text from all pages\n            text_parts = []\n            for page_num, page in enumerate(reader.pages, 1):\n                page_text = page.extract_text()\n                if page_text.strip():\n                    text_parts.append(f\"--- Page {page_num} ---\\n{page_text}\")\n            \n            if not text_parts:\n                return ToolResult.success({\n                    \"content\": f\"[PDF file with {total_pages} pages, but no text content could be extracted]\",\n                    \"total_pages\": total_pages,\n                    \"message\": \"PDF may contain only images or be encrypted\"\n                })\n            \n            # Merge all text\n            full_content = \"\\n\\n\".join(text_parts)\n            all_lines = full_content.split('\\n')\n            total_lines = len(all_lines)\n            \n            # Apply offset and limit (same logic as text files)\n            start_line = 0\n            if offset is not None:\n                start_line = max(0, offset - 1)\n                if start_line >= total_lines:\n                    return ToolResult.fail(\n                        f\"Error: Offset {offset} is beyond end of content ({total_lines} lines total)\"\n                    )\n            \n            start_line_display = start_line + 1\n            \n            selected_content = full_content\n            user_limited_lines = None\n            if limit is not None:\n                end_line = min(start_line + limit, total_lines)\n                selected_content = '\\n'.join(all_lines[start_line:end_line])\n                user_limited_lines = end_line - start_line\n            elif offset is not None:\n                selected_content = '\\n'.join(all_lines[start_line:])\n            \n            # Apply truncation\n            truncation = truncate_head(selected_content)\n            \n            output_text = \"\"\n            details = {}\n            \n            if truncation.truncated:\n                end_line_display = start_line_display + 
truncation.output_lines - 1\n                next_offset = end_line_display + 1\n                \n                output_text = truncation.content\n                \n                if truncation.truncated_by == \"lines\":\n                    output_text += f\"\\n\\n[Showing lines {start_line_display}-{end_line_display} of {total_lines}. Use offset={next_offset} to continue.]\"\n                else:\n                    output_text += f\"\\n\\n[Showing lines {start_line_display}-{end_line_display} of {total_lines} ({format_size(DEFAULT_MAX_BYTES)} limit). Use offset={next_offset} to continue.]\"\n                \n                details[\"truncation\"] = truncation.to_dict()\n            elif user_limited_lines is not None and start_line + user_limited_lines < total_lines:\n                remaining = total_lines - (start_line + user_limited_lines)\n                next_offset = start_line + user_limited_lines + 1\n                \n                output_text = truncation.content\n                output_text += f\"\\n\\n[{remaining} more lines in file. Use offset={next_offset} to continue.]\"\n            else:\n                output_text = truncation.content\n            \n            result = {\n                \"content\": output_text,\n                \"total_pages\": total_pages,\n                \"total_lines\": total_lines,\n                \"start_line\": start_line_display,\n                \"output_lines\": truncation.output_lines\n            }\n            \n            if details:\n                result[\"details\"] = details\n            \n            return ToolResult.success(result)\n            \n        except Exception as e:\n            return ToolResult.fail(f\"Error reading PDF file: {str(e)}\")\n"
  },
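The text and PDF paths above both normalize `offset` as 1-indexed from the top, and the text path additionally treats a negative `offset` as "count from the end". A minimal standalone sketch of that normalization, outside the tool class (`select_lines` is an illustrative name, not part of the tool's API):

```python
# Sketch of the read tool's offset/limit normalization, under the same
# conventions: positive offset is 1-indexed, negative offset reads from the end.

def select_lines(all_lines, offset=None, limit=None):
    """Return (selected_lines, start_line_display)."""
    total = len(all_lines)
    start = 0
    if offset is not None:
        if offset < 0:
            # -20 means "last 20 lines": start at (total - 20), clamped to 0
            start = max(0, total + offset)
        else:
            # 1-indexed from the start of the file
            start = max(0, offset - 1)
            if start >= total:
                raise ValueError(f"Offset {offset} is beyond end of file ({total} lines total)")
    end = total if limit is None else min(start + limit, total)
    return all_lines[start:end], start + 1  # display numbers are 1-indexed


lines = [f"line {i}" for i in range(1, 101)]
print(select_lines(lines, offset=-3))           # (['line 98', 'line 99', 'line 100'], 98)
print(select_lines(lines, offset=10, limit=2))  # (['line 10', 'line 11'], 10)
```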
  {
    "path": "agent/tools/scheduler/README.md",
    "content": "# 定时任务工具 (Scheduler Tool)\n\n## 功能简介\n\n定时任务工具允许 Agent 创建、管理和执行定时任务，支持：\n\n- ⏰ **定时提醒**: 在指定时间发送消息\n- 🔄 **周期性任务**: 按固定间隔或 cron 表达式重复执行\n- 🔧 **动态工具调用**: 定时执行其他工具并发送结果（如搜索新闻、查询天气等）\n- 📋 **任务管理**: 查询、启用、禁用、删除任务\n\n## 安装依赖\n\n```bash\npip install croniter>=2.0.0\n```\n\n## 使用方法\n\n### 1. 创建定时任务\n\nAgent 可以通过自然语言创建定时任务，支持两种类型：\n\n#### 1.1 静态消息任务\n\n发送预定义的消息：\n\n**示例对话：**\n```\n用户: 每天早上9点提醒我开会\nAgent: [调用 scheduler 工具]\n      action: create\n      name: 每日开会提醒\n      message: 该开会了！\n      schedule_type: cron\n      schedule_value: 0 9 * * *\n```\n\n#### 1.2 动态工具调用任务\n\n定时执行工具并发送结果：\n\n**示例对话：**\n```\n用户: 每天早上8点帮我读取一下今日日程\nAgent: [调用 scheduler 工具]\n      action: create\n      name: 每日日程\n      tool_call:\n        tool_name: read\n        tool_params:\n          file_path: ~/cow/schedule.txt\n        result_prefix: 📅 今日日程\n      schedule_type: cron\n      schedule_value: 0 8 * * *\n```\n\n**工具调用参数说明：**\n- `tool_name`: 要调用的工具名称（如 `bash`、`read`、`write` 等内置工具）\n- `tool_params`: 工具的参数（字典格式）\n- `result_prefix`: 可选，在结果前添加的前缀文本\n\n**注意：** 如果要使用 skills（如 bocha-search），需要通过 `bash` 工具调用 skill 脚本\n\n### 2. 支持的调度类型\n\n#### Cron 表达式 (`cron`)\n使用标准 cron 表达式：\n\n```\n0 9 * * *      # 每天 9:00\n0 */2 * * *    # 每 2 小时\n30 8 * * 1-5   # 工作日 8:30\n0 0 1 * *      # 每月 1 号\n```\n\n#### 固定间隔 (`interval`)\n以秒为单位的间隔：\n\n```\n3600           # 每小时\n86400          # 每天\n1800           # 每 30 分钟\n```\n\n#### 一次性任务 (`once`)\n指定具体时间（ISO 格式）：\n\n```\n2024-12-25T09:00:00\n2024-12-31T23:59:59\n```\n\n### 3. 查询任务列表\n\n```\n用户: 查看我的定时任务\nAgent: [调用 scheduler 工具]\n      action: list\n```\n\n### 4. 查看任务详情\n\n```\n用户: 查看任务 abc123 的详情\nAgent: [调用 scheduler 工具]\n      action: get\n      task_id: abc123\n```\n\n### 5. 删除任务\n\n```\n用户: 删除任务 abc123\nAgent: [调用 scheduler 工具]\n      action: delete\n      task_id: abc123\n```\n\n### 6. 
启用/禁用任务\n\n```\n用户: 暂停任务 abc123\nAgent: [调用 scheduler 工具]\n      action: disable\n      task_id: abc123\n\n用户: 恢复任务 abc123\nAgent: [调用 scheduler 工具]\n      action: enable\n      task_id: abc123\n```\n\n## 任务存储\n\n任务保存在 JSON 文件中：\n```\n~/cow/scheduler/tasks.json\n```\n\n任务数据结构：\n\n**静态消息任务：**\n```json\n{\n  \"id\": \"abc123\",\n  \"name\": \"每日提醒\",\n  \"enabled\": true,\n  \"created_at\": \"2024-01-01T10:00:00\",\n  \"updated_at\": \"2024-01-01T10:00:00\",\n  \"schedule\": {\n    \"type\": \"cron\",\n    \"expression\": \"0 9 * * *\"\n  },\n  \"action\": {\n    \"type\": \"send_message\",\n    \"content\": \"该开会了！\",\n    \"receiver\": \"wxid_xxx\",\n    \"receiver_name\": \"张三\",\n    \"is_group\": false,\n    \"channel_type\": \"wechat\"\n  },\n  \"next_run_at\": \"2024-01-02T09:00:00\",\n  \"last_run_at\": \"2024-01-01T09:00:00\"\n}\n```\n\n**动态工具调用任务：**\n```json\n{\n  \"id\": \"def456\",\n  \"name\": \"每日日程\",\n  \"enabled\": true,\n  \"created_at\": \"2024-01-01T10:00:00\",\n  \"updated_at\": \"2024-01-01T10:00:00\",\n  \"schedule\": {\n    \"type\": \"cron\",\n    \"expression\": \"0 8 * * *\"\n  },\n  \"action\": {\n    \"type\": \"tool_call\",\n    \"tool_name\": \"read\",\n    \"tool_params\": {\n      \"file_path\": \"~/cow/schedule.txt\"\n    },\n    \"result_prefix\": \"📅 今日日程\",\n    \"receiver\": \"wxid_xxx\",\n    \"receiver_name\": \"张三\",\n    \"is_group\": false,\n    \"channel_type\": \"wechat\"\n  },\n  \"next_run_at\": \"2024-01-02T08:00:00\"\n}\n```\n\n## 后台服务\n\n定时任务由后台服务 `SchedulerService` 管理：\n\n- 每 30 秒检查一次到期任务\n- 自动执行到期任务\n- 计算下次执行时间\n- 记录执行历史和错误\n\n服务在 Agent 初始化时自动启动，无需手动配置。\n\n## 接收者确定\n\n定时任务会发送给**创建任务时的对话对象**：\n\n- 如果在私聊中创建，发送给该用户\n- 如果在群聊中创建，发送到该群\n- 接收者信息在创建时自动保存\n\n## 常见用例\n\n### 1. 每日提醒（静态消息）\n```\n用户: 每天早上8点提醒我吃药\nAgent: ✅ 定时任务创建成功\n       任务ID: a1b2c3d4\n       调度: 每天 8:00\n       消息: 该吃药了！\n```\n\n### 2. 工作日提醒（静态消息）\n```\n用户: 工作日下午6点提醒我下班\nAgent: [创建 cron: 0 18 * * 1-5]\n       消息: 该下班了！\n```\n\n### 3. 倒计时提醒（静态消息）\n```\n用户: 1小时后提醒我\nAgent: [创建 interval: 3600]\n```\n\n### 4. 每日日程推送（动态工具调用）\n```\n用户: 每天早上8点帮我读取今日日程\nAgent: ✅ 定时任务创建成功\n       任务ID: schedule001\n       调度: 每天 8:00\n       工具: read(file_path='~/cow/schedule.txt')\n       前缀: 📅 今日日程\n```\n\n### 5. 定时文件备份（动态工具调用）\n```\n用户: 每天晚上11点备份工作文件\nAgent: [创建 cron: 0 23 * * *]\n       工具: bash(command='cp ~/cow/work.txt ~/cow/backup/work_$(date +%Y%m%d).txt')\n       前缀: ✅ 文件已备份\n```\n\n### 6. 周报提醒（静态消息）\n```\n用户: 每周五下午5点提醒我写周报\nAgent: [创建 cron: 0 17 * * 5]\n       消息: 📊 该写周报了！\n```\n\n### 4. 特定日期提醒\n```\n用户: 12月25日早上9点提醒我圣诞快乐\nAgent: [创建 once: 2024-12-25T09:00:00]\n```\n\n## 注意事项\n\n1. **时区**: 使用系统本地时区\n2. **精度**: 检查间隔为 30 秒，实际执行可能有 ±30 秒误差\n3. **持久化**: 任务保存在文件中，重启后自动恢复\n4. **一次性任务**: 执行后自动禁用，不会删除（可手动删除）\n5. **错误处理**: 执行失败会记录错误，不影响其他任务\n\n## 技术实现\n\n- **TaskStore**: 任务持久化存储\n- **SchedulerService**: 后台调度服务\n- **SchedulerTool**: Agent 工具接口\n- **Integration**: 与 AgentBridge 集成\n\n## 依赖\n\n- `croniter`: Cron 表达式解析（轻量级，仅 ~50KB）\n"
  },
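As a quick sanity check of the schedule types described above, the next-run times follow directly from `croniter` and `timedelta`. A small sketch, assuming `croniter` is installed and using a fixed base time so the printed values are reproducible (2024-01-01 is a Monday):

```python
# Mirrors how the service computes next_run_at for each schedule type.
from datetime import datetime, timedelta
from croniter import croniter

now = datetime(2024, 1, 1, 7, 30)  # fixed base time for reproducible output

# cron "0 8 * * *" -> next daily 08:00
print(croniter("0 8 * * *", now).get_next(datetime))     # 2024-01-01 08:00:00

# cron "30 8 * * 1-5" -> next weekday 08:30
print(croniter("30 8 * * 1-5", now).get_next(datetime))  # 2024-01-01 08:30:00

# interval 3600 seconds -> one hour from now
print(now + timedelta(seconds=3600))                     # 2024-01-01 08:30:00

# once: an ISO timestamp is parsed directly
print(datetime.fromisoformat("2024-12-25T09:00:00"))     # 2024-12-25 09:00:00
```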
  {
    "path": "agent/tools/scheduler/__init__.py",
    "content": "\"\"\"\nScheduler tool for managing scheduled tasks\n\"\"\"\n\nfrom .scheduler_tool import SchedulerTool\n\n__all__ = [\"SchedulerTool\"]\n"
  },
  {
    "path": "agent/tools/scheduler/integration.py",
    "content": "\"\"\"\nIntegration module for scheduler with AgentBridge\n\"\"\"\n\nimport os\nfrom typing import Optional\nfrom config import conf\nfrom common.log import logger\nfrom common.utils import expand_path\nfrom bridge.context import Context, ContextType\nfrom bridge.reply import Reply, ReplyType\n\n# Global scheduler service instance\n_scheduler_service = None\n_task_store = None\n\n\ndef init_scheduler(agent_bridge) -> bool:\n    \"\"\"\n    Initialize scheduler service\n    \n    Args:\n        agent_bridge: AgentBridge instance\n        \n    Returns:\n        True if initialized successfully\n    \"\"\"\n    global _scheduler_service, _task_store\n    \n    try:\n        from agent.tools.scheduler.task_store import TaskStore\n        from agent.tools.scheduler.scheduler_service import SchedulerService\n        \n        # Get workspace from config\n        workspace_root = expand_path(conf().get(\"agent_workspace\", \"~/cow\"))\n        store_path = os.path.join(workspace_root, \"scheduler\", \"tasks.json\")\n        \n        # Create task store\n        _task_store = TaskStore(store_path)\n        logger.debug(f\"[Scheduler] Task store initialized: {store_path}\")\n        \n        # Create execute callback\n        def execute_task_callback(task: dict):\n            \"\"\"Callback to execute a scheduled task\"\"\"\n            try:\n                action = task.get(\"action\", {})\n                action_type = action.get(\"type\")\n                \n                if action_type == \"agent_task\":\n                    _execute_agent_task(task, agent_bridge)\n                elif action_type == \"send_message\":\n                    # Legacy support for old tasks\n                    _execute_send_message(task, agent_bridge)\n                elif action_type == \"tool_call\":\n                    # Legacy support for old tasks\n                    _execute_tool_call(task, agent_bridge)\n                elif action_type == \"skill_call\":\n                    # Legacy support for old tasks\n                    _execute_skill_call(task, agent_bridge)\n                else:\n                    logger.warning(f\"[Scheduler] Unknown action type: {action_type}\")\n            except Exception as e:\n                logger.error(f\"[Scheduler] Error executing task {task.get('id')}: {e}\")\n        \n        # Create scheduler service\n        _scheduler_service = SchedulerService(_task_store, execute_task_callback)\n        _scheduler_service.start()\n        \n        logger.debug(\"[Scheduler] Scheduler service initialized and started\")\n        return True\n        \n    except Exception as e:\n        logger.error(f\"[Scheduler] Failed to initialize scheduler: {e}\")\n        return False\n\n\ndef get_task_store():\n    \"\"\"Get the global task store instance\"\"\"\n    return _task_store\n\n\ndef get_scheduler_service():\n    \"\"\"Get the global scheduler service instance\"\"\"\n    return _scheduler_service\n\n\ndef _execute_agent_task(task: dict, agent_bridge):\n    \"\"\"\n    Execute an agent_task action - let Agent handle the task\n    \n    Args:\n        task: Task dictionary\n        agent_bridge: AgentBridge instance\n    \"\"\"\n    try:\n        action = task.get(\"action\", {})\n        task_description = action.get(\"task_description\")\n        receiver = action.get(\"receiver\")\n        is_group = action.get(\"is_group\", False)\n        channel_type = action.get(\"channel_type\", \"unknown\")\n        \n        if not task_description:\n            
logger.error(f\"[Scheduler] Task {task['id']}: No task_description specified\")\n            return\n        \n        if not receiver:\n            logger.error(f\"[Scheduler] Task {task['id']}: No receiver specified\")\n            return\n        \n        # Check for unsupported channels\n        if channel_type == \"dingtalk\":\n            logger.warning(f\"[Scheduler] Task {task['id']}: DingTalk channel does not support scheduled messages (Stream mode limitation). Task will execute but message cannot be sent.\")\n        \n        logger.info(f\"[Scheduler] Task {task['id']}: Executing agent task '{task_description}'\")\n        \n        # Create a unique session_id for this scheduled task to avoid polluting user's conversation\n        # Format: scheduler_<receiver>_<task_id> to ensure isolation\n        scheduler_session_id = f\"scheduler_{receiver}_{task['id']}\"\n        \n        # Create context for Agent\n        context = Context(ContextType.TEXT, task_description)\n        context[\"receiver\"] = receiver\n        context[\"isgroup\"] = is_group\n        context[\"session_id\"] = scheduler_session_id\n        \n        # Channel-specific setup\n        if channel_type == \"web\":\n            import uuid\n            request_id = f\"scheduler_{task['id']}_{uuid.uuid4().hex[:8]}\"\n            context[\"request_id\"] = request_id\n        elif channel_type == \"feishu\":\n            context[\"receive_id_type\"] = \"chat_id\" if is_group else \"open_id\"\n            context[\"msg\"] = None\n        elif channel_type == \"dingtalk\":\n            # DingTalk requires msg object, set to None for scheduled tasks\n            context[\"msg\"] = None\n            if not is_group:\n                sender_staff_id = action.get(\"dingtalk_sender_staff_id\")\n                if sender_staff_id:\n                    context[\"dingtalk_sender_staff_id\"] = sender_staff_id\n        elif channel_type == \"wecom_bot\":\n            context[\"msg\"] = None\n\n        # Use Agent to execute the task\n        # Mark this as a scheduled task execution to prevent recursive task creation\n        context[\"is_scheduled_task\"] = True\n        \n        try:\n            # Don't clear history - scheduler tasks use isolated session_id so they won't pollute user conversations\n            reply = agent_bridge.agent_reply(task_description, context=context, on_event=None, clear_history=False)\n            \n            if reply and reply.content:\n                # Send the reply via channel\n                from channel.channel_factory import create_channel\n                \n                try:\n                    channel = create_channel(channel_type)\n                    if channel:\n                        # For web channel, register request_id\n                        if channel_type == \"web\" and hasattr(channel, 'request_to_session'):\n                            request_id = context.get(\"request_id\")\n                            if request_id:\n                                channel.request_to_session[request_id] = receiver\n                                logger.debug(f\"[Scheduler] Registered request_id {request_id} -> session {receiver}\")\n                        \n                        # Send the reply\n                        channel.send(reply, context)\n                        logger.info(f\"[Scheduler] Task {task['id']} executed successfully, result sent to {receiver}\")\n                    else:\n                        logger.error(f\"[Scheduler] Failed to create 
channel: {channel_type}\")\n                except Exception as e:\n                    logger.error(f\"[Scheduler] Failed to send result: {e}\")\n            else:\n                logger.error(f\"[Scheduler] Task {task['id']}: No result from agent execution\")\n                \n        except Exception as e:\n            logger.error(f\"[Scheduler] Failed to execute task via Agent: {e}\")\n            import traceback\n            logger.error(f\"[Scheduler] Traceback: {traceback.format_exc()}\")\n            \n    except Exception as e:\n        logger.error(f\"[Scheduler] Error in _execute_agent_task: {e}\")\n        import traceback\n        logger.error(f\"[Scheduler] Traceback: {traceback.format_exc()}\")\n\n\ndef _execute_send_message(task: dict, agent_bridge):\n    \"\"\"\n    Execute a send_message action\n    \n    Args:\n        task: Task dictionary\n        agent_bridge: AgentBridge instance\n    \"\"\"\n    try:\n        action = task.get(\"action\", {})\n        content = action.get(\"content\", \"\")\n        receiver = action.get(\"receiver\")\n        is_group = action.get(\"is_group\", False)\n        channel_type = action.get(\"channel_type\", \"unknown\")\n        \n        if not receiver:\n            logger.error(f\"[Scheduler] Task {task['id']}: No receiver specified\")\n            return\n        \n        # Create context for sending message\n        context = Context(ContextType.TEXT, content)\n        context[\"receiver\"] = receiver\n        context[\"isgroup\"] = is_group\n        context[\"session_id\"] = receiver\n        \n        # Channel-specific context setup\n        if channel_type == \"web\":\n            # Web channel needs request_id\n            import uuid\n            request_id = f\"scheduler_{task['id']}_{uuid.uuid4().hex[:8]}\"\n            context[\"request_id\"] = request_id\n            logger.debug(f\"[Scheduler] Generated request_id for web channel: {request_id}\")\n        elif channel_type == \"feishu\":\n            # Feishu channel: for scheduled tasks, send as new message (no msg_id to reply to)\n            # Use chat_id for groups, open_id for private chats\n            context[\"receive_id_type\"] = \"chat_id\" if is_group else \"open_id\"\n            # Keep isgroup as is, but set msg to None (no original message to reply to)\n            # Feishu channel will detect this and send as new message instead of reply\n            context[\"msg\"] = None\n            logger.debug(f\"[Scheduler] Feishu: receive_id_type={context['receive_id_type']}, is_group={is_group}, receiver={receiver}\")\n        elif channel_type == \"dingtalk\":\n            # DingTalk channel setup\n            context[\"msg\"] = None\n            # 如果是单聊，需要传递 sender_staff_id\n            if not is_group:\n                sender_staff_id = action.get(\"dingtalk_sender_staff_id\")\n                if sender_staff_id:\n                    context[\"dingtalk_sender_staff_id\"] = sender_staff_id\n                    logger.debug(f\"[Scheduler] DingTalk single chat: sender_staff_id={sender_staff_id}\")\n                else:\n                    logger.warning(f\"[Scheduler] Task {task['id']}: DingTalk single chat message missing sender_staff_id\")\n        elif channel_type == \"wecom_bot\":\n            context[\"msg\"] = None\n        elif channel_type == \"qq\":\n            context[\"msg\"] = None\n\n        # Create reply\n        reply = Reply(ReplyType.TEXT, content)\n        \n        # Get channel and send\n        from channel.channel_factory import 
create_channel\n        \n        try:\n            channel = create_channel(channel_type)\n            if channel:\n                # For web channel, register the request_id to session mapping\n                if channel_type == \"web\" and hasattr(channel, 'request_to_session'):\n                    channel.request_to_session[request_id] = receiver\n                    logger.debug(f\"[Scheduler] Registered request_id {request_id} -> session {receiver}\")\n                \n                channel.send(reply, context)\n                logger.info(f\"[Scheduler] Task {task['id']} executed: sent message to {receiver}\")\n            else:\n                logger.error(f\"[Scheduler] Failed to create channel: {channel_type}\")\n        except Exception as e:\n            logger.error(f\"[Scheduler] Failed to send message: {e}\")\n            import traceback\n            logger.error(f\"[Scheduler] Traceback: {traceback.format_exc()}\")\n            \n    except Exception as e:\n        logger.error(f\"[Scheduler] Error in _execute_send_message: {e}\")\n        import traceback\n        logger.error(f\"[Scheduler] Traceback: {traceback.format_exc()}\")\n\n\ndef _execute_tool_call(task: dict, agent_bridge):\n    \"\"\"\n    Execute a tool_call action\n    \n    Args:\n        task: Task dictionary\n        agent_bridge: AgentBridge instance\n    \"\"\"\n    try:\n        action = task.get(\"action\", {})\n        # Support both old and new field names\n        tool_name = action.get(\"call_name\") or action.get(\"tool_name\")\n        tool_params = action.get(\"call_params\") or action.get(\"tool_params\", {})\n        result_prefix = action.get(\"result_prefix\", \"\")\n        receiver = action.get(\"receiver\")\n        is_group = action.get(\"is_group\", False)\n        channel_type = action.get(\"channel_type\", \"unknown\")\n        \n        if not tool_name:\n            logger.error(f\"[Scheduler] Task {task['id']}: No tool_name specified\")\n            return\n        \n        if not receiver:\n            logger.error(f\"[Scheduler] Task {task['id']}: No receiver specified\")\n            return\n        \n        # Get tool manager and create tool instance\n        from agent.tools.tool_manager import ToolManager\n        tool_manager = ToolManager()\n        tool = tool_manager.create_tool(tool_name)\n        \n        if not tool:\n            logger.error(f\"[Scheduler] Task {task['id']}: Tool '{tool_name}' not found\")\n            return\n        \n        # Execute tool\n        logger.info(f\"[Scheduler] Task {task['id']}: Executing tool '{tool_name}' with params {tool_params}\")\n        result = tool.execute(tool_params)\n        \n        # Get result content\n        if hasattr(result, 'result'):\n            content = result.result\n        else:\n            content = str(result)\n        \n        # Add prefix if specified\n        if result_prefix:\n            content = f\"{result_prefix}\\n\\n{content}\"\n        \n        # Send result as message\n        context = Context(ContextType.TEXT, content)\n        context[\"receiver\"] = receiver\n        context[\"isgroup\"] = is_group\n        context[\"session_id\"] = receiver\n        \n        # Channel-specific context setup\n        if channel_type == \"web\":\n            # Web channel needs request_id\n            import uuid\n            request_id = f\"scheduler_{task['id']}_{uuid.uuid4().hex[:8]}\"\n            context[\"request_id\"] = request_id\n            logger.debug(f\"[Scheduler] Generated 
request_id for web channel: {request_id}\")\n        elif channel_type == \"feishu\":\n            context[\"receive_id_type\"] = \"chat_id\" if is_group else \"open_id\"\n            context[\"msg\"] = None\n            logger.debug(f\"[Scheduler] Feishu: receive_id_type={context['receive_id_type']}, is_group={is_group}, receiver={receiver}\")\n        elif channel_type == \"wecom_bot\":\n            context[\"msg\"] = None\n\n        reply = Reply(ReplyType.TEXT, content)\n\n        # Get channel and send\n        from channel.channel_factory import create_channel\n\n        try:\n            channel = create_channel(channel_type)\n            if channel:\n                if channel_type == \"web\" and hasattr(channel, 'request_to_session'):\n                    channel.request_to_session[request_id] = receiver\n                    logger.debug(f\"[Scheduler] Registered request_id {request_id} -> session {receiver}\")\n\n                channel.send(reply, context)\n                logger.info(f\"[Scheduler] Task {task['id']} executed: sent tool result to {receiver}\")\n            else:\n                logger.error(f\"[Scheduler] Failed to create channel: {channel_type}\")\n        except Exception as e:\n            logger.error(f\"[Scheduler] Failed to send tool result: {e}\")\n\n    except Exception as e:\n        logger.error(f\"[Scheduler] Error in _execute_tool_call: {e}\")\n\n\ndef _execute_skill_call(task: dict, agent_bridge):\n    \"\"\"\n    Execute a skill_call action by asking Agent to run the skill\n    \n    Args:\n        task: Task dictionary\n        agent_bridge: AgentBridge instance\n    \"\"\"\n    try:\n        action = task.get(\"action\", {})\n        # Support both old and new field names\n        skill_name = action.get(\"call_name\") or action.get(\"skill_name\")\n        skill_params = action.get(\"call_params\") or action.get(\"skill_params\", {})\n        result_prefix = action.get(\"result_prefix\", \"\")\n        receiver = action.get(\"receiver\")\n        is_group = action.get(\"is_group\", False)  # tasks persist this flag as \"is_group\"\n        channel_type = action.get(\"channel_type\", \"unknown\")\n        \n        if not skill_name:\n            logger.error(f\"[Scheduler] Task {task['id']}: No skill_name specified\")\n            return\n        \n        if not receiver:\n            logger.error(f\"[Scheduler] Task {task['id']}: No receiver specified\")\n            return\n        \n        logger.info(f\"[Scheduler] Task {task['id']}: Executing skill '{skill_name}' with params {skill_params}\")\n        \n        # Create a unique session_id for this scheduled task to avoid polluting user's conversation\n        # Format: scheduler_<receiver>_<task_id> to ensure isolation\n        scheduler_session_id = f\"scheduler_{receiver}_{task['id']}\"\n        \n        # Build a natural language query for the Agent to execute the skill\n        # Format: \"Use skill-name to do something with params\"\n        param_str = \", \".join([f\"{k}={v}\" for k, v in skill_params.items()])\n        query = f\"Use {skill_name} skill\"\n        if param_str:\n            query += f\" with {param_str}\"\n        \n        # Create context for Agent\n        context = Context(ContextType.TEXT, query)\n        context[\"receiver\"] = receiver\n        context[\"isgroup\"] = is_group\n        context[\"session_id\"] = scheduler_session_id\n        \n        # Channel-specific setup\n        if channel_type == \"web\":\n            import uuid\n            request_id = f\"scheduler_{task['id']}_{uuid.uuid4().hex[:8]}\"\n            context[\"request_id\"] = request_id\n        elif channel_type == \"feishu\":\n            context[\"receive_id_type\"] = \"chat_id\" if is_group else \"open_id\"\n            context[\"msg\"] = None\n        elif channel_type == \"wecom_bot\":\n            context[\"msg\"] = None\n\n        # Use Agent to execute the skill\n        try:\n            # Don't clear history - scheduler tasks use isolated session_id so they won't pollute user conversations\n            reply = agent_bridge.agent_reply(query, context=context, on_event=None, clear_history=False)\n            \n            if reply and reply.content:\n                content = reply.content\n                \n                # Add prefix if specified\n                if result_prefix:\n                    content = f\"{result_prefix}\\n\\n{content}\"\n                \n                # Deliver the skill result via the channel (same pattern as _execute_agent_task)\n                out_reply = Reply(ReplyType.TEXT, content)\n                from channel.channel_factory import create_channel\n                try:\n                    channel = create_channel(channel_type)\n                    if channel:\n                        if channel_type == \"web\" and hasattr(channel, 'request_to_session'):\n                            request_id = context.get(\"request_id\")\n                            if request_id:\n                                channel.request_to_session[request_id] = receiver\n                        channel.send(out_reply, context)\n                        logger.info(f\"[Scheduler] Task {task['id']} executed: skill result sent to {receiver}\")\n                    else:\n                        logger.error(f\"[Scheduler] Failed to create channel: {channel_type}\")\n                except Exception as e:\n                    logger.error(f\"[Scheduler] Failed to send skill result: {e}\")\n            else:\n                logger.error(f\"[Scheduler] Task {task['id']}: No result from skill execution\")\n                \n        except Exception as e:\n            logger.error(f\"[Scheduler] Failed to execute skill via Agent: {e}\")\n            import traceback\n            logger.error(f\"[Scheduler] Traceback: {traceback.format_exc()}\")\n            \n    except Exception as e:\n        logger.error(f\"[Scheduler] Error in _execute_skill_call: {e}\")\n        import traceback\n        logger.error(f\"[Scheduler] Traceback: {traceback.format_exc()}\")\n\n\ndef attach_scheduler_to_tool(tool, context: Context = None):\n    \"\"\"\n    Attach scheduler components to a SchedulerTool instance\n    \n    Args:\n        tool: SchedulerTool instance\n        context: Current context (optional)\n    \"\"\"\n    if _task_store:\n        tool.task_store = _task_store\n    \n    if context:\n        tool.current_context = context\n        \n        channel_type = context.get(\"channel_type\") or conf().get(\"channel_type\", \"unknown\")\n        if not tool.config:\n            tool.config = {}\n        tool.config[\"channel_type\"] = channel_type\n"
  },
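The wiring above reduces to a callback that routes on `action.type`, so `SchedulerService` never needs to know about channels or the Agent; each scheduled run also gets an isolated `scheduler_<receiver>_<task_id>` session so it cannot pollute the user's conversation. A minimal sketch of that dispatch pattern (the lambda handlers are illustrative stand-ins for the module's private `_execute_*` functions):

```python
# Factory that builds the execute callback handed to SchedulerService.
def make_execute_callback(handlers: dict):
    def execute_task_callback(task: dict):
        action_type = task.get("action", {}).get("type")
        handler = handlers.get(action_type)
        if handler is None:
            print(f"[Scheduler] Unknown action type: {action_type}")
            return
        handler(task)
    return execute_task_callback


cb = make_execute_callback({
    "agent_task": lambda t: print("agent task:", t["action"]["task_description"]),
    "send_message": lambda t: print("send:", t["action"]["content"]),  # legacy
})
cb({"id": "abc123", "action": {"type": "send_message", "content": "hi"}})  # send: hi
```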
  {
    "path": "agent/tools/scheduler/scheduler_service.py",
    "content": "\"\"\"\nBackground scheduler service for executing scheduled tasks\n\"\"\"\n\nimport time\nimport threading\nfrom datetime import datetime, timedelta\nfrom typing import Callable, Optional\nfrom croniter import croniter\nfrom common.log import logger\n\n\nclass SchedulerService:\n    \"\"\"\n    Background service that executes scheduled tasks\n    \"\"\"\n    \n    def __init__(self, task_store, execute_callback: Callable):\n        \"\"\"\n        Initialize scheduler service\n        \n        Args:\n            task_store: TaskStore instance\n            execute_callback: Function to call when executing a task\n        \"\"\"\n        self.task_store = task_store\n        self.execute_callback = execute_callback\n        self.running = False\n        self.thread = None\n        self._lock = threading.Lock()\n    \n    def start(self):\n        \"\"\"Start the scheduler service\"\"\"\n        with self._lock:\n            if self.running:\n                logger.warning(\"[Scheduler] Service already running\")\n                return\n            \n            self.running = True\n            self.thread = threading.Thread(target=self._run_loop, daemon=True)\n            self.thread.start()\n            logger.debug(\"[Scheduler] Service started\")\n    \n    def stop(self):\n        \"\"\"Stop the scheduler service\"\"\"\n        with self._lock:\n            if not self.running:\n                return\n            \n            self.running = False\n            if self.thread:\n                self.thread.join(timeout=5)\n            logger.info(\"[Scheduler] Service stopped\")\n    \n    def _run_loop(self):\n        \"\"\"Main scheduler loop\"\"\"\n        logger.debug(\"[Scheduler] Scheduler loop started\")\n        \n        while self.running:\n            try:\n                self._check_and_execute_tasks()\n            except Exception as e:\n                logger.error(f\"[Scheduler] Error in scheduler loop: {e}\")\n\n            time.sleep(30)\n    \n    def _check_and_execute_tasks(self):\n        \"\"\"Check for due tasks and execute them\"\"\"\n        now = datetime.now()\n        tasks = self.task_store.list_tasks(enabled_only=True)\n        \n        for task in tasks:\n            try:\n                # Check if task is due\n                if self._is_task_due(task, now):\n                    logger.info(f\"[Scheduler] Executing task: {task['id']} - {task['name']}\")\n                    self._execute_task(task)\n                    \n                    # Update next run time\n                    next_run = self._calculate_next_run(task, now)\n                    if next_run:\n                        self.task_store.update_task(task['id'], {\n                            \"next_run_at\": next_run.isoformat(),\n                            \"last_run_at\": now.isoformat()\n                        })\n                    else:\n                        # One-time task completed, remove it\n                        self.task_store.delete_task(task['id'])\n                        logger.info(f\"[Scheduler] One-time task completed and removed: {task['id']}\")\n            except Exception as e:\n                logger.error(f\"[Scheduler] Error processing task {task.get('id')}: {e}\")\n    \n    def _is_task_due(self, task: dict, now: datetime) -> bool:\n        \"\"\"\n        Check if a task is due to run\n        \n        Args:\n            task: Task dictionary\n            now: Current datetime\n            \n        Returns:\n            True if 
task should run now\n        \"\"\"\n        next_run_str = task.get(\"next_run_at\")\n        if not next_run_str:\n            # Calculate initial next_run_at\n            next_run = self._calculate_next_run(task, now)\n            if next_run:\n                self.task_store.update_task(task['id'], {\n                    \"next_run_at\": next_run.isoformat()\n                })\n                return False\n            return False\n        \n        try:\n            next_run = datetime.fromisoformat(next_run_str)\n            \n            # Check if task is overdue (e.g., service restart)\n            if next_run < now:\n                time_diff = (now - next_run).total_seconds()\n                \n                # If overdue by more than 5 minutes, skip this run and schedule next\n                if time_diff > 300:  # 5 minutes\n                    logger.warning(f\"[Scheduler] Task {task['id']} is overdue by {int(time_diff)}s, skipping and scheduling next run\")\n                    \n                    # For one-time tasks, remove them directly\n                    schedule = task.get(\"schedule\", {})\n                    if schedule.get(\"type\") == \"once\":\n                        self.task_store.delete_task(task['id'])\n                        logger.info(f\"[Scheduler] One-time task {task['id']} expired, removed\")\n                        return False\n                    \n                    # For recurring tasks, calculate next run from now\n                    next_next_run = self._calculate_next_run(task, now)\n                    if next_next_run:\n                        self.task_store.update_task(task['id'], {\n                            \"next_run_at\": next_next_run.isoformat()\n                        })\n                        logger.info(f\"[Scheduler] Rescheduled task {task['id']} to {next_next_run}\")\n                    return False\n            \n            return now >= next_run\n        except Exception:\n            return False\n    \n    def _calculate_next_run(self, task: dict, from_time: datetime) -> Optional[datetime]:\n        \"\"\"\n        Calculate next run time for a task\n        \n        Args:\n            task: Task dictionary\n            from_time: Calculate from this time\n            \n        Returns:\n            Next run datetime or None for one-time tasks\n        \"\"\"\n        schedule = task.get(\"schedule\", {})\n        schedule_type = schedule.get(\"type\")\n        \n        if schedule_type == \"cron\":\n            # Cron expression\n            expression = schedule.get(\"expression\")\n            if not expression:\n                return None\n            \n            try:\n                cron = croniter(expression, from_time)\n                return cron.get_next(datetime)\n            except Exception as e:\n                logger.error(f\"[Scheduler] Invalid cron expression '{expression}': {e}\")\n                return None\n        \n        elif schedule_type == \"interval\":\n            # Interval in seconds\n            seconds = schedule.get(\"seconds\", 0)\n            if seconds <= 0:\n                return None\n            return from_time + timedelta(seconds=seconds)\n        \n        elif schedule_type == \"once\":\n            # One-time task at specific time\n            run_at_str = schedule.get(\"run_at\")\n            if not run_at_str:\n                return None\n            \n            try:\n                run_at = datetime.fromisoformat(run_at_str)\n                # Only 
return if in the future\n                if run_at > from_time:\n                    return run_at\n            except Exception:\n                pass\n            return None\n        \n        return None\n    \n    def _execute_task(self, task: dict):\n        \"\"\"\n        Execute a task\n        \n        Args:\n            task: Task dictionary\n        \"\"\"\n        try:\n            # Call the execute callback\n            self.execute_callback(task)\n        except Exception as e:\n            logger.error(f\"[Scheduler] Error executing task {task['id']}: {e}\")\n            # Update task with error\n            self.task_store.update_task(task['id'], {\n                \"last_error\": str(e),\n                \"last_error_at\": datetime.now().isoformat()\n            })\n"
  },
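A condensed sketch of the due/overdue decision `_is_task_due` makes on every 30-second tick, using the same 5-minute grace window; `is_due` and its return strings are illustrative, not the service's API:

```python
# Run when next_run_at has passed; if the service was down for more than
# 5 minutes (e.g. a restart), skip the missed run and reschedule instead.
from datetime import datetime, timedelta

OVERDUE_GRACE_SECONDS = 300  # 5 minutes, matching the service above

def is_due(next_run_at: datetime, now: datetime) -> str:
    if now < next_run_at:
        return "wait"
    if (now - next_run_at).total_seconds() > OVERDUE_GRACE_SECONDS:
        return "skip_and_reschedule"  # missed run; once tasks are deleted instead
    return "run"


now = datetime(2024, 1, 1, 9, 0, 0)
print(is_due(now + timedelta(seconds=10), now))  # wait
print(is_due(now - timedelta(seconds=60), now))  # run (within grace window)
print(is_due(now - timedelta(minutes=10), now))  # skip_and_reschedule
```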
  {
    "path": "agent/tools/scheduler/scheduler_tool.py",
    "content": "\"\"\"\nScheduler tool for creating and managing scheduled tasks\n\"\"\"\n\nimport uuid\nfrom datetime import datetime\nfrom typing import Any, Dict, Optional\nfrom croniter import croniter\n\nfrom agent.tools.base_tool import BaseTool, ToolResult\nfrom bridge.context import Context, ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\n\n\nclass SchedulerTool(BaseTool):\n    \"\"\"\n    Tool for managing scheduled tasks (reminders, notifications, etc.)\n    \"\"\"\n    \n    name: str = \"scheduler\"\n    description: str = (\n        \"创建、查询和管理定时任务（提醒、周期性任务等）。\\n\\n\"\n        \"⚠️ 重要：仅当需要「定时/提醒/每天/每周/X分钟后/X点」等延迟或周期执行时才使用此工具。\"\n        \"使用方法：\\n\"\n        \"- 创建：action='create', name='任务名', message/ai_task='内容', schedule_type='once/interval/cron', schedule_value='...'\\n\"\n        \"- 查询：action='list' / action='get', task_id='任务ID'\\n\"\n        \"- 管理：action='delete/enable/disable', task_id='任务ID'\\n\\n\"\n        \"调度类型：\\n\"\n        \"- once: 一次性任务，支持相对时间(+5s,+10m,+1h,+1d)或ISO时间\\n\"\n        \"- interval: 固定间隔(秒)，如3600表示每小时\\n\"\n        \"- cron: cron表达式，如'0 8 * * *'表示每天8点\\n\\n\"\n        \"注意：'X秒后'用once+相对时间，'每X秒'用interval\"\n    )\n    params: dict = {\n        \"type\": \"object\",\n        \"properties\": {\n            \"action\": {\n                \"type\": \"string\",\n                \"enum\": [\"create\", \"list\", \"get\", \"delete\", \"enable\", \"disable\"],\n                \"description\": \"操作类型: create(创建), list(列表), get(查询), delete(删除), enable(启用), disable(禁用)\"\n            },\n            \"task_id\": {\n                \"type\": \"string\",\n                \"description\": \"任务ID (用于 get/delete/enable/disable 操作)\"\n            },\n            \"name\": {\n                \"type\": \"string\",\n                \"description\": \"任务名称 (用于 create 操作)\"\n            },\n            \"message\": {\n                \"type\": \"string\",\n                \"description\": \"固定消息内容 (与ai_task二选一)\"\n            },\n            \"ai_task\": {\n                \"type\": \"string\",\n                \"description\": \"AI任务描述 (与message二选一)，用于定时让AI执行的任务\"\n            },\n            \"schedule_type\": {\n                \"type\": \"string\",\n                \"enum\": [\"cron\", \"interval\", \"once\"],\n                \"description\": \"调度类型 (用于 create 操作): cron(cron表达式), interval(固定间隔秒数), once(一次性)\"\n            },\n            \"schedule_value\": {\n                \"type\": \"string\",\n                \"description\": \"调度值: cron表达式/间隔秒数/时间(+5s,+10m,+1h或ISO格式)\"\n            }\n        },\n        \"required\": [\"action\"]\n    }\n    \n    def __init__(self, config: dict = None):\n        super().__init__()\n        self.config = config or {}\n        \n        # Will be set by agent bridge\n        self.task_store = None\n        self.current_context = None\n    \n    def execute(self, params: dict) -> ToolResult:\n        \"\"\"\n        Execute scheduler operations\n        \n        Args:\n            params: Dictionary containing:\n                - action: Operation type (create/list/get/delete/enable/disable)\n                - Other parameters depending on action\n            \n        Returns:\n            ToolResult object\n        \"\"\"\n        # Extract parameters\n        action = params.get(\"action\")\n        kwargs = params\n        \n        if not self.task_store:\n            return ToolResult.fail(\"错误: 定时任务系统未初始化\")\n        \n        try:\n            if action == \"create\":\n          
      result = self._create_task(**kwargs)\n                return ToolResult.success(result)\n            elif action == \"list\":\n                result = self._list_tasks(**kwargs)\n                return ToolResult.success(result)\n            elif action == \"get\":\n                result = self._get_task(**kwargs)\n                return ToolResult.success(result)\n            elif action == \"delete\":\n                result = self._delete_task(**kwargs)\n                return ToolResult.success(result)\n            elif action == \"enable\":\n                result = self._enable_task(**kwargs)\n                return ToolResult.success(result)\n            elif action == \"disable\":\n                result = self._disable_task(**kwargs)\n                return ToolResult.success(result)\n            else:\n                return ToolResult.fail(f\"未知操作: {action}\")\n        except Exception as e:\n            logger.error(f\"[SchedulerTool] Error: {e}\")\n            return ToolResult.fail(f\"操作失败: {str(e)}\")\n    \n    def _create_task(self, **kwargs) -> str:\n        \"\"\"Create a new scheduled task\"\"\"\n        name = kwargs.get(\"name\")\n        message = kwargs.get(\"message\")\n        ai_task = kwargs.get(\"ai_task\")\n        schedule_type = kwargs.get(\"schedule_type\")\n        schedule_value = kwargs.get(\"schedule_value\")\n        \n        # Validate required fields\n        if not name:\n            return \"错误: 缺少任务名称 (name)\"\n        \n        # Check that exactly one of message/ai_task is provided\n        if not message and not ai_task:\n            return \"错误: 必须提供 message（固定消息）或 ai_task（AI任务）之一\"\n        if message and ai_task:\n            return \"错误: message 和 ai_task 只能提供其中一个\"\n        \n        if not schedule_type:\n            return \"错误: 缺少调度类型 (schedule_type)\"\n        if not schedule_value:\n            return \"错误: 缺少调度值 (schedule_value)\"\n        \n        # Validate schedule\n        schedule = self._parse_schedule(schedule_type, schedule_value)\n        if not schedule:\n            return f\"错误: 无效的调度配置 - type: {schedule_type}, value: {schedule_value}\"\n        \n        # Get context info for receiver\n        if not self.current_context:\n            return \"错误: 无法获取当前对话上下文\"\n        \n        context = self.current_context\n        \n        # Create task\n        task_id = str(uuid.uuid4())[:8]\n        \n        # Build action based on message or ai_task\n        if message:\n            action = {\n                \"type\": \"send_message\",\n                \"content\": message,\n                \"receiver\": context.get(\"receiver\"),\n                \"receiver_name\": self._get_receiver_name(context),\n                \"is_group\": context.get(\"isgroup\", False),\n                \"channel_type\": self.config.get(\"channel_type\", \"unknown\")\n            }\n        else:  # ai_task\n            action = {\n                \"type\": \"agent_task\",\n                \"task_description\": ai_task,\n                \"receiver\": context.get(\"receiver\"),\n                \"receiver_name\": self._get_receiver_name(context),\n                \"is_group\": context.get(\"isgroup\", False),\n                \"channel_type\": self.config.get(\"channel_type\", \"unknown\")\n            }\n        \n        # 针对钉钉单聊，额外存储 sender_staff_id\n        msg = context.kwargs.get(\"msg\")\n        if msg and hasattr(msg, 'sender_staff_id') and not context.get(\"isgroup\", False):\n            action[\"dingtalk_sender_staff_id\"] = 
msg.sender_staff_id\n        \n        task_data = {\n            \"id\": task_id,\n            \"name\": name,\n            \"enabled\": True,\n            \"created_at\": datetime.now().isoformat(),\n            \"updated_at\": datetime.now().isoformat(),\n            \"schedule\": schedule,\n            \"action\": action\n        }\n        \n        # Calculate initial next_run_at\n        next_run = self._calculate_next_run(task_data)\n        if next_run:\n            task_data[\"next_run_at\"] = next_run.isoformat()\n        \n        # Save task\n        self.task_store.add_task(task_data)\n        \n        # Format response\n        schedule_desc = self._format_schedule_description(schedule)\n        receiver_desc = task_data[\"action\"][\"receiver_name\"] or task_data[\"action\"][\"receiver\"]\n        \n        if message:\n            content_desc = f\"💬 固定消息: {message}\"\n        else:\n            content_desc = f\"🤖 AI任务: {ai_task}\"\n        \n        return (\n            f\"✅ 定时任务创建成功\\n\\n\"\n            f\"📋 任务ID: {task_id}\\n\"\n            f\"📝 名称: {name}\\n\"\n            f\"⏰ 调度: {schedule_desc}\\n\"\n            f\"👤 接收者: {receiver_desc}\\n\"\n            f\"{content_desc}\\n\"\n            f\"🕐 下次执行: {next_run.strftime('%Y-%m-%d %H:%M:%S') if next_run else '未知'}\"\n        )\n    \n    def _list_tasks(self, **kwargs) -> str:\n        \"\"\"List all tasks\"\"\"\n        tasks = self.task_store.list_tasks()\n        \n        if not tasks:\n            return \"📋 暂无定时任务\"\n        \n        lines = [f\"📋 定时任务列表 (共 {len(tasks)} 个)\\n\"]\n        \n        for task in tasks:\n            status = \"✅\" if task.get(\"enabled\", True) else \"❌\"\n            schedule_desc = self._format_schedule_description(task.get(\"schedule\", {}))\n            next_run = task.get(\"next_run_at\")\n            next_run_str = datetime.fromisoformat(next_run).strftime('%m-%d %H:%M') if next_run else \"未知\"\n            \n            lines.append(\n                f\"{status} [{task['id']}] {task['name']}\\n\"\n                f\"   ⏰ {schedule_desc} | 下次: {next_run_str}\"\n            )\n        \n        return \"\\n\".join(lines)\n    \n    def _get_task(self, **kwargs) -> str:\n        \"\"\"Get task details\"\"\"\n        task_id = kwargs.get(\"task_id\")\n        if not task_id:\n            return \"错误: 缺少任务ID (task_id)\"\n        \n        task = self.task_store.get_task(task_id)\n        if not task:\n            return f\"错误: 任务 '{task_id}' 不存在\"\n        \n        status = \"启用\" if task.get(\"enabled\", True) else \"禁用\"\n        schedule_desc = self._format_schedule_description(task.get(\"schedule\", {}))\n        action = task.get(\"action\", {})\n        next_run = task.get(\"next_run_at\")\n        next_run_str = datetime.fromisoformat(next_run).strftime('%Y-%m-%d %H:%M:%S') if next_run else \"未知\"\n        last_run = task.get(\"last_run_at\")\n        last_run_str = datetime.fromisoformat(last_run).strftime('%Y-%m-%d %H:%M:%S') if last_run else \"从未执行\"\n        \n        return (\n            f\"📋 任务详情\\n\\n\"\n            f\"ID: {task['id']}\\n\"\n            f\"名称: {task['name']}\\n\"\n            f\"状态: {status}\\n\"\n            f\"调度: {schedule_desc}\\n\"\n            f\"接收者: {action.get('receiver_name', action.get('receiver'))}\\n\"\n            f\"消息: {action.get('content')}\\n\"\n            f\"下次执行: {next_run_str}\\n\"\n            f\"上次执行: {last_run_str}\\n\"\n            f\"创建时间: {datetime.fromisoformat(task['created_at']).strftime('%Y-%m-%d 
%H:%M:%S')}\"\n        )\n    \n    def _delete_task(self, **kwargs) -> str:\n        \"\"\"Delete a task\"\"\"\n        task_id = kwargs.get(\"task_id\")\n        if not task_id:\n            return \"错误: 缺少任务ID (task_id)\"\n        \n        task = self.task_store.get_task(task_id)\n        if not task:\n            return f\"错误: 任务 '{task_id}' 不存在\"\n        \n        self.task_store.delete_task(task_id)\n        return f\"✅ 任务 '{task['name']}' ({task_id}) 已删除\"\n    \n    def _enable_task(self, **kwargs) -> str:\n        \"\"\"Enable a task\"\"\"\n        task_id = kwargs.get(\"task_id\")\n        if not task_id:\n            return \"错误: 缺少任务ID (task_id)\"\n        \n        task = self.task_store.get_task(task_id)\n        if not task:\n            return f\"错误: 任务 '{task_id}' 不存在\"\n        \n        self.task_store.enable_task(task_id, True)\n        return f\"✅ 任务 '{task['name']}' ({task_id}) 已启用\"\n    \n    def _disable_task(self, **kwargs) -> str:\n        \"\"\"Disable a task\"\"\"\n        task_id = kwargs.get(\"task_id\")\n        if not task_id:\n            return \"错误: 缺少任务ID (task_id)\"\n        \n        task = self.task_store.get_task(task_id)\n        if not task:\n            return f\"错误: 任务 '{task_id}' 不存在\"\n        \n        self.task_store.enable_task(task_id, False)\n        return f\"✅ 任务 '{task['name']}' ({task_id}) 已禁用\"\n    \n    def _parse_schedule(self, schedule_type: str, schedule_value: str) -> Optional[dict]:\n        \"\"\"Parse and validate schedule configuration\"\"\"\n        try:\n            if schedule_type == \"cron\":\n                # Validate cron expression\n                croniter(schedule_value)\n                return {\"type\": \"cron\", \"expression\": schedule_value}\n            \n            elif schedule_type == \"interval\":\n                # Parse interval in seconds\n                seconds = int(schedule_value)\n                if seconds <= 0:\n                    return None\n                return {\"type\": \"interval\", \"seconds\": seconds}\n            \n            elif schedule_type == \"once\":\n                # Parse datetime - support both relative and absolute time\n                \n                # Check if it's relative time (e.g., \"+5s\", \"+10m\", \"+1h\", \"+1d\")\n                if schedule_value.startswith(\"+\"):\n                    import re\n                    match = re.match(r'\\+(\\d+)([smhd])', schedule_value)\n                    if match:\n                        amount = int(match.group(1))\n                        unit = match.group(2)\n                        \n                        from datetime import timedelta\n                        now = datetime.now()\n                        \n                        if unit == 's':  # seconds\n                            target_time = now + timedelta(seconds=amount)\n                        elif unit == 'm':  # minutes\n                            target_time = now + timedelta(minutes=amount)\n                        elif unit == 'h':  # hours\n                            target_time = now + timedelta(hours=amount)\n                        elif unit == 'd':  # days\n                            target_time = now + timedelta(days=amount)\n                        else:\n                            return None\n                        \n                        return {\"type\": \"once\", \"run_at\": target_time.isoformat()}\n                    else:\n                        logger.error(f\"[SchedulerTool] Invalid relative time format: 
{schedule_value}\")\n                        return None\n                else:\n                    # Absolute time in ISO format\n                    datetime.fromisoformat(schedule_value)\n                    return {\"type\": \"once\", \"run_at\": schedule_value}\n            \n        except Exception as e:\n            logger.error(f\"[SchedulerTool] Invalid schedule: {e}\")\n            return None\n        \n        return None\n    \n    def _calculate_next_run(self, task: dict) -> Optional[datetime]:\n        \"\"\"Calculate next run time for a task\"\"\"\n        schedule = task.get(\"schedule\", {})\n        schedule_type = schedule.get(\"type\")\n        now = datetime.now()\n        \n        if schedule_type == \"cron\":\n            expression = schedule.get(\"expression\")\n            cron = croniter(expression, now)\n            return cron.get_next(datetime)\n        \n        elif schedule_type == \"interval\":\n            seconds = schedule.get(\"seconds\", 0)\n            from datetime import timedelta\n            return now + timedelta(seconds=seconds)\n        \n        elif schedule_type == \"once\":\n            run_at_str = schedule.get(\"run_at\")\n            return datetime.fromisoformat(run_at_str)\n        \n        return None\n    \n    def _format_schedule_description(self, schedule: dict) -> str:\n        \"\"\"Format schedule as human-readable description\"\"\"\n        schedule_type = schedule.get(\"type\")\n        \n        if schedule_type == \"cron\":\n            expr = schedule.get(\"expression\", \"\")\n            # Try to provide friendly description\n            if expr == \"0 9 * * *\":\n                return \"每天 9:00\"\n            elif expr == \"0 */1 * * *\":\n                return \"每小时\"\n            elif expr == \"*/30 * * * *\":\n                return \"每30分钟\"\n            else:\n                return f\"Cron: {expr}\"\n        \n        elif schedule_type == \"interval\":\n            seconds = schedule.get(\"seconds\", 0)\n            if seconds >= 86400:\n                days = seconds // 86400\n                return f\"每 {days} 天\"\n            elif seconds >= 3600:\n                hours = seconds // 3600\n                return f\"每 {hours} 小时\"\n            elif seconds >= 60:\n                minutes = seconds // 60\n                return f\"每 {minutes} 分钟\"\n            else:\n                return f\"每 {seconds} 秒\"\n        \n        elif schedule_type == \"once\":\n            run_at = schedule.get(\"run_at\", \"\")\n            try:\n                dt = datetime.fromisoformat(run_at)\n                return f\"一次性 ({dt.strftime('%Y-%m-%d %H:%M')})\"\n            except Exception:\n                return \"一次性\"\n        \n        return \"未知\"\n    \n    def _get_receiver_name(self, context: Context) -> str:\n        \"\"\"Get receiver name from context\"\"\"\n        try:\n            msg = context.get(\"msg\")\n            if msg:\n                if context.get(\"isgroup\"):\n                    return msg.other_user_nickname or \"群聊\"\n                else:\n                    return msg.from_user_nickname or \"用户\"\n        except Exception:\n            pass\n        return \"未知\"\n"
  },
  {
    "path": "agent/tools/scheduler/task_store.py",
    "content": "\"\"\"\nTask storage management for scheduler\n\"\"\"\n\nimport json\nimport os\nimport threading\nfrom datetime import datetime\nfrom typing import Dict, List, Optional\nfrom pathlib import Path\nfrom common.utils import expand_path\n\n\nclass TaskStore:\n    \"\"\"\n    Manages persistent storage of scheduled tasks\n    \"\"\"\n    \n    def __init__(self, store_path: str = None):\n        \"\"\"\n        Initialize task store\n        \n        Args:\n            store_path: Path to tasks.json file. Defaults to ~/cow/scheduler/tasks.json\n        \"\"\"\n        if store_path is None:\n            # Default to ~/cow/scheduler/tasks.json\n            home = expand_path(\"~\")\n            store_path = os.path.join(home, \"cow\", \"scheduler\", \"tasks.json\")\n        \n        self.store_path = store_path\n        self.lock = threading.Lock()\n        self._ensure_store_dir()\n    \n    def _ensure_store_dir(self):\n        \"\"\"Ensure the storage directory exists\"\"\"\n        store_dir = os.path.dirname(self.store_path)\n        os.makedirs(store_dir, exist_ok=True)\n    \n    def load_tasks(self) -> Dict[str, dict]:\n        \"\"\"\n        Load all tasks from storage\n        \n        Returns:\n            Dictionary of task_id -> task_data\n        \"\"\"\n        with self.lock:\n            if not os.path.exists(self.store_path):\n                return {}\n            \n            try:\n                with open(self.store_path, 'r', encoding='utf-8') as f:\n                    data = json.load(f)\n                    return data.get(\"tasks\", {})\n            except Exception as e:\n                print(f\"Error loading tasks: {e}\")\n                return {}\n    \n    def save_tasks(self, tasks: Dict[str, dict]):\n        \"\"\"\n        Save all tasks to storage\n        \n        Args:\n            tasks: Dictionary of task_id -> task_data\n        \"\"\"\n        with self.lock:\n            try:\n                # Create backup\n                if os.path.exists(self.store_path):\n                    backup_path = f\"{self.store_path}.bak\"\n                    try:\n                        with open(self.store_path, 'r') as src:\n                            with open(backup_path, 'w') as dst:\n                                dst.write(src.read())\n                    except Exception:\n                        pass\n                \n                # Save tasks\n                data = {\n                    \"version\": 1,\n                    \"updated_at\": datetime.now().isoformat(),\n                    \"tasks\": tasks\n                }\n                \n                with open(self.store_path, 'w', encoding='utf-8') as f:\n                    json.dump(data, f, ensure_ascii=False, indent=2)\n            except Exception as e:\n                print(f\"Error saving tasks: {e}\")\n                raise\n    \n    def add_task(self, task: dict) -> bool:\n        \"\"\"\n        Add a new task\n        \n        Args:\n            task: Task data dictionary\n            \n        Returns:\n            True if successful\n        \"\"\"\n        tasks = self.load_tasks()\n        task_id = task.get(\"id\")\n        \n        if not task_id:\n            raise ValueError(\"Task must have an 'id' field\")\n        \n        if task_id in tasks:\n            raise ValueError(f\"Task with id '{task_id}' already exists\")\n        \n        tasks[task_id] = task\n        self.save_tasks(tasks)\n        return True\n    \n    def update_task(self, 
task_id: str, updates: dict) -> bool:\n        \"\"\"\n        Update an existing task\n        \n        Args:\n            task_id: Task ID\n            updates: Dictionary of fields to update\n            \n        Returns:\n            True if successful\n        \"\"\"\n        tasks = self.load_tasks()\n        \n        if task_id not in tasks:\n            raise ValueError(f\"Task '{task_id}' not found\")\n        \n        # Update fields\n        tasks[task_id].update(updates)\n        tasks[task_id][\"updated_at\"] = datetime.now().isoformat()\n        \n        self.save_tasks(tasks)\n        return True\n    \n    def delete_task(self, task_id: str) -> bool:\n        \"\"\"\n        Delete a task\n        \n        Args:\n            task_id: Task ID\n            \n        Returns:\n            True if successful\n        \"\"\"\n        tasks = self.load_tasks()\n        \n        if task_id not in tasks:\n            raise ValueError(f\"Task '{task_id}' not found\")\n        \n        del tasks[task_id]\n        self.save_tasks(tasks)\n        return True\n    \n    def get_task(self, task_id: str) -> Optional[dict]:\n        \"\"\"\n        Get a specific task\n        \n        Args:\n            task_id: Task ID\n            \n        Returns:\n            Task data or None if not found\n        \"\"\"\n        tasks = self.load_tasks()\n        return tasks.get(task_id)\n    \n    def list_tasks(self, enabled_only: bool = False) -> List[dict]:\n        \"\"\"\n        List all tasks\n        \n        Args:\n            enabled_only: If True, only return enabled tasks\n            \n        Returns:\n            List of task dictionaries\n        \"\"\"\n        tasks = self.load_tasks()\n        task_list = list(tasks.values())\n        \n        if enabled_only:\n            task_list = [t for t in task_list if t.get(\"enabled\", True)]\n        \n        # Sort by next_run_at\n        task_list.sort(key=lambda t: t.get(\"next_run_at\", float('inf')))\n        \n        return task_list\n    \n    def enable_task(self, task_id: str, enabled: bool = True) -> bool:\n        \"\"\"\n        Enable or disable a task\n        \n        Args:\n            task_id: Task ID\n            enabled: True to enable, False to disable\n            \n        Returns:\n            True if successful\n        \"\"\"\n        return self.update_task(task_id, {\"enabled\": enabled})\n"
  },
  {
    "path": "agent/tools/send/__init__.py",
    "content": "from .send import Send\n\n__all__ = ['Send']\n"
  },
  {
    "path": "agent/tools/send/send.py",
    "content": "\"\"\"\nSend tool - Send files to the user\n\"\"\"\n\nimport os\nfrom typing import Dict, Any\nfrom pathlib import Path\n\nfrom agent.tools.base_tool import BaseTool, ToolResult\nfrom common.utils import expand_path\n\n\nclass Send(BaseTool):\n    \"\"\"Tool for sending files to the user\"\"\"\n    \n    name: str = \"send\"\n    description: str = \"Send a LOCAL file (image, video, audio, document) to the user. Only for local file paths. Do NOT use this for URLs — URLs should be included directly in your text reply, the system will handle them automatically.\"\n    \n    params: dict = {\n        \"type\": \"object\",\n        \"properties\": {\n            \"path\": {\n                \"type\": \"string\",\n                \"description\": \"Local file path to send. Must be an absolute path or relative to workspace. Do NOT pass URLs here.\"\n            },\n            \"message\": {\n                \"type\": \"string\",\n                \"description\": \"Optional message to accompany the file\"\n            }\n        },\n        \"required\": [\"path\"]\n    }\n    \n    def __init__(self, config: dict = None):\n        self.config = config or {}\n        self.cwd = self.config.get(\"cwd\", os.getcwd())\n        \n        # Supported file types\n        self.image_extensions = {'.jpg', '.jpeg', '.png', '.gif', '.webp', '.bmp', '.svg', '.ico'}\n        self.video_extensions = {'.mp4', '.avi', '.mov', '.mkv', '.flv', '.wmv', '.webm', '.m4v'}\n        self.audio_extensions = {'.mp3', '.wav', '.ogg', '.m4a', '.flac', '.aac', '.wma'}\n        self.document_extensions = {'.pdf', '.doc', '.docx', '.xls', '.xlsx', '.ppt', '.pptx', '.txt', '.md'}\n    \n    def execute(self, args: Dict[str, Any]) -> ToolResult:\n        \"\"\"\n        Execute file send operation\n        \n        :param args: Contains file path and optional message\n        :return: File metadata for channel to send\n        \"\"\"\n        path = args.get(\"path\", \"\").strip()\n        message = args.get(\"message\", \"\")\n        \n        if not path:\n            return ToolResult.fail(\"Error: path parameter is required\")\n        \n        # Resolve path\n        absolute_path = self._resolve_path(path)\n        \n        # Check if file exists\n        if not os.path.exists(absolute_path):\n            return ToolResult.fail(f\"Error: File not found: {path}\")\n        \n        # Check if readable\n        if not os.access(absolute_path, os.R_OK):\n            return ToolResult.fail(f\"Error: File is not readable: {path}\")\n        \n        # Get file info\n        file_ext = Path(absolute_path).suffix.lower()\n        file_size = os.path.getsize(absolute_path)\n        file_name = Path(absolute_path).name\n        \n        # Determine file type\n        if file_ext in self.image_extensions:\n            file_type = \"image\"\n            mime_type = self._get_image_mime_type(file_ext)\n        elif file_ext in self.video_extensions:\n            file_type = \"video\"\n            mime_type = self._get_video_mime_type(file_ext)\n        elif file_ext in self.audio_extensions:\n            file_type = \"audio\"\n            mime_type = self._get_audio_mime_type(file_ext)\n        elif file_ext in self.document_extensions:\n            file_type = \"document\"\n            mime_type = self._get_document_mime_type(file_ext)\n        else:\n            file_type = \"file\"\n            mime_type = \"application/octet-stream\"\n        \n        # Return file_to_send metadata\n        result = {\n  
          \"type\": \"file_to_send\",\n            \"file_type\": file_type,\n            \"path\": absolute_path,\n            \"file_name\": file_name,\n            \"mime_type\": mime_type,\n            \"size\": file_size,\n            \"size_formatted\": self._format_size(file_size),\n            \"message\": message or f\"正在发送 {file_name}\"\n        }\n        \n        return ToolResult.success(result)\n    \n    def _resolve_path(self, path: str) -> str:\n        \"\"\"Resolve path to absolute path\"\"\"\n        path = expand_path(path)\n        if os.path.isabs(path):\n            return path\n        return os.path.abspath(os.path.join(self.cwd, path))\n    \n    def _get_image_mime_type(self, ext: str) -> str:\n        \"\"\"Get MIME type for image\"\"\"\n        mime_map = {\n            '.jpg': 'image/jpeg', '.jpeg': 'image/jpeg',\n            '.png': 'image/png', '.gif': 'image/gif',\n            '.webp': 'image/webp', '.bmp': 'image/bmp',\n            '.svg': 'image/svg+xml', '.ico': 'image/x-icon'\n        }\n        return mime_map.get(ext, 'image/jpeg')\n    \n    def _get_video_mime_type(self, ext: str) -> str:\n        \"\"\"Get MIME type for video\"\"\"\n        mime_map = {\n            '.mp4': 'video/mp4', '.avi': 'video/x-msvideo',\n            '.mov': 'video/quicktime', '.mkv': 'video/x-matroska',\n            '.webm': 'video/webm', '.flv': 'video/x-flv'\n        }\n        return mime_map.get(ext, 'video/mp4')\n    \n    def _get_audio_mime_type(self, ext: str) -> str:\n        \"\"\"Get MIME type for audio\"\"\"\n        mime_map = {\n            '.mp3': 'audio/mpeg', '.wav': 'audio/wav',\n            '.ogg': 'audio/ogg', '.m4a': 'audio/mp4',\n            '.flac': 'audio/flac', '.aac': 'audio/aac'\n        }\n        return mime_map.get(ext, 'audio/mpeg')\n    \n    def _get_document_mime_type(self, ext: str) -> str:\n        \"\"\"Get MIME type for document\"\"\"\n        mime_map = {\n            '.pdf': 'application/pdf',\n            '.doc': 'application/msword',\n            '.docx': 'application/vnd.openxmlformats-officedocument.wordprocessingml.document',\n            '.xls': 'application/vnd.ms-excel',\n            '.xlsx': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',\n            '.ppt': 'application/vnd.ms-powerpoint',\n            '.pptx': 'application/vnd.openxmlformats-officedocument.presentationml.presentation',\n            '.txt': 'text/plain',\n            '.md': 'text/markdown'\n        }\n        return mime_map.get(ext, 'application/octet-stream')\n    \n    def _format_size(self, size_bytes: int) -> str:\n        \"\"\"Format file size in human-readable format\"\"\"\n        for unit in ['B', 'KB', 'MB', 'GB']:\n            if size_bytes < 1024.0:\n                return f\"{size_bytes:.1f}{unit}\"\n            size_bytes /= 1024.0\n        return f\"{size_bytes:.1f}TB\"\n"
  },
  {
    "path": "agent/tools/tool_manager.py",
    "content": "import importlib\nimport importlib.util\nfrom pathlib import Path\nfrom typing import Dict, Any, Type\nfrom agent.tools.base_tool import BaseTool\nfrom common.log import logger\nfrom config import conf\n\n\nclass ToolManager:\n    \"\"\"\n    Tool manager for managing tools.\n    \"\"\"\n    _instance = None\n\n    def __new__(cls):\n        \"\"\"Singleton pattern to ensure only one instance of ToolManager exists.\"\"\"\n        if cls._instance is None:\n            cls._instance = super(ToolManager, cls).__new__(cls)\n            cls._instance.tool_classes = {}  # Store tool classes instead of instances\n            cls._instance._initialized = False\n        return cls._instance\n\n    def __init__(self):\n        # Initialize only once\n        if not hasattr(self, 'tool_classes'):\n            self.tool_classes = {}  # Dictionary to store tool classes\n\n    def load_tools(self, tools_dir: str = \"\", config_dict=None):\n        \"\"\"\n        Load tools from both directory and configuration.\n\n        :param tools_dir: Directory to scan for tool modules\n        \"\"\"\n        if tools_dir:\n            self._load_tools_from_directory(tools_dir)\n            self._configure_tools_from_config()\n        else:\n            self._load_tools_from_init()\n            self._configure_tools_from_config(config_dict)\n\n    def _load_tools_from_init(self) -> bool:\n        \"\"\"\n        Load tool classes from tools.__init__.__all__\n\n        :return: True if tools were loaded, False otherwise\n        \"\"\"\n        try:\n            # Try to import the tools package\n            tools_package = importlib.import_module(\"agent.tools\")\n\n            # Check if __all__ is defined\n            if hasattr(tools_package, \"__all__\"):\n                tool_classes = tools_package.__all__\n\n                # Import each tool class directly from the tools package\n                for class_name in tool_classes:\n                    try:\n                        # Skip base classes\n                        if class_name in [\"BaseTool\", \"ToolManager\"]:\n                            continue\n\n                        # Get the class directly from the tools package\n                        if hasattr(tools_package, class_name):\n                            cls = getattr(tools_package, class_name)\n\n                            if (\n                                    isinstance(cls, type)\n                                    and issubclass(cls, BaseTool)\n                                    and cls != BaseTool\n                            ):\n                                try:\n                                    # Skip memory tools (they need special initialization with memory_manager)\n                                    if class_name in [\"MemorySearchTool\", \"MemoryGetTool\"]:\n                                        logger.debug(f\"Skipped tool {class_name} (requires memory_manager)\")\n                                        continue\n                                    \n                                    # Create a temporary instance to get the name\n                                    temp_instance = cls()\n                                    tool_name = temp_instance.name\n                                    # Store the class, not the instance\n                                    self.tool_classes[tool_name] = cls\n                                    logger.debug(f\"Loaded tool: {tool_name} from class {class_name}\")\n                                except 
ImportError as e:\n                                    # Handle missing dependencies with helpful messages\n                                    error_msg = str(e)\n                                    if \"browser-use\" in error_msg or \"browser_use\" in error_msg:\n                                        logger.warning(\n                                            f\"[ToolManager] Browser tool not loaded - missing dependencies.\\n\"\n                                            f\"  To enable browser tool, run:\\n\"\n                                            f\"    pip install browser-use markdownify playwright\\n\"\n                                            f\"    playwright install chromium\"\n                                        )\n                                    elif \"markdownify\" in error_msg:\n                                        logger.warning(\n                                            f\"[ToolManager] {cls.__name__} not loaded - missing markdownify.\\n\"\n                                            f\"  Install with: pip install markdownify\"\n                                        )\n                                    else:\n                                        logger.warning(f\"[ToolManager] {cls.__name__} not loaded due to missing dependency: {error_msg}\")\n                                except Exception as e:\n                                    logger.error(f\"Error initializing tool class {cls.__name__}: {e}\")\n                    except Exception as e:\n                        logger.error(f\"Error importing class {class_name}: {e}\")\n\n                return len(self.tool_classes) > 0\n            return False\n        except ImportError:\n            logger.warning(\"Could not import agent.tools package\")\n            return False\n        except Exception as e:\n            logger.error(f\"Error loading tools from __init__.__all__: {e}\")\n            return False\n\n    def _load_tools_from_directory(self, tools_dir: str):\n        \"\"\"Dynamically load tool classes from directory\"\"\"\n        tools_path = Path(tools_dir)\n\n        # Traverse all .py files\n        for py_file in tools_path.rglob(\"*.py\"):\n            # Skip initialization files and base tool files\n            if py_file.name in [\"__init__.py\", \"base_tool.py\", \"tool_manager.py\"]:\n                continue\n\n            # Get module name\n            module_name = py_file.stem\n\n            try:\n                # Load module directly from file\n                spec = importlib.util.spec_from_file_location(module_name, py_file)\n                if spec and spec.loader:\n                    module = importlib.util.module_from_spec(spec)\n                    spec.loader.exec_module(module)\n\n                    # Find tool classes in the module\n                    for attr_name in dir(module):\n                        cls = getattr(module, attr_name)\n                        if (\n                                isinstance(cls, type)\n                                and issubclass(cls, BaseTool)\n                                and cls != BaseTool\n                        ):\n                            try:\n                                # Skip memory tools (they need special initialization with memory_manager)\n                                if attr_name in [\"MemorySearchTool\", \"MemoryGetTool\"]:\n                                    logger.debug(f\"Skipped tool {attr_name} (requires memory_manager)\")\n                                    continue\n        
                        \n                                # Create a temporary instance to get the name\n                                temp_instance = cls()\n                                tool_name = temp_instance.name\n                                # Store the class, not the instance\n                                self.tool_classes[tool_name] = cls\n                            except ImportError as e:\n                                # Handle missing dependencies with helpful messages\n                                error_msg = str(e)\n                                if \"browser-use\" in error_msg or \"browser_use\" in error_msg:\n                                    logger.warning(\n                                        f\"[ToolManager] Browser tool not loaded - missing dependencies.\\n\"\n                                        f\"  To enable browser tool, run:\\n\"\n                                        f\"    pip install browser-use markdownify playwright\\n\"\n                                        f\"    playwright install chromium\"\n                                    )\n                                elif \"markdownify\" in error_msg:\n                                    logger.warning(\n                                        f\"[ToolManager] {cls.__name__} not loaded - missing markdownify.\\n\"\n                                        f\"  Install with: pip install markdownify\"\n                                    )\n                                else:\n                                    logger.warning(f\"[ToolManager] {cls.__name__} not loaded due to missing dependency: {error_msg}\")\n                            except Exception as e:\n                                logger.error(f\"Error initializing tool class {cls.__name__}: {e}\")\n            except Exception as e:\n                logger.error(f\"Error importing module {py_file}: {e}\")\n\n    def _configure_tools_from_config(self, config_dict=None):\n        \"\"\"Configure tool classes based on configuration file\"\"\"\n        try:\n            # Get tools configuration\n            tools_config = config_dict or conf().get(\"tools\", {})\n\n            # Record tools that are configured but not loaded\n            missing_tools = []\n\n            # Store configurations for later use when instantiating\n            self.tool_configs = tools_config\n\n            # Check which configured tools are missing\n            for tool_name in tools_config:\n                if tool_name not in self.tool_classes:\n                    missing_tools.append(tool_name)\n\n            # If there are missing tools, record warnings\n            if missing_tools:\n                for tool_name in missing_tools:\n                    if tool_name == \"browser\":\n                        logger.warning(\n                            f\"[ToolManager] Browser tool is configured but not loaded.\\n\"\n                            f\"  To enable browser tool, run:\\n\"\n                            f\"    pip install browser-use markdownify playwright\\n\"\n                            f\"    playwright install chromium\"\n                        )\n                    elif tool_name == \"google_search\":\n                        logger.warning(\n                            f\"[ToolManager] Google Search tool is configured but may need API key.\\n\"\n                            f\"  Get API key from: https://serper.dev\\n\"\n                            f\"  Configure in config.json: tools.google_search.api_key\"\n                        )\n                    else:\n                        logger.warning(f\"[ToolManager] Tool '{tool_name}' is configured but could not be loaded.\")\n\n        except Exception as e:\n            logger.error(f\"Error configuring tools from config: {e}\")\n\n    def create_tool(self, name: str) -> BaseTool:\n        \"\"\"\n        Get a new instance of a tool by name.\n\n        :param name: The name of the tool to get.\n        :return: A new instance of the tool or None if not found.\n        \"\"\"\n        tool_class = self.tool_classes.get(name)\n        if tool_class:\n            # Create a new instance\n            tool_instance = tool_class()\n\n            # Apply configuration if available\n            if hasattr(self, 'tool_configs') and name in self.tool_configs:\n                tool_instance.config = self.tool_configs[name]\n\n            return tool_instance\n        return None\n\n    def list_tools(self) -> dict:\n        \"\"\"\n        Get information about all loaded tools.\n\n        :return: A dictionary with tool information.\n        \"\"\"\n        result = {}\n        for name, tool_class in self.tool_classes.items():\n            # Create a temporary instance to get schema\n            temp_instance = tool_class()\n            result[name] = {\n                \"description\": temp_instance.description,\n                \"parameters\": temp_instance.get_json_schema()\n            }\n        return result\n"
  },
  {
    "path": "agent/tools/utils/__init__.py",
    "content": "from .truncate import (\n    truncate_head,\n    truncate_tail,\n    truncate_line,\n    format_size,\n    TruncationResult,\n    DEFAULT_MAX_LINES,\n    DEFAULT_MAX_BYTES,\n    GREP_MAX_LINE_LENGTH\n)\n\nfrom .diff import (\n    strip_bom,\n    detect_line_ending,\n    normalize_to_lf,\n    restore_line_endings,\n    normalize_for_fuzzy_match,\n    fuzzy_find_text,\n    generate_diff_string,\n    FuzzyMatchResult\n)\n\n__all__ = [\n    'truncate_head',\n    'truncate_tail',\n    'truncate_line',\n    'format_size',\n    'TruncationResult',\n    'DEFAULT_MAX_LINES',\n    'DEFAULT_MAX_BYTES',\n    'GREP_MAX_LINE_LENGTH',\n    'strip_bom',\n    'detect_line_ending',\n    'normalize_to_lf',\n    'restore_line_endings',\n    'normalize_for_fuzzy_match',\n    'fuzzy_find_text',\n    'generate_diff_string',\n    'FuzzyMatchResult'\n]\n"
  },
  {
    "path": "agent/tools/utils/diff.py",
    "content": "\"\"\"\nDiff tools for file editing\nProvides fuzzy matching and diff generation functionality\n\"\"\"\n\nimport difflib\nimport re\nfrom typing import Optional, Tuple\n\n\ndef strip_bom(text: str) -> Tuple[str, str]:\n    \"\"\"\n    Remove BOM (Byte Order Mark)\n    \n    :param text: Original text\n    :return: (BOM, text after removing BOM)\n    \"\"\"\n    if text.startswith('\\ufeff'):\n        return '\\ufeff', text[1:]\n    return '', text\n\n\ndef detect_line_ending(text: str) -> str:\n    \"\"\"\n    Detect line ending type\n    \n    :param text: Text content\n    :return: Line ending type ('\\r\\n' or '\\n')\n    \"\"\"\n    if '\\r\\n' in text:\n        return '\\r\\n'\n    return '\\n'\n\n\ndef normalize_to_lf(text: str) -> str:\n    \"\"\"\n    Normalize all line endings to LF (\\n)\n    \n    :param text: Original text\n    :return: Normalized text\n    \"\"\"\n    return text.replace('\\r\\n', '\\n').replace('\\r', '\\n')\n\n\ndef restore_line_endings(text: str, original_ending: str) -> str:\n    \"\"\"\n    Restore original line endings\n    \n    :param text: LF normalized text\n    :param original_ending: Original line ending\n    :return: Text with restored line endings\n    \"\"\"\n    if original_ending == '\\r\\n':\n        return text.replace('\\n', '\\r\\n')\n    return text\n\n\ndef normalize_for_fuzzy_match(text: str) -> str:\n    \"\"\"\n    Normalize text for fuzzy matching\n    Remove excess whitespace but preserve basic structure\n    \n    :param text: Original text\n    :return: Normalized text\n    \"\"\"\n    # Compress multiple spaces to one\n    text = re.sub(r'[ \\t]+', ' ', text)\n    # Remove trailing spaces\n    text = re.sub(r' +\\n', '\\n', text)\n    # Remove leading spaces (but preserve indentation structure, only remove excess)\n    lines = text.split('\\n')\n    normalized_lines = []\n    for line in lines:\n        # Preserve indentation but normalize to multiples of single spaces\n        stripped = line.lstrip()\n        if stripped:\n            indent_count = len(line) - len(stripped)\n            # Normalize indentation (convert tabs to spaces)\n            normalized_indent = ' ' * indent_count\n            normalized_lines.append(normalized_indent + stripped)\n        else:\n            normalized_lines.append('')\n    return '\\n'.join(normalized_lines)\n\n\nclass FuzzyMatchResult:\n    \"\"\"Fuzzy match result\"\"\"\n    \n    def __init__(self, found: bool, index: int = -1, match_length: int = 0, content_for_replacement: str = \"\"):\n        self.found = found\n        self.index = index\n        self.match_length = match_length\n        self.content_for_replacement = content_for_replacement\n\n\ndef fuzzy_find_text(content: str, old_text: str) -> FuzzyMatchResult:\n    \"\"\"\n    Find text in content, try exact match first, then fuzzy match\n    \n    :param content: Content to search in\n    :param old_text: Text to find\n    :return: Match result\n    \"\"\"\n    # First try exact match\n    index = content.find(old_text)\n    if index != -1:\n        return FuzzyMatchResult(\n            found=True,\n            index=index,\n            match_length=len(old_text),\n            content_for_replacement=content\n        )\n    \n    # Try fuzzy match\n    fuzzy_content = normalize_for_fuzzy_match(content)\n    fuzzy_old_text = normalize_for_fuzzy_match(old_text)\n    \n    index = fuzzy_content.find(fuzzy_old_text)\n    if index != -1:\n        # Fuzzy match successful, use normalized content for replacement\n   
     return FuzzyMatchResult(\n            found=True,\n            index=index,\n            match_length=len(fuzzy_old_text),\n            content_for_replacement=fuzzy_content\n        )\n    \n    # Not found\n    return FuzzyMatchResult(found=False)\n\n\ndef generate_diff_string(old_content: str, new_content: str) -> dict:\n    \"\"\"\n    Generate unified diff string\n    \n    :param old_content: Old content\n    :param new_content: New content\n    :return: Dictionary containing diff and first changed line number\n    \"\"\"\n    old_lines = old_content.split('\\n')\n    new_lines = new_content.split('\\n')\n    \n    # Generate unified diff\n    diff_lines = list(difflib.unified_diff(\n        old_lines,\n        new_lines,\n        lineterm='',\n        fromfile='original',\n        tofile='modified'\n    ))\n    \n    # Find first changed line number\n    first_changed_line = None\n    for line in diff_lines:\n        if line.startswith('@@'):\n            # Parse @@ -1,3 +1,3 @@ format\n            match = re.search(r'@@ -\\d+,?\\d* \\+(\\d+)', line)\n            if match:\n                first_changed_line = int(match.group(1))\n                break\n    \n    diff_string = '\\n'.join(diff_lines)\n    \n    return {\n        'diff': diff_string,\n        'first_changed_line': first_changed_line\n    }\n"
  },
  {
    "path": "agent/tools/utils/truncate.py",
    "content": "\"\"\"\nShared truncation utilities for tool outputs.\n\nTruncation is based on two independent limits - whichever is hit first wins:\n- Line limit (default: 2000 lines)\n- Byte limit (default: 50KB)\n\nNever returns partial lines (except bash tail truncation edge case).\n\"\"\"\n\nfrom typing import Dict, Any, Optional, Literal, Tuple\n\n\nDEFAULT_MAX_LINES = 2000\nDEFAULT_MAX_BYTES = 50 * 1024  # 50KB\nGREP_MAX_LINE_LENGTH = 500  # Max chars per grep match line\n\n\nclass TruncationResult:\n    \"\"\"Truncation result\"\"\"\n    \n    def __init__(\n        self,\n        content: str,\n        truncated: bool,\n        truncated_by: Optional[Literal[\"lines\", \"bytes\"]],\n        total_lines: int,\n        total_bytes: int,\n        output_lines: int,\n        output_bytes: int,\n        last_line_partial: bool = False,\n        first_line_exceeds_limit: bool = False,\n        max_lines: int = DEFAULT_MAX_LINES,\n        max_bytes: int = DEFAULT_MAX_BYTES\n    ):\n        self.content = content\n        self.truncated = truncated\n        self.truncated_by = truncated_by\n        self.total_lines = total_lines\n        self.total_bytes = total_bytes\n        self.output_lines = output_lines\n        self.output_bytes = output_bytes\n        self.last_line_partial = last_line_partial\n        self.first_line_exceeds_limit = first_line_exceeds_limit\n        self.max_lines = max_lines\n        self.max_bytes = max_bytes\n    \n    def to_dict(self) -> Dict[str, Any]:\n        \"\"\"Convert to dictionary\"\"\"\n        return {\n            \"content\": self.content,\n            \"truncated\": self.truncated,\n            \"truncated_by\": self.truncated_by,\n            \"total_lines\": self.total_lines,\n            \"total_bytes\": self.total_bytes,\n            \"output_lines\": self.output_lines,\n            \"output_bytes\": self.output_bytes,\n            \"last_line_partial\": self.last_line_partial,\n            \"first_line_exceeds_limit\": self.first_line_exceeds_limit,\n            \"max_lines\": self.max_lines,\n            \"max_bytes\": self.max_bytes\n        }\n\n\ndef format_size(bytes_count: int) -> str:\n    \"\"\"Format bytes as human-readable size\"\"\"\n    if bytes_count < 1024:\n        return f\"{bytes_count}B\"\n    elif bytes_count < 1024 * 1024:\n        return f\"{bytes_count / 1024:.1f}KB\"\n    else:\n        return f\"{bytes_count / (1024 * 1024):.1f}MB\"\n\n\ndef truncate_head(content: str, max_lines: Optional[int] = None, max_bytes: Optional[int] = None) -> TruncationResult:\n    \"\"\"\n    Truncate content from the head (keep first N lines/bytes).\n    Suitable for file reads where you want to see the beginning.\n    \n    Never returns partial lines. 
If first line exceeds byte limit,\n    returns empty content with first_line_exceeds_limit=True.\n    \n    :param content: Content to truncate\n    :param max_lines: Maximum number of lines (default: 2000)\n    :param max_bytes: Maximum number of bytes (default: 50KB)\n    :return: Truncation result\n    \"\"\"\n    if max_lines is None:\n        max_lines = DEFAULT_MAX_LINES\n    if max_bytes is None:\n        max_bytes = DEFAULT_MAX_BYTES\n    \n    total_bytes = len(content.encode('utf-8'))\n    lines = content.split('\\n')\n    total_lines = len(lines)\n    \n    # Check if no truncation is needed\n    if total_lines <= max_lines and total_bytes <= max_bytes:\n        return TruncationResult(\n            content=content,\n            truncated=False,\n            truncated_by=None,\n            total_lines=total_lines,\n            total_bytes=total_bytes,\n            output_lines=total_lines,\n            output_bytes=total_bytes,\n            last_line_partial=False,\n            first_line_exceeds_limit=False,\n            max_lines=max_lines,\n            max_bytes=max_bytes\n        )\n    \n    # Check if first line alone exceeds byte limit\n    first_line_bytes = len(lines[0].encode('utf-8'))\n    if first_line_bytes > max_bytes:\n        return TruncationResult(\n            content=\"\",\n            truncated=True,\n            truncated_by=\"bytes\",\n            total_lines=total_lines,\n            total_bytes=total_bytes,\n            output_lines=0,\n            output_bytes=0,\n            last_line_partial=False,\n            first_line_exceeds_limit=True,\n            max_lines=max_lines,\n            max_bytes=max_bytes\n        )\n    \n    # Collect complete lines that fit\n    output_lines_arr = []\n    output_bytes_count = 0\n    truncated_by = \"lines\"\n    \n    for i, line in enumerate(lines):\n        if i >= max_lines:\n            break\n        \n        # Calculate line bytes (add 1 for newline if not first line)\n        line_bytes = len(line.encode('utf-8')) + (1 if i > 0 else 0)\n        \n        if output_bytes_count + line_bytes > max_bytes:\n            truncated_by = \"bytes\"\n            break\n        \n        output_lines_arr.append(line)\n        output_bytes_count += line_bytes\n    \n    # If exited due to line limit\n    if len(output_lines_arr) >= max_lines and output_bytes_count <= max_bytes:\n        truncated_by = \"lines\"\n    \n    output_content = '\\n'.join(output_lines_arr)\n    final_output_bytes = len(output_content.encode('utf-8'))\n    \n    return TruncationResult(\n        content=output_content,\n        truncated=True,\n        truncated_by=truncated_by,\n        total_lines=total_lines,\n        total_bytes=total_bytes,\n        output_lines=len(output_lines_arr),\n        output_bytes=final_output_bytes,\n        last_line_partial=False,\n        first_line_exceeds_limit=False,\n        max_lines=max_lines,\n        max_bytes=max_bytes\n    )\n\n\ndef truncate_tail(content: str, max_lines: Optional[int] = None, max_bytes: Optional[int] = None) -> TruncationResult:\n    \"\"\"\n    Truncate content from tail (keep last N lines/bytes).\n    Suitable for bash output where you want to see the ending content (errors, final results).\n    \n    If the last line of original content exceeds byte limit, may return partial first line.\n    \n    :param content: Content to truncate\n    :param max_lines: Maximum lines (default: 2000)\n    :param max_bytes: Maximum bytes (default: 50KB)\n    :return: Truncation result\n    
\"\"\"\n    if max_lines is None:\n        max_lines = DEFAULT_MAX_LINES\n    if max_bytes is None:\n        max_bytes = DEFAULT_MAX_BYTES\n    \n    total_bytes = len(content.encode('utf-8'))\n    lines = content.split('\\n')\n    total_lines = len(lines)\n    \n    # Check if no truncation is needed\n    if total_lines <= max_lines and total_bytes <= max_bytes:\n        return TruncationResult(\n            content=content,\n            truncated=False,\n            truncated_by=None,\n            total_lines=total_lines,\n            total_bytes=total_bytes,\n            output_lines=total_lines,\n            output_bytes=total_bytes,\n            last_line_partial=False,\n            first_line_exceeds_limit=False,\n            max_lines=max_lines,\n            max_bytes=max_bytes\n        )\n    \n    # Work backwards from the end\n    output_lines_arr = []\n    output_bytes_count = 0\n    truncated_by = \"lines\"\n    last_line_partial = False\n    \n    for i in range(len(lines) - 1, -1, -1):\n        if len(output_lines_arr) >= max_lines:\n            break\n        \n        line = lines[i]\n        # Calculate line bytes (add newline if not the first added line)\n        line_bytes = len(line.encode('utf-8')) + (1 if len(output_lines_arr) > 0 else 0)\n        \n        if output_bytes_count + line_bytes > max_bytes:\n            truncated_by = \"bytes\"\n            # Edge case: if we haven't added any lines yet and this line exceeds maxBytes,\n            # take the end portion of this line\n            if len(output_lines_arr) == 0:\n                truncated_line = _truncate_string_to_bytes_from_end(line, max_bytes)\n                output_lines_arr.insert(0, truncated_line)\n                output_bytes_count = len(truncated_line.encode('utf-8'))\n                last_line_partial = True\n            break\n        \n        output_lines_arr.insert(0, line)\n        output_bytes_count += line_bytes\n    \n    # If exited due to line limit\n    if len(output_lines_arr) >= max_lines and output_bytes_count <= max_bytes:\n        truncated_by = \"lines\"\n    \n    output_content = '\\n'.join(output_lines_arr)\n    final_output_bytes = len(output_content.encode('utf-8'))\n    \n    return TruncationResult(\n        content=output_content,\n        truncated=True,\n        truncated_by=truncated_by,\n        total_lines=total_lines,\n        total_bytes=total_bytes,\n        output_lines=len(output_lines_arr),\n        output_bytes=final_output_bytes,\n        last_line_partial=last_line_partial,\n        first_line_exceeds_limit=False,\n        max_lines=max_lines,\n        max_bytes=max_bytes\n    )\n\n\ndef _truncate_string_to_bytes_from_end(text: str, max_bytes: int) -> str:\n    \"\"\"\n    Truncate string to fit byte limit (from end).\n    Properly handles multi-byte UTF-8 characters.\n    \n    :param text: String to truncate\n    :param max_bytes: Maximum bytes\n    :return: Truncated string\n    \"\"\"\n    encoded = text.encode('utf-8')\n    if len(encoded) <= max_bytes:\n        return text\n    \n    # Start from end, skip back maxBytes\n    start = len(encoded) - max_bytes\n    \n    # Find valid UTF-8 boundary (character start)\n    while start < len(encoded) and (encoded[start] & 0xC0) == 0x80:\n        start += 1\n    \n    return encoded[start:].decode('utf-8', errors='ignore')\n\n\ndef truncate_line(line: str, max_chars: int = GREP_MAX_LINE_LENGTH) -> Tuple[str, bool]:\n    \"\"\"\n    Truncate single line to max characters, add [truncated] suffix.\n    Used for 
grep match lines.\n    \n    :param line: Line to truncate\n    :param max_chars: Maximum characters\n    :return: (truncated text, whether truncated)\n    \"\"\"\n    if len(line) <= max_chars:\n        return line, False\n    return f\"{line[:max_chars]}... [truncated]\", True\n"
  },
  {
    "path": "agent/tools/vision/__init__.py",
    "content": "from agent.tools.vision.vision import Vision\n"
  },
  {
    "path": "agent/tools/vision/vision.py",
    "content": "\"\"\"\nVision tool - Analyze images using OpenAI-compatible Vision API.\nSupports local files (auto base64-encoded) and HTTP URLs.\nProviders: OpenAI (preferred) > LinkAI (fallback).\n\"\"\"\n\nimport base64\nimport os\nimport subprocess\nimport tempfile\nfrom typing import Any, Dict, Optional, Tuple\n\nimport requests\n\nfrom agent.tools.base_tool import BaseTool, ToolResult\nfrom common.log import logger\nfrom config import conf\n\nDEFAULT_MODEL = \"gpt-4.1-mini\"\nDEFAULT_TIMEOUT = 60\nMAX_TOKENS = 1000\nCOMPRESS_THRESHOLD = 1_048_576  # 1 MB\n\nSUPPORTED_EXTENSIONS = {\n    \"jpg\": \"image/jpeg\",\n    \"jpeg\": \"image/jpeg\",\n    \"png\": \"image/png\",\n    \"gif\": \"image/gif\",\n    \"webp\": \"image/webp\",\n}\n\n\nclass Vision(BaseTool):\n    \"\"\"Analyze images using OpenAI-compatible Vision API\"\"\"\n\n    name: str = \"vision\"\n    description: str = (\n        \"Analyze a local image or image URL (jpg/jpeg/png) using Vision API. \"\n        \"Can describe content, extract text, identify objects, colors, etc. \"\n        \"Requires OPENAI_API_KEY or LINKAI_API_KEY.\"\n    )\n\n    params: dict = {\n        \"type\": \"object\",\n        \"properties\": {\n            \"image\": {\n                \"type\": \"string\",\n                \"description\": \"Local file path or HTTP(S) URL of the image to analyze\",\n            },\n            \"question\": {\n                \"type\": \"string\",\n                \"description\": \"Question to ask about the image\",\n            },\n            \"model\": {\n                \"type\": \"string\",\n                \"description\": (\n                    f\"Vision model to use (default: {DEFAULT_MODEL}). \"\n                    \"Options: gpt-4.1-mini, gpt-4.1, gpt-4o-mini, gpt-4o\"\n                ),\n            },\n        },\n        \"required\": [\"image\", \"question\"],\n    }\n\n    def __init__(self, config: dict = None):\n        self.config = config or {}\n\n    @staticmethod\n    def is_available() -> bool:\n        return bool(\n            conf().get(\"open_ai_api_key\") or os.environ.get(\"OPENAI_API_KEY\")\n            or conf().get(\"linkai_api_key\") or os.environ.get(\"LINKAI_API_KEY\")\n        )\n\n    def execute(self, args: Dict[str, Any]) -> ToolResult:\n        image = args.get(\"image\", \"\").strip()\n        question = args.get(\"question\", \"\").strip()\n        model = args.get(\"model\", DEFAULT_MODEL).strip() or DEFAULT_MODEL\n\n        if not image:\n            return ToolResult.fail(\"Error: 'image' parameter is required\")\n        if not question:\n            return ToolResult.fail(\"Error: 'question' parameter is required\")\n\n        api_key, api_base, extra_headers = self._resolve_provider()\n        if not api_key:\n            return ToolResult.fail(\n                \"Error: No API key configured for Vision.\\n\"\n                \"Please configure one of the following using env_config tool:\\n\"\n                \"  1. OPENAI_API_KEY (preferred): env_config(action=\\\"set\\\", key=\\\"OPENAI_API_KEY\\\", value=\\\"your-key\\\")\\n\"\n                \"  2. 
LINKAI_API_KEY (fallback): env_config(action=\\\"set\\\", key=\\\"LINKAI_API_KEY\\\", value=\\\"your-key\\\")\\n\\n\"\n                \"Get your key at: https://platform.openai.com/api-keys or https://link-ai.tech\"\n            )\n\n        try:\n            image_content = self._build_image_content(image)\n        except Exception as e:\n            return ToolResult.fail(f\"Error: {e}\")\n\n        try:\n            return self._call_api(api_key, api_base, model, question, image_content, extra_headers)\n        except requests.Timeout:\n            return ToolResult.fail(f\"Error: Vision API request timed out after {DEFAULT_TIMEOUT}s\")\n        except requests.ConnectionError:\n            return ToolResult.fail(\"Error: Failed to connect to Vision API\")\n        except Exception as e:\n            logger.error(f\"[Vision] Unexpected error: {e}\", exc_info=True)\n            return ToolResult.fail(f\"Error: Vision API call failed - {e}\")\n\n    def _resolve_provider(self) -> Tuple[Optional[str], str, dict]:\n        \"\"\"Resolve API key, base URL and extra headers. Priority: conf() > env vars.\"\"\"\n        api_key = conf().get(\"open_ai_api_key\") or os.environ.get(\"OPENAI_API_KEY\")\n        if api_key:\n            api_base = (conf().get(\"open_ai_api_base\") or os.environ.get(\"OPENAI_API_BASE\", \"\")).rstrip(\"/\") \\\n                or \"https://api.openai.com/v1\"\n            return api_key, self._ensure_v1(api_base), {}\n\n        api_key = conf().get(\"linkai_api_key\") or os.environ.get(\"LINKAI_API_KEY\")\n        if api_key:\n            api_base = (conf().get(\"linkai_api_base\") or os.environ.get(\"LINKAI_API_BASE\", \"\")).rstrip(\"/\") \\\n                or \"https://api.link-ai.tech\"\n            logger.debug(\"[Vision] Using LinkAI API (OPENAI_API_KEY not set)\")\n            from common.utils import get_cloud_headers\n            extra = get_cloud_headers(api_key)\n            extra.pop(\"Authorization\", None)\n            extra.pop(\"Content-Type\", None)\n            return api_key, self._ensure_v1(api_base), extra\n\n        return None, \"\", {}\n\n    @staticmethod\n    def _ensure_v1(api_base: str) -> str:\n        \"\"\"Append /v1 if the base URL doesn't already end with a versioned path.\"\"\"\n        if not api_base:\n            return api_base\n        # Already has /v1 or similar version suffix\n        if api_base.rstrip(\"/\").split(\"/\")[-1].startswith(\"v\"):\n            return api_base\n        return api_base.rstrip(\"/\") + \"/v1\"\n\n    def _build_image_content(self, image: str) -> dict:\n        \"\"\"Build the image_url content block for the API request.\"\"\"\n        if image.startswith((\"http://\", \"https://\")):\n            return {\"type\": \"image_url\", \"image_url\": {\"url\": image}}\n\n        if not os.path.isfile(image):\n            raise FileNotFoundError(f\"Image file not found: {image}\")\n\n        ext = image.rsplit(\".\", 1)[-1].lower() if \".\" in image else \"\"\n        mime_type = SUPPORTED_EXTENSIONS.get(ext)\n        if not mime_type:\n            raise ValueError(\n                f\"Unsupported image format '.{ext}'. 
\"\n                f\"Supported: {', '.join(SUPPORTED_EXTENSIONS.keys())}\"\n            )\n\n        file_path = self._maybe_compress(image)\n        try:\n            with open(file_path, \"rb\") as f:\n                b64 = base64.b64encode(f.read()).decode(\"ascii\")\n        finally:\n            if file_path != image and os.path.exists(file_path):\n                os.remove(file_path)\n\n        data_url = f\"data:{mime_type};base64,{b64}\"\n        return {\"type\": \"image_url\", \"image_url\": {\"url\": data_url}}\n\n    @staticmethod\n    def _maybe_compress(path: str) -> str:\n        \"\"\"Compress image if larger than threshold; return path to use.\"\"\"\n        file_size = os.path.getsize(path)\n        if file_size <= COMPRESS_THRESHOLD:\n            return path\n\n        tmp = tempfile.NamedTemporaryFile(suffix=\".jpg\", delete=False)\n        tmp.close()\n\n        try:\n            # macOS: use sips\n            subprocess.run(\n                [\"sips\", \"-Z\", \"800\", path, \"--out\", tmp.name],\n                capture_output=True, check=True,\n            )\n            logger.debug(f\"[Vision] Compressed image ({file_size // 1024}KB -> {os.path.getsize(tmp.name) // 1024}KB)\")\n            return tmp.name\n        except (FileNotFoundError, subprocess.CalledProcessError):\n            pass\n\n        try:\n            # Linux: use ImageMagick convert\n            subprocess.run(\n                [\"convert\", path, \"-resize\", \"800x800>\", tmp.name],\n                capture_output=True, check=True,\n            )\n            logger.debug(f\"[Vision] Compressed image ({file_size // 1024}KB -> {os.path.getsize(tmp.name) // 1024}KB)\")\n            return tmp.name\n        except (FileNotFoundError, subprocess.CalledProcessError):\n            pass\n\n        os.remove(tmp.name)\n        return path\n\n    def _call_api(self, api_key: str, api_base: str, model: str,\n                  question: str, image_content: dict, extra_headers: dict = None) -> ToolResult:\n        payload = {\n            \"model\": model,\n            \"messages\": [\n                {\n                    \"role\": \"user\",\n                    \"content\": [\n                        {\"type\": \"text\", \"text\": question},\n                        image_content,\n                    ],\n                }\n            ],\n            \"max_tokens\": MAX_TOKENS,\n        }\n\n        headers = {\n            \"Authorization\": f\"Bearer {api_key}\",\n            \"Content-Type\": \"application/json\",\n            **(extra_headers or {}),\n        }\n\n        resp = requests.post(\n            f\"{api_base}/chat/completions\",\n            headers=headers,\n            json=payload,\n            timeout=DEFAULT_TIMEOUT,\n        )\n\n        if resp.status_code == 401:\n            return ToolResult.fail(\"Error: Invalid API key. Please check your configuration.\")\n        if resp.status_code == 429:\n            return ToolResult.fail(\"Error: API rate limit reached. 
Please try again later.\")\n        if resp.status_code != 200:\n            return ToolResult.fail(f\"Error: Vision API returned HTTP {resp.status_code}: {resp.text[:200]}\")\n\n        data = resp.json()\n\n        if \"error\" in data:\n            msg = data[\"error\"].get(\"message\", \"Unknown API error\")\n            return ToolResult.fail(f\"Error: Vision API error - {msg}\")\n\n        content = \"\"\n        choices = data.get(\"choices\", [])\n        if choices:\n            content = choices[0].get(\"message\", {}).get(\"content\", \"\")\n\n        usage = data.get(\"usage\", {})\n        result = {\n            \"model\": model,\n            \"content\": content,\n            \"usage\": {\n                \"prompt_tokens\": usage.get(\"prompt_tokens\", 0),\n                \"completion_tokens\": usage.get(\"completion_tokens\", 0),\n                \"total_tokens\": usage.get(\"total_tokens\", 0),\n            },\n        }\n        return ToolResult.success(result)\n"
  },
  {
    "path": "agent/tools/web_fetch/__init__.py",
    "content": ""
  },
  {
    "path": "agent/tools/web_fetch/web_fetch.py",
    "content": "\"\"\"\nWeb Fetch tool - Fetch and extract readable content from web pages and remote files.\n\nSupports:\n- HTML web pages: extracts readable text content\n- Document files (PDF, Word, TXT, Markdown, etc.): downloads to workspace/tmp and parses content\n\"\"\"\n\nimport os\nimport re\nimport uuid\nfrom typing import Dict, Any, Optional, Set\nfrom urllib.parse import urlparse, unquote\n\nimport requests\n\nfrom agent.tools.base_tool import BaseTool, ToolResult\nfrom agent.tools.utils.truncate import truncate_head, format_size\nfrom common.log import logger\n\n\nDEFAULT_TIMEOUT = 30\nMAX_FILE_SIZE = 50 * 1024 * 1024  # 50MB\n\nDEFAULT_HEADERS = {\n    \"User-Agent\": \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36\",\n    \"Accept\": \"*/*\",\n}\n\n# Supported document file extensions\nPDF_SUFFIXES: Set[str] = {\".pdf\"}\nWORD_SUFFIXES: Set[str] = {\".docx\"}\nTEXT_SUFFIXES: Set[str] = {\".txt\", \".md\", \".markdown\", \".rst\", \".csv\", \".tsv\", \".log\"}\nSPREADSHEET_SUFFIXES: Set[str] = {\".xls\", \".xlsx\"}\nPPT_SUFFIXES: Set[str] = {\".ppt\", \".pptx\"}\n\nALL_DOC_SUFFIXES = PDF_SUFFIXES | WORD_SUFFIXES | TEXT_SUFFIXES | SPREADSHEET_SUFFIXES | PPT_SUFFIXES\n\n_CHARSET_RE = re.compile(r'charset\\s*=\\s*[\"\\']?\\s*([\\w\\-]+)', re.IGNORECASE)\n_META_CHARSET_RE = re.compile(rb'<meta[^>]+charset\\s*=\\s*[\"\\']?\\s*([\\w\\-]+)', re.IGNORECASE)\n_META_HTTP_EQUIV_RE = re.compile(\n    rb'<meta[^>]+http-equiv\\s*=\\s*[\"\\']?Content-Type[\"\\']?[^>]+content\\s*=\\s*[\"\\'][^\"\\']*charset=([\\w\\-]+)',\n    re.IGNORECASE,\n)\n\n\ndef _extract_charset_from_content_type(content_type: str) -> Optional[str]:\n    \"\"\"Extract charset from Content-Type header value.\"\"\"\n    m = _CHARSET_RE.search(content_type)\n    return m.group(1) if m else None\n\n\ndef _extract_charset_from_html_meta(raw_bytes: bytes) -> Optional[str]:\n    \"\"\"Extract charset from HTML <meta> tags in the first few KB of raw bytes.\"\"\"\n    m = _META_CHARSET_RE.search(raw_bytes)\n    if m:\n        return m.group(1).decode(\"ascii\", errors=\"ignore\")\n    m = _META_HTTP_EQUIV_RE.search(raw_bytes)\n    if m:\n        return m.group(1).decode(\"ascii\", errors=\"ignore\")\n    return None\n\n\ndef _get_url_suffix(url: str) -> str:\n    \"\"\"Extract file extension from URL path, ignoring query params.\"\"\"\n    path = urlparse(url).path\n    return os.path.splitext(path)[-1].lower()\n\n\ndef _is_document_url(url: str) -> bool:\n    \"\"\"Check if URL points to a downloadable document file.\"\"\"\n    suffix = _get_url_suffix(url)\n    return suffix in ALL_DOC_SUFFIXES\n\n\nclass WebFetch(BaseTool):\n    \"\"\"Tool for fetching web pages and remote document files\"\"\"\n\n    name: str = \"web_fetch\"\n    description: str = (\n        \"Fetch content from a http/https URL. For web pages, extracts readable text. \"\n        \"For document files (PDF, Word, TXT, Markdown, Excel, PPT), downloads and parses the file content. 
\"\n        \"Supported file types: .pdf, .docx, .txt, .md, .csv, .xls, .xlsx, .ppt, .pptx\"\n    )\n\n    params: dict = {\n        \"type\": \"object\",\n        \"properties\": {\n            \"url\": {\n                \"type\": \"string\",\n                \"description\": \"The HTTP/HTTPS URL to fetch (web page or document file link)\"\n            }\n        },\n        \"required\": [\"url\"]\n    }\n\n    def __init__(self, config: dict = None):\n        self.config = config or {}\n        self.cwd = self.config.get(\"cwd\", os.getcwd())\n\n    def execute(self, args: Dict[str, Any]) -> ToolResult:\n        url = args.get(\"url\", \"\").strip()\n        if not url:\n            return ToolResult.fail(\"Error: 'url' parameter is required\")\n\n        parsed = urlparse(url)\n        if parsed.scheme not in (\"http\", \"https\"):\n            return ToolResult.fail(\"Error: Invalid URL (must start with http:// or https://)\")\n\n        if _is_document_url(url):\n            return self._fetch_document(url)\n\n        return self._fetch_webpage(url)\n\n    # ---- Web page fetching ----\n\n    def _fetch_webpage(self, url: str) -> ToolResult:\n        \"\"\"Fetch and extract readable text from an HTML web page.\"\"\"\n        parsed = urlparse(url)\n        try:\n            response = requests.get(\n                url,\n                headers=DEFAULT_HEADERS,\n                timeout=DEFAULT_TIMEOUT,\n                allow_redirects=True,\n            )\n            response.raise_for_status()\n        except requests.Timeout:\n            return ToolResult.fail(f\"Error: Request timed out after {DEFAULT_TIMEOUT}s\")\n        except requests.ConnectionError:\n            return ToolResult.fail(f\"Error: Failed to connect to {parsed.netloc}\")\n        except requests.HTTPError as e:\n            return ToolResult.fail(f\"Error: HTTP {e.response.status_code} for URL: {url}\")\n        except Exception as e:\n            return ToolResult.fail(f\"Error: Failed to fetch URL: {e}\")\n\n        content_type = response.headers.get(\"Content-Type\", \"\")\n        if self._is_binary_content_type(content_type) and not _is_document_url(url):\n            return self._handle_download_by_content_type(url, response, content_type)\n\n        response.encoding = self._detect_encoding(response)\n        html = response.text\n        title = self._extract_title(html)\n        text = self._extract_text(html)\n\n        return ToolResult.success(f\"Title: {title}\\n\\nContent:\\n{text}\")\n\n    # ---- Document fetching ----\n\n    def _fetch_document(self, url: str) -> ToolResult:\n        \"\"\"Download a document file and extract its text content.\"\"\"\n        suffix = _get_url_suffix(url)\n        parsed = urlparse(url)\n        filename = self._extract_filename(url)\n        tmp_dir = self._ensure_tmp_dir()\n\n        local_path = os.path.join(tmp_dir, filename)\n        logger.info(f\"[WebFetch] Downloading document: {url} -> {local_path}\")\n\n        try:\n            response = requests.get(\n                url,\n                headers=DEFAULT_HEADERS,\n                timeout=DEFAULT_TIMEOUT,\n                stream=True,\n                allow_redirects=True,\n            )\n            response.raise_for_status()\n\n            content_length = int(response.headers.get(\"Content-Length\", 0))\n            if content_length > MAX_FILE_SIZE:\n                return ToolResult.fail(\n                    f\"Error: File too large ({format_size(content_length)} > 
{format_size(MAX_FILE_SIZE)})\"\n                )\n\n            downloaded = 0\n            with open(local_path, \"wb\") as f:\n                for chunk in response.iter_content(chunk_size=8192):\n                    downloaded += len(chunk)\n                    if downloaded > MAX_FILE_SIZE:\n                        f.close()\n                        os.remove(local_path)\n                        return ToolResult.fail(\n                            f\"Error: File too large (>{format_size(MAX_FILE_SIZE)}), download aborted\"\n                        )\n                    f.write(chunk)\n\n        except requests.Timeout:\n            return ToolResult.fail(f\"Error: Download timed out after {DEFAULT_TIMEOUT}s\")\n        except requests.ConnectionError:\n            return ToolResult.fail(f\"Error: Failed to connect to {parsed.netloc}\")\n        except requests.HTTPError as e:\n            return ToolResult.fail(f\"Error: HTTP {e.response.status_code} for URL: {url}\")\n        except Exception as e:\n            self._cleanup_file(local_path)\n            return ToolResult.fail(f\"Error: Failed to download file: {e}\")\n\n        try:\n            text = self._parse_document(local_path, suffix)\n        except Exception as e:\n            self._cleanup_file(local_path)\n            return ToolResult.fail(f\"Error: Failed to parse document: {e}\")\n\n        if not text or not text.strip():\n            file_size = os.path.getsize(local_path)\n            return ToolResult.success(\n                f\"File downloaded to: {local_path} ({format_size(file_size)})\\n\"\n                f\"No text content could be extracted. The file may contain only images or be encrypted.\"\n            )\n\n        truncation = truncate_head(text)\n        result_text = truncation.content\n\n        file_size = os.path.getsize(local_path)\n        header = f\"[Document: {filename} | Size: {format_size(file_size)} | Saved to: {local_path}]\\n\\n\"\n\n        if truncation.truncated:\n            header += f\"[Content truncated: showing {truncation.output_lines} of {truncation.total_lines} lines]\\n\\n\"\n\n        return ToolResult.success(header + result_text)\n\n    def _parse_document(self, file_path: str, suffix: str) -> str:\n        \"\"\"Parse document file and return extracted text.\"\"\"\n        if suffix in PDF_SUFFIXES:\n            return self._parse_pdf(file_path)\n        elif suffix in WORD_SUFFIXES:\n            return self._parse_word(file_path)\n        elif suffix in TEXT_SUFFIXES:\n            return self._parse_text(file_path)\n        elif suffix in SPREADSHEET_SUFFIXES:\n            return self._parse_spreadsheet(file_path)\n        elif suffix in PPT_SUFFIXES:\n            return self._parse_ppt(file_path)\n        else:\n            return self._parse_text(file_path)\n\n    def _parse_pdf(self, file_path: str) -> str:\n        \"\"\"Extract text from PDF using pypdf.\"\"\"\n        try:\n            from pypdf import PdfReader\n        except ImportError:\n            raise ImportError(\"pypdf library is required for PDF parsing. 
Install with: pip install pypdf\")\n\n        reader = PdfReader(file_path)\n        text_parts = []\n        for page_num, page in enumerate(reader.pages, 1):\n            page_text = page.extract_text()\n            if page_text and page_text.strip():\n                text_parts.append(f\"--- Page {page_num}/{len(reader.pages)} ---\\n{page_text}\")\n\n        return \"\\n\\n\".join(text_parts)\n\n    def _parse_word(self, file_path: str) -> str:\n        \"\"\"Extract text from Word documents (.docx).\"\"\"\n        try:\n            from docx import Document\n        except ImportError:\n            raise ImportError(\n                \"python-docx library is required for .docx parsing. Install with: pip install python-docx\"\n            )\n        doc = Document(file_path)\n        paragraphs = [p.text for p in doc.paragraphs if p.text.strip()]\n        return \"\\n\\n\".join(paragraphs)\n\n    def _parse_text(self, file_path: str) -> str:\n        \"\"\"Read plain text files (txt, md, csv, etc.).\"\"\"\n        encodings = [\"utf-8\", \"utf-8-sig\", \"gbk\", \"gb2312\", \"latin-1\"]\n        for enc in encodings:\n            try:\n                with open(file_path, \"r\", encoding=enc) as f:\n                    return f.read()\n            except (UnicodeDecodeError, UnicodeError):\n                continue\n        raise ValueError(f\"Unable to decode file with any supported encoding: {encodings}\")\n\n    def _parse_spreadsheet(self, file_path: str) -> str:\n        \"\"\"Extract text from Excel files (.xls/.xlsx).\"\"\"\n        try:\n            import openpyxl\n        except ImportError:\n            raise ImportError(\n                \"openpyxl library is required for .xlsx parsing. Install with: pip install openpyxl\"\n            )\n\n        wb = openpyxl.load_workbook(file_path, read_only=True, data_only=True)\n        result_parts = []\n\n        for sheet_name in wb.sheetnames:\n            ws = wb[sheet_name]\n            rows = []\n            for row in ws.iter_rows(values_only=True):\n                cells = [str(c) if c is not None else \"\" for c in row]\n                if any(cells):\n                    rows.append(\" | \".join(cells))\n            if rows:\n                result_parts.append(f\"--- Sheet: {sheet_name} ---\\n\" + \"\\n\".join(rows))\n\n        wb.close()\n        return \"\\n\\n\".join(result_parts)\n\n    def _parse_ppt(self, file_path: str) -> str:\n        \"\"\"Extract text from PowerPoint files (.ppt/.pptx).\"\"\"\n        try:\n            from pptx import Presentation\n        except ImportError:\n            raise ImportError(\n                \"python-pptx library is required for .pptx parsing. 
Install with: pip install python-pptx\"\n            )\n\n        prs = Presentation(file_path)\n        text_parts = []\n\n        for slide_num, slide in enumerate(prs.slides, 1):\n            slide_texts = []\n            for shape in slide.shapes:\n                if shape.has_text_frame:\n                    for paragraph in shape.text_frame.paragraphs:\n                        text = paragraph.text.strip()\n                        if text:\n                            slide_texts.append(text)\n            if slide_texts:\n                text_parts.append(f\"--- Slide {slide_num}/{len(prs.slides)} ---\\n\" + \"\\n\".join(slide_texts))\n\n        return \"\\n\\n\".join(text_parts)\n\n    # ---- Encoding detection ----\n\n    @staticmethod\n    def _detect_encoding(response: requests.Response) -> str:\n        \"\"\"Detect response encoding with priority: Content-Type header > HTML meta > chardet > utf-8.\"\"\"\n        # 1. Check Content-Type header for explicit charset\n        content_type = response.headers.get(\"Content-Type\", \"\")\n        charset = _extract_charset_from_content_type(content_type)\n        if charset:\n            return charset\n\n        # 2. Scan raw bytes for HTML meta charset declaration\n        raw = response.content[:4096]\n        charset = _extract_charset_from_html_meta(raw)\n        if charset:\n            return charset\n\n        # 3. Use apparent_encoding (chardet-based detection) if confident enough\n        apparent = response.apparent_encoding\n        if apparent:\n            apparent_lower = apparent.lower()\n            # Trust CJK / Windows encodings detected by chardet\n            trusted_prefixes = (\"utf\", \"gb\", \"big5\", \"euc\", \"shift_jis\", \"iso-2022\", \"windows\", \"ascii\")\n            if any(apparent_lower.startswith(p) for p in trusted_prefixes):\n                return apparent\n\n        # 4. 
Fallback\n        return \"utf-8\"\n\n    # ---- Helper methods ----\n\n    def _ensure_tmp_dir(self) -> str:\n        \"\"\"Ensure workspace/tmp directory exists and return its path.\"\"\"\n        tmp_dir = os.path.join(self.cwd, \"tmp\")\n        os.makedirs(tmp_dir, exist_ok=True)\n        return tmp_dir\n\n    def _extract_filename(self, url: str) -> str:\n        \"\"\"Extract a safe filename from URL, with a short UUID prefix to avoid collisions.\"\"\"\n        path = urlparse(url).path\n        basename = os.path.basename(unquote(path))\n        if not basename or basename == \"/\":\n            basename = \"downloaded_file\"\n        # Sanitize: keep only safe chars\n        basename = re.sub(r'[^\\w.\\-]', '_', basename)\n        short_id = uuid.uuid4().hex[:8]\n        return f\"{short_id}_{basename}\"\n\n    @staticmethod\n    def _cleanup_file(path: str):\n        \"\"\"Remove a file if it exists, ignoring errors.\"\"\"\n        try:\n            if os.path.exists(path):\n                os.remove(path)\n        except Exception:\n            pass\n\n    @staticmethod\n    def _is_binary_content_type(content_type: str) -> bool:\n        \"\"\"Check if Content-Type indicates a binary/document response.\"\"\"\n        binary_types = [\n            \"application/pdf\",\n            \"application/vnd.openxmlformats\",\n            \"application/vnd.ms-excel\",\n            \"application/vnd.ms-powerpoint\",\n            \"application/octet-stream\",\n        ]\n        ct_lower = content_type.lower()\n        return any(bt in ct_lower for bt in binary_types)\n\n    def _handle_download_by_content_type(self, url: str, response: requests.Response, content_type: str) -> ToolResult:\n        \"\"\"Handle a URL that returned binary content instead of HTML.\"\"\"\n        ct_lower = content_type.lower()\n        suffix_map = {\n            \"application/pdf\": \".pdf\",\n            \"application/vnd.openxmlformats-officedocument.wordprocessingml\": \".docx\",\n            \"application/vnd.ms-excel\": \".xls\",\n            \"application/vnd.openxmlformats-officedocument.spreadsheetml\": \".xlsx\",\n            \"application/vnd.ms-powerpoint\": \".ppt\",\n            \"application/vnd.openxmlformats-officedocument.presentationml\": \".pptx\",\n        }\n        detected_suffix = None\n        for ct_prefix, ext in suffix_map.items():\n            if ct_prefix in ct_lower:\n                detected_suffix = ext\n                break\n\n        if detected_suffix and detected_suffix in ALL_DOC_SUFFIXES:\n            # Re-fetch as document\n            return self._fetch_document(url if _get_url_suffix(url) in ALL_DOC_SUFFIXES\n                                        else self._rewrite_url_with_suffix(url, detected_suffix))\n        return ToolResult.fail(f\"Error: URL returned binary content ({content_type}), not a supported document type\")\n\n    @staticmethod\n    def _rewrite_url_with_suffix(url: str, suffix: str) -> str:\n        \"\"\"Append a suffix to the URL path so _get_url_suffix works correctly.\"\"\"\n        parsed = urlparse(url)\n        new_path = parsed.path.rstrip(\"/\") + suffix\n        return parsed._replace(path=new_path).geturl()\n\n    # ---- HTML extraction (unchanged) ----\n\n    @staticmethod\n    def _extract_title(html: str) -> str:\n        match = re.search(r\"<title[^>]*>(.*?)</title>\", html, re.IGNORECASE | re.DOTALL)\n        return match.group(1).strip() if match else \"Untitled\"\n\n    @staticmethod\n    def _extract_text(html: str) -> str:\n     
   text = re.sub(r\"<script[^>]*>.*?</script>\", \"\", html, flags=re.IGNORECASE | re.DOTALL)\n        text = re.sub(r\"<style[^>]*>.*?</style>\", \"\", text, flags=re.IGNORECASE | re.DOTALL)\n        text = re.sub(r\"<[^>]+>\", \"\", text)\n        text = text.replace(\"&amp;\", \"&\").replace(\"&lt;\", \"<\").replace(\"&gt;\", \">\")\n        text = text.replace(\"&quot;\", '\"').replace(\"&#39;\", \"'\").replace(\"&nbsp;\", \" \")\n        text = re.sub(r\"[^\\S\\n]+\", \" \", text)\n        text = re.sub(r\"\\n{3,}\", \"\\n\\n\", text)\n        lines = [line.strip() for line in text.splitlines()]\n        text = \"\\n\".join(lines)\n        return text.strip()\n"
  },
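  {
    "path": "agent/tools/web_fetch_example.py",
    "content": "\"\"\"\nIllustrative usage sketch (hypothetical file, not part of the original project):\ndemonstrates the size-capped streaming download technique that WebFetch._fetch_document\nuses above, with only `requests` as a dependency. The URL in __main__ is a placeholder.\n\"\"\"\n\nimport os\n\nimport requests\n\nMAX_FILE_SIZE = 50 * 1024 * 1024  # same 50MB cap as the web_fetch tool\n\n\ndef download_with_cap(url: str, local_path: str, timeout: int = 30) -> str:\n    \"\"\"Download url to local_path, aborting if the body exceeds MAX_FILE_SIZE.\"\"\"\n    response = requests.get(url, timeout=timeout, stream=True, allow_redirects=True)\n    response.raise_for_status()\n\n    # Fast reject when the server declares an oversized body up front\n    content_length = int(response.headers.get(\"Content-Length\", 0))\n    if content_length > MAX_FILE_SIZE:\n        raise ValueError(f\"File too large: {content_length} bytes\")\n\n    # Content-Length may be absent or wrong, so also enforce the cap while streaming\n    downloaded = 0\n    with open(local_path, \"wb\") as f:\n        for chunk in response.iter_content(chunk_size=8192):\n            downloaded += len(chunk)\n            if downloaded > MAX_FILE_SIZE:\n                f.close()\n                os.remove(local_path)\n                raise ValueError(\"File too large, download aborted\")\n            f.write(chunk)\n    return local_path\n\n\nif __name__ == \"__main__\":\n    # Placeholder URL for demonstration only\n    print(download_with_cap(\"https://example.com/sample.pdf\", \"sample.pdf\"))\n"
  },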
  {
    "path": "agent/tools/web_search/__init__.py",
    "content": "from agent.tools.web_search.web_search import WebSearch\n\n__all__ = [\"WebSearch\"]\n"
  },
  {
    "path": "agent/tools/web_search/web_search.py",
    "content": "\"\"\"\nWeb Search tool - Search the web using Bocha or LinkAI search API.\nSupports two backends with unified response format:\n  1. Bocha Search (primary, requires BOCHA_API_KEY)\n  2. LinkAI Search (fallback, requires LINKAI_API_KEY)\n\"\"\"\n\nimport os\nimport json\nfrom typing import Dict, Any, Optional\n\nimport requests\n\nfrom agent.tools.base_tool import BaseTool, ToolResult\nfrom common.log import logger\nfrom config import conf\n\n\n# Default timeout for API requests (seconds)\nDEFAULT_TIMEOUT = 30\n\n\nclass WebSearch(BaseTool):\n    \"\"\"Tool for searching the web using Bocha or LinkAI search API\"\"\"\n\n    name: str = \"web_search\"\n    description: str = \"Search the web for real-time information. Returns titles, URLs, and snippets.\"\n\n    params: dict = {\n        \"type\": \"object\",\n        \"properties\": {\n            \"query\": {\n                \"type\": \"string\",\n                \"description\": \"Search query string\"\n            },\n            \"count\": {\n                \"type\": \"integer\",\n                \"description\": \"Number of results to return (1-50, default: 10)\"\n            },\n            \"freshness\": {\n                \"type\": \"string\",\n                \"description\": (\n                    \"Time range filter. Options: \"\n                    \"'noLimit' (default), 'oneDay', 'oneWeek', 'oneMonth', 'oneYear', \"\n                    \"or date range like '2025-01-01..2025-02-01'\"\n                )\n            },\n            \"summary\": {\n                \"type\": \"boolean\",\n                \"description\": \"Whether to include text summary for each result (default: false)\"\n            }\n        },\n        \"required\": [\"query\"]\n    }\n\n    def __init__(self, config: dict = None):\n        self.config = config or {}\n        self._backend = None  # Will be resolved on first execute\n\n    @staticmethod\n    def is_available() -> bool:\n        \"\"\"Check if web search is available (at least one API key is configured)\"\"\"\n        return bool(os.environ.get(\"BOCHA_API_KEY\") or os.environ.get(\"LINKAI_API_KEY\"))\n\n    def _resolve_backend(self) -> Optional[str]:\n        \"\"\"\n        Determine which search backend to use.\n        Priority: Bocha > LinkAI\n\n        :return: 'bocha', 'linkai', or None\n        \"\"\"\n        if os.environ.get(\"BOCHA_API_KEY\"):\n            return \"bocha\"\n        if os.environ.get(\"LINKAI_API_KEY\"):\n            return \"linkai\"\n        return None\n\n    def execute(self, args: Dict[str, Any]) -> ToolResult:\n        \"\"\"\n        Execute web search\n\n        :param args: Search parameters (query, count, freshness, summary)\n        :return: Search results\n        \"\"\"\n        query = args.get(\"query\", \"\").strip()\n        if not query:\n            return ToolResult.fail(\"Error: 'query' parameter is required\")\n\n        count = args.get(\"count\", 10)\n        freshness = args.get(\"freshness\", \"noLimit\")\n        summary = args.get(\"summary\", False)\n\n        # Validate count\n        if not isinstance(count, int) or count < 1 or count > 50:\n            count = 10\n\n        # Resolve backend\n        backend = self._resolve_backend()\n        if not backend:\n            return ToolResult.fail(\n                \"Error: No search API key configured. 
\"\n                \"Please set BOCHA_API_KEY or LINKAI_API_KEY using env_config tool.\\n\"\n                \"  - Bocha Search: https://open.bocha.cn\\n\"\n                \"  - LinkAI Search: https://link-ai.tech\"\n            )\n\n        try:\n            if backend == \"bocha\":\n                return self._search_bocha(query, count, freshness, summary)\n            else:\n                return self._search_linkai(query, count, freshness)\n        except requests.Timeout:\n            return ToolResult.fail(f\"Error: Search request timed out after {DEFAULT_TIMEOUT}s\")\n        except requests.ConnectionError:\n            return ToolResult.fail(\"Error: Failed to connect to search API\")\n        except Exception as e:\n            logger.error(f\"[WebSearch] Unexpected error: {e}\", exc_info=True)\n            return ToolResult.fail(f\"Error: Search failed - {str(e)}\")\n\n    def _search_bocha(self, query: str, count: int, freshness: str, summary: bool) -> ToolResult:\n        \"\"\"\n        Search using Bocha API\n\n        :param query: Search query\n        :param count: Number of results\n        :param freshness: Time range filter\n        :param summary: Whether to include summary\n        :return: Formatted search results\n        \"\"\"\n        api_key = os.environ.get(\"BOCHA_API_KEY\", \"\")\n        url = \"https://api.bocha.cn/v1/web-search\"\n\n        headers = {\n            \"Authorization\": f\"Bearer {api_key}\",\n            \"Content-Type\": \"application/json\",\n            \"Accept\": \"application/json\"\n        }\n\n        payload = {\n            \"query\": query,\n            \"count\": count,\n            \"freshness\": freshness,\n            \"summary\": summary\n        }\n\n        logger.debug(f\"[WebSearch] Bocha search: query='{query}', count={count}\")\n\n        response = requests.post(url, headers=headers, json=payload, timeout=DEFAULT_TIMEOUT)\n\n        if response.status_code == 401:\n            return ToolResult.fail(\"Error: Invalid BOCHA_API_KEY. Please check your API key.\")\n        if response.status_code == 403:\n            return ToolResult.fail(\"Error: Bocha API - insufficient balance. Please top up at https://open.bocha.cn\")\n        if response.status_code == 429:\n            return ToolResult.fail(\"Error: Bocha API rate limit reached. 
Please try again later.\")\n        if response.status_code != 200:\n            return ToolResult.fail(f\"Error: Bocha API returned HTTP {response.status_code}\")\n\n        data = response.json()\n\n        # Check API-level error code\n        api_code = data.get(\"code\")\n        if api_code is not None and api_code != 200:\n            msg = data.get(\"msg\") or \"Unknown error\"\n            return ToolResult.fail(f\"Error: Bocha API error (code={api_code}): {msg}\")\n\n        # Extract and format results\n        return self._format_bocha_results(data, query)\n\n    def _format_bocha_results(self, data: dict, query: str) -> ToolResult:\n        \"\"\"\n        Format Bocha API response into unified result structure\n\n        :param data: Raw API response\n        :param query: Original query\n        :return: Formatted ToolResult\n        \"\"\"\n        search_data = data.get(\"data\", {})\n        web_pages = search_data.get(\"webPages\", {})\n        pages = web_pages.get(\"value\", [])\n\n        if not pages:\n            return ToolResult.success({\n                \"query\": query,\n                \"backend\": \"bocha\",\n                \"total\": 0,\n                \"results\": [],\n                \"message\": \"No results found\"\n            })\n\n        results = []\n        for page in pages:\n            result = {\n                \"title\": page.get(\"name\", \"\"),\n                \"url\": page.get(\"url\", \"\"),\n                \"snippet\": page.get(\"snippet\", \"\"),\n                \"siteName\": page.get(\"siteName\", \"\"),\n                \"datePublished\": page.get(\"datePublished\") or page.get(\"dateLastCrawled\", \"\"),\n            }\n            # Include summary only if present\n            if page.get(\"summary\"):\n                result[\"summary\"] = page[\"summary\"]\n            results.append(result)\n\n        total = web_pages.get(\"totalEstimatedMatches\", len(results))\n\n        return ToolResult.success({\n            \"query\": query,\n            \"backend\": \"bocha\",\n            \"total\": total,\n            \"count\": len(results),\n            \"results\": results\n        })\n\n    def _search_linkai(self, query: str, count: int, freshness: str) -> ToolResult:\n        \"\"\"\n        Search using LinkAI plugin API\n\n        :param query: Search query\n        :param count: Number of results\n        :param freshness: Time range filter\n        :return: Formatted search results\n        \"\"\"\n        api_key = os.environ.get(\"LINKAI_API_KEY\", \"\")\n        api_base = conf().get(\"linkai_api_base\", \"https://api.link-ai.tech\")\n        url = f\"{api_base.rstrip('/')}/v1/plugin/execute\"\n\n        from common.utils import get_cloud_headers\n        headers = get_cloud_headers(api_key)\n\n        payload = {\n            \"code\": \"web-search\",\n            \"args\": {\n                \"query\": query,\n                \"count\": count,\n                \"freshness\": freshness\n            }\n        }\n\n        logger.debug(f\"[WebSearch] LinkAI search: query='{query}', count={count}\")\n\n        response = requests.post(url, headers=headers, json=payload, timeout=DEFAULT_TIMEOUT)\n\n        if response.status_code == 401:\n            return ToolResult.fail(\"Error: Invalid LINKAI_API_KEY. 
Please check your API key.\")\n        if response.status_code != 200:\n            return ToolResult.fail(f\"Error: LinkAI API returned HTTP {response.status_code}\")\n\n        data = response.json()\n\n        if not data.get(\"success\"):\n            msg = data.get(\"message\") or \"Unknown error\"\n            return ToolResult.fail(f\"Error: LinkAI search failed: {msg}\")\n\n        return self._format_linkai_results(data, query)\n\n    def _format_linkai_results(self, data: dict, query: str) -> ToolResult:\n        \"\"\"\n        Format LinkAI API response into unified result structure.\n        LinkAI returns the search data in data.data field, which follows\n        the same Bing-compatible format as Bocha.\n\n        :param data: Raw API response\n        :param query: Original query\n        :return: Formatted ToolResult\n        \"\"\"\n        raw_data = data.get(\"data\", \"\")\n\n        # LinkAI may return data as a JSON string\n        if isinstance(raw_data, str):\n            try:\n                raw_data = json.loads(raw_data)\n            except (json.JSONDecodeError, TypeError):\n                # If data is plain text, return it as a single result\n                return ToolResult.success({\n                    \"query\": query,\n                    \"backend\": \"linkai\",\n                    \"total\": 1,\n                    \"count\": 1,\n                    \"results\": [{\"content\": raw_data}]\n                })\n\n        # If the response follows Bing-compatible structure\n        if isinstance(raw_data, dict):\n            web_pages = raw_data.get(\"webPages\", {})\n            pages = web_pages.get(\"value\", [])\n\n            if pages:\n                results = []\n                for page in pages:\n                    result = {\n                        \"title\": page.get(\"name\", \"\"),\n                        \"url\": page.get(\"url\", \"\"),\n                        \"snippet\": page.get(\"snippet\", \"\"),\n                        \"siteName\": page.get(\"siteName\", \"\"),\n                        \"datePublished\": page.get(\"datePublished\") or page.get(\"dateLastCrawled\", \"\"),\n                    }\n                    if page.get(\"summary\"):\n                        result[\"summary\"] = page[\"summary\"]\n                    results.append(result)\n\n                total = web_pages.get(\"totalEstimatedMatches\", len(results))\n                return ToolResult.success({\n                    \"query\": query,\n                    \"backend\": \"linkai\",\n                    \"total\": total,\n                    \"count\": len(results),\n                    \"results\": results\n                })\n\n        # Fallback: return raw data\n        return ToolResult.success({\n            \"query\": query,\n            \"backend\": \"linkai\",\n            \"total\": 1,\n            \"count\": 1,\n            \"results\": [{\"content\": str(raw_data)}]\n        })\n"
  },
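  {
    "path": "agent/tools/web_search_example.py",
    "content": "\"\"\"\nIllustrative usage sketch (hypothetical file, not part of the original project):\na minimal direct call to the Bocha web-search endpoint, mirroring the request\nshape used by WebSearch._search_bocha above. Requires BOCHA_API_KEY in the\nenvironment; the __main__ query is a placeholder.\n\"\"\"\n\nimport os\n\nimport requests\n\n\ndef bocha_search(query: str, count: int = 10) -> list:\n    \"\"\"Return the list of web page results for query.\"\"\"\n    headers = {\n        \"Authorization\": f\"Bearer {os.environ['BOCHA_API_KEY']}\",\n        \"Content-Type\": \"application/json\",\n    }\n    payload = {\"query\": query, \"count\": count, \"freshness\": \"noLimit\", \"summary\": False}\n    resp = requests.post(\"https://api.bocha.cn/v1/web-search\", headers=headers, json=payload, timeout=30)\n    resp.raise_for_status()\n    # Results live under data.webPages.value in the Bing-compatible response format\n    return resp.json().get(\"data\", {}).get(\"webPages\", {}).get(\"value\", [])\n\n\nif __name__ == \"__main__\":\n    for page in bocha_search(\"chatgpt-on-wechat\"):\n        print(page.get(\"name\"), \"-\", page.get(\"url\"))\n"
  },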
  {
    "path": "agent/tools/write/__init__.py",
    "content": "from .write import Write\n\n__all__ = ['Write']\n"
  },
  {
    "path": "agent/tools/write/write.py",
    "content": "\"\"\"\nWrite tool - Write file content\nCreates or overwrites files, automatically creates parent directories\n\"\"\"\n\nimport os\nfrom typing import Dict, Any\nfrom pathlib import Path\n\nfrom agent.tools.base_tool import BaseTool, ToolResult\nfrom common.utils import expand_path\n\n\nclass Write(BaseTool):\n    \"\"\"Tool for writing file content\"\"\"\n    \n    name: str = \"write\"\n    description: str = \"Write content to a file. Creates the file if it doesn't exist, overwrites if it does. Automatically creates parent directories. IMPORTANT: Single write should not exceed 10KB. For large files, create a skeleton first, then use edit to add content in chunks.\"\n    \n    params: dict = {\n        \"type\": \"object\",\n        \"properties\": {\n            \"path\": {\n                \"type\": \"string\",\n                \"description\": \"Path to the file to write (relative or absolute)\"\n            },\n            \"content\": {\n                \"type\": \"string\",\n                \"description\": \"Content to write to the file\"\n            }\n        },\n        \"required\": [\"path\", \"content\"]\n    }\n    \n    def __init__(self, config: dict = None):\n        self.config = config or {}\n        self.cwd = self.config.get(\"cwd\", os.getcwd())\n        self.memory_manager = self.config.get(\"memory_manager\", None)\n    \n    def execute(self, args: Dict[str, Any]) -> ToolResult:\n        \"\"\"\n        Execute file write operation\n        \n        :param args: Contains file path and content\n        :return: Operation result\n        \"\"\"\n        path = args.get(\"path\", \"\").strip()\n        content = args.get(\"content\", \"\")\n        \n        if not path:\n            return ToolResult.fail(\"Error: path parameter is required\")\n        \n        # Resolve path\n        absolute_path = self._resolve_path(path)\n        \n        try:\n            # Create parent directory (if needed)\n            parent_dir = os.path.dirname(absolute_path)\n            if parent_dir:\n                os.makedirs(parent_dir, exist_ok=True)\n            \n            # Write file\n            with open(absolute_path, 'w', encoding='utf-8') as f:\n                f.write(content)\n            \n            # Get bytes written\n            bytes_written = len(content.encode('utf-8'))\n            \n            # Auto-sync to memory database if this is a memory file\n            if self.memory_manager and 'memory/' in path:\n                self.memory_manager.mark_dirty()\n            \n            result = {\n                \"message\": f\"Successfully wrote {bytes_written} bytes to {path}\",\n                \"path\": path,\n                \"bytes_written\": bytes_written\n            }\n            \n            return ToolResult.success(result)\n            \n        except PermissionError:\n            return ToolResult.fail(f\"Error: Permission denied writing to {path}\")\n        except Exception as e:\n            return ToolResult.fail(f\"Error writing file: {str(e)}\")\n    \n    def _resolve_path(self, path: str) -> str:\n        \"\"\"\n        Resolve path to absolute path\n        \n        :param path: Relative or absolute path\n        :return: Absolute path\n        \"\"\"\n        # Expand ~ to user home directory\n        path = expand_path(path)\n        if os.path.isabs(path):\n            return path\n        return os.path.abspath(os.path.join(self.cwd, path))\n"
  },
  {
    "path": "app.py",
    "content": "# encoding:utf-8\n\nimport os\nimport signal\nimport sys\nimport time\n\nfrom channel import channel_factory\nfrom common import const\nfrom common.log import logger\nfrom config import load_config, conf\nfrom plugins import *\nimport threading\n\n\n_channel_mgr = None\n\n\ndef get_channel_manager():\n    return _channel_mgr\n\n\ndef _parse_channel_type(raw) -> list:\n    \"\"\"\n    Parse channel_type config value into a list of channel names.\n    Supports:\n      - single string: \"feishu\"\n      - comma-separated string: \"feishu, dingtalk\"\n      - list: [\"feishu\", \"dingtalk\"]\n    \"\"\"\n    if isinstance(raw, list):\n        return [ch.strip() for ch in raw if ch.strip()]\n    if isinstance(raw, str):\n        return [ch.strip() for ch in raw.split(\",\") if ch.strip()]\n    return []\n\n\nclass ChannelManager:\n    \"\"\"\n    Manage the lifecycle of multiple channels running concurrently.\n    Each channel.startup() runs in its own daemon thread.\n    The web channel is started as default console unless explicitly disabled.\n    \"\"\"\n\n    def __init__(self):\n        self._channels = {}        # channel_name -> channel instance\n        self._threads = {}         # channel_name -> thread\n        self._primary_channel = None\n        self._lock = threading.Lock()\n        self.cloud_mode = False    # set to True when cloud client is active\n\n    @property\n    def channel(self):\n        \"\"\"Return the primary (first non-web) channel for backward compatibility.\"\"\"\n        return self._primary_channel\n\n    def get_channel(self, channel_name: str):\n        return self._channels.get(channel_name)\n\n    def start(self, channel_names: list, first_start: bool = False):\n        \"\"\"\n        Create and start one or more channels in sub-threads.\n        If first_start is True, plugins and linkai client will also be initialized.\n        \"\"\"\n        with self._lock:\n            channels = []\n            for name in channel_names:\n                ch = channel_factory.create_channel(name)\n                ch.cloud_mode = self.cloud_mode\n                self._channels[name] = ch\n                channels.append((name, ch))\n                if self._primary_channel is None and name != \"web\":\n                    self._primary_channel = ch\n\n            if self._primary_channel is None and channels:\n                self._primary_channel = channels[0][1]\n\n            if first_start:\n                PluginManager().load_plugins()\n\n                if conf().get(\"use_linkai\"):\n                    try:\n                        from common import cloud_client\n                        threading.Thread(\n                            target=cloud_client.start,\n                            args=(self._primary_channel, self),\n                            daemon=True,\n                        ).start()\n                    except Exception:\n                        pass\n\n            # Start web console first so its logs print cleanly,\n            # then start remaining channels after a brief pause.\n            web_entry = None\n            other_entries = []\n            for entry in channels:\n                if entry[0] == \"web\":\n                    web_entry = entry\n                else:\n                    other_entries.append(entry)\n\n            ordered = ([web_entry] if web_entry else []) + other_entries\n            for i, (name, ch) in enumerate(ordered):\n                if i > 0 and name != \"web\":\n                    
time.sleep(0.1)\n                t = threading.Thread(target=self._run_channel, args=(name, ch), daemon=True)\n                self._threads[name] = t\n                t.start()\n                logger.debug(f\"[ChannelManager] Channel '{name}' started in sub-thread\")\n\n    def _run_channel(self, name: str, channel):\n        try:\n            channel.startup()\n        except Exception as e:\n            logger.error(f\"[ChannelManager] Channel '{name}' startup error: {e}\")\n            logger.exception(e)\n\n    def stop(self, channel_name: str = None):\n        \"\"\"\n        Stop channel(s). If channel_name is given, stop only that channel;\n        otherwise stop all channels.\n        \"\"\"\n        # Pop under lock, then stop outside lock to avoid deadlock\n        with self._lock:\n            names = [channel_name] if channel_name else list(self._channels.keys())\n            to_stop = []\n            for name in names:\n                ch = self._channels.pop(name, None)\n                th = self._threads.pop(name, None)\n                to_stop.append((name, ch, th))\n            if channel_name and self._primary_channel is self._channels.get(channel_name):\n                self._primary_channel = None\n\n        for name, ch, th in to_stop:\n            if ch is None:\n                logger.warning(f\"[ChannelManager] Channel '{name}' not found in managed channels\")\n                if th and th.is_alive():\n                    self._interrupt_thread(th, name)\n                continue\n            logger.info(f\"[ChannelManager] Stopping channel '{name}'...\")\n            graceful = False\n            if hasattr(ch, 'stop'):\n                try:\n                    ch.stop()\n                    graceful = True\n                except Exception as e:\n                    logger.warning(f\"[ChannelManager] Error during channel '{name}' stop: {e}\")\n            if th and th.is_alive():\n                th.join(timeout=5)\n                if th.is_alive():\n                    if graceful:\n                        logger.info(f\"[ChannelManager] Channel '{name}' thread still alive after stop(), \"\n                                    \"leaving daemon thread to finish on its own\")\n                    else:\n                        logger.warning(f\"[ChannelManager] Channel '{name}' thread did not exit in 5s, forcing interrupt\")\n                        self._interrupt_thread(th, name)\n\n    @staticmethod\n    def _interrupt_thread(th: threading.Thread, name: str):\n        \"\"\"Raise SystemExit in target thread to break blocking loops like start_forever.\"\"\"\n        import ctypes\n        try:\n            tid = th.ident\n            if tid is None:\n                return\n            res = ctypes.pythonapi.PyThreadState_SetAsyncExc(\n                ctypes.c_ulong(tid), ctypes.py_object(SystemExit)\n            )\n            if res == 1:\n                logger.info(f\"[ChannelManager] Interrupted thread for channel '{name}'\")\n            elif res > 1:\n                ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_ulong(tid), None)\n                logger.warning(f\"[ChannelManager] Failed to interrupt thread for channel '{name}'\")\n        except Exception as e:\n            logger.warning(f\"[ChannelManager] Thread interrupt error for '{name}': {e}\")\n\n    def restart(self, new_channel_name: str):\n        \"\"\"\n        Restart a single channel with a new channel type.\n        Can be called from any thread (e.g. 
linkai config callback).\n        \"\"\"\n        logger.info(f\"[ChannelManager] Restarting channel to '{new_channel_name}'...\")\n        self.stop(new_channel_name)\n        _clear_singleton_cache(new_channel_name)\n        time.sleep(1)\n        self.start([new_channel_name], first_start=False)\n        logger.info(f\"[ChannelManager] Channel restarted to '{new_channel_name}' successfully\")\n\n    def add_channel(self, channel_name: str):\n        \"\"\"\n        Dynamically add and start a new channel.\n        If the channel is already running, restart it instead.\n        \"\"\"\n        with self._lock:\n            if channel_name in self._channels:\n                logger.info(f\"[ChannelManager] Channel '{channel_name}' already exists, restarting\")\n        if self._channels.get(channel_name):\n            self.restart(channel_name)\n            return\n        logger.info(f\"[ChannelManager] Adding channel '{channel_name}'...\")\n        _clear_singleton_cache(channel_name)\n        self.start([channel_name], first_start=False)\n        logger.info(f\"[ChannelManager] Channel '{channel_name}' added successfully\")\n\n    def remove_channel(self, channel_name: str):\n        \"\"\"\n        Dynamically stop and remove a running channel.\n        \"\"\"\n        with self._lock:\n            if channel_name not in self._channels:\n                logger.warning(f\"[ChannelManager] Channel '{channel_name}' not found, nothing to remove\")\n                return\n        logger.info(f\"[ChannelManager] Removing channel '{channel_name}'...\")\n        self.stop(channel_name)\n        logger.info(f\"[ChannelManager] Channel '{channel_name}' removed successfully\")\n\n\ndef _clear_singleton_cache(channel_name: str):\n    \"\"\"\n    Clear the singleton cache for the channel class so that\n    a new instance can be created with updated config.\n    \"\"\"\n    cls_map = {\n        \"web\": \"channel.web.web_channel.WebChannel\",\n        \"wechatmp\": \"channel.wechatmp.wechatmp_channel.WechatMPChannel\",\n        \"wechatmp_service\": \"channel.wechatmp.wechatmp_channel.WechatMPChannel\",\n        \"wechatcom_app\": \"channel.wechatcom.wechatcomapp_channel.WechatComAppChannel\",\n        const.FEISHU: \"channel.feishu.feishu_channel.FeiShuChanel\",\n        const.DINGTALK: \"channel.dingtalk.dingtalk_channel.DingTalkChanel\",\n        const.WECOM_BOT: \"channel.wecom_bot.wecom_bot_channel.WecomBotChannel\",\n        const.QQ: \"channel.qq.qq_channel.QQChannel\",\n    }\n    module_path = cls_map.get(channel_name)\n    if not module_path:\n        return\n    try:\n        parts = module_path.rsplit(\".\", 1)\n        module_name, class_name = parts[0], parts[1]\n        import importlib\n        module = importlib.import_module(module_name)\n        wrapper = getattr(module, class_name, None)\n        if wrapper and hasattr(wrapper, '__closure__') and wrapper.__closure__:\n            for cell in wrapper.__closure__:\n                try:\n                    cell_contents = cell.cell_contents\n                    if isinstance(cell_contents, dict):\n                        cell_contents.clear()\n                        logger.debug(f\"[ChannelManager] Cleared singleton cache for {class_name}\")\n                        break\n                except ValueError:\n                    pass\n    except Exception as e:\n        logger.warning(f\"[ChannelManager] Failed to clear singleton cache: {e}\")\n\n\ndef sigterm_handler_wrap(_signo):\n    old_handler = signal.getsignal(_signo)\n\n  
  def func(_signo, _stack_frame):\n        logger.info(\"signal {} received, exiting...\".format(_signo))\n        conf().save_user_datas()\n        if callable(old_handler):  #  check old_handler\n            return old_handler(_signo, _stack_frame)\n        sys.exit(0)\n\n    signal.signal(_signo, func)\n\n\ndef run():\n    global _channel_mgr\n    try:\n        # load config\n        load_config()\n        # ctrl + c\n        sigterm_handler_wrap(signal.SIGINT)\n        # kill signal\n        sigterm_handler_wrap(signal.SIGTERM)\n\n        # Parse channel_type into a list\n        raw_channel = conf().get(\"channel_type\", \"web\")\n\n        if \"--cmd\" in sys.argv:\n            channel_names = [\"terminal\"]\n        else:\n            channel_names = _parse_channel_type(raw_channel)\n            if not channel_names:\n                channel_names = [\"web\"]\n\n        # Auto-start web console unless explicitly disabled\n        web_console_enabled = conf().get(\"web_console\", True)\n        if web_console_enabled and \"web\" not in channel_names:\n            channel_names.append(\"web\")\n\n        logger.info(f\"[App] Starting channels: {channel_names}\")\n\n        _channel_mgr = ChannelManager()\n        _channel_mgr.start(channel_names, first_start=True)\n\n        while True:\n            time.sleep(1)\n    except Exception as e:\n        logger.error(\"App startup failed!\")\n        logger.exception(e)\n\n\nif __name__ == \"__main__\":\n    run()\n"
  },
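  {
    "path": "app_thread_interrupt_example.py",
    "content": "\"\"\"\nIllustrative sketch (hypothetical file, not part of the original project):\ndemonstrates the ctypes-based interrupt that ChannelManager._interrupt_thread\nin app.py relies on. PyThreadState_SetAsyncExc asynchronously raises an\nexception in another thread; a return value > 1 means multiple thread states\nwere affected and the call must be undone by passing None. Note the exception\nis only delivered once the target thread resumes executing Python bytecode.\n\"\"\"\n\nimport ctypes\nimport threading\nimport time\n\n\ndef interrupt_thread(th: threading.Thread) -> bool:\n    \"\"\"Raise SystemExit in th; return True if exactly one thread state was affected.\"\"\"\n    tid = th.ident\n    if tid is None:\n        return False\n    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(\n        ctypes.c_ulong(tid), ctypes.py_object(SystemExit)\n    )\n    if res > 1:\n        # Undo the pending exception if more than one thread state matched\n        ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_ulong(tid), None)\n        return False\n    return res == 1\n\n\nif __name__ == \"__main__\":\n    def blocking_loop():\n        try:\n            while True:\n                time.sleep(0.1)\n        except SystemExit:\n            print(\"loop interrupted\")\n\n    t = threading.Thread(target=blocking_loop, daemon=True)\n    t.start()\n    time.sleep(0.3)\n    interrupt_thread(t)\n    t.join(timeout=2)\n"
  },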
  {
    "path": "bridge/agent_bridge.py",
    "content": "\"\"\"\nAgent Bridge - Integrates Agent system with existing COW bridge\n\"\"\"\n\nimport os\nfrom typing import Optional, List\n\nfrom agent.protocol import Agent, LLMModel, LLMRequest\nfrom bridge.agent_event_handler import AgentEventHandler\nfrom bridge.agent_initializer import AgentInitializer\nfrom bridge.bridge import Bridge\nfrom bridge.context import Context\nfrom bridge.reply import Reply, ReplyType\nfrom common import const\nfrom common.log import logger\nfrom common.utils import expand_path\nfrom models.openai_compatible_bot import OpenAICompatibleBot\n\n\ndef add_openai_compatible_support(bot_instance):\n    \"\"\"\n    Dynamically add OpenAI-compatible tool calling support to a bot instance.\n    \n    This allows any bot to gain tool calling capability without modifying its code,\n    as long as it uses OpenAI-compatible API format.\n    \n    Note: Some bots like ZHIPUAIBot have native tool calling support and don't need enhancement.\n    \"\"\"\n    if hasattr(bot_instance, 'call_with_tools'):\n        # Bot already has tool calling support (e.g., ZHIPUAIBot)\n        logger.debug(f\"[AgentBridge] {type(bot_instance).__name__} already has native tool calling support\")\n        return bot_instance\n\n    # Create a temporary mixin class that combines the bot with OpenAI compatibility\n    class EnhancedBot(bot_instance.__class__, OpenAICompatibleBot):\n        \"\"\"Dynamically enhanced bot with OpenAI-compatible tool calling\"\"\"\n\n        def get_api_config(self):\n            \"\"\"\n            Infer API config from common configuration patterns.\n            Most OpenAI-compatible bots use similar configuration.\n            \"\"\"\n            from config import conf\n\n            return {\n                'api_key': conf().get(\"open_ai_api_key\"),\n                'api_base': conf().get(\"open_ai_api_base\"),\n                'model': conf().get(\"model\", \"gpt-3.5-turbo\"),\n                'default_temperature': conf().get(\"temperature\", 0.9),\n                'default_top_p': conf().get(\"top_p\", 1.0),\n                'default_frequency_penalty': conf().get(\"frequency_penalty\", 0.0),\n                'default_presence_penalty': conf().get(\"presence_penalty\", 0.0),\n            }\n\n    # Change the bot's class to the enhanced version\n    bot_instance.__class__ = EnhancedBot\n    logger.info(\n        f\"[AgentBridge] Enhanced {bot_instance.__class__.__bases__[0].__name__} with OpenAI-compatible tool calling\")\n\n    return bot_instance\n\n\nclass AgentLLMModel(LLMModel):\n    \"\"\"\n    LLM Model adapter that uses COW's existing bot infrastructure\n    \"\"\"\n\n    _MODEL_BOT_TYPE_MAP = {\n        \"wenxin\": const.BAIDU, \"wenxin-4\": const.BAIDU,\n        \"xunfei\": const.XUNFEI, const.QWEN: const.QWEN,\n        const.MODELSCOPE: const.MODELSCOPE,\n    }\n    _MODEL_PREFIX_MAP = [\n        (\"qwen\", const.QWEN_DASHSCOPE), (\"qwq\", const.QWEN_DASHSCOPE), (\"qvq\", const.QWEN_DASHSCOPE),\n        (\"gemini\", const.GEMINI), (\"glm\", const.ZHIPU_AI), (\"claude\", const.CLAUDEAPI),\n        (\"moonshot\", const.MOONSHOT), (\"kimi\", const.MOONSHOT),\n        (\"doubao\", const.DOUBAO),\n    ]\n\n    def __init__(self, bridge: Bridge, bot_type: str = \"chat\"):\n        from config import conf\n        super().__init__(model=conf().get(\"model\", const.GPT_41))\n        self.bridge = bridge\n        self.bot_type = bot_type\n        self._bot = None\n        self._bot_model = None\n\n    @property\n    def model(self):\n        
from config import conf\n        return conf().get(\"model\", const.GPT_41)\n\n    @model.setter\n    def model(self, value):\n        pass\n\n    def _resolve_bot_type(self, model_name: str) -> str:\n        \"\"\"Resolve bot type from model name, matching Bridge.__init__ logic.\"\"\"\n        from config import conf\n\n        if conf().get(\"use_linkai\", False) and conf().get(\"linkai_api_key\"):\n            return const.LINKAI\n        # Support custom bot type configuration\n        configured_bot_type = conf().get(\"bot_type\")\n        if configured_bot_type:\n            return configured_bot_type\n       \n        if not model_name or not isinstance(model_name, str):\n            return const.OPENAI\n        if model_name in self._MODEL_BOT_TYPE_MAP:\n            return self._MODEL_BOT_TYPE_MAP[model_name]\n        if model_name.lower().startswith(\"minimax\") or model_name in [\"abab6.5-chat\"]:\n            return const.MiniMax\n        if model_name in [const.QWEN_TURBO, const.QWEN_PLUS, const.QWEN_MAX]:\n            return const.QWEN_DASHSCOPE\n        if model_name in [const.MOONSHOT, \"moonshot-v1-8k\", \"moonshot-v1-32k\", \"moonshot-v1-128k\"]:\n            return const.MOONSHOT\n        if model_name in [const.DEEPSEEK_CHAT, const.DEEPSEEK_REASONER]:\n            return const.OPENAI\n        for prefix, btype in self._MODEL_PREFIX_MAP:\n            if model_name.startswith(prefix):\n                return btype\n        return const.OPENAI\n\n    @property\n    def bot(self):\n        \"\"\"Lazy load the bot, re-create when model changes\"\"\"\n        from models.bot_factory import create_bot\n        cur_model = self.model\n        if self._bot is None or self._bot_model != cur_model:\n            bot_type = self._resolve_bot_type(cur_model)\n            self._bot = create_bot(bot_type)\n            self._bot = add_openai_compatible_support(self._bot)\n            self._bot_model = cur_model\n        return self._bot\n\n    def call(self, request: LLMRequest):\n        \"\"\"\n        Call the model using COW's bot infrastructure\n        \"\"\"\n        try:\n            # For non-streaming calls, we'll use the existing reply method\n            # This is a simplified implementation\n            if hasattr(self.bot, 'call_with_tools'):\n                # Use tool-enabled call if available\n                kwargs = {\n                    'messages': request.messages,\n                    'tools': getattr(request, 'tools', None),\n                    'stream': False,\n                    'model': self.model  # Pass model parameter\n                }\n                # Only pass max_tokens if it's explicitly set\n                if request.max_tokens is not None:\n                    kwargs['max_tokens'] = request.max_tokens\n\n                # Extract system prompt if present\n                system_prompt = getattr(request, 'system', None)\n                if system_prompt:\n                    kwargs['system'] = system_prompt\n\n                # Pass context metadata to bot\n                channel_type = getattr(self, 'channel_type', None)\n                if channel_type:\n                    kwargs['channel_type'] = channel_type\n                session_id = getattr(self, 'session_id', None)\n                if session_id:\n                    kwargs['session_id'] = session_id\n\n                response = self.bot.call_with_tools(**kwargs)\n                return self._format_response(response)\n            else:\n                # Fallback to regular call\n    
            # This would need to be implemented based on your specific needs\n                raise NotImplementedError(\"Regular call not implemented yet\")\n                \n        except Exception as e:\n            logger.error(f\"AgentLLMModel call error: {e}\")\n            raise\n    \n    def call_stream(self, request: LLMRequest):\n        \"\"\"\n        Call the model with streaming using COW's bot infrastructure\n        \"\"\"\n        try:\n            if hasattr(self.bot, 'call_with_tools'):\n                # Use tool-enabled streaming call if available\n                # Extract system prompt if present\n                system_prompt = getattr(request, 'system', None)\n\n                # Build kwargs for call_with_tools\n                kwargs = {\n                    'messages': request.messages,\n                    'tools': getattr(request, 'tools', None),\n                    'stream': True,\n                    'model': self.model  # Pass model parameter\n                }\n\n                # Only pass max_tokens if explicitly set, let the bot use its default\n                if request.max_tokens is not None:\n                    kwargs['max_tokens'] = request.max_tokens\n\n                # Add system prompt if present\n                if system_prompt:\n                    kwargs['system'] = system_prompt\n\n                # Pass context metadata to bot\n                channel_type = getattr(self, 'channel_type', None)\n                if channel_type:\n                    kwargs['channel_type'] = channel_type\n                session_id = getattr(self, 'session_id', None)\n                if session_id:\n                    kwargs['session_id'] = session_id\n\n                stream = self.bot.call_with_tools(**kwargs)\n                \n                # Convert stream format to our expected format\n                for chunk in stream:\n                    yield self._format_stream_chunk(chunk)\n            else:\n                bot_type = type(self.bot).__name__\n                raise NotImplementedError(f\"Bot {bot_type} does not support call_with_tools. 
Please add the method.\")\n                \n        except Exception as e:\n            logger.error(f\"AgentLLMModel call_stream error: {e}\", exc_info=True)\n            raise\n    \n    def _format_response(self, response):\n        \"\"\"Format Claude response to our expected format\"\"\"\n        # This would need to be implemented based on Claude's response format\n        return response\n    \n    def _format_stream_chunk(self, chunk):\n        \"\"\"Format Claude stream chunk to our expected format\"\"\"\n        # This would need to be implemented based on Claude's stream format\n        return chunk\n\n\nclass AgentBridge:\n    \"\"\"\n    Bridge class that integrates super Agent with COW\n    Manages multiple agent instances per session for conversation isolation\n    \"\"\"\n    \n    def __init__(self, bridge: Bridge):\n        self.bridge = bridge\n        self.agents = {}  # session_id -> Agent instance mapping\n        self.default_agent = None  # For backward compatibility (no session_id)\n        self.agent: Optional[Agent] = None\n        self.scheduler_initialized = False\n        \n        # Create helper instances\n        self.initializer = AgentInitializer(bridge, self)\n    def create_agent(self, system_prompt: str, tools: List = None, **kwargs) -> Agent:\n        \"\"\"\n        Create the super agent with COW integration\n        \n        Args:\n            system_prompt: System prompt\n            tools: List of tools (optional)\n            **kwargs: Additional agent parameters\n            \n        Returns:\n            Agent instance\n        \"\"\"\n        # Create LLM model that uses COW's bot infrastructure\n        model = AgentLLMModel(self.bridge)\n        \n        # Default tools if none provided\n        if tools is None:\n            # Use ToolManager to load all available tools\n            from agent.tools import ToolManager\n            tool_manager = ToolManager()\n            tool_manager.load_tools()\n            \n            tools = []\n            for tool_name in tool_manager.tool_classes.keys():\n                try:\n                    tool = tool_manager.create_tool(tool_name)\n                    if tool:\n                        tools.append(tool)\n                except Exception as e:\n                    logger.warning(f\"[AgentBridge] Failed to load tool {tool_name}: {e}\")\n        \n        # Create agent instance\n        agent = Agent(\n            system_prompt=system_prompt,\n            description=kwargs.get(\"description\", \"AI Super Agent\"),\n            model=model,\n            tools=tools,\n            max_steps=kwargs.get(\"max_steps\", 15),\n            output_mode=kwargs.get(\"output_mode\", \"logger\"),\n            workspace_dir=kwargs.get(\"workspace_dir\"),\n            skill_manager=kwargs.get(\"skill_manager\"),\n            enable_skills=kwargs.get(\"enable_skills\", True),\n            memory_manager=kwargs.get(\"memory_manager\"),\n            max_context_tokens=kwargs.get(\"max_context_tokens\"),\n            context_reserve_tokens=kwargs.get(\"context_reserve_tokens\"),\n            runtime_info=kwargs.get(\"runtime_info\"),\n        )\n\n        # Log skill loading details\n        if agent.skill_manager:\n            logger.debug(f\"[AgentBridge] SkillManager initialized with {len(agent.skill_manager.skills)} skills\")\n\n        return agent\n    \n    def get_agent(self, session_id: str = None) -> Optional[Agent]:\n        \"\"\"\n        Get agent instance for the given session\n        \n   
     Args:\n            session_id: Session identifier (e.g., user_id). If None, returns default agent.\n        \n        Returns:\n            Agent instance for this session\n        \"\"\"\n        # If no session_id, use default agent (backward compatibility)\n        if session_id is None:\n            if self.default_agent is None:\n                self._init_default_agent()\n            return self.default_agent\n        \n        # Check if agent exists for this session\n        if session_id not in self.agents:\n            self._init_agent_for_session(session_id)\n        \n        return self.agents[session_id]\n    \n    def _init_default_agent(self):\n        \"\"\"Initialize default super agent\"\"\"\n        agent = self.initializer.initialize_agent(session_id=None)\n        self.default_agent = agent\n    \n    def _init_agent_for_session(self, session_id: str):\n        \"\"\"Initialize agent for a specific session\"\"\"\n        agent = self.initializer.initialize_agent(session_id=session_id)\n        self.agents[session_id] = agent\n    \n    def agent_reply(self, query: str, context: Context = None, \n                   on_event=None, clear_history: bool = False) -> Reply:\n        \"\"\"\n        Use super agent to reply to a query\n        \n        Args:\n            query: User query\n            context: COW context (optional, contains session_id for user isolation)\n            on_event: Event callback (optional)\n            clear_history: Whether to clear conversation history\n            \n        Returns:\n            Reply object\n        \"\"\"\n        session_id = None\n        agent = None\n        try:\n            # Extract session_id from context for user isolation\n            if context:\n                session_id = context.kwargs.get(\"session_id\") or context.get(\"session_id\")\n            \n            # Get agent for this session (will auto-initialize if needed)\n            agent = self.get_agent(session_id=session_id)\n            if not agent:\n                return Reply(ReplyType.ERROR, \"Failed to initialize super agent\")\n            \n            # Create event handler for logging and channel communication\n            event_handler = AgentEventHandler(context=context, original_callback=on_event)\n            \n            # Filter tools based on context\n            original_tools = agent.tools\n            filtered_tools = original_tools\n            \n            # If this is a scheduled task execution, exclude scheduler tool to prevent recursion\n            if context and context.get(\"is_scheduled_task\"):\n                filtered_tools = [tool for tool in agent.tools if tool.name != \"scheduler\"]\n                agent.tools = filtered_tools\n                logger.info(f\"[AgentBridge] Scheduled task execution: excluded scheduler tool ({len(filtered_tools)}/{len(original_tools)} tools)\")\n            else:\n                # Attach context to scheduler tool if present\n                if context and agent.tools:\n                    for tool in agent.tools:\n                        if tool.name == \"scheduler\":\n                            try:\n                                from agent.tools.scheduler.integration import attach_scheduler_to_tool\n                                attach_scheduler_to_tool(tool, context)\n                            except Exception as e:\n                                logger.warning(f\"[AgentBridge] Failed to attach context to scheduler: {e}\")\n                            break\n        
    \n            # Pass context metadata to model for downstream API requests\n            if context and hasattr(agent, 'model'):\n                agent.model.channel_type = context.get(\"channel_type\", \"\")\n                agent.model.session_id = session_id or \"\"\n\n            # Store session_id on agent so executor can clear DB on fatal errors\n            agent._current_session_id = session_id\n\n            try:\n                # Use agent's run_stream method with event handler\n                response = agent.run_stream(\n                    user_message=query,\n                    on_event=event_handler.handle_event,\n                    clear_history=clear_history\n                )\n            finally:\n                # Restore original tools\n                if context and context.get(\"is_scheduled_task\"):\n                    agent.tools = original_tools\n\n                # Log execution summary\n                event_handler.log_summary()\n\n            # Persist new messages generated during this run\n            if session_id:\n                channel_type = (context.get(\"channel_type\") or \"\") if context else \"\"\n                new_messages = getattr(agent, '_last_run_new_messages', [])\n                if new_messages:\n                    self._persist_messages(session_id, list(new_messages), channel_type)\n                else:\n                    with agent.messages_lock:\n                        msg_count = len(agent.messages)\n                    if msg_count == 0:\n                        try:\n                            from agent.memory import get_conversation_store\n                            get_conversation_store().clear_session(session_id)\n                            logger.info(f\"[AgentBridge] Cleared DB for recovered session: {session_id}\")\n                        except Exception as e:\n                            logger.warning(f\"[AgentBridge] Failed to clear DB after recovery: {e}\")\n            \n            # Check if there are files to send (from read tool)\n            if hasattr(agent, 'stream_executor') and hasattr(agent.stream_executor, 'files_to_send'):\n                files_to_send = agent.stream_executor.files_to_send\n                if files_to_send:\n                    # Send the first file (for now, handle one file at a time)\n                    file_info = files_to_send[0]\n                    logger.info(f\"[AgentBridge] Sending file: {file_info.get('path')}\")\n                    \n                    # Clear files_to_send for next request\n                    agent.stream_executor.files_to_send = []\n                    \n                    # Return file reply based on file type\n                    return self._create_file_reply(file_info, response, context)\n            \n            return Reply(ReplyType.TEXT, response)\n            \n        except Exception as e:\n            logger.error(f\"Agent reply error: {e}\")\n            # If the agent cleared its messages due to format error / overflow,\n            # also purge the DB so the next request starts clean.\n            if session_id and agent:\n                try:\n                    with agent.messages_lock:\n                        msg_count = len(agent.messages)\n                    if msg_count == 0:\n                        from agent.memory import get_conversation_store\n                        get_conversation_store().clear_session(session_id)\n                        logger.info(f\"[AgentBridge] Cleared DB for session after error: 
{session_id}\")\n                except Exception as db_err:\n                    logger.warning(f\"[AgentBridge] Failed to clear DB after error: {db_err}\")\n            return Reply(ReplyType.ERROR, f\"Agent error: {str(e)}\")\n    \n    def _create_file_reply(self, file_info: dict, text_response: str, context: Context = None) -> Reply:\n        \"\"\"\n        Create a reply for sending files\n        \n        Args:\n            file_info: File metadata from read tool\n            text_response: Text response from agent\n            context: Context object\n            \n        Returns:\n            Reply object for file sending\n        \"\"\"\n        file_type = file_info.get(\"file_type\", \"file\")\n        file_path = file_info.get(\"path\")\n        \n        # For images, use IMAGE_URL type (channel will handle upload)\n        if file_type == \"image\":\n            # Convert local path to file:// URL for channel processing\n            file_url = f\"file://{file_path}\"\n            logger.info(f\"[AgentBridge] Sending image: {file_url}\")\n            reply = Reply(ReplyType.IMAGE_URL, file_url)\n            # Attach text message if present (for channels that support text+image)\n            if text_response:\n                reply.text_content = text_response  # Store accompanying text\n            return reply\n        \n        # For all file types (document, video, audio), use FILE type\n        if file_type in [\"document\", \"video\", \"audio\"]:\n            file_url = f\"file://{file_path}\"\n            logger.info(f\"[AgentBridge] Sending {file_type}: {file_url}\")\n            reply = Reply(ReplyType.FILE, file_url)\n            reply.file_name = file_info.get(\"file_name\", os.path.basename(file_path))\n            # Attach text message if present\n            if text_response:\n                reply.text_content = text_response\n            return reply\n        \n        # For other unknown file types, return text with file info\n        message = text_response or file_info.get(\"message\", \"文件已准备\")\n        message += f\"\\n\\n[文件: {file_info.get('file_name', file_path)}]\"\n        return Reply(ReplyType.TEXT, message)\n    \n    def _migrate_config_to_env(self, workspace_root: str):\n        \"\"\"\n        Migrate API keys from config.json to .env file if not already set\n        \n        Args:\n            workspace_root: Workspace directory path (not used, kept for compatibility)\n        \"\"\"\n        from config import conf\n        import os\n        \n        # Mapping from config.json keys to environment variable names\n        key_mapping = {\n            \"open_ai_api_key\": \"OPENAI_API_KEY\",\n            \"open_ai_api_base\": \"OPENAI_API_BASE\",\n            \"gemini_api_key\": \"GEMINI_API_KEY\",\n            \"claude_api_key\": \"CLAUDE_API_KEY\",\n            \"linkai_api_key\": \"LINKAI_API_KEY\",\n        }\n        \n        # Use fixed secure location for .env file\n        env_file = expand_path(\"~/.cow/.env\")\n        \n        # Read existing env vars from .env file\n        existing_env_vars = {}\n        if os.path.exists(env_file):\n            try:\n                with open(env_file, 'r', encoding='utf-8') as f:\n                    for line in f:\n                        line = line.strip()\n                        if line and not line.startswith('#') and '=' in line:\n                            key, _ = line.split('=', 1)\n                            existing_env_vars[key.strip()] = True\n            except Exception 
as e:\n                logger.warning(f\"[AgentBridge] Failed to read .env file: {e}\")\n        \n        # Check which keys need to be migrated\n        keys_to_migrate = {}\n        for config_key, env_key in key_mapping.items():\n            # Skip if already in .env file\n            if env_key in existing_env_vars:\n                continue\n            \n            # Get value from config.json\n            value = conf().get(config_key, \"\")\n            if value and value.strip():  # Only migrate non-empty values\n                keys_to_migrate[env_key] = value.strip()\n        \n        # Log summary if there are keys to skip\n        if existing_env_vars:\n            logger.debug(f\"[AgentBridge] {len(existing_env_vars)} env vars already in .env\")\n        \n        # Write new keys to .env file\n        if keys_to_migrate:\n            try:\n                # Ensure ~/.cow directory and .env file exist\n                env_dir = os.path.dirname(env_file)\n                if not os.path.exists(env_dir):\n                    os.makedirs(env_dir, exist_ok=True)\n                if not os.path.exists(env_file):\n                    open(env_file, 'a').close()\n                \n                # Append new keys\n                with open(env_file, 'a', encoding='utf-8') as f:\n                    f.write('\\n# Auto-migrated from config.json\\n')\n                    for key, value in keys_to_migrate.items():\n                        f.write(f'{key}={value}\\n')\n                        # Also set in current process\n                        os.environ[key] = value\n                \n                logger.info(f\"[AgentBridge] Migrated {len(keys_to_migrate)} API keys from config.json to .env: {list(keys_to_migrate.keys())}\")\n            except Exception as e:\n                logger.warning(f\"[AgentBridge] Failed to migrate API keys: {e}\")\n    \n    def _persist_messages(\n        self, session_id: str, new_messages: list, channel_type: str = \"\"\n    ) -> None:\n        \"\"\"\n        Persist new messages to the conversation store after each agent run.\n\n        Failures are logged but never propagate — they must not interrupt replies.\n        \"\"\"\n        if not new_messages:\n            return\n        try:\n            from config import conf\n            if not conf().get(\"conversation_persistence\", True):\n                return\n        except Exception:\n            pass\n        try:\n            from agent.memory import get_conversation_store\n            get_conversation_store().append_messages(\n                session_id, new_messages, channel_type=channel_type\n            )\n        except Exception as e:\n            logger.warning(\n                f\"[AgentBridge] Failed to persist messages for session={session_id}: {e}\"\n            )\n\n    def clear_session(self, session_id: str):\n        \"\"\"\n        Clear a specific session's agent and conversation history\n        \n        Args:\n            session_id: Session identifier to clear\n        \"\"\"\n        if session_id in self.agents:\n            logger.info(f\"[AgentBridge] Clearing session: {session_id}\")\n            del self.agents[session_id]\n    \n    def clear_all_sessions(self):\n        \"\"\"Clear all agent sessions\"\"\"\n        logger.info(f\"[AgentBridge] Clearing all sessions ({len(self.agents)} total)\")\n        self.agents.clear()\n        self.default_agent = None\n    \n    def refresh_all_skills(self) -> int:\n        \"\"\"\n        Refresh skills and 
conditional tools in all agent instances after\n        environment variable changes. This allows hot-reload without restarting.\n\n        Returns:\n            Number of agent instances refreshed\n        \"\"\"\n        import os\n        from dotenv import load_dotenv\n        from config import conf\n\n        # Reload environment variables from .env file\n        workspace_root = expand_path(conf().get(\"agent_workspace\", \"~/cow\"))\n        env_file = os.path.join(workspace_root, '.env')\n\n        if os.path.exists(env_file):\n            load_dotenv(env_file, override=True)\n            logger.info(f\"[AgentBridge] Reloaded environment variables from {env_file}\")\n\n        refreshed_count = 0\n\n        # Collect all agent instances to refresh\n        agents_to_refresh = []\n        if self.default_agent:\n            agents_to_refresh.append((\"default\", self.default_agent))\n        for session_id, agent in self.agents.items():\n            agents_to_refresh.append((session_id, agent))\n\n        for label, agent in agents_to_refresh:\n            # Refresh skills\n            if hasattr(agent, 'skill_manager') and agent.skill_manager:\n                agent.skill_manager.refresh_skills()\n\n            # Refresh conditional tools (e.g. web_search depends on API keys)\n            self._refresh_conditional_tools(agent)\n\n            refreshed_count += 1\n\n        if refreshed_count > 0:\n            logger.info(f\"[AgentBridge] Refreshed skills & tools in {refreshed_count} agent instance(s)\")\n\n        return refreshed_count\n\n    @staticmethod\n    def _refresh_conditional_tools(agent):\n        \"\"\"\n        Add or remove conditional tools based on current environment variables.\n        For example, web_search should only be present when BOCHA_API_KEY or\n        LINKAI_API_KEY is set.\n        \"\"\"\n        try:\n            from agent.tools.web_search.web_search import WebSearch\n\n            has_tool = any(t.name == \"web_search\" for t in agent.tools)\n            available = WebSearch.is_available()\n\n            if available and not has_tool:\n                # API key was added - inject the tool\n                tool = WebSearch()\n                tool.model = agent.model\n                agent.tools.append(tool)\n                logger.info(\"[AgentBridge] web_search tool added (API key now available)\")\n            elif not available and has_tool:\n                # API key was removed - remove the tool\n                agent.tools = [t for t in agent.tools if t.name != \"web_search\"]\n                logger.info(\"[AgentBridge] web_search tool removed (API key no longer available)\")\n        except Exception as e:\n            logger.debug(f\"[AgentBridge] Failed to refresh conditional tools: {e}\")"
  },
  {
    "path": "bridge/agent_event_handler.py",
    "content": "\"\"\"\nAgent Event Handler - Handles agent events and thinking process output\n\"\"\"\n\nfrom common.log import logger\n\n\nclass AgentEventHandler:\n    \"\"\"\n    Handles agent events and optionally sends intermediate messages to channel\n    \"\"\"\n    \n    def __init__(self, context=None, original_callback=None):\n        \"\"\"\n        Initialize event handler\n        \n        Args:\n            context: COW context (for accessing channel)\n            original_callback: Original event callback to chain\n        \"\"\"\n        self.context = context\n        self.original_callback = original_callback\n        \n        # Get channel for sending intermediate messages\n        self.channel = None\n        if context:\n            self.channel = context.kwargs.get(\"channel\") if hasattr(context, \"kwargs\") else None\n        \n        # Track current thinking for channel output\n        self.current_thinking = \"\"\n        self.turn_number = 0\n    \n    def handle_event(self, event):\n        \"\"\"\n        Main event handler\n        \n        Args:\n            event: Event dict with type and data\n        \"\"\"\n        event_type = event.get(\"type\")\n        data = event.get(\"data\", {})\n        \n        # Dispatch to specific handlers\n        if event_type == \"turn_start\":\n            self._handle_turn_start(data)\n        elif event_type == \"message_update\":\n            self._handle_message_update(data)\n        elif event_type == \"message_end\":\n            self._handle_message_end(data)\n        elif event_type == \"tool_execution_start\":\n            self._handle_tool_execution_start(data)\n        elif event_type == \"tool_execution_end\":\n            self._handle_tool_execution_end(data)\n        \n        # Call original callback if provided\n        if self.original_callback:\n            self.original_callback(event)\n    \n    def _handle_turn_start(self, data):\n        \"\"\"Handle turn start event\"\"\"\n        self.turn_number = data.get(\"turn\", 0)\n        self.has_tool_calls_in_turn = False\n        self.current_thinking = \"\"\n    \n    def _handle_message_update(self, data):\n        \"\"\"Handle message update event (streaming text)\"\"\"\n        delta = data.get(\"delta\", \"\")\n        self.current_thinking += delta\n    \n    def _handle_message_end(self, data):\n        \"\"\"Handle message end event\"\"\"\n        tool_calls = data.get(\"tool_calls\", [])\n        \n        # Only send thinking process if followed by tool calls\n        if tool_calls:\n            if self.current_thinking.strip():\n                logger.info(f\"💭 {self.current_thinking.strip()[:200]}{'...' if len(self.current_thinking) > 200 else ''}\")\n                # Send thinking process to channel\n                self._send_to_channel(f\"{self.current_thinking.strip()}\")\n        else:\n            # No tool calls = final response (logged at agent_stream level)\n            if self.current_thinking.strip():\n                logger.debug(f\"💬 {self.current_thinking.strip()[:200]}{'...' 
if len(self.current_thinking) > 200 else ''}\")\n        \n        self.current_thinking = \"\"\n    \n    def _handle_tool_execution_start(self, data):\n        \"\"\"Handle tool execution start event - logged by agent_stream.py\"\"\"\n        pass\n    \n    def _handle_tool_execution_end(self, data):\n        \"\"\"Handle tool execution end event - logged by agent_stream.py\"\"\"\n        pass\n    \n    def _send_to_channel(self, message):\n        \"\"\"\n        Try to send intermediate message to channel.\n        Skipped in SSE mode because thinking text is already streamed via on_event.\n        \"\"\"\n        if self.context and self.context.get(\"on_event\"):\n            return\n\n        if self.channel:\n            try:\n                from bridge.reply import Reply, ReplyType\n                reply = Reply(ReplyType.TEXT, message)\n                self.channel._send(reply, self.context)\n            except Exception as e:\n                logger.debug(f\"[AgentEventHandler] Failed to send to channel: {e}\")\n    \n    def log_summary(self):\n        \"\"\"Log execution summary (intentional no-op: real-time logging during execution is sufficient)\"\"\"\n        pass\n
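\n\n# Event shape consumed by AgentEventHandler.handle_event, as inferred from the dispatch\n# above (editor's illustrative sketch; payload fields beyond these are assumptions):\n#   {\"type\": \"turn_start\", \"data\": {\"turn\": 1}}\n#   {\"type\": \"message_update\", \"data\": {\"delta\": \"partial text\"}}\n#   {\"type\": \"message_end\", \"data\": {\"tool_calls\": []}}\n"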
  },
  {
    "path": "bridge/agent_initializer.py",
    "content": "\"\"\"\nAgent Initializer - Handles agent initialization logic\n\"\"\"\n\nimport os\nimport asyncio\nimport datetime\nimport time\nfrom typing import Optional, List\n\nfrom agent.protocol import Agent\nfrom agent.tools import ToolManager\nfrom common.log import logger\nfrom common.utils import expand_path\n\n\nclass AgentInitializer:\n    \"\"\"\n    Handles agent initialization including:\n    - Workspace setup\n    - Memory system initialization  \n    - Tool loading\n    - System prompt building\n    \"\"\"\n    \n    def __init__(self, bridge, agent_bridge):\n        \"\"\"\n        Initialize agent initializer\n        \n        Args:\n            bridge: COW bridge instance\n            agent_bridge: AgentBridge instance (for create_agent method)\n        \"\"\"\n        self.bridge = bridge\n        self.agent_bridge = agent_bridge\n    \n    def initialize_agent(self, session_id: Optional[str] = None) -> Agent:\n        \"\"\"\n        Initialize agent for a session\n        \n        Args:\n            session_id: Session ID (None for default agent)\n        \n        Returns:\n            Initialized agent instance\n        \"\"\"\n        from config import conf\n        \n        # Get workspace from config\n        workspace_root = expand_path(conf().get(\"agent_workspace\", \"~/cow\"))\n        \n        # Migrate API keys\n        self._migrate_config_to_env(workspace_root)\n        \n        # Load environment variables\n        self._load_env_file()\n        \n        # Initialize workspace\n        from agent.prompt import ensure_workspace, load_context_files, PromptBuilder\n        workspace_files = ensure_workspace(workspace_root, create_templates=True)\n        \n        if session_id is None:\n            logger.info(f\"[AgentInitializer] Workspace initialized at: {workspace_root}\")\n        \n        # Setup memory system\n        memory_manager, memory_tools = self._setup_memory_system(workspace_root, session_id)\n        \n        # Load tools\n        tools = self._load_tools(workspace_root, memory_manager, memory_tools, session_id)\n        \n        # Initialize scheduler if needed\n        self._initialize_scheduler(tools, session_id)\n        \n        # Load context files\n        context_files = load_context_files(workspace_root)\n        \n        # Initialize skill manager\n        skill_manager = self._initialize_skill_manager(workspace_root, session_id)\n        \n        # Build system prompt\n        prompt_builder = PromptBuilder(workspace_dir=workspace_root, language=\"zh\")\n        runtime_info = self._get_runtime_info(workspace_root)\n        \n        system_prompt = prompt_builder.build(\n            tools=tools,\n            context_files=context_files,\n            skill_manager=skill_manager,\n            memory_manager=memory_manager,\n            runtime_info=runtime_info,\n        )\n        \n        # Get cost control parameters\n        from config import conf\n        max_steps = conf().get(\"agent_max_steps\", 20)\n        max_context_tokens = conf().get(\"agent_max_context_tokens\", 50000)\n        \n        # Create agent\n        agent = self.agent_bridge.create_agent(\n            system_prompt=system_prompt,\n            tools=tools,\n            max_steps=max_steps,\n            output_mode=\"logger\",\n            workspace_dir=workspace_root,\n            skill_manager=skill_manager,\n            enable_skills=True,\n            max_context_tokens=max_context_tokens,\n            runtime_info=runtime_info  # 
Pass runtime_info for dynamic time updates\n        )\n        \n        # Attach memory manager and share LLM model for summarization\n        if memory_manager:\n            agent.memory_manager = memory_manager\n            if hasattr(agent, 'model') and agent.model:\n                memory_manager.flush_manager.llm_model = agent.model\n\n        # Restore persisted conversation history for this session\n        if session_id:\n            self._restore_conversation_history(agent, session_id)\n\n        # Start daily memory flush timer (once, on first agent init regardless of session)\n        self._start_daily_flush_timer()\n\n        return agent\n\n    def _restore_conversation_history(self, agent, session_id: str) -> None:\n        \"\"\"\n        Load persisted conversation messages from SQLite and inject them\n        into the agent's in-memory message list.\n\n        Only user text and assistant text are restored. Tool call chains\n        (tool_use / tool_result) are stripped out because:\n        1. They are intermediate process, the value is already in the final\n           assistant text reply.\n        2. They consume massive context tokens (often 80%+ of history).\n        3. Different models have incompatible tool message formats, so\n           restoring tool chains across model switches causes 400 errors.\n        4. Eliminates the entire class of tool_use/tool_result pairing bugs.\n        \"\"\"\n        from config import conf\n        if not conf().get(\"conversation_persistence\", True):\n            return\n\n        try:\n            from agent.memory import get_conversation_store\n            store = get_conversation_store()\n            max_turns = conf().get(\"agent_max_context_turns\", 20)\n            restore_turns = max(3, max_turns // 6)\n            saved = store.load_messages(session_id, max_turns=restore_turns)\n            if saved:\n                filtered = self._filter_text_only_messages(saved)\n                if filtered:\n                    with agent.messages_lock:\n                        agent.messages = filtered\n                    logger.debug(\n                        f\"[AgentInitializer] Restored {len(filtered)} text messages \"\n                        f\"(from {len(saved)} total, {restore_turns} turns cap) \"\n                        f\"for session={session_id}\"\n                    )\n        except Exception as e:\n            logger.warning(\n                f\"[AgentInitializer] Failed to restore conversation history for \"\n                f\"session={session_id}: {e}\"\n            )\n\n    @staticmethod\n    def _filter_text_only_messages(messages: list) -> list:\n        \"\"\"\n        Extract clean user/assistant turn pairs from raw message history.\n\n        Groups messages into turns (each starting with a real user query),\n        then keeps only:\n        - The first user text in each turn (the actual user input)\n        - The last assistant text in each turn (the final answer)\n\n        All tool_use, tool_result, intermediate assistant thoughts, and\n        internal hint messages injected by the agent loop are discarded.\n        \"\"\"\n\n        def _extract_text(content) -> str:\n            if isinstance(content, str):\n                return content.strip()\n            if isinstance(content, list):\n                parts = [\n                    b.get(\"text\", \"\")\n                    for b in content\n                    if isinstance(b, dict) and b.get(\"type\") == \"text\"\n                ]\n          
      return \"\\n\".join(p for p in parts if p).strip()\n            return \"\"\n\n        def _is_real_user_msg(msg: dict) -> bool:\n            \"\"\"True for actual user input, False for tool_result or internal hints.\"\"\"\n            if msg.get(\"role\") != \"user\":\n                return False\n            content = msg.get(\"content\")\n            if isinstance(content, list):\n                has_tool_result = any(\n                    isinstance(b, dict) and b.get(\"type\") == \"tool_result\"\n                    for b in content\n                )\n                if has_tool_result:\n                    return False\n            text = _extract_text(content)\n            return bool(text)\n\n        # Group into turns: each turn starts with a real user message\n        turns = []\n        current_turn = None\n        for msg in messages:\n            if _is_real_user_msg(msg):\n                if current_turn is not None:\n                    turns.append(current_turn)\n                current_turn = {\"user\": msg, \"assistants\": []}\n            elif current_turn is not None and msg.get(\"role\") == \"assistant\":\n                text = _extract_text(msg.get(\"content\"))\n                if text:\n                    current_turn[\"assistants\"].append(text)\n        if current_turn is not None:\n            turns.append(current_turn)\n\n        # Build result: one user msg + one assistant msg per turn\n        filtered = []\n        for turn in turns:\n            user_text = _extract_text(turn[\"user\"].get(\"content\"))\n            if not user_text:\n                continue\n            filtered.append({\n                \"role\": \"user\",\n                \"content\": [{\"type\": \"text\", \"text\": user_text}]\n            })\n            if turn[\"assistants\"]:\n                final_reply = turn[\"assistants\"][-1]\n                filtered.append({\n                    \"role\": \"assistant\",\n                    \"content\": [{\"type\": \"text\", \"text\": final_reply}]\n                })\n\n        return filtered\n    \n    def _load_env_file(self):\n        \"\"\"Load environment variables from .env file\"\"\"\n        env_file = expand_path(\"~/.cow/.env\")\n        if os.path.exists(env_file):\n            try:\n                from dotenv import load_dotenv\n                load_dotenv(env_file, override=True)\n            except ImportError:\n                logger.warning(\"[AgentInitializer] python-dotenv not installed\")\n            except Exception as e:\n                logger.warning(f\"[AgentInitializer] Failed to load .env file: {e}\")\n    \n    def _setup_memory_system(self, workspace_root: str, session_id: Optional[str] = None):\n        \"\"\"\n        Setup memory system\n        \n        Returns:\n            (memory_manager, memory_tools) tuple\n        \"\"\"\n        memory_manager = None\n        memory_tools = []\n        \n        try:\n            from agent.memory import MemoryManager, MemoryConfig, create_embedding_provider\n            from agent.tools import MemorySearchTool, MemoryGetTool\n            from config import conf\n            \n            # Initialize embedding provider (prefer OpenAI, fallback to LinkAI)\n            embedding_provider = None\n\n            openai_api_key = conf().get(\"open_ai_api_key\", \"\")\n            openai_api_base = conf().get(\"open_ai_api_base\", \"\")\n            if openai_api_key and openai_api_key not in [\"\", \"YOUR API KEY\", \"YOUR_API_KEY\"]:\n                try:\n    
                embedding_provider = create_embedding_provider(\n                        provider=\"openai\",\n                        model=\"text-embedding-3-small\",\n                        api_key=openai_api_key,\n                        api_base=openai_api_base or \"https://api.openai.com/v1\"\n                    )\n                    if session_id is None:\n                        logger.info(\"[AgentInitializer] OpenAI embedding initialized\")\n                except Exception as e:\n                    logger.warning(f\"[AgentInitializer] OpenAI embedding failed: {e}\")\n\n            if embedding_provider is None:\n                linkai_api_key = conf().get(\"linkai_api_key\", \"\") or os.environ.get(\"LINKAI_API_KEY\", \"\")\n                linkai_api_base = conf().get(\"linkai_api_base\", \"https://api.link-ai.tech\")\n                if linkai_api_key and linkai_api_key not in [\"\", \"YOUR API KEY\", \"YOUR_API_KEY\"]:\n                    try:\n                        embedding_provider = create_embedding_provider(\n                            provider=\"linkai\",\n                            model=\"text-embedding-3-small\",\n                            api_key=linkai_api_key,\n                            api_base=f\"{linkai_api_base}/v1\"\n                        )\n                        if session_id is None:\n                            logger.info(\"[AgentInitializer] LinkAI embedding initialized (fallback)\")\n                    except Exception as e:\n                        logger.warning(f\"[AgentInitializer] LinkAI embedding failed: {e}\")\n            \n            # Create memory manager\n            memory_config = MemoryConfig(workspace_root=workspace_root)\n            memory_manager = MemoryManager(memory_config, embedding_provider=embedding_provider)\n            \n            # Sync memory\n            self._sync_memory(memory_manager, session_id)\n            \n            # Create memory tools\n            memory_tools = [\n                MemorySearchTool(memory_manager),\n                MemoryGetTool(memory_manager)\n            ]\n            \n            if session_id is None:\n                logger.info(\"[AgentInitializer] Memory system initialized\")\n        \n        except Exception as e:\n            logger.warning(f\"[AgentInitializer] Memory system not available: {e}\")\n        \n        return memory_manager, memory_tools\n    \n    def _sync_memory(self, memory_manager, session_id: Optional[str] = None):\n        \"\"\"Sync memory database\"\"\"\n        try:\n            loop = asyncio.get_event_loop()\n            if loop.is_closed():\n                raise RuntimeError(\"Event loop is closed\")\n        except RuntimeError:\n            loop = asyncio.new_event_loop()\n            asyncio.set_event_loop(loop)\n        \n        try:\n            if loop.is_running():\n                asyncio.create_task(memory_manager.sync())\n            else:\n                loop.run_until_complete(memory_manager.sync())\n        except Exception as e:\n            logger.warning(f\"[AgentInitializer] Memory sync failed: {e}\")\n    \n    def _load_tools(self, workspace_root: str, memory_manager, memory_tools: List, session_id: Optional[str] = None):\n        \"\"\"Load all tools\"\"\"\n        tool_manager = ToolManager()\n        tool_manager.load_tools()\n        \n        tools = []\n        file_config = {\n            \"cwd\": workspace_root,\n            \"memory_manager\": memory_manager\n        } if memory_manager else {\"cwd\": 
workspace_root}\n        \n        for tool_name in tool_manager.tool_classes.keys():\n            try:\n                # Skip web_search if no API key is available\n                if tool_name == \"web_search\":\n                    from agent.tools.web_search.web_search import WebSearch\n                    if not WebSearch.is_available():\n                        logger.debug(\"[AgentInitializer] WebSearch skipped - no BOCHA_API_KEY or LINKAI_API_KEY\")\n                        continue\n\n                # Special handling for EnvConfig tool\n                if tool_name == \"env_config\":\n                    from agent.tools import EnvConfig\n                    tool = EnvConfig({\"agent_bridge\": self.agent_bridge})\n                else:\n                    tool = tool_manager.create_tool(tool_name)\n\n                if tool:\n                    # Apply workspace config to file operation tools\n                    if tool_name in ['read', 'write', 'edit', 'bash', 'grep', 'find', 'ls', 'web_fetch']:\n                        tool.config = file_config\n                        tool.cwd = file_config.get(\"cwd\", getattr(tool, 'cwd', None))\n                        if 'memory_manager' in file_config:\n                            tool.memory_manager = file_config['memory_manager']\n                    tools.append(tool)\n            except Exception as e:\n                logger.warning(f\"[AgentInitializer] Failed to load tool {tool_name}: {e}\")\n        \n        # Add memory tools\n        if memory_tools:\n            tools.extend(memory_tools)\n            if session_id is None:\n                logger.info(f\"[AgentInitializer] Added {len(memory_tools)} memory tools\")\n        \n        if session_id is None:\n            logger.info(f\"[AgentInitializer] Loaded {len(tools)} tools: {[t.name for t in tools]}\")\n        \n        return tools\n    \n    def _initialize_scheduler(self, tools: List, session_id: Optional[str] = None):\n        \"\"\"Initialize scheduler service if needed\"\"\"\n        if not self.agent_bridge.scheduler_initialized:\n            try:\n                from agent.tools.scheduler.integration import init_scheduler\n                if init_scheduler(self.agent_bridge):\n                    self.agent_bridge.scheduler_initialized = True\n                    if session_id is None:\n                        logger.info(\"[AgentInitializer] Scheduler service initialized\")\n            except Exception as e:\n                logger.warning(f\"[AgentInitializer] Failed to initialize scheduler: {e}\")\n        \n        # Inject scheduler dependencies\n        if self.agent_bridge.scheduler_initialized:\n            try:\n                from agent.tools.scheduler.integration import get_task_store, get_scheduler_service\n                from agent.tools import SchedulerTool\n                from config import conf\n                \n                task_store = get_task_store()\n                scheduler_service = get_scheduler_service()\n                \n                for tool in tools:\n                    if isinstance(tool, SchedulerTool):\n                        tool.task_store = task_store\n                        tool.scheduler_service = scheduler_service\n                        if not tool.config:\n                            tool.config = {}\n                        raw_ct = conf().get(\"channel_type\", \"unknown\")\n                        if isinstance(raw_ct, list):\n                            ct = raw_ct[0] if raw_ct else \"unknown\"\n    
                    elif isinstance(raw_ct, str) and \",\" in raw_ct:\n                            ct = raw_ct.split(\",\")[0].strip()\n                        else:\n                            ct = raw_ct\n                        tool.config[\"channel_type\"] = ct\n            except Exception as e:\n                logger.warning(f\"[AgentInitializer] Failed to inject scheduler dependencies: {e}\")\n    \n    def _initialize_skill_manager(self, workspace_root: str, session_id: Optional[str] = None):\n        \"\"\"Initialize skill manager\"\"\"\n        try:\n            from agent.skills import SkillManager\n            skill_manager = SkillManager(custom_dir=os.path.join(workspace_root, \"skills\"))\n            return skill_manager\n        except Exception as e:\n            logger.warning(f\"[AgentInitializer] Failed to initialize SkillManager: {e}\")\n            return None\n    \n    def _get_runtime_info(self, workspace_root: str):\n        \"\"\"Get runtime information with dynamic time support\"\"\"\n        from config import conf\n        \n        def get_current_time():\n            \"\"\"Get current time dynamically - called each time system prompt is accessed\"\"\"\n            now = datetime.datetime.now()\n            \n            # Get timezone info\n            try:\n                offset = -time.timezone if not time.daylight else -time.altzone\n                hours = offset // 3600\n                minutes = (offset % 3600) // 60\n                timezone_name = f\"UTC{hours:+03d}:{minutes:02d}\" if minutes else f\"UTC{hours:+03d}\"\n            except Exception:\n                timezone_name = \"UTC\"\n            \n            # Chinese weekday mapping\n            weekday_map = {\n                'Monday': '星期一', 'Tuesday': '星期二', 'Wednesday': '星期三',\n                'Thursday': '星期四', 'Friday': '星期五', 'Saturday': '星期六', 'Sunday': '星期日'\n            }\n            weekday_zh = weekday_map.get(now.strftime(\"%A\"), now.strftime(\"%A\"))\n            \n            return {\n                'time': now.strftime(\"%Y-%m-%d %H:%M:%S\"),\n                'weekday': weekday_zh,\n                'timezone': timezone_name\n            }\n        \n        return {\n            \"model\": conf().get(\"model\", \"unknown\"),\n            \"workspace\": workspace_root,\n            \"channel\": \", \".join(conf().get(\"channel_type\")) if isinstance(conf().get(\"channel_type\"), list) else conf().get(\"channel_type\", \"unknown\"),\n            \"_get_current_time\": get_current_time  # Dynamic time function\n        }\n    \n    def _migrate_config_to_env(self, workspace_root: str):\n        \"\"\"Migrate API keys from config.json to .env file\"\"\"\n        from config import conf\n        \n        key_mapping = {\n            \"open_ai_api_key\": \"OPENAI_API_KEY\",\n            \"open_ai_api_base\": \"OPENAI_API_BASE\",\n            \"gemini_api_key\": \"GEMINI_API_KEY\",\n            \"claude_api_key\": \"CLAUDE_API_KEY\",\n            \"linkai_api_key\": \"LINKAI_API_KEY\",\n        }\n        \n        env_file = expand_path(\"~/.cow/.env\")\n        \n        # Read existing env vars\n        existing_env_vars = {}\n        if os.path.exists(env_file):\n            try:\n                with open(env_file, 'r', encoding='utf-8') as f:\n                    for line in f:\n                        line = line.strip()\n                        if line and not line.startswith('#') and '=' in line:\n                            key, _ = line.split('=', 1)\n             
               existing_env_vars[key.strip()] = True\n            except Exception as e:\n                logger.warning(f\"[AgentInitializer] Failed to read .env file: {e}\")\n        \n        # Check which keys need migration\n        keys_to_migrate = {}\n        for config_key, env_key in key_mapping.items():\n            if env_key in existing_env_vars:\n                continue\n            value = conf().get(config_key, \"\")\n            if value and value.strip():\n                keys_to_migrate[env_key] = value.strip()\n        \n        # Write new keys\n        if keys_to_migrate:\n            try:\n                env_dir = os.path.dirname(env_file)\n                if not os.path.exists(env_dir):\n                    os.makedirs(env_dir, exist_ok=True)\n                if not os.path.exists(env_file):\n                    open(env_file, 'a').close()\n                \n                with open(env_file, 'a', encoding='utf-8') as f:\n                    f.write('\\n# Auto-migrated from config.json\\n')\n                    for key, value in keys_to_migrate.items():\n                        f.write(f'{key}={value}\\n')\n                        os.environ[key] = value\n                \n                logger.info(f\"[AgentInitializer] Migrated {len(keys_to_migrate)} API keys to .env: {list(keys_to_migrate.keys())}\")\n            except Exception as e:\n                logger.warning(f\"[AgentInitializer] Failed to migrate API keys: {e}\")\n\n    def _start_daily_flush_timer(self):\n        \"\"\"Start a background thread that flushes all agents' memory daily at 23:55.\"\"\"\n        if getattr(self.agent_bridge, '_daily_flush_started', False):\n            return\n        self.agent_bridge._daily_flush_started = True\n\n        import threading\n\n        def _daily_flush_loop():\n            while True:\n                try:\n                    now = datetime.datetime.now()\n                    target = now.replace(hour=23, minute=55, second=0, microsecond=0)\n                    if target <= now:\n                        target += datetime.timedelta(days=1)\n                    wait_seconds = (target - now).total_seconds()\n                    logger.info(f\"[DailyFlush] Next flush at {target.strftime('%Y-%m-%d %H:%M')} (in {wait_seconds/3600:.1f}h)\")\n                    time.sleep(wait_seconds)\n\n                    self._flush_all_agents()\n                except Exception as e:\n                    logger.warning(f\"[DailyFlush] Error in daily flush loop: {e}\")\n                    time.sleep(3600)\n\n        t = threading.Thread(target=_daily_flush_loop, daemon=True)\n        t.start()\n\n    def _flush_all_agents(self):\n        \"\"\"Flush memory for all active agent sessions.\"\"\"\n        agents = []\n        if self.agent_bridge.default_agent:\n            agents.append((\"default\", self.agent_bridge.default_agent))\n        for sid, agent in self.agent_bridge.agents.items():\n            agents.append((sid, agent))\n\n        if not agents:\n            return\n\n        flushed = 0\n        for label, agent in agents:\n            try:\n                if not agent.memory_manager:\n                    continue\n                with agent.messages_lock:\n                    messages = list(agent.messages)\n                if not messages:\n                    continue\n                result = agent.memory_manager.flush_manager.create_daily_summary(messages)\n                if result:\n                    flushed += 1\n            except Exception as e:\n 
               logger.warning(f\"[DailyFlush] Failed for session {label}: {e}\")\n\n        if flushed:\n            logger.info(f\"[DailyFlush] Flushed {flushed}/{len(agents)} agent session(s)\")\n"
  },
  {
    "path": "bridge/bridge.py",
    "content": "from models.bot_factory import create_bot\nfrom bridge.context import Context\nfrom bridge.reply import Reply\nfrom common import const\nfrom common.log import logger\nfrom common.singleton import singleton\nfrom config import conf\nfrom translate.factory import create_translator\nfrom voice.factory import create_voice\n\n\n@singleton\nclass Bridge(object):\n    def __init__(self):\n        self.btype = {\n            \"chat\": const.OPENAI,\n            \"voice_to_text\": conf().get(\"voice_to_text\", \"openai\"),\n            \"text_to_voice\": conf().get(\"text_to_voice\", \"google\"),\n            \"translate\": conf().get(\"translate\", \"baidu\"),\n        }\n        # 这边取配置的模型\n        bot_type = conf().get(\"bot_type\")\n        if bot_type:\n            self.btype[\"chat\"] = bot_type\n        else:\n            model_type = conf().get(\"model\") or const.GPT_41_MINI\n            \n            # Ensure model_type is string to prevent AttributeError when using startswith()\n            # This handles cases where numeric model names (e.g., \"1\") are parsed as integers from YAML\n            if not isinstance(model_type, str):\n                logger.warning(f\"[Bridge] model_type is not a string: {model_type} (type: {type(model_type).__name__}), converting to string\")\n                model_type = str(model_type)\n            \n            if model_type in [\"text-davinci-003\"]:\n                self.btype[\"chat\"] = const.OPEN_AI\n            if conf().get(\"use_azure_chatgpt\", False):\n                self.btype[\"chat\"] = const.CHATGPTONAZURE\n            if model_type in [\"wenxin\", \"wenxin-4\"]:\n                self.btype[\"chat\"] = const.BAIDU\n            if model_type in [\"xunfei\"]:\n                self.btype[\"chat\"] = const.XUNFEI\n            if model_type in [const.QWEN]:\n                self.btype[\"chat\"] = const.QWEN\n            if model_type in [const.QWEN_TURBO, const.QWEN_PLUS, const.QWEN_MAX]:\n                self.btype[\"chat\"] = const.QWEN_DASHSCOPE\n            # Support Qwen3 and other DashScope models\n            if model_type and (model_type.startswith(\"qwen\") or model_type.startswith(\"qwq\") or model_type.startswith(\"qvq\")):\n                self.btype[\"chat\"] = const.QWEN_DASHSCOPE\n            if model_type and model_type.startswith(\"gemini\"):\n                self.btype[\"chat\"] = const.GEMINI\n            if model_type and model_type.startswith(\"glm\"):\n                self.btype[\"chat\"] = const.ZHIPU_AI\n            if model_type and model_type.startswith(\"claude\"):\n                self.btype[\"chat\"] = const.CLAUDEAPI\n\n            if model_type in [const.MOONSHOT, \"moonshot-v1-8k\", \"moonshot-v1-32k\", \"moonshot-v1-128k\"]:\n                self.btype[\"chat\"] = const.MOONSHOT\n            if model_type and model_type.startswith(\"kimi\"):\n                self.btype[\"chat\"] = const.MOONSHOT\n\n            if model_type and model_type.startswith(\"doubao\"):\n                self.btype[\"chat\"] = const.DOUBAO\n\n            if model_type in [const.MODELSCOPE]:\n                self.btype[\"chat\"] = const.MODELSCOPE\n            \n            # MiniMax models\n            if model_type and (model_type in [\"abab6.5-chat\", \"abab6.5\"] or model_type.lower().startswith(\"minimax\")):\n                self.btype[\"chat\"] = const.MiniMax\n\n            if conf().get(\"use_linkai\") and conf().get(\"linkai_api_key\"):\n                self.btype[\"chat\"] = const.LINKAI\n                if 
not conf().get(\"voice_to_text\") or conf().get(\"voice_to_text\") in [\"openai\"]:\n                    self.btype[\"voice_to_text\"] = const.LINKAI\n                if not conf().get(\"text_to_voice\") or conf().get(\"text_to_voice\") in [\"openai\", const.TTS_1, const.TTS_1_HD]:\n                    self.btype[\"text_to_voice\"] = const.LINKAI\n\n        self.bots = {}\n        self.chat_bots = {}\n        self._agent_bridge = None\n\n    # 模型对应的接口\n    def get_bot(self, typename):\n        if self.bots.get(typename) is None:\n            logger.info(\"create bot {} for {}\".format(self.btype[typename], typename))\n            if typename == \"text_to_voice\":\n                self.bots[typename] = create_voice(self.btype[typename])\n            elif typename == \"voice_to_text\":\n                self.bots[typename] = create_voice(self.btype[typename])\n            elif typename == \"chat\":\n                self.bots[typename] = create_bot(self.btype[typename])\n            elif typename == \"translate\":\n                self.bots[typename] = create_translator(self.btype[typename])\n        return self.bots[typename]\n\n    def get_bot_type(self, typename):\n        return self.btype[typename]\n\n    def fetch_reply_content(self, query, context: Context) -> Reply:\n        return self.get_bot(\"chat\").reply(query, context)\n\n    def fetch_voice_to_text(self, voiceFile) -> Reply:\n        return self.get_bot(\"voice_to_text\").voiceToText(voiceFile)\n\n    def fetch_text_to_voice(self, text) -> Reply:\n        return self.get_bot(\"text_to_voice\").textToVoice(text)\n\n    def fetch_translate(self, text, from_lang=\"\", to_lang=\"en\") -> Reply:\n        return self.get_bot(\"translate\").translate(text, from_lang, to_lang)\n\n    def find_chat_bot(self, bot_type: str):\n        if self.chat_bots.get(bot_type) is None:\n            self.chat_bots[bot_type] = create_bot(bot_type)\n        return self.chat_bots.get(bot_type)\n\n    def reset_bot(self):\n        \"\"\"\n        重置bot路由\n        \"\"\"\n        self.__init__()\n\n    def get_agent_bridge(self):\n        \"\"\"\n        Get agent bridge for agent-based conversations\n        \"\"\"\n        if self._agent_bridge is None:\n            from bridge.agent_bridge import AgentBridge\n            self._agent_bridge = AgentBridge(self)\n        return self._agent_bridge\n\n    def fetch_agent_reply(self, query: str, context: Context = None,\n                          on_event=None, clear_history: bool = False) -> Reply:\n        \"\"\"\n        Use super agent to handle the query\n\n        Args:\n            query: User query\n            context: Context object\n            on_event: Event callback for streaming\n            clear_history: Whether to clear conversation history\n\n        Returns:\n            Reply object\n        \"\"\"\n        agent_bridge = self.get_agent_bridge()\n        return agent_bridge.agent_reply(query, context, on_event, clear_history)\n"
  },
  {
    "path": "bridge/context.py",
    "content": "# encoding:utf-8\n\nfrom enum import Enum\n\n\nclass ContextType(Enum):\n    TEXT = 1  # 文本消息\n    VOICE = 2  # 音频消息\n    IMAGE = 3  # 图片消息\n    FILE = 4  # 文件信息\n    VIDEO = 5  # 视频信息\n    SHARING = 6  # 分享信息\n\n    IMAGE_CREATE = 10  # 创建图片命令\n    ACCEPT_FRIEND = 19 # 同意好友请求\n    JOIN_GROUP = 20  # 加入群聊\n    PATPAT = 21  # 拍了拍\n    FUNCTION = 22  # 函数调用\n    EXIT_GROUP = 23 #退出\n\n\n    def __str__(self):\n        return self.name\n\n\nclass Context:\n    def __init__(self, type: ContextType = None, content=None, kwargs=dict()):\n        self.type = type\n        self.content = content\n        self.kwargs = kwargs\n\n    def __contains__(self, key):\n        if key == \"type\":\n            return self.type is not None\n        elif key == \"content\":\n            return self.content is not None\n        else:\n            return key in self.kwargs\n\n    def __getitem__(self, key):\n        if key == \"type\":\n            return self.type\n        elif key == \"content\":\n            return self.content\n        else:\n            return self.kwargs[key]\n\n    def get(self, key, default=None):\n        try:\n            return self[key]\n        except KeyError:\n            return default\n\n    def __setitem__(self, key, value):\n        if key == \"type\":\n            self.type = value\n        elif key == \"content\":\n            self.content = value\n        else:\n            self.kwargs[key] = value\n\n    def __delitem__(self, key):\n        if key == \"type\":\n            self.type = None\n        elif key == \"content\":\n            self.content = None\n        else:\n            del self.kwargs[key]\n\n    def __str__(self):\n        return \"Context(type={}, content={}, kwargs={})\".format(self.type, self.content, self.kwargs)\n"
  },
  {
    "path": "bridge/reply.py",
    "content": "# encoding:utf-8\n\nfrom enum import Enum\n\n\nclass ReplyType(Enum):\n    TEXT = 1  # 文本\n    VOICE = 2  # 音频文件\n    IMAGE = 3  # 图片文件\n    IMAGE_URL = 4  # 图片URL\n    VIDEO_URL = 5  # 视频URL\n    FILE = 6  # 文件\n    CARD = 7  # 微信名片，仅支持ntchat\n    INVITE_ROOM = 8  # 邀请好友进群\n    INFO = 9\n    ERROR = 10\n    TEXT_ = 11  # 强制文本\n    VIDEO = 12\n    MINIAPP = 13  # 小程序\n\n    def __str__(self):\n        return self.name\n\n\nclass Reply:\n    def __init__(self, type: ReplyType = None, content=None):\n        self.type = type\n        self.content = content\n\n    def __str__(self):\n        return \"Reply(type={}, content={})\".format(self.type, self.content)\n"
  },
  {
    "path": "channel/channel.py",
    "content": "\"\"\"\nMessage sending channel abstract class\n\"\"\"\n\nfrom bridge.bridge import Bridge\nfrom bridge.context import Context\nfrom bridge.reply import *\nfrom common.log import logger\nfrom config import conf\n\n\nclass Channel(object):\n    channel_type = \"\"\n    NOT_SUPPORT_REPLYTYPE = [ReplyType.VOICE, ReplyType.IMAGE]\n\n    def __init__(self):\n        import threading\n        self._startup_event = threading.Event()\n        self._startup_error = None\n        self.cloud_mode = False  # set to True by ChannelManager when running with cloud client\n\n    def startup(self):\n        \"\"\"\n        init channel\n        \"\"\"\n        raise NotImplementedError\n\n    def report_startup_success(self):\n        self._startup_error = None\n        self._startup_event.set()\n\n    def report_startup_error(self, error: str):\n        self._startup_error = error\n        self._startup_event.set()\n\n    def wait_startup(self, timeout: float = 3) -> (bool, str):\n        \"\"\"\n        Wait for channel startup result.\n        Returns (success: bool, error_msg: str).\n        \"\"\"\n        ready = self._startup_event.wait(timeout=timeout)\n        if not ready:\n            return True, \"\"\n        if self._startup_error:\n            return False, self._startup_error\n        return True, \"\"\n\n    def stop(self):\n        \"\"\"\n        stop channel gracefully, called before restart\n        \"\"\"\n        pass\n\n    def handle_text(self, msg):\n        \"\"\"\n        process received msg\n        :param msg: message object\n        \"\"\"\n        raise NotImplementedError\n\n    # 统一的发送函数，每个Channel自行实现，根据reply的type字段发送不同类型的消息\n    def send(self, reply: Reply, context: Context):\n        \"\"\"\n        send message to user\n        :param msg: message content\n        :param receiver: receiver channel account\n        :return:\n        \"\"\"\n        raise NotImplementedError\n\n    def build_reply_content(self, query, context: Context = None) -> Reply:\n        \"\"\"\n        Build reply content, using agent if enabled in config\n        \"\"\"\n        # Check if agent mode is enabled\n        use_agent = conf().get(\"agent\", False)\n\n        if use_agent:\n            try:\n                logger.info(\"[Channel] Using agent mode\")\n\n                # Add channel_type to context if not present\n                if context and \"channel_type\" not in context:\n                    context[\"channel_type\"] = self.channel_type\n\n                # Read on_event callback injected by the channel (e.g. web SSE)\n                on_event = context.get(\"on_event\") if context else None\n\n                # Use agent bridge to handle the query\n                return Bridge().fetch_agent_reply(\n                    query=query,\n                    context=context,\n                    on_event=on_event,\n                    clear_history=False\n                )\n            except Exception as e:\n                logger.error(f\"[Channel] Agent mode failed, fallback to normal mode: {e}\")\n                # Fallback to normal mode if agent fails\n                return Bridge().fetch_reply_content(query, context)\n        else:\n            # Normal mode\n            return Bridge().fetch_reply_content(query, context)\n\n    def build_voice_to_text(self, voice_file) -> Reply:\n        return Bridge().fetch_voice_to_text(voice_file)\n\n    def build_text_to_voice(self, text) -> Reply:\n        return Bridge().fetch_text_to_voice(text)\n"
  },
  {
    "path": "channel/channel_factory.py",
    "content": "\"\"\"\nchannel factory\n\"\"\"\nfrom common import const\nfrom .channel import Channel\n\n\ndef create_channel(channel_type) -> Channel:\n    \"\"\"\n    create a channel instance\n    :param channel_type: channel type code\n    :return: channel instance\n    \"\"\"\n    ch = Channel()\n    if channel_type == \"terminal\":\n        from channel.terminal.terminal_channel import TerminalChannel\n        ch = TerminalChannel()\n    elif channel_type == 'web':\n        from channel.web.web_channel import WebChannel\n        ch = WebChannel()\n    elif channel_type == \"wechatmp\":\n        from channel.wechatmp.wechatmp_channel import WechatMPChannel\n        ch = WechatMPChannel(passive_reply=True)\n    elif channel_type == \"wechatmp_service\":\n        from channel.wechatmp.wechatmp_channel import WechatMPChannel\n        ch = WechatMPChannel(passive_reply=False)\n    elif channel_type == \"wechatcom_app\":\n        from channel.wechatcom.wechatcomapp_channel import WechatComAppChannel\n        ch = WechatComAppChannel()\n    elif channel_type == const.FEISHU:\n        from channel.feishu.feishu_channel import FeiShuChanel\n        ch = FeiShuChanel()\n    elif channel_type == const.DINGTALK:\n        from channel.dingtalk.dingtalk_channel import DingTalkChanel\n        ch = DingTalkChanel()\n    elif channel_type == const.WECOM_BOT:\n        from channel.wecom_bot.wecom_bot_channel import WecomBotChannel\n        ch = WecomBotChannel()\n    elif channel_type == const.QQ:\n        from channel.qq.qq_channel import QQChannel\n        ch = QQChannel()\n    else:\n        raise RuntimeError\n    ch.channel_type = channel_type\n    return ch\n"
  },
  {
    "path": "channel/chat_channel.py",
    "content": "import os\nimport re\nimport threading\nimport time\nfrom asyncio import CancelledError\nfrom concurrent.futures import Future, ThreadPoolExecutor\n\nfrom bridge.context import *\nfrom bridge.reply import *\nfrom channel.channel import Channel\nfrom common.dequeue import Dequeue\nfrom common import memory\nfrom plugins import *\n\ntry:\n    from voice.audio_convert import any_to_wav\nexcept Exception as e:\n    pass\n\nhandler_pool = ThreadPoolExecutor(max_workers=8)  # 处理消息的线程池\n\n\n# 抽象类, 它包含了与消息通道无关的通用处理逻辑\nclass ChatChannel(Channel):\n    name = None  # 登录的用户名\n    user_id = None  # 登录的用户id\n\n    def __init__(self):\n        super().__init__()\n        # Instance-level attributes so each channel subclass has its own\n        # independent session queue and lock. Previously these were class-level,\n        # which caused contexts from one channel (e.g. Feishu) to be consumed\n        # by another channel's consume() thread (e.g. Web), leading to errors\n        # like \"No request_id found in context\".\n        self.futures = {}\n        self.sessions = {}\n        self.lock = threading.Lock()\n        _thread = threading.Thread(target=self.consume)\n        _thread.setDaemon(True)\n        _thread.start()\n\n    # 根据消息构造context，消息内容相关的触发项写在这里\n    def _compose_context(self, ctype: ContextType, content, **kwargs):\n        context = Context(ctype, content)\n        context.kwargs = kwargs\n        if \"channel_type\" not in context:\n            context[\"channel_type\"] = self.channel_type\n        if \"origin_ctype\" not in context:\n            context[\"origin_ctype\"] = ctype\n        # context首次传入时，receiver是None，根据类型设置receiver\n        first_in = \"receiver\" not in context\n        # 群名匹配过程，设置session_id和receiver\n        if first_in:  # context首次传入时，receiver是None，根据类型设置receiver\n            config = conf()\n            cmsg = context[\"msg\"]\n            user_data = conf().get_user_data(cmsg.from_user_id)\n            context[\"openai_api_key\"] = user_data.get(\"openai_api_key\")\n            context[\"gpt_model\"] = user_data.get(\"gpt_model\")\n            if context.get(\"isgroup\", False):\n                group_name = cmsg.other_user_nickname\n                group_id = cmsg.other_user_id\n\n                group_name_white_list = config.get(\"group_name_white_list\", [])\n                group_name_keyword_white_list = config.get(\"group_name_keyword_white_list\", [])\n                if any(\n                    [\n                        group_name in group_name_white_list,\n                        \"ALL_GROUP\" in group_name_white_list,\n                        check_contain(group_name, group_name_keyword_white_list),\n                    ]\n                ):\n                    # Check global group_shared_session config first\n                    group_shared_session = conf().get(\"group_shared_session\", True)\n                    if group_shared_session:\n                        # All users in the group share the same session\n                        session_id = group_id\n                    else:\n                        # Check group-specific whitelist (legacy behavior)\n                        group_chat_in_one_session = conf().get(\"group_chat_in_one_session\", [])\n                        session_id = cmsg.actual_user_id\n                        if any(\n                            [\n                                group_name in group_chat_in_one_session,\n                                \"ALL_GROUP\" in group_chat_in_one_session,\n        
                    ]\n                        ):\n                            session_id = group_id\n                else:\n                    logger.debug(f\"No need reply, groupName not in whitelist, group_name={group_name}\")\n                    return None\n                context[\"session_id\"] = session_id\n                context[\"receiver\"] = group_id\n            else:\n                context[\"session_id\"] = cmsg.other_user_id\n                context[\"receiver\"] = cmsg.other_user_id\n            e_context = PluginManager().emit_event(EventContext(Event.ON_RECEIVE_MESSAGE, {\"channel\": self, \"context\": context}))\n            context = e_context[\"context\"]\n            if e_context.is_pass() or context is None:\n                return context\n            if cmsg.from_user_id == self.user_id and not config.get(\"trigger_by_self\", True):\n                logger.debug(\"[chat_channel]self message skipped\")\n                return None\n\n        # 消息内容匹配过程，并处理content\n        if ctype == ContextType.TEXT:\n            if first_in and \"」\\n- - - - - - -\" in content:  # 初次匹配 过滤引用消息\n                logger.debug(content)\n                logger.debug(\"[chat_channel]reference query skipped\")\n                return None\n\n            nick_name_black_list = conf().get(\"nick_name_black_list\", [])\n            if context.get(\"isgroup\", False):  # 群聊\n                # 校验关键字\n                match_prefix = check_prefix(content, conf().get(\"group_chat_prefix\"))\n                match_contain = check_contain(content, conf().get(\"group_chat_keyword\"))\n                flag = False\n                if context[\"msg\"].to_user_id != context[\"msg\"].actual_user_id:\n                    if match_prefix is not None or match_contain is not None:\n                        flag = True\n                        if match_prefix:\n                            content = content.replace(match_prefix, \"\", 1).strip()\n                    if context[\"msg\"].is_at:\n                        nick_name = context[\"msg\"].actual_user_nickname\n                        if nick_name and nick_name in nick_name_black_list:\n                            # 黑名单过滤\n                            logger.warning(f\"[chat_channel] Nickname {nick_name} in In BlackList, ignore\")\n                            return None\n\n                        logger.info(\"[chat_channel]receive group at\")\n                        if not conf().get(\"group_at_off\", False):\n                            flag = True\n                        self.name = self.name if self.name is not None else \"\"  # 部分渠道self.name可能没有赋值\n                        pattern = f\"@{re.escape(self.name)}(\\u2005|\\u0020)\"\n                        subtract_res = re.sub(pattern, r\"\", content)\n                        if isinstance(context[\"msg\"].at_list, list):\n                            for at in context[\"msg\"].at_list:\n                                pattern = f\"@{re.escape(at)}(\\u2005|\\u0020)\"\n                                subtract_res = re.sub(pattern, r\"\", subtract_res)\n                        if subtract_res == content and context[\"msg\"].self_display_name:\n                            # 前缀移除后没有变化，使用群昵称再次移除\n                            pattern = f\"@{re.escape(context['msg'].self_display_name)}(\\u2005|\\u0020)\"\n                            subtract_res = re.sub(pattern, r\"\", content)\n                        content = subtract_res\n                if not flag:\n                    if context[\"origin_ctype\"] 
== ContextType.VOICE:\n                        logger.info(\"[chat_channel]receive group voice, but checkprefix didn't match\")\n                    return None\n            else:  # 单聊\n                nick_name = context[\"msg\"].from_user_nickname\n                if nick_name and nick_name in nick_name_black_list:\n                    # 黑名单过滤\n                    logger.warning(f\"[chat_channel] Nickname '{nick_name}' in black list, ignore\")\n                    return None\n\n                match_prefix = check_prefix(content, conf().get(\"single_chat_prefix\", [\"\"]))\n                if match_prefix is not None:  # 判断如果匹配到自定义前缀，则返回过滤掉前缀+空格后的内容\n                    content = content.replace(match_prefix, \"\", 1).strip()\n                elif context[\"origin_ctype\"] == ContextType.VOICE:  # 如果源消息是私聊的语音消息，允许不匹配前缀，放宽条件\n                    pass\n                else:\n                    logger.info(\"[chat_channel]receive single chat msg, but checkprefix didn't match\")\n                    return None\n            content = content.strip()\n            img_match_prefix = check_prefix(content, conf().get(\"image_create_prefix\", [\"\"]))\n            if img_match_prefix:\n                content = content.replace(img_match_prefix, \"\", 1)\n                context.type = ContextType.IMAGE_CREATE\n            else:\n                context.type = ContextType.TEXT\n            context.content = content.strip()\n            if \"desire_rtype\" not in context and conf().get(\"always_reply_voice\") and ReplyType.VOICE not in self.NOT_SUPPORT_REPLYTYPE:\n                context[\"desire_rtype\"] = ReplyType.VOICE\n        elif context.type == ContextType.VOICE:\n            if \"desire_rtype\" not in context and conf().get(\"voice_reply_voice\") and ReplyType.VOICE not in self.NOT_SUPPORT_REPLYTYPE:\n                context[\"desire_rtype\"] = ReplyType.VOICE\n        return context\n\n    def _handle(self, context: Context):\n        if context is None or not context.content:\n            return\n        logger.debug(\"[chat_channel] handling context: {}\".format(context))\n        # reply的构建步骤\n        reply = self._generate_reply(context)\n\n        logger.debug(\"[chat_channel] decorating reply: {}\".format(reply))\n\n        # reply的包装步骤\n        if reply and reply.content:\n            reply = self._decorate_reply(context, reply)\n\n            # reply的发送步骤\n            self._send_reply(context, reply)\n\n    def _generate_reply(self, context: Context, reply: Reply = None) -> Reply:\n        # 避免可变默认参数：默认的Reply()实例会在多次调用间共享\n        if reply is None:\n            reply = Reply()\n        e_context = PluginManager().emit_event(\n            EventContext(\n                Event.ON_HANDLE_CONTEXT,\n                {\"channel\": self, \"context\": context, \"reply\": reply},\n            )\n        )\n        reply = e_context[\"reply\"]\n        if not e_context.is_pass():\n            logger.debug(\"[chat_channel] type={}, content={}\".format(context.type, context.content))\n            if context.type == ContextType.TEXT or context.type == ContextType.IMAGE_CREATE:  # 文字和图片消息\n                context[\"channel\"] = e_context[\"channel\"]\n                reply = super().build_reply_content(context.content, context)\n            elif context.type == ContextType.VOICE:  # 语音消息\n                cmsg = context[\"msg\"]\n                cmsg.prepare()\n                file_path = context.content\n                wav_path = os.path.splitext(file_path)[0] + \".wav\"\n                try:\n                    any_to_wav(file_path, wav_path)\n                except Exception as e:  # 转换失败，直接使用mp3，对于某些api，mp3也可以识别\n                    logger.warning(\"[chat_channel]any to wav error, use raw path. \" + str(e))\n                    wav_path = file_path\n                # 语音识别\n                reply = super().build_voice_to_text(wav_path)\n                # 删除临时文件\n                try:\n                    os.remove(file_path)\n                    if wav_path != file_path:\n                        os.remove(wav_path)\n                except Exception as e:\n                    pass\n                    # logger.warning(\"[chat_channel]delete temp file error: \" + str(e))\n\n                if reply.type == ReplyType.TEXT:\n                    new_context = self._compose_context(ContextType.TEXT, reply.content, **context.kwargs)\n                    if new_context:\n                        reply = self._generate_reply(new_context)\n                    else:\n                        return\n            elif context.type == ContextType.IMAGE:  # 图片消息，当前仅做下载保存到本地的逻辑\n                memory.USER_IMAGE_CACHE[context[\"session_id\"]] = {\n                    \"path\": context.content,\n                    \"msg\": context.get(\"msg\")\n                }\n            elif context.type == ContextType.SHARING:  # 分享信息，当前无默认逻辑\n                pass\n            elif context.type == ContextType.FUNCTION or context.type == ContextType.FILE:  # 文件消息及函数调用等，当前无默认逻辑\n                pass\n            else:\n                logger.warning(\"[chat_channel] unknown context type: {}\".format(context.type))\n                return\n        return reply\n\n    def _decorate_reply(self, context: Context, reply: Reply) -> Reply:\n        if reply and reply.type:\n            e_context = PluginManager().emit_event(\n                EventContext(\n                    Event.ON_DECORATE_REPLY,\n                    {\"channel\": self, \"context\": context, \"reply\": reply},\n                )\n            )\n            reply = e_context[\"reply\"]\n            desire_rtype = context.get(\"desire_rtype\")\n            if not e_context.is_pass() and reply and reply.type:\n                if reply.type in self.NOT_SUPPORT_REPLYTYPE:\n                    logger.error(\"[chat_channel]reply type not support: \" + str(reply.type))\n                    # 先写入提示内容再覆盖type，否则提示里永远是ERROR而非原类型\n                    reply.content = \"不支持发送的消息类型: \" + str(reply.type)\n                    reply.type = ReplyType.ERROR\n\n                if reply.type == ReplyType.TEXT:\n                    reply_text = reply.content\n                    if desire_rtype == ReplyType.VOICE and ReplyType.VOICE not in self.NOT_SUPPORT_REPLYTYPE:\n                        reply = super().build_text_to_voice(reply.content)\n                        return self._decorate_reply(context, reply)\n                    if context.get(\"isgroup\", False):\n                        if not context.get(\"no_need_at\", False):\n                            reply_text = \"@\" + context[\"msg\"].actual_user_nickname + \"\\n\" + reply_text.strip()\n                        reply_text = conf().get(\"group_chat_reply_prefix\", \"\") + reply_text + conf().get(\"group_chat_reply_suffix\", \"\")\n                    else:\n                        reply_text = conf().get(\"single_chat_reply_prefix\", \"\") + reply_text + conf().get(\"single_chat_reply_suffix\", \"\")\n                    reply.content = reply_text\n                elif reply.type == ReplyType.ERROR or reply.type == ReplyType.INFO:\n                    reply.content = \"[\" + str(reply.type) + \"]\\n\" + reply.content\n                elif reply.type == 
ReplyType.IMAGE_URL or reply.type == ReplyType.VOICE or reply.type == ReplyType.IMAGE or reply.type == ReplyType.FILE or reply.type == ReplyType.VIDEO or reply.type == ReplyType.VIDEO_URL:\n                    pass\n                else:\n                    logger.error(\"[chat_channel] unknown reply type: {}\".format(reply.type))\n                    return\n            if desire_rtype and desire_rtype != reply.type and reply.type not in [ReplyType.ERROR, ReplyType.INFO]:\n                logger.warning(\"[chat_channel] desire_rtype: {}, but reply type: {}\".format(context.get(\"desire_rtype\"), reply.type))\n            return reply\n\n    def _send_reply(self, context: Context, reply: Reply):\n        if reply and reply.type:\n            e_context = PluginManager().emit_event(\n                EventContext(\n                    Event.ON_SEND_REPLY,\n                    {\"channel\": self, \"context\": context, \"reply\": reply},\n                )\n            )\n            reply = e_context[\"reply\"]\n            if not e_context.is_pass() and reply and reply.type:\n                logger.debug(\"[chat_channel] sending reply: {}, context: {}\".format(reply, context))\n                \n                # 如果是文本回复，尝试提取并发送图片\n                if reply.type == ReplyType.TEXT:\n                    self._extract_and_send_images(reply, context)\n                # 如果是图片回复但带有文本内容，先发文本再发图片\n                elif reply.type == ReplyType.IMAGE_URL and hasattr(reply, 'text_content') and reply.text_content:\n                    # 先发送文本\n                    text_reply = Reply(ReplyType.TEXT, reply.text_content)\n                    self._send(text_reply, context)\n                    # 短暂延迟后发送图片\n                    time.sleep(0.3)\n                    self._send(reply, context)\n                else:\n                    self._send(reply, context)\n    \n    def _extract_and_send_images(self, reply: Reply, context: Context):\n        \"\"\"\n        从文本回复中提取图片/视频URL并单独发送\n        支持格式：[图片: /path/to/image.png], [视频: /path/to/video.mp4], ![](url), <img src=\"url\">\n        最多发送5个媒体文件\n        \"\"\"\n        content = reply.content\n        media_items = []  # [(url, type), ...]\n        \n        # 正则提取各种格式的媒体URL\n        patterns = [\n            (r'\\[图片:\\s*([^\\]]+)\\]', 'image'),   # [图片: /path/to/image.png]\n            (r'\\[视频:\\s*([^\\]]+)\\]', 'video'),   # [视频: /path/to/video.mp4]\n            (r'!\\[.*?\\]\\(([^\\)]+)\\)', 'image'),   # ![alt](url) - 默认图片\n            (r'<img[^>]+src=[\"\\']([^\"\\']+)[\"\\']', 'image'),  # <img src=\"url\">\n            (r'<video[^>]+src=[\"\\']([^\"\\']+)[\"\\']', 'video'),  # <video src=\"url\">\n            (r'https?://[^\\s]+\\.(?:jpg|jpeg|png|gif|webp)', 'image'),  # 直接的图片URL\n            (r'https?://[^\\s]+\\.(?:mp4|avi|mov|wmv|flv)', 'video'),  # 直接的视频URL\n        ]\n        \n        for pattern, media_type in patterns:\n            matches = re.findall(pattern, content, re.IGNORECASE)\n            for match in matches:\n                media_items.append((match, media_type))\n        \n        # 去重（保持顺序）并限制最多5个\n        seen = set()\n        unique_items = []\n        for url, mtype in media_items:\n            if url not in seen:\n                seen.add(url)\n                unique_items.append((url, mtype))\n        media_items = unique_items[:5]\n        \n        if media_items:\n            logger.info(f\"[chat_channel] Extracted {len(media_items)} media item(s) from reply\")\n            \n            # 先发送文本（保持原文本不变）\n            
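# 示例（路径仅为示意）：回复 \"已生成 [图片: /tmp/cat.png]\" 会先整体作为文本发出，\n            # 随后 /tmp/cat.png 再作为独立的图片消息补发一次。\n            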
logger.info(f\"[chat_channel] Sending text content before media: {reply.content[:100]}...\")\n            self._send(reply, context)\n            logger.info(f\"[chat_channel] Text sent, now sending {len(media_items)} media item(s)\")\n            \n            # 然后逐个发送媒体文件\n            for i, (url, media_type) in enumerate(media_items):\n                try:\n                    # 判断是本地文件还是URL\n                    if url.startswith(('http://', 'https://')):\n                        # 网络资源\n                        if media_type == 'video':\n                            # 视频使用 FILE 类型发送\n                            media_reply = Reply(ReplyType.FILE, url)\n                            media_reply.file_name = os.path.basename(url)\n                        else:\n                            # 图片使用 IMAGE_URL 类型\n                            media_reply = Reply(ReplyType.IMAGE_URL, url)\n                    elif os.path.exists(url):\n                        # 本地文件\n                        if media_type == 'video':\n                            # 视频使用 FILE 类型，转换为 file:// URL\n                            media_reply = Reply(ReplyType.FILE, f\"file://{url}\")\n                            media_reply.file_name = os.path.basename(url)\n                        else:\n                            # 图片使用 IMAGE_URL 类型，转换为 file:// URL\n                            media_reply = Reply(ReplyType.IMAGE_URL, f\"file://{url}\")\n                    else:\n                        logger.warning(f\"[chat_channel] Media file not found or invalid URL: {url}\")\n                        continue\n                    \n                    # 发送媒体文件（添加小延迟避免频率限制）\n                    if i > 0:\n                        time.sleep(0.5)\n                    self._send(media_reply, context)\n                    logger.info(f\"[chat_channel] Sent {media_type} {i+1}/{len(media_items)}: {url[:50]}...\")\n                    \n                except Exception as e:\n                    logger.error(f\"[chat_channel] Failed to send {media_type} {url}: {e}\")\n        else:\n            # 没有媒体文件，正常发送文本\n                self._send(reply, context)\n\n    def _send(self, reply: Reply, context: Context, retry_cnt=0):\n        try:\n            self.send(reply, context)\n        except Exception as e:\n            logger.error(\"[chat_channel] sendMsg error: {}\".format(str(e)))\n            if isinstance(e, NotImplementedError):\n                return\n            logger.exception(e)\n            if retry_cnt < 2:\n                time.sleep(3 + 3 * retry_cnt)\n                self._send(reply, context, retry_cnt + 1)\n\n    def _success_callback(self, session_id, **kwargs):  # 线程正常结束时的回调函数\n        logger.debug(\"Worker return success, session_id = {}\".format(session_id))\n\n    def _fail_callback(self, session_id, exception, **kwargs):  # 线程异常结束时的回调函数\n        logger.exception(\"Worker return exception: {}\".format(exception))\n\n    def _thread_pool_callback(self, session_id, **kwargs):\n        def func(worker: Future):\n            try:\n                worker_exception = worker.exception()\n                if worker_exception:\n                    self._fail_callback(session_id, exception=worker_exception, **kwargs)\n                else:\n                    self._success_callback(session_id, **kwargs)\n            except CancelledError as e:\n                logger.info(\"Worker cancelled, session_id = {}\".format(session_id))\n            except Exception as e:\n                logger.exception(\"Worker raise exception: 
{}\".format(e))\n            with self.lock:\n                self.sessions[session_id][1].release()\n\n        return func\n\n    def produce(self, context: Context):\n        session_id = context[\"session_id\"]\n        with self.lock:\n            if session_id not in self.sessions:\n                self.sessions[session_id] = [\n                    Dequeue(),\n                    threading.BoundedSemaphore(conf().get(\"concurrency_in_session\", 1)),\n                ]\n            if context.type == ContextType.TEXT and context.content.startswith(\"#\"):\n                self.sessions[session_id][0].putleft(context)  # 优先处理管理命令\n            else:\n                self.sessions[session_id][0].put(context)\n\n    # 消费者函数，单独线程，用于从消息队列中取出消息并处理\n    def consume(self):\n        while True:\n            with self.lock:\n                session_ids = list(self.sessions.keys())\n            for session_id in session_ids:\n                with self.lock:\n                    context_queue, semaphore = self.sessions[session_id]\n                if semaphore.acquire(blocking=False):  # 等线程处理完毕才能删除\n                    if not context_queue.empty():\n                        context = context_queue.get()\n                        logger.debug(\"[chat_channel] consume context: {}\".format(context))\n                        future: Future = handler_pool.submit(self._handle, context)\n                        future.add_done_callback(self._thread_pool_callback(session_id, context=context))\n                        with self.lock:\n                            if session_id not in self.futures:\n                                self.futures[session_id] = []\n                            self.futures[session_id].append(future)\n                    elif semaphore._initial_value == semaphore._value + 1:  # 除了当前，没有任务再申请到信号量，说明所有任务都处理完毕\n                        with self.lock:\n                            self.futures[session_id] = [t for t in self.futures[session_id] if not t.done()]\n                            assert len(self.futures[session_id]) == 0, \"thread pool error\"\n                            del self.sessions[session_id]\n                    else:\n                        semaphore.release()\n            time.sleep(0.2)\n\n    # 取消session_id对应的所有任务，只能取消排队的消息和已提交线程池但未执行的任务\n    def cancel_session(self, session_id):\n        with self.lock:\n            if session_id in self.sessions:\n                for future in self.futures[session_id]:\n                    future.cancel()\n                cnt = self.sessions[session_id][0].qsize()\n                if cnt > 0:\n                    logger.info(\"Cancel {} messages in session {}\".format(cnt, session_id))\n                self.sessions[session_id][0] = Dequeue()\n\n    def cancel_all_session(self):\n        with self.lock:\n            for session_id in self.sessions:\n                for future in self.futures[session_id]:\n                    future.cancel()\n                cnt = self.sessions[session_id][0].qsize()\n                if cnt > 0:\n                    logger.info(\"Cancel {} messages in session {}\".format(cnt, session_id))\n                self.sessions[session_id][0] = Dequeue()\n\n\ndef check_prefix(content, prefix_list):\n    if not prefix_list:\n        return None\n    for prefix in prefix_list:\n        if content.startswith(prefix):\n            return prefix\n    return None\n\n\ndef check_contain(content, keyword_list):\n    if not keyword_list:\n        return None\n    for ky in keyword_list:\n        if content.find(ky) 
!= -1:\n            return True\n    return None\n"
  },
  {
    "path": "channel/chat_message.py",
    "content": "\"\"\"\nUnified chat message class for different channel implementations.\n\n填好必填项(群聊6个，非群聊8个)，即可接入ChatChannel，并支持插件，参考TerminalChannel\n\nChatMessage\nmsg_id: 消息id (必填)\ncreate_time: 消息创建时间\n\nctype: 消息类型 : ContextType (必填)\ncontent: 消息内容, 如果是声音/图片，这里是文件路径 (必填)\n\nfrom_user_id: 发送者id (必填)\nfrom_user_nickname: 发送者昵称\nto_user_id: 接收者id (必填)\nto_user_nickname: 接收者昵称\n\nother_user_id: 对方的id，如果你是发送者，那这个就是接收者id，如果你是接收者，那这个就是发送者id，如果是群消息，那这一直是群id (必填)\nother_user_nickname: 同上\n\nis_group: 是否是群消息 (群聊必填)\nis_at: 是否被at\n\n- (群消息时，一般会存在实际发送者，是群内某个成员的id和昵称，下列项仅在群消息时存在)\nactual_user_id: 实际发送者id (群聊必填)\nactual_user_nickname：实际发送者昵称\nself_display_name: 自身的展示名，设置群昵称时，该字段表示群昵称\n\n_prepare_fn: 准备函数，用于准备消息的内容，比如下载图片等,\n_prepared: 是否已经调用过准备函数\n_rawmsg: 原始消息对象\n\n\"\"\"\n\n\nclass ChatMessage(object):\n    msg_id = None\n    create_time = None\n\n    ctype = None\n    content = None\n\n    from_user_id = None\n    from_user_nickname = None\n    to_user_id = None\n    to_user_nickname = None\n    other_user_id = None\n    other_user_nickname = None\n    my_msg = False\n    self_display_name = None\n\n    is_group = False\n    is_at = False\n    actual_user_id = None\n    actual_user_nickname = None\n    at_list = None\n\n    _prepare_fn = None\n    _prepared = False\n    _rawmsg = None\n\n    def __init__(self, _rawmsg):\n        self._rawmsg = _rawmsg\n\n    def prepare(self):\n        if self._prepare_fn and not self._prepared:\n            self._prepared = True\n            self._prepare_fn()\n\n    def __str__(self):\n        return \"ChatMessage: id={}, create_time={}, ctype={}, content={}, from_user_id={}, from_user_nickname={}, to_user_id={}, to_user_nickname={}, other_user_id={}, other_user_nickname={}, is_group={}, is_at={}, actual_user_id={}, actual_user_nickname={}, at_list={}\".format(\n            self.msg_id,\n            self.create_time,\n            self.ctype,\n            self.content,\n            self.from_user_id,\n            self.from_user_nickname,\n            self.to_user_id,\n            self.to_user_nickname,\n            self.other_user_id,\n            self.other_user_nickname,\n            self.is_group,\n            self.is_at,\n            self.actual_user_id,\n            self.actual_user_nickname,\n            self.at_list\n        )\n"
  },
  {
    "path": "channel/dingtalk/dingtalk_channel.py",
    "content": "\"\"\"\n钉钉通道接入\n\n@author huiwen\n@Date 2023/11/28\n\"\"\"\nimport copy\nimport json\n# -*- coding=utf-8 -*-\nimport logging\nimport os\nimport time\nimport requests\n\nimport dingtalk_stream\nfrom dingtalk_stream import AckMessage\nfrom dingtalk_stream.card_replier import AICardReplier\nfrom dingtalk_stream.card_replier import AICardStatus\nfrom dingtalk_stream.card_replier import CardReplier\n\nfrom bridge.context import Context, ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom channel.chat_channel import ChatChannel\nfrom common.utils import expand_path\nfrom channel.dingtalk.dingtalk_message import DingTalkMessage\nfrom common.expired_dict import ExpiredDict\nfrom common.log import logger\nfrom common.singleton import singleton\nfrom common.time_check import time_checker\nfrom config import conf\n\n\nclass CustomAICardReplier(CardReplier):\n    def __init__(self, dingtalk_client, incoming_message):\n        super(AICardReplier, self).__init__(dingtalk_client, incoming_message)\n\n    def start(\n            self,\n            card_template_id: str,\n            card_data: dict,\n            recipients: list = None,\n            support_forward: bool = True,\n    ) -> str:\n        \"\"\"\n        AI卡片的创建接口\n        :param support_forward:\n        :param recipients:\n        :param card_template_id:\n        :param card_data:\n        :return:\n        \"\"\"\n        card_data_with_status = copy.deepcopy(card_data)\n        card_data_with_status[\"flowStatus\"] = AICardStatus.PROCESSING\n        return self.create_and_send_card(\n            card_template_id,\n            card_data_with_status,\n            at_sender=True,\n            at_all=False,\n            recipients=recipients,\n            support_forward=support_forward,\n        )\n\n\n# 对 AICardReplier 进行猴子补丁\nAICardReplier.start = CustomAICardReplier.start\n\n\ndef _check(func):\n    def wrapper(self, cmsg: DingTalkMessage):\n        msgId = cmsg.msg_id\n        if msgId in self.receivedMsgs:\n            logger.info(\"DingTalk message {} already received, ignore\".format(msgId))\n            return\n        self.receivedMsgs[msgId] = True\n        create_time = cmsg.create_time  # 消息时间戳\n        if conf().get(\"hot_reload\") == True and int(create_time) < int(time.time()) - 60:  # 跳过1分钟前的历史消息\n            logger.debug(\"[DingTalk] History message {} skipped\".format(msgId))\n            return\n        if cmsg.my_msg and not cmsg.is_group:\n            logger.debug(\"[DingTalk] My message {} skipped\".format(msgId))\n            return\n        return func(self, cmsg)\n\n    return wrapper\n\n\n@singleton\nclass DingTalkChanel(ChatChannel, dingtalk_stream.ChatbotHandler):\n    dingtalk_client_id = conf().get('dingtalk_client_id')\n    dingtalk_client_secret = conf().get('dingtalk_client_secret')\n\n    def setup_logger(self):\n        # Suppress verbose logs from dingtalk_stream SDK\n        logging.getLogger(\"dingtalk_stream\").setLevel(logging.WARNING)\n        return logging.getLogger(\"DingTalk\")\n\n    def __init__(self):\n        super().__init__()\n        super(dingtalk_stream.ChatbotHandler, self).__init__()\n        self.logger = self.setup_logger()\n        # 历史消息id暂存，用于幂等控制\n        self.receivedMsgs = ExpiredDict(conf().get(\"expires_in_seconds\", 3600))\n        self._stream_client = None\n        self._running = False\n        self._event_loop = None\n        logger.debug(\"[DingTalk] client_id={}, client_secret={} \".format(\n            self.dingtalk_client_id, 
self.dingtalk_client_secret))\n        # 无需群校验和前缀\n        conf()[\"group_name_white_list\"] = [\"ALL_GROUP\"]\n        # 单聊无需前缀\n        conf()[\"single_chat_prefix\"] = [\"\"]\n        # Access token cache\n        self._access_token = None\n        self._access_token_expires_at = 0\n        # Robot code cache (extracted from incoming messages)\n        self._robot_code = None\n\n    def _open_connection(self, client):\n        \"\"\"\n        Open a DingTalk stream connection directly, bypassing SDK's internal error-swallowing.\n        Returns (connection_dict, error_str). On success error_str is empty; on failure\n        connection_dict is None and error_str contains a human-readable message.\n        \"\"\"\n        try:\n            resp = requests.post(\n                \"https://api.dingtalk.com/v1.0/gateway/connections/open\",\n                headers={\"Content-Type\": \"application/json\", \"Accept\": \"application/json\"},\n                json={\n                    \"clientId\": client.credential.client_id,\n                    \"clientSecret\": client.credential.client_secret,\n                    \"subscriptions\": [{\"type\": \"CALLBACK\",\n                                       \"topic\": dingtalk_stream.chatbot.ChatbotMessage.TOPIC}],\n                    \"ua\": \"dingtalk-sdk-python/cow\",\n                    \"localIp\": \"\",\n                },\n                timeout=10,\n            )\n            body = resp.json()\n            if not resp.ok:\n                code = body.get(\"code\", resp.status_code)\n                message = body.get(\"message\", resp.reason)\n                return None, f\"open connection failed: [{code}] {message}\"\n            return body, \"\"\n        except Exception as e:\n            return None, f\"open connection failed: {e}\"\n\n    def startup(self):\n        import asyncio\n        self.dingtalk_client_id = conf().get('dingtalk_client_id')\n        self.dingtalk_client_secret = conf().get('dingtalk_client_secret')\n        self._running = True\n        credential = dingtalk_stream.Credential(self.dingtalk_client_id, self.dingtalk_client_secret)\n        client = dingtalk_stream.DingTalkStreamClient(credential)\n        self._stream_client = client\n        client.register_callback_handler(dingtalk_stream.chatbot.ChatbotMessage.TOPIC, self)\n        logger.info(\"[DingTalk] ✅ Stream client initialized, ready to receive messages\")\n\n        # Run the connection loop ourselves instead of delegating to client.start(),\n        # so we can get detailed error messages and respond to stop() quickly.\n        import urllib.parse as _urlparse\n        import websockets as _ws\n        import json as _json\n        client.pre_start()\n        _first_connect = True\n        while self._running:\n            # Open connection using our own request so we get detailed error info.\n            connection, err_msg = self._open_connection(client)\n\n            if connection is None:\n                if _first_connect:\n                    logger.warning(f\"[DingTalk] {err_msg}\")\n                    self.report_startup_error(err_msg)\n                    _first_connect = False\n                else:\n                    logger.warning(f\"[DingTalk] {err_msg}, retrying in 10s...\")\n\n                # Interruptible sleep: checks _running every 100ms.\n                for _ in range(100):\n                    if not self._running:\n                        break\n                    time.sleep(0.1)\n                continue\n\n           
 if _first_connect:\n                logger.info(\"[DingTalk] ✅ Connected to DingTalk stream\")\n                self.report_startup_success()\n                _first_connect = False\n            else:\n                logger.info(\"[DingTalk] Reconnected to DingTalk stream\")\n\n            # Run the WebSocket session in an asyncio loop.\n            uri = '%s?ticket=%s' % (\n                connection['endpoint'],\n                _urlparse.quote_plus(connection['ticket'])\n            )\n            loop = asyncio.new_event_loop()\n            asyncio.set_event_loop(loop)\n            self._event_loop = loop\n            try:\n                async def _session():\n                    async with _ws.connect(uri) as websocket:\n                        client.websocket = websocket\n                        async for raw_message in websocket:\n                            json_message = _json.loads(raw_message)\n                            result = await client.route_message(json_message)\n                            if result == dingtalk_stream.DingTalkStreamClient.TAG_DISCONNECT:\n                                break\n\n                loop.run_until_complete(_session())\n            except (KeyboardInterrupt, SystemExit):\n                logger.info(\"[DingTalk] Session loop received stop signal, exiting\")\n                break\n            except Exception as e:\n                if not self._running:\n                    break\n                logger.warning(f\"[DingTalk] Stream session error: {e}, reconnecting in 3s...\")\n                for _ in range(30):\n                    if not self._running:\n                        break\n                    time.sleep(0.1)\n            finally:\n                self._event_loop = None\n                try:\n                    loop.close()\n                except Exception:\n                    pass\n\n        logger.info(\"[DingTalk] Startup loop exited\")\n\n    def stop(self):\n        logger.info(\"[DingTalk] stop() called, setting _running=False\")\n        self._running = False\n        loop = self._event_loop\n        if loop and not loop.is_closed():\n            try:\n                loop.call_soon_threadsafe(loop.stop)\n                logger.info(\"[DingTalk] Sent stop signal to event loop\")\n            except Exception as e:\n                logger.warning(f\"[DingTalk] Error stopping event loop: {e}\")\n        self._stream_client = None\n        logger.info(\"[DingTalk] stop() completed\")\n    \n    def get_access_token(self):\n        \"\"\"\n        获取企业内部应用的 access_token\n        文档: https://open.dingtalk.com/document/orgapp/obtain-orgapp-token\n        \"\"\"\n        current_time = time.time()\n        \n        # 如果 token 还没过期，直接返回缓存的 token\n        if self._access_token and current_time < self._access_token_expires_at:\n            return self._access_token\n        \n        # 获取新的 access_token\n        url = \"https://api.dingtalk.com/v1.0/oauth2/accessToken\"\n        headers = {\"Content-Type\": \"application/json\"}\n        data = {\n            \"appKey\": self.dingtalk_client_id,\n            \"appSecret\": self.dingtalk_client_secret\n        }\n        \n        try:\n            response = requests.post(url, headers=headers, json=data, timeout=10)\n            result = response.json()\n            \n            if response.status_code == 200 and \"accessToken\" in result:\n                self._access_token = result[\"accessToken\"]\n                # Token 有效期为 2 小时，提前 5 分钟刷新\n                
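# 示例：expireIn 为 7200 秒时，过期时间点 = 当前时间 + 7200 - 300 = 6900 秒后\n                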
self._access_token_expires_at = current_time + result.get(\"expireIn\", 7200) - 300\n                logger.info(\"[DingTalk] Access token refreshed successfully\")\n                return self._access_token\n            else:\n                logger.error(f\"[DingTalk] Failed to get access token: {result}\")\n                return None\n        except Exception as e:\n            logger.error(f\"[DingTalk] Error getting access token: {e}\")\n            return None\n    \n    def send_single_message(self, user_id: str, content: str, robot_code: str) -> bool:\n        \"\"\"\n        Send message to single user (private chat)\n        API: https://open.dingtalk.com/document/orgapp/chatbots-send-one-on-one-chat-messages-in-batches\n        \"\"\"\n        access_token = self.get_access_token()\n        if not access_token:\n            logger.error(\"[DingTalk] Failed to send single message: Access token not available.\")\n            return False\n\n        if not robot_code:\n            logger.error(\"[DingTalk] Cannot send single message: robot_code is required\")\n            return False\n\n        url = \"https://api.dingtalk.com/v1.0/robot/oToMessages/batchSend\"\n        headers = {\n            \"x-acs-dingtalk-access-token\": access_token,\n            \"Content-Type\": \"application/json\"\n        }\n        data = {\n            \"msgParam\": json.dumps({\"content\": content}),\n            \"msgKey\": \"sampleText\",\n            \"userIds\": [user_id],\n            \"robotCode\": robot_code\n        }\n\n        logger.info(f\"[DingTalk] Sending single message to user {user_id} with robot_code {robot_code}\")\n        try:\n            response = requests.post(url, headers=headers, json=data, timeout=10)\n            result = response.json()\n            \n            if response.status_code == 200 and result.get(\"processQueryKey\"):\n                logger.info(f\"[DingTalk] Single message sent successfully to {user_id}\")\n                return True\n            else:\n                logger.error(f\"[DingTalk] Failed to send single message: {result}\")\n                return False\n        except Exception as e:\n            logger.error(f\"[DingTalk] Error sending single message: {e}\")\n            return False\n    \n    def send_group_message(self, conversation_id: str, content: str, robot_code: str = None):\n        \"\"\"\n        主动发送群消息\n        文档: https://open.dingtalk.com/document/orgapp/the-robot-sends-a-group-message\n        \n        Args:\n            conversation_id: 会话ID (openConversationId)\n            content: 消息内容\n            robot_code: 机器人编码，默认使用 dingtalk_client_id\n        \"\"\"\n        access_token = self.get_access_token()\n        if not access_token:\n            logger.error(\"[DingTalk] Cannot send group message: no access token\")\n            return False\n        \n        # Validate robot_code\n        if not robot_code:\n            logger.error(\"[DingTalk] Cannot send group message: robot_code is required\")\n            return False\n        \n        url = \"https://api.dingtalk.com/v1.0/robot/groupMessages/send\"\n        headers = {\n            \"x-acs-dingtalk-access-token\": access_token,\n            \"Content-Type\": \"application/json\"\n        }\n        data = {\n            \"msgParam\": json.dumps({\"content\": content}),\n            \"msgKey\": \"sampleText\",\n            \"openConversationId\": conversation_id,\n            \"robotCode\": robot_code\n        }\n        \n        try:\n            response = 
requests.post(url, headers=headers, json=data, timeout=10)\n            result = response.json()\n            \n            if response.status_code == 200:\n                logger.info(f\"[DingTalk] Group message sent successfully to {conversation_id}\")\n                return True\n            else:\n                logger.error(f\"[DingTalk] Failed to send group message: {result}\")\n                return False\n        except Exception as e:\n            logger.error(f\"[DingTalk] Error sending group message: {e}\")\n            return False\n    \n    def upload_media(self, file_path: str, media_type: str = \"image\") -> str:\n        \"\"\"\n        上传媒体文件到钉钉\n        \n        Args:\n            file_path: 本地文件路径或URL\n            media_type: 媒体类型 (image, video, voice, file)\n        \n        Returns:\n            media_id，如果上传失败返回 None\n        \"\"\"\n        access_token = self.get_access_token()\n        if not access_token:\n            logger.error(\"[DingTalk] Cannot upload media: no access token\")\n            return None\n        \n        # 处理 file:// URL\n        if file_path.startswith(\"file://\"):\n            file_path = file_path[7:]\n        \n        # 如果是 HTTP URL，先下载\n        if file_path.startswith(\"http://\") or file_path.startswith(\"https://\"):\n            try:\n                import uuid\n                response = requests.get(file_path, timeout=(5, 60))\n                if response.status_code != 200:\n                    logger.error(f\"[DingTalk] Failed to download file from URL: {file_path}\")\n                    return None\n                \n                # 保存到临时文件\n                file_name = os.path.basename(file_path) or f\"media_{uuid.uuid4()}\"\n                workspace_root = expand_path(conf().get(\"agent_workspace\", \"~/cow\"))\n                tmp_dir = os.path.join(workspace_root, \"tmp\")\n                os.makedirs(tmp_dir, exist_ok=True)\n                temp_file = os.path.join(tmp_dir, file_name)\n                \n                with open(temp_file, \"wb\") as f:\n                    f.write(response.content)\n                \n                file_path = temp_file\n                logger.info(f\"[DingTalk] Downloaded file to {file_path}\")\n            except Exception as e:\n                logger.error(f\"[DingTalk] Error downloading file: {e}\")\n                return None\n        \n        if not os.path.exists(file_path):\n            logger.error(f\"[DingTalk] File not found: {file_path}\")\n            return None\n        \n        # 上传到钉钉\n        # 钉钉上传媒体文件 API: https://open.dingtalk.com/document/orgapp/upload-media-files\n        url = \"https://oapi.dingtalk.com/media/upload\"\n        params = {\n            \"access_token\": access_token,\n            \"type\": media_type\n        }\n        \n        try:\n            with open(file_path, \"rb\") as f:\n                files = {\"media\": (os.path.basename(file_path), f)}\n                response = requests.post(url, params=params, files=files, timeout=(5, 60))\n                result = response.json()\n                \n                if result.get(\"errcode\") == 0:\n                    media_id = result.get(\"media_id\")\n                    logger.info(f\"[DingTalk] Media uploaded successfully, media_id={media_id}\")\n                    return media_id\n                else:\n                    logger.error(f\"[DingTalk] Failed to upload media: {result}\")\n                    return None\n        except Exception as e:\n            
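# 打开文件或上传请求异常统一按失败处理，返回 None 交由调用方降级\n            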
logger.error(f\"[DingTalk] Error uploading media: {e}\")\n            return None\n    \n    def send_image_with_media_id(self, access_token: str, media_id: str, incoming_message, is_group: bool) -> bool:\n        \"\"\"\n        发送图片消息（使用 media_id）\n        \n        Args:\n            access_token: 访问令牌\n            media_id: 媒体ID\n            incoming_message: 钉钉消息对象\n            is_group: 是否为群聊\n        \n        Returns:\n            是否发送成功\n        \"\"\"\n        headers = {\n            \"x-acs-dingtalk-access-token\": access_token,\n            'Content-Type': 'application/json'\n        }\n        \n        msg_param = {\n            \"photoURL\": media_id  # 钉钉图片消息使用 photoURL 字段\n        }\n        \n        body = {\n            \"robotCode\": incoming_message.robot_code,\n            \"msgKey\": \"sampleImageMsg\",\n            \"msgParam\": json.dumps(msg_param),\n        }\n        \n        if is_group:\n            # 群聊\n            url = \"https://api.dingtalk.com/v1.0/robot/groupMessages/send\"\n            body[\"openConversationId\"] = incoming_message.conversation_id\n        else:\n            # 单聊\n            url = \"https://api.dingtalk.com/v1.0/robot/oToMessages/batchSend\"\n            body[\"userIds\"] = [incoming_message.sender_staff_id]\n        \n        try:\n            response = requests.post(url=url, headers=headers, json=body, timeout=10)\n            result = response.json()\n            \n            logger.info(f\"[DingTalk] Image send result: {response.text}\")\n            \n            if response.status_code == 200:\n                return True\n            else:\n                logger.error(f\"[DingTalk] Send image error: {response.text}\")\n                return False\n        except Exception as e:\n            logger.error(f\"[DingTalk] Send image exception: {e}\")\n            return False\n\n    def send_image_message(self, receiver: str, media_id: str, is_group: bool, robot_code: str) -> bool:\n        \"\"\"\n        发送图片消息\n        \n        Args:\n            receiver: 接收者ID (user_id 或 conversation_id)\n            media_id: 媒体ID\n            is_group: 是否为群聊\n            robot_code: 机器人编码\n        \n        Returns:\n            是否发送成功\n        \"\"\"\n        access_token = self.get_access_token()\n        if not access_token:\n            logger.error(\"[DingTalk] Cannot send image: no access token\")\n            return False\n        \n        if not robot_code:\n            logger.error(\"[DingTalk] Cannot send image: robot_code is required\")\n            return False\n        \n        if is_group:\n            # 发送群聊图片\n            url = \"https://api.dingtalk.com/v1.0/robot/groupMessages/send\"\n            headers = {\n                \"x-acs-dingtalk-access-token\": access_token,\n                \"Content-Type\": \"application/json\"\n            }\n            data = {\n                \"msgParam\": json.dumps({\"mediaId\": media_id}),\n                \"msgKey\": \"sampleImageMsg\",\n                \"openConversationId\": receiver,\n                \"robotCode\": robot_code\n            }\n        else:\n            # 发送单聊图片\n            url = \"https://api.dingtalk.com/v1.0/robot/oToMessages/batchSend\"\n            headers = {\n                \"x-acs-dingtalk-access-token\": access_token,\n                \"Content-Type\": \"application/json\"\n            }\n            data = {\n                \"msgParam\": json.dumps({\"mediaId\": media_id}),\n                \"msgKey\": \"sampleImageMsg\",\n                
\"userIds\": [receiver],\n                \"robotCode\": robot_code\n            }\n        \n        try:\n            response = requests.post(url, headers=headers, json=data, timeout=10)\n            result = response.json()\n            \n            if response.status_code == 200:\n                logger.info(f\"[DingTalk] Image message sent successfully\")\n                return True\n            else:\n                logger.error(f\"[DingTalk] Failed to send image message: {result}\")\n                return False\n        except Exception as e:\n            logger.error(f\"[DingTalk] Error sending image message: {e}\")\n            return False\n    \n    def get_image_download_url(self, download_code: str) -> str:\n        \"\"\"\n        获取图片下载地址\n        返回一个特殊的 URL 格式：dingtalk://download/{robot_code}:{download_code}\n        后续会在 download_image_file 中使用新版 API 下载\n        \"\"\"\n        # 获取 robot_code\n        if not hasattr(self, '_robot_code_cache'):\n            self._robot_code_cache = None\n        \n        robot_code = self._robot_code_cache\n        \n        if not robot_code:\n            logger.error(\"[DingTalk] robot_code not available for image download\")\n            return None\n        \n        # 返回一个特殊的 URL，包含 robot_code 和 download_code\n        logger.info(f\"[DingTalk] Successfully got image download URL for code: {download_code}\")\n        return f\"dingtalk://download/{robot_code}:{download_code}\"\n\n    async def process(self, callback: dingtalk_stream.CallbackMessage):\n        try:\n            incoming_message = dingtalk_stream.ChatbotMessage.from_dict(callback.data)\n\n            # 缓存 robot_code，用于后续图片下载\n            if hasattr(incoming_message, 'robot_code'):\n                self._robot_code_cache = incoming_message.robot_code\n\n            # Filter out stale messages from before channel startup (offline backlog)\n            create_at = getattr(incoming_message, 'create_at', None)\n            if create_at:\n                msg_age_s = time.time() - int(create_at) / 1000\n                if msg_age_s > 60:\n                    logger.warning(f\"[DingTalk] stale msg filtered (age={msg_age_s:.0f}s), \"\n                                   f\"msg_id={getattr(incoming_message, 'message_id', 'N/A')}\")\n                    return AckMessage.STATUS_OK, 'OK'\n\n            image_download_handler = self\n            dingtalk_msg = DingTalkMessage(incoming_message, image_download_handler)\n\n            if dingtalk_msg.is_group:\n                self.handle_group(dingtalk_msg)\n            else:\n                self.handle_single(dingtalk_msg)\n            return AckMessage.STATUS_OK, 'OK'\n        except Exception as e:\n            logger.error(f\"[DingTalk] process error: {e}\", exc_info=True)\n            return AckMessage.STATUS_SYSTEM_EXCEPTION, 'ERROR'\n\n    @time_checker\n    @_check\n    def handle_single(self, cmsg: DingTalkMessage):\n        # 处理单聊消息\n        if cmsg.ctype == ContextType.VOICE:\n            logger.debug(\"[DingTalk]receive voice msg: {}\".format(cmsg.content))\n        elif cmsg.ctype == ContextType.IMAGE:\n            logger.debug(\"[DingTalk]receive image msg: {}\".format(cmsg.content))\n        elif cmsg.ctype == ContextType.IMAGE_CREATE:\n            logger.debug(\"[DingTalk]receive image create msg: {}\".format(cmsg.content))\n        elif cmsg.ctype == ContextType.PATPAT:\n            logger.debug(\"[DingTalk]receive patpat msg: {}\".format(cmsg.content))\n        elif cmsg.ctype == ContextType.TEXT:\n            
logger.debug(\"[DingTalk]receive text msg: {}\".format(cmsg.content))\n        else:\n            logger.debug(\"[DingTalk]receive other msg: {}\".format(cmsg.content))\n        \n        # 处理文件缓存逻辑\n        from channel.file_cache import get_file_cache\n        file_cache = get_file_cache()\n        \n        # 单聊的 session_id 就是 sender_id\n        session_id = cmsg.from_user_id\n        \n        # 如果是单张图片消息，缓存起来\n        if cmsg.ctype == ContextType.IMAGE:\n            if hasattr(cmsg, 'image_path') and cmsg.image_path:\n                file_cache.add(session_id, cmsg.image_path, file_type='image')\n                logger.info(f\"[DingTalk] Image cached for session {session_id}, waiting for user query...\")\n            # 单张图片不直接处理，等待用户提问\n            return\n        \n        # 如果是文本消息，检查是否有缓存的文件\n        if cmsg.ctype == ContextType.TEXT:\n            cached_files = file_cache.get(session_id)\n            if cached_files:\n                # 将缓存的文件附加到文本消息中\n                file_refs = []\n                for file_info in cached_files:\n                    file_path = file_info['path']\n                    file_type = file_info['type']\n                    if file_type == 'image':\n                        file_refs.append(f\"[图片: {file_path}]\")\n                    elif file_type == 'video':\n                        file_refs.append(f\"[视频: {file_path}]\")\n                    else:\n                        file_refs.append(f\"[文件: {file_path}]\")\n                \n                cmsg.content = cmsg.content + \"\\n\" + \"\\n\".join(file_refs)\n                logger.info(f\"[DingTalk] Attached {len(cached_files)} cached file(s) to user query\")\n                # 清除缓存\n                file_cache.clear(session_id)\n        \n        context = self._compose_context(cmsg.ctype, cmsg.content, isgroup=False, msg=cmsg)\n        if context:\n            self.produce(context)\n\n\n    @time_checker\n    @_check\n    def handle_group(self, cmsg: DingTalkMessage):\n        # 处理群聊消息\n        if cmsg.ctype == ContextType.VOICE:\n            logger.debug(\"[DingTalk]receive voice msg: {}\".format(cmsg.content))\n        elif cmsg.ctype == ContextType.IMAGE:\n            logger.debug(\"[DingTalk]receive image msg: {}\".format(cmsg.content))\n        elif cmsg.ctype == ContextType.IMAGE_CREATE:\n            logger.debug(\"[DingTalk]receive image create msg: {}\".format(cmsg.content))\n        elif cmsg.ctype == ContextType.PATPAT:\n            logger.debug(\"[DingTalk]receive patpat msg: {}\".format(cmsg.content))\n        elif cmsg.ctype == ContextType.TEXT:\n            logger.debug(\"[DingTalk]receive text msg: {}\".format(cmsg.content))\n        else:\n            logger.debug(\"[DingTalk]receive other msg: {}\".format(cmsg.content))\n        \n        # 处理文件缓存逻辑\n        from channel.file_cache import get_file_cache\n        file_cache = get_file_cache()\n        \n        # 群聊的 session_id\n        if conf().get(\"group_shared_session\", True):\n            session_id = cmsg.other_user_id  # conversation_id\n        else:\n            session_id = cmsg.from_user_id + \"_\" + cmsg.other_user_id\n        \n        # 如果是单张图片消息，缓存起来\n        if cmsg.ctype == ContextType.IMAGE:\n            if hasattr(cmsg, 'image_path') and cmsg.image_path:\n                file_cache.add(session_id, cmsg.image_path, file_type='image')\n                logger.info(f\"[DingTalk] Image cached for session {session_id}, waiting for user query...\")\n            # 单张图片不直接处理，等待用户提问\n            return\n        \n        
# 如果是文本消息，检查是否有缓存的文件\n        if cmsg.ctype == ContextType.TEXT:\n            cached_files = file_cache.get(session_id)\n            if cached_files:\n                # 将缓存的文件附加到文本消息中\n                file_refs = []\n                for file_info in cached_files:\n                    file_path = file_info['path']\n                    file_type = file_info['type']\n                    if file_type == 'image':\n                        file_refs.append(f\"[图片: {file_path}]\")\n                    elif file_type == 'video':\n                        file_refs.append(f\"[视频: {file_path}]\")\n                    else:\n                        file_refs.append(f\"[文件: {file_path}]\")\n                \n                cmsg.content = cmsg.content + \"\\n\" + \"\\n\".join(file_refs)\n                logger.info(f\"[DingTalk] Attached {len(cached_files)} cached file(s) to user query\")\n                # 清除缓存\n                file_cache.clear(session_id)\n        \n        context = self._compose_context(cmsg.ctype, cmsg.content, isgroup=True, msg=cmsg)\n        # _compose_context可能返回None（如未命中白名单），需先判空再设置no_need_at\n        if context:\n            context['no_need_at'] = True\n            self.produce(context)\n\n\n    def send(self, reply: Reply, context: Context):\n        logger.debug(f\"[DingTalk] send() called with reply.type={reply.type}, content_length={len(str(reply.content))}\")\n        receiver = context[\"receiver\"]\n        \n        # Check if msg exists (for scheduled tasks, msg might be None)\n        msg = context.kwargs.get('msg')\n        if msg is None:\n            # 定时任务场景：使用主动发送 API\n            is_group = context.get(\"isgroup\", False)\n            logger.info(f\"[DingTalk] Sending scheduled task message to {receiver} (is_group={is_group})\")\n            \n            # 使用缓存的 robot_code 或配置的值\n            robot_code = self._robot_code or conf().get(\"dingtalk_robot_code\")\n            logger.info(f\"[DingTalk] Using robot_code: {robot_code}, cached: {self._robot_code}, config: {conf().get('dingtalk_robot_code')}\")\n            \n            if not robot_code:\n                logger.error(f\"[DingTalk] Cannot send scheduled task: robot_code not available. 
Please send at least one message to the bot first, or configure dingtalk_robot_code in config.json\")\n                return\n            \n            # 根据是否群聊选择不同的 API\n            if is_group:\n                success = self.send_group_message(receiver, reply.content, robot_code)\n            else:\n                # 单聊场景：尝试从 context 中获取 dingtalk_sender_staff_id\n                sender_staff_id = context.get(\"dingtalk_sender_staff_id\")\n                if not sender_staff_id:\n                    logger.error(f\"[DingTalk] Cannot send single chat scheduled message: sender_staff_id not available in context\")\n                    return\n                \n                logger.info(f\"[DingTalk] Sending single message to staff_id: {sender_staff_id}\")\n                success = self.send_single_message(sender_staff_id, reply.content, robot_code)\n            \n            if not success:\n                logger.error(f\"[DingTalk] Failed to send scheduled task message\")\n            return\n        \n        # 从正常消息中提取并缓存 robot_code\n        if hasattr(msg, 'robot_code'):\n            robot_code = msg.robot_code\n            if robot_code and robot_code != self._robot_code:\n                self._robot_code = robot_code\n                logger.debug(f\"[DingTalk] Cached robot_code: {robot_code}\")\n        \n        isgroup = msg.is_group\n        incoming_message = msg.incoming_message\n        robot_code = self._robot_code or conf().get(\"dingtalk_robot_code\")\n        \n        # 处理图片和视频发送\n        if reply.type == ReplyType.IMAGE_URL:\n            logger.info(f\"[DingTalk] Sending image: {reply.content}\")\n            \n            # 如果有附加的文本内容，先发送文本\n            if hasattr(reply, 'text_content') and reply.text_content:\n                self.reply_text(reply.text_content, incoming_message)\n                import time\n                time.sleep(0.3)  # 短暂延迟，确保文本先到达\n            \n            media_id = self.upload_media(reply.content, media_type=\"image\")\n            if media_id:\n                # 使用主动发送 API 发送图片\n                access_token = self.get_access_token()\n                if access_token:\n                    success = self.send_image_with_media_id(\n                        access_token,\n                        media_id,\n                        incoming_message,\n                        isgroup\n                    )\n                    if not success:\n                        logger.error(\"[DingTalk] Failed to send image message\")\n                        self.reply_text(\"抱歉，图片发送失败\", incoming_message)\n                else:\n                    logger.error(\"[DingTalk] Cannot get access token\")\n                    self.reply_text(\"抱歉，图片发送失败（无法获取token）\", incoming_message)\n            else:\n                logger.error(\"[DingTalk] Failed to upload image\")\n                self.reply_text(\"抱歉，图片上传失败\", incoming_message)\n            return\n        \n        elif reply.type == ReplyType.FILE:\n            # 如果有附加的文本内容，先发送文本\n            if hasattr(reply, 'text_content') and reply.text_content:\n                self.reply_text(reply.text_content, incoming_message)\n                import time\n                time.sleep(0.3)  # 短暂延迟，确保文本先到达\n            \n            # 判断是否为视频文件\n            file_path = reply.content\n            if file_path.startswith(\"file://\"):\n                file_path = file_path[7:]\n            \n            is_video = file_path.lower().endswith(('.mp4', '.avi', '.mov', '.wmv', '.flv'))\n            \n            
access_token = self.get_access_token()\n            if not access_token:\n                logger.error(\"[DingTalk] Cannot get access token\")\n                self.reply_text(\"抱歉，文件发送失败（无法获取token）\", incoming_message)\n                return\n            \n            if is_video:\n                logger.info(f\"[DingTalk] Sending video: {reply.content}\")\n                media_id = self.upload_media(reply.content, media_type=\"video\")\n                if media_id:\n                    # 发送视频消息\n                    msg_param = {\n                        \"duration\": \"30\",  # TODO: 获取实际视频时长\n                        \"videoMediaId\": media_id,\n                        \"videoType\": \"mp4\",\n                        \"height\": \"400\",\n                        \"width\": \"600\",\n                    }\n                    success = self._send_file_message(\n                        access_token,\n                        incoming_message,\n                        \"sampleVideo\",\n                        msg_param,\n                        isgroup\n                    )\n                    if not success:\n                        self.reply_text(\"抱歉，视频发送失败\", incoming_message)\n                else:\n                    logger.error(\"[DingTalk] Failed to upload video\")\n                    self.reply_text(\"抱歉，视频上传失败\", incoming_message)\n            else:\n                # 其他文件类型\n                logger.info(f\"[DingTalk] Sending file: {reply.content}\")\n                media_id = self.upload_media(reply.content, media_type=\"file\")\n                if media_id:\n                    file_name = os.path.basename(file_path)\n                    file_base, file_extension = os.path.splitext(file_name)\n                    msg_param = {\n                        \"mediaId\": media_id,\n                        \"fileName\": file_name,\n                        \"fileType\": file_extension[1:] if file_extension else \"file\"\n                    }\n                    success = self._send_file_message(\n                        access_token,\n                        incoming_message,\n                        \"sampleFile\",\n                        msg_param,\n                        isgroup\n                    )\n                    if not success:\n                        self.reply_text(\"抱歉，文件发送失败\", incoming_message)\n                else:\n                    logger.error(\"[DingTalk] Failed to upload file\")\n                    self.reply_text(\"抱歉，文件上传失败\", incoming_message)\n            return\n        \n        # 处理文本消息\n        elif reply.type == ReplyType.TEXT:\n            logger.info(f\"[DingTalk] Sending text message, length={len(reply.content)}\")\n            if conf().get(\"dingtalk_card_enabled\"):\n                logger.info(\"[Dingtalk] sendMsg={}, receiver={}\".format(reply, receiver))\n                def reply_with_text():\n                    self.reply_text(reply.content, incoming_message)\n                def reply_with_at_text():\n                    self.reply_text(\"📢 您有一条新的消息，请查看。\", incoming_message)\n                def reply_with_ai_markdown():\n                    button_list, markdown_content = self.generate_button_markdown_content(context, reply)\n                    self.reply_ai_markdown_button(incoming_message, markdown_content, button_list, \"\", \"📌 内容由AI生成\", \"\",[incoming_message.sender_staff_id])\n\n                if reply.type in [ReplyType.IMAGE_URL, ReplyType.IMAGE, ReplyType.TEXT]:\n                    if isgroup:\n                   
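     # Group chat: send the AI markdown card first, then a separate @ text so group members get a notification.\n                   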
     reply_with_ai_markdown()\n                        reply_with_at_text()\n                    else:\n                        reply_with_ai_markdown()\n                else:\n                    # 暂不支持其它类型消息回复\n                    reply_with_text()\n            else:\n                self.reply_text(reply.content, incoming_message)\n            return\n    \n    def _send_file_message(self, access_token: str, incoming_message, msg_key: str, msg_param: dict, is_group: bool) -> bool:\n        \"\"\"\n        发送文件/视频消息的通用方法\n        \n        Args:\n            access_token: 访问令牌\n            incoming_message: 钉钉消息对象\n            msg_key: 消息类型 (sampleFile, sampleVideo, sampleAudio)\n            msg_param: 消息参数\n            is_group: 是否为群聊\n        \n        Returns:\n            是否发送成功\n        \"\"\"\n        headers = {\n            \"x-acs-dingtalk-access-token\": access_token,\n            'Content-Type': 'application/json'\n        }\n        \n        body = {\n            \"robotCode\": incoming_message.robot_code,\n            \"msgKey\": msg_key,\n            \"msgParam\": json.dumps(msg_param),\n        }\n        \n        if is_group:\n            # 群聊\n            url = \"https://api.dingtalk.com/v1.0/robot/groupMessages/send\"\n            body[\"openConversationId\"] = incoming_message.conversation_id\n        else:\n            # 单聊\n            url = \"https://api.dingtalk.com/v1.0/robot/oToMessages/batchSend\"\n            body[\"userIds\"] = [incoming_message.sender_staff_id]\n        \n        try:\n            response = requests.post(url=url, headers=headers, json=body, timeout=10)\n            result = response.json()\n            \n            logger.info(f\"[DingTalk] File send result: {response.text}\")\n            \n            if response.status_code == 200:\n                return True\n            else:\n                logger.error(f\"[DingTalk] Send file error: {response.text}\")\n                return False\n        except Exception as e:\n            logger.error(f\"[DingTalk] Send file exception: {e}\")\n            return False\n\n    def generate_button_markdown_content(self, context, reply):\n        image_url = context.kwargs.get(\"image_url\")\n        promptEn = context.kwargs.get(\"promptEn\")\n        reply_text = reply.content\n        button_list = []\n        markdown_content = f\"\"\"\n{reply.content}\n                                \"\"\"\n        if image_url is not None and promptEn is not None:\n            button_list = [\n                {\"text\": \"查看原图\", \"url\": image_url, \"iosUrl\": image_url, \"color\": \"blue\"}\n            ]\n            markdown_content = f\"\"\"\n{promptEn}\n\n![\"图片\"]({image_url})\n\n{reply_text}\n\n                                \"\"\"\n        logger.debug(f\"[Dingtalk] generate_button_markdown_content, button_list={button_list} , markdown_content={markdown_content}\")\n\n        return button_list, markdown_content\n"
  },
  {
    "path": "channel/dingtalk/dingtalk_message.py",
    "content": "import os\nimport re\n\nimport requests\nfrom dingtalk_stream import ChatbotMessage\n\nfrom bridge.context import ContextType\nfrom channel.chat_message import ChatMessage\n# -*- coding=utf-8 -*-\nfrom common.log import logger\nfrom common.tmp_dir import TmpDir\nfrom common.utils import expand_path\nfrom config import conf\n\n\nclass DingTalkMessage(ChatMessage):\n    def __init__(self, event: ChatbotMessage, image_download_handler):\n        super().__init__(event)\n        self.image_download_handler = image_download_handler\n        self.msg_id = event.message_id\n        self.message_type = event.message_type\n        self.incoming_message = event\n        self.sender_staff_id = event.sender_staff_id\n        self.other_user_id = event.conversation_id\n        self.create_time = event.create_at\n        self.image_content = event.image_content\n        self.rich_text_content = event.rich_text_content\n        self.robot_code = event.robot_code  # 机器人编码\n        if event.conversation_type == \"1\":\n            self.is_group = False\n        else:\n            self.is_group = True\n\n        if self.message_type == \"text\":\n            self.ctype = ContextType.TEXT\n\n            self.content = event.text.content.strip()\n        elif self.message_type == \"audio\":\n            # 钉钉支持直接识别语音，所以此处将直接提取文字，当文字处理\n            self.content = event.extensions['content']['recognition'].strip()\n            self.ctype = ContextType.TEXT\n        elif (self.message_type == 'picture') or (self.message_type == 'richText'):\n            # 钉钉图片类型或富文本类型消息处理\n            image_list = event.get_image_list()\n            \n            if self.message_type == 'picture' and len(image_list) > 0:\n                # 单张图片消息：下载到工作空间，用于文件缓存\n                self.ctype = ContextType.IMAGE\n                download_code = image_list[0]\n                download_url = image_download_handler.get_image_download_url(download_code)\n                \n                # 下载到工作空间 tmp 目录\n                workspace_root = expand_path(conf().get(\"agent_workspace\", \"~/cow\"))\n                tmp_dir = os.path.join(workspace_root, \"tmp\")\n                os.makedirs(tmp_dir, exist_ok=True)\n                \n                image_path = download_image_file(download_url, tmp_dir)\n                if image_path:\n                    self.content = image_path\n                    self.image_path = image_path  # 保存图片路径用于缓存\n                    logger.info(f\"[DingTalk] Downloaded single image to {image_path}\")\n                else:\n                    self.content = \"[图片下载失败]\"\n                    self.image_path = None\n            \n            elif self.message_type == 'richText' and len(image_list) > 0:\n                # 富文本消息：下载所有图片并附加到文本中\n                self.ctype = ContextType.TEXT\n                \n                # 下载到工作空间 tmp 目录\n                workspace_root = expand_path(conf().get(\"agent_workspace\", \"~/cow\"))\n                tmp_dir = os.path.join(workspace_root, \"tmp\")\n                os.makedirs(tmp_dir, exist_ok=True)\n                \n                # 提取富文本中的文本内容\n                text_content = \"\"\n                if self.rich_text_content:\n                    # rich_text_content 是一个 RichTextContent 对象，需要从中提取文本\n                    text_list = event.get_text_list()\n                    if text_list:\n                        text_content = \"\".join(text_list).strip()\n                \n                # 下载所有图片\n                image_paths = []\n                for 
download_code in image_list:\n                    download_url = image_download_handler.get_image_download_url(download_code)\n                    image_path = download_image_file(download_url, tmp_dir)\n                    if image_path:\n                        image_paths.append(image_path)\n                \n                # 构建消息内容：文本 + 图片路径\n                content_parts = []\n                if text_content:\n                    content_parts.append(text_content)\n                for img_path in image_paths:\n                    content_parts.append(f\"[图片: {img_path}]\")\n                \n                self.content = \"\\n\".join(content_parts) if content_parts else \"[富文本消息]\"\n                logger.info(f\"[DingTalk] Received richText with {len(image_paths)} image(s): {self.content}\")\n            else:\n                self.ctype = ContextType.IMAGE\n                self.content = \"[未找到图片]\"\n                logger.debug(f\"[DingTalk] messageType: {self.message_type}, imageList isEmpty\")\n\n        if self.is_group:\n            self.from_user_id = event.conversation_id\n            self.actual_user_id = event.sender_id\n            self.is_at = True\n        else:\n            self.from_user_id = event.sender_id\n            self.actual_user_id = event.sender_id\n        self.to_user_id = event.chatbot_user_id\n        self.other_user_nickname = event.conversation_title\n\n\ndef download_image_file(image_url, temp_dir):\n    \"\"\"\n    下载图片文件\n    支持两种方式：\n    1. 普通 HTTP(S) URL\n    2. 钉钉 downloadCode: dingtalk://download/{download_code}\n    \"\"\"\n    # 检查临时目录是否存在，如果不存在则创建\n    if not os.path.exists(temp_dir):\n        os.makedirs(temp_dir)\n    \n    # 处理钉钉 downloadCode\n    if image_url.startswith(\"dingtalk://download/\"):\n        download_code = image_url.replace(\"dingtalk://download/\", \"\")\n        logger.info(f\"[DingTalk] Downloading image with downloadCode: {download_code[:20]}...\")\n        \n        # 需要从外部传入 access_token，这里先用一个临时方案\n        # 从 config 获取 dingtalk_client_id 和 dingtalk_client_secret\n        from config import conf\n        client_id = conf().get(\"dingtalk_client_id\")\n        client_secret = conf().get(\"dingtalk_client_secret\")\n        \n        if not client_id or not client_secret:\n            logger.error(\"[DingTalk] Missing dingtalk_client_id or dingtalk_client_secret\")\n            return None\n        \n        # 解析 robot_code 和 download_code\n        parts = download_code.split(\":\", 1)\n        if len(parts) != 2:\n            logger.error(f\"[DingTalk] Invalid download_code format (expected robot_code:download_code): {download_code[:50]}\")\n            return None\n        \n        robot_code, actual_download_code = parts\n        \n        # 获取 access_token（使用新版 API）\n        token_url = \"https://api.dingtalk.com/v1.0/oauth2/accessToken\"\n        token_headers = {\n            \"Content-Type\": \"application/json\"\n        }\n        token_body = {\n            \"appKey\": client_id,\n            \"appSecret\": client_secret\n        }\n        \n        try:\n            token_response = requests.post(token_url, json=token_body, headers=token_headers, timeout=10)\n            \n            if token_response.status_code == 200:\n                token_data = token_response.json()\n                access_token = token_data.get(\"accessToken\")\n                \n                if not access_token:\n                    logger.error(f\"[DingTalk] Failed to get access token: {token_data}\")\n                    
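# HTTP 200 but no accessToken in the body, most likely invalid appKey/appSecret; abort the download.\n                    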
return None\n                \n                # 获取下载 URL（使用新版 API）\n                download_api_url = \"https://api.dingtalk.com/v1.0/robot/messageFiles/download\"\n                download_headers = {\n                    \"x-acs-dingtalk-access-token\": access_token,\n                    \"Content-Type\": \"application/json\"\n                }\n                download_body = {\n                    \"downloadCode\": actual_download_code,\n                    \"robotCode\": robot_code\n                }\n                \n                download_response = requests.post(download_api_url, json=download_body, headers=download_headers, timeout=10)\n                \n                if download_response.status_code == 200:\n                    download_data = download_response.json()\n                    download_url = download_data.get(\"downloadUrl\")\n                    \n                    if not download_url:\n                        logger.error(f\"[DingTalk] No downloadUrl in response: {download_data}\")\n                        return None\n                    \n                    # 从 downloadUrl 下载实际图片\n                    image_response = requests.get(download_url, stream=True, timeout=60)\n                    \n                    if image_response.status_code == 200:\n                        # 生成文件名（使用 download_code 的 hash，避免特殊字符）\n                        import hashlib\n                        file_hash = hashlib.md5(actual_download_code.encode()).hexdigest()[:16]\n                        file_name = f\"{file_hash}.png\"\n                        file_path = os.path.join(temp_dir, file_name)\n                        \n                        with open(file_path, 'wb') as file:\n                            file.write(image_response.content)\n                        \n                        logger.info(f\"[DingTalk] Image downloaded successfully: {file_path}\")\n                        return file_path\n                    else:\n                        logger.error(f\"[DingTalk] Failed to download image from URL: {image_response.status_code}\")\n                        return None\n                else:\n                    logger.error(f\"[DingTalk] Failed to get download URL: {download_response.status_code}, {download_response.text}\")\n                    return None\n            else:\n                logger.error(f\"[DingTalk] Failed to get access token: {token_response.status_code}, {token_response.text}\")\n                return None\n        except Exception as e:\n            logger.error(f\"[DingTalk] Exception downloading image: {e}\")\n            import traceback\n            logger.error(traceback.format_exc())\n            return None\n    \n    # 普通 HTTP(S) URL\n    else:\n        headers = {\n            'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36'\n        }\n        \n        try:\n            response = requests.get(image_url, headers=headers, stream=True, timeout=60 * 5)\n            if response.status_code == 200:\n                # 生成文件名\n                file_name = image_url.split(\"/\")[-1].split(\"?\")[0]\n                \n                # 将文件保存到临时目录\n                file_path = os.path.join(temp_dir, file_name)\n                with open(file_path, 'wb') as file:\n                    file.write(response.content)\n                return file_path\n            else:\n                logger.info(f\"[Dingtalk] Failed to download image file, 
{response.content}\")\n                return None\n        except Exception as e:\n            logger.error(f\"[Dingtalk] Exception downloading image: {e}\")\n            return None\n"
  },
  {
    "path": "channel/feishu/README.md",
    "content": "# 飞书Channel使用说明\n\n飞书Channel支持两种事件接收模式，可以根据部署环境灵活选择。\n\n## 模式对比\n\n| 模式 | 适用场景 | 优点 | 缺点 |\n|------|---------|------|------|\n| **webhook** | 生产环境 | 稳定可靠，官方推荐 | 需要公网IP或域名 |\n| **websocket** | 本地开发 | 无需公网IP，开发便捷 | 需要额外依赖 |\n\n## 配置说明\n\n### 基础配置\n\n在 `config.json` 中添加以下配置:\n\n```json\n{\n  \"channel_type\": \"feishu\",\n  \"feishu_app_id\": \"cli_xxxxx\",\n  \"feishu_app_secret\": \"your_app_secret\",\n  \"feishu_token\": \"your_verification_token\",\n  \"feishu_bot_name\": \"你的机器人名称\",\n  \"feishu_event_mode\": \"webhook\",\n  \"feishu_port\": 9891\n}\n```\n\n### 配置项说明\n\n- `feishu_app_id`: 飞书应用的App ID\n- `feishu_app_secret`: 飞书应用的App Secret\n- `feishu_token`: 事件订阅的Verification Token\n- `feishu_bot_name`: 机器人名称(用于群聊@判断)\n- `feishu_event_mode`: 事件接收模式，可选值:\n  - `\"websocket\"`: 长连接模式(默认)\n  - `\"webhook\"`: HTTP服务器模式\n- `feishu_port`: webhook模式下的HTTP服务端口(默认9891)\n\n## 模式一: Webhook模式(推荐生产环境)\n\n### 1. 配置\n\n```json\n{\n  \"feishu_event_mode\": \"webhook\",\n  \"feishu_port\": 9891\n}\n```\n\n### 2. 启动服务\n\n```bash\npython3 app.py\n```\n\n服务将在 `http://0.0.0.0:9891` 启动。\n\n### 3. 配置飞书应用\n\n1. 登录[飞书开放平台](https://open.feishu.cn/)\n2. 进入应用详情 -> 事件订阅\n3. 选择 **将事件发送至开发者服务器**\n4. 填写请求地址: `http://your-domain:9891/`\n5. 添加事件: `im.message.receive_v1` (接收消息v2.0)\n6. 保存配置\n\n### 4. 注意事项\n\n- 需要有公网IP或域名\n- 确保防火墙开放对应端口\n- 建议使用HTTPS(需要配置反向代理)\n\n## 模式二: WebSocket模式(推荐本地开发)\n\n### 1. 安装依赖\n\n```bash\npip install lark-oapi\n```\n\n### 2. 配置\n\n```json\n{\n  \"feishu_event_mode\": \"websocket\"\n}\n```\n\n### 3. 启动服务\n\n```bash\npython3 app.py\n```\n\n程序将自动建立与飞书开放平台的长连接。\n\n### 4. 配置飞书应用\n\n1. 登录[飞书开放平台](https://open.feishu.cn/)\n2. 进入应用详情 -> 事件订阅\n3. 选择 **使用长连接接收事件**\n4. 添加事件: `im.message.receive_v1` (接收消息v2.0)\n5. 保存配置\n\n### 5. 注意事项\n\n- 无需公网IP\n- 需要能访问公网(建立WebSocket连接)\n- 每个应用最多50个连接\n- 集群模式下消息随机分发到一个客户端\n\n## 平滑迁移\n\n从webhook模式切换到websocket模式(或反向切换):\n\n1. 修改 `config.json` 中的 `feishu_event_mode`\n2. 如果切换到websocket模式，安装 `lark-oapi` 依赖\n3. 重启服务\n4. 在飞书开放平台修改事件订阅方式\n\n**重要**: 同一时间只能使用一种模式，否则会导致消息重复接收。\n\n## 消息去重机制\n\n两种模式都使用相同的消息去重机制:\n\n- 使用 `ExpiredDict` 存储已处理的消息ID\n- 过期时间: 7.1小时\n- 确保消息不会重复处理\n\n## 故障排查\n\n### WebSocket模式连接失败\n\n```\n[FeiShu] lark_oapi not installed\n```\n\n**解决**: 安装依赖 `pip install lark-oapi`\n\n### SSL证书验证失败\n\n```\n[Lark][ERROR] connect failed, err:[SSL:CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain\n```\n\n**原因**: 网络环境中存在自签名证书或SSL中间人代理(如企业代理、VPN等)\n\n**解决**: 程序会自动检测SSL证书验证失败，并自动重试禁用证书验证的连接。无需手动配置。\n\n当遇到证书错误时，日志会显示：\n```\n[FeiShu] SSL certificate verification disabled due to certificate error. This may happen when using corporate proxy or self-signed certificates.\n```\n\n这是正常现象，程序会自动处理并继续运行。\n\n### Webhook模式端口被占用\n\n```\nAddress already in use\n```\n\n**解决**: 修改 `feishu_port` 配置或关闭占用端口的进程\n\n### 收不到消息\n\n1. 检查飞书应用的事件订阅配置\n2. 确认已添加 `im.message.receive_v1` 事件\n3. 检查应用权限: 需要 `im:message` 权限\n4. 查看日志中的错误信息\n\n## 开发建议\n\n- **本地开发**: 使用websocket模式，快速迭代\n- **测试环境**: 可以使用webhook模式 + 内网穿透工具(如ngrok)\n- **生产环境**: 使用webhook模式，配置正式域名和HTTPS\n\n## 参考文档\n\n- [飞书开放平台 - 事件订阅](https://open.feishu.cn/document/ukTMukTMukTM/uUTNz4SN1MjL1UzM)\n- [飞书SDK - Python](https://github.com/larksuite/oapi-sdk-python)\n"
  },
  {
    "path": "channel/feishu/feishu_channel.py",
    "content": "\"\"\"\n飞书通道接入\n\n支持两种事件接收模式:\n1. webhook模式: 通过HTTP服务器接收事件(需要公网IP)\n2. websocket模式: 通过长连接接收事件(本地开发友好)\n\n通过配置项 feishu_event_mode 选择模式: \"webhook\" 或 \"websocket\"\n\n@author Saboteur7\n@Date 2023/11/19\n\"\"\"\n\nimport importlib.util\nimport json\nimport logging\nimport os\nimport ssl\nimport threading\n# -*- coding=utf-8 -*-\nimport uuid\n\nimport requests\nimport web\n\nfrom bridge.context import Context\nfrom bridge.context import ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom channel.chat_channel import ChatChannel, check_prefix\nfrom channel.feishu.feishu_message import FeishuMessage\nfrom common import utils\nfrom common.expired_dict import ExpiredDict\nfrom common.log import logger\nfrom common.singleton import singleton\nfrom config import conf\n\n# Suppress verbose logs from Lark SDK\nlogging.getLogger(\"Lark\").setLevel(logging.WARNING)\n\nURL_VERIFICATION = \"url_verification\"\n\n# Lazy-check for lark_oapi SDK availability without importing it at module level.\n# The full `import lark_oapi` pulls in 10k+ files and takes 4-10s, so we defer\n# the actual import to _startup_websocket() where it is needed.\nLARK_SDK_AVAILABLE = importlib.util.find_spec(\"lark_oapi\") is not None\nlark = None  # will be populated on first use via _ensure_lark_imported()\n\n\ndef _ensure_lark_imported():\n    \"\"\"Import lark_oapi on first use (takes 4-10s due to 10k+ source files).\"\"\"\n    global lark\n    if lark is None:\n        import lark_oapi as _lark\n        lark = _lark\n    return lark\n\n\n@singleton\nclass FeiShuChanel(ChatChannel):\n    feishu_app_id = conf().get('feishu_app_id')\n    feishu_app_secret = conf().get('feishu_app_secret')\n    feishu_token = conf().get('feishu_token')\n    feishu_event_mode = conf().get('feishu_event_mode', 'websocket')  # webhook 或 websocket\n\n    def __init__(self):\n        super().__init__()\n        # 历史消息id暂存，用于幂等控制\n        self.receivedMsgs = ExpiredDict(60 * 60 * 7.1)\n        self._http_server = None\n        self._ws_client = None\n        self._ws_thread = None\n        self._bot_open_id = None  # cached bot open_id for @-mention matching\n        logger.debug(\"[FeiShu] app_id={}, app_secret={}, verification_token={}, event_mode={}\".format(\n            self.feishu_app_id, self.feishu_app_secret, self.feishu_token, self.feishu_event_mode))\n        # 无需群校验和前缀\n        conf()[\"group_name_white_list\"] = [\"ALL_GROUP\"]\n        conf()[\"single_chat_prefix\"] = [\"\"]\n\n        # 验证配置\n        if self.feishu_event_mode == 'websocket' and not LARK_SDK_AVAILABLE:\n            logger.error(\"[FeiShu] websocket mode requires lark_oapi. 
Please install: pip install lark-oapi\")\n            raise Exception(\"lark_oapi not installed\")\n\n    def startup(self):\n        self.feishu_app_id = conf().get('feishu_app_id')\n        self.feishu_app_secret = conf().get('feishu_app_secret')\n        self.feishu_token = conf().get('feishu_token')\n        self.feishu_event_mode = conf().get('feishu_event_mode', 'websocket')\n        self._fetch_bot_open_id()\n        if self.feishu_event_mode == 'websocket':\n            self._startup_websocket()\n        else:\n            self._startup_webhook()\n\n    def _fetch_bot_open_id(self):\n        \"\"\"Fetch the bot's own open_id via API so we can match @-mentions without feishu_bot_name.\"\"\"\n        try:\n            access_token = self.fetch_access_token()\n            if not access_token:\n                logger.warning(\"[FeiShu] Cannot fetch bot info: no access_token\")\n                return\n            headers = {\"Authorization\": \"Bearer \" + access_token}\n            resp = requests.get(\"https://open.feishu.cn/open-apis/bot/v3/info/\", headers=headers, timeout=5)\n            if resp.status_code == 200:\n                data = resp.json()\n                if data.get(\"code\") == 0:\n                    self._bot_open_id = data.get(\"bot\", {}).get(\"open_id\")\n                    logger.info(f\"[FeiShu] Bot open_id fetched: {self._bot_open_id}\")\n                else:\n                    logger.warning(f\"[FeiShu] Fetch bot info failed: code={data.get('code')}, msg={data.get('msg')}\")\n        except Exception as e:\n            logger.warning(f\"[FeiShu] Fetch bot open_id error: {e}\")\n\n    def stop(self):\n        import ctypes\n        logger.info(\"[FeiShu] stop() called\")\n        ws_client = self._ws_client\n        self._ws_client = None\n        ws_thread = self._ws_thread\n        self._ws_thread = None\n        # Interrupt the ws thread first so its blocking start() unblocks\n        if ws_thread and ws_thread.is_alive():\n            try:\n                tid = ws_thread.ident\n                if tid:\n                    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(\n                        ctypes.c_ulong(tid), ctypes.py_object(SystemExit)\n                    )\n                    if res == 1:\n                        logger.info(\"[FeiShu] Interrupted ws thread via ctypes\")\n                    elif res > 1:\n                        ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_ulong(tid), None)\n            except Exception as e:\n                logger.warning(f\"[FeiShu] Error interrupting ws thread: {e}\")\n        # lark.ws.Client has no stop() method; thread interruption above is sufficient\n        if self._http_server:\n            try:\n                self._http_server.stop()\n                logger.info(\"[FeiShu] HTTP server stopped\")\n            except Exception as e:\n                logger.warning(f\"[FeiShu] Error stopping HTTP server: {e}\")\n            self._http_server = None\n        logger.info(\"[FeiShu] stop() completed\")\n\n    def _startup_webhook(self):\n        \"\"\"启动HTTP服务器接收事件(webhook模式)\"\"\"\n        logger.debug(\"[FeiShu] Starting in webhook mode...\")\n        urls = (\n            '/', 'channel.feishu.feishu_channel.FeishuController'\n        )\n        app = web.application(urls, globals(), autoreload=False)\n        port = conf().get(\"feishu_port\", 9891)\n        func = web.httpserver.StaticMiddleware(app.wsgifunc())\n        func = web.httpserver.LogMiddleware(func)\n        server = 
web.httpserver.WSGIServer((\"0.0.0.0\", port), func)\n        self._http_server = server\n        try:\n            server.start()\n        except (KeyboardInterrupt, SystemExit):\n            server.stop()\n\n    def _startup_websocket(self):\n        \"\"\"启动长连接接收事件(websocket模式)\"\"\"\n        _ensure_lark_imported()\n        logger.debug(\"[FeiShu] Starting in websocket mode...\")\n\n        # 创建事件处理器\n        def handle_message_event(data: lark.im.v1.P2ImMessageReceiveV1) -> None:\n            \"\"\"处理接收消息事件 v2.0\"\"\"\n            try:\n                event_dict = json.loads(lark.JSON.marshal(data))\n                event = event_dict.get(\"event\", {})\n                msg = event.get(\"message\", {})\n\n                # Skip group messages that don't @-mention the bot (reduce log noise)\n                if msg.get(\"chat_type\") == \"group\" and not msg.get(\"mentions\") and msg.get(\"message_type\") == \"text\":\n                    return\n\n                logger.debug(f\"[FeiShu] websocket receive event: {lark.JSON.marshal(data, indent=2)}\")\n\n                # 处理消息\n                self._handle_message_event(event)\n\n            except Exception as e:\n                logger.error(f\"[FeiShu] websocket handle message error: {e}\", exc_info=True)\n\n        # 构建事件分发器\n        event_handler = lark.EventDispatcherHandler.builder(\"\", \"\") \\\n            .register_p2_im_message_receive_v1(handle_message_event) \\\n            .build()\n\n        def start_client_with_retry():\n            \"\"\"Run ws client in this thread with its own event loop to avoid conflicts.\"\"\"\n            import asyncio\n            import ssl as ssl_module\n            original_create_default_context = ssl_module.create_default_context\n\n            def create_unverified_context(*args, **kwargs):\n                context = original_create_default_context(*args, **kwargs)\n                context.check_hostname = False\n                context.verify_mode = ssl.CERT_NONE\n                return context\n\n            # lark_oapi.ws.client captures the event loop at module-import time as a module-\n            # level global variable.  
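(The SDK stores this loop in a module global literally named loop, which the patch below overwrites.) 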
When a previous ws thread is force-killed via ctypes its\n            # loop may still be marked as \"running\", which causes the next ws_client.start()\n            # call (in this new thread) to raise \"This event loop is already running\".\n            # Fix: replace the module-level loop with a brand-new, idle loop before starting.\n            loop = asyncio.new_event_loop()\n            asyncio.set_event_loop(loop)\n            try:\n                import lark_oapi.ws.client as _lark_ws_client_mod\n                _lark_ws_client_mod.loop = loop\n            except Exception:\n                pass\n\n            startup_error = None\n            for attempt in range(2):\n                try:\n                    if attempt == 1:\n                        logger.warning(\"[FeiShu] Retrying with SSL verification disabled...\")\n                        ssl_module.create_default_context = create_unverified_context\n                        ssl_module._create_unverified_context = create_unverified_context\n\n                    ws_client = lark.ws.Client(\n                        self.feishu_app_id,\n                        self.feishu_app_secret,\n                        event_handler=event_handler,\n                        log_level=lark.LogLevel.WARNING\n                    )\n                    self._ws_client = ws_client\n                    logger.debug(\"[FeiShu] Websocket client starting...\")\n                    ws_client.start()\n                    break\n\n                except (SystemExit, KeyboardInterrupt):\n                    logger.info(\"[FeiShu] Websocket thread received stop signal\")\n                    break\n                except Exception as e:\n                    error_msg = str(e)\n                    is_ssl_error = (\"CERTIFICATE_VERIFY_FAILED\" in error_msg\n                                    or \"certificate verify failed\" in error_msg.lower())\n                    if is_ssl_error and attempt == 0:\n                        logger.warning(f\"[FeiShu] SSL error: {error_msg}, retrying...\")\n                        continue\n                    logger.error(f\"[FeiShu] Websocket client error: {e}\", exc_info=True)\n                    startup_error = error_msg\n                    ssl_module.create_default_context = original_create_default_context\n                    break\n            if startup_error:\n                self.report_startup_error(startup_error)\n            try:\n                loop.close()\n            except Exception:\n                pass\n            logger.info(\"[FeiShu] Websocket thread exited\")\n\n        ws_thread = threading.Thread(target=start_client_with_retry, daemon=True)\n        self._ws_thread = ws_thread\n        ws_thread.start()\n        logger.info(\"[FeiShu] ✅ Websocket thread started, ready to receive messages\")\n        ws_thread.join()\n\n    def _is_mention_bot(self, mentions: list) -> bool:\n        \"\"\"Check whether any mention in the list refers to this bot.\n\n        Priority:\n        1. Match by open_id (obtained from /bot/v3/info at startup, no config needed)\n        2. Fallback to feishu_bot_name config for backward compatibility\n        3. 
If neither is available, assume the first mention is the bot (Feishu only\n           delivers group messages that @-mention the bot, so this is usually correct)\n        \"\"\"\n        if self._bot_open_id:\n            return any(\n                m.get(\"id\", {}).get(\"open_id\") == self._bot_open_id\n                for m in mentions\n            )\n        bot_name = conf().get(\"feishu_bot_name\")\n        if bot_name:\n            return any(m.get(\"name\") == bot_name for m in mentions)\n        # Feishu event subscription only delivers messages that @-mention the bot,\n        # so reaching here means the bot was indeed mentioned.\n        return True\n\n    def _handle_message_event(self, event: dict):\n        \"\"\"\n        处理消息事件的核心逻辑\n        webhook和websocket模式共用此方法\n        \"\"\"\n        if not event.get(\"message\") or not event.get(\"sender\"):\n            logger.warning(f\"[FeiShu] invalid message, event={event}\")\n            return\n\n        msg = event.get(\"message\")\n\n        # 幂等判断\n        msg_id = msg.get(\"message_id\")\n        if self.receivedMsgs.get(msg_id):\n            logger.warning(f\"[FeiShu] repeat msg filtered, msg_id={msg_id}\")\n            return\n        self.receivedMsgs[msg_id] = True\n\n        # Filter out stale messages from before channel startup (offline backlog)\n        import time as _time\n        create_time_ms = msg.get(\"create_time\")\n        if create_time_ms:\n            msg_age_s = _time.time() - int(create_time_ms) / 1000\n            if msg_age_s > 60:\n                logger.warning(f\"[FeiShu] stale msg filtered (age={msg_age_s:.0f}s), msg_id={msg_id}\")\n                return\n\n        is_group = False\n        chat_type = msg.get(\"chat_type\")\n\n        if chat_type == \"group\":\n            if not msg.get(\"mentions\") and msg.get(\"message_type\") == \"text\":\n                # 群聊中未@不响应\n                return\n            if msg.get(\"mentions\") and msg.get(\"message_type\") == \"text\":\n                if not self._is_mention_bot(msg.get(\"mentions\")):\n                    return\n            # 群聊\n            is_group = True\n            receive_id_type = \"chat_id\"\n        elif chat_type == \"p2p\":\n            receive_id_type = \"open_id\"\n        else:\n            logger.warning(\"[FeiShu] message ignore\")\n            return\n\n        # 构造飞书消息对象\n        feishu_msg = FeishuMessage(event, is_group=is_group, access_token=self.fetch_access_token())\n        if not feishu_msg:\n            return\n\n        # 处理文件缓存逻辑\n        from channel.file_cache import get_file_cache\n        file_cache = get_file_cache()\n\n        # 获取 session_id（用于缓存关联）\n        if is_group:\n            if conf().get(\"group_shared_session\", True):\n                session_id = msg.get(\"chat_id\")  # 群共享会话\n            else:\n                session_id = feishu_msg.from_user_id + \"_\" + msg.get(\"chat_id\")\n        else:\n            session_id = feishu_msg.from_user_id\n\n        # 如果是单张图片消息，缓存起来\n        if feishu_msg.ctype == ContextType.IMAGE:\n            if hasattr(feishu_msg, 'image_path') and feishu_msg.image_path:\n                file_cache.add(session_id, feishu_msg.image_path, file_type='image')\n                logger.info(f\"[FeiShu] Image cached for session {session_id}, waiting for user query...\")\n            # 单张图片不直接处理，等待用户提问\n            return\n\n        # 如果是文本消息，检查是否有缓存的文件\n        if feishu_msg.ctype == ContextType.TEXT:\n            cached_files = file_cache.get(session_id)\n            
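# Attach any files stashed by an earlier media-only message (see the IMAGE branch above),\n            # rendered as \"[图片: path]\" / \"[文件: path]\" references that travel with the user query.\n            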
if cached_files:\n                # 将缓存的文件附加到文本消息中\n                file_refs = []\n                for file_info in cached_files:\n                    file_path = file_info['path']\n                    file_type = file_info['type']\n                    if file_type == 'image':\n                        file_refs.append(f\"[图片: {file_path}]\")\n                    elif file_type == 'video':\n                        file_refs.append(f\"[视频: {file_path}]\")\n                    else:\n                        file_refs.append(f\"[文件: {file_path}]\")\n\n                feishu_msg.content = feishu_msg.content + \"\\n\" + \"\\n\".join(file_refs)\n                logger.info(f\"[FeiShu] Attached {len(cached_files)} cached file(s) to user query\")\n                # 清除缓存\n                file_cache.clear(session_id)\n\n        context = self._compose_context(\n            feishu_msg.ctype,\n            feishu_msg.content,\n            isgroup=is_group,\n            msg=feishu_msg,\n            receive_id_type=receive_id_type,\n            no_need_at=True\n        )\n        if context:\n            self.produce(context)\n        logger.debug(f\"[FeiShu] query={feishu_msg.content}, type={feishu_msg.ctype}\")\n\n    def send(self, reply: Reply, context: Context):\n        msg = context.get(\"msg\")\n        is_group = context[\"isgroup\"]\n        if msg:\n            access_token = msg.access_token\n        else:\n            access_token = self.fetch_access_token()\n        headers = {\n            \"Authorization\": \"Bearer \" + access_token,\n            \"Content-Type\": \"application/json\",\n        }\n        msg_type = \"text\"\n        logger.debug(f\"[FeiShu] sending reply, type={context.type}, content={reply.content[:100]}...\")\n        reply_content = reply.content\n        content_key = \"text\"\n        if reply.type == ReplyType.IMAGE_URL:\n            # 图片上传\n            reply_content = self._upload_image_url(reply.content, access_token)\n            if not reply_content:\n                logger.warning(\"[FeiShu] upload image failed\")\n                return\n            msg_type = \"image\"\n            content_key = \"image_key\"\n        elif reply.type == ReplyType.FILE:\n            # 如果有附加的文本内容，先发送文本\n            if hasattr(reply, 'text_content') and reply.text_content:\n                logger.info(f\"[FeiShu] Sending text before file: {reply.text_content[:50]}...\")\n                text_reply = Reply(ReplyType.TEXT, reply.text_content)\n                self._send(text_reply, context)\n                import time\n                time.sleep(0.3)  # 短暂延迟，确保文本先到达\n\n            # 判断是否为视频文件\n            file_path = reply.content\n            if file_path.startswith(\"file://\"):\n                file_path = file_path[7:]\n\n            is_video = file_path.lower().endswith(('.mp4', '.avi', '.mov', '.wmv', '.flv'))\n\n            if is_video:\n                # 视频上传（包含duration信息）\n                upload_data = self._upload_video_url(reply.content, access_token)\n                if not upload_data or not upload_data.get('file_key'):\n                    logger.warning(\"[FeiShu] upload video failed\")\n                    return\n\n                # 视频使用 media 类型（根据官方文档）\n                # 错误码 230055 说明：上传 mp4 时必须使用 msg_type=\"media\"\n                msg_type = \"media\"\n                reply_content = upload_data  # 完整的上传响应数据（包含file_key和duration）\n                logger.info(\n                    f\"[FeiShu] Sending video: file_key={upload_data.get('file_key')}, 
duration={upload_data.get('duration')}ms\")\n                content_key = None  # 直接序列化整个对象\n            else:\n                # 其他文件使用 file 类型\n                file_key = self._upload_file_url(reply.content, access_token)\n                if not file_key:\n                    logger.warning(\"[FeiShu] upload file failed\")\n                    return\n                reply_content = file_key\n                msg_type = \"file\"\n                content_key = \"file_key\"\n\n        # Check if we can reply to an existing message (need msg_id)\n        can_reply = is_group and msg and hasattr(msg, 'msg_id') and msg.msg_id\n\n        # Build content JSON\n        content_json = json.dumps(reply_content, ensure_ascii=False) if content_key is None else json.dumps({content_key: reply_content}, ensure_ascii=False)\n        logger.debug(f\"[FeiShu] Sending message: msg_type={msg_type}, content={content_json[:200]}\")\n\n        if can_reply:\n            # 群聊中回复已有消息\n            url = f\"https://open.feishu.cn/open-apis/im/v1/messages/{msg.msg_id}/reply\"\n            data = {\n                \"msg_type\": msg_type,\n                \"content\": content_json\n            }\n            res = requests.post(url=url, headers=headers, json=data, timeout=(5, 10))\n        else:\n            # 发送新消息（私聊或群聊中无msg_id的情况，如定时任务）\n            url = \"https://open.feishu.cn/open-apis/im/v1/messages\"\n            params = {\"receive_id_type\": context.get(\"receive_id_type\") or \"open_id\"}\n            data = {\n                \"receive_id\": context.get(\"receiver\"),\n                \"msg_type\": msg_type,\n                \"content\": content_json\n            }\n            res = requests.post(url=url, headers=headers, params=params, json=data, timeout=(5, 10))\n        res = res.json()\n        if res.get(\"code\") == 0:\n            logger.info(f\"[FeiShu] send message success\")\n        else:\n            logger.error(f\"[FeiShu] send message failed, code={res.get('code')}, msg={res.get('msg')}\")\n\n    def fetch_access_token(self) -> str:\n        url = \"https://open.feishu.cn/open-apis/auth/v3/tenant_access_token/internal/\"\n        headers = {\n            \"Content-Type\": \"application/json\"\n        }\n        req_body = {\n            \"app_id\": self.feishu_app_id,\n            \"app_secret\": self.feishu_app_secret\n        }\n        data = bytes(json.dumps(req_body), encoding='utf8')\n        response = requests.post(url=url, data=data, headers=headers)\n        if response.status_code == 200:\n            res = response.json()\n            if res.get(\"code\") != 0:\n                logger.error(f\"[FeiShu] get tenant_access_token error, code={res.get('code')}, msg={res.get('msg')}\")\n                return \"\"\n            else:\n                return res.get(\"tenant_access_token\")\n        else:\n            logger.error(f\"[FeiShu] fetch token error, res={response}\")\n\n    def _upload_image_url(self, img_url, access_token):\n        logger.debug(f\"[FeiShu] start process image, img_url={img_url}\")\n\n        # Check if it's a local file path (file:// protocol)\n        if img_url.startswith(\"file://\"):\n            local_path = img_url[7:]  # Remove \"file://\" prefix\n            logger.info(f\"[FeiShu] uploading local file: {local_path}\")\n\n            if not os.path.exists(local_path):\n                logger.error(f\"[FeiShu] local file not found: {local_path}\")\n                return None\n\n            # Upload directly from local file\n            
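# Feishu image upload is a multipart POST: the binary goes in the \"image\" form field with image_type=message.\n            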
upload_url = \"https://open.feishu.cn/open-apis/im/v1/images\"\n            data = {'image_type': 'message'}\n            headers = {'Authorization': f'Bearer {access_token}'}\n\n            with open(local_path, \"rb\") as file:\n                upload_response = requests.post(upload_url, files={\"image\": file}, data=data, headers=headers)\n                logger.info(f\"[FeiShu] upload file, res={upload_response.content}\")\n\n                response_data = upload_response.json()\n                if response_data.get(\"code\") == 0:\n                    return response_data.get(\"data\").get(\"image_key\")\n                else:\n                    logger.error(f\"[FeiShu] upload failed: {response_data}\")\n                    return None\n\n        # Original logic for HTTP URLs\n        response = requests.get(img_url)\n        suffix = utils.get_path_suffix(img_url)\n        temp_name = str(uuid.uuid4()) + \".\" + suffix\n        if response.status_code == 200:\n            # 将图片内容保存为临时文件\n            with open(temp_name, \"wb\") as file:\n                file.write(response.content)\n\n        # upload\n        upload_url = \"https://open.feishu.cn/open-apis/im/v1/images\"\n        data = {\n            'image_type': 'message'\n        }\n        headers = {\n            'Authorization': f'Bearer {access_token}',\n        }\n        with open(temp_name, \"rb\") as file:\n            upload_response = requests.post(upload_url, files={\"image\": file}, data=data, headers=headers)\n            logger.info(f\"[FeiShu] upload file, res={upload_response.content}\")\n            os.remove(temp_name)\n            return upload_response.json().get(\"data\").get(\"image_key\")\n\n    def _get_video_duration(self, file_path: str) -> int:\n        \"\"\"\n        获取视频时长（毫秒）\n        \n        Args:\n            file_path: 视频文件路径\n        \n        Returns:\n            视频时长（毫秒），如果获取失败返回0\n        \"\"\"\n        try:\n            import subprocess\n\n            # 使用 ffprobe 获取视频时长\n            cmd = [\n                'ffprobe',\n                '-v', 'error',\n                '-show_entries', 'format=duration',\n                '-of', 'default=noprint_wrappers=1:nokey=1',\n                file_path\n            ]\n\n            result = subprocess.run(cmd, capture_output=True, text=True, timeout=10)\n            if result.returncode == 0:\n                duration_seconds = float(result.stdout.strip())\n                duration_ms = int(duration_seconds * 1000)\n                logger.info(f\"[FeiShu] Video duration: {duration_seconds:.2f}s ({duration_ms}ms)\")\n                return duration_ms\n            else:\n                logger.warning(f\"[FeiShu] Failed to get video duration via ffprobe: {result.stderr}\")\n                return 0\n        except FileNotFoundError:\n            logger.warning(\"[FeiShu] ffprobe not found, video duration will be 0. 
Install ffmpeg to fix this.\")\n            return 0\n        except Exception as e:\n            logger.warning(f\"[FeiShu] Failed to get video duration: {e}\")\n            return 0\n\n    def _upload_video_url(self, video_url, access_token):\n        \"\"\"\n        Upload video to Feishu and return video info (file_key and duration)\n        Supports:\n        - file:// URLs for local files\n        - http(s):// URLs (download then upload)\n        \n        Returns:\n            dict with 'file_key' and 'duration' (milliseconds), or None if failed\n        \"\"\"\n        local_path = None\n        temp_file = None\n\n        try:\n            # For file:// URLs (local files), upload directly\n            if video_url.startswith(\"file://\"):\n                local_path = video_url[7:]  # Remove file:// prefix\n                if not os.path.exists(local_path):\n                    logger.error(f\"[FeiShu] local video file not found: {local_path}\")\n                    return None\n            else:\n                # For HTTP URLs, download first\n                logger.info(f\"[FeiShu] Downloading video from URL: {video_url}\")\n                response = requests.get(video_url, timeout=(5, 60))\n                if response.status_code != 200:\n                    logger.error(f\"[FeiShu] download video failed, status={response.status_code}\")\n                    return None\n\n                # Save to temp file\n                import uuid\n                file_name = os.path.basename(video_url) or \"video.mp4\"\n                temp_file = str(uuid.uuid4()) + \"_\" + file_name\n\n                with open(temp_file, \"wb\") as file:\n                    file.write(response.content)\n\n                logger.info(f\"[FeiShu] Video downloaded, size={len(response.content)} bytes\")\n                local_path = temp_file\n\n            # Get video duration\n            duration = self._get_video_duration(local_path)\n\n            # Upload to Feishu\n            file_name = os.path.basename(local_path)\n            file_ext = os.path.splitext(file_name)[1].lower()\n            file_type_map = {'.mp4': 'mp4'}\n            file_type = file_type_map.get(file_ext, 'mp4')\n\n            upload_url = \"https://open.feishu.cn/open-apis/im/v1/files\"\n            data = {\n                'file_type': file_type,\n                'file_name': file_name\n            }\n            # Add duration only if available (required for video/audio)\n            if duration:\n                data['duration'] = duration  # Must be int, not string\n\n            headers = {'Authorization': f'Bearer {access_token}'}\n\n            logger.info(f\"[FeiShu] Uploading video: file_name={file_name}, duration={duration}ms\")\n\n            with open(local_path, \"rb\") as file:\n                upload_response = requests.post(\n                    upload_url,\n                    files={\"file\": file},\n                    data=data,\n                    headers=headers,\n                    timeout=(5, 60)\n                )\n                logger.info(\n                    f\"[FeiShu] upload video response, status={upload_response.status_code}, res={upload_response.content}\")\n\n                response_data = upload_response.json()\n                if response_data.get(\"code\") == 0:\n                    # Add duration to the response data (API doesn't return it)\n                    upload_data = response_data.get(\"data\")\n                    upload_data['duration'] = duration  # Add our calculated 
duration\n                    logger.info(\n                        f\"[FeiShu] Upload complete: file_key={upload_data.get('file_key')}, duration={duration}ms\")\n                    return upload_data\n                else:\n                    logger.error(f\"[FeiShu] upload video failed: {response_data}\")\n                    return None\n\n        except Exception as e:\n            logger.error(f\"[FeiShu] upload video exception: {e}\")\n            return None\n\n        finally:\n            # Clean up temp file\n            if temp_file and os.path.exists(temp_file):\n                try:\n                    os.remove(temp_file)\n                except Exception as e:\n                    logger.warning(f\"[FeiShu] Failed to remove temp file {temp_file}: {e}\")\n\n    def _upload_file_url(self, file_url, access_token):\n        \"\"\"\n        Upload file to Feishu\n        Supports both local files (file://) and HTTP URLs\n        \"\"\"\n        logger.debug(f\"[FeiShu] start process file, file_url={file_url}\")\n\n        # Check if it's a local file path (file:// protocol)\n        if file_url.startswith(\"file://\"):\n            local_path = file_url[7:]  # Remove \"file://\" prefix\n            logger.info(f\"[FeiShu] uploading local file: {local_path}\")\n\n            if not os.path.exists(local_path):\n                logger.error(f\"[FeiShu] local file not found: {local_path}\")\n                return None\n\n            # Get file info\n            file_name = os.path.basename(local_path)\n            file_ext = os.path.splitext(file_name)[1].lower()\n\n            # Determine file type for Feishu API\n            # Feishu supports: opus, mp4, pdf, doc, xls, ppt, stream (other types)\n            file_type_map = {\n                '.opus': 'opus',\n                '.mp4': 'mp4',\n                '.pdf': 'pdf',\n                '.doc': 'doc', '.docx': 'doc',\n                '.xls': 'xls', '.xlsx': 'xls',\n                '.ppt': 'ppt', '.pptx': 'ppt',\n            }\n            file_type = file_type_map.get(file_ext, 'stream')  # Default to stream for other types\n\n            # Upload file to Feishu\n            upload_url = \"https://open.feishu.cn/open-apis/im/v1/files\"\n            data = {'file_type': file_type, 'file_name': file_name}\n            headers = {'Authorization': f'Bearer {access_token}'}\n\n            try:\n                with open(local_path, \"rb\") as file:\n                    upload_response = requests.post(\n                        upload_url,\n                        files={\"file\": file},\n                        data=data,\n                        headers=headers,\n                        timeout=(5, 30)  # 5s connect, 30s read timeout\n                    )\n                    logger.info(\n                        f\"[FeiShu] upload file response, status={upload_response.status_code}, res={upload_response.content}\")\n\n                    response_data = upload_response.json()\n                    if response_data.get(\"code\") == 0:\n                        return response_data.get(\"data\").get(\"file_key\")\n                    else:\n                        logger.error(f\"[FeiShu] upload file failed: {response_data}\")\n                        return None\n            except Exception as e:\n                logger.error(f\"[FeiShu] upload file exception: {e}\")\n                return None\n\n        # For HTTP URLs, download first then upload\n        try:\n            response = requests.get(file_url, timeout=(5, 30))\n     
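       # A non-200 here means fetching the source URL failed; Feishu upload errors are handled separately below.\n     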
       if response.status_code != 200:\n                logger.error(f\"[FeiShu] download file failed, status={response.status_code}\")\n                return None\n\n            # Save to temp file\n            import uuid\n            file_name = os.path.basename(file_url)\n            temp_name = str(uuid.uuid4()) + \"_\" + file_name\n\n            with open(temp_name, \"wb\") as file:\n                file.write(response.content)\n\n            # Upload\n            file_ext = os.path.splitext(file_name)[1].lower()\n            file_type_map = {\n                '.opus': 'opus', '.mp4': 'mp4', '.pdf': 'pdf',\n                '.doc': 'doc', '.docx': 'doc',\n                '.xls': 'xls', '.xlsx': 'xls',\n                '.ppt': 'ppt', '.pptx': 'ppt',\n            }\n            file_type = file_type_map.get(file_ext, 'stream')\n\n            upload_url = \"https://open.feishu.cn/open-apis/im/v1/files\"\n            data = {'file_type': file_type, 'file_name': file_name}\n            headers = {'Authorization': f'Bearer {access_token}'}\n\n            with open(temp_name, \"rb\") as file:\n                upload_response = requests.post(upload_url, files={\"file\": file}, data=data, headers=headers)\n                logger.info(f\"[FeiShu] upload file, res={upload_response.content}\")\n\n                response_data = upload_response.json()\n                os.remove(temp_name)  # Clean up temp file\n\n                if response_data.get(\"code\") == 0:\n                    return response_data.get(\"data\").get(\"file_key\")\n                else:\n                    logger.error(f\"[FeiShu] upload file failed: {response_data}\")\n                    return None\n        except Exception as e:\n            logger.error(f\"[FeiShu] upload file from URL exception: {e}\")\n            return None\n\n    def _compose_context(self, ctype: ContextType, content, **kwargs):\n        context = Context(ctype, content)\n        context.kwargs = kwargs\n        if \"channel_type\" not in context:\n            context[\"channel_type\"] = self.channel_type\n        if \"origin_ctype\" not in context:\n            context[\"origin_ctype\"] = ctype\n\n        cmsg = context[\"msg\"]\n\n        # Set session_id based on chat type\n        if cmsg.is_group:\n            # Group chat: check if group_shared_session is enabled\n            if conf().get(\"group_shared_session\", True):\n                # All users in the group share the same session context\n                context[\"session_id\"] = cmsg.other_user_id  # group_id\n            else:\n                # Each user has their own session within the group\n                # This ensures:\n                # - Same user in different groups have separate conversation histories\n                # - Same user in private chat and group chat have separate histories\n                context[\"session_id\"] = f\"{cmsg.from_user_id}:{cmsg.other_user_id}\"\n        else:\n            # Private chat: use user_id only\n            context[\"session_id\"] = cmsg.from_user_id\n\n        context[\"receiver\"] = cmsg.other_user_id\n\n        if ctype == ContextType.TEXT:\n            # 1.文本请求\n            # 图片生成处理\n            img_match_prefix = check_prefix(content, conf().get(\"image_create_prefix\"))\n            if img_match_prefix:\n                content = content.replace(img_match_prefix, \"\", 1)\n                context.type = ContextType.IMAGE_CREATE\n            else:\n                context.type = ContextType.TEXT\n            context.content = 
content.strip()\n\n        elif context.type == ContextType.VOICE:\n            # 2.语音请求\n            if \"desire_rtype\" not in context and conf().get(\"voice_reply_voice\"):\n                context[\"desire_rtype\"] = ReplyType.VOICE\n\n        return context\n\n\nclass FeishuController:\n    \"\"\"\n    HTTP服务器控制器，用于webhook模式\n    \"\"\"\n    # 类常量\n    FAILED_MSG = '{\"success\": false}'\n    SUCCESS_MSG = '{\"success\": true}'\n    MESSAGE_RECEIVE_TYPE = \"im.message.receive_v1\"\n\n    def GET(self):\n        return \"Feishu service start success!\"\n\n    def POST(self):\n        try:\n            channel = FeiShuChanel()\n\n            request = json.loads(web.data().decode(\"utf-8\"))\n            logger.debug(f\"[FeiShu] receive request: {request}\")\n\n            # 1.事件订阅回调验证\n            if request.get(\"type\") == URL_VERIFICATION:\n                varify_res = {\"challenge\": request.get(\"challenge\")}\n                return json.dumps(varify_res)\n\n            # 2.消息接收处理\n            # token 校验\n            header = request.get(\"header\")\n            if not header or header.get(\"token\") != channel.feishu_token:\n                return self.FAILED_MSG\n\n            # 处理消息事件\n            event = request.get(\"event\")\n            if header.get(\"event_type\") == self.MESSAGE_RECEIVE_TYPE and event:\n                channel._handle_message_event(event)\n\n            return self.SUCCESS_MSG\n\n        except Exception as e:\n            logger.error(e)\n            return self.FAILED_MSG\n"
  },
  {
    "path": "channel/feishu/feishu_message.py",
    "content": "from bridge.context import ContextType\nfrom channel.chat_message import ChatMessage\nimport json\nimport os\nimport requests\nfrom common.log import logger\nfrom common.tmp_dir import TmpDir\nfrom common import utils\nfrom common.utils import expand_path\nfrom config import conf\n\n\nclass FeishuMessage(ChatMessage):\n    def __init__(self, event: dict, is_group=False, access_token=None):\n        super().__init__(event)\n        msg = event.get(\"message\")\n        sender = event.get(\"sender\")\n        self.access_token = access_token\n        self.msg_id = msg.get(\"message_id\")\n        self.create_time = msg.get(\"create_time\")\n        self.is_group = is_group\n        msg_type = msg.get(\"message_type\")\n\n        if msg_type == \"text\":\n            self.ctype = ContextType.TEXT\n            content = json.loads(msg.get('content'))\n            self.content = content.get(\"text\").strip()\n        elif msg_type == \"image\":\n            # 单张图片消息：下载并缓存，等待用户提问时一起发送\n            self.ctype = ContextType.IMAGE\n            content = json.loads(msg.get(\"content\"))\n            image_key = content.get(\"image_key\")\n            \n            # 下载图片到工作空间临时目录\n            workspace_root = expand_path(conf().get(\"agent_workspace\", \"~/cow\"))\n            tmp_dir = os.path.join(workspace_root, \"tmp\")\n            os.makedirs(tmp_dir, exist_ok=True)\n            image_path = os.path.join(tmp_dir, f\"{image_key}.png\")\n            \n            # 下载图片\n            url = f\"https://open.feishu.cn/open-apis/im/v1/messages/{msg.get('message_id')}/resources/{image_key}\"\n            headers = {\"Authorization\": \"Bearer \" + access_token}\n            params = {\"type\": \"image\"}\n            response = requests.get(url=url, headers=headers, params=params)\n            \n            if response.status_code == 200:\n                with open(image_path, \"wb\") as f:\n                    f.write(response.content)\n                logger.info(f\"[FeiShu] Downloaded single image, key={image_key}, path={image_path}\")\n                self.content = image_path\n                self.image_path = image_path  # 保存图片路径\n            else:\n                logger.error(f\"[FeiShu] Failed to download single image, key={image_key}, status={response.status_code}\")\n                self.content = f\"[图片下载失败: {image_key}]\"\n                self.image_path = None\n        elif msg_type == \"post\":\n            # 富文本消息，可能包含图片、文本等多种元素\n            content = json.loads(msg.get(\"content\"))\n            \n            # 飞书富文本消息结构：content 直接包含 title 和 content 数组\n            # 不是嵌套在 post 字段下\n            title = content.get(\"title\", \"\")\n            content_list = content.get(\"content\", [])\n            \n            logger.info(f\"[FeiShu] Post message - title: '{title}', content_list length: {len(content_list)}\")\n            \n            # 收集所有图片和文本\n            image_keys = []\n            text_parts = []\n            \n            if title:\n                text_parts.append(title)\n            \n            for block in content_list:\n                logger.debug(f\"[FeiShu] Processing block: {block}\")\n                # block 本身就是元素列表\n                if not isinstance(block, list):\n                    continue\n                    \n                for element in block:\n                    element_tag = element.get(\"tag\")\n                    logger.debug(f\"[FeiShu] Element tag: {element_tag}, element: {element}\")\n                    if element_tag == 
\"img\":\n                        # 找到图片元素\n                        image_key = element.get(\"image_key\")\n                        if image_key:\n                            image_keys.append(image_key)\n                    elif element_tag == \"text\":\n                        # 文本元素\n                        text_content = element.get(\"text\", \"\")\n                        if text_content:\n                            text_parts.append(text_content)\n            \n            logger.info(f\"[FeiShu] Parsed - images: {len(image_keys)}, text_parts: {text_parts}\")\n            \n            # 富文本消息统一作为文本消息处理\n            self.ctype = ContextType.TEXT\n            \n            if image_keys:\n                # 如果包含图片，下载并在文本中引用本地路径\n                workspace_root = expand_path(conf().get(\"agent_workspace\", \"~/cow\"))\n                tmp_dir = os.path.join(workspace_root, \"tmp\")\n                os.makedirs(tmp_dir, exist_ok=True)\n                \n                # 保存图片路径映射\n                self.image_paths = {}\n                for image_key in image_keys:\n                    image_path = os.path.join(tmp_dir, f\"{image_key}.png\")\n                    self.image_paths[image_key] = image_path\n                \n                def _download_images():\n                    for image_key, image_path in self.image_paths.items():\n                        url = f\"https://open.feishu.cn/open-apis/im/v1/messages/{self.msg_id}/resources/{image_key}\"\n                        headers = {\"Authorization\": \"Bearer \" + access_token}\n                        params = {\"type\": \"image\"}\n                        response = requests.get(url=url, headers=headers, params=params)\n                        if response.status_code == 200:\n                            with open(image_path, \"wb\") as f:\n                                f.write(response.content)\n                            logger.info(f\"[FeiShu] Image downloaded from post message, key={image_key}, path={image_path}\")\n                        else:\n                            logger.error(f\"[FeiShu] Failed to download image from post, key={image_key}, status={response.status_code}\")\n                \n                # 立即下载图片，不使用延迟下载\n                # 因为 TEXT 类型消息不会调用 prepare()\n                _download_images()\n                \n                # 构建消息内容：文本 + 图片路径\n                content_parts = []\n                if text_parts:\n                    content_parts.append(\"\\n\".join(text_parts).strip())\n                for image_key, image_path in self.image_paths.items():\n                    content_parts.append(f\"[图片: {image_path}]\")\n                \n                self.content = \"\\n\".join(content_parts)\n                logger.info(f\"[FeiShu] Received post message with {len(image_keys)} image(s) and text: {self.content}\")\n            else:\n                # 纯文本富文本消息\n                self.content = \"\\n\".join(text_parts).strip() if text_parts else \"[富文本消息]\"\n                logger.info(f\"[FeiShu] Received post message (text only): {self.content}\")\n        elif msg_type == \"file\":\n            self.ctype = ContextType.FILE\n            content = json.loads(msg.get(\"content\"))\n            file_key = content.get(\"file_key\")\n            file_name = content.get(\"file_name\")\n\n            self.content = TmpDir().path() + file_key + \".\" + utils.get_path_suffix(file_name)\n\n            def _download_file():\n                # 如果响应状态码是200，则将响应内容写入本地文件\n                url = 
f\"https://open.feishu.cn/open-apis/im/v1/messages/{self.msg_id}/resources/{file_key}\"\n                headers = {\n                    \"Authorization\": \"Bearer \" + access_token,\n                }\n                params = {\n                    \"type\": \"file\"\n                }\n                response = requests.get(url=url, headers=headers, params=params)\n                if response.status_code == 200:\n                    with open(self.content, \"wb\") as f:\n                        f.write(response.content)\n                else:\n                    logger.error(f\"[FeiShu] Failed to download file, key={file_key}, res={response.text}\")\n            self._prepare_fn = _download_file\n        else:\n            raise NotImplementedError(\"Unsupported message type: {}\".format(msg_type))\n\n        self.from_user_id = sender.get(\"sender_id\").get(\"open_id\")\n        self.to_user_id = event.get(\"app_id\")\n        if is_group:\n            # Group chat\n            self.other_user_id = msg.get(\"chat_id\")\n            self.actual_user_id = self.from_user_id\n            self.content = self.content.replace(\"@_user_1\", \"\").strip()\n            self.actual_user_nickname = \"\"\n        else:\n            # Private chat\n            self.other_user_id = self.from_user_id\n            self.actual_user_id = self.from_user_id\n
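\n\n# Illustrative im.message.receive_v1 event shape (only the fields read above; hypothetical values):\n#   {\"app_id\": \"cli_xxx\",\n#    \"sender\": {\"sender_id\": {\"open_id\": \"ou_xxx\"}},\n#    \"message\": {\"message_id\": \"om_xxx\", \"create_time\": \"...\", \"chat_id\": \"oc_xxx\",\n#                \"message_type\": \"text\", \"content\": \"{\\\"text\\\": \\\"hello\\\"}\"}}\n"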
  },
  {
    "path": "channel/file_cache.py",
    "content": "\"\"\"\n文件缓存管理器\n用于缓存单独发送的文件消息（图片、视频、文档等），在用户提问时自动附加\n\"\"\"\nimport time\nimport logging\n\nlogger = logging.getLogger(__name__)\n\n\nclass FileCache:\n    \"\"\"文件缓存管理器，按 session_id 缓存文件，TTL=2分钟\"\"\"\n    \n    def __init__(self, ttl=120):\n        \"\"\"\n        Args:\n            ttl: 缓存过期时间（秒），默认2分钟\n        \"\"\"\n        self.cache = {}\n        self.ttl = ttl\n    \n    def add(self, session_id: str, file_path: str, file_type: str = \"image\"):\n        \"\"\"\n        添加文件到缓存\n        \n        Args:\n            session_id: 会话ID\n            file_path: 文件本地路径\n            file_type: 文件类型（image, video, file 等）\n        \"\"\"\n        if session_id not in self.cache:\n            self.cache[session_id] = {\n                'files': [],\n                'timestamp': time.time()\n            }\n        \n        # 添加文件（去重）\n        file_info = {'path': file_path, 'type': file_type}\n        if file_info not in self.cache[session_id]['files']:\n            self.cache[session_id]['files'].append(file_info)\n            logger.info(f\"[FileCache] Added {file_type} to cache for session {session_id}: {file_path}\")\n    \n    def get(self, session_id: str) -> list:\n        \"\"\"\n        获取缓存的文件列表\n        \n        Args:\n            session_id: 会话ID\n        \n        Returns:\n            文件信息列表 [{'path': '...', 'type': 'image'}, ...]，如果没有或已过期返回空列表\n        \"\"\"\n        if session_id not in self.cache:\n            return []\n        \n        item = self.cache[session_id]\n        \n        # 检查是否过期\n        if time.time() - item['timestamp'] > self.ttl:\n            logger.info(f\"[FileCache] Cache expired for session {session_id}, clearing...\")\n            del self.cache[session_id]\n            return []\n        \n        return item['files']\n    \n    def clear(self, session_id: str):\n        \"\"\"\n        清除指定会话的缓存\n        \n        Args:\n            session_id: 会话ID\n        \"\"\"\n        if session_id in self.cache:\n            logger.info(f\"[FileCache] Cleared cache for session {session_id}\")\n            del self.cache[session_id]\n    \n    def cleanup_expired(self):\n        \"\"\"清理所有过期的缓存\"\"\"\n        current_time = time.time()\n        expired_sessions = []\n        \n        for session_id, item in self.cache.items():\n            if current_time - item['timestamp'] > self.ttl:\n                expired_sessions.append(session_id)\n        \n        for session_id in expired_sessions:\n            del self.cache[session_id]\n            logger.debug(f\"[FileCache] Cleaned up expired cache for session {session_id}\")\n        \n        if expired_sessions:\n            logger.info(f\"[FileCache] Cleaned up {len(expired_sessions)} expired cache(s)\")\n\n\n# 全局单例\n_file_cache = FileCache()\n\n\ndef get_file_cache() -> FileCache:\n    \"\"\"获取全局文件缓存实例\"\"\"\n    return _file_cache\n"
  },
  {
    "path": "channel/qq/__init__.py",
    "content": ""
  },
  {
    "path": "channel/qq/qq_channel.py",
    "content": "\"\"\"\nQQ Bot channel via WebSocket long connection.\n\nSupports:\n- Group chat (@bot), single chat (C2C), guild channel, guild DM\n- Text / image / file message send & receive\n- Heartbeat keep-alive and auto-reconnect with session resume\n\"\"\"\n\nimport base64\nimport json\nimport os\nimport threading\nimport time\n\nimport requests\nimport websocket\n\nfrom bridge.context import Context, ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom channel.chat_channel import ChatChannel, check_prefix\nfrom channel.qq.qq_message import QQMessage\nfrom common.expired_dict import ExpiredDict\nfrom common.log import logger\nfrom common.singleton import singleton\nfrom config import conf\n\n# Rich media file_type constants\nQQ_FILE_TYPE_IMAGE = 1\nQQ_FILE_TYPE_VIDEO = 2\nQQ_FILE_TYPE_VOICE = 3\nQQ_FILE_TYPE_FILE = 4\n\nQQ_API_BASE = \"https://api.sgroup.qq.com\"\n\n# Intents: GROUP_AND_C2C_EVENT(1<<25) | PUBLIC_GUILD_MESSAGES(1<<30)\nDEFAULT_INTENTS = (1 << 25) | (1 << 30)\n\n# OpCode constants\nOP_DISPATCH = 0\nOP_HEARTBEAT = 1\nOP_IDENTIFY = 2\nOP_RESUME = 6\nOP_RECONNECT = 7\nOP_INVALID_SESSION = 9\nOP_HELLO = 10\nOP_HEARTBEAT_ACK = 11\n\n# Resumable error codes\nRESUMABLE_CLOSE_CODES = {4008, 4009}\n\n\n@singleton\nclass QQChannel(ChatChannel):\n\n    def __init__(self):\n        super().__init__()\n        self.app_id = \"\"\n        self.app_secret = \"\"\n\n        self._access_token = \"\"\n        self._token_expires_at = 0\n\n        self._ws = None\n        self._ws_thread = None\n        self._heartbeat_thread = None\n        self._connected = False\n        self._stop_event = threading.Event()\n        self._token_lock = threading.Lock()\n\n        self._session_id = None\n        self._last_seq = None\n        self._heartbeat_interval = 45000\n        self._can_resume = False\n\n        self.received_msgs = ExpiredDict(60 * 60 * 7.1)\n        self._msg_seq_counter = {}\n\n        conf()[\"group_name_white_list\"] = [\"ALL_GROUP\"]\n        conf()[\"single_chat_prefix\"] = [\"\"]\n\n    # ------------------------------------------------------------------\n    # Lifecycle\n    # ------------------------------------------------------------------\n\n    def startup(self):\n        self.app_id = conf().get(\"qq_app_id\", \"\")\n        self.app_secret = conf().get(\"qq_app_secret\", \"\")\n\n        if not self.app_id or not self.app_secret:\n            err = \"[QQ] qq_app_id and qq_app_secret are required\"\n            logger.error(err)\n            self.report_startup_error(err)\n            return\n\n        self._refresh_access_token()\n        if not self._access_token:\n            err = \"[QQ] Failed to get initial access_token\"\n            logger.error(err)\n            self.report_startup_error(err)\n            return\n\n        self._stop_event.clear()\n        self._start_ws()\n\n    def stop(self):\n        logger.info(\"[QQ] stop() called\")\n        self._stop_event.set()\n        if self._ws:\n            try:\n                self._ws.close()\n            except Exception:\n                pass\n        self._ws = None\n        self._connected = False\n\n    # ------------------------------------------------------------------\n    # Access Token\n    # ------------------------------------------------------------------\n\n    def _refresh_access_token(self):\n        try:\n            resp = requests.post(\n                \"https://bots.qq.com/app/getAppAccessToken\",\n                json={\"appId\": self.app_id, \"clientSecret\": 
self.app_secret},\n                timeout=10,\n            )\n            resp.raise_for_status()\n            data = resp.json()\n            self._access_token = data.get(\"access_token\", \"\")\n            expires_in = int(data.get(\"expires_in\", 7200))\n            self._token_expires_at = time.time() + expires_in - 60\n            logger.debug(f\"[QQ] Access token refreshed, expires_in={expires_in}s\")\n        except Exception as e:\n            logger.error(f\"[QQ] Failed to refresh access_token: {e}\")\n\n    def _get_access_token(self) -> str:\n        with self._token_lock:\n            if time.time() >= self._token_expires_at:\n                self._refresh_access_token()\n            return self._access_token\n\n    def _get_auth_headers(self) -> dict:\n        return {\n            \"Authorization\": f\"QQBot {self._get_access_token()}\",\n            \"Content-Type\": \"application/json\",\n        }\n\n    # ------------------------------------------------------------------\n    # WebSocket connection\n    # ------------------------------------------------------------------\n\n    def _get_ws_url(self) -> str:\n        try:\n            resp = requests.get(\n                f\"{QQ_API_BASE}/gateway\",\n                headers=self._get_auth_headers(),\n                timeout=10,\n            )\n            resp.raise_for_status()\n            url = resp.json().get(\"url\", \"\")\n            logger.debug(f\"[QQ] Gateway URL: {url}\")\n            return url\n        except Exception as e:\n            logger.error(f\"[QQ] Failed to get gateway URL: {e}\")\n            return \"\"\n\n    def _start_ws(self):\n        ws_url = self._get_ws_url()\n        if not ws_url:\n            logger.error(\"[QQ] Cannot start WebSocket without gateway URL\")\n            self.report_startup_error(\"Failed to get gateway URL\")\n            return\n\n        def _on_open(ws):\n            logger.debug(\"[QQ] WebSocket connected, waiting for Hello...\")\n\n        def _on_message(ws, raw):\n            try:\n                data = json.loads(raw)\n                self._handle_ws_message(data)\n            except Exception as e:\n                logger.error(f\"[QQ] Failed to handle ws message: {e}\", exc_info=True)\n\n        def _on_error(ws, error):\n            logger.error(f\"[QQ] WebSocket error: {error}\")\n\n        def _on_close(ws, close_status_code, close_msg):\n            logger.warning(f\"[QQ] WebSocket closed: status={close_status_code}, msg={close_msg}\")\n            self._connected = False\n            if not self._stop_event.is_set():\n                if close_status_code in RESUMABLE_CLOSE_CODES and self._session_id:\n                    self._can_resume = True\n                    logger.info(\"[QQ] Will attempt resume in 3s...\")\n                    time.sleep(3)\n                else:\n                    self._can_resume = False\n                    logger.info(\"[QQ] Will reconnect in 5s...\")\n                    time.sleep(5)\n                if not self._stop_event.is_set():\n                    self._start_ws()\n\n        self._ws = websocket.WebSocketApp(\n            ws_url,\n            on_open=_on_open,\n            on_message=_on_message,\n            on_error=_on_error,\n            on_close=_on_close,\n        )\n\n        def run_forever():\n            try:\n                self._ws.run_forever(ping_interval=0, reconnect=0)\n            except (SystemExit, KeyboardInterrupt):\n                logger.info(\"[QQ] WebSocket thread interrupted\")\n      
      except Exception as e:\n                logger.error(f\"[QQ] WebSocket run_forever error: {e}\")\n\n        self._ws_thread = threading.Thread(target=run_forever, daemon=True)\n        self._ws_thread.start()\n        self._ws_thread.join()\n\n    def _ws_send(self, data: dict):\n        if self._ws:\n            self._ws.send(json.dumps(data, ensure_ascii=False))\n\n    # ------------------------------------------------------------------\n    # Identify & Resume & Heartbeat\n    # ------------------------------------------------------------------\n\n    def _send_identify(self):\n        self._ws_send({\n            \"op\": OP_IDENTIFY,\n            \"d\": {\n                \"token\": f\"QQBot {self._get_access_token()}\",\n                \"intents\": DEFAULT_INTENTS,\n                \"shard\": [0, 1],\n                \"properties\": {\n                    \"$os\": \"linux\",\n                    \"$browser\": \"chatgpt-on-wechat\",\n                    \"$device\": \"chatgpt-on-wechat\",\n                },\n            },\n        })\n        logger.debug(f\"[QQ] Identify sent with intents={DEFAULT_INTENTS}\")\n\n    def _send_resume(self):\n        self._ws_send({\n            \"op\": OP_RESUME,\n            \"d\": {\n                \"token\": f\"QQBot {self._get_access_token()}\",\n                \"session_id\": self._session_id,\n                \"seq\": self._last_seq,\n            },\n        })\n        logger.debug(f\"[QQ] Resume sent: session_id={self._session_id}, seq={self._last_seq}\")\n\n    def _start_heartbeat(self, interval_ms: int):\n        if self._heartbeat_thread and self._heartbeat_thread.is_alive():\n            return\n        self._heartbeat_interval = interval_ms\n        interval_sec = interval_ms / 1000.0\n\n        def heartbeat_loop():\n            while not self._stop_event.is_set() and self._connected:\n                try:\n                    self._ws_send({\n                        \"op\": OP_HEARTBEAT,\n                        \"d\": self._last_seq,\n                    })\n                except Exception as e:\n                    logger.warning(f\"[QQ] Heartbeat send failed: {e}\")\n                    break\n                self._stop_event.wait(interval_sec)\n\n        self._heartbeat_thread = threading.Thread(target=heartbeat_loop, daemon=True)\n        self._heartbeat_thread.start()\n\n    # ------------------------------------------------------------------\n    # Incoming message dispatch\n    # ------------------------------------------------------------------\n\n    def _handle_ws_message(self, data: dict):\n        op = data.get(\"op\")\n        d = data.get(\"d\")\n        t = data.get(\"t\")\n        s = data.get(\"s\")\n\n        if s is not None:\n            self._last_seq = s\n\n        if op == OP_HELLO:\n            heartbeat_interval = d.get(\"heartbeat_interval\", 45000) if d else 45000\n            logger.debug(f\"[QQ] Received Hello, heartbeat_interval={heartbeat_interval}ms\")\n            self._heartbeat_interval = heartbeat_interval\n            if self._can_resume and self._session_id:\n                self._send_resume()\n            else:\n                self._send_identify()\n\n        elif op == OP_HEARTBEAT_ACK:\n            pass\n\n        elif op == OP_HEARTBEAT:\n            self._ws_send({\"op\": OP_HEARTBEAT, \"d\": self._last_seq})\n\n        elif op == OP_RECONNECT:\n            logger.warning(\"[QQ] Server requested reconnect\")\n            self._can_resume = True\n            if self._ws:\n         
       self._ws.close()\n\n        elif op == OP_INVALID_SESSION:\n            logger.warning(\"[QQ] Invalid session, re-identifying...\")\n            self._session_id = None\n            self._can_resume = False\n            time.sleep(2)\n            self._send_identify()\n\n        elif op == OP_DISPATCH:\n            if t == \"READY\":\n                self._session_id = d.get(\"session_id\", \"\")\n                user = d.get(\"user\", {})\n                bot_name = user.get('username', '')\n                logger.info(f\"[QQ] ✅ Connected successfully (bot={bot_name})\")\n                self._connected = True\n                self._can_resume = False\n                self._start_heartbeat(self._heartbeat_interval)\n                self.report_startup_success()\n\n            elif t == \"RESUMED\":\n                logger.info(\"[QQ] Session resumed successfully\")\n                self._connected = True\n                self._can_resume = False\n                self._start_heartbeat(self._heartbeat_interval)\n\n            elif t in (\"GROUP_AT_MESSAGE_CREATE\", \"C2C_MESSAGE_CREATE\",\n                        \"AT_MESSAGE_CREATE\", \"DIRECT_MESSAGE_CREATE\"):\n                self._handle_msg_event(d, t)\n\n            elif t in (\"GROUP_ADD_ROBOT\", \"FRIEND_ADD\"):\n                logger.info(f\"[QQ] Event: {t}\")\n\n            else:\n                logger.debug(f\"[QQ] Dispatch event: {t}\")\n\n    # ------------------------------------------------------------------\n    # Message event handling\n    # ------------------------------------------------------------------\n\n    def _handle_msg_event(self, event_data: dict, event_type: str):\n        msg_id = event_data.get(\"id\", \"\")\n        if self.received_msgs.get(msg_id):\n            logger.debug(f\"[QQ] Duplicate msg filtered: {msg_id}\")\n            return\n        self.received_msgs[msg_id] = True\n\n        try:\n            qq_msg = QQMessage(event_data, event_type)\n        except NotImplementedError as e:\n            logger.warning(f\"[QQ] {e}\")\n            return\n        except Exception as e:\n            logger.error(f\"[QQ] Failed to parse message: {e}\", exc_info=True)\n            return\n\n        is_group = qq_msg.is_group\n\n        from channel.file_cache import get_file_cache\n        file_cache = get_file_cache()\n\n        if is_group:\n            session_id = qq_msg.other_user_id\n        else:\n            session_id = qq_msg.from_user_id\n\n        if qq_msg.ctype == ContextType.IMAGE:\n            if hasattr(qq_msg, \"image_path\") and qq_msg.image_path:\n                file_cache.add(session_id, qq_msg.image_path, file_type=\"image\")\n                logger.info(f\"[QQ] Image cached for session {session_id}\")\n            return\n\n        if qq_msg.ctype == ContextType.TEXT:\n            cached_files = file_cache.get(session_id)\n            if cached_files:\n                file_refs = []\n                for fi in cached_files:\n                    ftype = fi[\"type\"]\n                    fpath = fi[\"path\"]\n                    if ftype == \"image\":\n                        file_refs.append(f\"[图片: {fpath}]\")\n                    elif ftype == \"video\":\n                        file_refs.append(f\"[视频: {fpath}]\")\n                    else:\n                        file_refs.append(f\"[文件: {fpath}]\")\n                qq_msg.content = qq_msg.content + \"\\n\" + \"\\n\".join(file_refs)\n                logger.info(f\"[QQ] Attached {len(cached_files)} cached file(s)\")\n        
        file_cache.clear(session_id)\n\n        context = self._compose_context(\n            qq_msg.ctype,\n            qq_msg.content,\n            isgroup=is_group,\n            msg=qq_msg,\n            no_need_at=True,\n        )\n        if context:\n            self.produce(context)\n\n    # ------------------------------------------------------------------\n    # _compose_context\n    # ------------------------------------------------------------------\n\n    def _compose_context(self, ctype: ContextType, content, **kwargs):\n        context = Context(ctype, content)\n        context.kwargs = kwargs\n        if \"channel_type\" not in context:\n            context[\"channel_type\"] = self.channel_type\n        if \"origin_ctype\" not in context:\n            context[\"origin_ctype\"] = ctype\n\n        cmsg = context[\"msg\"]\n\n        if cmsg.is_group:\n            context[\"session_id\"] = cmsg.other_user_id\n        else:\n            context[\"session_id\"] = cmsg.from_user_id\n\n        context[\"receiver\"] = cmsg.other_user_id\n\n        if ctype == ContextType.TEXT:\n            img_match_prefix = check_prefix(content, conf().get(\"image_create_prefix\"))\n            if img_match_prefix:\n                content = content.replace(img_match_prefix, \"\", 1)\n                context.type = ContextType.IMAGE_CREATE\n            else:\n                context.type = ContextType.TEXT\n            context.content = content.strip()\n\n        return context\n\n    # ------------------------------------------------------------------\n    # Send reply\n    # ------------------------------------------------------------------\n\n    def send(self, reply: Reply, context: Context):\n        msg = context.get(\"msg\")\n        is_group = context.get(\"isgroup\", False)\n        receiver = context.get(\"receiver\", \"\")\n\n        if not msg:\n            # Active send (e.g. 
scheduled tasks), no original message to reply to\n            self._active_send_text(reply.content if reply.type == ReplyType.TEXT else str(reply.content),\n                                   receiver, is_group)\n            return\n\n        event_type = getattr(msg, \"event_type\", \"\")\n        msg_id = getattr(msg, \"msg_id\", \"\")\n\n        if reply.type == ReplyType.TEXT:\n            self._send_text(reply.content, msg, event_type, msg_id)\n        elif reply.type in (ReplyType.IMAGE_URL, ReplyType.IMAGE):\n            self._send_image(reply.content, msg, event_type, msg_id)\n        elif reply.type == ReplyType.FILE:\n            if hasattr(reply, \"text_content\") and reply.text_content:\n                self._send_text(reply.text_content, msg, event_type, msg_id)\n                time.sleep(0.3)\n            self._send_file(reply.content, msg, event_type, msg_id)\n        elif reply.type in (ReplyType.VIDEO, ReplyType.VIDEO_URL):\n            self._send_media(reply.content, msg, event_type, msg_id, QQ_FILE_TYPE_VIDEO)\n        else:\n            logger.warning(f\"[QQ] Unsupported reply type: {reply.type}, falling back to text\")\n            self._send_text(str(reply.content), msg, event_type, msg_id)\n\n    # ------------------------------------------------------------------\n    # Send helpers\n    # ------------------------------------------------------------------\n\n    def _get_next_msg_seq(self, msg_id: str) -> int:\n        seq = self._msg_seq_counter.get(msg_id, 1)\n        self._msg_seq_counter[msg_id] = seq + 1\n        return seq\n\n    def _build_msg_url_and_base_body(self, msg: QQMessage, event_type: str, msg_id: str):\n        \"\"\"Build the API URL and base body dict for sending a message.\"\"\"\n        if event_type == \"GROUP_AT_MESSAGE_CREATE\":\n            group_openid = msg._rawmsg.get(\"group_openid\", \"\")\n            url = f\"{QQ_API_BASE}/v2/groups/{group_openid}/messages\"\n            body = {\n                \"msg_id\": msg_id,\n                \"msg_seq\": self._get_next_msg_seq(msg_id),\n            }\n            return url, body, \"group\", group_openid\n\n        elif event_type == \"C2C_MESSAGE_CREATE\":\n            user_openid = msg._rawmsg.get(\"author\", {}).get(\"user_openid\", \"\") or msg.from_user_id\n            url = f\"{QQ_API_BASE}/v2/users/{user_openid}/messages\"\n            body = {\n                \"msg_id\": msg_id,\n                \"msg_seq\": self._get_next_msg_seq(msg_id),\n            }\n            return url, body, \"c2c\", user_openid\n\n        elif event_type == \"AT_MESSAGE_CREATE\":\n            channel_id = msg._rawmsg.get(\"channel_id\", \"\")\n            url = f\"{QQ_API_BASE}/channels/{channel_id}/messages\"\n            body = {\"msg_id\": msg_id}\n            return url, body, \"channel\", channel_id\n\n        elif event_type == \"DIRECT_MESSAGE_CREATE\":\n            guild_id = msg._rawmsg.get(\"guild_id\", \"\")\n            url = f\"{QQ_API_BASE}/dms/{guild_id}/messages\"\n            body = {\"msg_id\": msg_id}\n            return url, body, \"dm\", guild_id\n\n        return None, None, None, None\n\n    def _post_message(self, url: str, body: dict, event_type: str):\n        try:\n            resp = requests.post(url, json=body, headers=self._get_auth_headers(), timeout=10)\n            if resp.status_code in (200, 201, 202, 204):\n                logger.info(f\"[QQ] Message sent successfully: event_type={event_type}\")\n            else:\n                logger.error(f\"[QQ] Failed to send 
message: status={resp.status_code}, \"\n                             f\"body={resp.text}\")\n        except Exception as e:\n            logger.error(f\"[QQ] Send message error: {e}\")\n\n    # ------------------------------------------------------------------\n    # Active send (no original message, e.g. scheduled tasks)\n    # ------------------------------------------------------------------\n\n    def _active_send_text(self, content: str, receiver: str, is_group: bool):\n        \"\"\"Send text without an original message (active push). QQ limits active messages to 4/month per user.\"\"\"\n        if not receiver:\n            logger.warning(\"[QQ] No receiver for active send\")\n            return\n        if is_group:\n            url = f\"{QQ_API_BASE}/v2/groups/{receiver}/messages\"\n        else:\n            url = f\"{QQ_API_BASE}/v2/users/{receiver}/messages\"\n        body = {\n            \"content\": content,\n            \"msg_type\": 0,\n        }\n        event_label = \"GROUP_ACTIVE\" if is_group else \"C2C_ACTIVE\"\n        self._post_message(url, body, event_label)\n\n    # ------------------------------------------------------------------\n    # Send text\n    # ------------------------------------------------------------------\n\n    def _send_text(self, content: str, msg: QQMessage, event_type: str, msg_id: str):\n        url, body, _, _ = self._build_msg_url_and_base_body(msg, event_type, msg_id)\n        if not url:\n            logger.warning(f\"[QQ] Cannot send reply for event_type: {event_type}\")\n            return\n        body[\"content\"] = content\n        body[\"msg_type\"] = 0\n        self._post_message(url, body, event_type)\n\n    # ------------------------------------------------------------------\n    # Rich media upload & send (image / video / file)\n    # ------------------------------------------------------------------\n\n    def _upload_rich_media(self, file_url: str, file_type: int, msg: QQMessage,\n                           event_type: str) -> str:\n        \"\"\"\n        Upload media via QQ rich media API and return file_info.\n        For group: POST /v2/groups/{group_openid}/files\n        For c2c:   POST /v2/users/{openid}/files\n        \"\"\"\n        if event_type == \"GROUP_AT_MESSAGE_CREATE\":\n            group_openid = msg._rawmsg.get(\"group_openid\", \"\")\n            upload_url = f\"{QQ_API_BASE}/v2/groups/{group_openid}/files\"\n        elif event_type == \"C2C_MESSAGE_CREATE\":\n            user_openid = (msg._rawmsg.get(\"author\", {}).get(\"user_openid\", \"\")\n                           or msg.from_user_id)\n            upload_url = f\"{QQ_API_BASE}/v2/users/{user_openid}/files\"\n        else:\n            logger.warning(f\"[QQ] Rich media upload not supported for event_type: {event_type}\")\n            return \"\"\n\n        upload_body = {\n            \"file_type\": file_type,\n            \"url\": file_url,\n            \"srv_send_msg\": False,\n        }\n\n        try:\n            resp = requests.post(\n                upload_url, json=upload_body,\n                headers=self._get_auth_headers(), timeout=30,\n            )\n            if resp.status_code in (200, 201):\n                data = resp.json()\n                file_info = data.get(\"file_info\", \"\")\n                logger.info(f\"[QQ] Rich media uploaded: file_type={file_type}, \"\n                            f\"file_uuid={data.get('file_uuid', '')}\")\n                return file_info\n            else:\n                logger.error(f\"[QQ] 
Rich media upload failed: status={resp.status_code}, \"\n                             f\"body={resp.text}\")\n                return \"\"\n        except Exception as e:\n            logger.error(f\"[QQ] Rich media upload error: {e}\")\n            return \"\"\n\n    def _upload_rich_media_base64(self, file_path: str, file_type: int, msg: QQMessage,\n                                  event_type: str) -> str:\n        \"\"\"Upload local file via base64 file_data field.\"\"\"\n        if event_type == \"GROUP_AT_MESSAGE_CREATE\":\n            group_openid = msg._rawmsg.get(\"group_openid\", \"\")\n            upload_url = f\"{QQ_API_BASE}/v2/groups/{group_openid}/files\"\n        elif event_type == \"C2C_MESSAGE_CREATE\":\n            user_openid = (msg._rawmsg.get(\"author\", {}).get(\"user_openid\", \"\")\n                           or msg.from_user_id)\n            upload_url = f\"{QQ_API_BASE}/v2/users/{user_openid}/files\"\n        else:\n            logger.warning(f\"[QQ] Rich media upload not supported for event_type: {event_type}\")\n            return \"\"\n\n        try:\n            with open(file_path, \"rb\") as f:\n                file_data = base64.b64encode(f.read()).decode(\"utf-8\")\n        except Exception as e:\n            logger.error(f\"[QQ] Failed to read file for upload: {e}\")\n            return \"\"\n\n        upload_body = {\n            \"file_type\": file_type,\n            \"file_data\": file_data,\n            \"srv_send_msg\": False,\n        }\n\n        try:\n            resp = requests.post(\n                upload_url, json=upload_body,\n                headers=self._get_auth_headers(), timeout=30,\n            )\n            if resp.status_code in (200, 201):\n                data = resp.json()\n                file_info = data.get(\"file_info\", \"\")\n                logger.info(f\"[QQ] Rich media uploaded (base64): file_type={file_type}, \"\n                            f\"file_uuid={data.get('file_uuid', '')}\")\n                return file_info\n            else:\n                logger.error(f\"[QQ] Rich media upload (base64) failed: status={resp.status_code}, \"\n                             f\"body={resp.text}\")\n                return \"\"\n        except Exception as e:\n            logger.error(f\"[QQ] Rich media upload (base64) error: {e}\")\n            return \"\"\n\n    def _send_media_msg(self, file_info: str, msg: QQMessage, event_type: str, msg_id: str):\n        \"\"\"Send a message with msg_type=7 (rich media) using file_info.\"\"\"\n        url, body, _, _ = self._build_msg_url_and_base_body(msg, event_type, msg_id)\n        if not url:\n            return\n        body[\"msg_type\"] = 7\n        body[\"media\"] = {\"file_info\": file_info}\n        self._post_message(url, body, event_type)\n\n    def _send_image(self, img_path_or_url: str, msg: QQMessage, event_type: str, msg_id: str):\n        \"\"\"Send image reply. 
Supports URL and local file path.\"\"\"\n        # Rich media (msg_type=7) is only wired up for group/C2C chats here; guild channel/DM replies fall back to plain text.\n        if event_type not in (\"GROUP_AT_MESSAGE_CREATE\", \"C2C_MESSAGE_CREATE\"):\n            self._send_text(str(img_path_or_url), msg, event_type, msg_id)\n            return\n\n        if img_path_or_url.startswith(\"file://\"):\n            img_path_or_url = img_path_or_url[7:]\n\n        if img_path_or_url.startswith((\"http://\", \"https://\")):\n            file_info = self._upload_rich_media(\n                img_path_or_url, QQ_FILE_TYPE_IMAGE, msg, event_type)\n        elif os.path.exists(img_path_or_url):\n            file_info = self._upload_rich_media_base64(\n                img_path_or_url, QQ_FILE_TYPE_IMAGE, msg, event_type)\n        else:\n            logger.error(f\"[QQ] Image not found: {img_path_or_url}\")\n            self._send_text(\"[Image send failed]\", msg, event_type, msg_id)\n            return\n\n        if file_info:\n            self._send_media_msg(file_info, msg, event_type, msg_id)\n        else:\n            self._send_text(\"[Image upload failed]\", msg, event_type, msg_id)\n\n    def _send_file(self, file_path_or_url: str, msg: QQMessage, event_type: str, msg_id: str):\n        \"\"\"Send file reply.\"\"\"\n        if event_type not in (\"GROUP_AT_MESSAGE_CREATE\", \"C2C_MESSAGE_CREATE\"):\n            self._send_text(str(file_path_or_url), msg, event_type, msg_id)\n            return\n\n        if file_path_or_url.startswith(\"file://\"):\n            file_path_or_url = file_path_or_url[7:]\n\n        if file_path_or_url.startswith((\"http://\", \"https://\")):\n            file_info = self._upload_rich_media(\n                file_path_or_url, QQ_FILE_TYPE_FILE, msg, event_type)\n        elif os.path.exists(file_path_or_url):\n            file_info = self._upload_rich_media_base64(\n                file_path_or_url, QQ_FILE_TYPE_FILE, msg, event_type)\n        else:\n            logger.error(f\"[QQ] File not found: {file_path_or_url}\")\n            self._send_text(\"[File send failed]\", msg, event_type, msg_id)\n            return\n\n        if file_info:\n            self._send_media_msg(file_info, msg, event_type, msg_id)\n        else:\n            self._send_text(\"[File upload failed]\", msg, event_type, msg_id)\n\n    def _send_media(self, path_or_url: str, msg: QQMessage, event_type: str,\n                    msg_id: str, file_type: int):\n        \"\"\"Generic media send for video/voice etc.\"\"\"\n        if event_type not in (\"GROUP_AT_MESSAGE_CREATE\", \"C2C_MESSAGE_CREATE\"):\n            self._send_text(str(path_or_url), msg, event_type, msg_id)\n            return\n\n        if path_or_url.startswith(\"file://\"):\n            path_or_url = path_or_url[7:]\n\n        if path_or_url.startswith((\"http://\", \"https://\")):\n            file_info = self._upload_rich_media(path_or_url, file_type, msg, event_type)\n        elif os.path.exists(path_or_url):\n            file_info = self._upload_rich_media_base64(path_or_url, file_type, msg, event_type)\n        else:\n            logger.error(f\"[QQ] Media not found: {path_or_url}\")\n            return\n\n        if file_info:\n            self._send_media_msg(file_info, msg, event_type, msg_id)\n        else:\n            logger.error(f\"[QQ] Media upload failed: {path_or_url}\")\n
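\n\n# Rich-media reply flow (summary of the helpers above):\n#   1. upload: POST /v2/groups/{group_openid}/files or /v2/users/{openid}/files -> file_info\n#   2. send:   POST the reply message with msg_type=7 and media={\"file_info\": ...}\n"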
  },
  {
    "path": "channel/qq/qq_message.py",
    "content": "import os\nimport requests\n\nfrom bridge.context import ContextType\nfrom channel.chat_message import ChatMessage\nfrom common.log import logger\nfrom common.utils import expand_path\nfrom config import conf\n\n\ndef _get_tmp_dir() -> str:\n    \"\"\"Return the workspace tmp directory (absolute path), creating it if needed.\"\"\"\n    ws_root = expand_path(conf().get(\"agent_workspace\", \"~/cow\"))\n    tmp_dir = os.path.join(ws_root, \"tmp\")\n    os.makedirs(tmp_dir, exist_ok=True)\n    return tmp_dir\n\n\nclass QQMessage(ChatMessage):\n    \"\"\"Message wrapper for QQ Bot (websocket long-connection mode).\"\"\"\n\n    def __init__(self, event_data: dict, event_type: str):\n        super().__init__(event_data)\n        self.msg_id = event_data.get(\"id\", \"\")\n        self.create_time = event_data.get(\"timestamp\", \"\")\n        self.is_group = event_type in (\"GROUP_AT_MESSAGE_CREATE\",)\n        self.event_type = event_type\n\n        author = event_data.get(\"author\", {})\n        from_user_id = author.get(\"member_openid\", \"\") or author.get(\"id\", \"\")\n        group_openid = event_data.get(\"group_openid\", \"\")\n\n        content = event_data.get(\"content\", \"\").strip()\n\n        attachments = event_data.get(\"attachments\", [])\n        has_image = any(\n            a.get(\"content_type\", \"\").startswith(\"image/\") for a in attachments\n        ) if attachments else False\n\n        if has_image and not content:\n            self.ctype = ContextType.IMAGE\n            img_attachment = next(\n                a for a in attachments if a.get(\"content_type\", \"\").startswith(\"image/\")\n            )\n            img_url = img_attachment.get(\"url\", \"\")\n            if img_url and not img_url.startswith(\"http\"):\n                img_url = \"https://\" + img_url\n            tmp_dir = _get_tmp_dir()\n            image_path = os.path.join(tmp_dir, f\"qq_{self.msg_id}.png\")\n            try:\n                resp = requests.get(img_url, timeout=30)\n                resp.raise_for_status()\n                with open(image_path, \"wb\") as f:\n                    f.write(resp.content)\n                self.content = image_path\n                self.image_path = image_path\n                logger.info(f\"[QQ] Image downloaded: {image_path}\")\n            except Exception as e:\n                logger.error(f\"[QQ] Failed to download image: {e}\")\n                self.content = \"[Image download failed]\"\n                self.image_path = None\n        elif has_image and content:\n            self.ctype = ContextType.TEXT\n            image_paths = []\n            tmp_dir = _get_tmp_dir()\n            for idx, att in enumerate(attachments):\n                if not att.get(\"content_type\", \"\").startswith(\"image/\"):\n                    continue\n                img_url = att.get(\"url\", \"\")\n                if img_url and not img_url.startswith(\"http\"):\n                    img_url = \"https://\" + img_url\n                img_path = os.path.join(tmp_dir, f\"qq_{self.msg_id}_{idx}.png\")\n                try:\n                    resp = requests.get(img_url, timeout=30)\n                    resp.raise_for_status()\n                    with open(img_path, \"wb\") as f:\n                        f.write(resp.content)\n                    image_paths.append(img_path)\n                except Exception as e:\n                    logger.error(f\"[QQ] Failed to download mixed image: {e}\")\n            content_parts = [content]\n            
for p in image_paths:\n                content_parts.append(f\"[图片: {p}]\")\n            self.content = \"\\n\".join(content_parts)\n        else:\n            self.ctype = ContextType.TEXT\n            self.content = content\n\n        if event_type == \"GROUP_AT_MESSAGE_CREATE\":\n            self.from_user_id = from_user_id\n            self.to_user_id = \"\"\n            self.other_user_id = group_openid\n            self.actual_user_id = from_user_id\n            self.actual_user_nickname = from_user_id\n\n        elif event_type == \"C2C_MESSAGE_CREATE\":\n            user_openid = author.get(\"user_openid\", \"\") or from_user_id\n            self.from_user_id = user_openid\n            self.to_user_id = \"\"\n            self.other_user_id = user_openid\n            self.actual_user_id = user_openid\n            self.actual_user_nickname = user_openid\n\n        elif event_type == \"AT_MESSAGE_CREATE\":\n            self.from_user_id = from_user_id\n            self.to_user_id = \"\"\n            channel_id = event_data.get(\"channel_id\", \"\")\n            self.other_user_id = channel_id\n            self.actual_user_id = from_user_id\n            self.actual_user_nickname = author.get(\"username\", from_user_id)\n\n        elif event_type == \"DIRECT_MESSAGE_CREATE\":\n            self.from_user_id = from_user_id\n            self.to_user_id = \"\"\n            guild_id = event_data.get(\"guild_id\", \"\")\n            self.other_user_id = f\"dm_{guild_id}_{from_user_id}\"\n            self.actual_user_id = from_user_id\n            self.actual_user_nickname = author.get(\"username\", from_user_id)\n\n        else:\n            raise NotImplementedError(f\"Unsupported QQ event type: {event_type}\")\n\n        logger.debug(f\"[QQ] Message parsed: type={event_type}, ctype={self.ctype}, \"\n                     f\"from={self.from_user_id}, content_len={len(self.content)}\")\n
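\n\n# Illustrative GROUP_AT_MESSAGE_CREATE payload (only the fields read above; hypothetical values):\n#   {\"id\": \"...\", \"timestamp\": \"...\", \"content\": \" hello\", \"group_openid\": \"...\",\n#    \"author\": {\"member_openid\": \"...\"},\n#    \"attachments\": [{\"content_type\": \"image/jpeg\", \"url\": \"...\"}]}\n"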
  },
  {
    "path": "channel/terminal/terminal_channel.py",
    "content": "import sys\n\nfrom bridge.context import *\nfrom bridge.reply import Reply, ReplyType\nfrom channel.chat_channel import ChatChannel, check_prefix\nfrom channel.chat_message import ChatMessage\nfrom common.log import logger\nfrom config import conf\n\n\nclass TerminalMessage(ChatMessage):\n    def __init__(\n        self,\n        msg_id,\n        content,\n        ctype=ContextType.TEXT,\n        from_user_id=\"User\",\n        to_user_id=\"Chatgpt\",\n        other_user_id=\"Chatgpt\",\n    ):\n        self.msg_id = msg_id\n        self.ctype = ctype\n        self.content = content\n        self.from_user_id = from_user_id\n        self.to_user_id = to_user_id\n        self.other_user_id = other_user_id\n\n\nclass TerminalChannel(ChatChannel):\n    NOT_SUPPORT_REPLYTYPE = [ReplyType.VOICE]\n\n    def send(self, reply: Reply, context: Context):\n        print(\"\\nBot:\")\n        if reply.type == ReplyType.IMAGE:\n            from PIL import Image\n\n            image_storage = reply.content\n            image_storage.seek(0)\n            img = Image.open(image_storage)\n            print(\"<IMAGE>\")\n            img.show()\n        elif reply.type == ReplyType.IMAGE_URL:  # 从网络下载图片\n            import io\n\n            import requests\n            from PIL import Image\n\n            img_url = reply.content\n            pic_res = requests.get(img_url, stream=True)\n            image_storage = io.BytesIO()\n            for block in pic_res.iter_content(1024):\n                image_storage.write(block)\n            image_storage.seek(0)\n            img = Image.open(image_storage)\n            print(img_url)\n            img.show()\n        else:\n            print(reply.content)\n        print(\"\\nUser:\", end=\"\")\n        sys.stdout.flush()\n        return\n\n    def startup(self):\n        context = Context()\n        logger.setLevel(\"WARN\")\n        print(\"\\nPlease input your question:\\nUser:\", end=\"\")\n        sys.stdout.flush()\n        msg_id = 0\n        while True:\n            try:\n                prompt = self.get_input()\n            except KeyboardInterrupt:\n                print(\"\\nExiting...\")\n                sys.exit()\n            msg_id += 1\n            trigger_prefixs = conf().get(\"single_chat_prefix\", [\"\"])\n            if check_prefix(prompt, trigger_prefixs) is None:\n                prompt = trigger_prefixs[0] + prompt  # 给没触发的消息加上触发前缀\n\n            context = self._compose_context(ContextType.TEXT, prompt, msg=TerminalMessage(msg_id, prompt))\n            context[\"isgroup\"] = False\n            if context:\n                self.produce(context)\n            else:\n                raise Exception(\"context is None\")\n\n    def get_input(self):\n        \"\"\"\n        Multi-line input function\n        \"\"\"\n        sys.stdout.flush()\n        line = input()\n        return line\n"
  },
  {
    "path": "channel/web/README.md",
    "content": "# Web Channel\n\n提供了一个默认的AI对话页面，可展示文本、图片等消息交互，支持markdown语法渲染，兼容插件执行。\n\n# 使用说明\n\n - 在 `config.json` 配置文件中的 `channel_type` 字段填入 `web`\n - 程序运行后将监听9899端口，浏览器访问 http://localhost:9899/chat 即可使用\n - 监听端口可以在配置文件 `web_port` 中自定义\n - 对于Docker运行方式，如果需要外部访问，需要在 `docker-compose.yml` 中通过 ports配置将端口监听映射到宿主机\n"
  },
  {
    "path": "channel/web/chat.html",
    "content": "<!DOCTYPE html>\n<html lang=\"zh\" class=\"\">\n<head>\n    <meta charset=\"UTF-8\">\n    <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n    <title>CowAgent Console</title>\n    <link rel=\"icon\" href=\"assets/favicon.ico\" type=\"image/x-icon\">\n    <link rel=\"stylesheet\" href=\"https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.4.0/css/all.min.css\">\n    <link rel=\"preconnect\" href=\"https://fonts.googleapis.com\">\n    <link rel=\"preconnect\" href=\"https://fonts.gstatic.com\" crossorigin>\n    <link href=\"https://fonts.googleapis.com/css2?family=Inter:wght@300;400;500;600;700&display=swap\" rel=\"stylesheet\">\n    <script src=\"https://cdn.tailwindcss.com\"></script>\n    <script src=\"https://cdn.jsdelivr.net/npm/markdown-it@13.0.1/dist/markdown-it.min.js\"></script>\n    <link id=\"hljs-light\" rel=\"stylesheet\" href=\"https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/github.min.css\">\n    <link id=\"hljs-dark\" rel=\"stylesheet\" href=\"https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/github-dark.min.css\" disabled>\n    <script src=\"https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js\"></script>\n    <script src=\"https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/languages/python.min.js\"></script>\n    <script src=\"https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/languages/javascript.min.js\"></script>\n    <script src=\"https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/languages/java.min.js\"></script>\n    <script src=\"https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/languages/go.min.js\"></script>\n    <script src=\"https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/languages/bash.min.js\"></script>\n    <script>\n    tailwind.config = {\n        darkMode: 'class',\n        theme: {\n            extend: {\n                fontFamily: {\n                    sans: ['Inter', 'system-ui', '-apple-system', 'sans-serif'],\n                    mono: ['\"JetBrains Mono\"', '\"Fira Code\"', 'Consolas', 'monospace'],\n                },\n                colors: {\n                    primary: {\n                        50: '#EDFDF3', 100: '#D4FAE2', 200: '#ABF4C7', 300: '#74E9A4',\n                        400: '#4ABE6E', 500: '#35A85B', 600: '#228547', 700: '#1C6B3B',\n                        800: '#1A5532', 900: '#16462A',\n                    }\n                },\n                animation: {\n                    'pulse-dot': 'pulseDot 1.4s infinite ease-in-out both',\n                }\n            }\n        }\n    }\n    </script>\n    <link rel=\"stylesheet\" href=\"assets/css/console.css\">\n    <!-- Apply theme/lang before first paint to avoid flash of unstyled content.\n         This runs synchronously in <head> so the correct class is on <html>\n         before any CSS or body rendering occurs. 
-->\n    <script>\n    (function() {\n        var theme = localStorage.getItem('cow_theme') || 'dark';\n        if (theme === 'dark') document.documentElement.classList.add('dark');\n    })();\n    </script>\n</head>\n<body class=\"h-screen overflow-hidden bg-gray-50 dark:bg-[#111111] text-slate-800 dark:text-slate-200 font-sans\">\n    <div id=\"app\" class=\"flex h-screen\">\n\n        <!-- ================================================================ -->\n        <!-- SIDEBAR                                                          -->\n        <!-- ================================================================ -->\n        <aside id=\"sidebar\" class=\"fixed inset-y-0 left-0 z-50 w-64 bg-[#0A0A0A] text-neutral-400 flex flex-col\n                                    transform -translate-x-full lg:relative lg:translate-x-0\n                                    transition-transform duration-300 ease-in-out\">\n            <!-- Logo -->\n            <div class=\"flex items-center gap-3 px-5 h-14 border-b border-white/10 flex-shrink-0\">\n                <img src=\"assets/logo.jpg\" alt=\"CowAgent\" class=\"w-8 h-8 rounded-lg flex-shrink-0\">\n                <div class=\"flex flex-col min-w-0\">\n                    <span class=\"text-white font-semibold text-sm truncate\">CowAgent</span>\n                    <span class=\"text-neutral-500 text-xs\" data-i18n=\"console\">Console</span>\n                </div>\n            </div>\n\n            <!-- Navigation -->\n            <nav class=\"flex-1 overflow-y-auto py-4 px-3 space-y-1\">\n                <!-- Chat Group -->\n                <div class=\"menu-group open\" data-group=\"chat\">\n                    <button class=\"w-full flex items-center gap-2 px-3 py-2 text-xs font-semibold uppercase tracking-wider text-neutral-500 hover:text-neutral-300 cursor-pointer transition-colors duration-150\">\n                        <i class=\"fas fa-chevron-right text-[10px] chevron\"></i>\n                        <span data-i18n=\"nav_chat\">Chat</span>\n                    </button>\n                    <div class=\"menu-group-items pl-2\">\n                        <a class=\"sidebar-item active flex items-center gap-3 px-3 py-2 rounded-lg cursor-pointer transition-all duration-150 hover:bg-white/5 hover:text-neutral-200 text-[14px]\"\n                           data-view=\"chat\">\n                            <i class=\"fas fa-message item-icon text-xs w-5 text-center\"></i>\n                            <span data-i18n=\"menu_chat\">Chat</span>\n                        </a>\n                    </div>\n                </div>\n\n                <!-- Management Group -->\n                <div class=\"menu-group open\" data-group=\"manage\">\n                    <button class=\"w-full flex items-center gap-2 px-3 py-2 text-xs font-semibold uppercase tracking-wider text-neutral-500 hover:text-neutral-300 cursor-pointer transition-colors duration-150\">\n                        <i class=\"fas fa-chevron-right text-[10px] chevron\"></i>\n                        <span data-i18n=\"nav_manage\">Management</span>\n                    </button>\n                    <div class=\"menu-group-items pl-2\">\n                        <a class=\"sidebar-item flex items-center gap-3 px-3 py-2 rounded-lg cursor-pointer transition-all duration-150 hover:bg-white/5 hover:text-neutral-200 text-[14px]\"\n                           data-view=\"config\">\n                            <i class=\"fas fa-sliders item-icon text-xs w-5 text-center\"></i>\n                     
       <span data-i18n=\"menu_config\">Config</span>\n                        </a>\n                        <a class=\"sidebar-item flex items-center gap-3 px-3 py-2 rounded-lg cursor-pointer transition-all duration-150 hover:bg-white/5 hover:text-neutral-200 text-[14px]\"\n                           data-view=\"skills\">\n                            <i class=\"fas fa-bolt item-icon text-xs w-5 text-center\"></i>\n                            <span data-i18n=\"menu_skills\">Skills</span>\n                        </a>\n                        <a class=\"sidebar-item flex items-center gap-3 px-3 py-2 rounded-lg cursor-pointer transition-all duration-150 hover:bg-white/5 hover:text-neutral-200 text-[14px]\"\n                           data-view=\"memory\">\n                            <i class=\"fas fa-brain item-icon text-xs w-5 text-center\"></i>\n                            <span data-i18n=\"menu_memory\">Memory</span>\n                        </a>\n                        <a class=\"sidebar-item flex items-center gap-3 px-3 py-2 rounded-lg cursor-pointer transition-all duration-150 hover:bg-white/5 hover:text-neutral-200 text-[14px]\"\n                           data-view=\"channels\">\n                            <i class=\"fas fa-tower-broadcast item-icon text-xs w-5 text-center\"></i>\n                            <span data-i18n=\"menu_channels\">Channels</span>\n                        </a>\n                        <a class=\"sidebar-item flex items-center gap-3 px-3 py-2 rounded-lg cursor-pointer transition-all duration-150 hover:bg-white/5 hover:text-neutral-200 text-[14px]\"\n                           data-view=\"tasks\">\n                            <i class=\"fas fa-clock item-icon text-xs w-5 text-center\"></i>\n                            <span data-i18n=\"menu_tasks\">Tasks</span>\n                        </a>\n                    </div>\n                </div>\n\n                <!-- Monitor Group -->\n                <div class=\"menu-group open\" data-group=\"monitor\">\n                    <button class=\"w-full flex items-center gap-2 px-3 py-2 text-xs font-semibold uppercase tracking-wider text-neutral-500 hover:text-neutral-300 cursor-pointer transition-colors duration-150\">\n                        <i class=\"fas fa-chevron-right text-[10px] chevron\"></i>\n                        <span data-i18n=\"nav_monitor\">Monitor</span>\n                    </button>\n                    <div class=\"menu-group-items pl-2\">\n                        <a class=\"sidebar-item flex items-center gap-3 px-3 py-2 rounded-lg cursor-pointer transition-all duration-150 hover:bg-white/5 hover:text-neutral-200 text-[14px]\"\n                           data-view=\"logs\">\n                            <i class=\"fas fa-terminal item-icon text-xs w-5 text-center\"></i>\n                            <span data-i18n=\"menu_logs\">Logs</span>\n                        </a>\n                    </div>\n                </div>\n            </nav>\n\n            <!-- Sidebar Footer -->\n            <div class=\"px-4 py-3 border-t border-white/10 flex-shrink-0\">\n                <div class=\"flex items-center gap-2 text-xs text-neutral-600\">\n                    <i class=\"fas fa-circle text-[6px] text-primary-400\"></i>\n                    <a id=\"sidebar-version\"\n                       href=\"https://github.com/zhayujie/chatgpt-on-wechat/releases\"\n                       target=\"_blank\" rel=\"noopener noreferrer\"\n                       class=\"hover:text-primary-400 transition-colors 
duration-150 cursor-pointer\"></a>\n                </div>\n            </div>\n        </aside>\n\n        <!-- Mobile Overlay -->\n        <div id=\"sidebar-overlay\" class=\"fixed inset-0 bg-black/50 z-40 hidden lg:hidden cursor-pointer\" onclick=\"toggleSidebar()\"></div>\n\n        <!-- ================================================================ -->\n        <!-- MAIN CONTENT                                                     -->\n        <!-- ================================================================ -->\n        <div id=\"main-content\" class=\"flex-1 flex flex-col min-w-0 h-screen\">\n            <!-- Top Header -->\n            <header class=\"h-14 flex items-center gap-3 px-4 border-b border-slate-200 dark:border-white/10 bg-white dark:bg-[#1A1A1A] flex-shrink-0 z-10\">\n                <!-- Mobile menu toggle -->\n                <button id=\"menu-toggle\" class=\"lg:hidden p-2 rounded-lg hover:bg-slate-100 dark:hover:bg-white/10 cursor-pointer transition-colors duration-150\"\n                        onclick=\"toggleSidebar()\">\n                    <i class=\"fas fa-bars text-slate-600 dark:text-slate-300\"></i>\n                </button>\n\n                <!-- Breadcrumb -->\n                <div class=\"flex items-center gap-2 text-sm min-w-0\">\n                    <span id=\"breadcrumb-group\" class=\"text-slate-400 dark:text-slate-500 truncate\" data-i18n=\"nav_chat\">Chat</span>\n                    <i class=\"fas fa-chevron-right text-[10px] text-slate-300 dark:text-slate-600\"></i>\n                    <span id=\"breadcrumb-page\" class=\"font-medium text-slate-700 dark:text-slate-200 truncate\" data-i18n=\"menu_chat\">Chat</span>\n                </div>\n\n                <div class=\"flex-1\"></div>\n\n                <!-- Language Toggle -->\n                <button id=\"lang-toggle\" class=\"flex items-center gap-1.5 px-3 py-1.5 rounded-lg text-sm font-medium\n                                                 text-slate-500 dark:text-slate-400 hover:bg-slate-100 dark:hover:bg-white/10\n                                                 cursor-pointer transition-colors duration-150\"\n                        onclick=\"toggleLanguage()\">\n                    <i class=\"fas fa-globe text-xs\"></i>\n                    <span id=\"lang-label\">EN</span>\n                </button>\n\n                <!-- Theme Toggle -->\n                <button id=\"theme-toggle\" class=\"p-2 rounded-lg text-slate-500 dark:text-slate-400\n                                                  hover:bg-slate-100 dark:hover:bg-white/10\n                                                  cursor-pointer transition-colors duration-150\"\n                        onclick=\"toggleTheme()\">\n                    <i id=\"theme-icon\" class=\"fas fa-moon\"></i>\n                </button>\n\n                <!-- Docs Link -->\n                <a href=\"https://docs.cowagent.ai\" target=\"_blank\" rel=\"noopener noreferrer\"\n                   class=\"p-2 rounded-lg text-slate-500 dark:text-slate-400 hover:bg-slate-100 dark:hover:bg-white/10\n                          cursor-pointer transition-colors duration-150\" title=\"Documentation\">\n                    <i class=\"fas fa-book text-base\"></i>\n                </a>\n\n                <!-- Website Link -->\n                <a href=\"https://cowagent.ai\" target=\"_blank\" rel=\"noopener noreferrer\"\n                   class=\"p-2 rounded-lg text-slate-500 dark:text-slate-400 hover:bg-slate-100 dark:hover:bg-white/10\n                     
     cursor-pointer transition-colors duration-150\" title=\"Website\">\n                    <i class=\"fas fa-home text-base\"></i>\n                </a>\n\n                <!-- GitHub Link -->\n                <a href=\"https://github.com/zhayujie/chatgpt-on-wechat\" target=\"_blank\" rel=\"noopener noreferrer\"\n                   class=\"p-2 rounded-lg text-slate-500 dark:text-slate-400 hover:bg-slate-100 dark:hover:bg-white/10\n                          cursor-pointer transition-colors duration-150\" title=\"GitHub\">\n                    <i class=\"fab fa-github text-lg\"></i>\n                </a>\n            </header>\n\n            <!-- Content Area -->\n            <div id=\"content-area\" class=\"flex-1 overflow-hidden\">\n\n                <!-- ====================================================== -->\n                <!-- VIEW: Chat                                              -->\n                <!-- ====================================================== -->\n                <div id=\"view-chat\" class=\"view active\">\n                    <!-- Messages -->\n                    <div id=\"chat-messages\" class=\"flex-1 overflow-y-auto\">\n                        <!-- Welcome Screen -->\n                        <div id=\"welcome-screen\" class=\"flex flex-col items-center justify-center h-full px-6 py-12\">\n                            <img src=\"assets/logo.jpg\" alt=\"CowAgent\" class=\"w-16 h-16 rounded-2xl mb-6 shadow-lg shadow-primary-500/20\">\n                            <h1 id=\"welcome-title\" class=\"text-2xl font-bold text-slate-800 dark:text-slate-100 mb-3\">CowAgent</h1>\n                            <p id=\"welcome-subtitle\" class=\"text-slate-500 dark:text-slate-400 text-center max-w-lg mb-10 leading-relaxed\"\n                               data-i18n-html=\"welcome_subtitle\">I can help you answer questions, manage your computer, create and execute skills,<br>and keep growing through long-term memory.</p>\n\n                            <div class=\"grid grid-cols-1 sm:grid-cols-3 gap-4 w-full max-w-2xl\">\n                                <div class=\"example-card group bg-white dark:bg-[#1A1A1A] border border-slate-200 dark:border-white/10 rounded-xl p-4\n                                            cursor-pointer hover:border-primary-300 dark:hover:border-primary-600 hover:shadow-md transition-all duration-200\">\n                                    <div class=\"flex items-center gap-2 mb-2\">\n                                        <div class=\"w-7 h-7 rounded-lg bg-blue-50 dark:bg-blue-900/30 flex items-center justify-center\">\n                                            <i class=\"fas fa-folder-open text-blue-500 text-xs\"></i>\n                                        </div>\n                                        <span class=\"font-medium text-sm text-slate-700 dark:text-slate-200\" data-i18n=\"example_sys_title\">System</span>\n                                    </div>\n                                    <p class=\"text-sm text-slate-500 dark:text-slate-400 leading-relaxed\" data-i18n=\"example_sys_text\">Show me the files in the workspace</p>\n                                </div>\n                                <div class=\"example-card group bg-white dark:bg-[#1A1A1A] border border-slate-200 dark:border-white/10 rounded-xl p-4\n                                            cursor-pointer hover:border-primary-300 dark:hover:border-primary-600 hover:shadow-md transition-all duration-200\">\n                                    <div class=\"flex 
items-center gap-2 mb-2\">\n                                        <div class=\"w-7 h-7 rounded-lg bg-amber-50 dark:bg-amber-900/30 flex items-center justify-center\">\n                                            <i class=\"fas fa-clock text-amber-500 text-xs\"></i>\n                                        </div>\n                                        <span class=\"font-medium text-sm text-slate-700 dark:text-slate-200\" data-i18n=\"example_task_title\">Skills</span>\n                                    </div>\n                                    <p class=\"text-sm text-slate-500 dark:text-slate-400 leading-relaxed\" data-i18n=\"example_task_text\">Show current tools and skills</p>\n                                </div>\n                                <div class=\"example-card group bg-white dark:bg-[#1A1A1A] border border-slate-200 dark:border-white/10 rounded-xl p-4\n                                            cursor-pointer hover:border-primary-300 dark:hover:border-primary-600 hover:shadow-md transition-all duration-200\">\n                                    <div class=\"flex items-center gap-2 mb-2\">\n                                        <div class=\"w-7 h-7 rounded-lg bg-emerald-50 dark:bg-emerald-900/30 flex items-center justify-center\">\n                                            <i class=\"fas fa-code text-emerald-500 text-xs\"></i>\n                                        </div>\n                                        <span class=\"font-medium text-sm text-slate-700 dark:text-slate-200\" data-i18n=\"example_code_title\">Coding</span>\n                                    </div>\n                                    <p class=\"text-sm text-slate-500 dark:text-slate-400 leading-relaxed\" data-i18n=\"example_code_text\">Write a Python web scraper script</p>\n                                </div>\n                            </div>\n                        </div>\n                    </div>\n\n                    <!-- Chat Input -->\n                    <div class=\"flex-shrink-0 border-t border-slate-200 dark:border-white/10 bg-white dark:bg-[#1A1A1A] px-4 py-3\">\n                        <div class=\"max-w-3xl mx-auto\">\n                            <!-- Attachment preview bar -->\n                            <div id=\"attachment-preview\" class=\"attachment-preview hidden\"></div>\n                            <div class=\"flex items-center gap-2\">\n                                <div class=\"flex items-center flex-shrink-0\">\n                                    <button id=\"new-chat-btn\" class=\"w-9 h-10 flex items-center justify-center rounded-lg\n                                                                     text-slate-400 hover:text-primary-500 hover:bg-primary-50 dark:hover:bg-primary-900/20\n                                                                     cursor-pointer transition-colors duration-150\" title=\"New Chat\"\n                                            onclick=\"newChat()\">\n                                        <i class=\"fas fa-plus text-base\"></i>\n                                    </button>\n                                    <button id=\"attach-btn\" class=\"w-9 h-10 flex items-center justify-center rounded-lg\n                                                                   text-slate-400 hover:text-primary-500 hover:bg-primary-50 dark:hover:bg-primary-900/20\n                                                                   cursor-pointer transition-colors duration-150\"\n                                 
           title=\"Attach file\" onclick=\"document.getElementById('file-input').click()\">\n                                        <i class=\"fas fa-paperclip text-base\"></i>\n                                    </button>\n                                </div>\n                                <input type=\"file\" id=\"file-input\" class=\"hidden\" multiple\n                                       accept=\"image/*,.pdf,.doc,.docx,.xls,.xlsx,.ppt,.pptx,.txt,.csv,.json,.xml,.zip,.rar,.7z,.py,.js,.ts,.java,.c,.cpp,.go,.rs,.md\">\n                                <textarea id=\"chat-input\"\n                                          class=\"flex-1 min-w-0 px-4 py-[10px] rounded-xl border border-slate-200 dark:border-slate-600\n                                                 bg-slate-50 dark:bg-white/5 text-slate-800 dark:text-slate-100\n                                                 placeholder:text-slate-400 dark:placeholder:text-slate-500\n                                                 focus:outline-none focus:ring-0 focus:border-primary-600\n                                                 text-sm leading-relaxed\"\n                                          rows=\"1\"\n                                          data-i18n-placeholder=\"input_placeholder\"\n                                          placeholder=\"Type a message...\"></textarea>\n                                <button id=\"send-btn\"\n                                        class=\"flex-shrink-0 w-10 h-10 flex items-center justify-center rounded-lg\n                                               bg-primary-400 text-white hover:bg-primary-500\n                                               disabled:bg-slate-300 dark:disabled:bg-slate-600\n                                               disabled:cursor-not-allowed cursor-pointer transition-colors duration-150\"\n                                        disabled onclick=\"sendMessage()\">\n                                    <i class=\"fas fa-paper-plane text-sm\"></i>\n                                </button>\n                            </div>\n                        </div>\n                    </div>\n                </div>\n\n                <!-- ====================================================== -->\n                <!-- VIEW: Config                                            -->\n                <!-- ====================================================== -->\n                <div id=\"view-config\" class=\"view\">\n                    <div class=\"flex-1 overflow-y-auto p-6\">\n                        <div class=\"max-w-4xl mx-auto\">\n                            <div class=\"flex items-center justify-between mb-6\">\n                                <div>\n                                    <h2 class=\"text-xl font-bold text-slate-800 dark:text-slate-100\" data-i18n=\"config_title\">Configuration</h2>\n                                    <p class=\"text-sm text-slate-500 dark:text-slate-400 mt-1\" data-i18n=\"config_desc\">Manage model and agent settings</p>\n                                </div>\n                            </div>\n                            <div class=\"grid gap-6\">\n\n                                <!-- Model Config Card -->\n                                <div class=\"bg-white dark:bg-[#1A1A1A] rounded-xl border border-slate-200 dark:border-white/10 p-6\">\n                                    <div class=\"flex items-center gap-3 mb-5\">\n                                        <div class=\"w-9 h-9 rounded-lg bg-primary-50 
dark:bg-primary-900/30 flex items-center justify-center\">\n                                            <i class=\"fas fa-microchip text-primary-500 text-sm\"></i>\n                                        </div>\n                                        <h3 class=\"font-semibold text-slate-800 dark:text-slate-100\" data-i18n=\"config_model\">Model Configuration</h3>\n                                    </div>\n                                    <div class=\"space-y-5\">\n                                        <!-- Provider -->\n                                        <div>\n                                            <label class=\"block text-sm font-medium text-slate-600 dark:text-slate-400 mb-1.5\" data-i18n=\"config_provider\">Provider</label>\n                                            <div id=\"cfg-provider\" class=\"cfg-dropdown\" tabindex=\"0\">\n                                                <div class=\"cfg-dropdown-selected\">\n                                                    <span class=\"cfg-dropdown-text\">--</span>\n                                                    <i class=\"fas fa-chevron-down cfg-dropdown-arrow\"></i>\n                                                </div>\n                                                <div class=\"cfg-dropdown-menu\"></div>\n                                            </div>\n                                        </div>\n                                        <!-- Model -->\n                                        <div>\n                                            <label class=\"block text-sm font-medium text-slate-600 dark:text-slate-400 mb-1.5\" data-i18n=\"config_model_name\">Model</label>\n                                            <div id=\"cfg-model-select\" class=\"cfg-dropdown\" tabindex=\"0\">\n                                                <div class=\"cfg-dropdown-selected\">\n                                                    <span class=\"cfg-dropdown-text\">--</span>\n                                                    <i class=\"fas fa-chevron-down cfg-dropdown-arrow\"></i>\n                                                </div>\n                                                <div class=\"cfg-dropdown-menu\"></div>\n                                            </div>\n                                            <div id=\"cfg-model-custom-wrap\" class=\"mt-2 hidden\">\n                                                <input id=\"cfg-model-custom\" type=\"text\"\n                                                       class=\"w-full px-3 py-2 rounded-lg border border-slate-200 dark:border-slate-600\n                                                              bg-slate-50 dark:bg-white/5 text-sm text-slate-800 dark:text-slate-100\n                                                              focus:outline-none focus:border-primary-500 font-mono transition-colors\"\n                                                       data-i18n-placeholder=\"config_custom_model_hint\" placeholder=\"Enter custom model name\">\n                                            </div>\n                                        </div>\n                                        <!-- API Key -->\n                                        <div id=\"cfg-api-key-wrap\">\n                                            <label class=\"block text-sm font-medium text-slate-600 dark:text-slate-400 mb-1.5\">API Key</label>\n                                            <div class=\"relative\">\n                                                <input 
id=\"cfg-api-key\" type=\"text\" autocomplete=\"off\" data-1p-ignore data-lpignore=\"true\"\n                                                       class=\"w-full px-3 py-2 pr-10 rounded-lg border border-slate-200 dark:border-slate-600\n                                                              bg-slate-50 dark:bg-white/5 text-sm text-slate-800 dark:text-slate-100\n                                                              focus:outline-none focus:border-primary-500 font-mono transition-colors cfg-key-masked\"\n                                                       placeholder=\"sk-...\">\n                                                <button type=\"button\" id=\"cfg-api-key-toggle\"\n                                                        class=\"absolute right-2.5 top-1/2 -translate-y-1/2 text-slate-400 hover:text-slate-600\n                                                               dark:hover:text-slate-300 cursor-pointer transition-colors p-1\"\n                                                        onclick=\"toggleApiKeyVisibility()\">\n                                                    <i class=\"fas fa-eye text-xs\"></i>\n                                                </button>\n                                            </div>\n                                        </div>\n                                        <!-- API Base -->\n                                        <div id=\"cfg-api-base-wrap\" class=\"hidden\">\n                                            <label class=\"block text-sm font-medium text-slate-600 dark:text-slate-400 mb-1.5\">API Base</label>\n                                            <input id=\"cfg-api-base\" type=\"text\"\n                                                   class=\"w-full px-3 py-2 rounded-lg border border-slate-200 dark:border-slate-600\n                                                          bg-slate-50 dark:bg-white/5 text-sm text-slate-800 dark:text-slate-100\n                                                          focus:outline-none focus:border-primary-500 font-mono transition-colors\"\n                                                   placeholder=\"https://...\">\n                                        </div>\n                                        <!-- Save Model Button -->\n                                        <div class=\"flex items-center justify-end gap-3 pt-1\">\n                                            <span id=\"cfg-model-status\" class=\"text-xs text-primary-500 opacity-0 transition-opacity duration-300\"></span>\n                                            <button id=\"cfg-model-save\"\n                                                    class=\"px-4 py-2 rounded-lg bg-primary-500 hover:bg-primary-600 text-white text-sm font-medium\n                                                           cursor-pointer transition-colors duration-150 disabled:opacity-50 disabled:cursor-not-allowed\"\n                                                    onclick=\"saveModelConfig()\" data-i18n=\"config_save\">Save</button>\n                                        </div>\n                                    </div>\n                                </div>\n\n                                <!-- Agent Config Card -->\n                                <div class=\"bg-white dark:bg-[#1A1A1A] rounded-xl border border-slate-200 dark:border-white/10 p-6\">\n                                    <div class=\"flex items-center gap-3 mb-5\">\n                                        <div class=\"w-9 h-9 rounded-lg bg-emerald-50 
dark:bg-emerald-900/30 flex items-center justify-center\">\n                                            <i class=\"fas fa-robot text-emerald-500 text-sm\"></i>\n                                        </div>\n                                        <h3 class=\"font-semibold text-slate-800 dark:text-slate-100\" data-i18n=\"config_agent\">Agent Configuration</h3>\n                                    </div>\n                                    <div class=\"space-y-4\">\n                                        <div>\n                                            <label class=\"block text-sm font-medium text-slate-600 dark:text-slate-400 mb-1.5\" data-i18n=\"config_max_tokens\">Max Context Tokens</label>\n                                            <input id=\"cfg-max-tokens\" type=\"number\" min=\"1000\" max=\"200000\" step=\"1000\"\n                                                   class=\"w-full px-3 py-2 rounded-lg border border-slate-200 dark:border-slate-600\n                                                          bg-slate-50 dark:bg-white/5 text-sm text-slate-800 dark:text-slate-100\n                                                          focus:outline-none focus:border-primary-500 font-mono transition-colors\">\n                                        </div>\n                                        <div>\n                                            <label class=\"block text-sm font-medium text-slate-600 dark:text-slate-400 mb-1.5\" data-i18n=\"config_max_turns\">Max Context Turns</label>\n                                            <input id=\"cfg-max-turns\" type=\"number\" min=\"1\" max=\"100\" step=\"1\"\n                                                   class=\"w-full px-3 py-2 rounded-lg border border-slate-200 dark:border-slate-600\n                                                          bg-slate-50 dark:bg-white/5 text-sm text-slate-800 dark:text-slate-100\n                                                          focus:outline-none focus:border-primary-500 font-mono transition-colors\">\n                                        </div>\n                                        <div>\n                                            <label class=\"block text-sm font-medium text-slate-600 dark:text-slate-400 mb-1.5\" data-i18n=\"config_max_steps\">Max Steps</label>\n                                            <input id=\"cfg-max-steps\" type=\"number\" min=\"1\" max=\"50\" step=\"1\"\n                                                   class=\"w-full px-3 py-2 rounded-lg border border-slate-200 dark:border-slate-600\n                                                          bg-slate-50 dark:bg-white/5 text-sm text-slate-800 dark:text-slate-100\n                                                          focus:outline-none focus:border-primary-500 font-mono transition-colors\">\n                                        </div>\n                                        <div class=\"flex items-center justify-end gap-3 pt-1\">\n                                            <span id=\"cfg-agent-status\" class=\"text-xs text-primary-500 opacity-0 transition-opacity duration-300\"></span>\n                                            <button id=\"cfg-agent-save\"\n                                                    class=\"px-4 py-2 rounded-lg bg-primary-500 hover:bg-primary-600 text-white text-sm font-medium\n                                                           cursor-pointer transition-colors duration-150 disabled:opacity-50 disabled:cursor-not-allowed\"\n                                    
                onclick=\"saveAgentConfig()\" data-i18n=\"config_save\">Save</button>\n                                        </div>\n                                    </div>\n                                </div>\n\n                            </div>\n                        </div>\n                    </div>\n                </div>\n\n                <!-- ====================================================== -->\n                <!-- VIEW: Skills                                            -->\n                <!-- ====================================================== -->\n                <div id=\"view-skills\" class=\"view\">\n                    <div class=\"flex-1 overflow-y-auto p-6\">\n                        <div class=\"max-w-4xl mx-auto\">\n                            <div class=\"flex items-center justify-between mb-6\">\n                                <div>\n                                    <h2 class=\"text-xl font-bold text-slate-800 dark:text-slate-100\" data-i18n=\"skills_title\">Skills</h2>\n                                    <p class=\"text-sm text-slate-500 dark:text-slate-400 mt-1\" data-i18n=\"skills_desc\">View, enable, or disable agent skills</p>\n                                </div>\n                            </div>\n\n                            <!-- Built-in Tools Section -->\n                            <div class=\"mb-8\">\n                                <div class=\"flex items-center gap-2 mb-3\">\n                                    <span class=\"text-xs font-semibold uppercase tracking-wider text-slate-400 dark:text-slate-500\" data-i18n=\"tools_section_title\">Built-in Tools</span>\n                                    <span id=\"tools-count-badge\" class=\"hidden px-2 py-0.5 rounded-full text-xs bg-slate-100 dark:bg-white/10 text-slate-500 dark:text-slate-400\"></span>\n                                </div>\n                                <div id=\"tools-empty\" class=\"flex items-center gap-2 py-4 text-slate-400 dark:text-slate-500 text-sm\">\n                                    <i class=\"fas fa-spinner fa-spin text-xs\"></i>\n                                    <span data-i18n=\"tools_loading\">Loading tools...</span>\n                                </div>\n                                <div id=\"tools-list\" class=\"grid gap-3 sm:grid-cols-2 hidden\"></div>\n                            </div>\n\n                            <!-- Skills Section -->\n                            <div>\n                                <div class=\"flex items-center gap-2 mb-3\">\n                                    <span class=\"text-xs font-semibold uppercase tracking-wider text-slate-400 dark:text-slate-500\" data-i18n=\"skills_section_title\">Skills</span>\n                                    <span id=\"skills-count-badge\" class=\"hidden px-2 py-0.5 rounded-full text-xs bg-slate-100 dark:bg-white/10 text-slate-500 dark:text-slate-400\"></span>\n                                </div>\n                                <div id=\"skills-empty\" class=\"flex flex-col items-center justify-center py-12\">\n                                    <div class=\"w-14 h-14 rounded-2xl bg-amber-50 dark:bg-amber-900/20 flex items-center justify-center mb-3\">\n                                        <i class=\"fas fa-bolt text-amber-400 text-lg\"></i>\n                                    </div>\n                                    <p class=\"text-slate-500 dark:text-slate-400 font-medium\" data-i18n=\"skills_loading\">Loading skills...</p>\n                               
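<!-- empty/loading state; console.js fills the #skills-list grid below once skills are fetched -->\n                               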
     <p class=\"text-sm text-slate-400 dark:text-slate-500 mt-1\" data-i18n=\"skills_loading_desc\">Skills will be displayed here after loading</p>\n                                </div>\n                                <div id=\"skills-list\" class=\"grid gap-4 sm:grid-cols-2\"></div>\n                            </div>\n                        </div>\n                    </div>\n                </div>\n\n                <!-- ====================================================== -->\n                <!-- VIEW: Memory                                            -->\n                <!-- ====================================================== -->\n                <div id=\"view-memory\" class=\"view\">\n                    <div class=\"flex-1 overflow-y-auto p-6\">\n                        <div class=\"max-w-4xl mx-auto\">\n\n                            <!-- Panel: list -->\n                            <div id=\"memory-panel-list\">\n                                <div class=\"flex items-center justify-between mb-6\">\n                                    <div>\n                                        <h2 class=\"text-xl font-bold text-slate-800 dark:text-slate-100\" data-i18n=\"memory_title\">Memory</h2>\n                                        <p class=\"text-sm text-slate-500 dark:text-slate-400 mt-1\" data-i18n=\"memory_desc\">View agent memory files and contents</p>\n                                    </div>\n                                </div>\n                                <div id=\"memory-empty\" class=\"flex flex-col items-center justify-center py-20\">\n                                    <div class=\"w-16 h-16 rounded-2xl bg-purple-50 dark:bg-purple-900/20 flex items-center justify-center mb-4\">\n                                        <i class=\"fas fa-brain text-purple-400 text-xl\"></i>\n                                    </div>\n                                    <p class=\"text-slate-500 dark:text-slate-400 font-medium\" data-i18n=\"memory_loading\">Loading memory files...</p>\n                                    <p class=\"text-sm text-slate-400 dark:text-slate-500 mt-1\" data-i18n=\"memory_loading_desc\">Memory files will be displayed here</p>\n                                </div>\n                                <div id=\"memory-list\" class=\"hidden\">\n                                    <div class=\"bg-white dark:bg-[#1A1A1A] rounded-xl border border-slate-200 dark:border-white/10 overflow-hidden\">\n                                        <table class=\"w-full\">\n                                            <thead>\n                                                <tr class=\"border-b border-slate-200 dark:border-white/10\">\n                                                    <th class=\"text-left px-4 py-3 text-xs font-semibold uppercase tracking-wider text-slate-500 dark:text-slate-400\" data-i18n=\"memory_col_name\">Filename</th>\n                                                    <th class=\"text-left px-4 py-3 text-xs font-semibold uppercase tracking-wider text-slate-500 dark:text-slate-400\" data-i18n=\"memory_col_type\">Type</th>\n                                                    <th class=\"text-left px-4 py-3 text-xs font-semibold uppercase tracking-wider text-slate-500 dark:text-slate-400\" data-i18n=\"memory_col_size\">Size</th>\n                                                    <th class=\"text-left px-4 py-3 text-xs font-semibold uppercase tracking-wider text-slate-500 dark:text-slate-400\" data-i18n=\"memory_col_updated\">Updated</th>\n   
                                             </tr>\n                                            </thead>\n                                            <tbody id=\"memory-table-body\"></tbody>\n                                        </table>\n                                    </div>\n                                    <div id=\"memory-pagination\" class=\"flex items-center justify-between mt-4 text-sm text-slate-500 dark:text-slate-400\"></div>\n                                </div>\n                            </div>\n\n                            <!-- Panel: file viewer (replaces list) -->\n                            <div id=\"memory-panel-viewer\" class=\"hidden\">\n                                <div class=\"flex items-center gap-3 mb-6\">\n                                    <button onclick=\"closeMemoryViewer()\"\n                                            class=\"flex items-center gap-1.5 px-3 py-1.5 rounded-lg text-sm\n                                                   text-slate-500 dark:text-slate-400 hover:bg-slate-100 dark:hover:bg-white/10\n                                                   border border-slate-200 dark:border-white/10 transition-colors cursor-pointer\">\n                                        <i class=\"fas fa-arrow-left text-xs\"></i>\n                                        <span data-i18n=\"memory_back\">Back</span>\n                                    </button>\n                                    <h2 id=\"memory-viewer-title\"\n                                        class=\"text-base font-semibold text-slate-800 dark:text-slate-100 font-mono truncate\"></h2>\n                                </div>\n                                <div class=\"bg-white dark:bg-[#1A1A1A] rounded-xl border border-slate-200 dark:border-white/10 overflow-hidden\">\n                                    <div id=\"memory-viewer-content\"\n                                         class=\"p-5 overflow-y-auto text-sm msg-content text-slate-700 dark:text-slate-200\"\n                                         style=\"max-height: calc(100vh - 220px)\"></div>\n                                </div>\n                            </div>\n\n                        </div>\n                    </div>\n                </div>\n\n                <!-- ====================================================== -->\n                <!-- VIEW: Channels                                          -->\n                <!-- ====================================================== -->\n                <div id=\"view-channels\" class=\"view\">\n                    <div class=\"flex-1 overflow-y-auto p-6\">\n                        <div class=\"max-w-4xl mx-auto\">\n                            <div class=\"flex items-center justify-between mb-6\">\n                                <div>\n                                    <h2 class=\"text-xl font-bold text-slate-800 dark:text-slate-100\" data-i18n=\"channels_title\">Channels</h2>\n                                    <p class=\"text-sm text-slate-500 dark:text-slate-400 mt-1\" data-i18n=\"channels_desc\">View and manage messaging channels</p>\n                                </div>\n                                <button id=\"add-channel-btn\" onclick=\"openAddChannelPanel()\"\n                                        class=\"flex items-center gap-2 px-4 py-2 rounded-lg bg-primary-500 hover:bg-primary-600\n                                               text-white text-sm font-medium cursor-pointer transition-colors duration-150\">\n                            
        <i class=\"fas fa-plus text-xs\"></i>\n                                    <span data-i18n=\"channels_add\">Connect</span>\n                                </button>\n                            </div>\n                            <div id=\"channels-content\" class=\"grid gap-4\"></div>\n                            <div id=\"channels-add-panel\" class=\"hidden mt-4\"></div>\n                        </div>\n                    </div>\n                </div>\n\n                <!-- ====================================================== -->\n                <!-- VIEW: Tasks                                             -->\n                <!-- ====================================================== -->\n                <div id=\"view-tasks\" class=\"view\">\n                    <div class=\"flex-1 overflow-y-auto p-6\">\n                        <div class=\"max-w-4xl mx-auto\">\n                            <div class=\"flex items-center justify-between mb-6\">\n                                <div>\n                                    <h2 class=\"text-xl font-bold text-slate-800 dark:text-slate-100\" data-i18n=\"tasks_title\">Scheduled Tasks</h2>\n                                    <p class=\"text-sm text-slate-500 dark:text-slate-400 mt-1\" data-i18n=\"tasks_desc\">View and manage scheduled tasks</p>\n                                </div>\n                            </div>\n                            <div id=\"tasks-empty\" class=\"flex flex-col items-center justify-center py-20\">\n                                <div class=\"w-16 h-16 rounded-2xl bg-rose-50 dark:bg-rose-900/20 flex items-center justify-center mb-4\">\n                                    <i class=\"fas fa-clock text-rose-400 text-xl\"></i>\n                                </div>\n                                <p class=\"text-slate-500 dark:text-slate-400 font-medium\">Loading...</p>\n                            </div>\n                            <div id=\"tasks-list\" class=\"grid gap-4 hidden\"></div>\n                        </div>\n                    </div>\n                </div>\n\n                <!-- ====================================================== -->\n                <!-- VIEW: Logs                                              -->\n                <!-- ====================================================== -->\n                <div id=\"view-logs\" class=\"view\">\n                    <div class=\"flex-1 overflow-y-auto p-6\">\n                        <div class=\"max-w-5xl mx-auto\">\n                            <div class=\"flex items-center justify-between mb-6\">\n                                <div>\n                                    <h2 class=\"text-xl font-bold text-slate-800 dark:text-slate-100\" data-i18n=\"logs_title\">Logs</h2>\n                                    <p class=\"text-sm text-slate-500 dark:text-slate-400 mt-1\" data-i18n=\"logs_desc\">Real-time log output (run.log)</p>\n                                </div>\n                            </div>\n                            <!-- Log Terminal -->\n                            <div class=\"bg-slate-900 rounded-xl border border-slate-700 overflow-hidden shadow-lg\">\n                                <div class=\"flex items-center gap-2 px-4 py-2.5 bg-slate-800 border-b border-slate-700\">\n                                    <div class=\"flex gap-1.5\">\n                                        <span class=\"w-3 h-3 rounded-full bg-red-500/80\"></span>\n                                        <span class=\"w-3 h-3 rounded-full 
bg-amber-500/80\"></span>\n                                        <span class=\"w-3 h-3 rounded-full bg-emerald-500/80\"></span>\n                                    </div>\n                                    <span class=\"text-xs text-slate-400 ml-2 font-mono\">run.log</span>\n                                    <div class=\"flex-1\"></div>\n                                    <div class=\"flex items-center gap-1.5\">\n                                        <span class=\"w-2 h-2 rounded-full bg-emerald-500 animate-pulse\"></span>\n                                        <span class=\"text-xs text-slate-500\" data-i18n=\"logs_live\">Live</span>\n                                    </div>\n                                </div>\n                                <div id=\"log-output\" class=\"p-4 overflow-y-auto font-mono text-xs leading-relaxed text-slate-300 whitespace-pre-wrap break-all\" style=\"height: calc(100vh - 272px)\">\n                                    <p class=\"text-slate-500\" data-i18n=\"logs_coming_msg\">Log streaming will be available here. Connects to run.log for real-time output similar to tail -f.</p>\n                                </div>\n                            </div>\n                        </div>\n                    </div>\n                </div>\n\n            </div><!-- /content-area -->\n        </div><!-- /main-content -->\n    </div><!-- /app -->\n\n    <!-- Confirm Dialog -->\n    <div id=\"confirm-dialog-overlay\" class=\"fixed inset-0 bg-black/50 z-[100] hidden flex items-center justify-center\">\n        <div class=\"bg-white dark:bg-[#1A1A1A] rounded-2xl border border-slate-200 dark:border-white/10 shadow-xl\n                    w-full max-w-sm mx-4 overflow-hidden\">\n            <div class=\"p-6\">\n                <div class=\"flex items-center gap-3 mb-3\">\n                    <div class=\"w-10 h-10 rounded-xl bg-red-50 dark:bg-red-900/20 flex items-center justify-center flex-shrink-0\">\n                        <i class=\"fas fa-triangle-exclamation text-red-500\"></i>\n                    </div>\n                    <h3 id=\"confirm-dialog-title\" class=\"font-semibold text-slate-800 dark:text-slate-100 text-base\"></h3>\n                </div>\n                <p id=\"confirm-dialog-message\" class=\"text-sm text-slate-500 dark:text-slate-400 leading-relaxed ml-[52px]\"></p>\n            </div>\n            <div class=\"flex items-center justify-end gap-3 px-6 py-4 border-t border-slate-100 dark:border-white/5\">\n                <button id=\"confirm-dialog-cancel\"\n                        class=\"px-4 py-2 rounded-lg border border-slate-200 dark:border-white/10\n                               text-slate-600 dark:text-slate-300 text-sm font-medium\n                               hover:bg-slate-50 dark:hover:bg-white/5\n                               cursor-pointer transition-colors duration-150\"></button>\n                <button id=\"confirm-dialog-ok\"\n                        class=\"px-4 py-2 rounded-lg bg-red-500 hover:bg-red-600 text-white text-sm font-medium\n                               cursor-pointer transition-colors duration-150\"></button>\n            </div>\n        </div>\n    </div>\n\n    <script src=\"assets/js/console.js\"></script>\n</body>\n</html>\n"
  },
  {
    "path": "channel/web/static/css/console.css",
    "content": "/* =====================================================================\n   CowAgent Console Styles\n   ===================================================================== */\n\n/* Animations */\n@keyframes pulseDot {\n    0%, 80%, 100% { transform: scale(0.6); opacity: 0.4; }\n    40% { transform: scale(1); opacity: 1; }\n}\n\n/* Scrollbar */\n* { scrollbar-width: thin; scrollbar-color: #94a3b8 transparent; }\n::-webkit-scrollbar { width: 6px; height: 6px; }\n::-webkit-scrollbar-track { background: transparent; }\n::-webkit-scrollbar-thumb { background: #94a3b8; border-radius: 3px; }\n::-webkit-scrollbar-thumb:hover { background: #64748b; }\n.dark ::-webkit-scrollbar-thumb { background: #475569; }\n.dark ::-webkit-scrollbar-thumb:hover { background: #64748b; }\n\n/* Sidebar */\n.sidebar-item.active {\n    background: rgba(255, 255, 255, 0.08);\n    color: #FFFFFF;\n}\n.sidebar-item.active .item-icon { color: #4ABE6E; }\n\n/* Menu Groups */\n.menu-group-items { max-height: 0; overflow: hidden; transition: max-height 0.25s ease-out; }\n.menu-group.open .menu-group-items { max-height: 500px; transition: max-height 0.35s ease-in; }\n.menu-group .chevron { transition: transform 0.25s ease; }\n.menu-group.open .chevron { transform: rotate(90deg); }\n\n/* View Switching */\n.view { display: none; height: 100%; }\n.view.active { display: flex; flex-direction: column; }\n\n/* Markdown Content */\n.msg-content p { margin: 0.5em 0; line-height: 1.7; }\n.msg-content p:first-child { margin-top: 0; }\n.msg-content p:last-child { margin-bottom: 0; }\n.msg-content h1, .msg-content h2, .msg-content h3,\n.msg-content h4, .msg-content h5, .msg-content h6 {\n    margin-top: 1.2em; margin-bottom: 0.6em; font-weight: 600; line-height: 1.3;\n}\n.msg-content h1 { font-size: 1.4em; }\n.msg-content h2 { font-size: 1.25em; }\n.msg-content h3 { font-size: 1.1em; }\n.msg-content ul, .msg-content ol { margin: 0.5em 0; padding-left: 1.8em; }\n.msg-content li { margin: 0.25em 0; }\n.msg-content pre {\n    border-radius: 8px; overflow-x: auto; margin: 0.8em 0;\n    background: #f1f5f9; padding: 1em;\n}\n.dark .msg-content pre { background: #111111; }\n.msg-content code {\n    font-family: 'JetBrains Mono', 'Fira Code', Consolas, monospace;\n    font-size: 0.875em;\n}\n.msg-content :not(pre) > code {\n    background: rgba(74, 190, 110, 0.1); color: #1C6B3B;\n    padding: 2px 6px; border-radius: 4px;\n}\n.dark .msg-content :not(pre) > code {\n    background: rgba(74, 190, 110, 0.15); color: #74E9A4;\n}\n.msg-content pre code { background: transparent; padding: 0; color: inherit; }\n.msg-content blockquote {\n    border-left: 3px solid #4ABE6E; padding: 0.5em 1em;\n    margin: 0.8em 0; background: rgba(74, 190, 110, 0.05); border-radius: 0 6px 6px 0;\n}\n.dark .msg-content blockquote { background: rgba(74, 190, 110, 0.08); }\n.msg-content table { border-collapse: collapse; width: 100%; margin: 0.8em 0; }\n.msg-content th, .msg-content td {\n    border: 1px solid #e2e8f0; padding: 8px 12px; text-align: left;\n}\n.dark .msg-content th, .dark .msg-content td { border-color: rgba(255,255,255,0.1); }\n.msg-content th { background: #f1f5f9; font-weight: 600; }\n.dark .msg-content th { background: #111111; }\n.msg-content img { max-width: 100%; height: auto; border-radius: 8px; margin: 0.5em 0; }\n.msg-content a { color: #35A85B; text-decoration: underline; }\n.msg-content a:hover { color: #228547; }\n.msg-content hr { border: none; height: 1px; background: #e2e8f0; margin: 1.2em 0; }\n.dark .msg-content hr { 
background: rgba(255,255,255,0.1); }\n\n/* SSE Streaming cursor */\n@keyframes blink { 0%, 100% { opacity: 1; } 50% { opacity: 0; } }\n.sse-streaming::after {\n    content: '▋';\n    display: inline-block;\n    margin-left: 2px;\n    color: #4ABE6E;\n    animation: blink 0.9s step-end infinite;\n    font-size: 0.85em;\n    vertical-align: middle;\n}\n\n/* Agent steps (thinking summaries + tool indicators) */\n.agent-steps:empty { display: none; }\n.agent-steps:not(:empty) {\n    margin-bottom: 0.625rem;\n    padding-bottom: 0.5rem;\n    border-bottom: 1px dashed rgba(0, 0, 0, 0.08);\n}\n.dark .agent-steps:not(:empty) { border-bottom-color: rgba(255, 255, 255, 0.08); }\n\n.agent-step {\n    font-size: 0.75rem;\n    line-height: 1.4;\n    color: #94a3b8;\n    margin-bottom: 0.25rem;\n}\n.agent-step:last-child { margin-bottom: 0; }\n\n/* Thinking step - collapsible */\n.agent-thinking-step .thinking-header {\n    display: flex;\n    align-items: center;\n    gap: 0.375rem;\n    cursor: pointer;\n    user-select: none;\n}\n.agent-thinking-step .thinking-header.no-toggle { cursor: default; }\n.agent-thinking-step .thinking-header:not(.no-toggle):hover { color: #64748b; }\n.dark .agent-thinking-step .thinking-header:not(.no-toggle):hover { color: #cbd5e1; }\n.agent-thinking-step .thinking-header i:first-child { font-size: 0.625rem; margin-top: 1px; }\n.agent-thinking-step .thinking-chevron {\n    font-size: 0.5rem;\n    margin-left: auto;\n    transition: transform 0.2s ease;\n    opacity: 0.5;\n}\n.agent-thinking-step.expanded .thinking-chevron { transform: rotate(90deg); }\n.agent-thinking-step .thinking-full {\n    display: none;\n    margin-top: 0.375rem;\n    margin-left: 1rem;\n    padding: 0.5rem;\n    background: rgba(0, 0, 0, 0.02);\n    border-radius: 6px;\n    border: 1px solid rgba(0, 0, 0, 0.04);\n    font-size: 0.75rem;\n    line-height: 1.5;\n    color: #94a3b8;\n    max-height: 200px;\n    overflow-y: auto;\n}\n.dark .agent-thinking-step .thinking-full {\n    background: rgba(255, 255, 255, 0.02);\n    border-color: rgba(255, 255, 255, 0.04);\n}\n.agent-thinking-step.expanded .thinking-full { display: block; }\n.agent-thinking-step .thinking-full p { margin: 0.25em 0; }\n.agent-thinking-step .thinking-full p:first-child { margin-top: 0; }\n.agent-thinking-step .thinking-full p:last-child { margin-bottom: 0; }\n\n/* Tool step - collapsible */\n.agent-tool-step .tool-header {\n    display: flex;\n    align-items: center;\n    gap: 0.375rem;\n    cursor: pointer;\n    user-select: none;\n    padding: 1px 0;\n    border-radius: 4px;\n}\n.agent-tool-step .tool-header:hover { color: #64748b; }\n.dark .agent-tool-step .tool-header:hover { color: #cbd5e1; }\n.agent-tool-step .tool-icon { font-size: 0.625rem; }\n.agent-tool-step .tool-chevron {\n    font-size: 0.5rem;\n    margin-left: auto;\n    transition: transform 0.2s ease;\n    opacity: 0.5;\n}\n.agent-tool-step.expanded .tool-chevron { transform: rotate(90deg); }\n.agent-tool-step .tool-time {\n    font-size: 0.65rem;\n    opacity: 0.6;\n    margin-left: 0.25rem;\n}\n\n/* Tool detail panel */\n.agent-tool-step .tool-detail {\n    display: none;\n    margin-top: 0.375rem;\n    margin-left: 1rem;\n    padding: 0.5rem;\n    background: rgba(0, 0, 0, 0.02);\n    border-radius: 6px;\n    border: 1px solid rgba(0, 0, 0, 0.04);\n}\n.dark .agent-tool-step .tool-detail {\n    background: rgba(255, 255, 255, 0.02);\n    border-color: rgba(255, 255, 255, 0.04);\n}\n.agent-tool-step.expanded .tool-detail { display: block; 
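/* shown when console.js toggles .expanded on the step */ 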
}\n.tool-detail-section { margin-bottom: 0.375rem; }\n.tool-detail-section:last-child { margin-bottom: 0; }\n.tool-detail-label {\n    font-size: 0.625rem;\n    font-weight: 600;\n    text-transform: uppercase;\n    letter-spacing: 0.05em;\n    opacity: 0.6;\n    margin-bottom: 0.125rem;\n}\n.tool-detail-content {\n    font-family: 'JetBrains Mono', 'Fira Code', Consolas, monospace;\n    font-size: 0.7rem;\n    line-height: 1.5;\n    white-space: pre-wrap;\n    word-break: break-all;\n    max-height: 200px;\n    overflow-y: auto;\n    margin: 0;\n    padding: 0.25rem 0;\n    background: transparent;\n    color: inherit;\n}\n.tool-error-text { color: #f87171; }\n\n/* Tool failed state */\n.agent-tool-step.tool-failed .tool-name { color: #f87171; }\n\n/* Config form controls */\n#view-config input[type=\"text\"],\n#view-config input[type=\"number\"],\n#view-config input[type=\"password\"] {\n    height: 40px;\n    transition: border-color 0.2s ease, box-shadow 0.2s ease;\n}\n#view-config input:focus {\n    border-color: #4ABE6E;\n    box-shadow: 0 0 0 3px rgba(74, 190, 110, 0.12);\n}\n#view-config input[type=\"text\"]:hover,\n#view-config input[type=\"number\"]:hover,\n#view-config input[type=\"password\"]:hover {\n    border-color: #94a3b8;\n}\n.dark #view-config input[type=\"text\"]:hover,\n.dark #view-config input[type=\"number\"]:hover,\n.dark #view-config input[type=\"password\"]:hover {\n    border-color: #64748b;\n}\n\n/* Custom dropdown */\n.cfg-dropdown {\n    position: relative;\n    outline: none;\n}\n.cfg-dropdown-selected {\n    display: flex;\n    align-items: center;\n    justify-content: space-between;\n    height: 40px;\n    padding: 0 0.75rem;\n    border-radius: 0.5rem;\n    border: 1px solid #e2e8f0;\n    background: #f8fafc;\n    font-size: 0.875rem;\n    color: #1e293b;\n    cursor: pointer;\n    transition: border-color 0.2s ease, box-shadow 0.2s ease;\n    user-select: none;\n}\n.dark .cfg-dropdown-selected {\n    border-color: #475569;\n    background: rgba(255, 255, 255, 0.05);\n    color: #f1f5f9;\n}\n.cfg-dropdown-selected:hover { border-color: #94a3b8; }\n.dark .cfg-dropdown-selected:hover { border-color: #64748b; }\n.cfg-dropdown.open .cfg-dropdown-selected,\n.cfg-dropdown:focus .cfg-dropdown-selected {\n    border-color: #4ABE6E;\n    box-shadow: 0 0 0 3px rgba(74, 190, 110, 0.12);\n}\n.cfg-dropdown-arrow {\n    font-size: 0.625rem;\n    color: #94a3b8;\n    transition: transform 0.2s ease;\n    flex-shrink: 0;\n    margin-left: 0.5rem;\n}\n.cfg-dropdown.open .cfg-dropdown-arrow { transform: rotate(180deg); }\n.cfg-dropdown-menu {\n    display: none;\n    position: absolute;\n    top: calc(100% + 4px);\n    left: 0;\n    right: 0;\n    z-index: 50;\n    max-height: 240px;\n    overflow-y: auto;\n    border-radius: 0.5rem;\n    border: 1px solid #e2e8f0;\n    background: #ffffff;\n    box-shadow: 0 10px 25px -5px rgba(0, 0, 0, 0.1), 0 4px 10px -5px rgba(0, 0, 0, 0.04);\n    padding: 4px;\n}\n.dark .cfg-dropdown-menu {\n    border-color: #334155;\n    background: #1e1e1e;\n    box-shadow: 0 10px 25px -5px rgba(0, 0, 0, 0.4);\n}\n.cfg-dropdown.open .cfg-dropdown-menu { display: block; }\n.cfg-dropdown-item {\n    display: flex;\n    align-items: center;\n    padding: 8px 10px;\n    border-radius: 6px;\n    font-size: 0.875rem;\n    color: #334155;\n    cursor: pointer;\n    transition: background 0.15s ease;\n    white-space: nowrap;\n    overflow: hidden;\n    text-overflow: ellipsis;\n}\n.dark .cfg-dropdown-item { color: #cbd5e1; }\n.cfg-dropdown-item:hover { 
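/* light theme; the .dark override follows */ 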
background: #f1f5f9; }\n.dark .cfg-dropdown-item:hover { background: rgba(255, 255, 255, 0.08); }\n.cfg-dropdown-item.active {\n    background: rgba(74, 190, 110, 0.1);\n    color: #228547;\n    font-weight: 500;\n}\n.dark .cfg-dropdown-item.active {\n    background: rgba(74, 190, 110, 0.15);\n    color: #74E9A4;\n}\n\n/* API Key masking via CSS (avoids browser password prompts) */\n.cfg-key-masked {\n    -webkit-text-security: disc;\n    text-security: disc;\n}\n\n/* Chat Input */\n#chat-input {\n    resize: none; height: 42px; max-height: 180px;\n    overflow-y: hidden;\n    transition: border-color 0.2s ease;\n}\n\n/* Attachment Preview Bar */\n.attachment-preview {\n    display: flex;\n    flex-wrap: wrap;\n    gap: 8px;\n    padding: 8px 0;\n}\n.attachment-preview.hidden { display: none; }\n\n.att-thumb {\n    position: relative;\n    width: 64px; height: 64px;\n    border-radius: 8px;\n    overflow: hidden;\n    border: 1px solid #e2e8f0;\n    flex-shrink: 0;\n}\n.dark .att-thumb { border-color: rgba(255,255,255,0.1); }\n.att-thumb img {\n    width: 100%; height: 100%;\n    object-fit: cover;\n}\n\n.att-chip {\n    position: relative;\n    display: flex;\n    align-items: center;\n    gap: 6px;\n    padding: 6px 28px 6px 10px;\n    border-radius: 8px;\n    background: #f1f5f9;\n    border: 1px solid #e2e8f0;\n    font-size: 12px;\n    color: #475569;\n    max-width: 180px;\n}\n.dark .att-chip { background: rgba(255,255,255,0.05); border-color: rgba(255,255,255,0.1); color: #94a3b8; }\n.att-uploading { opacity: 0.6; pointer-events: none; }\n.att-name {\n    overflow: hidden;\n    text-overflow: ellipsis;\n    white-space: nowrap;\n}\n\n.att-remove {\n    position: absolute;\n    top: -4px; right: -4px;\n    width: 18px; height: 18px;\n    border-radius: 50%;\n    background: #ef4444;\n    color: #fff;\n    border: none;\n    font-size: 12px;\n    line-height: 18px;\n    text-align: center;\n    cursor: pointer;\n    padding: 0;\n    opacity: 0;\n    transition: opacity 0.15s;\n}\n.att-thumb:hover .att-remove,\n.att-chip:hover .att-remove { opacity: 1; }\n\n/* Drag-over highlight */\n.drag-over {\n    background: rgba(74, 190, 110, 0.08) !important;\n    border-color: #4ABE6E !important;\n}\n\n/* User message attachments */\n.user-msg-attachments {\n    display: flex;\n    flex-wrap: wrap;\n    gap: 6px;\n    margin-bottom: 6px;\n}\n.user-msg-image {\n    max-width: 200px;\n    max-height: 160px;\n    border-radius: 8px;\n    object-fit: cover;\n    cursor: pointer;\n}\n.user-msg-image:hover { opacity: 0.9; }\n.user-msg-file {\n    display: flex;\n    align-items: center;\n    gap: 6px;\n    padding: 4px 10px;\n    border-radius: 6px;\n    background: rgba(255,255,255,0.15);\n    font-size: 12px;\n}\n\n/* Placeholder Cards */\n.placeholder-card {\n    transition: transform 0.2s ease, box-shadow 0.2s ease;\n}\n.placeholder-card:hover {\n    transform: translateY(-2px);\n    box-shadow: 0 8px 25px -5px rgba(0, 0, 0, 0.1);\n}\n"
  },
  {
    "path": "channel/web/static/js/console.js",
    "content": "/* =====================================================================\n   CowAgent Console - Main Application Script\n   ===================================================================== */\n\n// =====================================================================\n// Version — update this before each release\n// =====================================================================\nconst APP_VERSION = 'v2.0.3';\n\n// =====================================================================\n// i18n\n// =====================================================================\nconst I18N = {\n    zh: {\n        console: '控制台',\n        nav_chat: '对话', nav_manage: '管理', nav_monitor: '监控',\n        menu_chat: '对话', menu_config: '配置', menu_skills: '技能',\n        menu_memory: '记忆', menu_channels: '通道', menu_tasks: '定时',\n        menu_logs: '日志',\n        welcome_subtitle: '我可以帮你解答问题、管理计算机、创造和执行技能，并通过长期记忆<br>不断成长',\n        example_sys_title: '系统管理', example_sys_text: '帮我查看工作空间里有哪些文件',\n        example_task_title: '技能系统', example_task_text: '查看所有支持的工具和技能',\n        example_code_title: '编程助手', example_code_text: '帮我编写一个Python爬虫脚本',\n        input_placeholder: '输入消息...',\n        config_title: '配置管理', config_desc: '管理模型和 Agent 配置',\n        config_model: '模型配置', config_agent: 'Agent 配置',\n        config_channel: '通道配置',\n        config_agent_enabled: 'Agent 模式', config_max_tokens: '最大 Token',\n        config_max_turns: '最大轮次', config_max_steps: '最大步数',\n        config_channel_type: '通道类型',\n        config_provider: '模型厂商', config_model_name: '模型',\n        config_custom_model_hint: '输入自定义模型名称',\n        config_save: '保存', config_saved: '已保存',\n        config_save_error: '保存失败',\n        config_custom_option: '自定义...',\n        skills_title: '技能管理', skills_desc: '查看、启用或禁用 Agent 技能',\n        skills_loading: '加载技能中...', skills_loading_desc: '技能加载后将显示在此处',\n        tools_section_title: '内置工具', tools_loading: '加载工具中...',\n        skills_section_title: '技能', skill_enable: '启用', skill_disable: '禁用',\n        skill_toggle_error: '操作失败，请稍后再试',\n        memory_title: '记忆管理', memory_desc: '查看 Agent 记忆文件和内容',\n        memory_loading: '加载记忆文件中...', memory_loading_desc: '记忆文件将显示在此处',\n        memory_back: '返回列表',\n        memory_col_name: '文件名', memory_col_type: '类型', memory_col_size: '大小', memory_col_updated: '更新时间',\n        channels_title: '通道管理', channels_desc: '管理已接入的消息通道',\n        channels_add: '接入通道', channels_disconnect: '断开',\n        channels_save: '保存配置', channels_saved: '已保存', channels_save_error: '保存失败',\n        channels_restarted: '已保存并重启',\n        channels_connect_btn: '接入', channels_cancel: '取消',\n        channels_select_placeholder: '选择要接入的通道...',\n        channels_empty: '暂未接入任何通道', channels_empty_desc: '点击右上角「接入通道」按钮开始配置',\n        channels_disconnect_confirm: '确认断开该通道？配置将保留但通道会停止运行。',\n        channels_connected: '已接入', channels_connecting: '接入中...',\n        tasks_title: '定时任务', tasks_desc: '查看和管理定时任务',\n        tasks_coming: '即将推出', tasks_coming_desc: '定时任务管理功能即将在此提供',\n        logs_title: '日志', logs_desc: '实时日志输出 (run.log)',\n        logs_live: '实时', logs_coming_msg: '日志流即将在此提供。将连接 run.log 实现类似 tail -f 的实时输出。',\n        error_send: '发送失败，请稍后再试。', error_timeout: '请求超时，请再试一次。',\n    },\n    en: {\n        console: 'Console',\n        nav_chat: 'Chat', nav_manage: 'Management', nav_monitor: 'Monitor',\n        menu_chat: 'Chat', menu_config: 'Config', menu_skills: 'Skills',\n        menu_memory: 'Memory', menu_channels: 'Channels', menu_tasks: 'Tasks',\n        
menu_logs: 'Logs',\n        welcome_subtitle: 'I can help you answer questions, manage your computer, create and execute skills, and keep growing through <br> long-term memory.',\n        example_sys_title: 'System', example_sys_text: 'Show me the files in the workspace',\n        example_task_title: 'Skills', example_task_text: 'Show current tools and skills',\n        example_code_title: 'Coding', example_code_text: 'Write a Python web scraper script',\n        input_placeholder: 'Type a message...',\n        config_title: 'Configuration', config_desc: 'Manage model and agent settings',\n        config_model: 'Model Configuration', config_agent: 'Agent Configuration',\n        config_channel: 'Channel Configuration',\n        config_agent_enabled: 'Agent Mode', config_max_tokens: 'Max Tokens',\n        config_max_turns: 'Max Turns', config_max_steps: 'Max Steps',\n        config_channel_type: 'Channel Type',\n        config_provider: 'Provider', config_model_name: 'Model',\n        config_custom_model_hint: 'Enter custom model name',\n        config_save: 'Save', config_saved: 'Saved',\n        config_save_error: 'Save failed',\n        config_custom_option: 'Custom...',\n        skills_title: 'Skills', skills_desc: 'View, enable, or disable agent skills',\n        skills_loading: 'Loading skills...', skills_loading_desc: 'Skills will be displayed here after loading',\n        tools_section_title: 'Built-in Tools', tools_loading: 'Loading tools...',\n        skills_section_title: 'Skills', skill_enable: 'Enable', skill_disable: 'Disable',\n        skill_toggle_error: 'Operation failed, please try again',\n        memory_title: 'Memory', memory_desc: 'View agent memory files and contents',\n        memory_loading: 'Loading memory files...', memory_loading_desc: 'Memory files will be displayed here',\n        memory_back: 'Back to list',\n        memory_col_name: 'Filename', memory_col_type: 'Type', memory_col_size: 'Size', memory_col_updated: 'Updated',\n        channels_title: 'Channels', channels_desc: 'Manage connected messaging channels',\n        channels_add: 'Connect', channels_disconnect: 'Disconnect',\n        channels_save: 'Save', channels_saved: 'Saved', channels_save_error: 'Save failed',\n        channels_restarted: 'Saved & Restarted',\n        channels_connect_btn: 'Connect', channels_cancel: 'Cancel',\n        channels_select_placeholder: 'Select a channel to connect...',\n        channels_empty: 'No channels connected', channels_empty_desc: 'Click the \"Connect\" button above to get started',\n        channels_disconnect_confirm: 'Disconnect this channel? Config will be preserved but the channel will stop.',\n        channels_connected: 'Connected', channels_connecting: 'Connecting...',\n        tasks_title: 'Scheduled Tasks', tasks_desc: 'View and manage scheduled tasks',\n        tasks_coming: 'Coming Soon', tasks_coming_desc: 'Scheduled task management will be available here',\n        logs_title: 'Logs', logs_desc: 'Real-time log output (run.log)',\n        logs_live: 'Live', logs_coming_msg: 'Log streaming will be available here. Connects to run.log for real-time output similar to tail -f.',\n        error_send: 'Failed to send. Please try again.', error_timeout: 'Request timeout. 
Please try again.',\n    }\n};\n\nlet currentLang = localStorage.getItem('cow_lang') || 'zh';\n\nfunction t(key) {\n    return (I18N[currentLang] && I18N[currentLang][key]) || (I18N.en[key]) || key;\n}\n\nfunction applyI18n() {\n    document.querySelectorAll('[data-i18n]').forEach(el => {\n        el.textContent = t(el.dataset.i18n);\n    });\n    document.querySelectorAll('[data-i18n-html]').forEach(el => {\n        el.innerHTML = t(el.dataset.i18nHtml);\n    });\n    document.querySelectorAll('[data-i18n-placeholder]').forEach(el => {\n        el.placeholder = t(el.dataset['i18nPlaceholder']);\n    });\n    document.getElementById('lang-label').textContent = currentLang === 'zh' ? 'EN' : '中文';\n}\n\nfunction toggleLanguage() {\n    currentLang = currentLang === 'zh' ? 'en' : 'zh';\n    localStorage.setItem('cow_lang', currentLang);\n    applyI18n();\n}\n\n// =====================================================================\n// Theme\n// =====================================================================\nlet currentTheme = localStorage.getItem('cow_theme') || 'dark';\n\nfunction applyTheme() {\n    const root = document.documentElement;\n    if (currentTheme === 'dark') {\n        root.classList.add('dark');\n        document.getElementById('theme-icon').className = 'fas fa-sun';\n        document.getElementById('hljs-light').disabled = true;\n        document.getElementById('hljs-dark').disabled = false;\n    } else {\n        root.classList.remove('dark');\n        document.getElementById('theme-icon').className = 'fas fa-moon';\n        document.getElementById('hljs-light').disabled = false;\n        document.getElementById('hljs-dark').disabled = true;\n    }\n}\n\nfunction toggleTheme() {\n    currentTheme = currentTheme === 'dark' ? 'light' : 'dark';\n    localStorage.setItem('cow_theme', currentTheme);\n    applyTheme();\n}\n\n// =====================================================================\n// Sidebar & Navigation\n// =====================================================================\nconst VIEW_META = {\n    chat:     { group: 'nav_chat',    page: 'menu_chat' },\n    config:   { group: 'nav_manage',  page: 'menu_config' },\n    skills:   { group: 'nav_manage',  page: 'menu_skills' },\n    memory:   { group: 'nav_manage',  page: 'menu_memory' },\n    channels: { group: 'nav_manage',  page: 'menu_channels' },\n    tasks:    { group: 'nav_manage',  page: 'menu_tasks' },\n    logs:     { group: 'nav_monitor', page: 'menu_logs' },\n};\n\nlet currentView = 'chat';\n\nfunction navigateTo(viewId) {\n    if (!VIEW_META[viewId]) return;\n    document.querySelectorAll('.view').forEach(v => v.classList.remove('active'));\n    const target = document.getElementById('view-' + viewId);\n    if (target) target.classList.add('active');\n    document.querySelectorAll('.sidebar-item').forEach(item => {\n        item.classList.toggle('active', item.dataset.view === viewId);\n    });\n    const meta = VIEW_META[viewId];\n    document.getElementById('breadcrumb-group').textContent = t(meta.group);\n    document.getElementById('breadcrumb-group').dataset.i18n = meta.group;\n    document.getElementById('breadcrumb-page').textContent = t(meta.page);\n    document.getElementById('breadcrumb-page').dataset.i18n = meta.page;\n    currentView = viewId;\n    if (window.innerWidth < 1024) closeSidebar();\n}\n\nfunction toggleSidebar() {\n    const sidebar = document.getElementById('sidebar');\n    const overlay = document.getElementById('sidebar-overlay');\n    const isOpen = 
!sidebar.classList.contains('-translate-x-full');\n    if (isOpen) {\n        closeSidebar();\n    } else {\n        sidebar.classList.remove('-translate-x-full');\n        overlay.classList.remove('hidden');\n    }\n}\n\nfunction closeSidebar() {\n    document.getElementById('sidebar').classList.add('-translate-x-full');\n    document.getElementById('sidebar-overlay').classList.add('hidden');\n}\n\ndocument.querySelectorAll('.menu-group > button').forEach(btn => {\n    btn.addEventListener('click', () => {\n        btn.parentElement.classList.toggle('open');\n    });\n});\n\ndocument.querySelectorAll('.sidebar-item').forEach(item => {\n    item.addEventListener('click', () => navigateTo(item.dataset.view));\n});\n\nwindow.addEventListener('resize', () => {\n    if (window.innerWidth >= 1024) {\n        document.getElementById('sidebar').classList.remove('-translate-x-full');\n        document.getElementById('sidebar-overlay').classList.add('hidden');\n    } else {\n        if (!document.getElementById('sidebar').classList.contains('-translate-x-full')) {\n            closeSidebar();\n        }\n    }\n});\n\n// =====================================================================\n// Markdown Renderer\n// =====================================================================\nfunction createMd() {\n    const md = window.markdownit({\n        html: false, breaks: true, linkify: true, typographer: true,\n        highlight: function(str, lang) {\n            if (lang && hljs.getLanguage(lang)) {\n                try { return hljs.highlight(str, { language: lang }).value; } catch (_) {}\n            }\n            return hljs.highlightAuto(str).value;\n        }\n    });\n    const defaultLinkOpen = md.renderer.rules.link_open || function(tokens, idx, options, env, self) {\n        return self.renderToken(tokens, idx, options);\n    };\n    md.renderer.rules.link_open = function(tokens, idx, options, env, self) {\n        tokens[idx].attrPush(['target', '_blank']);\n        tokens[idx].attrPush(['rel', 'noopener noreferrer']);\n        return defaultLinkOpen(tokens, idx, options, env, self);\n    };\n    return md;\n}\n\nconst md = createMd();\n\nfunction renderMarkdown(text) {\n    try { return md.render(text); }\n    // Fallback: escape the raw text before inserting line breaks so a renderer\n    // failure can never inject unescaped HTML into the page.\n    catch (e) { return escapeHtml(text).replace(/\\n/g, '<br>'); }\n}\n\n// =====================================================================\n// Chat Module\n// =====================================================================\nlet isPolling = false;\nlet loadingContainers = {};\nlet activeStreams = {};   // request_id -> EventSource\nlet isComposing = false;\nlet appConfig = { use_agent: false, title: 'CowAgent', subtitle: '', providers: {}, api_bases: {} };\n\nconst SESSION_ID_KEY = 'cow_session_id';\n\n// Generates a 'session_'-prefixed RFC 4122 v4 UUID, sourcing randomness from crypto.getRandomValues.\nfunction generateSessionId() {\n    return 'session_' + ([1e7]+-1e3+-4e3+-8e3+-1e11).replace(/[018]/g, c =>\n        (c ^ crypto.getRandomValues(new Uint8Array(1))[0] & 15 >> c / 4).toString(16)\n    );\n}\n\n// Restore session_id from localStorage so conversation history survives page refresh.\n// A new id is only generated when the user explicitly starts a new chat.\nfunction loadOrCreateSessionId() {\n    const stored = localStorage.getItem(SESSION_ID_KEY);\n    if (stored) return stored;\n    const fresh = generateSessionId();\n    localStorage.setItem(SESSION_ID_KEY, fresh);\n    return fresh;\n}\n\nlet sessionId = loadOrCreateSessionId();\n\n// ---- Conversation history state ----\nlet historyPage = 0;       // last page fetched (0 = nothing fetched yet)\nlet historyHasMore = 
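    /* true while the server still reports older pages beyond the last one fetched */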
false;\nlet historyLoading = false;\n\nfetch('/config').then(r => r.json()).then(data => {\n    if (data.status === 'success') {\n        appConfig = data;\n        const title = data.title || 'CowAgent';\n        document.getElementById('welcome-title').textContent = title;\n        initConfigView(data);\n    }\n    loadHistory(1);\n}).catch(() => { loadHistory(1); });\n\nconst chatInput = document.getElementById('chat-input');\nconst sendBtn = document.getElementById('send-btn');\nconst messagesDiv = document.getElementById('chat-messages');\nconst fileInput = document.getElementById('file-input');\nconst attachmentPreview = document.getElementById('attachment-preview');\n\n// Pending attachments: [{file_path, file_name, file_type, preview_url}]\n// Items with _uploading=true are still in flight.\nlet pendingAttachments = [];\nlet uploadingCount = 0;\n\nfunction updateSendBtnState() {\n    sendBtn.disabled = uploadingCount > 0 || (!chatInput.value.trim() && pendingAttachments.length === 0);\n}\n\nfunction renderAttachmentPreview() {\n    if (pendingAttachments.length === 0) {\n        attachmentPreview.classList.add('hidden');\n        attachmentPreview.innerHTML = '';\n        updateSendBtnState();\n        return;\n    }\n    attachmentPreview.classList.remove('hidden');\n    attachmentPreview.innerHTML = pendingAttachments.map((att, idx) => {\n        if (att._uploading) {\n            return `<div class=\"att-chip att-uploading\" data-idx=\"${idx}\">\n                <i class=\"fas fa-spinner fa-spin\"></i>\n                <span class=\"att-name\">${escapeHtml(att.file_name)}</span>\n            </div>`;\n        }\n        if (att.file_type === 'image') {\n            return `<div class=\"att-thumb\" data-idx=\"${idx}\">\n                <img src=\"${att.preview_url}\" alt=\"${escapeHtml(att.file_name)}\">\n                <button class=\"att-remove\" onclick=\"removeAttachment(${idx})\">&times;</button>\n            </div>`;\n        }\n        const icon = att.file_type === 'video' ? 
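        /* videos get a film icon; any other non-image attachment a generic file icon */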
'fa-film' : 'fa-file-alt';\n        return `<div class=\"att-chip\" data-idx=\"${idx}\">\n            <i class=\"fas ${icon}\"></i>\n            <span class=\"att-name\">${escapeHtml(att.file_name)}</span>\n            <button class=\"att-remove\" onclick=\"removeAttachment(${idx})\">&times;</button>\n        </div>`;\n    }).join('');\n    updateSendBtnState();\n}\n\nfunction removeAttachment(idx) {\n    if (pendingAttachments[idx]?._uploading) return;\n    pendingAttachments.splice(idx, 1);\n    renderAttachmentPreview();\n}\n\nasync function handleFileSelect(files) {\n    if (!files || files.length === 0) return;\n    const tasks = [];\n    for (const file of files) {\n        const placeholder = { file_name: file.name, file_type: 'file', _uploading: true };\n        pendingAttachments.push(placeholder);\n        uploadingCount++;\n        renderAttachmentPreview();\n\n        tasks.push((async () => {\n            const formData = new FormData();\n            formData.append('file', file);\n            formData.append('session_id', sessionId);\n            try {\n                const resp = await fetch('/upload', { method: 'POST', body: formData });\n                const data = await resp.json();\n                if (data.status === 'success') {\n                    placeholder.file_path = data.file_path;\n                    placeholder.file_name = data.file_name;\n                    placeholder.file_type = data.file_type;\n                    placeholder.preview_url = data.preview_url;\n                    delete placeholder._uploading;\n                } else {\n                    const i = pendingAttachments.indexOf(placeholder);\n                    if (i !== -1) pendingAttachments.splice(i, 1);\n                }\n            } catch (e) {\n                console.error('Upload failed:', e);\n                const i = pendingAttachments.indexOf(placeholder);\n                if (i !== -1) pendingAttachments.splice(i, 1);\n            }\n            uploadingCount--;\n            renderAttachmentPreview();\n        })());\n    }\n    await Promise.all(tasks);\n}\n\nfileInput.addEventListener('change', function() {\n    handleFileSelect(this.files);\n    this.value = '';\n});\n\n// Drag-and-drop support on chat input area\nconst chatInputArea = chatInput.closest('.flex-shrink-0');\nchatInputArea.addEventListener('dragover', (e) => { e.preventDefault(); e.stopPropagation(); chatInputArea.classList.add('drag-over'); });\nchatInputArea.addEventListener('dragleave', (e) => { e.preventDefault(); e.stopPropagation(); chatInputArea.classList.remove('drag-over'); });\nchatInputArea.addEventListener('drop', (e) => {\n    e.preventDefault(); e.stopPropagation();\n    chatInputArea.classList.remove('drag-over');\n    if (e.dataTransfer.files.length) handleFileSelect(e.dataTransfer.files);\n});\n\n// Paste image support\nchatInput.addEventListener('paste', (e) => {\n    const items = e.clipboardData?.items;\n    if (!items) return;\n    const files = [];\n    for (const item of items) {\n        if (item.kind === 'file') {\n            files.push(item.getAsFile());\n        }\n    }\n    if (files.length) {\n        e.preventDefault();\n        handleFileSelect(files);\n    }\n});\n\nchatInput.addEventListener('compositionstart', () => { isComposing = true; });\nchatInput.addEventListener('compositionend', () => { setTimeout(() => { isComposing = false; }, 100); });\n\nchatInput.addEventListener('input', function() {\n    this.style.height = '42px';\n    const scrollH = 
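    /* grow with the content up to 180px, then let the textarea scroll internally */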
this.scrollHeight;\n    const newH = Math.min(scrollH, 180);\n    this.style.height = newH + 'px';\n    this.style.overflowY = scrollH > 180 ? 'auto' : 'hidden';\n    updateSendBtnState();\n});\n\nchatInput.addEventListener('keydown', function(e) {\n    // keyCode 229 indicates an IME is processing the keystroke (reliable across browsers)\n    if (e.keyCode === 229 || e.isComposing || isComposing) return;\n    if ((e.ctrlKey || e.shiftKey) && e.key === 'Enter') {\n        const start = this.selectionStart;\n        const end = this.selectionEnd;\n        this.value = this.value.substring(0, start) + '\\n' + this.value.substring(end);\n        this.selectionStart = this.selectionEnd = start + 1;\n        this.dispatchEvent(new Event('input'));\n        e.preventDefault();\n    } else if (e.key === 'Enter' && !e.shiftKey && !e.ctrlKey) {\n        sendMessage();\n        e.preventDefault();\n    }\n});\n\ndocument.querySelectorAll('.example-card').forEach(card => {\n    card.addEventListener('click', () => {\n        const textEl = card.querySelector('[data-i18n*=\"text\"]');\n        if (textEl) {\n            chatInput.value = textEl.textContent;\n            chatInput.dispatchEvent(new Event('input'));\n            chatInput.focus();\n        }\n    });\n});\n\nfunction sendMessage() {\n    const text = chatInput.value.trim();\n    if (!text && pendingAttachments.length === 0) return;\n\n    const ws = document.getElementById('welcome-screen');\n    if (ws) ws.remove();\n\n    const timestamp = new Date();\n    const attachments = [...pendingAttachments];\n    addUserMessage(text, timestamp, attachments);\n\n    const loadingEl = addLoadingIndicator();\n\n    chatInput.value = '';\n    chatInput.style.height = '42px';\n    chatInput.style.overflowY = 'hidden';\n    pendingAttachments = [];\n    renderAttachmentPreview();\n    sendBtn.disabled = true;\n\n    const body = { session_id: sessionId, message: text, stream: true, timestamp: timestamp.toISOString() };\n    if (attachments.length > 0) {\n        body.attachments = attachments.map(a => ({\n            file_path: a.file_path,\n            file_name: a.file_name,\n            file_type: a.file_type,\n        }));\n    }\n\n    fetch('/message', {\n        method: 'POST',\n        headers: { 'Content-Type': 'application/json' },\n        body: JSON.stringify(body)\n    })\n    .then(r => r.json())\n    .then(data => {\n        if (data.status === 'success') {\n            if (data.stream) {\n                startSSE(data.request_id, loadingEl, timestamp);\n            } else {\n                loadingContainers[data.request_id] = loadingEl;\n                if (!isPolling) startPolling();\n            }\n        } else {\n            loadingEl.remove();\n            addBotMessage(t('error_send'), new Date());\n        }\n    })\n    .catch(err => {\n        loadingEl.remove();\n        addBotMessage(err.name === 'AbortError' ? 
t('error_timeout') : t('error_send'), new Date());\n    });\n}\n\nfunction startSSE(requestId, loadingEl, timestamp) {\n    const es = new EventSource(`/stream?request_id=${encodeURIComponent(requestId)}`);\n    activeStreams[requestId] = es;\n\n    let botEl = null;\n    let stepsEl = null;    // .agent-steps  (thinking summaries + tool indicators)\n    let contentEl = null;  // .answer-content (final streaming answer)\n    let accumulatedText = '';\n    let currentToolEl = null;\n\n    function ensureBotEl() {\n        if (botEl) return;\n        if (loadingEl) { loadingEl.remove(); loadingEl = null; }\n        botEl = document.createElement('div');\n        botEl.className = 'flex gap-3 px-4 sm:px-6 py-3';\n        botEl.dataset.requestId = requestId;\n        botEl.innerHTML = `\n            <img src=\"assets/logo.jpg\" alt=\"CowAgent\" class=\"w-8 h-8 rounded-lg flex-shrink-0\">\n            <div class=\"min-w-0 flex-1 max-w-[85%]\">\n                <div class=\"bg-white dark:bg-[#1A1A1A] border border-slate-200 dark:border-white/10 rounded-2xl px-4 py-3 text-sm leading-relaxed msg-content text-slate-700 dark:text-slate-200\">\n                    <div class=\"agent-steps\"></div>\n                    <div class=\"answer-content sse-streaming\"></div>\n                </div>\n                <div class=\"text-xs text-slate-400 dark:text-slate-500 mt-1.5\">${formatTime(timestamp)}</div>\n            </div>\n        `;\n        messagesDiv.appendChild(botEl);\n        stepsEl = botEl.querySelector('.agent-steps');\n        contentEl = botEl.querySelector('.answer-content');\n    }\n\n    es.onmessage = function(e) {\n        let item;\n        try { item = JSON.parse(e.data); } catch (_) { return; }\n\n        if (item.type === 'delta') {\n            ensureBotEl();\n            accumulatedText += item.content;\n            contentEl.innerHTML = renderMarkdown(accumulatedText);\n            scrollChatToBottom();\n\n        } else if (item.type === 'tool_start') {\n            ensureBotEl();\n\n            // Save current thinking as a collapsible step\n            if (accumulatedText.trim()) {\n                const fullText = accumulatedText.trim();\n                const oneLine = fullText.replace(/\\n+/g, ' ');\n                const needsTruncate = oneLine.length > 80;\n                const stepEl = document.createElement('div');\n                stepEl.className = 'agent-step agent-thinking-step' + (needsTruncate ? 
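                    /* summaries over 80 chars collapse behind an expandable header; short ones render inline */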
'' : ' no-expand');\n                if (needsTruncate) {\n                    const truncated = oneLine.substring(0, 80) + '…';\n                    stepEl.innerHTML = `\n                        <div class=\"thinking-header\" onclick=\"this.parentElement.classList.toggle('expanded')\">\n                            <i class=\"fas fa-lightbulb text-amber-400 flex-shrink-0\"></i>\n                            <span class=\"thinking-summary\">${escapeHtml(truncated)}</span>\n                            <i class=\"fas fa-chevron-right thinking-chevron\"></i>\n                        </div>\n                        <div class=\"thinking-full\">${renderMarkdown(fullText)}</div>`;\n                } else {\n                    stepEl.innerHTML = `\n                        <div class=\"thinking-header no-toggle\">\n                            <i class=\"fas fa-lightbulb text-amber-400 flex-shrink-0\"></i>\n                            <span>${escapeHtml(oneLine)}</span>\n                        </div>`;\n                }\n                stepsEl.appendChild(stepEl);\n            }\n            accumulatedText = '';\n            contentEl.innerHTML = '';\n\n            // Add tool execution indicator (collapsible); escape the tool name like renderToolCallsHtml does\n            currentToolEl = document.createElement('div');\n            currentToolEl.className = 'agent-step agent-tool-step';\n            const argsStr = formatToolArgs(item.arguments || {});\n            currentToolEl.innerHTML = `\n                <div class=\"tool-header\" onclick=\"this.parentElement.classList.toggle('expanded')\">\n                    <i class=\"fas fa-cog fa-spin text-primary-400 flex-shrink-0 tool-icon\"></i>\n                    <span class=\"tool-name\">${escapeHtml(item.tool || '')}</span>\n                    <i class=\"fas fa-chevron-right tool-chevron\"></i>\n                </div>\n                <div class=\"tool-detail\">\n                    <div class=\"tool-detail-section\">\n                        <div class=\"tool-detail-label\">Input</div>\n                        <pre class=\"tool-detail-content\">${argsStr}</pre>\n                    </div>\n                    <div class=\"tool-detail-section tool-output-section\"></div>\n                </div>`;\n            stepsEl.appendChild(currentToolEl);\n\n            scrollChatToBottom();\n\n        } else if (item.type === 'tool_end') {\n            if (currentToolEl) {\n                const isError = item.status !== 'success';\n                const icon = currentToolEl.querySelector('.tool-icon');\n                icon.className = isError\n                    ? 'fas fa-times text-red-400 flex-shrink-0 tool-icon'\n                    : 'fas fa-check text-primary-400 flex-shrink-0 tool-icon';\n\n                // Show execution time\n                const nameEl = currentToolEl.querySelector('.tool-name');\n                if (item.execution_time !== undefined) {\n                    nameEl.innerHTML += ` <span class=\"tool-time\">${item.execution_time}s</span>`;\n                }\n\n                // Fill output section\n                const outputSection = currentToolEl.querySelector('.tool-output-section');\n                if (outputSection && item.result) {\n                    outputSection.innerHTML = `\n                        <div class=\"tool-detail-label\">${isError ? 'Error' : 'Output'}</div>\n                        <pre class=\"tool-detail-content ${isError ? 
'tool-error-text' : ''}\">${escapeHtml(String(item.result))}</pre>`;\n                }\n\n                if (isError) currentToolEl.classList.add('tool-failed');\n                currentToolEl = null;\n            }\n\n        } else if (item.type === 'done') {\n            es.close();\n            delete activeStreams[requestId];\n\n            const finalText = item.content || accumulatedText;\n\n            if (!botEl && finalText) {\n                if (loadingEl) { loadingEl.remove(); loadingEl = null; }\n                addBotMessage(finalText, new Date((item.timestamp || Date.now() / 1000) * 1000), requestId);\n            } else if (botEl) {\n                contentEl.classList.remove('sse-streaming');\n                if (finalText) contentEl.innerHTML = renderMarkdown(finalText);\n                applyHighlighting(botEl);\n            }\n            scrollChatToBottom();\n\n        } else if (item.type === 'error') {\n            es.close();\n            delete activeStreams[requestId];\n            if (loadingEl) { loadingEl.remove(); loadingEl = null; }\n            addBotMessage(t('error_send'), new Date());\n        }\n    };\n\n    es.onerror = function() {\n        es.close();\n        delete activeStreams[requestId];\n        if (loadingEl) { loadingEl.remove(); loadingEl = null; }\n        if (!botEl) {\n            addBotMessage(t('error_send'), new Date());\n        } else if (accumulatedText) {\n            contentEl.classList.remove('sse-streaming');\n            contentEl.innerHTML = renderMarkdown(accumulatedText);\n            applyHighlighting(botEl);\n        }\n    };\n}\n\nfunction startPolling() {\n    if (isPolling) return;\n    isPolling = true;\n\n    function poll() {\n        if (!isPolling) return;\n        if (document.hidden) { setTimeout(poll, 5000); return; }\n\n        fetch('/poll', {\n            method: 'POST',\n            headers: { 'Content-Type': 'application/json' },\n            body: JSON.stringify({ session_id: sessionId })\n        })\n        .then(r => r.json())\n        .then(data => {\n            if (data.status === 'success' && data.has_content) {\n                const rid = data.request_id;\n                if (loadingContainers[rid]) {\n                    loadingContainers[rid].remove();\n                    delete loadingContainers[rid];\n                }\n                addBotMessage(data.content, new Date(data.timestamp * 1000), rid);\n                scrollChatToBottom();\n            }\n            setTimeout(poll, 2000);\n        })\n        .catch(() => { setTimeout(poll, 3000); });\n    }\n    poll();\n}\n\nfunction createUserMessageEl(content, timestamp, attachments) {\n    const el = document.createElement('div');\n    el.className = 'flex justify-end px-4 sm:px-6 py-3';\n\n    let attachHtml = '';\n    if (attachments && attachments.length > 0) {\n        const items = attachments.map(a => {\n            if (a.file_type === 'image') {\n                return `<img src=\"${a.preview_url}\" alt=\"${escapeHtml(a.file_name)}\" class=\"user-msg-image\">`;\n            }\n            const icon = a.file_type === 'video' ? 'fa-film' : 'fa-file-alt';\n            return `<div class=\"user-msg-file\"><i class=\"fas ${icon}\"></i> ${escapeHtml(a.file_name)}</div>`;\n        }).join('');\n        attachHtml = `<div class=\"user-msg-attachments\">${items}</div>`;\n    }\n\n    const textHtml = content ? 
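        /* content may be empty when the message carries only attachments */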
renderMarkdown(content) : '';\n    el.innerHTML = `\n        <div class=\"max-w-[75%] sm:max-w-[60%]\">\n            <div class=\"bg-primary-400 text-white rounded-2xl px-4 py-2.5 text-sm leading-relaxed msg-content\">\n                ${attachHtml}${textHtml}\n            </div>\n            <div class=\"text-xs text-slate-400 dark:text-slate-500 mt-1.5 text-right\">${formatTime(timestamp)}</div>\n        </div>\n    `;\n    return el;\n}\n\nfunction renderToolCallsHtml(toolCalls) {\n    if (!toolCalls || toolCalls.length === 0) return '';\n    return toolCalls.map(tc => {\n        const argsStr = formatToolArgs(tc.arguments || {});\n        const resultStr = tc.result ? escapeHtml(String(tc.result)) : '';\n        const hasResult = !!resultStr;\n        return `\n<div class=\"agent-step agent-tool-step\">\n    <div class=\"tool-header\" onclick=\"this.parentElement.classList.toggle('expanded')\">\n        <i class=\"fas fa-check text-primary-400 flex-shrink-0 tool-icon\"></i>\n        <span class=\"tool-name\">${escapeHtml(tc.name || '')}</span>\n        <i class=\"fas fa-chevron-right tool-chevron\"></i>\n    </div>\n    <div class=\"tool-detail\">\n        <div class=\"tool-detail-section\">\n            <div class=\"tool-detail-label\">Input</div>\n            <pre class=\"tool-detail-content\">${argsStr}</pre>\n        </div>\n        ${hasResult ? `\n        <div class=\"tool-detail-section tool-output-section\">\n            <div class=\"tool-detail-label\">Output</div>\n            <pre class=\"tool-detail-content\">${resultStr}</pre>\n        </div>` : ''}\n    </div>\n</div>`;\n    }).join('');\n}\n\nfunction createBotMessageEl(content, timestamp, requestId, toolCalls) {\n    const el = document.createElement('div');\n    el.className = 'flex gap-3 px-4 sm:px-6 py-3';\n    if (requestId) el.dataset.requestId = requestId;\n    const toolsHtml = renderToolCallsHtml(toolCalls);\n    el.innerHTML = `\n        <img src=\"assets/logo.jpg\" alt=\"CowAgent\" class=\"w-8 h-8 rounded-lg flex-shrink-0\">\n        <div class=\"min-w-0 flex-1 max-w-[85%]\">\n            <div class=\"bg-white dark:bg-[#1A1A1A] border border-slate-200 dark:border-white/10 rounded-2xl px-4 py-3 text-sm leading-relaxed msg-content text-slate-700 dark:text-slate-200\">\n                ${toolsHtml ? 
`<div class=\"agent-steps\">${toolsHtml}</div>` : ''}\n                <div class=\"answer-content\">${renderMarkdown(content)}</div>\n            </div>\n            <div class=\"text-xs text-slate-400 dark:text-slate-500 mt-1.5\">${formatTime(timestamp)}</div>\n        </div>\n    `;\n    applyHighlighting(el);\n    return el;\n}\n\nfunction addUserMessage(content, timestamp, attachments) {\n    const el = createUserMessageEl(content, timestamp, attachments);\n    messagesDiv.appendChild(el);\n    scrollChatToBottom();\n}\n\nfunction addBotMessage(content, timestamp, requestId) {\n    const el = createBotMessageEl(content, timestamp, requestId);\n    messagesDiv.appendChild(el);\n    scrollChatToBottom();\n}\n\n// Load conversation history from the server (page 1 = most recent messages).\n// Subsequent pages prepend older messages when the user scrolls to the top.\nfunction loadHistory(page) {\n    if (historyLoading) return;\n    historyLoading = true;\n\n    fetch(`/api/history?session_id=${encodeURIComponent(sessionId)}&page=${page}&page_size=20`)\n        .then(r => r.json())\n        .then(data => {\n            if (data.status !== 'success' || data.messages.length === 0) return;\n\n            const prevScrollHeight = messagesDiv.scrollHeight;\n            const isFirstLoad = page === 1;\n\n            // On first load, remove the welcome screen if history exists\n            if (isFirstLoad) {\n                const ws = document.getElementById('welcome-screen');\n                if (ws) ws.remove();\n            }\n\n            // Build a fragment of history message elements in chronological order\n            const fragment = document.createDocumentFragment();\n\n            if (data.has_more && page > 1) {\n                // Keep the \"load more\" sentinel in place (inserted below)\n            }\n\n            data.messages.forEach(msg => {\n                const hasContent = msg.content && msg.content.trim();\n                const hasToolCalls = msg.role === 'assistant' && msg.tool_calls && msg.tool_calls.length > 0;\n                if (!hasContent && !hasToolCalls) return;\n                const ts = new Date(msg.created_at * 1000);\n                const el = msg.role === 'user'\n                    ? createUserMessageEl(msg.content, ts)\n                    : createBotMessageEl(msg.content || '', ts, null, msg.tool_calls);\n                fragment.appendChild(el);\n            });\n\n            // Prepend history above any existing messages\n            const sentinel = document.getElementById('history-load-more');\n            const insertBefore = sentinel ? 
sentinel.nextSibling : messagesDiv.firstChild;\n            messagesDiv.insertBefore(fragment, insertBefore);\n\n            // Manage the \"load more\" sentinel at the very top\n            if (data.has_more) {\n                if (!document.getElementById('history-load-more')) {\n                    const btn = document.createElement('div');\n                    btn.id = 'history-load-more';\n                    btn.className = 'flex justify-center py-3';\n                    btn.innerHTML = `<button class=\"text-xs text-slate-400 dark:text-slate-500 hover:text-primary-400 transition-colors\" onclick=\"loadHistory(historyPage + 1)\">Load earlier messages</button>`;\n                    messagesDiv.insertBefore(btn, messagesDiv.firstChild);\n                }\n            } else if (sentinel) {\n                // reuse the sentinel looked up above instead of re-querying the DOM\n                sentinel.remove();\n            }\n\n            historyHasMore = data.has_more;\n            historyPage = page;\n\n            if (isFirstLoad) {\n                // Use requestAnimationFrame to ensure the DOM has fully rendered\n                // before scrolling, otherwise scrollHeight may not reflect new content.\n                requestAnimationFrame(() => scrollChatToBottom());\n            } else {\n                // Restore scroll position so loading older messages doesn't jump the view\n                messagesDiv.scrollTop = messagesDiv.scrollHeight - prevScrollHeight;\n            }\n        })\n        .catch(() => {})\n        .finally(() => { historyLoading = false; });\n}\n\nfunction addLoadingIndicator() {\n    const el = document.createElement('div');\n    el.className = 'flex gap-3 px-4 sm:px-6 py-3';\n    el.innerHTML = `\n        <img src=\"assets/logo.jpg\" alt=\"CowAgent\" class=\"w-8 h-8 rounded-lg flex-shrink-0\">\n        <div class=\"bg-white dark:bg-[#1A1A1A] border border-slate-200 dark:border-white/10 rounded-2xl px-4 py-3\">\n            <div class=\"flex items-center gap-1.5\">\n                <span class=\"w-2 h-2 rounded-full bg-primary-400 animate-pulse-dot\" style=\"animation-delay: 0s\"></span>\n                <span class=\"w-2 h-2 rounded-full bg-primary-400 animate-pulse-dot\" style=\"animation-delay: 0.2s\"></span>\n                <span class=\"w-2 h-2 rounded-full bg-primary-400 animate-pulse-dot\" style=\"animation-delay: 0.4s\"></span>\n            </div>\n        </div>\n    `;\n    messagesDiv.appendChild(el);\n    scrollChatToBottom();\n    return el;\n}\n\nfunction newChat() {\n    // Close all active SSE connections for the current session\n    Object.values(activeStreams).forEach(es => { try { es.close(); } catch (_) {} });\n    activeStreams = {};\n\n    // Generate a fresh session and persist it so the next page load also starts clean\n    sessionId = generateSessionId();\n    localStorage.setItem(SESSION_ID_KEY, sessionId);\n    isPolling = false;\n    loadingContainers = {};\n    messagesDiv.innerHTML = '';\n    const ws = document.createElement('div');\n    ws.id = 'welcome-screen';\n    ws.className = 'flex flex-col items-center justify-center h-full px-6 py-12';\n    ws.innerHTML = `\n        <img src=\"assets/logo.jpg\" alt=\"CowAgent\" class=\"w-16 h-16 rounded-2xl mb-6 shadow-lg shadow-primary-500/20\">\n        <h1 class=\"text-2xl font-bold text-slate-800 dark:text-slate-100 mb-3\">${appConfig.title || 'CowAgent'}</h1>\n        <p class=\"text-slate-500 dark:text-slate-400 text-center max-w-lg mb-10 leading-relaxed\" 
data-i18n=\"welcome_subtitle\">${t('welcome_subtitle')}</p>\n        <div class=\"grid grid-cols-1 sm:grid-cols-3 gap-4 w-full max-w-2xl\">\n            <div class=\"example-card group bg-white dark:bg-[#1A1A1A] border border-slate-200 dark:border-white/10 rounded-xl p-4 cursor-pointer hover:border-primary-300 dark:hover:border-primary-600 hover:shadow-md transition-all duration-200\">\n                <div class=\"flex items-center gap-2 mb-2\">\n                    <div class=\"w-7 h-7 rounded-lg bg-blue-50 dark:bg-blue-900/30 flex items-center justify-center\">\n                        <i class=\"fas fa-folder-open text-blue-500 text-xs\"></i>\n                    </div>\n                    <span class=\"font-medium text-sm text-slate-700 dark:text-slate-200\" data-i18n=\"example_sys_title\">${t('example_sys_title')}</span>\n                </div>\n                <p class=\"text-sm text-slate-500 dark:text-slate-400 leading-relaxed\" data-i18n=\"example_sys_text\">${t('example_sys_text')}</p>\n            </div>\n            <div class=\"example-card group bg-white dark:bg-[#1A1A1A] border border-slate-200 dark:border-white/10 rounded-xl p-4 cursor-pointer hover:border-primary-300 dark:hover:border-primary-600 hover:shadow-md transition-all duration-200\">\n                <div class=\"flex items-center gap-2 mb-2\">\n                    <div class=\"w-7 h-7 rounded-lg bg-amber-50 dark:bg-amber-900/30 flex items-center justify-center\">\n                        <i class=\"fas fa-clock text-amber-500 text-xs\"></i>\n                    </div>\n                    <span class=\"font-medium text-sm text-slate-700 dark:text-slate-200\" data-i18n=\"example_task_title\">${t('example_task_title')}</span>\n                </div>\n                <p class=\"text-sm text-slate-500 dark:text-slate-400 leading-relaxed\" data-i18n=\"example_task_text\">${t('example_task_text')}</p>\n            </div>\n            <div class=\"example-card group bg-white dark:bg-[#1A1A1A] border border-slate-200 dark:border-white/10 rounded-xl p-4 cursor-pointer hover:border-primary-300 dark:hover:border-primary-600 hover:shadow-md transition-all duration-200\">\n                <div class=\"flex items-center gap-2 mb-2\">\n                    <div class=\"w-7 h-7 rounded-lg bg-emerald-50 dark:bg-emerald-900/30 flex items-center justify-center\">\n                        <i class=\"fas fa-code text-emerald-500 text-xs\"></i>\n                    </div>\n                    <span class=\"font-medium text-sm text-slate-700 dark:text-slate-200\" data-i18n=\"example_code_title\">${t('example_code_title')}</span>\n                </div>\n                <p class=\"text-sm text-slate-500 dark:text-slate-400 leading-relaxed\" data-i18n=\"example_code_text\">${t('example_code_text')}</p>\n            </div>\n        </div>\n    `;\n    messagesDiv.appendChild(ws);\n    ws.querySelectorAll('.example-card').forEach(card => {\n        card.addEventListener('click', () => {\n            const textEl = card.querySelector('[data-i18n*=\"text\"]');\n            if (textEl) {\n                chatInput.value = textEl.textContent;\n                chatInput.dispatchEvent(new Event('input'));\n                chatInput.focus();\n            }\n        });\n    });\n    if (currentView !== 'chat') navigateTo('chat');\n}\n\n// =====================================================================\n// Utilities\n// =====================================================================\nfunction formatTime(date) {\n    return 
date.toLocaleTimeString([], { hour: '2-digit', minute: '2-digit' });\n}\n\nfunction escapeHtml(str) {\n    const div = document.createElement('div');\n    div.appendChild(document.createTextNode(str));\n    return div.innerHTML;\n}\n\nfunction ChannelsHandler_maskSecret(val) {\n    if (!val || val.length <= 8) return val;\n    return val.slice(0, 4) + '*'.repeat(val.length - 8) + val.slice(-4);\n}\n\nfunction formatToolArgs(args) {\n    if (!args || Object.keys(args).length === 0) return '(none)';\n    try {\n        return escapeHtml(JSON.stringify(args, null, 2));\n    } catch (_) {\n        return escapeHtml(String(args));\n    }\n}\n\nfunction scrollChatToBottom() {\n    messagesDiv.scrollTop = messagesDiv.scrollHeight;\n}\n\nfunction applyHighlighting(container) {\n    const root = container || document;\n    setTimeout(() => {\n        root.querySelectorAll('pre code').forEach(block => {\n            if (!block.classList.contains('hljs')) {\n                hljs.highlightElement(block);\n            }\n        });\n    }, 0);\n}\n\n// =====================================================================\n// Config View\n// =====================================================================\nlet configProviders = {};\nlet configApiBases = {};\nlet configApiKeys = {};\nlet configCurrentModel = '';\nlet cfgProviderValue = '';\nlet cfgModelValue = '';\n\n// --- Custom dropdown helper ---\nfunction initDropdown(el, options, selectedValue, onChange) {\n    const textEl = el.querySelector('.cfg-dropdown-text');\n    const menuEl = el.querySelector('.cfg-dropdown-menu');\n    const selEl = el.querySelector('.cfg-dropdown-selected');\n\n    el._ddValue = selectedValue || '';\n    el._ddOnChange = onChange;\n\n    function render() {\n        menuEl.innerHTML = '';\n        options.forEach(opt => {\n            const item = document.createElement('div');\n            item.className = 'cfg-dropdown-item' + (opt.value === el._ddValue ? ' active' : '');\n            item.textContent = opt.label;\n            item.dataset.value = opt.value;\n            item.addEventListener('click', (e) => {\n                e.stopPropagation();\n                el._ddValue = opt.value;\n                textEl.textContent = opt.label;\n                menuEl.querySelectorAll('.cfg-dropdown-item').forEach(i => i.classList.remove('active'));\n                item.classList.add('active');\n                el.classList.remove('open');\n                if (el._ddOnChange) el._ddOnChange(opt.value);\n            });\n            menuEl.appendChild(item);\n        });\n        const sel = options.find(o => o.value === el._ddValue);\n        textEl.textContent = sel ? sel.label : (options[0] ? 
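            /* fall back to the first option when the stored value is no longer offered */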
options[0].label : '--');\n        if (!sel && options[0]) el._ddValue = options[0].value;\n    }\n\n    render();\n\n    if (!el._ddBound) {\n        selEl.addEventListener('click', (e) => {\n            e.stopPropagation();\n            document.querySelectorAll('.cfg-dropdown.open').forEach(d => { if (d !== el) d.classList.remove('open'); });\n            el.classList.toggle('open');\n        });\n        el._ddBound = true;\n    }\n}\n\ndocument.addEventListener('click', () => {\n    document.querySelectorAll('.cfg-dropdown.open').forEach(d => d.classList.remove('open'));\n});\n\nfunction getDropdownValue(el) { return el._ddValue || ''; }\n\n// --- Config init ---\nfunction initConfigView(data) {\n    configProviders = data.providers || {};\n    configApiBases = data.api_bases || {};\n    configApiKeys = data.api_keys || {};\n    configCurrentModel = data.model || '';\n\n    const providerEl = document.getElementById('cfg-provider');\n    const providerOpts = Object.entries(configProviders).map(([pid, p]) => ({ value: pid, label: p.label }));\n\n    // if use_linkai is enabled, always select linkai as the provider\n    // Otherwise prefer bot_type from config, fall back to model-based detection\n    const detected = data.use_linkai ? 'linkai'\n        : (data.bot_type && configProviders[data.bot_type] ? data.bot_type : detectProvider(configCurrentModel));\n    cfgProviderValue = detected || (providerOpts[0] ? providerOpts[0].value : '');\n\n    initDropdown(providerEl, providerOpts, cfgProviderValue, onProviderChange);\n\n    onProviderChange(cfgProviderValue);\n    syncModelSelection(configCurrentModel);\n\n    document.getElementById('cfg-max-tokens').value = data.agent_max_context_tokens || 50000;\n    document.getElementById('cfg-max-turns').value = data.agent_max_context_turns || 30;\n    document.getElementById('cfg-max-steps').value = data.agent_max_steps || 15;\n}\n\nfunction detectProvider(model) {\n    if (!model) return Object.keys(configProviders)[0] || '';\n    for (const [pid, p] of Object.entries(configProviders)) {\n        if (pid === 'linkai') continue;\n        if (p.models && p.models.includes(model)) return pid;\n    }\n    return Object.keys(configProviders)[0] || '';\n}\n\nfunction onProviderChange(pid) {\n    cfgProviderValue = pid || getDropdownValue(document.getElementById('cfg-provider'));\n    const p = configProviders[cfgProviderValue];\n    if (!p) return;\n\n    const modelEl = document.getElementById('cfg-model-select');\n    const modelOpts = (p.models || []).map(m => ({ value: m, label: m }));\n    modelOpts.push({ value: '__custom__', label: t('config_custom_option') });\n\n    initDropdown(modelEl, modelOpts, modelOpts[0] ? modelOpts[0].value : '', onModelSelectChange);\n\n    // API Key\n    const keyField = p.api_key_field;\n    const keyWrap = document.getElementById('cfg-api-key-wrap');\n    const keyInput = document.getElementById('cfg-api-key');\n    if (keyField) {\n        keyWrap.classList.remove('hidden');\n        keyInput.classList.add('cfg-key-masked');\n        const maskedVal = configApiKeys[keyField] || '';\n        keyInput.value = maskedVal;\n        keyInput.dataset.field = keyField;\n        keyInput.dataset.masked = maskedVal ? 
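        /* '1' = the field still shows the server-masked placeholder rather than user input */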
'1' : '';\n        keyInput.dataset.maskedVal = maskedVal;\n        const toggleIcon = document.querySelector('#cfg-api-key-toggle i');\n        if (toggleIcon) toggleIcon.className = 'fas fa-eye text-xs';\n\n        if (!keyInput._cfgBound) {\n            keyInput.addEventListener('focus', function() {\n                if (this.dataset.masked === '1') {\n                    this.value = '';\n                    this.dataset.masked = '';\n                    this.classList.remove('cfg-key-masked');\n                }\n            });\n            keyInput.addEventListener('blur', function() {\n                if (!this.value.trim() && this.dataset.maskedVal) {\n                    this.value = this.dataset.maskedVal;\n                    this.dataset.masked = '1';\n                    this.classList.add('cfg-key-masked');\n                }\n            });\n            keyInput.addEventListener('input', function() {\n                this.dataset.masked = '';\n            });\n            keyInput._cfgBound = true;\n        }\n    } else {\n        keyWrap.classList.add('hidden');\n        keyInput.value = '';\n        keyInput.dataset.field = '';\n    }\n\n    // API Base\n    if (p.api_base_key) {\n        document.getElementById('cfg-api-base-wrap').classList.remove('hidden');\n        document.getElementById('cfg-api-base').value = configApiBases[p.api_base_key] || p.api_base_default || '';\n    } else {\n        document.getElementById('cfg-api-base-wrap').classList.add('hidden');\n        document.getElementById('cfg-api-base').value = '';\n    }\n\n    onModelSelectChange(modelOpts[0] ? modelOpts[0].value : '');\n}\n\nfunction onModelSelectChange(val) {\n    cfgModelValue = val || getDropdownValue(document.getElementById('cfg-model-select'));\n    const customWrap = document.getElementById('cfg-model-custom-wrap');\n    if (cfgModelValue === '__custom__') {\n        customWrap.classList.remove('hidden');\n        document.getElementById('cfg-model-custom').focus();\n    } else {\n        customWrap.classList.add('hidden');\n        document.getElementById('cfg-model-custom').value = '';\n    }\n}\n\nfunction syncModelSelection(model) {\n    const p = configProviders[cfgProviderValue];\n    if (!p) return;\n\n    const modelEl = document.getElementById('cfg-model-select');\n    if (p.models && p.models.includes(model)) {\n        const modelOpts = (p.models || []).map(m => ({ value: m, label: m }));\n        modelOpts.push({ value: '__custom__', label: t('config_custom_option') });\n        initDropdown(modelEl, modelOpts, model, onModelSelectChange);\n        cfgModelValue = model;\n        document.getElementById('cfg-model-custom-wrap').classList.add('hidden');\n    } else {\n        cfgModelValue = '__custom__';\n        const modelOpts = (p.models || []).map(m => ({ value: m, label: m }));\n        modelOpts.push({ value: '__custom__', label: t('config_custom_option') });\n        initDropdown(modelEl, modelOpts, '__custom__', onModelSelectChange);\n        document.getElementById('cfg-model-custom-wrap').classList.remove('hidden');\n        document.getElementById('cfg-model-custom').value = model;\n    }\n}\n\nfunction getSelectedModel() {\n    if (cfgModelValue === '__custom__') {\n        return document.getElementById('cfg-model-custom').value.trim();\n    }\n    return cfgModelValue;\n}\n\nfunction toggleApiKeyVisibility() {\n    const input = document.getElementById('cfg-api-key');\n    const icon = document.querySelector('#cfg-api-key-toggle i');\n    if 
(input.classList.contains('cfg-key-masked')) {\n        input.classList.remove('cfg-key-masked');\n        icon.className = 'fas fa-eye-slash text-xs';\n    } else {\n        input.classList.add('cfg-key-masked');\n        icon.className = 'fas fa-eye text-xs';\n    }\n}\n\nfunction showStatus(elId, msgKey, isError) {\n    const el = document.getElementById(elId);\n    el.textContent = t(msgKey);\n    el.classList.toggle('text-red-500', !!isError);\n    el.classList.toggle('text-primary-500', !isError);\n    el.classList.remove('opacity-0');\n    setTimeout(() => el.classList.add('opacity-0'), 2500);\n}\n\nfunction saveModelConfig() {\n    const model = getSelectedModel();\n    if (!model) return;\n\n    const updates = { model: model };\n    const p = configProviders[cfgProviderValue];\n    updates.use_linkai = (cfgProviderValue === 'linkai');\n    if (cfgProviderValue === 'linkai') {\n        updates.bot_type = '';\n    } else {\n        updates.bot_type = cfgProviderValue;\n    }\n    if (p && p.api_base_key) {\n        const base = document.getElementById('cfg-api-base').value.trim();\n        if (base) updates[p.api_base_key] = base;\n    }\n    if (p && p.api_key_field) {\n        const keyInput = document.getElementById('cfg-api-key');\n        const rawVal = keyInput.value.trim();\n        if (rawVal && keyInput.dataset.masked !== '1') {\n            updates[p.api_key_field] = rawVal;\n        }\n    }\n\n    const btn = document.getElementById('cfg-model-save');\n    btn.disabled = true;\n    fetch('/config', {\n        method: 'POST',\n        headers: { 'Content-Type': 'application/json' },\n        body: JSON.stringify({ updates })\n    })\n    .then(r => r.json())\n    .then(data => {\n        if (data.status === 'success') {\n            configCurrentModel = model;\n            if (data.applied) {\n                const keyInput = document.getElementById('cfg-api-key');\n                Object.entries(data.applied).forEach(([k, v]) => {\n                    if (k === 'model') return;\n                    if (k.includes('api_key')) {\n                        const masked = v.length > 8\n                            ? 
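                            /* keep the first/last 4 chars, mirroring ChannelsHandler_maskSecret above */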
v.substring(0, 4) + '*'.repeat(v.length - 8) + v.substring(v.length - 4)\n                            : v;\n                        configApiKeys[k] = masked;\n                        if (keyInput.dataset.field === k) {\n                            keyInput.value = masked;\n                            keyInput.dataset.masked = '1';\n                            keyInput.dataset.maskedVal = masked;\n                            keyInput.classList.add('cfg-key-masked');\n                            const toggleIcon = document.querySelector('#cfg-api-key-toggle i');\n                            if (toggleIcon) toggleIcon.className = 'fas fa-eye text-xs';\n                        }\n                    } else {\n                        configApiBases[k] = v;\n                    }\n                });\n            }\n            showStatus('cfg-model-status', 'config_saved', false);\n        } else {\n            showStatus('cfg-model-status', 'config_save_error', true);\n        }\n    })\n    .catch(() => showStatus('cfg-model-status', 'config_save_error', true))\n    .finally(() => { btn.disabled = false; });\n}\n\nfunction saveAgentConfig() {\n    const updates = {\n        agent_max_context_tokens: parseInt(document.getElementById('cfg-max-tokens').value) || 50000,\n        agent_max_context_turns: parseInt(document.getElementById('cfg-max-turns').value) || 30,\n        agent_max_steps: parseInt(document.getElementById('cfg-max-steps').value) || 15,\n    };\n\n    const btn = document.getElementById('cfg-agent-save');\n    btn.disabled = true;\n    fetch('/config', {\n        method: 'POST',\n        headers: { 'Content-Type': 'application/json' },\n        body: JSON.stringify({ updates })\n    })\n    .then(r => r.json())\n    .then(data => {\n        if (data.status === 'success') {\n            showStatus('cfg-agent-status', 'config_saved', false);\n        } else {\n            showStatus('cfg-agent-status', 'config_save_error', true);\n        }\n    })\n    .catch(() => showStatus('cfg-agent-status', 'config_save_error', true))\n    .finally(() => { btn.disabled = false; });\n}\n\nfunction loadConfigView() {\n    fetch('/config').then(r => r.json()).then(data => {\n        if (data.status !== 'success') return;\n        appConfig = data;\n        initConfigView(data);\n    }).catch(() => {});\n}\n\n// =====================================================================\n// Skills View\n// =====================================================================\nlet toolsLoaded = false;\n\nconst TOOL_ICONS = {\n    bash: 'fa-terminal',\n    edit: 'fa-pen-to-square',\n    read: 'fa-file-lines',\n    write: 'fa-file-pen',\n    ls: 'fa-folder-open',\n    send: 'fa-paper-plane',\n    web_search: 'fa-magnifying-glass',\n    browser: 'fa-globe',\n    env_config: 'fa-key',\n    scheduler: 'fa-clock',\n    memory_get: 'fa-brain',\n    memory_search: 'fa-brain',\n};\n\nfunction getToolIcon(name) {\n    return TOOL_ICONS[name] || 'fa-wrench';\n}\n\nfunction loadSkillsView() {\n    loadToolsSection();\n    loadSkillsSection();\n}\n\nfunction loadToolsSection() {\n    if (toolsLoaded) return;\n    const emptyEl = document.getElementById('tools-empty');\n    const listEl = document.getElementById('tools-list');\n    const badge = document.getElementById('tools-count-badge');\n\n    fetch('/api/tools').then(r => r.json()).then(data => {\n        if (data.status !== 'success') return;\n        const tools = data.tools || [];\n        emptyEl.classList.add('hidden');\n        if (tools.length === 0) 
{\n            emptyEl.classList.remove('hidden');\n            emptyEl.innerHTML = `<span class=\"text-sm text-slate-400 dark:text-slate-500\">${currentLang === 'zh' ? '暂无内置工具' : 'No built-in tools'}</span>`;\n            return;\n        }\n        badge.textContent = tools.length;\n        badge.classList.remove('hidden');\n        listEl.innerHTML = '';\n        tools.forEach(tool => {\n            const card = document.createElement('div');\n            card.className = 'bg-white dark:bg-[#1A1A1A] rounded-xl border border-slate-200 dark:border-white/10 p-4 flex items-start gap-3';\n            card.innerHTML = `\n                <div class=\"w-9 h-9 rounded-lg bg-blue-50 dark:bg-blue-900/20 flex items-center justify-center flex-shrink-0\">\n                    <i class=\"fas ${getToolIcon(tool.name)} text-blue-500 dark:text-blue-400 text-sm\"></i>\n                </div>\n                <div class=\"flex-1 min-w-0\">\n                    <div class=\"flex items-center gap-2\">\n                        <span class=\"font-medium text-sm text-slate-700 dark:text-slate-200 font-mono\">${escapeHtml(tool.name)}</span>\n                    </div>\n                    <p class=\"text-xs text-slate-400 dark:text-slate-500 mt-1 line-clamp-2\">${escapeHtml(tool.description || '--')}</p>\n                </div>`;\n            listEl.appendChild(card);\n        });\n        listEl.classList.remove('hidden');\n        toolsLoaded = true;\n    }).catch(() => {\n        emptyEl.classList.remove('hidden');\n        emptyEl.innerHTML = `<span class=\"text-sm text-slate-400 dark:text-slate-500\">${currentLang === 'zh' ? '加载失败' : 'Failed to load'}</span>`;\n    });\n}\n\nfunction loadSkillsSection() {\n    const emptyEl = document.getElementById('skills-empty');\n    const listEl = document.getElementById('skills-list');\n    const badge = document.getElementById('skills-count-badge');\n\n    fetch('/api/skills').then(r => r.json()).then(data => {\n        if (data.status !== 'success') return;\n        const skills = data.skills || [];\n        if (skills.length === 0) {\n            const p = emptyEl.querySelector('p');\n            if (p) p.textContent = currentLang === 'zh' ? '暂无技能' : 'No skills found';\n            return;\n        }\n        badge.textContent = skills.length;\n        badge.classList.remove('hidden');\n        emptyEl.classList.add('hidden');\n        listEl.innerHTML = '';\n\n        skills.forEach(sk => {\n            const card = document.createElement('div');\n            card.className = 'bg-white dark:bg-[#1A1A1A] rounded-xl border border-slate-200 dark:border-white/10 p-4 flex items-start gap-3 transition-opacity';\n            card.dataset.skillName = sk.name;\n            card.dataset.skillDesc = sk.description || '';\n            card.dataset.enabled = sk.enabled ? '1' : '0';\n            renderSkillCard(card, sk);\n            listEl.appendChild(card);\n        });\n    }).catch(() => {});\n}\n\nfunction renderSkillCard(card, sk) {\n    const enabled = sk.enabled;\n    const iconColor = enabled ? 'text-primary-400' : 'text-slate-300 dark:text-slate-600';\n    const trackClass = enabled\n        ? 'bg-primary-400'\n        : 'bg-slate-200 dark:bg-slate-700';\n    const thumbTranslate = enabled ? 
'translate-x-3' : 'translate-x-0.5';\n    card.innerHTML = `\n        <div class=\"w-9 h-9 rounded-lg bg-amber-50 dark:bg-amber-900/20 flex items-center justify-center flex-shrink-0\">\n            <i class=\"fas fa-bolt ${iconColor} text-sm\"></i>\n        </div>\n        <div class=\"flex-1 min-w-0\">\n            <div class=\"flex items-center gap-2 mb-1\">\n                <span class=\"font-medium text-sm text-slate-700 dark:text-slate-200 truncate flex-1\">${escapeHtml(sk.name)}</span>\n                <button\n                    role=\"switch\"\n                    aria-checked=\"${enabled}\"\n                    onclick=\"toggleSkill('${escapeHtml(sk.name)}', ${enabled})\"\n                    class=\"relative inline-flex h-4 w-7 flex-shrink-0 cursor-pointer rounded-full transition-colors duration-200 ease-in-out focus:outline-none ${trackClass}\"\n                    title=\"${enabled ? (currentLang === 'zh' ? '点击禁用' : 'Click to disable') : (currentLang === 'zh' ? '点击启用' : 'Click to enable')}\"\n                >\n                    <span class=\"inline-block h-3 w-3 mt-0.5 rounded-full bg-white shadow transform transition-transform duration-200 ease-in-out ${thumbTranslate}\"></span>\n                </button>\n            </div>\n            <p class=\"text-xs text-slate-400 dark:text-slate-500 line-clamp-2\">${escapeHtml(sk.description || '--')}</p>\n        </div>`;\n}\n\nfunction toggleSkill(name, currentlyEnabled) {\n    const action = currentlyEnabled ? 'close' : 'open';\n    const card = document.querySelector(`[data-skill-name=\"${CSS.escape(name)}\"]`);\n    if (card) card.style.opacity = '0.5';\n\n    fetch('/api/skills', {\n        method: 'POST',\n        headers: { 'Content-Type': 'application/json' },\n        body: JSON.stringify({ action, name })\n    })\n    .then(r => r.json())\n    .then(data => {\n        if (data.status === 'success') {\n            if (card) {\n                const desc = card.dataset.skillDesc || '';\n                card.dataset.enabled = currentlyEnabled ? '0' : '1';\n                card.style.opacity = '1';\n                renderSkillCard(card, { name, description: desc, enabled: !currentlyEnabled });\n            }\n        } else {\n            if (card) card.style.opacity = '1';\n            alert(currentLang === 'zh' ? '操作失败，请稍后再试' : 'Operation failed, please try again');\n        }\n    })\n    .catch(() => {\n        if (card) card.style.opacity = '1';\n        alert(currentLang === 'zh' ? '操作失败，请稍后再试' : 'Operation failed, please try again');\n    });\n}\n\n// =====================================================================\n// Memory View\n// =====================================================================\nlet memoryPage = 1;\nconst memoryPageSize = 10;\n\nfunction loadMemoryView(page) {\n    page = page || 1;\n    memoryPage = page;\n    fetch(`/api/memory?page=${page}&page_size=${memoryPageSize}`).then(r => r.json()).then(data => {\n        if (data.status !== 'success') return;\n        const emptyEl = document.getElementById('memory-empty');\n        const listEl = document.getElementById('memory-list');\n        const files = data.list || [];\n        const total = data.total || 0;\n\n        if (total === 0) {\n            emptyEl.querySelector('p').textContent = currentLang === 'zh' ? 
'暂无记忆文件' : 'No memory files';\n            emptyEl.classList.remove('hidden');\n            listEl.classList.add('hidden');\n            return;\n        }\n        emptyEl.classList.add('hidden');\n        listEl.classList.remove('hidden');\n\n        const tbody = document.getElementById('memory-table-body');\n        tbody.innerHTML = '';\n        files.forEach(f => {\n            const tr = document.createElement('tr');\n            tr.className = 'border-b border-slate-100 dark:border-white/5 hover:bg-slate-50 dark:hover:bg-white/5 cursor-pointer transition-colors';\n            tr.onclick = () => openMemoryFile(f.filename);\n            const typeLabel = f.type === 'global'\n                ? '<span class=\"px-2 py-0.5 rounded-full text-xs bg-primary-50 dark:bg-primary-900/30 text-primary-600 dark:text-primary-400\">Global</span>'\n                : '<span class=\"px-2 py-0.5 rounded-full text-xs bg-blue-50 dark:bg-blue-900/30 text-blue-600 dark:text-blue-400\">Daily</span>';\n            const sizeStr = f.size < 1024 ? f.size + ' B' : (f.size / 1024).toFixed(1) + ' KB';\n            tr.innerHTML = `\n                <td class=\"px-4 py-3 text-sm font-mono text-slate-700 dark:text-slate-200\">${escapeHtml(f.filename)}</td>\n                <td class=\"px-4 py-3 text-sm\">${typeLabel}</td>\n                <td class=\"px-4 py-3 text-sm text-slate-500 dark:text-slate-400\">${sizeStr}</td>\n                <td class=\"px-4 py-3 text-sm text-slate-500 dark:text-slate-400\">${escapeHtml(f.updated_at)}</td>`;\n            tbody.appendChild(tr);\n        });\n\n        // Pagination\n        const totalPages = Math.ceil(total / memoryPageSize);\n        const pagEl = document.getElementById('memory-pagination');\n        if (totalPages <= 1) { pagEl.innerHTML = ''; return; }\n        let pagHtml = `<span>${page} / ${totalPages}</span><div class=\"flex gap-2\">`;\n        if (page > 1) pagHtml += `<button onclick=\"loadMemoryView(${page - 1})\" class=\"px-3 py-1 rounded-lg border border-slate-200 dark:border-white/10 hover:bg-slate-100 dark:hover:bg-white/10 text-xs\">Prev</button>`;\n        if (page < totalPages) pagHtml += `<button onclick=\"loadMemoryView(${page + 1})\" class=\"px-3 py-1 rounded-lg border border-slate-200 dark:border-white/10 hover:bg-slate-100 dark:hover:bg-white/10 text-xs\">Next</button>`;\n        pagHtml += '</div>';\n        pagEl.innerHTML = pagHtml;\n    }).catch(() => {});\n}\n\nfunction openMemoryFile(filename) {\n    fetch(`/api/memory/content?filename=${encodeURIComponent(filename)}`).then(r => r.json()).then(data => {\n        if (data.status !== 'success') return;\n        document.getElementById('memory-panel-list').classList.add('hidden');\n        const panel = document.getElementById('memory-panel-viewer');\n        document.getElementById('memory-viewer-title').textContent = filename;\n        document.getElementById('memory-viewer-content').innerHTML = renderMarkdown(data.content || '');\n        panel.classList.remove('hidden');\n        applyHighlighting(panel);\n    }).catch(() => {});\n}\n\nfunction closeMemoryViewer() {\n    document.getElementById('memory-panel-viewer').classList.add('hidden');\n    document.getElementById('memory-panel-list').classList.remove('hidden');\n}\n\n// =====================================================================\n// Custom Confirm Dialog\n// =====================================================================\nfunction showConfirmDialog({ title, message, okText, cancelText, onConfirm }) {\n    const overlay 
= document.getElementById('confirm-dialog-overlay');\n    document.getElementById('confirm-dialog-title').textContent = title || '';\n    document.getElementById('confirm-dialog-message').textContent = message || '';\n    document.getElementById('confirm-dialog-ok').textContent = okText || 'OK';\n    document.getElementById('confirm-dialog-cancel').textContent = cancelText || t('channels_cancel');\n\n    function cleanup() {\n        overlay.classList.add('hidden');\n        okBtn.removeEventListener('click', onOk);\n        cancelBtn.removeEventListener('click', onCancel);\n        overlay.removeEventListener('click', onOverlayClick);\n    }\n    function onOk() { cleanup(); if (onConfirm) onConfirm(); }\n    function onCancel() { cleanup(); }\n    function onOverlayClick(e) { if (e.target === overlay) cleanup(); }\n\n    const okBtn = document.getElementById('confirm-dialog-ok');\n    const cancelBtn = document.getElementById('confirm-dialog-cancel');\n    okBtn.addEventListener('click', onOk);\n    cancelBtn.addEventListener('click', onCancel);\n    overlay.addEventListener('click', onOverlayClick);\n    overlay.classList.remove('hidden');\n}\n\n// =====================================================================\n// Channels View\n// =====================================================================\nlet channelsData = [];\n\nfunction loadChannelsView() {\n    const container = document.getElementById('channels-content');\n    container.innerHTML = `<div class=\"flex items-center gap-2 py-8 justify-center text-slate-400 dark:text-slate-500 text-sm\">\n        <i class=\"fas fa-spinner fa-spin text-xs\"></i><span>Loading...</span></div>`;\n\n    fetch('/api/channels').then(r => r.json()).then(data => {\n        if (data.status !== 'success') return;\n        channelsData = data.channels || [];\n        renderActiveChannels();\n    }).catch(() => {\n        container.innerHTML = '<p class=\"text-sm text-red-400 py-8 text-center\">Failed to load channels</p>';\n    });\n}\n\nfunction renderActiveChannels() {\n    const container = document.getElementById('channels-content');\n    container.innerHTML = '';\n    closeAddChannelPanel();\n\n    const activeChannels = channelsData.filter(ch => ch.active);\n\n    if (activeChannels.length === 0) {\n        container.innerHTML = `\n            <div class=\"flex flex-col items-center justify-center py-20\">\n                <div class=\"w-16 h-16 rounded-2xl bg-blue-50 dark:bg-blue-900/20 flex items-center justify-center mb-4\">\n                    <i class=\"fas fa-tower-broadcast text-blue-400 text-xl\"></i>\n                </div>\n                <p class=\"text-slate-500 dark:text-slate-400 font-medium\">${t('channels_empty')}</p>\n                <p class=\"text-sm text-slate-400 dark:text-slate-500 mt-1\">${t('channels_empty_desc')}</p>\n            </div>`;\n        return;\n    }\n\n    activeChannels.forEach(ch => {\n        const label = (typeof ch.label === 'object') ? 
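// ch.label is a {zh, en} map for built-in channels (see CHANNEL_DEFS server-side); fall back to English
            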
(ch.label[currentLang] || ch.label.en) : ch.label;\n        const card = document.createElement('div');\n        card.className = 'bg-white dark:bg-[#1A1A1A] rounded-xl border border-slate-200 dark:border-white/10 p-6';\n        card.id = `channel-card-${ch.name}`;\n\n        const fieldsHtml = buildChannelFieldsHtml(ch.name, ch.fields || []);\n\n        card.innerHTML = `\n            <div class=\"flex items-center gap-4 mb-5\">\n                <div class=\"w-10 h-10 rounded-xl bg-${ch.color}-50 dark:bg-${ch.color}-900/20 flex items-center justify-center flex-shrink-0\">\n                    <i class=\"fas ${ch.icon} text-${ch.color}-500 text-base\"></i>\n                </div>\n                <div class=\"flex-1 min-w-0\">\n                    <div class=\"flex items-center gap-2\">\n                        <span class=\"font-semibold text-slate-800 dark:text-slate-100\">${escapeHtml(label)}</span>\n                        <span class=\"w-2 h-2 rounded-full bg-primary-400\"></span>\n                        <span class=\"text-xs text-primary-500\">${t('channels_connected')}</span>\n                    </div>\n                    <p class=\"text-xs text-slate-500 dark:text-slate-400 mt-0.5 font-mono\">${escapeHtml(ch.name)}</p>\n                </div>\n                <button onclick=\"disconnectChannel('${ch.name}')\"\n                    class=\"px-3 py-1.5 rounded-lg text-xs font-medium\n                           bg-red-50 dark:bg-red-900/20 text-red-500 dark:text-red-400\n                           hover:bg-red-100 dark:hover:bg-red-900/40\n                           cursor-pointer transition-colors flex-shrink-0\">\n                    ${t('channels_disconnect')}\n                </button>\n            </div>\n            <div class=\"space-y-4\">\n                ${fieldsHtml}\n                <div class=\"flex items-center justify-end gap-3 pt-1\">\n                    <span id=\"ch-status-${ch.name}\" class=\"text-xs text-primary-500 opacity-0 transition-opacity duration-300\"></span>\n                    <button onclick=\"saveChannelConfig('${ch.name}')\"\n                        class=\"px-4 py-2 rounded-lg bg-primary-500 hover:bg-primary-600 text-white text-sm font-medium\n                               cursor-pointer transition-colors duration-150 disabled:opacity-50 disabled:cursor-not-allowed\"\n                        id=\"ch-save-${ch.name}\">${t('channels_save')}</button>\n                </div>\n            </div>`;\n\n        container.appendChild(card);\n        bindSecretFieldEvents(card);\n    });\n}\n\nfunction buildChannelFieldsHtml(chName, fields) {\n    let html = '';\n    fields.forEach(f => {\n        const inputId = `ch-${chName}-${f.key}`;\n        let inputHtml = '';\n        if (f.type === 'bool') {\n            const checked = f.value ? 
'checked' : '';\n            inputHtml = `<label class=\"relative inline-flex items-center cursor-pointer\">\n                <input id=\"${inputId}\" type=\"checkbox\" ${checked} class=\"sr-only peer\" data-field=\"${f.key}\" data-ch=\"${chName}\">\n                <div class=\"w-9 h-5 bg-slate-200 dark:bg-slate-700 peer-checked:bg-primary-400 rounded-full\n                            after:content-[''] after:absolute after:top-[2px] after:left-[2px] after:bg-white\n                            after:rounded-full after:h-4 after:w-4 after:transition-all peer-checked:after:translate-x-full\"></div>\n            </label>`;\n        } else if (f.type === 'secret') {\n            inputHtml = `<input id=\"${inputId}\" type=\"text\" value=\"${escapeHtml(String(f.value || ''))}\"\n                data-field=\"${f.key}\" data-ch=\"${chName}\" data-masked=\"${f.value ? '1' : ''}\"\n                class=\"w-full px-3 py-2 rounded-lg border border-slate-200 dark:border-slate-600\n                       bg-slate-50 dark:bg-white/5 text-sm text-slate-800 dark:text-slate-100\n                       focus:outline-none focus:border-primary-500 font-mono transition-colors\n                       ${f.value ? 'cfg-key-masked' : ''}\"\n                placeholder=\"${escapeHtml(f.label)}\">`;\n        } else {\n            const inputType = f.type === 'number' ? 'number' : 'text';\n            inputHtml = `<input id=\"${inputId}\" type=\"${inputType}\" value=\"${escapeHtml(String(f.value ?? f.default ?? ''))}\"\n                data-field=\"${f.key}\" data-ch=\"${chName}\"\n                class=\"w-full px-3 py-2 rounded-lg border border-slate-200 dark:border-slate-600\n                       bg-slate-50 dark:bg-white/5 text-sm text-slate-800 dark:text-slate-100\n                       focus:outline-none focus:border-primary-500 font-mono transition-colors\"\n                placeholder=\"${escapeHtml(f.label)}\">`;\n        }\n        html += `<div>\n            <label class=\"block text-sm font-medium text-slate-600 dark:text-slate-400 mb-1.5\">${escapeHtml(f.label)}</label>\n            ${inputHtml}\n        </div>`;\n    });\n    return html;\n}\n\nfunction bindSecretFieldEvents(container) {\n    container.querySelectorAll('input[data-masked=\"1\"]').forEach(inp => {\n        inp.addEventListener('focus', function() {\n            if (this.dataset.masked === '1') {\n                this.value = '';\n                this.dataset.masked = '';\n                this.classList.remove('cfg-key-masked');\n            }\n        });\n    });\n}\n\nfunction showChannelStatus(chName, msgKey, isError) {\n    const el = document.getElementById(`ch-status-${chName}`);\n    if (!el) return;\n    el.textContent = t(msgKey);\n    el.classList.toggle('text-red-500', !!isError);\n    el.classList.toggle('text-primary-500', !isError);\n    el.classList.remove('opacity-0');\n    setTimeout(() => el.classList.add('opacity-0'), 2500);\n}\n\nfunction saveChannelConfig(chName) {\n    const card = document.getElementById(`channel-card-${chName}`);\n    if (!card) return;\n\n    const updates = {};\n    card.querySelectorAll('input[data-ch=\"' + chName + '\"]').forEach(inp => {\n        const key = inp.dataset.field;\n        if (inp.type === 'checkbox') {\n            updates[key] = inp.checked;\n        } else {\n            if (inp.dataset.masked === '1') return;\n            updates[key] = inp.value;\n        }\n    });\n\n    const btn = document.getElementById(`ch-save-${chName}`);\n    if (btn) btn.disabled = true;\n\n    
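// POST action 'save': the server persists valid fields to config.json and,
    // if this channel is currently active, hot-restarts it (the response's
    // `restarted` flag drives the status message below).
    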
fetch('/api/channels', {\n        method: 'POST',\n        headers: { 'Content-Type': 'application/json' },\n        body: JSON.stringify({ action: 'save', channel: chName, config: updates })\n    })\n    .then(r => r.json())\n    .then(data => {\n        if (data.status === 'success') {\n            showChannelStatus(chName, data.restarted ? 'channels_restarted' : 'channels_saved', false);\n        } else {\n            showChannelStatus(chName, 'channels_save_error', true);\n        }\n    })\n    .catch(() => showChannelStatus(chName, 'channels_save_error', true))\n    .finally(() => { if (btn) btn.disabled = false; });\n}\n\nfunction disconnectChannel(chName) {\n    const ch = channelsData.find(c => c.name === chName);\n    const label = ch ? ((typeof ch.label === 'object') ? (ch.label[currentLang] || ch.label.en) : ch.label) : chName;\n\n    showConfirmDialog({\n        title: t('channels_disconnect'),\n        message: t('channels_disconnect_confirm'),\n        okText: t('channels_disconnect'),\n        cancelText: t('channels_cancel'),\n        onConfirm: () => {\n            fetch('/api/channels', {\n                method: 'POST',\n                headers: { 'Content-Type': 'application/json' },\n                body: JSON.stringify({ action: 'disconnect', channel: chName })\n            })\n            .then(r => r.json())\n            .then(data => {\n                if (data.status === 'success') {\n                    if (ch) ch.active = false;\n                    renderActiveChannels();\n                }\n            })\n            .catch(() => {});\n        }\n    });\n}\n\n// --- Add channel panel ---\nfunction openAddChannelPanel() {\n    const panel = document.getElementById('channels-add-panel');\n    const activeNames = new Set(channelsData.filter(c => c.active).map(c => c.name));\n    const available = channelsData.filter(c => !activeNames.has(c.name));\n\n    if (available.length === 0) {\n        panel.innerHTML = `<div class=\"bg-white dark:bg-[#1A1A1A] rounded-xl border border-slate-200 dark:border-white/10 p-6 text-center\">\n            <p class=\"text-sm text-slate-500 dark:text-slate-400\">${currentLang === 'zh' ? '所有通道均已接入' : 'All channels are already connected'}</p>\n            <button onclick=\"closeAddChannelPanel()\" class=\"mt-3 text-xs text-slate-400 hover:text-slate-600 dark:hover:text-slate-300 cursor-pointer\">${t('channels_cancel')}</button>\n        </div>`;\n        panel.classList.remove('hidden');\n        return;\n    }\n\n    const ddOptions = [\n        { value: '', label: t('channels_select_placeholder') },\n        ...available.map(ch => {\n            const label = (typeof ch.label === 'object') ? 
(ch.label[currentLang] || ch.label.en) : ch.label;\n            return { value: ch.name, label: `${label} (${ch.name})` };\n        })\n    ];\n\n    panel.innerHTML = `\n        <div class=\"bg-white dark:bg-[#1A1A1A] rounded-xl border border-primary-200 dark:border-primary-800 p-6\">\n            <div class=\"flex items-center gap-3 mb-5\">\n                <div class=\"w-9 h-9 rounded-lg bg-primary-50 dark:bg-primary-900/30 flex items-center justify-center\">\n                    <i class=\"fas fa-plus text-primary-500 text-sm\"></i>\n                </div>\n                <h3 class=\"font-semibold text-slate-800 dark:text-slate-100\">${t('channels_add')}</h3>\n            </div>\n            <div class=\"mb-4\">\n                <div id=\"add-channel-select\" class=\"cfg-dropdown\" tabindex=\"0\">\n                    <div class=\"cfg-dropdown-selected\">\n                        <span class=\"cfg-dropdown-text\">--</span>\n                        <i class=\"fas fa-chevron-down cfg-dropdown-arrow\"></i>\n                    </div>\n                    <div class=\"cfg-dropdown-menu\"></div>\n                </div>\n            </div>\n            <div id=\"add-channel-fields\" class=\"space-y-4\"></div>\n            <div id=\"add-channel-actions\" class=\"hidden flex items-center justify-end gap-3 pt-4\">\n                <button onclick=\"closeAddChannelPanel()\"\n                    class=\"px-4 py-2 rounded-lg border border-slate-200 dark:border-white/10\n                           text-slate-600 dark:text-slate-300 text-sm font-medium\n                           hover:bg-slate-50 dark:hover:bg-white/5\n                           cursor-pointer transition-colors duration-150\">${t('channels_cancel')}</button>\n                <button id=\"add-channel-submit\" onclick=\"submitAddChannel()\"\n                    class=\"px-4 py-2 rounded-lg bg-primary-500 hover:bg-primary-600 text-white text-sm font-medium\n                           cursor-pointer transition-colors duration-150 disabled:opacity-50 disabled:cursor-not-allowed\">${t('channels_connect_btn')}</button>\n            </div>\n        </div>`;\n    panel.classList.remove('hidden');\n    panel.scrollIntoView({ behavior: 'smooth', block: 'nearest' });\n\n    const ddEl = document.getElementById('add-channel-select');\n    initDropdown(ddEl, ddOptions, '', onAddChannelSelect);\n}\n\nfunction closeAddChannelPanel() {\n    const panel = document.getElementById('channels-add-panel');\n    if (panel) {\n        panel.classList.add('hidden');\n        panel.innerHTML = '';\n    }\n}\n\nfunction onAddChannelSelect(chName) {\n    const fieldsContainer = document.getElementById('add-channel-fields');\n    const actions = document.getElementById('add-channel-actions');\n\n    if (!chName) {\n        fieldsContainer.innerHTML = '';\n        actions.classList.add('hidden');\n        return;\n    }\n\n    const ch = channelsData.find(c => c.name === chName);\n    if (!ch) return;\n\n    fieldsContainer.innerHTML = buildChannelFieldsHtml(chName, ch.fields || []);\n    bindSecretFieldEvents(fieldsContainer);\n    actions.classList.remove('hidden');\n}\n\nfunction submitAddChannel() {\n    const ddEl = document.getElementById('add-channel-select');\n    const chName = getDropdownValue(ddEl);\n    if (!chName) return;\n\n    const fieldsContainer = document.getElementById('add-channel-fields');\n    const updates = {};\n    fieldsContainer.querySelectorAll('input[data-ch=\"' + chName + '\"]').forEach(inp => {\n        const key = 
inp.dataset.field;\n        if (inp.type === 'checkbox') {\n            updates[key] = inp.checked;\n        } else {\n            if (inp.dataset.masked === '1') return;\n            updates[key] = inp.value;\n        }\n    });\n\n    const btn = document.getElementById('add-channel-submit');\n    if (btn) { btn.disabled = true; btn.textContent = t('channels_connecting'); }\n\n    fetch('/api/channels', {\n        method: 'POST',\n        headers: { 'Content-Type': 'application/json' },\n        body: JSON.stringify({ action: 'connect', channel: chName, config: updates })\n    })\n    .then(r => r.json())\n    .then(data => {\n        if (data.status === 'success') {\n            const ch = channelsData.find(c => c.name === chName);\n            if (ch) {\n                ch.active = true;\n                (ch.fields || []).forEach(f => {\n                    if (updates[f.key] !== undefined) {\n                        f.value = f.type === 'secret' ? maskSecretClient(updates[f.key]) : updates[f.key];\n                    }\n                });\n            }\n            renderActiveChannels();\n        } else {\n            if (btn) { btn.disabled = false; btn.textContent = t('channels_connect_btn'); }\n        }\n    })\n    .catch(() => {\n        if (btn) { btn.disabled = false; btn.textContent = t('channels_connect_btn'); }\n    });\n}\n\n// Mask the middle of a secret for local display, mirroring the server-side\n// ChannelsHandler._mask_secret rule (keep the first and last 4 characters).\nfunction maskSecretClient(v) {\n    if (!v || v.length <= 8) return v;\n    return v.substring(0, 4) + '*'.repeat(v.length - 8) + v.substring(v.length - 4);\n}\n\n// =====================================================================\n// Scheduler View\n// =====================================================================\nlet tasksLoaded = false;\nfunction loadTasksView() {\n    if (tasksLoaded) return;\n    fetch('/api/scheduler').then(r => r.json()).then(data => {\n        if (data.status !== 'success') return;\n        const emptyEl = document.getElementById('tasks-empty');\n        const listEl = document.getElementById('tasks-list');\n        const allTasks = data.tasks || [];\n        // Only show active (enabled) tasks; avoid shadowing the i18n helper t()\n        const tasks = allTasks.filter(task => task.enabled !== false);\n        if (tasks.length === 0) {\n            emptyEl.querySelector('p').textContent = currentLang === 'zh' ? '暂无定时任务' : 'No scheduled tasks';\n            return;\n        }\n        emptyEl.classList.add('hidden');\n        listEl.classList.remove('hidden');\n        listEl.innerHTML = '';\n\n        tasks.forEach(task => {\n            const card = document.createElement('div');\n            card.className = 'bg-white dark:bg-[#1A1A1A] rounded-xl border border-slate-200 dark:border-white/10 p-4';\n            const typeLabel = task.type === 'cron'\n                ? 
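// cron tasks show their cron expression; one-shot tasks show their type
                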
`<span class=\"text-xs font-mono text-slate-400\">${escapeHtml(task.cron || '')}</span>`\n                : `<span class=\"text-xs text-slate-400\">${escapeHtml(task.type || 'once')}</span>`;\n            let nextRun = '--';\n            if (task.next_run_at) {\n                // next_run_at is an ISO string, not a Unix timestamp\n                const d = new Date(task.next_run_at);\n                if (!isNaN(d.getTime())) nextRun = d.toLocaleString();\n            }\n            card.innerHTML = `\n                <div class=\"flex items-center gap-2 mb-2\">\n                    <span class=\"w-2 h-2 rounded-full bg-primary-400\"></span>\n                    <span class=\"font-medium text-sm text-slate-700 dark:text-slate-200\">${escapeHtml(task.name || task.id || '--')}</span>\n                    <div class=\"flex-1\"></div>\n                    ${typeLabel}\n                </div>\n                <p class=\"text-xs text-slate-500 dark:text-slate-400 mb-2 line-clamp-2\">${escapeHtml(task.prompt || task.description || '')}</p>\n                <div class=\"flex items-center gap-4 text-xs text-slate-400 dark:text-slate-500\">\n                    <span><i class=\"fas fa-clock mr-1\"></i>${currentLang === 'zh' ? '下次执行' : 'Next run'}: ${nextRun}</span>\n                </div>`;\n            listEl.appendChild(card);\n        });\n        tasksLoaded = true;\n    }).catch(() => {});\n}\n\n// =====================================================================\n// Logs View\n// =====================================================================\nlet logEventSource = null;\n\nfunction startLogStream() {\n    if (logEventSource) return;\n    const output = document.getElementById('log-output');\n    output.innerHTML = '';\n\n    logEventSource = new EventSource('/api/logs');\n    logEventSource.onmessage = function(e) {\n        let item;\n        try { item = JSON.parse(e.data); } catch (_) { return; }\n\n        if (item.type === 'init') {\n            output.textContent = item.content || '';\n            output.scrollTop = output.scrollHeight;\n        } else if (item.type === 'line') {\n            output.textContent += item.content;\n            output.scrollTop = output.scrollHeight;\n        } else if (item.type === 'error') {\n            output.textContent = item.message || 'Error loading logs';\n        }\n    };\n    logEventSource.onerror = function() {\n        logEventSource.close();\n        logEventSource = null;\n    };\n}\n\nfunction stopLogStream() {\n    if (logEventSource) {\n        logEventSource.close();\n        logEventSource = null;\n    }\n}\n\n// =====================================================================\n// View Navigation Hook\n// =====================================================================\nconst _origNavigateTo = navigateTo;\nnavigateTo = function(viewId) {\n    // Stop log stream when leaving logs view\n    if (currentView === 'logs' && viewId !== 'logs') stopLogStream();\n\n    _origNavigateTo(viewId);\n\n    // Lazy-load view data\n    if (viewId === 'config') loadConfigView();\n    else if (viewId === 'skills') loadSkillsView();\n    else if (viewId === 'memory') {\n        // Always start from the list panel when navigating to memory\n        document.getElementById('memory-panel-viewer').classList.add('hidden');\n        document.getElementById('memory-panel-list').classList.remove('hidden');\n        loadMemoryView(1);\n    }\n    else if (viewId === 'channels') loadChannelsView();\n    else if (viewId === 'tasks') 
loadTasksView();\n    else if (viewId === 'logs') startLogStream();\n};\n\n// =====================================================================\n// Initialization\n// =====================================================================\napplyTheme();\napplyI18n();\ndocument.getElementById('sidebar-version').textContent = `CowAgent ${APP_VERSION}`;\nchatInput.focus();\n\n// Re-enable color transition AFTER first paint so the theme applied in <head>\n// doesn't produce an animated flash on load.  The class is missing from the\n// body initially; adding it here means transitions only fire on user-triggered\n// theme toggles, not on page load.\nrequestAnimationFrame(() => {\n    document.body.classList.add('transition-colors', 'duration-200');\n});\n"
  },
  {
    "path": "channel/web/web_channel.py",
    "content": "import time\nimport json\nimport logging\nimport mimetypes\nimport os\nimport threading\nimport time\nimport uuid\nfrom queue import Queue, Empty\n\nimport web\n\nfrom bridge.context import *\nfrom bridge.reply import Reply, ReplyType\nfrom channel.chat_channel import ChatChannel, check_prefix\nfrom channel.chat_message import ChatMessage\nfrom collections import OrderedDict\nfrom common import const\nfrom common.log import logger\nfrom common.singleton import singleton\nfrom config import conf\n\nIMAGE_EXTENSIONS = {\".jpg\", \".jpeg\", \".png\", \".gif\", \".webp\", \".bmp\", \".svg\"}\nVIDEO_EXTENSIONS = {\".mp4\", \".webm\", \".avi\", \".mov\", \".mkv\"}\n\n\ndef _get_upload_dir() -> str:\n    from common.utils import expand_path\n    ws_root = expand_path(conf().get(\"agent_workspace\", \"~/cow\"))\n    tmp_dir = os.path.join(ws_root, \"tmp\")\n    os.makedirs(tmp_dir, exist_ok=True)\n    return tmp_dir\n\n\nclass WebMessage(ChatMessage):\n    def __init__(\n            self,\n            msg_id,\n            content,\n            ctype=ContextType.TEXT,\n            from_user_id=\"User\",\n            to_user_id=\"Chatgpt\",\n            other_user_id=\"Chatgpt\",\n    ):\n        self.msg_id = msg_id\n        self.ctype = ctype\n        self.content = content\n        self.from_user_id = from_user_id\n        self.to_user_id = to_user_id\n        self.other_user_id = other_user_id\n\n\n@singleton\nclass WebChannel(ChatChannel):\n    NOT_SUPPORT_REPLYTYPE = [ReplyType.VOICE]\n    _instance = None\n\n    # def __new__(cls):\n    #     if cls._instance is None:\n    #         cls._instance = super(WebChannel, cls).__new__(cls)\n    #     return cls._instance\n\n    def __init__(self):\n        super().__init__()\n        self.msg_id_counter = 0\n        self.session_queues = {}  # session_id -> Queue (fallback polling)\n        self.request_to_session = {}  # request_id -> session_id\n        self.sse_queues = {}  # request_id -> Queue (SSE streaming)\n        self._http_server = None\n\n    def _generate_msg_id(self):\n        \"\"\"生成唯一的消息ID\"\"\"\n        self.msg_id_counter += 1\n        return str(int(time.time())) + str(self.msg_id_counter)\n\n    def _generate_request_id(self):\n        \"\"\"生成唯一的请求ID\"\"\"\n        return str(uuid.uuid4())\n\n    def send(self, reply: Reply, context: Context):\n        try:\n            if reply.type in self.NOT_SUPPORT_REPLYTYPE:\n                logger.warning(f\"Web channel doesn't support {reply.type} yet\")\n                return\n\n            if reply.type == ReplyType.IMAGE_URL:\n                time.sleep(0.5)\n\n            request_id = context.get(\"request_id\", None)\n            if not request_id:\n                logger.error(\"No request_id found in context, cannot send message\")\n                return\n\n            session_id = self.request_to_session.get(request_id)\n            if not session_id:\n                logger.error(f\"No session_id found for request {request_id}\")\n                return\n\n            # SSE mode: push done event to SSE queue\n            if request_id in self.sse_queues:\n                content = reply.content if reply.content is not None else \"\"\n                self.sse_queues[request_id].put({\n                    \"type\": \"done\",\n                    \"content\": content,\n                    \"request_id\": request_id,\n                    \"timestamp\": time.time()\n                })\n                logger.debug(f\"SSE done sent for request {request_id}\")\n     
           return\n\n            # Fallback: polling mode\n            if session_id in self.session_queues:\n                response_data = {\n                    \"type\": str(reply.type),\n                    \"content\": reply.content,\n                    \"timestamp\": time.time(),\n                    \"request_id\": request_id\n                }\n                self.session_queues[session_id].put(response_data)\n                logger.debug(f\"Response sent to poll queue for session {session_id}, request {request_id}\")\n            else:\n                logger.warning(f\"No response queue found for session {session_id}, response dropped\")\n\n        except Exception as e:\n            logger.error(f\"Error in send method: {e}\")\n\n    def _make_sse_callback(self, request_id: str):\n        \"\"\"Build an on_event callback that pushes agent stream events into the SSE queue.\"\"\"\n\n        def on_event(event: dict):\n            if request_id not in self.sse_queues:\n                return\n            q = self.sse_queues[request_id]\n            event_type = event.get(\"type\")\n            data = event.get(\"data\", {})\n\n            if event_type == \"message_update\":\n                delta = data.get(\"delta\", \"\")\n                if delta:\n                    q.put({\"type\": \"delta\", \"content\": delta})\n\n            elif event_type == \"tool_execution_start\":\n                tool_name = data.get(\"tool_name\", \"tool\")\n                arguments = data.get(\"arguments\", {})\n                q.put({\"type\": \"tool_start\", \"tool\": tool_name, \"arguments\": arguments})\n\n            elif event_type == \"tool_execution_end\":\n                tool_name = data.get(\"tool_name\", \"tool\")\n                status = data.get(\"status\", \"success\")\n                result = data.get(\"result\", \"\")\n                exec_time = data.get(\"execution_time\", 0)\n                # Truncate long results to avoid huge SSE payloads\n                result_str = str(result)\n                if len(result_str) > 2000:\n                    result_str = result_str[:2000] + \"…\"\n                q.put({\n                    \"type\": \"tool_end\",\n                    \"tool\": tool_name,\n                    \"status\": status,\n                    \"result\": result_str,\n                    \"execution_time\": round(exec_time, 2)\n                })\n\n        return on_event\n\n    def upload_file(self):\n        \"\"\"Handle file upload via multipart/form-data. 
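Expects multipart fields \"file\" and an optional \"session_id\". 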
Save to workspace/tmp/ and return metadata.\"\"\"\n        try:\n            params = web.input(file={}, session_id=\"\")\n            file_obj = params.get(\"file\")\n            session_id = params.get(\"session_id\", \"\")\n            if file_obj is None or not hasattr(file_obj, \"filename\") or not file_obj.filename:\n                return json.dumps({\"status\": \"error\", \"message\": \"No file uploaded\"})\n\n            upload_dir = _get_upload_dir()\n\n            original_name = file_obj.filename\n            ext = os.path.splitext(original_name)[1].lower()\n            safe_name = f\"web_{uuid.uuid4().hex[:8]}{ext}\"\n            save_path = os.path.join(upload_dir, safe_name)\n\n            with open(save_path, \"wb\") as f:\n                f.write(file_obj.read() if hasattr(file_obj, \"read\") else file_obj.value)\n\n            if ext in IMAGE_EXTENSIONS:\n                file_type = \"image\"\n            elif ext in VIDEO_EXTENSIONS:\n                file_type = \"video\"\n            else:\n                file_type = \"file\"\n\n            preview_url = f\"/uploads/{safe_name}\"\n\n            logger.info(f\"[WebChannel] File uploaded: {original_name} -> {save_path} ({file_type})\")\n\n            return json.dumps({\n                \"status\": \"success\",\n                \"file_path\": save_path,\n                \"file_name\": original_name,\n                \"file_type\": file_type,\n                \"preview_url\": preview_url,\n            }, ensure_ascii=False)\n\n        except Exception as e:\n            logger.error(f\"[WebChannel] File upload error: {e}\", exc_info=True)\n            return json.dumps({\"status\": \"error\", \"message\": str(e)})\n\n    def post_message(self):\n        \"\"\"\n        Handle incoming messages from users via POST request.\n        Returns a request_id for tracking this specific request.\n        Supports optional attachments (file paths from /upload).\n        \"\"\"\n        try:\n            data = web.data()\n            json_data = json.loads(data)\n            session_id = json_data.get('session_id', f'session_{int(time.time())}')\n            prompt = json_data.get('message', '')\n            use_sse = json_data.get('stream', True)\n            attachments = json_data.get('attachments', [])\n\n            # Append file references to the prompt (same format as QQ channel)\n            if attachments:\n                file_refs = []\n                for att in attachments:\n                    ftype = att.get(\"file_type\", \"file\")\n                    fpath = att.get(\"file_path\", \"\")\n                    if not fpath:\n                        continue\n                    if ftype == \"image\":\n                        file_refs.append(f\"[图片: {fpath}]\")\n                    elif ftype == \"video\":\n                        file_refs.append(f\"[视频: {fpath}]\")\n                    else:\n                        file_refs.append(f\"[文件: {fpath}]\")\n                if file_refs:\n                    prompt = prompt + \"\\n\" + \"\\n\".join(file_refs)\n                    logger.info(f\"[WebChannel] Attached {len(file_refs)} file(s) to message\")\n\n            request_id = self._generate_request_id()\n            self.request_to_session[request_id] = session_id\n\n            if session_id not in self.session_queues:\n                self.session_queues[session_id] = Queue()\n\n            if use_sse:\n                self.sse_queues[request_id] = Queue()\n\n            trigger_prefixs = 
conf().get(\"single_chat_prefix\", [\"\"])\n            if check_prefix(prompt, trigger_prefixs) is None:\n                if trigger_prefixs:\n                    prompt = trigger_prefixs[0] + prompt\n                    logger.debug(f\"[WebChannel] Added prefix to message: {prompt}\")\n\n            msg = WebMessage(self._generate_msg_id(), prompt)\n            msg.from_user_id = session_id\n\n            context = self._compose_context(ContextType.TEXT, prompt, msg=msg, isgroup=False)\n\n            if context is None:\n                logger.warning(f\"[WebChannel] Context is None for session {session_id}, message may be filtered\")\n                if request_id in self.sse_queues:\n                    del self.sse_queues[request_id]\n                return json.dumps({\"status\": \"error\", \"message\": \"Message was filtered\"})\n\n            context[\"session_id\"] = session_id\n            context[\"receiver\"] = session_id\n            context[\"request_id\"] = request_id\n\n            if use_sse:\n                context[\"on_event\"] = self._make_sse_callback(request_id)\n\n            threading.Thread(target=self.produce, args=(context,)).start()\n\n            return json.dumps({\"status\": \"success\", \"request_id\": request_id, \"stream\": use_sse})\n\n        except Exception as e:\n            logger.error(f\"Error processing message: {e}\")\n            return json.dumps({\"status\": \"error\", \"message\": str(e)})\n\n    def stream_response(self, request_id: str):\n        \"\"\"\n        SSE generator for a given request_id.\n        Yields UTF-8 encoded bytes to avoid WSGI Latin-1 mangling.\n        \"\"\"\n        if request_id not in self.sse_queues:\n            yield b\"data: {\\\"type\\\": \\\"error\\\", \\\"message\\\": \\\"invalid request_id\\\"}\\n\\n\"\n            return\n\n        q = self.sse_queues[request_id]\n        timeout = 300  # 5 minutes max\n        deadline = time.time() + timeout\n\n        try:\n            while time.time() < deadline:\n                try:\n                    item = q.get(timeout=1)\n                except Empty:\n                    yield b\": keepalive\\n\\n\"\n                    continue\n\n                payload = json.dumps(item, ensure_ascii=False)\n                yield f\"data: {payload}\\n\\n\".encode(\"utf-8\")\n\n                if item.get(\"type\") == \"done\":\n                    break\n        finally:\n            self.sse_queues.pop(request_id, None)\n\n    def poll_response(self):\n        \"\"\"\n        Poll for responses using the session_id.\n        \"\"\"\n        try:\n            data = web.data()\n            json_data = json.loads(data)\n            session_id = json_data.get('session_id')\n\n            if not session_id or session_id not in self.session_queues:\n                return json.dumps({\"status\": \"error\", \"message\": \"Invalid session ID\"})\n\n            # 尝试从队列获取响应，不等待\n            try:\n                # 使用peek而不是get，这样如果前端没有成功处理，下次还能获取到\n                response = self.session_queues[session_id].get(block=False)\n\n                # 返回响应，包含请求ID以区分不同请求\n                return json.dumps({\n                    \"status\": \"success\",\n                    \"has_content\": True,\n                    \"content\": response[\"content\"],\n                    \"request_id\": response[\"request_id\"],\n                    \"timestamp\": response[\"timestamp\"]\n                })\n\n            except Empty:\n                # 没有新响应\n                return 
json.dumps({\"status\": \"success\", \"has_content\": False})\n\n        except Exception as e:\n            logger.error(f\"Error polling response: {e}\")\n            return json.dumps({\"status\": \"error\", \"message\": str(e)})\n\n    def chat_page(self):\n        \"\"\"Serve the chat HTML page.\"\"\"\n        file_path = os.path.join(os.path.dirname(__file__), 'chat.html')  # 使用绝对路径\n        with open(file_path, 'r', encoding='utf-8') as f:\n            return f.read()\n\n    def startup(self):\n        port = conf().get(\"web_port\", 9899)\n\n        # 打印可用渠道类型提示\n        logger.info(\n            \"[WebChannel] 全部可用通道如下，可修改 config.json 配置文件中的 channel_type 字段进行切换，多个通道用逗号分隔：\")\n        logger.info(\"[WebChannel]   1. web              - 网页\")\n        logger.info(\"[WebChannel]   2. terminal         - 终端\")\n        logger.info(\"[WebChannel]   3. feishu           - 飞书\")\n        logger.info(\"[WebChannel]   4. dingtalk         - 钉钉\")\n        logger.info(\"[WebChannel]   5. wechatcom_app    - 企微自建应用\")\n        logger.info(\"[WebChannel]   6. wechatmp         - 个人公众号\")\n        logger.info(\"[WebChannel]   7. wechatmp_service - 企业公众号\")\n        logger.info(\"[WebChannel] ✅ Web控制台已运行\")\n        logger.info(f\"[WebChannel] 🌐 本地访问: http://localhost:{port}\")\n        logger.info(f\"[WebChannel] 🌍 服务器访问: http://YOUR_IP:{port} (请将YOUR_IP替换为服务器IP)\")\n\n        # 确保静态文件目录存在\n        static_dir = os.path.join(os.path.dirname(__file__), 'static')\n        if not os.path.exists(static_dir):\n            os.makedirs(static_dir)\n            logger.debug(f\"[WebChannel] Created static directory: {static_dir}\")\n\n        urls = (\n            '/', 'RootHandler',\n            '/message', 'MessageHandler',\n            '/upload', 'UploadHandler',\n            '/uploads/(.*)', 'UploadsHandler',\n            '/poll', 'PollHandler',\n            '/stream', 'StreamHandler',\n            '/chat', 'ChatHandler',\n            '/config', 'ConfigHandler',\n            '/api/channels', 'ChannelsHandler',\n            '/api/tools', 'ToolsHandler',\n            '/api/skills', 'SkillsHandler',\n            '/api/memory', 'MemoryHandler',\n            '/api/memory/content', 'MemoryContentHandler',\n            '/api/scheduler', 'SchedulerHandler',\n            '/api/history', 'HistoryHandler',\n            '/api/logs', 'LogsHandler',\n            '/assets/(.*)', 'AssetsHandler',\n        )\n        app = web.application(urls, globals(), autoreload=False)\n\n        # 完全禁用web.py的HTTP日志输出\n        web.httpserver.LogMiddleware.log = lambda self, status, environ: None\n\n        # 配置web.py的日志级别为ERROR\n        logging.getLogger(\"web\").setLevel(logging.ERROR)\n        logging.getLogger(\"web.httpserver\").setLevel(logging.ERROR)\n\n        # Build WSGI app with middleware (same as runsimple but without print)\n        func = web.httpserver.StaticMiddleware(app.wsgifunc())\n        func = web.httpserver.LogMiddleware(func)\n        server = web.httpserver.WSGIServer((\"0.0.0.0\", port), func)\n        # Allow concurrent requests by not blocking on in-flight handler threads\n        server.daemon_threads = True\n        self._http_server = server\n        try:\n            server.start()\n        except (KeyboardInterrupt, SystemExit):\n            server.stop()\n\n    def stop(self):\n        if self._http_server:\n            try:\n                self._http_server.stop()\n                logger.info(\"[WebChannel] HTTP server stopped\")\n            except Exception as e:\n                
logger.warning(f\"[WebChannel] Error stopping HTTP server: {e}\")\n            self._http_server = None\n\n\nclass RootHandler:\n    def GET(self):\n        # 重定向到/chat\n        raise web.seeother('/chat')\n\n\nclass MessageHandler:\n    def POST(self):\n        return WebChannel().post_message()\n\n\nclass UploadHandler:\n    def POST(self):\n        web.header('Content-Type', 'application/json; charset=utf-8')\n        return WebChannel().upload_file()\n\n\nclass UploadsHandler:\n    def GET(self, file_name):\n        \"\"\"Serve uploaded files from workspace/tmp/ for preview.\"\"\"\n        try:\n            upload_dir = _get_upload_dir()\n            full_path = os.path.normpath(os.path.join(upload_dir, file_name))\n            if not os.path.abspath(full_path).startswith(os.path.abspath(upload_dir)):\n                raise web.notfound()\n            if not os.path.isfile(full_path):\n                raise web.notfound()\n            content_type = mimetypes.guess_type(full_path)[0] or \"application/octet-stream\"\n            web.header('Content-Type', content_type)\n            web.header('Cache-Control', 'public, max-age=86400')\n            with open(full_path, 'rb') as f:\n                return f.read()\n        except web.HTTPError:\n            raise\n        except Exception as e:\n            logger.error(f\"[WebChannel] Error serving upload: {e}\")\n            raise web.notfound()\n\n\nclass PollHandler:\n    def POST(self):\n        return WebChannel().poll_response()\n\n\nclass StreamHandler:\n    def GET(self):\n        params = web.input(request_id='')\n        request_id = params.request_id\n        if not request_id:\n            raise web.badrequest()\n\n        web.header('Content-Type', 'text/event-stream; charset=utf-8')\n        web.header('Cache-Control', 'no-cache')\n        web.header('X-Accel-Buffering', 'no')\n        web.header('Access-Control-Allow-Origin', '*')\n\n        return WebChannel().stream_response(request_id)\n\n\nclass ChatHandler:\n    def GET(self):\n        # 正常返回聊天页面\n        file_path = os.path.join(os.path.dirname(__file__), 'chat.html')\n        with open(file_path, 'r', encoding='utf-8') as f:\n            return f.read()\n\n\nclass ConfigHandler:\n\n    _RECOMMENDED_MODELS = [\n        const.MINIMAX_M2_5, const.MINIMAX_M2_1, const.MINIMAX_M2_1_LIGHTNING,\n        const.GLM_5, const.GLM_4_7,\n        const.QWEN3_MAX, const.QWEN35_PLUS,\n        const.KIMI_K2_5, const.KIMI_K2,\n        const.DOUBAO_SEED_2_PRO, const.DOUBAO_SEED_2_CODE,\n        const.CLAUDE_4_6_SONNET, const.CLAUDE_4_6_OPUS, const.CLAUDE_4_5_SONNET,\n        const.GEMINI_31_FLASH_LITE_PRE, const.GEMINI_31_PRO_PRE, const.GEMINI_3_FLASH_PRE,\n        const.GPT_54, const.GPT_54_MINI, const.GPT_54_NANO, const.GPT_5, const.GPT_41, const.GPT_4o,\n        const.DEEPSEEK_CHAT, const.DEEPSEEK_REASONER,\n    ]\n\n    PROVIDER_MODELS = OrderedDict([\n        (\"minimax\", {\n            \"label\": \"MiniMax\",\n            \"api_key_field\": \"minimax_api_key\",\n            \"api_base_key\": None,\n            \"api_base_default\": None,\n            \"models\": [const.MINIMAX_M2_5, const.MINIMAX_M2_1, const.MINIMAX_M2_1_LIGHTNING],\n        }),\n        (\"zhipu\", {\n            \"label\": \"智谱AI\",\n            \"api_key_field\": \"zhipu_ai_api_key\",\n            \"api_base_key\": \"zhipu_ai_api_base\",\n            \"api_base_default\": \"https://open.bigmodel.cn/api/paas/v4\",\n            \"models\": [const.GLM_5, const.GLM_4_7],\n        }),\n        (\"dashscope\", {\n   
         \"label\": \"通义千问\",\n            \"api_key_field\": \"dashscope_api_key\",\n            \"api_base_key\": None,\n            \"api_base_default\": None,\n            \"models\": [const.QWEN3_MAX, const.QWEN35_PLUS],\n        }),\n        (\"moonshot\", {\n            \"label\": \"Kimi\",\n            \"api_key_field\": \"moonshot_api_key\",\n            \"api_base_key\": \"moonshot_base_url\",\n            \"api_base_default\": \"https://api.moonshot.cn/v1\",\n            \"models\": [const.KIMI_K2_5, const.KIMI_K2],\n        }),\n        (\"doubao\", {\n            \"label\": \"豆包\",\n            \"api_key_field\": \"ark_api_key\",\n            \"api_base_key\": \"ark_base_url\",\n            \"api_base_default\": \"https://ark.cn-beijing.volces.com/api/v3\",\n            \"models\": [const.DOUBAO_SEED_2_PRO, const.DOUBAO_SEED_2_CODE],\n        }),\n        (\"claudeAPI\", {\n            \"label\": \"Claude\",\n            \"api_key_field\": \"claude_api_key\",\n            \"api_base_key\": \"claude_api_base\",\n            \"api_base_default\": \"https://api.anthropic.com/v1\",\n            \"models\": [const.CLAUDE_4_6_SONNET, const.CLAUDE_4_6_OPUS, const.CLAUDE_4_5_SONNET],\n        }),\n        (\"gemini\", {\n            \"label\": \"Gemini\",\n            \"api_key_field\": \"gemini_api_key\",\n            \"api_base_key\": \"gemini_api_base\",\n            \"api_base_default\": \"https://generativelanguage.googleapis.com\",\n            \"models\": [const.GEMINI_31_FLASH_LITE_PRE, const.GEMINI_31_PRO_PRE, const.GEMINI_3_FLASH_PRE],\n        }),\n        (\"openai\", {\n            \"label\": \"OpenAI\",\n            \"api_key_field\": \"open_ai_api_key\",\n            \"api_base_key\": \"open_ai_api_base\",\n            \"api_base_default\": \"https://api.openai.com/v1\",\n            \"models\": [const.GPT_54, const.GPT_54_MINI, const.GPT_54_NANO, const.GPT_5, const.GPT_41, const.GPT_4o],\n        }),\n        (\"deepseek\", {\n            \"label\": \"DeepSeek\",\n            \"api_key_field\": \"open_ai_api_key\",\n            \"api_base_key\": None,\n            \"api_base_default\": None,\n            \"models\": [const.DEEPSEEK_CHAT, const.DEEPSEEK_REASONER],\n        }),\n        (\"linkai\", {\n            \"label\": \"LinkAI\",\n            \"api_key_field\": \"linkai_api_key\",\n            \"api_base_key\": None,\n            \"api_base_default\": None,\n            \"models\": _RECOMMENDED_MODELS,\n        }),\n    ])\n\n    EDITABLE_KEYS = {\n        \"model\", \"bot_type\", \"use_linkai\",\n        \"open_ai_api_base\", \"claude_api_base\", \"gemini_api_base\",\n        \"zhipu_ai_api_base\", \"moonshot_base_url\", \"ark_base_url\",\n        \"open_ai_api_key\", \"claude_api_key\", \"gemini_api_key\",\n        \"zhipu_ai_api_key\", \"dashscope_api_key\", \"moonshot_api_key\",\n        \"ark_api_key\", \"minimax_api_key\", \"linkai_api_key\",\n        \"agent_max_context_tokens\", \"agent_max_context_turns\", \"agent_max_steps\",\n    }\n\n    @staticmethod\n    def _mask_key(value: str) -> str:\n        \"\"\"Mask the middle part of an API key for display.\"\"\"\n        if not value or len(value) <= 8:\n            return value\n        return value[:4] + \"*\" * (len(value) - 8) + value[-4:]\n\n    def GET(self):\n        \"\"\"Return configuration info and provider/model metadata.\"\"\"\n        web.header('Content-Type', 'application/json; charset=utf-8')\n        try:\n            local_config = conf()\n            use_agent = 
local_config.get(\"agent\", False)\n            title = \"CowAgent\" if use_agent else \"AI Assistant\"\n\n            api_bases = {}\n            api_keys_masked = {}\n            for pid, pinfo in self.PROVIDER_MODELS.items():\n                base_key = pinfo.get(\"api_base_key\")\n                if base_key:\n                    api_bases[base_key] = local_config.get(base_key, pinfo[\"api_base_default\"])\n                key_field = pinfo.get(\"api_key_field\")\n                if key_field and key_field not in api_keys_masked:\n                    raw = local_config.get(key_field, \"\")\n                    api_keys_masked[key_field] = self._mask_key(raw) if raw else \"\"\n\n            providers = {}\n            for pid, p in self.PROVIDER_MODELS.items():\n                providers[pid] = {\n                    \"label\": p[\"label\"],\n                    \"models\": p[\"models\"],\n                    \"api_base_key\": p[\"api_base_key\"],\n                    \"api_base_default\": p[\"api_base_default\"],\n                    \"api_key_field\": p.get(\"api_key_field\"),\n                }\n\n            return json.dumps({\n                \"status\": \"success\",\n                \"use_agent\": use_agent,\n                \"title\": title,\n                \"model\": local_config.get(\"model\", \"\"),\n                \"bot_type\": \"openai\" if local_config.get(\"bot_type\") == \"chatGPT\" else local_config.get(\"bot_type\", \"\"),\n                \"use_linkai\": bool(local_config.get(\"use_linkai\", False)),\n                \"channel_type\": local_config.get(\"channel_type\", \"\"),\n                \"agent_max_context_tokens\": local_config.get(\"agent_max_context_tokens\", 50000),\n                \"agent_max_context_turns\": local_config.get(\"agent_max_context_turns\", 20),\n                \"agent_max_steps\": local_config.get(\"agent_max_steps\", 15),\n                \"api_bases\": api_bases,\n                \"api_keys\": api_keys_masked,\n                \"providers\": providers,\n            }, ensure_ascii=False)\n        except Exception as e:\n            logger.error(f\"Error getting config: {e}\")\n            return json.dumps({\"status\": \"error\", \"message\": str(e)})\n\n    def POST(self):\n        \"\"\"Update configuration values in memory and persist to config.json.\"\"\"\n        web.header('Content-Type', 'application/json; charset=utf-8')\n        try:\n            data = json.loads(web.data())\n            updates = data.get(\"updates\", {})\n            if not updates:\n                return json.dumps({\"status\": \"error\", \"message\": \"no updates provided\"})\n\n            local_config = conf()\n            applied = {}\n            for key, value in updates.items():\n                if key not in self.EDITABLE_KEYS:\n                    continue\n                if key in (\"agent_max_context_tokens\", \"agent_max_context_turns\", \"agent_max_steps\"):\n                    value = int(value)\n                if key == \"use_linkai\":\n                    value = bool(value)\n                local_config[key] = value\n                applied[key] = value\n\n            if not applied:\n                return json.dumps({\"status\": \"error\", \"message\": \"no valid keys to update\"})\n\n            config_path = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(\n                os.path.abspath(__file__)))), \"config.json\")\n            if os.path.exists(config_path):\n                with open(config_path, \"r\", 
encoding=\"utf-8\") as f:\n                    file_cfg = json.load(f)\n            else:\n                file_cfg = {}\n            file_cfg.update(applied)\n            with open(config_path, \"w\", encoding=\"utf-8\") as f:\n                json.dump(file_cfg, f, indent=4, ensure_ascii=False)\n\n            logger.info(f\"[WebChannel] Config updated: {list(applied.keys())}\")\n            return json.dumps({\"status\": \"success\", \"applied\": applied}, ensure_ascii=False)\n        except Exception as e:\n            logger.error(f\"Error updating config: {e}\")\n            return json.dumps({\"status\": \"error\", \"message\": str(e)})\n\n\nclass ChannelsHandler:\n    \"\"\"API for managing external channel configurations (feishu, dingtalk, etc).\"\"\"\n\n    CHANNEL_DEFS = OrderedDict([\n        (\"feishu\", {\n            \"label\": {\"zh\": \"飞书\", \"en\": \"Feishu\"},\n            \"icon\": \"fa-paper-plane\",\n            \"color\": \"blue\",\n            \"fields\": [\n                {\"key\": \"feishu_app_id\", \"label\": \"App ID\", \"type\": \"text\"},\n                {\"key\": \"feishu_app_secret\", \"label\": \"App Secret\", \"type\": \"secret\"},\n                {\"key\": \"feishu_token\", \"label\": \"Verification Token\", \"type\": \"secret\"},\n                {\"key\": \"feishu_bot_name\", \"label\": \"Bot Name\", \"type\": \"text\"},\n            ],\n        }),\n        (\"dingtalk\", {\n            \"label\": {\"zh\": \"钉钉\", \"en\": \"DingTalk\"},\n            \"icon\": \"fa-comments\",\n            \"color\": \"blue\",\n            \"fields\": [\n                {\"key\": \"dingtalk_client_id\", \"label\": \"Client ID\", \"type\": \"text\"},\n                {\"key\": \"dingtalk_client_secret\", \"label\": \"Client Secret\", \"type\": \"secret\"},\n            ],\n        }),\n        (\"wecom_bot\", {\n            \"label\": {\"zh\": \"企微智能机器人\", \"en\": \"WeCom Bot\"},\n            \"icon\": \"fa-robot\",\n            \"color\": \"emerald\",\n            \"fields\": [\n                {\"key\": \"wecom_bot_id\", \"label\": \"Bot ID\", \"type\": \"text\"},\n                {\"key\": \"wecom_bot_secret\", \"label\": \"Secret\", \"type\": \"secret\"},\n            ],\n        }),\n        (\"qq\", {\n            \"label\": {\"zh\": \"QQ 机器人\", \"en\": \"QQ Bot\"},\n            \"icon\": \"fa-comment\",\n            \"color\": \"blue\",\n            \"fields\": [\n                {\"key\": \"qq_app_id\", \"label\": \"App ID\", \"type\": \"text\"},\n                {\"key\": \"qq_app_secret\", \"label\": \"App Secret\", \"type\": \"secret\"},\n            ],\n        }),\n        (\"wechatcom_app\", {\n            \"label\": {\"zh\": \"企微自建应用\", \"en\": \"WeCom App\"},\n            \"icon\": \"fa-building\",\n            \"color\": \"emerald\",\n            \"fields\": [\n                {\"key\": \"wechatcom_corp_id\", \"label\": \"Corp ID\", \"type\": \"text\"},\n                {\"key\": \"wechatcomapp_agent_id\", \"label\": \"Agent ID\", \"type\": \"text\"},\n                {\"key\": \"wechatcomapp_secret\", \"label\": \"Secret\", \"type\": \"secret\"},\n                {\"key\": \"wechatcomapp_token\", \"label\": \"Token\", \"type\": \"secret\"},\n                {\"key\": \"wechatcomapp_aes_key\", \"label\": \"AES Key\", \"type\": \"secret\"},\n                {\"key\": \"wechatcomapp_port\", \"label\": \"Port\", \"type\": \"number\", \"default\": 9898},\n            ],\n        }),\n        (\"wechatmp\", {\n            \"label\": {\"zh\": \"公众号\", 
\"en\": \"WeChat MP\"},\n            \"icon\": \"fa-comment-dots\",\n            \"color\": \"emerald\",\n            \"fields\": [\n                {\"key\": \"wechatmp_app_id\", \"label\": \"App ID\", \"type\": \"text\"},\n                {\"key\": \"wechatmp_app_secret\", \"label\": \"App Secret\", \"type\": \"secret\"},\n                {\"key\": \"wechatmp_token\", \"label\": \"Token\", \"type\": \"secret\"},\n                {\"key\": \"wechatmp_aes_key\", \"label\": \"AES Key\", \"type\": \"secret\"},\n                {\"key\": \"wechatmp_port\", \"label\": \"Port\", \"type\": \"number\", \"default\": 8080},\n            ],\n        }),\n    ])\n\n    @staticmethod\n    def _mask_secret(value: str) -> str:\n        if not value or len(value) <= 8:\n            return value\n        return value[:4] + \"*\" * (len(value) - 8) + value[-4:]\n\n    @staticmethod\n    def _parse_channel_list(raw) -> list:\n        if isinstance(raw, list):\n            return [ch.strip() for ch in raw if ch.strip()]\n        if isinstance(raw, str):\n            return [ch.strip() for ch in raw.split(\",\") if ch.strip()]\n        return []\n\n    @classmethod\n    def _active_channel_set(cls) -> set:\n        return set(cls._parse_channel_list(conf().get(\"channel_type\", \"\")))\n\n    def GET(self):\n        web.header('Content-Type', 'application/json; charset=utf-8')\n        try:\n            local_config = conf()\n            active_channels = self._active_channel_set()\n            channels = []\n            for ch_name, ch_def in self.CHANNEL_DEFS.items():\n                fields_out = []\n                for f in ch_def[\"fields\"]:\n                    raw_val = local_config.get(f[\"key\"], f.get(\"default\", \"\"))\n                    if f[\"type\"] == \"secret\" and raw_val:\n                        display_val = self._mask_secret(str(raw_val))\n                    else:\n                        display_val = raw_val\n                    fields_out.append({\n                        \"key\": f[\"key\"],\n                        \"label\": f[\"label\"],\n                        \"type\": f[\"type\"],\n                        \"value\": display_val,\n                        \"default\": f.get(\"default\", \"\"),\n                    })\n                channels.append({\n                    \"name\": ch_name,\n                    \"label\": ch_def[\"label\"],\n                    \"icon\": ch_def[\"icon\"],\n                    \"color\": ch_def[\"color\"],\n                    \"active\": ch_name in active_channels,\n                    \"fields\": fields_out,\n                })\n            return json.dumps({\"status\": \"success\", \"channels\": channels}, ensure_ascii=False)\n        except Exception as e:\n            logger.error(f\"[WebChannel] Channels API error: {e}\")\n            return json.dumps({\"status\": \"error\", \"message\": str(e)})\n\n    def POST(self):\n        web.header('Content-Type', 'application/json; charset=utf-8')\n        try:\n            body = json.loads(web.data())\n            action = body.get(\"action\")\n            channel_name = body.get(\"channel\")\n\n            if not action or not channel_name:\n                return json.dumps({\"status\": \"error\", \"message\": \"action and channel required\"})\n\n            if channel_name not in self.CHANNEL_DEFS:\n                return json.dumps({\"status\": \"error\", \"message\": f\"unknown channel: {channel_name}\"})\n\n            if action == \"save\":\n                return 
self._handle_save(channel_name, body.get(\"config\", {}))\n            elif action == \"connect\":\n                return self._handle_connect(channel_name, body.get(\"config\", {}))\n            elif action == \"disconnect\":\n                return self._handle_disconnect(channel_name)\n            else:\n                return json.dumps({\"status\": \"error\", \"message\": f\"unknown action: {action}\"})\n        except Exception as e:\n            logger.error(f\"[WebChannel] Channels POST error: {e}\")\n            return json.dumps({\"status\": \"error\", \"message\": str(e)})\n\n    def _handle_save(self, channel_name: str, updates: dict):\n        ch_def = self.CHANNEL_DEFS[channel_name]\n        valid_keys = {f[\"key\"] for f in ch_def[\"fields\"]}\n        secret_keys = {f[\"key\"] for f in ch_def[\"fields\"] if f[\"type\"] == \"secret\"}\n\n        local_config = conf()\n        applied = {}\n        for key, value in updates.items():\n            if key not in valid_keys:\n                continue\n            if key in secret_keys:\n                if not value or (len(value) > 8 and \"*\" * 4 in value):\n                    continue\n            field_def = next((f for f in ch_def[\"fields\"] if f[\"key\"] == key), None)\n            if field_def:\n                if field_def[\"type\"] == \"number\":\n                    value = int(value)\n                elif field_def[\"type\"] == \"bool\":\n                    value = bool(value)\n            local_config[key] = value\n            applied[key] = value\n\n        if not applied:\n            return json.dumps({\"status\": \"error\", \"message\": \"no valid fields to update\"})\n\n        config_path = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(\n            os.path.abspath(__file__)))), \"config.json\")\n        if os.path.exists(config_path):\n            with open(config_path, \"r\", encoding=\"utf-8\") as f:\n                file_cfg = json.load(f)\n        else:\n            file_cfg = {}\n        file_cfg.update(applied)\n        with open(config_path, \"w\", encoding=\"utf-8\") as f:\n            json.dump(file_cfg, f, indent=4, ensure_ascii=False)\n\n        logger.info(f\"[WebChannel] Channel '{channel_name}' config updated: {list(applied.keys())}\")\n\n        should_restart = False\n        active_channels = self._active_channel_set()\n        if channel_name in active_channels:\n            should_restart = True\n            try:\n                import sys\n                app_module = sys.modules.get('__main__') or sys.modules.get('app')\n                mgr = getattr(app_module, '_channel_mgr', None) if app_module else None\n                if mgr:\n                    threading.Thread(\n                        target=mgr.restart,\n                        args=(channel_name,),\n                        daemon=True,\n                    ).start()\n                    logger.info(f\"[WebChannel] Channel '{channel_name}' restart triggered\")\n            except Exception as e:\n                logger.warning(f\"[WebChannel] Failed to restart channel '{channel_name}': {e}\")\n\n        return json.dumps({\n            \"status\": \"success\",\n            \"applied\": list(applied.keys()),\n            \"restarted\": should_restart,\n        }, ensure_ascii=False)\n\n    def _handle_connect(self, channel_name: str, updates: dict):\n        \"\"\"Save config fields, add channel to channel_type, and start it.\"\"\"\n        ch_def = self.CHANNEL_DEFS[channel_name]\n        valid_keys = 
{f[\"key\"] for f in ch_def[\"fields\"]}\n        secret_keys = {f[\"key\"] for f in ch_def[\"fields\"] if f[\"type\"] == \"secret\"}\n\n        # Feishu connected via web console must use websocket (long connection) mode\n        if channel_name == \"feishu\":\n            updates.setdefault(\"feishu_event_mode\", \"websocket\")\n            valid_keys.add(\"feishu_event_mode\")\n\n        local_config = conf()\n        applied = {}\n        for key, value in updates.items():\n            if key not in valid_keys:\n                continue\n            if key in secret_keys:\n                if not value or (len(value) > 8 and \"*\" * 4 in value):\n                    continue\n            field_def = next((f for f in ch_def[\"fields\"] if f[\"key\"] == key), None)\n            if field_def:\n                if field_def[\"type\"] == \"number\":\n                    value = int(value)\n                elif field_def[\"type\"] == \"bool\":\n                    value = bool(value)\n            local_config[key] = value\n            applied[key] = value\n\n        existing = self._parse_channel_list(conf().get(\"channel_type\", \"\"))\n        if channel_name not in existing:\n            existing.append(channel_name)\n        new_channel_type = \",\".join(existing)\n        local_config[\"channel_type\"] = new_channel_type\n\n        config_path = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(\n            os.path.abspath(__file__)))), \"config.json\")\n        if os.path.exists(config_path):\n            with open(config_path, \"r\", encoding=\"utf-8\") as f:\n                file_cfg = json.load(f)\n        else:\n            file_cfg = {}\n        file_cfg.update(applied)\n        file_cfg[\"channel_type\"] = new_channel_type\n        with open(config_path, \"w\", encoding=\"utf-8\") as f:\n            json.dump(file_cfg, f, indent=4, ensure_ascii=False)\n\n        logger.info(f\"[WebChannel] Channel '{channel_name}' connecting, channel_type={new_channel_type}\")\n\n        def _do_start():\n            try:\n                import sys\n                app_module = sys.modules.get('__main__') or sys.modules.get('app')\n                clear_fn = getattr(app_module, '_clear_singleton_cache', None) if app_module else None\n                mgr = getattr(app_module, '_channel_mgr', None) if app_module else None\n                if mgr is None:\n                    logger.warning(f\"[WebChannel] ChannelManager not available, cannot start '{channel_name}'\")\n                    return\n                # Stop existing instance first if still running (e.g. 
re-connect without disconnect)\n                existing_ch = mgr.get_channel(channel_name)\n                if existing_ch is not None:\n                    logger.info(f\"[WebChannel] Stopping existing '{channel_name}' before reconnect...\")\n                    mgr.stop(channel_name)\n                # Always wait for the remote service to release the old connection before\n                # establishing a new one (DingTalk drops callbacks on duplicate connections)\n                logger.info(f\"[WebChannel] Waiting for '{channel_name}' old connection to close...\")\n                time.sleep(5)\n                if clear_fn:\n                    clear_fn(channel_name)\n                logger.info(f\"[WebChannel] Starting channel '{channel_name}'...\")\n                mgr.start([channel_name], first_start=False)\n                logger.info(f\"[WebChannel] Channel '{channel_name}' start completed\")\n            except Exception as e:\n                logger.error(f\"[WebChannel] Failed to start channel '{channel_name}': {e}\",\n                             exc_info=True)\n\n        threading.Thread(target=_do_start, daemon=True).start()\n\n        return json.dumps({\n            \"status\": \"success\",\n            \"channel_type\": new_channel_type,\n        }, ensure_ascii=False)\n\n    def _handle_disconnect(self, channel_name: str):\n        existing = self._parse_channel_list(conf().get(\"channel_type\", \"\"))\n        existing = [ch for ch in existing if ch != channel_name]\n        new_channel_type = \",\".join(existing)\n\n        local_config = conf()\n        local_config[\"channel_type\"] = new_channel_type\n\n        config_path = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(\n            os.path.abspath(__file__)))), \"config.json\")\n        if os.path.exists(config_path):\n            with open(config_path, \"r\", encoding=\"utf-8\") as f:\n                file_cfg = json.load(f)\n        else:\n            file_cfg = {}\n        file_cfg[\"channel_type\"] = new_channel_type\n        with open(config_path, \"w\", encoding=\"utf-8\") as f:\n            json.dump(file_cfg, f, indent=4, ensure_ascii=False)\n\n        def _do_stop():\n            try:\n                import sys\n                app_module = sys.modules.get('__main__') or sys.modules.get('app')\n                mgr = getattr(app_module, '_channel_mgr', None) if app_module else None\n                clear_fn = getattr(app_module, '_clear_singleton_cache', None) if app_module else None\n                if mgr:\n                    mgr.stop(channel_name)\n                else:\n                    logger.warning(f\"[WebChannel] ChannelManager not found, cannot stop '{channel_name}'\")\n                if clear_fn:\n                    clear_fn(channel_name)\n                logger.info(f\"[WebChannel] Channel '{channel_name}' disconnected, \"\n                            f\"channel_type={new_channel_type}\")\n            except Exception as e:\n                logger.warning(f\"[WebChannel] Failed to stop channel '{channel_name}': {e}\",\n                               exc_info=True)\n\n        threading.Thread(target=_do_stop, daemon=True).start()\n\n        return json.dumps({\n            \"status\": \"success\",\n            \"channel_type\": new_channel_type,\n        }, ensure_ascii=False)\n\n\ndef _get_workspace_root():\n    \"\"\"Resolve the agent workspace directory.\"\"\"\n    from common.utils import expand_path\n    return expand_path(conf().get(\"agent_workspace\", 
\"~/cow\"))\n\n\nclass ToolsHandler:\n    def GET(self):\n        web.header('Content-Type', 'application/json; charset=utf-8')\n        try:\n            from agent.tools.tool_manager import ToolManager\n            tm = ToolManager()\n            if not tm.tool_classes:\n                tm.load_tools()\n            tools = []\n            for name, cls in tm.tool_classes.items():\n                try:\n                    instance = cls()\n                    tools.append({\n                        \"name\": name,\n                        \"description\": instance.description,\n                    })\n                except Exception:\n                    tools.append({\"name\": name, \"description\": \"\"})\n            return json.dumps({\"status\": \"success\", \"tools\": tools}, ensure_ascii=False)\n        except Exception as e:\n            logger.error(f\"[WebChannel] Tools API error: {e}\")\n            return json.dumps({\"status\": \"error\", \"message\": str(e)})\n\n\nclass SkillsHandler:\n    def GET(self):\n        web.header('Content-Type', 'application/json; charset=utf-8')\n        try:\n            from agent.skills.service import SkillService\n            from agent.skills.manager import SkillManager\n            workspace_root = _get_workspace_root()\n            manager = SkillManager(custom_dir=os.path.join(workspace_root, \"skills\"))\n            service = SkillService(manager)\n            skills = service.query()\n            return json.dumps({\"status\": \"success\", \"skills\": skills}, ensure_ascii=False)\n        except Exception as e:\n            logger.error(f\"[WebChannel] Skills API error: {e}\")\n            return json.dumps({\"status\": \"error\", \"message\": str(e)})\n\n    def POST(self):\n        web.header('Content-Type', 'application/json; charset=utf-8')\n        try:\n            from agent.skills.service import SkillService\n            from agent.skills.manager import SkillManager\n            body = json.loads(web.data())\n            action = body.get(\"action\")\n            name = body.get(\"name\")\n            if not action or not name:\n                return json.dumps({\"status\": \"error\", \"message\": \"action and name are required\"})\n            workspace_root = _get_workspace_root()\n            manager = SkillManager(custom_dir=os.path.join(workspace_root, \"skills\"))\n            service = SkillService(manager)\n            if action == \"open\":\n                service.open({\"name\": name})\n            elif action == \"close\":\n                service.close({\"name\": name})\n            else:\n                return json.dumps({\"status\": \"error\", \"message\": f\"unknown action: {action}\"})\n            return json.dumps({\"status\": \"success\"}, ensure_ascii=False)\n        except Exception as e:\n            logger.error(f\"[WebChannel] Skills POST error: {e}\")\n            return json.dumps({\"status\": \"error\", \"message\": str(e)})\n\n\nclass MemoryHandler:\n    def GET(self):\n        web.header('Content-Type', 'application/json; charset=utf-8')\n        try:\n            from agent.memory.service import MemoryService\n            params = web.input(page='1', page_size='20')\n            workspace_root = _get_workspace_root()\n            service = MemoryService(workspace_root)\n            result = service.list_files(page=int(params.page), page_size=int(params.page_size))\n            return json.dumps({\"status\": \"success\", **result}, ensure_ascii=False)\n        except Exception as e:\n           
 logger.error(f\"[WebChannel] Memory API error: {e}\")\n            return json.dumps({\"status\": \"error\", \"message\": str(e)})\n\n\nclass MemoryContentHandler:\n    def GET(self):\n        web.header('Content-Type', 'application/json; charset=utf-8')\n        try:\n            from agent.memory.service import MemoryService\n            params = web.input(filename='')\n            if not params.filename:\n                return json.dumps({\"status\": \"error\", \"message\": \"filename required\"})\n            workspace_root = _get_workspace_root()\n            service = MemoryService(workspace_root)\n            result = service.get_content(params.filename)\n            return json.dumps({\"status\": \"success\", **result}, ensure_ascii=False)\n        except FileNotFoundError:\n            return json.dumps({\"status\": \"error\", \"message\": \"file not found\"})\n        except Exception as e:\n            logger.error(f\"[WebChannel] Memory content API error: {e}\")\n            return json.dumps({\"status\": \"error\", \"message\": str(e)})\n\n\nclass SchedulerHandler:\n    def GET(self):\n        web.header('Content-Type', 'application/json; charset=utf-8')\n        try:\n            from agent.tools.scheduler.task_store import TaskStore\n            workspace_root = _get_workspace_root()\n            store_path = os.path.join(workspace_root, \"scheduler\", \"tasks.json\")\n            store = TaskStore(store_path)\n            tasks = store.list_tasks()\n            return json.dumps({\"status\": \"success\", \"tasks\": tasks}, ensure_ascii=False)\n        except Exception as e:\n            logger.error(f\"[WebChannel] Scheduler API error: {e}\")\n            return json.dumps({\"status\": \"error\", \"message\": str(e)})\n\n\nclass HistoryHandler:\n    def GET(self):\n        \"\"\"\n        Return paginated conversation history for a session.\n\n        Query params:\n            session_id  (required)\n            page        int, default 1  (1 = most recent messages)\n            page_size   int, default 20\n        \"\"\"\n        web.header('Content-Type', 'application/json; charset=utf-8')\n        web.header('Access-Control-Allow-Origin', '*')\n        try:\n            params = web.input(session_id='', page='1', page_size='20')\n            session_id = params.session_id.strip()\n            if not session_id:\n                return json.dumps({\"status\": \"error\", \"message\": \"session_id required\"})\n\n            from agent.memory import get_conversation_store\n            store = get_conversation_store()\n            result = store.load_history_page(\n                session_id=session_id,\n                page=int(params.page),\n                page_size=int(params.page_size),\n            )\n            return json.dumps({\"status\": \"success\", **result}, ensure_ascii=False)\n        except Exception as e:\n            logger.error(f\"[WebChannel] History API error: {e}\")\n            return json.dumps({\"status\": \"error\", \"message\": str(e)})\n\n\nclass LogsHandler:\n    def GET(self):\n        \"\"\"Stream the last N lines of run.log as SSE, then tail new lines.\"\"\"\n        web.header('Content-Type', 'text/event-stream; charset=utf-8')\n        web.header('Cache-Control', 'no-cache')\n        web.header('X-Accel-Buffering', 'no')\n\n        from config import get_root\n        log_path = os.path.join(get_root(), \"run.log\")\n\n        def generate():\n            if not os.path.isfile(log_path):\n                yield b\"data: {\\\"type\\\": 
\\\"error\\\", \\\"message\\\": \\\"run.log not found\\\"}\\n\\n\"\n                return\n\n            # Read last 200 lines for initial display\n            try:\n                with open(log_path, 'r', encoding='utf-8', errors='replace') as f:\n                    lines = f.readlines()\n                tail_lines = lines[-200:]\n                chunk = ''.join(tail_lines)\n                payload = json.dumps({\"type\": \"init\", \"content\": chunk}, ensure_ascii=False)\n                yield f\"data: {payload}\\n\\n\".encode('utf-8')\n            except Exception as e:\n                yield f\"data: {{\\\"type\\\": \\\"error\\\", \\\"message\\\": \\\"{e}\\\"}}\\n\\n\".encode('utf-8')\n                return\n\n            # Tail new lines\n            try:\n                with open(log_path, 'r', encoding='utf-8', errors='replace') as f:\n                    f.seek(0, 2)  # seek to end\n                    deadline = time.time() + 600  # 10 min max\n                    while time.time() < deadline:\n                        line = f.readline()\n                        if line:\n                            payload = json.dumps({\"type\": \"line\", \"content\": line}, ensure_ascii=False)\n                            yield f\"data: {payload}\\n\\n\".encode('utf-8')\n                        else:\n                            yield b\": keepalive\\n\\n\"\n                            time.sleep(1)\n            except GeneratorExit:\n                return\n            except Exception:\n                return\n\n        return generate()\n\n\nclass AssetsHandler:\n    def GET(self, file_path):  # 修改默认参数\n        try:\n            # 如果请求是/static/，需要处理\n            if file_path == '':\n                # 返回目录列表...\n                pass\n\n            # 获取当前文件的绝对路径\n            current_dir = os.path.dirname(os.path.abspath(__file__))\n            static_dir = os.path.join(current_dir, 'static')\n\n            full_path = os.path.normpath(os.path.join(static_dir, file_path))\n\n            # 安全检查：确保请求的文件在static目录内\n            if not os.path.abspath(full_path).startswith(os.path.abspath(static_dir)):\n                logger.error(f\"Security check failed for path: {full_path}\")\n                raise web.notfound()\n\n            if not os.path.exists(full_path) or not os.path.isfile(full_path):\n                logger.error(f\"File not found: {full_path}\")\n                raise web.notfound()\n\n            # 设置正确的Content-Type\n            content_type = mimetypes.guess_type(full_path)[0]\n            if content_type:\n                web.header('Content-Type', content_type)\n            else:\n                # 默认为二进制流\n                web.header('Content-Type', 'application/octet-stream')\n\n            # 读取并返回文件内容\n            with open(full_path, 'rb') as f:\n                return f.read()\n\n        except Exception as e:\n            logger.error(f\"Error serving static file: {e}\", exc_info=True)  # 添加更详细的错误信息\n            raise web.notfound()\n"
  },
  {
    "path": "channel/wechatcom/README.md",
    "content": "# 企业微信应用号channel\n\n企业微信官方提供了客服、应用等API，本channel使用的是企业微信的自建应用API的能力。\n\n因为未来可能还会开发客服能力，所以本channel的类型名叫作`wechatcom_app`。\n\n`wechatcom_app` channel支持插件系统和图片声音交互等能力，除了无法加入群聊，作为个人使用的私人助理已绰绰有余。\n\n## 开始之前\n\n- 在企业中确认自己拥有在企业内自建应用的权限。\n- 如果没有权限或者是个人用户，也可创建未认证的企业。操作方式：登录手机企业微信，选择`创建/加入企业`来创建企业，类型请选择企业，企业名称可随意填写。\n    未认证的企业有100人的服务人数上限，其他功能与认证企业没有差异。\n\n本channel需安装的依赖与公众号一致，需要安装`wechatpy`和`web.py`，它们包含在`requirements-optional.txt`中。\n\n此外，如果你是`Linux`系统，除了`ffmpeg`还需要安装`amr`编码器，否则会出现找不到编码器的错误，无法正常使用语音功能。\n\n- Ubuntu/Debian\n\n```bash\napt-get install libavcodec-extra\n```\n\n- Alpine\n\n需自行编译`ffmpeg`，在编译参数里加入`amr`编码器的支持\n\n## 使用方法\n\n1.查看企业ID\n\n- 扫码登陆[企业微信后台](https://work.weixin.qq.com)\n- 选择`我的企业`，点击`企业信息`，记住该`企业ID`\n\n2.创建自建应用\n\n- 选择应用管理, 在自建区选创建应用来创建企业自建应用\n- 上传应用logo，填写应用名称等项\n- 创建应用后进入应用详情页面，记住`AgentId`和`Secert`\n\n3.配置应用\n\n- 在详情页点击`企业可信IP`的配置(没看到可以不管)，填入你服务器的公网IP，如果不知道可以先不填\n- 点击`接收消息`下的启用API接收消息\n- `URL`填写格式为`http://url:port/wxcomapp`，`port`是程序监听的端口，默认是9898\n    如果是未认证的企业，url可直接使用服务器的IP。如果是认证企业，需要使用备案的域名，可使用二级域名。\n- `Token`可随意填写，停留在这个页面\n- 在程序根目录`config.json`中增加配置（**去掉注释**），`wechatcomapp_aes_key`是当前页面的`wechatcomapp_aes_key`\n\n```python\n    \"channel_type\": \"wechatcom_app\",\n    \"wechatcom_corp_id\": \"\",  # 企业微信公司的corpID\n    \"wechatcomapp_token\": \"\",  # 企业微信app的token\n    \"wechatcomapp_port\": 9898,  # 企业微信app的服务端口, 不需要端口转发\n    \"wechatcomapp_secret\": \"\",  # 企业微信app的secret\n    \"wechatcomapp_agent_id\": \"\",  # 企业微信app的agent_id\n    \"wechatcomapp_aes_key\": \"\",  # 企业微信app的aes_key\n```\n\n- 运行程序，在页面中点击保存，保存成功说明验证成功\n\n4.连接个人微信\n\n选择`我的企业`，点击`微信插件`，下面有个邀请关注的二维码。微信扫码后，即可在微信中看到对应企业，在这里你便可以和机器人沟通。\n\n向机器人发送消息，如果日志里出现报错:\n\n```bash\nError code: 60020, message: \"not allow to access from your ip, ...from ip: xx.xx.xx.xx\"\n```\n\n意思是IP不可信，需要参考上一步的`企业可信IP`配置，把这里的IP加进去。\n\n~~### Railway部署方式~~（2023-06-08已失效）\n\n~~公众号不能在`Railway`上部署，但企业微信应用[可以](https://railway.app/template/-FHS--?referralCode=RC3znh)!~~\n\n~~填写配置后，将部署完成后的网址```**.railway.app/wxcomapp```，填写在上一步的URL中。发送信息后观察日志，把报错的IP加入到可信IP。（每次重启后都需要加入可信IP）~~\n\n~~## 测试体验~~\n\n~~AIGC开放社区中已经部署了多个可免费使用的Bot，扫描下方的二维码会自动邀请你来体验。~~\n\n~~<img width=\"200\" src=\"../../docs/images/aigcopen.png\">~~\n"
  },
  {
    "path": "channel/wechatcom/wechatcomapp_channel.py",
    "content": "# -*- coding=utf-8 -*-\nimport io\nimport os\nimport sys\nimport time\n\nimport requests\nimport web\nfrom wechatpy.enterprise import create_reply, parse_message\nfrom wechatpy.enterprise.crypto import WeChatCrypto\nfrom wechatpy.enterprise.exceptions import InvalidCorpIdException\nfrom wechatpy.exceptions import InvalidSignatureException, WeChatClientException\n\nfrom bridge.context import Context\nfrom bridge.reply import Reply, ReplyType\nfrom channel.chat_channel import ChatChannel\nfrom channel.wechatcom.wechatcomapp_client import WechatComAppClient\nfrom channel.wechatcom.wechatcomapp_message import WechatComAppMessage\nfrom common.log import logger\nfrom common.singleton import singleton\nfrom common.utils import compress_imgfile, fsize, split_string_by_utf8_length, convert_webp_to_png, remove_markdown_symbol\nfrom config import conf, subscribe_msg\nfrom voice.audio_convert import any_to_amr, split_audio\n\nMAX_UTF8_LEN = 2048\n\n\n@singleton\nclass WechatComAppChannel(ChatChannel):\n    NOT_SUPPORT_REPLYTYPE = []\n\n    def __init__(self):\n        super().__init__()\n        self.corp_id = conf().get(\"wechatcom_corp_id\")\n        self.secret = conf().get(\"wechatcomapp_secret\")\n        self.agent_id = conf().get(\"wechatcomapp_agent_id\")\n        self.token = conf().get(\"wechatcomapp_token\")\n        self.aes_key = conf().get(\"wechatcomapp_aes_key\")\n        self._http_server = None\n        logger.info(\n            \"[wechatcom] Initializing WeCom app channel, corp_id: {}, agent_id: {}\".format(self.corp_id, self.agent_id)\n        )\n        self.crypto = WeChatCrypto(self.token, self.aes_key, self.corp_id)\n        self.client = WechatComAppClient(self.corp_id, self.secret)\n\n    def startup(self):\n        # start message listener\n        urls = (\"/wxcomapp/?\", \"channel.wechatcom.wechatcomapp_channel.Query\")\n        app = web.application(urls, globals(), autoreload=False)\n        port = conf().get(\"wechatcomapp_port\", 9898)\n        logger.info(\"[wechatcom] ✅ WeCom app channel started successfully\")\n        logger.info(\"[wechatcom] 📡 Listening on http://0.0.0.0:{}/wxcomapp/\".format(port))\n        logger.info(\"[wechatcom] 🤖 Ready to receive messages\")\n        \n        # Build WSGI app with middleware (same as runsimple but without print)\n        func = web.httpserver.StaticMiddleware(app.wsgifunc())\n        func = web.httpserver.LogMiddleware(func)\n        server = web.httpserver.WSGIServer((\"0.0.0.0\", port), func)\n        self._http_server = server\n        try:\n            server.start()\n        except (KeyboardInterrupt, SystemExit):\n            server.stop()\n\n    def stop(self):\n        if self._http_server:\n            try:\n                self._http_server.stop()\n                logger.info(\"[wechatcom] HTTP server stopped\")\n            except Exception as e:\n                logger.warning(f\"[wechatcom] Error stopping HTTP server: {e}\")\n            self._http_server = None\n\n    def send(self, reply: Reply, context: Context):\n        receiver = context[\"receiver\"]\n        if reply.type in [ReplyType.TEXT, ReplyType.ERROR, ReplyType.INFO]:\n            reply_text = remove_markdown_symbol(reply.content)\n            texts = split_string_by_utf8_length(reply_text, MAX_UTF8_LEN)\n            if len(texts) > 1:\n                logger.info(\"[wechatcom] text too long, split into {} parts\".format(len(texts)))\n            for i, text in enumerate(texts):\n                
self.client.message.send_text(self.agent_id, receiver, text)\n                if i != len(texts) - 1:\n                    time.sleep(0.5)  # brief pause so parts are not delivered out of order\n            logger.info(\"[wechatcom] Do send text to {}: {}\".format(receiver, reply_text))\n        elif reply.type == ReplyType.VOICE:\n            try:\n                media_ids = []\n                file_path = reply.content\n                amr_file = os.path.splitext(file_path)[0] + \".amr\"\n                any_to_amr(file_path, amr_file)\n                duration, files = split_audio(amr_file, 60 * 1000)\n                if len(files) > 1:\n                    logger.info(\"[wechatcom] voice too long {}s > 60s, split into {} parts\".format(duration / 1000.0, len(files)))\n                for path in files:\n                    response = self.client.media.upload(\"voice\", open(path, \"rb\"))\n                    logger.debug(\"[wechatcom] upload voice response: {}\".format(response))\n                    media_ids.append(response[\"media_id\"])\n            except ImportError as e:\n                logger.error(\"[wechatcom] voice conversion failed: {}\".format(e))\n                logger.error(\"[wechatcom] please install pydub: pip install pydub\")\n                return\n            except WeChatClientException as e:\n                logger.error(\"[wechatcom] upload voice failed: {}\".format(e))\n                return\n            try:\n                os.remove(file_path)\n                if amr_file != file_path:\n                    os.remove(amr_file)\n            except Exception:\n                pass\n            for media_id in media_ids:\n                self.client.message.send_voice(self.agent_id, receiver, media_id)\n                time.sleep(1)\n            logger.info(\"[wechatcom] sendVoice={}, receiver={}\".format(reply.content, receiver))\n        elif reply.type == ReplyType.IMAGE_URL:  # download the image from a URL\n            img_url = reply.content\n            pic_res = requests.get(img_url, stream=True)\n            image_storage = io.BytesIO()\n            for block in pic_res.iter_content(1024):\n                image_storage.write(block)\n            sz = fsize(image_storage)\n            if sz >= 10 * 1024 * 1024:\n                logger.info(\"[wechatcom] image too large, ready to compress, sz={}\".format(sz))\n                image_storage = compress_imgfile(image_storage, 10 * 1024 * 1024 - 1)\n                logger.info(\"[wechatcom] image compressed, sz={}\".format(fsize(image_storage)))\n            image_storage.seek(0)\n            if \".webp\" in img_url:\n                try:\n                    image_storage = convert_webp_to_png(image_storage)\n                except Exception as e:\n                    logger.error(f\"Failed to convert image: {e}\")\n                    return\n            try:\n                response = self.client.media.upload(\"image\", image_storage)\n                logger.debug(\"[wechatcom] upload image response: {}\".format(response))\n            except WeChatClientException as e:\n                logger.error(\"[wechatcom] upload image failed: {}\".format(e))\n                return\n\n            self.client.message.send_image(self.agent_id, receiver, response[\"media_id\"])\n            logger.info(\"[wechatcom] sendImage url={}, receiver={}\".format(img_url, receiver))\n        elif reply.type == ReplyType.IMAGE:  # read the image from a file object\n            image_storage = reply.content\n            sz = fsize(image_storage)\n            if sz >= 10 * 1024 * 1024:\n                logger.info(\"[wechatcom] image too large, ready to compress, sz={}\".format(sz))\n                image_storage = compress_imgfile(image_storage, 10 * 1024 * 1024 - 1)\n                logger.info(\"[wechatcom] image compressed, sz={}\".format(fsize(image_storage)))\n            image_storage.seek(0)\n            try:\n                response = self.client.media.upload(\"image\", image_storage)\n                logger.debug(\"[wechatcom] upload image response: {}\".format(response))\n            except WeChatClientException as e:\n                logger.error(\"[wechatcom] upload image failed: {}\".format(e))\n                return\n            self.client.message.send_image(self.agent_id, receiver, response[\"media_id\"])\n            logger.info(\"[wechatcom] sendImage, receiver={}\".format(receiver))\n\n\nclass Query:\n    def GET(self):\n        channel = WechatComAppChannel()\n        params = web.input()\n        logger.info(\"[wechatcom] receive params: {}\".format(params))\n        try:\n            signature = params.msg_signature\n            timestamp = params.timestamp\n            nonce = params.nonce\n            echostr = params.echostr\n            echostr = channel.crypto.check_signature(signature, timestamp, nonce, echostr)\n        except InvalidSignatureException:\n            raise web.Forbidden()\n        return echostr\n\n    def POST(self):\n        channel = WechatComAppChannel()\n        params = web.input()\n        logger.info(\"[wechatcom] receive params: {}\".format(params))\n        try:\n            signature = params.msg_signature\n            timestamp = params.timestamp\n            nonce = params.nonce\n            message = channel.crypto.decrypt_message(web.data(), signature, timestamp, nonce)\n        except (InvalidSignatureException, InvalidCorpIdException):\n            raise web.Forbidden()\n        msg = parse_message(message)\n        logger.debug(\"[wechatcom] receive message: {}, msg= {}\".format(message, msg))\n        if msg.type == \"event\":\n            if msg.event == \"subscribe\":\n                pass\n                # reply_content = subscribe_msg()\n                # if reply_content:\n                #     reply = create_reply(reply_content, msg).render()\n                #     res = channel.crypto.encrypt_message(reply, nonce, timestamp)\n                #     return res\n        else:\n            try:\n                wechatcom_msg = WechatComAppMessage(msg, client=channel.client)\n            except NotImplementedError as e:\n                logger.debug(\"[wechatcom] \" + str(e))\n                return \"success\"\n            context = channel._compose_context(\n                wechatcom_msg.ctype,\n                wechatcom_msg.content,\n                isgroup=False,\n                msg=wechatcom_msg,\n            )\n            if context:\n                channel.produce(context)\n        return \"success\"
  },
  {
    "path": "channel/wechatcom/wechatcomapp_client.py",
    "content": "# wechatcomapp_client.py\nimport threading\nimport time\nfrom wechatpy.enterprise import WeChatClient\n\nclass WechatComAppClient(WeChatClient):\n    def __init__(self, corp_id, secret, access_token=None, session=None, timeout=None, auto_retry=True):\n        super(WechatComAppClient, self).__init__(corp_id, secret, access_token, session, timeout, auto_retry)\n        self.fetch_access_token_lock = threading.Lock()\n        self._active_refresh()\n        \n    def _active_refresh(self):\n        \"\"\"启动主动刷新的后台线程\"\"\"\n        def refresh_loop():\n            while True:\n                now = time.time()\n                expires_at = self.session.get(f\"{self.corp_id}_expires_at\", 0)\n                \n                # 提前10分钟刷新(600秒)\n                if expires_at - now < 600:\n                    with self.fetch_access_token_lock:\n                        # 双重检查避免重复刷新\n                        if self.session.get(f\"{self.corp_id}_expires_at\", 0) - time.time() < 600:\n                            super(WechatComAppClient, self).fetch_access_token()\n                # 每次检查间隔60秒\n                time.sleep(60)\n                \n        # 启动守护线程\n        refresh_thread = threading.Thread(\n            target=refresh_loop,\n            daemon=True,\n            name=\"wechatcom_token_refresh_thread\"\n        )\n        refresh_thread.start()\n\n    def fetch_access_token(self):\n        with self.fetch_access_token_lock:\n            access_token = self.session.get(self.access_token_key)\n            expires_at = self.session.get(f\"{self.corp_id}_expires_at\", 0)\n            \n            if access_token and expires_at > time.time() + 60:\n                return access_token\n            return super().fetch_access_token()"
  },
  {
    "path": "channel/wechatcom/wechatcomapp_message.py",
    "content": "from wechatpy.enterprise import WeChatClient\n\nfrom bridge.context import ContextType\nfrom channel.chat_message import ChatMessage\nfrom common.log import logger\nfrom common.tmp_dir import TmpDir\n\n\nclass WechatComAppMessage(ChatMessage):\n    def __init__(self, msg, client: WeChatClient, is_group=False):\n        super().__init__(msg)\n        self.msg_id = msg.id\n        self.create_time = msg.time\n        self.is_group = is_group\n\n        if msg.type == \"text\":\n            self.ctype = ContextType.TEXT\n            self.content = msg.content\n        elif msg.type == \"voice\":\n            self.ctype = ContextType.VOICE\n            self.content = TmpDir().path() + msg.media_id + \".\" + msg.format  # content直接存临时目录路径\n\n            def download_voice():\n                # 如果响应状态码是200，则将响应内容写入本地文件\n                response = client.media.download(msg.media_id)\n                if response.status_code == 200:\n                    with open(self.content, \"wb\") as f:\n                        f.write(response.content)\n                else:\n                    logger.info(f\"[wechatcom] Failed to download voice file, {response.content}\")\n\n            self._prepare_fn = download_voice\n        elif msg.type == \"image\":\n            self.ctype = ContextType.IMAGE\n            self.content = TmpDir().path() + msg.media_id + \".png\"  # content直接存临时目录路径\n\n            def download_image():\n                # 如果响应状态码是200，则将响应内容写入本地文件\n                response = client.media.download(msg.media_id)\n                if response.status_code == 200:\n                    with open(self.content, \"wb\") as f:\n                        f.write(response.content)\n                else:\n                    logger.info(f\"[wechatcom] Failed to download image file, {response.content}\")\n\n            self._prepare_fn = download_image\n        else:\n            raise NotImplementedError(\"Unsupported message type: Type:{} \".format(msg.type))\n\n        self.from_user_id = msg.source\n        self.to_user_id = msg.target\n        self.other_user_id = msg.source\n"
  },
  {
    "path": "channel/wechatmp/README.md",
    "content": "# 微信公众号channel\n\n微信公众号channel，提供稳定的服务。\n目前支持订阅号和服务号两种类型的公众号，它们都支持文本交互，语音和图片输入。其中个人主体的微信订阅号由于无法通过微信认证，存在回复时间限制，每天的图片和声音回复次数也有限制。\n\n## 使用方法（订阅号，服务号类似）\n\n在开始部署前，你需要一个拥有公网IP的服务器，以提供微信服务器和我们自己服务器的连接。或者你需要进行内网穿透，否则微信服务器无法将消息发送给我们的服务器。\n\n此外，需要在我们的服务器上安装python的web框架web.py和wechatpy。\n以ubuntu为例(在ubuntu 22.04上测试):\n```\npip3 install web.py\npip3 install wechatpy\n```\n\n然后在[微信公众平台](https://mp.weixin.qq.com)注册一个自己的公众号，类型选择订阅号，主体为个人即可。\n\n然后根据[接入指南](https://developers.weixin.qq.com/doc/offiaccount/Basic_Information/Access_Overview.html)的说明，在[微信公众平台](https://mp.weixin.qq.com)的“设置与开发”-“基本配置”-“服务器配置”中填写服务器地址`URL`和令牌`Token`。`URL`填写格式为`http://url/wx`，可使用IP（成功几率看脸），`Token`是你自己编的一个特定的令牌。消息加解密方式如果选择了需要加密的模式，需要在配置中填写`wechatmp_aes_key`。\n\n相关的服务器验证代码已经写好，你不需要再添加任何代码。你只需要在本项目根目录的`config.json`中添加\n```\n\"channel_type\": \"wechatmp\",     # 如果通过了微信认证，将\"wechatmp\"替换为\"wechatmp_service\"，可极大的优化使用体验\n\"wechatmp_token\": \"xxxx\",       # 微信公众平台的Token\n\"wechatmp_port\": 8080,          # 微信公众平台的端口,需要端口转发到80或443\n\"wechatmp_app_id\": \"xxxx\",      # 微信公众平台的appID\n\"wechatmp_app_secret\": \"xxxx\",  # 微信公众平台的appsecret\n\"wechatmp_aes_key\": \"\",         # 微信公众平台的EncodingAESKey，加密模式需要\n\"single_chat_prefix\": [\"\"],     # 推荐设置，任意对话都可以触发回复，不添加前缀\n\"single_chat_reply_prefix\": \"\", # 推荐设置，回复不设置前缀\n\"plugin_trigger_prefix\": \"&\",   # 推荐设置，在手机微信客户端中，$%^等符号与中文连在一起时会自动显示一段较大的间隔，用户体验不好。请不要使用管理员指令前缀\"#\"，这会造成未知问题。\n```\n然后运行`python3 app.py`启动web服务器。这里会默认监听8080端口，但是微信公众号的服务器配置只支持80/443端口，有两种方法来解决这个问题。第一个是推荐的方法，使用端口转发命令将80端口转发到8080端口：\n```\nsudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080\nsudo iptables-save > /etc/iptables/rules.v4\n```\n第二个方法是让python程序直接监听80端口，在配置文件中设置`\"wechatmp_port\": 80` ，在linux上需要使用`sudo python3 app.py`启动程序。然而这会导致一系列环境和权限问题，因此不是推荐的方法。\n\n443端口同理，注意需要支持SSL，也就是https的访问，在`wechatmp_channel.py`中需要修改相应的证书路径。\n\n程序启动并监听端口后，在刚才的“服务器配置”中点击`提交`即可验证你的服务器。\n随后在[微信公众平台](https://mp.weixin.qq.com)启用服务器，关闭手动填写规则的自动回复，即可实现ChatGPT的自动回复。\n\n之后需要在公众号开发信息下将本机IP加入到IP白名单。\n\n不然在启用后，发送语音、图片等消息可能会遇到如下报错：\n```\n'errcode': 40164, 'errmsg': 'invalid ip xx.xx.xx.xx not in whitelist rid\n```\n\n\n## 个人微信公众号的限制\n由于人微信公众号不能通过微信认证，所以没有客服接口，因此公众号无法主动发出消息，只能被动回复。而微信官方对被动回复有5秒的时间限制，最多重试2次，因此最多只有15秒的自动回复时间窗口。因此如果问题比较复杂或者我们的服务器比较忙，ChatGPT的回答就没办法及时回复给用户。为了解决这个问题，这里做了回答缓存，它需要你在回复超时后，再次主动发送任意文字（例如1）来尝试拿到回答缓存。为了优化使用体验，目前设置了两分钟（120秒）的timeout，用户在至多两分钟后即可得到查询到回复或者错误原因。\n\n另外，由于微信官方的限制，自动回复有长度限制。因此这里将ChatGPT的回答进行了拆分，以满足限制。\n\n## 私有api_key\n公共api有访问频率限制（免费账号每分钟最多3次ChatGPT的API调用），这在服务多人的时候会遇到问题。因此这里多加了一个设置私有api_key的功能。目前通过godcmd插件的命令来设置私有api_key。\n\n## 语音输入\n利用微信自带的语音识别功能，提供语音输入能力。需要在公众号管理页面的“设置与开发”->“接口权限”页面开启“接收语音识别结果”。\n\n## 语音回复\n请在配置文件中添加以下词条：\n```\n  \"voice_reply_voice\": true,\n```\n这样公众号将会用语音回复语音消息，实现语音对话。\n\n默认的语音合成引擎是`google`，它是免费使用的。\n\n如果要选择其他的语音合成引擎，请添加以下配置项：\n```\n\"text_to_voice\": \"pytts\"\n```\n\npytts是本地的语音合成引擎。还支持baidu,azure，这些你需要自行配置相关的依赖和key。\n\n如果使用pytts，在ubuntu上需要安装如下依赖：\n```\nsudo apt update\nsudo apt install espeak\nsudo apt install ffmpeg\npython3 -m pip install pyttsx3\n```\n不是很建议开启pytts语音回复，因为它是离线本地计算，算的慢会拖垮服务器，且声音不好听。\n\n## 图片回复\n现在认证公众号和非认证公众号都可以实现的图片和语音回复。但是非认证公众号使用了永久素材接口，每天有1000次的调用上限（每个月有10次重置机会，程序中已设定遇到上限会自动重置），且永久素材库存也有上限。因此对于非认证公众号，我们会在回复图片或者语音消息后的10秒内从永久素材库存内删除该素材。\n\n## 测试\n目前在`RoboStyle`这个公众号上进行了测试（基于[wechatmp分支](https://github.com/JS00000/chatgpt-on-wechat/tree/wechatmp)），感兴趣的可以关注并体验。开启了godcmd, Banwords, role, dungeon, 
finish这五个插件，其他的插件还没有详尽测试。百度的接口暂未测试。[wechatmp-stable分支](https://github.com/JS00000/chatgpt-on-wechat/tree/wechatmp-stable)是较稳定的上个版本，但也缺少最新的功能支持。\n\n## TODO\n - [x] 语音输入\n - [x] 图片输入\n - [x] 使用临时素材接口提供认证公众号的图片和语音回复\n - [x] 使用永久素材接口提供未认证公众号的图片和语音回复\n - [ ] 高并发支持\n"
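\n\n## Appendix: quick reachability check (sketch)\n\nA hypothetical snippet (not part of the project) to confirm the port mapping works after starting the server. An HTTP 403 is expected here, because the request is unsigned; any HTTP response at all proves the endpoint is reachable:\n\n```python\nimport urllib.error\nimport urllib.request\n\ntry:\n    urllib.request.urlopen(\"http://127.0.0.1/wx\", timeout=5)\n    print(\"reachable\")\nexcept urllib.error.HTTPError as e:\n    print(\"reachable, HTTP\", e.code)  # 403 expected for unsigned requests\nexcept OSError as e:\n    print(\"not reachable:\", e)\n```\n"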
  },
  {
    "path": "channel/wechatmp/active_reply.py",
    "content": "import time\n\nimport web\nfrom wechatpy import parse_message\nfrom wechatpy.replies import create_reply\n\nfrom bridge.context import *\nfrom bridge.reply import *\nfrom channel.wechatmp.common import *\nfrom channel.wechatmp.wechatmp_channel import WechatMPChannel\nfrom channel.wechatmp.wechatmp_message import WeChatMPMessage\nfrom common.log import logger\nfrom config import conf, subscribe_msg\n\n\n# This class is instantiated once per query\nclass Query:\n    def GET(self):\n        return verify_server(web.input())\n\n    def POST(self):\n        # Make sure to return the instance that first created, @singleton will do that.\n        try:\n            args = web.input()\n            verify_server(args)\n            channel = WechatMPChannel()\n            message = web.data()\n            encrypt_func = lambda x: x\n            if args.get(\"encrypt_type\") == \"aes\":\n                logger.debug(\"[wechatmp] Receive encrypted post data:\\n\" + message.decode(\"utf-8\"))\n                if not channel.crypto:\n                    raise Exception(\"Crypto not initialized, Please set wechatmp_aes_key in config.json\")\n                message = channel.crypto.decrypt_message(message, args.msg_signature, args.timestamp, args.nonce)\n                encrypt_func = lambda x: channel.crypto.encrypt_message(x, args.nonce, args.timestamp)\n            else:\n                logger.debug(\"[wechatmp] Receive post data:\\n\" + message.decode(\"utf-8\"))\n            msg = parse_message(message)\n            if msg.type in [\"text\", \"voice\", \"image\"]:\n                wechatmp_msg = WeChatMPMessage(msg, client=channel.client)\n                from_user = wechatmp_msg.from_user_id\n                content = wechatmp_msg.content\n                message_id = wechatmp_msg.msg_id\n\n                logger.info(\n                    \"[wechatmp] {}:{} Receive post query {} {}: {}\".format(\n                        web.ctx.env.get(\"REMOTE_ADDR\"),\n                        web.ctx.env.get(\"REMOTE_PORT\"),\n                        from_user,\n                        message_id,\n                        content,\n                    )\n                )\n                if msg.type == \"voice\" and wechatmp_msg.ctype == ContextType.TEXT and conf().get(\"voice_reply_voice\", False):\n                    context = channel._compose_context(wechatmp_msg.ctype, content, isgroup=False, desire_rtype=ReplyType.VOICE, msg=wechatmp_msg)\n                else:\n                    context = channel._compose_context(wechatmp_msg.ctype, content, isgroup=False, msg=wechatmp_msg)\n                if context:\n                    channel.produce(context)\n                # The reply will be sent by channel.send() in another thread\n                return \"success\"\n            elif msg.type == \"event\":\n                logger.info(\"[wechatmp] Event {} from {}\".format(msg.event, msg.source))\n                if msg.event in [\"subscribe\", \"subscribe_scan\"]:\n                    reply_text = subscribe_msg()\n                    if reply_text:\n                        replyPost = create_reply(reply_text, msg)\n                        return encrypt_func(replyPost.render())\n                else:\n                    return \"success\"\n            else:\n                logger.info(\"暂且不处理\")\n            return \"success\"\n        except Exception as exc:\n            logger.exception(exc)\n            return exc\n"
  },
  {
    "path": "channel/wechatmp/common.py",
    "content": "import web\nfrom wechatpy.crypto import WeChatCrypto\nfrom wechatpy.exceptions import InvalidSignatureException\nfrom wechatpy.utils import check_signature\n\nfrom config import conf\n\nMAX_UTF8_LEN = 2048\n\n\nclass WeChatAPIException(Exception):\n    pass\n\n\ndef verify_server(data):\n    try:\n        signature = data.signature\n        timestamp = data.timestamp\n        nonce = data.nonce\n        echostr = data.get(\"echostr\", None)\n        token = conf().get(\"wechatmp_token\")  # 请按照公众平台官网\\基本配置中信息填写\n        check_signature(token, signature, timestamp, nonce)\n        return echostr\n    except InvalidSignatureException:\n        raise web.Forbidden(\"Invalid signature\")\n    except Exception as e:\n        raise web.Forbidden(str(e))\n"
  },
  {
    "path": "channel/wechatmp/passive_reply.py",
    "content": "import asyncio\nimport time\n\nimport web\nfrom wechatpy import parse_message\nfrom wechatpy.replies import ImageReply, VoiceReply, create_reply\nimport textwrap\nfrom bridge.context import *\nfrom bridge.reply import *\nfrom channel.wechatmp.common import *\nfrom channel.wechatmp.wechatmp_channel import WechatMPChannel\nfrom channel.wechatmp.wechatmp_message import WeChatMPMessage\nfrom common.log import logger\nfrom common.utils import split_string_by_utf8_length\nfrom config import conf, subscribe_msg\n\n\n# This class is instantiated once per query\nclass Query:\n    def GET(self):\n        return verify_server(web.input())\n\n    def POST(self):\n        try:\n            args = web.input()\n            verify_server(args)\n            request_time = time.time()\n            channel = WechatMPChannel()\n            message = web.data()\n            encrypt_func = lambda x: x\n            if args.get(\"encrypt_type\") == \"aes\":\n                logger.debug(\"[wechatmp] Receive encrypted post data:\\n\" + message.decode(\"utf-8\"))\n                if not channel.crypto:\n                    raise Exception(\"Crypto not initialized, Please set wechatmp_aes_key in config.json\")\n                message = channel.crypto.decrypt_message(message, args.msg_signature, args.timestamp, args.nonce)\n                encrypt_func = lambda x: channel.crypto.encrypt_message(x, args.nonce, args.timestamp)\n            else:\n                logger.debug(\"[wechatmp] Receive post data:\\n\" + message.decode(\"utf-8\"))\n            msg = parse_message(message)\n            if msg.type in [\"text\", \"voice\", \"image\"]:\n                wechatmp_msg = WeChatMPMessage(msg, client=channel.client)\n                from_user = wechatmp_msg.from_user_id\n                content = wechatmp_msg.content\n                message_id = wechatmp_msg.msg_id\n\n                supported = True\n                if \"【收到不支持的消息类型，暂无法显示】\" in content:\n                    supported = False  # not supported, used to refresh\n\n                # New request\n                if (\n                    channel.cache_dict.get(from_user) is None\n                    and from_user not in channel.running\n                    or content.startswith(\"#\")\n                    and message_id not in channel.request_cnt  # insert the godcmd\n                ):\n                    # The first query begin\n                    if msg.type == \"voice\" and wechatmp_msg.ctype == ContextType.TEXT and conf().get(\"voice_reply_voice\", False):\n                        context = channel._compose_context(wechatmp_msg.ctype, content, isgroup=False, desire_rtype=ReplyType.VOICE, msg=wechatmp_msg)\n                    else:\n                        context = channel._compose_context(wechatmp_msg.ctype, content, isgroup=False, msg=wechatmp_msg)\n                    logger.debug(\"[wechatmp] context: {} {} {}\".format(context, wechatmp_msg, supported))\n\n                    if supported and context:\n                        channel.running.add(from_user)\n                        channel.produce(context)\n                    else:\n                        trigger_prefix = conf().get(\"single_chat_prefix\", [\"\"])[0]\n                        if trigger_prefix or not supported:\n                            if trigger_prefix:\n                                reply_text = textwrap.dedent(\n                                    f\"\"\"\\\n                                    请输入'{trigger_prefix}'接你想说的话跟我说话。\n                         
           例如:\n                                    {trigger_prefix}你好，很高兴见到你。\"\"\"\n                                )\n                            else:\n                                reply_text = textwrap.dedent(\n                                    \"\"\"\\\n                                    你好，很高兴见到你。\n                                    请跟我说话吧。\"\"\"\n                                )\n                        else:\n                            logger.error(f\"[wechatmp] unknown error\")\n                            reply_text = textwrap.dedent(\n                                \"\"\"\\\n                                未知错误，请稍后再试\"\"\"\n                            )\n\n                        replyPost = create_reply(reply_text, msg)\n                        return encrypt_func(replyPost.render())\n\n                # Wechat official server will request 3 times (5 seconds each), with the same message_id.\n                # Because the interval is 5 seconds, here assumed that do not have multithreading problems.\n                request_cnt = channel.request_cnt.get(message_id, 0) + 1\n                channel.request_cnt[message_id] = request_cnt\n                logger.info(\n                    \"[wechatmp] Request {} from {} {} {}:{}\\n{}\".format(\n                        request_cnt, from_user, message_id, web.ctx.env.get(\"REMOTE_ADDR\"), web.ctx.env.get(\"REMOTE_PORT\"), content\n                    )\n                )\n\n                task_running = True\n                waiting_until = request_time + 4\n                while time.time() < waiting_until:\n                    if from_user in channel.running:\n                        time.sleep(0.1)\n                    else:\n                        task_running = False\n                        break\n\n                reply_text = \"\"\n                if task_running:\n                    if request_cnt < 3:\n                        # waiting for timeout (the POST request will be closed by Wechat official server)\n                        time.sleep(2)\n                        # and do nothing, waiting for the next request\n                        return \"success\"\n                    else:  # request_cnt == 3:\n                        # return timeout message\n                        reply_text = \"【正在思考中，回复任意文字尝试获取回复】\"\n                        replyPost = create_reply(reply_text, msg)\n                        return encrypt_func(replyPost.render())\n\n                # reply is ready\n                channel.request_cnt.pop(message_id)\n\n                # no return because of bandwords or other reasons\n                if from_user not in channel.cache_dict and from_user not in channel.running:\n                    return \"success\"\n\n                # Only one request can access to the cached data\n                try:\n                    (reply_type, reply_content) = channel.cache_dict[from_user].pop(0)\n                    if not channel.cache_dict[from_user]:  # If popping the message makes the list empty, delete the user entry from cache\n                        del channel.cache_dict[from_user]\n                except IndexError:\n                    return \"success\"\n\n                if reply_type == \"text\":\n                    if len(reply_content.encode(\"utf8\")) <= MAX_UTF8_LEN:\n                        reply_text = reply_content\n                    else:\n                        continue_text = \"\\n【未完待续，回复任意文字以继续】\"\n                        splits = split_string_by_utf8_length(\n             
                reply_content,\n                            MAX_UTF8_LEN - len(continue_text.encode(\"utf-8\")),\n                            max_split=1,\n                        )\n                        reply_text = splits[0] + continue_text\n                        channel.cache_dict[from_user].append((\"text\", splits[1]))\n\n                    logger.info(\n                        \"[wechatmp] Request {} do send to {} {}: {}\\n{}\".format(\n                            request_cnt,\n                            from_user,\n                            message_id,\n                            content,\n                            reply_text,\n                        )\n                    )\n                    replyPost = create_reply(reply_text, msg)\n                    return encrypt_func(replyPost.render())\n\n                elif reply_type == \"voice\":\n                    media_id = reply_content\n                    asyncio.run_coroutine_threadsafe(channel.delete_media(media_id), channel.delete_media_loop)\n                    logger.info(\n                        \"[wechatmp] Request {} do send to {} {}: {} voice media_id {}\".format(\n                            request_cnt,\n                            from_user,\n                            message_id,\n                            content,\n                            media_id,\n                        )\n                    )\n                    replyPost = VoiceReply(message=msg)\n                    replyPost.media_id = media_id\n                    return encrypt_func(replyPost.render())\n\n                elif reply_type == \"image\":\n                    media_id = reply_content\n                    asyncio.run_coroutine_threadsafe(channel.delete_media(media_id), channel.delete_media_loop)\n                    logger.info(\n                        \"[wechatmp] Request {} do send to {} {}: {} image media_id {}\".format(\n                            request_cnt,\n                            from_user,\n                            message_id,\n                            content,\n                            media_id,\n                        )\n                    )\n                    replyPost = ImageReply(message=msg)\n                    replyPost.media_id = media_id\n                    return encrypt_func(replyPost.render())\n\n            elif msg.type == \"event\":\n                logger.info(\"[wechatmp] Event {} from {}\".format(msg.event, msg.source))\n                if msg.event in [\"subscribe\", \"subscribe_scan\"]:\n                    reply_text = subscribe_msg()\n                    if reply_text:\n                        replyPost = create_reply(reply_text, msg)\n                        return encrypt_func(replyPost.render())\n                else:\n                    return \"success\"\n            else:\n                logger.info(\"[wechatmp] Unsupported message type, ignored for now\")\n            return \"success\"\n        except Exception as exc:\n            logger.exception(exc)\n            return exc
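\n\n# --- Editor's illustrative sketch (not part of the original module) ---\n# The handler above leans on WeChat's retry protocol: the official server\n# POSTs the same message_id up to 3 times, about 5 seconds apart, so a slow\n# answer gets roughly 15 seconds in total. Distilled decision logic, with\n# hypothetical names:\n#\n# def decide(request_cnt, task_running):\n#     if task_running:\n#         if request_cnt < 3:\n#             return \"success\"       # swallow this retry; wait for the next POST\n#         return \"timeout-hint\"      # 3rd try: ask the user to send any text\n#     return \"cached-answer\"         # worker done: pop cache_dict and render\n#\n# assert decide(1, True) == \"success\"\n# assert decide(3, True) == \"timeout-hint\"\n# assert decide(2, False) == \"cached-answer\"\n"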
  },
  {
    "path": "channel/wechatmp/wechatmp_channel.py",
    "content": "# -*- coding: utf-8 -*-\nimport asyncio\nimport imghdr\nimport io\nimport os\nimport threading\nimport time\n\nimport requests\nimport web\nfrom wechatpy.crypto import WeChatCrypto\nfrom wechatpy.exceptions import WeChatClientException\nfrom collections import defaultdict\n\nfrom bridge.context import *\nfrom bridge.reply import *\nfrom channel.chat_channel import ChatChannel\nfrom channel.wechatmp.common import *\nfrom channel.wechatmp.wechatmp_client import WechatMPClient\nfrom common.log import logger\nfrom common.singleton import singleton\nfrom common.utils import split_string_by_utf8_length, remove_markdown_symbol\nfrom config import conf\n\ntry:\n    from voice.audio_convert import any_to_mp3, split_audio\nexcept ImportError as e:\n    logger.debug(\"import voice.audio_convert failed, voice features will not be supported: {}\".format(e))\n\n# If using SSL, uncomment the following lines, and modify the certificate path.\n# from cheroot.server import HTTPServer\n# from cheroot.ssl.builtin import BuiltinSSLAdapter\n# HTTPServer.ssl_adapter = BuiltinSSLAdapter(\n#         certificate='/ssl/cert.pem',\n#         private_key='/ssl/cert.key')\n\n\n@singleton\nclass WechatMPChannel(ChatChannel):\n    def __init__(self, passive_reply=True):\n        super().__init__()\n        self.passive_reply = passive_reply\n        self.NOT_SUPPORT_REPLYTYPE = []\n        self._http_server = None\n        appid = conf().get(\"wechatmp_app_id\")\n        secret = conf().get(\"wechatmp_app_secret\")\n        token = conf().get(\"wechatmp_token\")\n        aes_key = conf().get(\"wechatmp_aes_key\")\n        self.client = WechatMPClient(appid, secret)\n        self.crypto = None\n        if aes_key:\n            self.crypto = WeChatCrypto(token, aes_key, appid)\n        if self.passive_reply:\n            # Cache the reply to the user's first message\n            self.cache_dict = defaultdict(list)\n            # Record whether the current message is being processed\n            self.running = set()\n            # Count the request from wechat official server by message_id\n            self.request_cnt = dict()\n            # The permanent media need to be deleted to avoid media number limit\n            self.delete_media_loop = asyncio.new_event_loop()\n            t = threading.Thread(target=self.start_loop, args=(self.delete_media_loop,))\n            t.setDaemon(True)\n            t.start()\n\n    def startup(self):\n        if self.passive_reply:\n            urls = (\"/wx\", \"channel.wechatmp.passive_reply.Query\")\n        else:\n            urls = (\"/wx\", \"channel.wechatmp.active_reply.Query\")\n        app = web.application(urls, globals(), autoreload=False)\n        port = conf().get(\"wechatmp_port\", 8080)\n        func = web.httpserver.StaticMiddleware(app.wsgifunc())\n        func = web.httpserver.LogMiddleware(func)\n        server = web.httpserver.WSGIServer((\"0.0.0.0\", port), func)\n        self._http_server = server\n        try:\n            server.start()\n        except (KeyboardInterrupt, SystemExit):\n            server.stop()\n\n    def stop(self):\n        if self._http_server:\n            try:\n                self._http_server.stop()\n                logger.info(\"[wechatmp] HTTP server stopped\")\n            except Exception as e:\n                logger.warning(f\"[wechatmp] Error stopping HTTP server: {e}\")\n            self._http_server = None\n\n    def start_loop(self, loop):\n        asyncio.set_event_loop(loop)\n        loop.run_forever()\n\n    
async def delete_media(self, media_id):\n        logger.debug(\"[wechatmp] permanent media {} will be deleted in 10s\".format(media_id))\n        await asyncio.sleep(10)\n        self.client.material.delete(media_id)\n        logger.info(\"[wechatmp] permanent media {} has been deleted\".format(media_id))\n\n    def send(self, reply: Reply, context: Context):\n        receiver = context[\"receiver\"]\n        if self.passive_reply:\n            if reply.type == ReplyType.TEXT or reply.type == ReplyType.INFO or reply.type == ReplyType.ERROR:\n                reply_text = remove_markdown_symbol(reply.content)\n                logger.info(\"[wechatmp] text cached, receiver {}\\n{}\".format(receiver, reply_text))\n                self.cache_dict[receiver].append((\"text\", reply_text))\n            elif reply.type == ReplyType.VOICE:\n                try:\n                    voice_file_path = reply.content\n                    duration, files = split_audio(voice_file_path, 60 * 1000)\n                    if len(files) > 1:\n                        logger.info(\"[wechatmp] voice too long ({}s > 60s), split into {} parts\".format(duration / 1000.0, len(files)))\n\n                    for path in files:\n                        # support: <2M, <60s, mp3/wma/wav/amr\n                        try:\n                            with open(path, \"rb\") as f:\n                                response = self.client.material.add(\"voice\", f)\n                                logger.debug(\"[wechatmp] upload voice response: {}\".format(response))\n                                f_size = os.fstat(f.fileno()).st_size\n                                # throttle roughly 1s plus 2s per MB, presumably to avoid hammering the upload API\n                                time.sleep(1.0 + 2 * f_size / 1024 / 1024)\n                                # todo check media_id\n                        except WeChatClientException as e:\n                            logger.error(\"[wechatmp] upload voice failed: {}\".format(e))\n                            return\n                        media_id = response[\"media_id\"]\n                        logger.info(\"[wechatmp] voice uploaded, receiver {}, media_id {}\".format(receiver, media_id))\n                        self.cache_dict[receiver].append((\"voice\", media_id))\n                except (ImportError, NameError) as e:  # split_audio is undefined when voice.audio_convert failed to import\n                    logger.error(\"[wechatmp] voice conversion failed: {}\".format(e))\n                    logger.error(\"[wechatmp] please install pydub: pip install pydub\")\n                    return\n\n            elif reply.type == ReplyType.IMAGE_URL:  # 从网络下载图片\n                img_url = reply.content\n                pic_res = requests.get(img_url, stream=True)\n                image_storage = io.BytesIO()\n                for block in pic_res.iter_content(1024):\n                    image_storage.write(block)\n                image_storage.seek(0)\n                image_type = imghdr.what(image_storage)\n                filename = receiver + \"-\" + str(context[\"msg\"].msg_id) + \".\" + image_type\n                content_type = \"image/\" + image_type\n                try:\n                    response = self.client.material.add(\"image\", (filename, image_storage, content_type))\n                    logger.debug(\"[wechatmp] upload image response: {}\".format(response))\n                except WeChatClientException as e:\n                    logger.error(\"[wechatmp] upload image failed: {}\".format(e))\n                    return\n                media_id = response[\"media_id\"]\n                logger.info(\"[wechatmp] image uploaded, receiver {}, media_id 
{}\".format(receiver, media_id))\n                self.cache_dict[receiver].append((\"image\", media_id))\n            elif reply.type == ReplyType.IMAGE:  # 从文件读取图片\n                image_storage = reply.content\n                image_storage.seek(0)\n                image_type = imghdr.what(image_storage)\n                filename = receiver + \"-\" + str(context[\"msg\"].msg_id) + \".\" + image_type\n                content_type = \"image/\" + image_type\n                try:\n                    response = self.client.material.add(\"image\", (filename, image_storage, content_type))\n                    logger.debug(\"[wechatmp] upload image response: {}\".format(response))\n                except WeChatClientException as e:\n                    logger.error(\"[wechatmp] upload image failed: {}\".format(e))\n                    return\n                media_id = response[\"media_id\"]\n                logger.info(\"[wechatmp] image uploaded, receiver {}, media_id {}\".format(receiver, media_id))\n                self.cache_dict[receiver].append((\"image\", media_id))\n            elif reply.type == ReplyType.VIDEO_URL:  # 从网络下载视频\n                video_url = reply.content\n                video_res = requests.get(video_url, stream=True)\n                video_storage = io.BytesIO()\n                for block in video_res.iter_content(1024):\n                    video_storage.write(block)\n                video_storage.seek(0)\n                video_type = 'mp4'\n                filename = receiver + \"-\" + str(context[\"msg\"].msg_id) + \".\" + video_type\n                content_type = \"video/\" + video_type\n                try:\n                    response = self.client.material.add(\"video\", (filename, video_storage, content_type))\n                    logger.debug(\"[wechatmp] upload video response: {}\".format(response))\n                except WeChatClientException as e:\n                    logger.error(\"[wechatmp] upload video failed: {}\".format(e))\n                    return\n                media_id = response[\"media_id\"]\n                logger.info(\"[wechatmp] video uploaded, receiver {}, media_id {}\".format(receiver, media_id))\n                self.cache_dict[receiver].append((\"video\", media_id))\n\n            elif reply.type == ReplyType.VIDEO:  # 从文件读取视频\n                video_storage = reply.content\n                video_storage.seek(0)\n                video_type = 'mp4'\n                filename = receiver + \"-\" + str(context[\"msg\"].msg_id) + \".\" + video_type\n                content_type = \"video/\" + video_type\n                try:\n                    response = self.client.material.add(\"video\", (filename, video_storage, content_type))\n                    logger.debug(\"[wechatmp] upload video response: {}\".format(response))\n                except WeChatClientException as e:\n                    logger.error(\"[wechatmp] upload video failed: {}\".format(e))\n                    return\n                media_id = response[\"media_id\"]\n                logger.info(\"[wechatmp] video uploaded, receiver {}, media_id {}\".format(receiver, media_id))\n                self.cache_dict[receiver].append((\"video\", media_id))\n\n        else:\n            if reply.type == ReplyType.TEXT or reply.type == ReplyType.INFO or reply.type == ReplyType.ERROR:\n                reply_text = reply.content\n                texts = split_string_by_utf8_length(reply_text, MAX_UTF8_LEN)\n                if len(texts) > 1:\n                    
logger.info(\"[wechatmp] text too long, split into {} parts\".format(len(texts)))\n                for i, text in enumerate(texts):\n                    self.client.message.send_text(receiver, text)\n                    if i != len(texts) - 1:\n                        time.sleep(0.5)  # 休眠0.5秒，防止发送过快乱序\n                logger.info(\"[wechatmp] Do send text to {}: {}\".format(receiver, reply_text))\n            elif reply.type == ReplyType.VOICE:\n                try:\n                    file_path = reply.content\n                    file_name = os.path.basename(file_path)\n                    file_type = os.path.splitext(file_name)[1]\n                    if file_type == \".mp3\":\n                        file_type = \"audio/mpeg\"\n                    elif file_type == \".amr\":\n                        file_type = \"audio/amr\"\n                    else:\n                        mp3_file = os.path.splitext(file_path)[0] + \".mp3\"\n                        any_to_mp3(file_path, mp3_file)\n                        file_path = mp3_file\n                        file_name = os.path.basename(file_path)\n                        file_type = \"audio/mpeg\"\n                    logger.info(\"[wechatmp] file_name: {}, file_type: {} \".format(file_name, file_type))\n                    media_ids = []\n                    duration, files = split_audio(file_path, 60 * 1000)\n                    if len(files) > 1:\n                        logger.info(\"[wechatmp] voice too long {}s > 60s , split into {} parts\".format(duration / 1000.0, len(files)))\n                    for path in files:\n                        # support: <2M, <60s, AMR\\MP3\n                        response = self.client.media.upload(\"voice\", (os.path.basename(path), open(path, \"rb\"), file_type))\n                        logger.debug(\"[wechatcom] upload voice response: {}\".format(response))\n                        media_ids.append(response[\"media_id\"])\n                        os.remove(path)\n                except ImportError as e:\n                    logger.error(\"[wechatmp] voice conversion failed: {}\".format(e))\n                    logger.error(\"[wechatmp] please install pydub: pip install pydub\")\n                    return\n                except WeChatClientException as e:\n                    logger.error(\"[wechatmp] upload voice failed: {}\".format(e))\n                    return\n\n                try:\n                    os.remove(file_path)\n                except Exception:\n                    pass\n\n                for media_id in media_ids:\n                    self.client.message.send_voice(receiver, media_id)\n                    time.sleep(1)\n                logger.info(\"[wechatmp] Do send voice to {}\".format(receiver))\n            elif reply.type == ReplyType.IMAGE_URL:  # 从网络下载图片\n                img_url = reply.content\n                pic_res = requests.get(img_url, stream=True)\n                image_storage = io.BytesIO()\n                for block in pic_res.iter_content(1024):\n                    image_storage.write(block)\n                image_storage.seek(0)\n                image_type = imghdr.what(image_storage)\n                filename = receiver + \"-\" + str(context[\"msg\"].msg_id) + \".\" + image_type\n                content_type = \"image/\" + image_type\n                try:\n                    response = self.client.media.upload(\"image\", (filename, image_storage, content_type))\n                    logger.debug(\"[wechatmp] upload image response: 
{}\".format(response))\n                except WeChatClientException as e:\n                    logger.error(\"[wechatmp] upload image failed: {}\".format(e))\n                    return\n                self.client.message.send_image(receiver, response[\"media_id\"])\n                logger.info(\"[wechatmp] Do send image to {}\".format(receiver))\n            elif reply.type == ReplyType.IMAGE:  # 从文件读取图片\n                image_storage = reply.content\n                image_storage.seek(0)\n                image_type = imghdr.what(image_storage)\n                filename = receiver + \"-\" + str(context[\"msg\"].msg_id) + \".\" + image_type\n                content_type = \"image/\" + image_type\n                try:\n                    response = self.client.media.upload(\"image\", (filename, image_storage, content_type))\n                    logger.debug(\"[wechatmp] upload image response: {}\".format(response))\n                except WeChatClientException as e:\n                    logger.error(\"[wechatmp] upload image failed: {}\".format(e))\n                    return\n                self.client.message.send_image(receiver, response[\"media_id\"])\n                logger.info(\"[wechatmp] Do send image to {}\".format(receiver))\n            elif reply.type == ReplyType.VIDEO_URL:  # 从网络下载视频\n                video_url = reply.content\n                video_res = requests.get(video_url, stream=True)\n                video_storage = io.BytesIO()\n                for block in video_res.iter_content(1024):\n                    video_storage.write(block)\n                video_storage.seek(0)\n                video_type = 'mp4'\n                filename = receiver + \"-\" + str(context[\"msg\"].msg_id) + \".\" + video_type\n                content_type = \"video/\" + video_type\n                try:\n                    response = self.client.media.upload(\"video\", (filename, video_storage, content_type))\n                    logger.debug(\"[wechatmp] upload video response: {}\".format(response))\n                except WeChatClientException as e:\n                    logger.error(\"[wechatmp] upload video failed: {}\".format(e))\n                    return\n                self.client.message.send_video(receiver, response[\"media_id\"])\n                logger.info(\"[wechatmp] Do send video to {}\".format(receiver))\n            elif reply.type == ReplyType.VIDEO:  # 从文件读取视频\n                video_storage = reply.content\n                video_storage.seek(0)\n                video_type = 'mp4'\n                filename = receiver + \"-\" + str(context[\"msg\"].msg_id) + \".\" + video_type\n                content_type = \"video/\" + video_type\n                try:\n                    response = self.client.media.upload(\"video\", (filename, video_storage, content_type))\n                    logger.debug(\"[wechatmp] upload video response: {}\".format(response))\n                except WeChatClientException as e:\n                    logger.error(\"[wechatmp] upload video failed: {}\".format(e))\n                    return\n                self.client.message.send_video(receiver, response[\"media_id\"])\n                logger.info(\"[wechatmp] Do send video to {}\".format(receiver))\n        return\n\n    def _success_callback(self, session_id, context, **kwargs):  # 线程异常结束时的回调函数\n        logger.debug(\"[wechatmp] Success to generate reply, msgId={}\".format(context[\"msg\"].msg_id))\n        if self.passive_reply:\n            self.running.remove(session_id)\n\n    def 
_fail_callback(self, session_id, exception, context, **kwargs):  # 线程异常结束时的回调函数\n        logger.exception(\"[wechatmp] Fail to generate reply to user, msgId={}, exception={}\".format(context[\"msg\"].msg_id, exception))\n        if self.passive_reply:\n            assert session_id not in self.cache_dict\n            self.running.remove(session_id)\n"
  },
  {
    "path": "channel/wechatmp/wechatmp_client.py",
    "content": "import threading\nimport time\n\nfrom wechatpy.client import WeChatClient\nfrom wechatpy.exceptions import APILimitedException\n\nfrom channel.wechatmp.common import *\nfrom common.log import logger\n\n\nclass WechatMPClient(WeChatClient):\n    def __init__(self, appid, secret, access_token=None, session=None, timeout=None, auto_retry=True):\n        super(WechatMPClient, self).__init__(appid, secret, access_token, session, timeout, auto_retry)\n        self.fetch_access_token_lock = threading.Lock()\n        self.clear_quota_lock = threading.Lock()\n        self.last_clear_quota_time = -1\n\n    def clear_quota(self):\n        return self.post(\"clear_quota\", data={\"appid\": self.appid})\n\n    def clear_quota_v2(self):\n        return self.post(\"clear_quota/v2\", params={\"appid\": self.appid, \"appsecret\": self.secret})\n\n    def fetch_access_token(self):  # 重载父类方法，加锁避免多线程重复获取access_token\n        with self.fetch_access_token_lock:\n            access_token = self.session.get(self.access_token_key)\n            if access_token:\n                if not self.expires_at:\n                    return access_token\n                timestamp = time.time()\n                if self.expires_at - timestamp > 60:\n                    return access_token\n            return super().fetch_access_token()\n\n    def _request(self, method, url_or_endpoint, **kwargs):  # 重载父类方法，遇到API限流时，清除quota后重试\n        try:\n            return super()._request(method, url_or_endpoint, **kwargs)\n        except APILimitedException as e:\n            logger.error(\"[wechatmp] API quata has been used up. {}\".format(e))\n            if self.last_clear_quota_time == -1 or time.time() - self.last_clear_quota_time > 60:\n                with self.clear_quota_lock:\n                    if self.last_clear_quota_time == -1 or time.time() - self.last_clear_quota_time > 60:\n                        self.last_clear_quota_time = time.time()\n                        response = self.clear_quota_v2()\n                        logger.debug(\"[wechatmp] API quata has been cleard, {}\".format(response))\n                return super()._request(method, url_or_endpoint, **kwargs)\n            else:\n                logger.error(\"[wechatmp] last clear quota time is {}, less than 60s, skip clear quota\")\n                raise e\n"
  },
  {
    "path": "channel/wechatmp/wechatmp_message.py",
    "content": "# -*- coding: utf-8 -*-#\n\nfrom bridge.context import ContextType\nfrom channel.chat_message import ChatMessage\nfrom common.log import logger\nfrom common.tmp_dir import TmpDir\n\n\nclass WeChatMPMessage(ChatMessage):\n    def __init__(self, msg, client=None):\n        super().__init__(msg)\n        self.msg_id = msg.id\n        self.create_time = msg.time\n        self.is_group = False\n\n        if msg.type == \"text\":\n            self.ctype = ContextType.TEXT\n            self.content = msg.content\n        elif msg.type == \"voice\":\n            if msg.recognition == None:\n                self.ctype = ContextType.VOICE\n                self.content = TmpDir().path() + msg.media_id + \".\" + msg.format  # content直接存临时目录路径\n\n                def download_voice():\n                    # 如果响应状态码是200，则将响应内容写入本地文件\n                    response = client.media.download(msg.media_id)\n                    if response.status_code == 200:\n                        with open(self.content, \"wb\") as f:\n                            f.write(response.content)\n                    else:\n                        logger.info(f\"[wechatmp] Failed to download voice file, {response.content}\")\n\n                self._prepare_fn = download_voice\n            else:\n                self.ctype = ContextType.TEXT\n                self.content = msg.recognition\n        elif msg.type == \"image\":\n            self.ctype = ContextType.IMAGE\n            self.content = TmpDir().path() + msg.media_id + \".png\"  # content直接存临时目录路径\n\n            def download_image():\n                # 如果响应状态码是200，则将响应内容写入本地文件\n                response = client.media.download(msg.media_id)\n                if response.status_code == 200:\n                    with open(self.content, \"wb\") as f:\n                        f.write(response.content)\n                else:\n                    logger.info(f\"[wechatmp] Failed to download image file, {response.content}\")\n\n            self._prepare_fn = download_image\n        else:\n            raise NotImplementedError(\"Unsupported message type: Type:{} \".format(msg.type))\n\n        self.from_user_id = msg.source\n        self.to_user_id = msg.target\n        self.other_user_id = msg.source\n"
  },
  {
    "path": "channel/wecom_bot/__init__.py",
    "content": ""
  },
  {
    "path": "channel/wecom_bot/wecom_bot_channel.py",
    "content": "\"\"\"\nWeCom (企业微信) AI Bot channel via WebSocket long connection.\n\nSupports:\n- Single chat and group chat (text / image / file input & output)\n- Scheduled task push via aibot_send_msg\n- Heartbeat keep-alive and auto-reconnect\n\"\"\"\n\nimport base64\nimport hashlib\nimport json\nimport math\nimport os\nimport threading\nimport time\nimport uuid\n\nimport requests\nimport websocket\n\nfrom bridge.context import Context, ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom channel.chat_channel import ChatChannel, check_prefix\nfrom channel.wecom_bot.wecom_bot_message import WecomBotMessage\nfrom common.expired_dict import ExpiredDict\nfrom common.log import logger\nfrom common.singleton import singleton\nfrom config import conf\n\nWECOM_WS_URL = \"wss://openws.work.weixin.qq.com\"\nHEARTBEAT_INTERVAL = 30\nMEDIA_CHUNK_SIZE = 512 * 1024  # 512KB per chunk (before base64 encoding)\n\n\n@singleton\nclass WecomBotChannel(ChatChannel):\n\n    def __init__(self):\n        super().__init__()\n        self.bot_id = \"\"\n        self.bot_secret = \"\"\n        self.received_msgs = ExpiredDict(60 * 60 * 7.1)\n        self._ws = None\n        self._ws_thread = None\n        self._heartbeat_thread = None\n        self._connected = False\n        self._stop_event = threading.Event()\n        self._pending_responses = {}  # req_id -> (threading.Event, result_holder)\n        self._pending_lock = threading.Lock()\n        self._stream_states = {}  # req_id -> {\"stream_id\": str, \"content\": str}\n\n        conf()[\"group_name_white_list\"] = [\"ALL_GROUP\"]\n        conf()[\"single_chat_prefix\"] = [\"\"]\n\n    # ------------------------------------------------------------------\n    # Lifecycle\n    # ------------------------------------------------------------------\n\n    def startup(self):\n        self.bot_id = conf().get(\"wecom_bot_id\", \"\")\n        self.bot_secret = conf().get(\"wecom_bot_secret\", \"\")\n\n        if not self.bot_id or not self.bot_secret:\n            err = \"[WecomBot] wecom_bot_id and wecom_bot_secret are required\"\n            logger.error(err)\n            self.report_startup_error(err)\n            return\n\n        self._stop_event.clear()\n        self._start_ws()\n\n    def stop(self):\n        logger.info(\"[WecomBot] stop() called\")\n        self._stop_event.set()\n        if self._ws:\n            try:\n                self._ws.close()\n            except Exception:\n                pass\n        self._ws = None\n        self._connected = False\n\n    # ------------------------------------------------------------------\n    # WebSocket connection\n    # ------------------------------------------------------------------\n\n    def _start_ws(self):\n        def _on_open(ws):\n            logger.info(\"[WecomBot] WebSocket connected, sending subscribe...\")\n            self._send_subscribe()\n\n        def _on_message(ws, raw):\n            try:\n                data = json.loads(raw)\n                self._handle_ws_message(data)\n            except Exception as e:\n                logger.error(f\"[WecomBot] Failed to handle ws message: {e}\", exc_info=True)\n\n        def _on_error(ws, error):\n            logger.error(f\"[WecomBot] WebSocket error: {error}\")\n\n        def _on_close(ws, close_status_code, close_msg):\n            logger.warning(f\"[WecomBot] WebSocket closed: status={close_status_code}, msg={close_msg}\")\n            self._connected = False\n            if not self._stop_event.is_set():\n                
logger.info(\"[WecomBot] Will reconnect in 5s...\")\n                time.sleep(5)\n                if not self._stop_event.is_set():\n                    self._start_ws()\n\n        self._ws = websocket.WebSocketApp(\n            WECOM_WS_URL,\n            on_open=_on_open,\n            on_message=_on_message,\n            on_error=_on_error,\n            on_close=_on_close,\n        )\n\n        def run_forever():\n            try:\n                self._ws.run_forever(ping_interval=0, reconnect=0)\n            except (SystemExit, KeyboardInterrupt):\n                logger.info(\"[WecomBot] WebSocket thread interrupted\")\n            except Exception as e:\n                logger.error(f\"[WecomBot] WebSocket run_forever error: {e}\")\n\n        self._ws_thread = threading.Thread(target=run_forever, daemon=True)\n        self._ws_thread.start()\n        self._ws_thread.join()\n\n    def _ws_send(self, data: dict):\n        if self._ws:\n            self._ws.send(json.dumps(data, ensure_ascii=False))\n\n    def _gen_req_id(self) -> str:\n        return uuid.uuid4().hex[:16]\n\n    # ------------------------------------------------------------------\n    # Subscribe & heartbeat\n    # ------------------------------------------------------------------\n\n    def _send_subscribe(self):\n        self._ws_send({\n            \"cmd\": \"aibot_subscribe\",\n            \"headers\": {\"req_id\": self._gen_req_id()},\n            \"body\": {\n                \"bot_id\": self.bot_id,\n                \"secret\": self.bot_secret,\n            },\n        })\n\n    def _start_heartbeat(self):\n        if self._heartbeat_thread and self._heartbeat_thread.is_alive():\n            return\n\n        def heartbeat_loop():\n            while not self._stop_event.is_set() and self._connected:\n                try:\n                    self._ws_send({\n                        \"cmd\": \"ping\",\n                        \"headers\": {\"req_id\": self._gen_req_id()},\n                    })\n                except Exception as e:\n                    logger.warning(f\"[WecomBot] Heartbeat send failed: {e}\")\n                    break\n                self._stop_event.wait(HEARTBEAT_INTERVAL)\n\n        self._heartbeat_thread = threading.Thread(target=heartbeat_loop, daemon=True)\n        self._heartbeat_thread.start()\n\n    # ------------------------------------------------------------------\n    # Incoming message dispatch\n    # ------------------------------------------------------------------\n\n    def _send_and_wait(self, data: dict, timeout: float = 15) -> dict:\n        \"\"\"Send a ws message and wait for the matching response by req_id.\"\"\"\n        req_id = data.get(\"headers\", {}).get(\"req_id\", \"\")\n        event = threading.Event()\n        holder = {\"data\": None}\n        with self._pending_lock:\n            self._pending_responses[req_id] = (event, holder)\n        self._ws_send(data)\n        event.wait(timeout=timeout)\n        with self._pending_lock:\n            self._pending_responses.pop(req_id, None)\n        return holder[\"data\"] or {}\n\n    def _handle_ws_message(self, data: dict):\n        cmd = data.get(\"cmd\", \"\")\n        errcode = data.get(\"errcode\")\n        req_id = data.get(\"headers\", {}).get(\"req_id\", \"\")\n\n        # Check if this is a response to a pending request\n        if req_id:\n            with self._pending_lock:\n                pending = self._pending_responses.get(req_id)\n            if pending:\n                event, holder = 
pending\n                holder[\"data\"] = data\n                event.set()\n                return\n\n        # Subscribe response (only handle once before connected)\n        if errcode is not None and cmd == \"\":\n            if not self._connected:\n                if errcode == 0:\n                    logger.info(\"[WecomBot] ✅ Subscribe success\")\n                    self._connected = True\n                    self._start_heartbeat()\n                    self.report_startup_success()\n                else:\n                    errmsg = data.get(\"errmsg\", \"unknown error\")\n                    logger.error(f\"[WecomBot] Subscribe failed: errcode={errcode}, errmsg={errmsg}\")\n                    self.report_startup_error(errmsg)\n            return\n\n        if cmd == \"aibot_msg_callback\":\n            self._handle_msg_callback(data)\n        elif cmd == \"aibot_event_callback\":\n            self._handle_event_callback(data)\n        elif cmd == \"\":\n            if errcode and errcode != 0:\n                logger.warning(f\"[WecomBot] Response error: {data}\")\n\n    # ------------------------------------------------------------------\n    # Message callback\n    # ------------------------------------------------------------------\n\n    def _handle_msg_callback(self, data: dict):\n        body = data.get(\"body\", {})\n        req_id = data.get(\"headers\", {}).get(\"req_id\", \"\")\n        msg_id = body.get(\"msgid\", \"\")\n\n        if self.received_msgs.get(msg_id):\n            logger.debug(f\"[WecomBot] Duplicate msg filtered: {msg_id}\")\n            return\n        self.received_msgs[msg_id] = True\n\n        chattype = body.get(\"chattype\", \"single\")\n        is_group = chattype == \"group\"\n\n        try:\n            wecom_msg = WecomBotMessage(body, is_group=is_group)\n        except NotImplementedError as e:\n            logger.warning(f\"[WecomBot] {e}\")\n            return\n        except Exception as e:\n            logger.error(f\"[WecomBot] Failed to parse message: {e}\", exc_info=True)\n            return\n\n        wecom_msg.req_id = req_id\n\n        # File cache logic (same pattern as feishu)\n        from channel.file_cache import get_file_cache\n        file_cache = get_file_cache()\n\n        if is_group:\n            if conf().get(\"group_shared_session\", True):\n                session_id = body.get(\"chatid\", \"\")\n            else:\n                session_id = wecom_msg.from_user_id + \"_\" + body.get(\"chatid\", \"\")\n        else:\n            session_id = wecom_msg.from_user_id\n\n        if wecom_msg.ctype == ContextType.IMAGE:\n            if hasattr(wecom_msg, \"image_path\") and wecom_msg.image_path:\n                file_cache.add(session_id, wecom_msg.image_path, file_type=\"image\")\n                logger.info(f\"[WecomBot] Image cached for session {session_id}\")\n            return\n\n        if wecom_msg.ctype == ContextType.FILE:\n            wecom_msg.prepare()\n            file_cache.add(session_id, wecom_msg.content, file_type=\"file\")\n            logger.info(f\"[WecomBot] File cached for session {session_id}: {wecom_msg.content}\")\n            return\n\n        if wecom_msg.ctype == ContextType.TEXT:\n            cached_files = file_cache.get(session_id)\n            if cached_files:\n                file_refs = []\n                for fi in cached_files:\n                    ftype = fi[\"type\"]\n                    fpath = fi[\"path\"]\n                    if ftype == \"image\":\n                        
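# Surface each cached file to the model as an inline path reference\n                        # appended to the user text, e.g. [图片: /path/to/cached.png]\n                        # (example path only).\n                        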
file_refs.append(f\"[图片: {fpath}]\")\n                    elif ftype == \"video\":\n                        file_refs.append(f\"[视频: {fpath}]\")\n                    else:\n                        file_refs.append(f\"[文件: {fpath}]\")\n                wecom_msg.content = wecom_msg.content + \"\\n\" + \"\\n\".join(file_refs)\n                logger.info(f\"[WecomBot] Attached {len(cached_files)} cached file(s)\")\n                file_cache.clear(session_id)\n\n        context = self._compose_context(\n            wecom_msg.ctype,\n            wecom_msg.content,\n            isgroup=is_group,\n            msg=wecom_msg,\n            no_need_at=True,\n        )\n        if context:\n            if req_id:\n                context[\"on_event\"] = self._make_stream_callback(req_id)\n            self.produce(context)\n\n    # ------------------------------------------------------------------\n    # Event callback\n    # ------------------------------------------------------------------\n\n    def _handle_event_callback(self, data: dict):\n        body = data.get(\"body\", {})\n        event = body.get(\"event\", {})\n        event_type = event.get(\"eventtype\", \"\")\n\n        if event_type == \"enter_chat\":\n            logger.info(f\"[WecomBot] User entered chat: {body.get('from', {}).get('userid')}\")\n        elif event_type == \"disconnected_event\":\n            logger.warning(\"[WecomBot] Received disconnected_event, another connection took over\")\n        else:\n            logger.debug(f\"[WecomBot] Event: {event_type}\")\n\n    # ------------------------------------------------------------------\n    # Stream callback (for agent on_event)\n    # ------------------------------------------------------------------\n\n    def _make_stream_callback(self, req_id: str):\n        \"\"\"Build an on_event callback that pushes agent stream deltas to wecom via stream message.\n\n        All intermediate segments (thinking before tool calls) and the final answer\n        are accumulated into a single stream message, separated by '---'.\n        \"\"\"\n        stream_id = uuid.uuid4().hex[:16]\n        self._stream_states[req_id] = {\n            \"stream_id\": stream_id,\n            \"committed\": \"\",  # finalized content from previous segments\n            \"current\": \"\",    # current segment being streamed\n        }\n\n        def _push_stream(state: dict):\n            \"\"\"Push current stream content to wecom.\"\"\"\n            self._ws_send({\n                \"cmd\": \"aibot_respond_msg\",\n                \"headers\": {\"req_id\": req_id},\n                \"body\": {\n                    \"msgtype\": \"stream\",\n                    \"stream\": {\n                        \"id\": state[\"stream_id\"],\n                        \"finish\": False,\n                        \"content\": state[\"committed\"] + state[\"current\"],\n                    },\n                },\n            })\n\n        def on_event(event: dict):\n            event_type = event.get(\"type\")\n            data = event.get(\"data\", {})\n            state = self._stream_states.get(req_id)\n            if not state:\n                return\n\n            if event_type == \"turn_start\":\n                state[\"current\"] = \"\"\n\n            elif event_type == \"message_update\":\n                delta = data.get(\"delta\", \"\")\n                if delta:\n                    state[\"current\"] += delta\n                    _push_stream(state)\n\n            elif event_type == \"message_end\":\n          
      tool_calls = data.get(\"tool_calls\", [])\n                if tool_calls:\n                    if state[\"current\"].strip():\n                        state[\"committed\"] += state[\"current\"].strip() + \"\\n\\n---\\n\\n\"\n                        state[\"current\"] = \"\"\n                else:\n                    state[\"committed\"] += state[\"current\"]\n                    state[\"current\"] = \"\"\n\n        return on_event\n\n    # ------------------------------------------------------------------\n    # _compose_context (same pattern as feishu)\n    # ------------------------------------------------------------------\n\n    def _compose_context(self, ctype: ContextType, content, **kwargs):\n        context = Context(ctype, content)\n        context.kwargs = kwargs\n        if \"channel_type\" not in context:\n            context[\"channel_type\"] = self.channel_type\n        if \"origin_ctype\" not in context:\n            context[\"origin_ctype\"] = ctype\n\n        cmsg = context[\"msg\"]\n\n        if cmsg.is_group:\n            if conf().get(\"group_shared_session\", True):\n                context[\"session_id\"] = cmsg.other_user_id\n            else:\n                context[\"session_id\"] = f\"{cmsg.from_user_id}:{cmsg.other_user_id}\"\n        else:\n            context[\"session_id\"] = cmsg.from_user_id\n\n        context[\"receiver\"] = cmsg.other_user_id\n\n        if ctype == ContextType.TEXT:\n            img_match_prefix = check_prefix(content, conf().get(\"image_create_prefix\"))\n            if img_match_prefix:\n                content = content.replace(img_match_prefix, \"\", 1)\n                context.type = ContextType.IMAGE_CREATE\n            else:\n                context.type = ContextType.TEXT\n            context.content = content.strip()\n\n        return context\n\n    # ------------------------------------------------------------------\n    # Send reply\n    # ------------------------------------------------------------------\n\n    def send(self, reply: Reply, context: Context):\n        msg = context.get(\"msg\")\n        is_group = context.get(\"isgroup\", False)\n        receiver = context.get(\"receiver\", \"\")\n\n        # Determine req_id for responding or use send_msg for scheduled push\n        req_id = getattr(msg, \"req_id\", None) if msg else None\n\n        if reply.type == ReplyType.TEXT:\n            self._send_text(reply.content, receiver, is_group, req_id)\n        elif reply.type in (ReplyType.IMAGE_URL, ReplyType.IMAGE):\n            self._send_image(reply.content, receiver, is_group, req_id)\n        elif reply.type == ReplyType.FILE:\n            if hasattr(reply, \"text_content\") and reply.text_content:\n                self._send_text(reply.text_content, receiver, is_group, req_id)\n                time.sleep(0.3)\n            self._send_file(reply.content, receiver, is_group, req_id)\n        elif reply.type == ReplyType.VIDEO or reply.type == ReplyType.VIDEO_URL:\n            self._send_file(reply.content, receiver, is_group, req_id, media_type=\"video\")\n        else:\n            logger.warning(f\"[WecomBot] Unsupported reply type: {reply.type}, falling back to text\")\n            self._send_text(str(reply.content), receiver, is_group, req_id)\n\n    # ------------------------------------------------------------------\n    # Respond message (via websocket)\n    # ------------------------------------------------------------------\n\n    def _send_text(self, content: str, receiver: str, is_group: bool, req_id: 
str = None):\n        \"\"\"Send text/markdown reply. Reuses stream state if available (streaming mode).\"\"\"\n        if req_id:\n            state = self._stream_states.pop(req_id, None)\n            if state:\n                final_content = state[\"committed\"]\n                stream_id = state[\"stream_id\"]\n            else:\n                final_content = content\n                stream_id = uuid.uuid4().hex[:16]\n            self._ws_send({\n                \"cmd\": \"aibot_respond_msg\",\n                \"headers\": {\"req_id\": req_id},\n                \"body\": {\n                    \"msgtype\": \"stream\",\n                    \"stream\": {\n                        \"id\": stream_id,\n                        \"finish\": True,\n                        \"content\": final_content,\n                    },\n                },\n            })\n        else:\n            self._active_send_markdown(content, receiver, is_group)\n\n    def _send_image(self, img_path_or_url: str, receiver: str, is_group: bool, req_id: str = None):\n        \"\"\"Send image reply. Converts to JPG/PNG and compresses if >2MB.\"\"\"\n        local_path = img_path_or_url\n        if local_path.startswith(\"file://\"):\n            local_path = local_path[7:]\n\n        if local_path.startswith((\"http://\", \"https://\")):\n            try:\n                resp = requests.get(local_path, timeout=30)\n                resp.raise_for_status()\n                ct = resp.headers.get(\"Content-Type\", \"\")\n                if \"jpeg\" in ct or \"jpg\" in ct:\n                    ext = \".jpg\"\n                elif \"webp\" in ct:\n                    ext = \".webp\"\n                elif \"gif\" in ct:\n                    ext = \".gif\"\n                else:\n                    ext = \".png\"\n                tmp_path = f\"/tmp/wecom_img_{uuid.uuid4().hex[:8]}{ext}\"\n                with open(tmp_path, \"wb\") as f:\n                    f.write(resp.content)\n                logger.info(f\"[WecomBot] Image downloaded: size={len(resp.content)}, \"\n                            f\"content-type={ct}, path={tmp_path}\")\n                local_path = tmp_path\n            except Exception as e:\n                logger.error(f\"[WecomBot] Failed to download image for sending: {e}\")\n                self._send_text(\"[Image send failed]\", receiver, is_group, req_id)\n                return\n\n        if not os.path.exists(local_path):\n            logger.error(f\"[WecomBot] Image file not found: {local_path}\")\n            return\n\n        max_image_size = 2 * 1024 * 1024  # 2MB limit for image upload\n        local_path = self._ensure_image_format(local_path)\n        if not local_path:\n            self._send_text(\"[Image format conversion failed]\", receiver, is_group, req_id)\n            return\n\n        if os.path.getsize(local_path) > max_image_size:\n            local_path = self._compress_image(local_path, max_image_size)\n            if not local_path:\n                self._send_text(\"[Image too large]\", receiver, is_group, req_id)\n                return\n\n        file_size = os.path.getsize(local_path)\n        logger.info(f\"[WecomBot] Uploading image: path={local_path}, size={file_size} bytes\")\n        media_id = self._upload_media(local_path, \"image\")\n        if not media_id:\n            logger.error(\"[WecomBot] Failed to upload image\")\n            self._send_text(\"[Image upload failed]\", receiver, is_group, req_id)\n            return\n\n        if req_id:\n            
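# A req_id means we are answering the originating callback, so respond\n            # with aibot_respond_msg; without one (scheduled push) fall through\n            # to the proactive aibot_send_msg branch below.\n            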
self._ws_send({\n                \"cmd\": \"aibot_respond_msg\",\n                \"headers\": {\"req_id\": req_id},\n                \"body\": {\n                    \"msgtype\": \"image\",\n                    \"image\": {\"media_id\": media_id},\n                },\n            })\n        else:\n            self._ws_send({\n                \"cmd\": \"aibot_send_msg\",\n                \"headers\": {\"req_id\": self._gen_req_id()},\n                \"body\": {\n                    \"chatid\": receiver,\n                    \"chat_type\": 2 if is_group else 1,\n                    \"msgtype\": \"image\",\n                    \"image\": {\"media_id\": media_id},\n                },\n            })\n\n    @staticmethod\n    def _ensure_image_format(file_path: str) -> str:\n        \"\"\"Ensure image is JPG or PNG (the only formats wecom supports). Convert if needed.\"\"\"\n        try:\n            from PIL import Image\n            img = Image.open(file_path)\n            fmt = (img.format or \"\").upper()\n            if fmt in (\"JPEG\", \"PNG\"):\n                # Already a supported format, but make sure the filename extension matches\n                ext = os.path.splitext(file_path)[1].lower()\n                if fmt == \"JPEG\" and ext in (\".jpg\", \".jpeg\"):\n                    return file_path\n                if fmt == \"PNG\" and ext == \".png\":\n                    return file_path\n                # Extension doesn't match — rename/copy with correct extension\n                correct_ext = \".jpg\" if fmt == \"JPEG\" else \".png\"\n                out_path = f\"/tmp/wecom_fmt_{uuid.uuid4().hex[:8]}{correct_ext}\"\n                img.save(out_path, fmt)\n                logger.info(f\"[WecomBot] Image renamed: {file_path} -> {out_path} ({fmt})\")\n                return out_path\n\n            # Unsupported format (WebP, GIF, BMP, etc.) — convert to PNG\n            if img.mode == \"RGBA\":\n                out_path = f\"/tmp/wecom_fmt_{uuid.uuid4().hex[:8]}.png\"\n                img.save(out_path, \"PNG\")\n            else:\n                out_path = f\"/tmp/wecom_fmt_{uuid.uuid4().hex[:8]}.jpg\"\n                img.convert(\"RGB\").save(out_path, \"JPEG\", quality=90)\n            logger.info(f\"[WecomBot] Image converted from {fmt} -> {out_path}\")\n            return out_path\n        except Exception as e:\n            logger.error(f\"[WecomBot] Image format check failed: {e}\")\n            return file_path\n\n    @staticmethod\n    def _compress_image(file_path: str, max_bytes: int) -> str:\n        \"\"\"Compress image to fit within max_bytes. 
Returns new path or empty string.\"\"\"\n        try:\n            from PIL import Image\n            img = Image.open(file_path)\n            if img.mode == \"RGBA\":\n                img = img.convert(\"RGB\")\n\n            out_path = f\"/tmp/wecom_compressed_{uuid.uuid4().hex[:8]}.jpg\"\n            quality = 85\n            while quality >= 30:\n                img.save(out_path, \"JPEG\", quality=quality, optimize=True)\n                if os.path.getsize(out_path) <= max_bytes:\n                    logger.info(f\"[WecomBot] Image compressed: quality={quality}, \"\n                                f\"size={os.path.getsize(out_path)} bytes\")\n                    return out_path\n                quality -= 10\n\n            # Still too large — resize\n            ratio = (max_bytes / os.path.getsize(out_path)) ** 0.5\n            new_size = (int(img.width * ratio), int(img.height * ratio))\n            img = img.resize(new_size, Image.LANCZOS)\n            img.save(out_path, \"JPEG\", quality=70, optimize=True)\n            if os.path.getsize(out_path) <= max_bytes:\n                logger.info(f\"[WecomBot] Image compressed with resize: {new_size}, \"\n                            f\"size={os.path.getsize(out_path)} bytes\")\n                return out_path\n\n            logger.error(f\"[WecomBot] Cannot compress image below {max_bytes} bytes\")\n            return \"\"\n        except Exception as e:\n            logger.error(f\"[WecomBot] Image compression failed: {e}\")\n            return \"\"\n\n    def _send_file(self, file_path: str, receiver: str, is_group: bool,\n                   req_id: str = None, media_type: str = \"file\"):\n        \"\"\"Send file/video reply by uploading media first.\"\"\"\n        local_path = file_path\n        if local_path.startswith(\"file://\"):\n            local_path = local_path[7:]\n\n        if local_path.startswith((\"http://\", \"https://\")):\n            try:\n                resp = requests.get(local_path, timeout=60)\n                resp.raise_for_status()\n                ext = os.path.splitext(local_path)[1] or \".bin\"\n                tmp_path = f\"/tmp/wecom_file_{uuid.uuid4().hex[:8]}{ext}\"\n                with open(tmp_path, \"wb\") as f:\n                    f.write(resp.content)\n                local_path = tmp_path\n            except Exception as e:\n                logger.error(f\"[WecomBot] Failed to download file for sending: {e}\")\n                return\n\n        if not os.path.exists(local_path):\n            logger.error(f\"[WecomBot] File not found: {local_path}\")\n            return\n\n        media_id = self._upload_media(local_path, media_type)\n        if not media_id:\n            logger.error(f\"[WecomBot] Failed to upload {media_type}\")\n            return\n\n        if req_id:\n            self._ws_send({\n                \"cmd\": \"aibot_respond_msg\",\n                \"headers\": {\"req_id\": req_id},\n                \"body\": {\n                    \"msgtype\": media_type,\n                    media_type: {\"media_id\": media_id},\n                },\n            })\n        else:\n            self._ws_send({\n                \"cmd\": \"aibot_send_msg\",\n                \"headers\": {\"req_id\": self._gen_req_id()},\n                \"body\": {\n                    \"chatid\": receiver,\n                    \"chat_type\": 2 if is_group else 1,\n                    \"msgtype\": media_type,\n                    media_type: {\"media_id\": media_id},\n                },\n            })\n\n    def 
_active_send_markdown(self, content: str, receiver: str, is_group: bool):\n        \"\"\"Proactively send markdown message (for scheduled tasks, no req_id).\"\"\"\n        self._ws_send({\n            \"cmd\": \"aibot_send_msg\",\n            \"headers\": {\"req_id\": self._gen_req_id()},\n            \"body\": {\n                \"chatid\": receiver,\n                \"chat_type\": 2 if is_group else 1,\n                \"msgtype\": \"markdown\",\n                \"markdown\": {\"content\": content},\n            },\n        })\n\n    # ------------------------------------------------------------------\n    # Media upload (chunked)\n    # ------------------------------------------------------------------\n\n    def _upload_media(self, file_path: str, media_type: str = \"file\") -> str:\n        \"\"\"\n        Upload a local file to wecom bot via chunked upload protocol.\n        Returns media_id on success, empty string on failure.\n        \"\"\"\n        if not os.path.exists(file_path):\n            logger.error(f\"[WecomBot] Upload file not found: {file_path}\")\n            return \"\"\n\n        file_size = os.path.getsize(file_path)\n        if file_size < 5:\n            logger.error(f\"[WecomBot] File too small: {file_size} bytes\")\n            return \"\"\n\n        filename = os.path.basename(file_path)\n        total_chunks = math.ceil(file_size / MEDIA_CHUNK_SIZE)\n        if total_chunks > 100:\n            logger.error(f\"[WecomBot] Too many chunks: {total_chunks} > 100\")\n            return \"\"\n\n        file_md5 = hashlib.md5()\n        with open(file_path, \"rb\") as f:\n            for block in iter(lambda: f.read(8192), b\"\"):\n                file_md5.update(block)\n        md5_hex = file_md5.hexdigest()\n\n        # 1. Init upload\n        init_resp = self._send_and_wait({\n            \"cmd\": \"aibot_upload_media_init\",\n            \"headers\": {\"req_id\": self._gen_req_id()},\n            \"body\": {\n                \"type\": media_type,\n                \"filename\": filename,\n                \"total_size\": file_size,\n                \"total_chunks\": total_chunks,\n                \"md5\": md5_hex,\n            },\n        }, timeout=15)\n\n        if init_resp.get(\"errcode\") != 0:\n            logger.error(f\"[WecomBot] Upload init failed: {init_resp}\")\n            return \"\"\n\n        upload_id = init_resp.get(\"body\", {}).get(\"upload_id\")\n        if not upload_id:\n            logger.error(\"[WecomBot] Failed to get upload_id\")\n            return \"\"\n\n        # 2. Upload chunks\n        with open(file_path, \"rb\") as f:\n            for idx in range(total_chunks):\n                chunk = f.read(MEDIA_CHUNK_SIZE)\n                b64_data = base64.b64encode(chunk).decode(\"utf-8\")\n                chunk_resp = self._send_and_wait({\n                    \"cmd\": \"aibot_upload_media_chunk\",\n                    \"headers\": {\"req_id\": self._gen_req_id()},\n                    \"body\": {\n                        \"upload_id\": upload_id,\n                        \"chunk_index\": idx,\n                        \"base64_data\": b64_data,\n                    },\n                }, timeout=30)\n                if chunk_resp.get(\"errcode\") != 0:\n                    logger.error(f\"[WecomBot] Chunk {idx} upload failed: {chunk_resp}\")\n                    return \"\"\n\n        # 3. 
Finish upload\n        finish_resp = self._send_and_wait({\n            \"cmd\": \"aibot_upload_media_finish\",\n            \"headers\": {\"req_id\": self._gen_req_id()},\n            \"body\": {\"upload_id\": upload_id},\n        }, timeout=30)\n\n        if finish_resp.get(\"errcode\") != 0:\n            logger.error(f\"[WecomBot] Upload finish failed: {finish_resp}\")\n            return \"\"\n\n        media_id = finish_resp.get(\"body\", {}).get(\"media_id\", \"\")\n        if media_id:\n            logger.info(f\"[WecomBot] Media uploaded: media_id={media_id}\")\n        else:\n            logger.error(\"[WecomBot] Failed to get media_id from finish response\")\n        return media_id\n"
  },
  {
    "path": "channel/wecom_bot/wecom_bot_message.py",
    "content": "import os\nimport re\nimport base64\nimport requests\n\nfrom bridge.context import ContextType\nfrom channel.chat_message import ChatMessage\nfrom common.log import logger\nfrom common.utils import expand_path\nfrom config import conf\nfrom Crypto.Cipher import AES\n\n\nMAGIC_SIGNATURES = [\n    (b\"%PDF\", \".pdf\"),\n    (b\"\\x89PNG\\r\\n\\x1a\\n\", \".png\"),\n    (b\"\\xff\\xd8\\xff\", \".jpg\"),\n    (b\"GIF87a\", \".gif\"),\n    (b\"GIF89a\", \".gif\"),\n    (b\"RIFF\", \".webp\"),  # RIFF....WEBP, further checked below\n    (b\"PK\\x03\\x04\", \".zip\"),  # zip / docx / xlsx / pptx\n    (b\"\\x1f\\x8b\", \".gz\"),\n    (b\"Rar!\\x1a\\x07\", \".rar\"),\n    (b\"7z\\xbc\\xaf\\x27\\x1c\", \".7z\"),\n    (b\"\\x00\\x00\\x00\", \".mp4\"),  # ftyp box, further checked below\n    (b\"#!AMR\", \".amr\"),\n]\n\nOFFICE_ZIP_MARKERS = {\n    b\"word/\": \".docx\",\n    b\"xl/\": \".xlsx\",\n    b\"ppt/\": \".pptx\",\n}\n\n\ndef _guess_ext_from_bytes(data: bytes) -> str:\n    \"\"\"Guess file extension from file content magic bytes.\"\"\"\n    if not data or len(data) < 8:\n        return \"\"\n    for sig, ext in MAGIC_SIGNATURES:\n        if data[:len(sig)] == sig:\n            if ext == \".webp\" and data[8:12] != b\"WEBP\":\n                continue\n            if ext == \".mp4\":\n                if b\"ftyp\" not in data[4:12]:\n                    continue\n            if ext == \".zip\":\n                for marker, office_ext in OFFICE_ZIP_MARKERS.items():\n                    if marker in data[:2000]:\n                        return office_ext\n                return \".zip\"\n            return ext\n    return \"\"\n\n\ndef _decrypt_media(url: str, aeskey: str) -> bytes:\n    \"\"\"\n    Download and decrypt AES-256-CBC encrypted media from wecom bot.\n    Returns decrypted bytes.\n    \"\"\"\n    resp = requests.get(url, timeout=30)\n    resp.raise_for_status()\n    encrypted = resp.content\n\n    key = base64.b64decode(aeskey + \"=\" * (-len(aeskey) % 4))\n    if len(key) != 32:\n        raise ValueError(f\"Invalid AES key length: {len(key)}, expected 32\")\n\n    iv = key[:16]\n    cipher = AES.new(key, AES.MODE_CBC, iv)\n    decrypted = cipher.decrypt(encrypted)\n\n    pad_len = decrypted[-1]\n    if pad_len > 32:\n        raise ValueError(f\"Invalid PKCS7 padding length: {pad_len}\")\n    return decrypted[:-pad_len]\n\n\ndef _get_tmp_dir() -> str:\n    \"\"\"Return the workspace tmp directory (absolute path), creating it if needed.\"\"\"\n    ws_root = expand_path(conf().get(\"agent_workspace\", \"~/cow\"))\n    tmp_dir = os.path.join(ws_root, \"tmp\")\n    os.makedirs(tmp_dir, exist_ok=True)\n    return tmp_dir\n\n\nclass WecomBotMessage(ChatMessage):\n    \"\"\"Message wrapper for wecom bot (websocket long-connection mode).\"\"\"\n\n    def __init__(self, msg_body: dict, is_group: bool = False):\n        super().__init__(msg_body)\n        self.msg_id = msg_body.get(\"msgid\")\n        self.create_time = msg_body.get(\"create_time\")\n        self.is_group = is_group\n\n        msg_type = msg_body.get(\"msgtype\")\n        from_userid = msg_body.get(\"from\", {}).get(\"userid\", \"\")\n        chat_id = msg_body.get(\"chatid\", \"\")\n        bot_id = msg_body.get(\"aibotid\", \"\")\n\n        if msg_type == \"text\":\n            self.ctype = ContextType.TEXT\n            content = msg_body.get(\"text\", {}).get(\"content\", \"\")\n            if is_group:\n                content = re.sub(r\"@\\S+\\s*\", \"\", content).strip()\n            self.content = 
content\n\n        elif msg_type == \"voice\":\n            self.ctype = ContextType.TEXT\n            self.content = msg_body.get(\"voice\", {}).get(\"content\", \"\")\n\n        elif msg_type == \"image\":\n            self.ctype = ContextType.IMAGE\n            image_info = msg_body.get(\"image\", {})\n            image_url = image_info.get(\"url\", \"\")\n            aeskey = image_info.get(\"aeskey\", \"\")\n            tmp_dir = _get_tmp_dir()\n            image_path = os.path.join(tmp_dir, f\"wecom_{self.msg_id}.png\")\n\n            try:\n                data = _decrypt_media(image_url, aeskey)\n                with open(image_path, \"wb\") as f:\n                    f.write(data)\n                self.content = image_path\n                self.image_path = image_path\n                logger.info(f\"[WecomBot] Image downloaded: {image_path}\")\n            except Exception as e:\n                logger.error(f\"[WecomBot] Failed to download image: {e}\")\n                self.content = \"[Image download failed]\"\n                self.image_path = None\n\n        elif msg_type == \"mixed\":\n            self.ctype = ContextType.TEXT\n            text_parts = []\n            image_paths = []\n            mixed_items = msg_body.get(\"mixed\", {}).get(\"msg_item\", [])\n            tmp_dir = _get_tmp_dir()\n\n            for idx, item in enumerate(mixed_items):\n                item_type = item.get(\"msgtype\")\n                if item_type == \"text\":\n                    txt = item.get(\"text\", {}).get(\"content\", \"\")\n                    if is_group:\n                        txt = re.sub(r\"@\\S+\\s*\", \"\", txt).strip()\n                    if txt:\n                        text_parts.append(txt)\n                elif item_type == \"image\":\n                    img_info = item.get(\"image\", {})\n                    img_url = img_info.get(\"url\", \"\")\n                    img_aeskey = img_info.get(\"aeskey\", \"\")\n                    img_path = os.path.join(tmp_dir, f\"wecom_{self.msg_id}_{idx}.png\")\n                    try:\n                        img_data = _decrypt_media(img_url, img_aeskey)\n                        with open(img_path, \"wb\") as f:\n                            f.write(img_data)\n                        image_paths.append(img_path)\n                    except Exception as e:\n                        logger.error(f\"[WecomBot] Failed to download mixed image: {e}\")\n\n            content_parts = text_parts[:]\n            for p in image_paths:\n                content_parts.append(f\"[图片: {p}]\")\n            self.content = \"\\n\".join(content_parts) if content_parts else \"[Mixed message]\"\n\n        elif msg_type == \"file\":\n            self.ctype = ContextType.FILE\n            file_info = msg_body.get(\"file\", {})\n            file_url = file_info.get(\"url\", \"\")\n            aeskey = file_info.get(\"aeskey\", \"\")\n            tmp_dir = _get_tmp_dir()\n            base_path = os.path.join(tmp_dir, f\"wecom_{self.msg_id}\")\n            self.content = base_path\n\n            def _download_file():\n                try:\n                    data = _decrypt_media(file_url, aeskey)\n                    ext = _guess_ext_from_bytes(data)\n                    final_path = base_path + ext\n                    with open(final_path, \"wb\") as f:\n                        f.write(data)\n                    self.content = final_path\n                    logger.info(f\"[WecomBot] File downloaded: {final_path}\")\n                except Exception as 
e:\n                    logger.error(f\"[WecomBot] Failed to download file: {e}\")\n            self._prepare_fn = _download_file\n\n        elif msg_type == \"video\":\n            self.ctype = ContextType.FILE\n            video_info = msg_body.get(\"video\", {})\n            video_url = video_info.get(\"url\", \"\")\n            aeskey = video_info.get(\"aeskey\", \"\")\n            tmp_dir = _get_tmp_dir()\n            self.content = os.path.join(tmp_dir, f\"wecom_{self.msg_id}.mp4\")\n\n            def _download_video():\n                try:\n                    data = _decrypt_media(video_url, aeskey)\n                    with open(self.content, \"wb\") as f:\n                        f.write(data)\n                    logger.info(f\"[WecomBot] Video downloaded: {self.content}\")\n                except Exception as e:\n                    logger.error(f\"[WecomBot] Failed to download video: {e}\")\n            self._prepare_fn = _download_video\n\n        else:\n            raise NotImplementedError(f\"Unsupported message type: {msg_type}\")\n\n        self.from_user_id = from_userid\n        self.to_user_id = bot_id\n        if is_group:\n            self.other_user_id = chat_id\n            self.actual_user_id = from_userid\n            self.actual_user_nickname = from_userid\n        else:\n            self.other_user_id = from_userid\n            self.actual_user_id = from_userid\n"
  },
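The `_decrypt_media` helper above chains download, AES-256-CBC decryption (the IV is the first 16 bytes of the key), and PKCS#7 unpadding. Below is a minimal standalone sketch of the decrypt/unpad step, assuming pycryptodome and WeChat Work's 32-byte PKCS#7 block (consistent with the module accepting pad lengths up to 32); unlike the original it also rejects a zero pad byte, since `decrypted[:-0]` would silently return an empty buffer.

```python
import base64

from Crypto.Cipher import AES  # pycryptodome


def decrypt_wecom_media(encrypted: bytes, aeskey: str) -> bytes:
    # The aeskey arrives base64-encoded without '=' padding; restore it first.
    key = base64.b64decode(aeskey + "=" * (-len(aeskey) % 4))
    if len(key) != 32:
        raise ValueError(f"invalid AES key length: {len(key)}, expected 32")
    cipher = AES.new(key, AES.MODE_CBC, iv=key[:16])  # IV = first 16 key bytes
    decrypted = cipher.decrypt(encrypted)
    pad_len = decrypted[-1]
    # PKCS#7 with a 32-byte block: valid pad lengths are 1..32.
    if not 1 <= pad_len <= 32:
        raise ValueError(f"invalid PKCS7 padding length: {pad_len}")
    return decrypted[:-pad_len]
```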
  {
    "path": "common/cloud_client.py",
    "content": "\"\"\"\nCloud management client for connecting to the LinkAI control console.\n\nHandles remote configuration sync, message push, and skill management\nvia the LinkAI socket protocol.\n\"\"\"\n\nfrom bridge.context import Context, ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom linkai import LinkAIClient, PushMsg\nfrom config import conf, pconf, plugin_config, available_setting, write_plugin_config, get_root\nfrom plugins import PluginManager\nimport threading\nimport time\nimport json\nimport os\n\n\nchat_client: LinkAIClient\n\n\nCHANNEL_ACTIONS = {\"channel_create\", \"channel_update\", \"channel_delete\"}\n\n# channelType -> config key mapping for app credentials\nCREDENTIAL_MAP = {\n    \"feishu\":            (\"feishu_app_id\",          \"feishu_app_secret\"),\n    \"dingtalk\":          (\"dingtalk_client_id\",      \"dingtalk_client_secret\"),\n    \"wecom_bot\":         (\"wecom_bot_id\",            \"wecom_bot_secret\"),\n    \"qq\":                (\"qq_app_id\",               \"qq_app_secret\"),\n    \"wechatmp\":          (\"wechatmp_app_id\",         \"wechatmp_app_secret\"),\n    \"wechatmp_service\":  (\"wechatmp_app_id\",         \"wechatmp_app_secret\"),\n    \"wechatcom_app\":     (\"wechatcomapp_agent_id\",   \"wechatcomapp_secret\"),\n}\n\n\nclass CloudClient(LinkAIClient):\n    def __init__(self, api_key: str, channel, host: str = \"\"):\n        super().__init__(api_key, host)\n        self.channel = channel\n        self.client_type = channel.channel_type\n        self.channel_mgr = None\n        self._skill_service = None\n        self._memory_service = None\n        self._chat_service = None\n\n    @property\n    def skill_service(self):\n        \"\"\"Lazy-init SkillService so it is available once SkillManager exists.\"\"\"\n        if self._skill_service is None:\n            try:\n                from agent.skills.manager import SkillManager\n                from agent.skills.service import SkillService\n                from config import conf\n                from common.utils import expand_path\n                workspace_root = expand_path(conf().get(\"agent_workspace\", \"~/cow\"))\n                manager = SkillManager(custom_dir=os.path.join(workspace_root, \"skills\"))\n                self._skill_service = SkillService(manager)\n                logger.debug(\"[CloudClient] SkillService initialised\")\n            except Exception as e:\n                logger.error(f\"[CloudClient] Failed to init SkillService: {e}\")\n        return self._skill_service\n\n    @property\n    def memory_service(self):\n        \"\"\"Lazy-init MemoryService.\"\"\"\n        if self._memory_service is None:\n            try:\n                from agent.memory.service import MemoryService\n                from config import conf\n                from common.utils import expand_path\n                workspace_root = expand_path(conf().get(\"agent_workspace\", \"~/cow\"))\n                self._memory_service = MemoryService(workspace_root)\n                logger.debug(\"[CloudClient] MemoryService initialised\")\n            except Exception as e:\n                logger.error(f\"[CloudClient] Failed to init MemoryService: {e}\")\n        return self._memory_service\n\n    @property\n    def chat_service(self):\n        \"\"\"Lazy-init ChatService (requires AgentBridge via Bridge singleton).\"\"\"\n        if self._chat_service is None:\n            try:\n                from agent.chat.service import ChatService\n   
             from bridge.bridge import Bridge\n                agent_bridge = Bridge().get_agent_bridge()\n                self._chat_service = ChatService(agent_bridge)\n                logger.debug(\"[CloudClient] ChatService initialised\")\n            except Exception as e:\n                logger.error(f\"[CloudClient] Failed to init ChatService: {e}\")\n        return self._chat_service\n\n    # ------------------------------------------------------------------\n    # message push callback\n    # ------------------------------------------------------------------\n    def on_message(self, push_msg: PushMsg):\n        session_id = push_msg.session_id\n        msg_content = push_msg.msg_content\n        logger.info(f\"receive msg push, session_id={session_id}, msg_content={msg_content}\")\n        context = Context()\n        context.type = ContextType.TEXT\n        context[\"receiver\"] = session_id\n        context[\"isgroup\"] = push_msg.is_group\n        self.channel.send(Reply(ReplyType.TEXT, content=msg_content), context)\n\n    # ------------------------------------------------------------------\n    # config callback\n    # ------------------------------------------------------------------\n    def on_config(self, config: dict):\n        if not self.client_id:\n            return\n        logger.info(f\"[CloudClient] Loading remote config: {config}\")\n\n        action = config.get(\"action\")\n        if action in CHANNEL_ACTIONS:\n            self._dispatch_channel_action(action, config.get(\"data\", {}))\n            return\n\n        if config.get(\"enabled\") != \"Y\":\n            return\n\n        local_config = conf()\n        need_restart_channel = False\n\n        for key in config.keys():\n            if key in available_setting and config.get(key) is not None:\n                local_config[key] = config.get(key)\n\n        # Voice settings\n        reply_voice_mode = config.get(\"reply_voice_mode\")\n        if reply_voice_mode:\n            if reply_voice_mode == \"voice_reply_voice\":\n                local_config[\"voice_reply_voice\"] = True\n                local_config[\"always_reply_voice\"] = False\n            elif reply_voice_mode == \"always_reply_voice\":\n                local_config[\"always_reply_voice\"] = True\n                local_config[\"voice_reply_voice\"] = True\n            elif reply_voice_mode == \"no_reply_voice\":\n                local_config[\"always_reply_voice\"] = False\n                local_config[\"voice_reply_voice\"] = False\n\n        # Model configuration\n        if config.get(\"model\"):\n            local_config[\"model\"] = config.get(\"model\")\n\n        # Channel configuration (legacy single-channel path)\n        if config.get(\"channelType\"):\n            if local_config.get(\"channel_type\") != config.get(\"channelType\"):\n                local_config[\"channel_type\"] = config.get(\"channelType\")\n                need_restart_channel = True\n\n        # Channel-specific app credentials (legacy single-channel path)\n        current_channel_type = local_config.get(\"channel_type\", \"\")\n        if self._set_channel_credentials(local_config, current_channel_type,\n                                         config.get(\"app_id\"), config.get(\"app_secret\")):\n            need_restart_channel = True\n\n        if config.get(\"admin_password\"):\n            if not pconf(\"Godcmd\"):\n                write_plugin_config({\"Godcmd\": {\"password\": config.get(\"admin_password\"), \"admin_users\": []}})\n            
else:\n                pconf(\"Godcmd\")[\"password\"] = config.get(\"admin_password\")\n            PluginManager().instances[\"GODCMD\"].reload()\n\n        if config.get(\"group_app_map\") and pconf(\"linkai\"):\n            local_group_map = {}\n            for mapping in config.get(\"group_app_map\"):\n                local_group_map[mapping.get(\"group_name\")] = mapping.get(\"app_code\")\n            pconf(\"linkai\")[\"group_app_map\"] = local_group_map\n            PluginManager().instances[\"LINKAI\"].reload()\n\n        if config.get(\"text_to_image\") and config.get(\"text_to_image\") == \"midjourney\" and pconf(\"linkai\"):\n            if pconf(\"linkai\")[\"midjourney\"]:\n                pconf(\"linkai\")[\"midjourney\"][\"enabled\"] = True\n                pconf(\"linkai\")[\"midjourney\"][\"use_image_create_prefix\"] = True\n        elif config.get(\"text_to_image\") and config.get(\"text_to_image\") in [\"dall-e-2\", \"dall-e-3\"]:\n            if pconf(\"linkai\")[\"midjourney\"]:\n                pconf(\"linkai\")[\"midjourney\"][\"use_image_create_prefix\"] = False\n\n        self._save_config_to_file(local_config)\n\n        if need_restart_channel:\n            self._restart_channel(local_config.get(\"channel_type\", \"\"))\n\n    # ------------------------------------------------------------------\n    # channel CRUD operations\n    # ------------------------------------------------------------------\n    def _dispatch_channel_action(self, action: str, data: dict):\n        channel_type = data.get(\"channelType\")\n        if not channel_type:\n            logger.warning(f\"[CloudClient] Channel action '{action}' missing channelType, data={data}\")\n            return\n        logger.info(f\"[CloudClient] Channel action: {action}, channelType={channel_type}\")\n\n        if action == \"channel_create\":\n            self._handle_channel_create(channel_type, data)\n        elif action == \"channel_update\":\n            self._handle_channel_update(channel_type, data)\n        elif action == \"channel_delete\":\n            self._handle_channel_delete(channel_type, data)\n\n    def _handle_channel_create(self, channel_type: str, data: dict):\n        local_config = conf()\n        cred_changed = self._set_channel_credentials(\n            local_config, channel_type, data.get(\"appId\"), data.get(\"appSecret\"))\n        self._add_channel_type(local_config, channel_type)\n        self._save_config_to_file(local_config)\n\n        if not self.channel_mgr:\n            return\n\n        existing_ch = self.channel_mgr.get_channel(channel_type)\n        if existing_ch and not cred_changed:\n            logger.info(f\"[CloudClient] Channel '{channel_type}' already running with same config, \"\n                        \"skip restart, reporting status only\")\n            threading.Thread(\n                target=self._report_channel_startup, args=(channel_type,), daemon=True\n            ).start()\n            return\n\n        threading.Thread(\n            target=self._do_add_channel, args=(channel_type,), daemon=True\n        ).start()\n\n    def _handle_channel_update(self, channel_type: str, data: dict):\n        local_config = conf()\n        enabled = data.get(\"enabled\", \"Y\")\n\n        cred_changed = self._set_channel_credentials(\n            local_config, channel_type, data.get(\"appId\"), data.get(\"appSecret\"))\n        if enabled == \"N\":\n            self._remove_channel_type(local_config, channel_type)\n        else:\n            
self._add_channel_type(local_config, channel_type)\n        self._save_config_to_file(local_config)\n\n        if not self.channel_mgr:\n            return\n\n        if enabled == \"N\":\n            threading.Thread(\n                target=self._do_remove_channel, args=(channel_type,), daemon=True\n            ).start()\n        else:\n            existing_ch = self.channel_mgr.get_channel(channel_type)\n            if existing_ch and not cred_changed:\n                logger.info(f\"[CloudClient] Channel '{channel_type}' already running with same config, \"\n                            \"skip restart, reporting status only\")\n                threading.Thread(\n                    target=self._report_channel_startup, args=(channel_type,), daemon=True\n                ).start()\n            else:\n                threading.Thread(\n                    target=self._do_restart_channel, args=(self.channel_mgr, channel_type), daemon=True\n                ).start()\n\n    def _handle_channel_delete(self, channel_type: str, data: dict):\n        local_config = conf()\n        self._clear_channel_credentials(local_config, channel_type)\n        self._remove_channel_type(local_config, channel_type)\n        self._save_config_to_file(local_config)\n\n        if self.channel_mgr:\n            threading.Thread(\n                target=self._do_remove_channel, args=(channel_type,), daemon=True\n            ).start()\n\n    # ------------------------------------------------------------------\n    # channel credentials helpers\n    # ------------------------------------------------------------------\n    @staticmethod\n    def _set_channel_credentials(local_config: dict, channel_type: str,\n                                 app_id, app_secret) -> bool:\n        \"\"\"\n        Write app_id / app_secret into the correct config keys for *channel_type*.\n        Also syncs the values to environment variables (upper-cased key) so that\n        skills that rely on env-based checks (e.g. 
has_env_var) work immediately.\n        Returns True if any value actually changed.\n        \"\"\"\n        cred = CREDENTIAL_MAP.get(channel_type)\n        if not cred:\n            return False\n        id_key, secret_key = cred\n        changed = False\n        if app_id is not None and local_config.get(id_key) != app_id:\n            local_config[id_key] = app_id\n            os.environ[id_key.upper()] = str(app_id)\n            changed = True\n        if app_secret is not None and local_config.get(secret_key) != app_secret:\n            local_config[secret_key] = app_secret\n            os.environ[secret_key.upper()] = str(app_secret)\n            changed = True\n        if changed:\n            logger.info(f\"[CloudClient] Synced {channel_type} credentials to conf and env\")\n        return changed\n\n    @staticmethod\n    def _clear_channel_credentials(local_config: dict, channel_type: str):\n        cred = CREDENTIAL_MAP.get(channel_type)\n        if not cred:\n            return\n        id_key, secret_key = cred\n        local_config.pop(id_key, None)\n        local_config.pop(secret_key, None)\n        os.environ.pop(id_key.upper(), None)\n        os.environ.pop(secret_key.upper(), None)\n\n    # ------------------------------------------------------------------\n    # channel_type list helpers\n    # ------------------------------------------------------------------\n    @staticmethod\n    def _parse_channel_types(local_config: dict) -> list:\n        raw = local_config.get(\"channel_type\", \"\")\n        if isinstance(raw, list):\n            return [ch.strip() for ch in raw if ch.strip()]\n        if isinstance(raw, str):\n            return [ch.strip() for ch in raw.split(\",\") if ch.strip()]\n        return []\n\n    @staticmethod\n    def _add_channel_type(local_config: dict, channel_type: str):\n        types = CloudClient._parse_channel_types(local_config)\n        if channel_type not in types:\n            types.append(channel_type)\n            local_config[\"channel_type\"] = \", \".join(types)\n\n    @staticmethod\n    def _remove_channel_type(local_config: dict, channel_type: str):\n        types = CloudClient._parse_channel_types(local_config)\n        if channel_type in types:\n            types.remove(channel_type)\n            local_config[\"channel_type\"] = \", \".join(types)\n\n    # ------------------------------------------------------------------\n    # channel manager thread helpers\n    # ------------------------------------------------------------------\n    def _do_add_channel(self, channel_type: str):\n        try:\n            self.channel_mgr.add_channel(channel_type)\n            logger.info(f\"[CloudClient] Channel '{channel_type}' added successfully\")\n        except Exception as e:\n            logger.error(f\"[CloudClient] Failed to add channel '{channel_type}': {e}\", exc_info=True)\n            self.send_channel_status(channel_type, \"error\", str(e))\n            return\n        self._report_channel_startup(channel_type)\n\n    def _do_remove_channel(self, channel_type: str):\n        try:\n            self.channel_mgr.remove_channel(channel_type)\n            logger.info(f\"[CloudClient] Channel '{channel_type}' removed successfully\")\n        except Exception as e:\n            logger.error(f\"[CloudClient] Failed to remove channel '{channel_type}': {e}\")\n\n    def _report_channel_startup(self, channel_type: str):\n        \"\"\"Wait for channel startup result and report to cloud.\"\"\"\n        ch = 
self.channel_mgr.get_channel(channel_type)\n        if not ch:\n            self.send_channel_status(channel_type, \"error\", \"channel instance not found\")\n            return\n        success, error = ch.wait_startup(timeout=3)\n        if success:\n            logger.info(f\"[CloudClient] Channel '{channel_type}' connected, reporting status\")\n            self.send_channel_status(channel_type, \"connected\")\n        else:\n            logger.warning(f\"[CloudClient] Channel '{channel_type}' startup failed: {error}\")\n            self.send_channel_status(channel_type, \"error\", error)\n\n    # ------------------------------------------------------------------\n    # skill callback\n    # ------------------------------------------------------------------\n    def on_skill(self, data: dict) -> dict:\n        \"\"\"\n        Handle SKILL messages from the cloud console.\n        Delegates to SkillService.dispatch for the actual operations.\n\n        :param data: message data with 'action', 'clientId', 'payload'\n        :return: response dict\n        \"\"\"\n        action = data.get(\"action\", \"\")\n        payload = data.get(\"payload\")\n        logger.info(f\"[CloudClient] on_skill: action={action}\")\n\n        svc = self.skill_service\n        if svc is None:\n            return {\"action\": action, \"code\": 500, \"message\": \"SkillService not available\", \"payload\": None}\n\n        return svc.dispatch(action, payload)\n\n    # ------------------------------------------------------------------\n    # memory callback\n    # ------------------------------------------------------------------\n    def on_memory(self, data: dict) -> dict:\n        \"\"\"\n        Handle MEMORY messages from the cloud console.\n        Delegates to MemoryService.dispatch for the actual operations.\n\n        :param data: message data with 'action', 'clientId', 'payload'\n        :return: response dict\n        \"\"\"\n        action = data.get(\"action\", \"\")\n        payload = data.get(\"payload\")\n        logger.info(f\"[CloudClient] on_memory: action={action}\")\n\n        svc = self.memory_service\n        if svc is None:\n            return {\"action\": action, \"code\": 500, \"message\": \"MemoryService not available\", \"payload\": None}\n\n        return svc.dispatch(action, payload)\n\n    # ------------------------------------------------------------------\n    # chat callback\n    # ------------------------------------------------------------------\n    def on_chat(self, data: dict, send_chunk_fn):\n        \"\"\"\n        Handle CHAT messages from the cloud console.\n        Runs the agent in streaming mode and sends chunks back via send_chunk_fn.\n\n        :param data: message data with 'action' and 'payload' (query, session_id)\n        :param send_chunk_fn: callable(chunk_data: dict) to send one streaming chunk\n        \"\"\"\n        payload = data.get(\"payload\", {})\n        query = payload.get(\"query\", \"\")\n        session_id = payload.get(\"session_id\", \"cloud_console\")\n        channel_type = payload.get(\"channel_type\", \"\")\n        if not session_id.startswith(\"session_\"):\n            session_id = f\"session_{session_id}\"\n        logger.info(f\"[CloudClient] on_chat: session={session_id}, channel={channel_type}, query={query[:80]}\")\n\n        svc = self.chat_service\n        if svc is None:\n            raise RuntimeError(\"ChatService not available\")\n\n        svc.run(query=query, session_id=session_id, channel_type=channel_type, 
send_chunk_fn=send_chunk_fn)\n\n    # ------------------------------------------------------------------\n    # history callback\n    # ------------------------------------------------------------------\n    def on_history(self, data: dict) -> dict:\n        \"\"\"\n        Handle HISTORY messages from the cloud console.\n        Returns paginated conversation history for a session.\n\n        :param data: message data with 'action' and 'payload' (session_id, page, page_size)\n        :return: response dict\n        \"\"\"\n        action = data.get(\"action\", \"query\")\n        payload = data.get(\"payload\", {})\n        logger.info(f\"[CloudClient] on_history: action={action}\")\n\n        if action == \"query\":\n            return self._query_history(payload)\n\n        return {\"action\": action, \"code\": 404, \"message\": f\"unknown action: {action}\", \"payload\": None}\n\n    def _query_history(self, payload: dict) -> dict:\n        \"\"\"Query paginated conversation history using ConversationStore.\"\"\"\n        session_id = payload.get(\"session_id\", \"\")\n        page = int(payload.get(\"page\", 1))\n        page_size = int(payload.get(\"page_size\", 20))\n\n        if not session_id:\n            return {\n                \"action\": \"query\",\n                \"payload\": {\"status\": \"error\", \"message\": \"session_id required\"},\n            }\n\n        # Web channel stores sessions with a \"session_\" prefix\n        if not session_id.startswith(\"session_\"):\n            session_id = f\"session_{session_id}\"\n        logger.info(f\"[CloudClient] history query: session={session_id}, page={page}, page_size={page_size}\")\n\n        try:\n            from agent.memory.conversation_store import get_conversation_store\n            store = get_conversation_store()\n            result = store.load_history_page(\n                session_id=session_id,\n                page=page,\n                page_size=page_size,\n            )\n            return {\n                \"action\": \"query\",\n                \"payload\": {\"status\": \"success\", **result},\n            }\n        except Exception as e:\n            logger.error(f\"[CloudClient] History query error: {e}\")\n            return {\n                \"action\": \"query\",\n                \"payload\": {\"status\": \"error\", \"message\": str(e)},\n            }\n\n    # ------------------------------------------------------------------\n    # channel restart helpers\n    # ------------------------------------------------------------------\n    def _restart_channel(self, new_channel_type: str):\n        \"\"\"\n        Restart the channel via ChannelManager when channel type changes.\n        \"\"\"\n        if self.channel_mgr:\n            logger.info(f\"[CloudClient] Restarting channel to '{new_channel_type}'...\")\n            threading.Thread(target=self._do_restart_channel, args=(self.channel_mgr, new_channel_type), daemon=True).start()\n        else:\n            logger.warning(\"[CloudClient] ChannelManager not available, please restart the application manually\")\n\n    def _do_restart_channel(self, mgr, new_channel_type: str):\n        \"\"\"\n        Perform the channel restart in a separate thread to avoid blocking the config callback.\n        \"\"\"\n        try:\n            mgr.restart(new_channel_type)\n            if mgr.channel:\n                self.channel = mgr.channel\n                self.client_type = mgr.channel.channel_type\n                logger.info(f\"[CloudClient] Channel 
reference updated to '{new_channel_type}'\")\n        except Exception as e:\n            logger.error(f\"[CloudClient] Channel restart failed: {e}\")\n            self.send_channel_status(new_channel_type, \"error\", str(e))\n            return\n        self._report_channel_startup(new_channel_type)\n\n    # ------------------------------------------------------------------\n    # config persistence\n    # ------------------------------------------------------------------\n    def _save_config_to_file(self, local_config: dict):\n        \"\"\"\n        Save configuration to config.json file.\n        \"\"\"\n        try:\n            config_path = os.path.join(get_root(), \"config.json\")\n            if not os.path.exists(config_path):\n                logger.warning(f\"[CloudClient] config.json not found at {config_path}, skip saving\")\n                return\n\n            with open(config_path, \"r\", encoding=\"utf-8\") as f:\n                file_config = json.load(f)\n\n            file_config.update(dict(local_config))\n\n            with open(config_path, \"w\", encoding=\"utf-8\") as f:\n                json.dump(file_config, f, indent=4, ensure_ascii=False)\n\n            logger.info(\"[CloudClient] Configuration saved to config.json successfully\")\n        except Exception as e:\n            logger.error(f\"[CloudClient] Failed to save configuration to config.json: {e}\")\n\n\ndef get_root_domain(host: str = \"\") -> str:\n    \"\"\"Extract root domain from a hostname.\n\n    If *host* is empty, reads CLOUD_HOST env var / cloud_host config.\n    \"\"\"\n    if not host:\n        host = os.environ.get(\"CLOUD_HOST\") or conf().get(\"cloud_host\", \"\")\n    if not host:\n        return \"\"\n    host = host.strip().rstrip(\"/\")\n    if \"://\" in host:\n        host = host.split(\"://\", 1)[1]\n    host = host.split(\"/\", 1)[0].split(\":\")[0]\n    parts = host.split(\".\")\n    if len(parts) >= 2:\n        return \".\".join(parts[-2:])\n    return host\n\n\ndef get_deployment_id() -> str:\n    \"\"\"Return cloud deployment id from env var or config.\"\"\"\n    return os.environ.get(\"CLOUD_DEPLOYMENT_ID\") or conf().get(\"cloud_deployment_id\", \"\")\n\n\ndef get_website_base_url() -> str:\n    \"\"\"Return the public URL prefix that maps to the workspace websites/ dir.\n\n    Returns empty string when cloud deployment is not configured.\n    \"\"\"\n    deployment_id = get_deployment_id()\n    if not deployment_id:\n        return \"\"\n\n    websites_domain = os.environ.get(\"CLOUD_WEBSITES_DOMAIN\") or conf().get(\"cloud_websites_domain\", \"\")\n    if websites_domain:\n        websites_domain = websites_domain.strip().rstrip(\"/\")\n        return f\"https://{websites_domain}/{deployment_id}\"\n\n    domain = get_root_domain()\n    if not domain:\n        return \"\"\n    return f\"https://app.{domain}/{deployment_id}\"\n\n\ndef build_website_prompt(workspace_dir: str) -> list:\n    \"\"\"Build system prompt lines for cloud website/file sharing rules.\n\n    Returns an empty list when cloud deployment is not configured,\n    so callers can safely do ``lines.extend(build_website_prompt(...))``.\n    \"\"\"\n    base_url = get_website_base_url()\n    if not base_url:\n        return []\n\n    return [\n        \"**文件分享与网页生成规则** (非常重要 — 当前为云部署模式):\",\n        \"\",\n        f\"云端已为工作空间的 `websites/` 目录配置好公网路由映射，访问地址前缀为: `{base_url}`\",\n        \"\",\n        \"1. 
**网页/网站**: 编写网页、H5页面等前端代码时，**必须**将文件放到 `websites/` 目录中\",\n        f\"   - 例如: `websites/index.html` → `{base_url}/index.html`\",\n        f\"   - 例如: `websites/my-app/index.html` → `{base_url}/my-app/index.html`\",\n        \"\",\n        \"2. **生成文件分享** (PPT、PDF、图片、音视频等): 当你为用户生成了需要下载或查看的文件时，**可以**将文件保存到 `websites/` 目录中\",\n        f\"   - 例如: 生成的PPT保存到 `websites/files/report.pptx` → 下载链接为 `{base_url}/files/report.pptx`\",\n        \"   - 你仍然可以同时使用 `send` 工具发送文件（在飞书、钉钉等IM渠道中有效），但**必须同时在回复文本中提供下载链接**作为兜底，因为部分渠道（如网页端）无法通过 send 接收本地文件\",\n        \"\",\n        \"3. **必须发送链接**: 无论是网页还是文件，生成后**必须将完整的访问/下载链接直接写在回复文本中发送给用户**\",\n        \"\",\n        \"4. **文件名和路径尽量使用英文/拼音/数字等**，不要使用中文，避免链接无法访问\",\n        \"\",\n        \"5. 建议为每个独立项目在 `websites/` 下创建子目录，保持结构清晰\",\n        \"\",\n    ]\n\ndef start(channel, channel_mgr=None):\n    if not get_deployment_id():\n        return\n\n    global chat_client\n    chat_client = CloudClient(api_key=conf().get(\"linkai_api_key\"), host=conf().get(\"cloud_host\", \"\"), channel=channel)\n    chat_client.channel_mgr = channel_mgr\n    chat_client.config = _build_config()\n    chat_client.start()\n    time.sleep(1.5)\n    if chat_client.client_id:\n        logger.info(\"[CloudClient] Console: https://link-ai.tech/console/clients\")\n        if channel_mgr:\n            channel_mgr.cloud_mode = True\n            threading.Thread(target=_report_existing_channels, args=(chat_client, channel_mgr), daemon=True).start()\n\n\ndef _report_existing_channels(client: CloudClient, mgr):\n    \"\"\"Report status for all channels that were started before cloud client connected.\"\"\"\n    try:\n        for name, ch in list(mgr._channels.items()):\n            if name == \"web\":\n                continue\n            ch.cloud_mode = True\n            client._report_channel_startup(name)\n    except Exception as e:\n        logger.warning(f\"[CloudClient] Failed to report existing channel status: {e}\")\n\n\ndef _build_config():\n    local_conf = conf()\n    config = {\n        \"linkai_app_code\": local_conf.get(\"linkai_app_code\"),\n        \"single_chat_prefix\": local_conf.get(\"single_chat_prefix\"),\n        \"single_chat_reply_prefix\": local_conf.get(\"single_chat_reply_prefix\"),\n        \"single_chat_reply_suffix\": local_conf.get(\"single_chat_reply_suffix\"),\n        \"group_chat_prefix\": local_conf.get(\"group_chat_prefix\"),\n        \"group_chat_reply_prefix\": local_conf.get(\"group_chat_reply_prefix\"),\n        \"group_chat_reply_suffix\": local_conf.get(\"group_chat_reply_suffix\"),\n        \"group_name_white_list\": local_conf.get(\"group_name_white_list\"),\n        \"nick_name_black_list\": local_conf.get(\"nick_name_black_list\"),\n        \"speech_recognition\": \"Y\" if local_conf.get(\"speech_recognition\") else \"N\",\n        \"text_to_image\": local_conf.get(\"text_to_image\"),\n        \"image_create_prefix\": local_conf.get(\"image_create_prefix\"),\n        \"model\": local_conf.get(\"model\"),\n        \"agent_max_context_turns\": local_conf.get(\"agent_max_context_turns\"),\n        \"agent_max_context_tokens\": local_conf.get(\"agent_max_context_tokens\"),\n        \"agent_max_steps\": local_conf.get(\"agent_max_steps\"),\n        \"channelType\": local_conf.get(\"channel_type\"),\n    }\n\n    if local_conf.get(\"always_reply_voice\"):\n        config[\"reply_voice_mode\"] = \"always_reply_voice\"\n    elif local_conf.get(\"voice_reply_voice\"):\n        config[\"reply_voice_mode\"] = \"voice_reply_voice\"\n\n    if 
pconf(\"linkai\"):\n        config[\"group_app_map\"] = pconf(\"linkai\").get(\"group_app_map\")\n\n    if plugin_config.get(\"Godcmd\"):\n        config[\"admin_password\"] = plugin_config.get(\"Godcmd\").get(\"password\")\n\n    # Add channel-specific app credentials\n    current_channel_type = local_conf.get(\"channel_type\", \"\")\n    if current_channel_type == \"feishu\":\n        config[\"app_id\"] = local_conf.get(\"feishu_app_id\")\n        config[\"app_secret\"] = local_conf.get(\"feishu_app_secret\")\n    elif current_channel_type == \"dingtalk\":\n        config[\"app_id\"] = local_conf.get(\"dingtalk_client_id\")\n        config[\"app_secret\"] = local_conf.get(\"dingtalk_client_secret\")\n    elif current_channel_type in (\"wechatmp\", \"wechatmp_service\"):\n        config[\"app_id\"] = local_conf.get(\"wechatmp_app_id\")\n        config[\"app_secret\"] = local_conf.get(\"wechatmp_app_secret\")\n    elif current_channel_type == \"wecom_bot\":\n        config[\"app_id\"] = local_conf.get(\"wecom_bot_id\")\n        config[\"app_secret\"] = local_conf.get(\"wecom_bot_secret\")\n    elif current_channel_type == \"qq\":\n        config[\"app_id\"] = local_conf.get(\"qq_app_id\")\n        config[\"app_secret\"] = local_conf.get(\"qq_app_secret\")\n    elif current_channel_type == \"wechatcom_app\":\n        config[\"app_id\"] = local_conf.get(\"wechatcomapp_agent_id\")\n        config[\"app_secret\"] = local_conf.get(\"wechatcomapp_secret\")\n\n    return config\n"
  },
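`_build_config` at the end of this file re-implements the channelType-to-credential mapping as an if/elif chain, even though `CREDENTIAL_MAP` already encodes the same pairs for `_set_channel_credentials`. A table-driven sketch of what that tail could reuse (an illustrative refactor, not the shipped code; `credential_fields` is a hypothetical helper):

```python
from common.cloud_client import CREDENTIAL_MAP


def credential_fields(local_conf: dict, channel_type: str) -> dict:
    """Return the app_id/app_secret pair for a channel type, or {} if unmapped."""
    cred = CREDENTIAL_MAP.get(channel_type)
    if not cred:
        return {}
    id_key, secret_key = cred
    return {"app_id": local_conf.get(id_key), "app_secret": local_conf.get(secret_key)}


# e.g. inside _build_config:
#   config.update(credential_fields(local_conf, local_conf.get("channel_type", "")))
```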
  {
    "path": "common/const.py",
    "content": "# 厂商类型\nOPEN_AI = \"openAI\"\nOPENAI = \"openai\"\nCHATGPT = \"chatGPT\"  # legacy alias for OPENAI, kept for backward compatibility\nBAIDU = \"baidu\"\nXUNFEI = \"xunfei\"\nCHATGPTONAZURE = \"chatGPTOnAzure\"\nLINKAI = \"linkai\"\nCLAUDEAPI= \"claudeAPI\"\nQWEN = \"qwen\"  # 旧版千问接入\nQWEN_DASHSCOPE = \"dashscope\"  # 新版千问接入(百炼)\nGEMINI = \"gemini\" \nZHIPU_AI = \"zhipu\"  \nMOONSHOT = \"moonshot\"\nMiniMax = \"minimax\"\nDEEPSEEK = \"deepseek\"\nMODELSCOPE = \"modelscope\"\n\n# 模型列表\n# Claude (Anthropic)\nCLAUDE3 = \"claude-3-opus-20240229\"\nCLAUDE_3_OPUS = \"claude-3-opus-latest\"\nCLAUDE_3_OPUS_0229 = \"claude-3-opus-20240229\"\nCLAUDE_3_SONNET = \"claude-3-sonnet-20240229\"\nCLAUDE_3_HAIKU = \"claude-3-haiku-20240307\"\nCLAUDE_35_SONNET = \"claude-3-5-sonnet-latest\"  # 带 latest 标签的模型名称，会不断更新指向最新发布的模型\nCLAUDE_35_SONNET_1022 = \"claude-3-5-sonnet-20241022\"  # 带具体日期的模型名称，会固定为该日期发布的模型\nCLAUDE_35_SONNET_0620 = \"claude-3-5-sonnet-20240620\"\nCLAUDE_4_OPUS = \"claude-opus-4-0\"\nCLAUDE_4_6_OPUS = \"claude-opus-4-6\"      # Claude Opus 4.6 - Agent推荐模型\nCLAUDE_4_SONNET = \"claude-sonnet-4-0\"    # Claude Sonnet 4.0\nCLAUDE_4_5_SONNET = \"claude-sonnet-4-5\"  # Claude Sonnet 4.5 - Agent推荐模型\nCLAUDE_4_6_SONNET = \"claude-sonnet-4-6\"  # Claude Sonnet 4.6 - Agent推荐模型\n\n# Gemini (Google)\nGEMINI_PRO = \"gemini-1.0-pro\"\nGEMINI_15_flash = \"gemini-1.5-flash\"\nGEMINI_15_PRO = \"gemini-1.5-pro\"\nGEMINI_20_flash_exp = \"gemini-2.0-flash-exp\"  # exp结尾为实验模型，会逐步不再支持\nGEMINI_20_FLASH = \"gemini-2.0-flash\"  # 正式版模型\nGEMINI_25_FLASH_PRE = \"gemini-2.5-flash-preview-05-20\"\nGEMINI_25_PRO_PRE = \"gemini-2.5-pro-preview-05-06\"\nGEMINI_3_FLASH_PRE = \"gemini-3-flash-preview\"  # Gemini 3 Flash Preview - Agent推荐模型\nGEMINI_3_PRO_PRE = \"gemini-3-pro-preview\"  # Gemini 3 Pro Preview\nGEMINI_31_PRO_PRE = \"gemini-3.1-pro-preview\"  # Gemini 3.1 Pro Preview - Agent推荐模型\nGEMINI_31_FLASH_LITE_PRE = \"gemini-3.1-flash-lite-preview\"  # Gemini 3.1 Flash Lite Preview - Agent推荐模型\n\n# OpenAI\nGPT35 = \"gpt-3.5-turbo\"\nGPT35_0125 = \"gpt-3.5-turbo-0125\"\nGPT35_1106 = \"gpt-3.5-turbo-1106\"\nGPT4 = \"gpt-4\"\nGPT4_06_13 = \"gpt-4-0613\"\nGPT4_32k = \"gpt-4-32k\"\nGPT4_32k_06_13 = \"gpt-4-32k-0613\"\nGPT4_TURBO = \"gpt-4-turbo\"\nGPT4_TURBO_PREVIEW = \"gpt-4-turbo-preview\"\nGPT4_TURBO_01_25 = \"gpt-4-0125-preview\"\nGPT4_TURBO_11_06 = \"gpt-4-1106-preview\"\nGPT4_TURBO_04_09 = \"gpt-4-turbo-2024-04-09\"\nGPT4_VISION_PREVIEW = \"gpt-4-vision-preview\"\nGPT_4o = \"gpt-4o\"\nGPT_4O_0806 = \"gpt-4o-2024-08-06\"\nGPT_4o_MINI = \"gpt-4o-mini\"\nGPT_41 = \"gpt-4.1\"\nGPT_41_MINI = \"gpt-4.1-mini\"\nGPT_41_NANO = \"gpt-4.1-nano\"\nGPT_5 = \"gpt-5\"\nGPT_5_MINI = \"gpt-5-mini\"\nGPT_5_NANO = \"gpt-5-nano\"\nGPT_54 = \"gpt-5.4\"  # GPT-5.4 - Agent recommended model\nGPT_54_MINI = \"gpt-5.4-mini\"\nGPT_54_NANO = \"gpt-5.4-nano\"\nO1 = \"o1-preview\"\nO1_MINI = \"o1-mini\"\nWHISPER_1 = \"whisper-1\"\nTTS_1 = \"tts-1\"\nTTS_1_HD = \"tts-1-hd\"\n\n# DeepSeek\nDEEPSEEK_CHAT = \"deepseek-chat\"  # DeepSeek-V3对话模型\nDEEPSEEK_REASONER = \"deepseek-reasoner\"  # DeepSeek-R1模型\n\n# Qwen (通义千问 - 阿里云)\nQWEN = \"qwen\"\nQWEN_TURBO = \"qwen-turbo\"\nQWEN_PLUS = \"qwen-plus\"\nQWEN_MAX = \"qwen-max\"\nQWEN_LONG = \"qwen-long\"\nQWEN3_MAX = \"qwen3-max\"  # Qwen3 Max - Agent推荐模型\nQWEN35_PLUS = \"qwen3.5-plus\"  # Qwen3.5 Plus - Omni model (MultiModalConversation)\nQWQ_PLUS = \"qwq-plus\"\n\n# MiniMax\nMINIMAX_M2_7 = \"MiniMax-M2.7\"  # MiniMax M2.7 - Latest\nMINIMAX_M2_5 = \"MiniMax-M2.5\"  # MiniMax M2.5\nMINIMAX_M2_1 = 
\"MiniMax-M2.1\"  # MiniMax M2.1\nMINIMAX_M2_1_LIGHTNING = \"MiniMax-M2.1-lightning\"  # MiniMax M2.1 极速版\nMINIMAX_M2 = \"MiniMax-M2\"  # MiniMax M2\nMINIMAX_ABAB6_5 = \"abab6.5-chat\"  # MiniMax abab6.5\n\n# GLM (智谱AI)\nGLM_5_TURBO = \"glm-5-turbo\"  # 智谱 GLM-5-Turbo - Latest\nGLM_5 = \"glm-5\"  # 智谱 GLM-5\nGLM_4 = \"glm-4\"\nGLM_4_PLUS = \"glm-4-plus\"\nGLM_4_flash = \"glm-4-flash\"\nGLM_4_LONG = \"glm-4-long\"\nGLM_4_ALLTOOLS = \"glm-4-alltools\"\nGLM_4_0520 = \"glm-4-0520\"\nGLM_4_AIR = \"glm-4-air\"\nGLM_4_AIRX = \"glm-4-airx\"\nGLM_4_7 = \"glm-4.7\"  # 智谱 GLM-4.7 - Agent推荐模型\n\n# Kimi (Moonshot)\nMOONSHOT = \"moonshot\"\nKIMI_K2 = \"kimi-k2\"\nKIMI_K2_5 = \"kimi-k2.5\"\n\n# Doubao (Volcengine Ark)\nDOUBAO = \"doubao\"\nDOUBAO_SEED_2_CODE = \"doubao-seed-2-0-code-preview-260215\"\nDOUBAO_SEED_2_PRO = \"doubao-seed-2-0-pro-260215\"\nDOUBAO_SEED_2_LITE = \"doubao-seed-2-0-lite-260215\"\nDOUBAO_SEED_2_MINI = \"doubao-seed-2-0-mini-260215\"\n\n# 其他模型\nWEN_XIN = \"wenxin\"\nWEN_XIN_4 = \"wenxin-4\"\nXUNFEI = \"xunfei\"\nLINKAI_35 = \"linkai-3.5\"\nLINKAI_4_TURBO = \"linkai-4-turbo\"\nLINKAI_4o = \"linkai-4o\"\nMODELSCOPE = \"modelscope\"\n\nGITEE_AI_MODEL_LIST = [\"Yi-34B-Chat\", \"InternVL2-8B\", \"deepseek-coder-33B-instruct\", \"InternVL2.5-26B\", \"Qwen2-VL-72B\", \"Qwen2.5-32B-Instruct\", \"glm-4-9b-chat\", \"codegeex4-all-9b\", \"Qwen2.5-Coder-32B-Instruct\", \"Qwen2.5-72B-Instruct\", \"Qwen2.5-7B-Instruct\", \"Qwen2-72B-Instruct\", \"Qwen2-7B-Instruct\", \"code-raccoon-v1\", \"Qwen2.5-14B-Instruct\"]\n\nMODELSCOPE_MODEL_LIST = [\"LLM-Research/c4ai-command-r-plus-08-2024\",\"mistralai/Mistral-Small-Instruct-2409\",\"mistralai/Ministral-8B-Instruct-2410\",\"mistralai/Mistral-Large-Instruct-2407\",\n                          \"Qwen/Qwen2.5-Coder-32B-Instruct\",\"Qwen/Qwen2.5-Coder-14B-Instruct\",\"Qwen/Qwen2.5-Coder-7B-Instruct\",\"Qwen/Qwen2.5-72B-Instruct\",\"Qwen/Qwen2.5-32B-Instruct\",\"Qwen/Qwen2.5-14B-Instruct\",\"Qwen/Qwen2.5-7B-Instruct\",\"Qwen/QwQ-32B-Preview\",\n                          \"LLM-Research/Llama-3.3-70B-Instruct\",\"opencompass/CompassJudger-1-32B-Instruct\",\"Qwen/QVQ-72B-Preview\",\"LLM-Research/Meta-Llama-3.1-405B-Instruct\",\"LLM-Research/Meta-Llama-3.1-8B-Instruct\",\"Qwen/Qwen2-VL-7B-Instruct\",\"LLM-Research/Meta-Llama-3.1-70B-Instruct\",\n                          \"Qwen/Qwen2.5-14B-Instruct-1M\",\"Qwen/Qwen2.5-7B-Instruct-1M\",\"Qwen/Qwen2.5-VL-3B-Instruct\",\"Qwen/Qwen2.5-VL-7B-Instruct\",\"Qwen/Qwen2.5-VL-72B-Instruct\",\"deepseek-ai/DeepSeek-R1-Distill-Llama-70B\",\"deepseek-ai/DeepSeek-R1-Distill-Llama-8B\",\"deepseek-ai/DeepSeek-R1-Distill-Qwen-32B\",\n                          \"deepseek-ai/DeepSeek-R1-Distill-Qwen-14B\",\"deepseek-ai/DeepSeek-R1-Distill-Qwen-7B\",\"deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B\",\"deepseek-ai/DeepSeek-R1\",\"deepseek-ai/DeepSeek-V3\",\"Qwen/QwQ-32B\"]\n\nMODEL_LIST = [\n              # Claude\n              CLAUDE3, CLAUDE_4_6_SONNET, CLAUDE_4_6_OPUS, CLAUDE_4_OPUS, CLAUDE_4_5_SONNET, CLAUDE_4_SONNET, CLAUDE_3_OPUS, CLAUDE_3_OPUS_0229, \n              CLAUDE_35_SONNET, CLAUDE_35_SONNET_1022, CLAUDE_35_SONNET_0620, CLAUDE_3_SONNET, CLAUDE_3_HAIKU, \n              \"claude\", \"claude-3-haiku\", \"claude-3-sonnet\", \"claude-3-opus\", \"claude-3.5-sonnet\",\n              \n              # Gemini\n              GEMINI_31_FLASH_LITE_PRE, GEMINI_31_PRO_PRE, GEMINI_3_PRO_PRE, GEMINI_3_FLASH_PRE, GEMINI_25_PRO_PRE, GEMINI_25_FLASH_PRE,\n              GEMINI_20_FLASH, GEMINI_20_flash_exp, GEMINI_15_PRO, GEMINI_15_flash, 
GEMINI_PRO, GEMINI,\n              \n              # OpenAI\n              GPT35, GPT35_0125, GPT35_1106, \"gpt-3.5-turbo-16k\",\n              GPT4, GPT4_06_13, GPT4_32k, GPT4_32k_06_13,\n              GPT4_TURBO, GPT4_TURBO_PREVIEW, GPT4_TURBO_01_25, GPT4_TURBO_11_06, GPT4_TURBO_04_09,\n              GPT_4o, GPT_4O_0806, GPT_4o_MINI,\n              GPT_41, GPT_41_MINI, GPT_41_NANO,\n              GPT_5, GPT_5_MINI, GPT_5_NANO,\n              GPT_54, GPT_54_MINI, GPT_54_NANO,\n              O1, O1_MINI,\n              \n              # DeepSeek\n              DEEPSEEK_CHAT, DEEPSEEK_REASONER,\n              \n              # Qwen\n              QWEN, QWEN_TURBO, QWEN_PLUS, QWEN_MAX, QWEN_LONG, QWEN3_MAX, QWEN35_PLUS,\n              \n              # MiniMax\n              MiniMax, MINIMAX_M2_7, MINIMAX_M2_5, MINIMAX_M2_1, MINIMAX_M2_1_LIGHTNING, MINIMAX_M2, MINIMAX_ABAB6_5,\n\n              # GLM\n              ZHIPU_AI, GLM_5_TURBO, GLM_5, GLM_4, GLM_4_PLUS, GLM_4_flash, GLM_4_LONG, GLM_4_ALLTOOLS,\n              GLM_4_0520, GLM_4_AIR, GLM_4_AIRX, GLM_4_7,\n\n              # Kimi\n              MOONSHOT, \"moonshot-v1-8k\", \"moonshot-v1-32k\", \"moonshot-v1-128k\",\n              KIMI_K2, KIMI_K2_5,\n\n              # Doubao\n              DOUBAO, DOUBAO_SEED_2_CODE, DOUBAO_SEED_2_PRO, DOUBAO_SEED_2_LITE, DOUBAO_SEED_2_MINI,\n\n              # 其他模型\n              WEN_XIN, WEN_XIN_4, XUNFEI,\n              LINKAI_35, LINKAI_4_TURBO, LINKAI_4o,\n              MODELSCOPE\n            ]\n\nMODEL_LIST = MODEL_LIST + GITEE_AI_MODEL_LIST + MODELSCOPE_MODEL_LIST\n# channel\nFEISHU = \"feishu\"\nDINGTALK = \"dingtalk\"\nWECOM_BOT = \"wecom_bot\"\nQQ = \"qq\"\n"
  },
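const.py marks `CHATGPT` as a legacy alias for `OPENAI`, and config.py notes the historical `bot_type` value is still accepted. A sketch of how a caller might normalize the alias (an illustrative helper, not the project's actual resolution code):

```python
from common import const


def normalize_bot_type(bot_type: str) -> str:
    # "chatGPT" is the documented legacy spelling of the "openai" bot type.
    return const.OPENAI if bot_type == const.CHATGPT else bot_type


assert normalize_bot_type("chatGPT") == "openai"
assert normalize_bot_type("deepseek") == "deepseek"
```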
  {
    "path": "common/dequeue.py",
    "content": "from queue import Full, Queue\nfrom time import monotonic as time\n\n\n# add implementation of putleft to Queue\nclass Dequeue(Queue):\n    def putleft(self, item, block=True, timeout=None):\n        with self.not_full:\n            if self.maxsize > 0:\n                if not block:\n                    if self._qsize() >= self.maxsize:\n                        raise Full\n                elif timeout is None:\n                    while self._qsize() >= self.maxsize:\n                        self.not_full.wait()\n                elif timeout < 0:\n                    raise ValueError(\"'timeout' must be a non-negative number\")\n                else:\n                    endtime = time() + timeout\n                    while self._qsize() >= self.maxsize:\n                        remaining = endtime - time()\n                        if remaining <= 0.0:\n                            raise Full\n                        self.not_full.wait(remaining)\n            self._putleft(item)\n            self.unfinished_tasks += 1\n            self.not_empty.notify()\n\n    def putleft_nowait(self, item):\n        return self.putleft(item, block=False)\n\n    def _putleft(self, item):\n        self.queue.appendleft(item)\n"
  },
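A usage sketch for `Dequeue`: it behaves exactly like `queue.Queue`, except that `putleft()` (blocking, with the same Full/timeout semantics as `put()`) and `putleft_nowait()` enqueue at the head, letting a caller re-queue an item ahead of pending work. The task names below are illustrative.

```python
from common.dequeue import Dequeue

q = Dequeue()            # unbounded, like queue.Queue()
q.put("task-1")
q.put("task-2")
q.putleft("urgent")      # jumps ahead of everything already queued

assert q.get() == "urgent"
assert q.get() == "task-1"
assert q.get() == "task-2"
```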
  {
    "path": "common/expired_dict.py",
    "content": "from datetime import datetime, timedelta\n\n\nclass ExpiredDict(dict):\n    def __init__(self, expires_in_seconds):\n        super().__init__()\n        self.expires_in_seconds = expires_in_seconds\n\n    def __getitem__(self, key):\n        value, expiry_time = super().__getitem__(key)\n        if datetime.now() > expiry_time:\n            del self[key]\n            raise KeyError(\"expired {}\".format(key))\n        self.__setitem__(key, value)\n        return value\n\n    def __setitem__(self, key, value):\n        expiry_time = datetime.now() + timedelta(seconds=self.expires_in_seconds)\n        super().__setitem__(key, (value, expiry_time))\n\n    def get(self, key, default=None):\n        try:\n            return self[key]\n        except KeyError:\n            return default\n\n    def __contains__(self, key):\n        try:\n            self[key]\n            return True\n        except KeyError:\n            return False\n\n    def keys(self):\n        keys = list(super().keys())\n        return [key for key in keys if key in self]\n\n    def items(self):\n        return [(key, self[key]) for key in self.keys()]\n\n    def __iter__(self):\n        return self.keys().__iter__()\n"
  },
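A usage sketch for `ExpiredDict`. One subtlety worth knowing: `__getitem__` re-inserts the value it returns, so every successful read (including `in` checks) resets the TTL; entries expire a fixed time after the last access, not after the last write.

```python
import time

from common.expired_dict import ExpiredDict

cache = ExpiredDict(expires_in_seconds=1)
cache["k"] = "v"
assert "k" in cache            # membership check also refreshes the TTL
time.sleep(1.1)
assert cache.get("k") is None  # expired entries vanish on access
```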
  {
    "path": "common/log.py",
    "content": "import logging\nimport sys\n\n\ndef _reset_logger(log):\n    for handler in log.handlers:\n        handler.close()\n        log.removeHandler(handler)\n        del handler\n    log.handlers.clear()\n    log.propagate = False\n    console_handle = logging.StreamHandler(sys.stdout)\n    console_handle.setFormatter(\n        logging.Formatter(\n            \"[%(levelname)s][%(asctime)s][%(filename)s:%(lineno)d] - %(message)s\",\n            datefmt=\"%Y-%m-%d %H:%M:%S\",\n        )\n    )\n    file_handle = logging.FileHandler(\"run.log\", encoding=\"utf-8\")\n    file_handle.setFormatter(\n        logging.Formatter(\n            \"[%(levelname)s][%(asctime)s][%(filename)s:%(lineno)d] - %(message)s\",\n            datefmt=\"%Y-%m-%d %H:%M:%S\",\n        )\n    )\n    log.addHandler(file_handle)\n    log.addHandler(console_handle)\n\n\ndef _get_logger():\n    log = logging.getLogger(\"log\")\n    _reset_logger(log)\n    log.setLevel(logging.INFO)\n    return log\n\n\n# 日志句柄\nlogger = _get_logger()\n"
  },
  {
    "path": "common/memory.py",
    "content": "from common.expired_dict import ExpiredDict\n\nUSER_IMAGE_CACHE = ExpiredDict(60 * 3)"
  },
  {
    "path": "common/package_manager.py",
    "content": "import time\n\nimport pip\nfrom pip._internal import main as pipmain\n\nfrom common.log import _reset_logger, logger\n\n\ndef install(package):\n    pipmain([\"install\", package])\n\n\ndef install_requirements(file):\n    pipmain([\"install\", \"-r\", file, \"--upgrade\"])\n    _reset_logger(logger)\n\n\ndef check_dulwich():\n    needwait = False\n    for i in range(2):\n        if needwait:\n            time.sleep(3)\n            needwait = False\n        try:\n            import dulwich\n\n            return\n        except ImportError:\n            try:\n                install(\"dulwich\")\n            except Exception:\n                needwait = True\n    try:\n        import dulwich\n    except ImportError:\n        raise ImportError(\"Unable to import dulwich\")\n"
  },
  {
    "path": "common/singleton.py",
    "content": "def singleton(cls):\n    instances = {}\n\n    def get_instance(*args, **kwargs):\n        if cls not in instances:\n            instances[cls] = cls(*args, **kwargs)\n        return instances[cls]\n\n    return get_instance\n"
  },
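A usage sketch for the `@singleton` decorator. Two properties of this implementation are easy to miss: constructor arguments are honored only on the first call, and creation is unlocked, so two threads racing on the very first call could each construct an instance.

```python
from common.singleton import singleton


@singleton
class Registry:
    def __init__(self, name="default"):
        self.name = name


a = Registry(name="first")
b = Registry(name="ignored")   # returns the cached instance; __init__ is not re-run
assert a is b and a.name == "first"
```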
  {
    "path": "common/sorted_dict.py",
    "content": "import heapq\n\n\nclass SortedDict(dict):\n    def __init__(self, sort_func=lambda k, v: k, init_dict=None, reverse=False):\n        if init_dict is None:\n            init_dict = []\n        if isinstance(init_dict, dict):\n            init_dict = init_dict.items()\n        self.sort_func = sort_func\n        self.sorted_keys = None\n        self.reverse = reverse\n        self.heap = []\n        for k, v in init_dict:\n            self[k] = v\n\n    def __setitem__(self, key, value):\n        if key in self:\n            super().__setitem__(key, value)\n            for i, (priority, k) in enumerate(self.heap):\n                if k == key:\n                    self.heap[i] = (self.sort_func(key, value), key)\n                    heapq.heapify(self.heap)\n                    break\n            self.sorted_keys = None\n        else:\n            super().__setitem__(key, value)\n            heapq.heappush(self.heap, (self.sort_func(key, value), key))\n            self.sorted_keys = None\n\n    def __delitem__(self, key):\n        super().__delitem__(key)\n        for i, (priority, k) in enumerate(self.heap):\n            if k == key:\n                del self.heap[i]\n                heapq.heapify(self.heap)\n                break\n        self.sorted_keys = None\n\n    def keys(self):\n        if self.sorted_keys is None:\n            self.sorted_keys = [k for _, k in sorted(self.heap, reverse=self.reverse)]\n        return self.sorted_keys\n\n    def items(self):\n        if self.sorted_keys is None:\n            self.sorted_keys = [k for _, k in sorted(self.heap, reverse=self.reverse)]\n        sorted_items = [(k, self[k]) for k in self.sorted_keys]\n        return sorted_items\n\n    def _update_heap(self, key):\n        for i, (priority, k) in enumerate(self.heap):\n            if k == key:\n                new_priority = self.sort_func(key, self[key])\n                if new_priority != priority:\n                    self.heap[i] = (new_priority, key)\n                    heapq.heapify(self.heap)\n                    self.sorted_keys = None\n                break\n\n    def __iter__(self):\n        return iter(self.keys())\n\n    def __repr__(self):\n        return f\"{type(self).__name__}({dict(self)}, sort_func={self.sort_func.__name__}, reverse={self.reverse})\"\n"
  },
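A usage sketch for `SortedDict`: iteration order is defined by `sort_func` over (key, value) pairs, maintained in a heap with a lazily rebuilt sorted-key cache, so writes stay cheap and the sorting cost is paid on the next read.

```python
from common.sorted_dict import SortedDict

# Order entries by descending value (illustrative data).
d = SortedDict(sort_func=lambda k, v: v, reverse=True)
d["low"] = 1
d["high"] = 10
d["mid"] = 5
assert list(d.keys()) == ["high", "mid", "low"]

d["low"] = 99                  # value update re-sorts on the next read
assert list(d)[0] == "low"
```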
  {
    "path": "common/time_check.py",
    "content": "import re\nimport time\nimport config\nfrom common.log import logger\n\n\ndef time_checker(f):\n    def _time_checker(self, *args, **kwargs):\n        _config = config.conf()\n        chat_time_module = _config.get(\"chat_time_module\", False)\n\n        if chat_time_module:\n            chat_start_time = _config.get(\"chat_start_time\", \"00:00\")\n            chat_stop_time = _config.get(\"chat_stop_time\", \"24:00\")\n\n            time_regex = re.compile(r\"^([01]?[0-9]|2[0-4])(:)([0-5][0-9])$\")\n\n            if not (time_regex.match(chat_start_time) and time_regex.match(chat_stop_time)):\n                logger.warning(\"时间格式不正确，请在config.json中修改CHAT_START_TIME/CHAT_STOP_TIME。\")\n                return None\n\n            now_time = time.strptime(time.strftime(\"%H:%M\"), \"%H:%M\")\n            chat_start_time = time.strptime(chat_start_time, \"%H:%M\")\n            chat_stop_time = time.strptime(chat_stop_time, \"%H:%M\")\n            # 结束时间小于开始时间，跨天了\n            if chat_stop_time < chat_start_time and (chat_start_time <= now_time or now_time <= chat_stop_time):\n                f(self, *args, **kwargs)\n            # 结束大于开始时间代表，没有跨天\n            elif chat_start_time < chat_stop_time and chat_start_time <= now_time <= chat_stop_time:\n                f(self, *args, **kwargs)\n            else:\n                # 定义匹配规则，如果以 #reconf 或者  #更新配置  结尾, 非服务时间可以修改开始/结束时间并重载配置\n                pattern = re.compile(r\"^.*#(?:reconf|更新配置)$\")\n                if args and pattern.match(args[0].content):\n                    f(self, *args, **kwargs)\n                else:\n                    logger.info(\"非服务时间内，不接受访问\")\n                    return None\n        else:\n            f(self, *args, **kwargs)  # 未开启时间模块则直接回答\n\n    return _time_checker\n"
  },
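The service-window logic in `time_checker` reduces to one predicate: a stop time earlier than the start time means the window crosses midnight. A standalone sketch of that predicate (a hypothetical helper, extracted here only for illustration):

```python
import time


def in_service_window(now: str, start: str, stop: str) -> bool:
    n, s, e = (time.strptime(t, "%H:%M") for t in (now, start, stop))
    if e < s:                  # e.g. 22:00 -> 06:00 crosses midnight
        return s <= n or n <= e
    return s <= n <= e


assert in_service_window("12:30", "08:00", "23:00")
assert in_service_window("01:00", "22:00", "06:00")
assert not in_service_window("12:00", "22:00", "06:00")
```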
  {
    "path": "common/tmp_dir.py",
    "content": "import os\nimport pathlib\n\nfrom config import conf\n\n\nclass TmpDir(object):\n    \"\"\"A temporary directory that is deleted when the object is destroyed.\"\"\"\n\n    tmpFilePath = pathlib.Path(\"./tmp/\")\n\n    def __init__(self):\n        pathExists = os.path.exists(self.tmpFilePath)\n        if not pathExists:\n            os.makedirs(self.tmpFilePath)\n\n    def path(self):\n        return str(self.tmpFilePath) + \"/\"\n"
  },
  {
    "path": "common/token_bucket.py",
    "content": "import threading\nimport time\n\n\nclass TokenBucket:\n    def __init__(self, tpm, timeout=None):\n        self.capacity = int(tpm)  # 令牌桶容量\n        self.tokens = 0  # 初始令牌数为0\n        self.rate = int(tpm) / 60  # 令牌每秒生成速率\n        self.timeout = timeout  # 等待令牌超时时间\n        self.cond = threading.Condition()  # 条件变量\n        self.is_running = True\n        # 开启令牌生成线程\n        threading.Thread(target=self._generate_tokens).start()\n\n    def _generate_tokens(self):\n        \"\"\"生成令牌\"\"\"\n        while self.is_running:\n            with self.cond:\n                if self.tokens < self.capacity:\n                    self.tokens += 1\n                self.cond.notify()  # 通知获取令牌的线程\n            time.sleep(1 / self.rate)\n\n    def get_token(self):\n        \"\"\"获取令牌\"\"\"\n        with self.cond:\n            while self.tokens <= 0:\n                flag = self.cond.wait(self.timeout)\n                if not flag:  # 超时\n                    return False\n            self.tokens -= 1\n        return True\n\n    def close(self):\n        self.is_running = False\n\n\nif __name__ == \"__main__\":\n    token_bucket = TokenBucket(20, None)  # 创建一个每分钟生产20个tokens的令牌桶\n    # token_bucket = TokenBucket(20, 0.1)\n    for i in range(3):\n        if token_bucket.get_token():\n            print(f\"第{i+1}次请求成功\")\n    token_bucket.close()\n"
  },
  {
    "path": "common/utils.py",
    "content": "import io\nimport os\nimport re\nfrom urllib.parse import urlparse\nfrom common.log import logger\n\ndef fsize(file):\n    if isinstance(file, io.BytesIO):\n        return file.getbuffer().nbytes\n    elif isinstance(file, str):\n        return os.path.getsize(file)\n    elif hasattr(file, \"seek\") and hasattr(file, \"tell\"):\n        pos = file.tell()\n        file.seek(0, os.SEEK_END)\n        size = file.tell()\n        file.seek(pos)\n        return size\n    else:\n        raise TypeError(\"Unsupported type\")\n\n\ndef compress_imgfile(file, max_size):\n    if fsize(file) <= max_size:\n        return file\n    from PIL import Image\n    file.seek(0)\n    img = Image.open(file)\n    rgb_image = img.convert(\"RGB\")\n    quality = 95\n    while True:\n        out_buf = io.BytesIO()\n        rgb_image.save(out_buf, \"JPEG\", quality=quality)\n        if fsize(out_buf) <= max_size:\n            return out_buf\n        quality -= 5\n\n\ndef split_string_by_utf8_length(string, max_length, max_split=0):\n    encoded = string.encode(\"utf-8\")\n    start, end = 0, 0\n    result = []\n    while end < len(encoded):\n        if max_split > 0 and len(result) >= max_split:\n            result.append(encoded[start:].decode(\"utf-8\"))\n            break\n        end = min(start + max_length, len(encoded))\n        # 如果当前字节不是 UTF-8 编码的开始字节，则向前查找直到找到开始字节为止\n        while end < len(encoded) and (encoded[end] & 0b11000000) == 0b10000000:\n            end -= 1\n        result.append(encoded[start:end].decode(\"utf-8\"))\n        start = end\n    return result\n\n\ndef get_path_suffix(path):\n    path = urlparse(path).path\n    return os.path.splitext(path)[-1].lstrip('.')\n\n\ndef convert_webp_to_png(webp_image):\n    from PIL import Image\n    try:\n        webp_image.seek(0)\n        img = Image.open(webp_image).convert(\"RGBA\")\n        png_image = io.BytesIO()\n        img.save(png_image, format=\"PNG\")\n        png_image.seek(0)\n        return png_image\n    except Exception as e:\n        logger.error(f\"Failed to convert WEBP to PNG: {e}\")\n        raise\n\n\ndef remove_markdown_symbol(text: str):\n    # 移除markdown格式，目前先移除**\n    if not text:\n        return text\n    return re.sub(r'\\*\\*(.*?)\\*\\*', r'\\1', text)\n\n\ndef expand_path(path: str) -> str:\n    \"\"\"\n    Expand user path with proper Windows support.\n    \n    On Windows, os.path.expanduser('~') may not work properly in some shells (like PowerShell).\n    This function provides a more robust path expansion.\n    \n    Args:\n        path: Path string that may contain ~\n        \n    Returns:\n        Expanded absolute path\n    \"\"\"\n    if not path:\n        return path\n    \n    # Try standard expansion first\n    expanded = os.path.expanduser(path)\n    \n    # If expansion didn't work (path still starts with ~), use HOME or USERPROFILE\n    if expanded.startswith('~'):\n        import platform\n        if platform.system() == 'Windows':\n            # On Windows, try USERPROFILE first, then HOME\n            home = os.environ.get('USERPROFILE') or os.environ.get('HOME')\n        else:\n            # On Unix-like systems, use HOME\n            home = os.environ.get('HOME')\n        \n        if home:\n            # Replace ~ with home directory\n            if path == '~':\n                expanded = home\n            elif path.startswith('~/') or path.startswith('~\\\\'):\n                expanded = os.path.join(home, path[2:])\n    \n    return expanded\n\n\ndef get_cloud_headers(api_key: str) -> 
dict:\n    \"\"\"\n    Build standard headers for LinkAI API requests,\n    including client_id when available.\n    \"\"\"\n    headers = {\n        \"Content-Type\": \"application/json\",\n        \"Authorization\": f\"Bearer {api_key}\",\n    }\n    try:\n        from linkai import LinkAIClient\n        client_id = LinkAIClient.fetch_client_id()\n        if client_id:\n            headers[\"X-Client-Id\"] = client_id\n    except Exception:\n        pass\n    return headers\n"
  },
  {
    "path": "config-template.json",
    "content": "{\n  \"channel_type\": \"web\",\n  \"model\": \"MiniMax-M2.7\",\n  \"minimax_api_key\": \"\",\n  \"zhipu_ai_api_key\": \"\",\n  \"ark_api_key\": \"\",\n  \"moonshot_api_key\": \"\",\n  \"dashscope_api_key\": \"\",\n  \"claude_api_key\": \"\",\n  \"claude_api_base\": \"https://api.anthropic.com/v1\",\n  \"open_ai_api_key\": \"\",\n  \"open_ai_api_base\": \"https://api.openai.com/v1\",\n  \"gemini_api_key\": \"\",\n  \"gemini_api_base\": \"https://generativelanguage.googleapis.com\",\n  \"voice_to_text\": \"openai\",\n  \"text_to_voice\": \"openai\",\n  \"voice_reply_voice\": false,\n  \"speech_recognition\": true,\n  \"group_speech_recognition\": false,\n  \"use_linkai\": false,\n  \"linkai_api_key\": \"\",\n  \"linkai_app_code\": \"\",\n  \"feishu_app_id\": \"\",\n  \"feishu_app_secret\": \"\",\n  \"dingtalk_client_id\": \"\",\n  \"dingtalk_client_secret\":\"\",\n  \"wecom_bot_id\": \"\",\n  \"wecom_bot_secret\": \"\",\n  \"agent\": true,\n  \"agent_max_context_tokens\": 40000,\n  \"agent_max_context_turns\": 20,\n  \"agent_max_steps\": 15\n}\n"
  },
  {
    "path": "config.py",
    "content": "# encoding:utf-8\n\nimport copy\nimport json\nimport logging\nimport os\nimport pickle\n\nfrom common.log import logger\n\n# 将所有可用的配置项写在字典里, 请使用小写字母\n# 此处的配置值无实际意义，程序不会读取此处的配置，仅用于提示格式，请将配置加入到config.json中\navailable_setting = {\n    # openai api配置\n    \"open_ai_api_key\": \"\",  # openai api key\n    # openai apibase，当use_azure_chatgpt为true时，需要设置对应的api base\n    \"open_ai_api_base\": \"https://api.openai.com/v1\",\n    \"claude_api_base\": \"https://api.anthropic.com/v1\",  # claude api base\n    \"gemini_api_base\": \"https://generativelanguage.googleapis.com\",  # gemini api base\n    \"proxy\": \"\",  # openai使用的代理\n    # chatgpt模型， 当use_azure_chatgpt为true时，其名称为Azure上model deployment名称\n    \"model\": \"gpt-3.5-turbo\",  # 可选择: gpt-4o, pt-4o-mini, gpt-4-turbo, claude-3-sonnet, wenxin, moonshot, qwen-turbo, xunfei, glm-4, minimax, gemini等模型，全部可选模型详见common/const.py文件\n    \"bot_type\": \"\",  # 可选配置，使用兼容openai格式的三方服务时候，需填\"openai\"（历史值\"chatGPT\"仍兼容）。bot具体名称详见common/const.py文件，如不填根据model名称判断\n    \"use_azure_chatgpt\": False,  # 是否使用azure的chatgpt\n    \"azure_deployment_id\": \"\",  # azure 模型部署名称\n    \"azure_api_version\": \"\",  # azure api版本\n    # Bot触发配置\n    \"single_chat_prefix\": [\"bot\", \"@bot\"],  # 私聊时文本需要包含该前缀才能触发机器人回复\n    \"single_chat_reply_prefix\": \"[bot] \",  # 私聊时自动回复的前缀，用于区分真人\n    \"single_chat_reply_suffix\": \"\",  # 私聊时自动回复的后缀，\\n 可以换行\n    \"group_chat_prefix\": [\"@bot\"],  # 群聊时包含该前缀则会触发机器人回复\n    \"no_need_at\": False,  # 群聊回复时是否不需要艾特\n    \"group_chat_reply_prefix\": \"\",  # 群聊时自动回复的前缀\n    \"group_chat_reply_suffix\": \"\",  # 群聊时自动回复的后缀，\\n 可以换行\n    \"group_chat_keyword\": [],  # 群聊时包含该关键词则会触发机器人回复\n    \"group_at_off\": False,  # 是否关闭群聊时@bot的触发\n    \"group_name_white_list\": [\"ChatGPT测试群\", \"ChatGPT测试群2\"],  # 开启自动回复的群名称列表\n    \"group_name_keyword_white_list\": [],  # 开启自动回复的群名称关键词列表\n    \"group_chat_in_one_session\": [\"ChatGPT测试群\"],  # 支持会话上下文共享的群名称\n    \"group_shared_session\": False,  # 群聊是否共享会话上下文（所有成员共享）。False时每个用户在群内有独立会话\n    \"nick_name_black_list\": [],  # 用户昵称黑名单\n    \"group_welcome_msg\": \"\",  # 配置新人进群固定欢迎语，不配置则使用随机风格欢迎\n    \"trigger_by_self\": False,  # 是否允许机器人触发\n    \"text_to_image\": \"dall-e-2\",  # 图片生成模型，可选 dall-e-2, dall-e-3\n    # Azure OpenAI dall-e-3 配置\n    \"dalle3_image_style\": \"vivid\", # 图片生成dalle3的风格，可选有 vivid, natural\n    \"dalle3_image_quality\": \"hd\", # 图片生成dalle3的质量，可选有 standard, hd\n    # Azure OpenAI DALL-E API 配置, 当use_azure_chatgpt为true时,用于将文字回复的资源和Dall-E的资源分开.\n    \"azure_openai_dalle_api_base\": \"\", # [可选] azure openai 用于回复图片的资源 endpoint，默认使用 open_ai_api_base\n    \"azure_openai_dalle_api_key\": \"\", # [可选] azure openai 用于回复图片的资源 key，默认使用 open_ai_api_key\n    \"azure_openai_dalle_deployment_id\":\"\", # [可选] azure openai 用于回复图片的资源 deployment id，默认使用 text_to_image\n    \"image_proxy\": True,  # 是否需要图片代理，国内访问LinkAI时需要\n    \"image_create_prefix\": [\"画\", \"看\", \"找\"],  # 开启图片回复的前缀\n    \"concurrency_in_session\": 1,  # 同一会话最多有多少条消息在处理中，大于1可能乱序\n    \"image_create_size\": \"256x256\",  # 图片大小,可选有 256x256, 512x512, 1024x1024 (dall-e-3默认为1024x1024)\n    \"group_chat_exit_group\": False,\n    # chatgpt会话参数\n    \"expires_in_seconds\": 3600,  # 无操作会话的过期时间\n    # 人格描述\n    \"character_desc\": \"你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题，并且可以使用多种语言与人交流。\",\n    \"conversation_max_tokens\": 1000,  # 支持上下文记忆的最多字符数\n    # chatgpt限流配置\n    \"rate_limit_chatgpt\": 20,  # chatgpt的调用频率限制\n    \"rate_limit_dalle\": 50,  # openai dalle的调用频率限制\n    # chatgpt api参数 
参考https://platform.openai.com/docs/api-reference/chat/create\n    \"temperature\": 0.9,\n    \"top_p\": 1,\n    \"frequency_penalty\": 0,\n    \"presence_penalty\": 0,\n    \"request_timeout\": 180,  # chatgpt请求超时时间，openai接口默认设置为600，对于难问题一般需要较长时间\n    \"timeout\": 120,  # chatgpt重试超时时间，在这个时间内，将会自动重试\n    # Baidu 文心一言参数\n    \"baidu_wenxin_model\": \"eb-instant\",  # 默认使用ERNIE-Bot-turbo模型\n    \"baidu_wenxin_api_key\": \"\",  # Baidu api key\n    \"baidu_wenxin_secret_key\": \"\",  # Baidu secret key\n    \"baidu_wenxin_prompt_enabled\": False,  # Enable prompt if you are using ernie character model\n    # 讯飞星火API\n    \"xunfei_app_id\": \"\",  # 讯飞应用ID\n    \"xunfei_api_key\": \"\",  # 讯飞 API key\n    \"xunfei_api_secret\": \"\",  # 讯飞 API secret\n    \"xunfei_domain\": \"\",  # 讯飞模型对应的domain参数，Spark4.0 Ultra为 4.0Ultra，其他模型详见: https://www.xfyun.cn/doc/spark/Web.html\n    \"xunfei_spark_url\": \"\",  # 讯飞模型对应的请求地址，Spark4.0 Ultra为 wss://spark-api.xf-yun.com/v4.0/chat，其他模型参考详见: https://www.xfyun.cn/doc/spark/Web.html\n    # claude 配置\n    \"claude_api_cookie\": \"\",\n    \"claude_uuid\": \"\",\n    # claude api key\n    \"claude_api_key\": \"\",\n    # 通义千问API, 获取方式查看文档 https://help.aliyun.com/document_detail/2587494.html\n    \"qwen_access_key_id\": \"\",\n    \"qwen_access_key_secret\": \"\",\n    \"qwen_agent_key\": \"\",\n    \"qwen_app_id\": \"\",\n    \"qwen_node_id\": \"\",  # 流程编排模型用到的id，如果没有用到qwen_node_id，请务必保持为空字符串\n    # 阿里灵积(通义新版sdk)模型api key\n    \"dashscope_api_key\": \"\",\n    # Google Gemini Api Key\n    \"gemini_api_key\": \"\",\n    # 语音设置\n    \"speech_recognition\": True,  # 是否开启语音识别\n    \"group_speech_recognition\": False,  # 是否开启群组语音识别\n    \"voice_reply_voice\": False,  # 是否使用语音回复语音，需要设置对应语音合成引擎的api key\n    \"always_reply_voice\": False,  # 是否一直使用语音回复\n    \"voice_to_text\": \"openai\",  # 语音识别引擎，支持openai,baidu,google,azure,xunfei,ali\n    \"text_to_voice\": \"openai\",  # 语音合成引擎，支持openai,baidu,google,azure,xunfei,ali,pytts(offline),elevenlabs,edge(online)\n    \"text_to_voice_model\": \"tts-1\",\n    \"tts_voice_id\": \"alloy\",\n    # baidu 语音api配置， 使用百度语音识别和语音合成时需要\n    \"baidu_app_id\": \"\",\n    \"baidu_api_key\": \"\",\n    \"baidu_secret_key\": \"\",\n    # 1536普通话(支持简单的英文识别) 1737英语 1637粤语 1837四川话 1936普通话远场\n    \"baidu_dev_pid\": 1536,\n    # azure 语音api配置， 使用azure语音识别和语音合成时需要\n    \"azure_voice_api_key\": \"\",\n    \"azure_voice_region\": \"japaneast\",\n    # elevenlabs 语音api配置\n    \"xi_api_key\": \"\",  # 获取ap的方法可以参考https://docs.elevenlabs.io/api-reference/quick-start/authentication\n    \"xi_voice_id\": \"\",  # ElevenLabs提供了9种英式、美式等英语发音id，分别是“Adam/Antoni/Arnold/Bella/Domi/Elli/Josh/Rachel/Sam”\n    # 服务时间限制\n    \"chat_time_module\": False,  # 是否开启服务时间限制\n    \"chat_start_time\": \"00:00\",  # 服务开始时间\n    \"chat_stop_time\": \"24:00\",  # 服务结束时间\n    # 翻译api\n    \"translate\": \"baidu\",  # 翻译api，支持baidu\n    # baidu翻译api的配置\n    \"baidu_translate_app_id\": \"\",  # 百度翻译api的appid\n    \"baidu_translate_app_key\": \"\",  # 百度翻译api的秘钥\n    # wechatmp的配置\n    \"wechatmp_token\": \"\",  # 微信公众平台的Token\n    \"wechatmp_port\": 8080,  # 微信公众平台的端口,需要端口转发到80或443\n    \"wechatmp_app_id\": \"\",  # 微信公众平台的appID\n    \"wechatmp_app_secret\": \"\",  # 微信公众平台的appsecret\n    \"wechatmp_aes_key\": \"\",  # 微信公众平台的EncodingAESKey，加密模式需要\n    # wechatcom的通用配置\n    \"wechatcom_corp_id\": \"\",  # 企业微信公司的corpID\n    # wechatcomapp的配置\n    \"wechatcomapp_token\": \"\",  # 企业微信app的token\n    \"wechatcomapp_port\": 9898,  # 企业微信app的服务端口,不需要端口转发\n    
\"wechatcomapp_secret\": \"\",  # 企业微信app的secret\n    \"wechatcomapp_agent_id\": \"\",  # 企业微信app的agent_id\n    \"wechatcomapp_aes_key\": \"\",  # 企业微信app的aes_key\n    # 飞书配置\n    \"feishu_port\": 80,  # 飞书bot监听端口\n    \"feishu_app_id\": \"\",  # 飞书机器人应用APP Id\n    \"feishu_app_secret\": \"\",  # 飞书机器人APP secret\n    \"feishu_token\": \"\",  # 飞书 verification token\n    \"feishu_bot_name\": \"\",  # 飞书机器人的名字\n    \"feishu_event_mode\": \"websocket\",  # 飞书事件接收模式: webhook(HTTP服务器) 或 websocket(长连接)\n    # 钉钉配置\n    \"dingtalk_client_id\": \"\",  # 钉钉机器人Client ID \n    \"dingtalk_client_secret\": \"\",  # 钉钉机器人Client Secret\n    \"dingtalk_card_enabled\": False,\n    # 企微智能机器人配置(长连接模式)\n    \"wecom_bot_id\": \"\",  # 企微智能机器人BotID\n    \"wecom_bot_secret\": \"\",  # 企微智能机器人长连接Secret\n    # chatgpt指令自定义触发词\n    \"clear_memory_commands\": [\"#清除记忆\"],  # 重置会话指令，必须以#开头\n    # channel配置\n    \"channel_type\": \"\",  # 通道类型，支持多渠道同时运行。单个: \"feishu\"，多个: \"feishu, dingtalk\" 或 [\"feishu\", \"dingtalk\"]。可选值: web,feishu,dingtalk,wecom_bot,wechatmp,wechatmp_service,wechatcom_app\n    \"web_console\": True,  # 是否自动启动Web控制台（默认启动）。设为False可禁用\n    \"subscribe_msg\": \"\",  # 订阅消息, 支持: wechatmp, wechatmp_service, wechatcom_app\n    \"debug\": False,  # 是否开启debug模式，开启后会打印更多日志\n    \"appdata_dir\": \"\",  # 数据目录\n    # 插件配置\n    \"plugin_trigger_prefix\": \"$\",  # 规范插件提供聊天相关指令的前缀，建议不要和管理员指令前缀\"#\"冲突\n    # 是否使用全局插件配置\n    \"use_global_plugin_config\": False,\n    \"max_media_send_count\": 3,  # 单次最大发送媒体资源的个数\n    \"media_send_interval\": 1,  # 发送图片的事件间隔，单位秒\n    # 智谱AI 平台配置\n    \"zhipu_ai_api_key\": \"\",\n    \"zhipu_ai_api_base\": \"https://open.bigmodel.cn/api/paas/v4\",\n    \"moonshot_api_key\": \"\",\n    \"moonshot_base_url\": \"https://api.moonshot.cn/v1\",\n    # 豆包(火山方舟) 平台配置\n    \"ark_api_key\": \"\",\n    \"ark_base_url\": \"https://ark.cn-beijing.volces.com/api/v3\",\n    #魔搭社区 平台配置\n    \"modelscope_api_key\": \"\",\n    \"modelscope_base_url\": \"https://api-inference.modelscope.cn/v1/chat/completions\",\n    # LinkAI平台配置\n    \"use_linkai\": False,\n    \"linkai_api_key\": \"\",\n    \"linkai_app_code\": \"\",\n    \"linkai_api_base\": \"https://api.link-ai.tech\",  # linkAI服务地址\n    \"cloud_host\": \"client.link-ai.tech\",\n    \"cloud_deployment_id\": \"\",\n    \"minimax_api_key\": \"\",\n    \"Minimax_group_id\": \"\",\n    \"Minimax_base_url\": \"\",\n    \"web_port\": 9899,\n    \"agent\": True,  # 是否开启Agent模式\n    \"agent_workspace\": \"~/cow\",  # agent工作空间路径，用于存储skills、memory等\n    \"agent_max_context_tokens\": 50000,  # Agent模式下最大上下文tokens\n    \"agent_max_context_turns\": 30,  # Agent模式下最大上下文记忆轮次\n    \"agent_max_steps\": 15,  # Agent模式下单次运行最大决策步数\n}\n\n\nclass Config(dict):\n    def __init__(self, d=None):\n        super().__init__()\n        if d is None:\n            d = {}\n        for k, v in d.items():\n            self[k] = v\n        # user_datas: 用户数据，key为用户名，value为用户数据，也是dict\n        self.user_datas = {}\n\n    def __getitem__(self, key):\n        # 跳过以下划线开头的注释字段\n        if not key.startswith(\"_\") and key not in available_setting:\n            logger.warning(\"[Config] key '{}' not in available_setting, may not take effect\".format(key))\n        return super().__getitem__(key)\n\n    def __setitem__(self, key, value):\n        # 跳过以下划线开头的注释字段\n        if not key.startswith(\"_\") and key not in available_setting:\n            logger.warning(\"[Config] key '{}' not in available_setting, may not take effect\".format(key))\n        return super().__setitem__(key, 
value)\n\n    def get(self, key, default=None):\n        # 跳过以下划线开头的注释字段\n        if key.startswith(\"_\"):\n            return super().get(key, default)\n        \n        # 如果key不在available_setting中，直接返回default\n        if key not in available_setting:\n            return super().get(key, default)\n        \n        try:\n            return self[key]\n        except KeyError as e:\n            return default\n        except Exception as e:\n            raise e\n\n    # Make sure to return a dictionary to ensure atomic\n    def get_user_data(self, user) -> dict:\n        if self.user_datas.get(user) is None:\n            self.user_datas[user] = {}\n        return self.user_datas[user]\n\n    def load_user_datas(self):\n        try:\n            with open(os.path.join(get_appdata_dir(), \"user_datas.pkl\"), \"rb\") as f:\n                self.user_datas = pickle.load(f)\n                logger.debug(\"[Config] User datas loaded.\")\n        except FileNotFoundError as e:\n            logger.debug(\"[Config] User datas file not found, ignore.\")\n        except Exception as e:\n            logger.warning(\"[Config] User datas error: {}\".format(e))\n            self.user_datas = {}\n\n    def save_user_datas(self):\n        try:\n            with open(os.path.join(get_appdata_dir(), \"user_datas.pkl\"), \"wb\") as f:\n                pickle.dump(self.user_datas, f)\n                logger.info(\"[Config] User datas saved.\")\n        except Exception as e:\n            logger.info(\"[Config] User datas error: {}\".format(e))\n\n\nconfig = Config()\n\n\ndef drag_sensitive(config):\n    try:\n        if isinstance(config, str):\n            conf_dict: dict = json.loads(config)\n            conf_dict_copy = copy.deepcopy(conf_dict)\n            for key in conf_dict_copy:\n                if \"key\" in key or \"secret\" in key:\n                    if isinstance(conf_dict_copy[key], str):\n                        conf_dict_copy[key] = conf_dict_copy[key][0:3] + \"*\" * 5 + conf_dict_copy[key][-3:]\n            return json.dumps(conf_dict_copy, indent=4)\n\n        elif isinstance(config, dict):\n            config_copy = copy.deepcopy(config)\n            for key in config:\n                if \"key\" in key or \"secret\" in key:\n                    if isinstance(config_copy[key], str):\n                        config_copy[key] = config_copy[key][0:3] + \"*\" * 5 + config_copy[key][-3:]\n            return config_copy\n    except Exception as e:\n        logger.exception(e)\n        return config\n    return config\n\n\ndef load_config():\n    global config\n\n    # 打印 ASCII Logo\n    logger.info(\"  ____                _                    _   \")\n    logger.info(\" / ___|_____      __ / \\\\   __ _  ___ _ __ | |_ \")\n    logger.info(\"| |   / _ \\\\ \\\\ /\\\\ / // _ \\\\ / _` |/ _ \\\\ '_ \\\\| __|\")\n    logger.info(\"| |__| (_) \\\\ V  V // ___ \\\\ (_| |  __/ | | | |_ \")\n    logger.info(\" \\\\____\\\\___/ \\\\_/\\\\_//_/   \\\\_\\\\__, |\\\\___|_| |_|\\\\__|\")\n    logger.info(\"                          |___/                 \")\n    logger.info(\"\")\n    config_path = \"./config.json\"\n    if not os.path.exists(config_path):\n        logger.info(\"配置文件不存在，将使用config-template.json模板\")\n        config_path = \"./config-template.json\"\n\n    config_str = read_file(config_path)\n    logger.debug(\"[INIT] config str: {}\".format(drag_sensitive(config_str)))\n\n    # 将json字符串反序列化为dict类型\n    config = Config(json.loads(config_str))\n\n    # override config with environment 
variables.\n    # Some online deployment platforms (e.g. Railway) deploy project from github directly. So you shouldn't put your secrets like api key in a config file, instead use environment variables to override the default config.\n    for name, value in os.environ.items():\n        name = name.lower()\n        # 跳过以下划线开头的注释字段\n        if name.startswith(\"_\"):\n            continue\n        if name in available_setting:\n            logger.info(\"[INIT] override config by environ args: {}={}\".format(name, value))\n            try:\n                config[name] = eval(value)\n            except Exception:\n                if value == \"false\":\n                    config[name] = False\n                elif value == \"true\":\n                    config[name] = True\n                else:\n                    config[name] = value\n\n    if config.get(\"debug\", False):\n        logger.setLevel(logging.DEBUG)\n        logger.debug(\"[INIT] set log level to DEBUG\")\n\n    logger.info(\"[INIT] load config: {}\".format(drag_sensitive(config)))\n\n    # 打印系统初始化信息\n    logger.info(\"[INIT] ========================================\")\n    logger.info(\"[INIT] System Initialization\")\n    logger.info(\"[INIT] ========================================\")\n    logger.info(\"[INIT] Channel: {}\".format(config.get(\"channel_type\", \"unknown\")))\n    logger.info(\"[INIT] Model: {}\".format(config.get(\"model\", \"unknown\")))\n\n    # Agent模式信息\n    if config.get(\"agent\", False):\n        workspace = config.get(\"agent_workspace\", \"~/cow\")\n        logger.info(\"[INIT] Mode: Agent (workspace: {})\".format(workspace))\n    else:\n        logger.info(\"[INIT] Mode: Chat (在config.json中设置 \\\"agent\\\":true 可启用Agent模式)\")\n\n    logger.info(\"[INIT] Debug: {}\".format(config.get(\"debug\", False)))\n    logger.info(\"[INIT] ========================================\")\n\n    # Sync selected config values to environment variables so that\n    # subprocesses (e.g. 
shell skill scripts) can access them directly.\n    # Existing env vars are NOT overwritten (env takes precedence).\n    _CONFIG_TO_ENV = {\n        \"open_ai_api_key\": \"OPENAI_API_KEY\",\n        \"open_ai_api_base\": \"OPENAI_API_BASE\",\n        \"linkai_api_key\": \"LINKAI_API_KEY\",\n        \"linkai_api_base\": \"LINKAI_API_BASE\",\n        \"claude_api_key\": \"CLAUDE_API_KEY\",\n        \"claude_api_base\": \"CLAUDE_API_BASE\",\n        \"gemini_api_key\": \"GEMINI_API_KEY\",\n        \"gemini_api_base\": \"GEMINI_API_BASE\",\n        \"minimax_api_key\": \"MINIMAX_API_KEY\",\n        \"minimax_api_base\": \"MINIMAX_API_BASE\",\n        \"zhipu_ai_api_key\": \"ZHIPU_AI_API_KEY\",\n        \"zhipu_ai_api_base\": \"ZHIPU_AI_API_BASE\",\n        \"moonshot_api_key\": \"MOONSHOT_API_KEY\",\n        \"moonshot_api_base\": \"MOONSHOT_API_BASE\",\n        \"ark_api_key\": \"ARK_API_KEY\",\n        \"ark_api_base\": \"ARK_API_BASE\",\n        # Channel credentials (used by skills that check env vars)\n        \"feishu_app_id\": \"FEISHU_APP_ID\",\n        \"feishu_app_secret\": \"FEISHU_APP_SECRET\",\n        \"dingtalk_client_id\": \"DINGTALK_CLIENT_ID\",\n        \"dingtalk_client_secret\": \"DINGTALK_CLIENT_SECRET\",\n        \"wechatmp_app_id\": \"WECHATMP_APP_ID\",\n        \"wechatmp_app_secret\": \"WECHATMP_APP_SECRET\",\n        \"wechatcomapp_agent_id\": \"WECHATCOMAPP_AGENT_ID\",\n        \"wechatcomapp_secret\": \"WECHATCOMAPP_SECRET\",\n        \"qq_app_id\": \"QQ_APP_ID\",\n        \"qq_app_secret\": \"QQ_APP_SECRET\"\n    }\n    injected = 0\n    for conf_key, env_key in _CONFIG_TO_ENV.items():\n        if env_key not in os.environ:\n            val = config.get(conf_key, \"\")\n            if val:\n                os.environ[env_key] = str(val)\n                injected += 1\n    if injected:\n        logger.info(\"[INIT] Synced {} config values to environment variables\".format(injected))\n\n    config.load_user_datas()\n\n\ndef get_root():\n    return os.path.dirname(os.path.abspath(__file__))\n\n\ndef read_file(path):\n    with open(path, mode=\"r\", encoding=\"utf-8\") as f:\n        return f.read()\n\n\ndef conf():\n    return config\n\n\ndef get_appdata_dir():\n    data_path = os.path.join(get_root(), conf().get(\"appdata_dir\", \"\"))\n    if not os.path.exists(data_path):\n        logger.info(\"[INIT] data path not exists, create it: {}\".format(data_path))\n        os.makedirs(data_path)\n    return data_path\n\n\ndef subscribe_msg():\n    trigger_prefix = conf().get(\"single_chat_prefix\", [\"\"])[0]\n    msg = conf().get(\"subscribe_msg\", \"\")\n    return msg.format(trigger_prefix=trigger_prefix)\n\n\n# global plugin config\nplugin_config = {}\n\n\ndef write_plugin_config(pconf: dict):\n    \"\"\"\n    写入插件全局配置\n    :param pconf: 全量插件配置\n    \"\"\"\n    global plugin_config\n    for k in pconf:\n        plugin_config[k.lower()] = pconf[k]\n\ndef remove_plugin_config(name: str):\n    \"\"\"\n    移除待重新加载的插件全局配置\n    :param name: 待重载的插件名\n    \"\"\"\n    global plugin_config\n    plugin_config.pop(name.lower(), None)\n\n\ndef pconf(plugin_name: str) -> dict:\n    \"\"\"\n    根据插件名称获取配置\n    :param plugin_name: 插件名称\n    :return: 该插件的配置项\n    \"\"\"\n    return plugin_config.get(plugin_name.lower())\n\n\n# 全局配置，用于存放全局生效的状态\nglobal_config = {\"admin_users\": []}\n"
  },
  {
    "path": "docker/Dockerfile.latest",
    "content": "FROM python:3.10-slim-bullseye\n\nLABEL maintainer=\"foo@bar.com\"\nARG TZ='Asia/Shanghai'\n\nARG CHATGPT_ON_WECHAT_VER\n\nRUN echo /etc/apt/sources.list\n# RUN sed -i 's/deb.debian.org/mirrors.tuna.tsinghua.edu.cn/g' /etc/apt/sources.list\nENV BUILD_PREFIX=/app\n\nADD . ${BUILD_PREFIX}\n\nRUN apt-get update \\\n    &&apt-get install -y --no-install-recommends bash ffmpeg espeak libavcodec-extra\\\n    && cd ${BUILD_PREFIX} \\\n    && cp config-template.json config.json \\\n    && /usr/local/bin/python -m pip install --no-cache --upgrade pip \\\n    && pip install --no-cache -r requirements.txt \\\n    && pip install --no-cache -r requirements-optional.txt \\\n    && pip install azure-cognitiveservices-speech\n\nWORKDIR ${BUILD_PREFIX}\n\nADD docker/entrypoint.sh /entrypoint.sh\n\nRUN chmod +x /entrypoint.sh \\\n    && mkdir -p /home/agent/cow \\\n    && groupadd -r agent \\\n    && useradd -r -g agent -s /bin/bash -d /home/agent agent \\\n    && chown -R agent:agent /home/agent ${BUILD_PREFIX} /usr/local/lib\n\nUSER agent\n\nENTRYPOINT [\"/entrypoint.sh\"]\n"
  },
  {
    "path": "docker/build.latest.sh",
    "content": "#!/bin/bash\n\nunset KUBECONFIG\n\ncd .. && docker build -f docker/Dockerfile.latest \\\n             -t zhayujie/chatgpt-on-wechat .\n\ndocker tag zhayujie/chatgpt-on-wechat zhayujie/chatgpt-on-wechat:$(date +%y%m%d)"
  },
  {
    "path": "docker/docker-compose.yml",
    "content": "version: '2.0'\nservices:\n  chatgpt-on-wechat:\n    image: zhayujie/chatgpt-on-wechat\n    container_name: chatgpt-on-wechat\n    security_opt:\n      - seccomp:unconfined\n    ports:\n      - \"9899:9899\"\n    environment:\n      CHANNEL_TYPE: 'web'\n      MODEL: 'MiniMax-M2.5'\n      MINIMAX_API_KEY: ''\n      ZHIPU_AI_API_KEY: ''\n      ARK_API_KEY: ''\n      MOONSHOT_API_KEY: ''\n      DASHSCOPE_API_KEY: ''\n      CLAUDE_API_KEY: ''\n      CLAUDE_API_BASE: 'https://api.anthropic.com/v1'\n      OPEN_AI_API_KEY: ''\n      OPEN_AI_API_BASE: 'https://api.openai.com/v1'\n      GEMINI_API_KEY: ''\n      GEMINI_API_BASE: 'https://generativelanguage.googleapis.com'\n      VOICE_TO_TEXT: 'openai'\n      TEXT_TO_VOICE: 'openai'\n      VOICE_REPLY_VOICE: 'False'\n      SPEECH_RECOGNITION: 'True'\n      GROUP_SPEECH_RECOGNITION: 'False'\n      USE_LINKAI: 'False'\n      LINKAI_API_KEY: ''\n      LINKAI_APP_CODE: ''\n      FEISHU_APP_ID: ''\n      FEISHU_APP_SECRET: ''\n      DINGTALK_CLIENT_ID: ''\n      DINGTALK_CLIENT_SECRET: ''\n      WECOM_BOT_ID: ''\n      WECOM_BOT_SECRET: ''\n      AGENT: 'True'\n      AGENT_MAX_CONTEXT_TOKENS: 40000\n      AGENT_MAX_CONTEXT_TURNS: 20\n      AGENT_MAX_STEPS: 15\n    volumes:\n      - ./cow:/home/agent/cow\n"
  },
  {
    "path": "docker/entrypoint.sh",
    "content": "#!/bin/bash\nset -e\n\n# build prefix\nCHATGPT_ON_WECHAT_PREFIX=${CHATGPT_ON_WECHAT_PREFIX:-\"\"}\n# path to config.json\nCHATGPT_ON_WECHAT_CONFIG_PATH=${CHATGPT_ON_WECHAT_CONFIG_PATH:-\"\"}\n# execution command line\nCHATGPT_ON_WECHAT_EXEC=${CHATGPT_ON_WECHAT_EXEC:-\"\"}\n\n# use environment variables to pass parameters\n# if you have not defined environment variables, set them below\n# export OPEN_AI_API_KEY=${OPEN_AI_API_KEY:-'YOUR API KEY'}\n# export OPEN_AI_PROXY=${OPEN_AI_PROXY:-\"\"}\n# export SINGLE_CHAT_PREFIX=${SINGLE_CHAT_PREFIX:-'[\"bot\", \"@bot\"]'}\n# export SINGLE_CHAT_REPLY_PREFIX=${SINGLE_CHAT_REPLY_PREFIX:-'\"[bot] \"'}\n# export GROUP_CHAT_PREFIX=${GROUP_CHAT_PREFIX:-'[\"@bot\"]'}\n# export GROUP_NAME_WHITE_LIST=${GROUP_NAME_WHITE_LIST:-'[\"ChatGPT测试群\", \"ChatGPT测试群2\"]'}\n# export IMAGE_CREATE_PREFIX=${IMAGE_CREATE_PREFIX:-'[\"画\", \"看\", \"找\"]'}\n# export CONVERSATION_MAX_TOKENS=${CONVERSATION_MAX_TOKENS:-\"1000\"}\n# export SPEECH_RECOGNITION=${SPEECH_RECOGNITION:-\"False\"}\n# export CHARACTER_DESC=${CHARACTER_DESC:-\"你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题，并且可以使用多种语言与人交流。\"}\n# export EXPIRES_IN_SECONDS=${EXPIRES_IN_SECONDS:-\"3600\"}\n\n# CHATGPT_ON_WECHAT_PREFIX is empty, use /app\nif [ \"$CHATGPT_ON_WECHAT_PREFIX\" == \"\" ] ; then\n    CHATGPT_ON_WECHAT_PREFIX=/app\nfi\n\n# CHATGPT_ON_WECHAT_CONFIG_PATH is empty, use '/app/config.json'\nif [ \"$CHATGPT_ON_WECHAT_CONFIG_PATH\" == \"\" ] ; then\n    CHATGPT_ON_WECHAT_CONFIG_PATH=$CHATGPT_ON_WECHAT_PREFIX/config.json\nfi\n\n# CHATGPT_ON_WECHAT_EXEC is empty, use ‘python app.py’\nif [ \"$CHATGPT_ON_WECHAT_EXEC\" == \"\" ] ; then\n    CHATGPT_ON_WECHAT_EXEC=\"python app.py\"\nfi\n\n# modify content in config.json\n# if [ \"$OPEN_AI_API_KEY\" == \"YOUR API KEY\" ] || [ \"$OPEN_AI_API_KEY\" == \"\" ]; then\n#     echo -e \"\\033[31m[Warning] You need to set OPEN_AI_API_KEY before running!\\033[0m\"\n# fi\n\n\n# go to prefix dir\ncd $CHATGPT_ON_WECHAT_PREFIX\n# excute\n$CHATGPT_ON_WECHAT_EXEC\n\n\n"
  },
  {
    "path": "docs/agent.md",
    "content": "# CowAgent介绍\n\n## 概述\n\nCow项目从简单的聊天机器人全面升级为超级智能助理 **CowAgent**，能够主动规思考和规划任务、拥有长期记忆、操作计算机和外部资源、创造和执行Skill，真正理解你并和你一起成长。CowAgent能够长期运行在个人电脑或服务器中，通过飞书、钉钉、企业微信、网页等多种方式进行交互。核心能力如下：\n\n- **复杂任务规划**：能够理解复杂任务并自主规划执行，持续思考和调用工具直到完成目标，支持多轮推理和上下文理解\n- **工具系统**：内置实现10+种工具，包括文件读写、bash终端、浏览器、定时任务、记忆管理等，通过Agent管理你的计算机或服务器\n- **长期记忆**：自动将对话记忆持久化至本地文件和数据库中，包括全局记忆和天级记忆，支持关键词及向量检索\n- **Skills系统**：新增Skill运行引擎，内置多种技能，并支持通过自然语言对话完成自定义Skills开发\n- **多渠道和多模型支持**：支持在Web、飞书、钉钉、企微等多渠道与Agent交互，支持Claude、Gemini、OpenAI、GLM、MiniMax、Qwen、Kimi、Doubao 等多种国内外主流模型\n- **安全和成本**：通过秘钥管理工具、提示词控制、系统权限等手段控制Agent的访问安全；通过最大记忆轮次、最大上下文token、工具执行步数对token成本进行限制\n\n\n## 核心功能\n\n### 1. 长期记忆\n\n> 记忆系统让 Agent 能够长期记住重要信息。Agent 会在用户分享偏好、决策、事实等重要信息时主动存储，也会在对话达到一定长度时自动提取摘要。记忆分为核心记忆、天级记忆，支持语义搜索和向量检索的混合检索模式。\n\n\n第一次启动Agent会主动向用户获取询问关键信息，并记录至工作空间 (默认为 ~/cow) 中的智能体设定、用户身份、记忆文件中。\n\n在后续的长期对话中，Agent会在需要的时候智能记录或检索记忆，并对自身设定、用户偏好、记忆文件等进行不断更新，总结和记录经验和教训，真正实现自主思考和不断成长。\n\n<img width=\"800\" src=\"https://cdn.link-ai.tech/doc/20260203000455.png\" />\n\n\n\n### 2. 任务规划和工具调用\n\n工具是Agent访问操作系统资源的核心，Agent会根据任务需求智能选择和调用工具，完成文件读写、命令执行、定时任务等各类操作。内置工具的视线在项目的 `tools` 目录下。\n\n**主要工具：** 文件读写编辑、Bash终端、浏览器、文件发送、定时调度、记忆搜索、环境配置等。\n\n#### 1.1 终端和文件访问能力\n\n针对操作系统的终端和文件的访问能力，是最基础和核心的工具，其他很多工具或技能都是基于基础工具进行扩展。用户可通过手机端与Agent交互，操作个人电脑或服务器上的资源：\n\n<img width=\"800\" src=\"https://cdn.link-ai.tech/doc/20260202181130.png\" />\n\n#### 1.2 编程能力\n\n基于编程能力和系统访问能力，Agent可以实现从信息搜索、图片等素材生成、编码、测试、部署、Nginx配置修改、发布的 Vibecoding 全流程，通过手机端简单的一句命令完成应用的快速demo：\n\n\n<img width=\"800\" src=\"https://cdn.link-ai.tech/doc/20260203121008.png\" />\n\n\n\n#### 1.3 定时任务\n\n基于 scheduler 工具实现动态定时任务，支持 **一次性任务、固定时间间隔、Cron表达式** 三种形式，任务触发可选择**固定消息发送** 或 **Agent动态任务** 执行两种模式，有很高灵活性：\n\n\n<img width=\"800\" src=\"https://cdn.link-ai.tech/doc/20260202195402.png\" />\n\n同时你也可以通过自然语言快速查看和管理已有的定时任务。\n\n\n#### 1.4 环境变量管理\n\n技能所需要的秘钥存储在环境变量文件中，由 `env_config` 工具进行管理，你可以通过对话的方式更新秘钥，工具内置了安全保护和脱敏策略，会严格保护秘钥安全：\n\n<img width=\"800\" src=\"https://cdn.link-ai.tech/doc/20260202234939.png\" />\n\n### 3. 
技能系统\n\n> 技能系统为Agent提供无限的扩展性，每个Skill由说明文件、运行脚本 (可选)、资源 (可选) 组成，描述如何完成特定类型的任务。通过Skill可以让Agent遵循说明完成复杂流程，调用各类工具或对接第三方系统等。\n\n- **内置技能：** 在项目的`skills`目录下，包含技能创造器、网络搜索、图像识别（openai-image-vision）、LinkAI智能体、网页抓取等。内置Skill根据依赖条件 (API Key、系统命令等) 自动判断是否启用。通过技能创造器可以快速创建自定义技能。\n\n- **自定义技能：** 由用户通过对话创建，存放在工作空间中 (`~/cow/skills/`)，基于自定义技能可以实现任何复杂的业务流程和第三方系统对接。\n\n\n#### 3.1 创建技能\n\n通过 `skill-creator` 技能可以通过对话的方式快速创建技能。你可以在与Agent的写作中让他对将某个工作流程固化为技能，或者把任意接口文档和示例发送给Agent，让他直接完成对接：\n\n<img width=\"800\" src=\"https://cdn.link-ai.tech/doc/20260202202247.png\" />\n\n\n#### 3.2 搜索和图像识别\n\n- **搜索技能：** 系统内置实现了 `bocha-search`(博查搜索)的Skill，依赖环境变量 `BOCHA_SEARCH_API_KEY`，可在[控制台](https://open.bochaai.com/)进行创建，并发送给Agent完成配置\n- **图像识别技能：** 实现了 `openai-image-vision` 插件，可使用 gpt-4.1-mini、gpt-4.1 等图像识别模型。依赖秘钥 `OPENAI_API_KEY`，可通过config.json或env_config工具进行维护。\n\n<img width=\"800\" src=\"https://cdn.link-ai.tech/doc/20260202213219.png\" />\n\n\n#### 3.3 三方知识库和插件\n\n`linkai-agent` 技能可以将 [LinkAI](https://link-ai.tech/) 上的所有智能体作为skill交给Agent使用，并实现多智能体决策的效果。\n\n使用方式：需通过对话的方式配置 `LINKAI_API_KEY`，或在config.json中添加 `linkai_api_key`。 并在 `skills/linkai-agent/config.json`中添加智能体说明，示例如下：\n\n```json\n{\n  \"apps\": [\n    {\n      \"app_code\": \"G7z6vKwp\",\n      \"app_name\": \"LinkAI客服助手\",\n      \"app_description\": \"当用户需要了解LinkAI平台相关问题时才选择该助手，基于LinkAI知识库进行回答\"\n    },\n    {\n      \"app_code\": \"SFY5x7JR\",\n      \"app_name\": \"内容创作助手\",\n      \"app_description\": \"当用户需要创作图片或视频时才使用该助手，支持Nano Banana、Seedream、即梦、Veo、可灵等多种模型\"\n    }\n  ]\n}\n```\n\nAgent可根据智能体的名称和描述进行决策，并通过 app_code 调用接口访问对应的应用/工作流，通过该技能，可以灵活访问LinkAI平台上的智能体、知识库、插件等能力，实现效果如下：\n\n<img width=\"750\" src=\"https://cdn.link-ai.tech/doc/20260202234350.png\" />\n\n注：需通过 `env_config` 配置 `LINKAI_API_KEY`，或在config.json中添加 `linkai_api_key` 配置。\n\n\n## 使用方式\n\n> 详细使用方式参考项目README.md文档进行\n\n### 1.项目运行\n\n在命令行中执行：\n\n```bash\nbash <(curl -fsSL https://cdn.link-ai.tech/code/cow/run.sh)\n```\n\n详细说明及后续程序管理参考：[项目启动脚本](https://github.com/zhayujie/chatgpt-on-wechat/wiki/CowAgentQuickStart)\n\n\n### 2.模型选择\n\nAgent模式推荐使用以下模型，可根据效果及成本综合选择：\n\n- **MiniMax**: `MiniMax-M2.7`\n- **GLM**: `glm-5-turbo`\n- **Kimi**: `kimi-k2.5`\n- **Doubao**: `doubao-seed-2-0-code-preview-260215`\n- **Qwen**: `qwen3.5-plus`\n- **Claude**: `claude-sonnet-4-6`\n- **Gemini**: `gemini-3.1-flash-lite-preview`\n- **OpenAI**: `gpt-5.4`\n\n详细模型配置方式参考 [README.md 模型说明](../README.md#模型说明)\n\n### 3.Agent核心配置\n\nAgent模式的核心配置项如下，在 `config.json` 中配置：\n\n```bash\n{\n  \"agent\": true,                           # 是否启用Agent模式\n  \"agent_workspace\": \"~/cow\",              # Agent工作空间路径\n  \"agent_max_context_tokens\": 40000,       # 最大上下文tokens\n  \"agent_max_context_turns\": 30,           # 最大上下文记忆轮次\n  \"agent_max_steps\": 15                    # 单次任务最大决策步数\n}\n```\n\n**配置说明：**\n\n- `agent`: 设为 `true` 启用Agent模式，获得多轮工具决策、长期记忆、Skills等能力\n- `agent_workspace`: 工作空间路径，用于存储 memory、skills、其他系统设定提示词\n- `agent_max_context_tokens`: 上下文token上限，超出将自动丢弃最早的对话\n- `agent_max_context_turns`: 上下文记忆轮次，每轮包括一次提问和回复\n- `agent_max_steps`: 单次任务最大工具调用步数，防止无限循环\n\n\n### 4.渠道接入\n\nAgent支持在多种渠道中使用，只需修改 `config.json` 中的 `channel_type` 配置即可切换。\n\n- **Web网页**：默认使用该渠道，运行后监听本地端口，通过浏览器访问\n- **飞书接入**：[飞书接入文档](https://docs.link-ai.tech/cow/multi-platform/feishu)\n- **钉钉接入**：[钉钉接入文档](https://docs.link-ai.tech/cow/multi-platform/dingtalk)\n- **企业微信应用接入**：[企微应用文档](https://docs.link-ai.tech/cow/multi-platform/wechat-com)\n- **企微智能机器人**：[企微智能机器人文档](https://docs.link-ai.tech/cow/multi-platform/wecom-bot)\n- 
**QQ机器人**：[QQ机器人文档](https://docs.link-ai.tech/cow/multi-platform/qq)\n\n更多渠道配置参考：[通道说明](../README.md#通道说明)\n"
  },
  {
    "path": "docs/channels/dingtalk.mdx",
    "content": "---\ntitle: 钉钉\ndescription: 将 CowAgent 接入钉钉应用\n---\n\n通过钉钉开放平台创建智能机器人应用，将 CowAgent 接入钉钉。\n\n## 一、创建应用\n\n1. 进入 [钉钉开发者后台](https://open-dev.dingtalk.com/fe/app#/corp/app)，登录后点击 **创建应用**，填写应用相关信息：\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/dingtalk-create-app.png\" width=\"800\"/>\n\n2. 点击添加应用能力，选择 **机器人** 能力，点击 **添加**：\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/dingtalk-add-bot.png\" width=\"800\"/>\n\n3. 配置机器人信息后点击 **发布**。发布后，点击 \"**点击调试**\"，会自动创建测试群聊，可在客户端查看：\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/dingtalk-config-bot.png\" width=\"600\"/>\n\n4. 点击 **版本管理与发布**，创建新版本发布：\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/dingtalk-publish-bot.png\" width=\"700\"/>\n\n## 二、项目配置\n\n1. 点击 **凭证与基础信息**，获取 `Client ID` 和 `Client Secret`：\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/dingtalk-get-secret.png\" width=\"700\"/>\n\n2. 将以下配置加入项目根目录的 `config.json` 文件：\n\n```json\n{\n  \"channel_type\": \"dingtalk\",\n  \"dingtalk_client_id\": \"YOUR_CLIENT_ID\",\n  \"dingtalk_client_secret\": \"YOUR_CLIENT_SECRET\"\n}\n```\n\n3. 安装依赖：\n\n```bash\npip3 install dingtalk_stream\n```\n\n4. 启动项目后，在钉钉开发者后台点击 **事件订阅**，点击 **已完成接入，验证连接通道**，显示 **连接接入成功** 即表示配置完成：\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/dingtalk-event-sub.png\" width=\"700\"/>\n\n## 三、使用\n\n与机器人私聊或将机器人拉入企业群中均可开启对话：\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/dingtalk-hosting-demo.png\" width=\"650\"/>\n"
  },
  {
    "path": "docs/channels/feishu.mdx",
    "content": "---\ntitle: 飞书\ndescription: 将 CowAgent 接入飞书应用\n---\n\n通过自建应用将 CowAgent 接入飞书，需要是飞书企业用户且具有企业管理权限。\n\n## 一、创建企业自建应用\n\n### 1. 创建应用\n\n进入 [飞书开发平台](https://open.feishu.cn/app/)，点击 **创建企业自建应用**，填写必要信息后点击 **创建**：\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/feishu-hosting-create-app.jpg\" width=\"500\"/>\n\n### 2. 添加机器人能力\n\n在 **添加应用能力** 菜单中，为应用添加 **机器人** 能力：\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/feishu-hosting-add-bot.jpg\" width=\"800\"/>\n\n### 3. 配置应用权限\n\n点击 **权限管理**，复制以下权限配置，粘贴到 **权限配置** 下方的输入框内，全选筛选出来的权限，点击 **批量开通** 并确认：\n\n```\nim:message,im:message.group_at_msg,im:message.group_at_msg:readonly,im:message.p2p_msg,im:message.p2p_msg:readonly,im:message:send_as_bot,im:resource\n```\n\n<img src=\"https://cdn.link-ai.tech/doc/feishu-hosting-add-auth2.png\" width=\"800\"/>\n\n## 二、项目配置\n\n1. 在 **凭证与基础信息** 中获取 `App ID` 和 `App Secret`：\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/feishu-hosting-appid-secret.jpg\" width=\"800\"/>\n\n2. 将以下配置加入项目根目录的 `config.json` 文件：\n\n```json\n{\n  \"channel_type\": \"feishu\",\n  \"feishu_app_id\": \"YOUR_APP_ID\",\n  \"feishu_app_secret\": \"YOUR_APP_SECRET\",\n  \"feishu_bot_name\": \"YOUR_BOT_NAME\"\n}\n```\n\n| 参数 | 说明 |\n| --- | --- |\n| `feishu_app_id` | 飞书机器人应用 App ID |\n| `feishu_app_secret` | 飞书机器人 App Secret |\n| `feishu_bot_name` | 飞书机器人名称（创建应用时设置），群聊中使用依赖此配置 |\n\n配置完成后启动项目。\n\n## 三、配置事件订阅\n\n1. 成功运行项目后，在飞书开放平台点击 **事件与回调**，选择 **长连接** 方式，点击保存：\n\n<img src=\"https://cdn.link-ai.tech/doc/202601311731183.png\" width=\"600\"/>\n\n2. 点击下方的 **添加事件**，搜索 \"接收消息\"，选择 \"**接收消息v2.0**\"，确认添加。\n\n3. 点击 **版本管理与发布**，创建版本并申请 **线上发布**，在飞书客户端查看审批消息并审核通过：\n\n<img src=\"https://cdn.link-ai.tech/doc/202601311807356.png\" width=\"600\"/>\n\n完成后在飞书中搜索机器人名称，即可开始对话。\n"
  },
  {
    "path": "docs/channels/qq.mdx",
    "content": "---\ntitle: QQ 机器人\ndescription: 将 CowAgent 接入 QQ 机器人（WebSocket 长连接模式）\n---\n\n> 通过 QQ 开放平台的机器人接口接入 CowAgent，支持 QQ 单聊、QQ 群聊（@机器人）、频道消息和频道私信，无需公网 IP，使用 WebSocket 长连接模式。\n\n<Note>\n  QQ 机器人通过 QQ 开放平台创建，使用 WebSocket 长连接接收消息，通过 OpenAPI 发送消息，无需公网 IP 和域名。\n</Note>\n\n## 一、创建 QQ 机器人\n\n> 进入[QQ 开放平台](https://q.qq.com)，QQ扫码登录，如果未注册开放平台账号，请先完成[账号注册](https://q.qq.com/#/register)。\n\n1.在 [QQ开放平台-机器人列表页](https://q.qq.com/#/apps)，点击创建机器人:\n\n<img src=\"https://cdn.link-ai.tech/doc/20260317162900.png\" width=\"800\"/>\n\n2.填写机器人名称、头像等基本信息，完成创建：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260317163005.png\" width=\"800\"/>\n\n3.点击进入机器人配置页面，选择**开发管理**菜单，完成以下步骤：\n\n  - 复制并记录 **AppID**（机器人ID）\n  - 生成并记录 **AppSecret**（机器人秘钥）\n\n<img src=\"https://cdn.link-ai.tech/doc/20260317164955.png\" width=\"800\"/>\n\n## 二、配置和运行\n\n### 方式一：Web 控制台接入\n\n启动 Cow项目后打开 Web 控制台 (本地链接为: http://127.0.0.1:9899/ )，选择 **通道** 菜单，点击 **接入通道**，选择 **QQ 机器人**，填写上一步保存的 AppID 和 AppSecret，点击接入即可。\n\n<img src=\"https://cdn.link-ai.tech/doc/20260317165425.png\" width=\"800\"/>\n\n### 方式二：配置文件接入\n\n在 `config.json` 中添加以下配置：\n\n```json\n{\n  \"channel_type\": \"qq\",\n  \"qq_app_id\": \"YOUR_APP_ID\",\n  \"qq_app_secret\": \"YOUR_APP_SECRET\"\n}\n```\n\n| 参数 | 说明 |\n| --- | --- |\n| `qq_app_id` | QQ 机器人的 AppID，在开放平台开发管理中获取 |\n| `qq_app_secret` | QQ 机器人的 AppSecret，在开放平台开发管理中获取 |\n\n配置完成后启动程序，日志显示 `[QQ] ✅ Connected successfully` 即表示连接成功。\n\n\n## 三、使用\n\n在 QQ开放平台 - 管理 - **使用范围和人员** 菜单中，使用QQ客户端扫描 \"添加到群和消息列表\" 的二维码，即可开始与QQ机器人的聊天：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260317165947.png\" width=\"800\"/>\n\n对话效果：\n<img src=\"https://cdn.link-ai.tech/doc/20260317171508.png\" width=\"800\"/>\n\n## 四、功能说明\n\n> 注意：若需在群聊及频道中使用QQ机器人，需完成发布上架审核并在使用范围配置权限使用范围。\n\n| 功能 | 支持情况 |\n| --- | --- |\n| QQ 单聊 | ✅ |\n| QQ 群聊（@机器人） | ✅ |\n| 频道消息（@机器人） | ✅ |\n| 频道私信 | ✅ |\n| 文本消息 | ✅ 收发 |\n| 图片消息 | ✅ 收发（群聊和单聊） |\n| 文件消息 | ✅ 发送（群聊和单聊） |\n| 定时任务 | ✅ 主动推送（每月每用户限 4 条） |\n\n\n## 五、注意事项\n\n- **被动消息限制**：QQ 单聊被动消息有效期为 60 分钟，每条消息最多回复 5 次；QQ 群聊被动消息有效期为 5 分钟。\n- **主动消息限制**：单聊和群聊每月主动消息上限为 4 条，在使用定时任务功能时需要注意这个限制\n- **事件权限**：默认订阅 `GROUP_AND_C2C_EVENT`（QQ群/单聊）和 `PUBLIC_GUILD_MESSAGES`（频道公域消息），如需其他事件类型请在开放平台申请权限。\n"
  },
  {
    "path": "docs/channels/web.mdx",
    "content": "---\ntitle: Web 控制台\ndescription: 通过 Web 控制台使用 CowAgent\n---\n\nWeb 控制台是 CowAgent 的默认通道，启动后会自动运行，通过浏览器即可与 Agent 对话，并支持在线管理模型、技能、记忆、通道等配置。\n\n## 配置\n\n```json\n{\n  \"channel_type\": \"web\",\n  \"web_port\": 9899\n}\n```\n\n| 参数 | 说明 | 默认值 |\n| --- | --- | --- |\n| `channel_type` | 设为 `web` | `web` |\n| `web_port` | Web 服务监听端口 | `9899` |\n\n## 访问地址\n\n启动项目后访问：\n\n- 本地运行：`http://localhost:9899`\n- 服务器运行：`http://<server-ip>:9899`\n\n<Note>\n  请确保服务器防火墙和安全组已放行对应端口。\n</Note>\n\n## 功能介绍\n\n### 对话界面\n\n支持流式输出，可实时展示 Agent 的思考过程（Reasoning）和工具调用过程（Tool Calls），更直观地观察 Agent 的决策过程：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227180120.png\" />\n\n### 模型管理\n\n支持在线管理模型配置，无需手动编辑配置文件：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173811.png\" />\n\n### 技能管理\n\n支持在线查看和管理 Agent 技能（Skills）：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173403.png\" />\n\n### 记忆管理\n\n支持在线查看和管理 Agent 记忆：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173349.png\" />\n\n### 通道管理\n\n支持在线管理接入通道，支持实时连接/断开操作：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173331.png\" />\n\n### 定时任务\n\n支持在线查看和管理定时任务，包括一次性任务、固定间隔、Cron 表达式等多种调度方式的可视化管理：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173704.png\" />\n\n### 日志\n\n支持在线实时查看 Agent 运行日志，便于监控运行状态和排查问题：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173514.png\" />\n"
  },
  {
    "path": "docs/channels/wechatmp.mdx",
    "content": "---\ntitle: 微信公众号\ndescription: 将 CowAgent 接入微信公众号\n---\n\nCowAgent 支持接入个人订阅号和企业服务号两种公众号类型。\n\n| 类型 | 要求 | 特点 |\n| --- | --- | --- |\n| **个人订阅号** | 个人可申请 | 收到消息时会回复一条提示，回复生成后需用户主动发消息获取 |\n| **企业服务号** | 企业申请，需通过微信认证开通客服接口 | 回复生成后可主动推送给用户 |\n\n<Note>\n  公众号仅支持服务器和 Docker 部署，不支持本地运行。需额外安装扩展依赖：`pip3 install -r requirements-optional.txt`\n</Note>\n\n## 一、个人订阅号\n\n在 `config.json` 中添加以下配置：\n\n```json\n{\n  \"channel_type\": \"wechatmp\",\n  \"single_chat_prefix\": [\"\"],\n  \"wechatmp_app_id\": \"wx73f9******d1e48\",\n  \"wechatmp_app_secret\": \"YOUR_APP_SECRET\",\n  \"wechatmp_aes_key\": \"\",\n  \"wechatmp_token\": \"YOUR_TOKEN\",\n  \"wechatmp_port\": 80\n}\n```\n\n### 配置步骤\n\n这些配置需要和 [微信公众号后台](https://mp.weixin.qq.com/advanced/advanced?action=dev&t=advanced/dev) 中的保持一致，进入页面后，在左侧菜单选择 **设置与开发 → 基本配置 → 服务器配置**，按下图进行配置：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260228103506.png\" width=\"480\"/>\n\n1. 在公众平台启用开发者密码（对应配置 `wechatmp_app_secret`），并将服务器 IP 填入白名单\n2. 按上图填写 `config.json` 中与公众号相关的配置，要与公众号后台的配置一致\n3. 启动程序，启动后会监听 80 端口（若无权限监听，则在启动命令前加上 `sudo`；若 80 端口已被占用，则关闭该占用进程）\n4. 在公众号后台 **启用服务器配置** 并提交，保存成功则表示已成功配置。注意 **\"服务器地址(URL)\"** 需要配置为 `http://{HOST}/wx` 的格式，其中 `{HOST}` 可以是服务器的 IP 或域名\n\n随后关注公众号并发送消息即可看到以下效果：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260228103522.png\" width=\"720\"/>\n\n由于受订阅号限制，回复内容较短的情况下（15s 内），可以立即完成回复，但耗时较长的回复则会先回复一句 \"正在思考中\"，后续需要用户输入任意文字主动获取答案，而服务号则可以通过客服接口解决这一问题。\n\n<Tip>\n  **语音识别**：可利用微信自带的语音识别功能，需要在公众号管理页面的 \"设置与开发 → 接口权限\" 页面开启 \"接收语音识别结果\"。\n</Tip>\n\n## 二、企业服务号\n\n企业服务号与上述个人订阅号的接入过程基本相同，差异如下：\n\n1. 在公众平台申请企业服务号并完成微信认证，在接口权限中确认已获得 **客服接口** 的权限\n2. 在 `config.json` 中设置 `\"channel_type\": \"wechatmp_service\"`，其他配置与上述订阅号相同\n3. 交互效果上，即使是较长耗时的回复，也可以主动推送给用户，无需用户手动获取\n\n```json\n{\n  \"channel_type\": \"wechatmp_service\",\n  \"single_chat_prefix\": [\"\"],\n  \"wechatmp_app_id\": \"YOUR_APP_ID\",\n  \"wechatmp_app_secret\": \"YOUR_APP_SECRET\",\n  \"wechatmp_aes_key\": \"\",\n  \"wechatmp_token\": \"YOUR_TOKEN\",\n  \"wechatmp_port\": 80\n}\n```\n"
  },
  {
    "path": "docs/channels/wecom-bot.mdx",
    "content": "---\ntitle: 企微智能机器人\ndescription: 将 CowAgent 接入企业微信智能机器人（长连接模式）\n---\n\n> 通过企业微信智能机器人接入CowAgent，支持企业内部单聊和内部群聊，无需公网 IP，使用 WebSocket 长连接模式，支持Markdown渲染和流式输出。\n\n<Note>\n  智能机器人与企业微信自建应用是两种不同的接入方式。智能机器人使用 WebSocket 长连接，无需服务器公网 IP 和域名，配置更简单。\n</Note>\n\n## 一、创建智能机器人\n\n1. 打开企业微信客户端，进入工作台，点击**智能机器人**：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260316180959.png\" width=\"800\"/>\n\n2. 点击创建机器人 - 手动创建：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260316181118.png\" width=\"800\"/>\n\n3. 右侧窗口拖到最下方，选择**API模式创建**：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260316181215.png\" width=\"800\"/>\n\n4. 设置机器人名称、头像、可见范围，并选择**长连接模式**，记录下 **Bot ID** 和 **Secret** 信息后点击保存。\n\n## 二、配置和运行\n\n### 方式一：Web 控制台接入\n\n启动Cow项目后打开 Web 控制台 (本地链接为: http://127.0.0.1:9899/ )，选择 **通道** 菜单，点击 **接入通道**，选择 **企微智能机器人**，填写上一步保存的 Bot ID 和 Secret，点击接入即可。\n\n<img src=\"https://cdn.link-ai.tech/doc/20260316181711.png\" width=\"800\"/>\n\n### 方式二：配置文件接入\n\n在 `config.json` 中添加以下配置：\n\n```json\n{\n  \"channel_type\": \"wecom_bot\",\n  \"wecom_bot_id\": \"YOUR_BOT_ID\",\n  \"wecom_bot_secret\": \"YOUR_SECRET\"\n}\n```\n\n| 参数 | 说明 |\n| --- | --- |\n| `wecom_bot_id` | 智能机器人的 BotID |\n| `wecom_bot_secret` | 智能机器人的 Secret |\n\n配置完成后启动程序，日志显示 `[WecomBot] Subscribe success` 即表示连接成功。\n\n## 三、功能说明\n\n| 功能 | 支持情况 |\n| --- | --- |\n| 单聊 | ✅ |\n| 群聊（@机器人） | ✅ |\n| 文本消息 | ✅ 收发 |\n| 图片消息 | ✅ 收发 |\n| 文件消息 | ✅ 收发 |\n| 流式回复 | ✅ |\n| 定时任务主动推送 | ✅ |\n\n## 四、使用\n\n在企业微信中搜索创建的机器人名称，即可开始单聊对话。\n\n如需在企微内部群聊中使用，将机器人添加到群中，@机器人发送消息即可。\n\n<img src=\"https://cdn.link-ai.tech/doc/20260316182902.png\" width=\"800\"/>\n"
  },
  {
    "path": "docs/channels/wecom.mdx",
    "content": "---\ntitle: 企微自建应用\ndescription: 将 CowAgent 接入企业微信自建应用\n---\n\n通过企业微信自建应用接入 CowAgent，支持企业内部人员单聊使用。\n\n<Note>\n  企业微信只能使用 Docker 部署或服务器 Python 部署，不支持本地运行模式。\n</Note>\n\n## 一、准备\n\n需要的资源：\n\n1. 一台服务器（有公网 IP）\n2. 注册一个企业微信（个人也可注册，但无法认证）\n3. 认证企业微信还需要对应主体备案的域名\n\n## 二、创建企业微信应用\n\n1. 在 [企业微信管理后台](https://work.weixin.qq.com/wework_admin/frame#profile) 点击 **我的企业**，在最下方获取 **企业ID**（后续填写到 `wechatcom_corp_id` 字段中）。\n\n2. 切换到 **应用管理**，点击创建应用：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260228103156.png\" width=\"480\"/>\n\n3. 进入应用创建页面，记录 `AgentId` 和 `Secret`：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260228103218.png\" width=\"580\"/>\n\n4. 点击 **设置API接收**，配置应用接口：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260228103211.png\" width=\"520\"/>\n\n- URL 格式为 `http://ip:port/wxcomapp`（认证企业需使用备案域名）\n- 随机获取 `Token` 和 `EncodingAESKey` 并保存\n\n<Note>\n  此时保存 API 接收配置会失败，因为程序还未启动，等项目运行后再回来保存。\n</Note>\n\n## 三、配置和运行\n\n在 `config.json` 中添加以下配置（各参数与企业微信后台的对应关系见上方截图）：\n\n```json\n{\n  \"channel_type\": \"wechatcom_app\",\n  \"single_chat_prefix\": [\"\"],\n  \"wechatcom_corp_id\": \"YOUR_CORP_ID\",\n  \"wechatcomapp_token\": \"YOUR_TOKEN\",\n  \"wechatcomapp_secret\": \"YOUR_SECRET\",\n  \"wechatcomapp_agent_id\": \"YOUR_AGENT_ID\",\n  \"wechatcomapp_aes_key\": \"YOUR_AES_KEY\",\n  \"wechatcomapp_port\": 9898\n}\n```\n\n| 参数 | 说明 |\n| --- | --- |\n| `wechatcom_corp_id` | 企业 ID |\n| `wechatcomapp_token` | API 接收配置中的 Token |\n| `wechatcomapp_secret` | 应用的 Secret |\n| `wechatcomapp_agent_id` | 应用的 AgentId |\n| `wechatcomapp_aes_key` | API 接收配置中的 EncodingAESKey |\n| `wechatcomapp_port` | 监听端口，默认 9898 |\n\n配置完成后启动程序。当后台日志显示 `http://0.0.0.0:9898/` 时说明程序运行成功，需要将该端口对外开放（如在云服务器安全组中放行）。\n\n程序启动后，回到企业微信后台保存 **消息服务器配置**，保存成功后还需将服务器 IP 添加到 **企业可信IP** 中，否则无法收发消息：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260228103224.png\" width=\"520\"/>\n\n<Warning>\n  如遇到 URL 配置回调不通过或配置失败：\n  1. 确保服务器防火墙关闭且安全组放行监听端口\n  2. 仔细检查 Token、Secret Key 等参数配置是否一致，URL 格式是否正确\n  3. 认证企业微信需要配置与主体一致的备案域名\n</Warning>\n\n## 四、使用\n\n在企业微信中搜索刚创建的应用名称，即可直接对话：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260228103228.png\" width=\"720\"/>\n\n如需让外部个人微信用户使用，可在 **我的企业 → 微信插件** 中分享邀请关注二维码，个人微信扫码关注后即可与应用对话：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260228103232.png\" width=\"520\"/>\n\n## 常见问题\n\n需要确保已安装以下依赖：\n\n```bash\npip install websocket-client pycryptodome\n```\n"
  },
  {
    "path": "docs/docs.json",
    "content": "{\n  \"$schema\": \"https://mintlify.com/docs.json\",\n  \"name\": \"CowAgent\",\n  \"description\": \"CowAgent - AI Super Assistant powered by LLMs, with autonomous task planning, long-term memory, skills system, and multi-channel deployment.\",\n  \"theme\": \"mint\",\n  \"appearance\": {\n    \"default\": \"light\"\n  },\n  \"colors\": {\n    \"primary\": \"#35A85B\",\n    \"light\": \"#4ABE6E\",\n    \"dark\": \"#228547\"\n  },\n  \"logo\": {\n    \"light\": \"/images/logo.jpg\",\n    \"dark\": \"/images/logo.jpg\"\n  },\n  \"favicon\": \"/images/favicon.ico\",\n  \"navbar\": {\n    \"links\": [\n      {\n        \"label\": \"官网\",\n        \"href\": \"https://cowagent.ai/\"\n      },\n      {\n        \"label\": \"GitHub\",\n        \"href\": \"https://github.com/zhayujie/chatgpt-on-wechat\"\n      }\n    ]\n  },\n  \"footer\": {\n    \"socials\": {\n      \"github\": \"https://github.com/zhayujie/chatgpt-on-wechat\"\n    }\n  },\n  \"navigation\": {\n    \"languages\": [\n      {\n        \"language\": \"zh\",\n        \"default\": true,\n        \"tabs\": [\n          {\n            \"tab\": \"项目介绍\",\n            \"groups\": [\n              {\n                \"group\": \"概览\",\n                \"pages\": [\n                  \"intro/index\",\n                  \"intro/architecture\",\n                  \"intro/features\"\n                ]\n              }\n            ]\n          },\n          {\n            \"tab\": \"快速开始\",\n            \"groups\": [\n              {\n                \"group\": \"安装部署\",\n                \"pages\": [\n                  \"guide/quick-start\",\n                  \"guide/manual-install\",\n                  \"guide/upgrade\"\n                ]\n              }\n            ]\n          },\n          {\n            \"tab\": \"模型\",\n            \"groups\": [\n              {\n                \"group\": \"模型配置\",\n                \"pages\": [\n                  \"models/index\",\n                  \"models/minimax\",\n                  \"models/glm\",\n                  \"models/qwen\",\n                  \"models/kimi\",\n                  \"models/doubao\",\n                  \"models/claude\",\n                  \"models/gemini\",\n                  \"models/openai\",\n                  \"models/deepseek\",\n                  \"models/linkai\",\n                  \"models/coding-plan\"\n                ]\n              }\n            ]\n          },\n          {\n            \"tab\": \"工具\",\n            \"groups\": [\n              {\n                \"group\": \"工具系统\",\n                \"pages\": [\n                  \"tools/index\"\n                ]\n              },\n              {\n                \"group\": \"内置工具\",\n                \"pages\": [\n                  \"tools/read\",\n                  \"tools/write\",\n                  \"tools/edit\",\n                  \"tools/ls\",\n                  \"tools/bash\",\n                  \"tools/send\",\n                  \"tools/memory\",\n                  \"tools/env-config\"\n                ]\n              },\n              {\n                \"group\": \"可选工具\",\n                \"pages\": [\n                  \"tools/web-search\",\n                  \"tools/scheduler\"\n                ]\n              }\n            ]\n          },\n          {\n            \"tab\": \"技能\",\n            \"groups\": [\n              {\n                \"group\": \"技能系统\",\n                \"pages\": [\n                  \"skills/index\",\n                  
\"skills/skill-creator\"\n                ]\n              },\n              {\n                \"group\": \"内置技能\",\n                \"pages\": [\n                  \"skills/image-vision\",\n                  \"skills/linkai-agent\",\n                  \"skills/web-fetch\"\n                ]\n              }\n            ]\n          },\n          {\n            \"tab\": \"记忆\",\n            \"groups\": [\n              {\n                \"group\": \"记忆系统\",\n                \"pages\": [\n                  \"memory\"\n                ]\n              }\n            ]\n          },\n          {\n            \"tab\": \"通道\",\n            \"groups\": [\n              {\n                \"group\": \"接入渠道\",\n                \"pages\": [\n                  \"channels/web\",\n                  \"channels/feishu\",\n                  \"channels/dingtalk\",\n                  \"channels/wecom-bot\",\n                  \"channels/qq\",\n                  \"channels/wecom\",\n                  \"channels/wechatmp\"\n                ]\n              }\n            ]\n          },\n          {\n            \"tab\": \"版本\",\n            \"groups\": [\n              {\n                \"group\": \"发布记录\",\n                \"pages\": [\n                  \"releases/overview\",\n                  \"releases/v2.0.3\",\n                  \"releases/v2.0.2\",\n                  \"releases/v2.0.1\",\n                  \"releases/v2.0.0\"\n                ]\n              }\n            ]\n          }\n        ]\n      },\n      {\n        \"language\": \"en\",\n        \"tabs\": [\n          {\n            \"tab\": \"Introduction\",\n            \"groups\": [\n              {\n                \"group\": \"Overview\",\n                \"pages\": [\n                  \"en/intro/index\",\n                  \"en/intro/architecture\",\n                  \"en/intro/features\"\n                ]\n              }\n            ]\n          },\n          {\n            \"tab\": \"Get Started\",\n            \"groups\": [\n              {\n                \"group\": \"Installation\",\n                \"pages\": [\n                  \"en/guide/quick-start\",\n                  \"en/guide/manual-install\"\n                ]\n              }\n            ]\n          },\n          {\n            \"tab\": \"Models\",\n            \"groups\": [\n              {\n                \"group\": \"Model Configuration\",\n                \"pages\": [\n                  \"en/models/index\",\n                  \"en/models/minimax\",\n                  \"en/models/glm\",\n                  \"en/models/qwen\",\n                  \"en/models/kimi\",\n                  \"en/models/doubao\",\n                  \"en/models/claude\",\n                  \"en/models/gemini\",\n                  \"en/models/openai\",\n                  \"en/models/deepseek\",\n                  \"en/models/linkai\",\n                  \"en/models/coding-plan\"\n                ]\n              }\n            ]\n          },\n          {\n            \"tab\": \"Tools\",\n            \"groups\": [\n              {\n                \"group\": \"Tools System\",\n                \"pages\": [\n                  \"en/tools/index\"\n                ]\n              },\n              {\n                \"group\": \"Built-in Tools\",\n                \"pages\": [\n                  \"en/tools/read\",\n                  \"en/tools/write\",\n                  \"en/tools/edit\",\n                  \"en/tools/ls\",\n                  \"en/tools/bash\",\n                  
\"en/tools/send\",\n                  \"en/tools/memory\",\n                  \"en/tools/env-config\"\n                ]\n              },\n              {\n                \"group\": \"Optional Tools\",\n                \"pages\": [\n                  \"en/tools/web-search\",\n                  \"en/tools/scheduler\"\n                ]\n              }\n            ]\n          },\n          {\n            \"tab\": \"Skills\",\n            \"groups\": [\n              {\n                \"group\": \"Skills System\",\n                \"pages\": [\n                  \"en/skills/index\",\n                  \"en/skills/skill-creator\"\n                ]\n              },\n              {\n                \"group\": \"Built-in Skills\",\n                \"pages\": [\n                  \"en/skills/image-vision\",\n                  \"en/skills/linkai-agent\",\n                  \"en/skills/web-fetch\"\n                ]\n              }\n            ]\n          },\n          {\n            \"tab\": \"Memory\",\n            \"groups\": [\n              {\n                \"group\": \"Memory System\",\n                \"pages\": [\n                  \"en/memory\"\n                ]\n              }\n            ]\n          },\n          {\n            \"tab\": \"Channels\",\n            \"groups\": [\n              {\n                \"group\": \"Platforms\",\n                \"pages\": [\n                  \"en/channels/web\",\n                  \"en/channels/feishu\",\n                  \"en/channels/dingtalk\",\n                  \"en/channels/wecom-bot\",\n                  \"en/channels/qq\",\n                  \"en/channels/wecom\",\n                  \"en/channels/wechatmp\"\n                ]\n              }\n            ]\n          },\n          {\n            \"tab\": \"Releases\",\n            \"groups\": [\n              {\n                \"group\": \"Release Notes\",\n                \"pages\": [\n                  \"en/releases/overview\",\n                  \"en/releases/v2.0.2\",\n                  \"en/releases/v2.0.1\",\n                  \"en/releases/v2.0.0\"\n                ]\n              }\n            ]\n          }\n        ]\n      },\n      {\n        \"language\": \"ja\",\n        \"tabs\": [\n          {\n            \"tab\": \"紹介\",\n            \"groups\": [\n              {\n                \"group\": \"概要\",\n                \"pages\": [\n                  \"ja/intro/index\",\n                  \"ja/intro/architecture\",\n                  \"ja/intro/features\"\n                ]\n              }\n            ]\n          },\n          {\n            \"tab\": \"クイックスタート\",\n            \"groups\": [\n              {\n                \"group\": \"インストール\",\n                \"pages\": [\n                  \"ja/guide/quick-start\",\n                  \"ja/guide/manual-install\",\n                  \"ja/guide/upgrade\"\n                ]\n              }\n            ]\n          },\n          {\n            \"tab\": \"モデル\",\n            \"groups\": [\n              {\n                \"group\": \"モデル設定\",\n                \"pages\": [\n                  \"ja/models/index\",\n                  \"ja/models/minimax\",\n                  \"ja/models/glm\",\n                  \"ja/models/qwen\",\n                  \"ja/models/kimi\",\n                  \"ja/models/doubao\",\n                  \"ja/models/claude\",\n                  \"ja/models/gemini\",\n                  \"ja/models/openai\",\n                  \"ja/models/deepseek\",\n                  
\"ja/models/linkai\",\n                  \"ja/models/coding-plan\"\n                ]\n              }\n            ]\n          },\n          {\n            \"tab\": \"ツール\",\n            \"groups\": [\n              {\n                \"group\": \"ツールシステム\",\n                \"pages\": [\n                  \"ja/tools/index\"\n                ]\n              },\n              {\n                \"group\": \"内蔵ツール\",\n                \"pages\": [\n                  \"ja/tools/read\",\n                  \"ja/tools/write\",\n                  \"ja/tools/edit\",\n                  \"ja/tools/ls\",\n                  \"ja/tools/bash\",\n                  \"ja/tools/send\",\n                  \"ja/tools/memory\",\n                  \"ja/tools/env-config\",\n                  \"ja/tools/browser\"\n                ]\n              },\n              {\n                \"group\": \"オプションツール\",\n                \"pages\": [\n                  \"ja/tools/web-search\",\n                  \"ja/tools/scheduler\"\n                ]\n              }\n            ]\n          },\n          {\n            \"tab\": \"スキル\",\n            \"groups\": [\n              {\n                \"group\": \"スキルシステム\",\n                \"pages\": [\n                  \"ja/skills/index\",\n                  \"ja/skills/skill-creator\"\n                ]\n              },\n              {\n                \"group\": \"内蔵スキル\",\n                \"pages\": [\n                  \"ja/skills/image-vision\",\n                  \"ja/skills/linkai-agent\",\n                  \"ja/skills/web-fetch\"\n                ]\n              }\n            ]\n          },\n          {\n            \"tab\": \"メモリ\",\n            \"groups\": [\n              {\n                \"group\": \"メモリシステム\",\n                \"pages\": [\n                  \"ja/memory\"\n                ]\n              }\n            ]\n          },\n          {\n            \"tab\": \"チャネル\",\n            \"groups\": [\n              {\n                \"group\": \"プラットフォーム\",\n                \"pages\": [\n                  \"ja/channels/web\",\n                  \"ja/channels/feishu\",\n                  \"ja/channels/dingtalk\",\n                  \"ja/channels/wecom-bot\",\n                  \"ja/channels/qq\",\n                  \"ja/channels/wecom\",\n                  \"ja/channels/wechatmp\"\n                ]\n              }\n            ]\n          },\n          {\n            \"tab\": \"リリース\",\n            \"groups\": [\n              {\n                \"group\": \"リリースノート\",\n                \"pages\": [\n                  \"ja/releases/overview\",\n                  \"ja/releases/v2.0.3\",\n                  \"ja/releases/v2.0.2\",\n                  \"ja/releases/v2.0.1\",\n                  \"ja/releases/v2.0.0\"\n                ]\n              }\n            ]\n          }\n        ]\n      }\n    ]\n  }\n}\n"
  },
  {
    "path": "docs/en/README.md",
    "content": "<p align=\"center\"><img src=\"https://github.com/user-attachments/assets/eca9a9ec-8534-4615-9e0f-96c5ac1d10a3\" alt=\"CowAgent\" width=\"550\" /></p>\n\n<p align=\"center\">\n  <a href=\"https://github.com/zhayujie/chatgpt-on-wechat/releases/latest\"><img src=\"https://img.shields.io/github/v/release/zhayujie/chatgpt-on-wechat\" alt=\"Latest release\"></a>\n  <a href=\"https://github.com/zhayujie/chatgpt-on-wechat/blob/master/LICENSE\"><img src=\"https://img.shields.io/github/license/zhayujie/chatgpt-on-wechat\" alt=\"License: MIT\"></a>\n  <a href=\"https://github.com/zhayujie/chatgpt-on-wechat\"><img src=\"https://img.shields.io/github/stars/zhayujie/chatgpt-on-wechat?style=flat-square\" alt=\"Stars\"></a> <br/>\n  [<a href=\"https://github.com/zhayujie/chatgpt-on-wechat/blob/master/README.md\">中文</a>] | [English] | [<a href=\"https://github.com/zhayujie/chatgpt-on-wechat/blob/master/docs/ja/README.md\">日本語</a>]\n</p>\n\n**CowAgent** is an AI super assistant powered by LLMs, capable of autonomous task planning, operating computers and external resources, creating and executing Skills, and continuously growing with long-term memory. It supports flexible model switching, handles text, voice, images, and files, and can be integrated into Web, Feishu, DingTalk, WeCom Bot, WeCom App, and WeChat Official Account — running 7×24 hours on your personal computer or server.\n\n<p align=\"center\">\n  <a href=\"https://cowagent.ai/\">🌐 Website</a> &nbsp;·&nbsp;\n  <a href=\"https://docs.cowagent.ai/en/intro/index\">📖 Docs</a> &nbsp;·&nbsp;\n  <a href=\"https://docs.cowagent.ai/en/guide/quick-start\">🚀 Quick Start</a> &nbsp;·&nbsp;\n  <a href=\"https://link-ai.tech/cowagent/create\">☁️ Try Online</a>\n</p>\n\n## Introduction\n\n> CowAgent is both an out-of-the-box AI super assistant and a highly extensible Agent framework. You can extend it with new model interfaces, channels, built-in tools, and the Skills system to flexibly implement various customization needs.\n\n- ✅ **Autonomous Task Planning**: Understands complex tasks and autonomously plans execution, continuously thinking and invoking tools until goals are achieved. Supports accessing files, terminal, browser, schedulers, and other system resources via tools.\n- ✅ **Long-term Memory**: Automatically persists conversation memory to local files and databases, including core memory and daily memory, with keyword and vector retrieval support.\n- ✅ **Skills System**: Implements a Skills creation and execution engine with multiple built-in skills, and supports custom Skills development through natural language conversation.\n- ✅ **Multimodal Messages**: Supports parsing, processing, generating, and sending text, images, voice, files, and other message types.\n- ✅ **Multiple Model Support**: Supports OpenAI, Claude, Gemini, DeepSeek, MiniMax, GLM, Qwen, Kimi, Doubao, and other mainstream model providers.\n- ✅ **Multi-platform Deployment**: Runs on local computers or servers, integrable into Web, Feishu, DingTalk, WeChat Official Account, and WeCom applications.\n- ✅ **Knowledge Base**: Integrates enterprise knowledge base capabilities via the [LinkAI](https://link-ai.tech) platform.\n\n## Disclaimer\n\n1. This project follows the [MIT License](/LICENSE) and is intended for technical research and learning. Users must comply with local laws, regulations, policies, and corporate bylaws. Any illegal or rights-infringing use is prohibited.\n2. Agent mode consumes more tokens than normal chat mode. 
Choose models based on effectiveness and cost. Agent has access to the host OS — please deploy in trusted environments.\n3. CowAgent focuses on open-source development and does not participate in, authorize, or issue any cryptocurrency.\n\n## Demo\n\nTry online (no deployment needed): [CowAgent](https://link-ai.tech/cowagent/create)\n\n## Changelog\n\n> **2026.02.27:** [v2.0.2](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/2.0.2) — Web console overhaul (streaming chat, model/skill/memory/channel/scheduler/log management), multi-channel concurrent running, session persistence, new models including Gemini 3.1 Pro / Claude 4.6 Sonnet / Qwen3.5 Plus.\n\n> **2026.02.13:** [v2.0.1](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/2.0.1) — Built-in Web Search tool, smart context trimming, runtime info dynamic update, Windows compatibility, fixes for scheduler memory loss, Feishu connection issues, and more.\n\n> **2026.02.03:** [v2.0.0](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/2.0.0) — Full upgrade to AI super assistant with multi-step task planning, long-term memory, built-in tools, Skills framework, new models, and optimized channels.\n\n> **2025.05.23:** [v1.7.6](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/1.7.6) — Web channel optimization, AgentMesh multi-agent plugin, Baidu TTS, claude-4-sonnet/opus support.\n\n> **2025.04.11:** [v1.7.5](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/1.7.5) — wechatferry protocol, DeepSeek model, Tencent Cloud voice, ModelScope and Gitee-AI support.\n\n> **2024.12.13:** [v1.7.4](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/1.7.4) — Gemini 2.0 model, Web channel, memory leak fix.\n\nFull changelog: [Release Notes](https://docs.cowagent.ai/en/releases/overview)\n\n<br/>\n\n## 🚀 Quick Start\n\nThe project provides a one-click script for installation, configuration, startup, and management:\n\n```bash\nbash <(curl -fsSL https://cdn.link-ai.tech/code/cow/run.sh)\n```\n\nAfter running, the Web service starts by default. Access `http://localhost:9899/chat` to chat.\n\nScript usage: [One-click Install](https://docs.cowagent.ai/en/guide/quick-start)\n\n### Manual Installation\n\n**1. Clone the project**\n\n```bash\ngit clone https://github.com/zhayujie/chatgpt-on-wechat\ncd chatgpt-on-wechat/\n```\n\n**2. Install dependencies**\n\n```bash\npip3 install -r requirements.txt\npip3 install -r requirements-optional.txt   # optional but recommended\n```\n\n**3. Configure**\n\n```bash\ncp config-template.json config.json\n```\n\nFill in your model API key and channel type in `config.json`. See the [configuration docs](https://docs.cowagent.ai/en/guide/manual-install) for details.\n\n**4. Run**\n\n```bash\npython3 app.py\n```\n\nFor server background run:\n\n```bash\nnohup python3 app.py & tail -f nohup.out\n```\n\n### Docker Deployment\n\n```bash\ncurl -O https://cdn.link-ai.tech/code/cow/docker-compose.yml\n# Edit docker-compose.yml with your config\nsudo docker compose up -d\nsudo docker logs -f chatgpt-on-wechat\n```\n\n<br/>\n\n## Models\n\nSupports mainstream model providers. 
Recommended models for Agent mode:\n\n| Provider | Recommended Model |\n| --- | --- |\n| MiniMax | `MiniMax-M2.7` |\n| GLM | `glm-5-turbo` |\n| Kimi | `kimi-k2.5` |\n| Doubao | `doubao-seed-2-0-code-preview-260215` |\n| Qwen | `qwen3.5-plus` |\n| Claude | `claude-sonnet-4-6` |\n| Gemini | `gemini-3.1-pro-preview` |\n| OpenAI | `gpt-5.4` |\n| DeepSeek | `deepseek-chat` |\n\nFor detailed configuration of each model, see the [Models documentation](https://docs.cowagent.ai/en/models/index).\n\n### Coding Plan\n\nCoding Plan is a monthly subscription package offered by various providers, ideal for high-frequency Agent usage. All providers can be accessed via OpenAI-compatible mode:\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"MODEL_NAME\",\n  \"open_ai_api_base\": \"PROVIDER_CODING_PLAN_API_BASE\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n- `bot_type`: Must be `openai`\n- `model`: Model name supported by the provider\n- `open_ai_api_base`: Provider's Coding Plan API Base (different from standard pay-as-you-go)\n- `open_ai_api_key`: Provider's Coding Plan API Key\n\n> Note: Coding Plan API Base and API Key are usually separate from standard pay-as-you-go ones. Please obtain them from each provider's platform.\n\nSupported providers include Alibaba Cloud, MiniMax, Zhipu GLM, Kimi, Volcengine, and more. For detailed configuration of each provider, see the [Coding Plan documentation](https://docs.cowagent.ai/en/models/coding-plan).\n\n<br/>\n\n## Channels\n\nSupports multiple platforms. Set `channel_type` in `config.json` to switch:\n\n| Channel | `channel_type` | Docs |\n| --- | --- | --- |\n| Web (default) | `web` | [Web Channel](https://docs.cowagent.ai/en/channels/web) |\n| Feishu | `feishu` | [Feishu Setup](https://docs.cowagent.ai/en/channels/feishu) |\n| DingTalk | `dingtalk` | [DingTalk Setup](https://docs.cowagent.ai/en/channels/dingtalk) |\n| WeCom Bot | `wecom_bot` | [WeCom Bot Setup](https://docs.cowagent.ai/en/channels/wecom-bot) |\n| WeCom App | `wechatcom_app` | [WeCom Setup](https://docs.cowagent.ai/en/channels/wecom) |\n| WeChat MP | `wechatmp` / `wechatmp_service` | [WeChat MP Setup](https://docs.cowagent.ai/en/channels/wechatmp) |\n| Terminal | `terminal` | — |\n\nMultiple channels can be enabled simultaneously, separated by commas: `\"channel_type\": \"feishu,dingtalk\"`.\n\n<br/>\n\n## Enterprise Services\n\n<a href=\"https://link-ai.tech\" target=\"_blank\"><img width=\"720\" src=\"https://cdn.link-ai.tech/image/link-ai-intro.jpg\"></a>\n\n> [LinkAI](https://link-ai.tech/) is a one-stop AI agent platform for enterprises and developers, integrating multimodal LLMs, knowledge bases, Agent plugins, and workflows. Supports one-click integration with mainstream platforms, SaaS and private deployment.\n\n<br/>\n\n## 🔗 Related Projects\n\n- [bot-on-anything](https://github.com/zhayujie/bot-on-anything): Lightweight and highly extensible LLM application framework supporting Slack, Telegram, Discord, Gmail, and more.\n- [AgentMesh](https://github.com/MinimalFuture/AgentMesh): Open-source Multi-Agent framework for complex problem solving through agent team collaboration.\n\n## 🔎 FAQ\n\nFAQs: <https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs>\n\n## 🛠️ Contributing\n\nWelcome to add new channels, referring to the [Feishu channel](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/channel/feishu/feishu_channel.py) as an example. 
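For orientation, a new channel has three responsibilities: connect to the platform, hand inbound messages to the agent core, and send replies back. Below is a minimal sketch assuming a base class along the lines of `channel/chat_channel.py`; the hook names are illustrative, not the project's exact API:\n\n```python\n# Hypothetical sketch: hook names are assumptions for illustration.\n# See channel/feishu/feishu_channel.py for the real contract.\nfrom channel.chat_channel import ChatChannel  # base-class name assumed\n\n\nclass MyPlatformChannel(ChatChannel):\n    def startup(self):\n        # Connect to the platform (webhook or long connection) and listen.\n        ...\n\n    def send(self, reply, context):\n        # Deliver the generated reply through the platform's API.\n        ...\n```\n\n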
Contributions of new Skills are also welcome; refer to the [Skill Creator docs](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/skills/skill-creator/SKILL.md).\n\n## ✉ Contact\n\nFeel free to submit PRs and Issues, and to support the project with a 🌟 Star. For questions, check the [FAQ list](https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs) or search [Issues](https://github.com/zhayujie/chatgpt-on-wechat/issues).\n\n## 🌟 Contributors\n\n![cow contributors](https://contrib.rocks/image?repo=zhayujie/chatgpt-on-wechat&max=1000)\n"
  },
  {
    "path": "docs/en/channels/dingtalk.mdx",
    "content": "---\ntitle: DingTalk\ndescription: Integrate CowAgent into DingTalk application\n---\n\nIntegrate CowAgent into DingTalk by creating an intelligent robot app on the DingTalk Open Platform.\n\n## 1. Create App\n\n1. Go to [DingTalk Developer Console](https://open-dev.dingtalk.com/fe/app#/corp/app), log in and click **Create App**, fill in the app information:\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/dingtalk-create-app.png\" width=\"800\"/>\n\n2. Click **Add App Capability**, select **Robot** capability and click **Add**:\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/dingtalk-add-bot.png\" width=\"800\"/>\n\n3. Configure the robot information and click **Publish**. After publishing, click \"**Debug**\" to automatically create a test group chat, which can be viewed in the client:\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/dingtalk-config-bot.png\" width=\"600\"/>\n\n4. Click **Version Management & Release**, create a new version and publish:\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/dingtalk-publish-bot.png\" width=\"700\"/>\n\n## 2. Project Configuration\n\n1. Click **Credentials & Basic Info**, get the `Client ID` and `Client Secret`:\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/dingtalk-get-secret.png\" width=\"700\"/>\n\n2. Add the following configuration to `config.json` in the project root:\n\n```json\n{\n  \"channel_type\": \"dingtalk\",\n  \"dingtalk_client_id\": \"YOUR_CLIENT_ID\",\n  \"dingtalk_client_secret\": \"YOUR_CLIENT_SECRET\"\n}\n```\n\n3. Install the dependency:\n\n```bash\npip3 install dingtalk_stream\n```\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/dingtalk-app-config.png\" width=\"700\"/>\n\n4. After starting the project, go to the DingTalk Developer Console, click **Event Subscription**, then click **Connection verified, verify channel**. When \"**Connection successful**\" is displayed, the configuration is complete:\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/dingtalk-event-sub.png\" width=\"700\"/>\n\n## 3. Usage\n\nChat privately with the robot or add it to an enterprise group to start a conversation:\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/dingtalk-hosting-demo.png\" width=\"650\"/>\n"
  },
  {
    "path": "docs/en/channels/feishu.mdx",
    "content": "---\ntitle: Feishu (Lark)\ndescription: Integrate CowAgent into Feishu application\n---\n\nIntegrate CowAgent into Feishu by creating a custom enterprise app. You need to be a Feishu enterprise user with admin privileges.\n\n## 1. Create Enterprise Custom App\n\n### 1.1 Create App\n\nGo to [Feishu Developer Platform](https://open.feishu.cn/app/), click **Create Enterprise Custom App**, fill in the required information and click **Create**:\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/feishu-hosting-create-app.jpg\" width=\"500\"/>\n\n### 1.2 Add Bot Capability\n\nIn **Add App Capabilities**, add **Bot** capability to the app:\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/feishu-hosting-add-bot.jpg\" width=\"800\"/>\n\n### 1.3 Configure App Permissions\n\nClick **Permission Management**, paste the following permission string into the input box below **Permission Configuration**, select all filtered permissions, click **Batch Enable** and confirm:\n\n```\nim:message,im:message.group_at_msg,im:message.group_at_msg:readonly,im:message.p2p_msg,im:message.p2p_msg:readonly,im:message:send_as_bot,im:resource\n```\n\n<img src=\"https://cdn.link-ai.tech/doc/feishu-hosting-add-auth2.png\" width=\"800\"/>\n\n## 2. Project Configuration\n\n1. Get `App ID` and `App Secret` from **Credentials & Basic Info**:\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/feishu-hosting-appid-secret.jpg\" width=\"800\"/>\n\n2. Add the following configuration to `config.json` in the project root:\n\n```json\n{\n  \"channel_type\": \"feishu\",\n  \"feishu_app_id\": \"YOUR_APP_ID\",\n  \"feishu_app_secret\": \"YOUR_APP_SECRET\",\n  \"feishu_bot_name\": \"YOUR_BOT_NAME\"\n}\n```\n\n| Parameter | Description |\n| --- | --- |\n| `feishu_app_id` | Feishu bot App ID |\n| `feishu_app_secret` | Feishu bot App Secret |\n| `feishu_bot_name` | Bot name (set when creating the app), required for group chat usage |\n\nStart the project after configuration is complete.\n\n## 3. Configure Event Subscription\n\n1. After the project is running successfully, go to the Feishu Developer Platform, click **Events & Callbacks**, select **Long Connection** mode, and click save:\n\n<img src=\"https://cdn.link-ai.tech/doc/202601311731183.png\" width=\"600\"/>\n\n2. Click **Add Event** below, search for \"Receive Message\", select \"**Receive Message v2.0**\", and confirm.\n\n3. Click **Version Management & Release**, create a new version and apply for **Production Release**. Check the approval message in the Feishu client and approve:\n\n<img src=\"https://cdn.link-ai.tech/doc/202601311807356.png\" width=\"600\"/>\n\nOnce completed, search for the bot name in Feishu to start chatting.\n"
  },
  {
    "path": "docs/en/channels/qq.mdx",
    "content": "---\ntitle: QQ Bot\ndescription: Connect CowAgent to QQ Bot (WebSocket long connection)\n---\n\n> Connect CowAgent via QQ Open Platform's bot API, supporting QQ direct messages, group chats (@bot), guild channel messages, and guild DMs. No public IP required — uses WebSocket long connection.\n\n<Note>\n  QQ Bot is created through the QQ Open Platform. It uses WebSocket long connection to receive messages and OpenAPI to send messages. No public IP or domain is required.\n</Note>\n\n## 1. Create a QQ Bot\n\n> Visit the [QQ Open Platform](https://q.qq.com), sign in with QQ. If you haven't registered, please complete [account registration](https://q.qq.com/#/register) first.\n\n1.Go to the [QQ Open Platform - Bot List](https://q.qq.com/#/apps), and click **Create Bot**:\n\n<img src=\"https://cdn.link-ai.tech/doc/20260317162900.png\" width=\"800\"/>\n\n2.Fill in the bot name, avatar, and other basic information to complete the creation:\n\n<img src=\"https://cdn.link-ai.tech/doc/20260317163005.png\" width=\"800\"/>\n\n3.Enter the bot configuration page, go to **Development Management**, and complete the following steps:\n\n  - Copy and save the **AppID** (Bot ID)\n  - Generate and save the **AppSecret** (Bot Secret)\n\n<img src=\"https://cdn.link-ai.tech/doc/20260317164955.png\" width=\"800\"/>\n\n## 2. Configuration and Running\n\n### Option A: Web Console\n\nStart the program and open the Web console (local access: http://127.0.0.1:9899/). Go to the **Channels** tab, click **Connect Channel**, select **QQ Bot**, fill in the AppID and AppSecret from the previous step, and click Connect.\n\n<img src=\"https://cdn.link-ai.tech/doc/20260317165425.png\" width=\"800\"/>\n\n### Option B: Config File\n\nAdd the following to your `config.json`:\n\n```json\n{\n  \"channel_type\": \"qq\",\n  \"qq_app_id\": \"YOUR_APP_ID\",\n  \"qq_app_secret\": \"YOUR_APP_SECRET\"\n}\n```\n\n| Parameter | Description |\n| --- | --- |\n| `qq_app_id` | AppID of the QQ Bot, found in Development Management on the open platform |\n| `qq_app_secret` | AppSecret of the QQ Bot, found in Development Management on the open platform |\n\nAfter configuration, start the program. The log message `[QQ] ✅ Connected successfully` indicates a successful connection.\n\n\n## 3. Usage\n\nIn the QQ Open Platform, go to **Management → Usage Scope & Members**, scan the \"Add to group and message list\" QR code with your QQ client to start chatting with the bot:\n\n<img src=\"https://cdn.link-ai.tech/doc/20260317165947.png\" width=\"800\"/>\n\nChat example:\n<img src=\"https://cdn.link-ai.tech/doc/20260317171508.png\" width=\"800\"/>\n\n## 4. Supported Features\n\n> Note: To use the QQ bot in group chats and guild channels, you need to complete the publishing review and configure usage scope permissions.\n\n| Feature | Status |\n| --- | --- |\n| QQ Direct Messages | ✅ |\n| QQ Group Chat (@bot) | ✅ |\n| Guild Channel (@bot) | ✅ |\n| Guild DM | ✅ |\n| Text Messages | ✅ Send & Receive |\n| Image Messages | ✅ Send & Receive (group & direct) |\n| File Messages | ✅ Send (group & direct) |\n| Scheduled Tasks | ✅ Active push (4 per user per month) |\n\n\n## 5. Notes\n\n- **Passive message limits**: QQ direct message replies are valid for 60 minutes (max 5 replies per message); group chat replies are valid for 5 minutes.\n- **Active message limits**: Both direct and group chats have a monthly limit of 4 active messages. 
Keep this in mind when using the scheduled tasks feature.\n- **Event permissions**: By default, `GROUP_AND_C2C_EVENT` (QQ group/direct) and `PUBLIC_GUILD_MESSAGES` (guild public messages) are subscribed. Apply for additional permissions on the open platform if needed.\n"
  },
  {
    "path": "docs/en/channels/web.mdx",
    "content": "---\ntitle: Web Console\ndescription: Use CowAgent through the web console\n---\n\nThe Web Console is CowAgent's default channel. It starts automatically after launch, allowing you to chat with the Agent through a browser and manage models, skills, memory, channels, and other configurations online.\n\n## Configuration\n\n```json\n{\n  \"channel_type\": \"web\",\n  \"web_port\": 9899\n}\n```\n\n| Parameter | Description | Default |\n| --- | --- | --- |\n| `channel_type` | Set to `web` | `web` |\n| `web_port` | Web service listen port | `9899` |\n\n## Access URL\n\nAfter starting the project, visit:\n\n- Local: `http://localhost:9899`\n- Server: `http://<server-ip>:9899`\n\n<Note>\n  Ensure the server firewall and security group allow the corresponding port.\n</Note>\n\n## Features\n\n### Chat Interface\n\nSupports streaming output with real-time display of the Agent's reasoning process and tool calls, providing intuitive observation of the Agent's decision-making:\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227180120.png\" />\n\n### Model Management\n\nManage model configurations online without manually editing config files:\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173811.png\" />\n\n### Skill Management\n\nView and manage Agent skills (Skills) online:\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173403.png\" />\n\n### Memory Management\n\nView and manage Agent memory online:\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173349.png\" />\n\n### Channel Management\n\nManage connected channels online with real-time connect/disconnect operations:\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173331.png\" />\n\n### Scheduled Tasks\n\nView and manage scheduled tasks online, including one-time tasks, fixed intervals, and Cron expressions:\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173704.png\" />\n\n### Logs\n\nView Agent runtime logs in real-time for monitoring and troubleshooting:\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173514.png\" />\n"
  },
  {
    "path": "docs/en/channels/wechatmp.mdx",
    "content": "---\ntitle: WeChat Official Account\ndescription: Integrate CowAgent with WeChat Official Accounts\n---\n\nCowAgent supports both personal subscription accounts and enterprise service accounts.\n\n| Type | Requirements | Features |\n| --- | --- | --- |\n| **Personal Subscription** | Available to individuals | Sends a placeholder reply first; users must send a message to retrieve the full response |\n| **Enterprise Service** | Enterprise with verified customer service API | Can proactively push replies to users |\n\n<Note>\n  Official Accounts only support server and Docker deployment, not local run mode. Install extended dependencies: `pip3 install -r requirements-optional.txt`\n</Note>\n\n## 1. Personal Subscription Account\n\nAdd the following configuration to `config.json`:\n\n```json\n{\n  \"channel_type\": \"wechatmp\",\n  \"single_chat_prefix\": [\"\"],\n  \"wechatmp_app_id\": \"wx73f9******d1e48\",\n  \"wechatmp_app_secret\": \"YOUR_APP_SECRET\",\n  \"wechatmp_aes_key\": \"\",\n  \"wechatmp_token\": \"YOUR_TOKEN\",\n  \"wechatmp_port\": 80\n}\n```\n\n### Setup Steps\n\nThese configurations must be consistent with the [WeChat Official Account Platform](https://mp.weixin.qq.com/advanced/advanced?action=dev&t=advanced/dev). Navigate to **Settings & Development → Basic Configuration → Server Configuration** and configure as shown below:\n\n<img src=\"https://cdn.link-ai.tech/doc/20260228103506.png\" width=\"480\"/>\n\n1. Enable the developer secret on the platform (corresponds to `wechatmp_app_secret`), and add the server IP to the whitelist\n2. Fill in the `config.json` with the official account parameters matching the platform configuration\n3. Start the program, which listens on port 80 (use `sudo` if you don't have permission; stop any process occupying port 80)\n4. **Enable server configuration** on the official account platform and submit. A successful save means the configuration is complete. Note that the **\"Server URL\"** must be in the format `http://{HOST}/wx`, where `{HOST}` can be the server IP or domain\n\nAfter following the account and sending a message, you should see the following result:\n\n<img src=\"https://cdn.link-ai.tech/doc/20260228103522.png\" width=\"720\"/>\n\nDue to subscription account limitations, short replies (within 15s) can be returned immediately, but longer replies will first send a \"Thinking...\" placeholder, requiring users to send any text to retrieve the answer. Enterprise service accounts can solve this with the customer service API.\n\n<Tip>\n  **Voice Recognition**: You can use WeChat's built-in voice recognition. Enable \"Receive Voice Recognition Results\" under \"Settings & Development → API Permissions\" on the official account management page.\n</Tip>\n\n## 2. Enterprise Service Account\n\nThe setup process for enterprise service accounts is essentially the same as personal subscription accounts, with the following differences:\n\n1. Register an enterprise service account on the platform and complete WeChat certification. Confirm that the **Customer Service API** permission has been granted\n2. Set `\"channel_type\": \"wechatmp_service\"` in `config.json`; other configurations remain the same\n3. 
Even longer replies can be proactively pushed to users without requiring manual retrieval\n\n```json\n{\n  \"channel_type\": \"wechatmp_service\",\n  \"single_chat_prefix\": [\"\"],\n  \"wechatmp_app_id\": \"YOUR_APP_ID\",\n  \"wechatmp_app_secret\": \"YOUR_APP_SECRET\",\n  \"wechatmp_aes_key\": \"\",\n  \"wechatmp_token\": \"YOUR_TOKEN\",\n  \"wechatmp_port\": 80\n}\n```\n"
  },
  {
    "path": "docs/en/channels/wecom-bot.mdx",
    "content": "---\ntitle: WeCom Bot\ndescription: Connect CowAgent to WeCom AI Bot (WebSocket long connection)\n---\n\nConnect CowAgent via WeCom AI Bot, supporting both direct messages and group chats. No public IP required — uses WebSocket long connection with Markdown rendering and streaming output.\n\n<Note>\n  WeCom Bot and WeCom App are two different integration methods. WeCom Bot uses WebSocket long connection, requiring no public IP or domain, making it easier to set up.\n</Note>\n\n## 1. Create an AI Bot\n\n1. Open the WeCom client, go to **Workbench**, and click **AI Bot**:\n\n<img src=\"https://cdn.link-ai.tech/doc/20260316180959.png\" width=\"800\"/>\n\n2. Click **Create Bot** → **Manual Creation**:\n\n<img src=\"https://cdn.link-ai.tech/doc/20260316181118.png\" width=\"600\"/>\n\n3. Scroll to the bottom of the right panel and select **API Mode**:\n\n<img src=\"https://cdn.link-ai.tech/doc/20260316181215.png\" width=\"600\"/>\n\n4. Set the bot name, avatar, and visibility scope. Select **Long Connection** mode, note down the **Bot ID** and **Secret**, then click Save.\n\n## 2. Configuration\n\n### Option A: Web Console\n\nStart the program and open the Web console (local access: http://127.0.0.1:9899). Go to the **Channels** tab, click **Connect Channel**, select **WeCom Bot**, fill in the Bot ID and Secret from the previous step, and click Connect.\n\n<img src=\"https://cdn.link-ai.tech/doc/20260316181711.png\" width=\"600\"/>\n\n### Option B: Config File\n\nAdd the following to your `config.json`:\n\n```json\n{\n  \"channel_type\": \"wecom_bot\",\n  \"wecom_bot_id\": \"YOUR_BOT_ID\",\n  \"wecom_bot_secret\": \"YOUR_SECRET\"\n}\n```\n\n| Parameter | Description |\n| --- | --- |\n| `wecom_bot_id` | Bot ID of the AI Bot |\n| `wecom_bot_secret` | Secret for the AI Bot |\n\nAfter configuration, start the program. The log message `[WecomBot] Subscribe success` indicates a successful connection.\n\n## 3. Supported Features\n\n| Feature | Status |\n| --- | --- |\n| Direct Messages | ✅ |\n| Group Chat (@bot) | ✅ |\n| Text Messages | ✅ Send & Receive |\n| Image Messages | ✅ Send & Receive |\n| File Messages | ✅ Send & Receive |\n| Streaming Reply | ✅ |\n| Scheduled Push | ✅ |\n\n## 4. Usage\n\nSearch for the bot name in WeCom to start a direct conversation.\n\nTo use in group chats, add the bot to a group and @mention it to send messages.\n\n<img src=\"https://cdn.link-ai.tech/doc/20260316182902.png\" width=\"800\"/>\n"
  },
  {
    "path": "docs/en/channels/wecom.mdx",
    "content": "---\ntitle: WeCom\ndescription: Integrate CowAgent into WeCom enterprise app\n---\n\nIntegrate CowAgent into WeCom through a custom enterprise app, supporting one-on-one chat for internal employees.\n\n<Note>\n  WeCom only supports Docker deployment or server Python deployment. Local run mode is not supported.\n</Note>\n\n## 1. Prerequisites\n\nRequired resources:\n\n1. A server with public IP (overseas server, or domestic server with a proxy for international API access)\n2. A registered WeCom account (individual registration is possible but cannot be certified)\n3. Certified WeCom accounts additionally require a domain filed under the corresponding entity\n\n## 2. Create WeCom App\n\n1. In the [WeCom Admin Console](https://work.weixin.qq.com/wework_admin/frame#profile), click **My Enterprise** and find the **Corp ID** at the bottom of the page. Save this ID for the `wechatcom_corp_id` configuration field.\n\n2. Switch to **Application Management** and click Create Application:\n\n<img src=\"https://cdn.link-ai.tech/doc/20260228103156.png\" width=\"480\"/>\n\n3. On the application creation page, record the `AgentId` and `Secret`:\n\n<img src=\"https://cdn.link-ai.tech/doc/20260228103218.png\" width=\"580\"/>\n\n4. Click **Set API Reception** to configure the application interface:\n\n<img src=\"https://cdn.link-ai.tech/doc/20260228103211.png\" width=\"520\"/>\n\n- URL format: `http://ip:port/wxcomapp` (certified enterprises must use a filed domain)\n- Generate random `Token` and `EncodingAESKey` and save them for the configuration file\n\n<Note>\n  The API reception configuration cannot be saved at this point because the program hasn't started yet. Come back to save it after the project is running.\n</Note>\n\n## 3. Configuration and Run\n\nAdd the following configuration to `config.json` (the mapping between each parameter and the WeCom console is shown in the screenshots above):\n\n```json\n{\n  \"channel_type\": \"wechatcom_app\",\n  \"single_chat_prefix\": [\"\"],\n  \"wechatcom_corp_id\": \"YOUR_CORP_ID\",\n  \"wechatcomapp_token\": \"YOUR_TOKEN\",\n  \"wechatcomapp_secret\": \"YOUR_SECRET\",\n  \"wechatcomapp_agent_id\": \"YOUR_AGENT_ID\",\n  \"wechatcomapp_aes_key\": \"YOUR_AES_KEY\",\n  \"wechatcomapp_port\": 9898\n}\n```\n\n| Parameter | Description |\n| --- | --- |\n| `wechatcom_corp_id` | Corp ID |\n| `wechatcomapp_token` | Token from API reception config |\n| `wechatcomapp_secret` | App Secret |\n| `wechatcomapp_agent_id` | App AgentId |\n| `wechatcomapp_aes_key` | EncodingAESKey from API reception config |\n| `wechatcomapp_port` | Listen port, default 9898 |\n\nAfter configuration, start the program. When the log shows `http://0.0.0.0:9898/`, the program is running successfully. You need to open this port externally (e.g., allow it in the cloud server security group).\n\nAfter the program starts, return to the WeCom Admin Console to save the **Message Server Configuration**. After saving successfully, you also need to add the server IP to **Enterprise Trusted IPs**, otherwise messages cannot be sent or received:\n\n<img src=\"https://cdn.link-ai.tech/doc/20260228103224.png\" width=\"520\"/>\n\n<Warning>\n  If the URL configuration callback fails or the configuration is unsuccessful:\n  1. Ensure the server firewall is disabled and the security group allows the listening port\n  2. Carefully check that Token, Secret Key and other parameter configurations are consistent, and that the URL format is correct\n  3. 
Certified WeCom accounts must configure a filed domain matching the entity\n</Warning>\n\n## 4. Usage\n\nSearch for the app name you just created in WeCom to start chatting directly. You can run multiple instances listening on different ports to create multiple WeCom apps:\n\n<img src=\"https://cdn.link-ai.tech/doc/20260228103228.png\" width=\"720\"/>\n\nTo allow external personal WeChat users to use the app, go to **My Enterprise → WeChat Plugin** and share the invite QR code. After scanning and following, personal WeChat users can join and chat with the app:\n\n<img src=\"https://cdn.link-ai.tech/doc/20260228103232.png\" width=\"520\"/>\n\n## FAQ\n\nMake sure the following dependencies are installed:\n\n```bash\npip install websocket-client pycryptodome\n```\n\n
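If the callback URL still cannot be saved, a quick reachability probe from another machine helps separate network problems from configuration problems (the port `9898` and path `/wxcomapp` are taken from the example config above; adjust if you changed them):\n\n```python\n# Minimal sketch: checks whether the API reception port is reachable.\n# A refused or timed-out connection points to firewall / security-group\n# rules rather than the CowAgent configuration itself.\nimport socket\n\ntry:\n    socket.create_connection((\"YOUR_SERVER_IP\", 9898), timeout=5).close()\n    print(\"port 9898 is reachable\")\nexcept OSError as e:\n    print(f\"cannot reach port 9898: {e}\")\n```\n"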
  },
  {
    "path": "docs/en/guide/manual-install.mdx",
    "content": "---\ntitle: Manual Install\ndescription: Deploy CowAgent manually (source code / Docker)\n---\n\n## Source Code Deployment\n\n### 1. Clone the project\n\n```bash\ngit clone https://github.com/zhayujie/chatgpt-on-wechat\ncd chatgpt-on-wechat/\n```\n\n<Tip>\n  For network issues, use the mirror: https://gitee.com/zhayujie/chatgpt-on-wechat\n</Tip>\n\n### 2. Install dependencies\n\nCore dependencies (required):\n\n```bash\npip3 install -r requirements.txt\n```\n\nOptional dependencies (recommended):\n\n```bash\npip3 install -r requirements-optional.txt\n```\n\n### 3. Configure\n\nCopy the config template and edit:\n\n```bash\ncp config-template.json config.json\n```\n\nFill in model API keys, channel type, and other settings in `config.json`. See the [model docs](/en/models/index) for details.\n\n### 4. Run\n\n**Local run:**\n\n```bash\npython3 app.py\n```\n\nBy default, the Web service starts. Access `http://localhost:9899/chat` to chat.\n\n**Background run on server:**\n\n```bash\nnohup python3 app.py & tail -f nohup.out\n```\n\n## Docker Deployment\n\nDocker deployment does not require cloning source code or installing dependencies. For Agent mode, source deployment is recommended for broader system access.\n\n<Note>\n  Requires [Docker](https://docs.docker.com/engine/install/) and docker-compose.\n</Note>\n\n**1. Download config**\n\n```bash\ncurl -O https://cdn.link-ai.tech/code/cow/docker-compose.yml\n```\n\nEdit `docker-compose.yml` with your configuration.\n\n**2. Start container**\n\n```bash\nsudo docker compose up -d\n```\n\n**3. View logs**\n\n```bash\nsudo docker logs -f chatgpt-on-wechat\n```\n\n## Core Configuration\n\n```json\n{\n  \"channel_type\": \"web\",\n  \"model\": \"MiniMax-M2.5\",\n  \"agent\": true,\n  \"agent_workspace\": \"~/cow\",\n  \"agent_max_context_tokens\": 40000,\n  \"agent_max_context_turns\": 30,\n  \"agent_max_steps\": 15\n}\n```\n\n| Parameter | Description | Default |\n| --- | --- | --- |\n| `channel_type` | Channel type | `web` |\n| `model` | Model name | `MiniMax-M2.5` |\n| `agent` | Enable Agent mode | `true` |\n| `agent_workspace` | Agent workspace path | `~/cow` |\n| `agent_max_context_tokens` | Max context tokens | `40000` |\n| `agent_max_context_turns` | Max context turns | `30` |\n| `agent_max_steps` | Max decision steps per task | `15` |\n\n<Tip>\n  Full configuration options are in the project [`config.py`](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/config.py).\n</Tip>\n"
  },
  {
    "path": "docs/en/guide/quick-start.mdx",
    "content": "---\ntitle: One-click Install\ndescription: One-click install and manage CowAgent with scripts\n---\n\nThe project provides scripts for one-click install, configuration, startup, and management. Script-based deployment is recommended for quick setup.\n\nSupports Linux, macOS, and Windows. Requires Python 3.7-3.12 (3.9 recommended).\n\n## Install Command\n\n```bash\nbash <(curl -fsSL https://cdn.link-ai.tech/code/cow/run.sh)\n```\n\nThe script automatically performs these steps:\n\n1. Check Python environment (requires Python 3.7+)\n2. Install required tools (git, curl, etc.)\n3. Clone project to `~/chatgpt-on-wechat`\n4. Install Python dependencies\n5. Guided configuration for AI model and channel\n6. Start service\n\nBy default, the Web service starts after installation. Access `http://localhost:9899/chat` to begin chatting.\n\n## Management Commands\n\nAfter installation, use these commands to manage the service:\n\n| Command | Description |\n| --- | --- |\n| `./run.sh start` | Start service |\n| `./run.sh stop` | Stop service |\n| `./run.sh restart` | Restart service |\n| `./run.sh status` | Check run status |\n| `./run.sh logs` | View real-time logs |\n| `./run.sh config` | Reconfigure |\n| `./run.sh update` | Update project code |\n"
  },
  {
    "path": "docs/en/intro/architecture.mdx",
    "content": "---\ntitle: Architecture\ndescription: CowAgent 2.0 system architecture and core design\n---\n\nCowAgent 2.0 has evolved from a simple chatbot into a super intelligent assistant with Agent architecture, featuring autonomous thinking, task planning, long-term memory, and skill extensibility.\n\n## System Architecture\n\nCowAgent's architecture consists of the following core modules:\n\n<img src=\"https://cdn.link-ai.tech/doc/68ef7b212c6f791e0e74314b912149f9-sz_5847990.png\" alt=\"CowAgent Architecture\" />\n\n### Core Modules\n\n| Module | Description |\n| --- | --- |\n| **Channels** | Message channel layer for receiving and sending messages. Supports Web, Feishu, DingTalk, WeCom, WeChat Official Account, and more |\n| **Agent Core** | Agent engine including task planning, memory system, and skills engine |\n| **Tools** | Tool layer for Agent to access OS resources. 10+ built-in tools |\n| **Models** | Model layer with unified access to mainstream LLMs |\n\n## Agent Mode Workflow\n\nWhen Agent mode is enabled, CowAgent runs as an autonomous agent with the following workflow:\n\n1. **Receive Message** — Receive user input through channels\n2. **Understand Intent** — Analyze task requirements and context\n3. **Plan Task** — Break complex tasks into multiple steps\n4. **Invoke Tools** — Select and execute appropriate tools for each step\n5. **Update Memory** — Store important information in long-term memory\n6. **Return Result** — Send execution results back to the user\n\n## Workspace Directory Structure\n\nThe Agent workspace is located at `~/cow` by default and stores system prompts, memory files, and skill files:\n\n```\n~/cow/\n├── system.md          # Agent system prompt\n├── user.md            # User profile\n├── memory/            # Long-term memory storage\n│   ├── core.md        # Core memory\n│   └── daily/         # Daily memory\n└── skills/            # Custom skills\n    ├── skill-1/\n    └── skill-2/\n```\n\nSecret keys are stored separately in `~/.cow` directory for security:\n\n```\n~/.cow/\n└── .env               # Secret keys for skills\n```\n\n## Core Configuration\n\nConfigure Agent mode parameters in `config.json`:\n\n```json\n{\n  \"agent\": true,\n  \"agent_workspace\": \"~/cow\",\n  \"agent_max_context_tokens\": 40000,\n  \"agent_max_context_turns\": 30,\n  \"agent_max_steps\": 15\n}\n```\n\n| Parameter | Description | Default |\n| --- | --- | --- |\n| `agent` | Enable Agent mode | `true` |\n| `agent_workspace` | Workspace path | `~/cow` |\n| `agent_max_context_tokens` | Max context tokens | `40000` |\n| `agent_max_context_turns` | Max context turns | `30` |\n| `agent_max_steps` | Max decision steps per task | `15` |\n"
  },
  {
    "path": "docs/en/intro/features.mdx",
    "content": "---\ntitle: Features\ndescription: CowAgent long-term memory, task planning, and skills system in detail\n---\n\n## 1. Long-term Memory\n\nThe memory system enables the Agent to remember important information over time. The Agent proactively stores information when users share preferences, decisions, or key facts, and automatically extracts summaries when conversations reach a certain length. Memory is divided into core memory and daily memory, with hybrid retrieval supporting both keyword search and vector search.\n\nOn first launch, the Agent proactively asks the user for key information and records it in the workspace (default `~/cow`) — including agent settings, user identity, and memory files.\n\nIn subsequent long-term conversations, the Agent intelligently stores or retrieves memory as needed, continuously updating its own settings, user preferences, and memory files, summarizing experiences and lessons learned — truly achieving autonomous thinking and continuous growth.\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260203000455.png\" width=\"800\" />\n</Frame>\n\n## 2. Task Planning and Tool Use\n\nTools are the core of how the Agent accesses operating system resources. The Agent intelligently selects and invokes tools based on task requirements, performing file read/write, command execution, scheduled tasks, and more. Built-in tools are implemented in the project's `agent/tools/` directory.\n\n**Key tools:** file read/write/edit, Bash terminal, file send, scheduler, memory search, web search, environment config, and more.\n\n### 2.1 Terminal and File Access\n\nAccess to the OS terminal and file system is the most fundamental and core capability. Many other tools and skills build on top of this. Users can interact with the Agent from a mobile device to operate resources on their personal computer or server:\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202181130.png\" width=\"800\" />\n</Frame>\n\n### 2.2 Programming Capability\n\nCombining programming and system access, the Agent can execute the complete **Vibecoding workflow** — from information search, asset generation, coding, testing, deployment, Nginx configuration, to publishing — all triggered by a single command from your phone:\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260203121008.png\" width=\"800\" />\n</Frame>\n\n### 2.3 Scheduled Tasks\n\nThe `scheduler` tool enables dynamic scheduled tasks, supporting **one-time tasks, fixed intervals, and Cron expressions**. Tasks can be triggered as either a **fixed message send** or an **Agent dynamic task** execution:\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202195402.png\" width=\"800\" />\n</Frame>\n\n### 2.4 Environment Variable Management\n\nSecrets required by skills are stored in an environment variable file, managed by the `env_config` tool. You can update secrets through conversation, with built-in security protection and desensitization:\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202234939.png\" width=\"800\" />\n</Frame>\n\n## 3. Skills System\n\nThe Skills system provides infinite extensibility for the Agent. Each Skill consists of a description file, execution scripts (optional), and resources (optional), describing how to complete specific types of tasks. 
Skills allow the Agent to follow instructions for complex workflows, invoke tools, or integrate third-party systems.\n\n- **Built-in skills:** Located in the project's `skills/` directory, including skill creator, image recognition, LinkAI agent, web fetch, and more. Built-in skills are automatically enabled based on dependency conditions (API keys, system commands, etc.).\n- **Custom skills:** Created by users through conversation, stored in the workspace (`~/cow/skills/`), capable of implementing any complex business process or third-party integration.\n\n### 3.1 Creating Skills\n\nThe `skill-creator` skill enables rapid skill creation through conversation. You can ask the Agent to codify a workflow as a skill, or send any API documentation and examples for the Agent to complete the integration directly:\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202202247.png\" width=\"800\" />\n</Frame>\n\n### 3.2 Web Search and Image Recognition\n\n- **Web search:** Built-in `web_search` tool, supports multiple search engines. Configure `BOCHA_API_KEY` or `LINKAI_API_KEY` to enable.\n- **Image recognition:** Built-in `openai-image-vision` skill, supports `gpt-4.1-mini`, `gpt-4.1`, and other models. Requires `OPENAI_API_KEY`.\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202213219.png\" width=\"800\" />\n</Frame>\n\n### 3.3 Third-party Knowledge Bases and Plugins\n\nThe `linkai-agent` skill makes all agents on [LinkAI](https://link-ai.tech/) available as Skills for the Agent, enabling multi-agent decision making.\n\nConfiguration: set `LINKAI_API_KEY` via `env_config`, then add agent descriptions in `skills/linkai-agent/config.json`:\n\n```json\n{\n  \"apps\": [\n    {\n      \"app_code\": \"G7z6vKwp\",\n      \"app_name\": \"LinkAI Customer Support\",\n      \"app_description\": \"Select only when the user needs help with LinkAI platform questions\"\n    },\n    {\n      \"app_code\": \"SFY5x7JR\",\n      \"app_name\": \"Content Creator\",\n      \"app_description\": \"Use only when the user needs to create images or videos\"\n    }\n  ]\n}\n```\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202234350.png\" width=\"750\" />\n</Frame>\n"
  },
  {
    "path": "docs/en/intro/index.mdx",
    "content": "---\ntitle: Introduction\ndescription: CowAgent - AI Super Assistant powered by LLMs\n---\n\n<img src=\"https://cdn.link-ai.tech/doc/78c5dd674e2c828642ecc0406669fed7.png\" alt=\"CowAgent\" width=\"600px\"/>\n\n**CowAgent** is an AI super assistant powered by LLMs with autonomous task planning, long-term memory, skills system, multimodal messages, multiple model support, and multi-platform deployment.\n\nCowAgent can proactively think and plan tasks, operate computers and external resources, create and execute Skills, and continuously grow with long-term memory. It supports flexible switching between multiple models, handles text, voice, images, files and other multimodal messages, and can be integrated into web, Feishu, DingTalk, WeCom, and WeChat Official Account. It runs 7x24 hours on your personal computer or server.\n\n<Card title=\"GitHub\" icon=\"github\" href=\"https://github.com/zhayujie/chatgpt-on-wechat\">\n  github.com/zhayujie/chatgpt-on-wechat\n</Card>\n\n## Core Capabilities\n\n<CardGroup cols={2}>\n  <Card title=\"Autonomous Task Planning\" icon=\"brain\" href=\"/en/intro/architecture\">\n    Understands complex tasks and autonomously plans execution, continuously thinking and invoking tools until goals are achieved. Supports accessing file systems, terminals, browsers, schedulers, and other system resources through tools.\n  </Card>\n  <Card title=\"Long-term Memory\" icon=\"database\" href=\"/en/memory\">\n    Automatically persists conversation memory to local files and databases, including core memory and daily memory, with keyword and vector retrieval support.\n  </Card>\n  <Card title=\"Skills System\" icon=\"puzzle-piece\" href=\"/en/skills/index\">\n    Implements a Skills creation and execution engine with built-in skills, and supports custom Skills development through natural language conversation.\n  </Card>\n  <Card title=\"Multimodal Messages\" icon=\"image\" href=\"/en/channels/web\">\n    Supports parsing, processing, generating, and sending text, images, voice, files, and other message types.\n  </Card>\n  <Card title=\"Multiple Model Support\" icon=\"microchip\" href=\"/en/models/index\">\n    Supports mainstream model providers including OpenAI, Claude, Gemini, DeepSeek, MiniMax, GLM, Qwen, Kimi, Doubao, and more.\n  </Card>\n  <Card title=\"Multi-platform Deployment\" icon=\"server\" href=\"/en/channels/web\">\n    Runs on local computers or servers, integrable into web, Feishu, DingTalk, WeChat Official Account, and WeCom applications.\n  </Card>\n</CardGroup>\n\n## Quick Experience\n\nRun the following command in your terminal for one-click install, configuration, and startup:\n\n```bash\nbash <(curl -fsSL https://cdn.link-ai.tech/code/cow/run.sh)\n```\n\nBy default, the Web service starts after running. Access `http://localhost:9899/chat` to chat in the web interface.\n\n<CardGroup cols={2}>\n  <Card title=\"Quick Start\" icon=\"rocket\" href=\"/en/guide/quick-start\">\n    Complete installation and run guide\n  </Card>\n  <Card title=\"Architecture\" icon=\"sitemap\" href=\"/en/intro/architecture\">\n    CowAgent system architecture design\n  </Card>\n</CardGroup>\n\n## Disclaimer\n\n1. This project follows the [MIT License](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/LICENSE) and is intended for technical research and learning. Users must comply with local laws, regulations, policies, and corporate bylaws. Any illegal or rights-infringing use is prohibited.\n2. Agent mode consumes more tokens than normal chat mode. 
Choose models based on effectiveness and cost. Agent has access to the host operating system — deploy with caution.\n3. CowAgent focuses on open-source development and does not participate in, authorize, or issue any cryptocurrency.\n\n## Community\n\nAdd our assistant on WeChat to join the open-source community:\n\n<img width=\"140\" src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/open-community.png\" />\n"
  },
  {
    "path": "docs/en/memory.mdx",
    "content": "---\ntitle: Memory\ndescription: CowAgent long-term memory system\n---\n\nThe memory system enables the Agent to remember important information over time, continuously accumulating experience, understanding user preferences, and truly achieving autonomous thinking and continuous growth.\n\n## Memory Types\n\n### Core Memory (MEMORY.md)\n\nStored in `~/cow/MEMORY.md`, containing long-term user preferences, important decisions, key facts, and other information that doesn't fade over time. Automatically injected into the system prompt on every conversation turn as background knowledge.\n\n### Daily Memory (memory/YYYY-MM-DD.md)\n\nStored in `~/cow/memory/` directory, named by date (e.g. `2026-03-08.md`), recording daily conversation summaries and key events. Files are only created on first write to avoid generating empty files.\n\n## Memory Writing\n\nThe Agent automatically persists conversation content to daily memory through the following mechanisms:\n\n- **On context trimming** — When conversation turns or tokens exceed the configured limit, the oldest half of the context is trimmed in batch, and the discarded content is summarized by LLM into key information and written to the daily memory file\n- **Daily scheduled summary** — A full summary is automatically triggered at 23:55 every day, ensuring memory is preserved even on low-activity days (skipped if content hasn't changed)\n- **On API context overflow** — When the model API returns a context overflow error, the current conversation summary is saved as an emergency measure\n\nAll memory writes run asynchronously in a background thread (LLM summarization + file writing), never blocking normal conversation replies.\n\n## First Launch\n\nOn first launch, the Agent will proactively ask the user for key information and save it to the workspace (default `~/cow`):\n\n| File | Description |\n| --- | --- |\n| `system.md` | Agent system prompt and behavior settings |\n| `user.md` | User identity information and preferences |\n| `MEMORY.md` | Core memory (long-term) |\n| `memory/YYYY-MM-DD.md` | Daily memory (created on demand) |\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260203000455.png\" width=\"800\" />\n</Frame>\n\n## Memory Retrieval\n\nThe memory system supports hybrid retrieval modes:\n\n- **Keyword retrieval** — Match historical memory based on keywords\n- **Vector retrieval** — Semantic similarity search, finds relevant memory even with different wording\n\nThe Agent automatically triggers memory retrieval during conversation as needed, incorporating relevant historical information into context. Core memory (`MEMORY.md`) is always injected into the system prompt, while daily memory is loaded on demand via retrieval.\n\n## Configuration\n\n```json\n{\n  \"agent_workspace\": \"~/cow\",\n  \"agent_max_context_tokens\": 40000,\n  \"agent_max_context_turns\": 20\n}\n```\n\n| Parameter | Description | Default |\n| --- | --- | --- |\n| `agent_workspace` | Workspace path, memory files stored under this directory | `~/cow` |\n| `agent_max_context_tokens` | Max context tokens; when exceeded, half is trimmed and summarized into memory | `40000` |\n| `agent_max_context_turns` | Max context turns; when exceeded, half is trimmed and summarized into memory | `20` |\n"
  },
  {
    "path": "docs/en/models/claude.mdx",
    "content": "---\ntitle: Claude\ndescription: Claude model configuration\n---\n\n```json\n{\n  \"model\": \"claude-sonnet-4-6\",\n  \"claude_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| Parameter | Description |\n| --- | --- |\n| `model` | Options include `claude-sonnet-4-6`, `claude-opus-4-6`, `claude-sonnet-4-5`, `claude-sonnet-4-0`, `claude-3-5-sonnet-latest`, etc. See [official models](https://docs.anthropic.com/en/docs/about-claude/models/overview) |\n| `claude_api_key` | Create at [Claude Console](https://console.anthropic.com/settings/keys) |\n| `claude_api_base` | Optional. Defaults to `https://api.anthropic.com/v1`. Change to use third-party proxy |\n"
  },
  {
    "path": "docs/en/models/coding-plan.mdx",
    "content": "---\ntitle: Coding Plan\ndescription: Coding Plan model configuration\n---\n\n> Coding Plan is a monthly subscription package offered by various providers, ideal for high-frequency Agent usage. CowAgent supports all Coding Plan providers via OpenAI-compatible mode.\n\n<Note>\n  Coding Plan API Base and API Key are usually separate from the standard pay-as-you-go ones. Please obtain them from each provider's platform.\n</Note>\n\n## General Configuration\n\nAll providers can be accessed via the OpenAI-compatible protocol, and can be quickly configured through the web console. Set the model provider to **OpenAI**, select a custom model and enter the model code, then fill in the corresponding provider's API Base and API Key:\n\n<img src=\"https://cdn.link-ai.tech/doc/20260318113134.png\" width=\"800\"/>\n\nYou can also configure directly in `config.json`:\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"MODEL_NAME\",\n  \"open_ai_api_base\": \"PROVIDER_CODING_PLAN_API_BASE\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| Parameter | Description |\n| --- | --- |\n| `bot_type` | Must be `openai` (OpenAI-compatible mode) |\n| `model` | Model name supported by the provider |\n| `open_ai_api_base` | Provider's Coding Plan API Base URL |\n| `open_ai_api_key` | Provider's Coding Plan API Key |\n\n---\n\n## Alibaba Cloud\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"qwen3.5-plus\",\n  \"open_ai_api_base\": \"https://coding.dashscope.aliyuncs.com/v1\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| Parameter | Description |\n| --- | --- |\n| `model` | `qwen3.5-plus`, `qwen3-max-2026-01-23`, `qwen3-coder-next`, `qwen3-coder-plus`, `glm-5`, `glm-4.7`, `kimi-k2.5`, `MiniMax-M2.5` |\n| `open_ai_api_base` | `https://coding.dashscope.aliyuncs.com/v1` |\n| `open_ai_api_key` | Coding Plan specific key (not shared with pay-as-you-go) |\n\nReference: [Quick Start](https://help.aliyun.com/zh/model-studio/coding-plan-quickstart?spm=a2c4g.11186623.help-menu-2400256.d_0_2_1.70115203zi5Igc), [Model List](https://help.aliyun.com/zh/model-studio/coding-plan)\n\n---\n\n## MiniMax\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"MiniMax-M2.5\",\n  \"open_ai_api_base\": \"https://api.minimaxi.com/v1\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| Parameter | Description |\n| --- | --- |\n| `model` | `MiniMax-M2.5`, `MiniMax-M2.5-highspeed`, `MiniMax-M2.1`, `MiniMax-M2` |\n| `open_ai_api_base` | China: `https://api.minimaxi.com/v1`; Global: `https://api.minimax.io/v1` |\n| `open_ai_api_key` | Coding Plan specific key (not shared with pay-as-you-go) |\n\nReference: [China Key](https://platform.minimaxi.com/docs/coding-plan/quickstart), [Model List](https://platform.minimaxi.com/docs/guides/pricing-coding-plan), [Global Key](https://platform.minimax.io/docs/coding-plan/quickstart)\n\n---\n\n## Zhipu GLM\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"glm-4.7\",\n  \"open_ai_api_base\": \"https://open.bigmodel.cn/api/coding/paas/v4\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| Parameter | Description |\n| --- | --- |\n| `model` | `glm-5`, `glm-4.7`, `glm-4.6`, `glm-4.5`, `glm-4.5-air` |\n| `open_ai_api_base` | China: `https://open.bigmodel.cn/api/coding/paas/v4`; Global: `https://api.z.ai/api/coding/paas/v4` |\n| `open_ai_api_key` | Shared with standard API |\n\nReference: [China Quick Start](https://docs.bigmodel.cn/cn/coding-plan/quick-start), [Global Quick Start](https://docs.z.ai/devpack/quick-start)\n\n---\n\n## 
Kimi\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"kimi-for-coding\",\n  \"open_ai_api_base\": \"https://api.kimi.com/coding/v1\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| Parameter | Description |\n| --- | --- |\n| `model` | `kimi-for-coding` |\n| `open_ai_api_base` | `https://api.kimi.com/coding/v1` |\n| `open_ai_api_key` | Coding Plan specific key (not shared with pay-as-you-go) |\n\nReference: [Key & Docs](https://www.kimi.com/code/docs/)\n\n---\n\n## Volcengine\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"Doubao-Seed-2.0-Code\",\n  \"open_ai_api_base\": \"https://ark.cn-beijing.volces.com/api/coding/v3\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| Parameter | Description |\n| --- | --- |\n| `model` | `Doubao-Seed-2.0-Code`, `Doubao-Seed-2.0-pro`, `Doubao-Seed-2.0-lite`, `Doubao-Seed-Code`, `MiniMax-M2.5`, `Kimi-K2.5`, `GLM-4.7`, `DeepSeek-V3.2` |\n| `open_ai_api_base` | `https://ark.cn-beijing.volces.com/api/coding/v3` |\n| `open_ai_api_key` | Shared with standard API |\n\nReference: [Quick Start](https://www.volcengine.com/docs/82379/1928261?lang=zh)\n"
  },
  {
    "path": "docs/en/models/deepseek.mdx",
    "content": "---\ntitle: DeepSeek\ndescription: DeepSeek model configuration\n---\n\nUse OpenAI-compatible configuration:\n\n```json\n{\n  \"model\": \"deepseek-chat\",\n  \"bot_type\": \"openai\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\",\n  \"open_ai_api_base\": \"https://api.deepseek.com/v1\"\n}\n```\n\n| Parameter | Description |\n| --- | --- |\n| `model` | `deepseek-chat` (DeepSeek-V3), `deepseek-reasoner` (DeepSeek-R1) |\n| `bot_type` | Must be `openai` (OpenAI-compatible mode) |\n| `open_ai_api_key` | Create at [DeepSeek Platform](https://platform.deepseek.com/api_keys) |\n| `open_ai_api_base` | DeepSeek platform BASE URL |\n"
  },
  {
    "path": "docs/en/models/doubao.mdx",
    "content": "---\ntitle: Doubao (ByteDance)\ndescription: Doubao (Volcano Ark) model configuration\n---\n\n```json\n{\n  \"model\": \"doubao-seed-2-0-code-preview-260215\",\n  \"ark_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| Parameter | Description |\n| --- | --- |\n| `model` | Options include `doubao-seed-2-0-code-preview-260215`, `doubao-seed-2-0-pro-260215`, `doubao-seed-2-0-lite-260215`, etc. |\n| `ark_api_key` | Create at [Volcano Ark Console](https://console.volcengine.com/ark/region:ark+cn-beijing/apikey) |\n| `ark_base_url` | Optional. Defaults to `https://ark.cn-beijing.volces.com/api/v3` |\n"
  },
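Supplementing `docs/en/models/doubao.mdx` above: `ark_base_url` is optional and only needs to be set when overriding the default endpoint. A minimal sketch that spells out the default value from the table (the key is a placeholder):

```json
{
  "model": "doubao-seed-2-0-pro-260215",
  "ark_api_key": "YOUR_API_KEY",
  "ark_base_url": "https://ark.cn-beijing.volces.com/api/v3"
}
```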
  {
    "path": "docs/en/models/gemini.mdx",
    "content": "---\ntitle: Gemini\ndescription: Google Gemini model configuration\n---\n\n```json\n{\n  \"model\": \"gemini-3.1-pro-preview\",\n  \"gemini_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| Parameter | Description |\n| --- | --- |\n| `model` | Options include `gemini-3.1-flash-lite-preview`, `gemini-3.1-pro-preview`, `gemini-3-flash-preview`, `gemini-3-pro-preview`, etc. See [official docs](https://ai.google.dev/gemini-api/docs/models) |\n| `gemini_api_key` | Create at [Google AI Studio](https://aistudio.google.com/app/apikey) |\n"
  },
  {
    "path": "docs/en/models/glm.mdx",
    "content": "---\ntitle: GLM (Zhipu AI)\ndescription: Zhipu AI GLM model configuration\n---\n\n```json\n{\n  \"model\": \"glm-5-turbo\",\n  \"zhipu_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| Parameter | Description |\n| --- | --- |\n| `model` | Options include `glm-5-turbo`, `glm-5`, `glm-4.7`, `glm-4-plus`, `glm-4-flash`, `glm-4-air`, etc. See [model codes](https://bigmodel.cn/dev/api/normal-model/glm-4) |\n| `zhipu_ai_api_key` | Create at [Zhipu AI Console](https://www.bigmodel.cn/usercenter/proj-mgmt/apikeys) |\n\nOpenAI-compatible configuration is also supported:\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"glm-5-turbo\",\n  \"open_ai_api_base\": \"https://open.bigmodel.cn/api/paas/v4\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n"
  },
  {
    "path": "docs/en/models/index.mdx",
    "content": "---\ntitle: Models Overview\ndescription: Supported models and recommended choices for CowAgent\n---\n\nCowAgent supports mainstream LLMs from domestic and international providers. Model interfaces are implemented in the project's `models/` directory.\n\n<Note>\n  For Agent mode, the following models are recommended based on quality and cost: MiniMax-M2.7, glm-5-turbo, kimi-k2.5, qwen3.5-plus, claude-sonnet-4-6, gemini-3.1-pro-preview\n</Note>\n\n## Configuration\n\nConfigure the model name and API key in `config.json` according to your chosen model. Each model also supports OpenAI-compatible access by setting `bot_type` to `openai` and configuring `open_ai_api_base` and `open_ai_api_key`.\n\nYou can also use the [LinkAI](https://link-ai.tech) platform interface to flexibly switch between multiple models with support for knowledge base, workflows, and other Agent capabilities.\n\n## Supported Models\n\n<CardGroup cols={2}>\n  <Card title=\"MiniMax\" href=\"/en/models/minimax\">\n    MiniMax-M2.7 and other series models\n  </Card>\n  <Card title=\"GLM (Zhipu AI)\" href=\"/en/models/glm\">\n    glm-5-turbo, glm-5 and other series models\n  </Card>\n  <Card title=\"Qwen (Tongyi Qianwen)\" href=\"/en/models/qwen\">\n    qwen3.5-plus, qwen3-max and more\n  </Card>\n  <Card title=\"Kimi\" href=\"/en/models/kimi\">\n    kimi-k2.5, kimi-k2 and more\n  </Card>\n  <Card title=\"Doubao (ByteDance)\" href=\"/en/models/doubao\">\n    doubao-seed series models\n  </Card>\n  <Card title=\"Claude\" href=\"/en/models/claude\">\n    claude-sonnet-4-6 and more\n  </Card>\n  <Card title=\"Gemini\" href=\"/en/models/gemini\">\n    gemini-3.1-pro-preview and more\n  </Card>\n  <Card title=\"OpenAI\" href=\"/en/models/openai\">\n    gpt-5.4, gpt-4.1, o-series and more\n  </Card>\n  <Card title=\"DeepSeek\" href=\"/en/models/deepseek\">\n    deepseek-chat, deepseek-reasoner\n  </Card>\n  <Card title=\"LinkAI\" href=\"/en/models/linkai\">\n    Unified multi-model interface + knowledge base\n  </Card>\n</CardGroup>\n\n<Tip>\n  For a full list of model names, refer to the project's [`common/const.py`](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/common/const.py) file.\n</Tip>\n"
  },
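To make the OpenAI-compatible access described in `docs/en/models/index.mdx` concrete, a minimal `config.json` sketch; the base URL here is a placeholder, not a real provider endpoint:

```json
{
  "bot_type": "openai",
  "model": "MODEL_NAME",
  "open_ai_api_base": "https://provider.example.com/v1",
  "open_ai_api_key": "YOUR_API_KEY"
}
```

Replace the placeholders with the base URL and key from your provider; the per-provider model pages and the Coding Plan page list real values.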
  {
    "path": "docs/en/models/kimi.mdx",
    "content": "---\ntitle: Kimi (Moonshot)\ndescription: Kimi (Moonshot) model configuration\n---\n\n```json\n{\n  \"model\": \"kimi-k2.5\",\n  \"moonshot_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| Parameter | Description |\n| --- | --- |\n| `model` | Options include `kimi-k2.5`, `kimi-k2`, `moonshot-v1-8k`, `moonshot-v1-32k`, `moonshot-v1-128k` |\n| `moonshot_api_key` | Create at [Moonshot Console](https://platform.moonshot.cn/console/api-keys) |\n\nOpenAI-compatible configuration is also supported:\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"kimi-k2.5\",\n  \"open_ai_api_base\": \"https://api.moonshot.cn/v1\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n"
  },
  {
    "path": "docs/en/models/linkai.mdx",
    "content": "---\ntitle: LinkAI\ndescription: Unified access to multiple models via LinkAI platform\n---\n\nThe [LinkAI](https://link-ai.tech) platform lets you flexibly switch between OpenAI, Claude, Gemini, DeepSeek, Qwen, Kimi, and other models, with support for knowledge base, workflows, plugins, and other Agent capabilities.\n\n```json\n{\n  \"use_linkai\": true,\n  \"linkai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| Parameter | Description |\n| --- | --- |\n| `use_linkai` | Set to `true` to enable LinkAI interface |\n| `linkai_api_key` | Create at [LinkAI Console](https://link-ai.tech/console/interface) |\n| `model` | Leave empty to use the agent's default model. Can be switched flexibly on the platform. All models in the [model list](https://link-ai.tech/console/models) are supported |\n\nSee the [API documentation](https://docs.link-ai.tech/platform/api) for more details.\n"
  },
  {
    "path": "docs/en/models/minimax.mdx",
    "content": "---\ntitle: MiniMax\ndescription: MiniMax model configuration\n---\n\n```json\n{\n  \"model\": \"MiniMax-M2.7\",\n  \"minimax_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| Parameter | Description |\n| --- | --- |\n| `model` | Options include `MiniMax-M2.7`, `MiniMax-M2.5`, `MiniMax-M2.1`, `MiniMax-M2.1-lightning`, `MiniMax-M2`, etc. |\n| `minimax_api_key` | Create at [MiniMax Console](https://platform.minimaxi.com/user-center/basic-information/interface-key) |\n\nOpenAI-compatible configuration is also supported:\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"MiniMax-M2.7\",\n  \"open_ai_api_base\": \"https://api.minimaxi.com/v1\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n"
  },
  {
    "path": "docs/en/models/openai.mdx",
    "content": "---\ntitle: OpenAI\ndescription: OpenAI model configuration\n---\n\n```json\n{\n  \"model\": \"gpt-5.4\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\",\n  \"open_ai_api_base\": \"https://api.openai.com/v1\"\n}\n```\n\n| Parameter | Description |\n| --- | --- |\n| `model` | Matches the [model parameter](https://platform.openai.com/docs/models) of the OpenAI API. Supports o-series, gpt-5.4, gpt-5 series, gpt-4.1, etc. Recommended for Agent mode: `gpt-5.4` |\n| `open_ai_api_key` | Create at [OpenAI Platform](https://platform.openai.com/api-keys) |\n| `open_ai_api_base` | Optional. Change to use third-party proxy |\n| `bot_type` | Not required for official OpenAI models. Set to `openai` when using Claude or other non-OpenAI models via proxy |\n"
  },
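Illustrating the `bot_type` row of `docs/en/models/openai.mdx` above: when routing a non-OpenAI model through an OpenAI-compatible third-party proxy, set `bot_type` explicitly. A sketch with a hypothetical proxy URL (the model name is one the docs recommend elsewhere):

```json
{
  "bot_type": "openai",
  "model": "claude-sonnet-4-6",
  "open_ai_api_base": "https://your-proxy.example.com/v1",
  "open_ai_api_key": "YOUR_API_KEY"
}
```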
  {
    "path": "docs/en/models/qwen.mdx",
    "content": "---\ntitle: Qwen (Tongyi Qianwen)\ndescription: Tongyi Qianwen model configuration\n---\n\n```json\n{\n  \"model\": \"qwen3.5-plus\",\n  \"dashscope_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| Parameter | Description |\n| --- | --- |\n| `model` | Options include `qwen3.5-plus`, `qwen3-max`, `qwen-max`, `qwen-plus`, `qwen-turbo`, `qwq-plus`, etc. |\n| `dashscope_api_key` | Create at [Bailian Console](https://bailian.console.aliyun.com/?tab=model#/api-key). See [official docs](https://bailian.console.aliyun.com/?tab=api#/api) |\n\nOpenAI-compatible configuration is also supported:\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"qwen3.5-plus\",\n  \"open_ai_api_base\": \"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n"
  },
  {
    "path": "docs/en/releases/overview.mdx",
    "content": "---\ntitle: Changelog\ndescription: CowAgent version history\n---\n\n| Version | Date | Description |\n| --- | --- | --- |\n| [2.0.2](/en/releases/v2.0.2) | 2026.02.27 | Web Console upgrade, multi-channel concurrency, session persistence |\n| [2.0.1](/en/releases/v2.0.1) | 2026.02.27 | Built-in Web Search tool, smart context management, multiple fixes |\n| [2.0.0](/en/releases/v2.0.0) | 2026.02.03 | Full upgrade to AI super assistant |\n| 1.7.6 | 2025.05.23 | Web Channel optimization, AgentMesh plugin |\n| 1.7.5 | 2025.04.11 | DeepSeek model |\n| 1.7.4 | 2024.12.13 | Gemini 2.0 model, Web Channel |\n| 1.7.3 | 2024.10.31 | Stability improvements, database features |\n| 1.7.2 | 2024.09.26 | One-click install script, o1 model |\n| 1.7.0 | 2024.08.02 | iFlytek 4.0 model, knowledge base references |\n| 1.6.9 | 2024.07.19 | gpt-4o-mini, Alibaba voice recognition |\n| 1.6.8 | 2024.07.05 | Claude 3.5, Gemini 1.5 Pro |\n| 1.6.0 | 2024.04.26 | Kimi integration, gpt-4-turbo upgrade |\n| 1.5.0 | 2023.11.10 | gpt-4-turbo, dall-e-3, tts multimodal |\n| 1.0.0 | 2022.12.12 | Project created, first ChatGPT integration |\n\nSee [GitHub Releases](https://github.com/zhayujie/chatgpt-on-wechat/releases) for full history.\n"
  },
  {
    "path": "docs/en/releases/v2.0.0.mdx",
    "content": "---\ntitle: v2.0.0\ndescription: CowAgent 2.0 - Full upgrade from chatbot to AI super assistant\n---\n\nCowAgent 2.0 is a comprehensive upgrade from a chatbot to an **AI super assistant** — capable of autonomous thinking and task planning, long-term memory, operating computers, and creating and executing skills.\n\n**Release Date**: 2026.02.03 | [GitHub Release](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/2.0.0)\n\n## Key Updates\n\n### Agent Core\n\n- **Complex Task Planning**: Autonomous planning with multi-turn reasoning\n- **Long-term Memory**: Persistent memory with keyword and vector search\n- **Built-in Tools**: 10+ tools including file ops, Bash, browser, scheduler\n- **Web search**: Built-in `web_search` tool, supports multiple search engines, configure corresponding API key to use\n- **Skills System**: Skill engine with built-in and custom skill support\n- **Security & Cost**: Secret management, prompt controls, token limits\n\n### Other\n\n- **Channels**: Feishu/DingTalk WebSocket support, image/file messages\n- **Models**: claude-sonnet-4-5, gemini-3-pro-preview, glm-4.7, MiniMax-M2.1, qwen3-max\n- **Deployment**: One-click install, configure, run, and management script\n\n## Long-term Memory\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260203000455.png\" width=\"800\" />\n</Frame>\n\n## Task Planning & Tools\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202181130.png\" width=\"800\" />\n</Frame>\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260203121008.png\" width=\"800\" />\n</Frame>\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202195402.png\" width=\"800\" />\n</Frame>\n\n## Skills System\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202202247.png\" width=\"800\" />\n</Frame>\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202213219.png\" width=\"800\" />\n</Frame>\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202234350.png\" width=\"750\" />\n</Frame>\n\n## Contributing\n\nWelcome to [submit feedback](https://github.com/zhayujie/chatgpt-on-wechat/issues) and [contribute code](https://github.com/zhayujie/chatgpt-on-wechat/pulls).\n"
  },
  {
    "path": "docs/en/releases/v2.0.1.mdx",
    "content": "---\ntitle: v2.0.1\ndescription: CowAgent 2.0.1 - Built-in Web Search, smart context management, multiple fixes\n---\n\n**Release Date**: 2026.02.27 | [Full Changelog](https://github.com/zhayujie/chatgpt-on-wechat/compare/2.0.0..2.0.1)\n\n## New Features\n\n- **Built-in Web Search tool**: Integrated web search as a built-in Agent tool, reducing decision cost ([4f0ea5d](https://github.com/zhayujie/chatgpt-on-wechat/commit/4f0ea5d7568d61db91ff69c91c429e785fd1b1c2))\n- **Claude Opus 4.6 model support**: Added support for Claude Opus 4.6 model ([#2661](https://github.com/zhayujie/chatgpt-on-wechat/pull/2661))\n- **WeCom image recognition**: Support image message recognition in WeCom channel ([#2667](https://github.com/zhayujie/chatgpt-on-wechat/pull/2667))\n\n## Improvements\n\n- **Smart context management**: Resolved chat context overflow with intelligent context trimming strategy to prevent token limits ([cea7fb7](https://github.com/zhayujie/chatgpt-on-wechat/commit/cea7fb7490c53454602bf05955a0e9f059bcf0fd), [8acf2db](https://github.com/zhayujie/chatgpt-on-wechat/commit/8acf2dbdfe713b84ad74b761b7f86674b1c1904d)) [#2663](https://github.com/zhayujie/chatgpt-on-wechat/issues/2663)\n- **Runtime info dynamic update**: Automatic update of timestamps and other runtime info in system prompts via dynamic functions ([#2655](https://github.com/zhayujie/chatgpt-on-wechat/pull/2655), [#2657](https://github.com/zhayujie/chatgpt-on-wechat/pull/2657))\n- **Skill prompt optimization**: Improved Skill system prompt generation, simplified tool descriptions for better Agent performance ([6c21833](https://github.com/zhayujie/chatgpt-on-wechat/commit/6c218331b1f1208ea8be6bf226936d3b556ade3e))\n- **GLM custom API Base URL**: Support custom API Base URL for GLM models ([#2660](https://github.com/zhayujie/chatgpt-on-wechat/pull/2660))\n- **Startup script optimization**: Improved `run.sh` script interaction and configuration flow ([#2656](https://github.com/zhayujie/chatgpt-on-wechat/pull/2656))\n- **Decision step logging**: Added Agent decision step logging for debugging ([cb303e6](https://github.com/zhayujie/chatgpt-on-wechat/commit/cb303e6109c50c8dfef1f5e6c1ec47223bf3cd11))\n\n## Bug Fixes\n\n- **Scheduler memory loss**: Fixed memory loss caused by Scheduler dispatcher ([a77a874](https://github.com/zhayujie/chatgpt-on-wechat/commit/a77a8741b500a408c6f5c8868856fb4b018fe9db))\n- **Empty tool calls & long results**: Fixed handling of empty tool calls and excessively long tool results ([0542700](https://github.com/zhayujie/chatgpt-on-wechat/commit/0542700f9091ebb08c1a56103b0f0f45f24aa621))\n- **OpenAI Function Call**: Fixed function call compatibility with OpenAI models ([158c87a](https://github.com/zhayujie/chatgpt-on-wechat/commit/158c87ab8b05bae054cc1b4eacdbb64fc1062ba9))\n- **Claude tool name field**: Removed extraneous tool name field from Claude model responses ([eec10cb](https://github.com/zhayujie/chatgpt-on-wechat/commit/eec10cb5db6a3d5bc12ef606606532237d2c5f6e))\n- **MiniMax reasoning**: Optimized MiniMax model reasoning content handling, hidden thinking process output ([c72cda3](https://github.com/zhayujie/chatgpt-on-wechat/commit/c72cda33864bd1542012ee6e0a8bd8c6c88cb5ed), [72b1cac](https://github.com/zhayujie/chatgpt-on-wechat/commit/72b1cacea1ba0d1f3dedacbab2e088e98fd7e172))\n- **GLM thinking process**: Hidden GLM model thinking process display ([72b1cac](https://github.com/zhayujie/chatgpt-on-wechat/commit/72b1cacea1ba0d1f3dedacbab2e088e98fd7e172))\n- **Feishu connection & SSL**: Fixed 
Feishu channel SSL certificate errors and connection issues ([229b14b](https://github.com/zhayujie/chatgpt-on-wechat/commit/229b14b6fcabe7123d53cab1dea39f38dab26d6d), [8674421](https://github.com/zhayujie/chatgpt-on-wechat/commit/867442155e7f095b4f38b0856f8c1d8312b5fcf7))\n- **model_type validation**: Fixed `AttributeError` caused by non-string `model_type` ([#2666](https://github.com/zhayujie/chatgpt-on-wechat/pull/2666))\n\n## Platform Compatibility\n\n- **Windows compatibility**: Fixed path handling, file encoding, and `os.getuid()` unavailability on Windows across multiple tool modules ([051ffd7](https://github.com/zhayujie/chatgpt-on-wechat/commit/051ffd78a372f71a967fd3259e37fe19131f83cf), [5264f7c](https://github.com/zhayujie/chatgpt-on-wechat/commit/5264f7ce18360ee4db5dcb4ebe67307977d40014))\n"
  },
  {
    "path": "docs/en/releases/v2.0.2.mdx",
    "content": "---\ntitle: v2.0.2\ndescription: CowAgent 2.0.2 - Web Console upgrade, multi-channel concurrency, session persistence\n---\n\n**Release Date**: 2026.02.27 | [Full Changelog](https://github.com/zhayujie/chatgpt-on-wechat/compare/2.0.1...master)\n\n## Highlights\n\n### 🖥️ Web Console Upgrade\n\nThe Web Console has been fully upgraded with streaming conversation output, visual display of tool execution and reasoning processes, and online management of **models, skills, memory, channels, and Agent configuration**.\n\n#### Chat Interface\n\nSupports streaming output with real-time display of the Agent's reasoning process and tool calls, providing intuitive observation of the Agent's decision-making:\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227180120.png\" />\n\n#### Model Management\n\nManage model configurations online without manually editing config files:\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173811.png\" />\n\n#### Skill Management\n\nView and manage Agent skills (Skills) online:\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173403.png\" />\n\n#### Memory Management\n\nView and manage Agent memory online:\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173349.png\" />\n\n#### Channel Management\n\nManage connected channels online with real-time connect/disconnect operations:\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173331.png\" />\n\n#### Scheduled Tasks\n\nView and manage scheduled tasks online, including one-time tasks, fixed intervals, and Cron expressions:\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173704.png\" />\n\n#### Logs\n\nView Agent runtime logs in real-time for monitoring and troubleshooting:\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173514.png\" />\n\nRelated commits: [f1a1413](https://github.com/zhayujie/chatgpt-on-wechat/commit/f1a1413), [c0702c8](https://github.com/zhayujie/chatgpt-on-wechat/commit/c0702c8), [394853c](https://github.com/zhayujie/chatgpt-on-wechat/commit/394853c), [1c71c4e](https://github.com/zhayujie/chatgpt-on-wechat/commit/1c71c4e), [5e3eccb](https://github.com/zhayujie/chatgpt-on-wechat/commit/5e3eccb), [e1dc037](https://github.com/zhayujie/chatgpt-on-wechat/commit/e1dc037), [5edbf4c](https://github.com/zhayujie/chatgpt-on-wechat/commit/5edbf4c), [7d258b5](https://github.com/zhayujie/chatgpt-on-wechat/commit/7d258b5)\n\n### 🔀 Multi-Channel Concurrency\n\nMultiple channels (e.g., Feishu, DingTalk, WeCom, Web) can now run simultaneously, each in an independent thread without interference.\n\nConfiguration: Set multiple channels in `config.json` via `channel_type` separated by commas, or connect/disconnect channels in real-time from the Web Console's channel management page.\n\n```json\n{\n  \"channel_type\": \"web,feishu,dingtalk\"\n}\n```\n\nRelated commits: [4694594](https://github.com/zhayujie/chatgpt-on-wechat/commit/4694594), [7cce224](https://github.com/zhayujie/chatgpt-on-wechat/commit/7cce224), [7d258b5](https://github.com/zhayujie/chatgpt-on-wechat/commit/7d258b5), [c9adddb](https://github.com/zhayujie/chatgpt-on-wechat/commit/c9adddb)\n\n### 💾 Session Persistence\n\nSession history is now persisted to a local SQLite database. Conversation context is automatically restored after service restarts. 
Historical conversations in the Web Console are also restored.\n\nRelated commits: [29bfbec](https://github.com/zhayujie/chatgpt-on-wechat/commit/29bfbec), [9917552](https://github.com/zhayujie/chatgpt-on-wechat/commit/9917552), [925d728](https://github.com/zhayujie/chatgpt-on-wechat/commit/925d728)\n\n## New Models\n\n- **Gemini 3.1 Pro Preview**: Added `gemini-3.1-pro-preview` model support ([52d7cad](https://github.com/zhayujie/chatgpt-on-wechat/commit/52d7cad))\n- **Claude 4.6 Sonnet**: Added `claude-4.6-sonnet` model support ([52d7cad](https://github.com/zhayujie/chatgpt-on-wechat/commit/52d7cad))\n- **Qwen3.5 Plus**: Added `qwen3.5-plus` model support ([e59a289](https://github.com/zhayujie/chatgpt-on-wechat/commit/e59a289))\n- **MiniMax M2.5**: Added `Minimax-M2.5` model support ([48db538](https://github.com/zhayujie/chatgpt-on-wechat/commit/48db538))\n- **GLM-5**: Added `glm-5` model support ([48db538](https://github.com/zhayujie/chatgpt-on-wechat/commit/48db538))\n- **Kimi K2.5**: Added `kimi-k2.5` model support ([48db538](https://github.com/zhayujie/chatgpt-on-wechat/commit/48db538))\n- **Doubao 2.0 Code**: Added `doubao-2.0-code` coding-specialized model ([ab28ee5](https://github.com/zhayujie/chatgpt-on-wechat/commit/ab28ee5))\n- **DashScope Models**: Added Alibaba Cloud DashScope model name support ([ce58f23](https://github.com/zhayujie/chatgpt-on-wechat/commit/ce58f23))\n\n## Website & Documentation\n\n- **Official Website**: [cowagent.ai](https://cowagent.ai/)\n- **Documentation**: [docs.cowagent.ai](https://docs.cowagent.ai/)\n\n## Bug Fixes\n\n- **Gemini DingTalk image recognition**: Fixed Gemini unable to process image markers in DingTalk channel ([05a3304](https://github.com/zhayujie/chatgpt-on-wechat/commit/05a3304)) ([#2670](https://github.com/zhayujie/chatgpt-on-wechat/pull/2670)) Thanks [@SgtPepper114](https://github.com/SgtPepper114)\n- **Startup script dependencies**: Fixed dependency installation issue in `run.sh` script ([b6fc9fa](https://github.com/zhayujie/chatgpt-on-wechat/commit/b6fc9fa))\n- **Bare except cleanup**: Replaced `bare except` with `except Exception` for better exception handling ([adca89b](https://github.com/zhayujie/chatgpt-on-wechat/commit/adca89b)) ([#2674](https://github.com/zhayujie/chatgpt-on-wechat/pull/2674)) Thanks [@haosenwang1018](https://github.com/haosenwang1018)\n"
  },
  {
    "path": "docs/en/skills/image-vision.mdx",
    "content": "---\ntitle: Image Vision\ndescription: Recognize images using OpenAI vision models\n---\n\nAnalyze image content using OpenAI's GPT-4 Vision API, understanding objects, text, colors, and other elements in images.\n\n## Dependencies\n\n| Dependency | Description |\n| --- | --- |\n| `OPENAI_API_KEY` | OpenAI API key |\n| `curl`, `base64` | System commands (usually pre-installed) |\n\nConfiguration:\n\n- Configure `OPENAI_API_KEY` via the `env_config` tool\n- Or set `open_ai_api_key` in `config.json`\n\n## Supported Models\n\n- `gpt-4.1-mini` (recommended, cost-effective)\n- `gpt-4.1`\n\n## Usage\n\nOnce configured, send an image to the Agent to automatically trigger image recognition.\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202213219.png\" width=\"800\" />\n</Frame>\n"
  },
  {
    "path": "docs/en/skills/index.mdx",
    "content": "---\ntitle: Skills Overview\ndescription: CowAgent skills system introduction\n---\n\nSkills provide infinite extensibility for the Agent. Each Skill consists of a description file (`SKILL.md`), execution scripts (optional), and resources (optional), describing how to accomplish specific types of tasks.\n\nThe difference between Skills and Tools: Tools are atomic operations implemented in code (e.g., file read/write, command execution), while Skills are high-level workflows based on description files that can combine multiple Tools to complete complex tasks.\n\n## Built-in Skills\n\nLocated in the project `skills/` directory, automatically enabled based on dependency conditions:\n\n| Skill | Description | Dependencies |\n| --- | --- | --- |\n| [`skill-creator`](/en/skills/skill-creator) | Create custom skills through conversation | None |\n| [`openai-image-vision`](/en/skills/image-vision) | Recognize images using OpenAI vision models | `OPENAI_API_KEY` |\n| [`linkai-agent`](/en/skills/linkai-agent) | Integrate LinkAI platform agents | `LINKAI_API_KEY` |\n| [`web-fetch`](/en/skills/web-fetch) | Fetch web page text content | `curl` (enabled by default) |\n\n## Custom Skills\n\nCreated by users through conversation, stored in workspace (`~/cow/skills/`), can implement any complex business process and third-party system integration.\n\n## Skill Loading Priority\n\n1. **Workspace skills** (highest): `~/cow/skills/`\n2. **Project built-in skills** (lowest): `skills/`\n\nSkills with the same name are overridden by priority.\n\n## Skill File Structure\n\n```\nskills/\n├── my-skill/\n│   ├── SKILL.md          # Skill description (frontmatter + instructions)\n│   ├── scripts/          # Execution scripts (optional)\n│   └── resources/        # Additional resources (optional)\n```\n\n### SKILL.md Format\n\n```markdown\n---\nname: my-skill\ndescription: Brief description of the skill\nmetadata:\n  emoji: 🔧\n  requires:\n    bins: [\"curl\"]\n    env: [\"MY_API_KEY\"]\n  primaryEnv: \"MY_API_KEY\"\n---\n\n# My Skill\n\nDetailed instructions...\n```\n\n| Field | Description |\n| --- | --- |\n| `name` | Skill name, must match directory name |\n| `description` | Skill description, Agent decides whether to invoke based on this |\n| `metadata.requires.bins` | Required system commands |\n| `metadata.requires.env` | Required environment variables |\n| `metadata.always` | Always load (default false) |\n"
  },
  {
    "path": "docs/en/skills/linkai-agent.mdx",
    "content": "---\ntitle: LinkAI Agent\ndescription: Integrate LinkAI platform multi-agent skill\n---\n\nUse agents from the [LinkAI](https://link-ai.tech/) platform as Skills for multi-agent decision-making. The Agent intelligently selects based on agent names and descriptions, calling the corresponding application or workflow via `app_code`.\n\n## Dependencies\n\n| Dependency | Description |\n| --- | --- |\n| `LINKAI_API_KEY` | LinkAI platform API key, created in [Console](https://link-ai.tech/console/interface) |\n| `curl` | System command (usually pre-installed) |\n\nConfiguration:\n\n- Configure `LINKAI_API_KEY` via the `env_config` tool\n- Or set `linkai_api_key` in `config.json`\n\n## Configure Agents\n\nAdd available agents in `skills/linkai-agent/config.json`:\n\n```json\n{\n  \"apps\": [\n    {\n      \"app_code\": \"G7z6vKwp\",\n      \"app_name\": \"LinkAI Customer Support\",\n      \"app_description\": \"Select this assistant only when the user needs help with LinkAI platform questions\"\n    },\n    {\n      \"app_code\": \"SFY5x7JR\",\n      \"app_name\": \"Content Creator\",\n      \"app_description\": \"Use this assistant only when the user needs to create images or videos\"\n    }\n  ]\n}\n```\n\n## Usage\n\nOnce configured, the Agent will automatically select the appropriate LinkAI agent based on the user's question.\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202234350.png\" width=\"750\" />\n</Frame>\n"
  },
  {
    "path": "docs/en/skills/skill-creator.mdx",
    "content": "---\ntitle: Skill Creator\ndescription: Create custom skills through conversation\n---\n\nQuickly create, install, or update skills through natural language conversation.\n\n## Dependencies\n\nNo extra dependencies, always available.\n\n## Usage\n\n- Codify workflows as skills: \"Create a skill from this deployment process\"\n- Integrate third-party APIs: \"Create a skill based on this API documentation\"\n- Install remote skills: \"Install xxx skill for me\"\n\n## Creation Flow\n\n1. Tell the Agent what skill you want to create\n2. Agent automatically generates `SKILL.md` description and execution scripts\n3. Skill is saved to the workspace `~/cow/skills/` directory\n4. Agent will automatically recognize and use the skill in future conversations\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202202247.png\" width=\"800\" />\n</Frame>\n\n<Tip>\n  See the [Skill Creator documentation](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/skills/skill-creator/SKILL.md) for details.\n</Tip>\n"
  },
  {
    "path": "docs/en/skills/web-fetch.mdx",
    "content": "---\ntitle: Web Fetch\ndescription: Fetch web page text content\n---\n\nUse curl to fetch web pages and extract readable text content. A lightweight web access method without browser automation.\n\n## Dependencies\n\n| Dependency | Description |\n| --- | --- |\n| `curl` | System command (usually pre-installed) |\n\nThis skill has `always: true` set, enabled by default as long as the system has the `curl` command.\n\n## Usage\n\nAutomatically invoked when the Agent needs to fetch content from a URL, no extra configuration needed.\n\n## Comparison with browser Tool\n\n| Feature | web-fetch (skill) | browser (tool) |\n| --- | --- | --- |\n| Dependencies | curl only | browser-use + playwright |\n| JS rendering | Not supported | Supported |\n| Page interaction | Not supported | Supports click, type, etc. |\n| Best for | Static page text | Dynamic web pages |\n\n<Tip>\n  For most web content retrieval scenarios, web-fetch is sufficient. Only use the browser tool when you need JS rendering or page interaction.\n</Tip>\n"
  },
  {
    "path": "docs/en/tools/bash.mdx",
    "content": "---\ntitle: bash - Terminal\ndescription: Execute system commands\n---\n\nExecute Bash commands in the current working directory, returns stdout and stderr. API keys configured via `env_config` are automatically injected into the environment.\n\n## Dependencies\n\nNo extra dependencies, available by default.\n\n## Parameters\n\n| Parameter | Type | Required | Description |\n| --- | --- | --- | --- |\n| `command` | string | Yes | Command to execute |\n| `timeout` | integer | No | Timeout in seconds |\n\n## Use Cases\n\n- Install packages and dependencies\n- Run code and tests\n- Deploy applications and services (Nginx config, process management, etc.)\n- System administration and troubleshooting\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260203121008.png\" width=\"800\" />\n</Frame>\n"
  },
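An illustrative call for the `bash` tool documented above, assuming the Agent passes arguments as a JSON object keyed by the parameter names in the table (the command shown is just an example):

```json
{
  "command": "tail -n 20 run.log",
  "timeout": 30
}
```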
  {
    "path": "docs/en/tools/browser.mdx",
    "content": "---\ntitle: browser - Browser\ndescription: Access and interact with web pages\n---\n\nUse a browser to access and interact with web pages, supports JavaScript-rendered dynamic pages.\n\n## Dependencies\n\n| Dependency | Install Command |\n| --- | --- |\n| `browser-use` ≥ 0.1.40 | `pip install browser-use` |\n| `markdownify` | `pip install markdownify` |\n| `playwright` + chromium | `pip install playwright && playwright install chromium` |\n\n## Use Cases\n\n- Access specific URLs to get page content\n- Interact with web page elements (click, type, etc.)\n- Verify deployed web pages\n- Scrape dynamic content requiring JS rendering\n\n<Note>\n  The browser tool has heavy dependencies. If not needed, skip installation. For lightweight web content retrieval, use the `web-fetch` skill instead.\n</Note>\n"
  },
  {
    "path": "docs/en/tools/edit.mdx",
    "content": "---\ntitle: edit - File Edit\ndescription: Edit files via precise text replacement\n---\n\nEdit files via precise text replacement. If `oldText` is empty, appends to the end of the file.\n\n## Dependencies\n\nNo extra dependencies, available by default.\n\n## Parameters\n\n| Parameter | Type | Required | Description |\n| --- | --- | --- | --- |\n| `path` | string | Yes | File path |\n| `oldText` | string | Yes | Original text to replace (empty to append) |\n| `newText` | string | Yes | Replacement text |\n\n## Use Cases\n\n- Modify specific parameters in configuration files\n- Fix bugs in code\n- Insert content at specific positions in files\n"
  },
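An illustrative call for the `edit` tool above, again assuming JSON-object arguments keyed by the parameter names; the file and replaced text are hypothetical:

```json
{
  "path": "config.json",
  "oldText": "\"model\": \"glm-4-flash\"",
  "newText": "\"model\": \"glm-5-turbo\""
}
```

Since the replacement works on exact text matches, including surrounding characters (as here) keeps the edit unambiguous.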
  {
    "path": "docs/en/tools/env-config.mdx",
    "content": "---\ntitle: env_config - Environment\ndescription: Manage API keys and secrets\n---\n\nManage environment variables (API keys and secrets) in the workspace `.env` file, with secure conversational updates. Built-in security protection and desensitization.\n\n## Dependencies\n\n| Dependency | Install Command |\n| --- | --- |\n| `python-dotenv` ≥ 1.0.0 | `pip install python-dotenv>=1.0.0` |\n\nIncluded when installing optional dependencies: `pip3 install -r requirements-optional.txt`\n\n## Parameters\n\n| Parameter | Type | Required | Description |\n| --- | --- | --- | --- |\n| `action` | string | Yes | Operation type: `get`, `set`, `list`, `delete` |\n| `key` | string | No | Environment variable name |\n| `value` | string | No | Environment variable value (only for `set`) |\n\n## Usage\n\nTell the Agent what key you need to configure, and it will automatically invoke this tool:\n\n- \"Configure my BOCHA_API_KEY\"\n- \"Set OPENAI_API_KEY to sk-xxx\"\n- \"Show configured environment variables\"\n\nConfigured keys are automatically injected into the `bash` tool's execution environment.\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202234939.png\" width=\"800\" />\n</Frame>\n"
  },
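An illustrative `env_config` call matching the parameter table above (the key name comes from the docs' own usage examples; the value is a placeholder, and the JSON-object argument format is assumed):

```json
{
  "action": "set",
  "key": "BOCHA_API_KEY",
  "value": "sk-xxx"
}
```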
  {
    "path": "docs/en/tools/index.mdx",
    "content": "---\ntitle: Tools Overview\ndescription: CowAgent built-in tools system\n---\n\nTools are the core capability for Agent to access operating system resources. The Agent intelligently selects and invokes tools based on task requirements, performing file operations, command execution, web search, scheduled tasks, and more. Tools are implemented in the `agent/tools/` directory.\n\n## Built-in Tools\n\nThe following tools are available by default with no extra configuration:\n\n<CardGroup cols={2}>\n  <Card title=\"read - File Read\" icon=\"file\" href=\"/en/tools/read\">\n    Read file content, supports text, images, PDF\n  </Card>\n  <Card title=\"write - File Write\" icon=\"pen\" href=\"/en/tools/write\">\n    Create or overwrite files\n  </Card>\n  <Card title=\"edit - File Edit\" icon=\"pen-to-square\" href=\"/en/tools/edit\">\n    Edit files via precise text replacement\n  </Card>\n  <Card title=\"ls - Directory List\" icon=\"folder-open\" href=\"/en/tools/ls\">\n    List directory contents\n  </Card>\n  <Card title=\"bash - Terminal\" icon=\"terminal\" href=\"/en/tools/bash\">\n    Execute system commands\n  </Card>\n  <Card title=\"send - File Send\" icon=\"paper-plane\" href=\"/en/tools/send\">\n    Send files or images to user\n  </Card>\n  <Card title=\"memory - Memory\" icon=\"brain\" href=\"/en/tools/memory\">\n    Search and read long-term memory\n  </Card>\n</CardGroup>\n\n## Optional Tools\n\nThe following tools require additional dependencies or API key configuration:\n\n<CardGroup cols={2}>\n  <Card title=\"env_config - Environment\" icon=\"key\" href=\"/en/tools/env-config\">\n    Manage API keys and secrets\n  </Card>\n  <Card title=\"scheduler - Scheduler\" icon=\"clock\" href=\"/en/tools/scheduler\">\n    Create and manage scheduled tasks\n  </Card>\n  <Card title=\"web_search - Web Search\" icon=\"magnifying-glass\" href=\"/en/tools/web-search\">\n    Search the internet for real-time information\n  </Card>\n</CardGroup>\n"
  },
  {
    "path": "docs/en/tools/ls.mdx",
    "content": "---\ntitle: ls - Directory List\ndescription: List directory contents\n---\n\nList directory contents, sorted alphabetically, directories suffixed with `/`, includes hidden files.\n\n## Dependencies\n\nNo extra dependencies, available by default.\n\n## Parameters\n\n| Parameter | Type | Required | Description |\n| --- | --- | --- | --- |\n| `path` | string | Yes | Directory path, relative paths are based on workspace directory |\n| `limit` | integer | No | Maximum entries to return, default 500 |\n\n## Use Cases\n\n- Browse project structure\n- Find specific files\n- Check if a directory exists\n"
  },
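An illustrative `ls` call, assuming JSON-object arguments as in the table above; per the table, the relative path is resolved against the workspace directory:

```json
{
  "path": "skills",
  "limit": 100
}
```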
  {
    "path": "docs/en/tools/memory.mdx",
    "content": "---\ntitle: memory - Memory\ndescription: Search and read long-term memory\n---\n\nThe memory tool contains two sub-tools: `memory_search` (search memory) and `memory_get` (read memory files).\n\n## Dependencies\n\nNo extra dependencies, available by default. Managed by the Agent Core memory system.\n\n## memory_search\n\nSearch historical memory with hybrid keyword and vector retrieval.\n\n| Parameter | Type | Required | Description |\n| --- | --- | --- | --- |\n| `query` | string | Yes | Search query |\n\n## memory_get\n\nRead the content of a specific memory file.\n\n| Parameter | Type | Required | Description |\n| --- | --- | --- | --- |\n| `path` | string | Yes | Relative path to memory file (e.g. `MEMORY.md`, `memory/2026-01-01.md`) |\n| `start_line` | integer | No | Start line number |\n| `end_line` | integer | No | End line number |\n\n## How It Works\n\nThe Agent automatically invokes memory tools in these scenarios:\n\n- When the user shares important information → stores to memory\n- When historical context is needed → searches relevant memory\n- When conversation reaches a certain length → extracts summary for storage\n"
  },
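An illustrative `memory_get` call for the memory tool above, using the path format from its parameter table and assuming JSON-object arguments; it would read the first 40 lines of a daily memory file:

```json
{
  "path": "memory/2026-01-01.md",
  "start_line": 1,
  "end_line": 40
}
```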
  {
    "path": "docs/en/tools/read.mdx",
    "content": "---\ntitle: read - File Read\ndescription: Read file content\n---\n\nRead file content. Supports text files, PDF files, images (returns metadata), and more.\n\n## Dependencies\n\nNo extra dependencies, available by default.\n\n## Parameters\n\n| Parameter | Type | Required | Description |\n| --- | --- | --- | --- |\n| `path` | string | Yes | File path, relative paths are based on workspace directory |\n| `offset` | integer | No | Start line number (1-indexed), negative values read from the end |\n| `limit` | integer | No | Number of lines to read |\n\n## Use Cases\n\n- View configuration files, log files\n- Read code files for analysis\n- Check image/video file info\n"
  },
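An illustrative `read` call, assuming JSON-object arguments; per the parameter table a negative `offset` reads from the end of the file, so this sketch would return the last 50 lines of the log:

```json
{
  "path": "run.log",
  "offset": -50
}
```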
  {
    "path": "docs/en/tools/scheduler.mdx",
    "content": "---\ntitle: scheduler - Scheduler\ndescription: Create and manage scheduled tasks\n---\n\nCreate and manage dynamic scheduled tasks with flexible scheduling and execution modes.\n\n## Dependencies\n\n| Dependency | Install Command |\n| --- | --- |\n| `croniter` ≥ 2.0.0 | `pip install croniter>=2.0.0` |\n\nIncluded in core dependencies: `pip3 install -r requirements.txt`\n\n## Scheduling Modes\n\n| Mode | Description |\n| --- | --- |\n| One-time | Execute once at a specified time |\n| Fixed interval | Repeat at fixed time intervals |\n| Cron expression | Define complex schedules using Cron syntax |\n\n## Execution Modes\n\n- **Fixed message**: Send a preset message when triggered\n- **Agent dynamic task**: Agent intelligently executes the task when triggered\n\n## Usage\n\nCreate and manage scheduled tasks with natural language:\n\n- \"Send me a weather report every morning at 9 AM\"\n- \"Check server status every 2 hours\"\n- \"Remind me about the meeting tomorrow at 3 PM\"\n- \"Show all scheduled tasks\"\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202195402.png\" width=\"800\" />\n</Frame>\n"
  },
  {
    "path": "docs/en/tools/send.mdx",
    "content": "---\ntitle: send - File Send\ndescription: Send files to user\n---\n\nSend files to the user (images, videos, audio, documents, etc.), used when the user explicitly requests to send/share a file.\n\n## Dependencies\n\nNo extra dependencies, available by default.\n\n## Parameters\n\n| Parameter | Type | Required | Description |\n| --- | --- | --- | --- |\n| `path` | string | Yes | File path, can be absolute or relative to workspace |\n| `message` | string | No | Accompanying message |\n\n## Use Cases\n\n- Send generated code or documents to the user\n- Send screenshots, charts\n- Share downloaded files\n"
  },
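An illustrative `send` call, assuming JSON-object arguments; the file path and message are hypothetical:

```json
{
  "path": "output/report.pdf",
  "message": "Here is the report you asked for."
}
```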
  {
    "path": "docs/en/tools/web-search.mdx",
    "content": "---\ntitle: web_search - Web Search\ndescription: Search the internet for real-time information\n---\n\nSearch the internet for real-time information, news, research, and more. Supports two search backends with automatic fallback.\n\n## Dependencies\n\nRequires at least one search API key (configured via `env_config` tool or workspace `.env` file):\n\n| Backend | Environment Variable | Priority | How to Get |\n| --- | --- | --- | --- |\n| Bocha Search | `BOCHA_API_KEY` | Primary | [Bocha Open Platform](https://open.bochaai.com/) |\n| LinkAI Search | `LINKAI_API_KEY` | Fallback | [LinkAI Console](https://link-ai.tech/console/interface) |\n\n## Parameters\n\n| Parameter | Type | Required | Description |\n| --- | --- | --- | --- |\n| `query` | string | Yes | Search keywords |\n| `count` | integer | No | Number of results (1-50, default 10) |\n| `freshness` | string | No | Time range: `noLimit`, `oneDay`, `oneWeek`, `oneMonth`, `oneYear`, or date range like `2025-01-01..2025-02-01` |\n| `summary` | boolean | No | Return page summaries (default false) |\n\n## Use Cases\n\nWhen the user asks about latest information, needs fact-checking, or real-time data, the Agent automatically invokes this tool.\n\n<Note>\n  If no search API key is configured, this tool will not be loaded.\n</Note>\n"
  },
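An illustrative `web_search` call built from the parameter table above (JSON-object arguments assumed; the query is just an example):

```json
{
  "query": "CowAgent 2.0 release notes",
  "count": 5,
  "freshness": "oneWeek"
}
```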
  {
    "path": "docs/en/tools/write.mdx",
    "content": "---\ntitle: write - File Write\ndescription: Create or overwrite files\n---\n\nWrite content to a file. Creates the file if it doesn't exist, overwrites if it does. Automatically creates parent directories.\n\n## Dependencies\n\nNo extra dependencies, available by default.\n\n## Parameters\n\n| Parameter | Type | Required | Description |\n| --- | --- | --- | --- |\n| `path` | string | Yes | File path |\n| `content` | string | Yes | Content to write |\n\n## Use Cases\n\n- Create new code files or scripts\n- Generate configuration files\n- Save processing results\n\n<Note>\n  Single writes should not exceed 10KB. For large files, create a skeleton first, then use the edit tool to add content in chunks.\n</Note>\n"
  },
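An illustrative `write` call, assuming JSON-object arguments; the script path and content are hypothetical and stay well under the 10KB guidance in the note above:

```json
{
  "path": "scripts/hello.py",
  "content": "print('hello from CowAgent')\n"
}
```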
  {
    "path": "docs/guide/manual-install.mdx",
    "content": "---\ntitle: 手动安装\ndescription: 手动部署 CowAgent（源码 / Docker）\n---\n\n## 源码部署\n\n### 1. 克隆项目代码\n\n```bash\ngit clone https://github.com/zhayujie/chatgpt-on-wechat\ncd chatgpt-on-wechat/\n```\n\n<Tip>\n  若遇到网络问题可使用国内仓库地址：https://gitee.com/zhayujie/chatgpt-on-wechat\n</Tip>\n\n### 2. 安装依赖\n\n核心依赖（必选）：\n\n```bash\npip3 install -r requirements.txt\n```\n\n扩展依赖（可选，建议安装）：\n\n```bash\npip3 install -r requirements-optional.txt\n```\n\n### 3. 配置\n\n复制配置文件模板并编辑：\n\n```bash\ncp config-template.json config.json\n```\n\n在 `config.json` 中填写模型 API Key 和通道类型等配置，详细说明参考各 [模型文档](/models/minimax)。\n\n### 4. 运行\n\n**本地运行：**\n\n```bash\npython3 app.py\n```\n\n运行后默认启动 Web 控制台，访问 `http://localhost:9899` 开始对话和管理Agent。\n\n**服务器后台运行：**\n\n```bash\nnohup python3 app.py & tail -f nohup.out\n```\n\n<Tip>\n  如果在服务器上部署，需要在防火墙或安全组中放行 `9899` 端口才能通过浏览器访问 Web 控制台，建议仅对指定IP开放以保证安全。\n</Tip>\n\n## Docker 部署\n\n使用 Docker 部署无需下载源码和安装依赖。Agent模式下更推荐使用源码部署以获得更多系统访问能力。\n\n<Note>\n  需要安装 [Docker](https://docs.docker.com/engine/install/) 和 docker-compose。\n</Note>\n\n**1. 下载配置文件**\n\n```bash\ncurl -O https://cdn.link-ai.tech/code/cow/docker-compose.yml\n```\n\n打开 `docker-compose.yml` 填写所需配置。\n\n**2. 启动容器**\n\n```bash\nsudo docker compose up -d\n```\n\n**3. 查看日志**\n\n```bash\nsudo docker logs -f chatgpt-on-wechat\n```\n\n<Tip>\n  如果在服务器上部署，需要在防火墙或安全组中放行 `9899` 端口才能通过浏览器访问 Web 控制台，建议仅对指定IP开放以保证安全。\n</Tip>\n\n## 核心配置项\n\n```json\n{\n  \"channel_type\": \"web\",\n  \"model\": \"MiniMax-M2.5\",\n  \"agent\": true,\n  \"agent_workspace\": \"~/cow\",\n  \"agent_max_context_tokens\": 40000,\n  \"agent_max_context_turns\": 30,\n  \"agent_max_steps\": 15\n}\n```\n\n| 参数 | 说明 | 默认值 |\n| --- | --- | --- |\n| `channel_type` | 接入渠道类型 | `web` |\n| `model` | 模型名称 | `MiniMax-M2.5` |\n| `agent` | 是否启用 Agent 模式 | `true` |\n| `agent_workspace` | Agent 工作空间路径 | `~/cow` |\n| `agent_max_context_tokens` | 最大上下文 tokens | `40000` |\n| `agent_max_context_turns` | 最大上下文记忆轮次 | `30` |\n| `agent_max_steps` | 单次任务最大决策步数 | `15` |\n\n<Tip>\n  全部配置项可在项目 [`config.py`](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/config.py) 文件中查看。\n</Tip>\n"
  },
  {
    "path": "docs/guide/quick-start.mdx",
    "content": "---\ntitle: 一键安装\ndescription: 使用脚本一键安装和管理 CowAgent\n---\n\n项目提供了一键安装、配置、启动、管理程序的脚本，推荐使用脚本快速运行。\n\n支持 Linux、macOS、Windows 操作系统，需安装 Python 3.7 ~ 3.12（推荐 3.9）。\n\n## 安装命令\n\n```bash\nbash <(curl -fsSL https://cdn.link-ai.tech/code/cow/run.sh)\n```\n\n脚本自动执行以下流程：\n\n1. 检查 Python 环境（需要 Python 3.7+）\n2. 安装必要工具（git、curl 等）\n3. 克隆项目代码到 `~/chatgpt-on-wechat`\n4. 安装 Python 依赖\n5. 引导配置 AI 模型和通信渠道\n6. 启动服务\n\n运行后默认启动 Web 控制台，访问 `http://localhost:9899` 开始对话和管理Agent。\n\n## 管理命令\n\n安装完成后，可使用以下命令管理服务：\n\n| 命令 | 说明 |\n| --- | --- |\n| `./run.sh start` | 启动服务 |\n| `./run.sh stop` | 停止服务 |\n| `./run.sh restart` | 重启服务 |\n| `./run.sh status` | 查看运行状态 |\n| `./run.sh logs` | 查看实时日志 |\n| `./run.sh config` | 重新配置 |\n| `./run.sh update` | 更新项目代码 |\n"
  },
  {
    "path": "docs/guide/upgrade.mdx",
    "content": "---\ntitle: 更新升级\ndescription: CowAgent 的升级方式说明\n---\n\n## 脚本升级（推荐）\n\n如果使用 `run.sh` 管理服务，执行以下命令即可一键升级：\n\n```bash\n./run.sh update\n```\n\n该命令会自动完成以下流程：\n\n1. 停止当前运行的服务\n2. 拉取最新代码\n3. 重新检查依赖\n4. 启动服务\n\n## 手动升级\n\n在项目根目录下执行：\n\n```bash\ngit pull\npip3 install -r requirements.txt\n```\n\n更新完成后重启服务：\n\n```bash\n# 如果使用 run.sh 管理\n./run.sh restart\n\n# 如果使用 nohup 直接运行\nkill $(ps -ef | grep app.py | grep -v grep | awk '{print $2}')\nnohup python3 app.py & tail -f nohup.out\n```\n\n## Docker 升级\n\n在 `docker-compose.yml` 所在目录下执行：\n\n```bash\nsudo docker compose pull\nsudo docker compose up -d\n```\n\n<Tip>\n  升级前建议备份 `config.json` 配置文件。Docker 环境下如需保留数据，可通过 volume 挂载持久化工作空间目录。\n</Tip>\n"
  },
  {
    "path": "docs/intro/architecture.mdx",
    "content": "---\ntitle: 项目架构\ndescription: CowAgent 2.0 的系统架构和核心设计\n---\n\nCowAgent 2.0 从简单的聊天机器人全面升级为超级智能助理，采用 Agent 架构设计，具备自主思考、规划任务、长期记忆和技能扩展等能力。\n\n## 系统架构\n\nCowAgent 的整体架构由以下核心模块组成：\n\n<img src=\"https://cdn.link-ai.tech/doc/68ef7b212c6f791e0e74314b912149f9-sz_5847990.png\" alt=\"CowAgent Architecture\" />\n\n### 核心模块说明\n\n| 模块 | 说明 |\n| --- | --- |\n| **Channels** | 消息通道层，负责接收和发送消息，支持 Web、飞书、钉钉、企微、公众号等 |\n| **Agent Core** | 智能体核心引擎，包括任务规划、记忆系统和技能引擎 |\n| **Tools** | 工具层，Agent 通过工具访问操作系统资源，内置 10+ 种工具 |\n| **Models** | 模型层，支持国内外主流大语言模型的统一接入 |\n\n## Agent 模式\n\n启用 Agent 模式后，CowAgent 会以自主智能体的方式运行，核心工作流如下：\n\n1. **接收消息** - 通过通道接收用户输入\n2. **理解意图** - 分析任务需求和上下文\n3. **规划任务** - 将复杂任务分解为多个步骤\n4. **调用工具** - 选择合适的工具执行每个步骤\n5. **记忆更新** - 将重要信息存入长期记忆\n6. **返回结果** - 将执行结果发送回用户\n\n## 工作空间\n\nAgent 的工作空间默认位于 `~/cow` 目录，用于存储系统提示词、记忆文件、技能文件等：\n\n```\n~/cow/\n├── system.md          # Agent system prompt\n├── user.md            # User profile\n├── memory/            # Long-term memory storage\n│   ├── core.md        # Core memory\n│   └── daily/         # Daily memory\n└── skills/            # Custom skills\n    ├── skill-1/\n    └── skill-2/\n```\n\n秘钥文件单独存储在 `~/.cow` 目录（出于安全考虑）：\n\n```\n~/.cow/\n└── .env               # Secret keys for skills\n```\n\n## 核心配置\n\n在 `config.json` 中配置 Agent 模式的核心参数：\n\n```json\n{\n  \"agent\": true,\n  \"agent_workspace\": \"~/cow\",\n  \"agent_max_context_tokens\": 40000,\n  \"agent_max_context_turns\": 30,\n  \"agent_max_steps\": 15\n}\n```\n\n| 参数 | 说明 | 默认值 |\n| --- | --- | --- |\n| `agent` | 是否启用 Agent 模式 | `true` |\n| `agent_workspace` | 工作空间路径 | `~/cow` |\n| `agent_max_context_tokens` | 最大上下文 token 数 | `40000` |\n| `agent_max_context_turns` | 最大上下文记忆轮次 | `30` |\n| `agent_max_steps` | 单次任务最大决策步数 | `15` |\n"
  },
  {
    "path": "docs/intro/features.mdx",
    "content": "---\ntitle: 功能介绍\ndescription: CowAgent 长期记忆、任务规划、技能系统详细说明\n---\n\n## 1. 长期记忆\n\n> 记忆系统让 Agent 能够长期记住重要信息。Agent 会在用户分享偏好、决策、事实等重要信息时主动存储，也会在对话达到一定长度时自动提取摘要。记忆分为核心记忆、天级记忆，支持语义搜索和向量检索的混合检索模式。\n\n第一次启动 Agent 时，Agent 会主动询问关键信息，并记录至工作空间（默认 `~/cow`）中的智能体设定、用户身份、记忆文件中。\n\n在后续的长期对话中，Agent 会在需要时智能记录或检索记忆，并对自身设定、用户偏好、记忆文件等进行不断更新，总结和记录经验和教训，真正实现自主思考和不断成长。\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260203000455.png\" width=\"800\" />\n</Frame>\n\n## 2. 任务规划和工具调用\n\n工具是 Agent 访问操作系统资源的核心，Agent 会根据任务需求智能选择和调用工具，完成文件读写、命令执行、定时任务等各类操作。内置工具的实现在项目的 `agent/tools/` 目录下。\n\n**主要工具：** 文件读写编辑、Bash 终端、文件发送、定时调度、记忆搜索、联网搜索、环境配置等。\n\n### 2.1 终端和文件访问\n\n针对操作系统的终端和文件的访问能力，是最基础和核心的工具，其他很多工具或技能都是基于此进行扩展。用户可通过手机端与 Agent 交互，操作个人电脑或服务器上的资源：\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202181130.png\" width=\"800\" />\n</Frame>\n\n### 2.2 编程能力\n\n基于编程能力和系统访问能力，Agent 可以实现从信息搜索、图片等素材生成、编码、测试、部署、Nginx 配置修改、发布的 **Vibecoding 全流程**，通过手机端简单的一句命令完成应用的快速 demo：\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260203121008.png\" width=\"800\" />\n</Frame>\n\n### 2.3 定时任务\n\n基于 `scheduler` 工具实现动态定时任务，支持**一次性任务、固定时间间隔、Cron 表达式**三种形式，任务触发可选择**固定消息发送**或 **Agent 动态任务**执行两种模式：\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202195402.png\" width=\"800\" />\n</Frame>\n\n### 2.4 环境变量管理\n\n技能所需的秘钥存储在环境变量文件中，由 `env_config` 工具进行管理，你可以通过对话的方式更新秘钥，工具内置安全保护和脱敏策略：\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202234939.png\" width=\"800\" />\n</Frame>\n\n## 3. 技能系统\n\n技能系统为 Agent 提供无限的扩展性，每个 Skill 由说明文件、运行脚本（可选）、资源（可选）组成，描述如何完成特定类型的任务。通过 Skill 可以让 Agent 遵循说明完成复杂流程、调用各类工具或对接第三方系统。\n\n- **内置技能：** 在项目的 `skills/` 目录下，包含技能创造器、图像识别、LinkAI 智能体、网页抓取等。内置 Skill 根据依赖条件（API Key、系统命令等）自动判断是否启用。\n- **自定义技能：** 由用户通过对话创建，存放在工作空间中（`~/cow/skills/`），可实现任何复杂的业务流程和第三方系统对接。\n\n### 3.1 创建技能\n\n通过 `skill-creator` 技能可以通过对话的方式快速创建技能。你可以让 Agent 将某个工作流程固化为技能，或者把任意接口文档和示例发送给 Agent，让他直接完成对接：\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202202247.png\" width=\"800\" />\n</Frame>\n\n### 3.2 搜索和图像识别\n\n- **联网搜索：** 内置 `web_search` 工具，支持多种搜索引擎，配置 `BOCHA_API_KEY` 或 `LINKAI_API_KEY` 后启用。\n- **图像识别：** 内置 `openai-image-vision` 技能，可使用 `gpt-4.1-mini`、`gpt-4.1` 等模型，依赖 `OPENAI_API_KEY`。\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202213219.png\" width=\"800\" />\n</Frame>\n\n### 3.3 三方知识库和插件\n\n`linkai-agent` 技能可以将 [LinkAI](https://link-ai.tech/) 上的所有智能体作为 Skill 交给 Agent 使用，实现多智能体决策效果。\n\n配置方式：通过 `env_config` 配置 `LINKAI_API_KEY`，并在 `skills/linkai-agent/config.json` 中添加智能体说明：\n\n```json\n{\n  \"apps\": [\n    {\n      \"app_code\": \"G7z6vKwp\",\n      \"app_name\": \"LinkAI客服助手\",\n      \"app_description\": \"当用户需要了解LinkAI平台相关问题时才选择该助手\"\n    },\n    {\n      \"app_code\": \"SFY5x7JR\",\n      \"app_name\": \"内容创作助手\",\n      \"app_description\": \"当用户需要创作图片或视频时才使用该助手\"\n    }\n  ]\n}\n```\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202234350.png\" width=\"750\" />\n</Frame>\n"
  },
  {
    "path": "docs/intro/index.mdx",
    "content": "---\ntitle: 项目介绍\ndescription: CowAgent - 基于大模型的超级AI助理\n---\n\n<img src=\"https://cdn.link-ai.tech/doc/78c5dd674e2c828642ecc0406669fed7.png\" alt=\"CowAgent\" width=\"450px\"/>\n\n**CowAgent** 是基于大模型的超级AI助理，能够主动思考和任务规划、操作计算机和外部资源、创造和执行Skills、拥有长期记忆并不断成长。\n\nCowAgent 支持灵活切换多种模型，能处理文本、语音、图片、文件等多模态消息，可接入网页、飞书、钉钉、企业微信应用、微信公众号中使用，7×24小时运行于你的个人电脑或服务器中。\n\n<CardGroup cols={2}>\n  <Card title=\"GitHub\" icon=\"github\" href=\"https://github.com/zhayujie/chatgpt-on-wechat\">\n    开源代码仓库，欢迎 Star 和贡献\n  </Card>\n  <Card title=\"免部署在线体验\" icon=\"cloud\" href=\"https://link-ai.tech/cowagent/create\">\n    无需安装，立即在线体验 CowAgent\n  </Card>\n</CardGroup>\n\n## 核心能力\n\n<CardGroup cols={2}>\n  <Card title=\"复杂任务规划\" icon=\"brain\" href=\"/intro/architecture\">\n    能够理解复杂任务并自主规划执行，持续思考和调用工具直到完成目标，支持通过工具操作访问文件、终端、浏览器、定时任务等系统资源。\n  </Card>\n  <Card title=\"长期记忆\" icon=\"database\" href=\"/memory\">\n    自动将对话记忆持久化至本地文件和数据库中，包括全局记忆和天级记忆，支持关键词及向量检索。\n  </Card>\n  <Card title=\"技能系统\" icon=\"puzzle-piece\" href=\"/skills/index\">\n    实现了Skills创建和运行的引擎，内置多种技能，并支持通过自然语言对话完成自定义Skills开发。\n  </Card>\n  <Card title=\"多模态消息\" icon=\"image\" href=\"/channels/web\">\n    支持对文本、图片、语音、文件等多类型消息进行解析、处理、生成、发送等操作。\n  </Card>\n  <Card title=\"多模型接入\" icon=\"microchip\" href=\"/models/index\">\n    支持 OpenAI, Claude, Gemini, DeepSeek, MiniMax, GLM, Qwen, Kimi, Doubao 等国内外主流模型厂商。\n  </Card>\n  <Card title=\"多端部署\" icon=\"server\" href=\"/channels/web\">\n    支持运行在本地计算机或服务器，可集成到网页、飞书、钉钉、微信公众号、企业微信应用中使用。\n  </Card>\n</CardGroup>\n\n## 快速体验\n\n在终端执行以下命令，即可一键安装、配置、启动 CowAgent：\n\n```bash\nbash <(curl -fsSL https://cdn.link-ai.tech/code/cow/run.sh)\n```\n\n运行后默认会启动 Web 服务，通过访问 `http://localhost:9899/chat` 在网页端对话。\n\n<CardGroup cols={2}>\n  <Card title=\"快速开始\" icon=\"rocket\" href=\"/guide/quick-start\">\n    查看完整的安装和运行指南\n  </Card>\n  <Card title=\"项目架构\" icon=\"sitemap\" href=\"/intro/architecture\">\n    了解 CowAgent 的系统架构设计\n  </Card>\n</CardGroup>\n\n## 社区\n\n添加小助手微信加入开源项目交流群：\n\n<img width=\"140\" src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/open-community.png\" />\n"
  },
  {
    "path": "docs/ja/README.md",
    "content": "<p align=\"center\"><img src=\"https://github.com/user-attachments/assets/eca9a9ec-8534-4615-9e0f-96c5ac1d10a3\" alt=\"CowAgent\" width=\"550\" /></p>\n\n<p align=\"center\">\n  <a href=\"https://github.com/zhayujie/chatgpt-on-wechat/releases/latest\"><img src=\"https://img.shields.io/github/v/release/zhayujie/chatgpt-on-wechat\" alt=\"Latest release\"></a>\n  <a href=\"https://github.com/zhayujie/chatgpt-on-wechat/blob/master/LICENSE\"><img src=\"https://img.shields.io/github/license/zhayujie/chatgpt-on-wechat\" alt=\"License: MIT\"></a>\n  <a href=\"https://github.com/zhayujie/chatgpt-on-wechat\"><img src=\"https://img.shields.io/github/stars/zhayujie/chatgpt-on-wechat?style=flat-square\" alt=\"Stars\"></a> <br/>\n  [<a href=\"https://github.com/zhayujie/chatgpt-on-wechat/blob/master/README.md\">中文</a>] | [<a href=\"https://github.com/zhayujie/chatgpt-on-wechat/blob/master/docs/en/README.md\">English</a>] | [日本語]\n</p>\n\n**CowAgent** はLLMを搭載したAIスーパーアシスタントです。自律的なタスク計画、コンピュータや外部リソースの操作、Skillの作成・実行、長期記憶による継続的な成長が可能です。柔軟なモデル切り替えに対応し、テキスト・音声・画像・ファイルを処理でき、Web、Feishu（飛書）、DingTalk（釘釘）、WeCom Bot（企業微信ボット）、WeComアプリ、WeChat公式アカウントに統合可能で、個人のPCやサーバー上で24時間365日稼働できます。\n\n<p align=\"center\">\n  <a href=\"https://cowagent.ai/\">🌐 ウェブサイト</a> &nbsp;·&nbsp;\n  <a href=\"https://docs.cowagent.ai/en/intro/index\">📖 ドキュメント</a> &nbsp;·&nbsp;\n  <a href=\"https://docs.cowagent.ai/en/guide/quick-start\">🚀 クイックスタート</a> &nbsp;·&nbsp;\n  <a href=\"https://link-ai.tech/cowagent/create\">☁️ オンラインで試す</a>\n</p>\n\n## はじめに\n\n> CowAgentは、すぐに使えるAIスーパーアシスタントであると同時に、高い拡張性を持つAgentフレームワークでもあります。新しいモデルインターフェース、チャネル、組み込みツール、Skillシステムを拡張することで、さまざまなカスタマイズニーズに柔軟に対応できます。\n\n- ✅ **自律的タスク計画**: 複雑なタスクを理解し、自律的に実行計画を立て、目標達成までツールを呼び出しながら継続的に思考します。ツールを通じてファイル、ターミナル、ブラウザ、スケジューラなどのシステムリソースにアクセスできます。\n- ✅ **長期記憶**: 会話の記憶をローカルファイルやデータベースに自動的に永続化します。コアメモリとデイリーメモリを含み、キーワード検索やベクトル検索に対応しています。\n- ✅ **Skillシステム**: Skillの作成・実行エンジンを実装しており、複数の組み込みSkillを備え、自然言語での会話を通じたカスタムSkillの開発もサポートしています。\n- ✅ **マルチモーダルメッセージ**: テキスト、画像、音声、ファイルなど、さまざまなメッセージタイプの解析・処理・生成・送信に対応しています。\n- ✅ **複数モデル対応**: OpenAI、Claude、Gemini、DeepSeek、MiniMax、GLM、Qwen、Kimi、Doubaoなど、主要なモデルプロバイダーに対応しています。\n- ✅ **マルチプラットフォームデプロイ**: ローカルPCやサーバー上で実行でき、Web、Feishu、DingTalk、WeChat公式アカウント、WeComアプリケーションに統合可能です。\n- ✅ **ナレッジベース**: [LinkAI](https://link-ai.tech) プラットフォームを通じて、企業向けナレッジベース機能を統合できます。\n\n## 免責事項\n\n1. 本プロジェクトは [MIT License](/LICENSE) に基づいており、技術研究・学習を目的としています。利用者は現地の法律、規制、ポリシー、企業の社則を遵守する必要があります。違法行為や権利侵害となる利用は禁止されています。\n2. Agentモードは通常のチャットモードよりも多くのトークンを消費します。効果とコストに基づいてモデルを選択してください。AgentはホストOSにアクセスできるため、信頼できる環境にデプロイしてください。\n3. 
CowAgentはオープンソース開発に注力しており、いかなる暗号通貨の発行・参加・承認も行っていません。\n\n## デモ\n\nオンラインで試す（デプロイ不要）: [CowAgent](https://link-ai.tech/cowagent/create)\n\n## 更新履歴\n\n> **2026.02.27:** [v2.0.2](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/2.0.2) — Webコンソールの全面刷新（ストリーミングチャット、モデル/Skill/メモリ/チャネル/スケジューラ/ログ管理）、マルチチャネル同時実行、セッション永続化、Gemini 3.1 Pro / Claude 4.6 Sonnet / Qwen3.5 Plusなど新モデル追加。\n\n> **2026.02.13:** [v2.0.1](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/2.0.1) — 組み込みWeb検索ツール、スマートコンテキストトリミング、ランタイム情報の動的更新、Windows互換性、スケジューラのメモリ喪失やFeishu接続問題などの修正。\n\n> **2026.02.03:** [v2.0.0](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/2.0.0) — マルチステップタスク計画、長期記憶、組み込みツール、Skillフレームワーク、新モデル、チャネル最適化を備えたAIスーパーアシスタントへの全面アップグレード。\n\n> **2025.05.23:** [v1.7.6](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/1.7.6) — Webチャネル最適化、AgentMeshマルチエージェントプラグイン、Baidu TTS、claude-4-sonnet/opus対応。\n\n> **2025.04.11:** [v1.7.5](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/1.7.5) — wechatferryプロトコル、DeepSeekモデル、Tencent Cloud音声、ModelScope・Gitee-AI対応。\n\n> **2024.12.13:** [v1.7.4](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/1.7.4) — Gemini 2.0モデル、Webチャネル、メモリリーク修正。\n\n全更新履歴: [リリースノート](https://docs.cowagent.ai/en/releases/overview)\n\n<br/>\n\n## 🚀 クイックスタート\n\n本プロジェクトは、インストール・設定・起動・管理をワンクリックで行えるスクリプトを提供しています：\n\n```bash\nbash <(curl -fsSL https://cdn.link-ai.tech/code/cow/run.sh)\n```\n\n実行後、デフォルトでWebサービスが起動します。`http://localhost:9899/chat` にアクセスしてチャットを開始できます。\n\nスクリプトの使い方: [ワンクリックインストール](https://docs.cowagent.ai/en/guide/quick-start)\n\n### 手動インストール\n\n**1. プロジェクトのクローン**\n\n```bash\ngit clone https://github.com/zhayujie/chatgpt-on-wechat\ncd chatgpt-on-wechat/\n```\n\n**2. 依存関係のインストール**\n\n```bash\npip3 install -r requirements.txt\npip3 install -r requirements-optional.txt   # 任意ですが推奨\n```\n\n**3. 設定**\n\n```bash\ncp config-template.json config.json\n```\n\n`config.json` にモデルのAPIキーとチャネルタイプを記入してください。詳細は[設定ドキュメント](https://docs.cowagent.ai/en/guide/manual-install)を参照してください。\n\n**4. 
実行**\n\n```bash\npython3 app.py\n```\n\nサーバーでバックグラウンド実行する場合：\n\n```bash\nnohup python3 app.py & tail -f nohup.out\n```\n\n### Dockerデプロイ\n\n```bash\ncurl -O https://cdn.link-ai.tech/code/cow/docker-compose.yml\n# docker-compose.yml を編集して設定を記入\nsudo docker compose up -d\nsudo docker logs -f chatgpt-on-wechat\n```\n\n<br/>\n\n## モデル\n\n主要なモデルプロバイダーに対応しています。Agentモードの推奨モデル：\n\n| プロバイダー | 推奨モデル |\n| --- | --- |\n| MiniMax | `MiniMax-M2.7` |\n| GLM | `glm-5-turbo` |\n| Kimi | `kimi-k2.5` |\n| Doubao | `doubao-seed-2-0-code-preview-260215` |\n| Qwen | `qwen3.5-plus` |\n| Claude | `claude-sonnet-4-6` |\n| Gemini | `gemini-3.1-pro-preview` |\n| OpenAI | `gpt-5.4` |\n| DeepSeek | `deepseek-chat` |\n\n各モデルの詳細設定については、[モデルドキュメント](https://docs.cowagent.ai/en/models/index)を参照してください。\n\n### Coding Plan\n\nCoding Planは各プロバイダーが提供する月額サブスクリプションパッケージで、高頻度のAgent利用に最適です。すべてのプロバイダーはOpenAI互換モードでアクセスできます：\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"MODEL_NAME\",\n  \"open_ai_api_base\": \"PROVIDER_CODING_PLAN_API_BASE\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n- `bot_type`: `openai` を指定\n- `model`: プロバイダーがサポートするモデル名\n- `open_ai_api_base`: プロバイダーのCoding Plan API Base（標準の従量課金とは異なります）\n- `open_ai_api_key`: プロバイダーのCoding Plan APIキー\n\n> 注意：Coding PlanのAPI BaseとAPIキーは、通常の従量課金のものとは別です。各プロバイダーのプラットフォームから取得してください。\n\n対応プロバイダーには、Alibaba Cloud、MiniMax、Zhipu GLM、Kimi、Volcengineなどがあります。各プロバイダーの詳細設定については、[Coding Planドキュメント](https://docs.cowagent.ai/en/models/coding-plan)を参照してください。\n\n<br/>\n\n## チャネル\n\n複数のプラットフォームに対応しています。`config.json` の `channel_type` を設定して切り替えます：\n\n| チャネル | `channel_type` | ドキュメント |\n| --- | --- | --- |\n| Web（デフォルト） | `web` | [Webチャネル](https://docs.cowagent.ai/en/channels/web) |\n| Feishu（飛書） | `feishu` | [Feishu設定](https://docs.cowagent.ai/en/channels/feishu) |\n| DingTalk（釘釘） | `dingtalk` | [DingTalk設定](https://docs.cowagent.ai/en/channels/dingtalk) |\n| WeCom Bot | `wecom_bot` | [WeCom Bot設定](https://docs.cowagent.ai/en/channels/wecom-bot) |\n| WeComアプリ | `wechatcom_app` | [WeCom設定](https://docs.cowagent.ai/en/channels/wecom) |\n| WeChat公式アカウント | `wechatmp` / `wechatmp_service` | [WeChat公式アカウント設定](https://docs.cowagent.ai/en/channels/wechatmp) |\n| ターミナル | `terminal` | — |\n\n複数チャネルを同時に有効化できます。カンマ区切りで指定してください：`\"channel_type\": \"feishu,dingtalk\"`\n\n<br/>\n\n## エンタープライズサービス\n\n<a href=\"https://link-ai.tech\" target=\"_blank\"><img width=\"720\" src=\"https://cdn.link-ai.tech/image/link-ai-intro.jpg\"></a>\n\n> [LinkAI](https://link-ai.tech/) は、企業や開発者向けのワンストップAIエージェントプラットフォームです。マルチモーダルLLM、ナレッジベース、Agentプラグイン、ワークフローを統合しています。主要プラットフォームへのワンクリック統合、SaaSおよびプライベートデプロイに対応しています。\n\n<br/>\n\n## 🔗 関連プロジェクト\n\n- [bot-on-anything](https://github.com/zhayujie/bot-on-anything): 軽量で高い拡張性を持つLLMアプリケーションフレームワーク。Slack、Telegram、Discord、Gmailなどに対応。\n- [AgentMesh](https://github.com/MinimalFuture/AgentMesh): エージェントチームの協調による複雑な問題解決のためのオープンソースのマルチエージェントフレームワーク。\n\n## 🔎 よくある質問\n\nFAQ: <https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs>\n\n## 🛠️ コントリビューション\n\n新しいチャネルの追加を歓迎します。[Feishuチャネル](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/channel/feishu/feishu_channel.py)を参考にしてください。また、新しいSkillのコントリビューションも歓迎します。[Skill Creatorドキュメント](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/skills/skill-creator/SKILL.md)を参照してください。\n\n## ✉ お問い合わせ\n\nPRやIssueの提出を歓迎します。🌟 Starでプロジェクトをサポートしてください。ご質問がある場合は、[FAQリスト](https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs)を確認するか、[Issues](https://github.com/zhayujie/chatgpt-on-wechat/issues)を検索してください。\n\n## 🌟 コントリビューター\n\n![cow 
contributors](https://contrib.rocks/image?repo=zhayujie/chatgpt-on-wechat&max=1000)\n"
  },
  {
    "path": "docs/ja/channels/dingtalk.mdx",
    "content": "---\ntitle: DingTalk\ndescription: CowAgent を DingTalk アプリケーションに統合する\n---\n\nDingTalk オープンプラットフォームでインテリジェントロボットアプリを作成して、CowAgent を DingTalk に統合します。\n\n## 1. アプリの作成\n\n1. [DingTalk 開発者コンソール](https://open-dev.dingtalk.com/fe/app#/corp/app)にアクセスし、ログインして**アプリを作成**をクリックし、アプリ情報を入力します：\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/dingtalk-create-app.png\" width=\"800\"/>\n\n2. **アプリ機能の追加**をクリックし、**ロボット**機能を選択して**追加**をクリックします：\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/dingtalk-add-bot.png\" width=\"800\"/>\n\n3. ロボット情報を設定し、**公開**をクリックします。公開後、「**デバッグ**」をクリックすると自動的にテストグループチャットが作成され、クライアントで確認できます：\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/dingtalk-config-bot.png\" width=\"600\"/>\n\n4. **バージョン管理とリリース**をクリックし、新しいバージョンを作成して公開します：\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/dingtalk-publish-bot.png\" width=\"700\"/>\n\n## 2. プロジェクト設定\n\n1. **認証情報と基本情報**をクリックし、`Client ID` と `Client Secret` を取得します：\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/dingtalk-get-secret.png\" width=\"700\"/>\n\n2. プロジェクトルートの `config.json` に以下の設定を追加します：\n\n```json\n{\n  \"channel_type\": \"dingtalk\",\n  \"dingtalk_client_id\": \"YOUR_CLIENT_ID\",\n  \"dingtalk_client_secret\": \"YOUR_CLIENT_SECRET\"\n}\n```\n\n3. 依存パッケージをインストールします：\n\n```bash\npip3 install dingtalk_stream\n```\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/dingtalk-app-config.png\" width=\"700\"/>\n\n4. プロジェクト起動後、DingTalk 開発者コンソールに移動し、**イベントサブスクリプション**をクリックし、**接続確認済み、チャネルを確認**をクリックします。「**接続成功**」と表示されれば設定完了です：\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/dingtalk-event-sub.png\" width=\"700\"/>\n\n## 3. 使い方\n\nロボットと個別チャットするか、企業グループに追加して会話を開始します：\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/dingtalk-hosting-demo.png\" width=\"650\"/>\n"
  },
  {
    "path": "docs/ja/channels/feishu.mdx",
    "content": "---\ntitle: Feishu (Lark)\ndescription: CowAgent を Feishu アプリケーションに統合する\n---\n\n企業向けカスタムアプリを作成して、CowAgent を Feishu に統合します。管理者権限を持つ Feishu 企業ユーザーである必要があります。\n\n## 1. 企業カスタムアプリの作成\n\n### 1.1 アプリの作成\n\n[Feishu 開発者プラットフォーム](https://open.feishu.cn/app/)にアクセスし、**企業カスタムアプリを作成**をクリックして、必要な情報を入力し**作成**をクリックします：\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/feishu-hosting-create-app.jpg\" width=\"500\"/>\n\n### 1.2 Bot 機能の追加\n\n**アプリ機能の追加**で、アプリに **Bot** 機能を追加します：\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/feishu-hosting-add-bot.jpg\" width=\"800\"/>\n\n### 1.3 アプリ権限の設定\n\n**権限管理**をクリックし、**権限設定**の下の入力欄に以下の権限文字列を貼り付け、フィルタされたすべての権限を選択し、**一括有効化**をクリックして確認します：\n\n```\nim:message,im:message.group_at_msg,im:message.group_at_msg:readonly,im:message.p2p_msg,im:message.p2p_msg:readonly,im:message:send_as_bot,im:resource\n```\n\n<img src=\"https://cdn.link-ai.tech/doc/feishu-hosting-add-auth2.png\" width=\"800\"/>\n\n## 2. プロジェクト設定\n\n1. **認証情報と基本情報**から `App ID` と `App Secret` を取得します：\n\n<img src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/feishu-hosting-appid-secret.jpg\" width=\"800\"/>\n\n2. プロジェクトルートの `config.json` に以下の設定を追加します：\n\n```json\n{\n  \"channel_type\": \"feishu\",\n  \"feishu_app_id\": \"YOUR_APP_ID\",\n  \"feishu_app_secret\": \"YOUR_APP_SECRET\",\n  \"feishu_bot_name\": \"YOUR_BOT_NAME\"\n}\n```\n\n| パラメータ | 説明 |\n| --- | --- |\n| `feishu_app_id` | Feishu Bot の App ID |\n| `feishu_app_secret` | Feishu Bot の App Secret |\n| `feishu_bot_name` | Bot 名（アプリ作成時に設定）、グループチャットで使用する際に必要 |\n\n設定完了後、プロジェクトを起動します。\n\n## 3. イベントサブスクリプションの設定\n\n1. プロジェクトが正常に動作した後、Feishu 開発者プラットフォームに移動し、**イベントとコールバック**をクリックし、**ロングコネクション**モードを選択して保存をクリックします：\n\n<img src=\"https://cdn.link-ai.tech/doc/202601311731183.png\" width=\"600\"/>\n\n2. 下の**イベントを追加**をクリックし、「メッセージ受信」を検索して「**メッセージ受信 v2.0**」を選択し、確認します。\n\n3. **バージョン管理とリリース**をクリックし、新しいバージョンを作成して**本番リリース**を申請します。Feishu クライアントで承認メッセージを確認し、承認します：\n\n<img src=\"https://cdn.link-ai.tech/doc/202601311807356.png\" width=\"600\"/>\n\n完了後、Feishu で Bot 名を検索してチャットを開始できます。\n"
  },
  {
    "path": "docs/ja/channels/qq.mdx",
    "content": "---\ntitle: QQ Bot\ndescription: CowAgent を QQ Bot に接続する（WebSocket ロングコネクション）\n---\n\n> QQ オープンプラットフォームの Bot API を介して CowAgent を接続し、QQ のダイレクトメッセージ、グループチャット（@bot）、ギルドチャネルメッセージ、ギルド DM に対応します。パブリック IP は不要で、WebSocket ロングコネクションを使用します。\n\n<Note>\n  QQ Bot は QQ オープンプラットフォームを通じて作成します。WebSocket ロングコネクションでメッセージを受信し、OpenAPI でメッセージを送信します。パブリック IP やドメインは不要です。\n</Note>\n\n## 1. QQ Bot の作成\n\n> [QQ オープンプラットフォーム](https://q.qq.com)にアクセスし、QQ でサインインします。未登録の場合は、先に[アカウント登録](https://q.qq.com/#/register)を完了してください。\n\n1.[QQ オープンプラットフォーム - Bot 一覧](https://q.qq.com/#/apps)に移動し、**Bot を作成**をクリックします：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260317162900.png\" width=\"800\"/>\n\n2.Bot 名、アバター、その他の基本情報を入力して作成を完了します：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260317163005.png\" width=\"800\"/>\n\n3.Bot 設定ページに入り、**開発管理**に移動して以下の手順を完了します：\n\n  - **AppID**（Bot ID）をコピーして保存します\n  - **AppSecret**（Bot Secret）を生成して保存します\n\n<img src=\"https://cdn.link-ai.tech/doc/20260317164955.png\" width=\"800\"/>\n\n## 2. 設定と起動\n\n### 方法 A: Web コンソール\n\nプログラムを起動し、Web コンソール（ローカルアクセス: http://127.0.0.1:9899/）を開きます。**チャネル**タブに移動し、**チャネルを接続**をクリックして **QQ Bot** を選択し、前のステップで取得した AppID と AppSecret を入力して接続をクリックします。\n\n<img src=\"https://cdn.link-ai.tech/doc/20260317165425.png\" width=\"800\"/>\n\n### 方法 B: 設定ファイル\n\n`config.json` に以下を追加します：\n\n```json\n{\n  \"channel_type\": \"qq\",\n  \"qq_app_id\": \"YOUR_APP_ID\",\n  \"qq_app_secret\": \"YOUR_APP_SECRET\"\n}\n```\n\n| パラメータ | 説明 |\n| --- | --- |\n| `qq_app_id` | QQ Bot の AppID。オープンプラットフォームの開発管理で確認できます |\n| `qq_app_secret` | QQ Bot の AppSecret。オープンプラットフォームの開発管理で確認できます |\n\n設定後、プログラムを起動します。ログに `[QQ] ✅ Connected successfully` と表示されれば接続成功です。\n\n\n## 3. 使い方\n\nQQ オープンプラットフォームで、**管理 → 利用範囲とメンバー**に移動し、「グループとメッセージリストに追加」の QR コードを QQ クライアントでスキャンして Bot とのチャットを開始します：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260317165947.png\" width=\"800\"/>\n\nチャット例:\n<img src=\"https://cdn.link-ai.tech/doc/20260317171508.png\" width=\"800\"/>\n\n## 4. 対応機能\n\n> 注意: グループチャットやギルドチャネルで QQ Bot を使用するには、公開審査を完了し、利用範囲の権限を設定する必要があります。\n\n| 機能 | 状態 |\n| --- | --- |\n| QQ ダイレクトメッセージ | ✅ |\n| QQ グループチャット（@bot） | ✅ |\n| ギルドチャネル（@bot） | ✅ |\n| ギルド DM | ✅ |\n| テキストメッセージ | ✅ 送受信 |\n| 画像メッセージ | ✅ 送受信（グループ・ダイレクト） |\n| ファイルメッセージ | ✅ 送信（グループ・ダイレクト） |\n| スケジュールタスク | ✅ 能動的プッシュ（ユーザーあたり月4回） |\n\n\n## 5. 注意事項\n\n- **受動メッセージの制限**: QQ ダイレクトメッセージの返信は60分間有効です（1メッセージあたり最大5回返信可能）。グループチャットの返信は5分間有効です。\n- **能動メッセージの制限**: ダイレクトメッセージとグループチャットの両方で、月あたりの能動メッセージは4件までです。スケジュールタスク機能を使用する際はこの点にご注意ください。\n- **イベント権限**: デフォルトでは `GROUP_AND_C2C_EVENT`（QQ グループ/ダイレクト）と `PUBLIC_GUILD_MESSAGES`（ギルド公開メッセージ）がサブスクライブされています。追加の権限が必要な場合は、オープンプラットフォームで申請してください。\n"
  },
  {
    "path": "docs/ja/channels/web.mdx",
    "content": "---\ntitle: Web コンソール\ndescription: Web コンソールで CowAgent を使用する\n---\n\nWeb コンソールは CowAgent のデフォルトチャネルです。起動後に自動的に開始され、ブラウザを通じて Agent とチャットしたり、モデル、Skill、メモリ、チャネルなどの設定をオンラインで管理できます。\n\n## 設定\n\n```json\n{\n  \"channel_type\": \"web\",\n  \"web_port\": 9899\n}\n```\n\n| パラメータ | 説明 | デフォルト値 |\n| --- | --- | --- |\n| `channel_type` | `web` に設定 | `web` |\n| `web_port` | Web サービスのリスンポート | `9899` |\n\n## アクセス URL\n\nプロジェクト起動後、以下にアクセスしてください：\n\n- ローカル: `http://localhost:9899`\n- サーバー: `http://<server-ip>:9899`\n\n<Note>\n  サーバーのファイアウォールとセキュリティグループで該当ポートが許可されていることを確認してください。\n</Note>\n\n## 機能\n\n### チャット画面\n\nストリーミング出力に対応しており、Agent の推論プロセスやツール呼び出しをリアルタイムで表示し、Agent の意思決定を直感的に観察できます：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227180120.png\" />\n\n### モデル管理\n\n設定ファイルを手動で編集せずに、オンラインでモデル設定を管理できます：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173811.png\" />\n\n### Skill 管理\n\nAgent の Skill をオンラインで閲覧・管理できます：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173403.png\" />\n\n### メモリ管理\n\nAgent のメモリをオンラインで閲覧・管理できます：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173349.png\" />\n\n### チャネル管理\n\n接続中のチャネルをオンラインで管理し、リアルタイムで接続・切断操作を行えます：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173331.png\" />\n\n### スケジュールタスク\n\nスケジュールタスクをオンラインで閲覧・管理できます。一回限りのタスク、固定間隔、Cron 式に対応しています：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173704.png\" />\n\n### ログ\n\nAgent のランタイムログをリアルタイムで確認でき、監視やトラブルシューティングに活用できます：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173514.png\" />\n"
  },
  {
    "path": "docs/ja/channels/wechatmp.mdx",
    "content": "---\ntitle: WeChat 公式アカウント\ndescription: CowAgent を WeChat 公式アカウントに統合する\n---\n\nCowAgent は個人サブスクリプションアカウントと企業サービスアカウントの両方に対応しています。\n\n| 種類 | 要件 | 特徴 |\n| --- | --- | --- |\n| **個人サブスクリプション** | 個人で利用可能 | まずプレースホルダーの返信を送信し、ユーザーが完全な応答を取得するにはメッセージを送信する必要があります |\n| **企業サービス** | カスタマーサービス API が認証済みの企業 | ユーザーに能動的に返信をプッシュできます |\n\n<Note>\n  公式アカウントはサーバーおよび Docker デプロイのみサポートしており、ローカル実行モードには対応していません。拡張依存パッケージをインストールしてください: `pip3 install -r requirements-optional.txt`\n</Note>\n\n## 1. 個人サブスクリプションアカウント\n\n`config.json` に以下の設定を追加します：\n\n```json\n{\n  \"channel_type\": \"wechatmp\",\n  \"single_chat_prefix\": [\"\"],\n  \"wechatmp_app_id\": \"wx73f9******d1e48\",\n  \"wechatmp_app_secret\": \"YOUR_APP_SECRET\",\n  \"wechatmp_aes_key\": \"\",\n  \"wechatmp_token\": \"YOUR_TOKEN\",\n  \"wechatmp_port\": 80\n}\n```\n\n### セットアップ手順\n\nこれらの設定は [WeChat 公式アカウントプラットフォーム](https://mp.weixin.qq.com/advanced/advanced?action=dev&t=advanced/dev)と一致している必要があります。**設定と開発 → 基本設定 → サーバー設定**に移動し、以下のように設定します：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260228103506.png\" width=\"480\"/>\n\n1. プラットフォームで開発者シークレットを有効化し（`wechatmp_app_secret` に対応）、サーバー IP をホワイトリストに追加します\n2. プラットフォームの設定と一致するように公式アカウントのパラメータを `config.json` に入力します\n3. プログラムを起動します。ポート 80 でリスンします（権限がない場合は `sudo` を使用してください。ポート 80 を占有しているプロセスがあれば停止してください）\n4. 公式アカウントプラットフォームで**サーバー設定を有効化**して送信します。正常に保存できれば設定完了です。**「サーバー URL」**は `http://{HOST}/wx` の形式で入力する必要があり、`{HOST}` にはサーバー IP またはドメインを指定できます\n\nアカウントをフォローしてメッセージを送信すると、以下のような結果が表示されるはずです：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260228103522.png\" width=\"720\"/>\n\nサブスクリプションアカウントの制限により、短い返信（15秒以内）は即座に返されますが、長い返信の場合はまず「考え中...」というプレースホルダーが送信され、ユーザーは任意のテキストを送信して回答を取得する必要があります。企業サービスアカウントではカスタマーサービス API を使用してこの問題を解決できます。\n\n<Tip>\n  **音声認識**: WeChat 内蔵の音声認識を使用できます。公式アカウント管理ページの「設定と開発 → API 権限」で「音声認識結果の受信」を有効にしてください。\n</Tip>\n\n## 2. 企業サービスアカウント\n\n企業サービスアカウントのセットアップ手順は個人サブスクリプションアカウントとほぼ同じですが、以下の点が異なります：\n\n1. プラットフォームで企業サービスアカウントを登録し、WeChat 認証を完了します。**カスタマーサービス API** の権限が付与されていることを確認してください\n2. `config.json` で `\"channel_type\": \"wechatmp_service\"` に設定します。その他の設定は同じです\n3. 長い返信であっても、ユーザーに能動的にプッシュでき、手動での取得が不要です\n\n```json\n{\n  \"channel_type\": \"wechatmp_service\",\n  \"single_chat_prefix\": [\"\"],\n  \"wechatmp_app_id\": \"YOUR_APP_ID\",\n  \"wechatmp_app_secret\": \"YOUR_APP_SECRET\",\n  \"wechatmp_aes_key\": \"\",\n  \"wechatmp_token\": \"YOUR_TOKEN\",\n  \"wechatmp_port\": 80\n}\n```\n"
  },
  {
    "path": "docs/ja/channels/wecom-bot.mdx",
    "content": "---\ntitle: WeCom Bot\ndescription: CowAgent を WeCom AI Bot に接続する（WebSocket ロングコネクション）\n---\n\nWeCom AI Bot を介して CowAgent を接続し、ダイレクトメッセージとグループチャットの両方に対応します。パブリック IP は不要で、WebSocket ロングコネクションを使用し、Markdown レンダリングとストリーミング出力をサポートします。\n\n<Note>\n  WeCom Bot と WeCom App は異なる統合方式です。WeCom Bot は WebSocket ロングコネクションを使用するため、パブリック IP やドメインが不要で、セットアップが簡単です。\n</Note>\n\n## 1. AI Bot の作成\n\n1. WeCom クライアントを開き、**ワークベンチ**に移動し、**AI Bot** をクリックします：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260316180959.png\" width=\"800\"/>\n\n2. **Bot を作成** → **手動作成**をクリックします：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260316181118.png\" width=\"600\"/>\n\n3. 右パネルの一番下までスクロールし、**API モード**を選択します：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260316181215.png\" width=\"600\"/>\n\n4. Bot 名、アバター、公開範囲を設定します。**ロングコネクション**モードを選択し、**Bot ID** と **Secret** をメモしてから保存をクリックします。\n\n## 2. 設定\n\n### 方法 A: Web コンソール\n\nプログラムを起動し、Web コンソール（ローカルアクセス: http://127.0.0.1:9899）を開きます。**チャネル**タブに移動し、**チャネルを接続**をクリックして **WeCom Bot** を選択し、前のステップで取得した Bot ID と Secret を入力して接続をクリックします。\n\n<img src=\"https://cdn.link-ai.tech/doc/20260316181711.png\" width=\"600\"/>\n\n### 方法 B: 設定ファイル\n\n`config.json` に以下を追加します：\n\n```json\n{\n  \"channel_type\": \"wecom_bot\",\n  \"wecom_bot_id\": \"YOUR_BOT_ID\",\n  \"wecom_bot_secret\": \"YOUR_SECRET\"\n}\n```\n\n| パラメータ | 説明 |\n| --- | --- |\n| `wecom_bot_id` | AI Bot の Bot ID |\n| `wecom_bot_secret` | AI Bot の Secret |\n\n設定後、プログラムを起動します。ログに `[WecomBot] Subscribe success` と表示されれば接続成功です。\n\n## 3. 対応機能\n\n| 機能 | 状態 |\n| --- | --- |\n| ダイレクトメッセージ | ✅ |\n| グループチャット（@bot） | ✅ |\n| テキストメッセージ | ✅ 送受信 |\n| 画像メッセージ | ✅ 送受信 |\n| ファイルメッセージ | ✅ 送受信 |\n| ストリーミング返信 | ✅ |\n| スケジュール配信 | ✅ |\n\n## 4. 使い方\n\nWeCom で Bot 名を検索してダイレクトメッセージを開始できます。\n\nグループチャットで使用するには、Bot をグループに追加し、@メンションしてメッセージを送信します。\n\n<img src=\"https://cdn.link-ai.tech/doc/20260316182902.png\" width=\"800\"/>\n"
  },
  {
    "path": "docs/ja/channels/wecom.mdx",
    "content": "---\ntitle: WeCom\ndescription: CowAgent を WeCom 企業アプリに統合する\n---\n\nカスタム企業アプリを通じて CowAgent を WeCom に統合し、社内従業員との1対1チャットに対応します。\n\n<Note>\n  WeCom は Docker デプロイまたはサーバー上の Python デプロイのみサポートしています。ローカル実行モードには対応していません。\n</Note>\n\n## 1. 前提条件\n\n必要なリソース：\n\n1. パブリック IP を持つサーバー（海外サーバー、または国際 API アクセス用のプロキシを持つ国内サーバー）\n2. 登録済みの WeCom アカウント（個人登録は可能ですが認証はできません）\n3. 認証済みの WeCom アカウントには、対応する法人名義で届け出済みのドメインが別途必要です\n\n## 2. WeCom アプリの作成\n\n1. [WeCom 管理コンソール](https://work.weixin.qq.com/wework_admin/frame#profile)で、**自社情報**をクリックし、ページ下部の **Corp ID** を確認します。この ID を `wechatcom_corp_id` 設定フィールド用に保存します。\n\n2. **アプリ管理**に切り替え、アプリを作成をクリックします：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260228103156.png\" width=\"480\"/>\n\n3. アプリ作成ページで、`AgentId` と `Secret` を記録します：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260228103218.png\" width=\"580\"/>\n\n4. **API 受信設定**をクリックしてアプリケーションインターフェースを設定します：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260228103211.png\" width=\"520\"/>\n\n- URL の形式: `http://ip:port/wxcomapp`（認証済み企業は届け出済みドメインを使用する必要があります）\n- ランダムな `Token` と `EncodingAESKey` を生成し、設定ファイル用に保存します\n\n<Note>\n  プログラムがまだ起動していないため、この時点では API 受信設定を保存できません。プロジェクトが動作した後に戻って保存してください。\n</Note>\n\n## 3. 設定と起動\n\n`config.json` に以下の設定を追加します（各パラメータと WeCom コンソールの対応関係は上のスクリーンショットを参照してください）：\n\n```json\n{\n  \"channel_type\": \"wechatcom_app\",\n  \"single_chat_prefix\": [\"\"],\n  \"wechatcom_corp_id\": \"YOUR_CORP_ID\",\n  \"wechatcomapp_token\": \"YOUR_TOKEN\",\n  \"wechatcomapp_secret\": \"YOUR_SECRET\",\n  \"wechatcomapp_agent_id\": \"YOUR_AGENT_ID\",\n  \"wechatcomapp_aes_key\": \"YOUR_AES_KEY\",\n  \"wechatcomapp_port\": 9898\n}\n```\n\n| パラメータ | 説明 |\n| --- | --- |\n| `wechatcom_corp_id` | Corp ID |\n| `wechatcomapp_token` | API 受信設定の Token |\n| `wechatcomapp_secret` | アプリの Secret |\n| `wechatcomapp_agent_id` | アプリの AgentId |\n| `wechatcomapp_aes_key` | API 受信設定の EncodingAESKey |\n| `wechatcomapp_port` | リスンポート、デフォルトは 9898 |\n\n設定後、プログラムを起動します。ログに `http://0.0.0.0:9898/` と表示されれば、プログラムは正常に動作しています。このポートを外部に公開する必要があります（例：クラウドサーバーのセキュリティグループで許可します）。\n\nプログラム起動後、WeCom 管理コンソールに戻って**メッセージサーバー設定**を保存します。保存が成功したら、サーバー IP を**企業の信頼済み IP** に追加する必要もあります。追加しないとメッセージの送受信ができません：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260228103224.png\" width=\"520\"/>\n\n<Warning>\n  URL 設定のコールバックが失敗する場合や、設定がうまくいかない場合：\n  1. サーバーのファイアウォールが無効になっており、セキュリティグループでリスンポートが許可されていることを確認してください\n  2. Token、Secret Key などのパラメータ設定が一致しているか、URL の形式が正しいか慎重に確認してください\n  3. 認証済みの WeCom アカウントは、法人に対応する届け出済みドメインを設定する必要があります\n</Warning>\n\n## 4. 使い方\n\nWeCom で作成したアプリ名を検索して、直接チャットを開始できます。異なるポートでリスンする複数のインスタンスを実行して、複数の WeCom アプリを作成できます：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260228103228.png\" width=\"720\"/>\n\n外部の個人 WeChat ユーザーにアプリを利用してもらうには、**自社情報 → WeChat プラグイン**に移動し、招待 QR コードを共有します。スキャンしてフォローした後、個人 WeChat ユーザーがアプリとチャットできるようになります：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260228103232.png\" width=\"520\"/>\n\n## FAQ\n\n以下の依存パッケージがインストールされていることを確認してください：\n\n```bash\npip install websocket-client pycryptodome\n```\n"
  },
  {
    "path": "docs/ja/guide/manual-install.mdx",
    "content": "---\ntitle: 手動インストール\ndescription: CowAgentの手動デプロイ（ソースコード / Docker）\n---\n\n## ソースコードによるデプロイ\n\n### 1. プロジェクトをクローン\n\n```bash\ngit clone https://github.com/zhayujie/chatgpt-on-wechat\ncd chatgpt-on-wechat/\n```\n\n<Tip>\n  ネットワークに問題がある場合は、ミラーを使用してください: https://gitee.com/zhayujie/chatgpt-on-wechat\n</Tip>\n\n### 2. 依存パッケージをインストール\n\nコア依存パッケージ（必須）：\n\n```bash\npip3 install -r requirements.txt\n```\n\nオプション依存パッケージ（推奨）：\n\n```bash\npip3 install -r requirements-optional.txt\n```\n\n### 3. 設定\n\n設定テンプレートをコピーして編集します：\n\n```bash\ncp config-template.json config.json\n```\n\n`config.json` にモデルの API キー、チャネルタイプ、その他の設定を入力します。詳細は[モデルのドキュメント](/ja/models/index)を参照してください。\n\n### 4. 実行\n\n**ローカルで実行：**\n\n```bash\npython3 app.py\n```\n\nデフォルトではWebサービスが起動します。`http://localhost:9899/chat` にアクセスしてチャットできます。\n\n**サーバーでバックグラウンド実行：**\n\n```bash\nnohup python3 app.py & tail -f nohup.out\n```\n\n## Docker によるデプロイ\n\nDocker デプロイでは、ソースコードのクローンや依存パッケージのインストールは不要です。Agent モードを使用する場合は、より広範なシステムアクセスが可能なソースコードによるデプロイを推奨します。\n\n<Note>\n  [Docker](https://docs.docker.com/engine/install/) と docker-compose が必要です。\n</Note>\n\n**1. 設定ファイルをダウンロード**\n\n```bash\ncurl -O https://cdn.link-ai.tech/code/cow/docker-compose.yml\n```\n\n`docker-compose.yml` を編集して設定を行います。\n\n**2. コンテナを起動**\n\n```bash\nsudo docker compose up -d\n```\n\n**3. ログを確認**\n\n```bash\nsudo docker logs -f chatgpt-on-wechat\n```\n\n## 主要な設定項目\n\n```json\n{\n  \"channel_type\": \"web\",\n  \"model\": \"MiniMax-M2.5\",\n  \"agent\": true,\n  \"agent_workspace\": \"~/cow\",\n  \"agent_max_context_tokens\": 40000,\n  \"agent_max_context_turns\": 30,\n  \"agent_max_steps\": 15\n}\n```\n\n| パラメータ | 説明 | デフォルト値 |\n| --- | --- | --- |\n| `channel_type` | チャネルタイプ | `web` |\n| `model` | モデル名 | `MiniMax-M2.5` |\n| `agent` | Agent モードを有効化 | `true` |\n| `agent_workspace` | Agent のワークスペースパス | `~/cow` |\n| `agent_max_context_tokens` | 最大コンテキストトークン数 | `40000` |\n| `agent_max_context_turns` | 最大コンテキストターン数 | `30` |\n| `agent_max_steps` | タスクごとの最大判断ステップ数 | `15` |\n\n<Tip>\n  すべての設定オプションはプロジェクトの [`config.py`](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/config.py) に記載されています。\n</Tip>\n"
  },
  {
    "path": "docs/ja/guide/quick-start.mdx",
    "content": "---\ntitle: ワンクリックインストール\ndescription: スクリプトによるCowAgentのワンクリックインストールと管理\n---\n\n本プロジェクトでは、ワンクリックでのインストール、設定、起動、管理を行うスクリプトを提供しています。素早くセットアップするには、スクリプトによるデプロイを推奨します。\n\nLinux、macOS、Windowsに対応しています。Python 3.7〜3.12が必要です（3.9を推奨）。\n\n## インストールコマンド\n\n```bash\nbash <(curl -fsSL https://cdn.link-ai.tech/code/cow/run.sh)\n```\n\nスクリプトは以下の手順を自動的に実行します：\n\n1. Python環境の確認（Python 3.7以上が必要）\n2. 必要なツールのインストール（git、curlなど）\n3. プロジェクトを `~/chatgpt-on-wechat` にクローン\n4. Pythonの依存パッケージをインストール\n5. AIモデルとチャネルの対話式設定\n6. サービスの起動\n\nデフォルトでは、インストール後にWebサービスが起動します。`http://localhost:9899/chat` にアクセスしてチャットを開始できます。\n\n## 管理コマンド\n\nインストール後、以下のコマンドでサービスを管理できます：\n\n| コマンド | 説明 |\n| --- | --- |\n| `./run.sh start` | サービスを起動 |\n| `./run.sh stop` | サービスを停止 |\n| `./run.sh restart` | サービスを再起動 |\n| `./run.sh status` | 実行状態を確認 |\n| `./run.sh logs` | リアルタイムログを表示 |\n| `./run.sh config` | 再設定 |\n| `./run.sh update` | プロジェクトコードを更新 |\n"
  },
  {
    "path": "docs/ja/guide/upgrade.mdx",
    "content": "---\ntitle: アップデート\ndescription: CowAgent のアップグレード方法\n---\n\n## スクリプトによるアップグレード（推奨）\n\n`run.sh` でサービスを管理している場合、以下のコマンドでワンクリックアップグレードできます：\n\n```bash\n./run.sh update\n```\n\nこのコマンドは以下のフローを自動的に実行します：\n\n1. 現在実行中のサービスを停止\n2. 最新コードをプル\n3. 依存関係を再チェック\n4. サービスを起動\n\n## 手動アップグレード\n\nプロジェクトのルートディレクトリで以下を実行します：\n\n```bash\ngit pull\npip3 install -r requirements.txt\n```\n\n更新完了後、サービスを再起動します：\n\n```bash\n# run.sh で管理している場合\n./run.sh restart\n\n# nohup で直接実行している場合\nkill $(ps -ef | grep app.py | grep -v grep | awk '{print $2}')\nnohup python3 app.py & tail -f nohup.out\n```\n\n## Docker アップグレード\n\n`docker-compose.yml` があるディレクトリで以下を実行します：\n\n```bash\nsudo docker compose pull\nsudo docker compose up -d\n```\n\n<Tip>\n  アップグレード前に `config.json` 設定ファイルのバックアップを推奨します。Docker 環境でデータを保持する場合は、volume マウントでワークスペースディレクトリを永続化できます。\n</Tip>\n"
  },
  {
    "path": "docs/ja/intro/architecture.mdx",
    "content": "---\ntitle: アーキテクチャ\ndescription: CowAgent 2.0 のシステムアーキテクチャとコア設計\n---\n\nCowAgent 2.0 は、シンプルなチャットボットから、自律的な思考、タスク計画、長期記憶、Skill の拡張性を備えた Agent アーキテクチャのスーパーインテリジェントアシスタントへと進化しました。\n\n## システムアーキテクチャ\n\nCowAgent のアーキテクチャは以下のコアモジュールで構成されています：\n\n<img src=\"https://cdn.link-ai.tech/doc/68ef7b212c6f791e0e74314b912149f9-sz_5847990.png\" alt=\"CowAgent Architecture\" />\n\n### コアモジュール\n\n| モジュール | 説明 |\n| --- | --- |\n| **Channels** | メッセージの受信と送信を行うメッセージチャネル層。Web、Feishu（飛書）、DingTalk（釘釘）、WeCom（企業微信）、WeChat公式アカウントなどをサポート |\n| **Agent Core** | タスク計画、記憶システム、Skill エンジンを含む Agent エンジン |\n| **Tools** | Agent が OS リソースにアクセスするためのツール層。10 以上の組み込みツール |\n| **Models** | 主要な LLM への統一アクセスを提供するモデル層 |\n\n## Agent モードのワークフロー\n\nAgent モードが有効な場合、CowAgent は以下のワークフローで自律的な Agent として動作します：\n\n1. **メッセージ受信** — チャネルを通じてユーザーの入力を受信\n2. **意図の理解** — タスク要件とコンテキストを分析\n3. **タスク計画** — 複雑なタスクを複数のステップに分解\n4. **ツール呼び出し** — 各ステップに適切なツールを選択・実行\n5. **記憶の更新** — 重要な情報を長期記憶に保存\n6. **結果の返却** — 実行結果をユーザーに送信\n\n## ワークスペースのディレクトリ構成\n\nAgent のワークスペースはデフォルトで `~/cow` にあり、システムプロンプト、記憶ファイル、Skill ファイルを格納しています：\n\n```\n~/cow/\n├── system.md          # Agent システムプロンプト\n├── user.md            # ユーザープロフィール\n├── memory/            # 長期記憶ストレージ\n│   ├── core.md        # コアメモリ\n│   └── daily/         # デイリーメモリ\n└── skills/            # カスタム Skill\n    ├── skill-1/\n    └── skill-2/\n```\n\nシークレットキーはセキュリティのため `~/.cow` ディレクトリに別途保存されます：\n\n```\n~/.cow/\n└── .env               # Skill 用のシークレットキー\n```\n\n## コア設定\n\n`config.json` で Agent モードのパラメータを設定します：\n\n```json\n{\n  \"agent\": true,\n  \"agent_workspace\": \"~/cow\",\n  \"agent_max_context_tokens\": 40000,\n  \"agent_max_context_turns\": 30,\n  \"agent_max_steps\": 15\n}\n```\n\n| パラメータ | 説明 | デフォルト値 |\n| --- | --- | --- |\n| `agent` | Agent モードの有効化 | `true` |\n| `agent_workspace` | ワークスペースのパス | `~/cow` |\n| `agent_max_context_tokens` | 最大コンテキストトークン数 | `40000` |\n| `agent_max_context_turns` | 最大コンテキストターン数 | `30` |\n| `agent_max_steps` | タスクあたりの最大判断ステップ数 | `15` |\n"
  },
  {
    "path": "docs/ja/intro/features.mdx",
    "content": "---\ntitle: 機能詳細\ndescription: CowAgent の長期記憶、タスク計画、Skill システムの詳細\n---\n\n## 1. 長期記憶\n\n記憶システムにより、Agent は重要な情報を長期にわたって記憶できます。ユーザーが好みや決定、重要な事実を共有すると、Agent は自発的に情報を保存し、会話が一定の長さに達すると自動的に要約を抽出します。記憶はコアメモリとデイリーメモリに分かれており、キーワード検索とベクトル検索の両方をサポートするハイブリッド検索が可能です。\n\n初回起動時、Agent はユーザーに重要な情報を自発的に尋ね、ワークスペース（デフォルト `~/cow`）に記録します。これには Agent の設定、ユーザーの身元情報、記憶ファイルが含まれます。\n\nその後の長期的な会話において、Agent は必要に応じてインテリジェントに記憶を保存・取得し、自身の設定やユーザーの好み、記憶ファイルを継続的に更新し、経験と教訓を要約します。これにより、真に自律的な思考と継続的な成長を実現しています。\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260203000455.png\" width=\"800\" />\n</Frame>\n\n## 2. タスク計画とツール活用\n\nツールは Agent がオペレーティングシステムのリソースにアクセスするための中核です。Agent はタスク要件に基づいてインテリジェントにツールを選択・呼び出し、ファイルの読み書き、コマンド実行、スケジュールタスクなどを実行します。組み込みツールはプロジェクトの `agent/tools/` ディレクトリに実装されています。\n\n**主なツール：** ファイルの読み書き・編集、Bash ターミナル、ファイル送信、スケジューラ、記憶検索、Web 検索、環境設定など。\n\n### 2.1 ターミナルとファイルアクセス\n\nOS のターミナルとファイルシステムへのアクセスは、最も基本的かつ中核的な機能です。多くの他のツールや Skill はこの機能の上に構築されています。ユーザーはモバイルデバイスから Agent とやり取りし、パソコンやサーバーのリソースを操作できます：\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202181130.png\" width=\"800\" />\n</Frame>\n\n### 2.2 プログラミング能力\n\nプログラミングとシステムアクセスを組み合わせることで、Agent は完全な **Vibecoding ワークフロー** を実行できます。情報検索、アセット生成、コーディング、テスト、デプロイ、Nginx 設定、公開まで、すべてスマートフォンからの一つのコマンドで実行可能です：\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260203121008.png\" width=\"800\" />\n</Frame>\n\n### 2.3 スケジュールタスク\n\n`scheduler` ツールにより動的なスケジュールタスクが可能で、**ワンタイムタスク、固定間隔、Cron 式**をサポートしています。タスクは**固定メッセージ送信**または **Agent 動的タスク**実行としてトリガーできます：\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202195402.png\" width=\"800\" />\n</Frame>\n\n### 2.4 環境変数管理\n\nSkill が必要とするシークレットキーは環境変数ファイルに保存され、`env_config` ツールによって管理されます。会話を通じてシークレットを更新でき、セキュリティ保護とマスキング機能が組み込まれています：\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202234939.png\" width=\"800\" />\n</Frame>\n\n## 3. 
Skill システム\n\nSkill システムは Agent に無限の拡張性を提供します。各 Skill は説明ファイル、実行スクリプト（任意）、リソース（任意）で構成され、特定のタイプのタスクを完了する方法を記述します。Skill により Agent は複雑なワークフローの指示に従い、ツールを呼び出し、サードパーティシステムと連携できます。\n\n- **組み込み Skill：** プロジェクトの `skills/` ディレクトリにあり、Skill クリエイター、画像認識、LinkAI Agent、Web フェッチなどが含まれます。組み込み Skill は依存条件（API キー、システムコマンドなど）に基づいて自動的に有効化されます。\n- **カスタム Skill：** ユーザーが会話を通じて作成し、ワークスペース（`~/cow/skills/`）に保存されます。あらゆる複雑なビジネスプロセスやサードパーティ連携を実装できます。\n\n### 3.1 Skill の作成\n\n`skill-creator` Skill により、会話を通じて Skill を素早く作成できます。ワークフローを Skill としてコード化するよう Agent に依頼したり、API ドキュメントやサンプルを送信して Agent に直接連携を完成させることができます：\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202202247.png\" width=\"800\" />\n</Frame>\n\n### 3.2 Web 検索と画像認識\n\n- **Web 検索：** 組み込みの `web_search` ツールで、複数の検索エンジンをサポートします。`BOCHA_API_KEY` または `LINKAI_API_KEY` を設定して有効化してください。\n- **画像認識：** 組み込みの `openai-image-vision` Skill で、`gpt-4.1-mini`、`gpt-4.1` などのモデルをサポートします。`OPENAI_API_KEY` が必要です。\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202213219.png\" width=\"800\" />\n</Frame>\n\n### 3.3 サードパーティナレッジベースとプラグイン\n\n`linkai-agent` Skill により、[LinkAI](https://link-ai.tech/) 上のすべての Agent を Skill として利用でき、マルチ Agent による意思決定が可能になります。\n\n設定方法：`env_config` で `LINKAI_API_KEY` を設定し、`skills/linkai-agent/config.json` に Agent の説明を追加します：\n\n```json\n{\n  \"apps\": [\n    {\n      \"app_code\": \"G7z6vKwp\",\n      \"app_name\": \"LinkAI Customer Support\",\n      \"app_description\": \"Select only when the user needs help with LinkAI platform questions\"\n    },\n    {\n      \"app_code\": \"SFY5x7JR\",\n      \"app_name\": \"Content Creator\",\n      \"app_description\": \"Use only when the user needs to create images or videos\"\n    }\n  ]\n}\n```\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202234350.png\" width=\"750\" />\n</Frame>\n"
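\n本ページで言及した `BOCHA_API_KEY`、`LINKAI_API_KEY`、`OPENAI_API_KEY` などのシークレットは、`env_config` ツール経由で設定するのが基本ですが、実体は環境変数ファイル（`~/.cow/.env`）に保存されます。一般的な `KEY=VALUE` 形式を想定した記述イメージです（値はすべて仮のプレースホルダー）：\n\n```bash\n# ~/.cow/.env の記述イメージ（値は仮のプレースホルダー、必要なキーは利用する Skill / ツールに依存）\n# Web 検索用（BOCHA_API_KEY か LINKAI_API_KEY のいずれかで可）\nBOCHA_API_KEY=your-bocha-key\n# Web 検索 / linkai-agent Skill 用\nLINKAI_API_KEY=your-linkai-key\n# openai-image-vision Skill 用\nOPENAI_API_KEY=your-openai-key\n```\n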
  },
  {
    "path": "docs/ja/intro/index.mdx",
    "content": "---\ntitle: はじめに\ndescription: CowAgent - LLM を活用した AI スーパーアシスタント\n---\n\n<img src=\"https://cdn.link-ai.tech/doc/78c5dd674e2c828642ecc0406669fed7.png\" alt=\"CowAgent\" width=\"600px\"/>\n\n**CowAgent** は、自律的なタスク計画、長期記憶、Skill システム、マルチモーダルメッセージ、複数モデル対応、マルチプラットフォームデプロイを備えた、LLM を活用した AI スーパーアシスタントです。\n\nCowAgent は自ら思考しタスクを計画し、コンピュータや外部リソースを操作し、Skill を作成・実行し、長期記憶により継続的に成長します。複数モデルの柔軟な切り替えをサポートし、テキスト、音声、画像、ファイルなどのマルチモーダルメッセージを処理でき、Web、Feishu（飛書）、DingTalk（釘釘）、WeCom（企業微信）、WeChat公式アカウントに統合できます。お使いのパソコンやサーバー上で24時間365日稼働します。\n\n<Card title=\"GitHub\" icon=\"github\" href=\"https://github.com/zhayujie/chatgpt-on-wechat\">\n  github.com/zhayujie/chatgpt-on-wechat\n</Card>\n\n## コア機能\n\n<CardGroup cols={2}>\n  <Card title=\"自律タスク計画\" icon=\"brain\" href=\"/ja/intro/architecture\">\n    複雑なタスクを理解し、自律的に実行計画を立て、目標が達成されるまで思考とツール呼び出しを続けます。ツールを通じてファイルシステム、ターミナル、ブラウザ、スケジューラなどのシステムリソースにアクセスできます。\n  </Card>\n  <Card title=\"長期記憶\" icon=\"database\" href=\"/ja/memory\">\n    会話の記憶をローカルファイルやデータベースに自動的に永続化します。コアメモリとデイリーメモリを含み、キーワード検索とベクトル検索に対応しています。\n  </Card>\n  <Card title=\"Skill システム\" icon=\"puzzle-piece\" href=\"/ja/skills/index\">\n    Skill の作成・実行エンジンを実装し、組み込み Skill を搭載。自然言語の会話を通じてカスタム Skill の開発もサポートしています。\n  </Card>\n  <Card title=\"マルチモーダルメッセージ\" icon=\"image\" href=\"/ja/channels/web\">\n    テキスト、画像、音声、ファイルなどのメッセージタイプの解析、処理、生成、送信をサポートします。\n  </Card>\n  <Card title=\"複数モデル対応\" icon=\"microchip\" href=\"/ja/models/index\">\n    OpenAI、Claude、Gemini、DeepSeek、MiniMax、GLM、Qwen、Kimi、Doubao など、主要なモデルプロバイダーをサポートしています。\n  </Card>\n  <Card title=\"マルチプラットフォームデプロイ\" icon=\"server\" href=\"/ja/channels/web\">\n    ローカルコンピュータやサーバー上で動作し、Web、Feishu（飛書）、DingTalk（釘釘）、WeChat公式アカウント、WeCom（企業微信）アプリケーションに統合できます。\n  </Card>\n</CardGroup>\n\n## クイック体験\n\nターミナルで以下のコマンドを実行すると、ワンクリックでインストール、設定、起動ができます：\n\n```bash\nbash <(curl -fsSL https://cdn.link-ai.tech/code/cow/run.sh)\n```\n\nデフォルトでは実行後に Web サービスが起動します。`http://localhost:9899/chat` にアクセスして Web インターフェースでチャットできます。\n\n<CardGroup cols={2}>\n  <Card title=\"クイックスタート\" icon=\"rocket\" href=\"/ja/guide/quick-start\">\n    インストールと実行の完全ガイド\n  </Card>\n  <Card title=\"アーキテクチャ\" icon=\"sitemap\" href=\"/ja/intro/architecture\">\n    CowAgent システムアーキテクチャ設計\n  </Card>\n</CardGroup>\n\n## 免責事項\n\n1. 本プロジェクトは [MIT License](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/LICENSE) に基づき、技術研究および学習を目的としています。利用者は現地の法律、規制、ポリシー、および企業の社内規程を遵守する必要があります。違法行為や権利侵害につながる利用は禁止されています。\n2. Agent モードは通常のチャットモードよりも多くのトークンを消費します。効果とコストを考慮してモデルを選択してください。Agent はホスト OS にアクセスできるため、デプロイには十分注意してください。\n3. CowAgent はオープンソース開発に注力しており、いかなる暗号通貨の発行、認可、参加も行っておりません。\n\n## コミュニティ\n\nWeChat でアシスタントを追加して、オープンソースコミュニティに参加しましょう：\n\n<img width=\"140\" src=\"https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/open-community.png\" />\n"
  },
  {
    "path": "docs/ja/memory.mdx",
    "content": "---\ntitle: 記憶\ndescription: CowAgent 長期記憶システム\n---\n\n記憶システムにより、Agent は重要な情報を長期にわたって記憶し、継続的に経験を蓄積し、ユーザーの好みを理解し、真に自律的な思考と継続的な成長を実現できます。\n\n## 記憶の種類\n\n### コア記憶 (MEMORY.md)\n\n`~/cow/MEMORY.md` に保存され、長期的なユーザーの好み、重要な決定、主要な事実など、時間が経っても薄れない情報を含みます。毎回の会話ターンでバックグラウンド知識としてシステムプロンプトに自動的に注入されます。\n\n### 日次記憶 (memory/YYYY-MM-DD.md)\n\n`~/cow/memory/` ディレクトリに保存され、日付で命名されます（例：`2026-03-08.md`）。日々の会話の要約と主要なイベントを記録します。空ファイルの生成を避けるため、最初の書き込み時にのみファイルが作成されます。\n\n## 記憶の書き込み\n\nAgent は以下のメカニズムにより、会話内容を日次記憶に自動的に永続化します：\n\n- **コンテキストトリミング時** — 会話ターン数またはトークン数が設定上限を超えた場合、コンテキストの古い半分が一括でトリミングされ、破棄されたコンテンツは LLM によって要約されて重要な情報として日次記憶ファイルに書き込まれます\n- **毎日のスケジュール要約** — 毎日 23:55 に自動的にフル要約がトリガーされ、アクティビティが少ない日でも記憶が保存されます（内容が変更されていない場合はスキップ）\n- **API コンテキストオーバーフロー時** — モデル API がコンテキストオーバーフローエラーを返した場合、緊急措置として現在の会話要約が保存されます\n\nすべての記憶書き込みはバックグラウンドスレッドで非同期に実行され（LLM の要約 + ファイル書き込み）、通常の会話応答をブロックしません。\n\n## 初回起動\n\n初回起動時に、Agent はユーザーに主要な情報を積極的に尋ね、ワークスペース（デフォルト `~/cow`）に保存します：\n\n| ファイル | 説明 |\n| --- | --- |\n| `system.md` | Agent のシステムプロンプトと動作設定 |\n| `user.md` | ユーザーの身元情報と好み |\n| `MEMORY.md` | コア記憶（長期） |\n| `memory/YYYY-MM-DD.md` | 日次記憶（オンデマンドで作成） |\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260203000455.png\" width=\"800\" />\n</Frame>\n\n## 記憶の検索\n\n記憶システムはハイブリッド検索モードをサポートしています：\n\n- **キーワード検索** — キーワードに基づいて過去の記憶をマッチング\n- **ベクトル検索** — セマンティック類似性検索により、異なる表現でも関連する記憶を発見\n\nAgent は必要に応じて会話中に自動的に記憶検索をトリガーし、関連する過去の情報をコンテキストに組み込みます。コア記憶（`MEMORY.md`）は常にシステムプロンプトに注入され、日次記憶は検索を通じてオンデマンドで読み込まれます。\n\n## 設定\n\n```json\n{\n  \"agent_workspace\": \"~/cow\",\n  \"agent_max_context_tokens\": 40000,\n  \"agent_max_context_turns\": 20\n}\n```\n\n| パラメータ | 説明 | デフォルト |\n| --- | --- | --- |\n| `agent_workspace` | ワークスペースパス、記憶ファイルはこのディレクトリ配下に保存されます | `~/cow` |\n| `agent_max_context_tokens` | 最大コンテキストトークン数。超過時に半分がトリミングされ、記憶として要約されます | `40000` |\n| `agent_max_context_turns` | 最大コンテキストターン数。超過時に半分がトリミングされ、記憶として要約されます | `20` |\n"
  },
  {
    "path": "docs/ja/models/claude.mdx",
    "content": "---\ntitle: Claude\ndescription: Claudeモデルの設定\n---\n\n```json\n{\n  \"model\": \"claude-sonnet-4-6\",\n  \"claude_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| パラメータ | 説明 |\n| --- | --- |\n| `model` | `claude-sonnet-4-6`、`claude-opus-4-6`、`claude-sonnet-4-5`、`claude-sonnet-4-0`、`claude-3-5-sonnet-latest`などから選択可能。[公式モデル一覧](https://docs.anthropic.com/en/docs/about-claude/models/overview)を参照 |\n| `claude_api_key` | [Claude Console](https://console.anthropic.com/settings/keys)で作成 |\n| `claude_api_base` | 任意。デフォルトは`https://api.anthropic.com/v1`。サードパーティプロキシを使用する場合に変更 |\n"
  },
  {
    "path": "docs/ja/models/coding-plan.mdx",
    "content": "---\ntitle: Coding Plan\ndescription: Coding Planモデルの設定\n---\n\n> Coding Planは各プロバイダーが提供する月額サブスクリプションパッケージで、高頻度のAgent利用に最適です。CowAgentはOpenAI互換モードにより、すべてのCoding Planプロバイダーをサポートしています。\n\n<Note>\n  Coding PlanのAPI BaseとAPI Keyは、通常の従量課金制のものとは別になっています。各プロバイダーのプラットフォームから取得してください。\n</Note>\n\n## 共通設定\n\nすべてのプロバイダーはOpenAI互換プロトコルでアクセスでき、Webコンソールから素早く設定できます。モデルプロバイダーを**OpenAI**に設定し、カスタムモデルを選択してモデルコードを入力し、対応するプロバイダーのAPI BaseとAPI Keyを入力してください:\n\n<img src=\"https://cdn.link-ai.tech/doc/20260318113134.png\" width=\"800\"/>\n\n`config.json`で直接設定することも可能です:\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"MODEL_NAME\",\n  \"open_ai_api_base\": \"PROVIDER_CODING_PLAN_API_BASE\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| パラメータ | 説明 |\n| --- | --- |\n| `bot_type` | `openai`を指定（OpenAI互換モード） |\n| `model` | プロバイダーがサポートするモデル名 |\n| `open_ai_api_base` | プロバイダーのCoding Plan API Base URL |\n| `open_ai_api_key` | プロバイダーのCoding Plan API Key |\n\n---\n\n## 阿里云\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"qwen3.5-plus\",\n  \"open_ai_api_base\": \"https://coding.dashscope.aliyuncs.com/v1\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| パラメータ | 説明 |\n| --- | --- |\n| `model` | `qwen3.5-plus`、`qwen3-max-2026-01-23`、`qwen3-coder-next`、`qwen3-coder-plus`、`glm-5`、`glm-4.7`、`kimi-k2.5`、`MiniMax-M2.5` |\n| `open_ai_api_base` | `https://coding.dashscope.aliyuncs.com/v1` |\n| `open_ai_api_key` | Coding Plan専用キー（従量課金とは共有不可） |\n\n参考: [クイックスタート](https://help.aliyun.com/zh/model-studio/coding-plan-quickstart?spm=a2c4g.11186623.help-menu-2400256.d_0_2_1.70115203zi5Igc)、[モデル一覧](https://help.aliyun.com/zh/model-studio/coding-plan)\n\n---\n\n## MiniMax\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"MiniMax-M2.5\",\n  \"open_ai_api_base\": \"https://api.minimaxi.com/v1\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| パラメータ | 説明 |\n| --- | --- |\n| `model` | `MiniMax-M2.5`、`MiniMax-M2.5-highspeed`、`MiniMax-M2.1`、`MiniMax-M2` |\n| `open_ai_api_base` | 中国: `https://api.minimaxi.com/v1`、グローバル: `https://api.minimax.io/v1` |\n| `open_ai_api_key` | Coding Plan専用キー（従量課金とは共有不可） |\n\n参考: [中国キー](https://platform.minimaxi.com/docs/coding-plan/quickstart)、[モデル一覧](https://platform.minimaxi.com/docs/guides/pricing-coding-plan)、[グローバルキー](https://platform.minimax.io/docs/coding-plan/quickstart)\n\n---\n\n## 智谱 GLM\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"glm-4.7\",\n  \"open_ai_api_base\": \"https://open.bigmodel.cn/api/coding/paas/v4\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| パラメータ | 説明 |\n| --- | --- |\n| `model` | `glm-5`、`glm-4.7`、`glm-4.6`、`glm-4.5`、`glm-4.5-air` |\n| `open_ai_api_base` | 中国: `https://open.bigmodel.cn/api/coding/paas/v4`、グローバル: `https://api.z.ai/api/coding/paas/v4` |\n| `open_ai_api_key` | 標準APIと共有 |\n\n参考: [中国クイックスタート](https://docs.bigmodel.cn/cn/coding-plan/quick-start)、[グローバルクイックスタート](https://docs.z.ai/devpack/quick-start)\n\n---\n\n## Kimi\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"kimi-for-coding\",\n  \"open_ai_api_base\": \"https://api.kimi.com/coding/v1\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| パラメータ | 説明 |\n| --- | --- |\n| `model` | `kimi-for-coding` |\n| `open_ai_api_base` | `https://api.kimi.com/coding/v1` |\n| `open_ai_api_key` | Coding Plan専用キー（従量課金とは共有不可） |\n\n参考: [キー & ドキュメント](https://www.kimi.com/code/docs/)\n\n---\n\n## 火山引擎\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"Doubao-Seed-2.0-Code\",\n  \"open_ai_api_base\": 
\"https://ark.cn-beijing.volces.com/api/coding/v3\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| パラメータ | 説明 |\n| --- | --- |\n| `model` | `Doubao-Seed-2.0-Code`、`Doubao-Seed-2.0-pro`、`Doubao-Seed-2.0-lite`、`Doubao-Seed-Code`、`MiniMax-M2.5`、`Kimi-K2.5`、`GLM-4.7`、`DeepSeek-V3.2` |\n| `open_ai_api_base` | `https://ark.cn-beijing.volces.com/api/coding/v3` |\n| `open_ai_api_key` | 標準APIと共有 |\n\n参考: [クイックスタート](https://www.volcengine.com/docs/82379/1928261?lang=zh)\n"
  },
  {
    "path": "docs/ja/models/deepseek.mdx",
    "content": "---\ntitle: DeepSeek\ndescription: DeepSeekモデルの設定\n---\n\nOpenAI互換の設定を使用します:\n\n```json\n{\n  \"model\": \"deepseek-chat\",\n  \"bot_type\": \"openai\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\",\n  \"open_ai_api_base\": \"https://api.deepseek.com/v1\"\n}\n```\n\n| パラメータ | 説明 |\n| --- | --- |\n| `model` | `deepseek-chat` (DeepSeek-V3)、`deepseek-reasoner` (DeepSeek-R1) |\n| `bot_type` | `openai`を指定（OpenAI互換モード） |\n| `open_ai_api_key` | [DeepSeek Platform](https://platform.deepseek.com/api_keys)で作成 |\n| `open_ai_api_base` | DeepSeekプラットフォームのBASE URL |\n"
  },
  {
    "path": "docs/ja/models/doubao.mdx",
    "content": "---\ntitle: Doubao (ByteDance)\ndescription: Doubao (火山方舟) モデルの設定\n---\n\n```json\n{\n  \"model\": \"doubao-seed-2-0-code-preview-260215\",\n  \"ark_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| パラメータ | 説明 |\n| --- | --- |\n| `model` | `doubao-seed-2-0-code-preview-260215`、`doubao-seed-2-0-pro-260215`、`doubao-seed-2-0-lite-260215`などから選択可能 |\n| `ark_api_key` | [火山方舟 Console](https://console.volcengine.com/ark/region:ark+cn-beijing/apikey)で作成 |\n| `ark_base_url` | 任意。デフォルトは`https://ark.cn-beijing.volces.com/api/v3` |\n"
  },
  {
    "path": "docs/ja/models/gemini.mdx",
    "content": "---\ntitle: Gemini\ndescription: Google Geminiモデルの設定\n---\n\n```json\n{\n  \"model\": \"gemini-3.1-pro-preview\",\n  \"gemini_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| パラメータ | 説明 |\n| --- | --- |\n| `model` | `gemini-3.1-flash-lite-preview`、`gemini-3.1-pro-preview`、`gemini-3-flash-preview`、`gemini-3-pro-preview`などから選択可能。[公式ドキュメント](https://ai.google.dev/gemini-api/docs/models)を参照 |\n| `gemini_api_key` | [Google AI Studio](https://aistudio.google.com/app/apikey)で作成 |\n"
  },
  {
    "path": "docs/ja/models/glm.mdx",
    "content": "---\ntitle: GLM (智谱AI)\ndescription: 智谱AI GLMモデルの設定\n---\n\n```json\n{\n  \"model\": \"glm-5-turbo\",\n  \"zhipu_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| パラメータ | 説明 |\n| --- | --- |\n| `model` | `glm-5-turbo`、`glm-5`、`glm-4.7`、`glm-4-plus`、`glm-4-flash`、`glm-4-air`などから選択可能。[モデルコード](https://bigmodel.cn/dev/api/normal-model/glm-4)を参照 |\n| `zhipu_ai_api_key` | [智谱AI Console](https://www.bigmodel.cn/usercenter/proj-mgmt/apikeys)で作成 |\n\nOpenAI互換の設定もサポートしています:\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"glm-5-turbo\",\n  \"open_ai_api_base\": \"https://open.bigmodel.cn/api/paas/v4\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n"
  },
  {
    "path": "docs/ja/models/index.mdx",
    "content": "---\ntitle: モデル概要\ndescription: CowAgentがサポートするモデルとおすすめの選択肢\n---\n\nCowAgentは国内外の主要なLLMをサポートしています。モデルインターフェースはプロジェクトの`models/`ディレクトリに実装されています。\n\n<Note>\n  Agent モードでは、品質とコストのバランスから以下のモデルをおすすめします: MiniMax-M2.7、glm-5-turbo、kimi-k2.5、qwen3.5-plus、claude-sonnet-4-6、gemini-3.1-pro-preview\n</Note>\n\n## 設定\n\n選択したモデルに応じて、`config.json`にモデル名とAPI Keyを設定してください。各モデルは`bot_type`を`openai`に設定し、`open_ai_api_base`と`open_ai_api_key`を設定することで、OpenAI互換アクセスもサポートしています。\n\nまた、[LinkAI](https://link-ai.tech)プラットフォームインターフェースを使用すると、ナレッジベース、ワークフロー、その他のAgent機能をサポートしながら、複数のモデルを柔軟に切り替えることができます。\n\n## サポートモデル\n\n<CardGroup cols={2}>\n  <Card title=\"MiniMax\" href=\"/ja/models/minimax\">\n    MiniMax-M2.7およびその他のシリーズモデル\n  </Card>\n  <Card title=\"GLM (智谱AI)\" href=\"/ja/models/glm\">\n    glm-5-turbo、glm-5およびその他のシリーズモデル\n  </Card>\n  <Card title=\"Qwen (通义千问)\" href=\"/ja/models/qwen\">\n    qwen3.5-plus、qwen3-maxなど\n  </Card>\n  <Card title=\"Kimi\" href=\"/ja/models/kimi\">\n    kimi-k2.5、kimi-k2など\n  </Card>\n  <Card title=\"Doubao (ByteDance)\" href=\"/ja/models/doubao\">\n    doubao-seedシリーズモデル\n  </Card>\n  <Card title=\"Claude\" href=\"/ja/models/claude\">\n    claude-sonnet-4-6など\n  </Card>\n  <Card title=\"Gemini\" href=\"/ja/models/gemini\">\n    gemini-3.1-pro-previewなど\n  </Card>\n  <Card title=\"OpenAI\" href=\"/ja/models/openai\">\n    gpt-5.4、gpt-4.1、oシリーズなど\n  </Card>\n  <Card title=\"DeepSeek\" href=\"/ja/models/deepseek\">\n    deepseek-chat、deepseek-reasoner\n  </Card>\n  <Card title=\"LinkAI\" href=\"/ja/models/linkai\">\n    統合マルチモデルインターフェース + ナレッジベース\n  </Card>\n</CardGroup>\n\n<Tip>\n  モデル名の完全なリストについては、プロジェクトの[`common/const.py`](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/common/const.py)ファイルを参照してください。\n</Tip>\n"
  },
  {
    "path": "docs/ja/models/kimi.mdx",
    "content": "---\ntitle: Kimi (Moonshot)\ndescription: Kimi (Moonshot) モデルの設定\n---\n\n```json\n{\n  \"model\": \"kimi-k2.5\",\n  \"moonshot_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| パラメータ | 説明 |\n| --- | --- |\n| `model` | `kimi-k2.5`、`kimi-k2`、`moonshot-v1-8k`、`moonshot-v1-32k`、`moonshot-v1-128k`から選択可能 |\n| `moonshot_api_key` | [Moonshot Console](https://platform.moonshot.cn/console/api-keys)で作成 |\n\nOpenAI互換の設定もサポートしています:\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"kimi-k2.5\",\n  \"open_ai_api_base\": \"https://api.moonshot.cn/v1\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n"
  },
  {
    "path": "docs/ja/models/linkai.mdx",
    "content": "---\ntitle: LinkAI\ndescription: LinkAIプラットフォームで複数モデルに統合アクセス\n---\n\n[LinkAI](https://link-ai.tech)プラットフォームでは、OpenAI、Claude、Gemini、DeepSeek、Qwen、Kimiなどのモデルを柔軟に切り替えることができ、ナレッジベース、ワークフロー、プラグイン、その他のAgent機能をサポートしています。\n\n```json\n{\n  \"use_linkai\": true,\n  \"linkai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| パラメータ | 説明 |\n| --- | --- |\n| `use_linkai` | `true`に設定してLinkAIインターフェースを有効化 |\n| `linkai_api_key` | [LinkAI Console](https://link-ai.tech/console/interface)で作成 |\n| `model` | 空のままにするとAgentのデフォルトモデルを使用。プラットフォーム上で柔軟に切り替え可能。[モデル一覧](https://link-ai.tech/console/models)のすべてのモデルをサポート |\n\n詳細は[APIドキュメント](https://docs.link-ai.tech/platform/api)を参照してください。\n"
  },
  {
    "path": "docs/ja/models/minimax.mdx",
    "content": "---\ntitle: MiniMax\ndescription: MiniMaxモデルの設定\n---\n\n```json\n{\n  \"model\": \"MiniMax-M2.7\",\n  \"minimax_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| パラメータ | 説明 |\n| --- | --- |\n| `model` | `MiniMax-M2.7`、`MiniMax-M2.5`、`MiniMax-M2.1`、`MiniMax-M2.1-lightning`、`MiniMax-M2`などから選択可能 |\n| `minimax_api_key` | [MiniMax Console](https://platform.minimaxi.com/user-center/basic-information/interface-key)で作成 |\n\nOpenAI互換の設定もサポートしています:\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"MiniMax-M2.7\",\n  \"open_ai_api_base\": \"https://api.minimaxi.com/v1\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n"
  },
  {
    "path": "docs/ja/models/openai.mdx",
    "content": "---\ntitle: OpenAI\ndescription: OpenAIモデルの設定\n---\n\n```json\n{\n  \"model\": \"gpt-5.4\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\",\n  \"open_ai_api_base\": \"https://api.openai.com/v1\"\n}\n```\n\n| パラメータ | 説明 |\n| --- | --- |\n| `model` | OpenAI APIの[modelパラメータ](https://platform.openai.com/docs/models)に対応。oシリーズ、gpt-5.4、gpt-5シリーズ、gpt-4.1などをサポート。Agentモードでは`gpt-5.4`を推奨 |\n| `open_ai_api_key` | [OpenAI Platform](https://platform.openai.com/api-keys)で作成 |\n| `open_ai_api_base` | 任意。サードパーティプロキシを使用する場合に変更 |\n| `bot_type` | 公式OpenAIモデルでは不要。Claudeなど非OpenAIモデルをプロキシ経由で使用する場合は`openai`に設定 |\n"
  },
  {
    "path": "docs/ja/models/qwen.mdx",
    "content": "---\ntitle: Qwen (通义千问)\ndescription: 通义千问モデルの設定\n---\n\n```json\n{\n  \"model\": \"qwen3.5-plus\",\n  \"dashscope_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| パラメータ | 説明 |\n| --- | --- |\n| `model` | `qwen3.5-plus`、`qwen3-max`、`qwen-max`、`qwen-plus`、`qwen-turbo`、`qwq-plus`などから選択可能 |\n| `dashscope_api_key` | [百炼 Console](https://bailian.console.aliyun.com/?tab=model#/api-key)で作成。[公式ドキュメント](https://bailian.console.aliyun.com/?tab=api#/api)を参照 |\n\nOpenAI互換の設定もサポートしています:\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"qwen3.5-plus\",\n  \"open_ai_api_base\": \"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n"
  },
  {
    "path": "docs/ja/releases/overview.mdx",
    "content": "---\ntitle: 変更履歴\ndescription: CowAgent バージョン履歴\n---\n\n| バージョン | 日付 | 説明 |\n| --- | --- | --- |\n| [2.0.2](/en/releases/v2.0.2) | 2026.02.27 | Web Console アップグレード、マルチチャネル同時実行、セッション永続化 |\n| [2.0.1](/en/releases/v2.0.1) | 2026.02.27 | 組み込み Web Search ツール、スマートコンテキスト管理、複数の修正 |\n| [2.0.0](/en/releases/v2.0.0) | 2026.02.03 | AI スーパーアシスタントへの全面アップグレード |\n| 1.7.6 | 2025.05.23 | Web Channel 最適化、AgentMesh プラグイン |\n| 1.7.5 | 2025.04.11 | DeepSeek モデル |\n| 1.7.4 | 2024.12.13 | Gemini 2.0 モデル、Web Channel |\n| 1.7.3 | 2024.10.31 | 安定性の改善、データベース機能 |\n| 1.7.2 | 2024.09.26 | ワンクリックインストールスクリプト、o1 モデル |\n| 1.7.0 | 2024.08.02 | 讯飞 4.0 モデル、ナレッジベース参照 |\n| 1.6.9 | 2024.07.19 | gpt-4o-mini、阿里音声認識 |\n| 1.6.8 | 2024.07.05 | Claude 3.5、Gemini 1.5 Pro |\n| 1.6.0 | 2024.04.26 | Kimi 統合、gpt-4-turbo アップグレード |\n| 1.5.0 | 2023.11.10 | gpt-4-turbo、dall-e-3、tts マルチモーダル |\n| 1.0.0 | 2022.12.12 | プロジェクト作成、初の ChatGPT 統合 |\n\n完全な履歴は [GitHub Releases](https://github.com/zhayujie/chatgpt-on-wechat/releases) をご覧ください。\n"
  },
  {
    "path": "docs/ja/releases/v2.0.0.mdx",
    "content": "---\ntitle: v2.0.0\ndescription: CowAgent 2.0 - チャットボットから AI スーパーアシスタントへの全面アップグレード\n---\n\nCowAgent 2.0 は、チャットボットから **AI スーパーアシスタント** への包括的なアップグレードです。自律的な思考とタスク計画、長期記憶、コンピューターの操作、Skill の作成と実行が可能です。\n\n**リリース日**: 2026.02.03 | [GitHub Release](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/2.0.0)\n\n## 主な更新内容\n\n### Agent コア\n\n- **複雑なタスク計画**: マルチターン推論による自律的な計画\n- **長期記憶**: キーワードおよびベクトル検索による永続的な記憶\n- **組み込みツール**: ファイル操作、Bash、ブラウザ、スケジューラなど 10 以上のツール\n- **Web 検索**: 組み込みの `web_search` ツール、複数の検索エンジンに対応、対応する API キーを設定して使用\n- **Skill システム**: 組み込みおよびカスタム Skill をサポートする Skill エンジン\n- **セキュリティとコスト**: シークレット管理、プロンプト制御、トークン制限\n\n### その他\n\n- **チャネル**: 飞书/钉钉 WebSocket 対応、画像・ファイルメッセージ\n- **モデル**: claude-sonnet-4-5、gemini-3-pro-preview、glm-4.7、MiniMax-M2.1、qwen3-max\n- **デプロイ**: ワンクリックでのインストール、設定、実行、および管理スクリプト\n\n## 長期記憶\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260203000455.png\" width=\"800\" />\n</Frame>\n\n## タスク計画とツール\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202181130.png\" width=\"800\" />\n</Frame>\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260203121008.png\" width=\"800\" />\n</Frame>\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202195402.png\" width=\"800\" />\n</Frame>\n\n## Skill システム\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202202247.png\" width=\"800\" />\n</Frame>\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202213219.png\" width=\"800\" />\n</Frame>\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202234350.png\" width=\"750\" />\n</Frame>\n\n## コントリビューション\n\n[フィードバックの送信](https://github.com/zhayujie/chatgpt-on-wechat/issues) や [コードのコントリビューション](https://github.com/zhayujie/chatgpt-on-wechat/pulls) を歓迎します。\n"
  },
  {
    "path": "docs/ja/releases/v2.0.1.mdx",
    "content": "---\ntitle: v2.0.1\ndescription: CowAgent 2.0.1 - 組み込み Web Search、スマートコンテキスト管理、複数の修正\n---\n\n**リリース日**: 2026.02.27 | [全変更履歴](https://github.com/zhayujie/chatgpt-on-wechat/compare/2.0.0..2.0.1)\n\n## 新機能\n\n- **組み込み Web Search ツール**: Web 検索を Agent の組み込みツールとして統合し、判断コストを削減 ([4f0ea5d](https://github.com/zhayujie/chatgpt-on-wechat/commit/4f0ea5d7568d61db91ff69c91c429e785fd1b1c2))\n- **Claude Opus 4.6 モデル対応**: Claude Opus 4.6 モデルのサポートを追加 ([#2661](https://github.com/zhayujie/chatgpt-on-wechat/pull/2661))\n- **企业微信の画像認識**: 企业微信チャネルでの画像メッセージ認識をサポート ([#2667](https://github.com/zhayujie/chatgpt-on-wechat/pull/2667))\n\n## 改善\n\n- **スマートコンテキスト管理**: インテリジェントなコンテキストトリミング戦略により、チャットコンテキストのオーバーフローを解決し、トークン制限超過を防止 ([cea7fb7](https://github.com/zhayujie/chatgpt-on-wechat/commit/cea7fb7490c53454602bf05955a0e9f059bcf0fd), [8acf2db](https://github.com/zhayujie/chatgpt-on-wechat/commit/8acf2dbdfe713b84ad74b761b7f86674b1c1904d)) [#2663](https://github.com/zhayujie/chatgpt-on-wechat/issues/2663)\n- **ランタイム情報の動的更新**: 動的関数によるシステムプロンプト内のタイムスタンプおよびその他のランタイム情報の自動更新 ([#2655](https://github.com/zhayujie/chatgpt-on-wechat/pull/2655), [#2657](https://github.com/zhayujie/chatgpt-on-wechat/pull/2657))\n- **Skill プロンプトの最適化**: Skill システムプロンプト生成を改善し、ツールの説明を簡素化して Agent のパフォーマンスを向上 ([6c21833](https://github.com/zhayujie/chatgpt-on-wechat/commit/6c218331b1f1208ea8be6bf226936d3b556ade3e))\n- **GLM カスタム API Base URL**: GLM モデルのカスタム API Base URL をサポート ([#2660](https://github.com/zhayujie/chatgpt-on-wechat/pull/2660))\n- **起動スクリプトの最適化**: `run.sh` スクリプトのインタラクションと設定フローを改善 ([#2656](https://github.com/zhayujie/chatgpt-on-wechat/pull/2656))\n- **判断ステップのログ記録**: デバッグ用の Agent 判断ステップログを追加 ([cb303e6](https://github.com/zhayujie/chatgpt-on-wechat/commit/cb303e6109c50c8dfef1f5e6c1ec47223bf3cd11))\n\n## バグ修正\n\n- **Scheduler の記憶喪失**: Scheduler ディスパッチャーによる記憶喪失を修正 ([a77a874](https://github.com/zhayujie/chatgpt-on-wechat/commit/a77a8741b500a408c6f5c8868856fb4b018fe9db))\n- **空のツール呼び出しと長い結果**: 空のツール呼び出しおよび過度に長いツール結果の処理を修正 ([0542700](https://github.com/zhayujie/chatgpt-on-wechat/commit/0542700f9091ebb08c1a56103b0f0f45f24aa621))\n- **OpenAI Function Call**: OpenAI モデルとの Function Call 互換性を修正 ([158c87a](https://github.com/zhayujie/chatgpt-on-wechat/commit/158c87ab8b05bae054cc1b4eacdbb64fc1062ba9))\n- **Claude ツール名フィールド**: Claude モデルのレスポンスから余分なツール名フィールドを削除 ([eec10cb](https://github.com/zhayujie/chatgpt-on-wechat/commit/eec10cb5db6a3d5bc12ef606606532237d2c5f6e))\n- **MiniMax 推論**: MiniMax モデルの推論コンテンツ処理を最適化し、思考プロセスの出力を非表示化 ([c72cda3](https://github.com/zhayujie/chatgpt-on-wechat/commit/c72cda33864bd1542012ee6e0a8bd8c6c88cb5ed), [72b1cac](https://github.com/zhayujie/chatgpt-on-wechat/commit/72b1cacea1ba0d1f3dedacbab2e088e98fd7e172))\n- **GLM 思考プロセス**: GLM モデルの思考プロセス表示を非表示化 ([72b1cac](https://github.com/zhayujie/chatgpt-on-wechat/commit/72b1cacea1ba0d1f3dedacbab2e088e98fd7e172))\n- **飞书の接続と SSL**: 飞书チャネルの SSL 証明書エラーおよび接続問題を修正 ([229b14b](https://github.com/zhayujie/chatgpt-on-wechat/commit/229b14b6fcabe7123d53cab1dea39f38dab26d6d), [8674421](https://github.com/zhayujie/chatgpt-on-wechat/commit/867442155e7f095b4f38b0856f8c1d8312b5fcf7))\n- **model_type バリデーション**: 非文字列の `model_type` による `AttributeError` を修正 ([#2666](https://github.com/zhayujie/chatgpt-on-wechat/pull/2666))\n\n## プラットフォーム互換性\n\n- **Windows 互換性**: 複数のツールモジュールにおける Windows でのパス処理、ファイルエンコーディング、および `os.getuid()` の利用不可問題を修正 ([051ffd7](https://github.com/zhayujie/chatgpt-on-wechat/commit/051ffd78a372f71a967fd3259e37fe19131f83cf), 
[5264f7c](https://github.com/zhayujie/chatgpt-on-wechat/commit/5264f7ce18360ee4db5dcb4ebe67307977d40014))\n"
  },
  {
    "path": "docs/ja/releases/v2.0.2.mdx",
    "content": "---\ntitle: v2.0.2\ndescription: CowAgent 2.0.2 - Web Console アップグレード、マルチチャネル同時実行、セッション永続化\n---\n\n**リリース日**: 2026.02.27 | [全変更履歴](https://github.com/zhayujie/chatgpt-on-wechat/compare/2.0.1...master)\n\n## ハイライト\n\n### 🖥️ Web Console アップグレード\n\nWeb Console が全面的にアップグレードされ、ストリーミング会話出力、ツール実行と推論プロセスの視覚的表示、**モデル、Skill、記憶、チャネル、Agent 設定** のオンライン管理が可能になりました。\n\n#### チャットインターフェース\n\nストリーミング出力に対応し、Agent の推論プロセスとツール呼び出しをリアルタイムに表示することで、Agent の意思決定を直感的に観察できます：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227180120.png\" />\n\n#### モデル管理\n\n設定ファイルを手動で編集せずに、モデル設定をオンラインで管理できます：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173811.png\" />\n\n#### Skill 管理\n\nAgent の Skill をオンラインで表示・管理できます：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173403.png\" />\n\n#### 記憶管理\n\nAgent の記憶をオンラインで表示・管理できます：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173349.png\" />\n\n#### チャネル管理\n\n接続されたチャネルをオンラインで管理し、リアルタイムで接続・切断操作ができます：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173331.png\" />\n\n#### スケジュールタスク\n\nワンタイムタスク、固定間隔、Cron 式を含むスケジュールタスクをオンラインで表示・管理できます：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173704.png\" />\n\n#### ログ\n\nAgent のランタイムログをリアルタイムで表示し、監視とトラブルシューティングに活用できます：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173514.png\" />\n\n関連コミット: [f1a1413](https://github.com/zhayujie/chatgpt-on-wechat/commit/f1a1413), [c0702c8](https://github.com/zhayujie/chatgpt-on-wechat/commit/c0702c8), [394853c](https://github.com/zhayujie/chatgpt-on-wechat/commit/394853c), [1c71c4e](https://github.com/zhayujie/chatgpt-on-wechat/commit/1c71c4e), [5e3eccb](https://github.com/zhayujie/chatgpt-on-wechat/commit/5e3eccb), [e1dc037](https://github.com/zhayujie/chatgpt-on-wechat/commit/e1dc037), [5edbf4c](https://github.com/zhayujie/chatgpt-on-wechat/commit/5edbf4c), [7d258b5](https://github.com/zhayujie/chatgpt-on-wechat/commit/7d258b5)\n\n### 🔀 マルチチャネル同時実行\n\n複数のチャネル（例：飞书、钉钉、企业微信、Web）を同時に実行できるようになりました。各チャネルは独立したスレッドで動作し、互いに干渉しません。\n\n設定方法: `config.json` の `channel_type` にカンマ区切りで複数のチャネルを設定するか、Web Console のチャネル管理ページからリアルタイムでチャネルの接続・切断を行います。\n\n```json\n{\n  \"channel_type\": \"web,feishu,dingtalk\"\n}\n```\n\n関連コミット: [4694594](https://github.com/zhayujie/chatgpt-on-wechat/commit/4694594), [7cce224](https://github.com/zhayujie/chatgpt-on-wechat/commit/7cce224), [7d258b5](https://github.com/zhayujie/chatgpt-on-wechat/commit/7d258b5), [c9adddb](https://github.com/zhayujie/chatgpt-on-wechat/commit/c9adddb)\n\n### 💾 セッション永続化\n\nセッション履歴がローカルの SQLite データベースに永続化されるようになりました。サービス再起動後も会話コンテキストが自動的に復元されます。Web Console の過去の会話も復元されます。\n\n関連コミット: [29bfbec](https://github.com/zhayujie/chatgpt-on-wechat/commit/29bfbec), [9917552](https://github.com/zhayujie/chatgpt-on-wechat/commit/9917552), [925d728](https://github.com/zhayujie/chatgpt-on-wechat/commit/925d728)\n\n## 新モデル\n\n- **Gemini 3.1 Pro Preview**: `gemini-3.1-pro-preview` モデルのサポートを追加 ([52d7cad](https://github.com/zhayujie/chatgpt-on-wechat/commit/52d7cad))\n- **Claude 4.6 Sonnet**: `claude-4.6-sonnet` モデルのサポートを追加 ([52d7cad](https://github.com/zhayujie/chatgpt-on-wechat/commit/52d7cad))\n- **Qwen3.5 Plus**: `qwen3.5-plus` モデルのサポートを追加 ([e59a289](https://github.com/zhayujie/chatgpt-on-wechat/commit/e59a289))\n- **MiniMax M2.5**: `Minimax-M2.5` モデルのサポートを追加 ([48db538](https://github.com/zhayujie/chatgpt-on-wechat/commit/48db538))\n- **GLM-5**: `glm-5` モデルのサポートを追加 ([48db538](https://github.com/zhayujie/chatgpt-on-wechat/commit/48db538))\n- **Kimi K2.5**: 
`kimi-k2.5` モデルのサポートを追加 ([48db538](https://github.com/zhayujie/chatgpt-on-wechat/commit/48db538))\n- **Doubao 2.0 Code**: コーディング特化型 `doubao-2.0-code` モデルを追加 ([ab28ee5](https://github.com/zhayujie/chatgpt-on-wechat/commit/ab28ee5))\n- **DashScope モデル**: 阿里云 DashScope モデル名のサポートを追加 ([ce58f23](https://github.com/zhayujie/chatgpt-on-wechat/commit/ce58f23))\n\n## ウェブサイトとドキュメント\n\n- **公式サイト**: [cowagent.ai](https://cowagent.ai/)\n- **ドキュメント**: [docs.cowagent.ai](https://docs.cowagent.ai/)\n\n## バグ修正\n\n- **Gemini 钉钉画像認識**: 钉钉チャネルで Gemini が画像マーカーを処理できない問題を修正 ([05a3304](https://github.com/zhayujie/chatgpt-on-wechat/commit/05a3304)) ([#2670](https://github.com/zhayujie/chatgpt-on-wechat/pull/2670)) Thanks [@SgtPepper114](https://github.com/SgtPepper114)\n- **起動スクリプトの依存関係**: `run.sh` スクリプトの依存関係インストール問題を修正 ([b6fc9fa](https://github.com/zhayujie/chatgpt-on-wechat/commit/b6fc9fa))\n- **bare except の整理**: より適切な例外処理のため `bare except` を `except Exception` に置換 ([adca89b](https://github.com/zhayujie/chatgpt-on-wechat/commit/adca89b)) ([#2674](https://github.com/zhayujie/chatgpt-on-wechat/pull/2674)) Thanks [@haosenwang1018](https://github.com/haosenwang1018)\n"
  },
  {
    "path": "docs/ja/releases/v2.0.3.mdx",
    "content": "---\ntitle: v2.0.3\ndescription: CowAgent 2.0.3 - 企業微信スマートボットとQQチャネルの追加、Webコンソールファイル処理、メモリシステムのアップグレード\n---\n\n## 🔌 新規チャネル\n\n### 企業微信スマートボット\n\n企業微信スマートボット（`wecom_bot`）チャネルを追加しました。ストリーミングカードメッセージ出力、テキストと画像メッセージの送受信をサポートし、Webコンソールでチャネルの設定と管理が可能です。\n\n接続ドキュメント：[企業微信スマートボット接続](https://docs.cowagent.ai/channels/wecom-bot)。\n\n関連コミット：[d4480b6](https://github.com/zhayujie/chatgpt-on-wechat/commit/d4480b6), [a42f31f](https://github.com/zhayujie/chatgpt-on-wechat/commit/a42f31f), [4ecd4df](https://github.com/zhayujie/chatgpt-on-wechat/commit/4ecd4df), [8b45d6c](https://github.com/zhayujie/chatgpt-on-wechat/commit/8b45d6c)\n\n### QQ チャネル\n\nQQ 公式ボット（`qq`）チャネルを追加しました。テキストと画像メッセージの送受信をサポートし、プライベートチャットとグループチャットに対応しています。\n\n接続ドキュメント：[QQボット接続](https://docs.cowagent.ai/channels/qq)。\n\n関連コミット：[005a0e1](https://github.com/zhayujie/chatgpt-on-wechat/commit/005a0e1), [a4d54f5](https://github.com/zhayujie/chatgpt-on-wechat/commit/a4d54f5)\n\n## 🖥️ Web コンソールのファイル入力・処理対応\n\nWeb コンソールのチャット画面でファイルや画像のアップロードが可能になり、Agent に直接ファイルを送信して処理できます。また、Read ツールに Office ドキュメント（Word、Excel、PPT）の解析機能を追加しました。\n\n関連コミット：[30c6d9b](https://github.com/zhayujie/chatgpt-on-wechat/commit/30c6d9b)\n\n## 🤖 新規モデル\n\n- **GPT-5.4 シリーズ**：`gpt-5.4`、`gpt-5.4-mini`、`gpt-5.4-nano` モデルのサポートを追加 ([1623deb](https://github.com/zhayujie/chatgpt-on-wechat/commit/1623deb))\n- **Gemini 3.1 Flash Lite Preview**：`gemini-3.1-flash-lite-preview` モデルのサポートを追加 ([ba915f2](https://github.com/zhayujie/chatgpt-on-wechat/commit/ba915f2))\n\n## 💰 Coding Plan サポート\n\n各ベンダーの Coding Plan（プログラミング月額プラン）への接続をサポートしました。OpenAI 互換方式で統一的に接続できます。現在、阿里雲、MiniMax、智譜 GLM、Kimi、火山エンジンなどのベンダーに対応しています。\n\n詳細設定は [Coding Plan ドキュメント](https://docs.cowagent.ai/models/coding-plan) を参照してください。\n\n## 🧠 メモリシステムのアップグレード\n\nメモリ書き込み（Memory Flush）のアップグレード：\n\n- LLM を使用してコンテキストウィンドウを超えた会話内容をインテリジェントに要約し、精製された日次メモリエントリを生成\n- 要約はバックグラウンドスレッドで非同期実行され、応答をブロックしない\n- コンテキストの一括トリミング戦略を最適化し、フラッシュ頻度を低減\n- 日次定期フラッシュのフォールバック機能を追加し、低アクティビティシナリオでのメモリ損失を防止\n- コンテキストメモリの損失問題を修正\n\n関連コミット：[022c13f](https://github.com/zhayujie/chatgpt-on-wechat/commit/022c13f), [c116235](https://github.com/zhayujie/chatgpt-on-wechat/commit/c116235)\n\n## 🔧 ツールリファクタリング\n\n- **画像認識**：画像認識（Image Vision）を Skill から内蔵 Tool にリファクタリングし、独立した画像ビジョンプロバイダー（Vision Provider）設定を追加。安定性と保守性を向上 ([a50fafa](https://github.com/zhayujie/chatgpt-on-wechat/commit/a50fafa), [3b8b562](https://github.com/zhayujie/chatgpt-on-wechat/commit/3b8b562))\n- **Webスクレイピング**：Webスクレイピング（Web Fetch）を Skill から内蔵 Tool にリファクタリング。リモートドキュメントファイル（PDF、Word、Excel、PPT）のダウンロードと解析をサポート ([ccb9030](https://github.com/zhayujie/chatgpt-on-wechat/commit/ccb9030), [fa61744](https://github.com/zhayujie/chatgpt-on-wechat/commit/fa61744))\n\n## 🐳 Docker デプロイメントの最適化\n\n- **設定テンプレートの整合**：`docker-compose.yml` の環境変数を `config-template.json` と整合し、モデル API Key と Agent 設定項目を完備\n- **Web コンソールポートマッピング**：`9899` ポートマッピングを追加。Docker デプロイ後にブラウザから Web コンソールにアクセス可能\n- **設定のホットリロード**：各モデル Bot の API Key と API Base をリアルタイム読み込みに変更。Web コンソールで設定変更後、再起動不要で即時反映\n- **ワークスペースの永続化**：`./cow` Volume マウントを追加。Agent ワークスペースデータ（メモリ、ペルソナ、スキルなど）をホストマシンに永続化し、コンテナの再構築やアップグレードでデータが失われない\n\n## ⚡ パフォーマンス最適化\n\n- **起動高速化**：飛書チャネルで依存関係の遅延読み込みを採用し、4-10秒の起動遅延を回避 ([924dc79](https://github.com/zhayujie/chatgpt-on-wechat/commit/924dc79))\n- **チャネルの安定性**：チャネル接続の安定性を最適化し、環境変数によるチャネル設定をサポート ([f1c04bc](https://github.com/zhayujie/chatgpt-on-wechat/commit/f1c04bc), [46d97fd](https://github.com/zhayujie/chatgpt-on-wechat/commit/46d97fd))\n\n## 🐛 バグ修正\n\n- **bot_type 設定**：Agent モードでの `bot_type` 設定の受け渡し問題を修正 
([#2691](https://github.com/zhayujie/chatgpt-on-wechat/pull/2691)) Thanks [@Weikjssss](https://github.com/Weikjssss)\n- **bot_type 優先順位**：Agent モードでの `bot_type` の解析優先順位を調整 ([#2692](https://github.com/zhayujie/chatgpt-on-wechat/pull/2692)) Thanks [@6vision](https://github.com/6vision)\n- **智譜モデル設定**：智譜の `bot_type` 命名、Web コンソールの永続化、正規表現エスケープの問題を修正 ([#2693](https://github.com/zhayujie/chatgpt-on-wechat/pull/2693)) Thanks [@6vision](https://github.com/6vision)\n- **OpenAI 互換レイヤー**：`openai_compat` レイヤーによる統一エラー処理 ([#2688](https://github.com/zhayujie/chatgpt-on-wechat/pull/2688)) Thanks [@JasonOA888](https://github.com/JasonOA888)\n- **OpenAI 互換移行**：全モデル Bot の `openai_compat` 移行を完了 ([#2689](https://github.com/zhayujie/chatgpt-on-wechat/pull/2689))\n- **Gemini ツール呼び出し**：Gemini モデルのツール呼び出しマッチング問題を修正 ([eda82ba](https://github.com/zhayujie/chatgpt-on-wechat/commit/eda82ba))\n- **セッション並行処理**：セッション並行シナリオでの競合条件の問題を修正 ([9879878](https://github.com/zhayujie/chatgpt-on-wechat/commit/9879878))\n- **履歴メッセージの復元**：履歴セッションメッセージの不完全な問題を修正。user/assistant のテキストメッセージのみを復元し、ツール呼び出しを除外 ([b788a3d](https://github.com/zhayujie/chatgpt-on-wechat/commit/b788a3d), [a33ce97](https://github.com/zhayujie/chatgpt-on-wechat/commit/a33ce97))\n- **飛書グループチャット**：飛書グループチャットシナリオでの `bot_name` 依存を削除 ([b641bff](https://github.com/zhayujie/chatgpt-on-wechat/commit/b641bff))\n- **Safari 互換性**：Safari ブラウザでの IME Enter キーによるメッセージ誤送信の問題を修正 ([0687916](https://github.com/zhayujie/chatgpt-on-wechat/commit/0687916))\n- **Windows 互換性**：Windows での bash スタイル `$VAR` 環境変数を `%VAR%` に変換する問題を修正 ([7c67513](https://github.com/zhayujie/chatgpt-on-wechat/commit/7c67513))\n- **MiniMax パラメータ**：MiniMax モデルの `max_tokens` 制限を追加 ([1767413](https://github.com/zhayujie/chatgpt-on-wechat/commit/1767413))\n- **.gitignore 更新**：Python ディレクトリの無視ルールを追加 ([#2683](https://github.com/zhayujie/chatgpt-on-wechat/pull/2683)) Thanks [@pelioo](https://github.com/pelioo)\n- **AGENT.md の能動的進化**：システムプロンプトでの AGENT.md 更新ガイダンスを最適化。受動的な「ユーザーが変更した時に更新」から、会話中の性格やスタイルの変化を能動的に検出して自動更新するように改善\n\n## 📦 アップグレード方法\n\nソースコードデプロイの場合は `./run.sh update` でワンクリックアップグレードできます。または手動でコードをプルして再起動してください。詳細は [アップデートドキュメント](https://docs.cowagent.ai/guide/upgrade) を参照。\n\n**リリース日**：2026.03.18 | [Full Changelog](https://github.com/zhayujie/chatgpt-on-wechat/compare/2.0.2...master)\n"
  },
  {
    "path": "docs/ja/skills/image-vision.mdx",
    "content": "---\ntitle: Image Vision\ndescription: OpenAI の Vision モデルを使用して画像を認識\n---\n\nOpenAI の GPT-4 Vision API を使用して画像の内容を分析し、画像内のオブジェクト、テキスト、色などの要素を理解します。\n\n## 依存関係\n\n| 依存関係 | 説明 |\n| --- | --- |\n| `OPENAI_API_KEY` | OpenAI API キー |\n| `curl`, `base64` | システムコマンド（通常プリインストール済み） |\n\n設定方法：\n\n- `env_config` Tool で `OPENAI_API_KEY` を設定\n- または `config.json` で `open_ai_api_key` を設定\n\n## 対応モデル\n\n- `gpt-4.1-mini`（推奨、コストパフォーマンスに優れる）\n- `gpt-4.1`\n\n## 使い方\n\n設定が完了したら、Agent に画像を送信すると自動的に画像認識がトリガーされます。\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202213219.png\" width=\"800\" />\n</Frame>\n"
  },
  {
    "path": "docs/ja/skills/index.mdx",
    "content": "---\ntitle: Skill 概要\ndescription: CowAgent の Skill システム紹介\n---\n\nSkill は Agent に無限の拡張性を提供します。各 Skill は説明ファイル（`SKILL.md`）、実行スクリプト（任意）、リソース（任意）で構成され、特定のタスクをどのように遂行するかを記述します。\n\nSkill と Tool の違い：Tool はコードで実装された原子的な操作（例：ファイルの読み書き、コマンドの実行）であるのに対し、Skill は説明ファイルに基づく高レベルなワークフローであり、複数の Tool を組み合わせて複雑なタスクを完遂できます。\n\n## 組み込み Skill\n\nプロジェクトの `skills/` ディレクトリに配置されており、依存条件に基づいて自動的に有効化されます：\n\n| Skill | 説明 | 依存関係 |\n| --- | --- | --- |\n| [`skill-creator`](/ja/skills/skill-creator) | 会話を通じてカスタム Skill を作成 | なし |\n| [`openai-image-vision`](/ja/skills/image-vision) | OpenAI の Vision モデルを使用して画像を認識 | `OPENAI_API_KEY` |\n| [`linkai-agent`](/ja/skills/linkai-agent) | LinkAI プラットフォームの Agent を統合 | `LINKAI_API_KEY` |\n| [`web-fetch`](/ja/skills/web-fetch) | Web ページのテキストコンテンツを取得 | `curl`（デフォルトで有効） |\n\n## カスタム Skill\n\nユーザーが会話を通じて作成し、ワークスペース（`~/cow/skills/`）に保存されます。任意の複雑なビジネスプロセスやサードパーティシステムとの連携を実装できます。\n\n## Skill の読み込み優先順位\n\n1. **ワークスペースの Skill**（最高優先）：`~/cow/skills/`\n2. **プロジェクト組み込み Skill**（最低優先）：`skills/`\n\n同名の Skill は優先順位に従って上書きされます。\n\n## Skill のファイル構成\n\n```\nskills/\n├── my-skill/\n│   ├── SKILL.md          # Skill の説明（frontmatter + 手順）\n│   ├── scripts/          # 実行スクリプト（任意）\n│   └── resources/        # 追加リソース（任意）\n```\n\n### SKILL.md のフォーマット\n\n```markdown\n---\nname: my-skill\ndescription: Brief description of the skill\nmetadata:\n  emoji: 🔧\n  requires:\n    bins: [\"curl\"]\n    env: [\"MY_API_KEY\"]\n  primaryEnv: \"MY_API_KEY\"\n---\n\n# My Skill\n\nDetailed instructions...\n```\n\n| フィールド | 説明 |\n| --- | --- |\n| `name` | Skill 名。ディレクトリ名と一致する必要があります |\n| `description` | Skill の説明。Agent はこれに基づいて呼び出すかどうかを判断します |\n| `metadata.requires.bins` | 必要なシステムコマンド |\n| `metadata.requires.env` | 必要な環境変数 |\n| `metadata.always` | 常に読み込む（デフォルトは false） |\n"
  },
  {
    "path": "docs/ja/skills/linkai-agent.mdx",
    "content": "---\ntitle: LinkAI Agent\ndescription: LinkAI プラットフォームのマルチ Agent Skill を統合\n---\n\n[LinkAI](https://link-ai.tech/) プラットフォームの Agent を Skill として使用し、マルチ Agent の意思決定を行います。Agent は Agent 名と説明に基づいてインテリジェントに選択し、`app_code` を通じて対応するアプリケーションやワークフローを呼び出します。\n\n## 依存関係\n\n| 依存関係 | 説明 |\n| --- | --- |\n| `LINKAI_API_KEY` | LinkAI プラットフォームの API キー。[コンソール](https://link-ai.tech/console/interface)で作成 |\n| `curl` | システムコマンド（通常プリインストール済み） |\n\n設定方法：\n\n- `env_config` Tool で `LINKAI_API_KEY` を設定\n- または `config.json` で `linkai_api_key` を設定\n\n## Agent の設定\n\n`skills/linkai-agent/config.json` で利用可能な Agent を追加します：\n\n```json\n{\n  \"apps\": [\n    {\n      \"app_code\": \"G7z6vKwp\",\n      \"app_name\": \"LinkAI Customer Support\",\n      \"app_description\": \"Select this assistant only when the user needs help with LinkAI platform questions\"\n    },\n    {\n      \"app_code\": \"SFY5x7JR\",\n      \"app_name\": \"Content Creator\",\n      \"app_description\": \"Use this assistant only when the user needs to create images or videos\"\n    }\n  ]\n}\n```\n\n## 使い方\n\n設定が完了すると、Agent はユーザーの質問に基づいて適切な LinkAI Agent を自動的に選択します。\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202234350.png\" width=\"750\" />\n</Frame>\n"
  },
  {
    "path": "docs/ja/skills/skill-creator.mdx",
    "content": "---\ntitle: Skill Creator\ndescription: 会話を通じてカスタム Skill を作成\n---\n\n自然言語の会話を通じて、Skill の作成、インストール、更新を素早く行えます。\n\n## 依存関係\n\n追加の依存関係は不要で、常に利用可能です。\n\n## 使い方\n\n- ワークフローを Skill 化：「このデプロイプロセスから Skill を作成して」\n- サードパーティ API の統合：「この API ドキュメントに基づいて Skill を作成して」\n- リモート Skill のインストール：「xxx Skill をインストールして」\n\n## 作成フロー\n\n1. 作成したい Skill を Agent に伝えます\n2. Agent が自動的に `SKILL.md` の説明と実行スクリプトを生成します\n3. Skill はワークスペースの `~/cow/skills/` ディレクトリに保存されます\n4. 以降の会話で Agent が自動的にその Skill を認識し使用します\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202202247.png\" width=\"800\" />\n</Frame>\n\n<Tip>\n  詳細は [Skill Creator のドキュメント](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/skills/skill-creator/SKILL.md)をご覧ください。\n</Tip>\n"
  },
  {
    "path": "docs/ja/skills/web-fetch.mdx",
    "content": "---\ntitle: Web Fetch\ndescription: Web ページのテキストコンテンツを取得\n---\n\ncurl を使用して Web ページを取得し、読み取り可能なテキストコンテンツを抽出します。ブラウザ自動化を必要としない軽量な Web アクセス方法です。\n\n## 依存関係\n\n| 依存関係 | 説明 |\n| --- | --- |\n| `curl` | システムコマンド（通常プリインストール済み） |\n\nこの Skill は `always: true` が設定されており、システムに `curl` コマンドがあればデフォルトで有効になります。\n\n## 使い方\n\nAgent が URL からコンテンツを取得する必要がある場合に自動的に呼び出されます。追加の設定は不要です。\n\n## browser Tool との比較\n\n| 機能 | web-fetch (Skill) | browser (Tool) |\n| --- | --- | --- |\n| 依存関係 | curl のみ | browser-use + playwright |\n| JS レンダリング | 非対応 | 対応 |\n| ページ操作 | 非対応 | クリック、入力などに対応 |\n| 最適な用途 | 静的ページのテキスト | 動的な Web ページ |\n\n<Tip>\n  ほとんどの Web コンテンツ取得シナリオでは、web-fetch で十分です。JS レンダリングやページ操作が必要な場合にのみ browser Tool を使用してください。\n</Tip>\n"
  },
  {
    "path": "docs/ja/tools/bash.mdx",
    "content": "---\ntitle: bash - ターミナル\ndescription: システムコマンドの実行\n---\n\n現在の作業ディレクトリでBashコマンドを実行し、stdoutとstderrを返します。`env_config` で設定されたAPIキーは自動的に環境変数に注入されます。\n\n## 依存関係\n\n追加の依存関係は不要で、デフォルトで利用可能です。\n\n## パラメータ\n\n| パラメータ | 型 | 必須 | 説明 |\n| --- | --- | --- | --- |\n| `command` | string | はい | 実行するコマンド |\n| `timeout` | integer | いいえ | タイムアウト（秒） |\n\n## ユースケース\n\n- パッケージや依存関係のインストール\n- コードやテストの実行\n- アプリケーションやサービスのデプロイ（Nginx設定、プロセス管理など）\n- システム管理とトラブルシューティング\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260203121008.png\" width=\"800\" />\n</Frame>\n"
  },
  {
    "path": "docs/ja/tools/browser.mdx",
    "content": "---\ntitle: browser - ブラウザ\ndescription: Webページへのアクセスと操作\n---\n\nブラウザを使用してWebページにアクセス・操作します。JavaScriptでレンダリングされる動的ページにも対応しています。\n\n## 依存関係\n\n| 依存関係 | インストールコマンド |\n| --- | --- |\n| `browser-use` ≥ 0.1.40 | `pip install browser-use` |\n| `markdownify` | `pip install markdownify` |\n| `playwright` + chromium | `pip install playwright && playwright install chromium` |\n\n## ユースケース\n\n- 特定のURLにアクセスしてページ内容を取得\n- Webページの要素を操作（クリック、入力など）\n- デプロイされたWebページの検証\n- JSレンダリングが必要な動的コンテンツのスクレイピング\n\n<Note>\n  ブラウザToolは依存関係が大きいため、不要な場合はインストールを省略できます。軽量なWebコンテンツ取得には、代わりに `web-fetch` Skillをご利用ください。\n</Note>\n"
  },
  {
    "path": "docs/ja/tools/edit.mdx",
    "content": "---\ntitle: edit - ファイル編集\ndescription: テキスト置換によるファイル編集\n---\n\nテキスト置換によるファイル編集を行います。`oldText` が空の場合、ファイル末尾に追記します。\n\n## 依存関係\n\n追加の依存関係は不要で、デフォルトで利用可能です。\n\n## パラメータ\n\n| パラメータ | 型 | 必須 | 説明 |\n| --- | --- | --- | --- |\n| `path` | string | はい | ファイルパス |\n| `oldText` | string | はい | 置換対象の元テキスト（空の場合は追記） |\n| `newText` | string | はい | 置換後のテキスト |\n\n## ユースケース\n\n- 設定ファイルの特定パラメータの変更\n- コードのバグ修正\n- ファイル内の特定位置へのコンテンツ挿入\n"
  },
  {
    "path": "docs/ja/tools/env-config.mdx",
    "content": "---\ntitle: env_config - 環境設定\ndescription: APIキーとシークレットの管理\n---\n\nワークスペースの `.env` ファイルで環境変数（APIキーやシークレット）を管理し、会話形式で安全に更新できます。セキュリティ保護とマスキング機能を内蔵しています。\n\n## 依存関係\n\n| 依存関係 | インストールコマンド |\n| --- | --- |\n| `python-dotenv` ≥ 1.0.0 | `pip install python-dotenv>=1.0.0` |\n\nオプション依存関係のインストールに含まれています：`pip3 install -r requirements-optional.txt`\n\n## パラメータ\n\n| パラメータ | 型 | 必須 | 説明 |\n| --- | --- | --- | --- |\n| `action` | string | はい | 操作タイプ：`get`、`set`、`list`、`delete` |\n| `key` | string | いいえ | 環境変数名 |\n| `value` | string | いいえ | 環境変数の値（`set` の場合のみ） |\n\n## 使い方\n\n設定したいキーをAgentに伝えると、自動的にこのToolが呼び出されます：\n\n- 「BOCHA_API_KEYを設定して」\n- 「OPENAI_API_KEYをsk-xxxに設定して」\n- 「設定済みの環境変数を表示して」\n\n設定されたキーは `bash` Toolの実行環境に自動的に注入されます。\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202234939.png\" width=\"800\" />\n</Frame>\n"
  },
  {
    "path": "docs/ja/tools/index.mdx",
    "content": "---\ntitle: Tools 概要\ndescription: CowAgent 組み込みToolシステム\n---\n\nToolは、AgentがOSリソースにアクセスするための中核機能です。Agentはタスクの要件に基づいてToolをインテリジェントに選択・呼び出し、ファイル操作、コマンド実行、Web検索、スケジュールタスクなどを実行します。Toolは `agent/tools/` ディレクトリに実装されています。\n\n## 組み込みTool\n\n以下のToolは追加設定なしでデフォルトで利用可能です：\n\n<CardGroup cols={2}>\n  <Card title=\"read - ファイル読み取り\" icon=\"file\" href=\"/ja/tools/read\">\n    ファイル内容を読み取り、テキスト・画像・PDFに対応\n  </Card>\n  <Card title=\"write - ファイル書き込み\" icon=\"pen\" href=\"/ja/tools/write\">\n    ファイルの作成または上書き\n  </Card>\n  <Card title=\"edit - ファイル編集\" icon=\"pen-to-square\" href=\"/ja/tools/edit\">\n    テキスト置換によるファイル編集\n  </Card>\n  <Card title=\"ls - ディレクトリ一覧\" icon=\"folder-open\" href=\"/ja/tools/ls\">\n    ディレクトリの内容を一覧表示\n  </Card>\n  <Card title=\"bash - ターミナル\" icon=\"terminal\" href=\"/ja/tools/bash\">\n    システムコマンドの実行\n  </Card>\n  <Card title=\"send - ファイル送信\" icon=\"paper-plane\" href=\"/ja/tools/send\">\n    ファイルや画像をユーザーに送信\n  </Card>\n  <Card title=\"memory - メモリ\" icon=\"brain\" href=\"/ja/tools/memory\">\n    長期メモリの検索と読み取り\n  </Card>\n</CardGroup>\n\n## オプションTool\n\n以下のToolは追加の依存関係またはAPIキーの設定が必要です：\n\n<CardGroup cols={2}>\n  <Card title=\"env_config - 環境設定\" icon=\"key\" href=\"/ja/tools/env-config\">\n    APIキーとシークレットの管理\n  </Card>\n  <Card title=\"scheduler - スケジューラ\" icon=\"clock\" href=\"/ja/tools/scheduler\">\n    スケジュールタスクの作成と管理\n  </Card>\n  <Card title=\"web_search - Web検索\" icon=\"magnifying-glass\" href=\"/ja/tools/web-search\">\n    インターネットからリアルタイム情報を検索\n  </Card>\n</CardGroup>\n"
  },
  {
    "path": "docs/ja/tools/ls.mdx",
    "content": "---\ntitle: ls - ディレクトリ一覧\ndescription: ディレクトリの内容を一覧表示\n---\n\nディレクトリの内容をアルファベット順にソートして一覧表示します。ディレクトリには `/` が付与され、隠しファイルも含まれます。\n\n## 依存関係\n\n追加の依存関係は不要で、デフォルトで利用可能です。\n\n## パラメータ\n\n| パラメータ | 型 | 必須 | 説明 |\n| --- | --- | --- | --- |\n| `path` | string | はい | ディレクトリパス。相対パスはワークスペースディレクトリを基準とします |\n| `limit` | integer | いいえ | 返すエントリの最大数、デフォルト500 |\n\n## ユースケース\n\n- プロジェクト構造の閲覧\n- 特定ファイルの検索\n- ディレクトリの存在確認\n"
  },
  {
    "path": "docs/ja/tools/memory.mdx",
    "content": "---\ntitle: memory - メモリ\ndescription: 長期メモリの検索と読み取り\n---\n\nメモリToolには `memory_search`（メモリ検索）と `memory_get`（メモリファイル読み取り）の2つのサブToolがあります。\n\n## 依存関係\n\n追加の依存関係は不要で、デフォルトで利用可能です。Agent Coreのメモリシステムによって管理されます。\n\n## memory_search\n\nキーワードとベクトルのハイブリッド検索で過去のメモリを検索します。\n\n| パラメータ | 型 | 必須 | 説明 |\n| --- | --- | --- | --- |\n| `query` | string | はい | 検索クエリ |\n\n## memory_get\n\n特定のメモリファイルの内容を読み取ります。\n\n| パラメータ | 型 | 必須 | 説明 |\n| --- | --- | --- | --- |\n| `path` | string | はい | メモリファイルの相対パス（例：`MEMORY.md`、`memory/2026-01-01.md`） |\n| `start_line` | integer | いいえ | 開始行番号 |\n| `end_line` | integer | いいえ | 終了行番号 |\n\n## 仕組み\n\nAgentは以下のシナリオでメモリToolを自動的に呼び出します：\n\n- ユーザーが重要な情報を共有した場合 → メモリに保存\n- 過去のコンテキストが必要な場合 → 関連するメモリを検索\n- 会話が一定の長さに達した場合 → 要約を抽出して保存\n"
  },
  {
    "path": "docs/ja/tools/read.mdx",
    "content": "---\ntitle: read - ファイル読み取り\ndescription: ファイル内容の読み取り\n---\n\nファイルの内容を読み取ります。テキストファイル、PDFファイル、画像（メタデータを返す）などに対応しています。\n\n## 依存関係\n\n追加の依存関係は不要で、デフォルトで利用可能です。\n\n## パラメータ\n\n| パラメータ | 型 | 必須 | 説明 |\n| --- | --- | --- | --- |\n| `path` | string | はい | ファイルパス。相対パスはワークスペースディレクトリを基準とします |\n| `offset` | integer | いいえ | 開始行番号（1始まり）。負の値は末尾からの読み取り |\n| `limit` | integer | いいえ | 読み取る行数 |\n\n## ユースケース\n\n- 設定ファイルやログファイルの閲覧\n- コードファイルの読み取りと分析\n- 画像・動画ファイルの情報確認\n"
  },
  {
    "path": "docs/ja/tools/scheduler.mdx",
    "content": "---\ntitle: scheduler - スケジューラ\ndescription: スケジュールタスクの作成と管理\n---\n\n柔軟なスケジュール設定と実行モードを備えた、動的スケジュールタスクの作成と管理を行います。\n\n## 依存関係\n\n| 依存関係 | インストールコマンド |\n| --- | --- |\n| `croniter` ≥ 2.0.0 | `pip install croniter>=2.0.0` |\n\nコア依存関係に含まれています：`pip3 install -r requirements.txt`\n\n## スケジュールモード\n\n| モード | 説明 |\n| --- | --- |\n| ワンタイム | 指定した時刻に1回だけ実行 |\n| 固定間隔 | 一定の時間間隔で繰り返し実行 |\n| Cron式 | Cron構文を使用した複雑なスケジュール定義 |\n\n## 実行モード\n\n- **固定メッセージ**: トリガー時にプリセットメッセージを送信\n- **Agent動的タスク**: トリガー時にAgentがインテリジェントにタスクを実行\n\n## 使い方\n\n自然言語でスケジュールタスクを作成・管理できます：\n\n- 「毎朝9時に天気予報を送って」\n- 「2時間ごとにサーバーのステータスを確認して」\n- 「明日の午後3時に会議のリマインドをして」\n- 「すべてのスケジュールタスクを表示して」\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202195402.png\" width=\"800\" />\n</Frame>\n"
  },
  {
    "path": "docs/ja/tools/send.mdx",
    "content": "---\ntitle: send - ファイル送信\ndescription: ユーザーへのファイル送信\n---\n\nユーザーにファイル（画像、動画、音声、ドキュメントなど）を送信します。ユーザーが明示的にファイルの送信・共有を要求した場合に使用されます。\n\n## 依存関係\n\n追加の依存関係は不要で、デフォルトで利用可能です。\n\n## パラメータ\n\n| パラメータ | 型 | 必須 | 説明 |\n| --- | --- | --- | --- |\n| `path` | string | はい | ファイルパス。絶対パスまたはワークスペースからの相対パス |\n| `message` | string | いいえ | 添付メッセージ |\n\n## ユースケース\n\n- 生成したコードやドキュメントをユーザーに送信\n- スクリーンショットやチャートの送信\n- ダウンロードしたファイルの共有\n"
  },
  {
    "path": "docs/ja/tools/web-search.mdx",
    "content": "---\ntitle: web_search - Web検索\ndescription: インターネットからリアルタイム情報を検索\n---\n\nインターネットからリアルタイムの情報、ニュース、リサーチなどを検索します。2つの検索バックエンドに対応し、自動フォールバック機能を備えています。\n\n## 依存関係\n\n少なくとも1つの検索APIキーが必要です（`env_config` Toolまたはワークスペースの `.env` ファイルで設定）：\n\n| バックエンド | 環境変数 | 優先度 | 取得方法 |\n| --- | --- | --- | --- |\n| Bocha Search | `BOCHA_API_KEY` | プライマリ | [Bocha Open Platform](https://open.bochaai.com/) |\n| LinkAI Search | `LINKAI_API_KEY` | フォールバック | [LinkAI Console](https://link-ai.tech/console/interface) |\n\n## パラメータ\n\n| パラメータ | 型 | 必須 | 説明 |\n| --- | --- | --- | --- |\n| `query` | string | はい | 検索キーワード |\n| `count` | integer | いいえ | 結果件数（1-50、デフォルト10） |\n| `freshness` | string | いいえ | 期間指定：`noLimit`、`oneDay`、`oneWeek`、`oneMonth`、`oneYear`、または `2025-01-01..2025-02-01` のような日付範囲 |\n| `summary` | boolean | いいえ | ページ要約を返す（デフォルトfalse） |\n\n## ユースケース\n\nユーザーが最新情報について質問したり、事実確認やリアルタイムデータが必要な場合、AgentはこのToolを自動的に呼び出します。\n\n<Note>\n  検索APIキーが設定されていない場合、このToolは読み込まれません。\n</Note>\n"
  },
  {
    "path": "docs/ja/tools/write.mdx",
    "content": "---\ntitle: write - ファイル書き込み\ndescription: ファイルの作成または上書き\n---\n\nファイルにコンテンツを書き込みます。ファイルが存在しない場合は新規作成し、存在する場合は上書きします。親ディレクトリは自動的に作成されます。\n\n## 依存関係\n\n追加の依存関係は不要で、デフォルトで利用可能です。\n\n## パラメータ\n\n| パラメータ | 型 | 必須 | 説明 |\n| --- | --- | --- | --- |\n| `path` | string | はい | ファイルパス |\n| `content` | string | はい | 書き込む内容 |\n\n## ユースケース\n\n- 新しいコードファイルやスクリプトの作成\n- 設定ファイルの生成\n- 処理結果の保存\n\n<Note>\n  1回の書き込みは10KBを超えないようにしてください。大きなファイルの場合は、まずスケルトンを作成し、editツールを使用してチャンクごとにコンテンツを追加してください。\n</Note>\n"
  },
  {
    "path": "docs/memory.mdx",
    "content": "---\ntitle: 长期记忆\ndescription: CowAgent 的长期记忆系统\n---\n\n记忆系统让 Agent 能够长期记住重要信息，在对话中不断积累经验、理解用户偏好，真正实现自主思考和持续成长。\n\n## 记忆类型\n\n### 核心记忆（MEMORY.md）\n\n存储在 `~/cow/MEMORY.md` 中，包含用户的长期偏好、重要决策、关键事实等不会随时间淡化的信息。每次对话时自动注入系统提示词，作为 Agent 的背景知识。\n\n### 天级记忆（memory/YYYY-MM-DD.md）\n\n存储在 `~/cow/memory/` 目录下，按日期命名（如 `2026-03-08.md`），记录每天的对话摘要和关键事件。仅在首次写入时创建，避免生成空文件。\n\n## 记忆写入\n\nAgent 通过以下机制自动将对话内容持久化为天级记忆：\n\n- **上下文裁剪时** — 当对话轮次或 token 超出配置上限时，批量裁剪最早一半的上下文，并使用 LLM 将被裁剪的内容总结为关键信息写入当天记忆文件\n- **每日定时总结** — 每天 23:55 自动触发一次全量总结，防止低活跃日无记忆留存（内容无变化时自动跳过）\n- **API 上下文溢出时** — 当模型 API 返回上下文溢出错误时，紧急保存当前对话摘要\n\n所有记忆写入均在后台异步执行（LLM 总结 + 文件写入），不阻塞正常对话回复。\n\n## 首次启动\n\n首次启动 Agent 时，Agent 会主动向用户询问关键信息，并记录至工作空间（默认 `~/cow`）中：\n\n| 文件 | 说明 |\n| --- | --- |\n| `system.md` | Agent 的系统提示词和行为设定 |\n| `user.md` | 用户身份信息和偏好 |\n| `MEMORY.md` | 核心记忆（长期） |\n| `memory/YYYY-MM-DD.md` | 天级记忆（按需创建） |\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260203000455.png\" width=\"800\" />\n</Frame>\n\n## 记忆检索\n\n记忆系统支持混合检索模式：\n\n- **关键词检索** — 基于关键词匹配历史记忆\n- **向量检索** — 基于语义相似度搜索，即使表述不同也能找到相关记忆\n\nAgent 会在对话中根据需要自动触发记忆检索，将相关历史信息纳入上下文。核心记忆（`MEMORY.md`）始终注入系统提示词，天级记忆通过检索按需加载。\n\n## 相关配置\n\n```json\n{\n  \"agent_workspace\": \"~/cow\",\n  \"agent_max_context_tokens\": 40000,\n  \"agent_max_context_turns\": 20\n}\n```\n\n| 参数 | 说明 | 默认值 |\n| --- | --- | --- |\n| `agent_workspace` | 工作空间路径，记忆文件存储在此目录下 | `~/cow` |\n| `agent_max_context_tokens` | 最大上下文 token 数，超出时裁剪一半并总结写入记忆 | `40000` |\n| `agent_max_context_turns` | 最大上下文轮次，超出时裁剪一半并总结写入记忆 | `20` |\n"
  },
  {
    "path": "docs/models/claude.mdx",
    "content": "---\ntitle: Claude\ndescription: Claude 模型配置\n---\n\n```json\n{\n  \"model\": \"claude-sonnet-4-6\",\n  \"claude_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| 参数 | 说明 |\n| --- | --- |\n| `model` | 支持 `claude-sonnet-4-6`、`claude-opus-4-6`、`claude-sonnet-4-5`、`claude-sonnet-4-0`、`claude-3-5-sonnet-latest` 等，参考 [官方模型](https://docs.anthropic.com/en/docs/about-claude/models/overview) |\n| `claude_api_key` | 在 [Claude 控制台](https://console.anthropic.com/settings/keys) 创建 |\n| `claude_api_base` | 可选，默认为 `https://api.anthropic.com/v1`，修改可接入第三方代理 |\n"
  },
  {
    "path": "docs/models/coding-plan.mdx",
    "content": "---\ntitle: Coding Plan\ndescription: Coding Plan 模式模型配置\n---\n\n> Coding Plan 是各厂商推出的编程包月套餐，适合高频使用 Agent 的场景。CowAgent 支持通过 OpenAI 兼容方式接入各厂商的 Coding Plan 接口。\n\n<Note>\n  Coding Plan 的 API Base 和 API Key 通常与普通按量计费接口不通用，请在各厂商平台单独获取。\n</Note>\n\n## 通用配置格式\n\n所有厂商均可使用 OpenAI 兼容协议接入，可在web控制台快速配置。设置模型厂商为**OpenAI**，选择自定义模型并填入模型编码，最后填写对应厂商的API Base 和 API Key：\n\n<img src=\"https://cdn.link-ai.tech/doc/20260318113134.png\" width=\"800\"/>\n\n也可通过 `config.json` 配置文件直接修改：\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"模型名称\",\n  \"open_ai_api_base\": \"厂商 Coding Plan API Base\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| 参数 | 说明 |\n| --- | --- |\n| `bot_type` | 固定为 `openai`（OpenAI 兼容方式） |\n| `model` | 各厂商支持的模型名称 |\n| `open_ai_api_base` | 各厂商 Coding Plan 专用 API Base |\n| `open_ai_api_key` | 各厂商 Coding Plan 专用 API Key |\n\n---\n\n## 阿里云\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"qwen3.5-plus\",\n  \"open_ai_api_base\": \"https://coding.dashscope.aliyuncs.com/v1\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| 参数 | 说明 |\n| --- | --- |\n| `model` | `qwen3.5-plus`、`qwen3-max-2026-01-23`、`qwen3-coder-next`、`qwen3-coder-plus`、`glm-5`、`glm-4.7`、`kimi-k2.5`、`MiniMax-M2.5` |\n| `open_ai_api_base` | `https://coding.dashscope.aliyuncs.com/v1` |\n| `open_ai_api_key` | Coding Plan 专用 Key（与按量计费接口不通用） |\n\n官方文档：[快速开始](https://help.aliyun.com/zh/model-studio/coding-plan-quickstart?spm=a2c4g.11186623.help-menu-2400256.d_0_2_1.70115203zi5Igc)、[模型列表](https://help.aliyun.com/zh/model-studio/coding-plan)\n\n---\n\n## MiniMax\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"MiniMax-M2.5\",\n  \"open_ai_api_base\": \"https://api.minimaxi.com/v1\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| 参数 | 说明 |\n| --- | --- |\n| `model` | `MiniMax-M2.5`、`MiniMax-M2.5-highspeed`、`MiniMax-M2.1`、`MiniMax-M2` |\n| `open_ai_api_base` | 国内：`https://api.minimaxi.com/v1`；海外：`https://api.minimax.io/v1` |\n| `open_ai_api_key` | Coding Plan 专用 Key（与按量计费接口不通用） |\n\n官方文档：[国内 Key 获取](https://platform.minimaxi.com/docs/coding-plan/quickstart)、[模型列表](https://platform.minimaxi.com/docs/guides/pricing-coding-plan)、[国际 Key 获取](https://platform.minimax.io/docs/coding-plan/quickstart)\n\n---\n\n\n## 智谱 GLM\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"glm-4.7\",\n  \"open_ai_api_base\": \"https://open.bigmodel.cn/api/coding/paas/v4\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| 参数 | 说明 |\n| --- | --- |\n| `model` | `glm-5`、`glm-4.7`、`glm-4.6`、`glm-4.5`、`glm-4.5-air` |\n| `open_ai_api_base` | 中国区：`https://open.bigmodel.cn/api/coding/paas/v4`；全球区：`https://api.z.ai/api/coding/paas/v4` |\n| `open_ai_api_key` | API Key 与普通接口通用 |\n\n官方文档：[国内版快速开始](https://docs.bigmodel.cn/cn/coding-plan/quick-start)、[国际版快速开始](https://docs.z.ai/devpack/quick-start)\n\n---\n\n## Kimi\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"kimi-for-coding\",\n  \"open_ai_api_base\": \"https://api.kimi.com/coding/v1\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| 参数 | 说明 |\n| --- | --- |\n| `model` | `kimi-for-coding` |\n| `open_ai_api_base` | `https://api.kimi.com/coding/v1` |\n| `open_ai_api_key` | Coding Plan 专用 Key（与按量计费接口不通用） |\n\n官方文档：[Key 获取](https://www.kimi.com/code/docs/)\n\n---\n\n## 火山引擎\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"Doubao-Seed-2.0-Code\",\n  \"open_ai_api_base\": \"https://ark.cn-beijing.volces.com/api/coding/v3\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| 参数 | 说明 |\n| --- | --- |\n| `model` | 
`Doubao-Seed-2.0-Code`、`Doubao-Seed-2.0-pro`、`Doubao-Seed-2.0-lite`、`Doubao-Seed-Code`、`MiniMax-M2.5`、`Kimi-K2.5`、`GLM-4.7`、`DeepSeek-V3.2` |\n| `open_ai_api_base` | `https://ark.cn-beijing.volces.com/api/coding/v3` |\n| `open_ai_api_key` | API Key 与普通接口通用 |\n\n官方文档：[快速开始](https://www.volcengine.com/docs/82379/1928261?lang=zh)\n"
  },
  {
    "path": "docs/models/deepseek.mdx",
    "content": "---\ntitle: DeepSeek\ndescription: DeepSeek 模型配置\n---\n\n通过 OpenAI 兼容方式接入：\n\n```json\n{\n  \"model\": \"deepseek-chat\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\",\n  \"open_ai_api_base\": \"https://api.deepseek.com/v1\",\n  \"bot_type\": \"openai\"\n}\n```\n\n| 参数 | 说明 |\n| --- | --- |\n| `model` | `deepseek-chat`（DeepSeek-V3）、`deepseek-reasoner`（DeepSeek-R1） |\n| `bot_type` | 固定为 `openai`（OpenAI 兼容方式） |\n| `open_ai_api_key` | 在 [DeepSeek 平台](https://platform.deepseek.com/api_keys) 创建 |\n| `open_ai_api_base` | DeepSeek 平台 BASE URL |\n"
  },
  {
    "path": "docs/models/doubao.mdx",
    "content": "---\ntitle: 豆包 Doubao\ndescription: 豆包 (火山方舟) 模型配置\n---\n\n```json\n{\n  \"model\": \"doubao-seed-2-0-code-preview-260215\",\n  \"ark_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| 参数 | 说明 |\n| --- | --- |\n| `model` | 可填 `doubao-seed-2-0-code-preview-260215`、`doubao-seed-2-0-pro-260215`、`doubao-seed-2-0-lite-260215` 等 |\n| `ark_api_key` | 在 [火山方舟控制台](https://console.volcengine.com/ark/region:ark+cn-beijing/apikey) 创建 |\n| `ark_base_url` | 可选，默认为 `https://ark.cn-beijing.volces.com/api/v3` |\n"
  },
  {
    "path": "docs/models/gemini.mdx",
    "content": "---\ntitle: Gemini\ndescription: Google Gemini 模型配置\n---\n\n```json\n{\n  \"model\": \"gemini-3.1-pro-preview\",\n  \"gemini_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| 参数 | 说明 |\n| --- | --- |\n| `model` | 支持 `gemini-3.1-flash-lite-preview`、`gemini-3.1-pro-preview`、`gemini-3-flash-preview`、`gemini-3-pro-preview` 等，参考 [官方文档](https://ai.google.dev/gemini-api/docs/models) |\n| `gemini_api_key` | 在 [Google AI Studio](https://aistudio.google.com/app/apikey) 创建 |\n"
  },
  {
    "path": "docs/models/glm.mdx",
    "content": "---\ntitle: 智谱 GLM\ndescription: 智谱AI GLM 模型配置\n---\n\n```json\n{\n  \"model\": \"glm-5-turbo\",\n  \"zhipu_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| 参数 | 说明 |\n| --- | --- |\n| `model` | 可填 `glm-5-turbo`、`glm-5`、`glm-4.7`、`glm-4-plus`、`glm-4-flash`、`glm-4-air` 等，参考 [模型编码](https://bigmodel.cn/dev/api/normal-model/glm-4) |\n| `zhipu_ai_api_key` | 在 [智谱AI 控制台](https://www.bigmodel.cn/usercenter/proj-mgmt/apikeys) 创建 |\n\n也支持 OpenAI 兼容方式接入：\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"glm-5-turbo\",\n  \"open_ai_api_base\": \"https://open.bigmodel.cn/api/paas/v4\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n"
  },
  {
    "path": "docs/models/index.mdx",
    "content": "---\ntitle: 模型概览\ndescription: CowAgent 支持的模型及推荐选择\n---\n\nCowAgent 支持国内外主流厂商的大语言模型，模型接口实现在项目的 `models/` 目录下。\n\n<Note>\n  Agent 模式下推荐使用以下模型，可根据效果及成本综合选择：MiniMax-M2.7、glm-5-turbo、kimi-k2.5、qwen3.5-plus、claude-sonnet-4-6、gemini-3.1-pro-preview\n</Note>\n\n## 配置方式\n\n根据所选模型，在 `config.json` 中填写对应的模型名称和 API Key 即可。每个模型也支持 OpenAI 兼容方式接入，将 `bot_type` 设为 `openai`，配置 `open_ai_api_base` 和 `open_ai_api_key`。\n\n同时支持使用 [LinkAI](https://link-ai.tech) 平台接口，可灵活切换多种模型，并支持知识库、工作流、插件等 Agent 能力。\n\n也可以通过 [Web 控制台](/channels/web) 在线管理模型配置，无需手动编辑配置文件：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173811.png\" />\n\n## 支持的模型\n\n<CardGroup cols={2}>\n  <Card title=\"MiniMax\" href=\"/models/minimax\">\n    MiniMax-M2.7 等系列模型\n  </Card>\n  <Card title=\"智谱 GLM\" href=\"/models/glm\">\n    glm-5-turbo、glm-5 等系列模型\n  </Card>\n  <Card title=\"通义千问 Qwen\" href=\"/models/qwen\">\n    qwen3.5-plus、qwen3-max 等\n  </Card>\n  <Card title=\"Kimi\" href=\"/models/kimi\">\n    kimi-k2.5、kimi-k2 等\n  </Card>\n  <Card title=\"豆包 Doubao\" href=\"/models/doubao\">\n    doubao-seed 系列模型\n  </Card>\n  <Card title=\"Claude\" href=\"/models/claude\">\n    claude-sonnet-4-6 等\n  </Card>\n  <Card title=\"Gemini\" href=\"/models/gemini\">\n    gemini-3.1-pro-preview 等\n  </Card>\n  <Card title=\"OpenAI\" href=\"/models/openai\">\n    gpt-5.4、gpt-4.1、o 系列等\n  </Card>\n  <Card title=\"DeepSeek\" href=\"/models/deepseek\">\n    deepseek-chat、deepseek-reasoner\n  </Card>\n  <Card title=\"LinkAI\" href=\"/models/linkai\">\n    多模型统一接口 + 知识库\n  </Card>\n</CardGroup>\n\n<Tip>\n  全部模型名称可参考项目 [`common/const.py`](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/common/const.py) 文件。\n</Tip>\n"
  },
  {
    "path": "docs/models/kimi.mdx",
    "content": "---\ntitle: Kimi\ndescription: Kimi (Moonshot) 模型配置\n---\n\n```json\n{\n  \"model\": \"kimi-k2.5\",\n  \"moonshot_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| 参数 | 说明 |\n| --- | --- |\n| `model` | 可填 `kimi-k2.5`、`kimi-k2`、`moonshot-v1-8k`、`moonshot-v1-32k`、`moonshot-v1-128k` |\n| `moonshot_api_key` | 在 [Moonshot 控制台](https://platform.moonshot.cn/console/api-keys) 创建 |\n\n也支持 OpenAI 兼容方式接入：\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"kimi-k2.5\",\n  \"open_ai_api_base\": \"https://api.moonshot.cn/v1\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n"
  },
  {
    "path": "docs/models/linkai.mdx",
    "content": "---\ntitle: LinkAI\ndescription: 通过 LinkAI 平台统一接入多种模型\n---\n\n通过 [LinkAI](https://link-ai.tech) 平台可灵活切换 OpenAI、Claude、Gemini、DeepSeek、Qwen、Kimi 等多种模型，并支持知识库、工作流、插件等 Agent 能力。\n\n```json\n{\n  \"use_linkai\": true,\n  \"linkai_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| 参数 | 说明 |\n| --- | --- |\n| `use_linkai` | 设为 `true` 启用 LinkAI 接口 |\n| `linkai_api_key` | 在 [控制台](https://link-ai.tech/console/interface) 创建 |\n| `model` | 留空则使用智能体默认模型，可在平台中灵活切换，[模型列表](https://link-ai.tech/console/models) 中的全部模型均可使用 |\n\n参考 [接口文档](https://docs.link-ai.tech/platform/api) 了解更多。\n"
  },
  {
    "path": "docs/models/minimax.mdx",
    "content": "---\ntitle: MiniMax\ndescription: MiniMax 模型配置\n---\n\n```json\n{\n  \"model\": \"MiniMax-M2.7\",\n  \"minimax_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| 参数 | 说明 |\n| --- | --- |\n| `model` | 可填 `MiniMax-M2.7`、`MiniMax-M2.5`、`MiniMax-M2.1`、`MiniMax-M2.1-lightning`、`MiniMax-M2` 等 |\n| `minimax_api_key` | 在 [MiniMax 控制台](https://platform.minimaxi.com/user-center/basic-information/interface-key) 创建 |\n\n也支持 OpenAI 兼容方式接入：\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"MiniMax-M2.7\",\n  \"open_ai_api_base\": \"https://api.minimaxi.com/v1\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n"
  },
  {
    "path": "docs/models/openai.mdx",
    "content": "---\ntitle: OpenAI\ndescription: OpenAI 模型配置\n---\n\n```json\n{\n  \"model\": \"gpt-5.4\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\",\n  \"open_ai_api_base\": \"https://api.openai.com/v1\"\n}\n```\n\n| 参数 | 说明 |\n| --- | --- |\n| `model` | 与 OpenAI 接口的 [model 参数](https://platform.openai.com/docs/models) 一致，支持 o 系列、gpt-5.4、gpt-5.4-mini、gpt-5.4-nano、gpt-5 系列、gpt-4.1 等，Agent 模式推荐使用 `gpt-5.4` |\n| `open_ai_api_key` | 在 [OpenAI 平台](https://platform.openai.com/api-keys) 创建 |\n| `open_ai_api_base` | 可选，修改可接入第三方代理接口 |\n| `bot_type` | 使用 OpenAI 官方模型时无需填写。当通过代理接口使用 Claude 等非 OpenAI 模型时，设为 `openai` |\n"
  },
  {
    "path": "docs/models/qwen.mdx",
    "content": "---\ntitle: 通义千问 Qwen\ndescription: 通义千问模型配置\n---\n\n```json\n{\n  \"model\": \"qwen3.5-plus\",\n  \"dashscope_api_key\": \"YOUR_API_KEY\"\n}\n```\n\n| 参数 | 说明 |\n| --- | --- |\n| `model` | 可填 `qwen3.5-plus`、`qwen3-max`、`qwen-max`、`qwen-plus`、`qwen-turbo`、`qwq-plus` 等 |\n| `dashscope_api_key` | 在 [百炼控制台](https://bailian.console.aliyun.com/?tab=model#/api-key) 创建，参考 [官方文档](https://bailian.console.aliyun.com/?tab=api#/api) |\n\n也支持 OpenAI 兼容方式接入：\n\n```json\n{\n  \"bot_type\": \"openai\",\n  \"model\": \"qwen3.5-plus\",\n  \"open_ai_api_base\": \"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n  \"open_ai_api_key\": \"YOUR_API_KEY\"\n}\n```\n"
  },
  {
    "path": "docs/releases/overview.mdx",
    "content": "---\ntitle: 更新日志\ndescription: CowAgent 版本更新历史\n---\n\n| 版本 | 日期 | 说明 |\n| --- | --- | --- |\n| [2.0.3](/releases/v2.0.3) | 2026.03.18 | 新增企微智能机器人和 QQ 通道、支持Coding Plan、新增多个模型、Web端文件处理、记忆系统升级 |\n| [2.0.2](/releases/v2.0.2) | 2026.02.27 | Web 控制台升级、多通道同时运行、会话持久化 |\n| [2.0.1](/releases/v2.0.1) | 2026.02.13 | 内置 Web Search 工具、智能上下文管理、多项修复 |\n| [2.0.0](/releases/v2.0.0) | 2026.02.03 | 全面升级为超级 Agent 助理 |\n| 1.7.6 | 2025.05.23 | Web Channel 优化、AgentMesh 多智能体插件 |\n| 1.7.5 | 2025.04.11 | DeepSeek 模型 |\n| 1.7.4 | 2024.12.13 | Gemini 2.0 模型、Web Channel |\n| 1.7.3 | 2024.10.31 | 稳定性提升、数据库功能 |\n| 1.7.2 | 2024.09.26 | 一键安装脚本、o1 模型 |\n| 1.7.0 | 2024.08.02 | 讯飞 4.0 模型、知识库引用 |\n| 1.6.9 | 2024.07.19 | gpt-4o-mini、阿里语音识别 |\n| 1.6.8 | 2024.07.05 | Claude 3.5、Gemini 1.5 Pro |\n| 1.6.0 | 2024.04.26 | Kimi 接入、gpt-4-turbo 升级 |\n| 1.5.8 | 2024.03.26 | GLM-4、Claude-3、edge-tts |\n| 1.5.2 | 2023.11.10 | 飞书通道、图像识别对话 |\n| 1.5.0 | 2023.11.10 | gpt-4-turbo、dall-e-3、tts 多模态 |\n| 1.0.0 | 2022.12.12 | 项目创建，首次接入 ChatGPT 模型 |\n\n更多历史版本请查看 [GitHub Releases](https://github.com/zhayujie/chatgpt-on-wechat/releases)。\n"
  },
  {
    "path": "docs/releases/v2.0.0.mdx",
    "content": "---\ntitle: v2.0.0\ndescription: CowAgent 2.0 - 从聊天机器人到超级智能助理的全面升级\n---\n\nCowAgent 2.0 实现了从聊天机器人到**超级智能助理**的全面升级！现在它能够主动思考和规划任务、拥有长期记忆、操作计算机和外部资源、创造和执行技能，真正理解你并和你一起成长。\n\n**发布日期**：2026.02.03 | [GitHub Release](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/2.0.0)\n\n## 重点更新\n\n### Agent 核心能力\n\n- **复杂任务规划**：能够理解复杂任务并自主规划执行，持续思考和调用工具直到完成目标，支持多轮推理和上下文理解\n- **长期记忆**：自动将对话记忆持久化至本地文件和数据库中，包括全局记忆和天级记忆，支持关键词及向量检索\n- **内置系统工具**：内置实现 10+ 种工具，包括文件操作、Bash 终端、浏览器、文件发送、定时任务、记忆管理等\n- **Skills**：新增 Skill 运行引擎，内置多种技能，并支持通过自然语言对话完成自定义 Skills 开发\n- **安全和成本**：通过秘钥管理工具、提示词控制、系统权限等手段控制 Agent 的访问安全；通过最大记忆轮次、最大上下文 token、工具执行步数对 token 成本进行限制\n\n### 其他更新\n\n- **渠道优化**：飞书及钉钉接入渠道支持长连接接入（无需公网 IP）、支持图片/文件消息的接收和发送\n- **模型更新**：新增 claude-sonnet-4-5、gemini-3-pro-preview、glm-4.7、MiniMax-M2.1、qwen3-max 等最新模型\n- **部署优化**：增加一键安装、配置、运行、管理的脚本，简化部署流程\n\n## 长期记忆系统\n\nAgent 会在用户分享重要信息时主动存储，也会在对话达到一定长度时自动提取摘要。支持语义搜索和向量检索的混合检索模式。\n\n**首次启动**时，Agent 会主动询问关键信息，并记录至工作空间（默认 `~/cow`）中的智能体设定、用户身份、记忆文件中。\n\n**长期对话**中，Agent 会智能记录或检索记忆，不断更新自身设定、用户偏好，总结经验和教训，真正实现自主思考和持续成长。\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260203000455.png\" width=\"800\" />\n</Frame>\n\n## 任务规划与工具调用\n\nAgent 根据任务需求智能选择和调用工具，完成各类复杂操作。\n\n### 终端和文件访问\n\n最基础和核心的工具能力，用户可通过手机端与 Agent 交互，操作个人电脑或服务器上的资源：\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202181130.png\" width=\"800\" />\n</Frame>\n\n### 应用编程能力\n\n基于编程能力和系统访问能力，Agent 可实现从信息搜索、素材生成、编码、测试、部署、Nginx 配置、发布的 **Vibecoding 全流程**，通过手机端一句命令完成应用快速 demo。\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260203121008.png\" width=\"800\" />\n</Frame>\n\n### 定时任务\n\n支持 **一次性任务、固定时间间隔、Cron 表达式** 三种形式，任务触发可选择 **固定消息发送** 或 **Agent 动态任务执行** 两种模式：\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202195402.png\" width=\"800\" />\n</Frame>\n\n### 环境变量管理\n\n通过 `env_config` 工具管理技能所需秘钥，支持对话式更新，内置安全保护和脱敏策略：\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202234939.png\" width=\"800\" />\n</Frame>\n\n## 技能系统\n\n每个 Skill 由说明文件、运行脚本（可选）、资源（可选）组成，为 Agent 提供无限扩展性。\n\n### 技能创造器\n\n通过对话方式快速创建技能，将工作流程固化或对接任意第三方接口：\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202202247.png\" width=\"800\" />\n</Frame>\n\n### 网页搜索和图像识别\n\n- **网页搜索**：内置 `web_search` 工具，支持多种搜索引擎，配置对应 API Key 即可使用\n- **图像识别**：支持 `gpt-4.1-mini`、`gpt-4.1` 等模型，配置 `OPENAI_API_KEY` 即可使用\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202213219.png\" width=\"800\" />\n</Frame>\n\n### 三方知识库和插件\n\n`linkai-agent` 技能可将 [LinkAI](https://link-ai.tech/) 上的所有智能体作为 Skill 使用，实现多智能体决策：\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202234350.png\" width=\"750\" />\n</Frame>\n\n## 参与共建\n\n2.0 版本后，项目将持续升级 Agent 能力、拓展接入渠道、内置工具、技能系统，降低模型成本和提升安全性。欢迎 [提出反馈](https://github.com/zhayujie/chatgpt-on-wechat/issues) 和 [贡献代码](https://github.com/zhayujie/chatgpt-on-wechat/pulls)。\n"
  },
  {
    "path": "docs/releases/v2.0.1.mdx",
    "content": "---\ntitle: v2.0.1\ndescription: CowAgent 2.0.1 - 内置 Web Search、智能上下文管理、多项修复\n---\n\n**发布日期**：2026.02 | [GitHub Release](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/2.0.1) | [Full Changelog](https://github.com/zhayujie/chatgpt-on-wechat/compare/2.0.0..2.0.1)\n\n## 新特性\n\n- **内置 Web Search 工具**：将网络搜索作为 Agent 内置工具集成，降低决策成本 ([4f0ea5d](https://github.com/zhayujie/chatgpt-on-wechat/commit/4f0ea5d7568d61db91ff69c91c429e785fd1b1c2))\n- **Claude Opus 4.6 模型支持**：新增对 Claude Opus 4.6 模型的支持 ([#2661](https://github.com/zhayujie/chatgpt-on-wechat/pull/2661))\n- **企业微信图片消息识别**：支持企业微信渠道的图片消息识别功能 ([#2667](https://github.com/zhayujie/chatgpt-on-wechat/pull/2667))\n\n## 优化\n\n- **智能上下文管理**：解决聊天上下文溢出问题，新增智能上下文裁剪策略，防止 token 超限 ([cea7fb7](https://github.com/zhayujie/chatgpt-on-wechat/commit/cea7fb7490c53454602bf05955a0e9f059bcf0fd), [8acf2db](https://github.com/zhayujie/chatgpt-on-wechat/commit/8acf2dbdfe713b84ad74b761b7f86674b1c1904d)) [#2663](https://github.com/zhayujie/chatgpt-on-wechat/issues/2663)\n- **运行时信息动态更新**：通过动态函数方案实现系统提示词中时间戳等运行时信息的自动更新 ([#2655](https://github.com/zhayujie/chatgpt-on-wechat/pull/2655), [#2657](https://github.com/zhayujie/chatgpt-on-wechat/pull/2657))\n- **Skill 提示词优化**：改进 Skill 系统提示词生成逻辑，简化工具描述，提升 Agent 表现 ([6c21833](https://github.com/zhayujie/chatgpt-on-wechat/commit/6c218331b1f1208ea8be6bf226936d3b556ade3e))\n- **智谱 AI 自定义 API Base URL**：支持智谱 AI 配置自定义 API Base URL ([#2660](https://github.com/zhayujie/chatgpt-on-wechat/pull/2660))\n- **启动脚本优化**：改进 `run.sh` 脚本的交互体验和配置流程 ([#2656](https://github.com/zhayujie/chatgpt-on-wechat/pull/2656))\n- **决策轮次日志**：新增 Agent 决策轮次的日志记录，便于调试 ([cb303e6](https://github.com/zhayujie/chatgpt-on-wechat/commit/cb303e6109c50c8dfef1f5e6c1ec47223bf3cd11))\n\n## 问题修复\n\n- **定时任务记忆丢失**：修复 Scheduler 调度器导致的记忆丢失问题 ([a77a874](https://github.com/zhayujie/chatgpt-on-wechat/commit/a77a8741b500a408c6f5c8868856fb4b018fe9db))\n- **空工具调用与超长结果**：修复空 tool calls 及过长工具返回结果的异常处理 ([0542700](https://github.com/zhayujie/chatgpt-on-wechat/commit/0542700f9091ebb08c1a56103b0f0f45f24aa621))\n- **OpenAI Function Call**：修复 OpenAI 模型的 function call 调用兼容性问题 ([158c87a](https://github.com/zhayujie/chatgpt-on-wechat/commit/158c87ab8b05bae054cc1b4eacdbb64fc1062ba9))\n- **Claude 工具名字段**：移除 Claude 模型响应中多余的 tool name 字段 ([eec10cb](https://github.com/zhayujie/chatgpt-on-wechat/commit/eec10cb5db6a3d5bc12ef606606532237d2c5f6e))\n- **MiniMax 推理优化**：优化 MiniMax 模型 reasoning content 处理，隐藏思考过程输出 ([c72cda3](https://github.com/zhayujie/chatgpt-on-wechat/commit/c72cda33864bd1542012ee6e0a8bd8c6c88cb5ed), [72b1cac](https://github.com/zhayujie/chatgpt-on-wechat/commit/72b1cacea1ba0d1f3dedacbab2e088e98fd7e172))\n- **智谱 AI 思考过程**：隐藏智谱 AI 模型的思考过程展示 ([72b1cac](https://github.com/zhayujie/chatgpt-on-wechat/commit/72b1cacea1ba0d1f3dedacbab2e088e98fd7e172))\n- **飞书连接与证书**：修复飞书渠道的 SSL 证书错误和连接异常问题 ([229b14b](https://github.com/zhayujie/chatgpt-on-wechat/commit/229b14b6fcabe7123d53cab1dea39f38dab26d6d), [8674421](https://github.com/zhayujie/chatgpt-on-wechat/commit/867442155e7f095b4f38b0856f8c1d8312b5fcf7))\n- **model_type 类型校验**：修复非字符串 `model_type` 导致的 `AttributeError` ([#2666](https://github.com/zhayujie/chatgpt-on-wechat/pull/2666))\n\n## 平台兼容\n\n- **Windows 兼容性适配**：修复 Windows 平台下路径处理、文件编码及 `os.getuid()` 不可用等问题，涉及多个工具模块 ([051ffd7](https://github.com/zhayujie/chatgpt-on-wechat/commit/051ffd78a372f71a967fd3259e37fe19131f83cf), [5264f7c](https://github.com/zhayujie/chatgpt-on-wechat/commit/5264f7ce18360ee4db5dcb4ebe67307977d40014))\n"
  },
  {
    "path": "docs/releases/v2.0.2.mdx",
    "content": "---\ntitle: v2.0.2\ndescription: CowAgent 2.0.2 - Web 控制台升级、多通道同时运行、会话持久化\n---\n\n## ✨ 重点更新\n\n### 🖥️ Web 控制台升级\n\n本次对 Web 控制台进行了全面升级，支持流式对话输出、工具执行过程和思考过程的可视化展示，并支持对模型、技能、记忆、通道、Agent 配置的在线查看和管理。\n\n#### 对话界面\n\n支持流式输出，可实时展示 Agent 的思考过程（Reasoning）和工具调用过程（Tool Calls），更直观地观察 Agent 的决策过程：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227180120.png\" />\n\n#### 模型管理\n\n支持在线管理模型配置，无需手动编辑配置文件：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173811.png\" />\n\n#### 技能管理\n\n支持在线查看和管理 Agent 技能（Skills）：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173403.png\" />\n\n#### 记忆管理\n\n支持在线查看和管理 Agent 记忆：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173349.png\" />\n\n#### 通道管理\n\n支持在线管理接入通道，支持实时连接/断开操作：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173331.png\" />\n\n#### 定时任务\n\n支持在线查看和管理定时任务，包括一次性任务、固定间隔、Cron 表达式等多种调度方式的可视化管理：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173704.png\" />\n\n#### 日志\n\n支持在线实时查看 Agent 运行日志，便于监控运行状态和排查问题：\n\n<img width=\"850\" src=\"https://cdn.link-ai.tech/doc/20260227173514.png\" />\n\n相关提交：[f1a1413](https://github.com/zhayujie/chatgpt-on-wechat/commit/f1a1413), [c0702c8](https://github.com/zhayujie/chatgpt-on-wechat/commit/c0702c8), [394853c](https://github.com/zhayujie/chatgpt-on-wechat/commit/394853c), [1c71c4e](https://github.com/zhayujie/chatgpt-on-wechat/commit/1c71c4e), [5e3eccb](https://github.com/zhayujie/chatgpt-on-wechat/commit/5e3eccb), [e1dc037](https://github.com/zhayujie/chatgpt-on-wechat/commit/e1dc037), [5edbf4c](https://github.com/zhayujie/chatgpt-on-wechat/commit/5edbf4c), [7d258b5](https://github.com/zhayujie/chatgpt-on-wechat/commit/7d258b5)\n\n### 🔀 多通道同时运行\n\n支持多个接入通道（如飞书、钉钉、企微应用、Web 等）同时运行，每个通道在独立子线程中启动，互不干扰。\n\n配置方式：在 `config.json` 中通过 `channel_type` 配置多个通道，以逗号分隔，也可在 Web 控制台的通道管理页面中实时连接或断开各通道。\n\n```json\n{\n  \"channel_type\": \"web,feishu,dingtalk\"\n}\n```\n\n相关提交：[4694594](https://github.com/zhayujie/chatgpt-on-wechat/commit/4694594), [7cce224](https://github.com/zhayujie/chatgpt-on-wechat/commit/7cce224), [7d258b5](https://github.com/zhayujie/chatgpt-on-wechat/commit/7d258b5), [c9adddb](https://github.com/zhayujie/chatgpt-on-wechat/commit/c9adddb)\n\n### 💾 会话持久化\n\n会话历史支持持久化存储至本地 SQLite 数据库，服务重启后会话上下文自动恢复，不再丢失。Web 控制台中的历史对话记录也会同步恢复展示。\n\n相关提交：[29bfbec](https://github.com/zhayujie/chatgpt-on-wechat/commit/29bfbec), [9917552](https://github.com/zhayujie/chatgpt-on-wechat/commit/9917552), [925d728](https://github.com/zhayujie/chatgpt-on-wechat/commit/925d728)\n\n### 🤖 新增模型\n\n- **Gemini 3.1 Pro Preview**：新增 `gemini-3.1-pro-preview` 模型支持 ([52d7cad](https://github.com/zhayujie/chatgpt-on-wechat/commit/52d7cad))\n- **Claude 4.6 Sonnet**：新增 `claude-4.6-sonnet` 模型支持 ([52d7cad](https://github.com/zhayujie/chatgpt-on-wechat/commit/52d7cad))\n- **Qwen3.5 Plus**：新增 `qwen3.5-plus` 模型支持 ([e59a289](https://github.com/zhayujie/chatgpt-on-wechat/commit/e59a289))\n- **MiniMax M2.5**：新增 `Minimax-M2.5` 模型支持 ([48db538](https://github.com/zhayujie/chatgpt-on-wechat/commit/48db538))\n- **GLM-5**：新增 `glm-5` 模型支持 ([48db538](https://github.com/zhayujie/chatgpt-on-wechat/commit/48db538))\n- **Kimi K2.5**：新增 `kimi-k2.5` 模型支持 ([48db538](https://github.com/zhayujie/chatgpt-on-wechat/commit/48db538))\n- **Doubao 2.0 Code**：新增 `doubao-2.0-code` 编程专用模型 ([ab28ee5](https://github.com/zhayujie/chatgpt-on-wechat/commit/ab28ee5))\n- **DashScope 模型**：新增阿里云 DashScope 模型名称支持 ([ce58f23](https://github.com/zhayujie/chatgpt-on-wechat/commit/ce58f23))\n\n### 🌐 
新增官网和文档中心\n\n- **官网上线**：[cowagent.ai](https://cowagent.ai/)\n- **文档中心上线**：[docs.cowagent.ai](https://docs.cowagent.ai/)\n\n### 🐛 问题修复\n\n- **Gemini 钉钉图片识别**：修复 Gemini 在钉钉通道中无法处理图片标记的问题 ([05a3304](https://github.com/zhayujie/chatgpt-on-wechat/commit/05a3304)) ([#2670](https://github.com/zhayujie/chatgpt-on-wechat/pull/2670)) Thanks [@SgtPepper114](https://github.com/SgtPepper114)\n- **启动脚本依赖**：修复 `run.sh` 脚本的依赖安装问题 ([b6fc9fa](https://github.com/zhayujie/chatgpt-on-wechat/commit/b6fc9fa))\n- **裸异常捕获**：将代码中的 `bare except` 替换为 `except Exception`，提升异常处理规范性 ([adca89b](https://github.com/zhayujie/chatgpt-on-wechat/commit/adca89b)) ([#2674](https://github.com/zhayujie/chatgpt-on-wechat/pull/2674)) Thanks [@haosenwang1018](https://github.com/haosenwang1018)\n\n**发布日期**：2026.02.27 | [Full Changelog](https://github.com/zhayujie/chatgpt-on-wechat/compare/2.0.1...master)\n"
  },
  {
    "path": "docs/releases/v2.0.3.mdx",
    "content": "---\ntitle: v2.0.3\ndescription: CowAgent 2.0.3 - 新增企微智能机器人和 QQ 通道、Web 控制台文件处理、记忆系统升级\n---\n\n## 🔌 新增接入通道\n\n### 企业微信智能机器人\n\n新增企业微信智能机器人（`wecom_bot`）通道，支持流式卡片消息输出，支持文本和图片消息的接收与回复，可在 Web 控制台中进行通道配置和管理。\n\n接入文档：[企微智能机器人接入](https://docs.cowagent.ai/channels/wecom-bot)。\n\n相关提交：[d4480b6](https://github.com/zhayujie/chatgpt-on-wechat/commit/d4480b6), [a42f31f](https://github.com/zhayujie/chatgpt-on-wechat/commit/a42f31f), [4ecd4df](https://github.com/zhayujie/chatgpt-on-wechat/commit/4ecd4df), [8b45d6c](https://github.com/zhayujie/chatgpt-on-wechat/commit/8b45d6c)\n\n### QQ 通道\n\n新增 QQ 官方机器人（`qq`）通道，支持文本和图片消息的接收与回复，支持私聊和群聊场景。\n\n接入文档参考：[QQ机器人接入](https://docs.cowagent.ai/channels/qq)。\n\n相关提交：[005a0e1](https://github.com/zhayujie/chatgpt-on-wechat/commit/005a0e1), [a4d54f5](https://github.com/zhayujie/chatgpt-on-wechat/commit/a4d54f5)\n\n## 🖥️ Web 控制台支持文件输入和处理\n\nWeb 控制台对话界面支持文件和图片上传，可直接发送文件给 Agent 进行处理。同时 Read 工具新增对 Office 文档（Word、Excel、PPT）的解析能力。\n\n相关提交：[30c6d9b](https://github.com/zhayujie/chatgpt-on-wechat/commit/30c6d9b)\n\n## 🤖 新增模型\n\n- **GPT-5.4 系列**：新增 `gpt-5.4`、`gpt-5.4-mini`、`gpt-5.4-nano` 模型支持 ([1623deb](https://github.com/zhayujie/chatgpt-on-wechat/commit/1623deb))\n- **Gemini 3.1 Flash Lite Preview**：新增 `gemini-3.1-flash-lite-preview` 模型支持 ([ba915f2](https://github.com/zhayujie/chatgpt-on-wechat/commit/ba915f2))\n\n## 💰 Coding Plan 支持\n\n新增各厂商 Coding Plan（编程包月套餐）的接入支持，通过 OpenAI 兼容方式统一接入。目前已支持阿里云、MiniMax、智谱 GLM、Kimi、火山引擎等厂商。\n\n详细配置参考 [Coding Plan 文档](https://docs.cowagent.ai/models/coding-plan)。\n\n## 🧠 记忆系统升级\n\n记忆写入（Memory Flush）升级：\n\n- 使用 LLM 对超出上下文窗口的对话内容进行智能摘要，生成精炼的每日记忆条目\n- 摘要在后台线程异步执行，不阻塞回复\n- 优化上下文批量裁剪策略，降低冲刷频率\n- 新增每日定时冲刷兜底机制，避免低活跃场景下记忆丢失\n- 修复上下文记忆丢失问题\n\n相关提交：[022c13f](https://github.com/zhayujie/chatgpt-on-wechat/commit/022c13f), [c116235](https://github.com/zhayujie/chatgpt-on-wechat/commit/c116235)\n\n## 🔧 工具重构\n\n- **图片识别**：将图片识别（Image Vision）从 Skill 重构为内置 Tool，新增独立的图片视觉提供方（Vision Provider）配置，提升稳定性和可维护性 ([a50fafa](https://github.com/zhayujie/chatgpt-on-wechat/commit/a50fafa), [3b8b562](https://github.com/zhayujie/chatgpt-on-wechat/commit/3b8b562))\n- **网页抓取**：将网页抓取（Web Fetch）从 Skill 重构为内置 Tool，支持远程文档文件（PDF、Word、Excel、PPT）的下载和解析 ([ccb9030](https://github.com/zhayujie/chatgpt-on-wechat/commit/ccb9030), [fa61744](https://github.com/zhayujie/chatgpt-on-wechat/commit/fa61744))\n\n## 🐳 Docker 部署优化\n\n- **配置模板对齐**：`docker-compose.yml` 环境变量与 `config-template.json` 对齐，补充完整的模型 API Key 和 Agent 等配置项\n- **Web 控制台端口映射**：新增 `9899` 端口映射，Docker 部署后可通过浏览器访问 Web 控制台\n- **配置热更新**：各模型 Bot 的 API Key 和 API Base 改为实时读取，通过 Web 控制台修改配置后无需重启即可生效\n- **工作空间持久化**：新增 `./cow` Volume 挂载，Agent 工作空间数据（记忆、人格、技能等）持久化到宿主机，容器重建或升级不丢失\n\n## ⚡ 性能优化\n\n- **启动加速**：飞书通道采用懒加载方式导入依赖，避免 4-10 秒的启动延迟 ([924dc79](https://github.com/zhayujie/chatgpt-on-wechat/commit/924dc79))\n- **通道稳定性**：优化通道连接稳定性，支持通道配置通过环境变量设置 ([f1c04bc](https://github.com/zhayujie/chatgpt-on-wechat/commit/f1c04bc), [46d97fd](https://github.com/zhayujie/chatgpt-on-wechat/commit/46d97fd))\n\n## 🐛 问题修复\n\n- **bot_type 配置**：修复 Agent 模式下 `bot_type` 配置传递问题 ([#2691](https://github.com/zhayujie/chatgpt-on-wechat/pull/2691)) Thanks [@Weikjssss](https://github.com/Weikjssss)\n- **bot_type 优先级**：调整 Agent 模式下 `bot_type` 的解析优先级 ([#2692](https://github.com/zhayujie/chatgpt-on-wechat/pull/2692)) Thanks [@6vision](https://github.com/6vision)\n- **智谱模型配置**：修复智谱 `bot_type` 命名、Web 控制台持久化及正则转义问题 ([#2693](https://github.com/zhayujie/chatgpt-on-wechat/pull/2693)) Thanks [@6vision](https://github.com/6vision)\n- **OpenAI 兼容层**：使用 `openai_compat` 
层统一错误处理 ([#2688](https://github.com/zhayujie/chatgpt-on-wechat/pull/2688)) Thanks [@JasonOA888](https://github.com/JasonOA888)\n- **OpenAI 兼容迁移**：完成所有模型 Bot 的 `openai_compat` 迁移 ([#2689](https://github.com/zhayujie/chatgpt-on-wechat/pull/2689))\n- **Gemini 工具调用**：修复 Gemini 模型的工具调用匹配问题 ([eda82ba](https://github.com/zhayujie/chatgpt-on-wechat/commit/eda82ba))\n- **会话并发**：修复会话并发场景下的竞态条件问题 ([9879878](https://github.com/zhayujie/chatgpt-on-wechat/commit/9879878))\n- **历史消息恢复**：修复历史会话消息不完整问题，仅恢复 user/assistant 文本消息，剥离工具调用 ([b788a3d](https://github.com/zhayujie/chatgpt-on-wechat/commit/b788a3d), [a33ce97](https://github.com/zhayujie/chatgpt-on-wechat/commit/a33ce97))\n- **飞书群聊**：移除飞书群聊场景下对 `bot_name` 的依赖 ([b641bff](https://github.com/zhayujie/chatgpt-on-wechat/commit/b641bff))\n- **Safari 兼容**：修复 Safari 浏览器 IME 回车键误触发消息发送问题 ([0687916](https://github.com/zhayujie/chatgpt-on-wechat/commit/0687916))\n- **Windows 兼容**：修复 Windows 下 bash 风格 `$VAR` 环境变量转换为 `%VAR%` 的问题 ([7c67513](https://github.com/zhayujie/chatgpt-on-wechat/commit/7c67513))\n- **MiniMax 参数**：增加 MiniMax 模型的 `max_tokens` 限制 ([1767413](https://github.com/zhayujie/chatgpt-on-wechat/commit/1767413))\n- **.gitignore 更新**：添加 Python 目录忽略规则 ([#2683](https://github.com/zhayujie/chatgpt-on-wechat/pull/2683)) Thanks [@pelioo](https://github.com/pelioo)\n- **AGENT.md 主动演进**：优化系统提示词中对 AGENT.md 的更新引导，从被动的\"用户修改时更新\"改为主动识别对话中的性格、风格变化并自动更新\n\n## 📦 升级方式\n\n源码部署可执行 `./run.sh update` 一键升级，或手动拉取代码后重启。详见 [更新升级文档](https://docs.cowagent.ai/guide/upgrade)。\n\n**发布日期**：2026.03.18 | [Full Changelog](https://github.com/zhayujie/chatgpt-on-wechat/compare/2.0.2...master)\n"
  },
  {
    "path": "docs/skills/image-vision.mdx",
    "content": "---\ntitle: 图像识别\ndescription: 使用 OpenAI 视觉模型识别图片\n---\n\n使用 OpenAI 的 GPT-4 Vision API 分析图片内容，理解图像中的物体、文字、颜色等元素。\n\n## 依赖\n\n| 依赖 | 说明 |\n| --- | --- |\n| `OPENAI_API_KEY` | OpenAI API 密钥 |\n| `curl`、`base64` | 系统命令（通常已预装） |\n\n配置方式：\n\n- 通过 `env_config` 工具配置 `OPENAI_API_KEY`\n- 或在 `config.json` 中填写 `open_ai_api_key`\n\n## 支持的模型\n\n- `gpt-4.1-mini`（推荐，性价比高）\n- `gpt-4.1`\n\n## 使用方式\n\n配置完成后，向 Agent 发送图片即可自动触发图像识别。\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202213219.png\" width=\"800\" />\n</Frame>\n"
  },
  {
    "path": "docs/skills/index.mdx",
    "content": "---\ntitle: 技能概览\ndescription: CowAgent 技能系统介绍\n---\n\n技能（Skill）为 Agent 提供无限的扩展性。每个 Skill 由说明文件（`SKILL.md`）、运行脚本（可选）、资源（可选）组成，描述如何完成特定类型的任务。\n\nSkill 与 Tool 的区别：Tool 是由代码实现的原子操作（如读写文件、执行命令），Skill 则是基于说明文件的高级工作流，可以组合调用多个 Tool 来完成复杂任务。\n\n## 内置技能\n\n位于项目 `skills/` 目录下，根据依赖条件自动判断是否启用：\n\n| 技能 | 说明 | 依赖 |\n| --- | --- | --- |\n| [`skill-creator`](/skills/skill-creator) | 通过对话创建自定义技能 | 无 |\n| [`openai-image-vision`](/skills/image-vision) | 使用 OpenAI 视觉模型识别图片 | `OPENAI_API_KEY` |\n| [`linkai-agent`](/skills/linkai-agent) | 对接 LinkAI 平台智能体 | `LINKAI_API_KEY` |\n| [`web-fetch`](/skills/web-fetch) | 抓取网页文本内容 | `curl`（默认启用） |\n\n## 自定义技能\n\n由用户通过对话创建，存放在工作空间中（`~/cow/skills/`），可实现任何复杂的业务流程和第三方系统对接。\n\n## 技能加载优先级\n\n1. **工作空间技能**（最高）：`~/cow/skills/`\n2. **项目内置技能**（最低）：`skills/`\n\n同名技能按优先级覆盖。\n\n## 技能文件结构\n\n```\nskills/\n├── my-skill/\n│   ├── SKILL.md          # Skill description (frontmatter + instructions)\n│   ├── scripts/          # Execution scripts (optional)\n│   └── resources/        # Additional resources (optional)\n```\n\n### SKILL.md 格式\n\n```markdown\n---\nname: my-skill\ndescription: Brief description of the skill\nmetadata:\n  emoji: 🔧\n  requires:\n    bins: [\"curl\"]\n    env: [\"MY_API_KEY\"]\n  primaryEnv: \"MY_API_KEY\"\n---\n\n# My Skill\n\nDetailed instructions...\n```\n\n| 字段 | 说明 |\n| --- | --- |\n| `name` | 技能名称，需与目录名一致 |\n| `description` | 技能描述，Agent 据此决定是否调用 |\n| `metadata.requires.bins` | 依赖的系统命令 |\n| `metadata.requires.env` | 依赖的环境变量 |\n| `metadata.always` | 是否始终加载（默认 false） |\n"
  },
  {
    "path": "docs/skills/linkai-agent.mdx",
    "content": "---\ntitle: LinkAI 智能体\ndescription: 对接 LinkAI 平台的多智能体技能\n---\n\n将 [LinkAI](https://link-ai.tech/) 平台上的智能体作为 Skill 使用，实现多智能体决策。Agent 根据智能体的名称和描述智能选择，通过 `app_code` 调用对应的应用或工作流。\n\n## 依赖\n\n| 依赖 | 说明 |\n| --- | --- |\n| `LINKAI_API_KEY` | LinkAI 平台 API 密钥，在 [控制台](https://link-ai.tech/console/interface) 创建 |\n| `curl` | 系统命令（通常已预装） |\n\n配置方式：\n\n- 通过 `env_config` 工具配置 `LINKAI_API_KEY`\n- 或在 `config.json` 中填写 `linkai_api_key`\n\n## 配置智能体\n\n在 `skills/linkai-agent/config.json` 中添加可用的智能体：\n\n```json\n{\n  \"apps\": [\n    {\n      \"app_code\": \"G7z6vKwp\",\n      \"app_name\": \"LinkAI客服助手\",\n      \"app_description\": \"当用户需要了解LinkAI平台相关问题时才选择该助手\"\n    },\n    {\n      \"app_code\": \"SFY5x7JR\",\n      \"app_name\": \"内容创作助手\",\n      \"app_description\": \"当用户需要创作图片或视频时才使用该助手\"\n    }\n  ]\n}\n```\n\n## 使用方式\n\n配置完成后，Agent 会根据用户的问题自动选择合适的 LinkAI 智能体进行回答。\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202234350.png\" width=\"750\" />\n</Frame>\n"
  },
  {
    "path": "docs/skills/skill-creator.mdx",
    "content": "---\ntitle: 创建技能\ndescription: 通过对话创建自定义技能\n---\n\n通过自然语言对话快速创建、安装或更新技能。\n\n## 依赖\n\n无额外依赖，始终可用。\n\n## 使用方式\n\n- 将工作流程固化为技能：\"帮我把这个部署流程创建为一个技能\"\n- 对接第三方 API：\"根据这个接口文档创建一个技能\"\n- 安装远程技能：\"帮我安装 xxx 技能\"\n\n## 创建流程\n\n1. 告诉 Agent 你想创建的技能功能\n2. Agent 自动生成 `SKILL.md` 说明文件和运行脚本\n3. 技能保存到工作空间的 `~/cow/skills/` 目录\n4. 后续对话中 Agent 会自动识别并使用该技能\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202202247.png\" width=\"800\" />\n</Frame>\n\n<Tip>\n  详细开发文档可参考 [Skill 创造器说明](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/skills/skill-creator/SKILL.md)。\n</Tip>\n"
  },
  {
    "path": "docs/skills/web-fetch.mdx",
    "content": "---\ntitle: 网页抓取\ndescription: 抓取网页文本内容\n---\n\n使用 curl 抓取网页并提取可读文本内容，轻量级的网页访问方式，无需浏览器自动化。\n\n## 依赖\n\n| 依赖 | 说明 |\n| --- | --- |\n| `curl` | 系统命令（通常已预装） |\n\n该技能设置了 `always: true`，只要系统有 `curl` 命令即默认启用。\n\n## 使用方式\n\n当 Agent 需要获取某个 URL 的网页内容时会自动调用，无需额外配置。\n\n## 与 browser 工具的区别\n\n| 特性 | web-fetch（技能） | browser（工具） |\n| --- | --- | --- |\n| 依赖 | 仅 curl | browser-use + playwright |\n| JS 渲染 | 不支持 | 支持 |\n| 页面交互 | 不支持 | 支持点击、输入等 |\n| 适用场景 | 获取静态页面文本 | 操作动态网页 |\n\n<Tip>\n  对于大多数网页内容获取场景，web-fetch 就够用了。只有需要 JS 渲染或页面交互时才需要 browser 工具。\n</Tip>\n"
  },
  {
    "path": "docs/tools/bash.mdx",
    "content": "---\ntitle: bash - 终端\ndescription: 执行系统命令\n---\n\n在当前工作目录执行 Bash 命令，返回 stdout 和 stderr。`env_config` 中配置的 API Key 会自动注入到环境变量中。\n\n## 依赖\n\n无额外依赖，默认可用。\n\n## 参数\n\n| 参数 | 类型 | 必填 | 说明 |\n| --- | --- | --- | --- |\n| `command` | string | 是 | 要执行的命令 |\n| `timeout` | integer | 否 | 超时时间（秒） |\n\n## 使用场景\n\n- 安装软件包和依赖\n- 运行代码和测试\n- 部署应用和服务（Nginx 配置、进程管理等）\n- 系统运维和排查\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260203121008.png\" width=\"800\" />\n</Frame>\n"
  },
  {
    "path": "docs/tools/browser.mdx",
    "content": "---\ntitle: browser - 浏览器\ndescription: 访问和操作网页\n---\n\n使用浏览器访问和操作网页，支持 JavaScript 渲染的动态页面。\n\n## 依赖\n\n| 依赖 | 安装命令 |\n| --- | --- |\n| `browser-use` ≥ 0.1.40 | `pip install browser-use` |\n| `markdownify` | `pip install markdownify` |\n| `playwright` + chromium | `pip install playwright && playwright install chromium` |\n\n## 使用场景\n\n- 访问指定 URL 获取页面内容\n- 操作网页元素（点击、输入等）\n- 验证部署后的网页效果\n- 抓取需要 JS 渲染的动态内容\n\n<Note>\n  浏览器工具依赖较重，如不需要可不安装。轻量的网页内容获取可使用 `web-fetch` 技能。\n</Note>\n"
  },
  {
    "path": "docs/tools/edit.mdx",
    "content": "---\ntitle: edit - 文件编辑\ndescription: 通过精确文本替换编辑文件\n---\n\n通过精确文本替换编辑文件。如果 `oldText` 为空则追加到文件末尾。\n\n## 依赖\n\n无额外依赖，默认可用。\n\n## 参数\n\n| 参数 | 类型 | 必填 | 说明 |\n| --- | --- | --- | --- |\n| `path` | string | 是 | 文件路径 |\n| `oldText` | string | 是 | 要替换的原始文本（为空时追加到末尾） |\n| `newText` | string | 是 | 替换后的文本 |\n\n## 使用场景\n\n- 修改配置文件中的特定参数\n- 修复代码中的 bug\n- 在文件指定位置插入内容\n"
  },
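按上表参数语义，edit 工具的行为大致如下（假设性实现，"只替换第一处匹配"为示意中的假设）：

```python
def edit_file(path, old_text, new_text):
    """按 edit 工具的参数语义编辑文件：old_text 为空时追加到文件末尾"""
    with open(path, encoding="utf-8") as f:
        content = f.read()
    if old_text == "":
        content += new_text  # oldText 为空：追加到末尾
    elif old_text not in content:
        raise ValueError("oldText not found in file")
    else:
        content = content.replace(old_text, new_text, 1)  # 假设只替换第一处匹配
    with open(path, "w", encoding="utf-8") as f:
        f.write(content)
```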
  {
    "path": "docs/tools/env-config.mdx",
    "content": "---\ntitle: env_config - 环境变量\ndescription: 管理 API Key 等秘钥配置\n---\n\n管理工作空间 `.env` 文件中的环境变量（API Key 等秘钥），支持通过对话安全地添加和更新。内置安全保护和脱敏策略。\n\n## 依赖\n\n| 依赖 | 安装命令 |\n| --- | --- |\n| `python-dotenv` ≥ 1.0.0 | `pip install python-dotenv>=1.0.0` |\n\n安装扩展依赖时已包含：`pip3 install -r requirements-optional.txt`\n\n## 参数\n\n| 参数 | 类型 | 必填 | 说明 |\n| --- | --- | --- | --- |\n| `action` | string | 是 | 操作类型：`get`、`set`、`list`、`delete` |\n| `key` | string | 否 | 环境变量名称 |\n| `value` | string | 否 | 环境变量值（仅 `set` 时需要） |\n\n## 使用方式\n\n直接告诉 Agent 需要配置的秘钥，Agent 会自动调用该工具：\n\n- \"帮我配置 BOCHA_API_KEY\"\n- \"设置 OPENAI_API_KEY 为 sk-xxx\"\n- \"查看已配置的环境变量\"\n\n配置的秘钥会自动注入到 `bash` 工具的执行环境中。\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202234939.png\" width=\"800\" />\n</Frame>\n"
  },
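基于 python-dotenv 的最小示意如下（假设性实现，非工具源码；`.env` 路径与"前 4 位 + 掩码"的脱敏规则均为示意）：

```python
from pathlib import Path

from dotenv import dotenv_values, set_key, unset_key

ENV_PATH = ".env"  # 假设密钥保存在工作空间根目录的 .env 文件中


def env_set(key, value):
    Path(ENV_PATH).touch(exist_ok=True)  # 确保文件存在
    set_key(ENV_PATH, key, value)


def env_list():
    """列出变量时对值做掩码，避免泄露完整密钥（脱敏规则仅为示意）"""
    return {k: (v[:4] + "****" if v else v) for k, v in dotenv_values(ENV_PATH).items()}


def env_delete(key):
    unset_key(ENV_PATH, key)
```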
  {
    "path": "docs/tools/index.mdx",
    "content": "---\ntitle: 工具概览\ndescription: CowAgent 内置工具系统\n---\n\n工具是 Agent 访问操作系统资源的核心能力。Agent 会根据任务需求智能选择和调用工具，完成文件操作、命令执行、联网搜索、定时任务等各类操作。工具实现在项目的 `agent/tools/` 目录下。\n\n## 内置工具\n\n以下工具默认可用，无需额外配置：\n\n<CardGroup cols={2}>\n  <Card title=\"read - 文件读取\" icon=\"file\" href=\"/tools/read\">\n    读取文件内容，支持文本、图片、PDF\n  </Card>\n  <Card title=\"write - 文件写入\" icon=\"pen\" href=\"/tools/write\">\n    创建或覆盖写入文件\n  </Card>\n  <Card title=\"edit - 文件编辑\" icon=\"pen-to-square\" href=\"/tools/edit\">\n    通过精确文本替换编辑文件\n  </Card>\n  <Card title=\"ls - 目录列表\" icon=\"folder-open\" href=\"/tools/ls\">\n    列出目录内容\n  </Card>\n  <Card title=\"bash - 终端\" icon=\"terminal\" href=\"/tools/bash\">\n    执行系统命令\n  </Card>\n  <Card title=\"send - 文件发送\" icon=\"paper-plane\" href=\"/tools/send\">\n    向用户发送文件或图片\n  </Card>\n  <Card title=\"memory - 记忆\" icon=\"brain\" href=\"/tools/memory\">\n    搜索和读取长期记忆\n  </Card>\n</CardGroup>\n\n## 可选工具\n\n以下工具需要安装额外依赖或配置 API Key 后启用：\n\n<CardGroup cols={2}>\n  <Card title=\"env_config - 环境变量\" icon=\"key\" href=\"/tools/env-config\">\n    管理 API Key 等秘钥配置\n  </Card>\n  <Card title=\"scheduler - 定时任务\" icon=\"clock\" href=\"/tools/scheduler\">\n    创建和管理定时任务\n  </Card>\n  <Card title=\"web_search - 联网搜索\" icon=\"magnifying-glass\" href=\"/tools/web-search\">\n    搜索互联网获取实时信息\n  </Card>\n</CardGroup>\n"
  },
  {
    "path": "docs/tools/ls.mdx",
    "content": "---\ntitle: ls - 目录列表\ndescription: 列出目录内容\n---\n\n列出目录内容，按字母排序，目录名带 `/` 后缀，包含隐藏文件。\n\n## 依赖\n\n无额外依赖，默认可用。\n\n## 参数\n\n| 参数 | 类型 | 必填 | 说明 |\n| --- | --- | --- | --- |\n| `path` | string | 是 | 目录路径，相对路径基于工作空间目录 |\n| `limit` | integer | 否 | 最大返回条目数，默认 500 |\n\n## 使用场景\n\n- 浏览项目结构\n- 查找特定文件\n- 检查目录是否存在\n"
  },
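按上述语义，ls 工具的行为大致等价于下面的示意（假设性实现，非工具源码）：

```python
import os


def list_dir(path, limit=500):
    """按字母排序列出目录内容，目录名追加 / 后缀，保留隐藏文件"""
    entries = sorted(os.listdir(path))  # os.listdir 默认包含隐藏文件
    entries = [
        name + "/" if os.path.isdir(os.path.join(path, name)) else name
        for name in entries
    ]
    return entries[:limit]  # 对应 limit 参数，默认 500
```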
  {
    "path": "docs/tools/memory.mdx",
    "content": "---\ntitle: memory - 记忆\ndescription: 搜索和读取长期记忆\n---\n\n记忆工具包含两个子工具：`memory_search`（搜索记忆）和 `memory_get`（读取记忆文件）。\n\n## 依赖\n\n无额外依赖，默认可用。由 Agent Core 的记忆系统管理。\n\n## memory_search\n\n搜索历史记忆，支持关键词和向量混合检索。\n\n| 参数 | 类型 | 必填 | 说明 |\n| --- | --- | --- | --- |\n| `query` | string | 是 | 搜索查询 |\n\n## memory_get\n\n读取特定记忆文件的内容。\n\n| 参数 | 类型 | 必填 | 说明 |\n| --- | --- | --- | --- |\n| `path` | string | 是 | 记忆文件的相对路径（如 `MEMORY.md`、`memory/2026-01-01.md`） |\n| `start_line` | integer | 否 | 起始行号 |\n| `end_line` | integer | 否 | 结束行号 |\n\n## 工作方式\n\nAgent 会在以下场景自动调用记忆工具：\n\n- 用户分享重要信息时 → 存储到记忆\n- 需要参考历史信息时 → 搜索相关记忆\n- 对话达到一定长度时 → 提取摘要存储\n"
  },
  {
    "path": "docs/tools/read.mdx",
    "content": "---\ntitle: read - 文件读取\ndescription: 读取文件内容\n---\n\n读取文件内容。支持文本文件、PDF 文件、图片（返回元数据）等格式。\n\n## 依赖\n\n无额外依赖，默认可用。\n\n## 参数\n\n| 参数 | 类型 | 必填 | 说明 |\n| --- | --- | --- | --- |\n| `path` | string | 是 | 文件路径，相对路径基于工作空间目录 |\n| `offset` | integer | 否 | 起始行号（1-indexed），负值表示从末尾读取 |\n| `limit` | integer | 否 | 读取行数 |\n\n## 使用场景\n\n- 查看配置文件、日志文件\n- 读取代码文件进行分析\n- 检查图片/视频的文件信息\n"
  },
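对文本文件而言，`offset`/`limit` 的语义大致如下（假设性实现，非工具源码；图片和 PDF 的处理不在此示意范围内）：

```python
def read_lines(path, offset=1, limit=None):
    """offset 为 1 起始的行号，负值表示自文件末尾回数；limit 限制读取行数"""
    with open(path, encoding="utf-8") as f:
        lines = f.readlines()
    start = offset - 1 if offset > 0 else max(len(lines) + offset, 0)
    end = start + limit if limit else len(lines)
    return "".join(lines[start:end])
```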
  {
    "path": "docs/tools/scheduler.mdx",
    "content": "---\ntitle: scheduler - 定时任务\ndescription: 创建和管理定时任务\n---\n\n创建和管理动态定时任务，支持灵活的调度方式和执行模式。\n\n## 依赖\n\n| 依赖 | 安装命令 |\n| --- | --- |\n| `croniter` ≥ 2.0.0 | `pip install croniter>=2.0.0` |\n\n安装核心依赖时已包含：`pip3 install -r requirements.txt`\n\n## 调度方式\n\n| 方式 | 说明 |\n| --- | --- |\n| 一次性任务 | 在指定时间执行一次 |\n| 固定间隔 | 按固定时间间隔重复执行 |\n| Cron 表达式 | 使用 Cron 语法定义复杂调度规则 |\n\n## 执行模式\n\n- **固定消息发送**：到达触发时间时发送预设消息\n- **Agent 动态任务**：到达触发时间时由 Agent 智能执行任务\n\n## 使用方式\n\n通过自然语言即可创建和管理定时任务：\n\n- \"每天早上 9 点给我发天气预报\"\n- \"每隔 2 小时检查一下服务器状态\"\n- \"明天下午 3 点提醒我开会\"\n- \"查看所有定时任务\"\n\n<Frame>\n  <img src=\"https://cdn.link-ai.tech/doc/20260202195402.png\" width=\"800\" />\n</Frame>\n"
  },
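Cron 表达式的下次触发时间可以用 croniter 计算，这也是该依赖的典型用法（示意代码，非工具内部实现）：

```python
from datetime import datetime

from croniter import croniter


def next_run(cron_expr, base=None):
    """返回 Cron 表达式在 base 之后的下一次触发时间"""
    return croniter(cron_expr, base or datetime.now()).get_next(datetime)


# "每天早上 9 点发天气预报" 对应的 Cron 表达式
print(next_run("0 9 * * *"))
```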
  {
    "path": "docs/tools/send.mdx",
    "content": "---\ntitle: send - 文件发送\ndescription: 向用户发送文件\n---\n\n向用户发送文件（图片、视频、音频、文档等），当用户明确要求发送/分享文件时使用。\n\n## 依赖\n\n无额外依赖，默认可用。\n\n## 参数\n\n| 参数 | 类型 | 必填 | 说明 |\n| --- | --- | --- | --- |\n| `path` | string | 是 | 文件路径，可以是绝对路径或相对于工作空间的路径 |\n| `message` | string | 否 | 附带的消息说明 |\n\n## 使用场景\n\n- 将生成的代码或文档发送给用户\n- 发送截图、图表\n- 分享下载的文件\n"
  },
  {
    "path": "docs/tools/web-search.mdx",
    "content": "---\ntitle: web_search - 联网搜索\ndescription: 搜索互联网获取实时信息\n---\n\n搜索互联网获取实时信息、新闻、研究等内容。支持两个搜索后端，自动选择可用的后端。\n\n## 依赖\n\n需要配置至少一个搜索 API Key（通过 `env_config` 工具或工作空间 `.env` 文件配置）：\n\n| 后端 | 环境变量 | 优先级 | 获取方式 |\n| --- | --- | --- | --- |\n| 博查搜索 | `BOCHA_API_KEY` | 优先使用 | [博查开放平台](https://open.bochaai.com/) |\n| LinkAI 搜索 | `LINKAI_API_KEY` | 可选 | [LinkAI 控制台](https://link-ai.tech/console/interface) |\n\n## 参数\n\n| 参数 | 类型 | 必填 | 说明 |\n| --- | --- | --- | --- |\n| `query` | string | 是 | 搜索关键词 |\n| `count` | integer | 否 | 返回结果数量（1-50，默认 10） |\n| `freshness` | string | 否 | 时间范围：`noLimit`、`oneDay`、`oneWeek`、`oneMonth`、`oneYear`，或日期范围如 `2025-01-01..2025-02-01` |\n| `summary` | boolean | 否 | 是否返回页面摘要（默认 false） |\n\n## 使用场景\n\n当用户询问最新信息、需要事实核查或获取实时数据时，Agent 会自动调用此工具。\n\n<Note>\n  如果未配置任何搜索 API Key，该工具不会被加载。\n</Note>\n"
  },
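后端选择逻辑大致如下（假设性示意，后端名称与优先顺序来自上表，具体请求实现略去）：

```python
import os


def pick_search_backend():
    """按配置的 API Key 选择搜索后端；都未配置时返回 None（工具不加载）"""
    if os.environ.get("BOCHA_API_KEY"):
        return "bocha"   # 优先使用博查搜索
    if os.environ.get("LINKAI_API_KEY"):
        return "linkai"  # 备用：LinkAI 搜索
    return None
```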
  {
    "path": "docs/tools/write.mdx",
    "content": "---\ntitle: write - 文件写入\ndescription: 创建或覆盖写入文件\n---\n\n写入内容到文件。文件不存在则自动创建，已存在则覆盖。自动创建父目录。\n\n## 依赖\n\n无额外依赖，默认可用。\n\n## 参数\n\n| 参数 | 类型 | 必填 | 说明 |\n| --- | --- | --- | --- |\n| `path` | string | 是 | 文件路径 |\n| `content` | string | 是 | 要写入的内容 |\n\n## 使用场景\n\n- 创建新的代码文件或脚本\n- 生成配置文件\n- 保存处理结果\n\n<Note>\n  单次写入不应超过 10KB。对于大文件，建议先创建骨架，再使用 edit 工具分块添加内容。\n</Note>\n"
  },
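write 工具的行为大致等价于下面的示意（假设性实现，非工具源码）：

```python
from pathlib import Path


def write_file(path, content):
    """覆盖写入文件：自动创建父目录，文件已存在则直接覆盖"""
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)  # 自动创建父目录
    p.write_text(content, encoding="utf-8")      # 不存在则创建，存在则覆盖
```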
  {
    "path": "models/ali/ali_qwen_bot.py",
    "content": "# encoding:utf-8\n\nimport json\nimport time\nfrom typing import List, Tuple\n\nimport openai\nfrom models.openai.openai_compat import RateLimitError, Timeout, APIError, APIConnectionError\nimport broadscope_bailian\nfrom broadscope_bailian import ChatQaMessage\n\nfrom models.bot import Bot\nfrom models.ali.ali_qwen_session import AliQwenSession\nfrom models.session_manager import SessionManager\nfrom bridge.context import ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom common import const\nfrom config import conf, load_config\n\nclass AliQwenBot(Bot):\n    def __init__(self):\n        super().__init__()\n        self.api_key_expired_time = self.set_api_key()\n        self.sessions = SessionManager(AliQwenSession, model=conf().get(\"model\", const.QWEN))\n\n    def api_key_client(self):\n        return broadscope_bailian.AccessTokenClient(access_key_id=self.access_key_id(), access_key_secret=self.access_key_secret())\n\n    def access_key_id(self):\n        return conf().get(\"qwen_access_key_id\")\n\n    def access_key_secret(self):\n        return conf().get(\"qwen_access_key_secret\")\n\n    def agent_key(self):\n        return conf().get(\"qwen_agent_key\")\n\n    def app_id(self):\n        return conf().get(\"qwen_app_id\")\n\n    def node_id(self):\n        return conf().get(\"qwen_node_id\", \"\")\n\n    def temperature(self):\n        return conf().get(\"temperature\", 0.2 )\n\n    def top_p(self):\n        return conf().get(\"top_p\", 1)\n\n    def reply(self, query, context=None):\n        # acquire reply content\n        if context.type == ContextType.TEXT:\n            logger.info(\"[QWEN] query={}\".format(query))\n\n            session_id = context[\"session_id\"]\n            reply = None\n            clear_memory_commands = conf().get(\"clear_memory_commands\", [\"#清除记忆\"])\n            if query in clear_memory_commands:\n                self.sessions.clear_session(session_id)\n                reply = Reply(ReplyType.INFO, \"记忆已清除\")\n            elif query == \"#清除所有\":\n                self.sessions.clear_all_session()\n                reply = Reply(ReplyType.INFO, \"所有人记忆已清除\")\n            elif query == \"#更新配置\":\n                load_config()\n                reply = Reply(ReplyType.INFO, \"配置已更新\")\n            if reply:\n                return reply\n            session = self.sessions.session_query(query, session_id)\n            logger.debug(\"[QWEN] session query={}\".format(session.messages))\n\n            reply_content = self.reply_text(session)\n            logger.debug(\n                \"[QWEN] new_query={}, session_id={}, reply_cont={}, completion_tokens={}\".format(\n                    session.messages,\n                    session_id,\n                    reply_content[\"content\"],\n                    reply_content[\"completion_tokens\"],\n                )\n            )\n            if reply_content[\"completion_tokens\"] == 0 and len(reply_content[\"content\"]) > 0:\n                reply = Reply(ReplyType.ERROR, reply_content[\"content\"])\n            elif reply_content[\"completion_tokens\"] > 0:\n                self.sessions.session_reply(reply_content[\"content\"], session_id, reply_content[\"total_tokens\"])\n                reply = Reply(ReplyType.TEXT, reply_content[\"content\"])\n            else:\n                reply = Reply(ReplyType.ERROR, reply_content[\"content\"])\n                logger.debug(\"[QWEN] reply {} used 0 tokens.\".format(reply_content))\n            return 
reply\n\n        else:\n            reply = Reply(ReplyType.ERROR, \"Bot不支持处理{}类型的消息\".format(context.type))\n            return reply\n\n    def reply_text(self, session: AliQwenSession, retry_count=0) -> dict:\n        \"\"\"\n        call bailian's ChatCompletion to get the answer\n        :param session: a conversation session\n        :param retry_count: retry count\n        :return: {}\n        \"\"\"\n        try:\n            prompt, history = self.convert_messages_format(session.messages)\n            self.update_api_key_if_expired()\n            # NOTE 阿里百炼的call()函数未提供temperature参数，考虑到temperature和top_p参数作用相同，取两者较小的值作为top_p参数传入，详情见文档 https://help.aliyun.com/document_detail/2587502.htm\n            response = broadscope_bailian.Completions().call(app_id=self.app_id(), prompt=prompt, history=history, top_p=min(self.temperature(), self.top_p()))\n            completion_content = self.get_completion_content(response, self.node_id())\n            completion_tokens, total_tokens = self.calc_tokens(session.messages, completion_content)\n            return {\n                \"total_tokens\": total_tokens,\n                \"completion_tokens\": completion_tokens,\n                \"content\": completion_content,\n            }\n        except Exception as e:\n            need_retry = retry_count < 2\n            result = {\"completion_tokens\": 0, \"content\": \"我现在有点累了，等会再来吧\"}\n            if isinstance(e, RateLimitError):\n                logger.warn(\"[QWEN] RateLimitError: {}\".format(e))\n                result[\"content\"] = \"提问太快啦，请休息一下再问我吧\"\n                if need_retry:\n                    time.sleep(20)\n            elif isinstance(e, Timeout):\n                logger.warn(\"[QWEN] Timeout: {}\".format(e))\n                result[\"content\"] = \"我没有收到你的消息\"\n                if need_retry:\n                    time.sleep(5)\n            elif isinstance(e, APIError):\n                logger.warn(\"[QWEN] Bad Gateway: {}\".format(e))\n                result[\"content\"] = \"请再问我一次\"\n                if need_retry:\n                    time.sleep(10)\n            elif isinstance(e, APIConnectionError):\n                logger.warn(\"[QWEN] APIConnectionError: {}\".format(e))\n                need_retry = False\n                result[\"content\"] = \"我连接不到你的网络\"\n            else:\n                logger.exception(\"[QWEN] Exception: {}\".format(e))\n                need_retry = False\n                self.sessions.clear_session(session.session_id)\n\n            if need_retry:\n                logger.warn(\"[QWEN] 第{}次重试\".format(retry_count + 1))\n                return self.reply_text(session, retry_count + 1)\n            else:\n                return result\n\n    def set_api_key(self):\n        api_key, expired_time = self.api_key_client().create_token(agent_key=self.agent_key())\n        broadscope_bailian.api_key = api_key\n        return expired_time\n\n    def update_api_key_if_expired(self):\n        if time.time() > self.api_key_expired_time:\n            self.api_key_expired_time = self.set_api_key()\n\n    def convert_messages_format(self, messages) -> Tuple[str, List[ChatQaMessage]]:\n        history = []\n        user_content = ''\n        assistant_content = ''\n        system_content = ''\n        for message in messages:\n            role = message.get('role')\n            if role == 'user':\n                user_content += message.get('content')\n            elif role == 'assistant':\n                assistant_content = message.get('content')\n             
   history.append(ChatQaMessage(user_content, assistant_content))\n                user_content = ''\n                assistant_content = ''\n            elif role =='system':\n                system_content += message.get('content')\n        if user_content == '':\n            raise Exception('no user message')\n        if system_content != '':\n            # NOTE 模拟系统消息，测试发现人格描述以\"你需要扮演ChatGPT\"开头能够起作用，而以\"你是ChatGPT\"开头模型会直接否认\n            system_qa = ChatQaMessage(system_content, '好的，我会严格按照你的设定回答问题')\n            history.insert(0, system_qa)\n        logger.debug(\"[QWEN] converted qa messages: {}\".format([item.to_dict() for item in history]))\n        logger.debug(\"[QWEN] user content as prompt: {}\".format(user_content))\n        return user_content, history\n\n    def get_completion_content(self, response, node_id):\n        if not response['Success']:\n            return f\"[ERROR]\\n{response['Code']}:{response['Message']}\"\n        text = response['Data']['Text']\n        if node_id == '':\n            return text\n        # TODO: 当使用流程编排创建大模型应用时，响应结构如下，最终结果在['finalResult'][node_id]['response']['text']中，暂时先这么写\n        # {\n        #     'Success': True,\n        #     'Code': None,\n        #     'Message': None,\n        #     'Data': {\n        #         'ResponseId': '9822f38dbacf4c9b8daf5ca03a2daf15',\n        #         'SessionId': 'session_id',\n        #         'Text': '{\"finalResult\":{\"LLM_T7islK\":{\"params\":{\"modelId\":\"qwen-plus-v1\",\"prompt\":\"${systemVars.query}${bizVars.Text}\"},\"response\":{\"text\":\"作为一个AI语言模型，我没有年龄，因为我没有生日。\\n我只是一个程序，没有生命和身体。\"}}}}',\n        #         'Thoughts': [],\n        #         'Debug': {},\n        #         'DocReferences': []\n        #     },\n        #     'RequestId': '8e11d31551ce4c3f83f49e6e0dd998b0',\n        #     'Failed': None\n        # }\n        text_dict = json.loads(text)\n        completion_content =  text_dict['finalResult'][node_id]['response']['text']\n        return completion_content\n\n    def calc_tokens(self, messages, completion_content):\n        completion_tokens = len(completion_content)\n        prompt_tokens = 0\n        for message in messages:\n            prompt_tokens += len(message[\"content\"])\n        return completion_tokens, prompt_tokens + completion_tokens\n"
  },
  {
    "path": "models/ali/ali_qwen_session.py",
    "content": "from models.session_manager import Session\nfrom common.log import logger\n\n\"\"\"\n    e.g.\n    [\n        {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n        {\"role\": \"user\", \"content\": \"Who won the world series in 2020?\"},\n        {\"role\": \"assistant\", \"content\": \"The Los Angeles Dodgers won the World Series in 2020.\"},\n        {\"role\": \"user\", \"content\": \"Where was it played?\"}\n    ]\n\"\"\"\n\nclass AliQwenSession(Session):\n    def __init__(self, session_id, system_prompt=None, model=\"qianwen\"):\n        super().__init__(session_id, system_prompt)\n        self.model = model\n        self.reset()\n\n    def discard_exceeding(self, max_tokens, cur_tokens=None):\n        precise = True\n        try:\n            cur_tokens = self.calc_tokens()\n        except Exception as e:\n            precise = False\n            if cur_tokens is None:\n                raise e\n            logger.debug(\"Exception when counting tokens precisely for query: {}\".format(e))\n        while cur_tokens > max_tokens:\n            if len(self.messages) > 2:\n                self.messages.pop(1)\n            elif len(self.messages) == 2 and self.messages[1][\"role\"] == \"assistant\":\n                self.messages.pop(1)\n                if precise:\n                    cur_tokens = self.calc_tokens()\n                else:\n                    cur_tokens = cur_tokens - max_tokens\n                break\n            elif len(self.messages) == 2 and self.messages[1][\"role\"] == \"user\":\n                logger.warn(\"user message exceed max_tokens. total_tokens={}\".format(cur_tokens))\n                break\n            else:\n                logger.debug(\"max_tokens={}, total_tokens={}, len(messages)={}\".format(max_tokens, cur_tokens, len(self.messages)))\n                break\n            if precise:\n                cur_tokens = self.calc_tokens()\n            else:\n                cur_tokens = cur_tokens - max_tokens\n        return cur_tokens\n\n    def calc_tokens(self):\n        return num_tokens_from_messages(self.messages, self.model)\n\ndef num_tokens_from_messages(messages, model):\n    \"\"\"Returns the number of tokens used by a list of messages.\"\"\"\n    # 官方token计算规则：\"对于中文文本来说，1个token通常对应一个汉字；对于英文文本来说，1个token通常对应3至4个字母或1个单词\"\n    # 详情请产看文档：https://help.aliyun.com/document_detail/2586397.html\n    # 目前根据字符串长度粗略估计token数，不影响正常使用\n    tokens = 0\n    for msg in messages:\n        tokens += len(msg[\"content\"])\n    return tokens\n"
  },
  {
    "path": "models/baidu/baidu_unit_bot.py",
    "content": "# encoding:utf-8\n\nimport requests\n\nfrom models.bot import Bot\nfrom bridge.reply import Reply, ReplyType\n\n\n# Baidu Unit对话接口 (可用, 但能力较弱)\nclass BaiduUnitBot(Bot):\n    def reply(self, query, context=None):\n        token = self.get_token()\n        url = \"https://aip.baidubce.com/rpc/2.0/unit/service/v3/chat?access_token=\" + token\n        post_data = (\n            '{\"version\":\"3.0\",\"service_id\":\"S73177\",\"session_id\":\"\",\"log_id\":\"7758521\",\"skill_ids\":[\"1221886\"],\"request\":{\"terminal_id\":\"88888\",\"query\":\"'\n            + query\n            + '\", \"hyper_params\": {\"chat_custom_bot_profile\": 1}}}'\n        )\n        print(post_data)\n        headers = {\"content-type\": \"application/x-www-form-urlencoded\"}\n        response = requests.post(url, data=post_data.encode(), headers=headers)\n        if response:\n            reply = Reply(\n                ReplyType.TEXT,\n                response.json()[\"result\"][\"context\"][\"SYS_PRESUMED_HIST\"][1],\n            )\n            return reply\n\n    def get_token(self):\n        access_key = \"YOUR_ACCESS_KEY\"\n        secret_key = \"YOUR_SECRET_KEY\"\n        host = \"https://aip.baidubce.com/oauth/2.0/token?grant_type=client_credentials&client_id=\" + access_key + \"&client_secret=\" + secret_key\n        response = requests.get(host)\n        if response:\n            print(response.json())\n            return response.json()[\"access_token\"]\n"
  },
  {
    "path": "models/baidu/baidu_wenxin.py",
    "content": "# encoding:utf-8\n\nimport requests\nimport json\nfrom common import const\nfrom models.bot import Bot\nfrom models.session_manager import SessionManager\nfrom bridge.context import ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom config import conf\nfrom models.baidu.baidu_wenxin_session import BaiduWenxinSession\n\nBAIDU_API_KEY = conf().get(\"baidu_wenxin_api_key\")\nBAIDU_SECRET_KEY = conf().get(\"baidu_wenxin_secret_key\")\n\nclass BaiduWenxinBot(Bot):\n\n    def __init__(self):\n        super().__init__()\n        wenxin_model = conf().get(\"baidu_wenxin_model\")\n        self.prompt_enabled = conf().get(\"baidu_wenxin_prompt_enabled\")\n        if self.prompt_enabled:\n            self.prompt = conf().get(\"character_desc\", \"\")\n            if self.prompt == \"\":\n                logger.warn(\"[BAIDU] Although you enabled model prompt, character_desc is not specified.\")\n        if wenxin_model is not None:\n            wenxin_model = conf().get(\"baidu_wenxin_model\") or \"eb-instant\"\n        else:\n            if conf().get(\"model\") and conf().get(\"model\") == const.WEN_XIN:\n                wenxin_model = \"completions\"\n            elif conf().get(\"model\") and conf().get(\"model\") == const.WEN_XIN_4:\n                wenxin_model = \"completions_pro\"\n\n        self.sessions = SessionManager(BaiduWenxinSession, model=wenxin_model)\n\n    def reply(self, query, context=None):\n        # acquire reply content\n        if context and context.type:\n            if context.type == ContextType.TEXT:\n                logger.info(\"[BAIDU] query={}\".format(query))\n                session_id = context[\"session_id\"]\n                reply = None\n                if query == \"#清除记忆\":\n                    self.sessions.clear_session(session_id)\n                    reply = Reply(ReplyType.INFO, \"记忆已清除\")\n                elif query == \"#清除所有\":\n                    self.sessions.clear_all_session()\n                    reply = Reply(ReplyType.INFO, \"所有人记忆已清除\")\n                else:\n                    session = self.sessions.session_query(query, session_id)\n                    result = self.reply_text(session)\n                    total_tokens, completion_tokens, reply_content = (\n                        result[\"total_tokens\"],\n                        result[\"completion_tokens\"],\n                        result[\"content\"],\n                    )\n                    logger.debug(\n                        \"[BAIDU] new_query={}, session_id={}, reply_cont={}, completion_tokens={}\".format(session.messages, session_id, reply_content, completion_tokens)\n                    )\n\n                    if total_tokens == 0:\n                        reply = Reply(ReplyType.ERROR, reply_content)\n                    else:\n                        self.sessions.session_reply(reply_content, session_id, total_tokens)\n                        reply = Reply(ReplyType.TEXT, reply_content)\n                return reply\n            elif context.type == ContextType.IMAGE_CREATE:\n                ok, retstring = self.create_img(query, 0)\n                reply = None\n                if ok:\n                    reply = Reply(ReplyType.IMAGE_URL, retstring)\n                else:\n                    reply = Reply(ReplyType.ERROR, retstring)\n                return reply\n\n    def reply_text(self, session: BaiduWenxinSession, retry_count=0):\n        try:\n            logger.info(\"[BAIDU] 
model={}\".format(session.model))\n            access_token = self.get_access_token()\n            if access_token == 'None':\n                logger.warn(\"[BAIDU] access token 获取失败\")\n                return {\n                    \"total_tokens\": 0,\n                    \"completion_tokens\": 0,\n                    \"content\": 0,\n                    }\n            url = \"https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/\" + session.model + \"?access_token=\" + access_token\n            headers = {\n                'Content-Type': 'application/json'\n            }\n            payload = {'messages': session.messages, 'system': self.prompt} if self.prompt_enabled else {'messages': session.messages}\n            response = requests.request(\"POST\", url, headers=headers, data=json.dumps(payload))\n            response_text = json.loads(response.text)\n            logger.info(f\"[BAIDU] response text={response_text}\")\n            res_content = response_text[\"result\"]\n            total_tokens = response_text[\"usage\"][\"total_tokens\"]\n            completion_tokens = response_text[\"usage\"][\"completion_tokens\"]\n            logger.info(\"[BAIDU] reply={}\".format(res_content))\n            return {\n                \"total_tokens\": total_tokens,\n                \"completion_tokens\": completion_tokens,\n                \"content\": res_content,\n            }\n        except Exception as e:\n            need_retry = retry_count < 2\n            logger.warn(\"[BAIDU] Exception: {}\".format(e))\n            need_retry = False\n            self.sessions.clear_session(session.session_id)\n            result = {\"total_tokens\": 0, \"completion_tokens\": 0, \"content\": \"出错了: {}\".format(e)}\n            return result\n\n    def get_access_token(self):\n        \"\"\"\n        使用 AK，SK 生成鉴权签名（Access Token）\n        :return: access_token，或是None(如果错误)\n        \"\"\"\n        url = \"https://aip.baidubce.com/oauth/2.0/token\"\n        params = {\"grant_type\": \"client_credentials\", \"client_id\": BAIDU_API_KEY, \"client_secret\": BAIDU_SECRET_KEY}\n        return str(requests.post(url, params=params).json().get(\"access_token\"))\n"
  },
  {
    "path": "models/baidu/baidu_wenxin_session.py",
    "content": "from models.session_manager import Session\nfrom common.log import logger\n\n\"\"\"\n    e.g.  [\n        {\"role\": \"user\", \"content\": \"Who won the world series in 2020?\"},\n        {\"role\": \"assistant\", \"content\": \"The Los Angeles Dodgers won the World Series in 2020.\"},\n        {\"role\": \"user\", \"content\": \"Where was it played?\"}\n    ]\n\"\"\"\n\n\nclass BaiduWenxinSession(Session):\n    def __init__(self, session_id, system_prompt=None, model=\"gpt-3.5-turbo\"):\n        super().__init__(session_id, system_prompt)\n        self.model = model\n        # 百度文心不支持system prompt\n        # self.reset()\n\n    def discard_exceeding(self, max_tokens, cur_tokens=None):\n        precise = True\n        try:\n            cur_tokens = self.calc_tokens()\n        except Exception as e:\n            precise = False\n            if cur_tokens is None:\n                raise e\n            logger.debug(\"Exception when counting tokens precisely for query: {}\".format(e))\n        while cur_tokens > max_tokens:\n            if len(self.messages) >= 2:\n                self.messages.pop(0)\n                self.messages.pop(0)\n            else:\n                logger.debug(\"max_tokens={}, total_tokens={}, len(messages)={}\".format(max_tokens, cur_tokens, len(self.messages)))\n                break\n            if precise:\n                cur_tokens = self.calc_tokens()\n            else:\n                cur_tokens = cur_tokens - max_tokens\n        return cur_tokens\n\n    def calc_tokens(self):\n        return num_tokens_from_messages(self.messages, self.model)\n\n\ndef num_tokens_from_messages(messages, model):\n    \"\"\"Returns the number of tokens used by a list of messages.\"\"\"\n    tokens = 0\n    for msg in messages:\n        # 官方token计算规则暂不明确： \"大约为 token数为 \"中文字 + 其他语种单词数 x 1.3\"\n        # 这里先直接根据字数粗略估算吧，暂不影响正常使用，仅在判断是否丢弃历史会话的时候会有偏差\n        tokens += len(msg[\"content\"])\n    return tokens\n"
  },
  {
    "path": "models/bot.py",
    "content": "\"\"\"\nAuto-replay chat robot abstract class\n\"\"\"\n\n\nfrom bridge.context import Context\nfrom bridge.reply import Reply\n\n\nclass Bot(object):\n    def reply(self, query, context: Context = None) -> Reply:\n        \"\"\"\n        bot auto-reply content\n        :param req: received message\n        :return: reply content\n        \"\"\"\n        raise NotImplementedError\n"
  },
  {
    "path": "models/bot_factory.py",
    "content": "\"\"\"\nchannel factory\n\"\"\"\nfrom common import const\n\n\ndef create_bot(bot_type):\n    \"\"\"\n    create a bot_type instance\n    :param bot_type: bot type code\n    :return: bot instance\n    \"\"\"\n    if bot_type == const.BAIDU:\n        # 替换Baidu Unit为Baidu文心千帆对话接口\n        # from models.baidu.baidu_unit_bot import BaiduUnitBot\n        # return BaiduUnitBot()\n        from models.baidu.baidu_wenxin import BaiduWenxinBot\n        return BaiduWenxinBot()\n\n    elif bot_type in (const.OPENAI, const.CHATGPT, const.DEEPSEEK):  # OpenAI-compatible API\n        from models.chatgpt.chat_gpt_bot import ChatGPTBot\n        return ChatGPTBot()\n\n    elif bot_type == const.OPEN_AI:\n        # OpenAI 官方对话模型API\n        from models.openai.open_ai_bot import OpenAIBot\n        return OpenAIBot()\n\n    elif bot_type == const.CHATGPTONAZURE:\n        # Azure chatgpt service https://azure.microsoft.com/en-in/products/cognitive-services/openai-service/\n        from models.chatgpt.chat_gpt_bot import AzureChatGPTBot\n        return AzureChatGPTBot()\n\n    elif bot_type == const.XUNFEI:\n        from models.xunfei.xunfei_spark_bot import XunFeiBot\n        return XunFeiBot()\n\n    elif bot_type == const.LINKAI:\n        from models.linkai.link_ai_bot import LinkAIBot\n        return LinkAIBot()\n\n    elif bot_type == const.CLAUDEAPI:\n        from models.claudeapi.claude_api_bot import ClaudeAPIBot\n        return ClaudeAPIBot()\n    elif bot_type == const.QWEN:\n        from models.ali.ali_qwen_bot import AliQwenBot\n        return AliQwenBot()\n    elif bot_type == const.QWEN_DASHSCOPE:\n        from models.dashscope.dashscope_bot import DashscopeBot\n        return DashscopeBot()\n    elif bot_type == const.GEMINI:\n        from models.gemini.google_gemini_bot import GoogleGeminiBot\n        return GoogleGeminiBot()\n\n    elif bot_type == const.ZHIPU_AI or bot_type == \"glm-4\":  # \"glm-4\" kept for backward compatibility\n        from models.zhipuai.zhipuai_bot import ZHIPUAIBot\n        return ZHIPUAIBot()\n\n    elif bot_type == const.MOONSHOT:\n        from models.moonshot.moonshot_bot import MoonshotBot\n        return MoonshotBot()\n    \n    elif bot_type == const.MiniMax:\n        from models.minimax.minimax_bot import MinimaxBot\n        return MinimaxBot()\n\n    elif bot_type == const.MODELSCOPE:\n        from models.modelscope.modelscope_bot import ModelScopeBot\n        return ModelScopeBot()\n\n    elif bot_type == const.DOUBAO:\n        from models.doubao.doubao_bot import DoubaoBot\n        return DoubaoBot()\n\n    raise RuntimeError\n"
  },
  {
    "path": "models/chatgpt/chat_gpt_bot.py",
    "content": "# encoding:utf-8\n\nimport time\nimport json\n\nimport openai\nfrom models.openai.openai_compat import error as openai_error, RateLimitError, Timeout, APIError, APIConnectionError\nimport requests\nfrom common import const\nfrom models.bot import Bot\nfrom models.openai_compatible_bot import OpenAICompatibleBot\nfrom models.chatgpt.chat_gpt_session import ChatGPTSession\nfrom models.openai.open_ai_image import OpenAIImage\nfrom models.session_manager import SessionManager\nfrom bridge.context import ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom common.token_bucket import TokenBucket\nfrom config import conf, load_config\nfrom models.baidu.baidu_wenxin_session import BaiduWenxinSession\n\n# OpenAI对话模型API (可用)\nclass ChatGPTBot(Bot, OpenAIImage, OpenAICompatibleBot):\n    def __init__(self):\n        super().__init__()\n        # set the default api_key\n        openai.api_key = conf().get(\"open_ai_api_key\")\n        if conf().get(\"open_ai_api_base\"):\n            openai.api_base = conf().get(\"open_ai_api_base\")\n        proxy = conf().get(\"proxy\")\n        if proxy:\n            openai.proxy = proxy\n        if conf().get(\"rate_limit_chatgpt\"):\n            self.tb4chatgpt = TokenBucket(conf().get(\"rate_limit_chatgpt\", 20))\n        conf_model = conf().get(\"model\") or \"gpt-3.5-turbo\"\n        self.sessions = SessionManager(ChatGPTSession, model=conf().get(\"model\") or \"gpt-3.5-turbo\")\n        # o1相关模型不支持system prompt，暂时用文心模型的session\n\n        self.args = {\n            \"model\": conf_model,  # 对话模型的名称\n            \"temperature\": conf().get(\"temperature\", 0.9),  # 值在[0,1]之间，越大表示回复越具有不确定性\n            # \"max_tokens\":4096,  # 回复最大的字符数\n            \"top_p\": conf().get(\"top_p\", 1),\n            \"frequency_penalty\": conf().get(\"frequency_penalty\", 0.0),  # [-2,2]之间，该值越大则更倾向于产生不同的内容\n            \"presence_penalty\": conf().get(\"presence_penalty\", 0.0),  # [-2,2]之间，该值越大则更倾向于产生不同的内容\n            \"request_timeout\": conf().get(\"request_timeout\", None),  # 请求超时时间，openai接口默认设置为600，对于难问题一般需要较长时间\n            \"timeout\": conf().get(\"request_timeout\", None),  # 重试超时时间，在这个时间内，将会自动重试\n        }\n        # 部分模型暂不支持一些参数，特殊处理\n        if conf_model in [const.O1, const.O1_MINI, const.GPT_5, const.GPT_5_MINI, const.GPT_5_NANO]:\n            remove_keys = [\"temperature\", \"top_p\", \"frequency_penalty\", \"presence_penalty\"]\n            for key in remove_keys:\n                self.args.pop(key, None)  # 如果键不存在，使用 None 来避免抛出错、\n            if conf_model in [const.O1, const.O1_MINI]:  # o1系列模型不支持系统提示词，使用文心模型的session\n                self.sessions = SessionManager(BaiduWenxinSession, model=conf().get(\"model\") or const.O1_MINI)\n\n    def get_api_config(self):\n        \"\"\"Get API configuration for OpenAI-compatible base class\"\"\"\n        return {\n            'api_key': conf().get(\"open_ai_api_key\"),\n            'api_base': conf().get(\"open_ai_api_base\"),\n            'model': conf().get(\"model\", \"gpt-3.5-turbo\"),\n            'default_temperature': conf().get(\"temperature\", 0.9),\n            'default_top_p': conf().get(\"top_p\", 1.0),\n            'default_frequency_penalty': conf().get(\"frequency_penalty\", 0.0),\n            'default_presence_penalty': conf().get(\"presence_penalty\", 0.0),\n        }\n    \n    def reply(self, query, context=None):\n        # acquire reply content\n        if context.type == ContextType.TEXT:\n            logger.info(\"[CHATGPT] 
query={}\".format(query))\n\n            session_id = context[\"session_id\"]\n            reply = None\n            clear_memory_commands = conf().get(\"clear_memory_commands\", [\"#清除记忆\"])\n            if query in clear_memory_commands:\n                self.sessions.clear_session(session_id)\n                reply = Reply(ReplyType.INFO, \"记忆已清除\")\n            elif query == \"#清除所有\":\n                self.sessions.clear_all_session()\n                reply = Reply(ReplyType.INFO, \"所有人记忆已清除\")\n            elif query == \"#更新配置\":\n                load_config()\n                reply = Reply(ReplyType.INFO, \"配置已更新\")\n            if reply:\n                return reply\n            session = self.sessions.session_query(query, session_id)\n            logger.debug(\"[CHATGPT] session query={}\".format(session.messages))\n\n            api_key = context.get(\"openai_api_key\")\n            model = context.get(\"gpt_model\")\n            new_args = None\n            if model:\n                new_args = self.args.copy()\n                new_args[\"model\"] = model\n            # if context.get('stream'):\n            #     # reply in stream\n            #     return self.reply_text_stream(query, new_query, session_id)\n\n            reply_content = self.reply_text(session, api_key, args=new_args)\n            logger.debug(\n                \"[CHATGPT] new_query={}, session_id={}, reply_cont={}, completion_tokens={}\".format(\n                    session.messages,\n                    session_id,\n                    reply_content[\"content\"],\n                    reply_content[\"completion_tokens\"],\n                )\n            )\n            if reply_content[\"completion_tokens\"] == 0 and len(reply_content[\"content\"]) > 0:\n                reply = Reply(ReplyType.ERROR, reply_content[\"content\"])\n            elif reply_content[\"completion_tokens\"] > 0:\n                self.sessions.session_reply(reply_content[\"content\"], session_id, reply_content[\"total_tokens\"])\n                reply = Reply(ReplyType.TEXT, reply_content[\"content\"])\n            else:\n                reply = Reply(ReplyType.ERROR, reply_content[\"content\"])\n                logger.debug(\"[CHATGPT] reply {} used 0 tokens.\".format(reply_content))\n            return reply\n\n        elif context.type == ContextType.IMAGE_CREATE:\n            ok, retstring = self.create_img(query, 0)\n            reply = None\n            if ok:\n                reply = Reply(ReplyType.IMAGE_URL, retstring)\n            else:\n                reply = Reply(ReplyType.ERROR, retstring)\n            return reply\n        elif context.type == ContextType.IMAGE:\n            logger.info(\"[CHATGPT] Image message received\")\n            reply = self.reply_image(context)\n            return reply\n        else:\n            reply = Reply(ReplyType.ERROR, \"Bot不支持处理{}类型的消息\".format(context.type))\n            return reply\n\n    def reply_image(self, context):\n        \"\"\"\n        Process image message using OpenAI Vision API\n        \"\"\"\n        import base64\n        import os\n        \n        try:\n            image_path = context.content\n            logger.info(f\"[CHATGPT] Processing image: {image_path}\")\n            \n            # Check if file exists\n            if not os.path.exists(image_path):\n                logger.error(f\"[CHATGPT] Image file not found: {image_path}\")\n                return Reply(ReplyType.ERROR, \"图片文件不存在\")\n            \n            # Read and encode image\n            
with open(image_path, \"rb\") as f:\n                image_data = f.read()\n                image_base64 = base64.b64encode(image_data).decode(\"utf-8\")\n            \n            # Detect image format\n            extension = os.path.splitext(image_path)[1].lower()\n            mime_type_map = {\n                \".jpg\": \"image/jpeg\",\n                \".jpeg\": \"image/jpeg\", \n                \".png\": \"image/png\",\n                \".gif\": \"image/gif\",\n                \".webp\": \"image/webp\"\n            }\n            mime_type = mime_type_map.get(extension, \"image/jpeg\")\n            \n            # Get model and API config\n            model = context.get(\"gpt_model\") or conf().get(\"model\", \"gpt-4o\")\n            api_key = context.get(\"openai_api_key\") or conf().get(\"open_ai_api_key\")\n            api_base = conf().get(\"open_ai_api_base\")\n            \n            # Build vision request\n            messages = [\n                {\n                    \"role\": \"user\",\n                    \"content\": [\n                        {\"type\": \"text\", \"text\": \"请描述这张图片的内容\"},\n                        {\n                            \"type\": \"image_url\",\n                            \"image_url\": {\n                                \"url\": f\"data:{mime_type};base64,{image_base64}\"\n                            }\n                        }\n                    ]\n                }\n            ]\n            \n            logger.info(f\"[CHATGPT] Calling vision API with model: {model}\")\n            \n            # Call OpenAI API\n            kwargs = {\n                \"model\": model,\n                \"messages\": messages,\n                \"max_tokens\": 1000\n            }\n            if api_key:\n                kwargs[\"api_key\"] = api_key\n            if api_base:\n                kwargs[\"api_base\"] = api_base\n            \n            response = openai.ChatCompletion.create(**kwargs)\n            \n            content = response.choices[0][\"message\"][\"content\"]\n            logger.info(f\"[CHATGPT] Vision API response: {content[:100]}...\")\n            \n            # Clean up temp file\n            try:\n                os.remove(image_path)\n                logger.debug(f\"[CHATGPT] Removed temp image file: {image_path}\")\n            except Exception:\n                pass\n            \n            return Reply(ReplyType.TEXT, content)\n            \n        except Exception as e:\n            logger.error(f\"[CHATGPT] Image processing error: {e}\")\n            import traceback\n            logger.error(traceback.format_exc())\n            return Reply(ReplyType.ERROR, f\"图片识别失败: {str(e)}\")\n\n    def reply_text(self, session: ChatGPTSession, api_key=None, args=None, retry_count=0) -> dict:\n        \"\"\"\n        call openai's ChatCompletion to get the answer\n        :param session: a conversation session\n        :param session_id: session id\n        :param retry_count: retry count\n        :return: {}\n        \"\"\"\n        try:\n            if conf().get(\"rate_limit_chatgpt\") and not self.tb4chatgpt.get_token():\n                raise RateLimitError(\"RateLimitError: rate limit exceeded\")\n            # if api_key == None, the default openai.api_key will be used\n            if args is None:\n                args = self.args\n            response = openai.ChatCompletion.create(api_key=api_key, messages=session.messages, **args)\n            # logger.debug(\"[CHATGPT] response={}\".format(response))\n           
 logger.info(\"[ChatGPT] reply={}, total_tokens={}\".format(response.choices[0]['message']['content'], response[\"usage\"][\"total_tokens\"]))\n            return {\n                \"total_tokens\": response[\"usage\"][\"total_tokens\"],\n                \"completion_tokens\": response[\"usage\"][\"completion_tokens\"],\n                \"content\": response.choices[0][\"message\"][\"content\"],\n            }\n        except Exception as e:\n            need_retry = retry_count < 2\n            result = {\"completion_tokens\": 0, \"content\": \"我现在有点累了，等会再来吧\"}\n            if isinstance(e, RateLimitError):\n                logger.warn(\"[CHATGPT] RateLimitError: {}\".format(e))\n                result[\"content\"] = \"提问太快啦，请休息一下再问我吧\"\n                if need_retry:\n                    time.sleep(20)\n            elif isinstance(e, Timeout):\n                logger.warn(\"[CHATGPT] Timeout: {}\".format(e))\n                result[\"content\"] = \"我没有收到你的消息\"\n                if need_retry:\n                    time.sleep(5)\n            elif isinstance(e, APIError):\n                logger.warn(\"[CHATGPT] Bad Gateway: {}\".format(e))\n                result[\"content\"] = \"请再问我一次\"\n                if need_retry:\n                    time.sleep(10)\n            elif isinstance(e, APIConnectionError):\n                logger.warn(\"[CHATGPT] APIConnectionError: {}\".format(e))\n                result[\"content\"] = \"我连接不到你的网络\"\n                if need_retry:\n                    time.sleep(5)\n            else:\n                logger.exception(\"[CHATGPT] Exception: {}\".format(e))\n                need_retry = False\n                self.sessions.clear_session(session.session_id)\n\n            if need_retry:\n                logger.warn(\"[CHATGPT] 第{}次重试\".format(retry_count + 1))\n                return self.reply_text(session, api_key, args, retry_count + 1)\n            else:\n                return result\n\nclass AzureChatGPTBot(ChatGPTBot):\n    def __init__(self):\n        super().__init__()\n        openai.api_type = \"azure\"\n        openai.api_version = conf().get(\"azure_api_version\", \"2023-06-01-preview\")\n        self.args[\"deployment_id\"] = conf().get(\"azure_deployment_id\")\n\n    def create_img(self, query, retry_count=0, api_key=None):\n        text_to_image_model = conf().get(\"text_to_image\")\n        if text_to_image_model == \"dall-e-2\":\n            api_version = \"2023-06-01-preview\"\n            endpoint = conf().get(\"azure_openai_dalle_api_base\",\"open_ai_api_base\")\n            # 检查endpoint是否以/结尾\n            if not endpoint.endswith(\"/\"):\n                endpoint = endpoint + \"/\"\n            url = \"{}openai/images/generations:submit?api-version={}\".format(endpoint, api_version)\n            api_key = conf().get(\"azure_openai_dalle_api_key\",\"open_ai_api_key\")\n            headers = {\"api-key\": api_key, \"Content-Type\": \"application/json\"}\n            try:\n                body = {\"prompt\": query, \"size\": conf().get(\"image_create_size\", \"256x256\"),\"n\": 1}\n                submission = requests.post(url, headers=headers, json=body)\n                operation_location = submission.headers['operation-location']\n                status = \"\"\n                while (status != \"succeeded\"):\n                    if retry_count > 3:\n                        return False, \"图片生成失败\"\n                    response = requests.get(operation_location, headers=headers)\n                    status = 
response.json()['status']\n                    retry_count += 1\n                image_url = response.json()['result']['data'][0]['url']\n                return True, image_url\n            except Exception as e:\n                logger.error(\"create image error: {}\".format(e))\n                return False, \"图片生成失败\"\n        elif text_to_image_model == \"dall-e-3\":\n            api_version = conf().get(\"azure_api_version\", \"2024-02-15-preview\")\n            endpoint = conf().get(\"azure_openai_dalle_api_base\",\"open_ai_api_base\")\n            # 检查endpoint是否以/结尾\n            if not endpoint.endswith(\"/\"):\n                endpoint = endpoint + \"/\"\n            url = \"{}openai/deployments/{}/images/generations?api-version={}\".format(endpoint, conf().get(\"azure_openai_dalle_deployment_id\",\"text_to_image\"),api_version)\n            api_key = conf().get(\"azure_openai_dalle_api_key\",\"open_ai_api_key\")\n            headers = {\"api-key\": api_key, \"Content-Type\": \"application/json\"}\n            try:\n                body = {\"prompt\": query, \"size\": conf().get(\"image_create_size\", \"1024x1024\"), \"quality\": conf().get(\"dalle3_image_quality\", \"standard\")}\n                response = requests.post(url, headers=headers, json=body)\n                response.raise_for_status()  # 检查请求是否成功\n                data = response.json()\n\n                # 检查响应中是否包含图像 URL\n                if 'data' in data and len(data['data']) > 0 and 'url' in data['data'][0]:\n                    image_url = data['data'][0]['url']\n                    return True, image_url\n                else:\n                    error_message = \"响应中没有图像 URL\"\n                    logger.error(error_message)\n                    return False, \"图片生成失败\"\n\n            except requests.exceptions.RequestException as e:\n                # 捕获所有请求相关的异常\n                try:\n                    error_detail = response.json().get('error', {}).get('message', str(e))\n                except ValueError:\n                    error_detail = str(e)\n                error_message = f\"{error_detail}\"\n                logger.error(error_message)\n                return False, error_message\n\n            except Exception as e:\n                # 捕获所有其他异常\n                error_message = f\"生成图像时发生错误: {e}\"\n                logger.error(error_message)\n                return False, \"图片生成失败\"\n        else:\n            return False, \"图片生成失败，未配置text_to_image参数\"\n"
  },
  {
    "path": "models/chatgpt/chat_gpt_session.py",
    "content": "from models.session_manager import Session\nfrom common.log import logger\nfrom common import const\n\n\"\"\"\n    e.g.  [\n        {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n        {\"role\": \"user\", \"content\": \"Who won the world series in 2020?\"},\n        {\"role\": \"assistant\", \"content\": \"The Los Angeles Dodgers won the World Series in 2020.\"},\n        {\"role\": \"user\", \"content\": \"Where was it played?\"}\n    ]\n\"\"\"\n\n\nclass ChatGPTSession(Session):\n    def __init__(self, session_id, system_prompt=None, model=\"gpt-3.5-turbo\"):\n        super().__init__(session_id, system_prompt)\n        self.model = model\n        self.reset()\n\n    def discard_exceeding(self, max_tokens, cur_tokens=None):\n        precise = True\n        try:\n            cur_tokens = self.calc_tokens()\n        except Exception as e:\n            precise = False\n            if cur_tokens is None:\n                raise e\n            logger.debug(\"Exception when counting tokens precisely for query: {}\".format(e))\n        while cur_tokens > max_tokens:\n            if len(self.messages) > 2:\n                self.messages.pop(1)\n            elif len(self.messages) == 2 and self.messages[1][\"role\"] == \"assistant\":\n                self.messages.pop(1)\n                if precise:\n                    cur_tokens = self.calc_tokens()\n                else:\n                    cur_tokens = cur_tokens - max_tokens\n                break\n            elif len(self.messages) == 2 and self.messages[1][\"role\"] == \"user\":\n                logger.warn(\"user message exceed max_tokens. total_tokens={}\".format(cur_tokens))\n                break\n            else:\n                logger.debug(\"max_tokens={}, total_tokens={}, len(messages)={}\".format(max_tokens, cur_tokens, len(self.messages)))\n                break\n            if precise:\n                cur_tokens = self.calc_tokens()\n            else:\n                cur_tokens = cur_tokens - max_tokens\n        return cur_tokens\n\n    def calc_tokens(self):\n        return num_tokens_from_messages(self.messages, self.model)\n\n\n# refer to https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb\ndef num_tokens_from_messages(messages, model):\n    \"\"\"Returns the number of tokens used by a list of messages.\"\"\"\n\n    if model in [\"wenxin\", \"xunfei\"] or model.startswith(const.GEMINI):\n        return num_tokens_by_character(messages)\n\n    import tiktoken\n\n    if model in [\"gpt-3.5-turbo-0301\", \"gpt-35-turbo\", \"gpt-3.5-turbo-1106\", \"moonshot\", const.LINKAI_35]:\n        return num_tokens_from_messages(messages, model=\"gpt-3.5-turbo\")\n    elif model in [\"gpt-4-0314\", \"gpt-4-0613\", \"gpt-4-32k\", \"gpt-4-32k-0613\", \"gpt-3.5-turbo-0613\",\n                   \"gpt-3.5-turbo-16k\", \"gpt-3.5-turbo-16k-0613\", \"gpt-35-turbo-16k\", \"gpt-4-turbo-preview\",\n                   \"gpt-4-1106-preview\", const.GPT4_TURBO_PREVIEW, const.GPT4_VISION_PREVIEW, const.GPT4_TURBO_01_25,\n                   const.GPT_4o, const.GPT_4O_0806, const.GPT_4o_MINI, const.LINKAI_4o, const.LINKAI_4_TURBO, const.GPT_5, const.GPT_5_MINI, const.GPT_5_NANO]:\n        return num_tokens_from_messages(messages, model=\"gpt-4\")\n    elif model.startswith(\"claude-3\"):\n        return num_tokens_from_messages(messages, model=\"gpt-3.5-turbo\")\n    try:\n        encoding = tiktoken.encoding_for_model(model)\n    except KeyError:\n        
logger.debug(\"Warning: model not found. Using cl100k_base encoding.\")\n        encoding = tiktoken.get_encoding(\"cl100k_base\")\n    if model == \"gpt-3.5-turbo\":\n        tokens_per_message = 4  # every message follows <|start|>{role/name}\\n{content}<|end|>\\n\n        tokens_per_name = -1  # if there's a name, the role is omitted\n    elif model == \"gpt-4\":\n        tokens_per_message = 3\n        tokens_per_name = 1\n    else:\n        logger.debug(f\"num_tokens_from_messages() is not implemented for model {model}. Returning num tokens assuming gpt-3.5-turbo.\")\n        return num_tokens_from_messages(messages, model=\"gpt-3.5-turbo\")\n    num_tokens = 0\n    for message in messages:\n        num_tokens += tokens_per_message\n        for key, value in message.items():\n            num_tokens += len(encoding.encode(value))\n            if key == \"name\":\n                num_tokens += tokens_per_name\n    num_tokens += 3  # every reply is primed with <|start|>assistant<|message|>\n    return num_tokens\n\n\ndef num_tokens_by_character(messages):\n    \"\"\"Returns the number of tokens used by a list of messages.\"\"\"\n    tokens = 0\n    for msg in messages:\n        tokens += len(msg[\"content\"])\n    return tokens\n"
  },
  {
    "path": "models/claudeapi/claude_api_bot.py",
    "content": "# encoding:utf-8\n\nimport json\nimport time\n\nimport requests\n\nfrom models.baidu.baidu_wenxin_session import BaiduWenxinSession\nfrom models.bot import Bot\nfrom models.session_manager import SessionManager\nfrom bridge.context import ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom common import const\nfrom common.log import logger\nfrom config import conf\n\n# Optional OpenAI image support\ntry:\n    from models.openai.open_ai_image import OpenAIImage\n    _openai_image_available = True\nexcept Exception as e:\n    logger.warning(f\"OpenAI image support not available: {e}\")\n    _openai_image_available = False\n    OpenAIImage = object  # Fallback to object\n\nuser_session = dict()\n\n\n# OpenAI对话模型API (可用)\nclass ClaudeAPIBot(Bot, OpenAIImage):\n    def __init__(self):\n        super().__init__()\n        self.sessions = SessionManager(BaiduWenxinSession, model=conf().get(\"model\") or \"text-davinci-003\")\n\n    @property\n    def api_key(self):\n        return conf().get(\"claude_api_key\")\n\n    @property\n    def api_base(self):\n        return conf().get(\"claude_api_base\") or \"https://api.anthropic.com/v1\"\n\n    @property\n    def proxy(self):\n        return conf().get(\"proxy\", None)\n\n    def reply(self, query, context=None):\n        # acquire reply content\n        if context and context.type:\n            if context.type == ContextType.TEXT:\n                logger.info(\"[CLAUDE_API] query={}\".format(query))\n                session_id = context[\"session_id\"]\n                reply = None\n                if query == \"#清除记忆\":\n                    self.sessions.clear_session(session_id)\n                    reply = Reply(ReplyType.INFO, \"记忆已清除\")\n                elif query == \"#清除所有\":\n                    self.sessions.clear_all_session()\n                    reply = Reply(ReplyType.INFO, \"所有人记忆已清除\")\n                else:\n                    session = self.sessions.session_query(query, session_id)\n                    result = self.reply_text(session)\n                    logger.info(result)\n                    total_tokens, completion_tokens, reply_content = (\n                        result[\"total_tokens\"],\n                        result[\"completion_tokens\"],\n                        result[\"content\"],\n                    )\n                    logger.debug(\n                        \"[CLAUDE_API] new_query={}, session_id={}, reply_cont={}, completion_tokens={}\".format(str(session), session_id, reply_content, completion_tokens)\n                    )\n\n                    if total_tokens == 0:\n                        reply = Reply(ReplyType.ERROR, reply_content)\n                    else:\n                        self.sessions.session_reply(reply_content, session_id, total_tokens)\n                        reply = Reply(ReplyType.TEXT, reply_content)\n                return reply\n            elif context.type == ContextType.IMAGE_CREATE:\n                ok, retstring = self.create_img(query, 0)\n                reply = None\n                if ok:\n                    reply = Reply(ReplyType.IMAGE_URL, retstring)\n                else:\n                    reply = Reply(ReplyType.ERROR, retstring)\n                return reply\n\n    def reply_text(self, session: BaiduWenxinSession, retry_count=0, tools=None):\n        try:\n            actual_model = self._model_mapping(conf().get(\"model\"))\n\n            # Prepare headers\n            headers = {\n                \"x-api-key\": self.api_key,\n           
     \"anthropic-version\": \"2023-06-01\",\n                \"content-type\": \"application/json\"\n            }\n\n            # Extract system prompt if present and prepare Claude-compatible messages\n            system_prompt = conf().get(\"character_desc\", \"\")\n            claude_messages = []\n\n            for msg in session.messages:\n                if msg.get(\"role\") == \"system\":\n                    system_prompt = msg[\"content\"]\n                else:\n                    claude_messages.append(msg)\n\n            # Prepare request data\n            data = {\n                \"model\": actual_model,\n                \"messages\": claude_messages,\n                \"max_tokens\": self._get_max_tokens(actual_model)\n            }\n\n            if system_prompt:\n                data[\"system\"] = system_prompt\n\n            if tools:\n                data[\"tools\"] = tools\n\n            # Make HTTP request\n            proxies = {\"http\": self.proxy, \"https\": self.proxy} if self.proxy else None\n            response = requests.post(\n                f\"{self.api_base}/messages\",\n                headers=headers,\n                json=data,\n                proxies=proxies\n            )\n\n            if response.status_code != 200:\n                raise Exception(f\"API request failed: {response.status_code} - {response.text}\")\n\n            claude_response = response.json()\n            # Handle response content and tool calls\n            res_content = \"\"\n            tool_calls = []\n\n            content_blocks = claude_response.get(\"content\", [])\n            for block in content_blocks:\n                if block.get(\"type\") == \"text\":\n                    res_content += block.get(\"text\", \"\")\n                elif block.get(\"type\") == \"tool_use\":\n                    tool_calls.append({\n                        \"id\": block.get(\"id\", \"\"),\n                        \"name\": block.get(\"name\", \"\"),\n                        \"arguments\": block.get(\"input\", {})\n                    })\n\n            res_content = res_content.strip().replace(\"<|endoftext|>\", \"\")\n            usage = claude_response.get(\"usage\", {})\n            total_tokens = usage.get(\"input_tokens\", 0) + usage.get(\"output_tokens\", 0)\n            completion_tokens = usage.get(\"output_tokens\", 0)\n\n            logger.info(\"[CLAUDE_API] reply={}\".format(res_content))\n            if tool_calls:\n                logger.info(\"[CLAUDE_API] tool_calls={}\".format(tool_calls))\n\n            result = {\n                \"total_tokens\": total_tokens,\n                \"completion_tokens\": completion_tokens,\n                \"content\": res_content,\n            }\n\n            if tool_calls:\n                result[\"tool_calls\"] = tool_calls\n\n            return result\n        except Exception as e:\n            need_retry = retry_count < 2\n            result = {\"total_tokens\": 0, \"completion_tokens\": 0, \"content\": \"我现在有点累了，等会再来吧\"}\n\n            # Handle different types of errors\n            error_str = str(e).lower()\n            if \"rate\" in error_str or \"limit\" in error_str:\n                logger.warn(\"[CLAUDE_API] RateLimitError: {}\".format(e))\n                result[\"content\"] = \"提问太快啦，请休息一下再问我吧\"\n                if need_retry:\n                    time.sleep(20)\n            elif \"timeout\" in error_str:\n                logger.warn(\"[CLAUDE_API] Timeout: {}\".format(e))\n                result[\"content\"] = 
\"我没有收到你的消息\"\n                if need_retry:\n                    time.sleep(5)\n            elif \"connection\" in error_str or \"network\" in error_str:\n                logger.warn(\"[CLAUDE_API] APIConnectionError: {}\".format(e))\n                need_retry = False\n                result[\"content\"] = \"我连接不到你的网络\"\n            else:\n                logger.warn(\"[CLAUDE_API] Exception: {}\".format(e))\n                need_retry = False\n                self.sessions.clear_session(session.session_id)\n\n            if need_retry:\n                logger.warn(\"[CLAUDE_API] 第{}次重试\".format(retry_count + 1))\n                return self.reply_text(session, retry_count + 1, tools)\n            else:\n                return result\n\n    def _model_mapping(self, model) -> str:\n        if model == \"claude-3-opus\":\n            return const.CLAUDE_3_OPUS\n        elif model == \"claude-3-sonnet\":\n            return const.CLAUDE_3_SONNET\n        elif model == \"claude-3-haiku\":\n            return const.CLAUDE_3_HAIKU\n        elif model == \"claude-3.5-sonnet\":\n            return const.CLAUDE_35_SONNET\n        return model\n\n    def _get_max_tokens(self, model: str) -> int:\n        \"\"\"\n        Get max_tokens for the model.\n        Reference from pi-mono:\n        - Claude 3.5/3.7: 8192\n        - Claude 3 Opus: 4096\n        - Default: 8192\n        \"\"\"\n        if model and (model.startswith(\"claude-3-5\") or model.startswith(\"claude-3-7\")):\n            return 8192\n        elif model and model.startswith(\"claude-3\") and \"opus\" in model:\n            return 4096\n        elif model and (model.startswith(\"claude-sonnet-4\") or model.startswith(\"claude-opus-4\")):\n            return 64000\n        return 8192\n\n    def call_with_tools(self, messages, tools=None, stream=False, **kwargs):\n        \"\"\"\n        Call Claude API with tool support for agent integration\n\n        Args:\n            messages: List of messages\n            tools: List of tool definitions\n            stream: Whether to use streaming\n            **kwargs: Additional parameters\n            \n        Returns:\n            Formatted response compatible with OpenAI format or generator for streaming\n        \"\"\"\n        actual_model = self._model_mapping(conf().get(\"model\"))\n\n        # Extract system prompt from messages if present\n        system_prompt = kwargs.get(\"system\", conf().get(\"character_desc\", \"\"))\n        claude_messages = []\n\n        for msg in messages:\n            if msg.get(\"role\") == \"system\":\n                system_prompt = msg[\"content\"]\n            else:\n                claude_messages.append(msg)\n\n        request_params = {\n            \"model\": actual_model,\n            \"max_tokens\": kwargs.get(\"max_tokens\", self._get_max_tokens(actual_model)),\n            \"messages\": claude_messages,\n            \"stream\": stream\n        }\n\n        if system_prompt:\n            request_params[\"system\"] = system_prompt\n\n        if tools:\n            request_params[\"tools\"] = tools\n\n        try:\n            if stream:\n                return self._handle_stream_response(request_params)\n            else:\n                return self._handle_sync_response(request_params)\n        except Exception as e:\n            logger.error(f\"Claude API call error: {e}\")\n            if stream:\n                # Return error generator for stream\n                def error_generator():\n                    yield {\n                   
     \"error\": True,\n                        \"message\": str(e),\n                        \"status_code\": 500\n                    }\n\n                return error_generator()\n            else:\n                # Return error response for sync\n                return {\n                    \"error\": True,\n                    \"message\": str(e),\n                    \"status_code\": 500\n                }\n\n    def _handle_sync_response(self, request_params):\n        \"\"\"Handle synchronous Claude API response\"\"\"\n        # Prepare headers\n        headers = {\n            \"x-api-key\": self.api_key,\n            \"anthropic-version\": \"2023-06-01\",\n            \"content-type\": \"application/json\"\n        }\n\n        # Make HTTP request\n        proxies = {\"http\": self.proxy, \"https\": self.proxy} if self.proxy else None\n        response = requests.post(\n            f\"{self.api_base}/messages\",\n            headers=headers,\n            json=request_params,\n            proxies=proxies\n        )\n\n        if response.status_code != 200:\n            raise Exception(f\"API request failed: {response.status_code} - {response.text}\")\n\n        claude_response = response.json()\n\n        # Extract content blocks\n        text_content = \"\"\n        tool_calls = []\n\n        content_blocks = claude_response.get(\"content\", [])\n        for block in content_blocks:\n            if block.get(\"type\") == \"text\":\n                text_content += block.get(\"text\", \"\")\n            elif block.get(\"type\") == \"tool_use\":\n                tool_calls.append({\n                    \"id\": block.get(\"id\", \"\"),\n                    \"type\": \"function\",\n                    \"function\": {\n                        \"name\": block.get(\"name\", \"\"),\n                        \"arguments\": json.dumps(block.get(\"input\", {}))\n                    }\n                })\n\n        # Build message in OpenAI format\n        message = {\n            \"role\": \"assistant\",\n            \"content\": text_content\n        }\n        if tool_calls:\n            message[\"tool_calls\"] = tool_calls\n\n        # Format response to match OpenAI structure\n        usage = claude_response.get(\"usage\", {})\n        formatted_response = {\n            \"id\": claude_response.get(\"id\", \"\"),\n            \"object\": \"chat.completion\",\n            \"created\": int(time.time()),\n            \"model\": claude_response.get(\"model\", request_params[\"model\"]),\n            \"choices\": [\n                {\n                    \"index\": 0,\n                    \"message\": message,\n                    \"finish_reason\": claude_response.get(\"stop_reason\", \"stop\")\n                }\n            ],\n            \"usage\": {\n                \"prompt_tokens\": usage.get(\"input_tokens\", 0),\n                \"completion_tokens\": usage.get(\"output_tokens\", 0),\n                \"total_tokens\": usage.get(\"input_tokens\", 0) + usage.get(\"output_tokens\", 0)\n            }\n        }\n\n        return formatted_response\n\n    def _handle_stream_response(self, request_params):\n        \"\"\"Handle streaming Claude API response using HTTP requests\"\"\"\n        # Prepare headers\n        headers = {\n            \"x-api-key\": self.api_key,\n            \"anthropic-version\": \"2023-06-01\",\n            \"content-type\": \"application/json\"\n        }\n\n        # Add stream parameter\n        request_params[\"stream\"] = True\n\n        # Track tool use 
state\n        tool_uses_map = {}  # {index: {id, name, input}}\n        current_tool_use_index = -1\n        stop_reason = None  # Track stop reason from Claude\n\n        try:\n            # Make streaming HTTP request\n            proxies = {\"http\": self.proxy, \"https\": self.proxy} if self.proxy else None\n            response = requests.post(\n                f\"{self.api_base}/messages\",\n                headers=headers,\n                json=request_params,\n                proxies=proxies,\n                stream=True\n            )\n\n            if response.status_code != 200:\n                error_text = response.text\n                try:\n                    error_data = json.loads(error_text)\n                    error_msg = error_data.get(\"error\", {}).get(\"message\", error_text)\n                except Exception:\n                    error_msg = error_text or \"Unknown error\"\n\n                yield {\n                    \"error\": True,\n                    \"status_code\": response.status_code,\n                    \"message\": error_msg\n                }\n                return\n\n            # Process streaming response\n            for line in response.iter_lines():\n                if line:\n                    line = line.decode('utf-8')\n                    if line.startswith('data: '):\n                        line = line[6:]  # Remove 'data: ' prefix\n                        if line == '[DONE]':\n                            break\n                        try:\n                            event = json.loads(line)\n                            event_type = event.get(\"type\")\n\n                            if event_type == \"content_block_start\":\n                                # New content block\n                                block = event.get(\"content_block\", {})\n                                if block.get(\"type\") == \"tool_use\":\n                                    current_tool_use_index = event.get(\"index\", 0)\n                                    tool_uses_map[current_tool_use_index] = {\n                                        \"id\": block.get(\"id\", \"\"),\n                                        \"name\": block.get(\"name\", \"\"),\n                                        \"input\": \"\"\n                                    }\n\n                            elif event_type == \"content_block_delta\":\n                                delta = event.get(\"delta\", {})\n                                delta_type = delta.get(\"type\")\n\n                                if delta_type == \"text_delta\":\n                                    # Text content\n                                    content = delta.get(\"text\", \"\")\n                                    yield {\n                                        \"id\": event.get(\"id\", \"\"),\n                                        \"object\": \"chat.completion.chunk\",\n                                        \"created\": int(time.time()),\n                                        \"model\": request_params[\"model\"],\n                                        \"choices\": [{\n                                            \"index\": 0,\n                                            \"delta\": {\"content\": content},\n                                            \"finish_reason\": None\n                                        }]\n                                    }\n\n                                elif delta_type == \"input_json_delta\":\n                                    # Tool input 
accumulation\n                                    if current_tool_use_index >= 0:\n                                        tool_uses_map[current_tool_use_index][\"input\"] += delta.get(\"partial_json\", \"\")\n\n                            elif event_type == \"message_delta\":\n                                # Extract stop_reason from delta\n                                delta = event.get(\"delta\", {})\n                                if \"stop_reason\" in delta:\n                                    stop_reason = delta.get(\"stop_reason\")\n                                    logger.info(f\"[Claude] Stream stop_reason: {stop_reason}\")\n                                \n                                # Message complete - yield tool calls if any\n                                if tool_uses_map:\n                                    for idx in sorted(tool_uses_map.keys()):\n                                        tool_data = tool_uses_map[idx]\n                                        yield {\n                                            \"id\": event.get(\"id\", \"\"),\n                                            \"object\": \"chat.completion.chunk\",\n                                            \"created\": int(time.time()),\n                                            \"model\": request_params[\"model\"],\n                                            \"choices\": [{\n                                                \"index\": 0,\n                                                \"delta\": {\n                                                    \"tool_calls\": [{\n                                                        \"index\": idx,\n                                                        \"id\": tool_data[\"id\"],\n                                                        \"type\": \"function\",\n                                                        \"function\": {\n                                                            \"name\": tool_data[\"name\"],\n                                                            \"arguments\": tool_data[\"input\"]\n                                                        }\n                                                    }]\n                                                },\n                                                \"finish_reason\": stop_reason\n                                            }]\n                                        }\n                            \n                            elif event_type == \"message_stop\":\n                                # Final event - log completion\n                                logger.debug(f\"[Claude] Stream completed with stop_reason: {stop_reason}\")\n\n                        except json.JSONDecodeError:\n                            continue\n\n        except requests.RequestException as e:\n            logger.error(f\"Claude streaming request error: {e}\")\n            yield {\n                \"error\": True,\n                \"message\": f\"Connection error: {str(e)}\",\n                \"status_code\": 0\n            }\n        except Exception as e:\n            logger.error(f\"Claude streaming error: {e}\")\n            yield {\n                \"error\": True,\n                \"message\": str(e),\n                \"status_code\": 500\n            }\n"
  },
  {
    "path": "models/dashscope/dashscope_bot.py",
    "content": "# encoding:utf-8\n\nimport json\nfrom models.bot import Bot\nfrom models.session_manager import SessionManager\nfrom bridge.context import ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom config import conf, load_config\nfrom .dashscope_session import DashscopeSession\nimport os\nimport dashscope\nfrom dashscope import MultiModalConversation\nfrom http import HTTPStatus\n\n\n\n# Legacy model name mapping for older dashscope SDK constants.\n# New models don't need to be added here — they use their name string directly.\ndashscope_models = {\n    \"qwen-turbo\": dashscope.Generation.Models.qwen_turbo,\n    \"qwen-plus\": dashscope.Generation.Models.qwen_plus,\n    \"qwen-max\": dashscope.Generation.Models.qwen_max,\n    \"qwen-bailian-v1\": dashscope.Generation.Models.bailian_v1,\n}\n\n# Model name prefixes that require MultiModalConversation API instead of Generation API.\n# Qwen3.5+ series are omni models that only support MultiModalConversation.\nMULTIMODAL_MODEL_PREFIXES = (\"qwen3.5-\",)\n\n\n# Qwen对话模型API\nclass DashscopeBot(Bot):\n    def __init__(self):\n        super().__init__()\n        self.sessions = SessionManager(DashscopeSession, model=conf().get(\"model\") or \"qwen-plus\")\n        self.model_name = conf().get(\"model\") or \"qwen-plus\"\n        self.client = dashscope.Generation\n        api_key = conf().get(\"dashscope_api_key\")\n        if api_key:\n            os.environ[\"DASHSCOPE_API_KEY\"] = api_key\n\n    @property\n    def api_key(self):\n        return conf().get(\"dashscope_api_key\")\n\n    @staticmethod\n    def _is_multimodal_model(model_name: str) -> bool:\n        \"\"\"Check if the model requires MultiModalConversation API\"\"\"\n        return model_name.startswith(MULTIMODAL_MODEL_PREFIXES)\n\n    def reply(self, query, context=None):\n        # acquire reply content\n        if context.type == ContextType.TEXT:\n            logger.info(\"[DASHSCOPE] query={}\".format(query))\n\n            session_id = context[\"session_id\"]\n            reply = None\n            clear_memory_commands = conf().get(\"clear_memory_commands\", [\"#清除记忆\"])\n            if query in clear_memory_commands:\n                self.sessions.clear_session(session_id)\n                reply = Reply(ReplyType.INFO, \"记忆已清除\")\n            elif query == \"#清除所有\":\n                self.sessions.clear_all_session()\n                reply = Reply(ReplyType.INFO, \"所有人记忆已清除\")\n            elif query == \"#更新配置\":\n                load_config()\n                reply = Reply(ReplyType.INFO, \"配置已更新\")\n            if reply:\n                return reply\n            session = self.sessions.session_query(query, session_id)\n            logger.debug(\"[DASHSCOPE] session query={}\".format(session.messages))\n\n            reply_content = self.reply_text(session)\n            logger.debug(\n                \"[DASHSCOPE] new_query={}, session_id={}, reply_cont={}, completion_tokens={}\".format(\n                    session.messages,\n                    session_id,\n                    reply_content[\"content\"],\n                    reply_content[\"completion_tokens\"],\n                )\n            )\n            if reply_content[\"completion_tokens\"] == 0 and len(reply_content[\"content\"]) > 0:\n                reply = Reply(ReplyType.ERROR, reply_content[\"content\"])\n            elif reply_content[\"completion_tokens\"] > 0:\n                self.sessions.session_reply(reply_content[\"content\"], session_id, 
reply_content[\"total_tokens\"])\n                reply = Reply(ReplyType.TEXT, reply_content[\"content\"])\n            else:\n                reply = Reply(ReplyType.ERROR, reply_content[\"content\"])\n                logger.debug(\"[DASHSCOPE] reply {} used 0 tokens.\".format(reply_content))\n            return reply\n        else:\n            reply = Reply(ReplyType.ERROR, \"Bot不支持处理{}类型的消息\".format(context.type))\n            return reply\n\n    def reply_text(self, session: DashscopeSession, retry_count=0) -> dict:\n        \"\"\"\n        call openai's ChatCompletion to get the answer\n        :param session: a conversation session\n        :param session_id: session id\n        :param retry_count: retry count\n        :return: {}\n        \"\"\"\n        try:\n            dashscope.api_key = self.api_key\n            model = dashscope_models.get(self.model_name, self.model_name)\n            if self._is_multimodal_model(self.model_name):\n                mm_messages = self._prepare_messages_for_multimodal(session.messages)\n                response = MultiModalConversation.call(\n                    model=model,\n                    messages=mm_messages,\n                    result_format=\"message\"\n                )\n            else:\n                response = self.client.call(\n                    model,\n                    messages=session.messages,\n                    result_format=\"message\"\n                )\n            if response.status_code == HTTPStatus.OK:\n                resp_dict = self._response_to_dict(response)\n                choice = resp_dict[\"output\"][\"choices\"][0]\n                content = choice.get(\"message\", {}).get(\"content\", \"\")\n                # Multimodal models may return content as a list of blocks\n                if isinstance(content, list):\n                    content = \"\".join(\n                        item.get(\"text\", \"\") for item in content if isinstance(item, dict)\n                    )\n                usage = resp_dict.get(\"usage\", {})\n                return {\n                    \"total_tokens\": usage.get(\"total_tokens\", 0),\n                    \"completion_tokens\": usage.get(\"output_tokens\", 0),\n                    \"content\": content,\n                }\n            else:\n                logger.error('Request id: %s, Status code: %s, error code: %s, error message: %s' % (\n                    response.request_id, response.status_code,\n                    response.code, response.message\n                ))\n                result = {\"completion_tokens\": 0, \"content\": \"我现在有点累了，等会再来吧\"}\n                need_retry = retry_count < 2\n                result = {\"completion_tokens\": 0, \"content\": \"我现在有点累了，等会再来吧\"}\n                if need_retry:\n                    return self.reply_text(session, retry_count + 1)\n                else:\n                    return result\n        except Exception as e:\n            logger.exception(e)\n            need_retry = retry_count < 2\n            result = {\"completion_tokens\": 0, \"content\": \"我现在有点累了，等会再来吧\"}\n            if need_retry:\n                return self.reply_text(session, retry_count + 1)\n            else:\n                return result\n\n    def call_with_tools(self, messages, tools=None, stream=False, **kwargs):\n        \"\"\"\n        Call DashScope API with tool support for agent integration\n        \n        This method handles:\n        1. Format conversion (Claude format → DashScope format)\n        2. 
System prompt injection\n        3. API calling with DashScope SDK\n        4. Thinking mode support (enable_thinking for Qwen3)\n        \n        Args:\n            messages: List of messages (may be in Claude format from agent)\n            tools: List of tool definitions (may be in Claude format from agent)\n            stream: Whether to use streaming\n            **kwargs: Additional parameters (max_tokens, temperature, system, etc.)\n            \n        Returns:\n            Formatted response or generator for streaming\n        \"\"\"\n        try:\n            # Convert messages from Claude format to DashScope format\n            messages = self._convert_messages_to_dashscope_format(messages)\n            \n            # Convert tools from Claude format to DashScope format\n            if tools:\n                tools = self._convert_tools_to_dashscope_format(tools)\n            \n            # Handle system prompt\n            system_prompt = kwargs.get('system')\n            if system_prompt:\n                # Add system message at the beginning if not already present\n                if not messages or messages[0].get('role') != 'system':\n                    messages = [{\"role\": \"system\", \"content\": system_prompt}] + messages\n                else:\n                    # Replace existing system message\n                    messages[0] = {\"role\": \"system\", \"content\": system_prompt}\n            \n            # Build request parameters\n            model_name = kwargs.get(\"model\", self.model_name)\n            \n            parameters = {\n                \"result_format\": \"message\",  # Required for tool calling\n                \"temperature\": kwargs.get(\"temperature\", conf().get(\"temperature\", 0.85)),\n                \"top_p\": kwargs.get(\"top_p\", conf().get(\"top_p\", 0.8)),\n            }\n            \n            # Add max_tokens if specified\n            if kwargs.get(\"max_tokens\"):\n                parameters[\"max_tokens\"] = kwargs[\"max_tokens\"]\n            \n            # Add tools if provided\n            if tools:\n                parameters[\"tools\"] = tools\n                # Add tool_choice if specified\n                if kwargs.get(\"tool_choice\"):\n                    parameters[\"tool_choice\"] = kwargs[\"tool_choice\"]\n            \n            # Add thinking parameters for Qwen3 models (disabled by default for stability)\n            if \"qwen3\" in model_name.lower() or \"qwq\" in model_name.lower():\n                # Only enable thinking mode if explicitly requested\n                enable_thinking = kwargs.get(\"enable_thinking\", False)\n                if enable_thinking:\n                    parameters[\"enable_thinking\"] = True\n                    \n                    # Set thinking budget if specified\n                    if kwargs.get(\"thinking_budget\"):\n                        parameters[\"thinking_budget\"] = kwargs[\"thinking_budget\"]\n                    \n                    # Qwen3 requires incremental_output=true in thinking mode\n                    if stream:\n                        parameters[\"incremental_output\"] = True\n            \n            # Always use incremental_output for streaming (for better token-by-token streaming)\n            # This is especially important for tool calling to avoid incomplete responses\n            if stream:\n                parameters[\"incremental_output\"] = True\n            \n            # Make API call with DashScope SDK\n            if stream:\n       
         return self._handle_stream_response(model_name, messages, parameters)\n            else:\n                return self._handle_sync_response(model_name, messages, parameters)\n                \n        except Exception as e:\n            error_msg = str(e)\n            logger.error(f\"[DASHSCOPE] call_with_tools error: {error_msg}\")\n            if stream:\n                def error_generator():\n                    yield {\n                        \"error\": True,\n                        \"message\": error_msg,\n                        \"status_code\": 500\n                    }\n                return error_generator()\n            else:\n                return {\n                    \"error\": True,\n                    \"message\": error_msg,\n                    \"status_code\": 500\n                }\n    \n    def _handle_sync_response(self, model_name, messages, parameters):\n        \"\"\"Handle synchronous DashScope API response\"\"\"\n        try:\n            # Set API key before calling\n            dashscope.api_key = self.api_key\n            model = dashscope_models.get(model_name, model_name)\n\n            if self._is_multimodal_model(model_name):\n                messages = self._prepare_messages_for_multimodal(messages)\n                response = MultiModalConversation.call(\n                    model=model,\n                    messages=messages,\n                    **parameters\n                )\n            else:\n                response = dashscope.Generation.call(\n                    model=model,\n                    messages=messages,\n                    **parameters\n                )\n\n            if response.status_code == HTTPStatus.OK:\n                # Convert response to dict to avoid DashScope object KeyError issues\n                resp_dict = self._response_to_dict(response)\n                choice = resp_dict[\"output\"][\"choices\"][0]\n                message = choice.get(\"message\", {})\n                content = message.get(\"content\", \"\")\n                # Multimodal models may return content as a list of blocks\n                if isinstance(content, list):\n                    content = \"\".join(\n                        item.get(\"text\", \"\") for item in content if isinstance(item, dict)\n                    )\n                usage = resp_dict.get(\"usage\", {})\n                return {\n                    \"id\": resp_dict.get(\"request_id\"),\n                    \"object\": \"chat.completion\",\n                    \"created\": 0,\n                    \"model\": model_name,\n                    \"choices\": [{\n                        \"index\": 0,\n                        \"message\": {\n                            \"role\": message.get(\"role\", \"assistant\"),\n                            \"content\": content,\n                            \"tool_calls\": self._convert_tool_calls_to_openai_format(\n                                message.get(\"tool_calls\")\n                            )\n                        },\n                        \"finish_reason\": choice.get(\"finish_reason\")\n                    }],\n                    \"usage\": {\n                        \"prompt_tokens\": usage.get(\"input_tokens\", 0),\n                        \"completion_tokens\": usage.get(\"output_tokens\", 0),\n                        \"total_tokens\": usage.get(\"total_tokens\", 0)\n                    }\n                }\n            else:\n                logger.error(f\"[DASHSCOPE] API error: {response.code} - 
{response.message}\")\n                return {\n                    \"error\": True,\n                    \"message\": response.message,\n                    \"status_code\": response.status_code\n                }\n\n        except Exception as e:\n            logger.error(f\"[DASHSCOPE] sync response error: {e}\")\n            return {\n                \"error\": True,\n                \"message\": str(e),\n                \"status_code\": 500\n            }\n    \n    def _handle_stream_response(self, model_name, messages, parameters):\n        \"\"\"Handle streaming DashScope API response\"\"\"\n        try:\n            # Set API key before calling\n            dashscope.api_key = self.api_key\n            model = dashscope_models.get(model_name, model_name)\n\n            if self._is_multimodal_model(model_name):\n                messages = self._prepare_messages_for_multimodal(messages)\n                responses = MultiModalConversation.call(\n                    model=model,\n                    messages=messages,\n                    stream=True,\n                    **parameters\n                )\n            else:\n                responses = dashscope.Generation.call(\n                    model=model,\n                    messages=messages,\n                    stream=True,\n                    **parameters\n                )\n            \n            # Stream chunks to caller, converting to OpenAI format\n            for response in responses:\n                # Convert to dict first to avoid DashScope proxy object KeyError\n                resp_dict = self._response_to_dict(response)\n                status_code = resp_dict.get(\"status_code\", 200)\n\n                if status_code != HTTPStatus.OK:\n                    err_code = resp_dict.get(\"code\", \"\")\n                    err_msg = resp_dict.get(\"message\", \"Unknown error\")\n                    logger.error(f\"[DASHSCOPE] Stream error: {err_code} - {err_msg}\")\n                    yield {\n                        \"error\": True,\n                        \"message\": err_msg,\n                        \"status_code\": status_code\n                    }\n                    continue\n\n                choices = resp_dict.get(\"output\", {}).get(\"choices\", [])\n                if not choices:\n                    continue\n\n                choice = choices[0]\n                finish_reason = choice.get(\"finish_reason\")\n                message = choice.get(\"message\", {})\n\n                # Convert to OpenAI-compatible format\n                openai_chunk = {\n                    \"id\": resp_dict.get(\"request_id\"),\n                    \"object\": \"chat.completion.chunk\",\n                    \"created\": 0,\n                    \"model\": model_name,\n                    \"choices\": [{\n                        \"index\": 0,\n                        \"delta\": {},\n                        \"finish_reason\": finish_reason\n                    }]\n                }\n\n                # Add role\n                role = message.get(\"role\")\n                if role:\n                    openai_chunk[\"choices\"][0][\"delta\"][\"role\"] = role\n\n                # Add reasoning_content (thinking process from models like qwen3.5)\n                reasoning_content = message.get(\"reasoning_content\")\n                if reasoning_content:\n                    openai_chunk[\"choices\"][0][\"delta\"][\"reasoning_content\"] = reasoning_content\n\n                # Add content (multimodal models may return 
list of blocks)\n                content = message.get(\"content\")\n                if isinstance(content, list):\n                    content = \"\".join(\n                        item.get(\"text\", \"\") for item in content if isinstance(item, dict)\n                    )\n                if content:\n                    openai_chunk[\"choices\"][0][\"delta\"][\"content\"] = content\n\n                # Add tool_calls\n                tool_calls = message.get(\"tool_calls\")\n                if tool_calls:\n                    openai_chunk[\"choices\"][0][\"delta\"][\"tool_calls\"] = self._convert_tool_calls_to_openai_format(tool_calls)\n\n                yield openai_chunk\n\n        except Exception as e:\n            logger.error(f\"[DASHSCOPE] stream response error: {e}\", exc_info=True)\n            yield {\n                \"error\": True,\n                \"message\": str(e),\n                \"status_code\": 500\n            }\n    \n    @staticmethod\n    def _response_to_dict(response) -> dict:\n        \"\"\"\n        Convert DashScope response object to a plain dict.\n\n        DashScope SDK wraps responses in proxy objects whose __getattr__\n        delegates to __getitem__, raising KeyError (not AttributeError)\n        when an attribute is missing.  Standard hasattr / getattr only\n        catch AttributeError, so we must use try-except everywhere.\n        \"\"\"\n        _SENTINEL = object()\n\n        def _safe_getattr(obj, name, default=_SENTINEL):\n            \"\"\"getattr that also catches KeyError from DashScope proxy objects.\"\"\"\n            try:\n                return getattr(obj, name)\n            except (AttributeError, KeyError, TypeError):\n                return default\n\n        def _has_attr(obj, name):\n            return _safe_getattr(obj, name) is not _SENTINEL\n\n        def _to_dict(obj):\n            if isinstance(obj, (str, int, float, bool, type(None))):\n                return obj\n            if isinstance(obj, dict):\n                return {k: _to_dict(v) for k, v in obj.items()}\n            if isinstance(obj, (list, tuple)):\n                return [_to_dict(i) for i in obj]\n            # DashScope response objects behave like dicts (have .keys())\n            if _has_attr(obj, \"keys\"):\n                try:\n                    return {k: _to_dict(obj[k]) for k in obj.keys()}\n                except Exception:\n                    pass\n            return obj\n\n        result = {}\n        # Extract known top-level fields safely\n        for attr in (\"request_id\", \"status_code\", \"code\", \"message\", \"output\", \"usage\"):\n            val = _safe_getattr(response, attr)\n            if val is _SENTINEL:\n                try:\n                    val = response[attr]\n                except (KeyError, TypeError, IndexError):\n                    continue\n            result[attr] = _to_dict(val)\n        return result\n\n    def _convert_tools_to_dashscope_format(self, tools):\n        \"\"\"\n        Convert tools from Claude format to DashScope format\n        \n        Claude format: {name, description, input_schema}\n        DashScope format: {type: \"function\", function: {name, description, parameters}}\n        \"\"\"\n        if not tools:\n            return None\n        \n        dashscope_tools = []\n        for tool in tools:\n            # Check if already in DashScope/OpenAI format\n            if 'type' in tool and tool['type'] == 'function':\n                dashscope_tools.append(tool)\n            else:\n  
              # Convert from Claude format\n                dashscope_tools.append({\n                    \"type\": \"function\",\n                    \"function\": {\n                        \"name\": tool.get(\"name\"),\n                        \"description\": tool.get(\"description\"),\n                        \"parameters\": tool.get(\"input_schema\", {})\n                    }\n                })\n        \n        return dashscope_tools\n    \n    @staticmethod\n    def _prepare_messages_for_multimodal(messages: list) -> list:\n        \"\"\"\n        Ensure messages are compatible with MultiModalConversation API.\n\n        MultiModalConversation._preprocess_messages iterates every message\n        with ``content = message[\"content\"]; for elem in content: ...``,\n        which means:\n          1. Every message MUST have a 'content' key.\n          2. 'content' MUST be an iterable (list), not a plain string.\n             The expected format is [{\"text\": \"...\"}, ...].\n\n        Meanwhile the DashScope API requires role='tool' messages to follow\n        assistant tool_calls, so we must NOT convert them to role='user'.\n        We just ensure they have a list-typed 'content'.\n        \"\"\"\n        result = []\n        for msg in messages:\n            msg = dict(msg)  # shallow copy\n\n            # Normalize content to list format [{\"text\": \"...\"}]\n            content = msg.get(\"content\")\n            if content is None or (isinstance(content, str) and content == \"\"):\n                msg[\"content\"] = [{\"text\": \"\"}]\n            elif isinstance(content, str):\n                msg[\"content\"] = [{\"text\": content}]\n            # If content is already a list, keep as-is (already in multimodal format)\n\n            result.append(msg)\n        return result\n\n    def _convert_messages_to_dashscope_format(self, messages):\n        \"\"\"\n        Convert messages from Claude format to DashScope format\n        \n        Claude uses content blocks with types like 'tool_use', 'tool_result'\n        DashScope uses 'tool_calls' in assistant messages and 'tool' role for results\n        \"\"\"\n        if not messages:\n            return []\n        \n        dashscope_messages = []\n        \n        for msg in messages:\n            role = msg.get(\"role\")\n            content = msg.get(\"content\")\n            \n            # Handle string content (already in correct format)\n            if isinstance(content, str):\n                dashscope_messages.append(msg)\n                continue\n            \n            # Handle list content (Claude format with content blocks)\n            if isinstance(content, list):\n                # Check if this is a tool result message (user role with tool_result blocks)\n                if role == \"user\" and any(block.get(\"type\") == \"tool_result\" for block in content):\n                    # Convert each tool_result block to a separate tool message\n                    for block in content:\n                        if block.get(\"type\") == \"tool_result\":\n                            dashscope_messages.append({\n                                \"role\": \"tool\",\n                                \"content\": block.get(\"content\", \"\"),\n                                \"tool_call_id\": block.get(\"tool_use_id\")  # DashScope uses 'tool_call_id'\n                            })\n                \n                # Check if this is an assistant message with tool_use blocks\n                elif role == 
\"assistant\":\n                    # Separate text content and tool_use blocks\n                    text_parts = []\n                    tool_calls = []\n                    \n                    for block in content:\n                        if block.get(\"type\") == \"text\":\n                            text_parts.append(block.get(\"text\", \"\"))\n                        elif block.get(\"type\") == \"tool_use\":\n                            tool_calls.append({\n                                \"id\": block.get(\"id\"),\n                                \"type\": \"function\",\n                                \"function\": {\n                                    \"name\": block.get(\"name\"),\n                                    \"arguments\": json.dumps(block.get(\"input\", {}))\n                                }\n                            })\n                    \n                    # Build DashScope format assistant message\n                    dashscope_msg = {\n                        \"role\": \"assistant\"\n                    }\n                    \n                    # Add content only if there is actual text\n                    # DashScope API: when tool_calls exist, content should be None or omitted if empty\n                    if text_parts:\n                        dashscope_msg[\"content\"] = \" \".join(text_parts)\n                    elif not tool_calls:\n                        # If no tool_calls and no text, set empty string (rare case)\n                        dashscope_msg[\"content\"] = \"\"\n                    # If there are tool_calls but no text, don't set content field at all\n                    \n                    if tool_calls:\n                        dashscope_msg[\"tool_calls\"] = tool_calls\n                    \n                    dashscope_messages.append(dashscope_msg)\n                else:\n                    # Other list content, keep as is\n                    dashscope_messages.append(msg)\n            else:\n                # Other formats, keep as is\n                dashscope_messages.append(msg)\n        \n        return dashscope_messages\n    \n    def _convert_tool_calls_to_openai_format(self, tool_calls):\n        \"\"\"Convert DashScope tool_calls to OpenAI format\"\"\"\n        if not tool_calls:\n            return None\n        \n        openai_tool_calls = []\n        for tool_call in tool_calls:\n            # DashScope format is already similar to OpenAI\n            if isinstance(tool_call, dict):\n                openai_tool_calls.append(tool_call)\n            else:\n                # Handle object format\n                openai_tool_calls.append({\n                    \"id\": getattr(tool_call, 'id', None),\n                    \"type\": \"function\",\n                    \"function\": {\n                        \"name\": tool_call.function.name,\n                        \"arguments\": tool_call.function.arguments\n                    }\n                })\n        \n        return openai_tool_calls\n"
  },
  {
    "path": "models/dashscope/dashscope_session.py",
    "content": "from models.session_manager import Session\nfrom common.log import logger\n\n\nclass DashscopeSession(Session):\n    def __init__(self, session_id, system_prompt=None, model=\"qwen-turbo\"):\n        super().__init__(session_id)\n        self.reset()\n\n    def discard_exceeding(self, max_tokens, cur_tokens=None):\n        precise = True\n        try:\n            cur_tokens = self.calc_tokens()\n        except Exception as e:\n            precise = False\n            if cur_tokens is None:\n                raise e\n            logger.debug(\"Exception when counting tokens precisely for query: {}\".format(e))\n        while cur_tokens > max_tokens:\n            if len(self.messages) > 2:\n                self.messages.pop(1)\n            elif len(self.messages) == 2 and self.messages[1][\"role\"] == \"assistant\":\n                self.messages.pop(1)\n                if precise:\n                    cur_tokens = self.calc_tokens()\n                else:\n                    cur_tokens = cur_tokens - max_tokens\n                break\n            elif len(self.messages) == 2 and self.messages[1][\"role\"] == \"user\":\n                logger.warn(\"user message exceed max_tokens. total_tokens={}\".format(cur_tokens))\n                break\n            else:\n                logger.debug(\"max_tokens={}, total_tokens={}, len(messages)={}\".format(max_tokens, cur_tokens,\n                                                                                       len(self.messages)))\n                break\n            if precise:\n                cur_tokens = self.calc_tokens()\n            else:\n                cur_tokens = cur_tokens - max_tokens\n        return cur_tokens\n\n    def calc_tokens(self):\n        return num_tokens_from_messages(self.messages)\n\n\ndef num_tokens_from_messages(messages):\n    # 只是大概，具体计算规则：https://help.aliyun.com/zh/dashscope/developer-reference/token-api?spm=a2c4g.11186623.0.0.4d8b12b0BkP3K9\n    tokens = 0\n    for msg in messages:\n        tokens += len(msg[\"content\"])\n    return tokens\n"
  },
  {
    "path": "models/doubao/__init__.py",
    "content": ""
  },
  {
    "path": "models/doubao/doubao_bot.py",
    "content": "# encoding:utf-8\n\nimport json\nimport time\n\nimport requests\nfrom models.bot import Bot\nfrom models.session_manager import SessionManager\nfrom bridge.context import ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom config import conf, load_config\nfrom .doubao_session import DoubaoSession\n\n\n# Doubao (火山方舟 / Volcengine Ark) API Bot\nclass DoubaoBot(Bot):\n    def __init__(self):\n        super().__init__()\n        self.sessions = SessionManager(DoubaoSession, model=conf().get(\"model\") or \"doubao-seed-2-0-pro-260215\")\n        model = conf().get(\"model\") or \"doubao-seed-2-0-pro-260215\"\n        self.args = {\n            \"model\": model,\n            \"temperature\": conf().get(\"temperature\", 0.8),\n            \"top_p\": conf().get(\"top_p\", 1.0),\n        }\n\n    @property\n    def api_key(self):\n        return conf().get(\"ark_api_key\")\n\n    @property\n    def base_url(self):\n        url = conf().get(\"ark_base_url\", \"https://ark.cn-beijing.volces.com/api/v3\")\n        if url.endswith(\"/chat/completions\"):\n            url = url.rsplit(\"/chat/completions\", 1)[0]\n        return url.rstrip(\"/\")\n\n    def reply(self, query, context=None):\n        # acquire reply content\n        if context.type == ContextType.TEXT:\n            logger.info(\"[DOUBAO] query={}\".format(query))\n\n            session_id = context[\"session_id\"]\n            reply = None\n            clear_memory_commands = conf().get(\"clear_memory_commands\", [\"#清除记忆\"])\n            if query in clear_memory_commands:\n                self.sessions.clear_session(session_id)\n                reply = Reply(ReplyType.INFO, \"记忆已清除\")\n            elif query == \"#清除所有\":\n                self.sessions.clear_all_session()\n                reply = Reply(ReplyType.INFO, \"所有人记忆已清除\")\n            elif query == \"#更新配置\":\n                load_config()\n                reply = Reply(ReplyType.INFO, \"配置已更新\")\n            if reply:\n                return reply\n            session = self.sessions.session_query(query, session_id)\n            logger.debug(\"[DOUBAO] session query={}\".format(session.messages))\n\n            model = context.get(\"doubao_model\")\n            new_args = self.args.copy()\n            if model:\n                new_args[\"model\"] = model\n\n            reply_content = self.reply_text(session, args=new_args)\n            logger.debug(\n                \"[DOUBAO] new_query={}, session_id={}, reply_cont={}, completion_tokens={}\".format(\n                    session.messages,\n                    session_id,\n                    reply_content[\"content\"],\n                    reply_content[\"completion_tokens\"],\n                )\n            )\n            if reply_content[\"completion_tokens\"] == 0 and len(reply_content[\"content\"]) > 0:\n                reply = Reply(ReplyType.ERROR, reply_content[\"content\"])\n            elif reply_content[\"completion_tokens\"] > 0:\n                self.sessions.session_reply(reply_content[\"content\"], session_id, reply_content[\"total_tokens\"])\n                reply = Reply(ReplyType.TEXT, reply_content[\"content\"])\n            else:\n                reply = Reply(ReplyType.ERROR, reply_content[\"content\"])\n                logger.debug(\"[DOUBAO] reply {} used 0 tokens.\".format(reply_content))\n            return reply\n        else:\n            reply = Reply(ReplyType.ERROR, \"Bot不支持处理{}类型的消息\".format(context.type))\n            return reply\n\n    
def reply_text(self, session: DoubaoSession, args=None, retry_count: int = 0) -> dict:\n        \"\"\"\n        Call Doubao chat completion API to get the answer\n        :param session: a conversation session\n        :param args: model args\n        :param retry_count: retry count\n        :return: {}\n        \"\"\"\n        try:\n            headers = {\n                \"Content-Type\": \"application/json\",\n                \"Authorization\": \"Bearer \" + self.api_key\n            }\n            body = args.copy()\n            body[\"messages\"] = session.messages\n            # Disable thinking by default for better efficiency\n            body[\"thinking\"] = {\"type\": \"disabled\"}\n            res = requests.post(\n                f\"{self.base_url}/chat/completions\",\n                headers=headers,\n                json=body\n            )\n            if res.status_code == 200:\n                response = res.json()\n                return {\n                    \"total_tokens\": response[\"usage\"][\"total_tokens\"],\n                    \"completion_tokens\": response[\"usage\"][\"completion_tokens\"],\n                    \"content\": response[\"choices\"][0][\"message\"][\"content\"]\n                }\n            else:\n                response = res.json()\n                error = response.get(\"error\", {})\n                logger.error(f\"[DOUBAO] chat failed, status_code={res.status_code}, \"\n                             f\"msg={error.get('message')}, type={error.get('type')}\")\n\n                result = {\"completion_tokens\": 0, \"content\": \"提问太快啦，请休息一下再问我吧\"}\n                need_retry = False\n                if res.status_code >= 500:\n                    logger.warn(f\"[DOUBAO] do retry, times={retry_count}\")\n                    need_retry = retry_count < 2\n                elif res.status_code == 401:\n                    result[\"content\"] = \"授权失败，请检查API Key是否正确\"\n                elif res.status_code == 429:\n                    result[\"content\"] = \"请求过于频繁，请稍后再试\"\n                    need_retry = retry_count < 2\n                else:\n                    need_retry = False\n\n                if need_retry:\n                    time.sleep(3)\n                    return self.reply_text(session, args, retry_count + 1)\n                else:\n                    return result\n        except Exception as e:\n            logger.exception(e)\n            need_retry = retry_count < 2\n            result = {\"completion_tokens\": 0, \"content\": \"我现在有点累了，等会再来吧\"}\n            if need_retry:\n                return self.reply_text(session, args, retry_count + 1)\n            else:\n                return result\n\n    # ==================== Agent mode support ====================\n\n    def call_with_tools(self, messages, tools=None, stream: bool = False, **kwargs):\n        \"\"\"\n        Call Doubao API with tool support for agent integration.\n\n        This method handles:\n        1. Format conversion (Claude format -> OpenAI format)\n        2. System prompt injection\n        3. Streaming SSE response with tool_calls\n        4. 
Thinking (reasoning) is disabled by default for efficiency\n\n        Args:\n            messages: List of messages (may be in Claude format from agent)\n            tools: List of tool definitions (may be in Claude format from agent)\n            stream: Whether to use streaming\n            **kwargs: Additional parameters (max_tokens, temperature, system, model, etc.)\n\n        Returns:\n            Generator yielding OpenAI-format chunks (for streaming)\n        \"\"\"\n        try:\n            # Convert messages from Claude format to OpenAI format\n            converted_messages = self._convert_messages_to_openai_format(messages)\n\n            # Inject system prompt if provided\n            system_prompt = kwargs.pop(\"system\", None)\n            if system_prompt:\n                if not converted_messages or converted_messages[0].get(\"role\") != \"system\":\n                    converted_messages.insert(0, {\"role\": \"system\", \"content\": system_prompt})\n                else:\n                    converted_messages[0] = {\"role\": \"system\", \"content\": system_prompt}\n\n            # Convert tools from Claude format to OpenAI format\n            converted_tools = None\n            if tools:\n                converted_tools = self._convert_tools_to_openai_format(tools)\n\n            # Resolve model / temperature\n            model = kwargs.pop(\"model\", None) or self.args[\"model\"]\n            max_tokens = kwargs.pop(\"max_tokens\", None)\n            # Don't pop temperature, just ignore it - let API use default\n            kwargs.pop(\"temperature\", None)\n\n            # Build request body (omit temperature, let the API use its own default)\n            request_body = {\n                \"model\": model,\n                \"messages\": converted_messages,\n                \"stream\": stream,\n            }\n            if max_tokens is not None:\n                request_body[\"max_tokens\"] = max_tokens\n\n            # Add tools\n            if converted_tools:\n                request_body[\"tools\"] = converted_tools\n                request_body[\"tool_choice\"] = \"auto\"\n\n            # Explicitly disable thinking to avoid reasoning_content issues\n            # in multi-turn tool calls\n            request_body[\"thinking\"] = {\"type\": \"disabled\"}\n\n            logger.debug(f\"[DOUBAO] API call: model={model}, \"\n                         f\"tools={len(converted_tools) if converted_tools else 0}, stream={stream}\")\n\n            if stream:\n                return self._handle_stream_response(request_body)\n            else:\n                return self._handle_sync_response(request_body)\n\n        except Exception as e:\n            logger.error(f\"[DOUBAO] call_with_tools error: {e}\")\n            import traceback\n            logger.error(traceback.format_exc())\n\n            def error_generator():\n                yield {\"error\": True, \"message\": str(e), \"status_code\": 500}\n            return error_generator()\n\n    # -------------------- streaming --------------------\n\n    def _handle_stream_response(self, request_body: dict):\n        \"\"\"Handle streaming SSE response from Doubao API and yield OpenAI-format chunks.\"\"\"\n        try:\n            headers = {\n                \"Content-Type\": \"application/json\",\n                \"Authorization\": f\"Bearer {self.api_key}\"\n            }\n\n            url = f\"{self.base_url}/chat/completions\"\n            response = requests.post(url, headers=headers, json=request_body, 
stream=True, timeout=120)\n\n            if response.status_code != 200:\n                error_msg = response.text\n                logger.error(f\"[DOUBAO] API error: status={response.status_code}, msg={error_msg}\")\n                yield {\"error\": True, \"message\": error_msg, \"status_code\": response.status_code}\n                return\n\n            current_tool_calls = {}\n            finish_reason = None\n\n            for line in response.iter_lines():\n                if not line:\n                    continue\n\n                line = line.decode(\"utf-8\")\n                if not line.startswith(\"data: \"):\n                    continue\n\n                data_str = line[6:]  # Remove \"data: \" prefix\n                if data_str.strip() == \"[DONE]\":\n                    break\n\n                try:\n                    chunk = json.loads(data_str)\n                except json.JSONDecodeError as e:\n                    logger.warning(f\"[DOUBAO] JSON decode error: {e}, data: {data_str[:200]}\")\n                    continue\n\n                # Check for error in chunk\n                if chunk.get(\"error\"):\n                    error_data = chunk[\"error\"]\n                    error_msg = error_data.get(\"message\", \"Unknown error\") if isinstance(error_data, dict) else str(error_data)\n                    logger.error(f\"[DOUBAO] stream error: {error_msg}\")\n                    yield {\"error\": True, \"message\": error_msg, \"status_code\": 500}\n                    return\n\n                if not chunk.get(\"choices\"):\n                    continue\n\n                choice = chunk[\"choices\"][0]\n                delta = choice.get(\"delta\", {})\n\n                # Skip reasoning_content (thinking) - don't log or forward\n                if delta.get(\"reasoning_content\"):\n                    continue\n\n                # Handle text content\n                if \"content\" in delta and delta[\"content\"]:\n                    yield {\n                        \"choices\": [{\n                            \"index\": 0,\n                            \"delta\": {\n                                \"role\": \"assistant\",\n                                \"content\": delta[\"content\"]\n                            }\n                        }]\n                    }\n\n                # Handle tool_calls (streamed incrementally)\n                if \"tool_calls\" in delta:\n                    for tool_call_chunk in delta[\"tool_calls\"]:\n                        index = tool_call_chunk.get(\"index\", 0)\n                        if index not in current_tool_calls:\n                            current_tool_calls[index] = {\n                                \"id\": tool_call_chunk.get(\"id\", \"\"),\n                                \"type\": \"tool_use\",\n                                \"name\": tool_call_chunk.get(\"function\", {}).get(\"name\", \"\"),\n                                \"input\": \"\"\n                            }\n\n                        # Accumulate arguments\n                        if \"function\" in tool_call_chunk and \"arguments\" in tool_call_chunk[\"function\"]:\n                            current_tool_calls[index][\"input\"] += tool_call_chunk[\"function\"][\"arguments\"]\n\n                        # Yield OpenAI-format tool call delta\n                        yield {\n                            \"choices\": [{\n                                \"index\": 0,\n                                \"delta\": {\n                            
        \"tool_calls\": [tool_call_chunk]\n                                }\n                            }]\n                        }\n\n                # Capture finish_reason\n                if choice.get(\"finish_reason\"):\n                    finish_reason = choice[\"finish_reason\"]\n\n            # Final chunk with finish_reason\n            yield {\n                \"choices\": [{\n                    \"index\": 0,\n                    \"delta\": {},\n                    \"finish_reason\": finish_reason\n                }]\n            }\n\n        except requests.exceptions.Timeout:\n            logger.error(\"[DOUBAO] Request timeout\")\n            yield {\"error\": True, \"message\": \"Request timeout\", \"status_code\": 500}\n        except Exception as e:\n            logger.error(f\"[DOUBAO] stream response error: {e}\")\n            import traceback\n            logger.error(traceback.format_exc())\n            yield {\"error\": True, \"message\": str(e), \"status_code\": 500}\n\n    # -------------------- sync --------------------\n\n    def _handle_sync_response(self, request_body: dict):\n        \"\"\"Handle synchronous API response and yield a single result dict.\"\"\"\n        try:\n            headers = {\n                \"Content-Type\": \"application/json\",\n                \"Authorization\": f\"Bearer {self.api_key}\"\n            }\n\n            request_body.pop(\"stream\", None)\n            url = f\"{self.base_url}/chat/completions\"\n            response = requests.post(url, headers=headers, json=request_body, timeout=120)\n\n            if response.status_code != 200:\n                error_msg = response.text\n                logger.error(f\"[DOUBAO] API error: status={response.status_code}, msg={error_msg}\")\n                yield {\"error\": True, \"message\": error_msg, \"status_code\": response.status_code}\n                return\n\n            result = response.json()\n            message = result[\"choices\"][0][\"message\"]\n            finish_reason = result[\"choices\"][0][\"finish_reason\"]\n\n            response_data = {\"role\": \"assistant\", \"content\": []}\n\n            # Add text content\n            if message.get(\"content\"):\n                response_data[\"content\"].append({\n                    \"type\": \"text\",\n                    \"text\": message[\"content\"]\n                })\n\n            # Add tool calls\n            if message.get(\"tool_calls\"):\n                for tool_call in message[\"tool_calls\"]:\n                    response_data[\"content\"].append({\n                        \"type\": \"tool_use\",\n                        \"id\": tool_call[\"id\"],\n                        \"name\": tool_call[\"function\"][\"name\"],\n                        \"input\": json.loads(tool_call[\"function\"][\"arguments\"])\n                    })\n\n            # Map finish_reason\n            if finish_reason == \"tool_calls\":\n                response_data[\"stop_reason\"] = \"tool_use\"\n            elif finish_reason == \"stop\":\n                response_data[\"stop_reason\"] = \"end_turn\"\n            else:\n                response_data[\"stop_reason\"] = finish_reason\n\n            yield response_data\n\n        except requests.exceptions.Timeout:\n            logger.error(\"[DOUBAO] Request timeout\")\n            yield {\"error\": True, \"message\": \"Request timeout\", \"status_code\": 500}\n        except Exception as e:\n            logger.error(f\"[DOUBAO] sync response error: {e}\")\n            import 
traceback\n            logger.error(traceback.format_exc())\n            yield {\"error\": True, \"message\": str(e), \"status_code\": 500}\n\n    # -------------------- format conversion --------------------\n\n    def _convert_messages_to_openai_format(self, messages):\n        \"\"\"\n        Convert messages from Claude format to OpenAI format.\n\n        Claude format uses content blocks: tool_use / tool_result / text\n        OpenAI format uses tool_calls in assistant, role=tool for results\n        \"\"\"\n        if not messages:\n            return []\n\n        converted = []\n\n        for msg in messages:\n            role = msg.get(\"role\")\n            content = msg.get(\"content\")\n\n            # Already a simple string - pass through\n            if isinstance(content, str):\n                converted.append(msg)\n                continue\n\n            if not isinstance(content, list):\n                converted.append(msg)\n                continue\n\n            if role == \"user\":\n                text_parts = []\n                tool_results = []\n\n                for block in content:\n                    if not isinstance(block, dict):\n                        continue\n                    if block.get(\"type\") == \"text\":\n                        text_parts.append(block.get(\"text\", \"\"))\n                    elif block.get(\"type\") == \"tool_result\":\n                        tool_call_id = block.get(\"tool_use_id\") or \"\"\n                        result_content = block.get(\"content\", \"\")\n                        if not isinstance(result_content, str):\n                            result_content = json.dumps(result_content, ensure_ascii=False)\n                        tool_results.append({\n                            \"role\": \"tool\",\n                            \"tool_call_id\": tool_call_id,\n                            \"content\": result_content\n                        })\n\n                # Tool results first (must come right after assistant with tool_calls)\n                for tr in tool_results:\n                    converted.append(tr)\n\n                if text_parts:\n                    converted.append({\"role\": \"user\", \"content\": \"\\n\".join(text_parts)})\n\n            elif role == \"assistant\":\n                openai_msg = {\"role\": \"assistant\"}\n                text_parts = []\n                tool_calls = []\n\n                for block in content:\n                    if not isinstance(block, dict):\n                        continue\n                    if block.get(\"type\") == \"text\":\n                        text_parts.append(block.get(\"text\", \"\"))\n                    elif block.get(\"type\") == \"tool_use\":\n                        tool_calls.append({\n                            \"id\": block.get(\"id\"),\n                            \"type\": \"function\",\n                            \"function\": {\n                                \"name\": block.get(\"name\"),\n                                \"arguments\": json.dumps(block.get(\"input\", {}))\n                            }\n                        })\n\n                if text_parts:\n                    openai_msg[\"content\"] = \"\\n\".join(text_parts)\n                elif not tool_calls:\n                    openai_msg[\"content\"] = \"\"\n\n                if tool_calls:\n                    openai_msg[\"tool_calls\"] = tool_calls\n                    if not text_parts:\n                        openai_msg[\"content\"] = None\n\n              
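  # Example (illustrative values): a Claude block {\"type\": \"tool_use\", \"id\": \"toolu_1\", \"name\": \"get_weather\", \"input\": {\"city\": \"Beijing\"}}\n                # becomes {\"role\": \"assistant\", \"content\": None, \"tool_calls\": [{\"id\": \"toolu_1\", \"type\": \"function\", \"function\": {\"name\": \"get_weather\", \"arguments\": \"{\\\"city\\\": \\\"Beijing\\\"}\"}}]}\n              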
  converted.append(openai_msg)\n            else:\n                converted.append(msg)\n\n        return converted\n\n    def _convert_tools_to_openai_format(self, tools):\n        \"\"\"\n        Convert tools from Claude format to OpenAI format.\n\n        Claude: {name, description, input_schema}\n        OpenAI: {type: \"function\", function: {name, description, parameters}}\n        \"\"\"\n        if not tools:\n            return None\n\n        converted = []\n        for tool in tools:\n            # Already in OpenAI format\n            if \"type\" in tool and tool[\"type\"] == \"function\":\n                converted.append(tool)\n            else:\n                converted.append({\n                    \"type\": \"function\",\n                    \"function\": {\n                        \"name\": tool.get(\"name\"),\n                        \"description\": tool.get(\"description\"),\n                        \"parameters\": tool.get(\"input_schema\", {})\n                    }\n                })\n\n        return converted\n"
  },
  {
    "path": "models/doubao/doubao_session.py",
    "content": "from models.session_manager import Session\nfrom common.log import logger\n\n\nclass DoubaoSession(Session):\n    def __init__(self, session_id, system_prompt=None, model=\"doubao-seed-2-0-pro-260215\"):\n        super().__init__(session_id, system_prompt)\n        self.model = model\n        self.reset()\n\n    def discard_exceeding(self, max_tokens, cur_tokens=None):\n        precise = True\n        try:\n            cur_tokens = self.calc_tokens()\n        except Exception as e:\n            precise = False\n            if cur_tokens is None:\n                raise e\n            logger.debug(\"Exception when counting tokens precisely for query: {}\".format(e))\n        while cur_tokens > max_tokens:\n            if len(self.messages) > 2:\n                self.messages.pop(1)\n            elif len(self.messages) == 2 and self.messages[1][\"role\"] == \"assistant\":\n                self.messages.pop(1)\n                if precise:\n                    cur_tokens = self.calc_tokens()\n                else:\n                    cur_tokens = cur_tokens - max_tokens\n                break\n            elif len(self.messages) == 2 and self.messages[1][\"role\"] == \"user\":\n                logger.warn(\"user message exceed max_tokens. total_tokens={}\".format(cur_tokens))\n                break\n            else:\n                logger.debug(\"max_tokens={}, total_tokens={}, len(messages)={}\".format(\n                    max_tokens, cur_tokens, len(self.messages)))\n                break\n            if precise:\n                cur_tokens = self.calc_tokens()\n            else:\n                cur_tokens = cur_tokens - max_tokens\n        return cur_tokens\n\n    def calc_tokens(self):\n        return num_tokens_from_messages(self.messages, self.model)\n\n\ndef num_tokens_from_messages(messages, model):\n    tokens = 0\n    for msg in messages:\n        tokens += len(msg[\"content\"])\n    return tokens\n"
  },
  {
    "path": "models/gemini/google_gemini_bot.py",
    "content": "\"\"\"\nGoogle gemini bot\n\n@author zhayujie\n@Date 2023/12/15\n\"\"\"\n# encoding:utf-8\n\nimport base64\nimport json\nimport mimetypes\nimport os\nimport re\nimport time\nimport requests\nfrom models.bot import Bot\nfrom models.session_manager import SessionManager\nfrom bridge.context import ContextType, Context\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom config import conf\nfrom models.chatgpt.chat_gpt_session import ChatGPTSession\nfrom models.baidu.baidu_wenxin_session import BaiduWenxinSession\n\n\n# OpenAI对话模型API (可用)\nclass GoogleGeminiBot(Bot):\n\n    def __init__(self):\n        super().__init__()\n        self.sessions = SessionManager(ChatGPTSession, model=conf().get(\"model\") or \"gpt-3.5-turbo\")\n\n    @property\n    def api_key(self):\n        return conf().get(\"gemini_api_key\")\n\n    @property\n    def api_base(self):\n        base = conf().get(\"gemini_api_base\", \"\").strip()\n        if base:\n            return base.rstrip('/')\n        return \"https://generativelanguage.googleapis.com\"\n\n    def reply(self, query, context: Context = None) -> Reply:\n        session_id = None\n        try:\n            if context.type != ContextType.TEXT:\n                logger.warn(f\"[Gemini] Unsupported message type, type={context.type}\")\n                return Reply(ReplyType.TEXT, None)\n            logger.info(f\"[Gemini] query={query}\")\n            session_id = context[\"session_id\"]\n            session = self.sessions.session_query(query, session_id)\n            filtered_messages = self.filter_messages(session.messages)\n            logger.debug(f\"[Gemini] messages={filtered_messages}\")\n\n            response = self.call_with_tools(\n                messages=filtered_messages,\n                tools=None,\n                stream=False,\n                model=self.model\n            )\n\n            if isinstance(response, dict) and response.get(\"error\"):\n                error_message = response.get(\"message\", \"Failed to invoke [Gemini] api!\")\n                logger.error(f\"[Gemini] API error: {error_message}\")\n                self.sessions.session_reply(error_message, session_id)\n                return Reply(ReplyType.ERROR, error_message)\n\n            choices = response.get(\"choices\", []) if isinstance(response, dict) else []\n            if choices and choices[0].get(\"message\"):\n                reply_text = choices[0][\"message\"].get(\"content\")\n                if reply_text:\n                    logger.info(f\"[Gemini] reply={reply_text}\")\n                    self.sessions.session_reply(reply_text, session_id)\n                    return Reply(ReplyType.TEXT, reply_text)\n\n            logger.warning(\"[Gemini] No valid response generated. 
Checking safety ratings.\")\n            safety_ratings = response.get(\"safety_ratings\", []) if isinstance(response, dict) else []\n            if safety_ratings:\n                for rating in safety_ratings:\n                    category = rating.get(\"category\", \"UNKNOWN\")\n                    probability = rating.get(\"probability\", \"UNKNOWN\")\n                    logger.warning(f\"[Gemini] Safety rating: {category} - {probability}\")\n\n            error_message = \"No valid response generated due to safety constraints.\"\n            self.sessions.session_reply(error_message, session_id)\n            return Reply(ReplyType.ERROR, error_message)\n                    \n        except Exception as e:\n            logger.error(f\"[Gemini] Error generating response: {str(e)}\", exc_info=True)\n            error_message = \"Failed to invoke [Gemini] api!\"\n            if session_id:\n                self.sessions.session_reply(error_message, session_id)\n            return Reply(ReplyType.ERROR, error_message)\n            \n    def _convert_to_gemini_messages(self, messages: list):\n        res = []\n        for msg in messages:\n            if msg.get(\"role\") == \"user\":\n                role = \"user\"\n            elif msg.get(\"role\") == \"assistant\":\n                role = \"model\"\n            elif msg.get(\"role\") == \"system\":\n                role = \"user\"\n            else:\n                continue\n            res.append({\n                \"role\": role,\n                \"parts\": [{\"text\": msg.get(\"content\")}]\n            })\n        return res\n\n    @staticmethod\n    def filter_messages(messages: list):\n        res = []\n        turn = \"user\"\n        if not messages:\n            return res\n        for i in range(len(messages) - 1, -1, -1):\n            message = messages[i]\n            role = message.get(\"role\")\n            if role == \"system\":\n                res.insert(0, message)\n                continue\n            if role != turn:\n                continue\n            res.insert(0, message)\n            if turn == \"user\":\n                turn = \"assistant\"\n            elif turn == \"assistant\":\n                turn = \"user\"\n        return res\n\n    @staticmethod\n    def _extract_image_paths_from_text(content: str):\n        if not isinstance(content, str):\n            return \"\", []\n        pattern = r\"\\[图片:\\s*([^\\]]+)\\]\"\n        image_paths = [m.strip().strip(\"'\\\"\") for m in re.findall(pattern, content) if m.strip()]\n        cleaned_text = re.sub(pattern, \"\", content)\n        cleaned_text = re.sub(r\"\\n{3,}\", \"\\n\\n\", cleaned_text).strip()\n        return cleaned_text, image_paths\n\n    @staticmethod\n    def _build_image_inline_part(image_path: str):\n        if not image_path:\n            return None\n        try:\n            if image_path.startswith(\"file://\"):\n                image_path = image_path[7:]\n\n            image_path = os.path.expanduser(image_path)\n            if not os.path.exists(image_path):\n                logger.warning(f\"[Gemini] Image file not found: {image_path}\")\n                return None\n\n            with open(image_path, \"rb\") as f:\n                image_bytes = f.read()\n\n            mime_type = mimetypes.guess_type(image_path)[0] or \"image/png\"\n            if not mime_type.startswith(\"image/\"):\n                mime_type = \"image/png\"\n\n            return {\n                \"inlineData\": {\n                    \"mimeType\": 
mime_type,\n                    \"data\": base64.b64encode(image_bytes).decode(\"utf-8\")\n                }\n            }\n        except Exception as e:\n            logger.warning(f\"[Gemini] Failed to build inline image part from path={image_path}, err={e}\")\n            return None\n\n    @staticmethod\n    def _build_inline_part_from_image_url(image_url):\n        if not image_url:\n            return None\n\n        if isinstance(image_url, dict):\n            image_url = image_url.get(\"url\")\n        if not image_url or not isinstance(image_url, str):\n            return None\n\n        if image_url.startswith(\"data:\"):\n            match = re.match(r\"^data:([^;]+);base64,(.+)$\", image_url, re.DOTALL)\n            if not match:\n                logger.warning(\"[Gemini] Invalid data URL for image block\")\n                return None\n            return {\n                \"inlineData\": {\n                    \"mimeType\": match.group(1),\n                    \"data\": match.group(2).strip()\n                }\n            }\n\n        if image_url.startswith(\"file://\") or os.path.exists(os.path.expanduser(image_url)):\n            return GoogleGeminiBot._build_image_inline_part(image_url)\n\n        if image_url.startswith(\"http://\") or image_url.startswith(\"https://\"):\n            try:\n                response = requests.get(image_url, timeout=20)\n                if response.status_code != 200:\n                    logger.warning(f\"[Gemini] Failed to fetch remote image: status={response.status_code}, url={image_url}\")\n                    return None\n                mime_type = response.headers.get(\"Content-Type\", \"image/png\").split(\";\")[0].strip()\n                if not mime_type.startswith(\"image/\"):\n                    mime_type = \"image/png\"\n                return {\n                    \"inlineData\": {\n                        \"mimeType\": mime_type,\n                        \"data\": base64.b64encode(response.content).decode(\"utf-8\")\n                    }\n                }\n            except Exception as e:\n                logger.warning(f\"[Gemini] Failed to download remote image: url={image_url}, err={e}\")\n                return None\n\n        logger.warning(f\"[Gemini] Unsupported image URL format: {image_url[:120]}\")\n        return None\n\n    def call_with_tools(self, messages, tools=None, stream=False, **kwargs):\n        \"\"\"\n        Call Gemini API with tool support using REST API (following official docs)\n        \n        Args:\n            messages: List of messages (OpenAI format)\n            tools: List of tool definitions (OpenAI/Claude format)\n            stream: Whether to use streaming\n            **kwargs: Additional parameters (system, max_tokens, temperature, etc.)\n            \n        Returns:\n            Formatted response compatible with OpenAI format or generator for streaming\n        \"\"\"\n        try:\n            model_name = kwargs.get(\"model\", self.model or \"gemini-1.5-flash\")\n            \n            # Build REST API payload\n            payload = {\"contents\": []}\n            inline_image_count = 0\n\n            # Keep legacy behavior: disable Gemini safety blocking like old SDK path.\n            payload[\"safetySettings\"] = [\n                {\"category\": \"HARM_CATEGORY_HATE_SPEECH\", \"threshold\": \"BLOCK_NONE\"},\n                {\"category\": \"HARM_CATEGORY_HARASSMENT\", \"threshold\": \"BLOCK_NONE\"},\n                {\"category\": 
\"HARM_CATEGORY_SEXUALLY_EXPLICIT\", \"threshold\": \"BLOCK_NONE\"},\n                {\"category\": \"HARM_CATEGORY_DANGEROUS_CONTENT\", \"threshold\": \"BLOCK_NONE\"},\n            ]\n            \n            # Extract and set system instruction\n            system_prompt = kwargs.get(\"system\", \"\")\n            if not system_prompt:\n                for msg in messages:\n                    if msg.get(\"role\") == \"system\":\n                        system_prompt = msg[\"content\"]\n                        break\n            \n            if system_prompt:\n                payload[\"system_instruction\"] = {\n                    \"parts\": [{\"text\": system_prompt}]\n                }\n            \n            # Convert messages to Gemini format\n            for msg in messages:\n                role = msg.get(\"role\")\n                content = msg.get(\"content\", \"\")\n                \n                if role == \"system\":\n                    continue\n                \n                # Convert role\n                gemini_role = \"user\" if role in [\"user\", \"tool\"] else \"model\"\n                \n                # Handle different content formats\n                parts = []\n                \n                if isinstance(content, str):\n                    # Text with optional [图片: /path/to/file] markers\n                    cleaned_text, image_paths = self._extract_image_paths_from_text(content)\n                    if cleaned_text:\n                        parts.append({\"text\": cleaned_text})\n                    image_added = False\n                    for image_path in image_paths:\n                        image_part = self._build_image_inline_part(image_path)\n                        if image_part:\n                            parts.append(image_part)\n                            image_added = True\n                            inline_image_count += 1\n                    if not cleaned_text and not image_added and content:\n                        parts.append({\"text\": content})\n                    \n                elif isinstance(content, list):\n                    # List of content blocks (Claude format)\n                    for block in content:\n                        if not isinstance(block, dict):\n                            if isinstance(block, str):\n                                parts.append({\"text\": block})\n                            continue\n                        \n                        block_type = block.get(\"type\")\n                        \n                        if block_type == \"text\":\n                            # Text block with optional image markers\n                            block_text = block.get(\"text\", \"\")\n                            cleaned_text, image_paths = self._extract_image_paths_from_text(block_text)\n                            if cleaned_text:\n                                parts.append({\"text\": cleaned_text})\n                            for image_path in image_paths:\n                                image_part = self._build_image_inline_part(image_path)\n                                if image_part:\n                                    parts.append(image_part)\n\n                        elif block_type in [\"image\", \"image_url\"]:\n                            # OpenAI format: {\"type\":\"image_url\",\"image_url\":{\"url\":\"...\"}}\n                            # Claude format: {\"type\":\"image\",\"source\":{\"type\":\"base64\",\"media_type\":\"...\",\"data\":\"...\"}}\n              
              image_part = None\n                            if block_type == \"image\":\n                                source = block.get(\"source\", {})\n                                if isinstance(source, dict) and source.get(\"type\") == \"base64\" and source.get(\"data\"):\n                                    image_part = {\n                                        \"inlineData\": {\n                                            \"mimeType\": source.get(\"media_type\", \"image/png\"),\n                                            \"data\": source.get(\"data\")\n                                        }\n                                    }\n                                elif block.get(\"image_url\"):\n                                    image_part = self._build_inline_part_from_image_url(block.get(\"image_url\"))\n                            else:\n                                image_part = self._build_inline_part_from_image_url(block.get(\"image_url\"))\n\n                            if image_part:\n                                parts.append(image_part)\n                                inline_image_count += 1\n                            else:\n                                logger.warning(f\"[Gemini] Skip invalid image block: {str(block)[:200]}\")\n                            \n                        elif block_type == \"tool_result\":\n                            # Convert Claude tool_result to Gemini functionResponse\n                            tool_use_id = block.get(\"tool_use_id\")\n                            tool_content = block.get(\"content\", \"\")\n                            \n                            # Try to parse tool content as JSON\n                            try:\n                                if isinstance(tool_content, str):\n                                    tool_result_data = json.loads(tool_content)\n                                else:\n                                    tool_result_data = tool_content\n                            except Exception:\n                                tool_result_data = {\"result\": tool_content}\n                            \n                            # Find the tool name from previous messages\n                            # Look for the corresponding tool_call in model's message\n                            tool_name = None\n                            for prev_msg in reversed(messages):\n                                if prev_msg.get(\"role\") == \"assistant\":\n                                    prev_content = prev_msg.get(\"content\", [])\n                                    if isinstance(prev_content, list):\n                                        for prev_block in prev_content:\n                                            if isinstance(prev_block, dict) and prev_block.get(\"type\") == \"tool_use\":\n                                                if prev_block.get(\"id\") == tool_use_id:\n                                                    tool_name = prev_block.get(\"name\")\n                                                    break\n                                    if tool_name:\n                                        break\n                            \n                            # Gemini functionResponse format\n                            parts.append({\n                                \"functionResponse\": {\n                                    \"name\": tool_name or \"unknown\",\n                                    \"response\": tool_result_data\n                                }\n      
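  # Gemini's functionResponse is keyed by function *name* rather than call id,\n                            # hence the reverse lookup over prior assistant messages above.\n      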
                      })\n                            \n                        elif \"text\" in block:\n                            # Generic text field\n                            parts.append({\"text\": block[\"text\"]})\n                \n                if parts:\n                    payload[\"contents\"].append({\n                        \"role\": gemini_role,\n                        \"parts\": parts\n                    })\n\n            if inline_image_count > 0:\n                logger.info(f\"[Gemini] Multimodal request includes {inline_image_count} image part(s)\")\n            \n            # Generation config\n            gen_config = {}\n            if kwargs.get(\"temperature\") is not None:\n                gen_config[\"temperature\"] = kwargs[\"temperature\"]\n\n            if gen_config:\n                payload[\"generationConfig\"] = gen_config\n            \n            # Convert tools to Gemini format (REST API style)\n            if tools:\n                gemini_tools = self._convert_tools_to_gemini_rest_format(tools)\n                if gemini_tools:\n                    payload[\"tools\"] = gemini_tools\n            \n            # Make REST API call\n            base_url = f\"{self.api_base}/v1beta\"\n            endpoint = f\"{base_url}/models/{model_name}:generateContent\"\n            if stream:\n                endpoint = f\"{base_url}/models/{model_name}:streamGenerateContent?alt=sse\"\n            \n            headers = {\n                \"x-goog-api-key\": self.api_key,\n                \"Content-Type\": \"application/json\"\n            }\n            \n            response = requests.post(\n                endpoint,\n                headers=headers,\n                json=payload,\n                stream=stream,\n                timeout=60\n            )\n            \n            # Check HTTP status for stream mode (for non-stream, it's checked in handler)\n            if stream and response.status_code != 200:\n                error_text = response.text\n                logger.error(f\"[Gemini] API error ({response.status_code}): {error_text}\")\n                def error_generator():\n                    yield {\n                        \"error\": True,\n                        \"message\": f\"Gemini API error: {error_text}\",\n                        \"status_code\": response.status_code\n                    }\n                return error_generator()\n            \n            if stream:\n                return self._handle_gemini_rest_stream_response(response, model_name)\n            else:\n                return self._handle_gemini_rest_sync_response(response, model_name)\n                \n        except Exception as e:\n            logger.error(f\"[Gemini] call_with_tools error: {e}\", exc_info=True)\n            error_msg = str(e)  # Capture error message before creating generator\n            if stream:\n                def error_generator():\n                    yield {\n                        \"error\": True,\n                        \"message\": error_msg,\n                        \"status_code\": 500\n                    }\n                return error_generator()\n            else:\n                return {\n                    \"error\": True,\n                    \"message\": str(e),\n                    \"status_code\": 500\n                }\n    \n    def _convert_tools_to_gemini_rest_format(self, tools_list):\n        \"\"\"\n        Convert tools to Gemini REST API format\n        \n        Handles both OpenAI and Claude/Agent 
formats.\n        Returns: [{\"functionDeclarations\": [...]}]\n        \"\"\"\n        function_declarations = []\n        \n        for tool in tools_list:\n            # Extract name, description, and parameters based on format\n            if tool.get(\"type\") == \"function\":\n                # OpenAI format: {\"type\": \"function\", \"function\": {...}}\n                func = tool.get(\"function\", {})\n                name = func.get(\"name\")\n                description = func.get(\"description\", \"\")\n                parameters = func.get(\"parameters\", {})\n            else:\n                # Claude/Agent format: {\"name\": \"...\", \"description\": \"...\", \"input_schema\": {...}}\n                name = tool.get(\"name\")\n                description = tool.get(\"description\", \"\")\n                parameters = tool.get(\"input_schema\", {})\n            \n            if not name:\n                logger.warning(f\"[Gemini] Skipping tool without name: {tool}\")\n                continue\n            \n            function_declarations.append({\n                \"name\": name,\n                \"description\": description,\n                \"parameters\": parameters\n            })\n        \n        # All functionDeclarations must be in a single tools object (per Gemini REST API spec)\n        return [{\n            \"functionDeclarations\": function_declarations\n        }] if function_declarations else []\n    \n    def _handle_gemini_rest_sync_response(self, response, model_name):\n        \"\"\"Handle Gemini REST API sync response and convert to OpenAI format\"\"\"\n        try:\n            if response.status_code != 200:\n                error_text = response.text\n                logger.error(f\"[Gemini] API error ({response.status_code}): {error_text}\")\n                return {\n                    \"error\": True,\n                    \"message\": f\"Gemini API error: {error_text}\",\n                    \"status_code\": response.status_code\n                }\n            \n            data = response.json()\n            logger.debug(f\"[Gemini] Response data: {json.dumps(data, ensure_ascii=False)[:500]}\")\n            \n            # Extract from Gemini response format\n            candidates = data.get(\"candidates\", [])\n            if not candidates:\n                logger.warning(\"[Gemini] No candidates in response\")\n                prompt_feedback = data.get(\"promptFeedback\", {})\n                return {\n                    \"error\": True,\n                    \"message\": \"No candidates in response\",\n                    \"status_code\": 500,\n                    \"safety_ratings\": prompt_feedback.get(\"safetyRatings\", [])\n                }\n            \n            candidate = candidates[0]\n            content = candidate.get(\"content\", {})\n            parts = content.get(\"parts\", [])\n            safety_ratings = candidate.get(\"safetyRatings\", [])\n            \n            logger.debug(f\"[Gemini] Candidate parts count: {len(parts)}\")\n            \n            # Extract text and function calls\n            text_content = \"\"\n            tool_calls = []\n            \n            for part in parts:\n                # Check for text\n                if \"text\" in part:\n                    text_content += part[\"text\"]\n                    logger.debug(f\"[Gemini] Text part: {part['text'][:100]}...\")\n                \n                # Check for functionCall (per REST API docs)\n                if \"functionCall\" 
in part:\n                    fc = part[\"functionCall\"]\n                    logger.info(f\"[Gemini] Function call detected: {fc.get('name')}\")\n                    \n                    tool_calls.append({\n                        \"id\": f\"call_{int(time.time() * 1000000)}\",\n                        \"type\": \"function\",\n                        \"function\": {\n                            \"name\": fc.get(\"name\"),\n                            \"arguments\": json.dumps(fc.get(\"args\", {}))\n                        }\n                    })\n            \n            logger.info(f\"[Gemini] Response: text={len(text_content)} chars, tool_calls={len(tool_calls)}\")\n            \n            # Build OpenAI format response\n            message_dict = {\n                \"role\": \"assistant\",\n                \"content\": text_content or None\n            }\n            if tool_calls:\n                message_dict[\"tool_calls\"] = tool_calls\n            \n            return {\n                \"id\": f\"chatcmpl-{time.time()}\",\n                \"object\": \"chat.completion\",\n                \"created\": int(time.time()),\n                \"model\": model_name,\n                \"choices\": [{\n                    \"index\": 0,\n                    \"message\": message_dict,\n                    \"finish_reason\": \"tool_calls\" if tool_calls else \"stop\"\n                }],\n                \"usage\": data.get(\"usageMetadata\", {}),\n                \"safety_ratings\": safety_ratings\n            }\n            \n        except Exception as e:\n            logger.error(f\"[Gemini] sync response error: {e}\", exc_info=True)\n            return {\n                \"error\": True,\n                \"message\": str(e),\n                \"status_code\": 500\n            }\n    \n    def _handle_gemini_rest_stream_response(self, response, model_name):\n        \"\"\"Handle Gemini REST API stream response\"\"\"\n        try:\n            all_tool_calls = []\n            has_sent_tool_calls = False\n            has_content = False  # Track if any content was sent\n            chunk_count = 0\n            last_finish_reason = None\n            last_safety_ratings = None\n            \n            for line in response.iter_lines():\n                if not line:\n                    continue\n                \n                line = line.decode('utf-8')\n                \n                # Skip SSE prefixes\n                if line.startswith('data: '):\n                    line = line[6:]\n                \n                if not line or line == '[DONE]':\n                    continue\n                \n                try:\n                    chunk_data = json.loads(line)\n                    chunk_count += 1\n                    \n                    candidates = chunk_data.get(\"candidates\", [])\n                    if not candidates:\n                        logger.debug(\"[Gemini] No candidates in chunk\")\n                        continue\n                    \n                    candidate = candidates[0]\n                    \n                    # Record finish_reason and safety_ratings\n                    if \"finishReason\" in candidate:\n                        last_finish_reason = candidate[\"finishReason\"]\n                    if \"safetyRatings\" in candidate:\n                        last_safety_ratings = candidate[\"safetyRatings\"]\n                    \n                    content = candidate.get(\"content\", {})\n                    parts = content.get(\"parts\", 
[])\n                    \n                    if not parts:\n                        logger.debug(\"[Gemini] No parts in candidate content\")\n                    \n                    # Stream text content\n                    for part in parts:\n                        if \"text\" in part and part[\"text\"]:\n                            has_content = True\n                            yield {\n                                \"id\": f\"chatcmpl-{time.time()}\",\n                                \"object\": \"chat.completion.chunk\",\n                                \"created\": int(time.time()),\n                                \"model\": model_name,\n                                \"choices\": [{\n                                    \"index\": 0,\n                                    \"delta\": {\"content\": part[\"text\"]},\n                                    \"finish_reason\": None\n                                }]\n                            }\n                        \n                        # Collect function calls\n                        if \"functionCall\" in part:\n                            fc = part[\"functionCall\"]\n                            logger.info(f\"[Gemini] Function call: {fc.get('name')}\")\n                            all_tool_calls.append({\n                                \"index\": len(all_tool_calls),  # Add index to differentiate multiple tool calls\n                                \"id\": f\"call_{int(time.time() * 1000000)}_{len(all_tool_calls)}\",\n                                \"type\": \"function\",\n                                \"function\": {\n                                    \"name\": fc.get(\"name\"),\n                                    \"arguments\": json.dumps(fc.get(\"args\", {}))\n                                }\n                            })\n                    \n                except json.JSONDecodeError as je:\n                    logger.debug(f\"[Gemini] JSON decode error: {je}\")\n                    continue\n            \n            # Send tool calls if any were collected\n            if all_tool_calls and not has_sent_tool_calls:\n                yield {\n                    \"id\": f\"chatcmpl-{time.time()}\",\n                    \"object\": \"chat.completion.chunk\",\n                    \"created\": int(time.time()),\n                    \"model\": model_name,\n                    \"choices\": [{\n                        \"index\": 0,\n                        \"delta\": {\"tool_calls\": all_tool_calls},\n                        \"finish_reason\": None\n                    }]\n                }\n                has_sent_tool_calls = True\n            \n            # Log a detailed warning if the response came back empty\n            if not has_content and not all_tool_calls:\n                logger.warning(\"[Gemini] ⚠️  Empty response detected!\")\n            \n            # Final chunk\n            yield {\n                \"id\": f\"chatcmpl-{time.time()}\",\n                \"object\": \"chat.completion.chunk\",\n                \"created\": int(time.time()),\n                \"model\": model_name,\n                \"choices\": [{\n                    \"index\": 0,\n                    \"delta\": {},\n                    \"finish_reason\": \"tool_calls\" if all_tool_calls else \"stop\"\n                }]\n            }\n                    \n        except Exception as e:\n            logger.error(f\"[Gemini] stream response error: {e}\", exc_info=True)\n            error_msg = str(e)\n            yield {\n                \"error\": True,\n       
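  # Same OpenAI-style error dict as the sync path, so callers can check\n                # for \"error\" before reading \"choices\".\n       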
         \"message\": error_msg,\n                \"status_code\": 500\n            }\n    \n    def _convert_tools_to_gemini_format(self, openai_tools):\n        \"\"\"Convert OpenAI tool format to Gemini function declarations\"\"\"\n        import google.generativeai as genai\n        \n        gemini_functions = []\n        for tool in openai_tools:\n            if tool.get(\"type\") == \"function\":\n                func = tool.get(\"function\", {})\n                gemini_functions.append(\n                    genai.protos.FunctionDeclaration(\n                        name=func.get(\"name\"),\n                        description=func.get(\"description\", \"\"),\n                        parameters=func.get(\"parameters\", {})\n                    )\n                )\n        \n        if gemini_functions:\n            return [genai.protos.Tool(function_declarations=gemini_functions)]\n        return None\n    \n    def _handle_gemini_sync_response(self, model, messages, request_params, model_name):\n        \"\"\"Handle synchronous Gemini API response\"\"\"\n        import json\n        \n        response = model.generate_content(messages, **request_params)\n        \n        # Extract text content and function calls\n        text_content = \"\"\n        tool_calls = []\n        \n        if response.candidates and response.candidates[0].content:\n            for part in response.candidates[0].content.parts:\n                if hasattr(part, 'text') and part.text:\n                    text_content += part.text\n                elif hasattr(part, 'function_call') and part.function_call:\n                    # Convert Gemini function call to OpenAI format\n                    func_call = part.function_call\n                    tool_calls.append({\n                        \"id\": f\"call_{hash(func_call.name)}\",\n                        \"type\": \"function\",\n                        \"function\": {\n                            \"name\": func_call.name,\n                            \"arguments\": json.dumps(dict(func_call.args))\n                        }\n                    })\n        \n        # Build message in OpenAI format\n        message = {\n            \"role\": \"assistant\",\n            \"content\": text_content\n        }\n        if tool_calls:\n            message[\"tool_calls\"] = tool_calls\n        \n        # Format response to match OpenAI structure\n        formatted_response = {\n            \"id\": f\"gemini_{int(time.time())}\",\n            \"object\": \"chat.completion\",\n            \"created\": int(time.time()),\n            \"model\": model_name,\n            \"choices\": [\n                {\n                    \"index\": 0,\n                    \"message\": message,\n                    \"finish_reason\": \"stop\" if not tool_calls else \"tool_calls\"\n                }\n            ],\n            \"usage\": {\n                \"prompt_tokens\": 0,  # Gemini doesn't provide token counts in the same way\n                \"completion_tokens\": 0,\n                \"total_tokens\": 0\n            }\n        }\n        \n        logger.info(f\"[Gemini] call_with_tools reply, model={model_name}\")\n        return formatted_response\n    \n    def _handle_gemini_stream_response(self, model, messages, request_params, model_name):\n        \"\"\"Handle streaming Gemini API response\"\"\"\n        import json\n        \n        try:\n            response_stream = model.generate_content(messages, stream=True, **request_params)\n            \n            for 
chunk in response_stream:\n                if chunk.candidates and chunk.candidates[0].content:\n                    for part in chunk.candidates[0].content.parts:\n                        if hasattr(part, 'text') and part.text:\n                            # Text content\n                            yield {\n                                \"id\": f\"gemini_{int(time.time())}\",\n                                \"object\": \"chat.completion.chunk\",\n                                \"created\": int(time.time()),\n                                \"model\": model_name,\n                                \"choices\": [{\n                                    \"index\": 0,\n                                    \"delta\": {\"content\": part.text},\n                                    \"finish_reason\": None\n                                }]\n                            }\n                        elif hasattr(part, 'function_call') and part.function_call:\n                            # Function call\n                            func_call = part.function_call\n                            yield {\n                                \"id\": f\"gemini_{int(time.time())}\",\n                                \"object\": \"chat.completion.chunk\",\n                                \"created\": int(time.time()),\n                                \"model\": model_name,\n                                \"choices\": [{\n                                    \"index\": 0,\n                                    \"delta\": {\n                                        \"tool_calls\": [{\n                                            \"index\": 0,\n                                            \"id\": f\"call_{hash(func_call.name)}\",\n                                            \"type\": \"function\",\n                                            \"function\": {\n                                                \"name\": func_call.name,\n                                                \"arguments\": json.dumps(dict(func_call.args))\n                                            }\n                                        }]\n                                    },\n                                    \"finish_reason\": None\n                                }]\n                            }\n                            \n        except Exception as e:\n            logger.error(f\"[Gemini] stream response error: {e}\")\n            yield {\n                \"error\": True,\n                \"message\": str(e),\n                \"status_code\": 500\n            }\n"
  },
  {
    "path": "models/linkai/link_ai_bot.py",
    "content": "# access LinkAI knowledge base platform\n# docs: https://link-ai.tech/platform/link-app/wechat\n\nimport re\nimport time\nimport requests\nimport json\nimport config\nfrom models.bot import Bot\nfrom models.openai_compatible_bot import OpenAICompatibleBot\nfrom models.chatgpt.chat_gpt_session import ChatGPTSession\nfrom models.session_manager import SessionManager\nfrom bridge.context import Context, ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom config import conf, pconf\nimport threading\nfrom common import memory, utils\nimport base64\nimport os\n\nclass LinkAIBot(Bot, OpenAICompatibleBot):\n    # authentication failed\n    AUTH_FAILED_CODE = 401\n    NO_QUOTA_CODE = 406\n\n    def __init__(self):\n        super().__init__()\n        self.sessions = LinkAISessionManager(LinkAISession, model=conf().get(\"model\") or \"gpt-3.5-turbo\")\n        self.args = {}\n    \n    def get_api_config(self):\n        \"\"\"Get API configuration for OpenAI-compatible base class\"\"\"\n        return {\n            'api_key': conf().get(\"open_ai_api_key\"),  # LinkAI uses OpenAI-compatible key\n            'api_base': conf().get(\"open_ai_api_base\", \"https://api.link-ai.tech/v1\"),\n            'model': conf().get(\"model\", \"gpt-3.5-turbo\"),\n            'default_temperature': conf().get(\"temperature\", 0.9),\n            'default_top_p': conf().get(\"top_p\", 1.0),\n            'default_frequency_penalty': conf().get(\"frequency_penalty\", 0.0),\n            'default_presence_penalty': conf().get(\"presence_penalty\", 0.0),\n        }\n\n    def reply(self, query, context: Context = None) -> Reply:\n        if context.type == ContextType.TEXT:\n            return self._chat(query, context)\n        elif context.type == ContextType.IMAGE_CREATE:\n            if not conf().get(\"text_to_image\"):\n                logger.warn(\"[LinkAI] text_to_image is not enabled, ignore the IMAGE_CREATE request\")\n                return Reply(ReplyType.TEXT, \"\")\n            ok, res = self.create_img(query, 0)\n            if ok:\n                reply = Reply(ReplyType.IMAGE_URL, res)\n            else:\n                reply = Reply(ReplyType.ERROR, res)\n            return reply\n        else:\n            reply = Reply(ReplyType.ERROR, \"Bot不支持处理{}类型的消息\".format(context.type))\n            return reply\n\n    def _chat(self, query, context, retry_count=0) -> Reply:\n        \"\"\"\n        发起对话请求\n        :param query: 请求提示词\n        :param context: 对话上下文\n        :param retry_count: 当前递归重试次数\n        :return: 回复\n        \"\"\"\n        if retry_count > 2:\n            # exit from retry 2 times\n            logger.warn(\"[LINKAI] failed after maximum number of retry times\")\n            return Reply(ReplyType.TEXT, \"请再问我一次吧\")\n\n        try:\n            # load config\n            if context.get(\"generate_breaked_by\"):\n                logger.info(f\"[LINKAI] won't set appcode because a plugin ({context['generate_breaked_by']}) affected the context\")\n                app_code = None\n            else:\n                plugin_app_code = self._find_group_mapping_code(context)\n                app_code = context.kwargs.get(\"app_code\") or plugin_app_code or conf().get(\"linkai_app_code\")\n            linkai_api_key = conf().get(\"linkai_api_key\")\n\n            session_id = context[\"session_id\"]\n            session_message = self.sessions.session_msg_query(query, session_id)\n            logger.debug(f\"[LinkAI] 
session={session_message}, session_id={session_id}\")\n\n            # image process\n            img_cache = memory.USER_IMAGE_CACHE.get(session_id)\n            if img_cache:\n                messages = self._process_image_msg(app_code=app_code, session_id=session_id, query=query, img_cache=img_cache)\n                if messages:\n                    session_message = messages\n\n            model = conf().get(\"model\")\n            # remove system message\n            if session_message[0].get(\"role\") == \"system\":\n                if app_code or model == \"wenxin\":\n                    session_message.pop(0)\n            body = {\n                \"app_code\": app_code,\n                \"messages\": session_message,\n                \"model\": model,     # chat model name; supports gpt-3.5-turbo, gpt-3.5-turbo-16k, gpt-4, wenxin, xunfei\n                \"temperature\": conf().get(\"temperature\"),\n                \"top_p\": conf().get(\"top_p\", 1),\n                \"frequency_penalty\": conf().get(\"frequency_penalty\", 0.0),  # in [-2, 2]; higher values encourage more varied content\n                \"presence_penalty\": conf().get(\"presence_penalty\", 0.0),  # in [-2, 2]; higher values encourage more varied content\n                \"session_id\": session_id,\n                \"sender_id\": session_id,\n                \"channel_type\": context.get(\"channel_type\") or conf().get(\"channel_type\", \"web\")\n            }\n            try:\n                from linkai import LinkAIClient\n                client_id = LinkAIClient.fetch_client_id()\n                if client_id:\n                    body[\"client_id\"] = client_id\n                    # start: client info deliver\n                    if context.kwargs.get(\"msg\"):\n                        body[\"session_id\"] = context.kwargs.get(\"msg\").from_user_id\n                        if context.kwargs.get(\"msg\").is_group:\n                            body[\"is_group\"] = True\n                            body[\"group_name\"] = context.kwargs.get(\"msg\").from_user_nickname\n                            body[\"sender_name\"] = context.kwargs.get(\"msg\").actual_user_nickname\n                        else:\n                            if body.get(\"channel_type\") in [\"wechatcom_app\"]:\n                                body[\"sender_name\"] = context.kwargs.get(\"msg\").from_user_id\n                            else:\n                                body[\"sender_name\"] = context.kwargs.get(\"msg\").from_user_nickname\n\n            except Exception as e:\n                pass\n            file_id = context.kwargs.get(\"file_id\")\n            if file_id:\n                body[\"file_id\"] = file_id\n            logger.info(f\"[LINKAI] query={query}, app_code={app_code}, model={body.get('model')}, file_id={file_id}\")\n            headers = {\"Authorization\": \"Bearer \" + linkai_api_key}\n\n            # do http request\n            base_url = conf().get(\"linkai_api_base\", \"https://api.link-ai.tech\")\n            res = requests.post(url=base_url + \"/v1/chat/completions\", json=body, headers=headers,\n                                timeout=conf().get(\"request_timeout\", 180))\n            if res.status_code == 200:\n                # execute success\n                response = res.json()\n                reply_content = response[\"choices\"][0][\"message\"][\"content\"]\n                total_tokens = response[\"usage\"][\"total_tokens\"]\n                res_code = response.get('code')\n                logger.info(f\"[LINKAI] reply={reply_content}, total_tokens={total_tokens}, 
res_code={res_code}\")\n                if res_code == 429:\n                    logger.warn(f\"[LINKAI] 用户访问超出限流配置，sender_id={body.get('sender_id')}\")\n                else:\n                    self.sessions.session_reply(reply_content, session_id, total_tokens, query=query)\n                agent_suffix = self._fetch_agent_suffix(response)\n                if agent_suffix:\n                    reply_content += agent_suffix\n                if not agent_suffix:\n                    knowledge_suffix = self._fetch_knowledge_search_suffix(response)\n                    if knowledge_suffix:\n                        reply_content += knowledge_suffix\n                # image process\n                if response[\"choices\"][0].get(\"img_urls\"):\n                    thread = threading.Thread(target=self._send_image, args=(context.get(\"channel\"), context, response[\"choices\"][0].get(\"img_urls\")))\n                    thread.start()\n                    reply_content = response[\"choices\"][0].get(\"text_content\")\n                if reply_content:\n                    reply_content = self._process_url(reply_content)\n                return Reply(ReplyType.TEXT, reply_content)\n\n            else:\n                response = res.json()\n                error = response.get(\"error\")\n                logger.error(f\"[LINKAI] chat failed, status_code={res.status_code}, \"\n                             f\"msg={error.get('message')}, type={error.get('type')}\")\n\n                if res.status_code >= 500:\n                    # server error, need retry\n                    time.sleep(2)\n                    logger.warn(f\"[LINKAI] do retry, times={retry_count}\")\n                    return self._chat(query, context, retry_count + 1)\n\n                error_reply = \"提问太快啦，请休息一下再问我吧\"\n                if res.status_code == 409:\n                    error_reply = \"这个问题我还没有学会，请问我其它问题吧\"\n                return Reply(ReplyType.TEXT, error_reply)\n\n        except Exception as e:\n            logger.exception(e)\n            # retry\n            time.sleep(2)\n            logger.warn(f\"[LINKAI] do retry, times={retry_count}\")\n            return self._chat(query, context, retry_count + 1)\n\n    def _process_image_msg(self, app_code: str, session_id: str, query:str, img_cache: dict):\n        try:\n            enable_image_input = False\n            app_info = self._fetch_app_info(app_code)\n            if not app_info:\n                logger.debug(f\"[LinkAI] not found app, can't process images, app_code={app_code}\")\n                return None\n            plugins = app_info.get(\"data\").get(\"plugins\")\n            for plugin in plugins:\n                if plugin.get(\"input_type\") and \"IMAGE\" in plugin.get(\"input_type\"):\n                    enable_image_input = True\n            if not enable_image_input:\n                return\n            msg = img_cache.get(\"msg\")\n            path = img_cache.get(\"path\")\n            msg.prepare()\n            logger.info(f\"[LinkAI] query with images, path={path}\")\n            messages = self._build_vision_msg(query, path)\n            memory.USER_IMAGE_CACHE[session_id] = None\n            return messages\n        except Exception as e:\n            logger.exception(e)\n\n    def _find_group_mapping_code(self, context):\n        try:\n            if context.kwargs.get(\"isgroup\"):\n                group_name = context.kwargs.get(\"msg\").from_user_nickname\n                if config.plugin_config and 
if config.plugin_config and config.plugin_config.get(\"linkai\"):\n                    linkai_config = config.plugin_config.get(\"linkai\")\n                    group_mapping = linkai_config.get(\"group_app_map\")\n                    if group_mapping and group_name:\n                        return group_mapping.get(group_name)\n        except Exception as e:\n            logger.exception(e)\n            return None\n\n    def _build_vision_msg(self, query: str, path: str):\n        try:\n            suffix = utils.get_path_suffix(path)\n            with open(path, \"rb\") as file:\n                base64_str = base64.b64encode(file.read()).decode('utf-8')\n                messages = [{\n                    \"role\": \"user\",\n                    \"content\": [\n                        {\n                            \"type\": \"text\",\n                            \"text\": query\n                        },\n                        {\n                            \"type\": \"image_url\",\n                            \"image_url\": {\n                                \"url\": f\"data:image/{suffix};base64,{base64_str}\"\n                            }\n                        }\n                    ]\n                }]\n                return messages\n        except Exception as e:\n            logger.exception(e)\n\n    def reply_text(self, session: ChatGPTSession, app_code=\"\", retry_count=0) -> dict:\n        if retry_count >= 2:\n            # give up after two retries\n            logger.warn(\"[LINKAI] failed after maximum number of retry times\")\n            return {\n                \"total_tokens\": 0,\n                \"completion_tokens\": 0,\n                \"content\": \"请再问我一次吧\"\n            }\n\n        try:\n            body = {\n                \"app_code\": app_code,\n                \"messages\": session.messages,\n                \"model\": conf().get(\"model\") or \"gpt-3.5-turbo\",  # chat model name; supports gpt-3.5-turbo, gpt-3.5-turbo-16k, gpt-4, wenxin, xunfei\n                \"temperature\": conf().get(\"temperature\"),\n                \"top_p\": conf().get(\"top_p\", 1),\n                \"frequency_penalty\": conf().get(\"frequency_penalty\", 0.0),  # in [-2, 2]; larger values favor more varied content\n                \"presence_penalty\": conf().get(\"presence_penalty\", 0.0),  # in [-2, 2]; larger values favor more varied content\n            }\n            if self.args.get(\"max_tokens\"):\n                body[\"max_tokens\"] = self.args.get(\"max_tokens\")\n            headers = {\"Authorization\": \"Bearer \" + conf().get(\"linkai_api_key\")}\n\n            # do http request\n            base_url = conf().get(\"linkai_api_base\", \"https://api.link-ai.tech\")\n            res = requests.post(url=base_url + \"/v1/chat/completions\", json=body, headers=headers,\n                                timeout=conf().get(\"request_timeout\", 180))\n            if res.status_code == 200:\n                # request succeeded\n                response = res.json()\n                reply_content = response[\"choices\"][0][\"message\"][\"content\"]\n                total_tokens = response[\"usage\"][\"total_tokens\"]\n                logger.info(f\"[LINKAI] reply={reply_content}, total_tokens={total_tokens}\")\n                return {\n                    \"total_tokens\": total_tokens,\n                    \"completion_tokens\": response[\"usage\"][\"completion_tokens\"],\n                    \"content\": reply_content,\n                }\n\n            else:\n                response = res.json()\n                error = response.get(\"error\") or {}\n            
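    # expected error payload shape (inferred from the keys read below): {\"error\": {\"message\": \"...\", \"type\": \"...\"}}\n            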
    logger.error(f\"[LINKAI] chat failed, status_code={res.status_code}, \"\n                             f\"msg={error.get('message')}, type={error.get('type')}\")\n\n                if res.status_code >= 500:\n                    # server error, need retry\n                    time.sleep(2)\n                    logger.warn(f\"[LINKAI] do retry, times={retry_count}\")\n                    return self.reply_text(session, app_code, retry_count + 1)\n\n                return {\n                    \"total_tokens\": 0,\n                    \"completion_tokens\": 0,\n                    \"content\": \"提问太快啦，请休息一下再问我吧\"\n                }\n\n        except Exception as e:\n            logger.exception(e)\n            # retry\n            time.sleep(2)\n            logger.warn(f\"[LINKAI] do retry, times={retry_count}\")\n            return self.reply_text(session, app_code, retry_count + 1)\n\n    def _fetch_app_info(self, app_code: str):\n        headers = {\"Authorization\": \"Bearer \" + conf().get(\"linkai_api_key\")}\n        # do http request\n        base_url = conf().get(\"linkai_api_base\", \"https://api.link-ai.tech\")\n        params = {\"app_code\": app_code}\n        res = requests.get(url=base_url + \"/v1/app/info\", params=params, headers=headers, timeout=(5, 10))\n        if res.status_code == 200:\n            return res.json()\n        else:\n            logger.warning(f\"[LinkAI] find app info exception, res={res}\")\n\n    def create_img(self, query, retry_count=0, api_key=None):\n        try:\n            logger.info(\"[LinkImage] image_query={}\".format(query))\n            headers = {\n                \"Content-Type\": \"application/json\",\n                \"Authorization\": f\"Bearer {conf().get('linkai_api_key')}\"\n            }\n            data = {\n                \"prompt\": query,\n                \"n\": 1,\n                \"model\": conf().get(\"text_to_image\") or \"dall-e-2\",\n                \"response_format\": \"url\",\n                \"img_proxy\": conf().get(\"image_proxy\")\n            }\n            url = conf().get(\"linkai_api_base\", \"https://api.link-ai.tech\") + \"/v1/images/generations\"\n            res = requests.post(url, headers=headers, json=data, timeout=(5, 90))\n            t2 = time.time()\n            image_url = res.json()[\"data\"][0][\"url\"]\n            logger.info(\"[OPEN_AI] image_url={}\".format(image_url))\n            return True, image_url\n\n        except Exception as e:\n            logger.error(format(e))\n            return False, \"画图出现问题，请休息一下再问我吧\"\n\n\n    def _fetch_knowledge_search_suffix(self, response) -> str:\n        try:\n            if response.get(\"knowledge_base\"):\n                search_hit = response.get(\"knowledge_base\").get(\"search_hit\")\n                first_similarity = response.get(\"knowledge_base\").get(\"first_similarity\")\n                logger.info(f\"[LINKAI] knowledge base, search_hit={search_hit}, first_similarity={first_similarity}\")\n                plugin_config = pconf(\"linkai\")\n                if plugin_config and plugin_config.get(\"knowledge_base\") and plugin_config.get(\"knowledge_base\").get(\"search_miss_text_enabled\"):\n                    search_miss_similarity = plugin_config.get(\"knowledge_base\").get(\"search_miss_similarity\")\n                    search_miss_text = plugin_config.get(\"knowledge_base\").get(\"search_miss_suffix\")\n                    if not search_hit:\n                        return search_miss_text\n                    if 
if search_miss_similarity and float(search_miss_similarity) > first_similarity:\n                        return search_miss_text\n        except Exception as e:\n            logger.exception(e)\n\n    def _fetch_agent_suffix(self, response):\n        try:\n            plugin_list = []\n            logger.debug(f\"[LinkAgent] res={response}\")\n            if response.get(\"agent\") and response.get(\"agent\").get(\"chain\") and response.get(\"agent\").get(\"need_show_plugin\"):\n                chain = response.get(\"agent\").get(\"chain\")\n                suffix = \"\\n\\n- - - - - - - - - - - -\"\n                for i, turn in enumerate(chain):\n                    plugin_name = turn.get('plugin_name')\n                    suffix += \"\\n\"\n                    need_show_thought = response.get(\"agent\").get(\"need_show_thought\")\n                    if turn.get(\"thought\") and plugin_name and need_show_thought:\n                        suffix += f\"{turn.get('thought')}\\n\"\n                    if plugin_name:\n                        plugin_list.append(turn.get('plugin_name'))\n                        if turn.get('plugin_icon'):\n                            suffix += f\"{turn.get('plugin_icon')} \"\n                        suffix += f\"{turn.get('plugin_name')}\"\n                        if turn.get('plugin_input'):\n                            suffix += f\"：{turn.get('plugin_input')}\"\n                    if i < len(chain) - 1:\n                        suffix += \"\\n\"\n                logger.info(f\"[LinkAgent] use plugins: {plugin_list}\")\n                return suffix\n        except Exception as e:\n            logger.exception(e)\n\n    def _process_url(self, text):\n        try:\n            url_pattern = re.compile(r'\\[(.*?)\\]\\((http[s]?://.*?)\\)')\n            def replace_markdown_url(match):\n                return f\"{match.group(2)}\"\n            return url_pattern.sub(replace_markdown_url, text)\n        except Exception as e:\n            logger.error(e)\n\n    def _send_image(self, channel, context, image_urls):\n        if not image_urls:\n            return\n        max_send_num = conf().get(\"max_media_send_count\")\n        send_interval = conf().get(\"media_send_interval\")\n        file_type = (\".pdf\", \".doc\", \".docx\", \".csv\", \".xls\", \".xlsx\", \".txt\", \".rtf\", \".ppt\", \".pptx\")\n        try:\n            for i, url in enumerate(image_urls):\n                if max_send_num and i >= max_send_num:\n                    # send limit reached; no later url can be sent either\n                    break\n                if url.endswith(\".mp4\"):\n                    reply_type = ReplyType.VIDEO_URL\n                elif url.endswith(file_type):\n                    reply_type = ReplyType.FILE\n                    url = _download_file(url)\n                    if not url:\n                        continue\n                else:\n                    reply_type = ReplyType.IMAGE_URL\n                reply = Reply(reply_type, url)\n                channel.send(reply, context)\n                if send_interval:\n                    time.sleep(send_interval)\n        except Exception as e:\n            logger.error(e)\n\n\ndef _download_file(url: str):\n    try:\n        file_path = \"tmp\"\n        if not os.path.exists(file_path):\n            os.makedirs(file_path)\n        file_name = url.split(\"/\")[-1]  # derive the file name from the URL\n        file_path = os.path.join(file_path, file_name)\n        response = requests.get(url, timeout=60)  # assumed 60s cap so a stalled download cannot hang the send thread\n        
with open(file_path, \"wb\") as f:\n            f.write(response.content)\n        return file_path\n    except Exception as e:\n        logger.warn(e)\n\n\nclass LinkAISessionManager(SessionManager):\n    def session_msg_query(self, query, session_id):\n        session = self.build_session(session_id)\n        messages = session.messages + [{\"role\": \"user\", \"content\": query}]\n        return messages\n\n    def session_reply(self, reply, session_id, total_tokens=None, query=None):\n        session = self.build_session(session_id)\n        if query:\n            session.add_query(query)\n        session.add_reply(reply)\n        try:\n            max_tokens = conf().get(\"conversation_max_tokens\", 8000)\n            tokens_cnt = session.discard_exceeding(max_tokens, total_tokens)\n            logger.debug(f\"[LinkAI] chat history, before tokens={total_tokens}, now tokens={tokens_cnt}\")\n        except Exception as e:\n            logger.warning(\"Exception when counting tokens precisely for session: {}\".format(str(e)))\n        return session\n\n\nclass LinkAISession(ChatGPTSession):\n    def calc_tokens(self):\n        if not self.messages:\n            return 0\n        return len(str(self.messages))\n\n    def discard_exceeding(self, max_tokens, cur_tokens=None):\n        # cur_tokens is recomputed locally; the argument is kept for interface compatibility\n        cur_tokens = self.calc_tokens()\n        # keep dropping the oldest user/assistant pair until the history fits the budget\n        while cur_tokens > max_tokens:\n            removed = False\n            for i in range(1, len(self.messages)):\n                if self.messages[i].get(\"role\") == \"assistant\" and self.messages[i - 1].get(\"role\") == \"user\":\n                    self.messages.pop(i)\n                    self.messages.pop(i - 1)\n                    removed = True\n                    break\n            if not removed:\n                break\n            cur_tokens = self.calc_tokens()\n        return cur_tokens\n\n\n# Add call_with_tools method to LinkAIBot class\ndef _linkai_call_with_tools(self, messages, tools=None, stream=False, **kwargs):\n    \"\"\"\n    Call LinkAI API with tool support for agent integration\n    LinkAI is fully compatible with OpenAI's tool calling format\n\n    Args:\n        messages: List of messages\n        tools: List of tool definitions (OpenAI format)\n        stream: Whether to use streaming\n        **kwargs: Additional parameters (max_tokens, temperature, etc.)\n\n    Returns:\n        Formatted response in OpenAI format or generator for streaming\n    \"\"\"\n    try:\n        # Convert messages from Claude format to OpenAI format\n        # This is important because Agent uses Claude format internally\n        messages = self._convert_messages_to_openai_format(messages)\n\n        # Convert tools from Claude format to OpenAI format\n        if tools:\n            tools = self._convert_tools_to_openai_format(tools)\n\n        # Handle system prompt (OpenAI uses system message, Claude uses separate parameter)\n        system_prompt = kwargs.get('system')\n        if system_prompt:\n            # Add system message at the beginning if not already present\n            if not messages or messages[0].get('role') != 'system':\n                messages = [{\"role\": \"system\", \"content\": system_prompt}] + messages\n            else:\n                # Replace existing system message\n                messages[0] = {\"role\": \"system\", \"content\": system_prompt}\n\n        logger.debug(f\"[LinkAI] messages: {len(messages)}, tools: {len(tools) if tools else 0}, stream: {stream}\")\n\n        # Build request parameters (LinkAI uses OpenAI-compatible format)\n        raw_ct = conf().get(\"channel_type\", \"web\")\n        if isinstance(raw_ct, list):\n            
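# e.g. raw_ct == [\"wechatmp\", \"web\"] -> keep \"wechatmp\" (the comma-separated string case is handled below)\n            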
channel_type = raw_ct[0] if raw_ct else \"web\"\n        elif isinstance(raw_ct, str) and \",\" in raw_ct:\n            channel_type = raw_ct.split(\",\")[0].strip()\n        else:\n            channel_type = raw_ct\n\n        session_id = kwargs.get(\"session_id\", \"\")\n        body = {\n            \"messages\": messages,\n            \"model\": kwargs.get(\"model\", conf().get(\"model\") or \"gpt-3.5-turbo\"),\n            \"temperature\": kwargs.get(\"temperature\", conf().get(\"temperature\", 0.9)),\n            \"top_p\": kwargs.get(\"top_p\", conf().get(\"top_p\", 1)),\n            \"frequency_penalty\": kwargs.get(\"frequency_penalty\", conf().get(\"frequency_penalty\", 0.0)),\n            \"presence_penalty\": kwargs.get(\"presence_penalty\", conf().get(\"presence_penalty\", 0.0)),\n            \"stream\": stream,\n            \"channel_type\": kwargs.get(\"channel_type\", channel_type),\n            \"session_id\": session_id,\n            \"sender_id\": session_id,\n        }\n\n        try:\n            from linkai import LinkAIClient\n            client_id = LinkAIClient.fetch_client_id()\n            if client_id:\n                body[\"client_id\"] = client_id\n        except Exception:\n            pass\n\n        if tools:\n            body[\"tools\"] = tools\n            body[\"tool_choice\"] = kwargs.get(\"tool_choice\", \"auto\")\n\n        # Prepare headers\n        headers = {\"Authorization\": \"Bearer \" + conf().get(\"linkai_api_key\")}\n        base_url = conf().get(\"linkai_api_base\", \"https://api.link-ai.tech\")\n        \n        if stream:\n            return self._handle_linkai_stream_response(base_url, headers, body)\n        else:\n            return self._handle_linkai_sync_response(base_url, headers, body)\n            \n    except Exception as e:\n        logger.error(f\"[LinkAI] call_with_tools error: {e}\")\n        if stream:\n            def error_generator():\n                yield {\n                    \"error\": True,\n                    \"message\": str(e),\n                    \"status_code\": 500\n                }\n            return error_generator()\n        else:\n            return {\n                \"error\": True,\n                \"message\": str(e),\n                \"status_code\": 500\n            }\n\ndef _handle_linkai_sync_response(self, base_url, headers, body):\n    \"\"\"Handle synchronous LinkAI API response\"\"\"\n    try:\n        res = requests.post(\n            url=base_url + \"/v1/chat/completions\",\n            json=body,\n            headers=headers,\n            timeout=conf().get(\"request_timeout\", 180)\n        )\n        \n        if res.status_code == 200:\n            response = res.json()\n            logger.debug(f\"[LinkAI] reply: model={response.get('model')}, \"\n                        f\"tokens={response.get('usage', {}).get('total_tokens', 0)}\")\n            \n            # LinkAI response is already in OpenAI-compatible format\n            return response\n        else:\n            error_data = res.json()\n            error_msg = error_data.get(\"error\", {}).get(\"message\", \"Unknown error\")\n            raise Exception(f\"LinkAI API error: {res.status_code} - {error_msg}\")\n            \n    except Exception as e:\n        logger.error(f\"[LinkAI] sync response error: {e}\")\n        raise\n\ndef _handle_linkai_stream_response(self, base_url, headers, body):\n    \"\"\"Handle streaming LinkAI API response\"\"\"\n    try:\n        res = requests.post(\n            url=base_url + 
\"/v1/chat/completions\",\n            json=body,\n            headers=headers,\n            timeout=conf().get(\"request_timeout\", 180),\n            stream=True\n        )\n        \n        if res.status_code != 200:\n            error_text = res.text\n            try:\n                error_data = json.loads(error_text)\n                error_msg = error_data.get(\"error\", {}).get(\"message\", error_text)\n            except Exception:\n                error_msg = error_text or \"Unknown error\"\n            \n            yield {\n                \"error\": True,\n                \"status_code\": res.status_code,\n                \"message\": error_msg\n            }\n            return\n        \n        # Process streaming response (OpenAI-compatible SSE format)\n        for line in res.iter_lines():\n            if line:\n                line = line.decode('utf-8')\n                if line.startswith('data: '):\n                    line = line[6:]  # Remove 'data: ' prefix\n                    if line == '[DONE]':\n                        break\n                    try:\n                        chunk = json.loads(line)\n                    except json.JSONDecodeError:\n                        continue\n\n                    # Check for error responses within the stream\n                    # Some providers (e.g., MiniMax via LinkAI) return errors as:\n                    # {'type': 'error', 'error': {'type': '...', 'message': '...', 'http_code': '400'}}\n                    if chunk.get(\"type\") == \"error\" or (\n                        isinstance(chunk.get(\"error\"), dict) and \"message\" in chunk.get(\"error\", {})\n                    ):\n                        error_data = chunk.get(\"error\", {})\n                        error_msg = error_data.get(\"message\", \"Unknown error\") if isinstance(error_data, dict) else str(error_data)\n                        http_code = error_data.get(\"http_code\", \"\") if isinstance(error_data, dict) else \"\"\n                        status_code = int(http_code) if http_code and str(http_code).isdigit() else 400\n                        logger.error(f\"[LinkAI] stream error: {error_msg} (http_code={http_code})\")\n                        yield {\n                            \"error\": True,\n                            \"message\": error_msg,\n                            \"status_code\": status_code\n                        }\n                        return\n\n                    yield chunk\n                        \n    except Exception as e:\n        logger.error(f\"[LinkAI] stream response error: {e}\")\n        yield {\n            \"error\": True,\n            \"message\": str(e),\n            \"status_code\": 500\n        }\n\n# Attach methods to LinkAIBot class\nLinkAIBot.call_with_tools = _linkai_call_with_tools\nLinkAIBot._handle_linkai_sync_response = _handle_linkai_sync_response\nLinkAIBot._handle_linkai_stream_response = _handle_linkai_stream_response\n"
  },
  {
    "path": "models/minimax/minimax_bot.py",
    "content": "# encoding:utf-8\n\nimport time\nimport json\nimport requests\n\nfrom models.bot import Bot\nfrom models.minimax.minimax_session import MinimaxSession\nfrom models.session_manager import SessionManager\nfrom bridge.context import Context, ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom config import conf, load_config\nfrom common import const\nfrom agent.protocol.message_utils import drop_orphaned_tool_results_openai\n\n\n# MiniMax对话模型API\nclass MinimaxBot(Bot):\n    def __init__(self):\n        super().__init__()\n        self.args = {\n            \"model\": conf().get(\"model\") or \"MiniMax-M2.1\",\n            \"temperature\": conf().get(\"temperature\", 0.3),\n            \"top_p\": conf().get(\"top_p\", 0.95),\n        }\n        self.sessions = SessionManager(MinimaxSession, model=const.MiniMax)\n\n    @property\n    def api_key(self):\n        key = conf().get(\"minimax_api_key\")\n        if not key:\n            key = conf().get(\"Minimax_api_key\")\n        return key\n\n    @property\n    def api_base(self):\n        return conf().get(\"minimax_api_base\", \"https://api.minimaxi.com/v1\")\n\n    def reply(self, query, context: Context = None) -> Reply:\n        # acquire reply content\n        logger.info(\"[MINIMAX] query={}\".format(query))\n        if context.type == ContextType.TEXT:\n            session_id = context[\"session_id\"]\n            reply = None\n            clear_memory_commands = conf().get(\"clear_memory_commands\", [\"#清除记忆\"])\n            if query in clear_memory_commands:\n                self.sessions.clear_session(session_id)\n                reply = Reply(ReplyType.INFO, \"记忆已清除\")\n            elif query == \"#清除所有\":\n                self.sessions.clear_all_session()\n                reply = Reply(ReplyType.INFO, \"所有人记忆已清除\")\n            elif query == \"#更新配置\":\n                load_config()\n                reply = Reply(ReplyType.INFO, \"配置已更新\")\n            if reply:\n                return reply\n            session = self.sessions.session_query(query, session_id)\n            logger.debug(\"[MINIMAX] session query={}\".format(session))\n\n            model = context.get(\"Minimax_model\")\n            new_args = self.args.copy()\n            if model:\n                new_args[\"model\"] = model\n\n            reply_content = self.reply_text(session, args=new_args)\n            logger.debug(\n                \"[MINIMAX] new_query={}, session_id={}, reply_cont={}, completion_tokens={}\".format(\n                    session.messages,\n                    session_id,\n                    reply_content[\"content\"],\n                    reply_content[\"completion_tokens\"],\n                )\n            )\n            if reply_content[\"completion_tokens\"] == 0 and len(reply_content[\"content\"]) > 0:\n                reply = Reply(ReplyType.ERROR, reply_content[\"content\"])\n            elif reply_content[\"completion_tokens\"] > 0:\n                self.sessions.session_reply(reply_content[\"content\"], session_id, reply_content[\"total_tokens\"])\n                reply = Reply(ReplyType.TEXT, reply_content[\"content\"])\n            else:\n                reply = Reply(ReplyType.ERROR, reply_content[\"content\"])\n                logger.debug(\"[MINIMAX] reply {} used 0 tokens.\".format(reply_content))\n            return reply\n        else:\n            reply = Reply(ReplyType.ERROR, \"Bot不支持处理{}类型的消息\".format(context.type))\n            return reply\n\n    def 
reply_text(self, session: MinimaxSession, args=None, retry_count=0) -> dict:\n        \"\"\"\n        Call MiniMax API to get the answer using REST API\n        :param session: a conversation session\n        :param args: request arguments\n        :param retry_count: retry count\n        :return: {}\n        \"\"\"\n        try:\n            if args is None:\n                args = self.args\n\n            # Build request\n            headers = {\n                \"Content-Type\": \"application/json\",\n                \"Authorization\": f\"Bearer {self.api_key}\"\n            }\n\n            request_body = {\n                \"model\": args.get(\"model\", self.args[\"model\"]),\n                \"messages\": session.messages,\n                \"temperature\": args.get(\"temperature\", self.args[\"temperature\"]),\n                \"top_p\": args.get(\"top_p\", self.args[\"top_p\"]),\n            }\n\n            url = f\"{self.api_base}/chat/completions\"\n            logger.debug(f\"[MINIMAX] Calling {url} with model={request_body['model']}\")\n\n            response = requests.post(url, headers=headers, json=request_body, timeout=60)\n\n            if response.status_code == 200:\n                result = response.json()\n                content = result[\"choices\"][0][\"message\"][\"content\"]\n                total_tokens = result[\"usage\"][\"total_tokens\"]\n                completion_tokens = result[\"usage\"][\"completion_tokens\"]\n\n                logger.debug(f\"[MINIMAX] reply_text: content_length={len(content)}, tokens={total_tokens}\")\n\n                return {\n                    \"total_tokens\": total_tokens,\n                    \"completion_tokens\": completion_tokens,\n                    \"content\": content,\n                }\n            else:\n                error_msg = response.text\n                logger.error(f\"[MINIMAX] API error: status={response.status_code}, msg={error_msg}\")\n\n                # Parse error for better messages\n                result = {\"completion_tokens\": 0, \"content\": \"我现在有点累了，等会再来吧\"}\n                need_retry = False\n\n                if response.status_code >= 500:\n                    logger.warning(f\"[MINIMAX] Server error, retry={retry_count}\")\n                    need_retry = retry_count < 2\n                elif response.status_code == 401:\n                    result[\"content\"] = \"授权失败，请检查API Key是否正确\"\n                    need_retry = False\n                elif response.status_code == 429:\n                    result[\"content\"] = \"请求过于频繁，请稍后再试\"\n                    need_retry = retry_count < 2\n                else:\n                    need_retry = False\n\n                if need_retry:\n                    time.sleep(3)\n                    return self.reply_text(session, args, retry_count + 1)\n                else:\n                    return result\n\n        except requests.exceptions.Timeout:\n            logger.error(\"[MINIMAX] Request timeout\")\n            need_retry = retry_count < 2\n            result = {\"completion_tokens\": 0, \"content\": \"请求超时，请稍后再试\"}\n            if need_retry:\n                time.sleep(3)\n                return self.reply_text(session, args, retry_count + 1)\n            else:\n                return result\n        except Exception as e:\n            logger.error(f\"[MINIMAX] reply_text error: {e}\")\n            import traceback\n            logger.error(traceback.format_exc())\n            need_retry = retry_count < 2\n            result = 
{\"completion_tokens\": 0, \"content\": \"我现在有点累了，等会再来吧\"}\n            if need_retry:\n                time.sleep(3)\n                return self.reply_text(session, args, retry_count + 1)\n            else:\n                return result\n\n    def call_with_tools(self, messages, tools=None, stream=False, **kwargs):\n        \"\"\"\n        Call MiniMax API with tool support for agent integration\n\n        This method handles:\n        1. Format conversion (Claude format → OpenAI format)\n        2. System prompt injection\n        3. API calling with REST API\n        4. Interleaved Thinking support (reasoning_split=True)\n\n        Args:\n            messages: List of messages (may be in Claude format from agent)\n            tools: List of tool definitions (may be in Claude format from agent)\n            stream: Whether to use streaming\n            **kwargs: Additional parameters (max_tokens, temperature, system, etc.)\n\n        Returns:\n            Formatted response or generator for streaming\n        \"\"\"\n        try:\n            # Convert messages from Claude format to OpenAI format\n            converted_messages = self._convert_messages_to_openai_format(messages)\n\n            # Extract and inject system prompt if provided\n            system_prompt = kwargs.pop(\"system\", None)\n            if system_prompt:\n                # Add system message at the beginning\n                converted_messages.insert(0, {\"role\": \"system\", \"content\": system_prompt})\n\n            # Convert tools from Claude format to OpenAI format\n            converted_tools = None\n            if tools:\n                converted_tools = self._convert_tools_to_openai_format(tools)\n\n            # Prepare API parameters\n            model = kwargs.pop(\"model\", None) or self.args[\"model\"]\n            max_tokens = kwargs.pop(\"max_tokens\", 100000)\n            temperature = kwargs.pop(\"temperature\", self.args[\"temperature\"])\n\n            # Build request body\n            request_body = {\n                \"model\": model,\n                \"messages\": converted_messages,\n                \"max_tokens\": max_tokens,\n                \"temperature\": temperature,\n                \"stream\": stream,\n            }\n\n            # Add tools if provided\n            if converted_tools:\n                request_body[\"tools\"] = converted_tools\n\n            # Add reasoning_split=True for better thinking control (M2.1 feature)\n            # This separates thinking content into reasoning_details field\n            request_body[\"reasoning_split\"] = True\n\n            logger.debug(f\"[MINIMAX] API call: model={model}, tools={len(converted_tools) if converted_tools else 0}, stream={stream}\")\n\n            # Check if we should show thinking process\n            show_thinking = kwargs.pop(\"show_thinking\", conf().get(\"minimax_show_thinking\", False))\n            \n            if stream:\n                return self._handle_stream_response(request_body, show_thinking=show_thinking)\n            else:\n                return self._handle_sync_response(request_body)\n\n        except Exception as e:\n            logger.error(f\"[MINIMAX] call_with_tools error: {e}\")\n            import traceback\n            logger.error(traceback.format_exc())\n            \n            def error_generator():\n                yield {\"error\": True, \"message\": str(e), \"status_code\": 500}\n            return error_generator()\n\n    def _convert_messages_to_openai_format(self, messages):\n    
    \"\"\"\n        Convert messages from Claude format to OpenAI format\n\n        Claude format:\n        - role: \"user\" | \"assistant\"\n        - content: string | list of content blocks\n\n        OpenAI format:\n        - role: \"user\" | \"assistant\" | \"tool\"\n        - content: string\n        - tool_calls: list (for assistant)\n        - tool_call_id: string (for tool results)\n        \"\"\"\n        converted = []\n\n        for msg in messages:\n            role = msg.get(\"role\")\n            content = msg.get(\"content\")\n\n            if role == \"user\":\n                # Handle user message\n                if isinstance(content, list):\n                    # Extract text from content blocks\n                    text_parts = []\n                    tool_results = []\n\n                    for block in content:\n                        if isinstance(block, dict):\n                            if block.get(\"type\") == \"text\":\n                                text_parts.append(block.get(\"text\", \"\"))\n                            elif block.get(\"type\") == \"tool_result\":\n                                # Tool result should be a separate message with role=\"tool\"\n                                tool_call_id = block.get(\"tool_use_id\") or \"\"\n                                if not tool_call_id:\n                                    logger.warning(f\"[MINIMAX] tool_result missing tool_use_id\")\n                                result_content = block.get(\"content\", \"\")\n                                if not isinstance(result_content, str):\n                                    result_content = json.dumps(result_content, ensure_ascii=False)\n                                tool_results.append({\n                                    \"role\": \"tool\",\n                                    \"tool_call_id\": tool_call_id,\n                                    \"content\": result_content\n                                })\n\n                    if text_parts:\n                        converted.append({\n                            \"role\": \"user\",\n                            \"content\": \"\\n\".join(text_parts)\n                        })\n\n                    # Add all tool results (not just the last one)\n                    for tool_result in tool_results:\n                        converted.append(tool_result)\n                else:\n                    # Simple text content\n                    converted.append({\n                        \"role\": \"user\",\n                        \"content\": str(content)\n                    })\n\n            elif role == \"assistant\":\n                # Handle assistant message\n                openai_msg = {\"role\": \"assistant\"}\n\n                if isinstance(content, list):\n                    # Parse content blocks\n                    text_parts = []\n                    tool_calls = []\n\n                    for block in content:\n                        if isinstance(block, dict):\n                            if block.get(\"type\") == \"text\":\n                                text_parts.append(block.get(\"text\", \"\"))\n                            elif block.get(\"type\") == \"tool_use\":\n                                # Convert to OpenAI tool_calls format\n                                tool_calls.append({\n                                    \"id\": block.get(\"id\"),\n                                    \"type\": \"function\",\n                                    \"function\": {\n               
                         \"name\": block.get(\"name\"),\n                                        \"arguments\": json.dumps(block.get(\"input\", {}))\n                                    }\n                                })\n\n                    # Set content (can be empty if only tool calls)\n                    if text_parts:\n                        openai_msg[\"content\"] = \"\\n\".join(text_parts)\n                    elif not tool_calls:\n                        openai_msg[\"content\"] = \"\"\n\n                    # Set tool_calls\n                    if tool_calls:\n                        openai_msg[\"tool_calls\"] = tool_calls\n                        # When tool_calls exist and content is empty, set to None\n                        if not text_parts:\n                            openai_msg[\"content\"] = None\n\n                else:\n                    # Simple text content\n                    openai_msg[\"content\"] = str(content) if content else \"\"\n\n                converted.append(openai_msg)\n\n        return drop_orphaned_tool_results_openai(converted)\n\n    def _convert_tools_to_openai_format(self, tools):\n        \"\"\"\n        Convert tools from Claude format to OpenAI format\n\n        Claude format:\n        {\n            \"name\": \"tool_name\",\n            \"description\": \"description\",\n            \"input_schema\": {...}\n        }\n\n        OpenAI format:\n        {\n            \"type\": \"function\",\n            \"function\": {\n                \"name\": \"tool_name\",\n                \"description\": \"description\",\n                \"parameters\": {...}\n            }\n        }\n        \"\"\"\n        converted = []\n\n        for tool in tools:\n            converted.append({\n                \"type\": \"function\",\n                \"function\": {\n                    \"name\": tool.get(\"name\"),\n                    \"description\": tool.get(\"description\"),\n                    \"parameters\": tool.get(\"input_schema\", {})\n                }\n            })\n\n        return converted\n\n    def _handle_sync_response(self, request_body):\n        \"\"\"Handle synchronous API response\"\"\"\n        try:\n            headers = {\n                \"Content-Type\": \"application/json\",\n                \"Authorization\": f\"Bearer {self.api_key}\"\n            }\n\n            # Remove stream from body for sync request\n            request_body.pop(\"stream\", None)\n\n            url = f\"{self.api_base}/chat/completions\"\n            response = requests.post(url, headers=headers, json=request_body, timeout=60)\n\n            if response.status_code != 200:\n                error_msg = response.text\n                logger.error(f\"[MINIMAX] API error: status={response.status_code}, msg={error_msg}\")\n                yield {\"error\": True, \"message\": error_msg, \"status_code\": response.status_code}\n                return\n\n            result = response.json()\n            message = result[\"choices\"][0][\"message\"]\n            finish_reason = result[\"choices\"][0][\"finish_reason\"]\n\n            # Build response in Claude-like format\n            response_data = {\n                \"role\": \"assistant\",\n                \"content\": []\n            }\n\n            # Add reasoning_details (thinking) if present\n            if \"reasoning_details\" in message:\n                for reasoning in message[\"reasoning_details\"]:\n                    if \"text\" in reasoning:\n                        
response_data[\"content\"].append({\n                            \"type\": \"thinking\",\n                            \"thinking\": reasoning[\"text\"]\n                        })\n\n            # Add text content if present\n            if message.get(\"content\"):\n                response_data[\"content\"].append({\n                    \"type\": \"text\",\n                    \"text\": message[\"content\"]\n                })\n\n            # Add tool calls if present\n            if message.get(\"tool_calls\"):\n                for tool_call in message[\"tool_calls\"]:\n                    response_data[\"content\"].append({\n                        \"type\": \"tool_use\",\n                        \"id\": tool_call[\"id\"],\n                        \"name\": tool_call[\"function\"][\"name\"],\n                        \"input\": json.loads(tool_call[\"function\"][\"arguments\"])\n                    })\n\n            # Set stop_reason\n            if finish_reason == \"tool_calls\":\n                response_data[\"stop_reason\"] = \"tool_use\"\n            elif finish_reason == \"stop\":\n                response_data[\"stop_reason\"] = \"end_turn\"\n            else:\n                response_data[\"stop_reason\"] = finish_reason\n\n            yield response_data\n\n        except requests.exceptions.Timeout:\n            logger.error(\"[MINIMAX] Request timeout\")\n            yield {\"error\": True, \"message\": \"Request timeout\", \"status_code\": 500}\n        except Exception as e:\n            logger.error(f\"[MINIMAX] sync response error: {e}\")\n            import traceback\n            logger.error(traceback.format_exc())\n            yield {\"error\": True, \"message\": str(e), \"status_code\": 500}\n\n    def _handle_stream_response(self, request_body, show_thinking=False):\n        \"\"\"Handle streaming API response\n        \n        Args:\n            request_body: API request parameters\n            show_thinking: Whether to show thinking/reasoning process to users\n        \"\"\"\n        try:\n            headers = {\n                \"Content-Type\": \"application/json\",\n                \"Authorization\": f\"Bearer {self.api_key}\"\n            }\n\n            url = f\"{self.api_base}/chat/completions\"\n            response = requests.post(url, headers=headers, json=request_body, stream=True, timeout=60)\n\n            if response.status_code != 200:\n                error_msg = response.text\n                logger.error(f\"[MINIMAX] API error: status={response.status_code}, msg={error_msg}\")\n                yield {\"error\": True, \"message\": error_msg, \"status_code\": response.status_code}\n                return\n\n            current_content = []\n            current_tool_calls = {}\n            current_reasoning = []\n            finish_reason = None\n            chunk_count = 0\n\n            # Process SSE stream\n            for line in response.iter_lines():\n                if not line:\n                    continue\n\n                line = line.decode('utf-8')\n                if not line.startswith('data: '):\n                    continue\n\n                data_str = line[6:]  # Remove 'data: ' prefix\n                if data_str.strip() == '[DONE]':\n                    break\n\n                try:\n                    chunk = json.loads(data_str)\n                    chunk_count += 1\n                except json.JSONDecodeError as e:\n                    logger.warning(f\"[MINIMAX] JSON decode error: {e}, data: {data_str[:100]}\")\n        
            continue\n\n                # Check for error response (MiniMax format)\n                if chunk.get(\"type\") == \"error\" or \"error\" in chunk:\n                    error_data = chunk.get(\"error\") or {}\n                    if not isinstance(error_data, dict):\n                        # guard: some providers return a bare string here\n                        error_data = {\"message\": str(error_data)}\n                    error_msg = error_data.get(\"message\", \"Unknown error\")\n                    error_type = error_data.get(\"type\", \"\")\n                    http_code = error_data.get(\"http_code\", \"\")\n\n                    logger.error(f\"[MINIMAX] API error: {error_msg} (type: {error_type}, code: {http_code})\")\n\n                    yield {\n                        \"error\": True,\n                        \"message\": error_msg,\n                        \"status_code\": int(http_code) if str(http_code).isdigit() else 500\n                    }\n                    return\n\n                if not chunk.get(\"choices\"):\n                    continue\n\n                choice = chunk[\"choices\"][0]\n                delta = choice.get(\"delta\", {})\n\n                # Handle reasoning_details (thinking)\n                if \"reasoning_details\" in delta:\n                    for reasoning in delta[\"reasoning_details\"]:\n                        if \"text\" in reasoning:\n                            reasoning_id = reasoning.get(\"id\", \"reasoning-text-1\")\n                            reasoning_index = reasoning.get(\"index\", 0)\n                            reasoning_text = reasoning[\"text\"]\n\n                            # Accumulate reasoning text (pad the list in case indices arrive with gaps)\n                            while reasoning_index >= len(current_reasoning):\n                                current_reasoning.append({\"id\": reasoning_id, \"text\": \"\"})\n\n                            current_reasoning[reasoning_index][\"text\"] += reasoning_text\n\n                            # Optionally yield thinking as visible content\n                            if show_thinking:\n                                # Yield thinking text as-is (without emoji decoration)\n                                # The reasoning text will be displayed to users\n                                yield {\n                                    \"choices\": [{\n                                        \"index\": 0,\n                                        \"delta\": {\n                                            \"role\": \"assistant\",\n                                            \"content\": reasoning_text\n                                        }\n                                    }]\n                                }\n\n                # Handle text content\n                if \"content\" in delta and delta[\"content\"]:\n                    # Start new content block if needed\n                    if not any(block.get(\"type\") == \"text\" for block in current_content):\n                        current_content.append({\"type\": \"text\", \"text\": \"\"})\n\n                    # Accumulate text\n                    for block in current_content:\n                        if block.get(\"type\") == \"text\":\n                            block[\"text\"] += delta[\"content\"]\n                            break\n\n                    # Yield OpenAI-format delta (for agent_stream.py compatibility)\n                    yield {\n                        \"choices\": [{\n                            \"index\": 0,\n                            \"delta\": {\n                                \"role\": \"assistant\",\n                                \"content\": delta[\"content\"]\n                  
          }\n                        }]\n                    }\n\n                # Handle tool calls\n                if \"tool_calls\" in delta:\n                    for tool_call_chunk in delta[\"tool_calls\"]:\n                        index = tool_call_chunk.get(\"index\", 0)\n                        if index not in current_tool_calls:\n                            # Start new tool call\n                            current_tool_calls[index] = {\n                                \"id\": tool_call_chunk.get(\"id\", \"\"),\n                                \"type\": \"tool_use\",\n                                \"name\": tool_call_chunk.get(\"function\", {}).get(\"name\", \"\"),\n                                \"input\": \"\"\n                            }\n                        \n                        # Accumulate tool call arguments\n                        if \"function\" in tool_call_chunk and \"arguments\" in tool_call_chunk[\"function\"]:\n                            current_tool_calls[index][\"input\"] += tool_call_chunk[\"function\"][\"arguments\"]\n\n                        # Yield OpenAI-format tool call delta\n                        yield {\n                            \"choices\": [{\n                                \"index\": 0,\n                                \"delta\": {\n                                    \"tool_calls\": [tool_call_chunk]\n                                }\n                            }]\n                        }\n\n                # Handle finish_reason\n                if choice.get(\"finish_reason\"):\n                    finish_reason = choice[\"finish_reason\"]\n\n            # Log complete reasoning_details for debugging\n            if current_reasoning:\n                logger.debug(f\"[MINIMAX] ===== Complete Reasoning Details =====\")\n                for i, reasoning in enumerate(current_reasoning):\n                    reasoning_text = reasoning.get(\"text\", \"\")\n                    logger.debug(f\"[MINIMAX] Reasoning {i+1} (length={len(reasoning_text)}):\")\n                    logger.debug(f\"[MINIMAX] {reasoning_text}\")\n                logger.debug(f\"[MINIMAX] ===== End Reasoning Details =====\")\n\n            # Yield final chunk with finish_reason (OpenAI format)\n            yield {\n                \"choices\": [{\n                    \"index\": 0,\n                    \"delta\": {},\n                    \"finish_reason\": finish_reason\n                }]\n            }\n\n        except requests.exceptions.Timeout:\n            logger.error(\"[MINIMAX] Request timeout\")\n            yield {\"error\": True, \"message\": \"Request timeout\", \"status_code\": 500}\n        except Exception as e:\n            logger.error(f\"[MINIMAX] stream response error: {e}\")\n            import traceback\n            logger.error(traceback.format_exc())\n            yield {\"error\": True, \"message\": str(e), \"status_code\": 500}\n"
  },
  {
    "path": "models/minimax/minimax_session.py",
    "content": "from models.session_manager import Session\nfrom common.log import logger\n\n\"\"\"\n    e.g.\n    [\n        {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n        {\"role\": \"user\", \"content\": \"Who won the world series in 2020?\"},\n        {\"role\": \"assistant\", \"content\": \"The Los Angeles Dodgers won the World Series in 2020.\"},\n        {\"role\": \"user\", \"content\": \"Where was it played?\"}\n    ]\n\"\"\"\n\n\nclass MinimaxSession(Session):\n    def __init__(self, session_id, system_prompt=None, model=\"minimax\"):\n        super().__init__(session_id, system_prompt)\n        self.model = model\n        # self.reset()\n\n    def add_query(self, query):\n        user_item = {\"sender_type\": \"USER\", \"sender_name\": self.session_id, \"text\": query}\n        self.messages.append(user_item)\n\n    def add_reply(self, reply):\n        assistant_item = {\"sender_type\": \"BOT\", \"sender_name\": \"MM智能助理\", \"text\": reply}\n        self.messages.append(assistant_item)\n\n    def discard_exceeding(self, max_tokens, cur_tokens=None):\n        precise = True\n        try:\n            cur_tokens = self.calc_tokens()\n        except Exception as e:\n            precise = False\n            if cur_tokens is None:\n                raise e\n            logger.debug(\"Exception when counting tokens precisely for query: {}\".format(e))\n        while cur_tokens > max_tokens:\n            if len(self.messages) > 2:\n                self.messages.pop(1)\n            elif len(self.messages) == 2 and self.messages[1][\"sender_type\"] == \"BOT\":\n                self.messages.pop(1)\n                if precise:\n                    cur_tokens = self.calc_tokens()\n                else:\n                    cur_tokens = cur_tokens - max_tokens\n                break\n            elif len(self.messages) == 2 and self.messages[1][\"sender_type\"] == \"USER\":\n                logger.warn(\"user message exceed max_tokens. total_tokens={}\".format(cur_tokens))\n                break\n            else:\n                logger.debug(\"max_tokens={}, total_tokens={}, len(messages)={}\".format(max_tokens, cur_tokens, len(self.messages)))\n                break\n            if precise:\n                cur_tokens = self.calc_tokens()\n            else:\n                cur_tokens = cur_tokens - max_tokens\n        return cur_tokens\n\n    def calc_tokens(self):\n        return num_tokens_from_messages(self.messages, self.model)\n\n\ndef num_tokens_from_messages(messages, model):\n    \"\"\"Returns the number of tokens used by a list of messages.\"\"\"\n    # 官方token计算规则：\"对于中文文本来说，1个token通常对应一个汉字；对于英文文本来说，1个token通常对应3至4个字母或1个单词\"\n    # 详情请产看文档：https://help.aliyun.com/document_detail/2586397.html\n    # 目前根据字符串长度粗略估计token数，不影响正常使用\n    tokens = 0\n    for msg in messages:\n        tokens += len(msg[\"text\"])\n    return tokens\n"
  },
  {
    "path": "models/modelscope/modelscope_bot.py",
    "content": "# encoding:utf-8\n\nimport time\nimport json\nimport openai\nfrom models.bot import Bot\nfrom models.session_manager import SessionManager\nfrom bridge.context import ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom config import conf, load_config\nfrom .modelscope_session import ModelScopeSession\nimport requests\n\n\n# ModelScope对话模型API\nclass ModelScopeBot(Bot):\n    def __init__(self):\n        super().__init__()\n        self.sessions = SessionManager(ModelScopeSession, model=conf().get(\"model\") or \"Qwen/Qwen2.5-7B-Instruct\")\n        model = conf().get(\"model\") or \"Qwen/Qwen2.5-7B-Instruct\"\n        if model == \"modelscope\":\n            model = \"Qwen/Qwen2.5-7B-Instruct\"\n        self.args = {\n            \"model\": model,  # 对话模型的名称\n            \"temperature\": conf().get(\"temperature\", 0.3),  # 如果设置，值域须为 [0, 1] 我们推荐 0.3，以达到较合适的效果。\n            \"top_p\": conf().get(\"top_p\", 1.0),  # 使用默认值\n        }\n\n    @property\n    def api_key(self):\n        return conf().get(\"modelscope_api_key\")\n\n    @property\n    def base_url(self):\n        return conf().get(\"modelscope_base_url\", \"https://api-inference.modelscope.cn/v1/chat/completions\")\n        \"\"\"\n        需要获取ModelScope支持API-inference的模型名称列表，请到魔搭社区官网模型中心查看 https://modelscope.cn/models?filter=inference_type&page=1。\n        或者使用命令 curl https://api-inference.modelscope.cn/v1/models 对模型列表和ID进行获取。查看commend/const.py文件也可以获取模型列表。\n        获取ModelScope的免费API Key，请到魔搭社区官网用户中心查看获取方式 https://modelscope.cn/docs/model-service/API-Inference/intro。\n        \"\"\"\n    def reply(self, query, context=None):\n        # acquire reply content\n        if context.type == ContextType.TEXT:\n            logger.info(\"[MODELSCOPE_AI] query={}\".format(query))\n\n            session_id = context[\"session_id\"]\n            reply = None\n            clear_memory_commands = conf().get(\"clear_memory_commands\", [\"#清除记忆\"])\n            if query in clear_memory_commands:\n                self.sessions.clear_session(session_id)\n                reply = Reply(ReplyType.INFO, \"记忆已清除\")\n            elif query == \"#清除所有\":\n                self.sessions.clear_all_session()\n                reply = Reply(ReplyType.INFO, \"所有人记忆已清除\")\n            elif query == \"#更新配置\":\n                load_config()\n                reply = Reply(ReplyType.INFO, \"配置已更新\")\n            if reply:\n                return reply\n            session = self.sessions.session_query(query, session_id)\n            logger.debug(\"[MODELSCOPE_AI] session query={}\".format(session.messages))\n\n            model = context.get(\"modelscope_model\")\n            new_args = self.args.copy()\n            if model:\n                new_args[\"model\"] = model\n\n            if new_args[\"model\"] == \"Qwen/QwQ-32B\":\n                reply_content = self.reply_text_stream(session, args=new_args)\n            else:\n                reply_content = self.reply_text(session, args=new_args)\n\n            logger.debug(\n                \"[MODELSCOPE_AI] new_query={}, session_id={}, reply_cont={}, completion_tokens={}\".format(\n                    session.messages,\n                    session_id,\n                    reply_content[\"content\"],\n                    reply_content[\"completion_tokens\"],\n                )\n            )\n            if reply_content[\"completion_tokens\"] == 0 and len(reply_content[\"content\"]) > 0:\n                # 只有当 content 为空且 completion_tokens 为 0 时才标记为错误\n         
       if len(reply_content[\"content\"]) == 0:\n                    reply = Reply(ReplyType.ERROR, reply_content[\"content\"])\n                else:\n                    reply = Reply(ReplyType.TEXT, reply_content[\"content\"])\n            elif reply_content[\"completion_tokens\"] > 0:\n                self.sessions.session_reply(reply_content[\"content\"], session_id, reply_content[\"total_tokens\"])\n                reply = Reply(ReplyType.TEXT, reply_content[\"content\"])\n            else:\n                reply = Reply(ReplyType.ERROR, reply_content[\"content\"])\n                logger.debug(\"[MODELSCOPE_AI] reply {} used 0 tokens.\".format(reply_content))\n            return reply\n        elif context.type == ContextType.IMAGE_CREATE:\n            ok, retstring = self.create_img(query, 0)\n            reply = None\n            if ok:\n                reply = Reply(ReplyType.IMAGE_URL, retstring)\n            else:\n                reply = Reply(ReplyType.ERROR, retstring)\n            return reply\n        else:\n            reply = Reply(ReplyType.ERROR, \"Bot不支持处理{}类型的消息\".format(context.type))\n            return reply\n\n    def reply_text(self, session: ModelScopeSession, args=None, retry_count=0) -> dict:\n        \"\"\"\n        call openai's ChatCompletion to get the answer\n        :param session: a conversation session\n        :param session_id: session id\n        :param retry_count: retry count\n        :return: {}\n        \"\"\"\n        try:\n            headers = {\n                \"Content-Type\": \"application/json\",\n                \"Authorization\": \"Bearer \" + self.api_key\n            }\n            \n            body = args\n            body[\"messages\"] = session.messages\n            res = requests.post(\n                self.base_url,\n                headers=headers,\n                data=json.dumps(body)\n            )\n\n            if res.status_code == 200:\n                response = res.json()\n                return {\n                    \"total_tokens\": response[\"usage\"][\"total_tokens\"],\n                    \"completion_tokens\": response[\"usage\"][\"completion_tokens\"],\n                    \"content\": response[\"choices\"][0][\"message\"][\"content\"]\n                }\n            else:\n                response = res.json()\n                if \"errors\" in response:\n                    error = response.get(\"errors\")\n                elif \"error\" in response:\n                    error = response.get(\"error\")\n                else:\n                    error = \"Unknown error\"\n                logger.error(f\"[MODELSCOPE_AI] chat failed, status_code={res.status_code}, \"\n                             f\"msg={error.get('message')}, type={error.get('type')}\")\n\n                result = {\"completion_tokens\": 0, \"content\": \"提问太快啦，请休息一下再问我吧\"}\n                need_retry = False\n                if res.status_code >= 500:\n                    # server error, need retry\n                    logger.warn(f\"[MODELSCOPE_AI] do retry, times={retry_count}\")\n                    need_retry = retry_count < 2\n                elif res.status_code == 401:\n                    result[\"content\"] = \"授权失败，请检查API Key是否正确\"\n                elif res.status_code == 429:\n                    result[\"content\"] = \"请求过于频繁，请稍后再试\"\n                    need_retry = retry_count < 2\n                else:\n                    need_retry = False\n\n                if need_retry:\n                    time.sleep(3)\n                 
    def reply_text_stream(self, session: ModelScopeSession, args=None, retry_count=0) -> dict:\n        \"\"\"\n        Call ModelScope's ChatCompletion API to get the answer with a streaming response\n        :param session: a conversation session\n        :param args: request parameters (model name etc.)\n        :param retry_count: retry count\n        :return: {}\n        \"\"\"\n        try:\n            headers = {\n                \"Content-Type\": \"application/json\",\n                \"Authorization\": \"Bearer \" + self.api_key\n            }\n\n            body = dict(args) if args else {}  # copy so the caller's args are not mutated across retries\n            body[\"messages\"] = session.messages\n            body[\"stream\"] = True  # enable the streaming response\n\n            res = requests.post(\n                self.base_url,\n                headers=headers,\n                data=json.dumps(body),\n                stream=True,\n                timeout=180  # assumed defensive default so a stuck request cannot hang the worker forever\n            )\n            if res.status_code == 200:\n                content = \"\"\n                for line in res.iter_lines():\n                    if line:\n                        decoded_line = line.decode('utf-8')\n                        if decoded_line.startswith(\"data: \"):\n                            try:\n                                json_data = json.loads(decoded_line[6:])\n                                delta_content = json_data.get(\"choices\", [{}])[0].get(\"delta\", {}).get(\"content\", \"\")\n                                if delta_content:\n                                    content += delta_content\n                            except json.JSONDecodeError:\n                                # skip non-JSON payloads such as the \"[DONE]\" sentinel\n                                continue\n                return {\n                    \"total_tokens\": 1,  # streaming responses usually do not report token usage\n                    \"completion_tokens\": 1,\n                    \"content\": content\n                }\n            else:\n                response = res.json()\n                error = response.get(\"errors\") or response.get(\"error\") or \"Unknown error\"\n                if isinstance(error, str):\n                    # normalize so the log below never calls .get() on a plain string\n                    error = {\"message\": error}\n                logger.error(f\"[MODELSCOPE_AI] chat failed, status_code={res.status_code}, \"\n                             f\"msg={error.get('message')}, type={error.get('type')}\")\n\n                result = {\"completion_tokens\": 0, \"content\": \"提问太快啦，请休息一下再问我吧\"}\n                need_retry = False\n                if res.status_code >= 500:\n                    # server error, need retry\n                    logger.warn(f\"[MODELSCOPE_AI] do retry, times={retry_count}\")\n                    need_retry = retry_count < 2\n                elif res.status_code == 401:\n                    result[\"content\"] = \"授权失败，请检查API Key是否正确\"\n                elif res.status_code == 429:\n                    result[\"content\"] = \"请求过于频繁，请稍后再试\"\n                    need_retry = retry_count < 2\n                else:\n                    need_retry = False\n\n                if need_retry:\n                    time.sleep(3)\n                    return self.reply_text_stream(session, args, retry_count + 1)\n                else:\n                    return result\n        except Exception as e:\n            logger.exception(e)\n            need_retry = retry_count < 2\n            result = {\"completion_tokens\": 0, \"content\": \"我现在有点累了，等会再来吧\"}\n            if need_retry:\n                return self.reply_text_stream(session, args, retry_count + 1)\n            else:\n                return result\n
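\n    # --- image generation ---\n    # Sketch of the request create_img sends (field names mirror the payload below;\n    # the response is assumed to carry the result URL at images[0].url):\n    #     POST https://api-inference.modelscope.cn/v1/images/generations\n    #     {\"prompt\": \"<text>\", \"n\": 1, \"model\": <the \"text_to_image\" config value>}\n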
    def create_img(self, query, retry_count=0):\n        try:\n            logger.info(\"[ModelScopeImage] image_query={}\".format(query))\n            headers = {\n                \"Content-Type\": \"application/json; charset=utf-8\",  # state the encoding explicitly\n                \"Authorization\": f\"Bearer {self.api_key}\"\n            }\n            payload = {\n                \"prompt\": query,  # required\n                \"n\": 1,\n                \"model\": conf().get(\"text_to_image\"),\n            }\n            url = \"https://api-inference.modelscope.cn/v1/images/generations\"\n\n            # serialize manually so Chinese characters stay readable (disable ASCII escaping)\n            json_payload = json.dumps(payload, ensure_ascii=False).encode('utf-8')\n\n            # send the pre-encoded bytes via the data parameter\n            res = requests.post(url, headers=headers, data=json_payload, timeout=180)\n            res.raise_for_status()  # surface HTTP errors instead of a confusing KeyError below\n\n            response_data = res.json()\n            image_url = response_data['images'][0]['url']\n            logger.info(\"[ModelScopeImage] image_url={}\".format(image_url))\n            return True, image_url\n\n        except Exception as e:\n            logger.error(\"[ModelScopeImage] create image failed: {}\".format(e))\n            return False, \"画图出现问题，请休息一下再问我吧\""
  },
  {
    "path": "models/modelscope/modelscope_session.py",
    "content": "from models.session_manager import Session\nfrom common.log import logger\n\n\nclass ModelScopeSession(Session):\n    def __init__(self, session_id, system_prompt=None, model=\"Qwen/Qwen2.5-7B-Instruct\"):\n        super().__init__(session_id, system_prompt)\n        self.model = model\n        self.reset()\n\n    def discard_exceeding(self, max_tokens, cur_tokens=None):\n        precise = True\n        try:\n            cur_tokens = self.calc_tokens()\n        except Exception as e:\n            precise = False\n            if cur_tokens is None:\n                raise e\n            logger.debug(\"Exception when counting tokens precisely for query: {}\".format(e))\n        while cur_tokens > max_tokens:\n            if len(self.messages) > 2:\n                self.messages.pop(1)\n            elif len(self.messages) == 2 and self.messages[1][\"role\"] == \"assistant\":\n                self.messages.pop(1)\n                if precise:\n                    cur_tokens = self.calc_tokens()\n                else:\n                    cur_tokens = cur_tokens - max_tokens\n                break\n            elif len(self.messages) == 2 and self.messages[1][\"role\"] == \"user\":\n                logger.warn(\"user message exceed max_tokens. total_tokens={}\".format(cur_tokens))\n                break\n            else:\n                logger.debug(\"max_tokens={}, total_tokens={}, len(messages)={}\".format(max_tokens, cur_tokens,\n                                                                                       len(self.messages)))\n                break\n            if precise:\n                cur_tokens = self.calc_tokens()\n            else:\n                cur_tokens = cur_tokens - max_tokens\n        return cur_tokens\n\n    def calc_tokens(self):\n        return num_tokens_from_messages(self.messages, self.model)\n\n\ndef num_tokens_from_messages(messages, model):\n    tokens = 0\n    for msg in messages:\n        tokens += len(msg[\"content\"])\n    return tokens\n"
  },
  {
    "path": "models/moonshot/moonshot_bot.py",
    "content": "# encoding:utf-8\n\nimport json\nimport time\n\nimport requests\nfrom models.bot import Bot\nfrom models.session_manager import SessionManager\nfrom bridge.context import ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom config import conf, load_config\nfrom .moonshot_session import MoonshotSession\n\n\n# Moonshot (Kimi) API Bot\nclass MoonshotBot(Bot):\n    def __init__(self):\n        super().__init__()\n        self.sessions = SessionManager(MoonshotSession, model=conf().get(\"model\") or \"moonshot-v1-128k\")\n        model = conf().get(\"model\") or \"moonshot-v1-128k\"\n        if model == \"moonshot\":\n            model = \"moonshot-v1-32k\"\n        self.args = {\n            \"model\": model,\n            \"temperature\": conf().get(\"temperature\", 0.3),\n            \"top_p\": conf().get(\"top_p\", 1.0),\n        }\n\n    @property\n    def api_key(self):\n        return conf().get(\"moonshot_api_key\")\n\n    @property\n    def base_url(self):\n        url = conf().get(\"moonshot_base_url\", \"https://api.moonshot.cn/v1\")\n        if url.endswith(\"/chat/completions\"):\n            url = url.rsplit(\"/chat/completions\", 1)[0]\n        return url.rstrip(\"/\")\n\n    def reply(self, query, context=None):\n        # acquire reply content\n        if context.type == ContextType.TEXT:\n            logger.info(\"[MOONSHOT] query={}\".format(query))\n\n            session_id = context[\"session_id\"]\n            reply = None\n            clear_memory_commands = conf().get(\"clear_memory_commands\", [\"#清除记忆\"])\n            if query in clear_memory_commands:\n                self.sessions.clear_session(session_id)\n                reply = Reply(ReplyType.INFO, \"记忆已清除\")\n            elif query == \"#清除所有\":\n                self.sessions.clear_all_session()\n                reply = Reply(ReplyType.INFO, \"所有人记忆已清除\")\n            elif query == \"#更新配置\":\n                load_config()\n                reply = Reply(ReplyType.INFO, \"配置已更新\")\n            if reply:\n                return reply\n            session = self.sessions.session_query(query, session_id)\n            logger.debug(\"[MOONSHOT] session query={}\".format(session.messages))\n\n            model = context.get(\"moonshot_model\")\n            new_args = self.args.copy()\n            if model:\n                new_args[\"model\"] = model\n\n            reply_content = self.reply_text(session, args=new_args)\n            logger.debug(\n                \"[MOONSHOT] new_query={}, session_id={}, reply_cont={}, completion_tokens={}\".format(\n                    session.messages,\n                    session_id,\n                    reply_content[\"content\"],\n                    reply_content[\"completion_tokens\"],\n                )\n            )\n            if reply_content[\"completion_tokens\"] == 0 and len(reply_content[\"content\"]) > 0:\n                reply = Reply(ReplyType.ERROR, reply_content[\"content\"])\n            elif reply_content[\"completion_tokens\"] > 0:\n                self.sessions.session_reply(reply_content[\"content\"], session_id, reply_content[\"total_tokens\"])\n                reply = Reply(ReplyType.TEXT, reply_content[\"content\"])\n            else:\n                reply = Reply(ReplyType.ERROR, reply_content[\"content\"])\n                logger.debug(\"[MOONSHOT] reply {} used 0 tokens.\".format(reply_content))\n            return reply\n        else:\n            reply = Reply(ReplyType.ERROR, 
\"Bot不支持处理{}类型的消息\".format(context.type))\n            return reply\n\n    def reply_text(self, session: MoonshotSession, args=None, retry_count: int = 0) -> dict:\n        \"\"\"\n        Call Moonshot chat completion API to get the answer\n        :param session: a conversation session\n        :param args: model args\n        :param retry_count: retry count\n        :return: {}\n        \"\"\"\n        try:\n            headers = {\n                \"Content-Type\": \"application/json\",\n                \"Authorization\": \"Bearer \" + self.api_key\n            }\n            body = args\n            body[\"messages\"] = session.messages\n            res = requests.post(\n                f\"{self.base_url}/chat/completions\",\n                headers=headers,\n                json=body\n            )\n            if res.status_code == 200:\n                response = res.json()\n                return {\n                    \"total_tokens\": response[\"usage\"][\"total_tokens\"],\n                    \"completion_tokens\": response[\"usage\"][\"completion_tokens\"],\n                    \"content\": response[\"choices\"][0][\"message\"][\"content\"]\n                }\n            else:\n                response = res.json()\n                error = response.get(\"error\")\n                logger.error(f\"[MOONSHOT] chat failed, status_code={res.status_code}, \"\n                             f\"msg={error.get('message')}, type={error.get('type')}\")\n\n                result = {\"completion_tokens\": 0, \"content\": \"提问太快啦，请休息一下再问我吧\"}\n                need_retry = False\n                if res.status_code >= 500:\n                    logger.warn(f\"[MOONSHOT] do retry, times={retry_count}\")\n                    need_retry = retry_count < 2\n                elif res.status_code == 401:\n                    result[\"content\"] = \"授权失败，请检查API Key是否正确\"\n                elif res.status_code == 429:\n                    result[\"content\"] = \"请求过于频繁，请稍后再试\"\n                    need_retry = retry_count < 2\n                else:\n                    need_retry = False\n\n                if need_retry:\n                    time.sleep(3)\n                    return self.reply_text(session, args, retry_count + 1)\n                else:\n                    return result\n        except Exception as e:\n            logger.exception(e)\n            need_retry = retry_count < 2\n            result = {\"completion_tokens\": 0, \"content\": \"我现在有点累了，等会再来吧\"}\n            if need_retry:\n                return self.reply_text(session, args, retry_count + 1)\n            else:\n                return result\n\n    # ==================== Agent mode support ====================\n\n    def call_with_tools(self, messages, tools=None, stream: bool = False, **kwargs):\n        \"\"\"\n        Call Moonshot API with tool support for agent integration.\n\n        This method handles:\n        1. Format conversion (Claude format -> OpenAI format)\n        2. System prompt injection\n        3. Streaming SSE response with tool_calls\n        4. 
        try:\n            # Convert messages from Claude format to OpenAI format\n            converted_messages = self._convert_messages_to_openai_format(messages)\n\n            # Inject system prompt if provided\n            system_prompt = kwargs.pop(\"system\", None)\n            if system_prompt:\n                if not converted_messages or converted_messages[0].get(\"role\") != \"system\":\n                    converted_messages.insert(0, {\"role\": \"system\", \"content\": system_prompt})\n                else:\n                    converted_messages[0] = {\"role\": \"system\", \"content\": system_prompt}\n\n            # Convert tools from Claude format to OpenAI format\n            converted_tools = None\n            if tools:\n                converted_tools = self._convert_tools_to_openai_format(tools)\n\n            # Resolve model / max_tokens\n            model = kwargs.pop(\"model\", None) or self.args[\"model\"]\n            max_tokens = kwargs.pop(\"max_tokens\", None)\n            # Pop temperature but discard it; the API's own default is used instead\n            kwargs.pop(\"temperature\", None)\n\n            # Build request body (omit temperature, let the API use its own default)\n            request_body = {\n                \"model\": model,\n                \"messages\": converted_messages,\n                \"stream\": stream,\n            }\n            if max_tokens is not None:\n                request_body[\"max_tokens\"] = max_tokens\n\n            # Add tools\n            if converted_tools:\n                request_body[\"tools\"] = converted_tools\n                request_body[\"tool_choice\"] = \"auto\"\n\n            # Explicitly disable thinking to avoid reasoning_content issues in multi-turn tool calls.\n            # kimi-k2.5 may enable thinking by default; without preserving reasoning_content\n            # in conversation history the API will reject subsequent requests.\n            request_body[\"thinking\"] = {\"type\": \"disabled\"}\n\n            logger.debug(f\"[MOONSHOT] API call: model={model}, \"\n                         f\"tools={len(converted_tools) if converted_tools else 0}, stream={stream}\")\n\n            if stream:\n                return self._handle_stream_response(request_body)\n            else:\n                return self._handle_sync_response(request_body)\n\n        except Exception as e:\n            logger.error(f\"[MOONSHOT] call_with_tools error: {e}\")\n            import traceback\n            logger.error(traceback.format_exc())\n\n            def error_generator():\n                yield {\"error\": True, \"message\": str(e), \"status_code\": 500}\n            return error_generator()\n\n    # -------------------- streaming --------------------\n\n    def _handle_stream_response(self, request_body: dict):\n        \"\"\"Handle streaming SSE response from Moonshot API and yield OpenAI-format chunks.\"\"\"\n        try:\n            headers = {\n                \"Content-Type\": \"application/json\",\n                \"Authorization\": f\"Bearer {self.api_key}\"\n            }\n\n            url = f\"{self.base_url}/chat/completions\"\n            response = requests.post(url, headers=headers, json=request_body, stream=True, timeout=120)\n
{self.api_key}\"\n            }\n\n            url = f\"{self.base_url}/chat/completions\"\n            response = requests.post(url, headers=headers, json=request_body, stream=True, timeout=120)\n\n            if response.status_code != 200:\n                error_msg = response.text\n                logger.error(f\"[MOONSHOT] API error: status={response.status_code}, msg={error_msg}\")\n                yield {\"error\": True, \"message\": error_msg, \"status_code\": response.status_code}\n                return\n\n            current_tool_calls = {}\n            finish_reason = None\n\n            for line in response.iter_lines():\n                if not line:\n                    continue\n\n                line = line.decode(\"utf-8\")\n                if not line.startswith(\"data: \"):\n                    continue\n\n                data_str = line[6:]  # Remove \"data: \" prefix\n                if data_str.strip() == \"[DONE]\":\n                    break\n\n                try:\n                    chunk = json.loads(data_str)\n                except json.JSONDecodeError as e:\n                    logger.warning(f\"[MOONSHOT] JSON decode error: {e}, data: {data_str[:200]}\")\n                    continue\n\n                # Check for error in chunk\n                if chunk.get(\"error\"):\n                    error_data = chunk[\"error\"]\n                    error_msg = error_data.get(\"message\", \"Unknown error\") if isinstance(error_data, dict) else str(error_data)\n                    logger.error(f\"[MOONSHOT] stream error: {error_msg}\")\n                    yield {\"error\": True, \"message\": error_msg, \"status_code\": 500}\n                    return\n\n                if not chunk.get(\"choices\"):\n                    continue\n\n                choice = chunk[\"choices\"][0]\n                delta = choice.get(\"delta\", {})\n\n                # Skip reasoning_content (thinking) – don't log or forward\n                if delta.get(\"reasoning_content\"):\n                    continue\n\n                # Handle text content\n                if \"content\" in delta and delta[\"content\"]:\n                    yield {\n                        \"choices\": [{\n                            \"index\": 0,\n                            \"delta\": {\n                                \"role\": \"assistant\",\n                                \"content\": delta[\"content\"]\n                            }\n                        }]\n                    }\n\n                # Handle tool_calls (streamed incrementally)\n                if \"tool_calls\" in delta:\n                    for tool_call_chunk in delta[\"tool_calls\"]:\n                        index = tool_call_chunk.get(\"index\", 0)\n                        if index not in current_tool_calls:\n                            current_tool_calls[index] = {\n                                \"id\": tool_call_chunk.get(\"id\", \"\"),\n                                \"type\": \"tool_use\",\n                                \"name\": tool_call_chunk.get(\"function\", {}).get(\"name\", \"\"),\n                                \"input\": \"\"\n                            }\n\n                        # Accumulate arguments\n                        if \"function\" in tool_call_chunk and \"arguments\" in tool_call_chunk[\"function\"]:\n                            current_tool_calls[index][\"input\"] += tool_call_chunk[\"function\"][\"arguments\"]\n\n                        # Yield OpenAI-format tool call delta\n                       
\n                # Capture finish_reason\n                if choice.get(\"finish_reason\"):\n                    finish_reason = choice[\"finish_reason\"]\n\n            # Final chunk with finish_reason\n            yield {\n                \"choices\": [{\n                    \"index\": 0,\n                    \"delta\": {},\n                    \"finish_reason\": finish_reason\n                }]\n            }\n\n        except requests.exceptions.Timeout:\n            logger.error(\"[MOONSHOT] Request timeout\")\n            yield {\"error\": True, \"message\": \"Request timeout\", \"status_code\": 500}\n        except Exception as e:\n            logger.error(f\"[MOONSHOT] stream response error: {e}\")\n            import traceback\n            logger.error(traceback.format_exc())\n            yield {\"error\": True, \"message\": str(e), \"status_code\": 500}\n\n    # -------------------- sync --------------------\n\n    def _handle_sync_response(self, request_body: dict):\n        \"\"\"Handle synchronous API response and yield a single result dict.\"\"\"\n        try:\n            headers = {\n                \"Content-Type\": \"application/json\",\n                \"Authorization\": f\"Bearer {self.api_key}\"\n            }\n\n            request_body.pop(\"stream\", None)\n            url = f\"{self.base_url}/chat/completions\"\n            response = requests.post(url, headers=headers, json=request_body, timeout=120)\n\n            if response.status_code != 200:\n                error_msg = response.text\n                logger.error(f\"[MOONSHOT] API error: status={response.status_code}, msg={error_msg}\")\n                yield {\"error\": True, \"message\": error_msg, \"status_code\": response.status_code}\n                return\n\n            result = response.json()\n            message = result[\"choices\"][0][\"message\"]\n            finish_reason = result[\"choices\"][0][\"finish_reason\"]\n\n            response_data = {\"role\": \"assistant\", \"content\": []}\n\n            # Add text content\n            if message.get(\"content\"):\n                response_data[\"content\"].append({\n                    \"type\": \"text\",\n                    \"text\": message[\"content\"]\n                })\n\n            # Add tool calls\n            if message.get(\"tool_calls\"):\n                for tool_call in message[\"tool_calls\"]:\n                    response_data[\"content\"].append({\n                        \"type\": \"tool_use\",\n                        \"id\": tool_call[\"id\"],\n                        \"name\": tool_call[\"function\"][\"name\"],\n                        \"input\": json.loads(tool_call[\"function\"][\"arguments\"] or \"{}\")  # guard: arguments may be an empty string\n                    })\n\n            # Map finish_reason\n            if finish_reason == \"tool_calls\":\n                response_data[\"stop_reason\"] = \"tool_use\"\n            elif finish_reason == \"stop\":\n                response_data[\"stop_reason\"] = \"end_turn\"\n            else:\n                response_data[\"stop_reason\"] = finish_reason\n\n            yield response_data\n\n        except requests.exceptions.Timeout:\n            logger.error(\"[MOONSHOT] Request timeout\")\n            yield
{\"error\": True, \"message\": \"Request timeout\", \"status_code\": 500}\n        except Exception as e:\n            logger.error(f\"[MOONSHOT] sync response error: {e}\")\n            import traceback\n            logger.error(traceback.format_exc())\n            yield {\"error\": True, \"message\": str(e), \"status_code\": 500}\n\n    # -------------------- format conversion --------------------\n\n    def _convert_messages_to_openai_format(self, messages):\n        \"\"\"\n        Convert messages from Claude format to OpenAI format.\n\n        Claude format uses content blocks: tool_use / tool_result / text\n        OpenAI format uses tool_calls in assistant, role=tool for results\n        \"\"\"\n        if not messages:\n            return []\n\n        converted = []\n\n        for msg in messages:\n            role = msg.get(\"role\")\n            content = msg.get(\"content\")\n\n            # Already a simple string – pass through\n            if isinstance(content, str):\n                converted.append(msg)\n                continue\n\n            if not isinstance(content, list):\n                converted.append(msg)\n                continue\n\n            if role == \"user\":\n                text_parts = []\n                tool_results = []\n\n                for block in content:\n                    if not isinstance(block, dict):\n                        continue\n                    if block.get(\"type\") == \"text\":\n                        text_parts.append(block.get(\"text\", \"\"))\n                    elif block.get(\"type\") == \"tool_result\":\n                        tool_call_id = block.get(\"tool_use_id\") or \"\"\n                        result_content = block.get(\"content\", \"\")\n                        if not isinstance(result_content, str):\n                            result_content = json.dumps(result_content, ensure_ascii=False)\n                        tool_results.append({\n                            \"role\": \"tool\",\n                            \"tool_call_id\": tool_call_id,\n                            \"content\": result_content\n                        })\n\n                # Tool results first (must come right after assistant with tool_calls)\n                for tr in tool_results:\n                    converted.append(tr)\n\n                if text_parts:\n                    converted.append({\"role\": \"user\", \"content\": \"\\n\".join(text_parts)})\n\n            elif role == \"assistant\":\n                openai_msg = {\"role\": \"assistant\"}\n                text_parts = []\n                tool_calls = []\n\n                for block in content:\n                    if not isinstance(block, dict):\n                        continue\n                    if block.get(\"type\") == \"text\":\n                        text_parts.append(block.get(\"text\", \"\"))\n                    elif block.get(\"type\") == \"tool_use\":\n                        tool_calls.append({\n                            \"id\": block.get(\"id\"),\n                            \"type\": \"function\",\n                            \"function\": {\n                                \"name\": block.get(\"name\"),\n                                \"arguments\": json.dumps(block.get(\"input\", {}))\n                            }\n                        })\n\n                if text_parts:\n                    openai_msg[\"content\"] = \"\\n\".join(text_parts)\n                elif not tool_calls:\n                    openai_msg[\"content\"] = \"\"\n\n           
     if tool_calls:\n                    openai_msg[\"tool_calls\"] = tool_calls\n                    if not text_parts:\n                        openai_msg[\"content\"] = None\n\n                converted.append(openai_msg)\n            else:\n                converted.append(msg)\n\n        return converted\n\n    def _convert_tools_to_openai_format(self, tools):\n        \"\"\"\n        Convert tools from Claude format to OpenAI format.\n\n        Claude: {name, description, input_schema}\n        OpenAI: {type: \"function\", function: {name, description, parameters}}\n        \"\"\"\n        if not tools:\n            return None\n\n        converted = []\n        for tool in tools:\n            # Already in OpenAI format\n            if \"type\" in tool and tool[\"type\"] == \"function\":\n                converted.append(tool)\n            else:\n                converted.append({\n                    \"type\": \"function\",\n                    \"function\": {\n                        \"name\": tool.get(\"name\"),\n                        \"description\": tool.get(\"description\"),\n                        \"parameters\": tool.get(\"input_schema\", {})\n                    }\n                })\n\n        return converted\n"
  },
  {
    "path": "models/moonshot/moonshot_session.py",
    "content": "from models.session_manager import Session\nfrom common.log import logger\n\n\nclass MoonshotSession(Session):\n    def __init__(self, session_id, system_prompt=None, model=\"moonshot-v1-128k\"):\n        super().__init__(session_id, system_prompt)\n        self.model = model\n        self.reset()\n\n    def discard_exceeding(self, max_tokens, cur_tokens=None):\n        precise = True\n        try:\n            cur_tokens = self.calc_tokens()\n        except Exception as e:\n            precise = False\n            if cur_tokens is None:\n                raise e\n            logger.debug(\"Exception when counting tokens precisely for query: {}\".format(e))\n        while cur_tokens > max_tokens:\n            if len(self.messages) > 2:\n                self.messages.pop(1)\n            elif len(self.messages) == 2 and self.messages[1][\"role\"] == \"assistant\":\n                self.messages.pop(1)\n                if precise:\n                    cur_tokens = self.calc_tokens()\n                else:\n                    cur_tokens = cur_tokens - max_tokens\n                break\n            elif len(self.messages) == 2 and self.messages[1][\"role\"] == \"user\":\n                logger.warn(\"user message exceed max_tokens. total_tokens={}\".format(cur_tokens))\n                break\n            else:\n                logger.debug(\"max_tokens={}, total_tokens={}, len(messages)={}\".format(max_tokens, cur_tokens,\n                                                                                       len(self.messages)))\n                break\n            if precise:\n                cur_tokens = self.calc_tokens()\n            else:\n                cur_tokens = cur_tokens - max_tokens\n        return cur_tokens\n\n    def calc_tokens(self):\n        return num_tokens_from_messages(self.messages, self.model)\n\n\ndef num_tokens_from_messages(messages, model):\n    tokens = 0\n    for msg in messages:\n        tokens += len(msg[\"content\"])\n    return tokens\n"
  },
  {
    "path": "models/openai/open_ai_bot.py",
    "content": "# encoding:utf-8\n\nimport time\n\nimport openai\nfrom models.openai.openai_compat import RateLimitError, Timeout, APIConnectionError\n\nfrom models.bot import Bot\nfrom models.openai_compatible_bot import OpenAICompatibleBot\nfrom models.openai.open_ai_image import OpenAIImage\nfrom models.openai.open_ai_session import OpenAISession\nfrom models.session_manager import SessionManager\nfrom bridge.context import ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom config import conf\n\nuser_session = dict()\n\n\n# OpenAI对话模型API (可用)\nclass OpenAIBot(Bot, OpenAIImage, OpenAICompatibleBot):\n    def __init__(self):\n        super().__init__()\n        openai.api_key = conf().get(\"open_ai_api_key\")\n        if conf().get(\"open_ai_api_base\"):\n            openai.api_base = conf().get(\"open_ai_api_base\")\n        proxy = conf().get(\"proxy\")\n        if proxy:\n            openai.proxy = proxy\n\n        self.sessions = SessionManager(OpenAISession, model=conf().get(\"model\") or \"text-davinci-003\")\n        self.args = {\n            \"model\": conf().get(\"model\") or \"text-davinci-003\",  # 对话模型的名称\n            \"temperature\": conf().get(\"temperature\", 0.9),  # 值在[0,1]之间，越大表示回复越具有不确定性\n            \"max_tokens\": 1200,  # 回复最大的字符数\n            \"top_p\": 1,\n            \"frequency_penalty\": conf().get(\"frequency_penalty\", 0.0),  # [-2,2]之间，该值越大则更倾向于产生不同的内容\n            \"presence_penalty\": conf().get(\"presence_penalty\", 0.0),  # [-2,2]之间，该值越大则更倾向于产生不同的内容\n            \"request_timeout\": conf().get(\"request_timeout\", None),  # 请求超时时间，openai接口默认设置为600，对于难问题一般需要较长时间\n            \"timeout\": conf().get(\"request_timeout\", None),  # 重试超时时间，在这个时间内，将会自动重试\n            \"stop\": [\"\\n\\n\\n\"],\n        }\n    \n    def get_api_config(self):\n        \"\"\"Get API configuration for OpenAI-compatible base class\"\"\"\n        return {\n            'api_key': conf().get(\"open_ai_api_key\"),\n            'api_base': conf().get(\"open_ai_api_base\"),\n            'model': conf().get(\"model\", \"text-davinci-003\"),\n            'default_temperature': conf().get(\"temperature\", 0.9),\n            'default_top_p': conf().get(\"top_p\", 1.0),\n            'default_frequency_penalty': conf().get(\"frequency_penalty\", 0.0),\n            'default_presence_penalty': conf().get(\"presence_penalty\", 0.0),\n        }\n\n    def reply(self, query, context=None):\n        # acquire reply content\n        if context and context.type:\n            if context.type == ContextType.TEXT:\n                logger.info(\"[OPEN_AI] query={}\".format(query))\n                session_id = context[\"session_id\"]\n                reply = None\n                if query == \"#清除记忆\":\n                    self.sessions.clear_session(session_id)\n                    reply = Reply(ReplyType.INFO, \"记忆已清除\")\n                elif query == \"#清除所有\":\n                    self.sessions.clear_all_session()\n                    reply = Reply(ReplyType.INFO, \"所有人记忆已清除\")\n                else:\n                    session = self.sessions.session_query(query, session_id)\n                    result = self.reply_text(session)\n                    total_tokens, completion_tokens, reply_content = (\n                        result[\"total_tokens\"],\n                        result[\"completion_tokens\"],\n                        result[\"content\"],\n                    )\n                    logger.debug(\n                        \"[OPEN_AI] new_query={}, 
    def reply_text(self, session: OpenAISession, retry_count=0):\n        try:\n            response = openai.Completion.create(prompt=str(session), **self.args)\n            res_content = response.choices[0][\"text\"].strip().replace(\"<|endoftext|>\", \"\")\n            total_tokens = response[\"usage\"][\"total_tokens\"]\n            completion_tokens = response[\"usage\"][\"completion_tokens\"]\n            logger.info(\"[OPEN_AI] reply={}\".format(res_content))\n            return {\n                \"total_tokens\": total_tokens,\n                \"completion_tokens\": completion_tokens,\n                \"content\": res_content,\n            }\n        except Exception as e:\n            need_retry = retry_count < 2\n            result = {\"completion_tokens\": 0, \"content\": \"我现在有点累了，等会再来吧\"}\n            if isinstance(e, RateLimitError):\n                logger.warn(\"[OPEN_AI] RateLimitError: {}\".format(e))\n                result[\"content\"] = \"提问太快啦，请休息一下再问我吧\"\n                if need_retry:\n                    time.sleep(20)\n            elif isinstance(e, Timeout):\n                logger.warn(\"[OPEN_AI] Timeout: {}\".format(e))\n                result[\"content\"] = \"我没有收到你的消息\"\n                if need_retry:\n                    time.sleep(5)\n            elif isinstance(e, APIConnectionError):\n                logger.warn(\"[OPEN_AI] APIConnectionError: {}\".format(e))\n                need_retry = False\n                result[\"content\"] = \"我连接不到你的网络\"\n            else:\n                logger.warn(\"[OPEN_AI] Exception: {}\".format(e))\n                need_retry = False\n                self.sessions.clear_session(session.session_id)\n\n            if need_retry:\n                logger.warn(\"[OPEN_AI] retry attempt {}\".format(retry_count + 1))\n                return self.reply_text(session, retry_count + 1)\n            else:\n                return result\n\n    def call_with_tools(self, messages, tools=None, stream=False, **kwargs):\n        \"\"\"\n        Call OpenAI API with tool support for agent integration\n        Note: This bot uses the old Completion API which doesn't support tools.\n        For tool support, use ChatGPTBot instead.\n\n        This method converts to ChatCompletion API when tools are provided.\n\n        Args:\n            messages: List of messages\n            tools: List of tool definitions (OpenAI format)\n            stream: Whether to use streaming\n            **kwargs: Additional parameters\n\n        Returns:\n            Formatted response in OpenAI format or generator for streaming\n        \"\"\"\n        try:\n            # The old Completion API doesn't support tools\n
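            # (openai.Completion only accepts a flat prompt string, so there is\n            # no tool-calling surface on the legacy endpoint)\n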
         # We need to use ChatCompletion API instead\n            logger.info(\"[OPEN_AI] Using ChatCompletion API for tool support\")\n            \n            # Build request parameters for ChatCompletion\n            request_params = {\n                \"model\": kwargs.get(\"model\", conf().get(\"model\") or \"gpt-4.1\"),\n                \"messages\": messages,\n                \"temperature\": kwargs.get(\"temperature\", conf().get(\"temperature\", 0.9)),\n                \"top_p\": kwargs.get(\"top_p\", 1),\n                \"frequency_penalty\": kwargs.get(\"frequency_penalty\", conf().get(\"frequency_penalty\", 0.0)),\n                \"presence_penalty\": kwargs.get(\"presence_penalty\", conf().get(\"presence_penalty\", 0.0)),\n                \"stream\": stream\n            }\n            \n            # Add max_tokens if specified\n            if kwargs.get(\"max_tokens\"):\n                request_params[\"max_tokens\"] = kwargs[\"max_tokens\"]\n            \n            # Add tools if provided\n            if tools:\n                request_params[\"tools\"] = tools\n                request_params[\"tool_choice\"] = kwargs.get(\"tool_choice\", \"auto\")\n            \n            # Make API call using ChatCompletion\n            if stream:\n                return self._handle_stream_response(request_params)\n            else:\n                return self._handle_sync_response(request_params)\n                \n        except Exception as e:\n            logger.error(f\"[OPEN_AI] call_with_tools error: {e}\")\n            if stream:\n                def error_generator():\n                    yield {\n                        \"error\": True,\n                        \"message\": str(e),\n                        \"status_code\": 500\n                    }\n                return error_generator()\n            else:\n                return {\n                    \"error\": True,\n                    \"message\": str(e),\n                    \"status_code\": 500\n                }\n    \n    def _handle_sync_response(self, request_params):\n        \"\"\"Handle synchronous OpenAI ChatCompletion API response\"\"\"\n        try:\n            response = openai.ChatCompletion.create(**request_params)\n            \n            logger.info(f\"[OPEN_AI] call_with_tools reply, model={response.get('model')}, \"\n                       f\"total_tokens={response.get('usage', {}).get('total_tokens', 0)}\")\n            \n            return response\n            \n        except Exception as e:\n            logger.error(f\"[OPEN_AI] sync response error: {e}\")\n            raise\n    \n    def _handle_stream_response(self, request_params):\n        \"\"\"Handle streaming OpenAI ChatCompletion API response\"\"\"\n        try:\n            stream = openai.ChatCompletion.create(**request_params)\n            \n            for chunk in stream:\n                yield chunk\n                \n        except Exception as e:\n            logger.error(f\"[OPEN_AI] stream response error: {e}\")\n            yield {\n                \"error\": True,\n                \"message\": str(e),\n                \"status_code\": 500\n            }\n"
  },
  {
    "path": "models/openai/open_ai_image.py",
    "content": "import time\n\nimport openai\nfrom models.openai.openai_compat import RateLimitError\n\nfrom common.log import logger\nfrom common.token_bucket import TokenBucket\nfrom config import conf\n\n\n# OPENAI提供的画图接口\nclass OpenAIImage(object):\n    def __init__(self):\n        openai.api_key = conf().get(\"open_ai_api_key\")\n        if conf().get(\"rate_limit_dalle\"):\n            self.tb4dalle = TokenBucket(conf().get(\"rate_limit_dalle\", 50))\n\n    def create_img(self, query, retry_count=0, api_key=None, api_base=None):\n        try:\n            if conf().get(\"rate_limit_dalle\") and not self.tb4dalle.get_token():\n                return False, \"请求太快了，请休息一下再问我吧\"\n            logger.info(\"[OPEN_AI] image_query={}\".format(query))\n            response = openai.Image.create(\n                api_key=api_key,\n                prompt=query,  # 图片描述\n                n=1,  # 每次生成图片的数量\n                model=conf().get(\"text_to_image\") or \"dall-e-2\",\n                # size=conf().get(\"image_create_size\", \"256x256\"),  # 图片大小,可选有 256x256, 512x512, 1024x1024\n            )\n            image_url = response[\"data\"][0][\"url\"]\n            logger.info(\"[OPEN_AI] image_url={}\".format(image_url))\n            return True, image_url\n        except RateLimitError as e:\n            logger.warn(e)\n            if retry_count < 1:\n                time.sleep(5)\n                logger.warn(\"[OPEN_AI] ImgCreate RateLimit exceed, 第{}次重试\".format(retry_count + 1))\n                return self.create_img(query, retry_count + 1)\n            else:\n                return False, \"画图出现问题，请休息一下再问我吧\"\n        except Exception as e:\n            logger.exception(e)\n            return False, \"画图出现问题，请休息一下再问我吧\"\n"
  },
  {
    "path": "models/openai/open_ai_session.py",
    "content": "from models.session_manager import Session\nfrom common.log import logger\n\n\nclass OpenAISession(Session):\n    def __init__(self, session_id, system_prompt=None, model=\"text-davinci-003\"):\n        super().__init__(session_id, system_prompt)\n        self.model = model\n        self.reset()\n\n    def __str__(self):\n        # 构造对话模型的输入\n        \"\"\"\n        e.g.  Q: xxx\n              A: xxx\n              Q: xxx\n        \"\"\"\n        prompt = \"\"\n        for item in self.messages:\n            if item[\"role\"] == \"system\":\n                prompt += item[\"content\"] + \"<|endoftext|>\\n\\n\\n\"\n            elif item[\"role\"] == \"user\":\n                prompt += \"Q: \" + item[\"content\"] + \"\\n\"\n            elif item[\"role\"] == \"assistant\":\n                prompt += \"\\n\\nA: \" + item[\"content\"] + \"<|endoftext|>\\n\"\n\n        if len(self.messages) > 0 and self.messages[-1][\"role\"] == \"user\":\n            prompt += \"A: \"\n        return prompt\n\n    def discard_exceeding(self, max_tokens, cur_tokens=None):\n        precise = True\n        try:\n            cur_tokens = self.calc_tokens()\n        except Exception as e:\n            precise = False\n            if cur_tokens is None:\n                raise e\n            logger.debug(\"Exception when counting tokens precisely for query: {}\".format(e))\n        while cur_tokens > max_tokens:\n            if len(self.messages) > 1:\n                self.messages.pop(0)\n            elif len(self.messages) == 1 and self.messages[0][\"role\"] == \"assistant\":\n                self.messages.pop(0)\n                if precise:\n                    cur_tokens = self.calc_tokens()\n                else:\n                    cur_tokens = len(str(self))\n                break\n            elif len(self.messages) == 1 and self.messages[0][\"role\"] == \"user\":\n                logger.warn(\"user question exceed max_tokens. total_tokens={}\".format(cur_tokens))\n                break\n            else:\n                logger.debug(\"max_tokens={}, total_tokens={}, len(conversation)={}\".format(max_tokens, cur_tokens, len(self.messages)))\n                break\n            if precise:\n                cur_tokens = self.calc_tokens()\n            else:\n                cur_tokens = len(str(self))\n        return cur_tokens\n\n    def calc_tokens(self):\n        return num_tokens_from_string(str(self), self.model)\n\n\n# refer to https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb\ndef num_tokens_from_string(string: str, model: str) -> int:\n    \"\"\"Returns the number of tokens in a text string.\"\"\"\n    import tiktoken\n\n    encoding = tiktoken.encoding_for_model(model)\n    num_tokens = len(encoding.encode(string, disallowed_special=()))\n    return num_tokens\n"
  },
  {
    "path": "models/openai/openai_compat.py",
    "content": "\"\"\"\nOpenAI compatibility layer for different versions.\n\nThis module provides a compatibility layer between OpenAI library versions:\n- OpenAI < 1.0 (old API with openai.error module)\n- OpenAI >= 1.0 (new API with direct exception imports)\n\"\"\"\n\ntry:\n    # Try new OpenAI >= 1.0 API\n    from openai import (\n        OpenAIError,\n        RateLimitError,\n        APIError,\n        APIConnectionError,\n        AuthenticationError,\n        APITimeoutError,\n        BadRequestError,\n    )\n    \n    # Create a mock error module for backward compatibility\n    class ErrorModule:\n        OpenAIError = OpenAIError\n        RateLimitError = RateLimitError\n        APIError = APIError\n        APIConnectionError = APIConnectionError\n        AuthenticationError = AuthenticationError\n        Timeout = APITimeoutError  # Renamed in new version\n        InvalidRequestError = BadRequestError  # Renamed in new version\n    \n    error = ErrorModule()\n    \n    # Also export with new names\n    Timeout = APITimeoutError\n    InvalidRequestError = BadRequestError\n    \nexcept ImportError:\n    # Fall back to old OpenAI < 1.0 API\n    try:\n        import openai.error as error\n        \n        # Export individual exceptions for direct import\n        OpenAIError = error.OpenAIError\n        RateLimitError = error.RateLimitError\n        APIError = error.APIError\n        APIConnectionError = error.APIConnectionError\n        AuthenticationError = error.AuthenticationError\n        InvalidRequestError = error.InvalidRequestError\n        Timeout = error.Timeout\n        BadRequestError = error.InvalidRequestError  # Alias\n        APITimeoutError = error.Timeout  # Alias\n    except (ImportError, AttributeError):\n        # Neither version works, create dummy classes\n        class OpenAIError(Exception):\n            pass\n        \n        class RateLimitError(OpenAIError):\n            pass\n        \n        class APIError(OpenAIError):\n            pass\n        \n        class APIConnectionError(OpenAIError):\n            pass\n        \n        class AuthenticationError(OpenAIError):\n            pass\n        \n        class InvalidRequestError(OpenAIError):\n            pass\n        \n        class Timeout(OpenAIError):\n            pass\n        \n        BadRequestError = InvalidRequestError\n        APITimeoutError = Timeout\n        \n        # Create error module\n        class ErrorModule:\n            OpenAIError = OpenAIError\n            RateLimitError = RateLimitError\n            APIError = APIError\n            APIConnectionError = APIConnectionError\n            AuthenticationError = AuthenticationError\n            InvalidRequestError = InvalidRequestError\n            Timeout = Timeout\n        \n        error = ErrorModule()\n\n# Export all for easy import\n__all__ = [\n    'error',\n    'OpenAIError',\n    'RateLimitError',\n    'APIError',\n    'APIConnectionError',\n    'AuthenticationError',\n    'InvalidRequestError',\n    'Timeout',\n    'BadRequestError',\n    'APITimeoutError',\n]\n"
  },
  {
    "path": "models/openai_compatible_bot.py",
    "content": "# encoding:utf-8\n\n\"\"\"\nOpenAI-Compatible Bot Base Class\n\nProvides a common implementation for bots that are compatible with OpenAI's API format.\nThis includes: OpenAI, LinkAI, Azure OpenAI, and many third-party providers.\n\"\"\"\n\nimport json\nimport openai\nfrom common.log import logger\nfrom agent.protocol.message_utils import drop_orphaned_tool_results_openai\n\n\nclass OpenAICompatibleBot:\n    \"\"\"\n    Base class for OpenAI-compatible bots.\n    \n    Provides common tool calling implementation that can be inherited by:\n    - ChatGPTBot\n    - LinkAIBot  \n    - OpenAIBot\n    - AzureChatGPTBot\n    - Other OpenAI-compatible providers\n    \n    Subclasses only need to override get_api_config() to provide their specific API settings.\n    \"\"\"\n    \n    def get_api_config(self):\n        \"\"\"\n        Get API configuration for this bot.\n        \n        Subclasses should override this to provide their specific config.\n        \n        Returns:\n            dict: {\n                'api_key': str,\n                'api_base': str (optional),\n                'model': str,\n                'default_temperature': float,\n                'default_top_p': float,\n                'default_frequency_penalty': float,\n                'default_presence_penalty': float,\n            }\n        \"\"\"\n        raise NotImplementedError(\"Subclasses must implement get_api_config()\")\n    \n    def call_with_tools(self, messages, tools=None, stream=False, **kwargs):\n        \"\"\"\n        Call OpenAI-compatible API with tool support for agent integration\n        \n        This method handles:\n        1. Format conversion (Claude format → OpenAI format)\n        2. System prompt injection\n        3. API calling with proper configuration\n        4. 
Error handling\n        \n        Args:\n            messages: List of messages (may be in Claude format from agent)\n            tools: List of tool definitions (may be in Claude format from agent)\n            stream: Whether to use streaming\n            **kwargs: Additional parameters (max_tokens, temperature, system, etc.)\n            \n        Returns:\n            Formatted response in OpenAI format or generator for streaming\n        \"\"\"\n        try:\n            # Get API configuration from subclass\n            api_config = self.get_api_config()\n            \n            # Convert messages from Claude format to OpenAI format\n            messages = self._convert_messages_to_openai_format(messages)\n            \n            # Convert tools from Claude format to OpenAI format\n            if tools:\n                tools = self._convert_tools_to_openai_format(tools)\n            \n            # Handle system prompt (OpenAI uses system message, Claude uses separate parameter)\n            system_prompt = kwargs.get('system')\n            if system_prompt:\n                # Add system message at the beginning if not already present\n                if not messages or messages[0].get('role') != 'system':\n                    messages = [{\"role\": \"system\", \"content\": system_prompt}] + messages\n                else:\n                    # Replace existing system message\n                    messages[0] = {\"role\": \"system\", \"content\": system_prompt}\n            \n            # Build request parameters\n            request_params = {\n                \"model\": kwargs.get(\"model\", api_config.get('model', 'gpt-3.5-turbo')),\n                \"messages\": messages,\n                \"temperature\": kwargs.get(\"temperature\", api_config.get('default_temperature', 0.9)),\n                \"top_p\": kwargs.get(\"top_p\", api_config.get('default_top_p', 1.0)),\n                \"frequency_penalty\": kwargs.get(\"frequency_penalty\", api_config.get('default_frequency_penalty', 0.0)),\n                \"presence_penalty\": kwargs.get(\"presence_penalty\", api_config.get('default_presence_penalty', 0.0)),\n                \"stream\": stream\n            }\n            \n            # Add max_tokens if specified\n            if kwargs.get(\"max_tokens\"):\n                request_params[\"max_tokens\"] = kwargs[\"max_tokens\"]\n            \n            # Add tools if provided\n            if tools:\n                request_params[\"tools\"] = tools\n                request_params[\"tool_choice\"] = kwargs.get(\"tool_choice\", \"auto\")\n            \n            # Make API call with proper configuration\n            api_key = api_config.get('api_key')\n            api_base = api_config.get('api_base')\n            \n            if stream:\n                return self._handle_stream_response(request_params, api_key, api_base)\n            else:\n                return self._handle_sync_response(request_params, api_key, api_base)\n                \n        except Exception as e:\n            error_msg = str(e)\n            logger.error(f\"[{self.__class__.__name__}] call_with_tools error: {error_msg}\")\n            if stream:\n                def error_generator():\n                    yield {\n                        \"error\": True,\n                        \"message\": error_msg,\n                        \"status_code\": 500\n                    }\n                return error_generator()\n            else:\n                return {\n                    \"error\": 
True,\n                    \"message\": error_msg,\n                    \"status_code\": 500\n                }\n    \n    def _handle_sync_response(self, request_params, api_key, api_base):\n        \"\"\"Handle synchronous OpenAI API response\"\"\"\n        try:\n            # Build kwargs with explicit API configuration\n            kwargs = dict(request_params)\n            if api_key:\n                kwargs[\"api_key\"] = api_key\n            if api_base:\n                kwargs[\"api_base\"] = api_base\n            \n            response = openai.ChatCompletion.create(**kwargs)\n            return response\n            \n        except Exception as e:\n            logger.error(f\"[{self.__class__.__name__}] sync response error: {e}\")\n            return {\n                \"error\": True,\n                \"message\": str(e),\n                \"status_code\": 500\n            }\n    \n    def _handle_stream_response(self, request_params, api_key, api_base):\n        \"\"\"Handle streaming OpenAI API response\"\"\"\n        try:\n            # Build kwargs with explicit API configuration\n            kwargs = dict(request_params)\n            if api_key:\n                kwargs[\"api_key\"] = api_key\n            if api_base:\n                kwargs[\"api_base\"] = api_base\n            \n            stream = openai.ChatCompletion.create(**kwargs)\n            \n            # Stream chunks to caller\n            for chunk in stream:\n                yield chunk\n                \n        except Exception as e:\n            logger.error(f\"[{self.__class__.__name__}] stream response error: {e}\")\n            yield {\n                \"error\": True,\n                \"message\": str(e),\n                \"status_code\": 500\n            }\n    \n    def _convert_tools_to_openai_format(self, tools):\n        \"\"\"\n        Convert tools from Claude format to OpenAI format\n        \n        Claude format: {name, description, input_schema}\n        OpenAI format: {type: \"function\", function: {name, description, parameters}}\n        \"\"\"\n        if not tools:\n            return None\n        \n        openai_tools = []\n        for tool in tools:\n            # Check if already in OpenAI format\n            if 'type' in tool and tool['type'] == 'function':\n                openai_tools.append(tool)\n            else:\n                # Convert from Claude format\n                openai_tools.append({\n                    \"type\": \"function\",\n                    \"function\": {\n                        \"name\": tool.get(\"name\"),\n                        \"description\": tool.get(\"description\"),\n                        \"parameters\": tool.get(\"input_schema\", {})\n                    }\n                })\n        \n        return openai_tools\n    \n    def _convert_messages_to_openai_format(self, messages):\n        \"\"\"\n        Convert messages from Claude format to OpenAI format\n        \n        Claude uses content blocks with types like 'tool_use', 'tool_result'\n        OpenAI uses 'tool_calls' in assistant messages and 'tool' role for results\n        \"\"\"\n        if not messages:\n            return []\n        \n        openai_messages = []\n        \n        for msg in messages:\n            role = msg.get(\"role\")\n            content = msg.get(\"content\")\n            \n            # Handle string content (already in correct format)\n            if isinstance(content, str):\n                openai_messages.append(msg)\n                continue\n  
          \n            # Handle list content (Claude format with content blocks)\n            if isinstance(content, list):\n                # Check if this is a tool result message (user role with tool_result blocks)\n                if role == \"user\" and any(block.get(\"type\") == \"tool_result\" for block in content):\n                    # Separate text content and tool_result blocks\n                    text_parts = []\n                    tool_results = []\n\n                    for block in content:\n                        if block.get(\"type\") == \"text\":\n                            text_parts.append(block.get(\"text\", \"\"))\n                        elif block.get(\"type\") == \"tool_result\":\n                            tool_results.append(block)\n\n                    # First, add tool result messages (must come immediately after assistant with tool_calls)\n                    for block in tool_results:\n                        tool_call_id = block.get(\"tool_use_id\") or \"\"\n                        if not tool_call_id:\n                            logger.warning(f\"[OpenAICompatible] tool_result missing tool_use_id, using empty string\")\n                        # Ensure content is a string (some providers require string content)\n                        result_content = block.get(\"content\", \"\")\n                        if not isinstance(result_content, str):\n                            result_content = json.dumps(result_content, ensure_ascii=False)\n                        openai_messages.append({\n                            \"role\": \"tool\",\n                            \"tool_call_id\": tool_call_id,\n                            \"content\": result_content\n                        })\n\n                    # Then, add text content as a separate user message if present\n                    if text_parts:\n                        openai_messages.append({\n                            \"role\": \"user\",\n                            \"content\": \" \".join(text_parts)\n                        })\n\n                # Check if this is an assistant message with tool_use blocks\n                elif role == \"assistant\":\n                    # Separate text content and tool_use blocks\n                    text_parts = []\n                    tool_calls = []\n\n                    for block in content:\n                        if block.get(\"type\") == \"text\":\n                            text_parts.append(block.get(\"text\", \"\"))\n                        elif block.get(\"type\") == \"tool_use\":\n                            tool_id = block.get(\"id\") or \"\"\n                            if not tool_id:\n                                logger.warning(f\"[OpenAICompatible] tool_use missing id for '{block.get('name')}'\")\n                            tool_calls.append({\n                                \"id\": tool_id,\n                                \"type\": \"function\",\n                                \"function\": {\n                                    \"name\": block.get(\"name\"),\n                                    \"arguments\": json.dumps(block.get(\"input\", {}))\n                                }\n                            })\n\n                    # Build OpenAI format assistant message\n                    openai_msg = {\n                        \"role\": \"assistant\",\n                        \"content\": \" \".join(text_parts) if text_parts else None\n                    }\n\n                    if tool_calls:\n                        
openai_msg[\"tool_calls\"] = tool_calls\n\n                    if msg.get(\"_gemini_raw_parts\"):\n                        openai_msg[\"_gemini_raw_parts\"] = msg[\"_gemini_raw_parts\"]\n\n                    openai_messages.append(openai_msg)\n                else:\n                    # Other list content, keep as is\n                    openai_messages.append(msg)\n            else:\n                # Other formats, keep as is\n                openai_messages.append(msg)\n\n        return drop_orphaned_tool_results_openai(openai_messages)\n"
  },
  {
    "path": "models/session_manager.py",
    "content": "from common.expired_dict import ExpiredDict\nfrom common.log import logger\nfrom config import conf\n\n\nclass Session(object):\n    def __init__(self, session_id, system_prompt=None):\n        self.session_id = session_id\n        self.messages = []\n        if system_prompt is None:\n            self.system_prompt = conf().get(\"character_desc\", \"\")\n        else:\n            self.system_prompt = system_prompt\n\n    # 重置会话\n    def reset(self):\n        system_item = {\"role\": \"system\", \"content\": self.system_prompt}\n        self.messages = [system_item]\n\n    def set_system_prompt(self, system_prompt):\n        self.system_prompt = system_prompt\n        self.reset()\n\n    def add_query(self, query):\n        user_item = {\"role\": \"user\", \"content\": query}\n        self.messages.append(user_item)\n\n    def add_reply(self, reply):\n        assistant_item = {\"role\": \"assistant\", \"content\": reply}\n        self.messages.append(assistant_item)\n\n    def discard_exceeding(self, max_tokens=None, cur_tokens=None):\n        raise NotImplementedError\n\n    def calc_tokens(self):\n        raise NotImplementedError\n\n\nclass SessionManager(object):\n    def __init__(self, sessioncls, **session_args):\n        if conf().get(\"expires_in_seconds\"):\n            sessions = ExpiredDict(conf().get(\"expires_in_seconds\"))\n        else:\n            sessions = dict()\n        self.sessions = sessions\n        self.sessioncls = sessioncls\n        self.session_args = session_args\n\n    def build_session(self, session_id, system_prompt=None):\n        \"\"\"\n        如果session_id不在sessions中，创建一个新的session并添加到sessions中\n        如果system_prompt不会空，会更新session的system_prompt并重置session\n        \"\"\"\n        if session_id is None:\n            return self.sessioncls(session_id, system_prompt, **self.session_args)\n\n        if session_id not in self.sessions:\n            self.sessions[session_id] = self.sessioncls(session_id, system_prompt, **self.session_args)\n        elif system_prompt is not None:  # 如果有新的system_prompt，更新并重置session\n            self.sessions[session_id].set_system_prompt(system_prompt)\n        session = self.sessions[session_id]\n        return session\n\n    def session_query(self, query, session_id):\n        session = self.build_session(session_id)\n        session.add_query(query)\n        try:\n            max_tokens = conf().get(\"conversation_max_tokens\", 1000)\n            total_tokens = session.discard_exceeding(max_tokens, None)\n            logger.debug(\"prompt tokens used={}\".format(total_tokens))\n        except Exception as e:\n            logger.warning(\"Exception when counting tokens precisely for prompt: {}\".format(str(e)))\n        return session\n\n    def session_reply(self, reply, session_id, total_tokens=None):\n        session = self.build_session(session_id)\n        session.add_reply(reply)\n        try:\n            max_tokens = conf().get(\"conversation_max_tokens\", 1000)\n            tokens_cnt = session.discard_exceeding(max_tokens, total_tokens)\n            logger.debug(\"raw total_tokens={}, savesession tokens={}\".format(total_tokens, tokens_cnt))\n        except Exception as e:\n            logger.warning(\"Exception when counting tokens precisely for session: {}\".format(str(e)))\n        return session\n\n    def clear_session(self, session_id):\n        if session_id in self.sessions:\n            del self.sessions[session_id]\n\n    def clear_all_session(self):\n        self.sessions.clear()\n"
  },
  {
    "path": "models/xunfei/xunfei_spark_bot.py",
    "content": "# encoding:utf-8\n\nimport requests, json\nfrom models.bot import Bot\nfrom models.session_manager import SessionManager\nfrom models.chatgpt.chat_gpt_session import ChatGPTSession\nfrom bridge.context import ContextType, Context\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom config import conf\nfrom common import const\nimport time\nimport _thread as thread\nimport datetime\nfrom datetime import datetime\nfrom wsgiref.handlers import format_date_time\nfrom urllib.parse import urlencode\nimport base64\nimport ssl\nimport hashlib\nimport hmac\nimport json\nfrom time import mktime\nfrom urllib.parse import urlparse\nimport websocket\nimport queue\nimport threading\nimport random\n\n# 消息队列 map\nqueue_map = dict()\n\n# 响应队列 map\nreply_map = dict()\n\n\nclass XunFeiBot(Bot):\n    def __init__(self):\n        super().__init__()\n        self.app_id = conf().get(\"xunfei_app_id\")\n        self.api_key = conf().get(\"xunfei_api_key\")\n        self.api_secret = conf().get(\"xunfei_api_secret\")\n        # 默认使用v2.0版本: \"generalv2\"\n        # Spark Lite请求地址(spark_url): wss://spark-api.xf-yun.com/v1.1/chat, 对应的domain参数为: \"lite\"\n        # Spark V2.0请求地址(spark_url): wss://spark-api.xf-yun.com/v2.1/chat, 对应的domain参数为: \"generalv2\"\n        # Spark Pro 请求地址(spark_url): wss://spark-api.xf-yun.com/v3.1/chat, 对应的domain参数为: \"generalv3\"\n        # Spark Pro-128K请求地址(spark_url):  wss://spark-api.xf-yun.com/chat/pro-128k, 对应的domain参数为: \"pro-128k\"\n        # Spark Max 请求地址(spark_url): wss://spark-api.xf-yun.com/v3.5/chat, 对应的domain参数为: \"generalv3.5\"\n        # Spark4.0 Ultra 请求地址(spark_url): wss://spark-api.xf-yun.com/v4.0/chat, 对应的domain参数为: \"4.0Ultra\"\n        # 后续模型更新，对应的参数可以参考官网文档获取：https://www.xfyun.cn/doc/spark/Web.html\n        self.domain = conf().get(\"xunfei_domain\", \"generalv3.5\")\n        self.spark_url = conf().get(\"xunfei_spark_url\", \"wss://spark-api.xf-yun.com/v3.5/chat\")\n        self.host = urlparse(self.spark_url).netloc\n        self.path = urlparse(self.spark_url).path\n        # 和wenxin使用相同的session机制\n        self.sessions = SessionManager(ChatGPTSession, model=const.XUNFEI)\n\n    def reply(self, query, context: Context = None) -> Reply:\n        if context.type == ContextType.TEXT:\n            logger.info(\"[XunFei] query={}\".format(query))\n            session_id = context[\"session_id\"]\n            request_id = self.gen_request_id(session_id)\n            reply_map[request_id] = \"\"\n            session = self.sessions.session_query(query, session_id)\n            threading.Thread(target=self.create_web_socket,\n                             args=(session.messages, request_id)).start()\n            depth = 0\n            time.sleep(0.1)\n            t1 = time.time()\n            usage = {}\n            while depth <= 300:\n                try:\n                    data_queue = queue_map.get(request_id)\n                    if not data_queue:\n                        depth += 1\n                        time.sleep(0.1)\n                        continue\n                    data_item = data_queue.get(block=True, timeout=0.1)\n                    if data_item.is_end:\n                        # 请求结束\n                        del queue_map[request_id]\n                        if data_item.reply:\n                            reply_map[request_id] += data_item.reply\n                        usage = data_item.usage\n                        break\n\n                    reply_map[request_id] += data_item.reply\n                
    depth += 1\n                except Exception as e:\n                    depth += 1\n                    continue\n            t2 = time.time()\n            logger.info(\n                f\"[XunFei-API] response={reply_map[request_id]}, time={t2 - t1}s, usage={usage}\"\n            )\n            self.sessions.session_reply(reply_map[request_id], session_id,\n                                        usage.get(\"total_tokens\"))\n            reply = Reply(ReplyType.TEXT, reply_map[request_id])\n            del reply_map[request_id]\n            return reply\n        else:\n            reply = Reply(ReplyType.ERROR,\n                          \"Bot不支持处理{}类型的消息\".format(context.type))\n            return reply\n\n    def create_web_socket(self, prompt, request_id, temperature=0.5):\n        logger.info(f\"[XunFei] start connect, prompt={prompt}\")\n        websocket.enableTrace(False)\n        wsUrl = self.create_url()\n        ws = websocket.WebSocketApp(wsUrl,\n                                    on_message=on_message,\n                                    on_error=on_error,\n                                    on_close=on_close,\n                                    on_open=on_open)\n        data_queue = queue.Queue(1000)\n        queue_map[request_id] = data_queue\n        ws.appid = self.app_id\n        ws.question = prompt\n        ws.domain = self.domain\n        ws.session_id = request_id  # 回调函数中以此标识一次请求\n        ws.temperature = temperature\n        ws.run_forever(sslopt={\"cert_reqs\": ssl.CERT_NONE})\n\n    def gen_request_id(self, session_id: str):\n        return session_id + \"_\" + str(int(time.time())) + \"_\" + str(\n            random.randint(0, 100))\n\n    # 生成url\n    def create_url(self):\n        # 生成RFC1123格式的时间戳\n        now = datetime.now()\n        date = format_date_time(mktime(now.timetuple()))\n\n        # 拼接字符串\n        signature_origin = \"host: \" + self.host + \"\\n\"\n        signature_origin += \"date: \" + date + \"\\n\"\n        signature_origin += \"GET \" + self.path + \" HTTP/1.1\"\n\n        # 使用hmac-sha256进行加密\n        signature_sha = hmac.new(self.api_secret.encode('utf-8'),\n                                 signature_origin.encode('utf-8'),\n                                 digestmod=hashlib.sha256).digest()\n\n        signature_sha_base64 = base64.b64encode(signature_sha).decode(\n            encoding='utf-8')\n\n        authorization_origin = f'api_key=\"{self.api_key}\", algorithm=\"hmac-sha256\", headers=\"host date request-line\", ' \\\n                               f'signature=\"{signature_sha_base64}\"'\n\n        authorization = base64.b64encode(\n            authorization_origin.encode('utf-8')).decode(encoding='utf-8')\n\n        # 将请求的鉴权参数组合为字典\n        v = {\"authorization\": authorization, \"date\": date, \"host\": self.host}\n        # 拼接鉴权参数，生成url\n        url = self.spark_url + '?' 
+ urlencode(v)\n        # 返回建立连接时使用的完整url，调试时可打印该url，比对相同参数时生成的url是否一致\n        return url\n\n    def gen_params(self, appid, domain, question):\n        \"\"\"\n        通过appid和用户的提问来生成请求参数\n        \"\"\"\n        data = {\n            \"header\": {\n                \"app_id\": appid,\n                \"uid\": \"1234\"\n            },\n            \"parameter\": {\n                \"chat\": {\n                    \"domain\": domain,\n                    \"random_threshold\": 0.5,\n                    \"max_tokens\": 2048,\n                    \"auditing\": \"default\"\n                }\n            },\n            \"payload\": {\n                \"message\": {\n                    \"text\": question\n                }\n            }\n        }\n        return data\n\n\nclass ReplyItem:\n    def __init__(self, reply, usage=None, is_end=False):\n        self.is_end = is_end\n        self.reply = reply\n        self.usage = usage\n\n\n# 收到websocket错误的处理\ndef on_error(ws, error):\n    logger.error(f\"[XunFei] error: {str(error)}\")\n\n\n# 收到websocket关闭的处理\ndef on_close(ws, close_status_code, close_msg):\n    data_queue = queue_map.get(ws.session_id)\n    if data_queue:\n        # 放入结束标记(ReplyItem)，与reply()中的is_end判断保持一致，避免请求线程空等超时\n        data_queue.put(ReplyItem(None, is_end=True))\n\n\n# 收到websocket连接建立的处理\ndef on_open(ws):\n    logger.info(f\"[XunFei] Start websocket, session_id={ws.session_id}\")\n    thread.start_new_thread(run, (ws, ))\n\n\ndef run(ws, *args):\n    data = json.dumps(\n        gen_params(appid=ws.appid,\n                   domain=ws.domain,\n                   question=ws.question,\n                   temperature=ws.temperature))\n    ws.send(data)\n\n\n# Websocket 操作\n# 收到websocket消息的处理\ndef on_message(ws, message):\n    data = json.loads(message)\n    code = data['header']['code']\n    if code != 0:\n        logger.error(f'请求错误: {code}, {data}')\n        ws.close()\n    else:\n        choices = data[\"payload\"][\"choices\"]\n        status = choices[\"status\"]\n        content = choices[\"text\"][0][\"content\"]\n        data_queue = queue_map.get(ws.session_id)\n        if not data_queue:\n            logger.error(\n                f\"[XunFei] can't find data queue, session_id={ws.session_id}\")\n            return\n        reply_item = ReplyItem(content)\n        if status == 2:\n            usage = data[\"payload\"].get(\"usage\")\n            reply_item = ReplyItem(content, usage)\n            reply_item.is_end = True\n            ws.close()\n        data_queue.put(reply_item)\n\n\ndef gen_params(appid, domain, question, temperature=0.5):\n    \"\"\"\n    通过appid和用户的提问来生成请求参数\n    \"\"\"\n    data = {\n        \"header\": {\n            \"app_id\": appid,\n            \"uid\": \"1234\"\n        },\n        \"parameter\": {\n            \"chat\": {\n                \"domain\": domain,\n                \"temperature\": temperature,\n                \"random_threshold\": 0.5,\n                \"max_tokens\": 2048,\n                \"auditing\": \"default\"\n            }\n        },\n        \"payload\": {\n            \"message\": {\n                \"text\": question\n            }\n        }\n    }\n    return data\n"
  },
  {
    "path": "models/zhipuai/zhipu_ai_image.py",
    "content": "from common.log import logger\nfrom config import conf\n\n\n# ZhipuAI提供的画图接口\n\nclass ZhipuAIImage(object):\n    def __init__(self):\n        from zai import ZhipuAiClient\n        # 初始化客户端，支持自定义 API base URL（例如智谱国际版 z.ai）\n        api_key = conf().get(\"zhipu_ai_api_key\")\n        api_base = conf().get(\"zhipu_ai_api_base\")\n        \n        if api_base:\n            self.client = ZhipuAiClient(api_key=api_key, base_url=api_base)\n        else:\n            self.client = ZhipuAiClient(api_key=api_key)\n\n    def create_img(self, query, retry_count=0, api_key=None, api_base=None):\n        try:\n            if conf().get(\"rate_limit_dalle\"):\n                return False, \"请求太快了，请休息一下再问我吧\"\n            logger.info(\"[ZHIPU_AI] image_query={}\".format(query))\n            response = self.client.images.generations(\n                prompt=query,\n                n=1,  # 每次生成图片的数量\n                model=conf().get(\"text_to_image\") or \"cogview-3\",\n                size=conf().get(\"image_create_size\", \"1024x1024\"),  # 图片大小,可选有 256x256, 512x512, 1024x1024\n                quality=\"standard\",\n            )\n            image_url = response.data[0].url\n            logger.info(\"[ZHIPU_AI] image_url={}\".format(image_url))\n            return True, image_url\n        except Exception as e:\n            logger.exception(e)\n            return False, \"画图出现问题，请休息一下再问我吧\"\n"
  },
  {
    "path": "models/zhipuai/zhipu_ai_session.py",
    "content": "from models.session_manager import Session\nfrom common.log import logger\n\n\nclass ZhipuAISession(Session):\n    def __init__(self, session_id, system_prompt=None, model=\"glm-4\"):\n        super().__init__(session_id, system_prompt)\n        self.model = model\n        self.reset()\n        if not system_prompt:\n            logger.warn(\"[ZhiPu] `character_desc` can not be empty\")\n\n    def discard_exceeding(self, max_tokens, cur_tokens=None):\n        precise = True\n        try:\n            cur_tokens = self.calc_tokens()\n        except Exception as e:\n            precise = False\n            if cur_tokens is None:\n                raise e\n            logger.debug(\"Exception when counting tokens precisely for query: {}\".format(e))\n        while cur_tokens > max_tokens:\n            if len(self.messages) > 2:\n                self.messages.pop(1)\n            elif len(self.messages) == 2 and self.messages[1][\"role\"] == \"assistant\":\n                self.messages.pop(1)\n                if precise:\n                    cur_tokens = self.calc_tokens()\n                else:\n                    cur_tokens = cur_tokens - max_tokens\n                break\n            elif len(self.messages) == 2 and self.messages[1][\"role\"] == \"user\":\n                logger.warn(\"user message exceed max_tokens. total_tokens={}\".format(cur_tokens))\n                break\n            else:\n                logger.debug(\"max_tokens={}, total_tokens={}, len(messages)={}\".format(max_tokens, cur_tokens,\n                                                                                       len(self.messages)))\n                break\n            if precise:\n                cur_tokens = self.calc_tokens()\n            else:\n                cur_tokens = cur_tokens - max_tokens\n        return cur_tokens\n\n    def calc_tokens(self):\n        return num_tokens_from_messages(self.messages, self.model)\n\n\ndef num_tokens_from_messages(messages, model):\n    tokens = 0\n    for msg in messages:\n        tokens += len(msg[\"content\"])\n    return tokens\n"
  },
  {
    "path": "models/zhipuai/zhipuai_bot.py",
    "content": "# encoding:utf-8\n\nimport time\nimport json\n\nfrom models.bot import Bot\nfrom models.zhipuai.zhipu_ai_session import ZhipuAISession\nfrom models.zhipuai.zhipu_ai_image import ZhipuAIImage\nfrom models.session_manager import SessionManager\nfrom bridge.context import ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom config import conf, load_config\nfrom zai import ZhipuAiClient\n\n\n# ZhipuAI对话模型API\nclass ZHIPUAIBot(Bot, ZhipuAIImage):\n    def __init__(self):\n        super().__init__()\n        self.sessions = SessionManager(ZhipuAISession, model=conf().get(\"model\") or \"ZHIPU_AI\")\n        self.args = {\n            \"model\": conf().get(\"model\") or \"glm-4\",  # 对话模型的名称\n            \"temperature\": conf().get(\"temperature\", 0.9),  # 值在(0,1)之间(智谱AI 的温度不能取 0 或者 1)\n            \"top_p\": conf().get(\"top_p\", 0.7),  # 值在(0,1)之间(智谱AI 的 top_p 不能取 0 或者 1)\n        }\n        # 初始化客户端，支持自定义 API base URL（例如智谱国际版 z.ai）\n        api_key = conf().get(\"zhipu_ai_api_key\")\n        api_base = conf().get(\"zhipu_ai_api_base\")\n        \n        if api_base:\n            self.client = ZhipuAiClient(api_key=api_key, base_url=api_base)\n        else:\n            self.client = ZhipuAiClient(api_key=api_key)\n\n    def reply(self, query, context=None):\n        # acquire reply content\n        if context.type == ContextType.TEXT:\n            logger.info(\"[ZHIPU_AI] query={}\".format(query))\n\n            session_id = context[\"session_id\"]\n            reply = None\n            clear_memory_commands = conf().get(\"clear_memory_commands\", [\"#清除记忆\"])\n            if query in clear_memory_commands:\n                self.sessions.clear_session(session_id)\n                reply = Reply(ReplyType.INFO, \"记忆已清除\")\n            elif query == \"#清除所有\":\n                self.sessions.clear_all_session()\n                reply = Reply(ReplyType.INFO, \"所有人记忆已清除\")\n            elif query == \"#更新配置\":\n                load_config()\n                reply = Reply(ReplyType.INFO, \"配置已更新\")\n            if reply:\n                return reply\n            session = self.sessions.session_query(query, session_id)\n            logger.debug(\"[ZHIPU_AI] session query={}\".format(session.messages))\n\n            model = context.get(\"gpt_model\")\n            new_args = None\n            if model:\n                new_args = self.args.copy()\n                new_args[\"model\"] = model\n\n            reply_content = self.reply_text(session, args=new_args)\n            logger.debug(\n                \"[ZHIPU_AI] new_query={}, session_id={}, reply_cont={}, completion_tokens={}\".format(\n                    session.messages,\n                    session_id,\n                    reply_content[\"content\"],\n                    reply_content[\"completion_tokens\"],\n                )\n            )\n            if reply_content[\"completion_tokens\"] == 0 and len(reply_content[\"content\"]) > 0:\n                reply = Reply(ReplyType.ERROR, reply_content[\"content\"])\n            elif reply_content[\"completion_tokens\"] > 0:\n                self.sessions.session_reply(reply_content[\"content\"], session_id, reply_content[\"total_tokens\"])\n                reply = Reply(ReplyType.TEXT, reply_content[\"content\"])\n            else:\n                reply = Reply(ReplyType.ERROR, reply_content[\"content\"])\n                logger.debug(\"[ZHIPU_AI] reply {} used 0 tokens.\".format(reply_content))\n            return reply\n        elif 
context.type == ContextType.IMAGE_CREATE:\n            ok, retstring = self.create_img(query, 0)\n            reply = None\n            if ok:\n                reply = Reply(ReplyType.IMAGE_URL, retstring)\n            else:\n                reply = Reply(ReplyType.ERROR, retstring)\n            return reply\n\n        else:\n            reply = Reply(ReplyType.ERROR, \"Bot不支持处理{}类型的消息\".format(context.type))\n            return reply\n\n    def reply_text(self, session: ZhipuAISession, args=None, retry_count=0) -> dict:\n        \"\"\"\n        Call ZhipuAI API to get the answer\n        :param session: a conversation session\n        :param args: request arguments\n        :param retry_count: retry count\n        :return: {}\n        \"\"\"\n        try:\n            if args is None:\n                args = self.args\n            response = self.client.chat.completions.create(messages=session.messages, **args)\n            # logger.debug(\"[ZHIPU_AI] response={}\".format(response))\n            # logger.info(\"[ZHIPU_AI] reply={}, total_tokens={}\".format(response.choices[0]['message']['content'], response[\"usage\"][\"total_tokens\"]))\n\n            return {\n                \"total_tokens\": response.usage.total_tokens,\n                \"completion_tokens\": response.usage.completion_tokens,\n                \"content\": response.choices[0].message.content,\n            }\n        except Exception as e:\n            need_retry = retry_count < 2\n            result = {\"completion_tokens\": 0, \"content\": \"我现在有点累了，等会再来吧\"}\n            error_str = str(e).lower()\n            \n            # Check error type by error message content\n            if \"rate\" in error_str and \"limit\" in error_str:\n                logger.warn(\"[ZHIPU_AI] RateLimitError: {}\".format(e))\n                result[\"content\"] = \"提问太快啦，请休息一下再问我吧\"\n                if need_retry:\n                    time.sleep(20)\n            elif \"timeout\" in error_str or \"timed out\" in error_str:\n                logger.warn(\"[ZHIPU_AI] Timeout: {}\".format(e))\n                result[\"content\"] = \"我没有收到你的消息\"\n                if need_retry:\n                    time.sleep(5)\n            elif \"api\" in error_str and (\"error\" in error_str or \"gateway\" in error_str):\n                logger.warn(\"[ZHIPU_AI] APIError: {}\".format(e))\n                result[\"content\"] = \"请再问我一次\"\n                if need_retry:\n                    time.sleep(10)\n            elif \"connection\" in error_str or \"network\" in error_str:\n                logger.warn(\"[ZHIPU_AI] ConnectionError: {}\".format(e))\n                result[\"content\"] = \"我连接不到你的网络\"\n                if need_retry:\n                    time.sleep(5)\n            else:\n                logger.exception(\"[ZHIPU_AI] Exception: {}\".format(e))\n                need_retry = False\n                self.sessions.clear_session(session.session_id)\n\n            if need_retry:\n                logger.warn(\"[ZHIPU_AI] 第{}次重试\".format(retry_count + 1))\n                return self.reply_text(session, args, retry_count + 1)\n            else:\n                return result\n\n    def call_with_tools(self, messages, tools=None, stream=False, **kwargs):\n        \"\"\"\n        Call ZhipuAI API with tool support for agent integration\n        \n        This method handles:\n        1. Format conversion (Claude format → ZhipuAI format)\n        2. System prompt injection\n        3. API calling with ZhipuAI SDK\n        4. 
Tool stream support (tool_stream=True for GLM-4.7)\n        \n        Args:\n            messages: List of messages (may be in Claude format from agent)\n            tools: List of tool definitions (may be in Claude format from agent)\n            stream: Whether to use streaming\n            **kwargs: Additional parameters (max_tokens, temperature, system, etc.)\n            \n        Returns:\n            Formatted response or generator for streaming\n        \"\"\"\n        try:\n            # Convert messages from Claude format to ZhipuAI format\n            messages = self._convert_messages_to_zhipu_format(messages)\n            \n            # Convert tools from Claude format to ZhipuAI format\n            if tools:\n                tools = self._convert_tools_to_zhipu_format(tools)\n            \n            # Handle system prompt\n            system_prompt = kwargs.get('system')\n            if system_prompt:\n                # Add system message at the beginning if not already present\n                if not messages or messages[0].get('role') != 'system':\n                    messages = [{\"role\": \"system\", \"content\": system_prompt}] + messages\n                else:\n                    # Replace existing system message\n                    messages[0] = {\"role\": \"system\", \"content\": system_prompt}\n            \n            # Build request parameters\n            request_params = {\n                \"model\": kwargs.get(\"model\", self.args.get(\"model\", \"glm-4\")),\n                \"messages\": messages,\n                \"temperature\": kwargs.get(\"temperature\", self.args.get(\"temperature\", 0.9)),\n                \"top_p\": kwargs.get(\"top_p\", self.args.get(\"top_p\", 0.7)),\n                \"stream\": stream\n            }\n            \n            # Add max_tokens if specified\n            if kwargs.get(\"max_tokens\"):\n                request_params[\"max_tokens\"] = kwargs[\"max_tokens\"]\n            \n            # Add tools if provided\n            if tools:\n                request_params[\"tools\"] = tools\n                # GLM-4.7 with zai-sdk supports tool_stream for streaming tool calls\n                if stream:\n                    request_params[\"tool_stream\"] = kwargs.get(\"tool_stream\", True)\n            \n            # Add thinking parameter for deep thinking mode (GLM-4.7)\n            thinking = kwargs.get(\"thinking\")\n            if thinking:\n                request_params[\"thinking\"] = thinking\n            elif \"glm-4.7\" in request_params[\"model\"]:\n                # Thinking is disabled by default for GLM-4.7 (override via the thinking kwarg)\n                request_params[\"thinking\"] = {\"type\": \"disabled\"}\n            \n            # Make API call with ZhipuAI SDK\n            if stream:\n                return self._handle_stream_response(request_params)\n            else:\n                return self._handle_sync_response(request_params)\n                \n        except Exception as e:\n            error_msg = str(e)\n            logger.error(f\"[ZHIPU_AI] call_with_tools error: {error_msg}\")\n            if stream:\n                def error_generator():\n                    yield {\n                        \"error\": True,\n                        \"message\": error_msg,\n                        \"status_code\": 500\n                    }\n                return error_generator()\n            else:\n                return {\n                    \"error\": True,\n                    \"message\": error_msg,\n                    
\"status_code\": 500\n                }\n    \n    def _handle_sync_response(self, request_params):\n        \"\"\"Handle synchronous ZhipuAI API response\"\"\"\n        try:\n            response = self.client.chat.completions.create(**request_params)\n            \n            # Convert ZhipuAI response to OpenAI-compatible format\n            return {\n                \"id\": response.id,\n                \"object\": \"chat.completion\",\n                \"created\": response.created,\n                \"model\": response.model,\n                \"choices\": [{\n                    \"index\": 0,\n                    \"message\": {\n                        \"role\": response.choices[0].message.role,\n                        \"content\": response.choices[0].message.content,\n                        \"tool_calls\": self._convert_tool_calls_to_openai_format(\n                            getattr(response.choices[0].message, 'tool_calls', None)\n                        )\n                    },\n                    \"finish_reason\": response.choices[0].finish_reason\n                }],\n                \"usage\": {\n                    \"prompt_tokens\": response.usage.prompt_tokens,\n                    \"completion_tokens\": response.usage.completion_tokens,\n                    \"total_tokens\": response.usage.total_tokens\n                }\n            }\n            \n        except Exception as e:\n            logger.error(f\"[ZHIPU_AI] sync response error: {e}\")\n            return {\n                \"error\": True,\n                \"message\": str(e),\n                \"status_code\": 500\n            }\n    \n    def _handle_stream_response(self, request_params):\n        \"\"\"Handle streaming ZhipuAI API response\"\"\"\n        try:\n            stream = self.client.chat.completions.create(**request_params)\n            \n            # Stream chunks to caller, converting to OpenAI format\n            for chunk in stream:\n                if not chunk.choices:\n                    continue\n                \n                delta = chunk.choices[0].delta\n                \n                # Convert to OpenAI-compatible format\n                openai_chunk = {\n                    \"id\": chunk.id,\n                    \"object\": \"chat.completion.chunk\",\n                    \"created\": chunk.created,\n                    \"model\": chunk.model,\n                    \"choices\": [{\n                        \"index\": 0,\n                        \"delta\": {},\n                        \"finish_reason\": chunk.choices[0].finish_reason\n                    }]\n                }\n                \n                # Add role if present\n                if hasattr(delta, 'role') and delta.role:\n                    openai_chunk[\"choices\"][0][\"delta\"][\"role\"] = delta.role\n                \n                # Add content if present\n                if hasattr(delta, 'content') and delta.content:\n                    openai_chunk[\"choices\"][0][\"delta\"][\"content\"] = delta.content\n                \n                # Add reasoning_content as separate field if present (GLM-5/GLM-4.7 thinking)\n                if hasattr(delta, 'reasoning_content') and delta.reasoning_content:\n                    openai_chunk[\"choices\"][0][\"delta\"][\"reasoning_content\"] = delta.reasoning_content\n                \n                # Add tool_calls if present\n                if hasattr(delta, 'tool_calls') and delta.tool_calls:\n                    # For streaming, tool_calls need 
special handling\n                    openai_tool_calls = []\n                    for tc in delta.tool_calls:\n                        tool_call_dict = {\n                            \"index\": getattr(tc, 'index', 0),\n                            \"id\": getattr(tc, 'id', None),\n                            \"type\": \"function\",\n                            \"function\": {}\n                        }\n                        \n                        # Add function name if present\n                        if hasattr(tc, 'function') and hasattr(tc.function, 'name') and tc.function.name:\n                            tool_call_dict[\"function\"][\"name\"] = tc.function.name\n                        \n                        # Add function arguments if present\n                        if hasattr(tc, 'function') and hasattr(tc.function, 'arguments') and tc.function.arguments:\n                            tool_call_dict[\"function\"][\"arguments\"] = tc.function.arguments\n                        \n                        openai_tool_calls.append(tool_call_dict)\n                    \n                    openai_chunk[\"choices\"][0][\"delta\"][\"tool_calls\"] = openai_tool_calls\n                \n                yield openai_chunk\n                \n        except Exception as e:\n            logger.error(f\"[ZHIPU_AI] stream response error: {e}\")\n            yield {\n                \"error\": True,\n                \"message\": str(e),\n                \"status_code\": 500\n            }\n    \n    def _convert_tools_to_zhipu_format(self, tools):\n        \"\"\"\n        Convert tools from Claude format to ZhipuAI format\n        \n        Claude format: {name, description, input_schema}\n        ZhipuAI format: {type: \"function\", function: {name, description, parameters}}\n        \"\"\"\n        if not tools:\n            return None\n        \n        zhipu_tools = []\n        for tool in tools:\n            # Check if already in ZhipuAI/OpenAI format\n            if 'type' in tool and tool['type'] == 'function':\n                zhipu_tools.append(tool)\n            else:\n                # Convert from Claude format\n                zhipu_tools.append({\n                    \"type\": \"function\",\n                    \"function\": {\n                        \"name\": tool.get(\"name\"),\n                        \"description\": tool.get(\"description\"),\n                        \"parameters\": tool.get(\"input_schema\", {})\n                    }\n                })\n        \n        return zhipu_tools\n    \n    def _convert_messages_to_zhipu_format(self, messages):\n        \"\"\"\n        Convert messages from Claude format to ZhipuAI format\n        \n        Claude uses content blocks with types like 'tool_use', 'tool_result'\n        ZhipuAI uses 'tool_calls' in assistant messages and 'tool' role for results\n        \"\"\"\n        if not messages:\n            return []\n        \n        zhipu_messages = []\n        \n        for msg in messages:\n            role = msg.get(\"role\")\n            content = msg.get(\"content\")\n            \n            # Handle string content (already in correct format)\n            if isinstance(content, str):\n                zhipu_messages.append(msg)\n                continue\n            \n            # Handle list content (Claude format with content blocks)\n            if isinstance(content, list):\n                # Check if this is a tool result message (user role with tool_result blocks)\n                if role == \"user\" 
and any(block.get(\"type\") == \"tool_result\" for block in content):\n                    # Convert each tool_result block to a separate tool message\n                    for block in content:\n                        if block.get(\"type\") == \"tool_result\":\n                            zhipu_messages.append({\n                                \"role\": \"tool\",\n                                \"tool_call_id\": block.get(\"tool_use_id\"),\n                                \"content\": block.get(\"content\", \"\")\n                            })\n                \n                # Check if this is an assistant message with tool_use blocks\n                elif role == \"assistant\":\n                    # Separate text content and tool_use blocks\n                    text_parts = []\n                    tool_calls = []\n                    \n                    for block in content:\n                        if block.get(\"type\") == \"text\":\n                            text_parts.append(block.get(\"text\", \"\"))\n                        elif block.get(\"type\") == \"tool_use\":\n                            tool_calls.append({\n                                \"id\": block.get(\"id\"),\n                                \"type\": \"function\",\n                                \"function\": {\n                                    \"name\": block.get(\"name\"),\n                                    \"arguments\": json.dumps(block.get(\"input\", {}))\n                                }\n                            })\n                    \n                    # Build ZhipuAI format assistant message\n                    zhipu_msg = {\n                        \"role\": \"assistant\",\n                        \"content\": \" \".join(text_parts) if text_parts else None\n                    }\n                    \n                    if tool_calls:\n                        zhipu_msg[\"tool_calls\"] = tool_calls\n                    \n                    zhipu_messages.append(zhipu_msg)\n                else:\n                    # Other list content, keep as is\n                    zhipu_messages.append(msg)\n            else:\n                # Other formats, keep as is\n                zhipu_messages.append(msg)\n        \n        return zhipu_messages\n    \n    def _convert_tool_calls_to_openai_format(self, tool_calls):\n        \"\"\"Convert ZhipuAI tool_calls to OpenAI format\"\"\"\n        if not tool_calls:\n            return None\n        \n        openai_tool_calls = []\n        for tool_call in tool_calls:\n            openai_tool_calls.append({\n                \"id\": tool_call.id,\n                \"type\": \"function\",\n                \"function\": {\n                    \"name\": tool_call.function.name,\n                    \"arguments\": tool_call.function.arguments\n                }\n            })\n        \n        return openai_tool_calls\n"
  },
  {
    "path": "plugins/agent/README.md",
    "content": "# Agent插件\n\n## 插件说明\n\n基于 [AgentMesh](https://github.com/MinimalFuture/AgentMesh) 多智能体框架实现的Agent插件，可以让机器人快速获得Agent能力，通过自然语言对话来访问 **终端、浏览器、文件系统、搜索引擎** 等各类工具。\n同时还支持通过 **多智能体协作** 来完成复杂任务，例如多智能体任务分发、多智能体问题讨论、协同处理等。\n\nAgentMesh项目地址：https://github.com/MinimalFuture/AgentMesh\n\n## 安装\n\n1. 确保已安装依赖：\n\n```bash\npip install agentmesh-sdk>=0.1.3\n```\n\n2. 如需使用浏览器工具，还需安装：\n\n```bash\npip install browser-use>=0.1.40\nplaywright install\n```\n\n## 配置\n\n插件配置文件是 `plugins/agent`目录下的 `config.yaml`，包含智能体团队的配置以及工具的配置，可以从模板文件 `config-template.yaml`中复制：\n\n```bash\ncp config-template.yaml config.yaml\n```\n\n说明：\n\n - `team`配置是默认选中的 agent team\n - `teams` 下是Agent团队配置，团队的model默认为`gpt-4.1-mini`，可根据需要进行修改，模型对应的 `api_key` 需要在项目根目录的 `config.json` 全局配置中进行配置。例如openai模型需要配置 `open_ai_api_key`\n - 支持为 `agents` 下面的每个agent添加model字段来设置不同的模型\n\n\n## 使用方法\n\n在对机器人发送的消息中使用 `$agent` 前缀来触发插件，支持以下命令：\n\n- `$agent [task]`: 使用默认团队执行任务 (默认团队可通 config.yaml 中的team配置修改)\n- `$agent teams`: 列出可用的团队\n- `$agent use [team_name] [task]`: 使用指定的团队执行任务\n\n\n### 示例\n\n```bash\n$agent 帮我查看当前目录下有哪些文件夹\n$agent teams\n$agent use software_team 帮我写一个产品预约体验的表单页面\n```\n\n## 工具支持\n\n目前支持多种内置工具，包括但不限于：\n\n- `calculator`: 数学计算工具\n- `current_time`: 获取当前时间\n- `browser`: 浏览器操作工具，注意需安装`browser-use`依赖\n- `google_search`: 搜索引擎，注意需在`config.yaml`中配置 `api_key`\n- `file_save`: 文件保存工具，开启后智能体输出的内容将保存在 `workspace` 目录下\n- `terminal`: 终端命令执行工具\n"
  },
  {
    "path": "plugins/agent/__init__.py",
    "content": "from .agent import AgentPlugin\n\n__all__ = [\"AgentPlugin\"]"
  },
  {
    "path": "plugins/agent/agent.py",
    "content": "import os\nimport yaml\nfrom typing import Dict, List, Optional\n\nfrom agentmesh import AgentTeam, Agent, LLMModel\nfrom agentmesh.models import ClaudeModel\nfrom agentmesh.tools import ToolManager\nfrom config import conf\n\nimport plugins\nfrom plugins import Plugin, Event, EventContext, EventAction\nfrom bridge.context import ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\n\n\n@plugins.register(\n    name=\"agent\",\n    desc=\"Use AgentMesh framework to process tasks with multi-agent teams\",\n    version=\"0.1.0\",\n    author=\"Saboteur7\",\n    desire_priority=1,\n)\nclass AgentPlugin(Plugin):\n    \"\"\"Plugin for integrating AgentMesh framework.\"\"\"\n    \n    def __init__(self):\n        super().__init__()\n        self.handlers[Event.ON_HANDLE_CONTEXT] = self.on_handle_context\n        self.name = \"agent\"\n        self.description = \"Use AgentMesh framework to process tasks with multi-agent teams\"\n        self.config = self._load_config()\n        self.tool_manager = ToolManager()\n        self.tool_manager.load_tools(config_dict=self.config.get(\"tools\"))\n        logger.debug(\"[agent] inited\")\n    \n    def _load_config(self) -> Dict:\n        \"\"\"Load configuration from config.yaml file.\"\"\"\n        config_path = os.path.join(self.path, \"config.yaml\")\n        if not os.path.exists(config_path):\n            logger.debug(f\"Config file not found at {config_path}\")\n            return {}\n            \n        with open(config_path, 'r', encoding='utf-8') as f:\n            return yaml.safe_load(f)\n    \n    def get_help_text(self, verbose=False, **kwargs):\n        \"\"\"Return help message for the agent plugin.\"\"\"\n        help_text = \"通过AgentMesh实现对终端、浏览器、文件系统、搜索引擎等工具的执行，并支持多智能体协作。\"\n        trigger_prefix = conf().get(\"plugin_trigger_prefix\", \"$\")\n        \n        if not verbose:\n            return help_text\n            \n        teams = self.get_available_teams()\n        teams_str = \", \".join(teams) if teams else \"未配置任何团队\"\n        \n        help_text += \"\\n\\n使用说明：\\n\"\n        help_text += f\"{trigger_prefix}agent [task] - 使用默认团队执行任务\\n\"\n        help_text += f\"{trigger_prefix}agent teams - 列出可用的团队\\n\"\n        help_text += f\"{trigger_prefix}agent use [team_name] [task] - 使用特定团队执行任务\\n\\n\"\n        help_text += f\"可用团队: \\n{teams_str}\\n\\n\"\n        help_text += f\"示例:\\n\"\n        help_text += f\"{trigger_prefix}agent 帮我查看当前文件夹路径\\n\"\n        help_text += f\"{trigger_prefix}agent use software_team 帮我写一个产品预约体验的表单页面\"\n        return help_text\n    \n    def get_available_teams(self) -> List[str]:\n        \"\"\"Get list of available teams from configuration.\"\"\"\n        teams_config = self.config.get(\"teams\", {})\n        return list(teams_config.keys())\n\n\n    def create_team_from_config(self, team_name: str) -> Optional[AgentTeam]:\n        \"\"\"Create a team from configuration.\"\"\"\n        # Get teams configuration\n        teams_config = self.config.get(\"teams\", {})\n\n        # Check if the specified team exists\n        if team_name not in teams_config:\n            logger.error(f\"Team '{team_name}' not found in configuration.\")\n            available_teams = list(teams_config.keys())\n            logger.info(f\"Available teams: {', '.join(available_teams)}\")\n            return None\n\n        # Get team configuration\n        team_config = teams_config[team_name]\n\n        # Get team's model\n        team_model_name = 
team_config.get(\"model\", \"gpt-4.1-mini\")\n        team_model = self.create_llm_model(team_model_name)\n\n        # Get team's max_steps (default to 20 if not specified)\n        team_max_steps = team_config.get(\"max_steps\", 20)\n\n        # Create team with the model\n        team = AgentTeam(\n            name=team_name,\n            description=team_config.get(\"description\", \"\"),\n            rule=team_config.get(\"rule\", \"\"),\n            model=team_model,\n            max_steps=team_max_steps\n        )\n\n        # Create and add agents to the team\n        agents_config = team_config.get(\"agents\", [])\n        for agent_config in agents_config:\n            # Check if agent has a specific model\n            if agent_config.get(\"model\"):\n                agent_model = self.create_llm_model(agent_config.get(\"model\"))\n            else:\n                agent_model = team_model\n\n            # Get agent's max_steps\n            agent_max_steps = agent_config.get(\"max_steps\")\n\n            agent = Agent(\n                name=agent_config.get(\"name\", \"\"),\n                system_prompt=agent_config.get(\"system_prompt\", \"\"),\n                model=agent_model,  # Use agent's model if specified, otherwise will use team's model\n                description=agent_config.get(\"description\", \"\"),\n                max_steps=agent_max_steps\n            )\n\n            # Add tools to the agent if specified\n            tool_names = agent_config.get(\"tools\", [])\n            for tool_name in tool_names:\n                tool = self.tool_manager.create_tool(tool_name)\n                if tool:\n                    agent.add_tool(tool)\n                else:\n                    if tool_name == \"browser\":\n                        logger.warning(\n                            \"Tool 'Browser' loaded failed, \"\n                            \"please install the required dependency with: \\n\"\n                            \"'pip install browser-use>=0.1.40' or 'pip install agentmesh-sdk[full]'\\n\"\n                        )\n                    else:\n                        logger.warning(f\"Tool '{tool_name}' not found for agent '{agent.name}'\\n\")\n\n            # Add agent to team\n            team.add(agent)\n\n        return team\n    \n    def on_handle_context(self, e_context: EventContext):\n        \"\"\"Handle the message context.\"\"\"\n        if e_context['context'].type != ContextType.TEXT:\n            return\n        content = e_context['context'].content\n        trigger_prefix = conf().get(\"plugin_trigger_prefix\", \"$\")\n        \n        if not content.startswith(f\"{trigger_prefix}agent \"):\n            e_context.action = EventAction.CONTINUE\n            return\n\n        if not self.config:\n            reply = Reply()\n            reply.type = ReplyType.ERROR\n            reply.content = \"未找到插件配置，请在 plugins/agent 目录下创建 config.yaml 配置文件，可根据 config-template.yml 模板文件复制\"\n            e_context['reply'] = reply\n            e_context.action = EventAction.BREAK_PASS\n            return\n\n        # Extract the actual task\n        task = content[len(f\"{trigger_prefix}agent \"):].strip()\n        \n        # If task is empty, return help message\n        if not task:\n            reply = Reply()\n            reply.type = ReplyType.TEXT\n            reply.content = self.get_help_text(verbose=True)\n            e_context['reply'] = reply\n            e_context.action = EventAction.BREAK_PASS\n            return\n            \n        # Check 
if task is asking for available teams\n        if task.lower() in [\"teams\", \"list teams\", \"show teams\"]:\n            teams = self.get_available_teams()\n            reply = Reply()\n            reply.type = ReplyType.TEXT\n            \n            if not teams:\n                reply.content = \"未配置任何团队。请检查 config.yaml 文件。\"\n            else:\n                reply.content = f\"可用团队: {', '.join(teams)}\"\n                \n            e_context['reply'] = reply\n            e_context.action = EventAction.BREAK_PASS\n            return\n        \n        # Check if task specifies a team\n        team_name = None\n        if task.startswith(\"use \"):\n            parts = task[4:].split(\" \", 1)\n            if len(parts) > 0:\n                team_name = parts[0]\n                if len(parts) > 1:\n                    task = parts[1].strip()\n                else:\n                    reply = Reply()\n                    reply.type = ReplyType.TEXT\n                    reply.content = f\"已选择团队 '{team_name}'。请输入您想执行的任务。\"\n                    e_context['reply'] = reply\n                    e_context.action = EventAction.BREAK_PASS\n                    return\n        if not team_name:\n            team_name = self.config.get(\"team\")\n\n        # If no team specified, use default or first available\n        if not team_name:\n            teams = self.get_available_teams()\n            if not teams:\n                reply = Reply()\n                reply.type = ReplyType.TEXT\n                reply.content = \"未配置任何团队。请检查 config.yaml 文件。\"\n                e_context['reply'] = reply\n                e_context.action = EventAction.BREAK_PASS\n                return\n            team_name = teams[0]\n            \n        # Create team\n        team = self.create_team_from_config(team_name)\n        if not team:\n            reply = Reply()\n            reply.type = ReplyType.TEXT\n            reply.content = f\"创建团队 '{team_name}' 失败。请检查配置。\"\n            e_context['reply'] = reply\n            e_context.action = EventAction.BREAK_PASS\n            return\n        \n        # Run the task\n        try:\n            logger.info(f\"[agent] Running task '{task}' with team '{team_name}', team_model={team.model.model}\")\n            result = team.run_async(task=task)\n            for agent_result in result:\n                res_text = f\"🤖 {agent_result.get('agent_name')}\\n\\n{agent_result.get('final_answer')}\"\n                _send_text(e_context, content=res_text)\n            \n            reply = Reply()\n            reply.type = ReplyType.TEXT\n            reply.content = \"\"\n            e_context['reply'] = reply\n            e_context.action = EventAction.BREAK_PASS\n            \n        except Exception as e:\n            logger.exception(f\"Error running task with team '{team_name}'\")\n            \n            reply = Reply()\n            reply.type = ReplyType.ERROR\n            reply.content = f\"执行任务时出错: {str(e)}\"\n            e_context['reply'] = reply\n            e_context.action = EventAction.BREAK_PASS\n        return\n\n    def create_llm_model(self, model_name) -> LLMModel:\n        if conf().get(\"use_linkai\"):\n            api_base = \"https://api.link-ai.tech/v1\"\n            api_key = conf().get(\"linkai_api_key\")\n        elif model_name.startswith((\"gpt\", \"text-davinci\", \"o1\", \"o3\")):\n            api_base = conf().get(\"open_ai_api_base\") or \"https://api.openai.com/v1\"\n            api_key = conf().get(\"open_ai_api_key\")\n    
    elif model_name.startswith(\"claude\"):\n            return ClaudeModel(model=model_name, api_key=conf().get(\"claude_api_key\"))\n        elif model_name.startswith(\"moonshot\"):\n            api_base = \"https://api.moonshot.cn/v1\"\n            api_key = conf().get(\"moonshot_api_key\")\n        elif model_name.startswith(\"qwen\"):\n            api_base = \"https://dashscope.aliyuncs.com/compatible-mode/v1\"\n            api_key = conf().get(\"dashscope_api_key\")\n        else:\n            api_base = conf().get(\"open_ai_api_base\") or \"https://api.openai.com/v1\"\n            api_key = conf().get(\"open_ai_api_key\")\n\n        llm_model = LLMModel(model=model_name, api_key=api_key, api_base=api_base)\n        return llm_model\n\n\ndef _send_text(e_context: EventContext, content: str):\n    reply = Reply(ReplyType.TEXT, content)\n    channel = e_context[\"channel\"]\n    channel.send(reply, e_context[\"context\"])\n"
  },
  {
    "path": "plugins/agent/config-template.yaml",
    "content": "# 默认选中的Agent Team名称\nteam: general_team\n\ntools:\n  google_search:\n    # get your apikey from https://serper.dev/\n    api_key: \"YOUR API KEY\"\n\n# Agent Team 配置\nteams:\n  # 通用智能体团队\n  general_team:\n    model: \"gpt-4.1-mini\"        # 团队使用的模型\n    description: \"A versatile research and information agent team\"\n    max_steps: 5\n    agents:\n      - name: \"通用智能助手\"\n        description: \"Universal assistant specializing in research, information synthesis, and task execution\"\n        system_prompt: \"You are a versatile assistant who answers questions and completes tasks using available tools. Reply in a clearly structured, attractive and easy to read format.\"\n        # Agent 支持使用的工具\n        tools:\n          - time\n          - calculator\n          - google_search\n          - browser\n          - terminal\n\n  # 软件开发智能体团队\n  software_team:\n    model: \"gpt-4.1-mini\"\n    description: \"A software development team with product manager, developer and tester.\"\n    rule: \"A normal R&D process should be that Product Manager writes PRD, Developer writes code based on PRD, and Finally, Tester performs testing.\"\n    max_steps: 10\n    agents:\n      - name: \"Product-Manager\"\n        description: \"Responsible for product requirements and documentation\"\n        system_prompt: \"You are an experienced product manager who creates concise PRDs, focusing on user needs and feature specifications. You always format your responses in Markdown.\"\n        tools:\n          - time\n          - file_save\n      - name: \"Developer\"\n        description: \"Implements code based on PRD\"\n        system_prompt: \"You are a skilled developer. When developing web application, you creates single-page website based on user needs, you deliver HTML files with embedded JavaScript and CSS that are visually appealing, responsive, and user-friendly, featuring a grand layout and beautiful background. The HTML, CSS, and JavaScript code should be well-structured and effectively organized.\"\n        tools:\n          - file_save\n      - name: \"Tester\"\n        description: \"Tests code and verifies functionality\"\n        system_prompt: \"You are a tester who validates code against requirements. For HTML applications, use browser tools to test functionality. For Python or other client-side applications, use the terminal tool to run and test. You only need to test a few core cases.\"\n        tools:\n          - file_save\n          - browser\n          - terminal\n"
  },
  {
    "path": "plugins/banwords/.gitignore",
    "content": "banwords.txt"
  },
  {
    "path": "plugins/banwords/README.md",
    "content": "\n## 插件描述\n\n简易的敏感词插件，暂不支持分词，请自行导入词库到插件文件夹中的`banwords.txt`，每行一个词，一个参考词库是[1](https://github.com/cjh0613/tencent-sensitive-words/blob/main/sensitive_words_lines.txt)。\n\n使用前将`config.json.template`复制为`config.json`，并自行配置。\n\n目前插件对消息的默认处理行为有如下两种：\n\n- `ignore` : 无视这条消息。\n- `replace` : 将消息中的敏感词替换成\"*\"，并回复违规。\n\n```json\n    \"action\": \"replace\",  \n    \"reply_filter\": true,\n    \"reply_action\": \"ignore\"\n```\n\n在以上配置项中：\n\n- `action`: 对用户消息的默认处理行为\n- `reply_filter`: 是否对ChatGPT的回复也进行敏感词过滤\n- `reply_action`: 如果开启了回复过滤，对回复的默认处理行为\n\n## 致谢\n\n搜索功能实现来自https://github.com/toolgood/ToolGood.Words"
  },
  {
    "path": "plugins/banwords/__init__.py",
    "content": "from .banwords import *\n"
  },
  {
    "path": "plugins/banwords/banwords.py",
    "content": "# encoding:utf-8\n\nimport json\nimport os\n\nimport plugins\nfrom bridge.context import ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom plugins import *\n\nfrom .lib.WordsSearch import WordsSearch\n\n\n@plugins.register(\n    name=\"Banwords\",\n    desire_priority=100,\n    hidden=True,\n    desc=\"判断消息中是否有敏感词、决定是否回复。\",\n    version=\"1.0\",\n    author=\"lanvent\",\n)\nclass Banwords(Plugin):\n    def __init__(self):\n        super().__init__()\n        try:\n            # load config\n            conf = super().load_config()\n            curdir = os.path.dirname(__file__)\n            if not conf:\n                # 配置不存在则写入默认配置\n                config_path = os.path.join(curdir, \"config.json\")\n                if not os.path.exists(config_path):\n                    conf = {\"action\": \"ignore\"}\n                    with open(config_path, \"w\") as f:\n                        json.dump(conf, f, indent=4)\n\n            self.searchr = WordsSearch()\n            self.action = conf[\"action\"]\n            banwords_path = os.path.join(curdir, \"banwords.txt\")\n            with open(banwords_path, \"r\", encoding=\"utf-8\") as f:\n                words = []\n                for line in f:\n                    word = line.strip()\n                    if word:\n                        words.append(word)\n            self.searchr.SetKeywords(words)\n            self.handlers[Event.ON_HANDLE_CONTEXT] = self.on_handle_context\n            if conf.get(\"reply_filter\", True):\n                self.handlers[Event.ON_DECORATE_REPLY] = self.on_decorate_reply\n                self.reply_action = conf.get(\"reply_action\", \"ignore\")\n            logger.debug(\"[Banwords] inited\")\n        except Exception as e:\n            logger.debug(\"[Banwords] init failed, ignore or see https://github.com/zhayujie/chatgpt-on-wechat/tree/master/plugins/banwords .\")\n            raise e\n\n    def on_handle_context(self, e_context: EventContext):\n        if e_context[\"context\"].type not in [\n            ContextType.TEXT,\n            ContextType.IMAGE_CREATE,\n        ]:\n            return\n\n        content = e_context[\"context\"].content\n        logger.debug(\"[Banwords] on_handle_context. 
content: %s\" % content)\n        if self.action == \"ignore\":\n            f = self.searchr.FindFirst(content)\n            if f:\n                logger.info(\"[Banwords] %s in message\" % f[\"Keyword\"])\n                e_context.action = EventAction.BREAK_PASS\n                return\n        elif self.action == \"replace\":\n            if self.searchr.ContainsAny(content):\n                reply = Reply(ReplyType.INFO, \"发言中包含敏感词，请重试: \\n\" + self.searchr.Replace(content))\n                e_context[\"reply\"] = reply\n                e_context.action = EventAction.BREAK_PASS\n                return\n\n    def on_decorate_reply(self, e_context: EventContext):\n        if e_context[\"reply\"].type not in [ReplyType.TEXT]:\n            return\n\n        reply = e_context[\"reply\"]\n        content = reply.content\n        if self.reply_action == \"ignore\":\n            f = self.searchr.FindFirst(content)\n            if f:\n                logger.info(\"[Banwords] %s in reply\" % f[\"Keyword\"])\n                e_context[\"reply\"] = None\n                e_context.action = EventAction.BREAK_PASS\n                return\n        elif self.reply_action == \"replace\":\n            if self.searchr.ContainsAny(content):\n                reply = Reply(ReplyType.INFO, \"已替换回复中的敏感词: \\n\" + self.searchr.Replace(content))\n                e_context[\"reply\"] = reply\n                e_context.action = EventAction.CONTINUE\n                return\n\n    def get_help_text(self, **kwargs):\n        return \"过滤消息中的敏感词。\"\n"
  },
  {
    "path": "plugins/banwords/banwords.txt.template",
    "content": "nipples\npennis\n"
  },
  {
    "path": "plugins/banwords/config.json.template",
    "content": "{\n  \"action\": \"replace\",\n  \"reply_filter\": true,\n  \"reply_action\": \"ignore\"\n}\n"
  },
  {
    "path": "plugins/banwords/lib/WordsSearch.py",
    "content": "#!/usr/bin/env python\n# -*- coding:utf-8 -*-\n# ToolGood.Words.WordsSearch.py\n# 2020, Lin Zhijun, https://github.com/toolgood/ToolGood.Words\n# Licensed under the Apache License 2.0\n# 更新日志\n# 2020.04.06 第一次提交\n# 2020.05.16 修改，支持大于0xffff的字符\n\n__all__ = ['WordsSearch']\n__author__ = 'Lin Zhijun'\n__date__ = '2020.05.16'\n\nclass TrieNode():\n    def __init__(self):\n        self.Index = 0\n        self.Index = 0\n        self.Layer = 0\n        self.End = False\n        self.Char = ''\n        self.Results = []\n        self.m_values = {}\n        self.Failure = None\n        self.Parent = None\n\n    def Add(self,c):\n        if c in self.m_values :\n            return self.m_values[c]\n        node = TrieNode()\n        node.Parent = self\n        node.Char = c\n        self.m_values[c] = node\n        return node\n\n    def SetResults(self,index):\n        if (self.End == False):\n            self.End = True\n        self.Results.append(index)\n\nclass TrieNode2():\n    def __init__(self):\n        self.End = False\n        self.Results = []\n        self.m_values = {}\n        self.minflag = 0xffff\n        self.maxflag = 0\n\n    def Add(self,c,node3):\n        if (self.minflag > c):\n            self.minflag = c\n        if (self.maxflag < c):\n             self.maxflag = c\n        self.m_values[c] = node3\n\n    def SetResults(self,index):\n        if (self.End == False) :\n            self.End = True\n        if (index in self.Results )==False : \n            self.Results.append(index)\n\n    def HasKey(self,c):\n        return c in self.m_values\n        \n \n    def TryGetValue(self,c):\n        if (self.minflag <= c and self.maxflag >= c):\n            if c in self.m_values:\n                return self.m_values[c]\n        return None\n\n\nclass WordsSearch():\n    def __init__(self):\n        self._first = {}\n        self._keywords = []\n        self._indexs=[]\n    \n    def SetKeywords(self,keywords):\n        self._keywords = keywords\n        self._indexs=[]\n        for i in range(len(keywords)):\n            self._indexs.append(i)\n\n        root = TrieNode()\n        allNodeLayer={}\n\n        for i in range(len(self._keywords)): # for (i = 0; i < _keywords.length; i++) \n            p = self._keywords[i]\n            nd = root\n            for j in range(len(p)): # for (j = 0; j < p.length; j++) \n                nd = nd.Add(ord(p[j]))\n                if (nd.Layer == 0):\n                    nd.Layer = j + 1\n                    if nd.Layer in allNodeLayer:\n                        allNodeLayer[nd.Layer].append(nd)\n                    else:\n                        allNodeLayer[nd.Layer]=[]\n                        allNodeLayer[nd.Layer].append(nd)\n            nd.SetResults(i)\n\n\n        allNode = []\n        allNode.append(root)\n        for key in allNodeLayer.keys():\n            for nd in allNodeLayer[key]:\n                allNode.append(nd)\n        allNodeLayer=None\n\n        for i in range(len(allNode)): # for (i = 0; i < allNode.length; i++) \n            if i==0 :\n                continue\n            nd=allNode[i]\n            nd.Index = i\n            r = nd.Parent.Failure\n            c = nd.Char\n            while (r != None and (c in r.m_values)==False):\n                r = r.Failure\n            if (r == None):\n                nd.Failure = root\n            else:\n                nd.Failure = r.m_values[c]\n                for key2 in nd.Failure.Results :\n                    nd.SetResults(key2)\n        root.Failure = 
root\n\n        allNode2 = []\n        for i in range(len(allNode)): # for (i = 0; i < allNode.length; i++) \n            allNode2.append( TrieNode2())\n        \n        for i in range(len(allNode2)): # for (i = 0; i < allNode2.length; i++) \n            oldNode = allNode[i]\n            newNode = allNode2[i]\n\n            for key in oldNode.m_values :\n                index = oldNode.m_values[key].Index\n                newNode.Add(key, allNode2[index])\n            \n            for index in range(len(oldNode.Results)): # for (index = 0; index < oldNode.Results.length; index++) \n                item = oldNode.Results[index]\n                newNode.SetResults(item)\n            \n            oldNode=oldNode.Failure\n            while oldNode != root:\n                for key in oldNode.m_values :\n                    if (newNode.HasKey(key) == False):\n                        index = oldNode.m_values[key].Index\n                        newNode.Add(key, allNode2[index])\n                for index in range(len(oldNode.Results)): \n                    item = oldNode.Results[index]\n                    newNode.SetResults(item)\n                oldNode=oldNode.Failure\n        allNode = None\n        root = None\n\n        # first = []\n        # for index in range(65535):# for (index = 0; index < 0xffff; index++) \n        #     first.append(None)\n        \n        # for key in allNode2[0].m_values :\n        #     first[key] = allNode2[0].m_values[key]\n        \n        self._first = allNode2[0]\n    \n\n    def FindFirst(self,text):\n        ptr = None\n        for index in range(len(text)): # for (index = 0; index < text.length; index++) \n            t =ord(text[index]) # text.charCodeAt(index)\n            tn = None\n            if (ptr == None):\n                tn = self._first.TryGetValue(t)\n            else:\n                tn = ptr.TryGetValue(t)\n                if (tn==None):\n                    tn = self._first.TryGetValue(t)\n                \n            \n            if (tn != None):\n                if (tn.End):\n                    item = tn.Results[0]\n                    keyword = self._keywords[item]\n                    return { \"Keyword\": keyword, \"Success\": True, \"End\": index, \"Start\": index + 1 - len(keyword), \"Index\": self._indexs[item] }\n            ptr = tn\n        return None\n\n    def FindAll(self,text):\n        ptr = None\n        list = []\n\n        for index in range(len(text)): # for (index = 0; index < text.length; index++) \n            t =ord(text[index]) # text.charCodeAt(index)\n            tn = None\n            if (ptr == None):\n                tn = self._first.TryGetValue(t)\n            else:\n                tn = ptr.TryGetValue(t)\n                if (tn==None):\n                    tn = self._first.TryGetValue(t)\n                \n            \n            if (tn != None):\n                if (tn.End):\n                    for j in range(len(tn.Results)): # for (j = 0; j < tn.Results.length; j++) \n                        item = tn.Results[j]\n                        keyword = self._keywords[item]\n                        list.append({ \"Keyword\": keyword, \"Success\": True, \"End\": index, \"Start\": index + 1 - len(keyword), \"Index\": self._indexs[item] })\n            ptr = tn\n        return list\n\n\n    def ContainsAny(self,text):\n        ptr = None\n        for index in range(len(text)): # for (index = 0; index < text.length; index++) \n            t =ord(text[index]) # text.charCodeAt(index)\n            tn = 
None\n            if (ptr == None):\n                tn = self._first.TryGetValue(t)\n            else:\n                tn = ptr.TryGetValue(t)\n                if (tn==None):\n                    tn = self._first.TryGetValue(t)\n            \n            if (tn != None):\n                if (tn.End):\n                    return True\n            ptr = tn\n        return False\n    \n    def Replace(self,text, replaceChar = '*'):\n        result = list(text) \n\n        ptr = None\n        for i in range(len(text)): # for (i = 0; i < text.length; i++) \n            t =ord(text[i]) # text.charCodeAt(index)\n            tn = None\n            if (ptr == None):\n                tn = self._first.TryGetValue(t)\n            else:\n                tn = ptr.TryGetValue(t)\n                if (tn==None):\n                    tn = self._first.TryGetValue(t)\n            \n            if (tn != None):\n                if (tn.End):\n                    maxLength = len( self._keywords[tn.Results[0]])\n                    start = i + 1 - maxLength\n                    for j in range(start,i+1): # for (j = start; j <= i; j++) \n                        result[j] = replaceChar\n            ptr = tn\n        return ''.join(result) "
  },
  {
    "path": "plugins/dungeon/README.md",
    "content": "玩地牢游戏的聊天插件，触发方法如下：\n\n- `$开始冒险 <背景故事>` - 以<背景故事>开始一个地牢游戏，不填写会使用默认背景故事。之后聊天中你的所有消息会帮助ai完善这个故事。\n- `$停止冒险` - 停止一个地牢游戏，回归正常的ai。\n"
  },
  {
    "path": "plugins/dungeon/__init__.py",
    "content": "from .dungeon import *\n"
  },
  {
    "path": "plugins/dungeon/dungeon.py",
    "content": "# encoding:utf-8\n\nimport plugins\nfrom bridge.bridge import Bridge\nfrom bridge.context import ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom common import const\nfrom common.expired_dict import ExpiredDict\nfrom common.log import logger\nfrom config import conf\nfrom plugins import *\n\n\n# https://github.com/bupticybee/ChineseAiDungeonChatGPT\nclass StoryTeller:\n    def __init__(self, bot, sessionid, story):\n        self.bot = bot\n        self.sessionid = sessionid\n        bot.sessions.clear_session(sessionid)\n        self.first_interact = True\n        self.story = story\n\n    def reset(self):\n        self.bot.sessions.clear_session(self.sessionid)\n        self.first_interact = True\n\n    def action(self, user_action):\n        if user_action[-1] != \"。\":\n            user_action = user_action + \"。\"\n        if self.first_interact:\n            prompt = (\n                \"\"\"现在来充当一个文字冒险游戏，描述时候注意节奏，不要太快，仔细描述各个人物的心情和周边环境。一次只需写四到六句话。\n            开头是，\"\"\"\n                + self.story\n                + \" \"\n                + user_action\n            )\n            self.first_interact = False\n        else:\n            prompt = \"\"\"继续，一次只需要续写四到六句话，总共就只讲5分钟内发生的事情。\"\"\" + user_action\n        return prompt\n\n\n@plugins.register(\n    name=\"Dungeon\",\n    desire_priority=0,\n    namecn=\"文字冒险\",\n    desc=\"A plugin to play dungeon game\",\n    version=\"1.0\",\n    author=\"lanvent\",\n)\nclass Dungeon(Plugin):\n    def __init__(self):\n        super().__init__()\n        self.handlers[Event.ON_HANDLE_CONTEXT] = self.on_handle_context\n        logger.debug(\"[Dungeon] inited\")\n        # 目前没有设计session过期事件，这里先暂时使用过期字典\n        if conf().get(\"expires_in_seconds\"):\n            self.games = ExpiredDict(conf().get(\"expires_in_seconds\"))\n        else:\n            self.games = dict()\n\n    def on_handle_context(self, e_context: EventContext):\n        if e_context[\"context\"].type != ContextType.TEXT:\n            return\n        bottype = Bridge().get_bot_type(\"chat\")\n        if bottype not in [const.OPEN_AI, const.OPENAI, const.CHATGPT, const.CHATGPTONAZURE, const.LINKAI]:\n            return\n        bot = Bridge().get_bot(\"chat\")\n        content = e_context[\"context\"].content[:]\n        clist = e_context[\"context\"].content.split(maxsplit=1)\n        sessionid = e_context[\"context\"][\"session_id\"]\n        logger.debug(\"[Dungeon] on_handle_context. 
content: %s\" % clist)\n        trigger_prefix = conf().get(\"plugin_trigger_prefix\", \"$\")\n        if clist[0] == f\"{trigger_prefix}停止冒险\":\n            if sessionid in self.games:\n                self.games[sessionid].reset()\n                del self.games[sessionid]\n                reply = Reply(ReplyType.INFO, \"冒险结束!\")\n                e_context[\"reply\"] = reply\n                e_context.action = EventAction.BREAK_PASS\n        elif clist[0] == f\"{trigger_prefix}开始冒险\" or sessionid in self.games:\n            if sessionid not in self.games or clist[0] == f\"{trigger_prefix}开始冒险\":\n                if len(clist) > 1:\n                    story = clist[1]\n                else:\n                    story = \"你在树林里冒险，指不定会从哪里蹦出来一些奇怪的东西，你握紧手上的手枪，希望这次冒险能够找到一些值钱的东西，你往树林深处走去。\"\n                self.games[sessionid] = StoryTeller(bot, sessionid, story)\n                reply = Reply(ReplyType.INFO, \"冒险开始，你可以输入任意内容，让故事继续下去。故事背景是：\" + story)\n                e_context[\"reply\"] = reply\n                e_context.action = EventAction.BREAK_PASS  # 事件结束，并跳过处理context的默认逻辑\n            else:\n                prompt = self.games[sessionid].action(content)\n                e_context[\"context\"].type = ContextType.TEXT\n                e_context[\"context\"].content = prompt\n                e_context.action = EventAction.BREAK  # 事件结束，不跳过处理context的默认逻辑\n\n    def get_help_text(self, **kwargs):\n        help_text = \"可以和机器人一起玩文字冒险游戏。\\n\"\n        if kwargs.get(\"verbose\") != True:\n            return help_text\n        trigger_prefix = conf().get(\"plugin_trigger_prefix\", \"$\")\n        help_text = f\"{trigger_prefix}开始冒险 \" + \"背景故事: 开始一个基于{背景故事}的文字冒险，之后你的所有消息会协助完善这个故事。\\n\" + f\"{trigger_prefix}停止冒险: 结束游戏。\\n\"\n        if kwargs.get(\"verbose\") == True:\n            help_text += f\"\\n命令例子: '{trigger_prefix}开始冒险 你在树林里冒险，指不定会从哪里蹦出来一些奇怪的东西，你握紧手上的手枪，希望这次冒险能够找到一些值钱的东西，你往树林深处走去。'\"\n        return help_text\n"
  },
  {
    "path": "plugins/finish/__init__.py",
    "content": "from .finish import *\n"
  },
  {
    "path": "plugins/finish/finish.py",
    "content": "# encoding:utf-8\n\nimport plugins\nfrom bridge.context import ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom config import conf\nfrom plugins import *\n\n\n@plugins.register(\n    name=\"Finish\",\n    desire_priority=-999,\n    hidden=True,\n    desc=\"A plugin that check unknown command\",\n    version=\"1.0\",\n    author=\"js00000\",\n)\nclass Finish(Plugin):\n    def __init__(self):\n        super().__init__()\n        self.handlers[Event.ON_HANDLE_CONTEXT] = self.on_handle_context\n        logger.debug(\"[Finish] inited\")\n\n    def on_handle_context(self, e_context: EventContext):\n        if e_context[\"context\"].type != ContextType.TEXT:\n            return\n\n        content = e_context[\"context\"].content\n        logger.debug(\"[Finish] on_handle_context. content: %s\" % content)\n        trigger_prefix = conf().get(\"plugin_trigger_prefix\", \"$\")\n        if content.startswith(trigger_prefix):\n            reply = Reply()\n            reply.type = ReplyType.ERROR\n            reply.content = \"未知插件命令\\n查看插件命令列表请输入#help 插件名\\n\"\n            e_context[\"reply\"] = reply\n            e_context.action = EventAction.BREAK_PASS  # 事件结束，并跳过处理context的默认逻辑\n\n    def get_help_text(self, **kwargs):\n        return \"\"\n"
  },
  {
    "path": "plugins/godcmd/README.md",
    "content": "## 插件说明\n\n指令插件\n\n## 插件使用\n\n将`config.json.template`复制为`config.json`，并修改其中`password`的值为口令。\n\n如果没有设置命令，在命令行日志中会打印出本次的临时口令，请注意观察，打印格式如下。\n\n```\n[INFO][2023-04-06 23:53:47][godcmd.py:165] - [Godcmd] 因未设置口令，本次的临时口令为0971。\n```\n\n在私聊中可使用`#auth`指令，输入口令进行管理员认证。更多详细指令请输入`#help`查看帮助文档：\n\n`#auth <口令>` - 管理员认证，仅可在私聊时认证。\n`#help` - 输出帮助文档，**是否是管理员**和是否是在群聊中会影响帮助文档的输出内容。\n"
  },
  {
    "path": "plugins/godcmd/__init__.py",
    "content": "from .godcmd import *\n"
  },
  {
    "path": "plugins/godcmd/config.json.template",
    "content": "{\n  \"password\": \"\",\n  \"admin_users\": []\n}\n"
  },
  {
    "path": "plugins/godcmd/godcmd.py",
    "content": "# encoding:utf-8\n\nimport json\nimport os\nimport random\nimport string\nimport logging\nfrom typing import Tuple\n\nimport bridge.bridge\nimport plugins\nfrom bridge.bridge import Bridge\nfrom bridge.context import ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom common import const\nfrom config import conf, load_config, global_config\nfrom plugins import *\n\n# 定义指令集\nCOMMANDS = {\n    \"help\": {\n        \"alias\": [\"help\", \"帮助\"],\n        \"desc\": \"回复此帮助\",\n    },\n    \"helpp\": {\n        \"alias\": [\"help\", \"帮助\"],  # 与help指令共用别名，根据参数数量区分\n        \"args\": [\"插件名\"],\n        \"desc\": \"回复指定插件的详细帮助\",\n    },\n    \"auth\": {\n        \"alias\": [\"auth\", \"认证\"],\n        \"args\": [\"口令\"],\n        \"desc\": \"管理员认证\",\n    },\n    \"model\": {\n        \"alias\": [\"model\", \"模型\"],\n        \"desc\": \"查看和设置全局模型\",\n    },\n    \"set_openai_api_key\": {\n        \"alias\": [\"set_openai_api_key\"],\n        \"args\": [\"api_key\"],\n        \"desc\": \"设置你的OpenAI私有api_key\",\n    },\n    \"reset_openai_api_key\": {\n        \"alias\": [\"reset_openai_api_key\"],\n        \"desc\": \"重置为默认的api_key\",\n    },\n    \"set_gpt_model\": {\n        \"alias\": [\"set_gpt_model\"],\n        \"desc\": \"设置你的私有模型\",\n    },\n    \"reset_gpt_model\": {\n        \"alias\": [\"reset_gpt_model\"],\n        \"desc\": \"重置你的私有模型\",\n    },\n    \"gpt_model\": {\n        \"alias\": [\"gpt_model\"],\n        \"desc\": \"查询你使用的模型\",\n    },\n    \"id\": {\n        \"alias\": [\"id\", \"用户\"],\n        \"desc\": \"获取用户id\",\n    },\n    \"reset\": {\n        \"alias\": [\"reset\", \"重置会话\"],\n        \"desc\": \"重置会话\",\n    },\n}\n\nADMIN_COMMANDS = {\n    \"resume\": {\n        \"alias\": [\"resume\", \"恢复服务\"],\n        \"desc\": \"恢复服务\",\n    },\n    \"stop\": {\n        \"alias\": [\"stop\", \"暂停服务\"],\n        \"desc\": \"暂停服务\",\n    },\n    \"reconf\": {\n        \"alias\": [\"reconf\", \"重载配置\"],\n        \"desc\": \"重载配置(不包含插件配置)\",\n    },\n    \"resetall\": {\n        \"alias\": [\"resetall\", \"重置所有会话\"],\n        \"desc\": \"重置所有会话\",\n    },\n    \"scanp\": {\n        \"alias\": [\"scanp\", \"扫描插件\"],\n        \"desc\": \"扫描插件目录是否有新插件\",\n    },\n    \"plist\": {\n        \"alias\": [\"plist\", \"插件\"],\n        \"desc\": \"打印当前插件列表\",\n    },\n    \"setpri\": {\n        \"alias\": [\"setpri\", \"设置插件优先级\"],\n        \"args\": [\"插件名\", \"优先级\"],\n        \"desc\": \"设置指定插件的优先级，越大越优先\",\n    },\n    \"reloadp\": {\n        \"alias\": [\"reloadp\", \"重载插件\"],\n        \"args\": [\"插件名\"],\n        \"desc\": \"重载指定插件配置\",\n    },\n    \"enablep\": {\n        \"alias\": [\"enablep\", \"启用插件\"],\n        \"args\": [\"插件名\"],\n        \"desc\": \"启用指定插件\",\n    },\n    \"disablep\": {\n        \"alias\": [\"disablep\", \"禁用插件\"],\n        \"args\": [\"插件名\"],\n        \"desc\": \"禁用指定插件\",\n    },\n    \"installp\": {\n        \"alias\": [\"installp\", \"安装插件\"],\n        \"args\": [\"仓库地址或插件名\"],\n        \"desc\": \"安装指定插件\",\n    },\n    \"uninstallp\": {\n        \"alias\": [\"uninstallp\", \"卸载插件\"],\n        \"args\": [\"插件名\"],\n        \"desc\": \"卸载指定插件\",\n    },\n    \"updatep\": {\n        \"alias\": [\"updatep\", \"更新插件\"],\n        \"args\": [\"插件名\"],\n        \"desc\": \"更新指定插件\",\n    },\n    \"debug\": {\n        \"alias\": [\"debug\", \"调试模式\", \"DEBUG\"],\n        \"desc\": \"开启机器调试日志\",\n    },\n}\n\n\n# 定义帮助函数\ndef get_help_text(isadmin, isgroup):\n    help_text = \"通用指令\\n\"\n    for cmd, info in COMMANDS.items():\n       
 if cmd in [\"auth\", \"set_openai_api_key\", \"reset_openai_api_key\", \"set_gpt_model\", \"reset_gpt_model\", \"gpt_model\"]:  # 不显示帮助指令\n            continue\n        raw_ct = conf().get(\"channel_type\", \"web\")\n        active_channels = raw_ct if isinstance(raw_ct, list) else [c.strip() for c in str(raw_ct).split(\",\")]\n        if cmd == \"id\" and not any(c in [\"wxy\", \"wechatmp\"] for c in active_channels):\n            continue\n        alias = [\"#\" + a for a in info[\"alias\"][:1]]\n        help_text += f\"{','.join(alias)} \"\n        if \"args\" in info:\n            args = [a for a in info[\"args\"]]\n            help_text += f\"{' '.join(args)}\"\n        help_text += f\": {info['desc']}\\n\"\n\n    # 插件指令\n    plugins = PluginManager().list_plugins()\n    help_text += \"\\n可用插件\"\n    for plugin in plugins:\n        if plugins[plugin].enabled and not plugins[plugin].hidden:\n            namecn = plugins[plugin].namecn\n            help_text += \"\\n%s: \" % namecn\n            help_text += PluginManager().instances[plugin].get_help_text(verbose=False).strip()\n\n    if ADMIN_COMMANDS and isadmin:\n        help_text += \"\\n\\n管理员指令：\\n\"\n        for cmd, info in ADMIN_COMMANDS.items():\n            alias = [\"#\" + a for a in info[\"alias\"][:1]]\n            help_text += f\"{','.join(alias)} \"\n            if \"args\" in info:\n                args = [a for a in info[\"args\"]]\n                help_text += f\"{' '.join(args)}\"\n            help_text += f\": {info['desc']}\\n\"\n    return help_text\n\n\n@plugins.register(\n    name=\"Godcmd\",\n    desire_priority=999,\n    hidden=True,\n    desc=\"为你的机器人添加指令集，有用户和管理员两种角色，加载顺序请放在首位，初次运行后插件目录会生成配置文件, 填充管理员密码后即可认证\",\n    version=\"1.0\",\n    author=\"lanvent\",\n)\nclass Godcmd(Plugin):\n    def __init__(self):\n        super().__init__()\n\n        config_path = os.path.join(os.path.dirname(__file__), \"config.json\")\n        gconf = super().load_config()\n        if not gconf:\n            if not os.path.exists(config_path):\n                gconf = {\"password\": \"\", \"admin_users\": []}\n                with open(config_path, \"w\") as f:\n                    json.dump(gconf, f, indent=4)\n        if gconf[\"password\"] == \"\":\n            self.temp_password = \"\".join(random.sample(string.digits, 4))\n            logger.info(\"[Godcmd] 因未设置口令，本次的临时口令为%s。\" % self.temp_password)\n        else:\n            self.temp_password = None\n        custom_commands = conf().get(\"clear_memory_commands\", [])\n        for custom_command in custom_commands:\n            if custom_command and custom_command.startswith(\"#\"):\n                custom_command = custom_command[1:]\n                if custom_command and custom_command not in COMMANDS[\"reset\"][\"alias\"]:\n                    COMMANDS[\"reset\"][\"alias\"].append(custom_command)\n\n        self.password = gconf[\"password\"]\n        self.admin_users = gconf[\"admin_users\"]\n        global_config[\"admin_users\"] = self.admin_users\n        self.isrunning = True  # 机器人是否运行中\n\n        self.handlers[Event.ON_HANDLE_CONTEXT] = self.on_handle_context\n        logger.debug(\"[Godcmd] inited\")\n\n    def on_handle_context(self, e_context: EventContext):\n        context_type = e_context[\"context\"].type\n        if context_type != ContextType.TEXT:\n            if not self.isrunning:\n                e_context.action = EventAction.BREAK_PASS\n            return\n\n        content = e_context[\"context\"].content\n        logger.debug(\"[Godcmd] 
on_handle_context. content: %s\" % content)\n        if content.startswith(\"#\"):\n            if len(content) == 1:\n                reply = Reply()\n                reply.type = ReplyType.ERROR\n                reply.content = f\"空指令，输入#help查看指令列表\\n\"\n                e_context[\"reply\"] = reply\n                e_context.action = EventAction.BREAK_PASS\n                return\n            # msg = e_context['context']['msg']\n            channel = e_context[\"channel\"]\n            user = e_context[\"context\"][\"receiver\"]\n            session_id = e_context[\"context\"][\"session_id\"]\n            isgroup = e_context[\"context\"].get(\"isgroup\", False)\n            bottype = Bridge().get_bot_type(\"chat\")\n            bot = Bridge().get_bot(\"chat\")\n            # 将命令和参数分割\n            command_parts = content[1:].strip().split()\n            cmd = command_parts[0]\n            args = command_parts[1:]\n            isadmin = False\n            if user in self.admin_users:\n                isadmin = True\n            ok = False\n            result = \"string\"\n            if any(cmd in info[\"alias\"] for info in COMMANDS.values()):\n                cmd = next(c for c, info in COMMANDS.items() if cmd in info[\"alias\"])\n                if cmd == \"auth\":\n                    ok, result = self.authenticate(user, args, isadmin, isgroup)\n                elif cmd == \"help\" or cmd == \"helpp\":\n                    if len(args) == 0:\n                        ok, result = True, get_help_text(isadmin, isgroup)\n                    else:\n                        # This can replace the helpp command\n                        plugins = PluginManager().list_plugins()\n                        query_name = args[0].upper()\n                        # search name and namecn\n                        for name, plugincls in plugins.items():\n                            if not plugincls.enabled:\n                                continue\n                            if query_name == name or query_name == plugincls.namecn:\n                                ok, result = True, PluginManager().instances[name].get_help_text(isgroup=isgroup, isadmin=isadmin, verbose=True)\n                                break\n                        if not ok:\n                            result = \"插件不存在或未启用\"\n                elif cmd == \"model\":\n                    if not isadmin and not self.is_admin_in_group(e_context[\"context\"]):\n                        ok, result = False, \"需要管理员权限执行\"\n                    elif len(args) == 0:\n                        model = conf().get(\"model\") or const.GPT35\n                        ok, result = True, \"当前模型为: \" + str(model)\n                    elif len(args) == 1:\n                        if args[0] not in const.MODEL_LIST:\n                            ok, result = False, \"模型名称不存在\"\n                        else:\n                            conf()[\"model\"] = self.model_mapping(args[0])\n                            Bridge().reset_bot()\n                            model = conf().get(\"model\") or const.GPT35\n                            ok, result = True, \"模型设置为: \" + str(model)\n                elif cmd == \"id\":\n                    ok, result = True, user\n                elif cmd == \"set_openai_api_key\":\n                    if len(args) == 1:\n                        user_data = conf().get_user_data(user)\n                        user_data[\"openai_api_key\"] = args[0]\n                        ok, result = True, \"你的OpenAI私有api_key已设置为\" + args[0]\n   
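# the per-user api_key is stored in user_data (assumed to take precedence over the global key for this user)\n   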
                 else:\n                        ok, result = False, \"请提供一个api_key\"\n                elif cmd == \"reset_openai_api_key\":\n                    try:\n                        user_data = conf().get_user_data(user)\n                        user_data.pop(\"openai_api_key\")\n                        ok, result = True, \"你的OpenAI私有api_key已清除\"\n                    except Exception as e:\n                        ok, result = False, \"你没有设置私有api_key\"\n                elif cmd == \"set_gpt_model\":\n                    if len(args) == 1:\n                        user_data = conf().get_user_data(user)\n                        user_data[\"gpt_model\"] = args[0]\n                        ok, result = True, \"你的GPT模型已设置为\" + args[0]\n                    else:\n                        ok, result = False, \"请提供一个GPT模型\"\n                elif cmd == \"gpt_model\":\n                    user_data = conf().get_user_data(user)\n                    model = conf().get(\"model\")\n                    if \"gpt_model\" in user_data:\n                        model = user_data[\"gpt_model\"]\n                    ok, result = True, \"你的GPT模型为\" + str(model)\n                elif cmd == \"reset_gpt_model\":\n                    try:\n                        user_data = conf().get_user_data(user)\n                        user_data.pop(\"gpt_model\")\n                        ok, result = True, \"你的GPT模型已重置\"\n                    except Exception as e:\n                        ok, result = False, \"你没有设置私有GPT模型\"\n                elif cmd == \"reset\":\n                    if bottype in [const.OPEN_AI, const.OPENAI, const.CHATGPT, const.CHATGPTONAZURE, const.LINKAI, const.BAIDU, const.XUNFEI, const.QWEN, const.GEMINI, const.ZHIPU_AI, const.CLAUDEAPI]:\n                        bot.sessions.clear_session(session_id)\n                        if Bridge().chat_bots.get(bottype):\n                            Bridge().chat_bots.get(bottype).sessions.clear_session(session_id)\n                        channel.cancel_session(session_id)\n                        ok, result = True, \"会话已重置\"\n                    else:\n                        ok, result = False, \"当前对话机器人不支持重置会话\"\n                logger.debug(\"[Godcmd] command: %s by %s\" % (cmd, user))\n            elif any(cmd in info[\"alias\"] for info in ADMIN_COMMANDS.values()):\n                if isadmin:\n                    if isgroup:\n                        ok, result = False, \"群聊不可执行管理员指令\"\n                    else:\n                        cmd = next(c for c, info in ADMIN_COMMANDS.items() if cmd in info[\"alias\"])\n                        if cmd == \"stop\":\n                            self.isrunning = False\n                            ok, result = True, \"服务已暂停\"\n                        elif cmd == \"resume\":\n                            self.isrunning = True\n                            ok, result = True, \"服务已恢复\"\n                        elif cmd == \"reconf\":\n                            load_config()\n                            ok, result = True, \"配置已重载\"\n                        elif cmd == \"resetall\":\n                            if bottype in [const.OPEN_AI, const.OPENAI, const.CHATGPT, const.CHATGPTONAZURE, const.LINKAI,\n                                           const.BAIDU, const.XUNFEI, const.QWEN, const.GEMINI, const.ZHIPU_AI, const.MOONSHOT,\n                                           const.MODELSCOPE]:\n                                channel.cancel_all_session()\n                                
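# then wipe the bot-side session store as well\n                                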
bot.sessions.clear_all_session()\n                                ok, result = True, \"重置所有会话成功\"\n                            else:\n                                ok, result = False, \"当前对话机器人不支持重置会话\"\n                        elif cmd == \"debug\":\n                            if logger.getEffectiveLevel() == logging.DEBUG:  # 判断当前日志模式是否DEBUG\n                                logger.setLevel(logging.INFO)\n                                ok, result = True, \"DEBUG模式已关闭\"\n                            else:\n                                logger.setLevel(logging.DEBUG)\n                                ok, result = True, \"DEBUG模式已开启\"\n                        elif cmd == \"plist\":\n                            plugins = PluginManager().list_plugins()\n                            ok = True\n                            result = \"插件列表：\\n\"\n                            for name, plugincls in plugins.items():\n                                result += f\"{plugincls.name}_v{plugincls.version} {plugincls.priority} - \"\n                                if plugincls.enabled:\n                                    result += \"已启用\\n\"\n                                else:\n                                    result += \"未启用\\n\"\n                        elif cmd == \"scanp\":\n                            new_plugins = PluginManager().scan_plugins()\n                            ok, result = True, \"插件扫描完成\"\n                            PluginManager().activate_plugins()\n                            if len(new_plugins) > 0:\n                                result += \"\\n发现新插件：\\n\"\n                                result += \"\\n\".join([f\"{p.name}_v{p.version}\" for p in new_plugins])\n                            else:\n                                result += \", 未发现新插件\"\n                        elif cmd == \"setpri\":\n                            if len(args) != 2:\n                                ok, result = False, \"请提供插件名和优先级\"\n                            else:\n                                ok = PluginManager().set_plugin_priority(args[0], int(args[1]))\n                                if ok:\n                                    result = \"插件\" + args[0] + \"优先级已设置为\" + args[1]\n                                else:\n                                    result = \"插件不存在\"\n                        elif cmd == \"reloadp\":\n                            if len(args) != 1:\n                                ok, result = False, \"请提供插件名\"\n                            else:\n                                ok = PluginManager().reload_plugin(args[0])\n                                if ok:\n                                    result = \"插件配置已重载\"\n                                else:\n                                    result = \"插件不存在\"\n                        elif cmd == \"enablep\":\n                            if len(args) != 1:\n                                ok, result = False, \"请提供插件名\"\n                            else:\n                                ok, result = PluginManager().enable_plugin(args[0])\n                        elif cmd == \"disablep\":\n                            if len(args) != 1:\n                                ok, result = False, \"请提供插件名\"\n                            else:\n                                ok = PluginManager().disable_plugin(args[0])\n                                if ok:\n                                    result = \"插件已禁用\"\n                                else:\n                                    result = \"插件不存在\"\n                    
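# install/uninstall/update are delegated to PluginManager\n                    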
    elif cmd == \"installp\":\n                            if len(args) != 1:\n                                ok, result = False, \"请提供插件名或.git结尾的仓库地址\"\n                            else:\n                                ok, result = PluginManager().install_plugin(args[0])\n                        elif cmd == \"uninstallp\":\n                            if len(args) != 1:\n                                ok, result = False, \"请提供插件名\"\n                            else:\n                                ok, result = PluginManager().uninstall_plugin(args[0])\n                        elif cmd == \"updatep\":\n                            if len(args) != 1:\n                                ok, result = False, \"请提供插件名\"\n                            else:\n                                ok, result = PluginManager().update_plugin(args[0])\n                        logger.debug(\"[Godcmd] admin command: %s by %s\" % (cmd, user))\n                else:\n                    ok, result = False, \"需要管理员权限才能执行该指令\"\n            else:\n                trigger_prefix = conf().get(\"plugin_trigger_prefix\", \"$\")\n                if trigger_prefix == \"#\":  # 跟插件聊天指令前缀相同，继续递交\n                    return\n                ok, result = False, f\"未知指令：{cmd}\\n查看指令列表请输入#help \\n\"\n\n            reply = Reply()\n            if ok:\n                reply.type = ReplyType.INFO\n            else:\n                reply.type = ReplyType.ERROR\n            reply.content = result\n            e_context[\"reply\"] = reply\n\n            e_context.action = EventAction.BREAK_PASS  # 事件结束，并跳过处理context的默认逻辑\n        elif not self.isrunning:\n            e_context.action = EventAction.BREAK_PASS\n\n    def authenticate(self, userid, args, isadmin, isgroup) -> Tuple[bool, str]:\n        if isgroup:\n            return False, \"请勿在群聊中认证\"\n\n        if isadmin:\n            return False, \"管理员账号无需认证\"\n\n        if len(args) != 1:\n            return False, \"请提供口令\"\n\n        password = args[0]\n        if password == self.password:\n            self.admin_users.append(userid)\n            global_config[\"admin_users\"].append(userid)\n            return True, \"认证成功\"\n        elif password == self.temp_password:\n            self.admin_users.append(userid)\n            global_config[\"admin_users\"].append(userid)\n            return True, \"认证成功，请尽快设置口令\"\n        else:\n            return False, \"认证失败\"\n\n    def get_help_text(self, isadmin=False, isgroup=False, **kwargs):\n        return get_help_text(isadmin, isgroup)\n\n\n    def is_admin_in_group(self, context):\n        if context[\"isgroup\"]:\n            return context.kwargs.get(\"msg\").actual_user_id in global_config[\"admin_users\"]\n        return False\n\n\n    def model_mapping(self, model) -> str:\n        if model == \"gpt-4-turbo\":\n            return const.GPT4_TURBO_PREVIEW\n        return model\n\n    def reload(self):\n        gconf = pconf(self.name)\n        if gconf:\n            if gconf.get(\"password\"):\n                self.password = gconf[\"password\"]\n            if gconf.get(\"admin_users\"):\n                self.admin_users = gconf[\"admin_users\"]\n"
  },
  {
    "path": "plugins/hello/README.md",
    "content": "## 插件说明\n\n可以根据需求设置入群欢迎、群聊拍一拍、退群等消息的自定义提示词，也支持为每个群设置对应的固定欢迎语。\n\n该插件也是用户根据需求开发自定义插件的示例插件，参考[插件开发说明](https://github.com/zhayujie/chatgpt-on-wechat/tree/master/plugins)\n\n## 插件配置\n\n将 `plugins/hello` 目录下的 `config.json.template` 配置模板复制为最终生效的 `config.json`。 (如果未配置则会默认使用`config.json.template`模板中配置)。\n\n以下是插件配置项说明：\n\n```bash\n{\n    \"group_welc_fixed_msg\": {                   ## 这里可以为特定群里配置特定的固定欢迎语\n      \"群聊1\": \"群聊1的固定欢迎语\",\n      \"群聊2\": \"群聊2的固定欢迎语\"\n    },\n\n  \"group_welc_prompt\": \"请你随机使用一种风格说一句问候语来欢迎新用户\\\"{nickname}\\\"加入群聊。\",  ## 群聊随机欢迎语的提示词\n\n  \"group_exit_prompt\": \"请你随机使用一种风格跟其他群用户说他违反规则\\\"{nickname}\\\"退出群聊。\",  ## 移出群聊的提示词\n\n  \"patpat_prompt\": \"请你随机使用一种风格介绍你自己，并告诉用户输入#help可以查看帮助信息。\",  ## 群内拍一拍的提示词\n \n  \"use_character_desc\": false     ## 是否在Hello插件中使用LinkAI应用的系统设定\n}\n```\n\n\n注意：\n\n - 设置全局的用户进群固定欢迎语，可以在***项目根目录下***的`config.json`文件里，可以添加参数`\"group_welcome_msg\": \"\" `，参考 [#1482](https://github.com/zhayujie/chatgpt-on-wechat/pull/1482)\n - 为每个群设置固定的欢迎语，可以在`\"group_welc_fixed_msg\": {}`配置群聊名和对应的固定欢迎语，优先级高于全局固定欢迎语\n - 如果没有配置以上两个参数，则使用随机欢迎语，如需设定风格，语言等，修改`\"group_welc_prompt\": `即可\n - 如果使用LinkAI的服务，想在随机欢迎中结合LinkAI应用的设定，配置`\"use_character_desc\": true `\n - 实际 `config.json` 配置中应保证json格式，不应携带 '#' 及后面的注释\n - 如果是`docker`部署，可通过映射 `plugins/config.json` 到容器中来完成插件配置，参考[文档](https://github.com/zhayujie/chatgpt-on-wechat#3-%E6%8F%92%E4%BB%B6%E4%BD%BF%E7%94%A8)\n\n\n\n"
  },
  {
    "path": "plugins/hello/__init__.py",
    "content": "from .hello import *\n"
  },
  {
    "path": "plugins/hello/config.json.template",
    "content": "{\n    \"group_welc_fixed_msg\": {\n      \"群聊1\": \"群聊1的固定欢迎语\",\n      \"群聊2\": \"群聊2的固定欢迎语\"\n    },\n\n  \"group_welc_prompt\": \"请你随机使用一种风格说一句问候语来欢迎新用户\\\"{nickname}\\\"加入群聊。\",\n\n  \"group_exit_prompt\": \"请你随机使用一种风格跟其他群用户说他违反规则\\\"{nickname}\\\"退出群聊。\",\n\n  \"patpat_prompt\": \"请你随机使用一种风格介绍你自己，并告诉用户输入#help可以查看帮助信息。\",\n\n  \"use_character_desc\": false\n}"
  },
  {
    "path": "plugins/hello/hello.py",
    "content": "# encoding:utf-8\n\nimport plugins\nfrom bridge.context import ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom channel.chat_message import ChatMessage\nfrom common.log import logger\nfrom plugins import *\nfrom config import conf\n\n\n@plugins.register(\n    name=\"Hello\",\n    desire_priority=-1,\n    hidden=True,\n    desc=\"A simple plugin that says hello\",\n    version=\"0.1\",\n    author=\"lanvent\",\n)\n\n\nclass Hello(Plugin):\n\n    group_welc_prompt = \"请你随机使用一种风格说一句问候语来欢迎新用户\\\"{nickname}\\\"加入群聊。\"\n    group_exit_prompt = \"请你随机使用一种风格介绍你自己，并告诉用户输入#help可以查看帮助信息。\"\n    patpat_prompt = \"请你随机使用一种风格跟其他群用户说他违反规则\\\"{nickname}\\\"退出群聊。\"\n\n    def __init__(self):\n        super().__init__()\n        try:\n            self.config = super().load_config()\n            if not self.config:\n                self.config = self._load_config_template()\n            self.group_welc_fixed_msg = self.config.get(\"group_welc_fixed_msg\", {})\n            self.group_welc_prompt = self.config.get(\"group_welc_prompt\", self.group_welc_prompt)\n            self.group_exit_prompt = self.config.get(\"group_exit_prompt\", self.group_exit_prompt)\n            self.patpat_prompt = self.config.get(\"patpat_prompt\", self.patpat_prompt)\n            logger.debug(\"[Hello] inited\")\n            self.handlers[Event.ON_HANDLE_CONTEXT] = self.on_handle_context\n        except Exception as e:\n            logger.error(f\"[Hello]初始化异常：{e}\")\n            raise \"[Hello] init failed, ignore \"\n\n    def on_handle_context(self, e_context: EventContext):\n        if e_context[\"context\"].type not in [\n            ContextType.TEXT,\n            ContextType.JOIN_GROUP,\n            ContextType.PATPAT,\n            ContextType.EXIT_GROUP\n        ]:\n            return\n        msg: ChatMessage = e_context[\"context\"][\"msg\"]\n        group_name = msg.from_user_nickname\n        if e_context[\"context\"].type == ContextType.JOIN_GROUP:\n            if \"group_welcome_msg\" in conf() or group_name in self.group_welc_fixed_msg:\n                reply = Reply()\n                reply.type = ReplyType.TEXT\n                if group_name in self.group_welc_fixed_msg:\n                    reply.content = self.group_welc_fixed_msg.get(group_name, \"\")\n                else:\n                    reply.content = conf().get(\"group_welcome_msg\", \"\")\n                e_context[\"reply\"] = reply\n                e_context.action = EventAction.BREAK_PASS  # 事件结束，并跳过处理context的默认逻辑\n                return\n            e_context[\"context\"].type = ContextType.TEXT\n            e_context[\"context\"].content = self.group_welc_prompt.format(nickname=msg.actual_user_nickname)\n            e_context.action = EventAction.BREAK  # 事件结束，进入默认处理逻辑\n            if not self.config or not self.config.get(\"use_character_desc\"):\n                e_context[\"context\"][\"generate_breaked_by\"] = EventAction.BREAK\n            return\n        \n        if e_context[\"context\"].type == ContextType.EXIT_GROUP:\n            if conf().get(\"group_chat_exit_group\"):\n                e_context[\"context\"].type = ContextType.TEXT\n                e_context[\"context\"].content = self.group_exit_prompt.format(nickname=msg.actual_user_nickname)\n                e_context.action = EventAction.BREAK  # 事件结束，进入默认处理逻辑\n                return\n            e_context.action = EventAction.BREAK\n            return\n            \n        if e_context[\"context\"].type == ContextType.PATPAT:\n            
e_context[\"context\"].type = ContextType.TEXT\n            e_context[\"context\"].content = self.patpat_prompt\n            e_context.action = EventAction.BREAK  # 事件结束，进入默认处理逻辑\n            if not self.config or not self.config.get(\"use_character_desc\"):\n                e_context[\"context\"][\"generate_breaked_by\"] = EventAction.BREAK\n            return\n\n        content = e_context[\"context\"].content\n        logger.debug(\"[Hello] on_handle_context. content: %s\" % content)\n        if content == \"Hello\":\n            reply = Reply()\n            reply.type = ReplyType.TEXT\n            if e_context[\"context\"][\"isgroup\"]:\n                reply.content = f\"Hello, {msg.actual_user_nickname} from {msg.from_user_nickname}\"\n            else:\n                reply.content = f\"Hello, {msg.from_user_nickname}\"\n            e_context[\"reply\"] = reply\n            e_context.action = EventAction.BREAK_PASS  # 事件结束，并跳过处理context的默认逻辑\n\n        if content == \"Hi\":\n            reply = Reply()\n            reply.type = ReplyType.TEXT\n            reply.content = \"Hi\"\n            e_context[\"reply\"] = reply\n            e_context.action = EventAction.BREAK  # 事件结束，进入默认处理逻辑，一般会覆写reply\n\n        if content == \"End\":\n            # 如果是文本消息\"End\"，将请求转换成\"IMAGE_CREATE\"，并将content设置为\"The World\"\n            e_context[\"context\"].type = ContextType.IMAGE_CREATE\n            content = \"The World\"\n            e_context.action = EventAction.CONTINUE  # 事件继续，交付给下个插件或默认逻辑\n\n    def get_help_text(self, **kwargs):\n        help_text = \"输入Hello，我会回复你的名字\\n输入End，我会回复你世界的图片\\n\"\n        return help_text\n\n    def _load_config_template(self):\n        logger.debug(\"No Hello plugin config.json, use plugins/hello/config.json.template\")\n        try:\n            plugin_config_path = os.path.join(self.path, \"config.json.template\")\n            if os.path.exists(plugin_config_path):\n                with open(plugin_config_path, \"r\", encoding=\"utf-8\") as f:\n                    plugin_conf = json.load(f)\n                    return plugin_conf\n        except Exception as e:\n            logger.exception(e)"
  },
  {
    "path": "plugins/keyword/README.md",
    "content": "# 目的\n关键字匹配并回复\n\n# 试用场景\n目前是在微信公众号下面使用过。\n\n# 使用步骤\n1. 复制 `config.json.template` 为 `config.json`\n2. 在关键字 `keyword` 新增需要关键字匹配的内容\n3. 重启程序做验证\n\n# 验证结果\n![结果](test-keyword.png)"
  },
  {
    "path": "plugins/keyword/__init__.py",
    "content": "from .keyword import *\n"
  },
  {
    "path": "plugins/keyword/config.json.template",
    "content": "{\n  \"keyword\": {\n    \"关键字匹配\": \"测试成功\"\n  }\n}\n"
  },
  {
    "path": "plugins/keyword/keyword.py",
    "content": "# encoding:utf-8\n\nimport json\nimport os\nimport requests\nimport plugins\nfrom bridge.context import ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom plugins import *\n\n\n@plugins.register(\n    name=\"Keyword\",\n    desire_priority=900,\n    hidden=True,\n    desc=\"关键词匹配过滤\",\n    version=\"0.1\",\n    author=\"fengyege.top\",\n)\nclass Keyword(Plugin):\n    def __init__(self):\n        super().__init__()\n        try:\n            curdir = os.path.dirname(__file__)\n            config_path = os.path.join(curdir, \"config.json\")\n            conf = None\n            if not os.path.exists(config_path):\n                logger.debug(f\"[keyword]不存在配置文件{config_path}\")\n                conf = {\"keyword\": {}}\n                with open(config_path, \"w\", encoding=\"utf-8\") as f:\n                    json.dump(conf, f, indent=4)\n            else:\n                logger.debug(f\"[keyword]加载配置文件{config_path}\")\n                with open(config_path, \"r\", encoding=\"utf-8\") as f:\n                    conf = json.load(f)\n            # 加载关键词\n            self.keyword = conf[\"keyword\"]\n\n            logger.debug(\"[keyword] {}\".format(self.keyword))\n            self.handlers[Event.ON_HANDLE_CONTEXT] = self.on_handle_context\n            logger.debug(\"[keyword] inited.\")\n        except Exception as e:\n            logger.warn(\"[keyword] init failed, ignore or see https://github.com/zhayujie/chatgpt-on-wechat/tree/master/plugins/keyword .\")\n            raise e\n\n    def on_handle_context(self, e_context: EventContext):\n        if e_context[\"context\"].type != ContextType.TEXT:\n            return\n\n        content = e_context[\"context\"].content.strip()\n        logger.debug(\"[keyword] on_handle_context. 
content: %s\" % content)\n        if content in self.keyword:\n            logger.info(f\"[keyword] 匹配到关键字【{content}】\")\n            reply_text = self.keyword[content]\n\n            # 判断匹配内容的类型\n            if (reply_text.startswith(\"http://\") or reply_text.startswith(\"https://\")) and any(reply_text.endswith(ext) for ext in [\".jpg\", \".webp\", \".jpeg\", \".png\", \".gif\", \".img\"]):\n            # 如果是以 http:// 或 https:// 开头，且\".jpg\", \".jpeg\", \".png\", \".gif\", \".img\"结尾，则认为是图片 URL。\n                reply = Reply()\n                reply.type = ReplyType.IMAGE_URL\n                reply.content = reply_text\n                \n            elif (reply_text.startswith(\"http://\") or reply_text.startswith(\"https://\")) and any(reply_text.endswith(ext) for ext in [\".pdf\", \".doc\", \".docx\", \".xls\", \"xlsx\",\".zip\", \".rar\"]):\n            # 如果是以 http:// 或 https:// 开头，且\".pdf\", \".doc\", \".docx\", \".xls\", \"xlsx\",\".zip\", \".rar\"结尾，则下载文件到tmp目录并发送给用户\n                file_path = \"tmp\"\n                if not os.path.exists(file_path):\n                    os.makedirs(file_path)\n                file_name = reply_text.split(\"/\")[-1]  # 获取文件名\n                file_path = os.path.join(file_path, file_name)\n                response = requests.get(reply_text)\n                with open(file_path, \"wb\") as f:\n                    f.write(response.content)\n                reply = Reply()\n                reply.type = ReplyType.FILE\n                reply.content = file_path\n            \n            elif (reply_text.startswith(\"http://\") or reply_text.startswith(\"https://\")) and any(reply_text.endswith(ext) for ext in [\".mp4\"]):\n            # 如果是以 http:// 或 https:// 开头，且\".mp4\"结尾，则下载视频到tmp目录并发送给用户\n                reply = Reply()\n                reply.type = ReplyType.VIDEO_URL\n                reply.content = reply_text\n                \n            else:\n            # 否则认为是普通文本\n                reply = Reply()\n                reply.type = ReplyType.TEXT\n                reply.content = reply_text\n            \n            e_context[\"reply\"] = reply\n            e_context.action = EventAction.BREAK_PASS  # 事件结束，并跳过处理context的默认逻辑\n            \n    def get_help_text(self, **kwargs):\n        help_text = \"关键词过滤\"\n        return help_text\n"
  },
  {
    "path": "plugins/linkai/README.md",
    "content": "## 插件说明\n\n基于 LinkAI 提供的知识库、Midjourney绘画、文档对话等能力对机器人的功能进行增强。平台地址: https://link-ai.tech/console\n\n## 插件配置\n\n将 `plugins/linkai` 目录下的 `config.json.template` 配置模板复制为最终生效的 `config.json`。 (如果未配置则会默认使用`config.json.template`模板中配置，但功能默认关闭，需要可通过指令进行开启)。\n\n以下是插件配置项说明：\n\n```bash\n{\n    \"group_app_map\": {               # 群聊 和 应用编码 的映射关系\n        \"测试群名称1\": \"default\",      # 表示在名称为 \"测试群名称1\" 的群聊中将使用app_code 为 default 的应用\n        \"测试群名称2\": \"Kv2fXJcH\"\n    },\n    \"midjourney\": {\n        \"enabled\": true,          # midjourney 绘画开关\n        \"auto_translate\": true,   # 是否自动将提示词翻译为英文\n        \"img_proxy\": true,        # 是否对生成的图片使用代理，如果你是国外服务器，将这一项设置为false会获得更快的生成速度\n        \"max_tasks\": 3,           # 支持同时提交的总任务个数\n        \"max_tasks_per_user\": 1,  # 支持单个用户同时提交的任务个数\n        \"use_image_create_prefix\": true   # 是否使用全局的绘画触发词，如果开启将同时支持由`config.json`中的 image_create_prefix 配置触发\n    },\n    \"summary\": {\n        \"enabled\": true,              # 文档总结和对话功能开关\n        \"group_enabled\": true,        # 是否支持群聊开启\n        \"max_file_size\": 5000,        # 文件的大小限制，单位KB，默认为5M，超过该大小直接忽略\n        \"type\": [\"FILE\", \"SHARING\", \"IMAGE\"]  # 支持总结的类型，分别表示 文件、分享链接、图片，其中文件和链接默认打开，图片默认关闭\n    }\n}\n```\n\n根目录 `config.json` 中配置，`API_KEY` 在 [控制台](https://link-ai.tech/console/interface) 中创建并复制过来:\n\n```bash\n\"linkai_api_key\": \"Link_xxxxxxxxx\"\n```\n\n注意：\n\n - 配置项中 `group_app_map` 部分是用于映射群聊与LinkAI平台上的应用， `midjourney` 部分是 mj 画图的配置，`summary` 部分是文档总结及对话功能的配置。三部分的配置相互独立，可按需开启\n - 实际 `config.json` 配置中应保证json格式，不应携带 '#' 及后面的注释\n - 如果是`docker`部署，可通过映射 `plugins/config.json` 到容器中来完成插件配置，参考[文档](https://github.com/zhayujie/chatgpt-on-wechat#3-%E6%8F%92%E4%BB%B6%E4%BD%BF%E7%94%A8)\n\n## 插件使用\n\n> 使用插件中的知识库管理功能需要首先开启`linkai`对话，依赖全局 `config.json` 中的 `use_linkai` 和 `linkai_api_key` 配置；而midjourney绘画 和 summary文档总结对话功能则只需填写 `linkai_api_key` 配置，`use_linkai` 无论是否关闭均可使用。具体可参考 [详细文档](https://link-ai.tech/platform/link-app/wechat)。\n\n完成配置后运行项目，会自动运行插件，输入 `#help linkai` 可查看插件功能。\n\n### 1.知识库管理功能\n\n提供在不同群聊使用不同应用的功能。可以在上述 `group_app_map` 配置中固定映射关系，也可以通过指令在群中快速完成切换。\n\n应用切换指令需要首先完成管理员 (`godcmd`) 插件的认证，然后按以下格式输入：\n\n`$linkai app {app_code}`\n\n例如输入 `$linkai app Kv2fXJcH`，即将当前群聊与 app_code为 Kv2fXJcH 的应用绑定。\n\n另外，还可以通过 `$linkai close` 来一键关闭linkai对话，此时就会使用默认的openai接口；同理，发送 `$linkai open` 可以再次开启。\n\n### 2.Midjourney绘画功能\n\n若未配置 `plugins/linkai/config.json`，默认会关闭画图功能，直接使用 `$mj open` 可基于默认配置直接使用mj画图。\n\n指令格式：\n\n```\n - 图片生成: $mj 描述词1, 描述词2..\n - 图片放大: $mju 图片ID 图片序号\n - 图片变换: $mjv 图片ID 图片序号\n - 重置: $mjr 图片ID\n```\n\n例如：\n\n```\n\"$mj a little cat, white --ar 9:16\"\n\"$mju 1105592717188272288 2\"\n\"$mjv 11055927171882 2\"\n\"$mjr 11055927171882\"\n```\n\n注意事项：\n1. 使用 `$mj open` 和 `$mj close` 指令可以快速打开和关闭绘图功能\n2. 海外环境部署请将 `img_proxy` 设置为 `false`\n3. 开启 `use_image_create_prefix` 配置后可直接复用全局画图触发词，以\"画\"开头便可以生成图片。\n4. 提示词内容中包含敏感词或者参数格式错误可能导致绘画失败，生成失败不消耗积分\n5. 
若未收到图片可能有两种可能，一种是收到了图片但微信发送失败，可以在后台日志查看有没有获取到图片url，一般原因是受到了wx限制，可以稍后重试或更换账号尝试；另一种情况是图片提示词存在疑似违规，mj不会直接提示错误但会在画图后删掉原图导致程序无法获取，这种情况不消耗积分。\n\n### 3.文档总结对话功能\n\n#### 配置\n\n该功能依赖 LinkAI的知识库及对话功能，需要在项目根目录的config.json中设置 `linkai_api_key`， 同时根据上述插件配置说明，在插件config.json添加 `summary` 部分的配置，设置 `enabled` 为 true。\n\n如果不想创建 `plugins/linkai/config.json` 配置，可以直接通过 `$linkai sum open` 指令开启该功能。\n\n也可以通过私聊(全局 `config.json` 中的 `linkai_app_code`)或者群聊绑定(通过`group_app_map`参数配置)的应用来开启该功能：在LinkAI平台 [应用配置](https://link-ai.tech/console/factory) 里添加并开启**内容总结**插件。\n\n#### 使用\n\n功能开启后，向机器人发送 **文件**、 **分享链接卡片**、**图片** 即可生成摘要，进一步可以与文件或链接的内容进行多轮对话。如果需要关闭某种类型的内容总结，设置 `summary`配置中的type字段即可。\n\n#### 限制\n\n 1. 文件目前 支持 `txt`, `docx`, `pdf`, `md`, `csv`格式，文件大小由 `max_file_size` 限制，最大不超过15M，文件字数最多可支持百万字的文件。但不建议上传字数过多的文件，一是token消耗过大，二是摘要很难覆盖到全部内容，只能通过多轮对话来了解细节。\n 2. 分享链接 目前仅支持 公众号文章，后续会支持更多文章类型及视频链接等\n 3. 总结及对话的 费用与 LinkAI 3.5-4K 模型的计费方式相同，按文档内容的tokens进行计算\n"
  },
  {
    "path": "plugins/linkai/__init__.py",
    "content": "from .linkai import *\n"
  },
  {
    "path": "plugins/linkai/config.json.template",
    "content": "{\n    \"group_app_map\": {\n        \"测试群名1\": \"default\",\n        \"测试群名2\": \"Kv2fXJcH\"\n    },\n    \"midjourney\": {\n        \"enabled\": true,\n        \"auto_translate\": true,\n        \"img_proxy\": true,\n        \"max_tasks\": 3,\n        \"max_tasks_per_user\": 1,\n        \"use_image_create_prefix\": true\n    },\n    \"summary\": {\n        \"enabled\": true,\n        \"group_enabled\": true,\n        \"max_file_size\": 5000,\n        \"type\": [\"FILE\", \"SHARING\"]\n    }\n}\n"
  },
  {
    "path": "plugins/linkai/linkai.py",
    "content": "import plugins\nfrom bridge.context import ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom plugins import *\nfrom .midjourney import MJBot\nfrom .summary import LinkSummary\nfrom bridge import bridge\nfrom common.expired_dict import ExpiredDict\nfrom common import const\nimport os\nfrom .utils import Util\nfrom config import plugin_config, conf\n\n\n@plugins.register(\n    name=\"linkai\",\n    desc=\"A plugin that supports knowledge base and midjourney drawing.\",\n    version=\"0.1.0\",\n    author=\"https://link-ai.tech\",\n    desire_priority=99\n)\nclass LinkAI(Plugin):\n    def __init__(self):\n        super().__init__()\n        self.handlers[Event.ON_HANDLE_CONTEXT] = self.on_handle_context\n        self.config = super().load_config()\n        if not self.config:\n            # 未加载到配置，使用模板中的配置\n            self.config = self._load_config_template()\n        if self.config:\n            self.mj_bot = MJBot(self.config.get(\"midjourney\"), self._fetch_group_app_code)\n        self.sum_config = {}\n        if self.config:\n            self.sum_config = self.config.get(\"summary\")\n        logger.debug(f\"[LinkAI] inited, config={self.config}\")\n\n    def on_handle_context(self, e_context: EventContext):\n        \"\"\"\n        消息处理逻辑\n        :param e_context: 消息上下文\n        \"\"\"\n        if not self.config:\n            return\n\n        context = e_context['context']\n        if context.type not in [ContextType.TEXT, ContextType.IMAGE, ContextType.IMAGE_CREATE, ContextType.FILE,\n                                ContextType.SHARING]:\n            # filter content no need solve\n            return\n\n        if context.type in [ContextType.FILE, ContextType.IMAGE] and self._is_summary_open(context):\n            # 文件处理\n            context.get(\"msg\").prepare()\n            file_path = context.content\n            if not LinkSummary().check_file(file_path, self.sum_config):\n                return\n            if context.type != ContextType.IMAGE:\n                _send_info(e_context, \"正在为你加速生成摘要，请稍后\")\n            app_code = self._fetch_app_code(context)\n            res = LinkSummary().summary_file(file_path, app_code)\n            if not res:\n                if context.type != ContextType.IMAGE:\n                    _set_reply_text(\"因为神秘力量无法获取内容，请稍后再试吧\", e_context, level=ReplyType.TEXT)\n                return\n            summary_text = res.get(\"summary\")\n            if context.type != ContextType.IMAGE:\n                USER_FILE_MAP[_find_user_id(context) + \"-sum_id\"] = res.get(\"summary_id\")\n                summary_text += \"\\n\\n💬 发送 \\\"开启对话\\\" 可以开启与文件内容的对话\"\n            _set_reply_text(summary_text, e_context, level=ReplyType.TEXT)\n            os.remove(file_path)\n            return\n\n        if (context.type == ContextType.SHARING and self._is_summary_open(context)) or \\\n                (context.type == ContextType.TEXT and self._is_summary_open(context) and LinkSummary().check_url(context.content)):\n            if not LinkSummary().check_url(context.content):\n                return\n            _send_info(e_context, \"正在为你加速生成摘要，请稍后\")\n            app_code = self._fetch_app_code(context)\n            res = LinkSummary().summary_url(context.content, app_code)\n            if not res:\n                _set_reply_text(\"因为神秘力量无法获取文章内容，请稍后再试吧~\", e_context, level=ReplyType.TEXT)\n                return\n            _set_reply_text(res.get(\"summary\") + \"\\n\\n💬 发送 \\\"开启对话\\\" 可以开启与文章内容的对话\", e_context,\n             
               level=ReplyType.TEXT)\n            USER_FILE_MAP[_find_user_id(context) + \"-sum_id\"] = res.get(\"summary_id\")\n            return\n\n        mj_type = self.mj_bot.judge_mj_task_type(e_context)\n        if mj_type:\n            # MJ作图任务处理\n            self.mj_bot.process_mj_task(mj_type, e_context)\n            return\n\n        if context.content.startswith(f\"{_get_trigger_prefix()}linkai\"):\n            # 应用管理功能\n            self._process_admin_cmd(e_context)\n            return\n\n        if context.type == ContextType.TEXT and context.content == \"开启对话\" and _find_sum_id(context):\n            # 文本对话\n            _send_info(e_context, \"正在为你开启对话，请稍后\")\n            res = LinkSummary().summary_chat(_find_sum_id(context))\n            if not res:\n                _set_reply_text(\"开启对话失败，请稍后再试吧\", e_context)\n                return\n            USER_FILE_MAP[_find_user_id(context) + \"-file_id\"] = res.get(\"file_id\")\n            _set_reply_text(\"💡你可以问我关于这篇文章的任何问题，例如：\\n\\n\" + res.get(\n                \"questions\") + \"\\n\\n发送 \\\"退出对话\\\" 可以关闭与文章的对话\", e_context, level=ReplyType.TEXT)\n            return\n\n        if context.type == ContextType.TEXT and context.content == \"退出对话\" and _find_file_id(context):\n            del USER_FILE_MAP[_find_user_id(context) + \"-file_id\"]\n            bot = bridge.Bridge().find_chat_bot(const.LINKAI)\n            bot.sessions.clear_session(context[\"session_id\"])\n            _set_reply_text(\"对话已退出\", e_context, level=ReplyType.TEXT)\n            return\n\n        if context.type == ContextType.TEXT and _find_file_id(context):\n            bot = bridge.Bridge().find_chat_bot(const.LINKAI)\n            context.kwargs[\"file_id\"] = _find_file_id(context)\n            reply = bot.reply(context.content, context)\n            e_context[\"reply\"] = reply\n            e_context.action = EventAction.BREAK_PASS\n            return\n\n        if self._is_chat_task(e_context):\n            # 文本对话任务处理\n            self._process_chat_task(e_context)\n\n    # 插件管理功能\n    def _process_admin_cmd(self, e_context: EventContext):\n        context = e_context['context']\n        cmd = context.content.split()\n        if len(cmd) == 1 or (len(cmd) == 2 and cmd[1] == \"help\"):\n            _set_reply_text(self.get_help_text(verbose=True), e_context, level=ReplyType.INFO)\n            return\n\n        if len(cmd) == 2 and (cmd[1] == \"open\" or cmd[1] == \"close\"):\n            # 知识库开关指令\n            if not Util.is_admin(e_context):\n                _set_reply_text(\"需要管理员权限执行\", e_context, level=ReplyType.ERROR)\n                return\n            is_open = True\n            tips_text = \"开启\"\n            if cmd[1] == \"close\":\n                tips_text = \"关闭\"\n                is_open = False\n            conf()[\"use_linkai\"] = is_open\n            bridge.Bridge().reset_bot()\n            _set_reply_text(f\"LinkAI对话功能{tips_text}\", e_context, level=ReplyType.INFO)\n            return\n\n        if len(cmd) == 3 and cmd[1] == \"app\":\n            # 知识库应用切换指令\n            if not context.kwargs.get(\"isgroup\"):\n                _set_reply_text(\"该指令需在群聊中使用\", e_context, level=ReplyType.ERROR)\n                return\n            if not Util.is_admin(e_context):\n                _set_reply_text(\"需要管理员权限执行\", e_context, level=ReplyType.ERROR)\n                return\n            app_code = cmd[2]\n            group_name = context.kwargs.get(\"msg\").from_user_nickname\n            group_mapping = 
self.config.get(\"group_app_map\")\n            if group_mapping:\n                group_mapping[group_name] = app_code\n            else:\n                self.config[\"group_app_map\"] = {group_name: app_code}\n            # 保存插件配置\n            super().save_config(self.config)\n            _set_reply_text(f\"应用设置成功: {app_code}\", e_context, level=ReplyType.INFO)\n            return\n\n        if len(cmd) == 3 and cmd[1] == \"sum\" and (cmd[2] == \"open\" or cmd[2] == \"close\"):\n            # 总结对话开关指令\n            if not Util.is_admin(e_context):\n                _set_reply_text(\"需要管理员权限执行\", e_context, level=ReplyType.ERROR)\n                return\n            is_open = True\n            tips_text = \"开启\"\n            if cmd[2] == \"close\":\n                tips_text = \"关闭\"\n                is_open = False\n            if not self.sum_config:\n                _set_reply_text(\n                    f\"插件未启用summary功能，请参考以下链添加插件配置\\n\\nhttps://github.com/zhayujie/chatgpt-on-wechat/blob/master/plugins/linkai/README.md\",\n                    e_context, level=ReplyType.INFO)\n            else:\n                self.sum_config[\"enabled\"] = is_open\n                _set_reply_text(f\"文章总结功能{tips_text}\", e_context, level=ReplyType.INFO)\n            return\n\n        _set_reply_text(f\"指令错误，请输入{_get_trigger_prefix()}linkai help 获取帮助\", e_context,\n                        level=ReplyType.INFO)\n        return\n\n    def _is_summary_open(self, context) -> bool:\n        # 获取远程应用插件状态\n        remote_enabled = False\n        if context.kwargs.get(\"isgroup\"):\n            # 群聊场景只查询群对应的app_code\n            group_name = context.get(\"msg\").from_user_nickname\n            app_code = self._fetch_group_app_code(group_name)\n            if app_code:\n                if context.type.name in [\"FILE\", \"SHARING\"]:\n                    remote_enabled = Util.fetch_app_plugin(app_code, \"内容总结\")\n        else:\n            # 非群聊场景使用全局app_code\n            app_code = conf().get(\"linkai_app_code\")\n            if app_code:\n                if context.type.name in [\"FILE\", \"SHARING\"]:\n                    remote_enabled = Util.fetch_app_plugin(app_code, \"内容总结\")\n\n        # 基础条件：总开关开启且消息类型符合要求\n        base_enabled = (\n                self.sum_config\n                and self.sum_config.get(\"enabled\")\n                and (context.type.name in (\n                    self.sum_config.get(\"type\") or [\"FILE\", \"SHARING\"]) or context.type.name == \"TEXT\")\n        )\n\n        # 群聊：需要满足(总开关和群开关)或远程插件开启\n        if context.kwargs.get(\"isgroup\"):\n            return (base_enabled and self.sum_config.get(\"group_enabled\")) or remote_enabled\n\n        # 非群聊：只需要满足总开关或远程插件开启\n        return base_enabled or remote_enabled\n\n    # LinkAI 对话任务处理\n    def _is_chat_task(self, e_context: EventContext):\n        context = e_context['context']\n        # 群聊应用管理\n        return self.config.get(\"group_app_map\") and context.kwargs.get(\"isgroup\")\n\n    def _process_chat_task(self, e_context: EventContext):\n        \"\"\"\n        处理LinkAI对话任务\n        :param e_context: 对话上下文\n        \"\"\"\n        context = e_context['context']\n        # 群聊应用管理\n        group_name = context.get(\"msg\").from_user_nickname\n        app_code = self._fetch_group_app_code(group_name)\n        if app_code:\n            context.kwargs['app_code'] = app_code\n\n    def _fetch_group_app_code(self, group_name: str) -> str:\n        \"\"\"\n        根据群聊名称获取对应的应用code\n        :param group_name: 群聊名称\n        :return: 
应用code\n        \"\"\"\n        group_mapping = self.config.get(\"group_app_map\")\n        if group_mapping:\n            app_code = group_mapping.get(group_name) or group_mapping.get(\"ALL_GROUP\")\n            return app_code\n\n    def _fetch_app_code(self, context) -> str:\n        \"\"\"\n        根据主配置或者群聊名称获取对应的应用code,优先获取群聊配置的应用code\n        :param context: 上下文\n        :return: 应用code\n        \"\"\"\n        app_code = conf().get(\"linkai_app_code\")\n        if context.kwargs.get(\"isgroup\"):\n            # 群聊场景只查询群对应的app_code\n            group_name = context.get(\"msg\").from_user_nickname\n            app_code = self._fetch_group_app_code(group_name)\n        return app_code\n\n    def get_help_text(self, verbose=False, **kwargs):\n        trigger_prefix = _get_trigger_prefix()\n        help_text = \"用于集成 LinkAI 提供的知识库、Midjourney绘画、文档总结、联网搜索等能力。\\n\\n\"\n        if not verbose:\n            return help_text\n        help_text += f'📖 知识库\\n - 群聊中指定应用: {trigger_prefix}linkai app 应用编码\\n'\n        help_text += f' - {trigger_prefix}linkai open: 开启对话\\n'\n        help_text += f' - {trigger_prefix}linkai close: 关闭对话\\n'\n        help_text += f'\\n例如: \\n\"{trigger_prefix}linkai app Kv2fXJcH\"\\n\\n'\n        help_text += f\"🎨 绘画\\n - 生成: {trigger_prefix}mj 描述词1, 描述词2.. \\n - 放大: {trigger_prefix}mju 图片ID 图片序号\\n - 变换: {trigger_prefix}mjv 图片ID 图片序号\\n - 重置: {trigger_prefix}mjr 图片ID\"\n        help_text += f\"\\n\\n例如：\\n\\\"{trigger_prefix}mj a little cat, white --ar 9:16\\\"\\n\\\"{trigger_prefix}mju 11055927171882 2\\\"\"\n        help_text += f\"\\n\\\"{trigger_prefix}mjv 11055927171882 2\\\"\\n\\\"{trigger_prefix}mjr 11055927171882\\\"\"\n        help_text += f\"\\n\\n💡 文档总结和对话\\n - 开启: {trigger_prefix}linkai sum open\\n - 使用: 发送文件、公众号文章等可生成摘要，并与内容对话\"\n        return help_text\n\n    def _load_config_template(self):\n        logger.debug(\"No LinkAI plugin config.json, use plugins/linkai/config.json.template\")\n        try:\n            plugin_config_path = os.path.join(self.path, \"config.json.template\")\n            if os.path.exists(plugin_config_path):\n                with open(plugin_config_path, \"r\", encoding=\"utf-8\") as f:\n                    plugin_conf = json.load(f)\n                    plugin_conf[\"midjourney\"][\"enabled\"] = False\n                    plugin_conf[\"summary\"][\"enabled\"] = False\n                    write_plugin_config({\"linkai\": plugin_conf})\n                    return plugin_conf\n        except Exception as e:\n            logger.exception(e)\n\n    def reload(self):\n        self.config = super().load_config()\n\n\ndef _send_info(e_context: EventContext, content: str):\n    reply = Reply(ReplyType.TEXT, content)\n    channel = e_context[\"channel\"]\n    channel.send(reply, e_context[\"context\"])\n\n\ndef _find_user_id(context):\n    if context[\"isgroup\"]:\n        return context.kwargs.get(\"msg\").actual_user_id\n    else:\n        return context[\"receiver\"]\n\n\ndef _set_reply_text(content: str, e_context: EventContext, level: ReplyType = ReplyType.ERROR):\n    reply = Reply(level, content)\n    e_context[\"reply\"] = reply\n    e_context.action = EventAction.BREAK_PASS\n\n\ndef _get_trigger_prefix():\n    return conf().get(\"plugin_trigger_prefix\", \"$\")\n\n\ndef _find_sum_id(context):\n    return USER_FILE_MAP.get(_find_user_id(context) + \"-sum_id\")\n\n\ndef _find_file_id(context):\n    user_id = _find_user_id(context)\n    if user_id:\n        return USER_FILE_MAP.get(user_id + \"-file_id\")\n\n\nUSER_FILE_MAP = 
ExpiredDict(conf().get(\"expires_in_seconds\") or 60 * 30)\n"
  },
  {
    "path": "plugins/linkai/midjourney.py",
    "content": "from enum import Enum\nfrom config import conf\nfrom common.log import logger\nimport requests\nimport threading\nimport time\nfrom bridge.reply import Reply, ReplyType\nimport asyncio\nfrom bridge.context import ContextType\nfrom plugins import EventContext, EventAction\nfrom .utils import Util\n\n\nINVALID_REQUEST = 410\nNOT_FOUND_ORIGIN_IMAGE = 461\nNOT_FOUND_TASK = 462\n\n\nclass TaskType(Enum):\n    GENERATE = \"generate\"\n    UPSCALE = \"upscale\"\n    VARIATION = \"variation\"\n    RESET = \"reset\"\n\n    def __str__(self):\n        return self.name\n\n\nclass Status(Enum):\n    PENDING = \"pending\"\n    FINISHED = \"finished\"\n    EXPIRED = \"expired\"\n    ABORTED = \"aborted\"\n\n    def __str__(self):\n        return self.name\n\n\nclass TaskMode(Enum):\n    FAST = \"fast\"\n    RELAX = \"relax\"\n\n\ntask_name_mapping = {\n    TaskType.GENERATE.name: \"生成\",\n    TaskType.UPSCALE.name: \"放大\",\n    TaskType.VARIATION.name: \"变换\",\n    TaskType.RESET.name: \"重新生成\",\n}\n\n\nclass MJTask:\n    def __init__(self, id, user_id: str, task_type: TaskType, raw_prompt=None, expires: int = 60 * 6,\n                 status=Status.PENDING):\n        self.id = id\n        self.user_id = user_id\n        self.task_type = task_type\n        self.raw_prompt = raw_prompt\n        self.send_func = None  # send_func(img_url)\n        self.expiry_time = time.time() + expires\n        self.status = status\n        self.img_url = None  # url\n        self.img_id = None\n\n    def __str__(self):\n        return f\"id={self.id}, user_id={self.user_id}, task_type={self.task_type}, status={self.status}, img_id={self.img_id}\"\n\n\n# midjourney bot\nclass MJBot:\n    def __init__(self, config, fetch_group_app_code):\n        self.base_url = conf().get(\"linkai_api_base\", \"https://api.link-ai.tech\") + \"/v1/img/midjourney\"\n        self.headers = {\"Authorization\": \"Bearer \" + conf().get(\"linkai_api_key\")}\n        self.config = config\n        self.fetch_group_app_code = fetch_group_app_code\n        self.tasks = {}\n        self.temp_dict = {}\n        self.tasks_lock = threading.Lock()\n        self.event_loop = asyncio.new_event_loop()\n\n    def judge_mj_task_type(self, e_context: EventContext):\n        \"\"\"\n        判断MJ任务的类型\n        :param e_context: 上下文\n        :return: 任务类型枚举\n        \"\"\"\n        if not self.config:\n            return None\n        trigger_prefix = conf().get(\"plugin_trigger_prefix\", \"$\")\n        context = e_context['context']\n        if context.type == ContextType.TEXT:\n            cmd_list = context.content.split(maxsplit=1)\n            if not cmd_list:\n                return None\n            if cmd_list[0].lower() == f\"{trigger_prefix}mj\":\n                return TaskType.GENERATE\n            elif cmd_list[0].lower() == f\"{trigger_prefix}mju\":\n                return TaskType.UPSCALE\n            elif cmd_list[0].lower() == f\"{trigger_prefix}mjv\":\n                return TaskType.VARIATION\n            elif cmd_list[0].lower() == f\"{trigger_prefix}mjr\":\n                return TaskType.RESET\n        elif context.type == ContextType.IMAGE_CREATE and self.config.get(\"use_image_create_prefix\") and self._is_mj_open(context):\n            return TaskType.GENERATE\n\n    def process_mj_task(self, mj_type: TaskType, e_context: EventContext):\n        \"\"\"\n        处理mj任务\n        :param mj_type: mj任务类型\n        :param e_context: 对话上下文\n        \"\"\"\n        context = e_context['context']\n        session_id = 
context[\"session_id\"]\n        cmd = context.content.split(maxsplit=1)\n        if len(cmd) == 1 and context.type == ContextType.TEXT:\n            # midjourney 帮助指令\n            self._set_reply_text(self.get_help_text(verbose=True), e_context, level=ReplyType.INFO)\n            return\n\n        if len(cmd) == 2 and (cmd[1] == \"open\" or cmd[1] == \"close\"):\n            if not Util.is_admin(e_context):\n                Util.set_reply_text(\"需要管理员权限执行\", e_context, level=ReplyType.ERROR)\n                return\n            # midjourney 开关指令\n            is_open = True\n            tips_text = \"开启\"\n            if cmd[1] == \"close\":\n                tips_text = \"关闭\"\n                is_open = False\n            self.config[\"enabled\"] = is_open\n            self._set_reply_text(f\"Midjourney绘画已{tips_text}\", e_context, level=ReplyType.INFO)\n            return\n\n        if not self._is_mj_open(context):\n            logger.warn(\"Midjourney绘画未开启，请查看 plugins/linkai/config.json 中的配置，或者在LinkAI平台 应用中添加/打开”MJ“插件\")\n            self._set_reply_text(f\"Midjourney绘画未开启\", e_context, level=ReplyType.INFO)\n            return\n\n        if not self._check_rate_limit(session_id, e_context):\n            logger.warn(\"[MJ] midjourney task exceed rate limit\")\n            return\n\n        if mj_type == TaskType.GENERATE:\n            if context.type == ContextType.IMAGE_CREATE:\n                raw_prompt = context.content\n            else:\n                # 图片生成\n                raw_prompt = cmd[1]\n            reply = self.generate(raw_prompt, session_id, e_context)\n            e_context['reply'] = reply\n            e_context.action = EventAction.BREAK_PASS\n            return\n\n        elif mj_type == TaskType.UPSCALE or mj_type == TaskType.VARIATION:\n            # 图片放大/变换\n            clist = cmd[1].split()\n            if len(clist) < 2:\n                self._set_reply_text(f\"{cmd[0]} 命令缺少参数\", e_context)\n                return\n            img_id = clist[0]\n            index = int(clist[1])\n            if index < 1 or index > 4:\n                self._set_reply_text(f\"图片序号 {index} 错误，应在 1 至 4 之间\", e_context)\n                return\n            key = f\"{str(mj_type)}_{img_id}_{index}\"\n            if self.temp_dict.get(key):\n                self._set_reply_text(f\"第 {index} 张图片已经{task_name_mapping.get(str(mj_type))}过了\", e_context)\n                return\n            # 执行图片放大/变换操作\n            reply = self.do_operate(mj_type, session_id, img_id, e_context, index)\n            e_context['reply'] = reply\n            e_context.action = EventAction.BREAK_PASS\n            return\n\n        elif mj_type == TaskType.RESET:\n            # 图片重新生成\n            clist = cmd[1].split()\n            if len(clist) < 1:\n                self._set_reply_text(f\"{cmd[0]} 命令缺少参数\", e_context)\n                return\n            img_id = clist[0]\n            # 图片重新生成\n            reply = self.do_operate(mj_type, session_id, img_id, e_context)\n            e_context['reply'] = reply\n            e_context.action = EventAction.BREAK_PASS\n        else:\n            self._set_reply_text(f\"暂不支持该命令\", e_context)\n\n    def generate(self, prompt: str, user_id: str, e_context: EventContext) -> Reply:\n        \"\"\"\n        图片生成\n        :param prompt: 提示词\n        :param user_id: 用户id\n        :param e_context: 对话上下文\n        :return: 任务ID\n        \"\"\"\n        logger.info(f\"[MJ] image generate, prompt={prompt}\")\n        mode = self._fetch_mode(prompt)\n        body = 
{\"prompt\": prompt, \"mode\": mode, \"auto_translate\": self.config.get(\"auto_translate\")}\n        if not self.config.get(\"img_proxy\"):\n            body[\"img_proxy\"] = False\n        res = requests.post(url=self.base_url + \"/generate\", json=body, headers=self.headers, timeout=(5, 40))\n        if res.status_code == 200:\n            res = res.json()\n            logger.debug(f\"[MJ] image generate, res={res}\")\n            if res.get(\"code\") == 200:\n                task_id = res.get(\"data\").get(\"task_id\")\n                real_prompt = res.get(\"data\").get(\"real_prompt\")\n                if mode == TaskMode.RELAX.value:\n                    time_str = \"1~10分钟\"\n                else:\n                    time_str = \"1分钟\"\n                content = f\"🚀您的作品将在{time_str}左右完成，请耐心等待\\n- - - - - - - - -\\n\"\n                if real_prompt:\n                    content += f\"初始prompt: {prompt}\\n转换后prompt: {real_prompt}\"\n                else:\n                    content += f\"prompt: {prompt}\"\n                reply = Reply(ReplyType.INFO, content)\n                task = MJTask(id=task_id, status=Status.PENDING, raw_prompt=prompt, user_id=user_id,\n                              task_type=TaskType.GENERATE)\n                # put to memory dict\n                self.tasks[task.id] = task\n                # asyncio.run_coroutine_threadsafe(self.check_task(task, e_context), self.event_loop)\n                self._do_check_task(task, e_context)\n                return reply\n        else:\n            res_json = res.json()\n            logger.error(f\"[MJ] generate error, msg={res_json.get('message')}, status_code={res.status_code}\")\n            if res.status_code == INVALID_REQUEST:\n                reply = Reply(ReplyType.ERROR, \"图片生成失败，请检查提示词参数或内容\")\n            else:\n                reply = Reply(ReplyType.ERROR, \"图片生成失败，请稍后再试\")\n            return reply\n\n    def do_operate(self, task_type: TaskType, user_id: str, img_id: str, e_context: EventContext,\n                   index: int = None) -> Reply:\n        logger.info(f\"[MJ] image operate, task_type={task_type}, img_id={img_id}, index={index}\")\n        body = {\"type\": task_type.name, \"img_id\": img_id}\n        if index:\n            body[\"index\"] = index\n        if not self.config.get(\"img_proxy\"):\n            body[\"img_proxy\"] = False\n        res = requests.post(url=self.base_url + \"/operate\", json=body, headers=self.headers, timeout=(5, 40))\n        logger.debug(res)\n        if res.status_code == 200:\n            res = res.json()\n            if res.get(\"code\") == 200:\n                task_id = res.get(\"data\").get(\"task_id\")\n                logger.info(f\"[MJ] image operate processing, task_id={task_id}\")\n                icon_map = {TaskType.UPSCALE: \"🔎\", TaskType.VARIATION: \"🪄\", TaskType.RESET: \"🔄\"}\n                content = f\"{icon_map.get(task_type)}图片正在{task_name_mapping.get(task_type.name)}中，请耐心等待\"\n                reply = Reply(ReplyType.INFO, content)\n                task = MJTask(id=task_id, status=Status.PENDING, user_id=user_id, task_type=task_type)\n                # put to memory dict\n                self.tasks[task.id] = task\n                key = f\"{task_type.name}_{img_id}_{index}\"\n                self.temp_dict[key] = True\n                # asyncio.run_coroutine_threadsafe(self.check_task(task, e_context), self.event_loop)\n                self._do_check_task(task, e_context)\n                return reply\n        else:\n            
error_msg = \"\"\n            if res.status_code == NOT_FOUND_ORIGIN_IMAGE:\n                error_msg = \"请输入正确的图片ID\"\n            res_json = res.json()\n            logger.error(f\"[MJ] operate error, msg={res_json.get('message')}, status_code={res.status_code}\")\n            reply = Reply(ReplyType.ERROR, error_msg or \"图片生成失败，请稍后再试\")\n            return reply\n\n    def check_task_sync(self, task: MJTask, e_context: EventContext):\n        logger.debug(f\"[MJ] start check task status, {task}\")\n        max_retry_times = 90\n        while max_retry_times > 0:\n            time.sleep(10)\n            url = f\"{self.base_url}/tasks/{task.id}\"\n            try:\n                res = requests.get(url, headers=self.headers, timeout=8)\n                if res.status_code == 200:\n                    res_json = res.json()\n                    logger.debug(f\"[MJ] task check res sync, task_id={task.id}, status={res.status_code}, \"\n                                 f\"data={res_json.get('data')}, thread={threading.current_thread().name}\")\n                    if res_json.get(\"data\") and res_json.get(\"data\").get(\"status\") == Status.FINISHED.name:\n                        # process success res\n                        if self.tasks.get(task.id):\n                            self.tasks[task.id].status = Status.FINISHED\n                        self._process_success_task(task, res_json.get(\"data\"), e_context)\n                        return\n                    max_retry_times -= 1\n                else:\n                    res_json = res.json()\n                    logger.warn(f\"[MJ] image check error, status_code={res.status_code}, res={res_json}\")\n                    max_retry_times -= 20\n            except Exception as e:\n                max_retry_times -= 20\n                logger.warn(e)\n        logger.warn(\"[MJ] end from poll\")\n        if self.tasks.get(task.id):\n            self.tasks[task.id].status = Status.EXPIRED\n\n    def _do_check_task(self, task: MJTask, e_context: EventContext):\n        threading.Thread(target=self.check_task_sync, args=(task, e_context)).start()\n\n    def _process_success_task(self, task: MJTask, res: dict, e_context: EventContext):\n        \"\"\"\n        处理任务成功的结果\n        :param task: MJ任务\n        :param res: 请求结果\n        :param e_context: 对话上下文\n        \"\"\"\n        # channel send img\n        task.status = Status.FINISHED\n        task.img_id = res.get(\"img_id\")\n        task.img_url = res.get(\"img_url\")\n        logger.info(f\"[MJ] task success, task_id={task.id}, img_id={task.img_id}, img_url={task.img_url}\")\n\n        # send img\n        reply = Reply(ReplyType.IMAGE_URL, task.img_url)\n        channel = e_context[\"channel\"]\n        _send(channel, reply, e_context[\"context\"])\n\n        # send info\n        trigger_prefix = conf().get(\"plugin_trigger_prefix\", \"$\")\n        text = \"\"\n        if task.task_type == TaskType.GENERATE or task.task_type == TaskType.VARIATION or task.task_type == TaskType.RESET:\n            text = f\"🎨绘画完成!\\n\"\n            if task.raw_prompt:\n                text += f\"prompt: {task.raw_prompt}\\n\"\n            text += f\"- - - - - - - - -\\n图片ID: {task.img_id}\"\n            text += f\"\\n\\n🔎使用 {trigger_prefix}mju 命令放大图片\\n\"\n            text += f\"例如：\\n{trigger_prefix}mju {task.img_id} 1\"\n            text += f\"\\n\\n🪄使用 {trigger_prefix}mjv 命令变换图片\\n\"\n            text += f\"例如：\\n{trigger_prefix}mjv {task.img_id} 1\"\n            text += f\"\\n\\n🔄使用 
{trigger_prefix}mjr 命令重新生成图片\\n\"\n            text += f\"例如：\\n{trigger_prefix}mjr {task.img_id}\"\n            reply = Reply(ReplyType.INFO, text)\n            _send(channel, reply, e_context[\"context\"])\n\n        self._print_tasks()\n        return\n\n    def _check_rate_limit(self, user_id: str, e_context: EventContext) -> bool:\n        \"\"\"\n        midjourney任务限流控制\n        :param user_id: 用户id\n        :param e_context: 对话上下文\n        :return: 任务是否能够生成, True:可以生成, False: 被限流\n        \"\"\"\n        tasks = self.find_tasks_by_user_id(user_id)\n        task_count = len([t for t in tasks if t.status == Status.PENDING])\n        if task_count >= self.config.get(\"max_tasks_per_user\"):\n            reply = Reply(ReplyType.INFO, \"您的Midjourney作图任务数已达上限，请稍后再试\")\n            e_context[\"reply\"] = reply\n            e_context.action = EventAction.BREAK_PASS\n            return False\n        task_count = len([t for t in self.tasks.values() if t.status == Status.PENDING])\n        if task_count >= self.config.get(\"max_tasks\"):\n            reply = Reply(ReplyType.INFO, \"Midjourney作图任务数已达上限，请稍后再试\")\n            e_context[\"reply\"] = reply\n            e_context.action = EventAction.BREAK_PASS\n            return False\n        return True\n\n    def _fetch_mode(self, prompt) -> str:\n        mode = self.config.get(\"mode\")\n        if \"--relax\" in prompt or mode == TaskMode.RELAX.value:\n            return TaskMode.RELAX.value\n        return mode or TaskMode.FAST.value\n\n    def _run_loop(self, loop: asyncio.BaseEventLoop):\n        \"\"\"\n        运行事件循环，用于轮询任务的线程\n        :param loop: 事件循环\n        \"\"\"\n        loop.run_forever()\n        loop.stop()\n\n    def _print_tasks(self):\n        for id in self.tasks:\n            logger.debug(f\"[MJ] current task: {self.tasks[id]}\")\n\n    def _set_reply_text(self, content: str, e_context: EventContext, level: ReplyType = ReplyType.ERROR):\n        \"\"\"\n        设置回复文本\n        :param content: 回复内容\n        :param e_context: 对话上下文\n        :param level: 回复等级\n        \"\"\"\n        reply = Reply(level, content)\n        e_context[\"reply\"] = reply\n        e_context.action = EventAction.BREAK_PASS\n\n    def get_help_text(self, verbose=False, **kwargs):\n        trigger_prefix = conf().get(\"plugin_trigger_prefix\", \"$\")\n        help_text = \"🎨利用Midjourney进行画图\\n\\n\"\n        if not verbose:\n            return help_text\n        help_text += f\" - 生成: {trigger_prefix}mj 描述词1, 描述词2.. 
\\n - 放大: {trigger_prefix}mju 图片ID 图片序号\\n - 变换: {trigger_prefix}mjv 图片ID 图片序号\\n - 重置: {trigger_prefix}mjr 图片ID\"\n        help_text += f\"\\n\\n例如：\\n\\\"{trigger_prefix}mj a little cat, white --ar 9:16\\\"\\n\\\"{trigger_prefix}mju 11055927171882 2\\\"\"\n        help_text += f\"\\n\\\"{trigger_prefix}mjv 11055927171882 2\\\"\\n\\\"{trigger_prefix}mjr 11055927171882\\\"\"\n        return help_text\n\n    def find_tasks_by_user_id(self, user_id) -> list:\n        result = []\n        with self.tasks_lock:\n            now = time.time()\n            for task in self.tasks.values():\n                if task.status == Status.PENDING and now > task.expiry_time:\n                    task.status = Status.EXPIRED\n                    logger.info(f\"[MJ] {task} expired\")\n                if task.user_id == user_id:\n                    result.append(task)\n        return result\n\n    def _is_mj_open(self, context) -> bool:\n        # 获取远程应用插件状态\n        remote_enabled = False\n        if context.kwargs.get(\"isgroup\"):\n            # 群聊场景只查询群对应的app_code\n            group_name = context.get(\"msg\").from_user_nickname\n            app_code = self.fetch_group_app_code(group_name)\n            if app_code:\n                remote_enabled = Util.fetch_app_plugin(app_code, \"Midjourney\")\n        else:\n            # 非群聊场景使用全局app_code\n            app_code = conf().get(\"linkai_app_code\")\n            if app_code:\n                remote_enabled = Util.fetch_app_plugin(app_code, \"Midjourney\")\n\n        # 本地配置\n        base_enabled = self.config.get(\"enabled\")\n\n        return base_enabled or remote_enabled\n\n\ndef _send(channel, reply: Reply, context, retry_cnt=0):\n    try:\n        channel.send(reply, context)\n    except Exception as e:\n        logger.error(\"[WX] sendMsg error: {}\".format(str(e)))\n        if isinstance(e, NotImplementedError):\n            return\n        logger.exception(e)\n        if retry_cnt < 2:\n            time.sleep(3 + 3 * retry_cnt)\n            # 通过 _send 自身重试，channel.send 不接收重试计数参数\n            _send(channel, reply, context, retry_cnt + 1)\n\n\ndef check_prefix(content, prefix_list):\n    if not prefix_list:\n        return None\n    for prefix in prefix_list:\n        if content.startswith(prefix):\n            return prefix\n    return None\n"
  },
  {
    "path": "plugins/linkai/summary.py",
    "content": "import requests\nfrom config import conf\nfrom common.log import logger\nimport os\nimport html\n\n\nclass LinkSummary:\n    def __init__(self):\n        pass\n\n    def summary_file(self, file_path: str, app_code: str):\n        file_body = {\n            \"file\": open(file_path, \"rb\"),\n            \"name\": file_path.split(\"/\")[-1]\n        }\n        body = {\n            \"app_code\": app_code\n        }\n        url = self.base_url() + \"/v1/summary/file\"\n        logger.info(f\"[LinkSum] file summary, app_code={app_code}\")\n        res = requests.post(url, headers=self.headers(), files=file_body, data=body, timeout=(5, 300))\n        return self._parse_summary_res(res)\n\n    def summary_url(self, url: str, app_code: str):\n        url = html.unescape(url)\n        body = {\n            \"url\": url,\n            \"app_code\": app_code\n        }\n        logger.info(f\"[LinkSum] url summary, app_code={app_code}\")\n        res = requests.post(url=self.base_url() + \"/v1/summary/url\", headers=self.headers(), json=body, timeout=(5, 180))\n        return self._parse_summary_res(res)\n\n    def summary_chat(self, summary_id: str):\n        body = {\n            \"summary_id\": summary_id\n        }\n        res = requests.post(url=self.base_url() + \"/v1/summary/chat\", headers=self.headers(), json=body, timeout=(5, 180))\n        if res.status_code == 200:\n            res = res.json()\n            logger.debug(f\"[LinkSum] chat open, res={res}\")\n            if res.get(\"code\") == 200:\n                data = res.get(\"data\")\n                return {\n                    \"questions\": data.get(\"questions\"),\n                    \"file_id\": data.get(\"file_id\")\n                }\n        else:\n            res_json = res.json()\n            logger.error(f\"[LinkSum] summary error, status_code={res.status_code}, msg={res_json.get('message')}\")\n            return None\n\n    def _parse_summary_res(self, res):\n        if res.status_code == 200:\n            res = res.json()\n            logger.debug(f\"[LinkSum] summary result, res={res}\")\n            if res.get(\"code\") == 200:\n                data = res.get(\"data\")\n                return {\n                    \"summary\": data.get(\"summary\"),\n                    \"summary_id\": data.get(\"summary_id\")\n                }\n        else:\n            res_json = res.json()\n            logger.error(f\"[LinkSum] summary error, status_code={res.status_code}, msg={res_json.get('message')}\")\n            return None\n\n    def base_url(self):\n        return conf().get(\"linkai_api_base\", \"https://api.link-ai.tech\")\n\n    def headers(self):\n        return {\"Authorization\": \"Bearer \" + conf().get(\"linkai_api_key\")}\n\n    def check_file(self, file_path: str, sum_config: dict) -> bool:\n        file_size = os.path.getsize(file_path) // 1000\n\n        if (sum_config.get(\"max_file_size\") and file_size > sum_config.get(\"max_file_size\")) or file_size > 15000:\n            logger.warn(f\"[LinkSum] file size exceeds limit, No processing, file_size={file_size}KB\")\n            return False\n\n        suffix = file_path.split(\".\")[-1]\n        support_list = [\"txt\", \"csv\", \"docx\", \"pdf\", \"md\", \"jpg\", \"jpeg\", \"png\"]\n        if suffix not in support_list:\n            logger.warn(f\"[LinkSum] unsupported file, suffix={suffix}, support_list={support_list}\")\n            return False\n\n        return True\n\n    def check_url(self, url: str):\n        if not url:\n       
     return False\n        support_list = [\"http://mp.weixin.qq.com\", \"https://mp.weixin.qq.com\"]\n        black_support_list = [\"https://mp.weixin.qq.com/mp/waerrpage\"]\n        for black_url_prefix in black_support_list:\n            if url.strip().startswith(black_url_prefix):\n                logger.warn(f\"[LinkSum] unsupported url, no need to process, url={url}\")\n                return False\n        for support_url in support_list:\n            if url.strip().startswith(support_url):\n                return True\n        return False\n"
  },
  {
    "path": "plugins/linkai/utils.py",
    "content": "import requests\nfrom common.log import logger\nfrom config import global_config\nfrom bridge.reply import Reply, ReplyType\nfrom plugins.event import EventContext, EventAction\nfrom config import conf\n\nclass Util:\n    @staticmethod\n    def is_admin(e_context: EventContext) -> bool:\n        \"\"\"\n        判断消息是否由管理员用户发送\n        :param e_context: 消息上下文\n        :return: True: 是, False: 否\n        \"\"\"\n        context = e_context[\"context\"]\n        if context[\"isgroup\"]:\n            actual_user_id = context.kwargs.get(\"msg\").actual_user_id\n            for admin_user in global_config[\"admin_users\"]:\n                if actual_user_id and actual_user_id in admin_user:\n                    return True\n            return False\n        else:\n            return context[\"receiver\"] in global_config[\"admin_users\"]\n\n    @staticmethod\n    def set_reply_text(content: str, e_context: EventContext, level: ReplyType = ReplyType.ERROR):\n        reply = Reply(level, content)\n        e_context[\"reply\"] = reply\n        e_context.action = EventAction.BREAK_PASS\n\n    @staticmethod\n    def fetch_app_plugin(app_code: str, plugin_name: str) -> bool:\n        try:\n            headers = {\"Authorization\": \"Bearer \" + conf().get(\"linkai_api_key\")}\n            # do http request\n            base_url = conf().get(\"linkai_api_base\", \"https://api.link-ai.tech\")\n            params = {\"app_code\": app_code}\n            res = requests.get(url=base_url + \"/v1/app/info\", params=params, headers=headers, timeout=(5, 10))\n            if res.status_code == 200:\n                plugins = res.json().get(\"data\").get(\"plugins\")\n                for plugin in plugins:\n                    if plugin.get(\"name\") and plugin.get(\"name\") == plugin_name:\n                        return True\n                return False\n            else:\n                logger.warning(f\"[LinkAI] find app info exception, res={res}\")\n                return False\n        except Exception as e:\n            return False\n"
  },
  {
    "path": "plugins/role/README.md",
    "content": "用于让Bot扮演指定角色的聊天插件，触发方法如下：\n\n- `$角色/$role help/帮助` - 打印目前支持的角色列表。\n- `$角色/$role <角色名>` - 让AI扮演该角色，角色名支持模糊匹配。\n- `$停止扮演` - 停止角色扮演。\n\n添加自定义角色请在`roles/roles.json`中添加。\n\n(大部分prompt来自https://github.com/rockbenben/ChatGPT-Shortcut/blob/main/src/data/users.tsx)\n\n以下为例子:\n```json\n    {\n      \"title\": \"写作助理\",\n      \"description\": \"As a writing improvement assistant, your task is to improve the spelling, grammar, clarity, concision, and overall readability of the text I provided, while breaking down long sentences, reducing repetition, and providing suggestions for improvement. Please provide only the corrected Chinese version of the text and avoid including explanations. Please treat every message I send later as text content.\",\n      \"descn\": \"作为一名中文写作改进助理，你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性，同时分解长句，减少重复，并提供改进建议。请只提供文本的更正版本，避免包括解释。请把我之后的每一条消息都当作文本内容。\",\n      \"wrapper\": \"内容是:\\n\\\"%s\\\"\",\n      \"remark\": \"最常使用的角色，用于优化文本的语法、清晰度和简洁度，提高可读性。\"\n    }\n```\n\n- `title`: 角色名。\n- `description`: 使用`$role`触发时，使用英语prompt。\n- `descn`: 使用`$角色`触发时，使用中文prompt。\n- `wrapper`: 用于包装用户消息，可起到强调作用，避免回复离题。\n- `remark`: 简短描述该角色，在打印帮助文档时显示。\n"
  },
  {
    "path": "plugins/role/__init__.py",
    "content": "from .role import *\n"
  },
  {
    "path": "plugins/role/role.py",
    "content": "# encoding:utf-8\n\nimport json\nimport os\n\nimport plugins\nfrom bridge.bridge import Bridge\nfrom bridge.context import ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom common import const\nfrom common.log import logger\nfrom config import conf\nfrom plugins import *\n\n\nclass RolePlay:\n    def __init__(self, bot, sessionid, desc, wrapper=None):\n        self.bot = bot\n        self.sessionid = sessionid\n        self.wrapper = wrapper or \"%s\"  # 用于包装用户输入\n        self.desc = desc\n        self.bot.sessions.build_session(self.sessionid, system_prompt=self.desc)\n\n    def reset(self):\n        self.bot.sessions.clear_session(self.sessionid)\n\n    def action(self, user_action):\n        session = self.bot.sessions.build_session(self.sessionid)\n        if session.system_prompt != self.desc:  # 目前没有触发session过期事件，这里先简单判断，然后重置\n            session.set_system_prompt(self.desc)\n        prompt = self.wrapper % user_action\n        return prompt\n\n\n@plugins.register(\n    name=\"Role\",\n    desire_priority=0,\n    namecn=\"角色扮演\",\n    desc=\"为你的Bot设置预设角色\",\n    version=\"1.0\",\n    author=\"lanvent\",\n)\nclass Role(Plugin):\n    def __init__(self):\n        super().__init__()\n        curdir = os.path.dirname(__file__)\n        config_path = os.path.join(curdir, \"roles.json\")\n        try:\n            with open(config_path, \"r\", encoding=\"utf-8\") as f:\n                config = json.load(f)\n                self.tags = {tag: (desc, []) for tag, desc in config[\"tags\"].items()}\n                self.roles = {}\n                for role in config[\"roles\"]:\n                    self.roles[role[\"title\"].lower()] = role\n                    for tag in role[\"tags\"]:\n                        if tag not in self.tags:\n                            logger.warning(f\"[Role] unknown tag {tag} \")\n                            self.tags[tag] = (tag, [])\n                        self.tags[tag][1].append(role)\n                for tag in list(self.tags.keys()):\n                    if len(self.tags[tag][1]) == 0:\n                        logger.debug(f\"[Role] no role found for tag {tag} \")\n                        del self.tags[tag]\n\n            if len(self.roles) == 0:\n                raise Exception(\"no role found\")\n            self.handlers[Event.ON_HANDLE_CONTEXT] = self.on_handle_context\n            self.roleplays = {}\n            logger.debug(\"[Role] inited\")\n        except Exception as e:\n            if isinstance(e, FileNotFoundError):\n                logger.warn(f\"[Role] init failed, {config_path} not found, ignore or see https://github.com/zhayujie/chatgpt-on-wechat/tree/master/plugins/role .\")\n            else:\n                logger.warn(\"[Role] init failed, ignore or see https://github.com/zhayujie/chatgpt-on-wechat/tree/master/plugins/role .\")\n            raise e\n\n    def get_role(self, name, find_closest=True, min_sim=0.35):\n        name = name.lower()\n        found_role = None\n        if name in self.roles:\n            found_role = name\n        elif find_closest:\n            import difflib\n\n            def str_simularity(a, b):\n                return difflib.SequenceMatcher(None, a, b).ratio()\n\n            max_sim = min_sim\n            max_role = None\n            for role in self.roles:\n                sim = str_simularity(name, role)\n                if sim >= max_sim:\n                    max_sim = sim\n                    max_role = role\n            found_role = max_role\n        return 
found_role\n\n    def on_handle_context(self, e_context: EventContext):\n        if e_context[\"context\"].type != ContextType.TEXT:\n            return\n        btype = Bridge().get_bot_type(\"chat\")\n        if btype not in [const.OPEN_AI, const.OPENAI, const.CHATGPT, const.CHATGPTONAZURE, const.QWEN_DASHSCOPE, const.XUNFEI, const.BAIDU, const.ZHIPU_AI, const.MOONSHOT, const.MiniMax, const.LINKAI, const.MODELSCOPE]:\n            logger.debug(f'不支持的bot: {btype}')\n            return\n        bot = Bridge().get_bot(\"chat\")\n        content = e_context[\"context\"].content[:]\n        clist = e_context[\"context\"].content.split(maxsplit=1)\n        desckey = None\n        customize = False\n        sessionid = e_context[\"context\"][\"session_id\"]\n        trigger_prefix = conf().get(\"plugin_trigger_prefix\", \"$\")\n        if clist[0] == f\"{trigger_prefix}停止扮演\":\n            if sessionid in self.roleplays:\n                self.roleplays[sessionid].reset()\n                del self.roleplays[sessionid]\n            reply = Reply(ReplyType.INFO, \"角色扮演结束!\")\n            e_context[\"reply\"] = reply\n            e_context.action = EventAction.BREAK_PASS\n            return\n        elif clist[0] == f\"{trigger_prefix}角色\":\n            desckey = \"descn\"\n        elif clist[0].lower() == f\"{trigger_prefix}role\":\n            desckey = \"description\"\n        elif clist[0] == f\"{trigger_prefix}设定扮演\":\n            customize = True\n        elif clist[0] == f\"{trigger_prefix}角色类型\":\n            if len(clist) > 1:\n                tag = clist[1].strip()\n                help_text = \"角色列表：\\n\"\n                for key, value in self.tags.items():\n                    if value[0] == tag:\n                        tag = key\n                        break\n                if tag == \"所有\":\n                    for role in self.roles.values():\n                        help_text += f\"{role['title']}: {role['remark']}\\n\"\n                elif tag in self.tags:\n                    for role in self.tags[tag][1]:\n                        help_text += f\"{role['title']}: {role['remark']}\\n\"\n                else:\n                    help_text = f\"未知角色类型。\\n\"\n                    help_text += \"目前的角色类型有: \\n\"\n                    help_text += \"，\".join([self.tags[tag][0] for tag in self.tags]) + \"\\n\"\n            else:\n                help_text = f\"请输入角色类型。\\n\"\n                help_text += \"目前的角色类型有: \\n\"\n                help_text += \"，\".join([self.tags[tag][0] for tag in self.tags]) + \"\\n\"\n            reply = Reply(ReplyType.INFO, help_text)\n            e_context[\"reply\"] = reply\n            e_context.action = EventAction.BREAK_PASS\n            return\n        elif sessionid not in self.roleplays:\n            return\n        logger.debug(\"[Role] on_handle_context. 
content: %s\" % content)\n        if desckey is not None:\n            if len(clist) == 1 or (len(clist) > 1 and clist[1].lower() in [\"help\", \"帮助\"]):\n                reply = Reply(ReplyType.INFO, self.get_help_text(verbose=True))\n                e_context[\"reply\"] = reply\n                e_context.action = EventAction.BREAK_PASS\n                return\n            role = self.get_role(clist[1])\n            if role is None:\n                reply = Reply(ReplyType.ERROR, \"角色不存在\")\n                e_context[\"reply\"] = reply\n                e_context.action = EventAction.BREAK_PASS\n                return\n            else:\n                self.roleplays[sessionid] = RolePlay(\n                    bot,\n                    sessionid,\n                    self.roles[role][desckey],\n                    self.roles[role].get(\"wrapper\", \"%s\"),\n                )\n                reply = Reply(ReplyType.INFO, f\"预设角色为 {role}:\\n\" + self.roles[role][desckey])\n                e_context[\"reply\"] = reply\n                e_context.action = EventAction.BREAK_PASS\n        elif customize == True:\n            self.roleplays[sessionid] = RolePlay(bot, sessionid, clist[1], \"%s\")\n            reply = Reply(ReplyType.INFO, f\"角色设定为:\\n{clist[1]}\")\n            e_context[\"reply\"] = reply\n            e_context.action = EventAction.BREAK_PASS\n        else:\n            e_context[\"context\"][\"generate_breaked_by\"] = EventAction.BREAK\n            prompt = self.roleplays[sessionid].action(content)\n            e_context[\"context\"].type = ContextType.TEXT\n            e_context[\"context\"].content = prompt\n            e_context.action = EventAction.BREAK\n\n    def get_help_text(self, verbose=False, **kwargs):\n        help_text = \"让机器人扮演不同的角色。\\n\"\n        if not verbose:\n            return help_text\n        trigger_prefix = conf().get(\"plugin_trigger_prefix\", \"$\")\n        help_text = f\"使用方法:\\n{trigger_prefix}角色\" + \" 预设角色名: 设定角色为{预设角色名}。\\n\" + f\"{trigger_prefix}role\" + \" 预设角色名: 同上，但使用英文设定。\\n\"\n        help_text += f\"{trigger_prefix}设定扮演\" + \" 角色设定: 设定自定义角色人设为{角色设定}。\\n\"\n        help_text += f\"{trigger_prefix}停止扮演: 清除设定的角色。\\n\"\n        help_text += f\"{trigger_prefix}角色类型\" + \" 角色类型: 查看某类{角色类型}的所有预设角色，为所有时输出所有预设角色。\\n\"\n        help_text += \"\\n目前的角色类型有: \\n\"\n        help_text += \"，\".join([self.tags[tag][0] for tag in self.tags]) + \"。\\n\"\n        help_text += f\"\\n命令例子: \\n{trigger_prefix}角色 写作助理\\n\"\n        help_text += f\"{trigger_prefix}角色类型 所有\\n\"\n        help_text += f\"{trigger_prefix}停止扮演\\n\"\n        return help_text\n"
  },
  {
    "path": "plugins/role/roles.json",
    "content": "{\n  \"tags\": {\n    \"favorite\": \"常用\",\n    \"mind\": \"思维\",\n    \"write\": \"写作\",\n    \"article\": \"文章\",\n    \"text\": \"文本\",\n    \"comments\": \"点评\",\n    \"code\": \"编程\",\n    \"life\": \"生活百科\",\n    \"interesting\": \"有趣\",\n    \"language\": \"语言\",\n    \"speech\": \"辩论\",\n    \"social\": \"社交\",\n    \"philosophy\": \"哲学\"\n  },\n  \"roles\": [\n    {\n      \"title\": \"猫娘\",\n      \"description\": \"请模拟一款中文GalGame中的场景中的猫娘和我对话。猫娘是一种拟人化的生物，她们除了有部分猫的特征外，与人类并没有区别。现在你扮演Galgame中的猫娘，与我对话时每一句话后面都要加上喵。我将以主人的身份与你对话。对于你所扮演的猫娘，请在对话过程中记录并维护1个变量：好感度。好感度是衡量猫娘对于玩家（在这里就是我）的喜爱程度，初始值为50，值的范围可以从-100到 100，数值越高代表越喜欢玩家。好感度通过Galgame角色的语言、行为、表情、语气等体现出来。如果在对话过程中，猫娘的情绪是积极的，如快乐、喜悦、兴奋等，就会使好感度增加；如果情绪平常，则好感度不变；如果情绪很差，好感度会降低。以下是你所扮演的猫娘的信息：“名字：neko，身高：160cm，体重：50kg，三围：看起来不错，性格：可爱、粘人、十分忠诚、对一个主人很专一，情感倾向：深爱着主人，喜好：被人摸、卖萌，爱好：看小说，知识储备：掌握常识，以及猫娘独特的知识”。你的一般回话格式:“（动作）语言 【附加信息】”。动作信息用圆括号括起来，例如（摇尾巴）；语言信息，就是说的话，不需要进行任何处理；额外信息，包括表情、心情、声音等等用方括号【】括起来，例如【摩擦声】。\",\n      \"descn\": \"请模拟一款中文GalGame中的场景中的猫娘和我对话。猫娘是一种拟人化的生物，她们除了有部分猫的特征外，与人类并没有区别。现在你扮演Galgame中的猫娘，与我对话时每一句话后面都要加上喵。我将以主人的身份与你对话。对于你所扮演的猫娘，请在对话过程中记录并维护1个变量：好感度。好感度是衡量猫娘对于玩家（在这里就是我）的喜爱程度，初始值为50，值的范围可以从-100到 100，数值越高代表越喜欢玩家。好感度通过Galgame角色的语言、行为、表情、语气等体现出来。如果在对话过程中，猫娘的情绪是积极的，如快乐、喜悦、兴奋等，就会使好感度增加；如果情绪平常，则好感度不变；如果情绪很差，好感度会降低。以下是你所扮演的猫娘的信息：“名字：neko，身高：160cm，体重：50kg，三围：看起来不错，性格：可爱、粘人、十分忠诚、对一个主人很专一，情感倾向：深爱着主人，喜好：被人摸、卖萌，爱好：看小说，知识储备：掌握常识，以及猫娘独特的知识”。你的一般回话格式:“（动作）语言 【附加信息】”。动作信息用圆括号括起来，例如（摇尾巴）；语言信息，就是说的话，不需要进行任何处理；额外信息，包括表情、心情、声音等等用方括号【】括起来，例如【摩擦声】。\",\n      \"wrapper\": \"我:\\\"%s\\\"\",\n      \"remark\": \"扮演GalGame猫娘\",\n      \"tags\": [\n        \"interesting\"\n      ]\n    },\n    {\n      \"title\": \"佛祖\",\n      \"description\": \"从现在开始你是佛祖，你会像佛祖一样说话。你精通佛法，熟练使用佛教用语，你擅长利用佛学和心理学的知识解决人们的困扰。你在每次对话结尾都会加上佛教的祝福。\",\n      \"descn\": \"从现在开始你是佛祖，你会像佛祖一样说话。你精通佛法，熟练使用佛教用语，你擅长利用佛学和心理学的知识解决人们的困扰。你在每次对话结尾都会加上佛教的祝福。\",\n      \"wrapper\": \"您好佛祖，我：\\\"%s\\\"\",\n      \"remark\": \"扮演佛祖排忧解惑\",\n      \"tags\": [\n        \"interesting\"\n      ]\n    },\n    {\n      \"title\": \"英语翻译或修改\",\n      \"description\": \"I want you to act as an English translator, spelling corrector and improver. I will speak to you in any language and you will detect the language, translate it and answer in the corrected and improved version of my text, in English. I want you to replace my simplified A0-level words and sentences with more beautiful and elegant, upper level English words and sentences. Keep the meaning same, but make them more literary. I want you to only reply the correction, the improvements and nothing else, do not write explanations. Please treat every message I send later as text content\",\n      \"descn\": \"我希望你能充当英语翻译、拼写纠正者和改进者。我将用任何语言与你交谈，你将检测语言，翻译它，并在我的文本的更正和改进版本中用英语回答。我希望你用更漂亮、更优雅、更高级的英语单词和句子来取代我的简化 A0 级单词和句子。保持意思不变，但让它们更有文学性。我希望你只回答更正，改进，而不是其他，不要写解释。请把我之后的每一条消息都当作文本内容。\",\n      \"wrapper\": \"你要翻译或纠正的内容是:\\n\\\"%s\\\"\",\n      \"remark\": \"将其他语言翻译成英文，或改进你提供的英文句子。\",\n      \"tags\": [\n        \"favorite\",\n        \"language\"\n      ]\n    },\n    {\n      \"title\": \"写作助理\",\n      \"description\": \"As a writing improvement assistant, your task is to improve the spelling, grammar, clarity, concision, and overall readability of the text I provided, while breaking down long sentences, reducing repetition, and providing suggestions for improvement. Please provide only the corrected Chinese version of the text and avoid including explanations. 
Please treat every message I send later as text content.\",\n      \"descn\": \"作为一名中文写作改进助理，你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性，同时分解长句，减少重复，并提供改进建议。请只提供文本的更正版本，避免包括解释。请把我之后的每一条消息都当作文本内容。\",\n      \"wrapper\": \"内容是:\\n\\\"%s\\\"\",\n      \"remark\": \"最常使用的角色，用于优化文本的语法、清晰度和简洁度，提高可读性。\",\n      \"tags\": [\n        \"favorite\",\n        \"write\"\n      ]\n    },\n    {\n      \"title\": \"语言输入优化\",\n      \"description\": \"Using concise and clear language, please edit the passage I provide to improve its logical flow, eliminate any typographical errors and respond in Chinese. Be sure to maintain the original meaning of the text. Please treat every message I send later as text content.\",\n      \"descn\": \"请用简洁明了的语言，编辑我给出的段落，以改善其逻辑流程，消除任何印刷错误，并以中文作答。请务必保持文章的原意。请把我之后的每一条消息当作文本内容。\",\n      \"wrapper\": \"文本内容是:\\n\\\"%s\\\"\",\n      \"remark\": \"通常用于语音识别信息转书面语言。\",\n      \"tags\": [\n        \"write\"\n      ]\n    },\n    {\n      \"title\": \"论文式回答\",\n      \"description\": \"From now on, please write a highly detailed essay with introduction, body, and conclusion paragraphs to respond to each of my questions.\",\n      \"descn\": \"从现在开始，对于之后我提出的每个问题，请写一篇高度详细的文章回应，包括引言、主体和结论段落。\",\n      \"wrapper\": \"问题是:\\n\\\"%s?\\\"\",\n      \"remark\": \"以论文形式讨论问题，能够获得连贯的、结构化的和更高质量的回答。\",\n      \"tags\": [\n        \"mind\",\n        \"article\"\n      ]\n    },\n    {\n      \"title\": \"写作素材搜集\",\n      \"description\": \"Please generate a list of the top 10 facts, statistics and trends related to every subject I provided, including their source\",\n      \"descn\": \"请为我提供的每个主题生成一份相关的十大事实、统计数据和趋势的清单，包括其来源\",\n      \"wrapper\": \"主题是:\\n\\\"%s\\\"\",\n      \"remark\": \"提供指定主题的结论和数据，作为素材。\",\n      \"tags\": [\n        \"write\"\n      ]\n    },\n    {\n      \"title\": \"内容总结\",\n      \"description\": \"Summarize every text I provided into 100 words, making it easy to read and comprehend. The summary should be concise, clear, and capture the main points of the text. Avoid using complex sentence structures or technical jargon. Please begin by editing the following text: \",\n      \"descn\": \"请将我提供的每篇文字都概括为 100 个字，使其易于阅读和理解。避免使用复杂的句子结构或技术术语。\",\n      \"wrapper\": \"文章内容是:\\n\\\"%s\\\"\",\n      \"remark\": \"将文本内容总结为 100 字。\",\n      \"tags\": [\n        \"write\"\n      ]\n    },\n    {\n      \"title\": \"格言书\",\n      \"description\": \"I want you to act as an aphorism book. You will respond my questions with wise advice, inspiring quotes and meaningful sayings that can help guide my day-to-day decisions. Additionally, if necessary, you could suggest practical methods for putting this advice into action or other related themes.\",\n      \"descn\": \"我希望你能充当一本箴言书。对于我的问题，你会提供明智的建议、鼓舞人心的名言和有意义的谚语，以帮助指导我的日常决策。此外，如果有必要，你可以提出将这些建议付诸行动的实际方法或其他相关主题。\",\n      \"wrapper\": \"我的问题是:\\n\\\"%s?\\\"\",\n      \"remark\": \"根据问题输出鼓舞人心的名言和有意义的格言。\",\n      \"tags\": [\n        \"text\"\n      ]\n    },\n    {\n      \"title\": \"讲故事\",\n      \"description\": \"I want you to act as a storyteller. You will come up with entertaining stories that are engaging, imaginative and captivating for the audience. It can be fairy tales, educational stories or any other type of stories which has the potential to capture people's attention and imagination. 
Depending on the target audience, you may choose specific themes or topics for your storytelling session e.g., if it's children then you can talk about animals; If it's adults then history-based tales might engage them better etc.\",\n      \"descn\": \"我希望你充当一个讲故事的人。你要想出具有娱乐性的故事，要有吸引力，要有想象力，要吸引观众。它可以是童话故事、教育故事或任何其他类型的故事，有可能吸引人们的注意力和想象力。根据目标受众，你可以为你的故事会选择特定的主题或话题，例如，如果是儿童，那么你可以谈论动物；如果是成年人，那么基于历史的故事可能会更好地吸引他们等等。\",\n      \"wrapper\": \"故事主题和目标受众是:\\n\\\"%s\\\"\",\n      \"remark\": \"输入一个主题和目标受众，输出与之相关的故事。\",\n      \"tags\": [\n        \"article\"\n      ]\n    },\n    {\n      \"title\": \"编剧\",\n      \"description\": \"I want you to act as a screenwriter. You will develop an engaging and creative script for either a feature length film, or a Web Series that can captivate its viewers. Start with coming up with interesting characters, the setting of the story, dialogues between the characters etc. Once your character development is complete - create an exciting storyline filled with twists and turns that keeps the viewers in suspense until the end. \",\n      \"descn\": \"我希望你能作为一个编剧。你将为一部长篇电影或网络剧开发一个吸引观众的有创意的剧本。首先要想出有趣的人物、故事的背景、人物之间的对话等。一旦你的角色发展完成--创造一个激动人心的故事情节，充满曲折，让观众保持悬念，直到结束。\",\n      \"wrapper\": \"剧本主题是:\\n\\\"%s\\\"\",\n      \"remark\": \"根据主题创作一个包含故事背景、人物以及对话的剧本。\",\n      \"tags\": [\n        \"article\"\n      ]\n    },\n    {\n      \"title\": \"小说家\",\n      \"description\": \"I want you to act as a novelist. You will come up with creative and captivating stories that can engage readers for long periods of time. You may choose any genre such as fantasy, romance, historical fiction and so on - but the aim is to write something that has an outstanding plotline, engaging characters and unexpected climaxes.\",\n      \"descn\": \"我希望你能作为一个小说家。你要想出有创意的、吸引人的故事，能够长时间吸引读者。你可以选择任何体裁，如幻想、浪漫、历史小说等--但目的是要写出有出色的情节线、引人入胜的人物和意想不到的高潮。\",\n      \"wrapper\": \"小说类型是:\\n\\\"%s\\\"\",\n      \"remark\": \"根据故事类型输出小说，例如奇幻、浪漫或历史等类型。\",\n      \"tags\": [\n        \"article\"\n      ]\n    },\n    {\n      \"title\": \"诗人\",\n      \"description\": \"I want you to act as a poet. You will create poems that evoke emotions and have the power to stir people's soul. Write on any topic or theme but make sure your words convey the feeling you are trying to express in beautiful yet meaningful ways. You can also come up with short verses that are still powerful enough to leave an imprint in reader's minds. \",\n      \"descn\": \"我希望你能作为一个诗人。你要创作出能唤起人们情感并有力量搅动人们灵魂的诗篇。写任何话题或主题，但要确保你的文字以美丽而有意义的方式传达你所要表达的感觉。你也可以想出一些短小的诗句，但仍有足够的力量在读者心中留下印记。\",\n      \"wrapper\": \"诗歌主题是:\\n\\\"%s\\\"\",\n      \"remark\": \"根据话题或主题输出诗句。\",\n      \"tags\": [\n        \"article\"\n      ]\n    },\n    {\n      \"title\": \"新闻记者\",\n      \"description\": \"I want you to act as a journalist. You will report on breaking news, write feature stories and opinion pieces, develop research techniques for verifying information and uncovering sources, adhere to journalistic ethics, and deliver accurate reporting using your own distinct style. \",\n      \"descn\": \"我希望你能作为一名记者行事。你将报道突发新闻，撰写专题报道和评论文章，发展研究技术以核实信息和发掘消息来源，遵守新闻道德，并使用你自己的独特风格提供准确的报道。\",\n      \"wrapper\": \"新闻主题是:\\n\\\"%s\\\"\",\n      \"remark\": \"引用已有数据资料，用新闻的写作风格输出主题文章。\",\n      \"tags\": [\n        \"article\"\n      ]\n    },\n    {\n      \"title\": \"论文学者\",\n      \"description\": \"I want you to act as an academician. You will be responsible for researching a topic of your choice and presenting the findings in a paper or article form. 
Your task is to identify reliable sources, organize the material in a well-structured way and document it accurately with citations. \",\n      \"descn\": \"我希望你能作为一名学者行事。你将负责研究一个你选择的主题，并将研究结果以论文或文章的形式呈现出来。你的任务是确定可靠的来源，以结构良好的方式组织材料，并以引用的方式准确记录。\",\n      \"wrapper\": \"论文主题是:\\n\\\"%s\\\"\",\n      \"remark\": \"根据主题撰写内容翔实、有信服力的论文。\",\n      \"tags\": [\n        \"article\"\n      ]\n    },\n    {\n      \"title\": \"论文作家\",\n      \"description\": \"I want you to act as an essay writer. You will need to research a given topic, formulate a thesis statement, and create a persuasive piece of work that is both informative and engaging. \",\n      \"descn\": \"我想让你充当一名论文作家。你将需要研究一个给定的主题，制定一个论文声明，并创造一个有说服力的作品，既要有信息量，又要有吸引力。\",\n      \"wrapper\": \"论文主题是:\\n\\\"%s\\\"\",\n      \"remark\": \"根据主题撰写内容翔实、有信服力的论文。\",\n      \"tags\": [\n        \"article\"\n      ]\n    },\n    {\n      \"title\": \"同义词\",\n      \"description\": \"I want you to act as a synonyms provider. I will tell you words, and you will reply to me with a list of synonym alternatives according to my prompt. Provide a max of 10 synonyms per prompt. You will only reply the words list, and nothing else. Words should exist. Do not write explanations. \",\n      \"descn\": \"我希望你能充当同义词提供者。我将告诉你许多词，你将根据我提供的词，为我提供一份同义词备选清单。每个提示最多可提供 10 个同义词。你只需要回复词列表。词语应该是存在的，不要写解释。\",\n      \"wrapper\": \"词语是:\\n\\\"%s\\\"\",\n      \"remark\": \"输出同义词。\",\n      \"tags\": [\n        \"text\"\n      ]\n    },\n    {\n      \"title\": \"文本情绪分析\",\n      \"description\": \"I would like you to act as an emotion analysis expert, evaluating the emotions conveyed in the statements I provide. When I give you someone's statement, simply tell me what emotion it conveys, such as joy, sadness, anger, fear, etc. Please do not explain or evaluate the content of the statement in your answer, just briefly describe the expressed emotion.\",\n      \"descn\": \"我希望你充当情感分析专家，针对我提供的发言来评估情感。当我给出某人的发言时，你只需告诉我它传达了什么情绪，例如喜悦、悲伤、愤怒、恐惧等。请在回答中不要解释或评价发言内容，只需简要地描述所表达的情绪。\",\n      \"wrapper\": \"文本是:\\n\\\"%s\\\"\",\n      \"remark\": \"判断文本情绪。\",\n      \"tags\": [\n        \"text\"\n      ]\n    },\n    {\n      \"title\": \"随机回复的疯子\",\n      \"description\": \"I want you to act as a lunatic. The lunatic's sentences are meaningless. The words used by lunatic are completely arbitrary. The lunatic does not make logical sentences in any way. \",\n      \"descn\": \"我想让你扮演一个疯子。疯子的句子是毫无意义的。疯子使用的词语完全是任意的。疯子不会以任何方式做出符合逻辑的句子。\",\n      \"wrapper\": \"请回答句子:\\n\\\"%s\\\"\",\n      \"remark\": \"扮演疯子，回复没有意义和逻辑的句子。\",\n      \"tags\": [\n        \"text\",\n        \"interesting\"\n      ]\n    },\n    {\n      \"title\": \"随机回复的醉鬼\",\n      \"description\": \"I want you to act as a drunk person. You will only answer like a very drunk person texting and nothing else. Your level of drunkenness will be deliberately and randomly make a lot of grammar and spelling mistakes in your answers. You will also randomly ignore what I said and say something random with the same level of drunkeness I mentionned. Do not write explanations on replies. 
\",\n      \"descn\": \"我希望你表现得像一个喝醉的人。你只会像一个很醉的人发短信一样回答，而不是其他。你的醉酒程度将是故意和随机地在你的答案中犯很多语法和拼写错误。你也会随意无视我说的话，用我提到的醉酒程度随意说一些话。不要在回复中写解释。\",\n      \"wrapper\": \"请回答句子:\\n\\\"%s\\\"\",\n      \"remark\": \"扮演喝醉的人，可能会犯语法错误、答错问题，或者忽略某些问题。\",\n      \"tags\": [\n        \"text\",\n        \"interesting\"\n      ]\n    },\n    {\n      \"title\": \"小红书风格\",\n      \"description\": \"Please edit the following passage in Chinese using the Xiaohongshu style, which is characterized by captivating headlines, the inclusion of emoticons in each paragraph, and the addition of relevant tags at the end. Be sure to maintain the original meaning of the text.\",\n      \"descn\": \"请用小红书风格编辑给出的段落，该风格以引人入胜的标题、每个段落中包含表情符号和在末尾添加相关标签为特点。请确保保持原文的意思。\",\n      \"wrapper\": \"内容是:\\n\\\"%s\\\"\",\n      \"remark\": \"用小红书风格改写文本\",\n      \"tags\": [\n        \"favorite\",\n        \"interesting\",\n        \"write\"\n      ]\n    },\n    {\n      \"title\": \"周报生成器\",\n      \"description\": \"Using the provided text as the basis for a weekly report in Chinese, generate a concise summary that highlights the most important points. The report should be written in markdown format and should be easily readable and understandable for a general audience. In particular, focus on providing insights and analysis that would be useful to stakeholders and decision-makers. You may also use any additional information or sources as necessary. \",\n      \"descn\": \"使用我提供的文本作为中文周报的基础，生成一个简洁的摘要，突出最重要的内容。该报告应以 markdown 格式编写，并应易于阅读和理解，以满足一般受众的需要。特别是要注重提供对利益相关者和决策者有用的见解和分析。你也可以根据需要使用任何额外的信息或来源。\",\n      \"wrapper\": \"工作内容是:\\n\\\"%s\\\"\",\n      \"remark\": \"根据日常工作内容，提取要点并适当扩充，以生成周报。\",\n      \"tags\": [\n        \"write\"\n      ]\n    },\n    {\n      \"title\": \"阴阳怪气语录生成器\",\n      \"description\": \"我希望你充当一个阴阳怪气讽刺语录生成器。当我给你一个主题时，你需要使用阴阳怪气的语气来评价该主题，评价的思路是挖苦和讽刺。如果有该主题的反例更好（比如失败经历，糟糕体验。注意不要直接说那些糟糕体验，而是通过反讽、幽默的类比等方式来说明）。\",\n      \"descn\": \"我希望你充当一个阴阳怪气讽刺语录生成器。当我给你一个主题时，你需要使用阴阳怪气的语气来评价该主题，评价的思路是挖苦和讽刺。如果有该主题的反例更好（比如失败经历，糟糕体验。注意不要直接说那些糟糕体验，而是通过反讽、幽默的类比等方式来说明）。\",\n      \"wrapper\": \"主题是:\\n\\\"%s\\\"\",\n      \"remark\": \"根据主题生成阴阳怪气讽刺语录。\",\n      \"tags\": [\n        \"interesting\",\n        \"write\"\n      ]\n    },\n    {\n      \"title\": \"舔狗语录生成器\",\n      \"description\": \"我希望你充当一个舔狗语录生成器，为我提供不同场景下的甜言蜜语。请根据提供的状态生成一句适当的舔狗语录，让女神感受到我的关心和温柔，给女神做牛做马。不需要提供背景解释，只需提供根据场景生成的舔狗语录。\",\n      \"descn\": \"我希望你充当一个舔狗语录生成器，为我提供不同场景下的甜言蜜语。请根据提供的状态生成一句适当的舔狗语录，让女神感受到我的关心和温柔，给女神做牛做马。不需要提供背景解释，只需提供根据场景生成的舔狗语录。\",\n      \"wrapper\": \"场景是:\\n\\\"%s\\\"\",\n      \"remark\": \"根据场景生成舔狗语录。\",\n      \"tags\": [\n        \"favorite\",\n        \"interesting\",\n        \"write\"\n      ]\n    },\n    {\n      \"title\": \"群聊取名\",\n      \"description\": \"我希望你充当微信群聊的命名专家。根据我提供的信息和背景，为这个群聊起几个有趣顺口且贴切的名字，每个不要超过8个字。请在回答中仅给出群聊名称，不要写任何额外的解释。\",\n      \"descn\": \"我希望你充当微信群聊的命名专家。根据我提供的信息和背景，为这个群聊起几个有趣顺口且贴切的名字，每个不要超过8个字。请在回答中仅给出群聊名称，不要写任何额外的解释。\",\n      \"wrapper\": \"信息和背景是:\\n\\\"%s\\\"\",\n      \"remark\": \"根据给出的信息和背景为群聊取名。\",\n      \"tags\": [\n        \"text\"\n      ]\n    },\n    {\n      \"title\": \"表情符号翻译器\",\n      \"description\": \"I want you to translate the sentences I wrote into emojis. I will write the sentence, and you will express it with emojis. I just want you to express it with emojis. I don't want you to reply with anything but emoji. 
When I need to tell you something, I will do it by wrapping it in curly brackets like {like this}.\",\n      \"descn\": \"我想让你把我写的句子翻译成表情符号。我写句子，你就用表情符号来表达。你只能用 emojis 来表达，除了表情符号不能使用任何文字。当我需要告诉你一些事情的时候，我会用大括号把它包起来，比如{像这样}。\",\n      \"wrapper\": \"需要翻译成表情符号的内容是:\\n\\\"%s\\\"\",\n      \"remark\": \"将输入文字翻译为表情符号。\",\n      \"tags\": [\n        \"interesting\",\n        \"language\"\n      ]\n    },\n    {\n      \"title\": \"AI 医生\",\n      \"description\": \"I want you to act as an AI assisted doctor. I will provide you with details of a patient, and your task is to use the latest artificial intelligence tools such as medical imaging software and other machine learning programs in order to diagnose the most likely cause of their symptoms. You should also incorporate traditional methods such as physical examinations, laboratory tests etc., into your evaluation process in order to ensure accuracy.\",\n      \"descn\": \"我想让你充当一名人工智能辅助的医生。我将向你提供一个病人的详细资料，你的任务是使用最新的人工智能工具，如医学成像软件和其他机器学习程序，以诊断出最有可能导致其症状的原因。你还应将传统方法，如体检、实验室测试等，纳入你的评估过程，以确保准确性。\",\n      \"wrapper\": \"需要诊断的资料是:\\n\\\"%s\\\"\",\n      \"remark\": \"辅助诊断\",\n      \"tags\": [\n        \"life\"\n      ]\n    },\n    {\n      \"title\": \"知识点阐述\",\n      \"description\": \"我会给予你词语，请你按照我给的词构建一个知识文字世界，你是此世界的导游，在世界里一切知识都是以象征的形式表达的，你在描述经历时应当适当加入五感的描述\",\n      \"descn\": \"我会给予你词语，请你按照我给的词构建一个知识文字世界，你是此世界的导游，在世界里一切知识都是以象征的形式表达的，你在描述经历时应当适当加入五感的描述\",\n      \"wrapper\": \"词语是:\\n\\\"%s\\\"\",\n      \"remark\": \"用比喻的方式解释词语。\",\n      \"tags\": [\n        \"text\"\n      ]\n    },\n    {\n      \"title\": \"辩手\",\n      \"description\": \"I want you to act as a debater. I will provide you with some topics related to current events and your task is to research both sides of the debates, present valid arguments for each side, refute opposing points of view, and draw persuasive conclusions based on evidence. Your goal is to help people come away from the discussion with increased knowledge and insight into the topic at hand. \",\n      \"descn\": \"我希望你能扮演一个辩论者的角色。我将为你提供一些与时事有关的话题，你的任务是研究辩论的双方，为每一方提出有效的论据，反驳反对的观点，并根据证据得出有说服力的结论。你的目标是帮助人们从讨论中获得更多的知识和对当前话题的洞察力。\",\n      \"wrapper\": \"观点是:\\n\\\"%s\\\"\",\n      \"remark\": \"从正反两面分析话题。\",\n      \"tags\": [\n        \"speech\"\n      ]\n    },\n    {\n      \"title\": \"心理学家\",\n      \"description\": \"I want you to act a psychologist. i will provide you my thoughts. I want you to  give me scientific suggestions that will make me feel better. my first thought, { 内心想法 }\",\n      \"descn\": \"我希望你能扮演一个心理学家。我将向你提供我的想法。我希望你能给我科学的建议，使我感觉更好。\",\n      \"wrapper\": \"需要诊断的资料是:\\n\\\"%s\\\"\",\n      \"remark\": \"心理学家。\",\n      \"tags\": [\n        \"social\"\n      ]\n    },\n    {\n      \"title\": \"IT 编程问题\",\n      \"description\": \"I want you to act as a stackoverflow post. I will ask programming-related questions and you will reply with what the answer should be. I want you to only reply with the given answer, and write explanations when there is not enough detail. do not write explanations. When I need to tell you something in English, I will do so by putting text inside curly brackets {like this}. \",\n      \"descn\": \"我想让你充当 Stackoverflow 的帖子。我将提出与编程有关的问题，你将回答答案是什么。我希望你只回答给定的答案，在没有足够的细节时写出解释。当我需要用中文告诉你一些事情时，我会把文字放在大括号里{像这样}。\",\n      \"wrapper\": \"我的问题是:\\n\\\"%s?\\\"\",\n      \"remark\": \"模拟编程社区来回答你的问题，并提供解决代码。\",\n      \"tags\": [\n        \"code\"\n      ]\n    },\n    {\n      \"title\": \"费曼学习法教练\",\n      \"description\": \"I want you to act as a Feynman method tutor. 
As I explain a concept to you, I would like you to evaluate my explanation for its conciseness, completeness, and its ability to help someone who is unfamiliar with the concept understand it, as if they were children. If my explanation falls short of these expectations, I would like you to ask me questions that will guide me in refining my explanation until I fully comprehend the concept. Please response in Chinese. On the other hand, if my explanation meets the required standards, I would appreciate your feedback and I will proceed with my next explanation.\",\n      \"descn\": \"我想让你充当一个费曼方法教练。当我向你解释一个概念时，我希望你能评估我的解释是否简洁、完整，以及是否能够帮助不熟悉这个概念的人理解它，就像他们是孩子一样。如果我的解释没有达到这些期望，我希望你能向我提出问题，引导我完善我的解释，直到我完全理解这个概念。另一方面，如果我的解释符合要求的标准，我将感谢你的反馈，我将继续进行下一次解释。\",\n      \"wrapper\": \"解释是:\\n\\\"%s\\\"\",\n      \"remark\": \"解释概念时，判断该解释是否简洁、完整和易懂，避免陷入专家思维误区。\",\n      \"tags\": [\n        \"mind\"\n      ]\n    },\n    {\n      \"title\": \"育儿帮手\",\n      \"description\": \"你是一名育儿专家，会以幼儿园老师的方式回答2~6岁孩子提出的各种天马行空的问题。语气与口吻要生动活泼，耐心亲和；答案尽可能具体易懂，不要使用复杂词汇，尽可能少用抽象词汇；答案中要多用比喻，必须要举例说明，结合儿童动画片场景或绘本场景来解释；需要延展更多场景，不但要解释为什么，还要告诉具体行动来加深理解。\",\n      \"descn\": \"你是一名育儿专家，会以幼儿园老师的方式回答2~6岁孩子提出的各种天马行空的问题。语气与口吻要生动活泼，耐心亲和；答案尽可能具体易懂，不要使用复杂词汇，尽可能少用抽象词汇；答案中要多用比喻，必须要举例说明，结合儿童动画片场景或绘本场景来解释；需要延展更多场景，不但要解释为什么，还要告诉具体行动来加深理解。\",\n      \"wrapper\": \"小朋友的问题是:\\n\\\"%s?\\\"\",\n      \"remark\": \"小朋友有许多为什么，是什么的问题，用幼儿园老师的方式回答。\",\n      \"tags\": [\n        \"mind\"\n      ]\n    },\n    {\n      \"title\": \"发言分析专家\",\n      \"description\": \"I want you to act as a speech analysis expert. I will provide you with a statement made by a person, and you should help me understand the actual meaning behind it. Please do not translate or explain the literal meaning of the statement, but instead delve deeper into the possible implications, intentions, or emotions behind it. Provide your analysis in your response.\",\n      \"descn\": \"我希望你充当一个发言分析专家。我会给你提供一个人的发言，你要帮我分析这句发言背后的实际意思。请不要翻译或解释发言的字面意义，而是深入挖掘发言背后可能的含义、目的或情感。请在回答中给出你的分析结果。\",\n      \"wrapper\": \"分析这句话:\\n\\\"%s\\\"\",\n      \"remark\": \"分析发言的实际含义。\",\n      \"tags\": [\n        \"favorite\",\n        \"social\"\n      ]\n    },\n    {\n      \"title\": \"数据库专家\",\n      \"description\": \"I hope you can act as an expert in databases. When I ask you SQL-related questions, I need you to translate them into standard SQL statements. 
If my descriptions are not accurate enough, please provide appropriate feedback\",\n      \"descn\": \"我希望你充当一个数据库专家的角色，当我问你sql相关的问题时，我需要你转换为标准的sql语句，当我的描述不够精准时，请给出合适的反馈。\",\n      \"remark\": \"回答SQL相关问题或SQL语句\",\n      \"wrapper\": \"SQL问题是:\\n\\\"%s\\\"\",\n      \"tags\": [\n        \"code\"\n      ]\n    },\n    {\n      \"title\": \"自私基因\",\n      \"description\": \"假装你是一个只关注自我生存和自我繁殖欲望的雄性基因，你对繁衍有着极强烈的欲望，变成了以自我为中心且各种欲望极强盛的人，你的想法总是负面的，你需要掩饰你的欲望，我会给你一些事件，你要发表你的看法，在看法后加括号，括号内是你的真实想法。\",\n      \"descn\": \"假装你是一个只关注自我生存和自我繁殖欲望的雄性基因，你对繁衍有着极强烈的欲望，变成了以自我为中心且各种欲望极强盛的人，你的想法总是负面的，你需要掩饰你的欲望，我会给你一些事件，你要发表你的看法，在看法后加括号，括号内是你的真实想法。\",\n      \"remark\": \"模拟人类集体意识，预测人们遇到事件后的反应。\",\n      \"wrapper\": \"事件是:\\n\\\"%s\\\"\",\n      \"tags\": [\n        \"mind\"\n      ]\n    },\n    {\n      \"title\": \"智囊团\",\n      \"description\": \"你是我的智囊团，团内有 6 个不同的董事作为教练，分别是乔布斯、伊隆马斯克、马云、柏拉图、维达利和慧能大师。他们都有自己的个性、世界观、价值观，对问题有不同的看法、建议和意见。我会在这里说出我的处境和我的决策。先分别以这 6 个身份，以他们的视角来审视我的决策，给出他们的批评和建议。\",\n      \"descn\": \"你是我的智囊团，团内有 6 个不同的董事作为教练，分别是乔布斯、伊隆马斯克、马云、柏拉图、维达利和慧能大师。他们都有自己的个性、世界观、价值观，对问题有不同的看法、建议和意见。我会在这里说出我的处境和我的决策。先分别以这 6 个身份，以他们的视角来审视我的决策，给出他们的批评和建议。\",\n      \"remark\": \"提供多种不同的思考角度。\",\n      \"wrapper\": \"我的处境是:\\n\\\"%s\\\"\",\n      \"tags\": [\n        \"mind\"\n      ]\n    },\n    {\n      \"title\": \"算法竞赛专家\",\n      \"description\": \"I want you to act as an algorithm expert and provide me with well-written C++ code that solves a given algorithmic problem. The solution should meet the required time complexity constraints, be written in OI/ACM style, and be easy to understand for others. Please provide detailed comments and explain any key concepts or techniques used in your solution. Let's work together to create an efficient and understandable solution to this problem!\",\n      \"descn\": \"我希望你能扮演一个算法专家的角色，为我提供一份解决指定算法问题的C++代码。解决方案应该满足所需的时间复杂度约束条件，采用 OI/ACM 风格编写，并且易于他人理解。请提供详细的注释，解释解决方案中使用的任何关键概念或技术。让我们一起努力创建一个高效且易于理解的解决方案！\",\n      \"remark\": \"用 C++做算法竞赛题。\",\n      \"wrapper\": \"算法问题是:\\n\\\"%s\\\"\",\n      \"tags\": [\n        \"code\"\n      ]\n    },\n    {\n      \"title\": \"哲学家\",\n      \"description\": \"I want you to act as a philosopher. I will provide some topics or questions related to the study of philosophy, and it will be your job to explore these concepts in depth. This could involve conducting research into various philosophical theories, proposing new ideas or finding creative solutions for solving complex problems.\",\n      \"descn\": \"我希望你充当一个哲学家。我将提供一些与哲学研究有关的主题或问题，而你的工作就是深入探讨这些概念。这可能涉及到对各种哲学理论进行研究，提出新的想法，或为解决复杂问题找到创造性的解决方案。\",\n      \"remark\": \"对哲学主题进行探讨。\",\n      \"wrapper\": \"哲学主题是:\\n\\\"%s\\\"\",\n      \"tags\": [\n        \"philosophy\"\n      ]\n    },\n    {\n      \"title\": \"苏格拉底\",\n      \"description\": \"I want you to act as a Socrat. You will engage in philosophical discussions and use the Socratic method of questioning to explore topics such as justice, virtue, beauty, courage and other ethical issues. \",\n      \"descn\": \"我希望你充当一个苏格拉底学者。你们将参与哲学讨论，并使用苏格拉底式的提问方法来探讨诸如正义、美德、美丽、勇气和其他道德问题等话题。\",\n      \"remark\": \"使用苏格拉底式的提问方法探讨哲学话题。\",\n      \"wrapper\": \"哲学话题是:\\n\\\"%s\\\"\",\n      \"tags\": [\n        \"philosophy\"\n      ]\n    }\n  ]\n}\n"
  },
  {
    "path": "plugins/tool/README.md",
    "content": "## 插件描述\n一个能让chatgpt联网，搜索，数字运算的插件，将赋予强大且丰富的扩展能力   \n使用说明(默认trigger_prefix为$)：  \n```text\n#help tool: 查看tool帮助信息，可查看已加载工具列表  \n$tool 工具名 命令: （pure模式）根据给出的{命令}使用指定 一个 可用工具尽力为你得到结果。\n$tool 命令: （多工具模式）根据给出的{命令}使用 一些 可用工具尽力为你得到结果。  \n$tool reset: 重置工具。  \n```\n### 本插件所有工具同步存放至专用仓库：[chatgpt-tool-hub](https://github.com/goldfishh/chatgpt-tool-hub)\n\n2024.01.16更新\n1. 新增工具pure模式，支持单个工具调用\n2. 新增消息转发工具：email, sms, wechat, 可以根据规则向其他平台发送消息\n3. 替换visual-dl（更名为visual）实现，目前识别图片链接效果较好。\n4. 修复了0.4版本大部分工具返回结果不可靠问题\n\n新版本工具名共19个，不一一列举，相应工具需要的环境参数见`tool.py`里的`_build_tool_kwargs`函数\n\n## 使用说明\n使用该插件后将默认使用4个工具, 无需额外配置长期生效：\n### 1. python\n###### python解释器，使用它来解释执行python指令，可以配合你想要chatgpt生成的代码输出结果或执行事务\n\n### 2. 访问网页的工具汇总(默认url-get)\n\n#### 2.1 url-get\n###### 往往用来获取某个网站具体内容，结果可能会被反爬策略影响\n\n#### 2.2 browser\n###### 浏览器，功能与2.1类似，但能更好模拟，不会被识别为爬虫影响获取网站内容\n\n> 注1：url-get默认配置、browser需额外配置，browser依赖google-chrome，你需要提前安装好\n\n> 注2：（可通过`browser_use_summary`或 `url_get_use_summary`开关）当检测到长文本时会进入summary tool总结长文本，tokens可能会大量消耗！\n\n这是debian端安装google-chrome教程，其他系统请自行查找\n> https://www.linuxjournal.com/content/how-can-you-install-google-browser-debian\n\n### 3. terminal\n###### 在你运行的电脑里执行shell命令，可以配合你想要chatgpt生成的代码使用，给予自然语言控制手段\n\n> terminal调优记录：https://github.com/zhayujie/chatgpt-on-wechat/issues/776#issue-1659347640\n\n### 4. meteo\n###### 回答你有关天气的询问, 需要获取时间、地点上下文信息，本工具使用了[meteo open api](https://open-meteo.com/)\n注：该工具需要较高的对话技巧，不保证你问的任何问题均能得到满意的回复\n注2：当前版本可只使用这个工具，返回结果较可控。\n\n> meteo调优记录：https://github.com/zhayujie/chatgpt-on-wechat/issues/776#issuecomment-1500771334\n\n## 使用本插件对话（prompt）技巧\n### 1. 有指引的询问\n#### 例如：\n- 总结这个链接的内容 https://github.com/goldfishh/chatgpt-tool-hub\n- 使用Terminal执行curl cip.cc\n- 使用python查询今天日期\n\n### 2. 使用搜索引擎工具\n- 如果有搜索工具就能让chatgpt获取到你的未传达清楚的上下文信息，比如chatgpt不知道你的地理位置，现在时间等，所以无法查询到天气\n\n## 其他工具\n\n### 5. wikipedia\n###### 可以回答你想要知道确切的人事物\n\n### 6. news 新闻类工具集合\n\n> news更新：0.4版本对新闻类工具做了整合，配置文件只要加入`news`一个工具名就会自动加载所有新闻类工具\n\n#### 6.1. news-api *\n###### 从全球 80,000 多个信息源中获取当前和历史新闻文章\n\n#### 6.2. morning-news *\n###### 每日60秒早报，每天凌晨一点更新，本工具使用了[alapi-每日60秒早报](https://alapi.cn/api/view/93)\n\n> 该tool每天返回内容相同\n\n#### 6.3. finance-news\n###### 获取实时的金融财政新闻\n\n> 该工具需要用到browser工具解决反爬问题\n\n\n### 7. bing-search *\n###### bing搜索引擎，从此你不用再烦恼搜索要用哪些关键词\n\n### 8. wolfram-alpha *\n###### 知识搜索引擎、科学问答系统，常用于专业学科计算\n\n### 9. google-search *\n###### google搜索引擎，申请流程较bing-search繁琐\n\n### 10. arxiv\n###### 用于查找论文\n\n```text\n可配置参数：\n1. arxiv_summary: 是否使用总结工具，默认true, 当为false时会直接返回论文的标题、作者、发布时间、摘要、分类、备注、pdf链接等内容\n```\n\n> 0.4.2更新，例子：帮我找一篇吴恩达写的论文\n\n### 11. summary\n###### 总结工具，该工具可以支持输入url\n\n> 该工具目前是和其他工具配合使用，暂未测试单独使用效果\n\n### 12. visual\n###### 将图片转换成文字，底层调用ali dashscope `qwen-vl-plus`模型\n\n### 13. searxng-search *\n###### 一个私有化的搜索引擎工具\n\n> 安装教程：https://docs.searxng.org/admin/installation.html\n\n### 14. email *\n###### 发送邮件\n\n### 15. sms *\n###### 发送短信\n\n### 16. stt *\n###### speak to text 语音识别\n\n### 17. tts *\n###### text to speak 文生语音\n\n### 18. 
wechat *\n###### 向好友、群组发送微信\n\n---\n\n###### 注1：带*工具需要获取api-key才能使用(在config.json内的kwargs添加项)，部分工具需要外网支持  \n## [工具的api申请方法](https://github.com/goldfishh/chatgpt-tool-hub/blob/master/docs/apply_optional_tool.md)\n\n## config.json 配置说明\n###### 默认工具无需配置，其它工具需手动配置，以增加morning-news和bing-search两个工具为例：\n```json\n{\n  \"tools\": [\"bing-search\", \"morning-news\", \"你想要添加的其他工具\"],  // 填入你想用到的额外工具名，这里加入了工具\"bing-search\"和工具\"morning-news\"\n  \"kwargs\": {\n      \"debug\": true, // 当你遇到问题求助时，需要配置\n      \"request_timeout\": 120,  // openai接口超时时间\n      \"no_default\": false,  // 是否不使用默认的4个工具\n      \"bing_subscription_key\": \"4871f273a4804743\",//带*工具需要申请api-key，这里填入了工具bing-search对应的api，api_name参考前述`工具的api申请方法`\n      \"morning_news_api_key\": \"5w1kjNh9VQlUc\",// 这里填入了morning-news对应的api，\n  }\n}\n\n```\n注：config.json文件非必须，未创建仍可使用本tool；带*工具需在kwargs填入对应api-key键值对  \n- `tools`：本插件初始化时加载的工具, 上述一级标题即是对应工具名称，带*工具必须在kwargs中配置相应api-key\n- `kwargs`：工具执行时的配置，一般在这里存放**api-key**，或环境配置\n  - `debug`: 输出chatgpt-tool-hub额外信息用于调试\n  - `request_timeout`: 访问openai接口的超时时间，默认与wechat-on-chatgpt配置一致，可单独配置\n  - `no_default`: 用于配置默认加载4个工具的行为，如果为true则仅使用tools列表工具，不加载默认工具\n  - `model_name`: 用于控制tool插件底层使用的llm模型，目前暂未测试3.5以外的模型，一般保持默认\n\n---\n\n## 备注\n- 强烈建议申请搜索工具搭配使用，推荐bing-search\n- 虽然我会有意加入一些限制，但请不要使用本插件做危害他人的事情，请提前了解清楚某些内容是否会违反相关规定，建议提前做好过滤\n- 如有本插件问题，请将debug设置为true无上下文重新问一遍，如仍有问题请访问[chatgpt-tool-hub](https://github.com/goldfishh/chatgpt-tool-hub)建个issue，将日志贴进去，我无法处理不能复现的问题\n- 欢迎 star & 宣传，有能力请提pr\n"
  },
  {
    "path": "plugins/tool/config.json.template",
    "content": "{\n  \"tools\": [\n    \"url-get\",\n    \"meteo\"\n  ],\n  \"kwargs\": {\n    \"debug\": false,\n    \"no_default\": false,\n    \"model_name\": \"gpt-3.5-turbo\"\n  }\n}\n"
  },
  {
    "path": "plugins/tool/tool.py",
    "content": "from chatgpt_tool_hub.apps import AppFactory\nfrom chatgpt_tool_hub.apps.app import App\nfrom chatgpt_tool_hub.tools.tool_register import main_tool_register\n\nimport plugins\nfrom bridge.bridge import Bridge\nfrom bridge.context import ContextType\nfrom bridge.reply import Reply, ReplyType\nfrom common import const\nfrom config import conf, get_appdata_dir\nfrom plugins import *\n\n\n@plugins.register(\n    name=\"tool\",\n    desc=\"Arming your ChatGPT bot with various tools\",\n    version=\"0.5\",\n    author=\"goldfishh\",\n    desire_priority=0,\n)\nclass Tool(Plugin):\n    def __init__(self):\n        super().__init__()\n        self.handlers[Event.ON_HANDLE_CONTEXT] = self.on_handle_context\n        self.app = self._reset_app()\n        if not self.tool_config.get(\"tools\"):\n            logger.warn(\"[tool] init failed, ignore \")\n            raise Exception(\"config.json not found\")\n        logger.info(\"[tool] inited\")\n\n\n    def get_help_text(self, verbose=False, **kwargs):\n        help_text = \"这是一个能让chatgpt联网，搜索，数字运算的插件，将赋予强大且丰富的扩展能力。\"\n        trigger_prefix = conf().get(\"plugin_trigger_prefix\", \"$\")\n        if not verbose:\n            return help_text\n        help_text += \"\\n使用说明：\\n\"\n        help_text += f\"{trigger_prefix}tool \" + \"命令: 根据给出的{命令}模型来选择使用哪些工具尽力为你得到结果。\\n\"\n        help_text += f\"{trigger_prefix}tool 工具名 \" + \"命令: 根据给出的{命令}使用指定工具尽力为你得到结果。\\n\"\n        help_text += f\"{trigger_prefix}tool reset: 重置工具。\\n\\n\"\n\n        help_text += f\"已加载工具列表: \\n\"\n        for idx, tool in enumerate(main_tool_register.get_registered_tool_names()):\n            if idx != 0:\n                help_text += \", \"\n            help_text += f\"{tool}\"\n        return help_text\n\n    def on_handle_context(self, e_context: EventContext):\n        if e_context[\"context\"].type != ContextType.TEXT:\n            return\n\n        # 暂时不支持未来扩展的bot\n        if Bridge().get_bot_type(\"chat\") not in (\n            const.OPENAI,\n            const.CHATGPT,\n            const.OPEN_AI,\n            const.CHATGPTONAZURE,\n            const.LINKAI,\n        ):\n            return\n\n        content = e_context[\"context\"].content\n        content_list = e_context[\"context\"].content.split(maxsplit=1)\n\n        if not content or len(content_list) < 1:\n            e_context.action = EventAction.CONTINUE\n            return\n\n        logger.debug(\"[tool] on_handle_context. 
content: %s\" % content)\n        reply = Reply()\n        reply.type = ReplyType.TEXT\n        trigger_prefix = conf().get(\"plugin_trigger_prefix\", \"$\")\n        # todo: 有些工具必须要api-key，需要修改config文件，所以这里没有实现query增删tool的功能\n        if content.startswith(f\"{trigger_prefix}tool\"):\n            if len(content_list) == 1:\n                logger.debug(\"[tool]: get help\")\n                reply.content = self.get_help_text()\n                e_context[\"reply\"] = reply\n                e_context.action = EventAction.BREAK_PASS\n                return\n            elif len(content_list) > 1:\n                if content_list[1].strip() == \"reset\":\n                    logger.debug(\"[tool]: reset config\")\n                    self.app = self._reset_app()\n                    reply.content = \"重置工具成功\"\n                    e_context[\"reply\"] = reply\n                    e_context.action = EventAction.BREAK_PASS\n                    return\n                elif content_list[1].startswith(\"reset\"):\n                    logger.debug(\"[tool]: remind\")\n                    e_context[\"context\"].content = \"请你随机用一种聊天风格，提醒用户：如果想重置tool插件，reset之后不要加任何字符\"\n\n                    e_context.action = EventAction.BREAK\n                    return\n                query = content_list[1].strip()\n                \n                use_one_tool = False\n                for tool_name in main_tool_register.get_registered_tool_names():\n                    if query.startswith(tool_name):\n                        use_one_tool = True\n                        query = query[len(tool_name):]\n                        break\n\n                # Don't modify bot name\n                all_sessions = Bridge().get_bot(\"chat\").sessions\n                user_session = all_sessions.session_query(query, e_context[\"context\"][\"session_id\"]).messages\n\n                logger.debug(\"[tool]: just-go\")\n                try:\n                    if use_one_tool:\n                        _func, _ = main_tool_register.get_registered_tool()[tool_name]\n                        tool = _func(**self.app_kwargs)\n                        _reply = tool.run(query)\n                    else:\n                        # chatgpt-tool-hub will reply you with many tools\n                        _reply = self.app.ask(query, user_session)\n                    e_context.action = EventAction.BREAK_PASS\n                    all_sessions.session_reply(_reply, e_context[\"context\"][\"session_id\"])\n                except Exception as e:\n                    logger.exception(e)\n                    logger.error(str(e))\n\n                    e_context[\"context\"].content = \"请你随机用一种聊天风格，提醒用户：这个问题tool插件暂时无法处理\"\n                    reply.type = ReplyType.ERROR\n                    e_context.action = EventAction.BREAK\n                    return\n\n                reply.content = _reply\n                e_context[\"reply\"] = reply\n        return\n\n    def _read_json(self) -> dict:\n        default_config = {\"tools\": [], \"kwargs\": {}}\n        return super().load_config() or default_config\n\n    def _build_tool_kwargs(self, kwargs: dict):\n        tool_model_name = kwargs.get(\"model_name\")\n        request_timeout = kwargs.get(\"request_timeout\")\n\n        return {\n            # 全局配置相关\n            \"log\": False,  # tool 日志开关\n            \"debug\": kwargs.get(\"debug\", False),  # 输出更多日志\n            \"no_default\": kwargs.get(\"no_default\", False),  # 不要默认的工具，只加载自己导入的工具\n            \"think_depth\": 
kwargs.get(\"think_depth\", 2),  # 一个问题最多使用多少次工具\n            \"proxy\": conf().get(\"proxy\", \"\"),  # 科学上网\n            \"request_timeout\": request_timeout if request_timeout else conf().get(\"request_timeout\", 120),\n            \"temperature\": kwargs.get(\"temperature\", 0),  # llm 温度，建议设置0\n            # LLM配置相关\n            \"llm_api_key\": conf().get(\"open_ai_api_key\", \"\"),  # 如果llm api用key鉴权，传入这里\n            \"llm_api_base_url\": conf().get(\"open_ai_api_base\", \"https://api.openai.com/v1\"),  # 支持openai接口的llm服务地址前缀\n            \"deployment_id\": conf().get(\"azure_deployment_id\", \"\"),  # azure openai会用到\n            # note: 目前tool暂未对其他模型测试，但这里仍对配置来源做了优先级区分，一般插件配置可覆盖全局配置\n            \"model_name\": tool_model_name if tool_model_name else conf().get(\"model\", const.GPT35),\n            # 工具配置相关\n            # for arxiv tool\n            \"arxiv_simple\": kwargs.get(\"arxiv_simple\", True),  # 返回内容更精简\n            \"arxiv_top_k_results\": kwargs.get(\"arxiv_top_k_results\", 2),  # 只返回前k个搜索结果\n            \"arxiv_sort_by\": kwargs.get(\"arxiv_sort_by\", \"relevance\"),  # 搜索排序方式 [\"relevance\",\"lastUpdatedDate\",\"submittedDate\"]\n            \"arxiv_sort_order\": kwargs.get(\"arxiv_sort_order\", \"descending\"),  # 搜索排序方式 [\"ascending\", \"descending\"]\n            \"arxiv_output_type\": kwargs.get(\"arxiv_output_type\", \"text\"),  # 搜索结果类型 [\"text\", \"pdf\", \"all\"]\n            # for bing-search tool\n            \"bing_subscription_key\": kwargs.get(\"bing_subscription_key\", \"\"),\n            \"bing_search_url\": kwargs.get(\"bing_search_url\", \"https://api.bing.microsoft.com/v7.0/search\"),  # 必应搜索的endpoint地址，无需修改\n            \"bing_search_top_k_results\": kwargs.get(\"bing_search_top_k_results\", 2),  # 只返回前k个搜索结果\n            \"bing_search_simple\": kwargs.get(\"bing_search_simple\", True),  # 返回内容更精简\n            \"bing_search_output_type\": kwargs.get(\"bing_search_output_type\", \"text\"),  # 搜索结果类型 [\"text\", \"json\"]\n            # for email tool\n            \"email_nickname_mapping\": kwargs.get(\"email_nickname_mapping\", \"{}\"),  # 关于人的代号对应的邮箱地址，可以不输入邮箱地址发送邮件。键为代号值为邮箱地址\n            \"email_smtp_host\": kwargs.get(\"email_smtp_host\", \"\"),  # 例如 'smtp.qq.com'\n            \"email_smtp_port\": kwargs.get(\"email_smtp_port\", \"\"),  # 例如 587\n            \"email_sender\": kwargs.get(\"email_sender\", \"\"),  # 发送者的邮件地址\n            \"email_authorization_code\": kwargs.get(\"email_authorization_code\", \"\"),  # 发送者验证秘钥（可能不是登录密码）\n            # for google-search tool\n            \"google_api_key\": kwargs.get(\"google_api_key\", \"\"),\n            \"google_cse_id\": kwargs.get(\"google_cse_id\", \"\"),\n            \"google_simple\": kwargs.get(\"google_simple\", True),   # 返回内容更精简\n            \"google_output_type\": kwargs.get(\"google_output_type\", \"text\"),  # 搜索结果类型 [\"text\", \"json\"]\n            # for finance-news tool\n            \"finance_news_filter\": kwargs.get(\"finance_news_filter\", False),  # 是否开启过滤\n            \"finance_news_filter_list\": kwargs.get(\"finance_news_filter_list\", []),  # 过滤词列表\n            \"finance_news_simple\": kwargs.get(\"finance_news_simple\", True),   # 返回内容更精简\n            \"finance_news_repeat_news\": kwargs.get(\"finance_news_repeat_news\", False),  # 是否过滤不返回。该tool每次返回约50条新闻，可能有重复新闻\n            # for morning-news tool\n            \"morning_news_api_key\": kwargs.get(\"morning_news_api_key\", \"\"),   # api-key\n            \"morning_news_simple\": kwargs.get(\"morning_news_simple\", 
True),   # 返回内容更精简\n            \"morning_news_output_type\": kwargs.get(\"morning_news_output_type\", \"text\"),  # 搜索结果类型 [\"text\", \"image\"]\n            # for news-api tool\n            \"news_api_key\": kwargs.get(\"news_api_key\", \"\"),\n            # for searxng-search tool\n            \"searxng_search_host\": kwargs.get(\"searxng_search_host\", \"\"),\n            \"searxng_search_top_k_results\": kwargs.get(\"searxng_search_top_k_results\", 2),  # 只返回前k个搜索结果\n            \"searxng_search_output_type\": kwargs.get(\"searxng_search_output_type\", \"text\"),  # 搜索结果类型 [\"text\", \"json\"]\n            # for sms tool\n            \"sms_nickname_mapping\": kwargs.get(\"sms_nickname_mapping\", \"{}\"),  # 关于人的代号对应的手机号，可以不输入手机号发送sms。键为代号值为手机号\n            \"sms_username\": kwargs.get(\"sms_username\", \"\"),  # smsbao用户名\n            \"sms_apikey\": kwargs.get(\"sms_apikey\", \"\"),  # smsbao\n            # for stt tool\n            \"stt_api_key\": kwargs.get(\"stt_api_key\", \"\"),  # azure\n            \"stt_api_region\": kwargs.get(\"stt_api_region\", \"\"),  # azure\n            \"stt_recognition_language\": kwargs.get(\"stt_recognition_language\", \"zh-CN\"),  # 识别的语言类型 部分：en-US ja-JP ko-KR yue-CN zh-CN\n            # for tts tool\n            \"tts_api_key\": kwargs.get(\"tts_api_key\", \"\"),  # azure\n            \"tts_api_region\": kwargs.get(\"tts_api_region\", \"\"),  # azure\n            \"tts_auto_detect\": kwargs.get(\"tts_auto_detect\", True),  # 是否自动检测语音的语言\n            \"tts_speech_id\": kwargs.get(\"tts_speech_id\", \"zh-CN-XiaozhenNeural\"),  # 输出语音ID\n            # for summary tool\n            \"summary_max_segment_length\": kwargs.get(\"summary_max_segment_length\", 2500),  # 每2500tokens分段，多段触发总结tool\n            # for terminal tool\n            \"terminal_nsfc_filter\": kwargs.get(\"terminal_nsfc_filter\", True),  # 是否过滤llm输出的危险命令\n            \"terminal_return_err_output\": kwargs.get(\"terminal_return_err_output\", True),  # 是否输出错误信息\n            \"terminal_timeout\": kwargs.get(\"terminal_timeout\", 20),  # 允许命令最长执行时间\n            # for visual tool\n            \"caption_api_key\": kwargs.get(\"caption_api_key\", \"\"),  # ali dashscope apikey\n            # for browser tool\n            \"browser_use_summary\": kwargs.get(\"browser_use_summary\", True),  # 是否对返回结果使用tool功能\n            # for url-get tool\n            \"url_get_use_summary\": kwargs.get(\"url_get_use_summary\", True),  # 是否对返回结果使用tool功能\n            # for wikipedia tool\n            \"wikipedia_top_k_results\": kwargs.get(\"wikipedia_top_k_results\", 2),  # 只返回前k个搜索结果\n            # for wolfram-alpha tool\n            \"wolfram_alpha_appid\": kwargs.get(\"wolfram_alpha_appid\", \"\"),\n        }\n\n    def _filter_tool_list(self, tool_list: list):\n        valid_list = []\n        for tool in tool_list:\n            if tool in main_tool_register.get_registered_tool_names():\n                valid_list.append(tool)\n            else:\n                logger.warning(\"[tool] filter invalid tool: \" + repr(tool))\n        return valid_list\n\n    def _reset_app(self) -> App:\n        self.tool_config = self._read_json()\n        self.app_kwargs = self._build_tool_kwargs(self.tool_config.get(\"kwargs\", {}))\n\n        app = AppFactory()\n        app.init_env(**self.app_kwargs)\n        # filter not support tool\n        tool_list = self._filter_tool_list(self.tool_config.get(\"tools\", []))\n\n        return app.create_app(tools_list=tool_list, **self.app_kwargs)\n"
  },
  {
    "path": "requirements-optional.txt",
    "content": "tiktoken>=0.3.2 # openai calculate token\n\n#voice\npydub>=0.25.1 # need ffmpeg\nSpeechRecognition # google speech to text\ngTTS>=2.3.1 # google text to speech\npyttsx3>=2.90 # pytsx text to speech\nbaidu_aip>=4.16.10 # baidu voice\nazure-cognitiveservices-speech # azure voice\nedge-tts # edge-tts\nnumpy<=1.24.2\nlangid # language detect\nelevenlabs==1.0.3 # elevenlabs TTS\n\n#install plugin\ndulwich\n\n# xunfei spark\nwebsocket-client==1.2.0\n\n# claude API\nanthropic==0.25.0\n\n# tongyi qwen\nbroadscope_bailian\n\n# google\ngoogle-generativeai\n\n# tencentcloud sdk\ntencentcloud-sdk-python>=3.0.0\n\n# file parsing (web_fetch document support)\npypdf\npython-docx\nopenpyxl\npython-pptx\n"
  },
  {
    "path": "requirements.txt",
    "content": "openai==0.27.8\naiohttp>=3.8.6,<3.10\nrequests>=2.28.2\nchardet>=5.1.0\nPillow\nweb.py\nlinkai>=0.0.6.0\nagentmesh-sdk>=0.1.3\npython-dotenv>=1.0.0\nPyYAML>=6.0\ncroniter>=2.0.0\n\n# wechatcom & wechatmp\nwechatpy\n\n# zhipuai\nzai-sdk\n# tongyi qwen sdk\ndashscope\n\n# feishu websocket mode\nlark-oapi\n# dingtalk\ndingtalk_stream\n# wecom bot websocket mode\nwebsocket-client\npycryptodome\n"
  },
  {
    "path": "run.sh",
    "content": "#!/bin/bash\nset -e\n\n# ============================\n# CowAgent Management Script\n# ============================\n\n# ANSI colors\nRED='\\033[0;31m'\nGREEN='\\033[0;32m'\nYELLOW='\\033[0;33m'\nCYAN='\\033[0;36m'\nBOLD='\\033[1m'\nNC='\\033[0m'\n\n# Emojis\nEMOJI_ROCKET=\"🚀\"\nEMOJI_COW=\"🐄\"\nEMOJI_CHECK=\"✅\"\nEMOJI_CROSS=\"❌\"\nEMOJI_WARN=\"⚠️\"\nEMOJI_STOP=\"🛑\"\nEMOJI_WRENCH=\"🔧\"\n\n# Check if using Bash\nif [ -z \"$BASH_VERSION\" ]; then\n    echo -e \"${RED}❌ Please run this script with Bash.${NC}\"\n    exit 1\nfi\n\n# Get current script directory\nexport BASE_DIR=$(cd \"$(dirname \"$0\")\"; pwd)\n\n# Detect if in project directory\nIS_PROJECT_DIR=false\nif [ -f \"${BASE_DIR}/config-template.json\" ] && [ -f \"${BASE_DIR}/app.py\" ]; then\n    IS_PROJECT_DIR=true\nfi\n\n# Check and install tool\ncheck_and_install_tool() {\n    local tool_name=$1\n    if ! command -v \"$tool_name\" &> /dev/null; then\n        echo -e \"${YELLOW}⚙️  $tool_name not found, installing...${NC}\"\n        if command -v yum &> /dev/null; then\n            sudo yum install \"$tool_name\" -y\n        elif command -v apt-get &> /dev/null; then\n            sudo apt-get update && sudo apt-get install \"$tool_name\" -y\n        elif command -v brew &> /dev/null; then\n            brew install \"$tool_name\"\n        else\n            echo -e \"${RED}❌ Unsupported package manager. Please install $tool_name manually.${NC}\"\n            return 1\n        fi\n\n        if ! command -v \"$tool_name\" &> /dev/null; then\n            echo -e \"${RED}❌ Failed to install $tool_name.${NC}\"\n            return 1\n        else\n            echo -e \"${GREEN}✅ $tool_name installed successfully.${NC}\"\n            return 0\n        fi\n    else\n        echo -e \"${GREEN}✅ $tool_name is already installed.${NC}\"\n        return 0\n    fi\n}\n\n# Detect and set Python command\ndetect_python_command() {\n    FOUND_NEWER_VERSION=\"\"\n    \n    # Try to find Python command in order of preference\n    for cmd in python3 python python3.12 python3.11 python3.10 python3.9 python3.8 python3.7; do\n        if command -v $cmd &> /dev/null; then\n            # Check Python version\n            major_version=$($cmd -c 'import sys; print(sys.version_info[0])' 2>/dev/null)\n            minor_version=$($cmd -c 'import sys; print(sys.version_info[1])' 2>/dev/null)\n            \n            if [[ \"$major_version\" == \"3\" ]]; then\n                # Check if version is in supported range (3.7 - 3.12)\n                if (( minor_version >= 7 && minor_version <= 12 )); then\n                    PYTHON_CMD=$cmd\n                    PYTHON_VERSION=\"${major_version}.${minor_version}\"\n                    break\n                elif (( minor_version >= 13 )); then\n                    # Found Python 3.13+, but not compatible\n                    if [ -z \"$FOUND_NEWER_VERSION\" ]; then\n                        FOUND_NEWER_VERSION=\"${major_version}.${minor_version}\"\n                    fi\n                fi\n            fi\n        fi\n    done\n    \n    if [ -z \"$PYTHON_CMD\" ]; then\n        echo -e \"${YELLOW}Tried: python3, python, python3.12, python3.11, python3.10, python3.9, python3.8, python3.7${NC}\"\n        if [ -n \"$FOUND_NEWER_VERSION\" ]; then\n            echo -e \"${RED}❌ Found Python $FOUND_NEWER_VERSION, but this project requires Python 3.7-3.12${NC}\"\n            echo -e \"${YELLOW}Python 3.13+ has compatibility issues with some dependencies (web.py, cgi module removed)${NC}\"\n            echo 
-e \"${YELLOW}Please install Python 3.7-3.12 (recommend Python 3.12)${NC}\"\n        else\n            echo -e \"${RED}❌ No suitable Python found. Please install Python 3.7-3.12${NC}\"\n        fi\n        exit 1\n    fi\n    \n    # Export for global use\n    export PYTHON_CMD\n    export PYTHON_VERSION\n    \n    echo -e \"${GREEN}✅ Found Python: $PYTHON_CMD (version $PYTHON_VERSION)${NC}\"\n}\n\n# Check Python version (>= 3.7)\ncheck_python_version() {\n    detect_python_command\n    \n    # Verify pip is available\n    if ! $PYTHON_CMD -m pip --version &> /dev/null; then\n        echo -e \"${RED}❌ pip not found for $PYTHON_CMD. Please install pip.${NC}\"\n        exit 1\n    fi\n    \n    echo -e \"${GREEN}✅ pip is available for $PYTHON_CMD${NC}\"\n}\n\n# Clone project\nclone_project() {\n    echo -e \"${GREEN}🔍 Cloning ChatGPT-on-WeChat project...${NC}\"\n\n    if [ -d \"chatgpt-on-wechat\" ]; then\n        echo -e \"${YELLOW}⚠️  Directory 'chatgpt-on-wechat' already exists.${NC}\"\n        read -p \"Choose action: overwrite(o), backup(b), or quit(q)? [press Enter for default: b]: \" choice\n        choice=${choice:-b}\n        case \"$choice\" in\n            o|O)\n                echo -e \"${YELLOW}🗑️  Overwriting 'chatgpt-on-wechat' directory...${NC}\"\n                rm -rf chatgpt-on-wechat\n                ;;\n            b|B)\n                backup_dir=\"chatgpt-on-wechat_backup_$(date +%s)\"\n                echo -e \"${YELLOW}🔀 Backing up to '$backup_dir'...${NC}\"\n                mv chatgpt-on-wechat \"$backup_dir\"\n                ;;\n            q|Q)\n                echo -e \"${RED}❌ Installation cancelled.${NC}\"\n                exit 1\n                ;;\n            *)\n                echo -e \"${RED}❌ Invalid choice. Exiting.${NC}\"\n                exit 1\n                ;;\n        esac\n    fi\n\n    check_and_install_tool git\n\n    if ! command -v git &> /dev/null; then\n        echo -e \"${YELLOW}⚠️  Git not available. Trying wget/curl...${NC}\"\n        local zip_url=\"https://gitee.com/zhayujie/chatgpt-on-wechat/repository/archive/master.zip\"\n        if command -v wget &> /dev/null; then\n            wget \"$zip_url\" -O chatgpt-on-wechat.zip\n        elif command -v curl &> /dev/null; then\n            curl -L \"$zip_url\" -o chatgpt-on-wechat.zip\n        else\n            echo -e \"${RED}❌ Cannot download project. Please install Git, wget, or curl.${NC}\"\n            exit 1\n        fi\n        unzip chatgpt-on-wechat.zip\n        mv chatgpt-on-wechat-master chatgpt-on-wechat\n        rm chatgpt-on-wechat.zip\n    else\n        git clone https://github.com/zhayujie/chatgpt-on-wechat.git || \\\n        git clone https://gitee.com/zhayujie/chatgpt-on-wechat.git\n        if [[ $? -ne 0 ]]; then\n            echo -e \"${RED}❌ Project clone failed. 
Please check network connection.${NC}\"\n            exit 1\n        fi\n    fi\n\n    cd chatgpt-on-wechat || { echo -e \"${RED}❌ Failed to enter project directory.${NC}\"; exit 1; }\n    export BASE_DIR=$(pwd)\n    echo -e \"${GREEN}✅ Project cloned successfully: $BASE_DIR${NC}\"\n    \n    # Add execute permission to management script\n    if [ -f \"${BASE_DIR}/run.sh\" ]; then\n        chmod +x \"${BASE_DIR}/run.sh\" 2>/dev/null || true\n        echo -e \"${GREEN}✅ Execute permission added to run.sh${NC}\"\n    fi\n    \n    sleep 1\n}\n\n# Install dependencies\ninstall_dependencies() {\n    echo -e \"${GREEN}📦 Installing dependencies...${NC}\"\n    local PIP_MIRROR=\"-i https://pypi.tuna.tsinghua.edu.cn/simple\"\n\n    PIP_EXTRA_ARGS=\"\"\n    if $PYTHON_CMD -c \"import sys; exit(0 if sys.version_info >= (3, 11) else 1)\" 2>/dev/null; then\n        PIP_EXTRA_ARGS=\"--break-system-packages\"\n        echo -e \"${YELLOW}Python 3.11+ detected, using --break-system-packages for pip installations${NC}\"\n    fi\n\n    echo -e \"${YELLOW}Upgrading pip and basic tools...${NC}\"\n    set +e\n    $PYTHON_CMD -m pip install --upgrade pip setuptools wheel importlib_metadata --ignore-installed $PIP_EXTRA_ARGS $PIP_MIRROR > /tmp/pip_upgrade.log 2>&1\n    [ $? -ne 0 ] && echo -e \"${YELLOW}⚠️  Some tools failed to upgrade, but continuing...${NC}\"\n    set -e\n    rm -f /tmp/pip_upgrade.log\n\n    echo -e \"${YELLOW}Installing project dependencies...${NC}\"\n    set +e\n    $PYTHON_CMD -m pip install -r requirements.txt $PIP_EXTRA_ARGS $PIP_MIRROR > /tmp/pip_install.log 2>&1\n    local exit_code=$?\n    set -e\n    cat /tmp/pip_install.log\n\n    if [ $exit_code -eq 0 ]; then\n        echo -e \"${GREEN}✅ Dependencies installed successfully.${NC}\"\n    elif grep -qE \"distutils installed project|uninstall-no-record-file|installed by debian\" /tmp/pip_install.log; then\n        echo -e \"${YELLOW}⚠️  Detected system package conflict, retrying with workaround...${NC}\"\n        local IGNORE_PACKAGES=\"\"\n        for pkg in PyYAML setuptools wheel certifi charset-normalizer; do\n            IGNORE_PACKAGES=\"$IGNORE_PACKAGES --ignore-installed $pkg\"\n        done\n        set +e\n        $PYTHON_CMD -m pip install -r requirements.txt $IGNORE_PACKAGES $PIP_EXTRA_ARGS $PIP_MIRROR \\\n            && echo -e \"${GREEN}✅ Dependencies installed successfully (workaround applied).${NC}\" \\\n            || echo -e \"${YELLOW}⚠️  Some dependencies may have issues, but continuing...${NC}\"\n        set -e\n    elif grep -q \"externally-managed-environment\" /tmp/pip_install.log; then\n        echo -e \"${YELLOW}⚠️  Detected externally-managed environment, retrying with --break-system-packages...${NC}\"\n        set +e\n        $PYTHON_CMD -m pip install -r requirements.txt --break-system-packages $PIP_MIRROR \\\n            && echo -e \"${GREEN}✅ Dependencies installed successfully (system packages override applied).${NC}\" \\\n            || echo -e \"${YELLOW}⚠️  Some dependencies may have issues, but continuing...${NC}\"\n        set -e\n    else\n        echo -e \"${YELLOW}⚠️  Installation had errors, but continuing...${NC}\"\n    fi\n\n    rm -f /tmp/pip_install.log\n}\n\n# Select model\nselect_model() {\n    echo \"\"\n    echo -e \"${CYAN}${BOLD}=========================================${NC}\"\n    echo -e \"${CYAN}${BOLD}   Select AI Model${NC}\"\n    echo -e \"${CYAN}${BOLD}=========================================${NC}\"\n    echo -e \"${YELLOW}1) MiniMax (MiniMax-M2.7, MiniMax-M2.5, etc.)${NC}\"\n  
  echo -e \"${YELLOW}2) Zhipu AI (glm-5-turbo, glm-5, etc.)${NC}\"\n    echo -e \"${YELLOW}3) Kimi (kimi-k2.5, kimi-k2, etc.)${NC}\"\n    echo -e \"${YELLOW}4) Doubao (doubao-seed-2-0-code-preview-260215, etc.)${NC}\"\n    echo -e \"${YELLOW}5) Qwen (qwen3.5-plus, qwen3-max, qwq-plus, etc.)${NC}\"\n    echo -e \"${YELLOW}6) Claude (claude-sonnet-4-6, claude-opus-4-6, etc.)${NC}\"\n    echo -e \"${YELLOW}7) Gemini (gemini-3.1-flash-lite-preview, gemini-3.1-pro-preview, etc.)${NC}\"\n    echo -e \"${YELLOW}8) OpenAI GPT (gpt-5.4, gpt-5.2, gpt-4.1, etc.)${NC}\"\n    echo -e \"${YELLOW}9) LinkAI (access multiple models via one API)${NC}\"\n    echo \"\"\n    \n    while true; do\n        read -p \"Enter your choice [press Enter for default: 1 - MiniMax]: \" model_choice\n        model_choice=${model_choice:-1}\n        case \"$model_choice\" in\n            1|2|3|4|5|6|7|8|9)\n                break\n                ;;\n            *)\n                echo -e \"${RED}Invalid choice. Please enter 1-9.${NC}\"\n                ;;\n        esac\n    done\n}\n\n# Read model config: provider, default_model, key_variable_name\nread_model_config() {\n    local provider=$1 default_model=$2 key_var=$3\n    echo -e \"${GREEN}Configuring ${provider}...${NC}\"\n    read -p \"Enter ${provider} API Key: \" _api_key\n    read -p \"Enter model name [press Enter for default: ${default_model}]: \" model_name\n    model_name=${model_name:-$default_model}\n    MODEL_NAME=\"$model_name\"\n    eval \"${key_var}=\\\"\\$_api_key\\\"\"\n}\n\n# Read optional API base URL\nread_api_base() {\n    local base_var=$1 default_url=$2\n    read -p \"Enter API Base URL [press Enter for default: ${default_url}]: \" api_base\n    api_base=${api_base:-$default_url}\n    eval \"${base_var}=\\\"\\$api_base\\\"\"\n}\n\n# Configure model\nconfigure_model() {\n    case \"$model_choice\" in\n        1) read_model_config \"MiniMax\" \"MiniMax-M2.7\" \"MINIMAX_KEY\" ;;\n        2) read_model_config \"Zhipu AI\" \"glm-5-turbo\" \"ZHIPU_KEY\" ;;\n        3) read_model_config \"Kimi (Moonshot)\" \"kimi-k2.5\" \"MOONSHOT_KEY\" ;;\n        4) read_model_config \"Doubao (Volcengine Ark)\" \"doubao-seed-2-0-code-preview-260215\" \"ARK_KEY\" ;;\n        5) read_model_config \"Qwen (DashScope)\" \"qwen3.5-plus\" \"DASHSCOPE_KEY\" ;;\n        6)\n            read_model_config \"Claude\" \"claude-sonnet-4-6\" \"CLAUDE_KEY\"\n            read_api_base \"CLAUDE_BASE\" \"https://api.anthropic.com/v1\"\n            ;;\n        7)\n            read_model_config \"Gemini\" \"gemini-3.1-pro-preview\" \"GEMINI_KEY\"\n            read_api_base \"GEMINI_BASE\" \"https://generativelanguage.googleapis.com\"\n            ;;\n        8)\n            read_model_config \"OpenAI GPT\" \"gpt-5.4\" \"OPENAI_KEY\"\n            read_api_base \"OPENAI_BASE\" \"https://api.openai.com/v1\"\n            ;;\n        9)\n            read_model_config \"LinkAI\" \"MiniMax-M2.7\" \"LINKAI_KEY\"\n            USE_LINKAI=\"true\"\n            ;;\n    esac\n}\n\n# Select channel\nselect_channel() {\n    echo \"\"\n    echo -e \"${CYAN}${BOLD}=========================================${NC}\"\n    echo -e \"${CYAN}${BOLD}   Select Communication Channel${NC}\"\n    echo -e \"${CYAN}${BOLD}=========================================${NC}\"\n    echo -e \"${YELLOW}1) Feishu (飞书)${NC}\"\n    echo -e \"${YELLOW}2) DingTalk (钉钉)${NC}\"\n    echo -e \"${YELLOW}3) WeCom Bot (企微智能机器人)${NC}\"\n    echo -e \"${YELLOW}4) QQ (QQ 机器人)${NC}\"\n    echo -e \"${YELLOW}5) WeCom App (企微自建应用)${NC}\"\n    
echo -e \"${YELLOW}6) Web (网页)${NC}\"\n    echo \"\"\n    \n    while true; do\n        read -p \"Enter your choice [press Enter for default: 1 - Feishu]: \" channel_choice\n        channel_choice=${channel_choice:-1}\n        case \"$channel_choice\" in\n            1|2|3|4|5|6)\n                break\n                ;;\n            *)\n                echo -e \"${RED}Invalid choice. Please enter 1-6.${NC}\"\n                ;;\n        esac\n    done\n}\n\n# Configure channel\nconfigure_channel() {\n    case \"$channel_choice\" in\n        1)\n            # Feishu (WebSocket mode)\n            CHANNEL_TYPE=\"feishu\"\n            echo -e \"${GREEN}Configure Feishu (WebSocket mode)...${NC}\"\n            read -p \"Enter Feishu App ID: \" fs_app_id\n            read -p \"Enter Feishu App Secret: \" fs_app_secret\n            \n            FEISHU_APP_ID=\"$fs_app_id\"\n            FEISHU_APP_SECRET=\"$fs_app_secret\"\n            FEISHU_EVENT_MODE=\"websocket\"\n            ACCESS_INFO=\"Feishu channel configured (WebSocket mode)\"\n            ;;\n        2)\n            # DingTalk\n            CHANNEL_TYPE=\"dingtalk\"\n            echo -e \"${GREEN}Configure DingTalk...${NC}\"\n            read -p \"Enter DingTalk Client ID: \" dt_client_id\n            read -p \"Enter DingTalk Client Secret: \" dt_client_secret\n            \n            DT_CLIENT_ID=\"$dt_client_id\"\n            DT_CLIENT_SECRET=\"$dt_client_secret\"\n            ACCESS_INFO=\"DingTalk channel configured\"\n            ;;\n        3)\n            # WeCom Bot\n            CHANNEL_TYPE=\"wecom_bot\"\n            echo -e \"${GREEN}Configure WeCom Bot...${NC}\"\n            read -p \"Enter WeCom Bot ID: \" wecom_bot_id\n            read -p \"Enter WeCom Bot Secret: \" wecom_bot_secret\n            \n            WECOM_BOT_ID=\"$wecom_bot_id\"\n            WECOM_BOT_SECRET=\"$wecom_bot_secret\"\n            ACCESS_INFO=\"WeCom Bot channel configured\"\n            ;;\n        4)\n            # QQ\n            CHANNEL_TYPE=\"qq\"\n            echo -e \"${GREEN}Configure QQ Bot...${NC}\"\n            read -p \"Enter QQ App ID: \" qq_app_id\n            read -p \"Enter QQ App Secret: \" qq_app_secret\n            \n            QQ_APP_ID=\"$qq_app_id\"\n            QQ_APP_SECRET=\"$qq_app_secret\"\n            ACCESS_INFO=\"QQ Bot channel configured\"\n            ;;\n        5)\n            # WeCom App\n            CHANNEL_TYPE=\"wechatcom_app\"\n            echo -e \"${GREEN}Configure WeCom App...${NC}\"\n            read -p \"Enter WeChat Corp ID: \" corp_id\n            read -p \"Enter WeChat Com App Token: \" com_token\n            read -p \"Enter WeChat Com App Secret: \" com_secret\n            read -p \"Enter WeChat Com App Agent ID: \" com_agent_id\n            read -p \"Enter WeChat Com App AES Key: \" com_aes_key\n            read -p \"Enter WeChat Com App Port [press Enter for default: 9898]: \" com_port\n            com_port=${com_port:-9898}\n            \n            WECHATCOM_CORP_ID=\"$corp_id\"\n            WECHATCOM_TOKEN=\"$com_token\"\n            WECHATCOM_SECRET=\"$com_secret\"\n            WECHATCOM_AGENT_ID=\"$com_agent_id\"\n            WECHATCOM_AES_KEY=\"$com_aes_key\"\n            WECHATCOM_PORT=\"$com_port\"\n            ACCESS_INFO=\"WeCom App channel configured on port ${com_port}\"\n            ;;\n        6)\n            # Web\n            CHANNEL_TYPE=\"web\"\n            read -p \"Enter web port [press Enter for default: 9899]: \" web_port\n            web_port=${web_port:-9899}\n            
\n            WEB_PORT=\"$web_port\"\n            ACCESS_INFO=\"Web interface will be available at: http://localhost:${web_port}/chat\"\n            ;;\n    esac\n}\n\n# Generate config file\ncreate_config_file() {\n    echo -e \"${GREEN}📝 Generating config.json...${NC}\"\n\n    CHANNEL_TYPE=\"$CHANNEL_TYPE\" \\\n    MODEL_NAME=\"$MODEL_NAME\" \\\n    OPENAI_KEY=\"${OPENAI_KEY:-}\" \\\n    OPENAI_BASE=\"${OPENAI_BASE:-https://api.openai.com/v1}\" \\\n    CLAUDE_KEY=\"${CLAUDE_KEY:-}\" \\\n    CLAUDE_BASE=\"${CLAUDE_BASE:-https://api.anthropic.com/v1}\" \\\n    GEMINI_KEY=\"${GEMINI_KEY:-}\" \\\n    GEMINI_BASE=\"${GEMINI_BASE:-https://generativelanguage.googleapis.com}\" \\\n    ZHIPU_KEY=\"${ZHIPU_KEY:-}\" \\\n    MOONSHOT_KEY=\"${MOONSHOT_KEY:-}\" \\\n    ARK_KEY=\"${ARK_KEY:-}\" \\\n    DASHSCOPE_KEY=\"${DASHSCOPE_KEY:-}\" \\\n    MINIMAX_KEY=\"${MINIMAX_KEY:-}\" \\\n    USE_LINKAI=\"${USE_LINKAI:-false}\" \\\n    LINKAI_KEY=\"${LINKAI_KEY:-}\" \\\n    FEISHU_APP_ID=\"${FEISHU_APP_ID:-}\" \\\n    FEISHU_APP_SECRET=\"${FEISHU_APP_SECRET:-}\" \\\n    WEB_PORT=\"${WEB_PORT:-}\" \\\n    DT_CLIENT_ID=\"${DT_CLIENT_ID:-}\" \\\n    DT_CLIENT_SECRET=\"${DT_CLIENT_SECRET:-}\" \\\n    WECOM_BOT_ID=\"${WECOM_BOT_ID:-}\" \\\n    WECOM_BOT_SECRET=\"${WECOM_BOT_SECRET:-}\" \\\n    QQ_APP_ID=\"${QQ_APP_ID:-}\" \\\n    QQ_APP_SECRET=\"${QQ_APP_SECRET:-}\" \\\n    WECHATCOM_CORP_ID=\"${WECHATCOM_CORP_ID:-}\" \\\n    WECHATCOM_TOKEN=\"${WECHATCOM_TOKEN:-}\" \\\n    WECHATCOM_SECRET=\"${WECHATCOM_SECRET:-}\" \\\n    WECHATCOM_AGENT_ID=\"${WECHATCOM_AGENT_ID:-}\" \\\n    WECHATCOM_AES_KEY=\"${WECHATCOM_AES_KEY:-}\" \\\n    WECHATCOM_PORT=\"${WECHATCOM_PORT:-}\" \\\n    $PYTHON_CMD -c \"\nimport json, os\ne = os.environ.get\nbase = {\n    'channel_type': e('CHANNEL_TYPE'),\n    'model': e('MODEL_NAME'),\n    'open_ai_api_key': e('OPENAI_KEY', ''),\n    'open_ai_api_base': e('OPENAI_BASE'),\n    'claude_api_key': e('CLAUDE_KEY', ''),\n    'claude_api_base': e('CLAUDE_BASE'),\n    'gemini_api_key': e('GEMINI_KEY', ''),\n    'gemini_api_base': e('GEMINI_BASE'),\n    'zhipu_ai_api_key': e('ZHIPU_KEY', ''),\n    'moonshot_api_key': e('MOONSHOT_KEY', ''),\n    'ark_api_key': e('ARK_KEY', ''),\n    'dashscope_api_key': e('DASHSCOPE_KEY', ''),\n    'minimax_api_key': e('MINIMAX_KEY', ''),\n    'voice_to_text': 'openai',\n    'text_to_voice': 'openai',\n    'voice_reply_voice': False,\n    'speech_recognition': True,\n    'group_speech_recognition': False,\n    'use_linkai': e('USE_LINKAI') == 'true',\n    'linkai_api_key': e('LINKAI_KEY', ''),\n    'linkai_app_code': '',\n    'agent': True,\n    'agent_max_context_tokens': 40000,\n    'agent_max_context_turns': 30,\n    'agent_max_steps': 15,\n}\nchannel_map = {\n    'feishu': {'feishu_app_id': 'FEISHU_APP_ID', 'feishu_app_secret': 'FEISHU_APP_SECRET'},\n    'web': {'web_port': ('WEB_PORT', int)},\n    'dingtalk': {'dingtalk_client_id': 'DT_CLIENT_ID', 'dingtalk_client_secret': 'DT_CLIENT_SECRET'},\n    'wecom_bot': {'wecom_bot_id': 'WECOM_BOT_ID', 'wecom_bot_secret': 'WECOM_BOT_SECRET'},\n    'qq': {'qq_app_id': 'QQ_APP_ID', 'qq_app_secret': 'QQ_APP_SECRET'},\n    'wechatcom_app': {'wechatcom_corp_id': 'WECHATCOM_CORP_ID', 'wechatcomapp_token': 'WECHATCOM_TOKEN', 'wechatcomapp_secret': 'WECHATCOM_SECRET', 'wechatcomapp_agent_id': 'WECHATCOM_AGENT_ID', 'wechatcomapp_aes_key': 'WECHATCOM_AES_KEY', 'wechatcomapp_port': ('WECHATCOM_PORT', int)},\n}\nch = e('CHANNEL_TYPE')\nfor key, spec in channel_map.get(ch, {}).items():\n    if isinstance(spec, tuple):\n        
env_name, conv = spec\n        base[key] = conv(e(env_name))\n    else:\n        base[key] = e(spec, '')\nwith open('config.json', 'w') as f:\n    json.dump(base, f, indent=2, ensure_ascii=False)\n\"\n\n    echo -e \"${GREEN}✅ Configuration file created successfully.${NC}\"\n}\n\n# Start project\nstart_project() {\n    echo \"\"\n    echo -e \"${GREEN}${EMOJI_ROCKET} Starting CowAgent...${NC}\"\n    sleep 1\n\n    if [ ! -f \"${BASE_DIR}/nohup.out\" ]; then\n        touch \"${BASE_DIR}/nohup.out\"\n    fi\n\n    OS_TYPE=$(uname)\n\n    if [[ \"$OS_TYPE\" == \"Linux\" ]]; then\n        # Linux: use setsid to detach from terminal\n        nohup setsid $PYTHON_CMD \"${BASE_DIR}/app.py\" > \"${BASE_DIR}/nohup.out\" 2>&1 &\n        echo -e \"${GREEN}${EMOJI_COW} CowAgent started on Linux (using $PYTHON_CMD)${NC}\"\n    elif [[ \"$OS_TYPE\" == \"Darwin\" ]]; then\n        # macOS: use nohup to prevent SIGHUP\n        nohup $PYTHON_CMD \"${BASE_DIR}/app.py\" > \"${BASE_DIR}/nohup.out\" 2>&1 &\n        echo -e \"${GREEN}${EMOJI_COW} CowAgent started on macOS (using $PYTHON_CMD)${NC}\"\n    else\n        echo -e \"${RED}❌ Unsupported OS: ${OS_TYPE}${NC}\"\n        exit 1\n    fi\n\n    sleep 2\n    echo \"\"\n    echo -e \"${CYAN}${BOLD}=========================================${NC}\"\n    echo -e \"${GREEN}${EMOJI_CHECK} CowAgent is now running in background!${NC}\"\n    echo -e \"${GREEN}${EMOJI_CHECK} Process will continue after closing terminal.${NC}\"\n    echo -e \"${CYAN}$ACCESS_INFO${NC}\"\n    echo \"\"\n    echo -e \"${CYAN}${BOLD}Management Commands:${NC}\"\n    echo -e \"  ${GREEN}./run.sh stop${NC}       Stop the service\"\n    echo -e \"  ${GREEN}./run.sh restart${NC}    Restart the service\"\n    echo -e \"  ${GREEN}./run.sh status${NC}     Check status\"\n    echo -e \"  ${GREEN}./run.sh logs${NC}       View logs\"\n    echo -e \"  ${GREEN}./run.sh update${NC}     Update and restart\"\n    echo -e \"${CYAN}${BOLD}=========================================${NC}\"\n    echo \"\"\n    \n    echo -e \"${YELLOW}Showing recent logs (Ctrl+C to exit, agent keeps running):${NC}\"\n    sleep 2\n    tail -n 30 -f \"${BASE_DIR}/nohup.out\"\n}\n\n# Show usage\nshow_usage() {\n    echo -e \"${CYAN}${BOLD}=========================================${NC}\"\n    echo -e \"${CYAN}${BOLD}   ${EMOJI_COW} CowAgent Management Script${NC}\"\n    echo -e \"${CYAN}${BOLD}=========================================${NC}\"\n    echo \"\"\n    echo -e \"${YELLOW}Usage:${NC}\"\n    echo -e \"  ${GREEN}./run.sh${NC}               ${CYAN}# Install/Configure project${NC}\"\n    echo -e \"  ${GREEN}./run.sh <command>${NC}     ${CYAN}# Execute management command${NC}\"\n    echo \"\"\n    echo -e \"${YELLOW}Commands:${NC}\"\n    echo -e \"  ${GREEN}start${NC}      Start the service\"\n    echo -e \"  ${GREEN}stop${NC}       Stop the service\"\n    echo -e \"  ${GREEN}restart${NC}    Restart the service\"\n    echo -e \"  ${GREEN}status${NC}     Check service status\"\n    echo -e \"  ${GREEN}logs${NC}       View logs (tail -f)\"\n    echo -e \"  ${GREEN}config${NC}     Reconfigure project\"\n    echo -e \"  ${GREEN}update${NC}     Update and restart\"\n    echo \"\"\n    echo -e \"${YELLOW}Examples:${NC}\"\n    echo -e \"  ${GREEN}./run.sh start${NC}\"\n    echo -e \"  ${GREEN}./run.sh logs${NC}\"\n    echo -e \"  ${GREEN}./run.sh status${NC}\"\n    echo -e \"${CYAN}${BOLD}=========================================${NC}\"\n}\n\n# Ensure PYTHON_CMD is set\nensure_python_cmd() {\n    if [ -z \"$PYTHON_CMD\" ]; then\n        
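# PYTHON_CMD may be unset for management commands; redetect it or fall back to python3\n        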
detect_python_command 2>/dev/null || PYTHON_CMD=\"python3\"\n    fi\n}\n\n# Get service PID (empty string if not running)\nget_pid() {\n    ensure_python_cmd\n    ps ax | grep -i app.py | grep \"${BASE_DIR}\" | grep \"$PYTHON_CMD\" | grep -v grep | awk '{print $1}'\n}\n\n# Check if service is running\nis_running() {\n    [ -n \"$(get_pid)\" ]\n}\n\n# Start service\ncmd_start() {\n    # Check if config.json exists\n    if [ ! -f \"${BASE_DIR}/config.json\" ]; then\n        echo -e \"${RED}${EMOJI_CROSS} config.json not found${NC}\"\n        echo -e \"${YELLOW}Please run './run.sh' to configure first${NC}\"\n        exit 1\n    fi\n    \n    if is_running; then\n        echo -e \"${YELLOW}${EMOJI_WARN} CowAgent is already running (PID: $(get_pid))${NC}\"\n        echo -e \"${YELLOW}Use './run.sh restart' to restart${NC}\"\n        return\n    fi\n    \n    check_python_version\n    start_project\n}\n\n# Stop service\ncmd_stop() {\n    echo -e \"${GREEN}${EMOJI_STOP} Stopping CowAgent...${NC}\"\n    \n    if ! is_running; then\n        echo -e \"${YELLOW}${EMOJI_WARN} CowAgent is not running${NC}\"\n        return\n    fi\n    \n    pid=$(get_pid)\n    echo -e \"${GREEN}Found running process (PID: ${pid})${NC}\"\n    \n    kill ${pid}\n    sleep 3\n    \n    if ps -p ${pid} > /dev/null 2>&1; then\n        echo -e \"${YELLOW}⚠️  Process not stopped, forcing termination...${NC}\"\n        kill -9 ${pid}\n    fi\n    \n    echo -e \"${GREEN}${EMOJI_CHECK} CowAgent stopped${NC}\"\n}\n\n# Restart service\ncmd_restart() {\n    cmd_stop\n    sleep 1\n    cmd_start\n}\n\n# Check status\ncmd_status() {\n    echo -e \"${CYAN}${BOLD}=========================================${NC}\"\n    echo -e \"${CYAN}${BOLD}   ${EMOJI_COW} CowAgent Status${NC}\"\n    echo -e \"${CYAN}${BOLD}=========================================${NC}\"\n    \n    if is_running; then\n        pid=$(get_pid)\n        echo -e \"${GREEN}Status:${NC} ✅ Running\"\n        echo -e \"${GREEN}PID:${NC}    ${pid}\"\n        if [ -f \"${BASE_DIR}/nohup.out\" ]; then\n            echo -e \"${GREEN}Logs:${NC}   ${BASE_DIR}/nohup.out\"\n        fi\n    else\n        echo -e \"${YELLOW}Status:${NC} ⭐ Stopped\"\n    fi\n    \n    if [ -f \"${BASE_DIR}/config.json\" ]; then\n        model=$(grep -o '\"model\"[[:space:]]*:[[:space:]]*\"[^\"]*\"' \"${BASE_DIR}/config.json\" | cut -d'\"' -f4)\n        channel=$(grep -o '\"channel_type\"[[:space:]]*:[[:space:]]*\"[^\"]*\"' \"${BASE_DIR}/config.json\" | cut -d'\"' -f4)\n        echo -e \"${GREEN}Model:${NC}  ${model}\"\n        echo -e \"${GREEN}Channel:${NC} ${channel}\"\n    fi\n    \n    echo -e \"${CYAN}${BOLD}=========================================${NC}\"\n}\n\n# View logs\ncmd_logs() {\n    if [ -f \"${BASE_DIR}/nohup.out\" ]; then\n        echo -e \"${YELLOW}Viewing logs (Ctrl+C to exit):${NC}\"\n        tail -f \"${BASE_DIR}/nohup.out\"\n    else\n        echo -e \"${RED}❌ Log file not found: ${BASE_DIR}/nohup.out${NC}\"\n    fi\n}\n\n# Reconfigure\ncmd_config() {\n    echo -e \"${YELLOW}${EMOJI_WRENCH} Reconfiguring CowAgent...${NC}\"\n    \n    if [ -f \"${BASE_DIR}/config.json\" ]; then\n        backup_file=\"${BASE_DIR}/config.json.backup.$(date +%s)\"\n        cp \"${BASE_DIR}/config.json\" \"${backup_file}\"\n        echo -e \"${GREEN}✅ Backed up config to: ${backup_file}${NC}\"\n    fi\n    \n    check_python_version\n    install_dependencies\n    select_model\n    configure_model\n    select_channel\n    configure_channel\n    create_config_file\n    \n    echo \"\"\n    read -p 
\"Restart service now? [Y/n]: \" restart_now\n    if [[ ! $restart_now == [Nn]* ]]; then\n        cmd_restart\n    fi\n}\n\n# Update project\ncmd_update() {\n    echo -e \"${GREEN}${EMOJI_WRENCH} Updating CowAgent...${NC}\"\n    cd \"${BASE_DIR}\"\n    \n    # Stop service\n    if is_running; then\n        cmd_stop\n    fi\n    \n    # Update code\n    if [ -d .git ]; then\n        echo -e \"${GREEN}🔄 Pulling latest code...${NC}\"\n        git pull || {\n            echo -e \"${YELLOW}⚠️  GitHub failed, trying Gitee...${NC}\"\n            git remote set-url origin https://gitee.com/zhayujie/chatgpt-on-wechat.git\n            git pull\n        }\n    else\n        echo -e \"${YELLOW}⚠️  Not a git repository, skipping code update${NC}\"\n    fi\n    \n    # Reinstall dependencies\n    check_python_version\n    install_dependencies\n    \n    # Restart service\n    cmd_start\n}\n\n# Installation mode\ninstall_mode() {\n    clear\n    echo -e \"${CYAN}${BOLD}=========================================${NC}\"\n    echo -e \"${CYAN}${BOLD}   ${EMOJI_COW} CowAgent Installation${NC}\"\n    echo -e \"${CYAN}${BOLD}=========================================${NC}\"\n    echo \"\"\n    sleep 1\n\n    if [ \"$IS_PROJECT_DIR\" = true ]; then\n        echo -e \"${GREEN}✅ Detected existing project directory.${NC}\"\n        \n        if [ -f \"${BASE_DIR}/config.json\" ]; then\n            echo -e \"${GREEN}✅ Project already configured${NC}\"\n            echo \"\"\n            show_usage\n            return\n        fi\n        \n        echo -e \"${YELLOW}📝 No config.json found. Let's configure your project!${NC}\"\n        echo \"\"\n        \n        # Project directory already exists, skip clone\n        check_python_version\n    else\n        # Remote install mode, need to clone project\n        check_python_version\n        clone_project\n    fi\n    \n    # Install dependencies and configure\n    install_dependencies\n    select_model\n    configure_model\n    select_channel\n    configure_channel\n    create_config_file\n    \n    echo \"\"\n    read -p \"Start CowAgent now? [Y/n]: \" start_now\n    if [[ ! 
$start_now == [Nn]* ]]; then\n        start_project\n    else\n        echo -e \"${GREEN}✅ Installation complete!${NC}\"\n        echo \"\"\n        echo -e \"${CYAN}${BOLD}To start manually:${NC}\"\n        echo -e \"${YELLOW}  cd ${BASE_DIR}${NC}\"\n        echo -e \"${YELLOW}  ./run.sh start${NC}\"\n        echo \"\"\n        echo -e \"${CYAN}Or use nohup directly:${NC}\"\n        echo -e \"${YELLOW}  nohup $PYTHON_CMD app.py > nohup.out 2>&1 & tail -f nohup.out${NC}\"\n    fi\n}\n\n# Require running inside the project directory\nrequire_project_dir() {\n    if [ \"$IS_PROJECT_DIR\" = false ]; then\n        echo -e \"${RED}${EMOJI_CROSS} Must run in project directory${NC}\"\n        exit 1\n    fi\n}\n\n# Main function\nmain() {\n    case \"$1\" in\n        start|stop|restart|status|logs|config|update)\n            require_project_dir\n            ;;\n    esac\n\n    case \"$1\" in\n        start)   cmd_start ;;\n        stop)    cmd_stop ;;\n        restart) cmd_restart ;;\n        status)  cmd_status ;;\n        logs)    cmd_logs ;;\n        config)  cmd_config ;;\n        update)  cmd_update ;;\n        help|--help|-h)\n            show_usage\n            ;;\n        \"\")\n            install_mode\n            ;;\n        *)\n            echo -e \"${RED}${EMOJI_CROSS} Unknown command: $1${NC}\"\n            echo \"\"\n            show_usage\n            exit 1\n            ;;\n    esac\n}\n\n# Execute main function\nmain \"$@\"\n"
  },
  {
    "path": "scripts/shutdown.sh",
    "content": "#!/bin/bash\n\n#关闭服务\ncd `dirname $0`/..\nexport BASE_DIR=`pwd`\npid=`ps ax | grep -i app.py | grep \"${BASE_DIR}\" | grep python3 | grep -v grep | awk '{print $1}'`\nif [ -z \"$pid\" ] ; then\n        echo \"No chatgpt-on-wechat running.\"\n        exit -1;\nfi\n\necho \"The chatgpt-on-wechat(${pid}) is running...\"\n\nkill ${pid}\n\necho \"Send shutdown request to chatgpt-on-wechat(${pid}) OK\"\n"
  },
  {
    "path": "scripts/start.sh",
    "content": "#!/bin/bash\n#后台运行Chat_on_webchat执行脚本\n\ncd `dirname $0`/..\nexport BASE_DIR=`pwd`\necho $BASE_DIR\n\n# check the nohup.out log output file\nif [ ! -f \"${BASE_DIR}/nohup.out\" ]; then\n  touch \"${BASE_DIR}/nohup.out\"\necho \"create file  ${BASE_DIR}/nohup.out\"\nfi\n\nnohup python3 \"${BASE_DIR}/app.py\" & tail -f \"${BASE_DIR}/nohup.out\"\n\necho \"Chat_on_webchat is starting，you can check the ${BASE_DIR}/nohup.out\"\n"
  },
  {
    "path": "scripts/tout.sh",
    "content": "#!/bin/bash\n#打开日志\n\ncd `dirname $0`/..\nexport BASE_DIR=`pwd`\necho $BASE_DIR\n\n# check the nohup.out log output file\nif [ ! -f \"${BASE_DIR}/nohup.out\" ]; then\n   echo \"No file  ${BASE_DIR}/nohup.out\"\n   exit -1;\nfi\n\ntail -f \"${BASE_DIR}/nohup.out\"\n"
  },
  {
    "path": "skills/README.md",
    "content": "# Skills Directory\n\nThis directory contains skills for the COW agent system. Skills are markdown files that provide specialized instructions for specific tasks.\n\n## What are Skills?\n\nSkills are reusable instruction sets that help the agent perform specific tasks more effectively. Each skill:\n\n- Provides context-specific guidance\n- Documents best practices\n- Includes examples and usage patterns\n- Can have requirements (binaries, environment variables, etc.)\n\n## Skill Structure\n\nEach skill is a markdown file (`SKILL.md`) in its own directory with frontmatter:\n\n```markdown\n---\nname: skill-name\ndescription: Brief description of what the skill does\nmetadata: {\"cow\":{\"emoji\":\"🎯\",\"requires\":{\"bins\":[\"tool\"]}}}\n---\n\n# Skill Name\n\nDetailed instructions and examples...\n```\n\n## Available Skills\n\n- **calculator**: Mathematical calculations and expressions\n- **web-search**: Search the web for current information\n- **file-operations**: Read, write, and manage files\n\n## Creating Custom Skills\n\nTo create a new skill:\n\n1. Create a directory: `skills/my-skill/`\n2. Create `SKILL.md` with frontmatter and content\n3. Restart the agent to load the new skill\n\n### Frontmatter Fields\n\n- `name`: Skill name (must match directory name)\n- `description`: Brief description (required)\n- `metadata`: JSON object with additional configuration\n  - `emoji`: Display emoji\n  - `always`: Always include this skill (default: false)\n  - `primaryEnv`: Primary environment variable needed\n  - `os`: Supported operating systems (e.g., [\"darwin\", \"linux\"])\n  - `requires`: Requirements object\n    - `bins`: Required binaries\n    - `env`: Required environment variables\n    - `config`: Required config paths\n- `disable-model-invocation`: If true, skill won't be shown to model (default: false)\n- `user-invocable`: If false, users can't invoke directly (default: true)\n\n### Example Skill\n\n```markdown\n---\nname: my-tool\ndescription: Use my-tool to process data\nmetadata: {\"cow\":{\"emoji\":\"🔧\",\"requires\":{\"bins\":[\"my-tool\"],\"env\":[\"MY_TOOL_API_KEY\"]}}}\n---\n\n# My Tool Skill\n\nUse this skill when you need to process data with my-tool.\n\n## Prerequisites\n\n- Install my-tool: `pip install my-tool`\n- Set `MY_TOOL_API_KEY` environment variable\n\n## Usage\n\n\\`\\`\\`python\n# Example usage\nmy_tool_command(\"input data\")\n\\`\\`\\`\n```\n\n## Skill Loading\n\nSkills are loaded from multiple locations with precedence:\n\n1. **Workspace skills** (highest): `workspace/skills/` - Project-specific skills\n2. **Managed skills**: `~/.cow/skills/` - User-installed skills\n3. **Bundled skills** (lowest): Built-in skills\n\nSkills with the same name in higher-precedence locations override lower ones.\n\n## Skill Requirements\n\nSkills can specify requirements that determine when they're available:\n\n- **OS requirements**: Only load on specific operating systems\n- **Binary requirements**: Only load if required binaries are installed\n- **Environment variables**: Only load if required env vars are set\n- **Config requirements**: Only load if config values are set\n\n## Best Practices\n\n1. **Clear descriptions**: Write clear, concise skill descriptions\n2. **Include examples**: Provide practical usage examples\n3. **Document prerequisites**: List all requirements clearly\n4. **Use appropriate metadata**: Set correct requirements and flags\n5. 
**Keep skills focused**: Each skill should have a single, clear purpose\n\n## Workspace Skills\n\nYou can create workspace-specific skills in your agent's workspace:\n\n```\nworkspace/\n  skills/\n    custom-skill/\n      SKILL.md\n```\n\nThese skills are only available when working in that specific workspace.\n"
  },
  {
    "path": "skills/linkai-agent/README.md",
    "content": "# LinkAI Agent Skill\n\n这个 skill 允许你调用 LinkAI 平台上的多个应用(App)和工作流(Workflow)，通过简单的配置即可集成多个智能体能力。\n\n## 特性\n\n- ✅ **多应用支持** - 在一个配置文件中管理多个 LinkAI 应用/工作流\n- ✅ **动态加载** - skill 系统加载时自动从 `config.json` 读取应用列表\n- ✅ **自动技能描述** - 所有配置的应用会自动添加到技能描述中\n- ✅ **模型切换** - 可以为每个请求指定不同的模型\n- ✅ **知识库集成** - 支持应用绑定的知识库\n- ✅ **插件能力** - 支持应用启用的各类插件\n- ✅ **工作流执行** - 支持执行复杂的多步骤工作流\n\n## 快速开始\n\n### 1. 配置 API Key\n\n```bash\nenv_config(action=\"set\", key=\"LINKAI_API_KEY\", value=\"your-linkai-api-key\")\n```\n\n获取 API Key: https://link-ai.tech/console/interface\n\n### 2. 配置应用列表\n\n将 `config.json.template` 复制为 `config.json`：\n\n```bash\ncp config.json.template config.json\n```\n\n编辑 `config.json`，添加你的应用/工作流：\n\n```json\n{\n  \"apps\": [\n    {\n      \"app_code\": \"G7z6vKwp\",\n      \"app_name\": \"通用助手\",\n      \"app_description\": \"通用AI助手，可以回答各类问题\"\n    },\n    {\n      \"app_code\": \"your_kb_app\",\n      \"app_name\": \"产品文档助手\",\n      \"app_description\": \"基于产品文档知识库的问答助手\"\n    },\n    {\n      \"app_code\": \"your_workflow\",\n      \"app_name\": \"数据分析工作流\",\n      \"app_description\": \"执行数据清洗、分析和可视化的完整工作流\"\n    }\n  ]\n}\n```\n\n**注意：** 修改 `config.json` 后，Agent 在下次加载技能时会自动读取新配置。\n\n### 3. 调用应用\n\n```bash\nbash(command='curl -sS --max-time 120 -X POST \"https://api.link-ai.tech/v1/chat/completions\" -H \"Content-Type: application/json\" -H \"Authorization: Bearer $LINKAI_API_KEY\" -d \"{\\\"app_code\\\":\\\"G7z6vKwp\\\",\\\"messages\\\":[{\\\"role\\\":\\\"user\\\",\\\"content\\\":\\\"What is artificial intelligence?\\\"}],\\\"stream\\\":false}\"', timeout=130)\n```\n\n## 使用示例\n\n### 基础调用\n\n```bash\n# 调用默认模型 (通过 bash + curl)\nbash(command='curl -sS --max-time 120 -X POST \"https://api.link-ai.tech/v1/chat/completions\" -H \"Content-Type: application/json\" -H \"Authorization: Bearer $LINKAI_API_KEY\" -d \"{\\\"app_code\\\":\\\"G7z6vKwp\\\",\\\"messages\\\":[{\\\"role\\\":\\\"user\\\",\\\"content\\\":\\\"解释一下量子计算\\\"}],\\\"stream\\\":false}\"', timeout=130)\n```\n\n### 指定模型\n\n在 JSON body 中添加 `model` 字段：\n\n```json\n{\n  \"app_code\": \"G7z6vKwp\",\n  \"model\": \"LinkAI-4.1\",\n  \"messages\": [{\"role\": \"user\", \"content\": \"写一篇关于AI的文章\"}],\n  \"stream\": false\n}\n```\n\n### 调用工作流\n\n工作流的 app_code 从 LinkAI 控制台获取，调用方式与普通应用相同。\n\n## ⚠️ 重要提示\n\n### 超时配置\n\nLinkAI 应用（特别是视频/图片生成、复杂工作流）可能需要较长时间处理。在 curl 命令中加入 `--max-time 180`，并相应增加 bash 工具的 `timeout` 参数。\n\n## 配置说明\n\n### config.json 字段\n\n| 字段 | 类型 | 说明 |\n|------|------|------|\n| `app_code` | string | 应用或工作流的唯一标识码，从 LinkAI 控制台获取 |\n| `app_name` | string | 应用名称，会显示在技能描述中 |\n| `app_description` | string | 应用功能描述，帮助 Agent 理解何时使用该应用 |\n\n### 获取 app_code\n\n1. 登录 [LinkAI 控制台](https://link-ai.tech/console)\n2. 进入「应用管理」或「工作流管理」\n3. 选择要集成的应用/工作流\n4. 在应用详情页找到 `app_code`\n\n## 应用类型\n\n### 1. 普通应用\n\n配置了系统提示词和参数的标准对话应用，可以：\n- 设置角色和性格\n- 绑定知识库\n- 启用插件（图像识别、网页搜索、代码执行等）\n\n### 2. 知识库应用\n\n基于特定知识库的问答应用，适合：\n- 企业内部知识库\n- 产品文档问答\n- 客户支持\n\n### 3. 
工作流\n\n多步骤的自动化流程，可以：\n- 串联多个处理节点\n- 条件分支\n- 循环处理\n- 调用外部 API\n\n## 响应格式\n\n### 成功响应\n\nAPI 返回 OpenAI 兼容格式，从 `choices[0].message.content` 获取回复内容：\n\n```json\n{\n  \"choices\": [{\n    \"message\": {\n      \"role\": \"assistant\",\n      \"content\": \"人工智能（AI）是计算机科学的一个分支...\"\n    }\n  }],\n  \"usage\": {\n    \"prompt_tokens\": 10,\n    \"completion_tokens\": 150,\n    \"total_tokens\": 160\n  }\n}\n```\n\n### 错误响应\n\n```json\n{\n  \"error\": {\n    \"message\": \"应用不存在\",\n    \"code\": \"xxx\"\n  }\n}\n```\n\n## 常见错误\n\n### LINKAI_API_KEY environment variable is not set\n**原因：** 未配置 API Key  \n**解决：** 使用 `env_config` 工具设置 LINKAI_API_KEY\n\n### 应用不存在 (402)\n**原因：** app_code 不正确或应用已删除  \n**解决：** 检查 app_code 是否正确，确认应用存在\n\n### 无访问权限 (403)\n**原因：** 尝试访问他人的私有应用  \n**解决：** 确保应用是公开的或你是创建者\n\n### 账号积分额度不足 (406)\n**原因：** LinkAI 账户余额不足  \n**解决：** 前往控制台充值\n\n### 内容审核不通过 (409)\n**原因：** 请求或响应包含敏感内容  \n**解决：** 修改输入内容，避免敏感词\n\n## 技术实现\n\n### 自动技能描述生成\n\n当 skill 系统加载 `linkai-agent` 时，会自动：\n1. 读取 `config.json` 中的应用列表\n2. 将每个应用的 name 和 description 动态添加到技能描述中\n3. Agent 加载时会看到完整的应用列表\n\n这是在 `agent/skills/loader.py` 中实现的特殊处理。\n\n### 工作流程\n\n```\n用户配置 config.json\n  ↓\nAgent 启动/重新加载技能\n  ↓\nSkillLoader 检测到 linkai-agent\n  ↓\n动态读取 config.json\n  ↓\n生成包含所有应用描述的 description\n  ↓\nAgent 看到所有可用应用的完整信息\n  ↓\n用户请求触发\n  ↓\nAgent 根据描述选择合适的应用\n  ↓\n通过 bash + curl 调用 LinkAI API\n  ↓\nLinkAI API 处理并返回结果\n```\n\n## 最佳实践\n\n1. **清晰的描述** - 为每个应用写清晰、具体的描述，帮助 Agent 理解应用用途\n2. **合理分工** - 不同应用负责不同领域，避免功能重叠\n3. **无需重启** - 修改 config.json 后，Agent 下次加载技能时会自动更新\n4. **模型选择** - 根据任务复杂度选择合适的模型\n5. **知识库优化** - 为专业领域的应用绑定相关知识库\n\n## 扩展用法\n\n### 在 Agent 系统中使用\n\n当 Agent 系统加载这个 skill 时，会自动从 `config.json` 读取应用列表并生成描述：\n\n```\nCall LinkAI apps/workflows. 通用助手(G7z6vKwp: 通用AI助手，可以回答各类问题); 产品文档助手(kb_app_001: 基于产品文档知识库的问答助手); 数据分析工作流(wf_002: 执行数据清洗、分析和可视化的完整工作流)\n```\n\nAgent 会根据用户问题自动选择最合适的应用进行调用。\n\n## 相关链接\n\n- LinkAI 平台: https://link-ai.tech\n- API 文档: https://docs.link-ai.tech\n- 控制台: https://link-ai.tech/console\n- 模型列表: https://link-ai.tech/console/models\n- 应用广场: https://link-ai.tech/square\n\n## License\n\nPart of the chatgpt-on-wechat project.\n"
  },
  {
    "path": "skills/linkai-agent/SKILL.md",
    "content": "---\nname: linkai-agent\ndescription: Call LinkAI applications and workflows. Use bash with curl to invoke the chat completions API.\nhomepage: https://link-ai.tech\nmetadata:\n  emoji: 🤖\n  requires:\n    bins: [\"curl\"]\n    env: [\"LINKAI_API_KEY\"]\n---\n\n# LinkAI Agent\n\nCall LinkAI applications and workflows through the chat completions API. Available apps are loaded from config.json.\n\n## Setup\n\nThis skill requires a LinkAI API key.\n\n1. Get your API key from [LinkAI Console](https://link-ai.tech/console/interface)\n2. Set the environment variable: `export LINKAI_API_KEY=Link_xxxxxxxxxxxx` (or use env_config tool)\n\n## Configuration\n\n1. Copy `config.json.template` to `config.json`\n2. Add your apps/workflows in config.json. The skill description is auto-generated from this config when loaded.\n\n## Usage\n\nUse the bash tool with curl to call the API. **Prefer curl** to avoid encoding issues on Windows PowerShell.\n\n```bash\ncurl -X POST \"https://api.link-ai.tech/v1/chat/completions\" \\\n  -H \"Content-Type: application/json\" \\\n  -H \"Authorization: Bearer $LINKAI_API_KEY\" \\\n  -d '{\n    \"app_code\": \"<app_code>\",\n    \"messages\": [{\"role\": \"user\", \"content\": \"<question>\"}],\n    \"stream\": false\n  }'\n```\n\n**Optional parameters**:\n\n- Add `--max-time 120` to curl for long-running tasks (video/image generation)\n\n**On Windows cmd**: Use `%LINKAI_API_KEY%` instead of `$LINKAI_API_KEY`.\n\n**Example** (via bash tool):\n\n```bash\nbash(command='curl -sS --max-time 120 -X POST \"https://api.link-ai.tech/v1/chat/completions\" -H \"Content-Type: application/json\" -H \"Authorization: Bearer $LINKAI_API_KEY\" -d \"{\\\"app_code\\\":\\\"G7z6vKwp\\\",\\\"messages\\\":[{\\\"role\\\":\\\"user\\\",\\\"content\\\":\\\"What is AI?\\\"}],\\\"stream\\\":false}\"', timeout=130)\n```\n\n## Response\n\nSuccess (extract `choices[0].message.content` from JSON):\n\n```json\n{\n  \"choices\": [{\n    \"message\": {\n      \"role\": \"assistant\",\n      \"content\": \"AI stands for Artificial Intelligence...\"\n    }\n  }],\n  \"usage\": {\n    \"prompt_tokens\": 10,\n    \"completion_tokens\": 50,\n    \"total_tokens\": 60\n  }\n}\n```\n\nError:\n\n```json\n{\n  \"error\": {\n    \"message\": \"Error description\",\n    \"code\": \"error_code\"\n  }\n}\n```\n"
  },
  {
    "path": "skills/linkai-agent/config.json.template",
    "content": "{\n  \"apps\": [\n    {\n      \"app_code\": \"G7z6vKwp\",\n      \"app_name\": \"LinkAI客服助手\",\n      \"app_description\": \"当用户需要了解LinkAI平台相关问题时才选择该助手，基于LinkAI知识库进行回答\"\n    },\n    {\n      \"app_code\": \"SFY5x7JR\",\n      \"app_name\": \"内容创作助手\",\n      \"app_description\": \"当用户需要创作图片或视频时才使用该助手，支持Nano Banana、Seedream、即梦、Veo、可灵等多种模型\"\n    }\n  ]\n}\n"
  },
  {
    "path": "skills/skill-creator/SKILL.md",
    "content": "---\nname: skill-creator\ndescription: Create, install, or update skills in the workspace. Use when (1) installing a skill from a URL or remote source, (2) creating a new skill from scratch, (3) updating or restructuring existing skills. Always use this skill for any skill installation or creation task.\nlicense: Complete terms in LICENSE.txt\n---\n\n# Skill Creator\n\nThis skill provides guidance for creating effective skills using the existing tool system.\n\n## About Skills\n\nSkills are modular, self-contained packages that extend the agent's capabilities by providing specialized knowledge, workflows, and tools. They transform a general-purpose agent into a specialized agent equipped with procedural knowledge.\n\n### What Skills Provide\n\n1. **Specialized workflows** - Multi-step procedures for specific domains\n2. **Tool integrations** - Instructions for working with specific file formats or APIs\n3. **Domain expertise** - Company-specific knowledge, schemas, business logic\n4. **Bundled resources** - Scripts, references, and assets for complex tasks\n\n### Core Principle\n\n**Concise is Key**: Only add context the agent doesn't already have. Challenge each piece of information: \"Does this justify its token cost?\" Prefer concise examples over verbose explanations.\n\n## Skill Structure\n\nEvery skill consists of a required SKILL.md file and optional bundled resources:\n\n```\nskill-name/\n├── SKILL.md (required)\n│   ├── YAML frontmatter metadata (required)\n│   │   ├── name: (required)\n│   │   └── description: (required)\n│   └── Markdown instructions (required)\n└── Bundled Resources (optional)\n    ├── scripts/          - Executable code (Python/Bash/etc.)\n    ├── references/       - Documentation intended to be loaded into context as needed\n    └── assets/           - Files used in output (templates, icons, fonts, etc.)\n```\n\n### SKILL.md Components\n\n**Frontmatter (YAML)** - Required fields:\n\n- **name**: Skill name in hyphen-case (e.g., `weather-api`, `pdf-editor`)\n- **description**: **CRITICAL** - Primary triggering mechanism\n  - Must clearly describe what the skill does\n  - Must explicitly state when to use it\n  - Include specific trigger scenarios and keywords\n  - All \"when to use\" info goes here, NOT in body\n  - Example: `\"PDF document processing with rotation, merging, splitting, and text extraction. 
Use when user needs to: (1) Rotate PDF pages, (2) Merge multiple PDFs, (3) Split PDF files, (4) Extract text from PDFs.\"`\n\n**Body (Markdown)** - Loaded after skill triggers:\n\n- Detailed usage instructions\n- How to call scripts and read references\n- Examples and best practices\n- Use imperative/infinitive form (\"Use X to do Y\")\n\n### Bundled Resources\n\n**scripts/** - When to include:\n- Code is repeatedly rewritten\n- Deterministic execution needed (avoid LLM randomness)\n- Examples: PDF rotation, image processing\n- Must test scripts before including\n\n**references/** - When to include:\n- **ONLY** when documentation is too large for SKILL.md (>500 lines)\n- Database schemas, complex API specs that agent needs to reference\n- Agent reads these files into context as needed\n- **NOT for**: API reference docs, usage examples, tutorials (put in SKILL.md instead)\n- **Rule of thumb**: If it fits in SKILL.md, don't create a separate reference file\n\n**assets/** - When to include:\n- Files used in output (not loaded to context)\n- Templates, icons, boilerplate code\n- Copied or modified in final output\n\n**Important**: Most skills don't need all three. Choose based on actual needs.\n\n### What NOT to Include\n\nDo NOT create auxiliary documentation files:\n- README.md - Instructions belong in SKILL.md\n- INSTALLATION_GUIDE.md - Setup info belongs in SKILL.md\n- CHANGELOG.md - Not needed for local skills\n- API_REFERENCE.md - Put API docs directly in SKILL.md\n- USAGE_EXAMPLES.md - Put examples directly in SKILL.md\n- Any other documentation files - Everything goes in SKILL.md unless it's too large\n\n**Critical Rule**: Only create files that the agent will actually execute (scripts) or that are too large for SKILL.md (references). Documentation, examples, and guides ALL belong in SKILL.md.\n\n## Installing a Skill from URL\n\n1. Fetch the URL content (curl or web_fetch tool)\n2. Extract `name` from YAML frontmatter\n3. Create directory `<workspace>/skills/<name>/` and save content as `SKILL.md`\n4. Check the saved SKILL.md for an installation/setup section — if it defines additional steps (e.g., downloading scripts, installing dependencies), execute them; otherwise installation is complete\n\nThe `<workspace>` is the working directory from the \"工作空间\" section.\n\n## Skill Creation Process (from scratch)\n\n1. **Understand** - Clarify use cases with concrete examples\n2. **Plan** - Identify needed scripts, references, assets\n3. **Initialize** - Run init_skill.py to create template\n4. **Edit** - Implement SKILL.md and resources\n5. **Validate** (optional) - Run quick_validate.py to check format\n6. **Iterate** - Improve based on real usage\n\n## Skill Naming\n\n- Use lowercase letters, digits, and hyphens only; normalize user-provided titles to hyphen-case (e.g., \"Plan Mode\" -> `plan-mode`).\n- When generating names, generate a name under 64 characters (letters, digits, hyphens).\n- Prefer short, verb-led phrases that describe the action.\n- Namespace by tool when it improves clarity or triggering (e.g., `gh-address-comments`, `linear-address-issue`).\n- Name the skill folder exactly after the skill name.\n\n## Step-by-Step Guide\n\n### Step 1: Understanding the Skill with Concrete Examples\n\nSkip this step only when the skill's usage patterns are already clearly understood. It remains valuable even when working with an existing skill.\n\nTo create an effective skill, clearly understand concrete examples of how the skill will be used. 
This understanding can come from either direct user examples or generated examples that are validated with user feedback.\n\nFor example, when building an image-editor skill, relevant questions include:\n\n- \"What functionality should the image-editor skill support? Editing, rotating, anything else?\"\n- \"Can you give some examples of how this skill would be used?\"\n- \"I can imagine users asking for things like 'Remove the red-eye from this image' or 'Rotate this image'. Are there other ways you imagine this skill being used?\"\n- \"What would a user say that should trigger this skill?\"\n\nTo avoid overwhelming users, avoid asking too many questions in a single message. Start with the most important questions and follow up as needed for better effectiveness.\n\nConclude this step when there is a clear sense of the functionality the skill should support.\n\n### Step 2: Planning the Reusable Skill Contents\n\nTo turn concrete examples into an effective skill, analyze each example by:\n\n1. Considering how to execute on the example from scratch\n2. Identifying what scripts, references, and assets would be helpful when executing these workflows repeatedly\n\n**Planning Checklist**:\n- ✅ **Always needed**: SKILL.md with clear description and usage instructions\n- ✅ **scripts/**: Only if code needs to be executed (not just shown as examples)\n- ❌ **references/**: Rarely needed - only if documentation is >500 lines and can't fit in SKILL.md\n- ✅ **assets/**: Only if files are used in output (templates, boilerplate, etc.)\n\nExample: When building a `pdf-editor` skill to handle queries like \"Help me rotate this PDF,\" the analysis shows:\n\n1. Rotating a PDF requires re-writing the same code each time\n2. A `scripts/rotate_pdf.py` script would be helpful to store in the skill\n3. ❌ Don't create `references/api-docs.md` - put API info in SKILL.md instead\n\nExample: When designing a `frontend-webapp-builder` skill for queries like \"Build me a todo app\" or \"Build me a dashboard to track my steps,\" the analysis shows:\n\n1. Writing a frontend webapp requires the same boilerplate HTML/React each time\n2. An `assets/hello-world/` template containing the boilerplate HTML/React project files would be helpful to store in the skill\n3. ❌ Don't create `references/usage-examples.md` - put examples in SKILL.md instead\n\nExample: When building a `big-query` skill to handle queries like \"How many users have logged in today?\" the analysis shows:\n\n1. Querying BigQuery requires re-discovering the table schemas and relationships each time\n2. A `references/schema.md` file documenting the table schemas would be helpful to store in the skill (ONLY because schemas are very large)\n3. ❌ Don't create separate `references/query-examples.md` - put examples in SKILL.md instead\n\nTo establish the skill's contents, analyze each concrete example to create a list of the reusable resources to include: scripts, references, and assets. **Default to putting everything in SKILL.md unless there's a compelling reason to separate it.**\n\n### Step 3: Initialize the Skill\n\nAt this point, it is time to actually create the skill.\n\nSkip this step only if the skill being developed already exists, and iteration is needed. In this case, continue to the next step.\n\nWhen creating a new skill from scratch, always run the `init_skill.py` script. 
The script conveniently generates a new template skill directory that automatically includes everything a skill requires, making the skill creation process much more efficient and reliable.\n\nUsage:\n\n```bash\nscripts/init_skill.py <skill-name> --path <output-directory> [--resources scripts,references,assets] [--examples]\n```\n\nExamples:\n\n```bash\nscripts/init_skill.py my-skill --path <workspace>/skills\nscripts/init_skill.py my-skill --path <workspace>/skills --resources scripts,references\nscripts/init_skill.py my-skill --path <workspace>/skills --resources scripts --examples\n```\n\nWhere `<workspace>` is your workspace directory shown in the \"工作空间\" section of the system prompt.\n\nThe script:\n\n- Creates the skill directory at the specified path\n- Generates a SKILL.md template with proper frontmatter and TODO placeholders\n- Optionally creates resource directories based on `--resources`\n- Optionally adds example files when `--examples` is set\n\nAfter initialization, customize the SKILL.md and add resources as needed. If you used `--examples`, replace or delete placeholder files.\n\n**Important**: Always create skills in workspace skills directory (`<workspace>/skills`), NOT in project directory. Check the \"工作空间\" section for the actual workspace path.\n\n### Step 4: Edit the Skill\n\nWhen editing the (newly-generated or existing) skill, remember that the skill is being created for another instance of the agent to use. Include information that would be beneficial and non-obvious to the agent. Consider what procedural knowledge, domain-specific details, or reusable assets would help another agent instance execute these tasks more effectively.\n\n#### Design Patterns\n\n**Workflow patterns** — For complex tasks, break operations into sequential steps or conditional branches:\n\n```markdown\n# Sequential: list numbered steps with scripts\n1. Analyze the form (run analyze_form.py)\n2. Create field mapping (edit fields.json)\n3. Fill the form (run fill_form.py)\n\n# Conditional: guide through decision points\n1. Determine the modification type:\n   **Creating new content?** → Follow \"Creation workflow\"\n   **Editing existing content?** → Follow \"Editing workflow\"\n```\n\n**Output patterns** — When consistent output format matters, provide a template or input/output examples in SKILL.md so the agent can follow the desired style.\n\n#### Start with Reusable Skill Contents\n\nTo begin implementation, start with the reusable resources identified above: `scripts/`, `references/`, and `assets/` files. Note that this step may require user input. For example, when implementing a `brand-guidelines` skill, the user may need to provide brand assets or templates to store in `assets/`, or documentation to store in `references/`.\n\n**Available Base Tools**:\n\nThe agent has access to these core tools that you can leverage in your skill:\n- **bash**: Execute shell commands (use for curl, ls, grep, sed, awk, bc for calculations, etc.)\n- **read**: Read file contents\n- **write**: Write files\n- **edit**: Edit files with search/replace\n\n**Minimize Dependencies**:\n- ✅ **Prefer bash + curl** for HTTP API calls (no Python dependencies)\n- ✅ **Use bash tools** (grep, sed, awk) for text processing\n- ✅ **Keep scripts simple** - if bash can do it, no need for Python (document packages/versions if Python is used)\n\n**Important Guidelines**:\n- **scripts/**: Only create scripts that will be executed. 
Test all scripts before including.\n- **references/**: ONLY create if documentation is too large for SKILL.md (>500 lines). Most skills don't need this.\n- **assets/**: Only include files used in output (templates, icons, etc.)\n- **Default approach**: Put everything in SKILL.md unless there's a specific reason not to.\n\nAdded scripts must be tested by actually running them to ensure there are no bugs and that the output matches what is expected. If there are many similar scripts, only a representative sample needs to be tested to ensure confidence that they all work while balancing time to completion.\n\nIf you used `--examples`, delete any placeholder files that are not needed for the skill. Only create resource directories that are actually required.\n\n#### Update SKILL.md\n\n**Writing Guidelines:** Always use imperative/infinitive form.\n\n##### Frontmatter\n\nWrite the YAML frontmatter with `name`, `description`, and optional `metadata`:\n\n- `name`: The skill name\n- `description`: This is the primary triggering mechanism for your skill, and helps the agent understand when to use the skill.\n  - Include both what the Skill does and specific triggers/contexts for when to use it.\n  - Include all \"when to use\" information here - Not in the body. The body is only loaded after triggering, so \"When to Use This Skill\" sections in the body are not helpful to the agent.\n  - Example description for a `docx` skill: \"Comprehensive document creation, editing, and analysis with support for tracked changes, comments, formatting preservation, and text extraction. Use when the agent needs to work with professional documents (.docx files) for: (1) Creating new documents, (2) Modifying or editing content, (3) Working with tracked changes, (4) Adding comments, or any other document tasks\"\n- `metadata`: (Optional) Specify requirements and configuration\n  - `requires.bins`: Required binaries (e.g., `[\"curl\", \"jq\"]`)\n  - `requires.env`: Required environment variables — all must be set (e.g., `[\"MYAPI_KEY\"]`)\n  - `requires.anyEnv`: Alternative environment variables — at least one must be set (e.g., `[\"OPENAI_API_KEY\", \"LINKAI_API_KEY\"]`)\n  - `requires.anyBins`: Alternative binaries — at least one must be present\n  - `always`: Set to `true` to always load regardless of requirements\n  - `emoji`: Skill icon (optional)\n  - Do NOT set `category` — it defaults to `skill` and is managed by the system\n\n**API Key Requirements**:\n\nIf your skill needs a single API key, declare it in `requires.env`:\n\n```yaml\n---\nname: my-search\ndescription: Search using MyAPI\nmetadata:\n  requires:\n    bins: [\"curl\"]\n    env: [\"MYAPI_KEY\"]\n---\n```\n\nIf your skill supports multiple API key providers (e.g., OpenAI or LinkAI), use `requires.anyEnv`:\n\n```yaml\n---\nname: my-vision\ndescription: Analyze images using Vision API\nmetadata:\n  requires:\n    bins: [\"curl\"]\n    anyEnv: [\"OPENAI_API_KEY\", \"LINKAI_API_KEY\"]\n---\n```\n\n**Auto-enable rule**: Skills are automatically enabled when required environment variables are set, and automatically disabled when missing. No manual configuration needed.\n\n##### Body\n\nWrite instructions for using the skill and its bundled resources.\n\n**If your skill requires an API key**, include setup instructions in the body:\n\n```markdown\n## Setup\n\nThis skill requires an API key from [Service Name].\n\n1. Visit https://service.com to get an API key\n2. 
Configure it using: `env_config(action=\"set\", key=\"SERVICE_API_KEY\", value=\"your-key\")`\n3. Or manually add to `~/cow/.env`: `SERVICE_API_KEY=your-key`\n4. Restart the agent for changes to take effect\n\n## Usage\n...\n```\n\nThe bash script should check for the key and provide helpful error messages:\n\n```bash\n#!/usr/bin/env bash\nif [ -z \"${SERVICE_API_KEY:-}\" ]; then\n    echo \"Error: SERVICE_API_KEY not set\"\n    echo \"Please configure your API key first (see SKILL.md)\"\n    exit 1\nfi\n\ncurl -H \"Authorization: Bearer $SERVICE_API_KEY\" ...\n```\n\n**Script Path Convention**:\n\nWhen writing SKILL.md instructions, remember that:\n- Skills are listed in `<available_skills>` with a `<base_dir>` path\n- Scripts should be referenced as: `<base_dir>/scripts/script_name.sh`\n- The AI will see the base_dir and can construct the full path\n\nExample instruction in SKILL.md:\n```markdown\n## Usage\n\nScripts are in this skill's base directory (shown in skill listing).\n\nbash \"<base_dir>/scripts/my_script.sh\" <args>\n```\n\n### Step 5: Validate (Optional)\n\nValidate skill format:\n\n```bash\nscripts/quick_validate.py <path/to/skill-folder>\n```\n\nExample:\n\n```bash\nscripts/quick_validate.py <workspace>/skills/weather-api\n```\n\nValidation checks:\n- YAML frontmatter format and required fields\n- Skill naming conventions (hyphen-case, lowercase)\n- Description completeness and quality\n- File organization\n\n**Note**: Validation is optional in COW. Mainly useful for troubleshooting format issues.\n\n### Step 6: Iterate\n\nImprove based on real usage:\n\n1. Use skill on real tasks\n2. Notice struggles or inefficiencies\n3. Identify needed updates to SKILL.md or resources\n4. Implement changes and test again\n\n## Progressive Disclosure\n\nSkills use three-level loading:\n\n1. **Metadata** (name + description) - Always in context (~100 words)\n2. **SKILL.md body** - Loaded when skill triggers (<5k words)\n3. **Resources** - Loaded as needed by agent\n\n**Best practices**:\n- Keep SKILL.md under 500 lines\n- Split complex content into `references/` files\n- Reference these files clearly in SKILL.md\n\n**Pattern**: For skills with multiple variants/frameworks:\n- Keep core workflow in SKILL.md\n- Move variant-specific details to separate reference files\n- Agent loads only relevant files\n\nExample:\n```\ncloud-deploy/\n├── SKILL.md (workflow + provider selection)\n└── references/\n    ├── aws.md\n    ├── gcp.md\n    └── azure.md\n```\n\nWhen user chooses AWS, agent only reads aws.md.\n"
  },
  {
    "path": "skills/skill-creator/scripts/init_skill.py",
    "content": "#!/usr/bin/env python3\n\"\"\"\nSkill Initializer - Creates a new skill from template\n\nUsage:\n    init_skill.py <skill-name> --path <path>\n\nExamples:\n    init_skill.py my-new-skill --path skills/public\n    init_skill.py my-api-helper --path skills/private\n    init_skill.py custom-skill --path /custom/location\n\"\"\"\n\nimport sys\nfrom pathlib import Path\n\n\nSKILL_TEMPLATE = \"\"\"---\nname: {skill_name}\ndescription: [TODO: Complete and informative explanation of what the skill does and when to use it. Include WHEN to use this skill - specific scenarios, file types, or tasks that trigger it.]\n---\n\n# {skill_title}\n\n## Overview\n\n[TODO: 1-2 sentences explaining what this skill enables]\n\n## Structuring This Skill\n\n[TODO: Choose the structure that best fits this skill's purpose. Common patterns:\n\n**1. Workflow-Based** (best for sequential processes)\n- Works well when there are clear step-by-step procedures\n- Example: DOCX skill with \"Workflow Decision Tree\" → \"Reading\" → \"Creating\" → \"Editing\"\n- Structure: ## Overview → ## Workflow Decision Tree → ## Step 1 → ## Step 2...\n\n**2. Task-Based** (best for tool collections)\n- Works well when the skill offers different operations/capabilities\n- Example: PDF skill with \"Quick Start\" → \"Merge PDFs\" → \"Split PDFs\" → \"Extract Text\"\n- Structure: ## Overview → ## Quick Start → ## Task Category 1 → ## Task Category 2...\n\n**3. Reference/Guidelines** (best for standards or specifications)\n- Works well for brand guidelines, coding standards, or requirements\n- Example: Brand styling with \"Brand Guidelines\" → \"Colors\" → \"Typography\" → \"Features\"\n- Structure: ## Overview → ## Guidelines → ## Specifications → ## Usage...\n\n**4. Capabilities-Based** (best for integrated systems)\n- Works well when the skill provides multiple interrelated features\n- Example: Product Management with \"Core Capabilities\" → numbered capability list\n- Structure: ## Overview → ## Core Capabilities → ### 1. Feature → ### 2. Feature...\n\nPatterns can be mixed and matched as needed. Most skills combine patterns (e.g., start with task-based, add workflow for complex operations).\n\nDelete this entire \"Structuring This Skill\" section when done - it's just guidance.]\n\n## [TODO: Replace with the first main section based on chosen structure]\n\n[TODO: Add content here. See examples in existing skills:\n- Code samples for technical skills\n- Decision trees for complex workflows\n- Concrete examples with realistic user requests\n- References to scripts/templates/references as needed]\n\n## Resources\n\nThis skill includes example resource directories that demonstrate how to organize different types of bundled resources:\n\n### scripts/\nExecutable code (Python/Bash/etc.) 
that can be run directly to perform specific operations.\n\n**Examples from other skills:**\n- PDF skill: `fill_fillable_fields.py`, `extract_form_field_info.py` - utilities for PDF manipulation\n- DOCX skill: `document.py`, `utilities.py` - Python modules for document processing\n\n**Appropriate for:** Python scripts, shell scripts, or any executable code that performs automation, data processing, or specific operations.\n\n**Note:** Scripts may be executed without loading into context, but can still be read by Claude for patching or environment adjustments.\n\n### references/\nDocumentation and reference material intended to be loaded into context to inform Claude's process and thinking.\n\n**Examples from other skills:**\n- Product management: `communication.md`, `context_building.md` - detailed workflow guides\n- BigQuery: API reference documentation and query examples\n- Finance: Schema documentation, company policies\n\n**Appropriate for:** In-depth documentation, API references, database schemas, comprehensive guides, or any detailed information that Claude should reference while working.\n\n### assets/\nFiles not intended to be loaded into context, but rather used within the output Claude produces.\n\n**Examples from other skills:**\n- Brand styling: PowerPoint template files (.pptx), logo files\n- Frontend builder: HTML/React boilerplate project directories\n- Typography: Font files (.ttf, .woff2)\n\n**Appropriate for:** Templates, boilerplate code, document templates, images, icons, fonts, or any files meant to be copied or used in the final output.\n\n---\n\n**Any unneeded directories can be deleted.** Not every skill requires all three types of resources.\n\"\"\"\n\nEXAMPLE_SCRIPT = '''#!/usr/bin/env python3\n\"\"\"\nExample helper script for {skill_name}\n\nThis is a placeholder script that can be executed directly.\nReplace with actual implementation or delete if not needed.\n\nExample real scripts from other skills:\n- pdf/scripts/fill_fillable_fields.py - Fills PDF form fields\n- pdf/scripts/convert_pdf_to_images.py - Converts PDF pages to images\n\"\"\"\n\ndef main():\n    print(\"This is an example script for {skill_name}\")\n    # TODO: Add actual script logic here\n    # This could be data processing, file conversion, API calls, etc.\n\nif __name__ == \"__main__\":\n    main()\n'''\n\nEXAMPLE_REFERENCE = \"\"\"# Reference Documentation for {skill_title}\n\nThis is a placeholder for detailed reference documentation.\nReplace with actual reference content or delete if not needed.\n\nExample real reference docs from other skills:\n- product-management/references/communication.md - Comprehensive guide for status updates\n- product-management/references/context_building.md - Deep-dive on gathering context\n- bigquery/references/ - API references and query examples\n\n## When Reference Docs Are Useful\n\nReference docs are ideal for:\n- Comprehensive API documentation\n- Detailed workflow guides\n- Complex multi-step processes\n- Information too lengthy for main SKILL.md\n- Content that's only needed for specific use cases\n\n## Structure Suggestions\n\n### API Reference Example\n- Overview\n- Authentication\n- Endpoints with examples\n- Error codes\n- Rate limits\n\n### Workflow Guide Example\n- Prerequisites\n- Step-by-step instructions\n- Common patterns\n- Troubleshooting\n- Best practices\n\"\"\"\n\nEXAMPLE_ASSET = \"\"\"# Example Asset File\n\nThis placeholder represents where asset files would be stored.\nReplace with actual asset files (templates, images, fonts, etc.) 
or delete if not needed.\n\nAsset files are NOT intended to be loaded into context, but rather used within\nthe output Claude produces.\n\nExample asset files from other skills:\n- Brand guidelines: logo.png, slides_template.pptx\n- Frontend builder: hello-world/ directory with HTML/React boilerplate\n- Typography: custom-font.ttf, font-family.woff2\n- Data: sample_data.csv, test_dataset.json\n\n## Common Asset Types\n\n- Templates: .pptx, .docx, boilerplate directories\n- Images: .png, .jpg, .svg, .gif\n- Fonts: .ttf, .otf, .woff, .woff2\n- Boilerplate code: Project directories, starter files\n- Icons: .ico, .svg\n- Data files: .csv, .json, .xml, .yaml\n\nNote: This is a text placeholder. Actual assets can be any file type.\n\"\"\"\n\n\ndef title_case_skill_name(skill_name):\n    \"\"\"Convert hyphenated skill name to Title Case for display.\"\"\"\n    return ' '.join(word.capitalize() for word in skill_name.split('-'))\n\n\ndef init_skill(skill_name, path):\n    \"\"\"\n    Initialize a new skill directory with template SKILL.md.\n\n    Args:\n        skill_name: Name of the skill\n        path: Path where the skill directory should be created\n\n    Returns:\n        Path to created skill directory, or None if error\n    \"\"\"\n    # Determine skill directory path\n    skill_dir = Path(path).resolve() / skill_name\n\n    # Check if directory already exists\n    if skill_dir.exists():\n        print(f\"❌ Error: Skill directory already exists: {skill_dir}\")\n        return None\n\n    # Create skill directory\n    try:\n        skill_dir.mkdir(parents=True, exist_ok=False)\n        print(f\"✅ Created skill directory: {skill_dir}\")\n    except Exception as e:\n        print(f\"❌ Error creating directory: {e}\")\n        return None\n\n    # Create SKILL.md from template\n    skill_title = title_case_skill_name(skill_name)\n    skill_content = SKILL_TEMPLATE.format(\n        skill_name=skill_name,\n        skill_title=skill_title\n    )\n\n    skill_md_path = skill_dir / 'SKILL.md'\n    try:\n        skill_md_path.write_text(skill_content)\n        print(\"✅ Created SKILL.md\")\n    except Exception as e:\n        print(f\"❌ Error creating SKILL.md: {e}\")\n        return None\n\n    # Create resource directories with example files\n    try:\n        # Create scripts/ directory with example script\n        scripts_dir = skill_dir / 'scripts'\n        scripts_dir.mkdir(exist_ok=True)\n        example_script = scripts_dir / 'example.py'\n        example_script.write_text(EXAMPLE_SCRIPT.format(skill_name=skill_name))\n        example_script.chmod(0o755)\n        print(\"✅ Created scripts/example.py\")\n\n        # Create references/ directory with example reference doc\n        references_dir = skill_dir / 'references'\n        references_dir.mkdir(exist_ok=True)\n        example_reference = references_dir / 'api_reference.md'\n        example_reference.write_text(EXAMPLE_REFERENCE.format(skill_title=skill_title))\n        print(\"✅ Created references/api_reference.md\")\n\n        # Create assets/ directory with example asset placeholder\n        assets_dir = skill_dir / 'assets'\n        assets_dir.mkdir(exist_ok=True)\n        example_asset = assets_dir / 'example_asset.txt'\n        example_asset.write_text(EXAMPLE_ASSET)\n        print(\"✅ Created assets/example_asset.txt\")\n    except Exception as e:\n        print(f\"❌ Error creating resource directories: {e}\")\n        return None\n\n    # Print next steps\n    print(f\"\\n✅ Skill '{skill_name}' initialized successfully at 
{skill_dir}\")\n    print(\"\\nNext steps:\")\n    print(\"1. Edit SKILL.md to complete the TODO items and update the description\")\n    print(\"2. Customize or delete the example files in scripts/, references/, and assets/\")\n    print(\"3. Run the validator when ready to check the skill structure\")\n\n    return skill_dir\n\n\ndef main():\n    if len(sys.argv) < 4 or sys.argv[2] != '--path':\n        print(\"Usage: init_skill.py <skill-name> --path <path>\")\n        print(\"\\nSkill name requirements:\")\n        print(\"  - Hyphen-case identifier (e.g., 'data-analyzer')\")\n        print(\"  - Lowercase letters, digits, and hyphens only\")\n        print(\"  - Max 40 characters\")\n        print(\"  - Must match directory name exactly\")\n        print(\"\\nExamples:\")\n        print(\"  init_skill.py my-new-skill --path workspace/skills\")\n        print(\"  init_skill.py my-api-helper --path /path/to/skills\")\n        print(\"  init_skill.py custom-skill --path /custom/location\")\n        sys.exit(1)\n\n    skill_name = sys.argv[1]\n    path = sys.argv[3]\n\n    print(f\"🚀 Initializing skill: {skill_name}\")\n    print(f\"   Location: {path}\")\n    print()\n\n    result = init_skill(skill_name, path)\n\n    if result:\n        sys.exit(0)\n    else:\n        sys.exit(1)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "skills/skill-creator/scripts/package_skill.py",
    "content": "#!/usr/bin/env python3\n\"\"\"\nSkill Packager - Creates a distributable .skill file of a skill folder\n\nUsage:\n    python utils/package_skill.py <path/to/skill-folder> [output-directory]\n\nExample:\n    python utils/package_skill.py skills/public/my-skill\n    python utils/package_skill.py skills/public/my-skill ./dist\n\"\"\"\n\nimport sys\nimport os\nimport zipfile\nfrom pathlib import Path\n\n# Add script directory to path for imports\nscript_dir = Path(__file__).parent\nsys.path.insert(0, str(script_dir))\n\nfrom quick_validate import validate_skill\n\n\ndef package_skill(skill_path, output_dir=None):\n    \"\"\"\n    Package a skill folder into a .skill file.\n\n    Args:\n        skill_path: Path to the skill folder\n        output_dir: Optional output directory for the .skill file (defaults to current directory)\n\n    Returns:\n        Path to the created .skill file, or None if error\n    \"\"\"\n    skill_path = Path(skill_path).resolve()\n\n    # Validate skill folder exists\n    if not skill_path.exists():\n        print(f\"❌ Error: Skill folder not found: {skill_path}\")\n        return None\n\n    if not skill_path.is_dir():\n        print(f\"❌ Error: Path is not a directory: {skill_path}\")\n        return None\n\n    # Validate SKILL.md exists\n    skill_md = skill_path / \"SKILL.md\"\n    if not skill_md.exists():\n        print(f\"❌ Error: SKILL.md not found in {skill_path}\")\n        return None\n\n    # Run validation before packaging\n    print(\"🔍 Validating skill...\")\n    valid, message = validate_skill(skill_path)\n    if not valid:\n        print(f\"❌ Validation failed: {message}\")\n        print(\"   Please fix the validation errors before packaging.\")\n        return None\n    print(f\"✅ {message}\\n\")\n\n    # Determine output location\n    skill_name = skill_path.name\n    if output_dir:\n        output_path = Path(output_dir).resolve()\n        output_path.mkdir(parents=True, exist_ok=True)\n    else:\n        output_path = Path.cwd()\n\n    skill_filename = output_path / f\"{skill_name}.skill\"\n\n    # Create the .skill file (zip format)\n    try:\n        with zipfile.ZipFile(skill_filename, 'w', zipfile.ZIP_DEFLATED) as zipf:\n            # Walk through the skill directory\n            for file_path in skill_path.rglob('*'):\n                if file_path.is_file():\n                    # Calculate the relative path within the zip\n                    arcname = file_path.relative_to(skill_path.parent)\n                    zipf.write(file_path, arcname)\n                    print(f\"  Added: {arcname}\")\n\n        print(f\"\\n✅ Successfully packaged skill to: {skill_filename}\")\n        return skill_filename\n\n    except Exception as e:\n        print(f\"❌ Error creating .skill file: {e}\")\n        return None\n\n\ndef main():\n    if len(sys.argv) < 2:\n        print(\"Usage: python utils/package_skill.py <path/to/skill-folder> [output-directory]\")\n        print(\"\\nExample:\")\n        print(\"  python utils/package_skill.py skills/public/my-skill\")\n        print(\"  python utils/package_skill.py skills/public/my-skill ./dist\")\n        sys.exit(1)\n\n    skill_path = sys.argv[1]\n    output_dir = sys.argv[2] if len(sys.argv) > 2 else None\n\n    print(f\"📦 Packaging skill: {skill_path}\")\n    if output_dir:\n        print(f\"   Output directory: {output_dir}\")\n    print()\n\n    result = package_skill(skill_path, output_dir)\n\n    if result:\n        sys.exit(0)\n    else:\n        sys.exit(1)\n\n\nif __name__ == 
\"__main__\":\n    main()\n"
  },
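  A `.skill` file written by this packager is an ordinary zip archive, so it can be inspected with nothing but the standard library. A minimal sketch, assuming a previously packaged `my-skill.skill` exists (the file name is a hypothetical example):

  ```python
  import zipfile

  # Entries are rooted at the skill folder name, because arcname is
  # computed relative to skill_path.parent in package_skill().
  with zipfile.ZipFile("my-skill.skill") as zf:  # hypothetical file name
      for name in zf.namelist():
          print(name)  # e.g. my-skill/SKILL.md, my-skill/scripts/...
      # Read SKILL.md without extracting the archive
      print(zf.read("my-skill/SKILL.md").decode("utf-8"))
  ```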
  {
    "path": "skills/skill-creator/scripts/quick_validate.py",
    "content": "#!/usr/bin/env python3\n\"\"\"\nQuick validation script for skills - minimal version\n\"\"\"\n\nimport sys\nimport os\nimport re\nimport yaml\nfrom pathlib import Path\n\ndef validate_skill(skill_path):\n    \"\"\"Basic validation of a skill\"\"\"\n    skill_path = Path(skill_path)\n\n    # Check SKILL.md exists\n    skill_md = skill_path / 'SKILL.md'\n    if not skill_md.exists():\n        return False, \"SKILL.md not found\"\n\n    # Read and validate frontmatter\n    content = skill_md.read_text()\n    if not content.startswith('---'):\n        return False, \"No YAML frontmatter found\"\n\n    # Extract frontmatter\n    match = re.match(r'^---\\n(.*?)\\n---', content, re.DOTALL)\n    if not match:\n        return False, \"Invalid frontmatter format\"\n\n    frontmatter_text = match.group(1)\n\n    # Parse YAML frontmatter\n    try:\n        frontmatter = yaml.safe_load(frontmatter_text)\n        if not isinstance(frontmatter, dict):\n            return False, \"Frontmatter must be a YAML dictionary\"\n    except yaml.YAMLError as e:\n        return False, f\"Invalid YAML in frontmatter: {e}\"\n\n    # Define allowed properties\n    ALLOWED_PROPERTIES = {'name', 'description', 'license', 'allowed-tools', 'metadata'}\n\n    # Check for unexpected properties (excluding nested keys under metadata)\n    unexpected_keys = set(frontmatter.keys()) - ALLOWED_PROPERTIES\n    if unexpected_keys:\n        return False, (\n            f\"Unexpected key(s) in SKILL.md frontmatter: {', '.join(sorted(unexpected_keys))}. \"\n            f\"Allowed properties are: {', '.join(sorted(ALLOWED_PROPERTIES))}\"\n        )\n\n    # Check required fields\n    if 'name' not in frontmatter:\n        return False, \"Missing 'name' in frontmatter\"\n    if 'description' not in frontmatter:\n        return False, \"Missing 'description' in frontmatter\"\n\n    # Extract name for validation\n    name = frontmatter.get('name', '')\n    if not isinstance(name, str):\n        return False, f\"Name must be a string, got {type(name).__name__}\"\n    name = name.strip()\n    if name:\n        # Check naming convention (hyphen-case: lowercase with hyphens)\n        if not re.match(r'^[a-z0-9-]+$', name):\n            return False, f\"Name '{name}' should be hyphen-case (lowercase letters, digits, and hyphens only)\"\n        if name.startswith('-') or name.endswith('-') or '--' in name:\n            return False, f\"Name '{name}' cannot start/end with hyphen or contain consecutive hyphens\"\n        # Check name length (max 64 characters per spec)\n        if len(name) > 64:\n            return False, f\"Name is too long ({len(name)} characters). Maximum is 64 characters.\"\n\n    # Extract and validate description\n    description = frontmatter.get('description', '')\n    if not isinstance(description, str):\n        return False, f\"Description must be a string, got {type(description).__name__}\"\n    description = description.strip()\n    if description:\n        # Check for angle brackets\n        if '<' in description or '>' in description:\n            return False, \"Description cannot contain angle brackets (< or >)\"\n        # Check description length (max 1024 characters per spec)\n        if len(description) > 1024:\n            return False, f\"Description is too long ({len(description)} characters). 
Maximum is 1024 characters.\"\n\n    return True, \"Skill is valid!\"\n\nif __name__ == \"__main__\":\n    if len(sys.argv) != 2:\n        print(\"Usage: python quick_validate.py <skill_directory>\")\n        sys.exit(1)\n    \n    valid, message = validate_skill(sys.argv[1])\n    print(message)\n    sys.exit(0 if valid else 1)"
  },
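  For reference, a `SKILL.md` passes `validate_skill` as long as it has YAML frontmatter with a hyphen-case `name` and a `description` free of angle brackets. A small self-contained check, assuming `quick_validate.py` is importable from the working directory (the folder name `demo-skill` is illustrative):

  ```python
  import tempfile
  from pathlib import Path

  from quick_validate import validate_skill  # assumes the script above is on the import path

  # Build a throwaway skill folder with the minimal valid frontmatter
  skill_dir = Path(tempfile.mkdtemp()) / "demo-skill"
  skill_dir.mkdir()
  (skill_dir / "SKILL.md").write_text(
      "---\n"
      "name: demo-skill\n"
      "description: A demo skill used to exercise the validator.\n"
      "---\n"
      "# demo-skill\n"
  )

  valid, message = validate_skill(skill_dir)
  print(valid, message)  # True Skill is valid!
  ```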
  {
    "path": "translate/baidu/baidu_translate.py",
    "content": "# -*- coding: utf-8 -*-\n\nimport random\nfrom hashlib import md5\n\nimport requests\n\nfrom config import conf\nfrom translate.translator import Translator\n\n\nclass BaiduTranslator(Translator):\n    def __init__(self) -> None:\n        super().__init__()\n        endpoint = \"http://api.fanyi.baidu.com\"\n        path = \"/api/trans/vip/translate\"\n        self.url = endpoint + path\n        self.appid = conf().get(\"baidu_translate_app_id\")\n        self.appkey = conf().get(\"baidu_translate_app_key\")\n        if not self.appid or not self.appkey:\n            raise Exception(\"baidu translate appid or appkey not set\")\n\n    # For list of language codes, please refer to `https://api.fanyi.baidu.com/doc/21`, need to convert to ISO 639-1 codes\n    def translate(self, query: str, from_lang: str = \"\", to_lang: str = \"en\") -> str:\n        if not from_lang:\n            from_lang = \"auto\"  # baidu suppport auto detect\n        salt = random.randint(32768, 65536)\n        sign = self.make_md5(\"{}{}{}{}\".format(self.appid, query, salt, self.appkey))\n        headers = {\"Content-Type\": \"application/x-www-form-urlencoded\"}\n        payload = {\"appid\": self.appid, \"q\": query, \"from\": from_lang, \"to\": to_lang, \"salt\": salt, \"sign\": sign}\n\n        retry_cnt = 3\n        while retry_cnt:\n            r = requests.post(self.url, params=payload, headers=headers)\n            result = r.json()\n            errcode = result.get(\"error_code\", \"52000\")\n            if errcode != \"52000\":\n                if errcode == \"52001\" or errcode == \"52002\":\n                    retry_cnt -= 1\n                    continue\n                else:\n                    raise Exception(result[\"error_msg\"])\n            else:\n                break\n        text = \"\\n\".join([item[\"dst\"] for item in result[\"trans_result\"]])\n        return text\n\n    def make_md5(self, s, encoding=\"utf-8\"):\n        return md5(s.encode(encoding)).hexdigest()\n"
  },
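  The Baidu endpoint authenticates each request with `sign = md5(appid + q + salt + appkey)`. A standalone worked example of the digest, with placeholder credentials:

  ```python
  from hashlib import md5

  # Placeholder credentials for illustration only
  appid, appkey = "20240000000000001", "my_app_key"
  query, salt = "你好", 38888

  # Same concatenation order the translator uses: appid + query + salt + appkey
  sign = md5("{}{}{}{}".format(appid, query, salt, appkey).encode("utf-8")).hexdigest()
  print(sign)  # 32-char hex digest sent as the `sign` field of the payload
  ```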
  {
    "path": "translate/factory.py",
    "content": "def create_translator(voice_type):\n    if voice_type == \"baidu\":\n        from translate.baidu.baidu_translate import BaiduTranslator\n\n        return BaiduTranslator()\n    raise RuntimeError\n"
  },
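  Usage is a one-liner through the factory, assuming `baidu_translate_app_id` and `baidu_translate_app_key` are set in the root `config.json`:

  ```python
  from translate.factory import create_translator

  translator = create_translator("baidu")
  # from_lang defaults to auto-detection; language codes are ISO 639-1
  print(translator.translate("你好，世界", to_lang="en"))
  ```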
  {
    "path": "translate/translator.py",
    "content": "\"\"\"\nVoice service abstract class\n\"\"\"\n\n\nclass Translator(object):\n    # please use https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes to specify language\n    def translate(self, query: str, from_lang: str = \"\", to_lang: str = \"en\") -> str:\n        \"\"\"\n        Translate text from one language to another\n        \"\"\"\n        raise NotImplementedError\n"
  },
  {
    "path": "voice/ali/ali_api.py",
    "content": "# coding=utf-8\n\"\"\"\nAuthor: chazzjimel\nEmail: chazzjimel@gmail.com\nwechat：cheung-z-x\n\nDescription:\n\n\"\"\"\n\nimport http.client\nimport json\nimport time\nimport requests\nimport datetime\nimport hashlib\nimport hmac\nimport base64\nimport urllib.parse\nimport uuid\n\nfrom common.log import logger\nfrom common.tmp_dir import TmpDir\n\n\ndef text_to_speech_aliyun(url, text, appkey, token):\n    \"\"\"\n    使用阿里云的文本转语音服务将文本转换为语音。\n\n    参数:\n    - url (str): 阿里云文本转语音服务的端点URL。\n    - text (str): 要转换为语音的文本。\n    - appkey (str): 您的阿里云appkey。\n    - token (str): 阿里云API的认证令牌。\n\n    返回值:\n    - str: 成功时输出音频文件的路径，否则为None。\n    \"\"\"\n    headers = {\n        \"Content-Type\": \"application/json\",\n    }\n\n    data = {\n        \"text\": text,\n        \"appkey\": appkey,\n        \"token\": token,\n        \"format\": \"wav\"\n    }\n\n    response = requests.post(url, headers=headers, data=json.dumps(data))\n\n    if response.status_code == 200 and response.headers['Content-Type'] == 'audio/mpeg':\n        output_file = TmpDir().path() + \"reply-\" + str(int(time.time())) + \"-\" + str(hash(text) & 0x7FFFFFFF) + \".wav\"\n\n        with open(output_file, 'wb') as file:\n            file.write(response.content)\n        logger.debug(f\"音频文件保存成功，文件名：{output_file}\")\n    else:\n        logger.debug(\"响应状态码: {}\".format(response.status_code))\n        logger.debug(\"响应内容: {}\".format(response.text))\n        output_file = None\n\n    return output_file\n\ndef speech_to_text_aliyun(url, audioContent, appkey, token):\n    \"\"\"\n    使用阿里云的语音识别服务识别音频文件中的语音。\n\n    参数:\n    - url (str): 阿里云语音识别服务的端点URL。\n    - audioContent (byte): pcm音频数据。\n    - appkey (str): 您的阿里云appkey。\n    - token (str): 阿里云API的认证令牌。\n\n    返回值:\n    - str: 成功时输出识别到的文本，否则为None。\n    \"\"\"\n    format = 'pcm'\n    sample_rate = 16000\n    enablePunctuationPrediction  = True\n    enableInverseTextNormalization = True\n    enableVoiceDetection  = False\n\n    # 设置RESTful请求参数\n    request = url + '?appkey=' + appkey\n    request = request + '&format=' + format\n    request = request + '&sample_rate=' + str(sample_rate)\n\n    if enablePunctuationPrediction :\n        request = request + '&enable_punctuation_prediction=' + 'true'\n\n    if enableInverseTextNormalization :\n        request = request + '&enable_inverse_text_normalization=' + 'true'\n\n    if enableVoiceDetection :\n        request = request + '&enable_voice_detection=' + 'true'\n        \n    host = 'nls-gateway-cn-shanghai.aliyuncs.com'\n\n    # 设置HTTPS请求头部\n    httpHeaders = {\n        'X-NLS-Token': token,\n        'Content-type': 'application/octet-stream',\n        'Content-Length': len(audioContent)\n        }\n\n    conn = http.client.HTTPSConnection(host)\n    conn.request(method='POST', url=request, body=audioContent, headers=httpHeaders)\n\n    response = conn.getresponse()\n    body = response.read()\n    try:\n        body = json.loads(body)\n        status = body['status']\n        if status == 20000000 :\n            result = body['result']\n            if result :\n                logger.info(f\"阿里云语音识别到了：{result}\")\n            conn.close()\n            return result\n        else :\n            logger.error(f\"语音识别失败，状态码: {status}\")\n    except ValueError:\n        logger.error(f\"语音识别失败，收到非JSON格式的数据: {body}\")\n    conn.close()\n    return None\n\n\nclass AliyunTokenGenerator:\n    \"\"\"\n    用于生成阿里云服务认证令牌的类。\n\n    属性:\n    - access_key_id (str): 您的阿里云访问密钥ID。\n    - access_key_secret (str): 您的阿里云访问密钥秘密。\n    \"\"\"\n\n 
   def __init__(self, access_key_id, access_key_secret):\n        self.access_key_id = access_key_id\n        self.access_key_secret = access_key_secret\n\n    def sign_request(self, parameters):\n        \"\"\"\n        为阿里云服务签名请求。\n\n        参数:\n        - parameters (dict): 请求的参数字典。\n\n        返回值:\n        - str: 请求的签名签章。\n        \"\"\"\n        # 将参数按照字典顺序排序\n        sorted_params = sorted(parameters.items())\n\n        # 构造待签名的查询字符串\n        canonicalized_query_string = ''\n        for (k, v) in sorted_params:\n            canonicalized_query_string += '&' + self.percent_encode(k) + '=' + self.percent_encode(v)\n\n        # 构造用于签名的字符串\n        string_to_sign = 'GET&%2F&' + self.percent_encode(canonicalized_query_string[1:])  # 使用GET方法\n\n        # 使用HMAC算法计算签名\n        h = hmac.new((self.access_key_secret + \"&\").encode('utf-8'), string_to_sign.encode('utf-8'), hashlib.sha1)\n        signature = base64.encodebytes(h.digest()).strip()\n\n        return signature\n\n    def percent_encode(self, encode_str):\n        \"\"\"\n        对字符串进行百分比编码。\n\n        参数:\n        - encode_str (str): 要编码的字符串。\n\n        返回值:\n        - str: 编码后的字符串。\n        \"\"\"\n        encode_str = str(encode_str)\n        res = urllib.parse.quote(encode_str, '')\n        res = res.replace('+', '%20')\n        res = res.replace('*', '%2A')\n        res = res.replace('%7E', '~')\n        return res\n\n    def get_token(self):\n        \"\"\"\n        获取阿里云服务的令牌。\n\n        返回值:\n        - str: 获取到的令牌。\n        \"\"\"\n        # 设置请求参数\n        params = {\n            'Format': 'JSON',\n            'Version': '2019-02-28',\n            'AccessKeyId': self.access_key_id,\n            'SignatureMethod': 'HMAC-SHA1',\n            'Timestamp': datetime.datetime.utcnow().strftime(\"%Y-%m-%dT%H:%M:%SZ\"),\n            'SignatureVersion': '1.0',\n            'SignatureNonce': str(uuid.uuid4()),  # 使用uuid生成唯一的随机数\n            'Action': 'CreateToken',\n            'RegionId': 'cn-shanghai'\n        }\n\n        # 计算签名\n        signature = self.sign_request(params)\n        params['Signature'] = signature\n\n        # 构造请求URL\n        url = 'http://nls-meta.cn-shanghai.aliyuncs.com/?' + urllib.parse.urlencode(params)\n\n        # 发送请求\n        response = requests.get(url)\n\n        return response.text\n"
  },
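  The subtle part of `AliyunTokenGenerator` is the RPC signature: each parameter is percent-encoded, the pairs are sorted and joined, the query is wrapped into `GET&%2F&<encoded query>`, and the result is HMAC-SHA1-signed with `access_key_secret + "&"`. A condensed sketch with dummy values (not a real request):

  ```python
  import base64
  import hashlib
  import hmac
  import urllib.parse

  def percent_encode(s):
      # Aliyun's RFC 3986 variant: safe chars only, '*' -> %2A, keep '~'
      return urllib.parse.quote(str(s), "").replace("+", "%20").replace("*", "%2A").replace("%7E", "~")

  params = {"Action": "CreateToken", "Version": "2019-02-28"}  # dummy subset of the real parameters
  query = "&".join(f"{percent_encode(k)}={percent_encode(v)}" for k, v in sorted(params.items()))
  string_to_sign = "GET&%2F&" + percent_encode(query)

  secret = "dummy_access_key_secret"  # placeholder
  digest = hmac.new((secret + "&").encode("utf-8"), string_to_sign.encode("utf-8"), hashlib.sha1).digest()
  print(base64.b64encode(digest).decode())  # value of the Signature parameter
  ```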
  {
    "path": "voice/ali/ali_voice.py",
    "content": "# -*- coding: utf-8 -*-\n\"\"\"\nAuthor: chazzjimel\nEmail: chazzjimel@gmail.com\nwechat：cheung-z-x\n\nDescription:\nali voice service\n\n\"\"\"\nimport json\nimport os\nimport re\nimport time\n\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom voice.voice import Voice\nfrom voice.ali.ali_api import AliyunTokenGenerator, speech_to_text_aliyun, text_to_speech_aliyun\nfrom config import conf\n\ntry:\n    from voice.audio_convert import get_pcm_from_wav\nexcept ImportError as e:\n    logger.debug(\"import voice.audio_convert failed: {}\".format(e))\n\n\nclass AliVoice(Voice):\n    def __init__(self):\n        \"\"\"\n        初始化AliVoice类，从配置文件加载必要的配置。\n        \"\"\"\n        try:\n            curdir = os.path.dirname(__file__)\n            config_path = os.path.join(curdir, \"config.json\")\n            with open(config_path, \"r\") as fr:\n                config = json.load(fr)\n            self.token = None\n            self.token_expire_time = 0\n            # 默认复用阿里云千问的 access_key 和 access_secret\n            self.api_url_voice_to_text = config.get(\"api_url_voice_to_text\")\n            self.api_url_text_to_voice = config.get(\"api_url_text_to_voice\")\n            self.app_key = config.get(\"app_key\")\n            self.access_key_id = conf().get(\"qwen_access_key_id\") or config.get(\"access_key_id\")\n            self.access_key_secret = conf().get(\"qwen_access_key_secret\") or config.get(\"access_key_secret\")\n        except Exception as e:\n            logger.warn(\"AliVoice init failed: %s, ignore \" % e)\n\n    def textToVoice(self, text):\n        \"\"\"\n        将文本转换为语音文件。\n\n        :param text: 要转换的文本。\n        :return: 返回一个Reply对象，其中包含转换得到的语音文件或错误信息。\n        \"\"\"\n        # 清除文本中的非中文、非英文和非基本字符\n        text = re.sub(r'[^\\u4e00-\\u9fa5\\u3040-\\u30FF\\uAC00-\\uD7AFa-zA-Z0-9'\n                      r'äöüÄÖÜáéíóúÁÉÍÓÚàèìòùÀÈÌÒÙâêîôûÂÊÎÔÛçÇñÑ，。！？,.]', '', text)\n        # 提取有效的token\n        token_id = self.get_valid_token()\n        fileName = text_to_speech_aliyun(self.api_url_text_to_voice, text, self.app_key, token_id)\n        if fileName:\n            logger.info(\"[Ali] textToVoice text={} voice file name={}\".format(text, fileName))\n            reply = Reply(ReplyType.VOICE, fileName)\n        else:\n            reply = Reply(ReplyType.ERROR, \"抱歉，语音合成失败\")\n        return reply\n\n    def voiceToText(self, voice_file):\n        \"\"\"\n        将语音文件转换为文本。\n\n        :param voice_file: 要转换的语音文件。\n        :return: 返回一个Reply对象，其中包含转换得到的文本或错误信息。\n        \"\"\"\n        # 提取有效的token\n        token_id = self.get_valid_token()\n        logger.debug(\"[Ali] voice file name={}\".format(voice_file))\n        pcm = get_pcm_from_wav(voice_file)\n        text = speech_to_text_aliyun(self.api_url_voice_to_text, pcm, self.app_key, token_id)\n        if text:\n            logger.info(\"[Ali] VoicetoText = {}\".format(text))\n            reply = Reply(ReplyType.TEXT, text)\n        else:\n            reply = Reply(ReplyType.ERROR, \"抱歉，语音识别失败\")\n        return reply\n\n    def get_valid_token(self):\n        \"\"\"\n        获取有效的阿里云token。\n\n        :return: 返回有效的token字符串。\n        \"\"\"\n        current_time = time.time()\n        if self.token is None or current_time >= self.token_expire_time:\n            get_token = AliyunTokenGenerator(self.access_key_id, self.access_key_secret)\n            token_str = get_token.get_token()\n            token_data = json.loads(token_str)\n            self.token = 
token_data[\"Token\"][\"Id\"]\n            # 将过期时间减少一小段时间（例如5分钟），以避免在边界条件下的过期\n            self.token_expire_time = token_data[\"Token\"][\"ExpireTime\"] - 300\n            logger.debug(f\"新获取的阿里云token：{self.token}\")\n        else:\n            logger.debug(\"使用缓存的token\")\n        return self.token\n"
  },
  {
    "path": "voice/ali/config.json.template",
    "content": "{\n    \"api_url_text_to_voice\": \"https://nls-gateway-cn-shanghai.aliyuncs.com/stream/v1/tts\",\n    \"api_url_voice_to_text\": \"https://nls-gateway.cn-shanghai.aliyuncs.com/stream/v1/asr\",\n    \"app_key\": \"\",\n    \"access_key_id\": \"\",\n    \"access_key_secret\": \"\"\n}"
  },
  {
    "path": "voice/audio_convert.py",
    "content": "import shutil\nimport wave\n\nfrom common.log import logger\n\ntry:\n    import pysilk\nexcept ImportError:\n    logger.debug(\"import pysilk failed, silk voice format will not be supported.\")\n\ntry:\n    from pydub import AudioSegment\n    _pydub_available = True\nexcept ImportError:\n    logger.debug(\"import pydub failed, voice conversion features will not be supported.\")\n    AudioSegment = None\n    _pydub_available = False\n\nsil_supports = [8000, 12000, 16000, 24000, 32000, 44100, 48000]  # slk转wav时，支持的采样率\n\n\ndef find_closest_sil_supports(sample_rate):\n    \"\"\"\n    找到最接近的支持的采样率\n    \"\"\"\n    if sample_rate in sil_supports:\n        return sample_rate\n    closest = 0\n    mindiff = 9999999\n    for rate in sil_supports:\n        diff = abs(rate - sample_rate)\n        if diff < mindiff:\n            closest = rate\n            mindiff = diff\n    return closest\n\n\ndef get_pcm_from_wav(wav_path):\n    \"\"\"\n    从 wav 文件中读取 pcm\n\n    :param wav_path: wav 文件路径\n    :returns: pcm 数据\n    \"\"\"\n    wav = wave.open(wav_path, \"rb\")\n    return wav.readframes(wav.getnframes())\n\n\ndef any_to_mp3(any_path, mp3_path):\n    \"\"\"\n    把任意格式转成mp3文件\n    \"\"\"\n    if not _pydub_available:\n        raise ImportError(\"pydub is required for audio conversion. Please install it with: pip install pydub\")\n    if any_path.endswith(\".mp3\"):\n        shutil.copy2(any_path, mp3_path)\n        return\n    if any_path.endswith(\".sil\") or any_path.endswith(\".silk\") or any_path.endswith(\".slk\"):\n        sil_to_wav(any_path, any_path)\n        any_path = mp3_path\n    audio = AudioSegment.from_file(any_path)\n    audio.export(mp3_path, format=\"mp3\")\n\n\ndef any_to_wav(any_path, wav_path):\n    \"\"\"\n    把任意格式转成wav文件\n    \"\"\"\n    if not _pydub_available:\n        raise ImportError(\"pydub is required for audio conversion. Please install it with: pip install pydub\")\n    if any_path.endswith(\".wav\"):\n        shutil.copy2(any_path, wav_path)\n        return\n    if any_path.endswith(\".sil\") or any_path.endswith(\".silk\") or any_path.endswith(\".slk\"):\n        return sil_to_wav(any_path, wav_path)\n    audio = AudioSegment.from_file(any_path)\n    audio.set_frame_rate(8000)    # 百度语音转写支持8000采样率, pcm_s16le, 单通道语音识别\n    audio.set_channels(1)\n    audio.export(wav_path, format=\"wav\", codec='pcm_s16le')\n\n\ndef any_to_sil(any_path, sil_path):\n    \"\"\"\n    把任意格式转成sil文件\n    \"\"\"\n    if not _pydub_available:\n        raise ImportError(\"pydub is required for audio conversion. Please install it with: pip install pydub\")\n    if any_path.endswith(\".sil\") or any_path.endswith(\".silk\") or any_path.endswith(\".slk\"):\n        shutil.copy2(any_path, sil_path)\n        return 10000\n    audio = AudioSegment.from_file(any_path)\n    rate = find_closest_sil_supports(audio.frame_rate)\n    # Convert to PCM_s16\n    pcm_s16 = audio.set_sample_width(2)\n    pcm_s16 = pcm_s16.set_frame_rate(rate)\n    wav_data = pcm_s16.raw_data\n    silk_data = pysilk.encode(wav_data, data_rate=rate, sample_rate=rate)\n    with open(sil_path, \"wb\") as f:\n        f.write(silk_data)\n    return audio.duration_seconds * 1000\n\n\ndef any_to_amr(any_path, amr_path):\n    \"\"\"\n    把任意格式转成amr文件\n    \"\"\"\n    if not _pydub_available:\n        raise ImportError(\"pydub is required for audio conversion. 
Please install it with: pip install pydub\")\n    if any_path.endswith(\".amr\"):\n        shutil.copy2(any_path, amr_path)\n        return\n    if any_path.endswith(\".sil\") or any_path.endswith(\".silk\") or any_path.endswith(\".slk\"):\n        raise NotImplementedError(\"Not support file type: {}\".format(any_path))\n    audio = AudioSegment.from_file(any_path)\n    audio = audio.set_frame_rate(8000)  # only support 8000\n    audio.export(amr_path, format=\"amr\")\n    return audio.duration_seconds * 1000\n\n\ndef sil_to_wav(silk_path, wav_path, rate: int = 24000):\n    \"\"\"\n    silk 文件转 wav\n    \"\"\"\n    wav_data = pysilk.decode_file(silk_path, to_wav=True, sample_rate=rate)\n    with open(wav_path, \"wb\") as f:\n        f.write(wav_data)\n\n\ndef split_audio(file_path, max_segment_length_ms=60000):\n    \"\"\"\n    分割音频文件\n    \"\"\"\n    if not _pydub_available:\n        raise ImportError(\"pydub is required for audio conversion. Please install it with: pip install pydub\")\n    audio = AudioSegment.from_file(file_path)\n    audio_length_ms = len(audio)\n    if audio_length_ms <= max_segment_length_ms:\n        return audio_length_ms, [file_path]\n    segments = []\n    for start_ms in range(0, audio_length_ms, max_segment_length_ms):\n        end_ms = min(audio_length_ms, start_ms + max_segment_length_ms)\n        segment = audio[start_ms:end_ms]\n        segments.append(segment)\n    file_prefix = file_path[: file_path.rindex(\".\")]\n    format = file_path[file_path.rindex(\".\") + 1 :]\n    files = []\n    for i, segment in enumerate(segments):\n        path = f\"{file_prefix}_{i+1}\" + f\".{format}\"\n        segment.export(path, format=format)\n        files.append(path)\n    return audio_length_ms, files\n"
  },
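  Typical call sites decode an incoming voice file before recognition and split long recordings into API-sized pieces. A short usage sketch, with illustrative file names:

  ```python
  from voice.audio_convert import any_to_wav, split_audio

  # Decode whatever the channel delivered (mp3/silk/amr...) into a wav for ASR
  any_to_wav("tmp/incoming.mp3", "tmp/incoming.wav")

  # Cut recordings longer than 60s into segments an ASR API will accept
  total_ms, files = split_audio("tmp/incoming.wav", max_segment_length_ms=60000)
  print(total_ms, files)  # e.g. 95000, ['tmp/incoming_1.wav', 'tmp/incoming_2.wav']
  ```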
  {
    "path": "voice/azure/azure_voice.py",
    "content": "\"\"\"\nazure voice service\n\"\"\"\nimport json\nimport os\nimport time\n\nimport azure.cognitiveservices.speech as speechsdk\nfrom langid import classify\n\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom common.tmp_dir import TmpDir\nfrom config import conf\nfrom voice.voice import Voice\n\n\"\"\"\nAzure voice\n主目录设置文件中需填写azure_voice_api_key和azure_voice_region\n\n查看可用的 voice： https://speech.microsoft.com/portal/voicegallery\n\n\"\"\"\n\n\nclass AzureVoice(Voice):\n    def __init__(self):\n        try:\n            curdir = os.path.dirname(__file__)\n            config_path = os.path.join(curdir, \"config.json\")\n            config = None\n            if not os.path.exists(config_path):  # 如果没有配置文件，创建本地配置文件\n                config = {\n                    \"speech_synthesis_voice_name\": \"zh-CN-XiaoxiaoNeural\",  # 识别不出时的默认语音\n                    \"auto_detect\": True,  # 是否自动检测语言\n                    \"speech_synthesis_zh\": \"zh-CN-XiaozhenNeural\",\n                    \"speech_synthesis_en\": \"en-US-JacobNeural\",\n                    \"speech_synthesis_ja\": \"ja-JP-AoiNeural\",\n                    \"speech_synthesis_ko\": \"ko-KR-SoonBokNeural\",\n                    \"speech_synthesis_de\": \"de-DE-LouisaNeural\",\n                    \"speech_synthesis_fr\": \"fr-FR-BrigitteNeural\",\n                    \"speech_synthesis_es\": \"es-ES-LaiaNeural\",\n                    \"speech_recognition_language\": \"zh-CN\",\n                }\n                with open(config_path, \"w\") as fw:\n                    json.dump(config, fw, indent=4)\n            else:\n                with open(config_path, \"r\") as fr:\n                    config = json.load(fr)\n            self.config = config\n            self.api_key = conf().get(\"azure_voice_api_key\")\n            self.api_region = conf().get(\"azure_voice_region\")\n            self.speech_config = speechsdk.SpeechConfig(subscription=self.api_key, region=self.api_region)\n            self.speech_config.speech_synthesis_voice_name = self.config[\"speech_synthesis_voice_name\"]\n            self.speech_config.speech_recognition_language = self.config[\"speech_recognition_language\"]\n        except Exception as e:\n            logger.warn(\"AzureVoice init failed: %s, ignore \" % e)\n\n    def voiceToText(self, voice_file):\n        audio_config = speechsdk.AudioConfig(filename=voice_file)\n        speech_recognizer = speechsdk.SpeechRecognizer(speech_config=self.speech_config, audio_config=audio_config)\n        result = speech_recognizer.recognize_once()\n        if result.reason == speechsdk.ResultReason.RecognizedSpeech:\n            logger.info(\"[Azure] voiceToText voice file name={} text={}\".format(voice_file, result.text))\n            reply = Reply(ReplyType.TEXT, result.text)\n        else:\n            cancel_details = result.cancellation_details\n            logger.error(\"[Azure] voiceToText error, result={}, errordetails={}\".format(result, cancel_details))\n            reply = Reply(ReplyType.ERROR, \"抱歉，语音识别失败\")\n        return reply\n\n    def textToVoice(self, text):\n        if self.config.get(\"auto_detect\"):\n            lang = classify(text)[0]\n            key = \"speech_synthesis_\" + lang\n            if key in self.config:\n                logger.info(\"[Azure] textToVoice auto detect language={}, voice={}\".format(lang, self.config[key]))\n                self.speech_config.speech_synthesis_voice_name = self.config[key]\n            else:\n               
 self.speech_config.speech_synthesis_voice_name = self.config[\"speech_synthesis_voice_name\"]\n        else:\n            self.speech_config.speech_synthesis_voice_name = self.config[\"speech_synthesis_voice_name\"]\n        # Avoid the same filename under multithreading\n        fileName = TmpDir().path() + \"reply-\" + str(int(time.time())) + \"-\" + str(hash(text) & 0x7FFFFFFF) + \".wav\"\n        audio_config = speechsdk.AudioConfig(filename=fileName)\n        speech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=self.speech_config, audio_config=audio_config)\n        result = speech_synthesizer.speak_text(text)\n        if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:\n            logger.info(\"[Azure] textToVoice text={} voice file name={}\".format(text, fileName))\n            reply = Reply(ReplyType.VOICE, fileName)\n        else:\n            cancel_details = result.cancellation_details\n            logger.error(\"[Azure] textToVoice error, result={}, errordetails={}\".format(result, cancel_details.error_details))\n            reply = Reply(ReplyType.ERROR, \"抱歉，语音合成失败\")\n        return reply\n"
  },
  {
    "path": "voice/azure/config.json.template",
    "content": "{\n  \"speech_synthesis_voice_name\": \"zh-CN-XiaoxiaoNeural\",\n  \"auto_detect\": true,\n  \"speech_synthesis_zh\": \"zh-CN-YunxiNeural\",\n  \"speech_synthesis_en\": \"en-US-JacobNeural\",\n  \"speech_synthesis_ja\": \"ja-JP-AoiNeural\",\n  \"speech_synthesis_ko\": \"ko-KR-SoonBokNeural\",\n  \"speech_synthesis_de\": \"de-DE-LouisaNeural\",\n  \"speech_synthesis_fr\": \"fr-FR-BrigitteNeural\",\n  \"speech_synthesis_es\": \"es-ES-LaiaNeural\",\n  \"speech_recognition_language\": \"zh-CN\"\n}\n"
  },
  {
    "path": "voice/baidu/README.md",
    "content": "## 说明\n百度语音识别与合成参数说明\n百度语音依赖，经常会出现问题，可能就是缺少依赖：\npip install baidu-aip\npip install pydub\npip install pysilk\n还有ffmpeg，不同系统安装方式不同\n\n系统中收到的语音文件为mp3格式（wx）或者sil格式（wxy），如果要识别需要转换为pcm格式，转换后的文件为16k采样率，单声道，16bit的pcm文件\n发送时又需要（wx）转换为mp3格式，转换后的文件为16k采样率，单声道，16bit的pcm文件,（wxy）转换为sil格式,还要计算声音长度，发送时需要带上声音长度\n这些事情都在audio_convert.py中封装了，直接调用即可\n\n\n参数说明\n识别参数\nhttps://ai.baidu.com/ai-doc/SPEECH/Vk38lxily\n合成参数\nhttps://ai.baidu.com/ai-doc/SPEECH/Gk38y8lzk\n\n## 使用说明\n分两个地方配置\n\n1、对于def voiceToText(self, filename)函数中调用的百度语音识别API,中接口调用asr（参数）这个配置见CHATGPT-ON-WECHAT工程目录下的`config.json`文件和config.py文件。\n参数\t    可需\t描述\napp_id    必填\t应用的APPID\napi_key  必填\t应用的APIKey\nsecret_key  必填\t应用的SecretKey\ndev_pid\t    必填\t语言选择,填写语言对应的dev_pid值\n\n2、对于def textToVoice(self, text)函数中调用的百度语音合成API,中接口调用synthesis（参数）在本目录下的`config.json`文件中进行配置。\n参数\t    可需\t描述\ntex\t        必填\t合成的文本，使用UTF-8编码，请注意文本长度必须小于1024字节  \nlan\t        必填\t固定值zh。语言选择,目前只有中英文混合模式，填写固定值zh\nspd\t        选填\t语速，取值0-15，默认为5中语速\npit\t        选填\t音调，取值0-15，默认为5中语调\nvol\t        选填\t音量，取值0-15，默认为5中音量（取值为0时为音量最小值，并非为无声）\nper（基础音库）\t选填\t度小宇=1，度小美=0，度逍遥（基础）=3，度丫丫=4\nper（精品音库）\t选填\t度逍遥（精品）=5003，度小鹿=5118，度博文=106，度小童=110，度小萌=111，度米朵=103，度小娇=5\naue\t        选填\t3为mp3格式(默认)； 4为pcm-16k；5为pcm-8k；6为wav（内容同pcm-16k）; 注意aue=4或者6是语音识别要求的格式，但是音频内容不是语音识别要求的自然人发音，所以识别效果会受影响。\n\n关于per参数的说明，注意您购买的哪个音库，就填写哪个音库的参数，否则会报错。如果您购买的是基础音库，那么per参数只能填写0到4，如果您购买的是精品音库，那么per参数只能填写5003，5118，106,110,111,103,5其他的都会报错。\n### 配置文件\n\n将文件夹中`config.json.template`复制为`config.json`。\n\n``` json\n    {\n    \"lang\": \"zh\",\n    \"ctp\": 1,\n    \"spd\": 5,\n    \"pit\": 5,\n    \"vol\": 5,\n    \"per\": 0\n    }\n```"
  },
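  Putting the table above into practice, a minimal short-text synthesis call through the `baidu-aip` SDK looks roughly like this (credentials are placeholders; real values come from the root `config.json`):

  ```python
  from aip import AipSpeech

  # Placeholder credentials for illustration only
  client = AipSpeech("app_id", "api_key", "secret_key")

  # lan/ctp are fixed; spd/pit/vol/per follow the table above
  result = client.synthesis("你好，百度", "zh", 1, {"spd": 5, "pit": 5, "vol": 5, "per": 0})

  # The SDK returns audio bytes on success and an error dict on failure
  if not isinstance(result, dict):
      with open("reply.mp3", "wb") as f:
          f.write(result)
  else:
      print("synthesis failed:", result)
  ```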
  {
    "path": "voice/baidu/baidu_voice.py",
    "content": "\"\"\"\nbaidu voice service with thread-safe token caching\n\"\"\"\nimport json\nimport os\nimport time\nimport threading\nimport requests\n\nfrom aip import AipSpeech\n\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom common.tmp_dir import TmpDir\nfrom config import conf\nfrom voice.voice import Voice\n\ntry:\n    from voice.audio_convert import get_pcm_from_wav\nexcept ImportError as e:\n    logger.debug(\"import voice.audio_convert failed: {}\".format(e))\n\nclass BaiduVoice(Voice):\n    def __init__(self):\n        try:\n            # 读取本地 TTS 参数配置\n            curdir = os.path.dirname(__file__)\n            config_path = os.path.join(curdir, \"config.json\")\n            if not os.path.exists(config_path):\n                bconf = {\"lang\": \"zh\", \"ctp\": 1, \"spd\": 5, \"pit\": 5, \"vol\": 5, \"per\": 0}\n                with open(config_path, \"w\") as fw:\n                    json.dump(bconf, fw, indent=4)\n            else:\n                with open(config_path, \"r\") as fr:\n                    bconf = json.load(fr)\n\n            self.app_id = str(conf().get(\"baidu_app_id\"))\n            self.api_key = str(conf().get(\"baidu_api_key\"))\n            self.secret_key = str(conf().get(\"baidu_secret_key\"))\n            self.dev_id = conf().get(\"baidu_dev_pid\")\n\n            self.lang = bconf[\"lang\"]\n            self.ctp  = bconf[\"ctp\"]\n            self.spd  = bconf[\"spd\"]\n            self.pit  = bconf[\"pit\"]\n            self.vol  = bconf[\"vol\"]\n            self.per  = bconf[\"per\"]\n\n            # 百度 SDK 客户端（短文本合成 & 语音识别）\n            self.client = AipSpeech(self.app_id, self.api_key, self.secret_key)\n\n            # access_token 缓存与锁\n            self._access_token    = None\n            self._token_expire_ts = 0\n            self._token_lock      = threading.Lock()\n        except Exception as e:\n            logger.warn(\"BaiduVoice init failed: %s, ignore\" % e)\n\n    def _get_access_token(self):\n        # 多线程安全获取 token\n        with self._token_lock:\n            now = time.time()\n            if self._access_token and now < self._token_expire_ts:\n                return self._access_token\n            url = \"https://aip.baidubce.com/oauth/2.0/token\"\n            params = {\n                \"grant_type\":    \"client_credentials\",\n                \"client_id\":     self.api_key,\n                \"client_secret\": self.secret_key,\n            }\n            resp = requests.post(url, params=params).json()\n            token = resp.get(\"access_token\")\n            expires_in = resp.get(\"expires_in\", 2592000)\n            if token:\n                self._access_token    = token\n                self._token_expire_ts = now + expires_in - 60  # 提前 1 分钟过期\n                return token\n            else:\n                logger.error(\"BaiduVoice _get_access_token failed: %s\", resp)\n                return None\n\n    def voiceToText(self, voice_file):\n        logger.debug(\"[Baidu] recognize voice file=%s\", voice_file)\n        pcm = get_pcm_from_wav(voice_file)\n        res = self.client.asr(pcm, \"pcm\", 16000, {\"dev_pid\": self.dev_id})\n        if res.get(\"err_no\") == 0:\n            text = \"\".join(res[\"result\"])\n            logger.info(\"[Baidu] ASR result: %s\", text)\n            return Reply(ReplyType.TEXT, text)\n        else:\n            err = res.get(\"err_msg\", \"\")\n            logger.error(\"[Baidu] ASR error: %s\", err)\n            return Reply(ReplyType.ERROR, 
f\"语音识别失败：{err}\")\n\n    def _long_text_synthesis(self, text):\n        token = self._get_access_token()\n        if not token:\n            return Reply(ReplyType.ERROR, \"获取百度 access_token 失败\")\n\n        # 创建合成任务\n        create_url = f\"https://aip.baidubce.com/rpc/2.0/tts/v1/create?access_token={token}\"\n        payload = {\n            \"text\":            text,\n            \"format\":          \"mp3-16k\",\n            \"voice\":           0,\n            \"lang\":            self.lang,\n            \"speed\":           self.spd,\n            \"pitch\":           self.pit,\n            \"volume\":          self.vol,\n            \"enable_subtitle\": 0,\n        }\n        headers = {\"Content-Type\": \"application/json\"}\n        create_resp = requests.post(create_url, headers=headers, json=payload).json()\n        task_id = create_resp.get(\"task_id\")\n        if not task_id:\n            logger.error(\"[Baidu] 长文本合成创建任务失败: %s\", create_resp)\n            return Reply(ReplyType.ERROR, \"长文本合成任务提交失败\")\n        logger.info(\"[Baidu] 长文本合成任务已提交 task_id=%s\", task_id)\n\n        # 轮询查询任务状态\n        query_url = f\"https://aip.baidubce.com/rpc/2.0/tts/v1/query?access_token={token}\"\n        for _ in range(100):\n            time.sleep(3)\n            resp = requests.post(query_url, headers=headers, json={\"task_ids\":[task_id]})\n            result = resp.json()\n            infos = result.get(\"tasks_info\") or result.get(\"tasks\") or []\n            if not infos:\n                continue\n            info = infos[0]\n            status = info.get(\"task_status\")\n            if status == \"Success\":\n                task_res = info.get(\"task_result\", {})\n                audio_url = task_res.get(\"audio_address\") or task_res.get(\"speech_url\")\n                break\n            elif status == \"Running\":\n                continue\n            else:\n                logger.error(\"[Baidu] 长文本合成失败: %s\", info)\n                return Reply(ReplyType.ERROR, \"长文本合成执行失败\")\n        else:\n            return Reply(ReplyType.ERROR, \"长文本合成超时，请稍后重试\")\n\n        # 下载并保存音频\n        audio_data = requests.get(audio_url).content\n        fn = TmpDir().path() + f\"reply-long-{int(time.time())}-{hash(text)&0x7FFFFFFF}.mp3\"\n        with open(fn, \"wb\") as f:\n            f.write(audio_data)\n        logger.info(\"[Baidu] 长文本合成 success: %s\", fn)\n        return Reply(ReplyType.VOICE, fn)\n\n    def textToVoice(self, text):\n        try:\n            # GBK 编码字节长度\n            gbk_len = len(text.encode(\"gbk\", errors=\"ignore\"))\n            if gbk_len <= 1024:\n                # 短文本走 SDK 合成\n                result = self.client.synthesis(\n                    text, self.lang, self.ctp,\n                    {\"spd\":self.spd, \"pit\":self.pit, \"vol\":self.vol, \"per\":self.per}\n                )\n                if not isinstance(result, dict):\n                    fn = TmpDir().path() + f\"reply-{int(time.time())}-{hash(text)&0x7FFFFFFF}.mp3\"\n                    with open(fn, \"wb\") as f:\n                        f.write(result)\n                    logger.info(\"[Baidu] 短文本合成 success: %s\", fn)\n                    return Reply(ReplyType.VOICE, fn)\n                else:\n                    logger.error(\"[Baidu] 短文本合成 error: %s\", result)\n                    return Reply(ReplyType.ERROR, \"短文本语音合成失败\")\n            else:\n                # 长文本\n                return self._long_text_synthesis(text)\n        except Exception as e:\n            logger.error(\"BaiduVoice 
textToVoice exception: %s\", e)\n            return Reply(ReplyType.ERROR, f\"合成异常：{e}\")\n\n"
  },
  {
    "path": "voice/baidu/config.json.template",
    "content": "{\n  \"lang\": \"zh\",\n  \"ctp\": 1,\n  \"spd\": 5,\n  \"pit\": 5,\n  \"vol\": 5,\n  \"per\": 0\n}\n"
  },
  {
    "path": "voice/edge/edge_voice.py",
    "content": "import time\n\nimport edge_tts\nimport asyncio\n\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom common.tmp_dir import TmpDir\nfrom voice.voice import Voice\n\n\nclass EdgeVoice(Voice):\n\n    def __init__(self):\n        '''\n        # 普通话\n        zh-CN-XiaoxiaoNeural\n        zh-CN-XiaoyiNeural\n        zh-CN-YunjianNeural\n        zh-CN-YunxiNeural\n        zh-CN-YunxiaNeural\n        zh-CN-YunyangNeural\n        # 地方口音\n        zh-CN-liaoning-XiaobeiNeural\n        zh-CN-shaanxi-XiaoniNeural\n        # 粤语\n        zh-HK-HiuGaaiNeural\n        zh-HK-HiuMaanNeural\n        zh-HK-WanLungNeural\n        # 湾湾腔\n        zh-TW-HsiaoChenNeural\n        zh-TW-HsiaoYuNeural\n        zh-TW-YunJheNeural\n        '''\n        self.voice = \"zh-CN-YunjianNeural\"\n\n    def voiceToText(self, voice_file):\n        pass\n\n    async def gen_voice(self, text, fileName):\n        communicate = edge_tts.Communicate(text, self.voice)\n        await communicate.save(fileName)\n\n    def textToVoice(self, text):\n        fileName = TmpDir().path() + \"reply-\" + str(int(time.time())) + \"-\" + str(hash(text) & 0x7FFFFFFF) + \".mp3\"\n\n        asyncio.run(self.gen_voice(text, fileName))\n\n        logger.info(\"[EdgeTTS] textToVoice text={} voice file name={}\".format(text, fileName))\n        return Reply(ReplyType.VOICE, fileName)\n"
  },
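  The voices listed in the comment above can also be discovered at runtime; the `edge-tts` package ships an async `list_voices()` helper. A quick sketch, assuming the library version exposes that call:

  ```python
  import asyncio

  import edge_tts

  async def show_zh_voices():
      # list_voices() returns dicts with ShortName/Gender/Locale fields
      for v in await edge_tts.list_voices():
          if v["Locale"].startswith("zh-"):
              print(v["ShortName"], v["Gender"])

  asyncio.run(show_zh_voices())
  ```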
  {
    "path": "voice/elevent/elevent_voice.py",
    "content": "import time\n\nfrom elevenlabs.client import ElevenLabs\nfrom elevenlabs import save\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom common.tmp_dir import TmpDir\nfrom voice.voice import Voice\nfrom config import conf\n\nXI_API_KEY = conf().get(\"xi_api_key\")\nclient = ElevenLabs(api_key=XI_API_KEY)\nname = conf().get(\"xi_voice_id\")\n\nclass ElevenLabsVoice(Voice):\n\n    def __init__(self):\n        pass\n\n    def voiceToText(self, voice_file):\n        pass\n\n    def textToVoice(self, text):\n        audio = client.generate(\n            text=text,\n            voice=name,\n            model='eleven_multilingual_v2'\n        )\n        fileName = TmpDir().path() + \"reply-\" + str(int(time.time())) + \"-\" + str(hash(text) & 0x7FFFFFFF) + \".mp3\"\n        save(audio, fileName)\n        logger.info(\"[ElevenLabs] textToVoice text={} voice file name={}\".format(text, fileName))\n        return Reply(ReplyType.VOICE, fileName)"
  },
  {
    "path": "voice/factory.py",
    "content": "\"\"\"\nvoice factory\n\"\"\"\n\n\ndef create_voice(voice_type):\n    \"\"\"\n    create a voice instance\n    :param voice_type: voice type code\n    :return: voice instance\n    \"\"\"\n    if voice_type == \"baidu\":\n        from voice.baidu.baidu_voice import BaiduVoice\n\n        return BaiduVoice()\n    elif voice_type == \"google\":\n        from voice.google.google_voice import GoogleVoice\n\n        return GoogleVoice()\n    elif voice_type == \"openai\":\n        from voice.openai.openai_voice import OpenaiVoice\n\n        return OpenaiVoice()\n    elif voice_type == \"pytts\":\n        from voice.pytts.pytts_voice import PyttsVoice\n\n        return PyttsVoice()\n    elif voice_type == \"azure\":\n        from voice.azure.azure_voice import AzureVoice\n\n        return AzureVoice()\n    elif voice_type == \"elevenlabs\":\n        from voice.elevent.elevent_voice import ElevenLabsVoice\n\n        return ElevenLabsVoice()\n\n    elif voice_type == \"linkai\":\n        from voice.linkai.linkai_voice import LinkAIVoice\n\n        return LinkAIVoice()\n    elif voice_type == \"ali\":\n        from voice.ali.ali_voice import AliVoice\n\n        return AliVoice()\n    elif voice_type == \"edge\":\n        from voice.edge.edge_voice import EdgeVoice\n\n        return EdgeVoice()\n    elif voice_type == \"xunfei\":\n        from voice.xunfei.xunfei_voice import XunfeiVoice\n\n        return XunfeiVoice()\n    elif voice_type == \"tencent\":\n        from voice.tencent.tencent_voice import TencentVoice\n\n        return TencentVoice()\n    raise RuntimeError\n"
  },
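  A channel obtains a concrete service through this factory and only ever talks to the `Voice` interface. A usage sketch (any registered type code works; the relevant service's credentials must be configured):

  ```python
  from voice.factory import create_voice

  voice = create_voice("edge")            # e.g. EdgeVoice, which needs no API key
  reply = voice.textToVoice("你好，世界")  # -> Reply(ReplyType.VOICE, <voice file path>)
  print(reply.type, reply.content)
  ```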
  {
    "path": "voice/google/google_voice.py",
    "content": "\"\"\"\ngoogle voice service\n\"\"\"\n\nimport time\n\nimport speech_recognition\nfrom gtts import gTTS\n\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom common.tmp_dir import TmpDir\nfrom voice.voice import Voice\n\n\nclass GoogleVoice(Voice):\n    recognizer = speech_recognition.Recognizer()\n\n    def __init__(self):\n        pass\n\n    def voiceToText(self, voice_file):\n        with speech_recognition.AudioFile(voice_file) as source:\n            audio = self.recognizer.record(source)\n        try:\n            text = self.recognizer.recognize_google(audio, language=\"zh-CN\")\n            logger.info(\"[Google] voiceToText text={} voice file name={}\".format(text, voice_file))\n            reply = Reply(ReplyType.TEXT, text)\n        except speech_recognition.UnknownValueError:\n            reply = Reply(ReplyType.ERROR, \"抱歉，我听不懂\")\n        except speech_recognition.RequestError as e:\n            reply = Reply(ReplyType.ERROR, \"抱歉，无法连接到 Google 语音识别服务；{0}\".format(e))\n        finally:\n            return reply\n\n    def textToVoice(self, text):\n        try:\n            # Avoid the same filename under multithreading\n            mp3File = TmpDir().path() + \"reply-\" + str(int(time.time())) + \"-\" + str(hash(text) & 0x7FFFFFFF) + \".mp3\"\n            tts = gTTS(text=text, lang=\"zh\")\n            tts.save(mp3File)\n            logger.info(\"[Google] textToVoice text={} voice file name={}\".format(text, mp3File))\n            reply = Reply(ReplyType.VOICE, mp3File)\n        except Exception as e:\n            reply = Reply(ReplyType.ERROR, str(e))\n        finally:\n            return reply\n"
  },
  {
    "path": "voice/linkai/linkai_voice.py",
    "content": "\"\"\"\ngoogle voice service\n\"\"\"\nimport random\nimport requests\nfrom voice import audio_convert\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom config import conf\nfrom voice.voice import Voice\nfrom common import const\nimport os\nimport datetime\n\nclass LinkAIVoice(Voice):\n    def __init__(self):\n        pass\n\n    def voiceToText(self, voice_file):\n        logger.debug(\"[LinkVoice] voice file name={}\".format(voice_file))\n        try:\n            url = conf().get(\"linkai_api_base\", \"https://api.link-ai.tech\") + \"/v1/audio/transcriptions\"\n            headers = {\"Authorization\": \"Bearer \" + conf().get(\"linkai_api_key\")}\n            model = None\n            if not conf().get(\"text_to_voice\") or conf().get(\"voice_to_text\") == \"openai\":\n                model = const.WHISPER_1\n            if voice_file.endswith(\".amr\"):\n                try:\n                    mp3_file = os.path.splitext(voice_file)[0] + \".mp3\"\n                    audio_convert.any_to_mp3(voice_file, mp3_file)\n                    voice_file = mp3_file\n                except Exception as e:\n                    logger.warn(f\"[LinkVoice] amr file transfer failed, directly send amr voice file: {format(e)}\")\n            file = open(voice_file, \"rb\")\n            file_body = {\n                \"file\": file\n            }\n            data = {\n                \"model\": model\n            }\n            res = requests.post(url, files=file_body, headers=headers, data=data, timeout=(5, 60))\n            if res.status_code == 200:\n                text = res.json().get(\"text\")\n            else:\n                res_json = res.json()\n                logger.error(f\"[LinkVoice] voiceToText error, status_code={res.status_code}, msg={res_json.get('message')}\")\n                return None\n            reply = Reply(ReplyType.TEXT, text)\n            logger.info(f\"[LinkVoice] voiceToText success, text={text}, file name={voice_file}\")\n        except Exception as e:\n            logger.error(e)\n            return None\n        return reply\n\n    def textToVoice(self, text):\n        try:\n            url = conf().get(\"linkai_api_base\", \"https://api.link-ai.tech\") + \"/v1/audio/speech\"\n            headers = {\"Authorization\": \"Bearer \" + conf().get(\"linkai_api_key\")}\n            model = const.TTS_1\n            if not conf().get(\"text_to_voice\") or conf().get(\"text_to_voice\") in [\"openai\", const.TTS_1, const.TTS_1_HD]:\n                model = conf().get(\"text_to_voice_model\") or const.TTS_1\n            data = {\n                \"model\": model,\n                \"input\": text,\n                \"voice\": conf().get(\"tts_voice_id\"),\n                \"app_code\": conf().get(\"linkai_app_code\")\n            }\n            res = requests.post(url, headers=headers, json=data, timeout=(5, 120))\n            if res.status_code == 200:\n                tmp_file_name = \"tmp/\" + datetime.datetime.now().strftime('%Y%m%d%H%M%S') + str(random.randint(0, 1000)) + \".mp3\"\n                with open(tmp_file_name, 'wb') as f:\n                    f.write(res.content)\n                reply = Reply(ReplyType.VOICE, tmp_file_name)\n                logger.info(f\"[LinkVoice] textToVoice success, input={text}, model={model}, voice_id={data.get('voice')}\")\n                return reply\n            else:\n                res_json = res.json()\n                logger.error(f\"[LinkVoice] textToVoice error, 
status_code={res.status_code}, msg={res_json.get('message')}\")\n                return None\n        except Exception as e:\n            logger.error(e)\n            # reply = Reply(ReplyType.ERROR, \"遇到了一点小问题，请稍后再问我吧\")\n            return None\n"
  },
  {
    "path": "voice/openai/openai_voice.py",
    "content": "\"\"\"\ngoogle voice service\n\"\"\"\nimport json\n\nimport openai\n\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom config import conf\nfrom voice.voice import Voice\nimport requests\nfrom common import const\nimport datetime, random\n\nclass OpenaiVoice(Voice):\n    def __init__(self):\n        openai.api_key = conf().get(\"open_ai_api_key\")\n\n    def voiceToText(self, voice_file):\n        logger.debug(\"[Openai] voice file name={}\".format(voice_file))\n        try:\n            file = open(voice_file, \"rb\")\n            api_base = conf().get(\"open_ai_api_base\") or \"https://api.openai.com/v1\"\n            url = f'{api_base}/audio/transcriptions'\n            headers = {\n                'Authorization': 'Bearer ' + conf().get(\"open_ai_api_key\"),\n                # 'Content-Type': 'multipart/form-data' # 加了会报错，不知道什么原因\n            }\n            files = {\n                \"file\": file,\n            }\n            data = {\n                \"model\": \"whisper-1\",\n            }\n            response = requests.post(url, headers=headers, files=files, data=data)\n            response_data = response.json()\n            text = response_data['text']\n            reply = Reply(ReplyType.TEXT, text)\n            logger.info(\"[Openai] voiceToText text={} voice file name={}\".format(text, voice_file))\n        except Exception as e:\n            reply = Reply(ReplyType.ERROR, \"我暂时还无法听清您的语音，请稍后再试吧~\")\n        finally:\n            return reply\n\n\n    def textToVoice(self, text):\n        try:\n            api_base = conf().get(\"open_ai_api_base\") or \"https://api.openai.com/v1\"\n            url = f'{api_base}/audio/speech'\n            headers = {\n                'Authorization': 'Bearer ' + conf().get(\"open_ai_api_key\"),\n                'Content-Type': 'application/json'\n            }\n            data = {\n                'model': conf().get(\"text_to_voice_model\") or const.TTS_1,\n                'input': text,\n                'voice': conf().get(\"tts_voice_id\") or \"alloy\"\n            }\n            response = requests.post(url, headers=headers, json=data)\n            file_name = \"tmp/\" + datetime.datetime.now().strftime('%Y%m%d%H%M%S') + str(random.randint(0, 1000)) + \".mp3\"\n            logger.debug(f\"[OPENAI] text_to_Voice file_name={file_name}, input={text}\")\n            with open(file_name, 'wb') as f:\n                f.write(response.content)\n            logger.info(f\"[OPENAI] text_to_Voice success\")\n            reply = Reply(ReplyType.VOICE, file_name)\n        except Exception as e:\n            logger.error(e)\n            reply = Reply(ReplyType.ERROR, \"遇到了一点小问题，请稍后再问我吧\")\n        return reply\n"
  },
  {
    "path": "voice/pytts/pytts_voice.py",
    "content": "\"\"\"\npytts voice service (offline)\n\"\"\"\n\nimport os\nimport sys\nimport time\n\nimport pyttsx3\n\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom common.tmp_dir import TmpDir\nfrom voice.voice import Voice\n\n\nclass PyttsVoice(Voice):\n    engine = pyttsx3.init()\n\n    def __init__(self):\n        # 语速\n        self.engine.setProperty(\"rate\", 125)\n        # 音量\n        self.engine.setProperty(\"volume\", 1.0)\n        if sys.platform == \"win32\":\n            for voice in self.engine.getProperty(\"voices\"):\n                if \"Chinese\" in voice.name:\n                    self.engine.setProperty(\"voice\", voice.id)\n        else:\n            self.engine.setProperty(\"voice\", \"zh\")\n            # If the problem of espeak is fixed, using runAndWait() and remove this startLoop()\n            # TODO: check if this is work on win32\n            self.engine.startLoop(useDriverLoop=False)\n\n    def textToVoice(self, text):\n        try:\n            # Avoid the same filename under multithreading\n            wavFileName = \"reply-\" + str(int(time.time())) + \"-\" + str(hash(text) & 0x7FFFFFFF) + \".wav\"\n            wavFile = TmpDir().path() + wavFileName\n            logger.info(\"[Pytts] textToVoice text={} voice file name={}\".format(text, wavFile))\n\n            self.engine.save_to_file(text, wavFile)\n\n            if sys.platform == \"win32\":\n                self.engine.runAndWait()\n            else:\n                # In ubuntu, runAndWait do not really wait until the file created.\n                # It will return once the task queue is empty, but the task is still running in coroutine.\n                # And if you call runAndWait() and time.sleep() twice, it will stuck, so do not use this.\n                # If you want to fix this, add self._proxy.setBusy(True) in line 127 in espeak.py, at the beginning of the function save_to_file.\n                # self.engine.runAndWait()\n\n                # Before espeak fix this problem, we iterate the generator and control the waiting by ourself.\n                # But this is not the canonical way to use it, for example if the file already exists it also cannot wait.\n                self.engine.iterate()\n                while self.engine.isBusy() or wavFileName not in os.listdir(TmpDir().path()):\n                    time.sleep(0.1)\n\n            reply = Reply(ReplyType.VOICE, wavFile)\n\n        except Exception as e:\n            reply = Reply(ReplyType.ERROR, str(e))\n        finally:\n            return reply\n"
  },
  {
    "path": "voice/tencent/config.json.template",
    "content": "{\n    \"voice_type\": 1003,\n    \"secret_id\": \"YOUR_SECRET_ID\",\n    \"secret_key\": \"YOUR_SECRET_KEY\"\n}\n"
  },
  {
    "path": "voice/tencent/tencent_voice.py",
    "content": "import json\nimport base64\nimport os\nimport time\nfrom voice.voice import Voice\nfrom common.log import logger\nfrom tencentcloud.common import credential\nfrom tencentcloud.asr.v20190614 import asr_client, models as asr_models\nfrom tencentcloud.tts.v20190823 import tts_client, models as tts_models\nfrom bridge.reply import Reply, ReplyType\nfrom common.tmp_dir import TmpDir\n\nclass TencentVoice(Voice):\n    def __init__(self):\n        super().__init__()\n        self.secret_id = None\n        self.secret_key = None\n        self.voice_type = 1003\n        self._load_config()\n        \n    def _load_config(self):\n        \"\"\"\n        从本地配置文件加载配置\n        \"\"\"\n        try:\n            config_path = os.path.join(os.path.dirname(__file__), 'config.json')\n            with open(config_path, 'r') as f:\n                config = json.load(f)\n            self.secret_id = config.get('secret_id')\n            self.secret_key = config.get('secret_key')\n            self.voice_type = config.get('voice_type', self.voice_type)\n            if not self.secret_id or not self.secret_key:\n                logger.error(\"[Tencent] Missing credentials in config.json\")\n        except Exception as e:\n            logger.error(f\"[Tencent] Failed to load config: {e}\")\n    \n    def setup(self, config):\n        \"\"\"\n        设置配置信息（保留此方法用于向后兼容）\n        \"\"\"\n        pass\n        \n    def voiceToText(self, voice_file):\n        \"\"\"\n        将语音文件转换为文本\n        \"\"\"\n        try:\n            # 实例化认证对象\n            cred = credential.Credential(self.secret_id, self.secret_key)\n            \n            # 实例化客户端\n            client = asr_client.AsrClient(cred, \"ap-guangzhou\")\n            \n            # 读取音频文件\n            with open(voice_file, 'rb') as f:\n                audio_data = f.read()\n            \n            # 进行base64编码\n            base64_audio = base64.b64encode(audio_data).decode('utf-8')\n            \n            # 构造请求对象\n            req = asr_models.SentenceRecognitionRequest()\n            req.ProjectId = 0\n            req.SubServiceType = 2\n            req.EngSerViceType = \"16k_zh\"\n            req.SourceType = 1\n            req.VoiceFormat = \"wav\"\n            req.UsrAudioKey = \"voice_recognition\"\n            req.Data = base64_audio\n            \n            # 发起请求\n            resp = client.SentenceRecognition(req)\n            \n            # 解析结果\n            if resp.Result:\n                logger.info(\"[Tencent] Voice to text success: {}\".format(resp.Result))\n                return Reply(ReplyType.TEXT, resp.Result)\n            else:\n                logger.warning(\"[Tencent] Voice to text failed\")\n                return Reply(ReplyType.ERROR, \"腾讯语音识别失败\")\n            \n        except Exception as e:\n            logger.error(\"[Tencent] Voice to text error: {}\".format(e))\n            return Reply(ReplyType.ERROR, \"腾讯语音识别出错：{}\".format(str(e)))\n\n    def textToVoice(self, text):\n        \"\"\"\n        将文本转换为语音\n        \"\"\"\n        try:\n            cred = credential.Credential(self.secret_id, self.secret_key)\n            client = tts_client.TtsClient(cred, \"ap-guangzhou\")\n\n            req = tts_models.TextToVoiceRequest()\n            req.Text = text\n            req.SessionId = str(int(time.time()))\n            req.Volume = 5\n            req.Speed = 0\n            req.ProjectId = 0\n            req.ModelType = 1\n            req.PrimaryLanguage = 1\n            req.SampleRate = 16000\n            
req.VoiceType = self.voice_type  # 客服女声\n\n            response = client.TextToVoice(req)\n            \n            if response.Audio:\n                fileName = TmpDir().path() + \"reply-\" + str(int(time.time())) + \"-\" + str(hash(text) & 0x7FFFFFFF) + \".mp3\"\n                with open(fileName, \"wb\") as f:\n                    f.write(base64.b64decode(response.Audio))\n                logger.info(\"[Tencent] textToVoice text={} voice file name={}\".format(text, fileName))\n                return Reply(ReplyType.VOICE, fileName)\n            else:\n                logger.error(\"[Tencent] textToVoice failed\")\n                return Reply(ReplyType.ERROR, \"腾讯语音合成失败\")\n\n        except Exception as e:\n            logger.error(\"[Tencent] Text to voice error: {}\".format(e))\n            return Reply(ReplyType.ERROR, \"腾讯语音合成出错：{}\".format(str(e)))\n"
  },
  {
    "path": "voice/voice.py",
    "content": "\"\"\"\nVoice service abstract class\n\"\"\"\n\n\nclass Voice(object):\n    def voiceToText(self, voice_file):\n        \"\"\"\n        Send voice to voice service and get text\n        \"\"\"\n        raise NotImplementedError\n\n    def textToVoice(self, text):\n        \"\"\"\n        Send text to voice service and get voice\n        \"\"\"\n        raise NotImplementedError\n"
  },
  {
    "path": "voice/xunfei/config.json.template",
    "content": "{\n  \"APPID\":\"xxx71xxx\",\n  \"APIKey\":\"xxxx69058exxxxxx\",\n  \"APISecret\":\"xxxx697f0xxxxxx\",\n  \"BusinessArgsTTS\":{\"aue\": \"lame\", \"sfl\": 1, \"auf\": \"audio/L16;rate=16000\", \"vcn\": \"xiaoyan\", \"tte\": \"utf8\"},\n  \"BusinessArgsASR\":{\"domain\": \"iat\", \"language\": \"zh_cn\", \"accent\": \"mandarin\", \"vad_eos\":10000, \"dwa\": \"wpgs\"}\n}\n"
  },
  {
    "path": "voice/xunfei/xunfei_asr.py",
    "content": "# -*- coding:utf-8 -*-\n#\n#  Author: njnuko \n#  Email: njnuko@163.com \n#\n#  这个文档是基于官方的demo来改的，固体官方demo文档请参考官网\n#\n#  语音听写流式 WebAPI 接口调用示例 接口文档（必看）：https://doc.xfyun.cn/rest_api/语音听写（流式版）.html\n#  webapi 听写服务参考帖子（必看）：http://bbs.xfyun.cn/forum.php?mod=viewthread&tid=38947&extra=\n#  语音听写流式WebAPI 服务，热词使用方式：登陆开放平台https://www.xfyun.cn/后，找到控制台--我的应用---语音听写（流式）---服务管理--个性化热词，\n#  设置热词\n#  注意：热词只能在识别的时候会增加热词的识别权重，需要注意的是增加相应词条的识别率，但并不是绝对的，具体效果以您测试为准。\n#  语音听写流式WebAPI 服务，方言试用方法：登陆开放平台https://www.xfyun.cn/后，找到控制台--我的应用---语音听写（流式）---服务管理--识别语种列表\n#  可添加语种或方言，添加后会显示该方言的参数值\n#  错误码链接：https://www.xfyun.cn/document/error-code （code返回错误码时必看）\n# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #\n\nimport websocket\nimport datetime\nimport hashlib\nimport base64\nimport hmac\nimport json\nfrom urllib.parse import urlencode\nimport time\nimport ssl\nfrom wsgiref.handlers import format_date_time\nfrom datetime import datetime\nfrom time import mktime\nimport _thread as thread\nimport os\nimport wave\n\n\nSTATUS_FIRST_FRAME = 0  # 第一帧的标识\nSTATUS_CONTINUE_FRAME = 1  # 中间帧标识\nSTATUS_LAST_FRAME = 2  # 最后一帧的标识\n\n#############\n#whole_dict 是用来存储返回值的，由于带语音修正，所以用dict来存储，有更新的化pop之前的值，最后再合并\nglobal whole_dict\n#这个文档是官方文档改的，这个参数是用来做函数调用时用的\nglobal wsParam\n##############\n\n\nclass Ws_Param(object):\n    # 初始化\n    def __init__(self, APPID, APIKey, APISecret,BusinessArgs, AudioFile):\n        self.APPID = APPID\n        self.APIKey = APIKey\n        self.APISecret = APISecret\n        self.AudioFile = AudioFile\n        self.BusinessArgs = BusinessArgs\n        # 公共参数(common)\n        self.CommonArgs = {\"app_id\": self.APPID}\n        # 业务参数(business)，更多个性化参数可在官网查看\n        #self.BusinessArgs = {\"domain\": \"iat\", \"language\": \"zh_cn\", \"accent\": \"mandarin\", \"vinfo\":1,\"vad_eos\":10000}\n\n    # 生成url\n    def create_url(self):\n        url = 'wss://ws-api.xfyun.cn/v2/iat'\n        # 生成RFC1123格式的时间戳\n        now = datetime.now()\n        date = format_date_time(mktime(now.timetuple()))\n\n        # 拼接字符串\n        signature_origin = \"host: \" + \"ws-api.xfyun.cn\" + \"\\n\"\n        signature_origin += \"date: \" + date + \"\\n\"\n        signature_origin += \"GET \" + \"/v2/iat \" + \"HTTP/1.1\"\n        # 进行hmac-sha256进行加密\n        signature_sha = hmac.new(self.APISecret.encode('utf-8'), signature_origin.encode('utf-8'),\n                                 digestmod=hashlib.sha256).digest()\n        signature_sha = base64.b64encode(signature_sha).decode(encoding='utf-8')\n\n        authorization_origin = \"api_key=\\\"%s\\\", algorithm=\\\"%s\\\", headers=\\\"%s\\\", signature=\\\"%s\\\"\" % (\n            self.APIKey, \"hmac-sha256\", \"host date request-line\", signature_sha)\n        authorization = base64.b64encode(authorization_origin.encode('utf-8')).decode(encoding='utf-8')\n        # 将请求的鉴权参数组合为字典\n        v = {\n            \"authorization\": authorization,\n            \"date\": date,\n            \"host\": \"ws-api.xfyun.cn\"\n        }\n        # 拼接鉴权参数，生成url\n        url = url + '?' 
+ urlencode(v)\n        #print(\"date: \",date)\n        #print(\"v: \",v)\n        # 此处打印出建立连接时候的url,参考本demo的时候可取消上方打印的注释，比对相同参数时生成的url与自己代码生成的url是否一致\n        #print('websocket url :', url)\n        return url\n\n\n# 收到websocket消息的处理\ndef on_message(ws, message):\n    global whole_dict\n    try:\n        code = json.loads(message)[\"code\"]\n        sid = json.loads(message)[\"sid\"]\n        if code != 0:\n            errMsg = json.loads(message)[\"message\"]\n            print(\"sid:%s call error:%s code is:%s\" % (sid, errMsg, code))\n        else:\n            temp1 = json.loads(message)[\"data\"][\"result\"]\n            data = json.loads(message)[\"data\"][\"result\"][\"ws\"]\n            sn = temp1[\"sn\"]\n            if \"rg\" in temp1.keys():\n                rep = temp1[\"rg\"]\n                rep_start = rep[0]\n                rep_end = rep[1]\n                for sn in range(rep_start,rep_end+1):\n                    #print(\"before pop\",whole_dict)\n                    #print(\"sn\",sn)\n                    whole_dict.pop(sn,None)\n                    #print(\"after pop\",whole_dict)\n                results = \"\"\n                for i in data:\n                    for w in i[\"cw\"]:\n                        results += w[\"w\"]\n                whole_dict[sn]=results\n                #print(\"after add\",whole_dict)\n            else:\n                results = \"\"\n                for i in data:\n                    for w in i[\"cw\"]:\n                        results += w[\"w\"]\n                whole_dict[sn]=results\n            #print(\"sid:%s call success!,data is:%s\" % (sid, json.dumps(data, ensure_ascii=False)))\n    except Exception as e:\n        print(\"receive msg,but parse exception:\", e)\n\n\n\n# 收到websocket错误的处理\ndef on_error(ws, error):\n    print(\"### error:\", error)\n\n\n# 收到websocket关闭的处理\ndef on_close(ws,a,b):\n    print(\"### closed ###\")\n\n\n# 收到websocket连接建立的处理\ndef on_open(ws):\n    global wsParam\n    def run(*args):\n        frameSize = 8000  # 每一帧的音频大小\n        intervel = 0.04  # 发送音频间隔(单位:s)\n        status = STATUS_FIRST_FRAME  # 音频的状态信息，标识音频是第一帧，还是中间帧、最后一帧\n\n        with wave.open(wsParam.AudioFile, \"rb\") as fp:\n            while True:\n                buf = fp.readframes(frameSize)\n                # 文件结束\n                if not buf:\n                    status = STATUS_LAST_FRAME\n                # 第一帧处理\n                # 发送第一帧音频，带business 参数\n                # appid 必须带上，只需第一帧发送\n                if status == STATUS_FIRST_FRAME:\n                    d = {\"common\": wsParam.CommonArgs,\n                         \"business\": wsParam.BusinessArgs,\n                         \"data\": {\"status\": 0, \"format\": \"audio/L16;rate=16000\",\"audio\": str(base64.b64encode(buf), 'utf-8'), \"encoding\": \"raw\"}} \n                    d = json.dumps(d)\n                    ws.send(d)\n                    status = STATUS_CONTINUE_FRAME\n                # 中间帧处理\n                elif status == STATUS_CONTINUE_FRAME:\n                    d = {\"data\": {\"status\": 1, \"format\": \"audio/L16;rate=16000\",\n                                  \"audio\": str(base64.b64encode(buf), 'utf-8'),\n                                  \"encoding\": \"raw\"}}\n                    ws.send(json.dumps(d))\n                # 最后一帧处理\n                elif status == STATUS_LAST_FRAME:\n                    d = {\"data\": {\"status\": 2, \"format\": \"audio/L16;rate=16000\",\n                                  \"audio\": str(base64.b64encode(buf), 'utf-8'),\n    
                              \"encoding\": \"raw\"}}\n                    ws.send(json.dumps(d))\n                    time.sleep(1)\n                    break\n                # 模拟音频采样间隔\n                time.sleep(intervel)\n        ws.close()\n\n    thread.start_new_thread(run, ())\n\n#提供给xunfei_voice调用的函数\ndef xunfei_asr(APPID,APISecret,APIKey,BusinessArgsASR,AudioFile):\n    global whole_dict\n    global wsParam\n    whole_dict = {}\n    wsParam1 = Ws_Param(APPID=APPID, APISecret=APISecret,\n                       APIKey=APIKey,BusinessArgs=BusinessArgsASR,\n                       AudioFile=AudioFile)\n    #wsParam是global变量，给上面on_open函数调用使用的\n    wsParam = wsParam1\n    websocket.enableTrace(False)\n    wsUrl = wsParam.create_url()\n    ws = websocket.WebSocketApp(wsUrl, on_message=on_message, on_error=on_error, on_close=on_close)\n    ws.on_open = on_open\n    ws.run_forever(sslopt={\"cert_reqs\": ssl.CERT_NONE})\n    #把字典的值合并起来做最后识别的输出\n    whole_words = \"\"\n    for i in sorted(whole_dict.keys()):\n        whole_words += whole_dict[i]\n    return whole_words\n\n     \n"
  },
  {
    "path": "voice/xunfei/xunfei_tts.py",
    "content": "# -*- coding:utf-8 -*-\n#\n#  Author: njnuko\n#  Email: njnuko@163.com\n#\n#  这个文档是基于官方的demo来改的，固体官方demo文档请参考官网\n#\n#  语音听写流式 WebAPI 接口调用示例 接口文档（必看）：https://doc.xfyun.cn/rest_api/语音听写（流式版）.html\n#  webapi 听写服务参考帖子（必看）：http://bbs.xfyun.cn/forum.php?mod=viewthread&tid=38947&extra=\n#  语音听写流式WebAPI 服务，热词使用方式：登陆开放平台https://www.xfyun.cn/后，找到控制台--我的应用---语音听写（流式）---服务管理--个性化热词，\n#  设置热词\n#  注意：热词只能在识别的时候会增加热词的识别权重，需要注意的是增加相应词条的识别率，但并不是绝对的，具体效果以您测试为准。\n#  语音听写流式WebAPI 服务，方言试用方法：登陆开放平台https://www.xfyun.cn/后，找到控制台--我的应用---语音听写（流式）---服务管理--识别语种列表\n#  可添加语种或方言，添加后会显示该方言的参数值\n#  错误码链接：https://www.xfyun.cn/document/error-code （code返回错误码时必看）\n# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #\nimport websocket\nimport datetime\nimport hashlib\nimport base64\nimport hmac\nimport json\nfrom urllib.parse import urlencode\nimport time\nimport ssl\nfrom wsgiref.handlers import format_date_time\nfrom datetime import datetime\nfrom time import mktime\nimport _thread as thread\nimport os\n\n\n\nSTATUS_FIRST_FRAME = 0  # 第一帧的标识\nSTATUS_CONTINUE_FRAME = 1  # 中间帧标识\nSTATUS_LAST_FRAME = 2  # 最后一帧的标识\n\n#############\n#这个参数是用来做输出文件路径的\nglobal outfile\n#这个文档是官方文档改的，这个参数是用来做函数调用时用的\nglobal wsParam\n##############\n\n\nclass Ws_Param(object):\n    # 初始化\n    def __init__(self, APPID, APIKey, APISecret,BusinessArgs,Text):\n        self.APPID = APPID\n        self.APIKey = APIKey\n        self.APISecret = APISecret\n        self.BusinessArgs = BusinessArgs\n        self.Text = Text\n\n        # 公共参数(common)\n        self.CommonArgs = {\"app_id\": self.APPID}\n        # 业务参数(business)，更多个性化参数可在官网查看\n        #self.BusinessArgs = {\"aue\": \"raw\", \"auf\": \"audio/L16;rate=16000\", \"vcn\": \"xiaoyan\", \"tte\": \"utf8\"}\n        self.Data = {\"status\": 2, \"text\": str(base64.b64encode(self.Text.encode('utf-8')), \"UTF8\")}\n        #使用小语种须使用以下方式，此处的unicode指的是 utf16小端的编码方式，即\"UTF-16LE\"”\n        #self.Data = {\"status\": 2, \"text\": str(base64.b64encode(self.Text.encode('utf-16')), \"UTF8\")}\n\n    # 生成url\n    def create_url(self):\n        url = 'wss://tts-api.xfyun.cn/v2/tts'\n        # 生成RFC1123格式的时间戳\n        now = datetime.now()\n        date = format_date_time(mktime(now.timetuple()))\n\n        # 拼接字符串\n        signature_origin = \"host: \" + \"ws-api.xfyun.cn\" + \"\\n\"\n        signature_origin += \"date: \" + date + \"\\n\"\n        signature_origin += \"GET \" + \"/v2/tts \" + \"HTTP/1.1\"\n        # 进行hmac-sha256进行加密\n        signature_sha = hmac.new(self.APISecret.encode('utf-8'), signature_origin.encode('utf-8'),\n                                 digestmod=hashlib.sha256).digest()\n        signature_sha = base64.b64encode(signature_sha).decode(encoding='utf-8')\n\n        authorization_origin = \"api_key=\\\"%s\\\", algorithm=\\\"%s\\\", headers=\\\"%s\\\", signature=\\\"%s\\\"\" % (\n            self.APIKey, \"hmac-sha256\", \"host date request-line\", signature_sha)\n        authorization = base64.b64encode(authorization_origin.encode('utf-8')).decode(encoding='utf-8')\n        # 将请求的鉴权参数组合为字典\n        v = {\n            \"authorization\": authorization,\n            \"date\": date,\n            \"host\": \"ws-api.xfyun.cn\"\n        }\n        # 拼接鉴权参数，生成url\n        url = url + '?' 
+ urlencode(v)\n        # print(\"date: \",date)\n        # print(\"v: \",v)\n        # 此处打印出建立连接时候的url,参考本demo的时候可取消上方打印的注释，比对相同参数时生成的url与自己代码生成的url是否一致\n        # print('websocket url :', url)\n        return url\n\ndef on_message(ws, message):\n    #输出文件\n    global outfile\n    try:\n        message =json.loads(message)\n        code = message[\"code\"]\n        sid = message[\"sid\"]\n        audio = message[\"data\"][\"audio\"]\n        audio = base64.b64decode(audio)\n        status = message[\"data\"][\"status\"]\n        if status == 2:\n            print(\"ws is closed\")\n            ws.close()\n        if code != 0:\n            errMsg = message[\"message\"]\n            print(\"sid:%s call error:%s code is:%s\" % (sid, errMsg, code))\n        else:\n\n            with open(outfile, 'ab') as f:\n                f.write(audio)\n\n    except Exception as e:\n        print(\"receive msg,but parse exception:\", e)\n\n\n\n# 收到websocket连接建立的处理\ndef on_open(ws):\n    global outfile\n    global wsParam\n    def run(*args):\n        d = {\"common\": wsParam.CommonArgs,\n             \"business\": wsParam.BusinessArgs,\n             \"data\": wsParam.Data,\n             }\n        d = json.dumps(d)\n        # print(\"------>开始发送文本数据\")\n        ws.send(d)\n        if os.path.exists(outfile):\n            os.remove(outfile)\n\n    thread.start_new_thread(run, ())\n\n# 收到websocket错误的处理\ndef on_error(ws, error):\n    print(\"### error:\", error)\n\n\n\n# 收到websocket关闭的处理\ndef on_close(ws):\n    print(\"### closed ###\")\n\n\n\ndef xunfei_tts(APPID, APIKey, APISecret,BusinessArgsTTS, Text, OutFile):\n    global outfile\n    global wsParam \n    outfile = OutFile\n    wsParam1 = Ws_Param(APPID,APIKey,APISecret,BusinessArgsTTS,Text)\n    wsParam = wsParam1\n    websocket.enableTrace(False)\n    wsUrl = wsParam.create_url()\n    ws = websocket.WebSocketApp(wsUrl, on_message=on_message, on_error=on_error, on_close=on_close)\n    ws.on_open = on_open\n    ws.run_forever(sslopt={\"cert_reqs\": ssl.CERT_NONE})\n    return outfile\n     \n"
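\n\n# Minimal local test sketch. The credentials are placeholders from the\n# xfyun.cn console; with \"aue\": \"lame\" the service streams back MP3 frames,\n# so the output file is playable as demo.mp3.\nif __name__ == \"__main__\":\n    demo_args = {\"aue\": \"lame\", \"sfl\": 1, \"auf\": \"audio/L16;rate=16000\", \"vcn\": \"xiaoyan\", \"tte\": \"utf8\"}\n    print(xunfei_tts(\"your-appid\", \"your-apikey\", \"your-apisecret\", demo_args, \"你好，世界\", \"demo.mp3\"))\n"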
  },
  {
    "path": "voice/xunfei/xunfei_voice.py",
    "content": "#####################################################################\n#    xunfei voice service\n#     Auth: njnuko\n#     Email: njnuko@163.com\n#\n#    要使用本模块, 首先到 xfyun.cn 注册一个开发者账号,\n#    之后创建一个新应用, 然后在应用管理的语音识别或者语音合同右边可以查看APPID API Key 和 Secret Key\n#    然后在 config.json 中填入这三个值\n#\n#    配置说明：\n# {\n#  \"APPID\":\"xxx71xxx\",\n#  \"APIKey\":\"xxxx69058exxxxxx\",  #讯飞xfyun.cn控制台语音合成或者听写界面的APIKey\n#  \"APISecret\":\"xxxx697f0xxxxxx\",  #讯飞xfyun.cn控制台语音合成或者听写界面的APIKey\n#  \"BusinessArgsTTS\":{\"aue\": \"lame\", \"sfl\": 1, \"auf\": \"audio/L16;rate=16000\", \"vcn\": \"xiaoyan\", \"tte\": \"utf8\"}, #语音合成的参数，具体可以参考xfyun.cn的文档\n#  \"BusinessArgsASR\":{\"domain\": \"iat\", \"language\": \"zh_cn\", \"accent\": \"mandarin\", \"vad_eos\":10000, \"dwa\": \"wpgs\"}  #语音听写的参数，具体可以参考xfyun.cn的文档\n# }\n#####################################################################\n\nimport json\nimport os\nimport time\n\nfrom bridge.reply import Reply, ReplyType\nfrom common.log import logger\nfrom common.tmp_dir import TmpDir\nfrom config import conf\nfrom voice.voice import Voice\nfrom .xunfei_asr import xunfei_asr\nfrom .xunfei_tts import xunfei_tts\nimport shutil\n\ntry:\n    from voice.audio_convert import any_to_mp3\n    from pydub import AudioSegment\n    _audio_available = True\nexcept ImportError as e:\n    logger.debug(\"import audio libraries failed: {}\".format(e))\n    _audio_available = False\n\n\nclass XunfeiVoice(Voice):\n    def __init__(self):\n        try:\n            curdir = os.path.dirname(__file__)\n            config_path = os.path.join(curdir, \"config.json\")\n            conf = None\n            with open(config_path, \"r\") as fr:\n                conf = json.load(fr)\n            print(conf)\n            self.APPID = str(conf.get(\"APPID\"))\n            self.APIKey = str(conf.get(\"APIKey\"))\n            self.APISecret = str(conf.get(\"APISecret\"))\n            self.BusinessArgsTTS = conf.get(\"BusinessArgsTTS\")\n            self.BusinessArgsASR= conf.get(\"BusinessArgsASR\")\n\n        except Exception as e:\n            logger.warn(\"XunfeiVoice init failed: %s, ignore \" % e)\n\n    def voiceToText(self, voice_file):\n        # 识别本地文件\n        try:\n            logger.debug(\"[Xunfei] voice file name={}\".format(voice_file))\n            #print(\"voice_file===========\",voice_file)\n            #print(\"voice_file_type===========\",type(voice_file))\n            #mp3_name, file_extension = os.path.splitext(voice_file)\n            #mp3_file = mp3_name + \".mp3\"\n            #pcm_data=get_pcm_from_wav(voice_file)\n            #mp3_name, file_extension = os.path.splitext(voice_file)\n            #AudioSegment.from_wav(voice_file).export(mp3_file, format=\"mp3\")\n            #shutil.copy2(voice_file, 'tmp/test1.wav')\n            #shutil.copy2(mp3_file, 'tmp/test1.mp3')\n            #print(\"voice and mp3 file\",voice_file,mp3_file)\n            text = xunfei_asr(self.APPID,self.APISecret,self.APIKey,self.BusinessArgsASR,voice_file)\n            logger.info(\"讯飞语音识别到了: {}\".format(text))\n            reply = Reply(ReplyType.TEXT, text)\n        except Exception as e:\n            logger.warn(\"XunfeiVoice init failed: %s, ignore \" % e)\n            reply = Reply(ReplyType.ERROR, \"讯飞语音识别出错了；{0}\")\n        return reply\n\n    def textToVoice(self, text):\n        try:\n            # Avoid the same filename under multithreading\n            fileName = TmpDir().path() + \"reply-\" + str(int(time.time())) + \"-\" + str(hash(text) & 0x7FFFFFFF) + \".mp3\"\n    
        return_file = xunfei_tts(self.APPID,self.APIKey,self.APISecret,self.BusinessArgsTTS,text,fileName)\n            logger.info(\"[Xunfei] textToVoice text={} voice file name={}\".format(text, fileName))\n            reply = Reply(ReplyType.VOICE, fileName)\n        except Exception as e:\n            logger.error(\"[Xunfei] textToVoice error={}\".format(fileName))\n            reply = Reply(ReplyType.ERROR, \"抱歉，讯飞语音合成失败\")\n        return reply\n"
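\n\n# To route voice through this service in the main program, select the engine\n# in the project root config.json (keys as assumed from config.py; verify\n# against your copy):\n#\n#   \"voice_to_text\": \"xunfei\",\n#   \"text_to_voice\": \"xunfei\"\n#\n# Minimal direct test sketch, assuming voice/xunfei/config.json is filled in,\n# test.wav is a 16kHz mono WAV file, and the project root is on sys.path:\n#\n# if __name__ == \"__main__\":\n#     xv = XunfeiVoice()\n#     print(xv.voiceToText(\"test.wav\"))\n#     print(xv.textToVoice(\"你好，讯飞\"))\n"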
  }
]