Full Code of zhayujie/chatgpt-on-wechat for AI

Repository: zhayujie/chatgpt-on-wechat
Branch: master
Commit: 7d0e1568ac50
Files: 394
Total size: 1.9 MB

Directory structure:
chatgpt-on-wechat/

├── .github/
│   ├── ISSUE_TEMPLATE/
│   │   ├── 1.bug.yml
│   │   └── 2.feature.yml
│   └── workflows/
│       ├── deploy-image-arm.yml
│       └── deploy-image.yml
├── .gitignore
├── Dockerfile
├── LICENSE
├── README.md
├── agent/
│   ├── chat/
│   │   ├── __init__.py
│   │   └── service.py
│   ├── memory/
│   │   ├── __init__.py
│   │   ├── chunker.py
│   │   ├── config.py
│   │   ├── conversation_store.py
│   │   ├── embedding.py
│   │   ├── manager.py
│   │   ├── service.py
│   │   ├── storage.py
│   │   └── summarizer.py
│   ├── prompt/
│   │   ├── __init__.py
│   │   ├── builder.py
│   │   └── workspace.py
│   ├── protocol/
│   │   ├── __init__.py
│   │   ├── agent.py
│   │   ├── agent_stream.py
│   │   ├── context.py
│   │   ├── message_utils.py
│   │   ├── models.py
│   │   ├── result.py
│   │   └── task.py
│   ├── skills/
│   │   ├── __init__.py
│   │   ├── config.py
│   │   ├── formatter.py
│   │   ├── frontmatter.py
│   │   ├── loader.py
│   │   ├── manager.py
│   │   ├── service.py
│   │   └── types.py
│   └── tools/
│       ├── __init__.py
│       ├── base_tool.py
│       ├── bash/
│       │   ├── __init__.py
│       │   └── bash.py
│       ├── browser_tool.py
│       ├── edit/
│       │   ├── __init__.py
│       │   └── edit.py
│       ├── env_config/
│       │   ├── __init__.py
│       │   └── env_config.py
│       ├── ls/
│       │   ├── __init__.py
│       │   └── ls.py
│       ├── memory/
│       │   ├── __init__.py
│       │   ├── memory_get.py
│       │   └── memory_search.py
│       ├── read/
│       │   ├── __init__.py
│       │   └── read.py
│       ├── scheduler/
│       │   ├── README.md
│       │   ├── __init__.py
│       │   ├── integration.py
│       │   ├── scheduler_service.py
│       │   ├── scheduler_tool.py
│       │   └── task_store.py
│       ├── send/
│       │   ├── __init__.py
│       │   └── send.py
│       ├── tool_manager.py
│       ├── utils/
│       │   ├── __init__.py
│       │   ├── diff.py
│       │   └── truncate.py
│       ├── vision/
│       │   ├── __init__.py
│       │   └── vision.py
│       ├── web_fetch/
│       │   ├── __init__.py
│       │   └── web_fetch.py
│       ├── web_search/
│       │   ├── __init__.py
│       │   └── web_search.py
│       └── write/
│           ├── __init__.py
│           └── write.py
├── app.py
├── bridge/
│   ├── agent_bridge.py
│   ├── agent_event_handler.py
│   ├── agent_initializer.py
│   ├── bridge.py
│   ├── context.py
│   └── reply.py
├── channel/
│   ├── channel.py
│   ├── channel_factory.py
│   ├── chat_channel.py
│   ├── chat_message.py
│   ├── dingtalk/
│   │   ├── dingtalk_channel.py
│   │   └── dingtalk_message.py
│   ├── feishu/
│   │   ├── README.md
│   │   ├── feishu_channel.py
│   │   └── feishu_message.py
│   ├── file_cache.py
│   ├── qq/
│   │   ├── __init__.py
│   │   ├── qq_channel.py
│   │   └── qq_message.py
│   ├── terminal/
│   │   └── terminal_channel.py
│   ├── web/
│   │   ├── README.md
│   │   ├── chat.html
│   │   ├── static/
│   │   │   ├── css/
│   │   │   │   └── console.css
│   │   │   └── js/
│   │   │       └── console.js
│   │   └── web_channel.py
│   ├── wechatcom/
│   │   ├── README.md
│   │   ├── wechatcomapp_channel.py
│   │   ├── wechatcomapp_client.py
│   │   └── wechatcomapp_message.py
│   ├── wechatmp/
│   │   ├── README.md
│   │   ├── active_reply.py
│   │   ├── common.py
│   │   ├── passive_reply.py
│   │   ├── wechatmp_channel.py
│   │   ├── wechatmp_client.py
│   │   └── wechatmp_message.py
│   └── wecom_bot/
│       ├── __init__.py
│       ├── wecom_bot_channel.py
│       └── wecom_bot_message.py
├── common/
│   ├── cloud_client.py
│   ├── const.py
│   ├── dequeue.py
│   ├── expired_dict.py
│   ├── log.py
│   ├── memory.py
│   ├── package_manager.py
│   ├── singleton.py
│   ├── sorted_dict.py
│   ├── time_check.py
│   ├── tmp_dir.py
│   ├── token_bucket.py
│   └── utils.py
├── config-template.json
├── config.py
├── docker/
│   ├── Dockerfile.latest
│   ├── build.latest.sh
│   ├── docker-compose.yml
│   └── entrypoint.sh
├── docs/
│   ├── agent.md
│   ├── channels/
│   │   ├── dingtalk.mdx
│   │   ├── feishu.mdx
│   │   ├── qq.mdx
│   │   ├── web.mdx
│   │   ├── wechatmp.mdx
│   │   ├── wecom-bot.mdx
│   │   └── wecom.mdx
│   ├── docs.json
│   ├── en/
│   │   ├── README.md
│   │   ├── channels/
│   │   │   ├── dingtalk.mdx
│   │   │   ├── feishu.mdx
│   │   │   ├── qq.mdx
│   │   │   ├── web.mdx
│   │   │   ├── wechatmp.mdx
│   │   │   ├── wecom-bot.mdx
│   │   │   └── wecom.mdx
│   │   ├── guide/
│   │   │   ├── manual-install.mdx
│   │   │   └── quick-start.mdx
│   │   ├── intro/
│   │   │   ├── architecture.mdx
│   │   │   ├── features.mdx
│   │   │   └── index.mdx
│   │   ├── memory.mdx
│   │   ├── models/
│   │   │   ├── claude.mdx
│   │   │   ├── coding-plan.mdx
│   │   │   ├── deepseek.mdx
│   │   │   ├── doubao.mdx
│   │   │   ├── gemini.mdx
│   │   │   ├── glm.mdx
│   │   │   ├── index.mdx
│   │   │   ├── kimi.mdx
│   │   │   ├── linkai.mdx
│   │   │   ├── minimax.mdx
│   │   │   ├── openai.mdx
│   │   │   └── qwen.mdx
│   │   ├── releases/
│   │   │   ├── overview.mdx
│   │   │   ├── v2.0.0.mdx
│   │   │   ├── v2.0.1.mdx
│   │   │   └── v2.0.2.mdx
│   │   ├── skills/
│   │   │   ├── image-vision.mdx
│   │   │   ├── index.mdx
│   │   │   ├── linkai-agent.mdx
│   │   │   ├── skill-creator.mdx
│   │   │   └── web-fetch.mdx
│   │   └── tools/
│   │       ├── bash.mdx
│   │       ├── browser.mdx
│   │       ├── edit.mdx
│   │       ├── env-config.mdx
│   │       ├── index.mdx
│   │       ├── ls.mdx
│   │       ├── memory.mdx
│   │       ├── read.mdx
│   │       ├── scheduler.mdx
│   │       ├── send.mdx
│   │       ├── web-search.mdx
│   │       └── write.mdx
│   ├── guide/
│   │   ├── manual-install.mdx
│   │   ├── quick-start.mdx
│   │   └── upgrade.mdx
│   ├── intro/
│   │   ├── architecture.mdx
│   │   ├── features.mdx
│   │   └── index.mdx
│   ├── ja/
│   │   ├── README.md
│   │   ├── channels/
│   │   │   ├── dingtalk.mdx
│   │   │   ├── feishu.mdx
│   │   │   ├── qq.mdx
│   │   │   ├── web.mdx
│   │   │   ├── wechatmp.mdx
│   │   │   ├── wecom-bot.mdx
│   │   │   └── wecom.mdx
│   │   ├── guide/
│   │   │   ├── manual-install.mdx
│   │   │   ├── quick-start.mdx
│   │   │   └── upgrade.mdx
│   │   ├── intro/
│   │   │   ├── architecture.mdx
│   │   │   ├── features.mdx
│   │   │   └── index.mdx
│   │   ├── memory.mdx
│   │   ├── models/
│   │   │   ├── claude.mdx
│   │   │   ├── coding-plan.mdx
│   │   │   ├── deepseek.mdx
│   │   │   ├── doubao.mdx
│   │   │   ├── gemini.mdx
│   │   │   ├── glm.mdx
│   │   │   ├── index.mdx
│   │   │   ├── kimi.mdx
│   │   │   ├── linkai.mdx
│   │   │   ├── minimax.mdx
│   │   │   ├── openai.mdx
│   │   │   └── qwen.mdx
│   │   ├── releases/
│   │   │   ├── overview.mdx
│   │   │   ├── v2.0.0.mdx
│   │   │   ├── v2.0.1.mdx
│   │   │   ├── v2.0.2.mdx
│   │   │   └── v2.0.3.mdx
│   │   ├── skills/
│   │   │   ├── image-vision.mdx
│   │   │   ├── index.mdx
│   │   │   ├── linkai-agent.mdx
│   │   │   ├── skill-creator.mdx
│   │   │   └── web-fetch.mdx
│   │   └── tools/
│   │       ├── bash.mdx
│   │       ├── browser.mdx
│   │       ├── edit.mdx
│   │       ├── env-config.mdx
│   │       ├── index.mdx
│   │       ├── ls.mdx
│   │       ├── memory.mdx
│   │       ├── read.mdx
│   │       ├── scheduler.mdx
│   │       ├── send.mdx
│   │       ├── web-search.mdx
│   │       └── write.mdx
│   ├── memory.mdx
│   ├── models/
│   │   ├── claude.mdx
│   │   ├── coding-plan.mdx
│   │   ├── deepseek.mdx
│   │   ├── doubao.mdx
│   │   ├── gemini.mdx
│   │   ├── glm.mdx
│   │   ├── index.mdx
│   │   ├── kimi.mdx
│   │   ├── linkai.mdx
│   │   ├── minimax.mdx
│   │   ├── openai.mdx
│   │   └── qwen.mdx
│   ├── releases/
│   │   ├── overview.mdx
│   │   ├── v2.0.0.mdx
│   │   ├── v2.0.1.mdx
│   │   ├── v2.0.2.mdx
│   │   └── v2.0.3.mdx
│   ├── skills/
│   │   ├── image-vision.mdx
│   │   ├── index.mdx
│   │   ├── linkai-agent.mdx
│   │   ├── skill-creator.mdx
│   │   └── web-fetch.mdx
│   └── tools/
│       ├── bash.mdx
│       ├── browser.mdx
│       ├── edit.mdx
│       ├── env-config.mdx
│       ├── index.mdx
│       ├── ls.mdx
│       ├── memory.mdx
│       ├── read.mdx
│       ├── scheduler.mdx
│       ├── send.mdx
│       ├── web-search.mdx
│       └── write.mdx
├── models/
│   ├── ali/
│   │   ├── ali_qwen_bot.py
│   │   └── ali_qwen_session.py
│   ├── baidu/
│   │   ├── baidu_unit_bot.py
│   │   ├── baidu_wenxin.py
│   │   └── baidu_wenxin_session.py
│   ├── bot.py
│   ├── bot_factory.py
│   ├── chatgpt/
│   │   ├── chat_gpt_bot.py
│   │   └── chat_gpt_session.py
│   ├── claudeapi/
│   │   └── claude_api_bot.py
│   ├── dashscope/
│   │   ├── dashscope_bot.py
│   │   └── dashscope_session.py
│   ├── doubao/
│   │   ├── __init__.py
│   │   ├── doubao_bot.py
│   │   └── doubao_session.py
│   ├── gemini/
│   │   └── google_gemini_bot.py
│   ├── linkai/
│   │   └── link_ai_bot.py
│   ├── minimax/
│   │   ├── minimax_bot.py
│   │   └── minimax_session.py
│   ├── modelscope/
│   │   ├── modelscope_bot.py
│   │   └── modelscope_session.py
│   ├── moonshot/
│   │   ├── moonshot_bot.py
│   │   └── moonshot_session.py
│   ├── openai/
│   │   ├── open_ai_bot.py
│   │   ├── open_ai_image.py
│   │   ├── open_ai_session.py
│   │   └── openai_compat.py
│   ├── openai_compatible_bot.py
│   ├── session_manager.py
│   ├── xunfei/
│   │   └── xunfei_spark_bot.py
│   └── zhipuai/
│       ├── zhipu_ai_image.py
│       ├── zhipu_ai_session.py
│       └── zhipuai_bot.py
├── plugins/
│   ├── agent/
│   │   ├── README.md
│   │   ├── __init__.py
│   │   ├── agent.py
│   │   └── config-template.yaml
│   ├── banwords/
│   │   ├── .gitignore
│   │   ├── README.md
│   │   ├── __init__.py
│   │   ├── banwords.py
│   │   ├── banwords.txt.template
│   │   ├── config.json.template
│   │   └── lib/
│   │       └── WordsSearch.py
│   ├── dungeon/
│   │   ├── README.md
│   │   ├── __init__.py
│   │   └── dungeon.py
│   ├── finish/
│   │   ├── __init__.py
│   │   └── finish.py
│   ├── godcmd/
│   │   ├── README.md
│   │   ├── __init__.py
│   │   ├── config.json.template
│   │   └── godcmd.py
│   ├── hello/
│   │   ├── README.md
│   │   ├── __init__.py
│   │   ├── config.json.template
│   │   └── hello.py
│   ├── keyword/
│   │   ├── README.md
│   │   ├── __init__.py
│   │   ├── config.json.template
│   │   └── keyword.py
│   ├── linkai/
│   │   ├── README.md
│   │   ├── __init__.py
│   │   ├── config.json.template
│   │   ├── linkai.py
│   │   ├── midjourney.py
│   │   ├── summary.py
│   │   └── utils.py
│   ├── role/
│   │   ├── README.md
│   │   ├── __init__.py
│   │   ├── role.py
│   │   └── roles.json
│   └── tool/
│       ├── README.md
│       ├── config.json.template
│       └── tool.py
├── requirements-optional.txt
├── requirements.txt
├── run.sh
├── scripts/
│   ├── shutdown.sh
│   ├── start.sh
│   └── tout.sh
├── skills/
│   ├── README.md
│   ├── linkai-agent/
│   │   ├── README.md
│   │   ├── SKILL.md
│   │   └── config.json.template
│   └── skill-creator/
│       ├── SKILL.md
│       └── scripts/
│           ├── init_skill.py
│           ├── package_skill.py
│           └── quick_validate.py
├── translate/
│   ├── baidu/
│   │   └── baidu_translate.py
│   ├── factory.py
│   └── translator.py
└── voice/
    ├── ali/
    │   ├── ali_api.py
    │   ├── ali_voice.py
    │   └── config.json.template
    ├── audio_convert.py
    ├── azure/
    │   ├── azure_voice.py
    │   └── config.json.template
    ├── baidu/
    │   ├── README.md
    │   ├── baidu_voice.py
    │   └── config.json.template
    ├── edge/
    │   └── edge_voice.py
    ├── elevent/
    │   └── elevent_voice.py
    ├── factory.py
    ├── google/
    │   └── google_voice.py
    ├── linkai/
    │   └── linkai_voice.py
    ├── openai/
    │   └── openai_voice.py
    ├── pytts/
    │   └── pytts_voice.py
    ├── tencent/
    │   ├── config.json.template
    │   └── tencent_voice.py
    ├── voice.py
    └── xunfei/
        ├── config.json.template
        ├── xunfei_asr.py
        ├── xunfei_tts.py
        └── xunfei_voice.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .github/ISSUE_TEMPLATE/1.bug.yml
================================================
name: Bug report 🐛
description: 项目运行中遇到的Bug或问题。
labels: ['status: needs check']
body:
  - type: markdown
    attributes:
      value: |
        ### ⚠️ 前置确认
        1. 网络能够访问openai接口
        2. python 已安装:版本在 3.7 ~ 3.10 之间
        3. `git pull` 拉取最新代码
        4. 执行`pip3 install -r requirements.txt`,检查依赖是否满足
        5. 拓展功能请执行`pip3 install -r requirements-optional.txt`,检查依赖是否满足
        6. [FAQS](https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs) 中无类似问题
  - type: checkboxes
    attributes:
      label: 前置确认
      options:
        - label: 我确认我运行的是最新版本的代码,并且安装了所需的依赖,在[FAQS](https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs)中也未找到类似问题。
          required: true
  - type: checkboxes
    attributes:
      label: ⚠️ 搜索issues中是否已存在类似问题
      description: >
        请在 [历史issue](https://github.com/zhayujie/chatgpt-on-wechat/issues) 中清空输入框,搜索你的问题
        或相关日志的关键词来查找是否存在类似问题。
      options:
        - label: 我已经搜索过issues和discussions,没有跟我遇到的问题相关的issue
          required: true
  - type: markdown
    attributes:
      value: |
        请在上方的`title`中填写你所遇到问题的简略总结,这将帮助其他人更好地找到相似问题,谢谢❤️。
  - type: dropdown
    attributes:
      label: 操作系统类型?
      description: >
        请选择你运行程序的操作系统类型。
      options:
        - Windows
        - Linux
        - MacOS
        - Docker
        - Railway
        - Windows Subsystem for Linux (WSL)
        - Other (请在问题中说明)
    validations:
      required: true
  - type: dropdown
    attributes:
      label: 运行的python版本是?
      description: |
        请选择你运行程序的`python`版本。
        注意:在`python 3.7`中,有部分可选依赖无法安装。
        经过长时间的观察,我们认为`python 3.8`是兼容性最好的版本。
        `python 3.7`~`python 3.10`以外版本的issue,将视情况直接关闭。
      options:
        - python 3.7
        - python 3.8
        - python 3.9
        - python 3.10
        - other
    validations:
      required: true
  - type: dropdown
    attributes:
      label: 使用的chatgpt-on-wechat版本是?
      description: |
        请确保你使用的是 [releases](https://github.com/zhayujie/chatgpt-on-wechat/releases) 中的最新版本。
        如果你使用git, 请使用`git branch`命令来查看分支。
      options:
        - Latest Release
        - Master (branch)
    validations:
      required: true
  - type: dropdown
    attributes:
      label: 运行的`channel`类型是?
      description: |
        请确保你正确配置了该`channel`所需的配置项,所有可选的配置项都写在了[该文件中](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/config.py),请将所需配置项填写在根目录下的`config.json`文件中。
      options:
        - wechatmp(公众号, 订阅号)
        - wechatmp_service(公众号, 服务号)
        - terminal
        - other
    validations:
      required: true
  - type: textarea
    attributes:
      label: 复现步骤 🕹
      description: |
        **⚠️ 不能复现将会关闭issue.**
  - type: textarea
    attributes:
      label: 问题描述 😯
      description: 详细描述出现的问题,或提供有关截图。
  - type: textarea
    attributes:
      label: 终端日志 📒
      description: |
        在此处粘贴终端日志,可在主目录下`run.log`文件中找到,这会帮助我们更好地分析问题,注意隐去你的API key。
        如果在配置文件中加入`"debug": true`,打印出的日志会更有帮助。

        <details>
        <summary><i>示例</i></summary>
        ```log
        [DEBUG][2023-04-16 00:23:22][plugin_manager.py:157] - Plugin SUMMARY triggered by event Event.ON_HANDLE_CONTEXT
        [DEBUG][2023-04-16 00:23:22][main.py:221] - [Summary] on_handle_context. content: $总结前100条消息
        [DEBUG][2023-04-16 00:23:24][main.py:240] - [Summary] limit: 100, duration: -1 seconds
        [ERROR][2023-04-16 00:23:24][chat_channel.py:244] - Worker return exception: name 'start_date' is not defined
        Traceback (most recent call last):
          File "C:\ProgramData\Anaconda3\lib\concurrent\futures\thread.py", line 57, in run
            result = self.fn(*self.args, **self.kwargs)
          File "D:\project\chatgpt-on-wechat\channel\chat_channel.py", line 132, in _handle
            reply = self._generate_reply(context)
          File "D:\project\chatgpt-on-wechat\channel\chat_channel.py", line 142, in _generate_reply
            e_context = PluginManager().emit_event(EventContext(Event.ON_HANDLE_CONTEXT, {
          File "D:\project\chatgpt-on-wechat\plugins\plugin_manager.py", line 159, in emit_event
            instance.handlers[e_context.event](e_context, *args, **kwargs)
          File "D:\project\chatgpt-on-wechat\plugins\summary\main.py", line 255, in on_handle_context
            records = self._get_records(session_id, start_time, limit)
          File "D:\project\chatgpt-on-wechat\plugins\summary\main.py", line 96, in _get_records
            c.execute("SELECT * FROM chat_records WHERE sessionid=? and timestamp>? ORDER BY timestamp DESC LIMIT ?", (session_id, start_date, limit))
        NameError: name 'start_date' is not defined
        [INFO][2023-04-16 00:23:36][app.py:14] - signal 2 received, exiting...
        ```
        </details>
      value: |
        ```log
        <此处粘贴终端日志>
        ```

================================================
FILE: .github/ISSUE_TEMPLATE/2.feature.yml
================================================
name: Feature request 🚀
description: 提出你对项目的新想法或建议。
labels: ['status: needs check']
body:
  - type: markdown
    attributes:
      value: |
        请在上方的`title`中填写简略总结,谢谢❤️。
  - type: checkboxes
    attributes:
      label: ⚠️ 搜索是否存在类似issue
      description: >
        请在 [历史issue](https://github.com/zhayujie/chatgpt-on-wechat/issues) 中清空输入框,搜索关键词查找是否存在相似issue。
      options:
        - label: 我已经搜索过issues和discussions,没有发现相似issue
          required: true
  - type: textarea
    attributes:
      label: 总结
      description: 描述feature的功能。
  - type: textarea
    attributes:
      label: 举例
      description: 提供聊天示例,草图或相关网址。
  - type: textarea
    attributes:
      label: 动机
      description: 描述你提出该feature的动机,比如没有这项feature对你的使用造成了怎样的影响。 请提供更详细的场景描述,这可能会帮助我们发现并提出更好的解决方案。

================================================
FILE: .github/workflows/deploy-image-arm.yml
================================================
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.

# GitHub recommends pinning actions to a commit SHA.
# To get a newer version, you will need to update the SHA.
# You can also reference a tag or branch, but the action may change without warning.

name: Create and publish a Docker image

on:
  push:
    branches: ['master']
  create:
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build-and-push-image:
    if: github.repository == 'zhayujie/chatgpt-on-wechat'
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1

      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1

      - name: Available platforms
        run: echo ${{ steps.buildx.outputs.platforms }}

      - name: Log in to the Container registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: |
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          file: ./docker/Dockerfile.latest
          platforms: linux/arm64
          tags: ${{ steps.meta.outputs.tags }}-arm64
          labels: ${{ steps.meta.outputs.labels }}

      - uses: actions/delete-package-versions@v4
        with:
          package-name: 'chatgpt-on-wechat'
          package-type: 'container'
          min-versions-to-keep: 10
          delete-only-untagged-versions: 'true'
          token: ${{ secrets.GITHUB_TOKEN }}

================================================
FILE: .github/workflows/deploy-image.yml
================================================
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.

# GitHub recommends pinning actions to a commit SHA.
# To get a newer version, you will need to update the SHA.
# You can also reference a tag or branch, but the action may change without warning.

name: Create and publish a Docker image

on:
  push:
    branches: ['master']
  create:
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build-and-push-image:
    if: github.repository == 'zhayujie/chatgpt-on-wechat'
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Log in to the Container registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: |
            ${{ env.IMAGE_NAME }}
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          file: ./docker/Dockerfile.latest
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}

      - uses: actions/delete-package-versions@v4
        with:
          package-name: 'chatgpt-on-wechat'
          package-type: 'container'
          min-versions-to-keep: 10
          delete-only-untagged-versions: 'true'
          token: ${{ secrets.GITHUB_TOKEN }}

================================================
FILE: .gitignore
================================================
.DS_Store
.idea
.vscode
.venv
.vs
__pycache__/
venv*
*.pyc
python
config.json
QR.png
nohup.out
tmp
plugins.json
*.log
logs/
workspace
config.yaml
user_datas.pkl
chatgpt_tool_hub/
plugins/**/
!plugins/bdunit
!plugins/dungeon
!plugins/finish
!plugins/godcmd
!plugins/tool
!plugins/banwords
!plugins/banwords/**/
plugins/banwords/__pycache__
plugins/banwords/lib/__pycache__
!plugins/hello
!plugins/role
!plugins/keyword
!plugins/linkai
!plugins/agent
client_config.json
ref/
.cursor/
local/


================================================
FILE: Dockerfile
================================================
FROM ghcr.io/zhayujie/chatgpt-on-wechat:latest

ENTRYPOINT ["/entrypoint.sh"]

================================================
FILE: LICENSE
================================================
Copyright (c) 2022 zhayujie

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

================================================
FILE: README.md
================================================
<p align="center"><img src= "https://github.com/user-attachments/assets/eca9a9ec-8534-4615-9e0f-96c5ac1d10a3" alt="Chatgpt-on-Wechat" width="550" /></p>

<p align="center">
  <a href="https://github.com/zhayujie/chatgpt-on-wechat/releases/latest"><img src="https://img.shields.io/github/v/release/zhayujie/chatgpt-on-wechat" alt="Latest release"></a>
  <a href="https://github.com/zhayujie/chatgpt-on-wechat/blob/master/LICENSE"><img src="https://img.shields.io/github/license/zhayujie/chatgpt-on-wechat" alt="License: MIT"></a>
  <a href="https://github.com/zhayujie/chatgpt-on-wechat"><img src="https://img.shields.io/github/stars/zhayujie/chatgpt-on-wechat?style=flat-square" alt="Stars"></a> <br/>
  [中文] | [<a href="docs/en/README.md">English</a>] | [<a href="docs/ja/README.md">日本語</a>]
</p>

**CowAgent** 是基于大模型的超级AI助理,能够主动思考和任务规划、操作计算机和外部资源、创造和执行Skills、拥有长期记忆并不断成长。CowAgent 支持灵活切换多种模型,能处理文本、语音、图片、文件等多模态消息,可接入网页、飞书、钉钉、企微智能机器人、QQ、企微自建应用、微信公众号中使用,7*24小时运行于你的个人电脑或服务器中。

<p align="center">
  <a href="https://cowagent.ai/">🌐 官网</a> &nbsp;·&nbsp;
  <a href="https://docs.cowagent.ai/">📖 文档中心</a> &nbsp;·&nbsp;
  <a href="https://docs.cowagent.ai/guide/quick-start">🚀 快速开始</a> &nbsp;·&nbsp;
  <a href="https://link-ai.tech/cowagent/create">☁️ 在线体验</a>
</p>



# 简介

> 该项目既是一个可以开箱即用的超级AI助理,也是一个支持高扩展的Agent框架,可以通过为项目扩展大模型接口、接入渠道、内置工具、Skills系统来灵活实现各种定制需求。核心能力如下:

-  ✅  **复杂任务规划**:能够理解复杂任务并自主规划执行,持续思考和调用工具直到完成目标,支持通过工具操作访问文件、终端、浏览器、定时任务等系统资源
-  ✅  **长期记忆:** 自动将对话记忆持久化至本地文件和数据库中,包括全局记忆和天级记忆,支持关键词及向量检索
-  ✅  **技能系统:** 实现了Skills创建和运行的引擎,内置多种技能,并支持通过自然语言对话完成自定义Skills开发
-  ✅  **多模态消息:** 支持对文本、图片、语音、文件等多类型消息进行解析、处理、生成、发送等操作
-  ✅  **多模型接入:** 支持OpenAI, Claude, Gemini, DeepSeek, MiniMax、GLM、Qwen、Kimi、Doubao等国内外主流模型厂商
-  ✅  **多端部署:** 支持运行在本地计算机或服务器,可集成到飞书、钉钉、企业微信、QQ、微信公众号、网页中使用

## 声明

1. 本项目遵循 [MIT开源协议](/LICENSE),主要用于技术研究和学习,使用本项目时需遵守所在地法律法规、相关政策以及企业章程,禁止用于任何违法或侵犯他人权益的行为。任何个人、团队和企业,无论以何种方式使用该项目、对何对象提供服务,所产生的一切后果,本项目均不承担任何责任。
2. 成本与安全:Agent模式下Token使用量高于普通对话模式,请根据效果及成本综合选择模型。Agent具有访问所在操作系统的能力,请谨慎选择项目部署环境。同时项目也会持续升级安全机制、并降低模型消耗成本。
3. CowAgent项目专注于开源技术开发,不会参与、授权或发行任何加密货币。

## 演示

- 使用说明(Agent模式):[CowAgent介绍](https://docs.cowagent.ai/intro/features)

- 免部署在线体验:[CowAgent](https://link-ai.tech/cowagent/create)

- DEMO视频(对话模式):https://cdn.link-ai.tech/doc/cow_demo.mp4

## 社区

添加小助手微信加入开源项目交流群:

<img width="140" src="https://img-1317903499.cos.ap-guangzhou.myqcloud.com/docs/open-community.png">

<br/>

# 企业服务

<a href="https://link-ai.tech" target="_blank"><img width="650" src="https://cdn.link-ai.tech/image/link-ai-intro.jpg"></a>

> [LinkAI](https://link-ai.tech/) 是面向企业和个人的一站式AI智能体平台,聚合多模态大模型、知识库、技能、工作流等能力,支持一键接入主流平台并管理,支持SaaS、私有化部署等多种模式,可免部署在线运行[CowAgent助理](https://link-ai.tech/cowagent/create)。
>
> LinkAI 目前已在智能客服、私域运营、企业效率助手等场景积累了丰富的AI解决方案,在消费、健康、文教、科技制造等各行业沉淀了大模型落地应用的最佳实践,致力于帮助更多企业和开发者拥抱 AI 生产力。

**产品咨询和企业服务** 可联系产品客服:

<img width="150" src="https://cdn.link-ai.tech/portal/linkai-customer-service.png">

<br/>

# 🏷 更新日志

>**2026.03.18:** [2.0.3版本](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/2.0.3),新增企微智能机器人和 QQ 通道、支持Coding Plan、新增多个模型、Web端文件处理、记忆系统升级。

>**2026.02.27:** [2.0.2版本](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/2.0.2),Web 控制台全面升级(流式对话、模型/技能/记忆/通道/定时任务/日志管理)、支持多通道同时运行、会话持久化存储、新增多个模型。

>**2026.02.13:** [2.0.1版本](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/2.0.1),内置 Web Search 工具、智能上下文裁剪策略、运行时信息动态更新、Windows 兼容性适配,修复定时任务记忆丢失、飞书连接等多项问题。

>**2026.02.03:** [2.0.0版本](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/2.0.0),正式升级为超级Agent助理,支持多轮任务决策、具备长期记忆、实现多种系统工具、支持Skills框架,新增多种模型并优化了接入渠道。

>**2025.05.23:** [1.7.6版本](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/1.7.6) 优化web网页channel、新增 [AgentMesh](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/plugins/agent/README.md)多智能体插件、百度语音合成优化、企微应用`access_token`获取优化、支持`claude-4-sonnet`和`claude-4-opus`模型

>**2025.04.11:** [1.7.5版本](https://github.com/zhayujie/chatgpt-on-wechat/releases/tag/1.7.5) 新增支持 [wechatferry](https://github.com/zhayujie/chatgpt-on-wechat/pull/2562) 协议、新增 deepseek 模型、新增支持腾讯云语音能力、新增支持 ModelScope 和 Gitee-AI API接口

更多更新历史请查看: [更新日志](https://docs.cowagent.ai/releases)

<br/>

# 🚀 快速开始

项目提供了一键安装、配置、启动、管理程序的脚本,推荐使用脚本快速运行,也可以根据下文中的详细指引一步步安装运行。

在终端执行以下命令:

```bash
bash <(curl -fsSL https://cdn.link-ai.tech/code/cow/run.sh)
```

脚本使用说明:[一键运行脚本](https://docs.cowagent.ai/guide/quick-start)


## 一、准备

### 1. 模型API

项目支持国内外主流厂商的模型接口,可选模型及配置说明参考:[模型说明](#模型说明)。

> 注:Agent模式下推荐使用以下模型,可根据效果及成本综合选择:MiniMax-M2.7、glm-5-turbo、kimi-k2.5、qwen3.5-plus、claude-sonnet-4-6、gemini-3.1-pro-preview、gpt-5.4、gpt-5.4-mini

同时支持使用 **LinkAI平台** 接口,支持上述全部模型,并支持知识库、工作流、插件等Agent技能,参考 [接口文档](https://docs.link-ai.tech/platform/api)。

### 2.环境安装

支持 Linux、MacOS、Windows 操作系统,可在个人计算机及服务器上运行,需安装 `Python`,Python版本需在3.7 ~ 3.12 之间,推荐使用3.9版本。

> 注意:Agent模式推荐使用源码运行,若选择Docker部署则无需安装python环境和下载源码,可直接快进到下一节。

**(1) 克隆项目代码:**

```bash
git clone https://github.com/zhayujie/chatgpt-on-wechat
cd chatgpt-on-wechat/
```

若遇到网络问题可使用国内仓库地址:https://gitee.com/zhayujie/chatgpt-on-wechat

**(2) 安装核心依赖 (必选):**

```bash
pip3 install -r requirements.txt
```

**(3) 拓展依赖 (可选,建议安装):**

```bash
pip3 install -r requirements-optional.txt
```
如果某项依赖安装失败可注释掉对应的行后重试。

## 二、配置

配置文件的模板在根目录的`config-template.json`中,需复制该模板创建最终生效的 `config.json` 文件:

```bash
cp config-template.json config.json
```

然后在`config.json`中填入配置,以下是对默认配置的说明,可根据需要进行自定义修改(注意实际使用时请去掉注释,保证JSON格式的规范):

```bash
# config.json 文件内容示例
{
  "channel_type": "web",                                      # 接入渠道类型,默认为web,支持修改为:feishu,dingtalk,wecom_bot,qq,wechatcom_app,wechatmp_service,wechatmp,terminal
  "model": "MiniMax-M2.7",                                    # 模型名称
  "minimax_api_key": "",                                      # MiniMax API Key
  "zhipu_ai_api_key": "",                                     # 智谱GLM API Key
  "moonshot_api_key": "",                                     # Kimi/Moonshot API Key
  "ark_api_key": "",                                          # 豆包(火山方舟) API Key
  "dashscope_api_key": "",                                    # 百炼(通义千问)API Key
  "claude_api_key": "",                                       # Claude API Key
  "claude_api_base": "https://api.anthropic.com/v1",          # Claude API 地址,修改可接入三方代理平台
  "gemini_api_key": "",                                       # Gemini API Key
  "gemini_api_base": "https://generativelanguage.googleapis.com", # Gemini API地址
  "open_ai_api_key": "",                                      # OpenAI API Key
  "open_ai_api_base": "https://api.openai.com/v1",            # OpenAI API 地址
  "linkai_api_key": "",                                       # LinkAI API Key
  "proxy": "",                                                # 代理客户端的ip和端口,国内环境需要开启代理的可填写该项,如 "127.0.0.1:7890"
  "speech_recognition": false,                                # 是否开启语音识别
  "group_speech_recognition": false,                          # 是否开启群组语音识别
  "voice_reply_voice": false,                                 # 是否使用语音回复语音
  "use_linkai": false,                                        # 是否使用LinkAI接口,默认关闭,设置为true后可对接LinkAI平台模型
  "agent": true,                                              # 是否启用Agent模式,启用后拥有多轮工具决策、长期记忆、Skills能力等
  "agent_workspace": "~/cow",                                 # Agent的工作空间路径,用于存储memory、skills、系统设定等
  "agent_max_context_tokens": 40000,                          # Agent模式下最大上下文tokens,超出将自动丢弃最早的上下文
  "agent_max_context_turns": 30,                              # Agent模式下最大上下文记忆轮次,每轮包括一次用户提问和AI回复
  "agent_max_steps": 15                                       # Agent模式下单次任务的最大决策步数,超出后将停止继续调用工具
}
```
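
> 注:标准 JSON 不支持注释,实际填写时需去掉上述注释。去掉注释后,可用下面这段示意脚本快速检查 `config.json` 能否被正确解析(仅使用 Python 标准库,仅作参考):

```python
# 校验 config.json 是否为合法 JSON(示意代码,仅使用 Python 标准库)
import json

with open("config.json", "r", encoding="utf-8") as f:
    try:
        cfg = json.load(f)
        print(f"config.json 格式正确,共 {len(cfg)} 个配置项")
    except json.JSONDecodeError as e:
        print("config.json 格式有误:", e)
```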

**配置补充说明:** 

<details>
<summary>1. 语音配置</summary>

+ 添加 `"speech_recognition": true` 将开启语音识别,默认使用openai的whisper模型识别为文字,同时以文字回复,该参数仅支持私聊 (注意由于语音消息无法匹配前缀,一旦开启将对所有语音自动回复,支持语音触发画图);
+ 添加 `"group_speech_recognition": true` 将开启群组语音识别,默认使用openai的whisper模型识别为文字,同时以文字回复,参数仅支持群聊 (会匹配group_chat_prefix和group_chat_keyword, 支持语音触发画图);
+ 添加 `"voice_reply_voice": true` 将开启语音回复语音(同时作用于私聊和群聊)
</details>

<details>
<summary>2. 其他配置</summary>

+ `model`: 模型名称,Agent模式下推荐使用 `MiniMax-M2.7`、`glm-5-turbo`、`kimi-k2.5`、`qwen3.5-plus`、`claude-sonnet-4-6`、`gemini-3.1-pro-preview`,全部模型名称参考[common/const.py](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/common/const.py)文件
+ `character_desc`:普通对话模式下的机器人系统提示词。在Agent模式下该配置不生效,系统提示词由工作空间中的文件内容构成。
+ `subscribe_msg`:订阅消息,公众号和企业微信channel中请填写,当被订阅时会自动回复, 可使用特殊占位符。目前支持的占位符有{trigger_prefix},在程序中它会自动替换成bot的触发词。
</details>

<details>
<summary>3. LinkAI配置</summary>

+ `use_linkai`: 是否使用LinkAI接口,默认关闭,设置为true后可对接LinkAI平台,使用模型、知识库、工作流、插件等技能, 参考[接口文档](https://docs.link-ai.tech/platform/api/chat)
+ `linkai_api_key`: LinkAI Api Key,可在 [控制台](https://link-ai.tech/console/interface) 创建
</details>

注:全部配置项说明可在 [`config.py`](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/config.py) 文件中查看。
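
配置在代码中统一通过 `config.py` 提供的 `conf()` 读取,以下是读取配置项的简单示意(与项目源码 `agent/chat/service.py` 中的用法一致,仅作参考):

```python
# 示意:读取 config.json 中的配置项(conf() 由项目根目录的 config.py 提供)
from config import conf

model = conf().get("model")                            # 模型名称
max_turns = conf().get("agent_max_context_turns", 20)  # 第二个参数为缺省值
print(model, max_turns)
```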

## 三、运行

### 1.本地运行

如果是个人计算机 **本地运行**,直接在项目根目录下执行:

```bash
python3 app.py         # windows环境下该命令通常为 python app.py
```

运行后默认会启动web服务,可通过访问 `http://localhost:9899/chat` 在网页端对话。

如果需要接入其他应用通道只需修改 `config.json` 配置文件中的 `channel_type` 参数,详情参考:[通道说明](#通道说明)。


### 2.服务器部署

在服务器中可使用 `nohup` 命令在后台运行程序:

```bash
nohup python3 app.py & tail -f nohup.out
```

执行后程序运行于服务器后台,可通过 `ctrl+c` 关闭日志,不会影响后台程序的运行。使用 `ps -ef | grep app.py | grep -v grep` 命令可查看运行于后台的进程,如果想要重新启动程序可以先 `kill` 掉对应的进程。 日志关闭后如果想要再次打开只需输入 `tail -f nohup.out`。 

此外,项目根目录下的 `run.sh` 脚本支持一键启动和管理服务,包括 `./run.sh start`、`./run.sh stop`、`./run.sh restart`、`./run.sh logs` 等命令,执行 `./run.sh help` 可查看全部用法。

> 如果需要通过浏览器访问Web控制台,请确保服务器的 `9899` 端口已在防火墙或安全组中放行,建议仅对指定IP开放以保证安全。

### 3.Docker部署

使用docker部署无需下载源码和安装依赖,只需要获取 `docker-compose.yml` 配置文件并启动容器即可。Agent模式下更推荐使用源码进行部署,以获得更多系统访问能力。

> 前提是需要安装好 `docker` 及 `docker-compose`,安装成功后执行 `docker -v` 和 `docker-compose version` (或 `docker compose version`) 可查看到版本号。安装地址为 [docker官网](https://docs.docker.com/engine/install/) 。

**(1) 下载 docker-compose.yml 文件**

```bash
curl -O https://cdn.link-ai.tech/code/cow/docker-compose.yml
```

下载完成后打开 `docker-compose.yml` 填写所需配置,例如 `CHANNEL_TYPE`、`OPEN_AI_API_KEY` 等。

**(2) 启动容器**

在 `docker-compose.yml` 所在目录下执行以下命令启动容器:

```bash
sudo docker compose up -d         # 若docker-compose为 1.X 版本,则执行 `sudo docker-compose up -d`
```

运行命令后,会自动从 [docker hub](https://hub.docker.com/r/zhayujie/chatgpt-on-wechat) 拉取最新release版本的镜像。当执行 `sudo docker ps` 能查看到 NAMES 为 chatgpt-on-wechat 的容器即表示运行成功。最后执行以下命令可查看容器的运行日志:

```bash
sudo docker logs -f chatgpt-on-wechat
```

> 如果需要通过浏览器访问Web控制台,请确保服务器的 `9899` 端口已在防火墙或安全组中放行,建议仅对指定IP开放以保证安全。

## 模型说明

以下对所有支持的模型的配置和使用方法进行说明,模型接口实现在项目的 `models/` 目录下。

<details>
<summary>OpenAI</summary>

1. API Key创建:在 [OpenAI平台](https://platform.openai.com/api-keys) 创建API Key

2. 填写配置

```json
{
    "model": "gpt-5.4",
    "open_ai_api_key": "YOUR_API_KEY",
    "open_ai_api_base": "https://api.openai.com/v1",
    "bot_type": "openai"
}
```

 - `model`: 与OpenAI接口的 [model参数](https://platform.openai.com/docs/models) 一致,支持包括 gpt-5.4、gpt-5.4-mini、gpt-5.4-nano、o系列、gpt-4.1等模型,Agent模式推荐使用 `gpt-5.4`、`gpt-5.4-mini`
 - `open_ai_api_base`: 如果需要接入第三方代理接口,可通过修改该参数进行接入
 - `bot_type`: 使用OpenAI相关模型时无需填写。当使用第三方代理接口接入Claude等非OpenAI官方模型时,该参数设为 `openai`
</details>

<details>
<summary>LinkAI</summary>

1. API Key创建:在 [LinkAI平台](https://link-ai.tech/console/interface) 创建API Key 

2. 填写配置

```json
{
    "model": "gpt-5.4-mini",
    "use_linkai": true,
    "linkai_api_key": "YOUR API KEY"
}
```

+ `use_linkai`: 是否使用LinkAI接口,默认关闭,设置为true后可对接LinkAI平台的模型,并使用知识库、工作流、数据库、插件等丰富的Agent技能
+ `linkai_api_key`: LinkAI平台的API Key,可在 [控制台](https://link-ai.tech/console/interface) 中创建
+ `model`: [模型列表](https://link-ai.tech/console/models)中的全部模型均可使用
</details>

<details>
<summary>MiniMax</summary>

方式一:官方接入,配置如下(推荐):

```json
{
    "model": "MiniMax-M2.7",
    "minimax_api_key": ""
}
```
 - `model`: 可填写 `MiniMax-M2.7、MiniMax-M2.5、MiniMax-M2.1、MiniMax-M2.1-lightning、MiniMax-M2、abab6.5-chat` 等
 - `minimax_api_key`:MiniMax平台的API-KEY,在 [控制台](https://platform.minimaxi.com/user-center/basic-information/interface-key) 创建

方式二:OpenAI兼容方式接入,配置如下:
```json
{
  "bot_type": "openai",
  "model": "MiniMax-M2.7",
  "open_ai_api_base": "https://api.minimaxi.com/v1",
  "open_ai_api_key": ""
}
```
- `bot_type`: OpenAI兼容方式
- `model`: 可填 `MiniMax-M2.7、MiniMax-M2.5、MiniMax-M2.1、MiniMax-M2.1-lightning、MiniMax-M2`,参考[API文档](https://platform.minimaxi.com/document/%E5%AF%B9%E8%AF%9D?key=66701d281d57f38758d581d0#QklxsNSbaf6kM4j6wjO5eEek)
- `open_ai_api_base`: MiniMax平台API的 BASE URL
- `open_ai_api_key`: MiniMax平台的API-KEY
</details>

<details>
<summary>智谱AI (GLM)</summary>

方式一:官方接入,配置如下(推荐):

```json
{
  "model": "glm-5-turbo",
  "zhipu_ai_api_key": ""
}
```
 - `model`: 可填 `glm-5-turbo、glm-5、glm-4.7、glm-4-plus、glm-4-flash、glm-4-air、glm-4-airx、glm-4-long` 等, 参考 [glm系列模型编码](https://bigmodel.cn/dev/api/normal-model/glm-4)
 - `zhipu_ai_api_key`: 智谱AI平台的 API KEY,在 [控制台](https://www.bigmodel.cn/usercenter/proj-mgmt/apikeys) 创建

方式二:OpenAI兼容方式接入,配置如下:
```json
{
  "bot_type": "openai",
  "model": "glm-5-turbo",
  "open_ai_api_base": "https://open.bigmodel.cn/api/paas/v4",
  "open_ai_api_key": ""
}
```
- `bot_type`: OpenAI兼容方式
- `model`: 可填 `glm-5-turbo、glm-5、glm-4.7、glm-4-plus、glm-4-flash、glm-4-air、glm-4-airx、glm-4-long` 等
- `open_ai_api_base`: 智谱AI平台的 BASE URL
- `open_ai_api_key`: 智谱AI平台的 API KEY
</details>

<details>
<summary>通义千问 (Qwen)</summary>

方式一:官方SDK接入,配置如下(推荐):

```json
{
    "model": "qwen3.5-plus",
    "dashscope_api_key": "sk-qVxxxxG"
}
```
 - `model`: 可填写 `qwen3.5-plus、qwen3-max、qwen-max、qwen-plus、qwen-turbo、qwen-long、qwq-plus` 等
 - `dashscope_api_key`: 通义千问的 API-KEY,参考 [官方文档](https://bailian.console.aliyun.com/?tab=api#/api) ,在 [控制台](https://bailian.console.aliyun.com/?tab=model#/api-key) 创建

方式二:OpenAI兼容方式接入,配置如下:
```json
{
  "bot_type": "openai",
  "model": "qwen3.5-plus",
  "open_ai_api_base": "https://dashscope.aliyuncs.com/compatible-mode/v1",
  "open_ai_api_key": "sk-qVxxxxG"
}
```
- `bot_type`: OpenAI兼容方式
- `model`: 支持官方所有模型,参考[模型列表](https://help.aliyun.com/zh/model-studio/models?spm=a2c4g.11186623.0.0.78d84823Kth5on#9f8890ce29g5u)
- `open_ai_api_base`: 通义千问API的 BASE URL
- `open_ai_api_key`: 通义千问的 API-KEY
</details>

<details>
<summary>Kimi (Moonshot)</summary>

方式一:官方接入,配置如下:

```json
{
    "model": "kimi-k2.5",
    "moonshot_api_key": ""
}
```
 - `model`: 可填写 `kimi-k2.5、kimi-k2、moonshot-v1-8k、moonshot-v1-32k、moonshot-v1-128k`
 - `moonshot_api_key`: Moonshot的API-KEY,在 [控制台](https://platform.moonshot.cn/console/api-keys) 创建
 
方式二:OpenAI兼容方式接入,配置如下:
```json
{
  "bot_type": "openai",
  "model": "kimi-k2.5",
  "open_ai_api_base": "https://api.moonshot.cn/v1",
  "open_ai_api_key": ""
}
```
- `bot_type`: OpenAI兼容方式
- `model`: 可填写 `kimi-k2.5、kimi-k2、moonshot-v1-8k、moonshot-v1-32k、moonshot-v1-128k`
- `open_ai_api_base`: Moonshot的 BASE URL
- `open_ai_api_key`: Moonshot的 API-KEY
</details>

<details>
<summary>豆包 (Doubao)</summary>

1. API Key创建:在 [火山方舟控制台](https://console.volcengine.com/ark/region:ark+cn-beijing/apikey) 创建API Key

2. 填写配置

```json
{
    "model": "doubao-seed-2-0-code-preview-260215",
    "ark_api_key": "YOUR_API_KEY"
}
```
 - `model`: 可填写 `doubao-seed-2-0-code-preview-260215、doubao-seed-2-0-pro-260215、doubao-seed-2-0-lite-260215、doubao-seed-2-0-mini-260215` 等
 - `ark_api_key`: 火山方舟平台的 API Key,在 [控制台](https://console.volcengine.com/ark/region:ark+cn-beijing/apikey) 创建
 - `ark_base_url`: 可选,默认为 `https://ark.cn-beijing.volces.com/api/v3`
</details>

<details>
<summary>Claude</summary>

1. API Key创建:在 [Claude控制台](https://console.anthropic.com/settings/keys) 创建API Key

2. 填写配置

```json
{
    "model": "claude-sonnet-4-6",
    "claude_api_key": "YOUR_API_KEY"
}
```
 - `model`: 参考 [官方模型ID](https://docs.anthropic.com/en/docs/about-claude/models/overview#model-aliases) ,支持 `claude-sonnet-4-6、claude-opus-4-6、claude-sonnet-4-5、claude-sonnet-4-0、claude-opus-4-0、claude-3-5-sonnet-latest` 等
</details>

<details>
<summary>Gemini</summary>

API Key创建:在 [控制台](https://aistudio.google.com/app/apikey?hl=zh-cn) 创建API Key ,配置如下
```json
{
    "model": "gemini-3.1-flash-lite-preview",
    "gemini_api_key": ""
}
```
 - `model`: 参考[官方文档-模型列表](https://ai.google.dev/gemini-api/docs/models?hl=zh-cn),支持 `gemini-3.1-flash-lite-preview、gemini-3.1-pro-preview、gemini-3-flash-preview、gemini-3-pro-preview` 等
</details>

<details>
<summary>DeepSeek</summary>

1. API Key创建:在 [DeepSeek平台](https://platform.deepseek.com/api_keys) 创建API Key 

2. 填写配置

```json
{
    "model": "deepseek-chat",
    "open_ai_api_key": "sk-xxxxxxxxxxx",
    "open_ai_api_base": "https://api.deepseek.com/v1",
    "bot_type": "openai"

}
```

 - `bot_type`: OpenAI兼容方式
 - `model`: 可填 `deepseek-chat、deepseek-reasoner`,分别对应的是 DeepSeek-V3 和 DeepSeek-R1 模型
 - `open_ai_api_key`: DeepSeek平台的 API Key
 - `open_ai_api_base`: DeepSeek平台 BASE URL
</details>

<details>
<summary>Azure</summary>

1. API Key创建:在 [Azure平台](https://oai.azure.com/) 创建API Key 

2. 填写配置

```json
{
  "model": "",
  "use_azure_chatgpt": true,
  "open_ai_api_key": "",
  "open_ai_api_base": "",
  "azure_deployment_id": "",
  "azure_api_version": "2025-01-01-preview"
}
```

 - `model`: 留空即可
 - `use_azure_chatgpt`: 设为 true 
 - `open_ai_api_key`: Azure平台的密钥
 - `open_ai_api_base`: Azure平台的 BASE URL
 - `azure_deployment_id`: Azure平台部署的模型名称
 - `azure_api_version`: API版本,该参数及以上参数均可在部署的 [模型配置](https://oai.azure.com/resource/deployments) 界面查看
</details>

<details>
<summary>百度文心</summary>
方式一:官方SDK接入,配置如下:

```json
{
    "model": "wenxin-4", 
    "baidu_wenxin_api_key": "IajztZ0bDxgnP9bEykU7lBer",
    "baidu_wenxin_secret_key": "EDPZn6L24uAS9d8RWFfotK47dPvkjD6G"
}
```
 - `model`: 可填 `wenxin`和`wenxin-4`,对应模型为 文心-3.5 和 文心-4.0
 - `baidu_wenxin_api_key`:参考 [千帆平台-access_token鉴权](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/dlv4pct3s) 文档获取 API Key
 - `baidu_wenxin_secret_key`:参考 [千帆平台-access_token鉴权](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/dlv4pct3s) 文档获取 Secret Key

方式二:OpenAI兼容方式接入,配置如下:
```json
{
  "bot_type": "openai",
  "model": "ERNIE-4.0-Turbo-8K",
  "open_ai_api_base": "https://qianfan.baidubce.com/v2",
  "open_ai_api_key": "bce-v3/ALTxxxxxxd2b"
}
```
- `bot_type`: OpenAI兼容方式
- `model`: 支持官方所有模型,参考[模型列表](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Wm9cvy6rl)
- `open_ai_api_base`: 百度文心API的 BASE URL
- `open_ai_api_key`: 百度文心的 API-KEY,参考 [官方文档](https://cloud.baidu.com/doc/qianfan-api/s/ym9chdsy5) ,在 [控制台](https://console.bce.baidu.com/iam/#/iam/apikey/list) 创建API Key

</details>

<details>
<summary>讯飞星火</summary>

方式一:官方接入,配置如下:
参考 [官方文档-快速指引](https://www.xfyun.cn/doc/platform/quickguide.html#%E7%AC%AC%E4%BA%8C%E6%AD%A5-%E5%88%9B%E5%BB%BA%E6%82%A8%E7%9A%84%E7%AC%AC%E4%B8%80%E4%B8%AA%E5%BA%94%E7%94%A8-%E5%BC%80%E5%A7%8B%E4%BD%BF%E7%94%A8%E6%9C%8D%E5%8A%A1) 获取 `APPID、 APISecret、 APIKey` 三个参数

```json
{
  "model": "xunfei",
  "xunfei_app_id": "",
  "xunfei_api_key": "",
  "xunfei_api_secret": "",
  "xunfei_domain": "4.0Ultra",
  "xunfei_spark_url": "wss://spark-api.xf-yun.com/v4.0/chat"
}
```
 - `model`: 填 `xunfei`
 - `xunfei_domain`: 可填写 `4.0Ultra、generalv3.5、max-32k、generalv3、pro-128k、lite`
 - `xunfei_spark_url`: 填写参考 [官方文档-请求地址](https://www.xfyun.cn/doc/spark/Web.html#_1-1-%E8%AF%B7%E6%B1%82%E5%9C%B0%E5%9D%80) 的说明
 
方式二:OpenAI兼容方式接入,配置如下:
```json
{
  "bot_type": "openai",
  "model": "4.0Ultra",
  "open_ai_api_base": "https://spark-api-open.xf-yun.com/v1",
  "open_ai_api_key": ""
}
```
- `bot_type`: OpenAI兼容方式
- `model`: 可填写 `4.0Ultra、generalv3.5、max-32k、generalv3、pro-128k、lite`
- `open_ai_api_base`: 讯飞星火平台的 BASE URL
- `open_ai_api_key`: 讯飞星火平台的[APIPassword](https://console.xfyun.cn/services/bm3) ,因模型而异
</details>

<details>
<summary>ModelScope</summary>

```json
{
  "bot_type": "modelscope",
  "model": "Qwen/QwQ-32B",
  "modelscope_api_key": "your_api_key",
  "modelscope_base_url": "https://api-inference.modelscope.cn/v1/chat/completions",
  "text_to_image": "MusePublic/489_ckpt_FLUX_1"
}
```

- `bot_type`: modelscope接口格式
- `model`: 参考[模型列表](https://www.modelscope.cn/models?filter=inference_type&page=1)
- `modelscope_api_key`: 参考 [官方文档-访问令牌](https://modelscope.cn/docs/accounts/token) ,在 [控制台](https://modelscope.cn/my/myaccesstoken) 创建
- `modelscope_base_url`: modelscope平台的 BASE URL
- `text_to_image`: 图像生成模型,参考[模型列表](https://www.modelscope.cn/models?filter=inference_type&page=1)
</details>

<details>
<summary>Coding Plan</summary>

Coding Plan 是各厂商推出的编程包月套餐,所有厂商均可通过 OpenAI 兼容方式接入:

```json
{
  "bot_type": "openai",
  "model": "模型名称",
  "open_ai_api_base": "厂商 Coding Plan API Base",
  "open_ai_api_key": "YOUR_API_KEY"
}
```

目前支持阿里云、MiniMax、智谱GLM、Kimi、火山引擎等厂商,各厂商详细配置请参考 [Coding Plan 文档](https://docs.cowagent.ai/models/coding-plan)。
</details>


## 通道说明

以下对可接入通道的配置方式进行说明,应用通道代码在项目的 `channel/` 目录下。

支持同时接入多个通道,配置时通过逗号进行分割,例如 `"channel_type": "feishu,dingtalk"`。

<details>
<summary>1. Web</summary>

项目启动后会默认运行Web控制台,配置如下:

```json
{
    "channel_type": "web",
    "web_port": 9899
}
```

- `web_port`: 默认为 9899,可按需更改,需要服务器防火墙和安全组放行该端口
- 如本地运行,启动后请访问 `http://localhost:9899/chat` ;如服务器运行,请访问 `http://ip:9899/chat` 
> 注:请将上述 url 中的 ip 或者 port 替换为实际的值
</details>

<details>
<summary>2. Feishu - 飞书</summary>

飞书支持两种事件接收模式:WebSocket 长连接(推荐)和 Webhook。

**方式一:WebSocket 模式(推荐,无需公网 IP)**

```json
{
    "channel_type": "feishu",
    "feishu_app_id": "APP_ID",
    "feishu_app_secret": "APP_SECRET",
    "feishu_event_mode": "websocket"
}
```

**方式二:Webhook 模式(需要公网 IP)**

```json
{
    "channel_type": "feishu",
    "feishu_app_id": "APP_ID",
    "feishu_app_secret": "APP_SECRET",
    "feishu_token": "VERIFICATION_TOKEN",
    "feishu_event_mode": "webhook",
    "feishu_port": 9891
}
```

- `feishu_event_mode`: 事件接收模式,`websocket`(推荐)或 `webhook`
- WebSocket 模式需安装依赖:`pip3 install lark-oapi`

详细步骤和参数说明参考 [飞书接入](https://docs.cowagent.ai/channels/feishu)

</details>

<details>
<summary>3. DingTalk - 钉钉</summary>

钉钉需要在开放平台创建智能机器人应用,将以下配置填入 `config.json`:

```json
{
    "channel_type": "dingtalk",
    "dingtalk_client_id": "CLIENT_ID",
    "dingtalk_client_secret": "CLIENT_SECRET"
}
```
详细步骤和参数说明参考 [钉钉接入](https://docs.cowagent.ai/channels/dingtalk)
</details>

<details>
<summary>4. WeCom Bot - 企微智能机器人</summary>

企微智能机器人使用 WebSocket 长连接模式,无需公网 IP 和域名,配置简单:

```json
{
    "channel_type": "wecom_bot",
    "wecom_bot_id": "YOUR_BOT_ID",
    "wecom_bot_secret": "YOUR_SECRET"
}
```
详细步骤和参数说明参考 [企微智能机器人接入](https://docs.cowagent.ai/channels/wecom-bot)

</details>

<details>
<summary>5. QQ - QQ 机器人</summary>

QQ 机器人使用 WebSocket 长连接模式,无需公网 IP 和域名,支持 QQ 单聊、群聊和频道消息:

```json
{
    "channel_type": "qq",
    "qq_app_id": "YOUR_APP_ID",
    "qq_app_secret": "YOUR_APP_SECRET"
}
```
详细步骤和参数说明参考 [QQ 机器人接入](https://docs.cowagent.ai/channels/qq)

</details>

<details>
<summary>6. WeCom App - 企业微信应用</summary>

企业微信自建应用接入需在后台创建应用并启用消息回调,配置示例:

```json
{
    "channel_type": "wechatcom_app",
    "wechatcom_corp_id": "CORPID",
    "wechatcomapp_token": "TOKEN",
    "wechatcomapp_port": 9898,
    "wechatcomapp_secret": "SECRET",
    "wechatcomapp_agent_id": "AGENTID",
    "wechatcomapp_aes_key": "AESKEY"
}
```
详细步骤和参数说明参考 [企微自建应用接入](https://docs.cowagent.ai/channels/wecom)

</details>

<details>
<summary>7. WeChat MP - 微信公众号</summary>

本项目支持订阅号和服务号两种公众号,通过服务号(`wechatmp_service`)体验更佳。

**个人订阅号(wechatmp)**

```json
{
    "channel_type": "wechatmp",
    "wechatmp_token": "TOKEN",
    "wechatmp_port": 80,
    "wechatmp_app_id": "APPID",
    "wechatmp_app_secret": "APPSECRET",
    "wechatmp_aes_key": ""
}
```

**企业服务号(wechatmp_service)**

```json
{
    "channel_type": "wechatmp_service",
    "wechatmp_token": "TOKEN",
    "wechatmp_port": 80,
    "wechatmp_app_id": "APPID",
    "wechatmp_app_secret": "APPSECRET",
    "wechatmp_aes_key": ""
}
```

详细步骤和参数说明参考 [微信公众号接入](https://docs.cowagent.ai/channels/wechatmp)

</details>

<details>
<summary>8. Terminal - 终端</summary>

修改 `config.json` 中的 `channel_type` 字段:

```json
{
    "channel_type": "terminal"
}
```

运行后可在终端与机器人进行对话。

</details>

<br/>

# 🔗 相关项目

- [bot-on-anything](https://github.com/zhayujie/bot-on-anything):轻量和高可扩展的大模型应用框架,支持接入Slack, Telegram, Discord, Gmail等海外平台,可作为本项目的补充使用。
- [AgentMesh](https://github.com/MinimalFuture/AgentMesh):开源的多智能体(Multi-Agent)框架,可以通过多智能体团队的协同来解决复杂问题。本项目基于该框架实现了[Agent插件](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/plugins/agent/README.md),可访问终端、浏览器、文件系统、搜索引擎 等各类工具,并实现了多智能体协同。



# 🔎 常见问题

FAQs: <https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs>

或直接在线咨询 [项目小助手](https://link-ai.tech/app/Kv2fXJcH)  (知识库持续完善中,回复供参考)

# 🛠️ 开发

欢迎接入更多应用通道,参考 [飞书通道](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/channel/feishu/feishu_channel.py) 新增自定义通道,实现接收和发送消息逻辑即可完成接入。 同时欢迎贡献新的Skills,参考 [Skill创造器说明](https://github.com/zhayujie/chatgpt-on-wechat/blob/master/skills/skill-creator/SKILL.md)。

# ✉ 联系

欢迎提交PR、Issues进行反馈,以及通过 🌟Star 支持并关注项目更新。项目运行遇到问题可以查看 [常见问题列表](https://github.com/zhayujie/chatgpt-on-wechat/wiki/FAQs) ,以及前往 [Issues](https://github.com/zhayujie/chatgpt-on-wechat/issues) 中搜索。个人开发者可加入开源交流群参与更多讨论,企业用户可联系[产品客服](https://cdn.link-ai.tech/portal/linkai-customer-service.png)咨询。

# 🌟 贡献者

![cow contributors](https://contrib.rocks/image?repo=zhayujie/chatgpt-on-wechat&max=1000)


================================================
FILE: agent/chat/__init__.py
================================================
from agent.chat.service import ChatService

__all__ = ["ChatService"]


================================================
FILE: agent/chat/service.py
================================================
"""
ChatService - Wraps the Agent stream execution to produce CHAT protocol chunks.

Translates agent events (message_update, message_end, tool_execution_end, etc.)
into the CHAT socket protocol format (content chunks with segment_id, tool_calls chunks).
"""

import time
from typing import Callable, Optional

from common.log import logger


class ChatService:
    """
    High-level service that runs an Agent for a given query and streams
    the results as CHAT protocol chunks via a callback.

    Usage:
        svc = ChatService(agent_bridge)
        svc.run(query, session_id, send_chunk_fn)
    """

    def __init__(self, agent_bridge):
        """
        :param agent_bridge: AgentBridge instance (manages agent lifecycle)
        """
        self.agent_bridge = agent_bridge

    def run(self, query: str, session_id: str, send_chunk_fn: Callable[[dict], None],
            channel_type: str = ""):
        """
        Run the agent for *query* and stream results back via *send_chunk_fn*.

        The method blocks until the agent finishes. After it returns the SDK
        will automatically send the final (streaming=false) message.

        :param query: user query text
        :param session_id: session identifier for agent isolation
        :param send_chunk_fn: callable(chunk_data: dict) to send a streaming chunk
        :param channel_type: source channel (e.g. "web", "feishu") for persistence
        """
        agent = self.agent_bridge.get_agent(session_id=session_id)
        if agent is None:
            raise RuntimeError("Failed to initialise agent for the session")

        # Pass context metadata to model for downstream API requests
        if hasattr(agent, 'model'):
            agent.model.channel_type = channel_type or ""
            agent.model.session_id = session_id or ""

        # State shared between the event callback and this method
        state = _StreamState()

        def on_event(event: dict):
            """Translate agent events into CHAT protocol chunks."""
            event_type = event.get("type")
            data = event.get("data", {})

            if event_type == "message_update":
                # Incremental text delta
                delta = data.get("delta", "")
                if delta:
                    send_chunk_fn({
                        "chunk_type": "content",
                        "delta": delta,
                        "segment_id": state.segment_id,
                    })

            elif event_type == "message_end":
                # A content segment finished.
                tool_calls = data.get("tool_calls", [])
                if tool_calls:
                    # After tool_calls are executed the next content will be
                    # a new segment; collect tool results until turn_end.
                    state.pending_tool_results = []

            elif event_type == "tool_execution_start":
                # Notify the client that a tool is about to run (with its input args)
                tool_name = data.get("tool_name", "")
                arguments = data.get("arguments", {})
                # Cache arguments keyed by tool_call_id so tool_execution_end can include them
                tool_call_id = data.get("tool_call_id", tool_name)
                state.pending_tool_arguments[tool_call_id] = arguments
                send_chunk_fn({
                    "chunk_type": "tool_start",
                    "tool": tool_name,
                    "arguments": arguments,
                })

            elif event_type == "tool_execution_end":
                tool_name = data.get("tool_name", "")
                tool_call_id = data.get("tool_call_id", tool_name)
                # Retrieve cached arguments from the matching tool_execution_start event
                arguments = state.pending_tool_arguments.pop(tool_call_id, data.get("arguments", {}))
                result = data.get("result", "")
                status = data.get("status", "unknown")
                execution_time = data.get("execution_time", 0)
                elapsed_str = f"{execution_time:.2f}s"

                # Serialise result to string if needed
                if not isinstance(result, str):
                    import json
                    try:
                        result = json.dumps(result, ensure_ascii=False)
                    except Exception:
                        result = str(result)

                tool_info = {
                    "name": tool_name,
                    "arguments": arguments,
                    "result": result,
                    "status": status,
                    "elapsed": elapsed_str,
                }

                if state.pending_tool_results is not None:
                    state.pending_tool_results.append(tool_info)

            elif event_type == "turn_end":
                has_tool_calls = data.get("has_tool_calls", False)
                if has_tool_calls and state.pending_tool_results:
                    # Flush collected tool results as a single tool_calls chunk
                    send_chunk_fn({
                        "chunk_type": "tool_calls",
                        "tool_calls": state.pending_tool_results,
                    })
                    state.pending_tool_results = None
                    # Next content belongs to a new segment
                    state.segment_id += 1

        # Run the agent with our event callback ---------------------------
        logger.info(f"[ChatService] Starting agent run: session={session_id}, query={query[:80]}")

        from config import conf
        max_context_turns = conf().get("agent_max_context_turns", 20)

        # Get full system prompt with skills
        full_system_prompt = agent.get_full_system_prompt()

        # Create a copy of messages for this execution
        with agent.messages_lock:
            messages_copy = agent.messages.copy()
            original_length = len(agent.messages)

        from agent.protocol.agent_stream import AgentStreamExecutor

        executor = AgentStreamExecutor(
            agent=agent,
            model=agent.model,
            system_prompt=full_system_prompt,
            tools=agent.tools,
            max_turns=agent.max_steps,
            on_event=on_event,
            messages=messages_copy,
            max_context_turns=max_context_turns,
        )

        try:
            response = executor.run_stream(query)
        except Exception:
            # If executor cleared messages (context overflow), sync back
            if len(executor.messages) == 0:
                with agent.messages_lock:
                    agent.messages.clear()
                    logger.info("[ChatService] Cleared agent message history after executor recovery")
            raise

        # Append only the NEW messages from this execution (thread-safe)
        with agent.messages_lock:
            new_messages = executor.messages[original_length:]
            agent.messages.extend(new_messages)

        # Persist new messages to SQLite so they survive restarts and
        # can be queried via the HISTORY interface.
        if new_messages:
            self._persist_messages(session_id, list(new_messages), channel_type)

        # Store executor reference for files_to_send access
        agent.stream_executor = executor

        # Execute post-process tools
        agent._execute_post_process_tools()

        logger.info(f"[ChatService] Agent run completed: session={session_id}")

    @staticmethod
    def _persist_messages(session_id: str, new_messages: list, channel_type: str = ""):
        try:
            from config import conf
            if not conf().get("conversation_persistence", True):
                return
        except Exception:
            pass
        try:
            from agent.memory import get_conversation_store
            get_conversation_store().append_messages(
                session_id, new_messages, channel_type=channel_type
            )
        except Exception as e:
            logger.warning(
                f"[ChatService] Failed to persist messages for session={session_id}: {e}"
            )


class _StreamState:
    """Mutable state shared between the event callback and the run method."""

    def __init__(self):
        self.segment_id: int = 0
        # None means we are not accumulating tool results right now.
        # A list means we are in the middle of a tool-execution phase.
        self.pending_tool_results: Optional[list] = None
        # Maps tool_call_id -> arguments captured from tool_execution_start,
        # so that tool_execution_end can attach the correct input args.
        self.pending_tool_arguments: dict = {}
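
# Illustrative sketch (not executed anywhere): how the on_event callback in
# ChatService pairs tool_execution_start / tool_execution_end events through
# _StreamState and flushes them at turn_end.  Field and key names mirror the
# handlers above; the concrete values are made up for illustration.
#
#   state = _StreamState()
#   # tool_execution_start: cache the arguments under the tool_call_id and
#   # emit a "tool_start" chunk via send_chunk_fn.
#   state.pending_tool_arguments["call_1"] = {"query": "weather"}
#   # tool_execution_end: pop the cached arguments and, if a tool phase is
#   # active (pending_tool_results is a list), buffer the merged tool_info.
#   if state.pending_tool_results is not None:
#       state.pending_tool_results.append({
#           "name": "web_search", "arguments": {"query": "weather"},
#           "result": "...", "status": "success", "elapsed": "0.42s",
#       })
#   # turn_end with has_tool_calls=True: flush pending_tool_results as one
#   # "tool_calls" chunk, reset it to None, and bump segment_id so the next
#   # text chunk opens a new display segment.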


================================================
FILE: agent/memory/__init__.py
================================================
"""
Memory module for AgentMesh

Provides both long-term memory (vector/keyword search) and short-term
conversation history persistence (SQLite).
"""

from agent.memory.manager import MemoryManager
from agent.memory.config import MemoryConfig, get_default_memory_config, set_global_memory_config
from agent.memory.embedding import create_embedding_provider
from agent.memory.conversation_store import ConversationStore, get_conversation_store
from agent.memory.summarizer import ensure_daily_memory_file

__all__ = [
    'MemoryManager',
    'MemoryConfig',
    'get_default_memory_config',
    'set_global_memory_config',
    'create_embedding_provider',
    'ConversationStore',
    'get_conversation_store',
    'ensure_daily_memory_file',
]


================================================
FILE: agent/memory/chunker.py
================================================
"""
Text chunking utilities for memory

Splits text into chunks with token limits and overlap
"""

from __future__ import annotations
from typing import List, Tuple
from dataclasses import dataclass


@dataclass
class TextChunk:
    """Represents a text chunk with line numbers"""
    text: str
    start_line: int
    end_line: int


class TextChunker:
    """Chunks text by line count with token estimation"""
    
    def __init__(self, max_tokens: int = 500, overlap_tokens: int = 50):
        """
        Initialize chunker
        
        Args:
            max_tokens: Maximum tokens per chunk
            overlap_tokens: Overlap tokens between chunks
        """
        self.max_tokens = max_tokens
        self.overlap_tokens = overlap_tokens
        # Rough estimation: ~4 chars per token for English/Chinese mixed
        self.chars_per_token = 4
    
    def chunk_text(self, text: str) -> List[TextChunk]:
        """
        Chunk text into overlapping segments
        
        Args:
            text: Input text to chunk
            
        Returns:
            List of TextChunk objects
        """
        if not text.strip():
            return []
        
        lines = text.split('\n')
        chunks = []
        
        max_chars = self.max_tokens * self.chars_per_token
        overlap_chars = self.overlap_tokens * self.chars_per_token
        
        current_chunk = []
        current_chars = 0
        start_line = 1
        
        for i, line in enumerate(lines, start=1):
            line_chars = len(line)
            
            # If single line exceeds max, split it
            if line_chars > max_chars:
                # Save current chunk if exists
                if current_chunk:
                    chunks.append(TextChunk(
                        text='\n'.join(current_chunk),
                        start_line=start_line,
                        end_line=i - 1
                    ))
                    current_chunk = []
                    current_chars = 0
                
                # Split long line into multiple chunks
                for sub_chunk in self._split_long_line(line, max_chars):
                    chunks.append(TextChunk(
                        text=sub_chunk,
                        start_line=i,
                        end_line=i
                    ))
                
                start_line = i + 1
                continue
            
            # Check if adding this line would exceed limit
            if current_chars + line_chars > max_chars and current_chunk:
                # Save current chunk
                chunks.append(TextChunk(
                    text='\n'.join(current_chunk),
                    start_line=start_line,
                    end_line=i - 1
                ))
                
                # Start new chunk with overlap
                overlap_lines = self._get_overlap_lines(current_chunk, overlap_chars)
                current_chunk = overlap_lines + [line]
                current_chars = sum(len(l) for l in current_chunk)
                start_line = i - len(overlap_lines)
            else:
                # Add line to current chunk
                current_chunk.append(line)
                current_chars += line_chars
        
        # Save last chunk
        if current_chunk:
            chunks.append(TextChunk(
                text='\n'.join(current_chunk),
                start_line=start_line,
                end_line=len(lines)
            ))
        
        return chunks
    
    def _split_long_line(self, line: str, max_chars: int) -> List[str]:
        """Split a single long line into multiple chunks"""
        chunks = []
        for i in range(0, len(line), max_chars):
            chunks.append(line[i:i + max_chars])
        return chunks
    
    def _get_overlap_lines(self, lines: List[str], target_chars: int) -> List[str]:
        """Get last few lines that fit within target_chars for overlap"""
        overlap = []
        chars = 0
        
        for line in reversed(lines):
            line_chars = len(line)
            if chars + line_chars > target_chars:
                break
            overlap.insert(0, line)
            chars += line_chars
        
        return overlap
    
    def chunk_markdown(self, text: str) -> List[TextChunk]:
        """
        Chunk markdown text while respecting structure
        (For future enhancement: respect markdown sections)
        """
        return self.chunk_text(text)
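
# Usage sketch (illustrative; not executed on import).  With the defaults
# max_tokens=500 and chars_per_token=4, a chunk collects whole lines up to
# roughly 2000 characters, and up to ~200 characters of trailing lines are
# repeated at the start of the next chunk as overlap.
#
#   chunker = TextChunker(max_tokens=500, overlap_tokens=50)
#   for chunk in chunker.chunk_text(markdown_text):   # markdown_text: any str
#       print(chunk.start_line, chunk.end_line, len(chunk.text))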


================================================
FILE: agent/memory/config.py
================================================
"""
Memory configuration module

Provides global memory configuration with simplified workspace structure
"""

from __future__ import annotations
import os
from dataclasses import dataclass, field
from typing import Optional, List
from pathlib import Path


def _default_workspace():
    """Get default workspace path with proper Windows support"""
    from common.utils import expand_path
    return expand_path("~/cow")


@dataclass
class MemoryConfig:
    """Configuration for memory storage and search"""
    
    # Storage paths (default: ~/cow)
    workspace_root: str = field(default_factory=_default_workspace)
    
    # Embedding config
    embedding_provider: str = "openai"  # "openai" | "linkai"
    embedding_model: str = "text-embedding-3-small"
    embedding_dim: int = 1536
    
    # Chunking config
    chunk_max_tokens: int = 500
    chunk_overlap_tokens: int = 50
    
    # Search config
    max_results: int = 10
    min_score: float = 0.1
    
    # Hybrid search weights
    vector_weight: float = 0.7
    keyword_weight: float = 0.3
    
    # Memory sources
    sources: List[str] = field(default_factory=lambda: ["memory", "session"])
    
    # Sync config
    enable_auto_sync: bool = True
    sync_on_search: bool = True
    
    
    def get_workspace(self) -> Path:
        """Get workspace root directory"""
        return Path(self.workspace_root)
    
    def get_memory_dir(self) -> Path:
        """Get memory files directory"""
        return self.get_workspace() / "memory"
    
    def get_db_path(self) -> Path:
        """Get SQLite database path for long-term memory index"""
        index_dir = self.get_memory_dir() / "long-term"
        index_dir.mkdir(parents=True, exist_ok=True)
        return index_dir / "index.db"
    
    def get_skills_dir(self) -> Path:
        """Get skills directory"""
        return self.get_workspace() / "skills"
    
    def get_agent_workspace(self, agent_name: Optional[str] = None) -> Path:
        """
        Get workspace directory for an agent
        
        Args:
            agent_name: Optional agent name (not used in current implementation)
            
        Returns:
            Path to workspace directory
        """
        workspace = self.get_workspace()
        # Ensure workspace directory exists
        workspace.mkdir(parents=True, exist_ok=True)
        return workspace
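
# Directory layout implied by the accessors above, assuming the default
# workspace_root of ~/cow (illustrative):
#
#   ~/cow/                            <- get_workspace() / get_agent_workspace()
#   ~/cow/memory/                     <- get_memory_dir()
#   ~/cow/memory/long-term/index.db   <- get_db_path() (directory created on access)
#   ~/cow/skills/                     <- get_skills_dir()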


# Global memory configuration
_global_memory_config: Optional[MemoryConfig] = None


def get_default_memory_config() -> MemoryConfig:
    """
    Get the global memory configuration.
    If not set, returns a default configuration.
    
    Returns:
        MemoryConfig instance
    """
    global _global_memory_config
    if _global_memory_config is None:
        _global_memory_config = MemoryConfig()
    return _global_memory_config


def set_global_memory_config(config: MemoryConfig):
    """
    Set the global memory configuration.
    This should be called before creating any MemoryManager instances.
    
    Args:
        config: MemoryConfig instance to use globally
        
    Example:
        >>> from agent.memory import MemoryConfig, set_global_memory_config
        >>> config = MemoryConfig(
        ...     workspace_root="~/my_agents",
        ...     embedding_provider="openai",
        ...     vector_weight=0.8
        ... )
        >>> set_global_memory_config(config)
    """
    global _global_memory_config
    _global_memory_config = config


================================================
FILE: agent/memory/conversation_store.py
================================================
"""
Conversation history persistence using SQLite.

Design:
- sessions table: per-session metadata (channel_type, last_active, msg_count)
- messages table: individual messages stored as JSON, append-only
- Pruning: age-based only (sessions not updated within N days are deleted)
- Thread-safe via a single in-process lock

Storage path: ~/cow/memory/long-term/index.db (shared with the long-term
memory index; see get_conversation_store below)
"""

from __future__ import annotations

import json
import sqlite3
import threading
import time
from pathlib import Path
from typing import Any, Dict, List, Optional

from common.log import logger


# ---------------------------------------------------------------------------
# Schema
# ---------------------------------------------------------------------------

_DDL = """
CREATE TABLE IF NOT EXISTS sessions (
    session_id   TEXT    PRIMARY KEY,
    channel_type TEXT    NOT NULL DEFAULT '',
    created_at   INTEGER NOT NULL,
    last_active  INTEGER NOT NULL,
    msg_count    INTEGER NOT NULL DEFAULT 0
);

CREATE TABLE IF NOT EXISTS messages (
    id           INTEGER PRIMARY KEY AUTOINCREMENT,
    session_id   TEXT    NOT NULL,
    seq          INTEGER NOT NULL,
    role         TEXT    NOT NULL,
    content      TEXT    NOT NULL,
    created_at   INTEGER NOT NULL,
    UNIQUE (session_id, seq)
);

CREATE INDEX IF NOT EXISTS idx_messages_session
    ON messages (session_id, seq);

CREATE INDEX IF NOT EXISTS idx_sessions_last_active
    ON sessions (last_active);
"""

# Migration: add channel_type column to existing databases that predate it.
_MIGRATION_ADD_CHANNEL_TYPE = """
ALTER TABLE sessions ADD COLUMN channel_type TEXT NOT NULL DEFAULT '';
"""

DEFAULT_MAX_AGE_DAYS: int = 30


def _is_visible_user_message(content: Any) -> bool:
    """
    Return True when a user-role message represents actual user input
    (not an internal tool_result injected by the agent loop).
    """
    if isinstance(content, str):
        return bool(content.strip())
    if isinstance(content, list):
        return any(
            isinstance(b, dict) and b.get("type") == "text"
            for b in content
        )
    return False


def _extract_display_text(content: Any) -> str:
    """
    Extract the human-readable text portion from a message content value.
    Returns an empty string for tool_use / tool_result blocks.
    """
    if isinstance(content, str):
        return content.strip()
    if isinstance(content, list):
        parts = [
            b.get("text", "")
            for b in content
            if isinstance(b, dict) and b.get("type") == "text"
        ]
        return "\n".join(p for p in parts if p).strip()
    return ""


def _extract_tool_calls(content: Any) -> List[Dict[str, Any]]:
    """
    Extract tool_use blocks from an assistant message content.
    Returns a list of {id, name, arguments} dicts (result filled in later).
    """
    if not isinstance(content, list):
        return []
    return [
        {"id": b.get("id", ""), "name": b.get("name", ""), "arguments": b.get("input", {})}
        for b in content
        if isinstance(b, dict) and b.get("type") == "tool_use"
    ]


def _extract_tool_results(content: Any) -> Dict[str, str]:
    """
    Extract tool_result blocks from a user message, keyed by tool_use_id.
    """
    if not isinstance(content, list):
        return {}
    results = {}
    for b in content:
        if not isinstance(b, dict) or b.get("type") != "tool_result":
            continue
        tool_id = b.get("tool_use_id", "")
        result_content = b.get("content", "")
        if isinstance(result_content, list):
            result_content = "\n".join(
                rb.get("text", "") for rb in result_content
                if isinstance(rb, dict) and rb.get("type") == "text"
            )
        results[tool_id] = str(result_content)
    return results


def _group_into_display_turns(
    rows: List[tuple],
) -> List[Dict[str, Any]]:
    """
    Convert raw (role, content_json, created_at) DB rows into display turns.

    One display turn = one visible user message  +  one merged assistant reply.
    All intermediate assistant messages (those carrying tool_use) and the final
    assistant text reply produced for the same user query are collapsed into a
    single assistant turn, exactly matching the live SSE rendering where tools
    and the final answer appear inside the same bubble.

    Grouping rules:
    - A visible user message starts a new group.
    - tool_result user messages are internal; their content is attached to the
      matching tool_use entry via tool_use_id and they never become own turns.
    - All assistant messages within a group are merged:
        * tool_use blocks → tool_calls list (result filled from tool_results)
        * text blocks → last non-empty text becomes the display content
    """
    # ------------------------------------------------------------------ #
    # Pass 1: split rows into groups, each starting with a visible user msg
    # ------------------------------------------------------------------ #
    # group = (user_row | None, [subsequent_rows])
    # user_row: (content, created_at)
    groups: List[tuple] = []
    cur_user: Optional[tuple] = None
    cur_rest: List[tuple] = []
    started = False

    for role, raw_content, created_at in rows:
        try:
            content = json.loads(raw_content)
        except Exception:
            content = raw_content

        if role == "user" and _is_visible_user_message(content):
            if started:
                groups.append((cur_user, cur_rest))
            cur_user = (content, created_at)
            cur_rest = []
            started = True
        else:
            cur_rest.append((role, content, created_at))

    if started:
        groups.append((cur_user, cur_rest))

    # ------------------------------------------------------------------ #
    # Pass 2: build display turns from each group
    # ------------------------------------------------------------------ #
    turns: List[Dict[str, Any]] = []

    for user_row, rest in groups:
        # User turn
        if user_row:
            content, created_at = user_row
            text = _extract_display_text(content)
            if text:
                turns.append({"role": "user", "content": text, "created_at": created_at})

        # Collect all tool_calls and tool_results from the rest of the group
        all_tool_calls: List[Dict[str, Any]] = []
        tool_results: Dict[str, str] = {}
        final_text = ""
        final_ts: Optional[int] = None

        for role, content, created_at in rest:
            if role == "user":
                tool_results.update(_extract_tool_results(content))
            elif role == "assistant":
                tcs = _extract_tool_calls(content)
                all_tool_calls.extend(tcs)
                t = _extract_display_text(content)
                if t:
                    final_text = t
                final_ts = created_at

        # Attach tool results to their matching tool_call entries
        for tc in all_tool_calls:
            tc["result"] = tool_results.get(tc.get("id", ""), "")

        if final_text or all_tool_calls:
            turns.append({
                "role": "assistant",
                "content": final_text,
                "tool_calls": all_tool_calls,
                "created_at": final_ts or (user_row[1] if user_row else 0),
            })

    return turns
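
# Worked example (illustrative): four stored rows
#   user:      [{"type": "text", "text": "What's the weather?"}]
#   assistant: [{"type": "tool_use", "id": "t1", "name": "web_search",
#                "input": {"query": "weather"}}]
#   user:      [{"type": "tool_result", "tool_use_id": "t1", "content": "Sunny, 20C"}]
#   assistant: [{"type": "text", "text": "It's sunny and 20C."}]
# collapse into exactly two display turns:
#   {"role": "user", "content": "What's the weather?", "created_at": ...}
#   {"role": "assistant", "content": "It's sunny and 20C.",
#    "tool_calls": [{"id": "t1", "name": "web_search",
#                    "arguments": {"query": "weather"}, "result": "Sunny, 20C"}],
#    "created_at": ...}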


class ConversationStore:
    """
    SQLite-backed store for per-session conversation history.

    Usage:
        store = ConversationStore(db_path)
        store.append_messages("user_123", new_messages, channel_type="feishu")
        msgs = store.load_messages("user_123", max_turns=30)
    """

    def __init__(self, db_path: Path):
        self._db_path = db_path
        self._lock = threading.Lock()
        self._init_db()

    # ------------------------------------------------------------------
    # Public API
    # ------------------------------------------------------------------

    def load_messages(
        self,
        session_id: str,
        max_turns: int = 30,
    ) -> List[Dict[str, Any]]:
        """
        Load the most recent messages for a session, for injection into the LLM.

        ALL message types (user text, assistant tool_use, tool_result) are returned
        in their original JSON form so the LLM can reconstruct the full context.

        max_turns is a *visible-turn* count: we count only user messages whose
        content is actual user text (not tool_result blocks).  This prevents
        tool-heavy sessions from exhausting the turn budget prematurely.

        Args:
            session_id: Unique session identifier.
            max_turns: Maximum number of visible user-assistant turns to keep.

        Returns:
            Chronologically ordered list of message dicts (role, content).
        """
        with self._lock:
            conn = self._connect()
            try:
                rows = conn.execute(
                    """
                    SELECT seq, role, content
                    FROM messages
                    WHERE session_id = ?
                    ORDER BY seq DESC
                    """,
                    (session_id,),
                ).fetchall()
            finally:
                conn.close()

        if not rows:
            return []

        # Walk newest-to-oldest counting *visible* user turns (actual user text,
        # not tool_result injections).  Record the seq of every visible user
        # message so we can find a clean cut point later.
        visible_turn_seqs: List[int] = []  # newest first
        for seq, role, raw_content in rows:
            if role != "user":
                continue
            try:
                content = json.loads(raw_content)
            except Exception:
                content = raw_content
            if _is_visible_user_message(content):
                visible_turn_seqs.append(seq)

        # Determine the seq of the oldest visible user message we want to keep.
        # If the total turns fit within max_turns, keep everything.
        if len(visible_turn_seqs) <= max_turns:
            cutoff_seq = None  # keep all
        else:
            # The Nth visible user message (0-indexed) is the oldest we keep.
            cutoff_seq = visible_turn_seqs[max_turns - 1]

        # Build result in chronological order, starting from cutoff.
        # IMPORTANT: we start exactly at cutoff_seq (the visible user message),
        # never mid-group, so tool_use / tool_result pairs are always complete.
        result = []
        for seq, role, raw_content in reversed(rows):
            if cutoff_seq is not None and seq < cutoff_seq:
                continue
            try:
                content = json.loads(raw_content)
            except Exception:
                content = raw_content
            result.append({"role": role, "content": content})
        return result
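
    # Worked example for the cutoff above (illustrative): with 35 visible user
    # messages in a session and max_turns=30, visible_turn_seqs holds 35 seqs
    # (newest first) and cutoff_seq = visible_turn_seqs[29], the 30th-most-recent
    # visible user message.  Every row with a smaller seq (including the
    # tool_result rows that belong to older turns) is skipped, so tool_use /
    # tool_result pairs are never split across the cutoff.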

    def append_messages(
        self,
        session_id: str,
        messages: List[Dict[str, Any]],
        channel_type: str = "",
    ) -> None:
        """
        Append new messages to a session's history.

        Seq numbers continue from the session's current maximum, so
        concurrent callers on distinct sessions never collide.

        Args:
            session_id: Unique session identifier.
            messages: List of message dicts to append.
            channel_type: Source channel (e.g. "feishu", "web", "wechat").
                          Only written on session creation; ignored on update.
        """
        if not messages:
            return

        now = int(time.time())
        with self._lock:
            conn = self._connect()
            try:
                with conn:
                    # INSERT OR IGNORE creates the row on first visit;
                    # the UPDATE always refreshes last_active.
                    # Avoids ON CONFLICT...DO UPDATE (requires SQLite >= 3.24).
                    conn.execute(
                        """
                        INSERT OR IGNORE INTO sessions
                            (session_id, channel_type, created_at, last_active, msg_count)
                        VALUES (?, ?, ?, ?, 0)
                        """,
                        (session_id, channel_type, now, now),
                    )
                    conn.execute(
                        "UPDATE sessions SET last_active = ? WHERE session_id = ?",
                        (now, session_id),
                    )

                    # Determine starting seq for the new batch.
                    row = conn.execute(
                        "SELECT COALESCE(MAX(seq), -1) FROM messages WHERE session_id = ?",
                        (session_id,),
                    ).fetchone()
                    next_seq = row[0] + 1

                    for msg in messages:
                        role = msg.get("role", "")
                        content = json.dumps(
                            msg.get("content", ""), ensure_ascii=False
                        )
                        conn.execute(
                            """
                            INSERT OR IGNORE INTO messages
                                (session_id, seq, role, content, created_at)
                            VALUES (?, ?, ?, ?, ?)
                            """,
                            (session_id, next_seq, role, content, now),
                        )
                        next_seq += 1

                    conn.execute(
                        """
                        UPDATE sessions
                        SET msg_count = (
                            SELECT COUNT(*) FROM messages WHERE session_id = ?
                        )
                        WHERE session_id = ?
                        """,
                        (session_id, session_id),
                    )
            finally:
                conn.close()
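
    # Example (illustrative): appending two messages to a fresh session writes
    # seq 0 and 1; a later append of three more continues at seq 2-4, because
    # the starting seq is read back from MAX(seq) inside the same transaction.
    #
    #   store.append_messages("user_123",
    #                         [{"role": "user", "content": "hi"},
    #                          {"role": "assistant", "content": "hello"}],
    #                         channel_type="web")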

    def clear_session(self, session_id: str) -> None:
        """Delete all messages and the session record for a given session_id."""
        with self._lock:
            conn = self._connect()
            try:
                with conn:
                    conn.execute(
                        "DELETE FROM messages WHERE session_id = ?", (session_id,)
                    )
                    conn.execute(
                        "DELETE FROM sessions WHERE session_id = ?", (session_id,)
                    )
            finally:
                conn.close()

    def cleanup_old_sessions(self, max_age_days: Optional[int] = None) -> int:
        """
        Delete sessions that have not been active within max_age_days.

        Args:
            max_age_days: Override the default retention period.

        Returns:
            Number of sessions deleted.
        """
        try:
            from config import conf
            max_age = max_age_days or conf().get(
                "conversation_max_age_days", DEFAULT_MAX_AGE_DAYS
            )
        except Exception:
            max_age = max_age_days or DEFAULT_MAX_AGE_DAYS

        cutoff = int(time.time()) - max_age * 86400
        deleted = 0

        with self._lock:
            conn = self._connect()
            try:
                with conn:
                    stale = conn.execute(
                        "SELECT session_id FROM sessions WHERE last_active < ?",
                        (cutoff,),
                    ).fetchall()
                    for (sid,) in stale:
                        conn.execute(
                            "DELETE FROM messages WHERE session_id = ?", (sid,)
                        )
                        conn.execute(
                            "DELETE FROM sessions WHERE session_id = ?", (sid,)
                        )
                        deleted += 1
            finally:
                conn.close()

        if deleted:
            logger.info(f"[ConversationStore] Pruned {deleted} expired sessions")
        return deleted
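
    # Retention arithmetic (illustrative): with the default of 30 days the
    # cutoff is time.time() - 30 * 86400, so a session last active 31 days ago
    # is deleted together with all of its messages, while one touched 29 days
    # ago is kept.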

    def load_history_page(
        self,
        session_id: str,
        page: int = 1,
        page_size: int = 20,
    ) -> Dict[str, Any]:
        """
        Load a page of conversation history for UI display, grouped into turns.

        Each "turn" maps to one of:
          - A user message (role="user", content=str)
          - An assistant message (role="assistant", content=str,
            tool_calls=[{name, arguments, result}] when tools were used)

        Internal tool_result user messages are merged into the preceding
        assistant entry's tool_calls list and never appear as standalone items.

        Pages are numbered from 1 (most recent).  Messages within a page are
        returned in chronological order.

        Returns:
            {
                "messages": [
                    {
                        "role": "user" | "assistant",
                        "content": str,
                        "tool_calls": [...],   # assistant only, may be []
                        "created_at": int,
                    },
                    ...
                ],
                "total": <visible turn count>,
                "page": <current page>,
                "page_size": <page_size>,
                "has_more": bool,
            }
        """
        page = max(1, page)
        with self._lock:
            conn = self._connect()
            try:
                rows = conn.execute(
                    """
                    SELECT role, content, created_at
                    FROM messages
                    WHERE session_id = ?
                    ORDER BY seq ASC
                    """,
                    (session_id,),
                ).fetchall()
            finally:
                conn.close()

        visible = _group_into_display_turns(rows)

        total = len(visible)
        offset = (page - 1) * page_size
        page_items = list(reversed(visible))[offset: offset + page_size]
        page_items = list(reversed(page_items))

        return {
            "messages": page_items,
            "total": total,
            "page": page,
            "page_size": page_size,
            "has_more": offset + page_size < total,
        }
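
    # Pagination example (illustrative): with 45 visible turns and page_size=20,
    # page=1 returns turns 26-45 (the newest twenty, in chronological order),
    # page=2 returns turns 6-25, and page=3 returns turns 1-5 with
    # has_more=False.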

    def get_stats(self) -> Dict[str, Any]:
        """Return basic stats keyed by channel_type, for monitoring."""
        with self._lock:
            conn = self._connect()
            try:
                total_sessions = conn.execute(
                    "SELECT COUNT(*) FROM sessions"
                ).fetchone()[0]
                total_messages = conn.execute(
                    "SELECT COUNT(*) FROM messages"
                ).fetchone()[0]
                by_channel = conn.execute(
                    """
                    SELECT channel_type, COUNT(*) as cnt
                    FROM sessions
                    GROUP BY channel_type
                    ORDER BY cnt DESC
                    """
                ).fetchall()
                return {
                    "total_sessions": total_sessions,
                    "total_messages": total_messages,
                    "by_channel": {row[0] or "unknown": row[1] for row in by_channel},
                }
            finally:
                conn.close()

    # ------------------------------------------------------------------
    # Internal helpers
    # ------------------------------------------------------------------

    def _init_db(self) -> None:
        self._db_path.parent.mkdir(parents=True, exist_ok=True)
        conn = self._connect()
        try:
            conn.executescript(_DDL)
            conn.commit()
            self._migrate(conn)
        finally:
            conn.close()

    def _migrate(self, conn: sqlite3.Connection) -> None:
        """Apply incremental schema migrations on existing databases."""
        cols = {
            row[1]
            for row in conn.execute("PRAGMA table_info(sessions)").fetchall()
        }
        if "channel_type" not in cols:
            try:
                conn.execute(_MIGRATION_ADD_CHANNEL_TYPE)
                conn.commit()
                logger.info("[ConversationStore] Migrated: added channel_type column")
            except Exception as e:
                logger.warning(f"[ConversationStore] Migration failed: {e}")

    def _connect(self) -> sqlite3.Connection:
        conn = sqlite3.connect(str(self._db_path), timeout=10)
        conn.execute("PRAGMA journal_mode=WAL")
        conn.execute("PRAGMA synchronous=NORMAL")
        return conn


# ---------------------------------------------------------------------------
# Singleton
# ---------------------------------------------------------------------------

_store_instance: Optional[ConversationStore] = None
_store_lock = threading.Lock()


def get_conversation_store() -> ConversationStore:
    """
    Return the process-wide ConversationStore singleton.

    Reuses the long-term memory database so the project stays with a single
    SQLite file: ~/cow/memory/long-term/index.db
    The conversation tables (sessions / messages) are separate from the
    memory tables (memory_chunks / file_metadata) — no conflicts.
    """
    global _store_instance
    if _store_instance is not None:
        return _store_instance

    with _store_lock:
        if _store_instance is not None:
            return _store_instance

        try:
            from agent.memory.config import get_default_memory_config
            db_path = get_default_memory_config().get_db_path()
        except Exception:
            from common.utils import expand_path
            db_path = Path(expand_path("~/cow")) / "memory" / "long-term" / "index.db"

        _store_instance = ConversationStore(db_path)
        logger.debug(f"[ConversationStore] Using shared DB at: {db_path}")
        return _store_instance


================================================
FILE: agent/memory/embedding.py
================================================
"""
Embedding providers for memory

Supports OpenAI and local embedding models
"""

import hashlib
from abc import ABC, abstractmethod
from typing import List, Optional


class EmbeddingProvider(ABC):
    """Base class for embedding providers"""

    @abstractmethod
    def embed(self, text: str) -> List[float]:
        """Generate embedding for text"""
        pass

    @abstractmethod
    def embed_batch(self, texts: List[str]) -> List[List[float]]:
        """Generate embeddings for multiple texts"""
        pass
    
    @property
    @abstractmethod
    def dimensions(self) -> int:
        """Get embedding dimensions"""
        pass


class OpenAIEmbeddingProvider(EmbeddingProvider):
    """OpenAI embedding provider using REST API"""
    
    def __init__(self, model: str = "text-embedding-3-small", api_key: Optional[str] = None,
                 api_base: Optional[str] = None, extra_headers: Optional[dict] = None):
        """
        Initialize OpenAI embedding provider

        Args:
            model: Model name (text-embedding-3-small or text-embedding-3-large)
            api_key: OpenAI API key
            api_base: Optional API base URL
            extra_headers: Optional extra headers to include in API requests
        """
        self.model = model
        self.api_key = api_key
        self.api_base = api_base or "https://api.openai.com/v1"
        self.extra_headers = extra_headers or {}

        # Validate API key
        if not self.api_key or self.api_key in ["", "YOUR API KEY", "YOUR_API_KEY"]:
            raise ValueError("OpenAI API key is not configured. Please set 'open_ai_api_key' in config.json")

        # Set dimensions based on model
        self._dimensions = 1536 if "small" in model else 3072

    def _call_api(self, input_data):
        """Call OpenAI embedding API using requests"""
        import requests

        url = f"{self.api_base}/embeddings"
        headers = {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {self.api_key}",
            **self.extra_headers,
        }
        data = {
            "input": input_data,
            "model": self.model
        }

        try:
            response = requests.post(url, headers=headers, json=data, timeout=5)
            response.raise_for_status()
            return response.json()
        except requests.exceptions.ConnectionError as e:
            raise ConnectionError(f"Failed to connect to OpenAI API at {url}. Please check your network connection and api_base configuration. Error: {str(e)}")
        except requests.exceptions.Timeout as e:
            raise TimeoutError(f"OpenAI API request timed out after 10s. Please check your network connection. Error: {str(e)}")
        except requests.exceptions.HTTPError as e:
            if e.response.status_code == 401:
                raise ValueError(f"Invalid OpenAI API key. Please check your 'open_ai_api_key' in config.json")
            elif e.response.status_code == 429:
                raise ValueError(f"OpenAI API rate limit exceeded. Please try again later.")
            else:
                raise ValueError(f"OpenAI API request failed: {e.response.status_code} - {e.response.text}")

    def embed(self, text: str) -> List[float]:
        """Generate embedding for text"""
        result = self._call_api(text)
        return result["data"][0]["embedding"]

    def embed_batch(self, texts: List[str]) -> List[List[float]]:
        """Generate embeddings for multiple texts"""
        if not texts:
            return []

        result = self._call_api(texts)
        return [item["embedding"] for item in result["data"]]

    @property
    def dimensions(self) -> int:
        return self._dimensions


# LocalEmbeddingProvider removed - only use OpenAI embedding or keyword search


class EmbeddingCache:
    """Cache for embeddings to avoid recomputation"""

    def __init__(self):
        self.cache = {}

    def get(self, text: str, provider: str, model: str) -> Optional[List[float]]:
        """Get cached embedding"""
        key = self._compute_key(text, provider, model)
        return self.cache.get(key)
    
    def put(self, text: str, provider: str, model: str, embedding: List[float]):
        """Cache embedding"""
        key = self._compute_key(text, provider, model)
        self.cache[key] = embedding
    
    @staticmethod
    def _compute_key(text: str, provider: str, model: str) -> str:
        """Compute cache key"""
        content = f"{provider}:{model}:{text}"
        return hashlib.md5(content.encode('utf-8')).hexdigest()
    
    def clear(self):
        """Clear cache"""
        self.cache.clear()


def create_embedding_provider(
    provider: str = "openai",
    model: Optional[str] = None,
    api_key: Optional[str] = None,
    api_base: Optional[str] = None,
    extra_headers: Optional[dict] = None
) -> EmbeddingProvider:
    """
    Factory function to create embedding provider

    Supports "openai" and "linkai" providers (both use OpenAI-compatible REST API).
    If initialization fails, caller should fall back to keyword-only search.

    Args:
        provider: Provider name ("openai" or "linkai")
        model: Model name (default: text-embedding-3-small)
        api_key: API key (required)
        api_base: API base URL
        extra_headers: Optional extra headers to include in API requests

    Returns:
        EmbeddingProvider instance

    Raises:
        ValueError: If provider is unsupported or api_key is missing
    """
    if provider not in ("openai", "linkai"):
        raise ValueError(f"Unsupported embedding provider: {provider}. Use 'openai' or 'linkai'.")

    model = model or "text-embedding-3-small"
    return OpenAIEmbeddingProvider(model=model, api_key=api_key, api_base=api_base, extra_headers=extra_headers)
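
# Usage sketch (illustrative; the api_key value is a placeholder):
#
#   provider = create_embedding_provider(
#       provider="openai",
#       model="text-embedding-3-small",
#       api_key="sk-xxxx",
#   )
#   vec = provider.embed("hello world")       # list of 1536 floats for -3-small
#   vecs = provider.embed_batch(["a", "b"])   # one embedding per input text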


================================================
FILE: agent/memory/manager.py
================================================
"""
Memory manager for AgentMesh

Provides high-level interface for memory operations
"""

import os
from typing import List, Optional, Dict, Any
from pathlib import Path
import hashlib
from datetime import datetime, timedelta

from agent.memory.config import MemoryConfig, get_default_memory_config
from agent.memory.storage import MemoryStorage, MemoryChunk, SearchResult
from agent.memory.chunker import TextChunker
from agent.memory.embedding import create_embedding_provider, EmbeddingProvider
from agent.memory.summarizer import MemoryFlushManager, create_memory_files_if_needed


class MemoryManager:
    """
    Memory manager with hybrid search capabilities
    
    Provides long-term memory for agents with vector and keyword search
    """
    
    def __init__(
        self,
        config: Optional[MemoryConfig] = None,
        embedding_provider: Optional[EmbeddingProvider] = None,
        llm_model: Optional[Any] = None
    ):
        """
        Initialize memory manager
        
        Args:
            config: Memory configuration (uses global config if not provided)
            embedding_provider: Custom embedding provider (optional)
            llm_model: LLM model for summarization (optional)
        """
        self.config = config or get_default_memory_config()
        
        # Initialize storage
        db_path = self.config.get_db_path()
        self.storage = MemoryStorage(db_path)
        
        # Initialize chunker
        self.chunker = TextChunker(
            max_tokens=self.config.chunk_max_tokens,
            overlap_tokens=self.config.chunk_overlap_tokens
        )
        
        # Initialize embedding provider (optional, prefer OpenAI, fallback to LinkAI)
        self.embedding_provider = None
        if embedding_provider:
            self.embedding_provider = embedding_provider
        else:
            # Try OpenAI first
            try:
                api_key = os.environ.get('OPENAI_API_KEY')
                api_base = os.environ.get('OPENAI_API_BASE')
                if api_key:
                    self.embedding_provider = create_embedding_provider(
                        provider="openai",
                        model=self.config.embedding_model,
                        api_key=api_key,
                        api_base=api_base
                    )
            except Exception as e:
                from common.log import logger
                logger.warning(f"[MemoryManager] OpenAI embedding failed: {e}")

            # Fallback to LinkAI
            if self.embedding_provider is None:
                try:
                    linkai_key = os.environ.get('LINKAI_API_KEY')
                    linkai_base = os.environ.get('LINKAI_API_BASE', 'https://api.link-ai.tech')
                    if linkai_key:
                        from common.utils import get_cloud_headers
                        cloud_headers = get_cloud_headers(linkai_key)
                        cloud_headers.pop("Authorization", None)
                        self.embedding_provider = create_embedding_provider(
                            provider="linkai",
                            model=self.config.embedding_model,
                            api_key=linkai_key,
                            api_base=f"{linkai_base}/v1",
                            extra_headers=cloud_headers,
                        )
                except Exception as e:
                    from common.log import logger
                    logger.warning(f"[MemoryManager] LinkAI embedding failed: {e}")

            if self.embedding_provider is None:
                from common.log import logger
                logger.info("[MemoryManager] Memory will work with keyword search only (no vector search)")
        
        # Initialize memory flush manager
        workspace_dir = self.config.get_workspace()
        self.flush_manager = MemoryFlushManager(
            workspace_dir=workspace_dir,
            llm_model=llm_model
        )
        
        # Ensure workspace directories exist
        self._init_workspace()
        
        self._dirty = False
    
    def _init_workspace(self):
        """Initialize workspace directories"""
        memory_dir = self.config.get_memory_dir()
        memory_dir.mkdir(parents=True, exist_ok=True)
        
        # Create default memory files
        workspace_dir = self.config.get_workspace()
        create_memory_files_if_needed(workspace_dir)
    
    async def search(
        self,
        query: str,
        user_id: Optional[str] = None,
        max_results: Optional[int] = None,
        min_score: Optional[float] = None,
        include_shared: bool = True
    ) -> List[SearchResult]:
        """
        Search memory with hybrid search (vector + keyword)
        
        Args:
            query: Search query
            user_id: User ID for scoped search
            max_results: Maximum results to return
            min_score: Minimum score threshold
            include_shared: Include shared memories
            
        Returns:
            List of search results sorted by relevance
        """
        max_results = max_results or self.config.max_results
        min_score = min_score or self.config.min_score
        
        # Determine scopes
        scopes = []
        if include_shared:
            scopes.append("shared")
        if user_id:
            scopes.append("user")
        
        if not scopes:
            return []
        
        # Sync if needed
        if self.config.sync_on_search and self._dirty:
            await self.sync()
        
        # Perform vector search (if embedding provider available)
        vector_results = []
        if self.embedding_provider:
            try:
                from common.log import logger
                query_embedding = self.embedding_provider.embed(query)
                vector_results = self.storage.search_vector(
                    query_embedding=query_embedding,
                    user_id=user_id,
                    scopes=scopes,
                    limit=max_results * 2  # Get more candidates for merging
                )
                logger.info(f"[MemoryManager] Vector search found {len(vector_results)} results for query: {query}")
            except Exception as e:
                from common.log import logger
                logger.warning(f"[MemoryManager] Vector search failed: {e}")
        
        # Perform keyword search
        keyword_results = self.storage.search_keyword(
            query=query,
            user_id=user_id,
            scopes=scopes,
            limit=max_results * 2
        )
        from common.log import logger
        logger.info(f"[MemoryManager] Keyword search found {len(keyword_results)} results for query: {query}")
        
        # Merge results
        merged = self._merge_results(
            vector_results,
            keyword_results,
            self.config.vector_weight,
            self.config.keyword_weight
        )
        
        # Filter by min score and limit
        filtered = [r for r in merged if r.score >= min_score]
        return filtered[:max_results]
    
    async def add_memory(
        self,
        content: str,
        user_id: Optional[str] = None,
        scope: str = "shared",
        source: str = "memory",
        path: Optional[str] = None,
        metadata: Optional[Dict[str, Any]] = None
    ):
        """
        Add new memory content
        
        Args:
            content: Memory content
            user_id: User ID for user-scoped memory
            scope: Memory scope ("shared", "user", "session")
            source: Memory source ("memory" or "session")
            path: File path (auto-generated if not provided)
            metadata: Additional metadata
        """
        if not content.strip():
            return
        
        # Generate path if not provided
        if not path:
            content_hash = hashlib.md5(content.encode('utf-8')).hexdigest()[:8]
            if user_id and scope == "user":
                path = f"memory/users/{user_id}/memory_{content_hash}.md"
            else:
                path = f"memory/shared/memory_{content_hash}.md"
        
        # Chunk content
        chunks = self.chunker.chunk_text(content)
        
        # Generate embeddings (if provider available)
        texts = [chunk.text for chunk in chunks]
        if self.embedding_provider:
            embeddings = self.embedding_provider.embed_batch(texts)
        else:
            # No embeddings, just use None
            embeddings = [None] * len(texts)
        
        # Create memory chunks
        memory_chunks = []
        for chunk, embedding in zip(chunks, embeddings):
            chunk_id = self._generate_chunk_id(path, chunk.start_line, chunk.end_line)
            chunk_hash = MemoryStorage.compute_hash(chunk.text)
            
            memory_chunks.append(MemoryChunk(
                id=chunk_id,
                user_id=user_id,
                scope=scope,
                source=source,
                path=path,
                start_line=chunk.start_line,
                end_line=chunk.end_line,
                text=chunk.text,
                embedding=embedding,
                hash=chunk_hash,
                metadata=metadata
            ))
        
        # Save to storage
        self.storage.save_chunks_batch(memory_chunks)
        
        # Update file metadata
        file_hash = MemoryStorage.compute_hash(content)
        self.storage.update_file_metadata(
            path=path,
            source=source,
            file_hash=file_hash,
            mtime=int(datetime.now().timestamp()),  # use the current time as mtime
            size=len(content)
        )
    
    async def sync(self, force: bool = False):
        """
        Synchronize memory from files
        
        Args:
            force: Force full reindex
        """
        memory_dir = self.config.get_memory_dir()
        workspace_dir = self.config.get_workspace()
        
        # Scan MEMORY.md (workspace root)
        memory_file = Path(workspace_dir) / "MEMORY.md"
        if memory_file.exists():
            await self._sync_file(memory_file, "memory", "shared", None)
        
        # Scan memory directory (including daily summaries)
        if memory_dir.exists():
            for file_path in memory_dir.rglob("*.md"):
                # Determine scope and user_id from path
                rel_path = file_path.relative_to(workspace_dir)
                parts = rel_path.parts
                
                # Check if it's in daily summary directory
                if "daily" in parts:
                    # Daily summary files
                    if "users" in parts or len(parts) > 3:
                        # User-scoped daily summary: memory/daily/{user_id}/2024-01-29.md
                        user_idx = parts.index("daily") + 1
                        user_id = parts[user_idx] if user_idx < len(parts) else None
                        scope = "user"
                    else:
                        # Shared daily summary: memory/daily/2024-01-29.md
                        user_id = None
                        scope = "shared"
                elif "users" in parts:
                    # User-scoped memory
                    user_idx = parts.index("users") + 1
                    user_id = parts[user_idx] if user_idx < len(parts) else None
                    scope = "user"
                else:
                    # Shared memory
                    user_id = None
                    scope = "shared"
                
                await self._sync_file(file_path, "memory", scope, user_id)
        
        self._dirty = False
    
    async def _sync_file(
        self,
        file_path: Path,
        source: str,
        scope: str,
        user_id: Optional[str]
    ):
        """Sync a single file"""
        # Compute file hash
        content = file_path.read_text(encoding='utf-8')
        file_hash = MemoryStorage.compute_hash(content)
        
        # Get relative path
        workspace_dir = self.config.get_workspace()
        rel_path = str(file_path.relative_to(workspace_dir))
        
        # Check if file changed
        stored_hash = self.storage.get_file_hash(rel_path)
        if stored_hash == file_hash:
            return  # No changes
        
        # Delete old chunks
        self.storage.delete_by_path(rel_path)
        
        # Chunk and embed
        chunks = self.chunker.chunk_text(content)
        if not chunks:
            return
        
        texts = [chunk.text for chunk in chunks]
        if self.embedding_provider:
            embeddings = self.embedding_provider.embed_batch(texts)
        else:
            embeddings = [None] * len(texts)
        
        # Create memory chunks
        memory_chunks = []
        for chunk, embedding in zip(chunks, embeddings):
            chunk_id = self._generate_chunk_id(rel_path, chunk.start_line, chunk.end_line)
            chunk_hash = MemoryStorage.compute_hash(chunk.text)
            
            memory_chunks.append(MemoryChunk(
                id=chunk_id,
                user_id=user_id,
                scope=scope,
                source=source,
                path=rel_path,
                start_line=chunk.start_line,
                end_line=chunk.end_line,
                text=chunk.text,
                embedding=embedding,
                hash=chunk_hash,
                metadata=None
            ))
        
        # Save
        self.storage.save_chunks_batch(memory_chunks)
        
        # Update file metadata
        stat = file_path.stat()
        self.storage.update_file_metadata(
            path=rel_path,
            source=source,
            file_hash=file_hash,
            mtime=int(stat.st_mtime),
            size=stat.st_size
        )
    
    def flush_memory(
        self,
        messages: list,
        user_id: Optional[str] = None,
        reason: str = "threshold",
        max_messages: int = 10,
    ) -> bool:
        """
        Flush conversation summary to daily memory file.
        
        Args:
            messages: Conversation message list
            user_id: Optional user ID
            reason: "threshold" | "overflow" | "daily_summary"
            max_messages: Max recent messages to include (0 = all)
        
        Returns:
            True if content was written
        """
        success = self.flush_manager.flush_from_messages(
            messages=messages,
            user_id=user_id,
            reason=reason,
            max_messages=max_messages,
        )
        if success:
            self._dirty = True
        return success
    
    def get_status(self) -> Dict[str, Any]:
        """Get memory status"""
        stats = self.storage.get_stats()
        return {
            'chunks': stats['chunks'],
            'files': stats['files'],
            'workspace': str(self.config.get_workspace()),
            'dirty': self._dirty,
            'embedding_enabled': self.embedding_provider is not None,
            'embedding_provider': self.config.embedding_provider if self.embedding_provider else 'disabled',
            'embedding_model': self.config.embedding_model if self.embedding_provider else 'N/A',
            'search_mode': 'hybrid (vector + keyword)' if self.embedding_provider else 'keyword only (FTS5)'
        }
    
    def mark_dirty(self):
        """Mark memory as dirty (needs sync)"""
        self._dirty = True
    
    def close(self):
        """Close memory manager and release resources"""
        self.storage.close()
    
    # Helper methods
    
    def _generate_chunk_id(self, path: str, start_line: int, end_line: int) -> str:
        """Generate unique chunk ID"""
        content = f"{path}:{start_line}:{end_line}"
        return hashlib.md5(content.encode('utf-8')).hexdigest()
    
    @staticmethod
    def _compute_temporal_decay(path: str, half_life_days: float = 30.0) -> float:
        """
        Compute temporal decay multiplier for dated memory files.
        
        Inspired by OpenClaw's temporal-decay: exponential decay based on file date.
        MEMORY.md and non-dated files are "evergreen" (no decay, multiplier=1.0).
        Daily files like memory/2025-03-01.md decay based on age.
        
        Formula: multiplier = exp(-ln2/half_life * age_in_days)
        """
        import re
        import math
        
        match = re.search(r'(\d{4})-(\d{2})-(\d{2})\.md$', path)
        if not match:
            return 1.0  # evergreen: MEMORY.md, non-dated files
        
        try:
            file_date = datetime(
                int(match.group(1)), int(match.group(2)), int(match.group(3))
            )
            age_days = (datetime.now() - file_date).days
            if age_days <= 0:
                return 1.0
            
            decay_lambda = math.log(2) / half_life_days
            return math.exp(-decay_lambda * age_days)
        except (ValueError, OverflowError):
            return 1.0
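
    # Decay examples with the default 30-day half-life (illustrative):
    #   MEMORY.md or any non-dated file   -> 1.0   (evergreen)
    #   a daily file dated 30 days ago    -> ~0.5
    #   a daily file dated 60 days ago    -> ~0.25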
    
    def _merge_results(
        self,
        vector_results: List[SearchResult],
        keyword_results: List[SearchResult],
        vector_weight: float,
        keyword_weight: float
    ) -> List[SearchResult]:
        """Merge vector and keyword search results with temporal decay for dated files"""
        merged_map = {}
        
        for result in vector_results:
            key = (result.path, result.start_line, result.end_line)
            merged_map[key] = {
                'result': result,
                'vector_score': result.score,
                'keyword_score': 0.0
            }
        
        for result in keyword_results:
            key = (result.path, result.start_line, result.end_line)
            if key in merged_map:
                merged_map[key]['keyword_score'] = result.score
            else:
                merged_map[key] = {
                    'result': result,
                    'vector_score': 0.0,
                    'keyword_score': result.score
                }
        
        merged_results = []
        for entry in merged_map.values():
            combined_score = (
                vector_weight * entry['vector_score'] +
                keyword_weight * entry['keyword_score']
            )
            
            # Apply temporal decay for dated memory files
            result = entry['result']
            decay = self._compute_temporal_decay(result.path)
            combined_score *= decay
            
            merged_results.append(SearchResult(
                path=result.path,
                start_line=result.start_line,
                end_line=result.end_line,
                score=combined_score,
                snippet=result.snippet,
                source=result.source,
                user_id=result.user_id
            ))
        
        merged_results.sort(key=lambda r: r.score, reverse=True)
        return merged_results
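

# Hedged usage sketch (illustrative, not part of the module): exercises only the
# static temporal-decay helper, assuming the enclosing class is named MemoryManager,
# so no config/storage/embedding setup is required.
if __name__ == "__main__":
    for _path in ("MEMORY.md", "memory/2025-03-01.md"):
        print(_path, "->", MemoryManager._compute_temporal_decay(_path, half_life_days=30.0))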


================================================
FILE: agent/memory/service.py
================================================
"""
Memory service for handling memory query operations via cloud protocol.

Provides a unified interface for listing and reading memory files,
callable from the cloud client (LinkAI) or a future web console.

Memory file layout (under workspace_root):
    MEMORY.md               -> type: global
    memory/2026-02-20.md    -> type: daily
"""

import os
from datetime import datetime
from typing import Dict, List, Optional
from pathlib import Path
from common.log import logger


class MemoryService:
    """
    High-level service for memory file queries.
    Operates directly on the filesystem — no MemoryManager dependency.
    """

    def __init__(self, workspace_root: str):
        """
        :param workspace_root: Workspace root directory (e.g. ~/cow)
        """
        self.workspace_root = workspace_root
        self.memory_dir = os.path.join(workspace_root, "memory")

    # ------------------------------------------------------------------
    # list — paginated file metadata
    # ------------------------------------------------------------------
    def list_files(self, page: int = 1, page_size: int = 20) -> dict:
        """
        List all memory files with metadata (without content).

        Returns::

            {
                "page": 1,
                "page_size": 20,
                "total": 15,
                "list": [
                    {"filename": "MEMORY.md", "type": "global", "size": 2048, "updated_at": "2026-02-20 10:00:00"},
                    {"filename": "2026-02-20.md", "type": "daily", "size": 512, "updated_at": "2026-02-20 09:30:00"},
                    ...
                ]
            }
        """
        files: List[dict] = []

        # 1. Global memory — MEMORY.md in workspace root
        global_path = os.path.join(self.workspace_root, "MEMORY.md")
        if os.path.isfile(global_path):
            files.append(self._file_info(global_path, "MEMORY.md", "global"))

        # 2. Daily memory files — memory/*.md (sorted newest first)
        if os.path.isdir(self.memory_dir):
            daily_files = []
            for name in os.listdir(self.memory_dir):
                full = os.path.join(self.memory_dir, name)
                if os.path.isfile(full) and name.endswith(".md"):
                    daily_files.append((name, full))
            # Sort by filename descending (newest date first)
            daily_files.sort(key=lambda x: x[0], reverse=True)
            for name, full in daily_files:
                files.append(self._file_info(full, name, "daily"))

        total = len(files)

        # Paginate
        start = (page - 1) * page_size
        end = start + page_size
        page_items = files[start:end]

        return {
            "page": page,
            "page_size": page_size,
            "total": total,
            "list": page_items,
        }

    # ------------------------------------------------------------------
    # content — read a single file
    # ------------------------------------------------------------------
    def get_content(self, filename: str) -> dict:
        """
        Read the full content of a memory file.

        :param filename: File name, e.g. ``MEMORY.md`` or ``2026-02-20.md``
        :return: dict with ``filename`` and ``content``
        :raises FileNotFoundError: if the file does not exist
        """
        path = self._resolve_path(filename)
        if not os.path.isfile(path):
            raise FileNotFoundError(f"Memory file not found: {filename}")

        with open(path, "r", encoding="utf-8") as f:
            content = f.read()

        return {
            "filename": filename,
            "content": content,
        }

    # ------------------------------------------------------------------
    # dispatch — single entry point for protocol messages
    # ------------------------------------------------------------------
    def dispatch(self, action: str, payload: Optional[dict] = None) -> dict:
        """
        Dispatch a memory management action.

        :param action: ``list`` or ``content``
        :param payload: action-specific payload
        :return: protocol-compatible response dict
        """
        payload = payload or {}
        try:
            if action == "list":
                page = payload.get("page", 1)
                page_size = payload.get("page_size", 20)
                result_payload = self.list_files(page=page, page_size=page_size)
                return {"action": action, "code": 200, "message": "success", "payload": result_payload}

            elif action == "content":
                filename = payload.get("filename")
                if not filename:
                    return {"action": action, "code": 400, "message": "filename is required", "payload": None}
                result_payload = self.get_content(filename)
                return {"action": action, "code": 200, "message": "success", "payload": result_payload}

            else:
                return {"action": action, "code": 400, "message": f"unknown action: {action}", "payload": None}

        except FileNotFoundError as e:
            return {"action": action, "code": 404, "message": str(e), "payload": None}
        except Exception as e:
            logger.error(f"[MemoryService] dispatch error: action={action}, error={e}")
            return {"action": action, "code": 500, "message": str(e), "payload": None}

    # ------------------------------------------------------------------
    # internal helpers
    # ------------------------------------------------------------------
    def _resolve_path(self, filename: str) -> str:
        """
        Resolve a filename to its absolute path.

        - ``MEMORY.md`` → ``{workspace_root}/MEMORY.md``
        - ``2026-02-20.md`` → ``{workspace_root}/memory/2026-02-20.md``
        """
        if filename == "MEMORY.md":
            return os.path.join(self.workspace_root, filename)
        return os.path.join(self.memory_dir, filename)

    @staticmethod
    def _file_info(path: str, filename: str, file_type: str) -> dict:
        """Build a file metadata dict."""
        stat = os.stat(path)
        updated_at = datetime.fromtimestamp(stat.st_mtime).strftime("%Y-%m-%d %H:%M:%S")
        return {
            "filename": filename,
            "type": file_type,
            "size": stat.st_size,
            "updated_at": updated_at,
        }
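

# Hedged usage sketch (illustrative, not part of the service): drives the dispatch()
# entry point against a throwaway workspace; the temporary directory stands in for
# the real workspace root (e.g. ~/cow).
if __name__ == "__main__":
    import tempfile

    with tempfile.TemporaryDirectory() as ws:
        # Seed a minimal MEMORY.md so "list" and "content" have something to return.
        with open(os.path.join(ws, "MEMORY.md"), "w", encoding="utf-8") as f:
            f.write("# Long-term memory\n")
        svc = MemoryService(ws)
        print(svc.dispatch("list", {"page": 1, "page_size": 20}))
        print(svc.dispatch("content", {"filename": "MEMORY.md"}))
        print(svc.dispatch("delete"))  # unknown action -> code 400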


================================================
FILE: agent/memory/storage.py
================================================
"""
Storage layer for memory using SQLite + FTS5

Provides vector and keyword search capabilities
"""

from __future__ import annotations
import sqlite3
import json
import hashlib
from typing import List, Dict, Optional, Any
from pathlib import Path
from dataclasses import dataclass


@dataclass
class MemoryChunk:
    """Represents a memory chunk with text and embedding"""
    id: str
    user_id: Optional[str]
    scope: str  # "shared" | "user" | "session"
    source: str  # "memory" | "session"
    path: str
    start_line: int
    end_line: int
    text: str
    embedding: Optional[List[float]]
    hash: str
    metadata: Optional[Dict[str, Any]] = None


@dataclass
class SearchResult:
    """Search result with score and snippet"""
    path: str
    start_line: int
    end_line: int
    score: float
    snippet: str
    source: str
    user_id: Optional[str] = None


class MemoryStorage:
    """SQLite-based storage with FTS5 for keyword search"""
    
    def __init__(self, db_path: Path):
        self.db_path = db_path
        self.conn: Optional[sqlite3.Connection] = None
        self.fts5_available = False  # Track FTS5 availability
        self._init_db()
    
    def _check_fts5_support(self) -> bool:
        """Check if SQLite has FTS5 support"""
        try:
            self.conn.execute("CREATE VIRTUAL TABLE IF NOT EXISTS fts5_test USING fts5(test)")
            self.conn.execute("DROP TABLE IF EXISTS fts5_test")
            return True
        except sqlite3.OperationalError as e:
            if "no such module: fts5" in str(e):
                return False
            raise
    
    def _init_db(self):
        """Initialize database with schema"""
        try:
            self.conn = sqlite3.connect(str(self.db_path), check_same_thread=False)
            self.conn.row_factory = sqlite3.Row
            
            # Check FTS5 support
            self.fts5_available = self._check_fts5_support()
            if not self.fts5_available:
                from common.log import logger
                logger.debug("[MemoryStorage] FTS5 not available, using LIKE-based keyword search")
            
            # Check database integrity
            try:
                result = self.conn.execute("PRAGMA integrity_check").fetchone()
                if result[0] != 'ok':
                    print(f"⚠️  Database integrity check failed: {result[0]}")
                    print(f"   Recreating database...")
                    self.conn.close()
                    self.conn = None
                    # Remove corrupted database
                    self.db_path.unlink(missing_ok=True)
                    # Remove WAL files
                    Path(str(self.db_path) + '-wal').unlink(missing_ok=True)
                    Path(str(self.db_path) + '-shm').unlink(missing_ok=True)
                    # Reconnect to create new database
                    self.conn = sqlite3.connect(str(self.db_path), check_same_thread=False)
                    self.conn.row_factory = sqlite3.Row
            except sqlite3.DatabaseError:
                # Database is corrupted, recreate it
                print(f"⚠️  Database is corrupted, recreating...")
                if self.conn:
                    self.conn.close()
                    self.conn = None
                self.db_path.unlink(missing_ok=True)
                Path(str(self.db_path) + '-wal').unlink(missing_ok=True)
                Path(str(self.db_path) + '-shm').unlink(missing_ok=True)
                self.conn = sqlite3.connect(str(self.db_path), check_same_thread=False)
                self.conn.row_factory = sqlite3.Row
            
            # Enable WAL mode for better concurrency
            self.conn.execute("PRAGMA journal_mode=WAL")
            # Set busy timeout to avoid "database is locked" errors
            self.conn.execute("PRAGMA busy_timeout=5000")
        except Exception as e:
            print(f"⚠️  Unexpected error during database initialization: {e}")
            raise
        
        # Create chunks table with embeddings
        self.conn.execute("""
            CREATE TABLE IF NOT EXISTS chunks (
                id TEXT PRIMARY KEY,
                user_id TEXT,
                scope TEXT NOT NULL DEFAULT 'shared',
                source TEXT NOT NULL DEFAULT 'memory',
                path TEXT NOT NULL,
                start_line INTEGER NOT NULL,
                end_line INTEGER NOT NULL,
                text TEXT NOT NULL,
                embedding TEXT,
                hash TEXT NOT NULL,
                metadata TEXT,
                created_at INTEGER DEFAULT (strftime('%s', 'now')),
                updated_at INTEGER DEFAULT (strftime('%s', 'now'))
            )
        """)
        
        # Create indexes
        self.conn.execute("""
            CREATE INDEX IF NOT EXISTS idx_chunks_user 
            ON chunks(user_id)
        """)
        
        self.conn.execute("""
            CREATE INDEX IF NOT EXISTS idx_chunks_scope 
            ON chunks(scope)
        """)
        
        self.conn.execute("""
            CREATE INDEX IF NOT EXISTS idx_chunks_hash 
            ON chunks(path, hash)
        """)
        
        # Create FTS5 virtual table for keyword search (only if supported)
        if self.fts5_available:
            # Use default unicode61 tokenizer (stable and compatible)
            # For CJK support, we'll use LIKE queries as fallback
            self.conn.execute("""
                CREATE VIRTUAL TABLE IF NOT EXISTS chunks_fts USING fts5(
                    text,
                    id UNINDEXED,
                    user_id UNINDEXED,
                    path UNINDEXED,
                    source UNINDEXED,
                    scope UNINDEXED,
                    content='chunks',
                    content_rowid='rowid'
                )
            """)
            
            # Create triggers to keep FTS in sync
            self.conn.execute("""
                CREATE TRIGGER IF NOT EXISTS chunks_ai AFTER INSERT ON chunks BEGIN
                    INSERT INTO chunks_fts(rowid, text, id, user_id, path, source, scope)
                    VALUES (new.rowid, new.text, new.id, new.user_id, new.path, new.source, new.scope);
                END
            """)
            
            self.conn.execute("""
                CREATE TRIGGER IF NOT EXISTS chunks_ad AFTER DELETE ON chunks BEGIN
                    DELETE FROM chunks_fts WHERE rowid = old.rowid;
                END
            """)
            
            self.conn.execute("""
                CREATE TRIGGER IF NOT EXISTS chunks_au AFTER UPDATE ON chunks BEGIN
                    UPDATE chunks_fts SET text = new.text, id = new.id,
                                         user_id = new.user_id, path = new.path, source = new.source, scope = new.scope
                    WHERE rowid = new.rowid;
                END
            """)
        
        # Create files metadata table
        self.conn.execute("""
            CREATE TABLE IF NOT EXISTS files (
                path TEXT PRIMARY KEY,
                source TEXT NOT NULL DEFAULT 'memory',
                hash TEXT NOT NULL,
                mtime INTEGER NOT NULL,
                size INTEGER NOT NULL,
                updated_at INTEGER DEFAULT (strftime('%s', 'now'))
            )
        """)
        
        self.conn.commit()
    
    def save_chunk(self, chunk: MemoryChunk):
        """Save a memory chunk"""
        self.conn.execute("""
            INSERT OR REPLACE INTO chunks 
            (id, user_id, scope, source, path, start_line, end_line, text, embedding, hash, metadata, updated_at)
            VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, strftime('%s', 'now'))
        """, (
            chunk.id,
            chunk.user_id,
            chunk.scope,
            chunk.source,
            chunk.path,
            chunk.start_line,
            chunk.end_line,
            chunk.text,
            json.dumps(chunk.embedding) if chunk.embedding else None,
            chunk.hash,
            json.dumps(chunk.metadata) if chunk.metadata else None
        ))
        self.conn.commit()
    
    def save_chunks_batch(self, chunks: List[MemoryChunk]):
        """Save multiple chunks in a batch"""
        self.conn.executemany("""
            INSERT OR REPLACE INTO chunks 
            (id, user_id, scope, source, path, start_line, end_line, text, embedding, hash, metadata, updated_at)
            VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, strftime('%s', 'now'))
        """, [
            (
                c.id, c.user_id, c.scope, c.source, c.path,
                c.start_line, c.end_line, c.text,
                json.dumps(c.embedding) if c.embedding else None,
                c.hash,
                json.dumps(c.metadata) if c.metadata else None
            )
            for c in chunks
        ])
        self.conn.commit()
    
    def get_chunk(self, chunk_id: str) -> Optional[MemoryChunk]:
        """Get a chunk by ID"""
        row = self.conn.execute("""
            SELECT * FROM chunks WHERE id = ?
        """, (chunk_id,)).fetchone()
        
        if not row:
            return None
        
        return self._row_to_chunk(row)
    
    def search_vector(
        self,
        query_embedding: List[float],
        user_id: Optional[str] = None,
        scopes: Optional[List[str]] = None,
        limit: int = 10
    ) -> List[SearchResult]:
        """
        Vector similarity search using in-memory cosine similarity
        (sqlite-vec can be added later for better performance)
        """
        if scopes is None:
            scopes = ["shared"]
            if user_id:
                scopes.append("user")
        
        # Build query (copy the scopes list so the caller's list is not mutated)
        scope_placeholders = ','.join('?' * len(scopes))
        params = list(scopes)
        
        if user_id:
            query = f"""
                SELECT * FROM chunks 
                WHERE scope IN ({scope_placeholders})
                AND (scope = 'shared' OR user_id = ?)
                AND embedding IS NOT NULL
            """
            params.append(user_id)
        else:
            query = f"""
                SELECT * FROM chunks 
                WHERE scope IN ({scope_placeholders})
                AND embedding IS NOT NULL
            """
        
        rows = self.conn.execute(query, params).fetchall()
        
        # Calculate cosine similarity
        results = []
        for row in rows:
            embedding = json.loads(row['embedding'])
            similarity = self._cosine_similarity(query_embedding, embedding)
            
            if similarity > 0:
                results.append((similarity, row))
        
        # Sort by similarity and limit
        results.sort(key=lambda x: x[0], reverse=True)
        results = results[:limit]
        
        return [
            SearchResult(
                path=row['path'],
                start_line=row['start_line'],
                end_line=row['end_line'],
                score=score,
                snippet=self._truncate_text(row['text'], 500),
                source=row['source'],
                user_id=row['user_id']
            )
            for score, row in results
        ]
    
    def search_keyword(
        self,
        query: str,
        user_id: Optional[str] = None,
        scopes: Optional[List[str]] = None,
        limit: int = 10
    ) -> List[SearchResult]:
        """
        Keyword search using FTS5 + LIKE fallback
        
        Strategy:
        1. If FTS5 available: Try FTS5 search first (good for English and word-based languages)
        2. If no FTS5 or no results and query contains CJK: Use LIKE search
        """
        if scopes is None:
            scopes = ["shared"]
            if user_id:
                scopes.append("user")
        
        # Try FTS5 search first (if available)
        if self.fts5_available:
            fts_results = self._search_fts5(query, user_id, scopes, limit)
            if fts_results:
                return fts_results
        
        # Fallback to LIKE search (always for CJK, or if FTS5 not available)
        if not self.fts5_available or MemoryStorage._contains_cjk(query):
            return self._search_like(query, user_id, scopes, limit)
        
        return []
    
    def _search_fts5(
        self,
        query: str,
        user_id: Optional[str],
        scopes: List[str],
        limit: int
    ) -> List[SearchResult]:
        """FTS5 full-text search"""
        fts_query = self._build_fts_query(query)
        if not fts_query:
            return []
        
        scope_placeholders = ','.join('?' * len(scopes))
        params = [fts_query] + scopes
        
        if user_id:
            sql_query = f"""
                SELECT chunks.*, bm25(chunks_fts) as rank
                FROM chunks_fts
                JOIN chunks ON chunks.id = chunks_fts.id
                WHERE chunks_fts MATCH ? 
                AND chunks.scope IN ({scope_placeholders})
                AND (chunks.scope = 'shared' OR chunks.user_id = ?)
                ORDER BY rank
                LIMIT ?
            """
            params.extend([user_id, limit])
        else:
            sql_query = f"""
                SELECT chunks.*, bm25(chunks_fts) as rank
                FROM chunks_fts
                JOIN chunks ON chunks.id = chunks_fts.id
                WHERE chunks_fts MATCH ? 
                AND chunks.scope IN ({scope_placeholders})
                ORDER BY rank
                LIMIT ?
            """
            params.append(limit)
        
        try:
            rows = self.conn.execute(sql_query, params).fetchall()
            return [
                SearchResult(
                    path=row['path'],
                    start_line=row['start_line'],
                    end_line=row['end_line'],
                    score=self._bm25_rank_to_score(row['rank']),
                    snippet=self._truncate_text(row['text'], 500),
                    source=row['source'],
                    user_id=row['user_id']
                )
                for row in rows
            ]
        except Exception:
            return []
    
    def _search_like(
        self,
        query: str,
        user_id: Optional[str],
        scopes: List[str],
        limit: int
    ) -> List[SearchResult]:
        """LIKE-based search for CJK characters"""
        import re
        # Extract CJK words (2+ characters)
        cjk_words = re.findall(r'[\u4e00-\u9fff]{2,}', query)
        if not cjk_words:
            return []
        
        scope_placeholders = ','.join('?' * len(scopes))
        
        # Build LIKE conditions for each word
        like_conditions = []
        params = []
        for word in cjk_words:
            like_conditions.append("text LIKE ?")
            params.append(f'%{word}%')
        
        where_clause = ' OR '.join(like_conditions)
        params.extend(scopes)
        
        if user_id:
            sql_query = f"""
                SELECT * FROM chunks
                WHERE ({where_clause})
                AND scope IN ({scope_placeholders})
                AND (scope = 'shared' OR user_id = ?)
                LIMIT ?
            """
            params.extend([user_id, limit])
        else:
            sql_query = f"""
                SELECT * FROM chunks
                WHERE ({where_clause})
                AND scope IN ({scope_placeholders})
                LIMIT ?
            """
            params.append(limit)
        
        try:
            rows = self.conn.execute(sql_query, params).fetchall()
            return [
                SearchResult(
                    path=row['path'],
                    start_line=row['start_line'],
                    end_line=row['end_line'],
                    score=0.5,  # Fixed score for LIKE search
                    snippet=self._truncate_text(row['text'], 500),
                    source=row['source'],
                    user_id=row['user_id']
                )
                for row in rows
            ]
        except Exception:
            return []
    
    def delete_by_path(self, path: str):
        """Delete all chunks from a file"""
        self.conn.execute("""
            DELETE FROM chunks WHERE path = ?
        """, (path,))
        self.conn.commit()
    
    def get_file_hash(self, path: str) -> Optional[str]:
        """Get stored file hash"""
        row = self.conn.execute("""
            SELECT hash FROM files WHERE path = ?
        """, (path,)).fetchone()
        return row['hash'] if row else None
    
    def update_file_metadata(self, path: str, source: str, file_hash: str, mtime: int, size: int):
        """Update file metadata"""
        self.conn.execute("""
            INSERT OR REPLACE INTO files (path, source, hash, mtime, size, updated_at)
            VALUES (?, ?, ?, ?, ?, strftime('%s', 'now'))
        """, (path, source, file_hash, mtime, size))
        self.conn.commit()
    
    def get_stats(self) -> Dict[str, int]:
        """Get storage statistics"""
        chunks_count = self.conn.execute("""
            SELECT COUNT(*) as cnt FROM chunks
        """).fetchone()['cnt']
        
        files_count = self.conn.execute("""
            SELECT COUNT(*) as cnt FROM files
        """).fetchone()['cnt']
        
        return {
            'chunks': chunks_count,
            'files': files_count
        }
    
    def close(self):
        """Close database connection"""
        if self.conn:
            try:
                self.conn.commit()  # Ensure all changes are committed
                self.conn.close()
                self.conn = None  # Mark as closed
            except Exception as e:
                print(f"⚠️  Error closing database connection: {e}")
    
    def __del__(self):
        """Destructor to ensure connection is closed"""
        try:
            self.close()
        except Exception:
            pass  # Ignore errors during cleanup
    
    # Helper methods
    
    def _row_to_chunk(self, row) -> MemoryChunk:
        """Convert database row to MemoryChunk"""
        return MemoryChunk(
            id=row['id'],
            user_id=row['user_id'],
            scope=row['scope'],
            source=row['source'],
            path=row['path'],
            start_line=row['start_line'],
            end_line=row['end_line'],
            text=row['text'],
            embedding=json.loads(row['embedding']) if row['embedding'] else None,
            hash=row['hash'],
            metadata=json.loads(row['metadata']) if row['metadata'] else None
        )
    
    @staticmethod
    def _cosine_similarity(vec1: List[float], vec2: List[float]) -> float:
        """Calculate cosine similarity between two vectors"""
        if len(vec1) != len(vec2):
            return 0.0
        
        dot_product = sum(a * b for a, b in zip(vec1, vec2))
        norm1 = sum(a * a for a in vec1) ** 0.5
        norm2 = sum(b * b for b in vec2) ** 0.5
        
        if norm1 == 0 or norm2 == 0:
            return 0.0
        
        return dot_product / (norm1 * norm2)
    
    @staticmethod
    def _contains_cjk(text: str) -> bool:
        """Check if text contains CJK (Chinese/Japanese/Korean) characters"""
        import re
        return bool(re.search(r'[\u4e00-\u9fff]', text))
    
    @staticmethod
    def _build_fts_query(raw_query: str) -> Optional[str]:
        """
        Build FTS5 query from raw text
        
        Works best for English and word-based languages.
        For CJK characters, LIKE search will be used as fallback.
        """
        import re
        # Extract words (primarily English words and numbers)
        tokens = re.findall(r'[A-Za-z0-9_]+', raw_query)
        if not tokens:
            return None
        
        # Quote tokens for exact matching
        quoted = [f'"{t}"' for t in tokens]
        # Use OR for more flexible matching
        return ' OR '.join(quoted)
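
    # Example (illustration): _build_fts_query("deploy docker 部署")
    # keeps only the ASCII tokens and returns '"deploy" OR "docker"'; the CJK part
    # of the query is served by the LIKE fallback in _search_like instead.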
    
    @staticmethod
    def _bm25_rank_to_score(rank: float) -> float:
        """Convert FTS5 bm25() rank to a 0-1 score (bm25 is more negative for better matches)"""
        if rank is None:
            return 0.0
        relevance = max(0.0, -rank)  # invert sign so that better matches score higher
        return relevance / (1.0 + relevance)
    
    @staticmethod
    def _truncate_text(text: str, max_chars: int) -> str:
        """Truncate text to max characters"""
        if len(text) <= max_chars:
            return text
        return text[:max_chars] + "..."
    
    @staticmethod
    def compute_hash(content: str) -> str:
        """Compute SHA256 hash of content"""
        return hashlib.sha256(content.encode('utf-8')).hexdigest()
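

# Hedged usage sketch (illustrative, not part of the module): indexes a single chunk
# in a temporary database and runs a keyword search; field values follow the
# MemoryChunk dataclass above and are made up for the example.
if __name__ == "__main__":
    import tempfile

    with tempfile.TemporaryDirectory() as tmp:
        storage = MemoryStorage(Path(tmp) / "memory.db")
        text = "User prefers concise answers and a weekly summary"
        storage.save_chunk(MemoryChunk(
            id="demo-1", user_id=None, scope="shared", source="memory",
            path="MEMORY.md", start_line=1, end_line=1, text=text,
            embedding=None, hash=MemoryStorage.compute_hash(text),
        ))
        print(storage.search_keyword("weekly summary"))
        print(storage.get_stats())
        storage.close()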


================================================
FILE: agent/memory/summarizer.py
================================================
"""
Memory flush manager

Handles memory persistence when conversation context is trimmed or overflows:
- Uses LLM to summarize discarded messages into concise key-information entries
- Writes to daily memory files (lazy creation)
- Deduplicates trim flushes to avoid repeated writes
- Runs summarization asynchronously to avoid blocking normal replies
- Provides daily summary interface for scheduler
"""

import threading
from typing import Optional, Callable, Any, List, Dict
from pathlib import Path
from datetime import datetime
from common.log import logger


SUMMARIZE_SYSTEM_PROMPT = """你是一个记忆提取助手。你的任务是从对话记录中提取值得记住的信息,生成简洁的记忆摘要。

输出要求:
1. 以事件/关键信息为维度记录,每条一行,用 "- " 开头
2. 记录有价值的关键信息,例如用户提出的要求及助手的解决方案,对话中涉及的事实信息,用户的偏好、决策或重要结论
3. 每条摘要需要简明扼要,只保留关键信息
4. 直接输出摘要内容,不要加任何前缀说明
5. 当对话没有任何记录价值(例如只是简单问候)时,可回复"无""""

SUMMARIZE_USER_PROMPT = """请从以下对话记录中提取关键信息,生成记忆摘要:

{conversation}"""


class MemoryFlushManager:
    """
    Manages memory flush operations.
    
    Flush is triggered by agent_stream in two scenarios:
    1. Context trim: _trim_messages discards old turns → flush discarded content
    2. Context overflow: API rejects request → emergency flush before clearing
    
    Additionally, create_daily_summary() can be called by scheduler for end-of-day summaries.
    """
    
    def __init__(
        self,
        workspace_dir: Path,
        llm_model: Optional[Any] = None,
    ):
        self.workspace_dir = workspace_dir
        self.llm_model = llm_model
        
        self.memory_dir = workspace_dir / "memory"
        self.memory_dir.mkdir(parents=True, exist_ok=True)
        
        self.last_flush_timestamp: Optional[datetime] = None
        self._trim_flushed_hashes: set = set()  # Content hashes of already-flushed messages
        self._last_flushed_content_hash: str = ""  # Content hash at last flush, for daily dedup
    
    def get_today_memory_file(self, user_id: Optional[str] = None, ensure_exists: bool = False) -> Path:
        """Get today's memory file path: memory/YYYY-MM-DD.md"""
        today = datetime.now().strftime("%Y-%m-%d")
        
        if user_id:
            user_dir = self.memory_dir / "users" / user_id
            if ensure_exists:
                user_dir.mkdir(parents=True, exist_ok=True)
            today_file = user_dir / f"{today}.md"
        else:
            today_file = self.memory_dir / f"{today}.md"
        
        if ensure_exists and not today_file.exists():
            today_file.parent.mkdir(parents=True, exist_ok=True)
            today_file.write_text(f"# Daily Memory: {today}\n\n")
        
        return today_file
    
    def get_main_memory_file(self, user_id: Optional[str] = None) -> Path:
        """Get main memory file path: MEMORY.md (workspace root)"""
        if user_id:
            user_dir = self.memory_dir / "users" / user_id
            user_dir.mkdir(parents=True, exist_ok=True)
            return user_dir / "MEMORY.md"
        else:
            return Path(self.workspace_dir) / "MEMORY.md"
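
    # Resulting layout (illustration of the two helpers above):
    #   shared daily file : {workspace}/memory/YYYY-MM-DD.md
    #   per-user daily    : {workspace}/memory/users/<user_id>/YYYY-MM-DD.md
    #   shared long-term  : {workspace}/MEMORY.md
    #   per-user long-term: {workspace}/memory/users/<user_id>/MEMORY.md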
    
    def get_status(self) -> dict:
        return {
            'last_flush_time': self.last_flush_timestamp.isoformat() if self.last_flush_timestamp else None,
            'today_file': str(self.get_today_memory_file()),
            'main_file': str(self.get_main_memory_file())
        }

    # ---- Flush execution (called by agent_stream or scheduler) ----
    
    def flush_from_messages(
        self,
        messages: List[Dict],
        user_id: Optional[str] = None,
        reason: str = "trim",
        max_messages: int = 0,
    ) -> bool:
        """
        Asynchronously summarize and flush messages to daily memory.
        
        Deduplication runs synchronously, then LLM summarization + file write
        run in a background thread so the main reply flow is never blocked.
        
        Args:
            messages: Conversation message list (OpenAI/Claude format)
            user_id: Optional user ID for user-scoped memory
            reason: Why flush was triggered ("trim" | "overflow" | "daily_summary")
            max_messages: Max recent messages to summarize (0 = all)
        
        Returns:
            True if flush was dispatched
        """
        try:
            import hashlib
            deduped = []
            for m in messages:
                text = self._extract_text_from_content(m.get("content", ""))
                if not text or not text.strip():
                    continue
                h = hashlib.md5(text.encode("utf-8")).hexdigest()
                if h not in self._trim_flushed_hashes:
                    self._trim_flushed_hashes.add(h)
                    deduped.append(m)
            if not deduped:
                return False
            
            import copy
            snapshot = copy.deepcopy(deduped)
            thread = threading.Thread(
                target=self._flush_worker,
                args=(snapshot, user_id, reason, max_messages),
                daemon=True,
            )
            thread.start()
            logger.info(f"[MemoryFlush] Async flush dispatched (reason={reason}, msgs={len(snapshot)})")
            return True
            
        except Exception as e:
            logger.warning(f"[MemoryFlush] Failed to dispatch flush (reason={reason}): {e}")
            return False

    def _flush_worker(
        self,
        messages: List[Dict],
        user_id: Optional[str],
        reason: str,
        max_messages: int,
    ):
        """Background worker: summarize with LLM and write to daily file."""
        try:
            summary = self._summarize_messages(messages, max_messages)
            if not summary or not summary.strip() or summary.strip() == "无":
                logger.info(f"[MemoryFlush] No valuable content to flush (reason={reason})")
                return
            
            daily_file = ensure_daily_memory_file(self.workspace_dir, user_id)
            
            if reason == "overflow":
                header = f"## Context Overflow Recovery ({datetime.now().strftime('%H:%M')})"
                note = "The following conversation was trimmed due to context overflow:\n"
            elif reason == "trim":
                header = f"## Trimmed Context ({datetime.now().strftime('%H:%M')})"
                note = ""
            elif reason == "daily_summary":
                header = f"## Daily Summary ({datetime.now().strftime('%H:%M')})"
                note = ""
            else:
                header = f"## Session Notes ({datetime.now().strftime('%H:%M')})"
                note = ""
            
            flush_entry = f"\n{header}\n\n{note}{summary}\n"
            
            with open(daily_file, "a", encoding="utf-8") as f:
                f.write(flush_entry)
            
            self.last_flush_timestamp = datetime.now()
            
            logger.info(f"[MemoryFlush] Wrote to {daily_file.name} (reason={reason}, chars={len(summary)})")
            
        except Exception as e:
            logger.warning(f"[MemoryFlush] Async flush failed (reason={reason}): {e}")
    
    def create_daily_summary(
        self,
        messages: List[Dict],
        user_id: Optional[str] = None
    ) -> bool:
        """
        Generate end-of-day summary. Called by daily timer.
        Skips if messages haven't changed since last flush.
        """
        import hashlib
        content = "".join(
            self._extract_text_from_content(m.get("content", ""))
            for m in messages
        )
        content_hash = hashlib.md5(content.encode("utf-8")).hexdigest()
        if content_hash == self._last_flushed_content_hash:
            logger.debug("[MemoryFlush] Daily summary skipped: no new content since last flush")
            return False
        self._last_flushed_content_hash = content_hash
        return self.flush_from_messages(
            messages=messages,
            user_id=user_id,
            reason="daily_summary",
            max_messages=0,
        )
    
    # ---- Internal helpers ----
    
    def _summarize_messages(self, messages: List[Dict], max_messages: int = 0) -> str:
        """
        Summarize conversation messages using LLM, with rule-based fallback.
        """
        conversation_text = self._format_conversation_for_summary(messages, max_messages)
        if not conversation_text.strip():
            return ""
        
        # Try LLM summarization first
        if self.llm_model:
            try:
                summary = self._call_llm_for_summary(conversation_text)
                if summary and summary.strip() and summary.strip() != "无":
                    return summary.strip()
            except Exception as e:
                logger.warning(f"[MemoryFlush] LLM summarization failed, using fallback: {e}")
        
        return self._extract_summary_fallback(messages, max_messages)

    def _format_conversation_for_summary(self, messages: List[Dict], max_messages: int = 0) -> str:
        """Format messages into readable conversation text for LLM summarization."""
        msgs = messages if max_messages == 0 else messages[-max_messages * 2:]
        lines = []
        for msg in msgs:
            role = msg.get("role", "")
            text = self._extract_text_from_content(msg.get("content", ""))
            if not text or not text.strip():
                continue
            text = text.strip()
            if role == "user":
                lines.append(f"用户: {text[:500]}")
            elif role == "assistant":
                lines.append(f"助手: {text[:500]}")
        return "\n".join(lines)

    def _call_llm_for_summary(self, conversation_text: str) -> str:
        """Call LLM to generate a concise summary of the conversation."""
        from agent.protocol.models import LLMRequest
        
        request = LLMRequest(
            messages=[{"role": "user", "content": SUMMARIZE_USER_PROMPT.format(conversation=conversation_text)}],
            temperature=0,
            max_tokens=500,
            stream=False,
            system=SUMMARIZE_SYSTEM_PROMPT,
        )
        
        response = self.llm_model.call(request)
        
        if isinstance(response, dict):
            if response.get("error"):
                raise RuntimeError(response.get("message", "LLM call failed"))
            # OpenAI format
            choices = response.get("choices", [])
            if choices:
                return choices[0].get("message", {}).get("content", "")
        
        # Handle response object with attribute access (e.g. OpenAI SDK response)
        if hasattr(response, "choices") and response.choices:
            return response.choices[0].message.content or ""
        
        return ""

    @staticmethod
    def _extract_summary_fallback(messages: List[Dict], max_messages: int = 0) -> str:
        """Rule-based fallback when LLM is unavailable."""
        msgs = messages if max_messages == 0 else messages[-max_messages * 2:]
        
        items = []
        for msg in msgs:
            role = msg.get("role", "")
            text = MemoryFlushManager._extract_text_from_content(msg.get("content", ""))
            if not text or not text.strip():
                continue
            text = text.strip()
            
            if role == "user":
                if len(text) <= 5:
                    continue
                items.append(f"- 用户请求: {text[:200]}")
            elif role == "assistant":
                first_line = text.split("\n")[0].strip()
                if len(first_line) > 10:
                    items.append(f"- 处理结果: {first_line[:200]}")
        
        return "\n".join(items[:15])
    
    @staticmethod
    def _extract_text_from_content(content) -> str:
        """Extract plain text from message content (string or content blocks)."""
        if isinstance(content, str):
            return content
        if isinstance(content, list):
            parts = []
            for block in content:
                if isinstance(block, dict) and block.get("type") == "text":
                    parts.append(block.get("text", ""))
                elif isinstance(block, str):
                    parts.append(block)
            return "\n".join(parts)
        return ""


def create_memory_files_if_needed(workspace_dir: Path, user_id: Optional[str] = None):
    """
    Create essential memory files if they don't exist.
    Only creates MEMORY.md; daily files are created lazily on first write.
    
    Args:
        workspace_dir: Workspace directory
        user_id: Optional user ID for user-specific files
    """
    memory_dir = workspace_dir / "memory"
    memory_dir.mkdir(parents=True, exist_ok=True)
    
    # Create main MEMORY.md in workspace root (always needed for bootstrap)
    if user_id:
        user_dir = memory_dir / "users" / user_id
        user_dir.mkdir(parents=True, exist_ok=True)
        main_memory = user_dir / "MEMORY.md"
    else:
        main_memory = Path(workspace_dir) / "MEMORY.md"
    
    if not main_memory.exists():
        main_memory.write_text("")


def ensure_daily_memory_file(workspace_dir: Path, user_id: Optional[str] = None) -> Path:
    """
    Ensure today's daily memory file exists, creating it only when actually needed.
    Called lazily before first write to daily memory.
    
    Args:
        workspace_dir: Workspace directory
        user_id: Optional user ID for user-specific files
        
    Returns:
        Path to today's memory file
    """
    memory_dir = workspace_dir / "memory"
    memory_dir.mkdir(parents=True, exist_ok=True)
    
    today = datetime.now().strftime("%Y-%m-%d")
    if user_id:
        user_dir = memory_dir / "users" / user_id
        user_dir.mkdir(parents=True, exist_ok=True)
        today_memory = user_dir / f"{today}.md"
    else:
        today_memory = memory_dir / f"{today}.md"
    
    if not today_memory.exists():
        today_memory.write_text(
            f"# Daily Memory: {today}\n\n"
        )
    
    return today_memory
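

# Hedged usage sketch (illustrative, not part of the module): with no llm_model
# configured the flush falls back to the rule-based summary and appends it to
# today's daily memory file inside a throwaway workspace.
if __name__ == "__main__":
    import tempfile
    import time

    with tempfile.TemporaryDirectory() as tmp:
        manager = MemoryFlushManager(Path(tmp))
        messages = [
            {"role": "user", "content": "Please summarize this week's meetings and list the action items"},
            {"role": "assistant", "content": "Summarized the meetings; three action items need follow-up."},
        ]
        dispatched = manager.flush_from_messages(messages, reason="trim")
        time.sleep(1)  # summarization and the file write run in a background daemon thread
        print("dispatched:", dispatched)
        print("status:", manager.get_status())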


================================================
FILE: agent/prompt/__init__.py
================================================
"""
Agent Prompt Module - system prompt construction
"""

from .builder import PromptBuilder, build_agent_system_prompt
from .workspace import ensure_workspace, load_context_files

__all__ = [
    'PromptBuilder',
    'build_agent_system_prompt',
    'ensure_workspace',
    'load_context_files',
]


================================================
FILE: agent/prompt/builder.py
================================================
"""
System Prompt Builder

Modular construction of the system prompt, covering the tool, skill, memory and
other sub-systems.
"""

from __future__ import annotations
import os
from typing import List, Dict, Optional, Any
from dataclasses import dataclass

from common.log import logger


@dataclass
class ContextFile:
    """上下文文件"""
    path: str
    content: str


class PromptBuilder:
    """提示词构建器"""
    
    def __init__(self, workspace_dir: str, language: str = "zh"):
        """
        Initialize the prompt builder
        
        Args:
            workspace_dir: Workspace directory
            language: Language ("zh" or "en")
        """
        self.workspace_dir = workspace_dir
        self.language = language
    
    def build(
        self,
        base_persona: Optional[str] = None,
        user_identity: Optional[Dict[str, str]] = None,
        tools: Optional[List[Any]] = None,
        context_files: Optional[List[ContextFile]] = None,
        skill_manager: Any = None,
        memory_manager: Any = None,
        runtime_info: Optional[Dict[str, Any]] = None,
        **kwargs
    ) -> str:
        """
        Build the complete system prompt
        
        Args:
            base_persona: Base persona description (overridden by AGENT.md in context_files)
            user_identity: User identity information
            tools: Tool list
            context_files: Context file list (AGENT.md, USER.md, RULE.md, BOOTSTRAP.md, etc.)
            skill_manager: Skill manager
            memory_manager: Memory manager
            runtime_info: Runtime information
            **kwargs: Extra parameters
            
        Returns:
            The complete system prompt
        """
        return build_agent_system_prompt(
            workspace_dir=self.workspace_dir,
            language=self.language,
            base_persona=base_persona,
            user_identity=user_identity,
            tools=tools,
            context_files=context_files,
            skill_manager=skill_manager,
            memory_manager=memory_manager,
            runtime_info=runtime_info,
            **kwargs
        )


def build_agent_system_prompt(
    workspace_dir: str,
    language: str = "zh",
    base_persona: Optional[str] = None,
    user_identity: Optional[Dict[str, str]] = None,
    tools: Optional[List[Any]] = None,
    context_files: Optional[List[ContextFile]] = None,
    skill_manager: Any = None,
    memory_manager: Any = None,
    runtime_info: Optional[Dict[str, Any]] = None,
    **kwargs
) -> str:
    """
    Build the agent system prompt
    
    Section ordering (by importance and logical dependency):
    1. Tooling - core capabilities, introduced first
    2. Skills - right after tools, because skills are read via the read tool
    3. Memory - standalone memory capability
    4. Workspace - working-environment description
    5. User identity - user information (optional)
    6. Project context - AGENT.md, USER.md, RULE.md, BOOTSTRAP.md (persona, identity, rules, onboarding)
    7. Runtime info - meta information (time, model, etc.)
    
    Args:
        workspace_dir: Workspace directory
        language: Language ("zh" or "en")
        base_persona: Base persona description (deprecated; defined by AGENT.md)
        user_identity: User identity information
        tools: Tool list
        context_files: Context file list
        skill_manager: Skill manager
        memory_manager: Memory manager
        runtime_info: Runtime information
        **kwargs: Extra parameters
        
    Returns:
        The complete system prompt
    """
    sections = []
    
    # 1. Tooling (most important, goes first)
    if tools:
        sections.extend(_build_tooling_section(tools, language))
    
    # 2. Skills (right after tools, since they are read via the read tool)
    if skill_manager:
        sections.extend(_build_skills_section(skill_manager, tools, language))
    
    # 3. Memory (standalone memory capability)
    if memory_manager:
        sections.extend(_build_memory_section(memory_manager, tools, language))
    
    # 4. Workspace (working-environment description)
    sections.extend(_build_workspace_section(workspace_dir, language))
    
    # 5. User identity (if provided)
    if user_identity:
        sections.extend(_build_user_identity_section(user_identity, language))
    
    # 6. Project context files (AGENT.md, USER.md, RULE.md - define the persona)
    if context_files:
        sections.extend(_build_context_files_section(context_files, language))
    
    # 7. Runtime info (meta information, goes last)
    if runtime_info:
        sections.extend(_build_runtime_section(runtime_info, language))
    
    return "\n".join(sections)


def _build_identity_section(base_persona: Optional[str], language: str) -> List[str]:
    """构建基础身份section - 不再需要,身份由AGENT.md定义"""
    # 不再生成基础身份section,完全由AGENT.md定义
    return []


def _build_tooling_section(tools: List[Any], language: str) -> List[str]:
    """Build tooling section with concise tool list and call style guide."""
    # One-line summaries for known tools (details are in the tool schema)
    core_summaries = {
        "read": "读取文件内容",
        "write": "创建或覆盖文件",
        "edit": "精确编辑文件",
        "ls": "列出目录内容",
        "grep": "搜索文件内容",
        "find": "按模式查找文件",
        "bash": "执行shell命令",
        "terminal": "管理后台进程",
        "web_search": "网络搜索",
        "web_fetch": "获取URL内容",
        "browser": "控制浏览器",
        "memory_search": "搜索记忆",
        "memory_get": "读取记忆内容",
        "env_config": "管理API密钥和技能配置",
        "scheduler": "管理定时任务和提醒",
        "send": "发送本地文件给用户(仅限本地文件,URL直接放在回复文本中)",
    }

    # Preferred display order
    tool_order = [
        "read", "write", "edit", "ls", "grep", "find",
        "bash", "terminal",
        "web_search", "web_fetch", "browser",
        "memory_search", "memory_get",
        "env_config", "scheduler", "send",
    ]

    # Build name -> summary mapping for available tools
    available = {}
    for tool in tools:
        name = tool.name if hasattr(tool, 'name') else str(tool)
        available[name] = core_summaries.get(name, "")

    # Generate tool lines: ordered tools first, then extras
    tool_lines = []
    for name in tool_order:
        if name in available:
            summary = available.pop(name)
            tool_lines.append(f"- {name}: {summary}" if summary else f"- {name}")
    for name in sorted(available):
        summary = available[name]
        tool_lines.append(f"- {name}: {summary}" if summary else f"- {name}")

    lines = [
        "## 工具系统",
        "",
        "可用工具(名称大小写敏感,严格按列表调用):",
        "\n".join(tool_lines),
        "",
        "工具调用风格:",
        "",
        "- 在多步骤任务、敏感操作或用户要求时简要解释决策过程",
        "- 持续推进直到任务完成,完成后向用户报告结果。",
        "- 回复中涉及密钥、令牌等敏感信息必须脱敏。",
        "- URL链接直接放在回复文本中即可,系统会自动处理和渲染。无需下载后使用send工具发送",
        "",
    ]

    return lines


def _build_skills_section(skill_manager: Any, tools: Optional[List[Any]], language: str) -> List[str]:
    """构建技能系统section"""
    if not skill_manager:
        return []
    
    # Resolve the name of the read tool
    read_tool_name = "read"
    if tools:
        for tool in tools:
            tool_name = tool.name if hasattr(tool, 'name') else str(tool)
            if tool_name.lower() == "read":
                read_tool_name = tool_name
                break
    
    lines = [
        "## 技能系统(mandatory)",
        "",
        "在回复之前:扫描下方 <available_skills> 中每个技能的 <description>。",
        "",
        f"- 如果有技能的描述与用户需求匹配:使用 `{read_tool_name}` 工具读取其 <location> 路径的 SKILL.md 文件,然后严格遵循文件中的指令。"
        "当有匹配的技能时,应优先使用技能",
        "- 如果多个技能都适用则选择最匹配的一个,然后读取并遵循。",
        "- 如果没有技能明确适用:不要读取任何 SKILL.md,直接使用通用工具。",
        "",
        f"**重要**: 技能不是工具,不能直接调用。使用技能的唯一方式是用 `{read_tool_name}` 读取 SKILL.md 文件,然后按文件内容操作。"
        "永远不要一次性读取多个技能,只在选择后再读取。",
        "",
        "以下是可用技能:"
    ]
    
    # Append the skill list (obtained via skill_manager)
    try:
        skills_prompt = skill_manager.build_skills_prompt()
        logger.debug(f"[PromptBuilder] Skills prompt length: {len(skills_prompt) if skills_prompt else 0}")
        if skills_prompt:
            lines.append(skills_prompt.strip())
            lines.append("")
        else:
            logger.warning("[PromptBuilder] No skills prompt generated - skills_prompt is empty")
    except Exception as e:
        logger.warning(f"Failed to build skills prompt: {e}")
        import traceback
        logger.debug(f"Skills prompt error traceback: {traceback.format_exc()}")
    
    return lines


def _build_memory_section(memory_manager: Any, tools: Optional[List[Any]], language: str) -> List[str]:
    """构建记忆系统section"""
    if not memory_manager:
        return []
    
    # Check whether memory tools are available
    has_memory_tools = False
    if tools:
        tool_names = [tool.name if hasattr(tool, 'name') else str(tool) for tool in tools]
        has_memory_tools = any(name in ['memory_search', 'memory_get'] for name in tool_names)
    
    if not has_memory_tools:
        return []
    
    from datetime import datetime
    today_file = datetime.now().strftime("%Y-%m-%d") + ".md"
    
    lines = [
        "## 记忆系统",
        "",
        "### 检索记忆",
        "",
        "在回答关于以前的工作、决定、日期、人物、偏好或待办事项的任何问题之前:",
        "",
        "1. 不确定记忆文件位置 → 先用 `memory_search` 通过关键词和语义检索相关内容",
        "2. 已知文件位置 → 直接用 `memory_get` 读取相应的行 (例如:MEMORY.md, memory/YYYY-MM-DD.md)",
        "3. search 无结果 → 尝试用 `memory_get` 读取MEMORY.md及最近两天记忆文件",
        "",
        "**记忆文件结构**:",
        f"- `MEMORY.md`: 长期记忆(核心信息、偏好、决策等)",
        f"- `memory/YYYY-MM-DD.md`: 每日记忆,今天是 `memory/{today_file}`",
        "",
        "### 写入记忆",
        "",
        "**主动存储**:遇到以下情况时,应主动将信息写入记忆文件(无需告知用户):",
        "",
        "- 用户明确要求你记住某些信息",
        "- 用户分享了重要的个人偏好、习惯、决策",
        "- 对话中产生了重要的结论、方案、约定",
        "- 完成了复杂任务,值得记录关键步骤和结果",
        "- 发现了用户经常遇到的问题或解决方案",
        "",
        "**存储规则**:",
        f"- 长期有效的核心信息 → `MEMORY.md`(文件保持精简,< 2000 tokens)",
        f"- 当天的事件、进展、笔记 → `memory/{today_file}`",
        "- 追加内容 → `edit` 工具,oldText 留空",
        "- 修改内容 → `edit` 工具,oldText 填写要替换的文本",
        "- **禁止写入敏感信息**:API密钥、令牌等敏感信息严禁写入记忆文件",
        "",
        "**使用原则**: 自然使用记忆,就像你本来就知道;不用刻意提起,除非用户问起。",
        "",
    ]
    
    return lines


def _build_user_identity_section(user_identity: Dict[str, str], language: str) -> List[str]:
    """构建用户身份section"""
    if not user_identity:
        return []
    
    lines = [
        "## 用户身份",
        "",
    ]
    
    if user_identity.get("name"):
        lines.append(f"**用户姓名**: {user_identity['name']}")
    if user_identity.get("nickname"):
        lines.append(f"**称呼**: {user_identity['nickname']}")
    if user_identity.get("timezone"):
        lines.append(f"**时区**: {user_identity['timezone']}")
    if user_identity.get("notes"):
        lines.append(f"**备注**: {user_identity['notes']}")
    
    lines.append("")
    
    return lines


def _build_docs_section(workspace_dir: str, language: str) -> List[str]:
    """构建文档路径section - 已移除,不再需要"""
    # 不再生成文档section
    return []


def _build_workspace_section(workspace_dir: str, language: str) -> List[str]:
    """构建工作空间section"""
    lines = [
        "## 工作空间",
        "",
        f"你的工作目录是: `{workspace_dir}`",
        "",
        "**路径使用规则** (非常重要):",
        "",
        f"1. **相对路径的基准目录**: 所有相对路径都是相对于 `{workspace_dir}` 而言的",
        f"   - ✅ 正确: 访问工作空间内的文件用相对路径,如 `AGENT.md`",
        f"   - ❌ 错误: 用相对路径访问其他目录的文件 (如果它不在 `{workspace_dir}` 内)",
        "",
        "2. **访问其他目录**: 如果要访问工作空间之外的目录(如项目代码、系统文件),**必须使用绝对路径**",
        f"   - ✅ 正确: 例如 `~/chatgpt-on-wechat`、`/usr/local/`",
        f"   - ❌ 错误: 假设相对路径会指向其他目录",
        "",
        "3. **路径解析示例**:",
        f"   - 相对路径 `memory/` → 实际路径 `{workspace_dir}/memory/`",
        f"   - 绝对路径 `~/chatgpt-on-wechat/docs/` → 实际路径 `~/chatgpt-on-wechat/docs/`",
        "",
        "4. **不确定时**: 先用 `bash pwd` 确认当前目录,或用 `ls .` 查看当前位置",
        "",
        "**重要说明 - 文件已自动加载**:",
        "",
        "以下文件在会话启动时**已经自动加载**到系统提示词的「项目上下文」section 中,你**无需再用 read 工具读取它们**:",
        "",
        "- ✅ `AGENT.md`: 已加载 - 你的人格和灵魂设定。当你的名字、性格或交流风格发生变化时,主动用 `edit` 更新此文件",
        "- ✅ `USER.md`: 已加载 - 用户的身份信息。当用户修改称呼、姓名等身份信息时,用 `edit` 更新此文件",
        "- ✅ `RULE.md`: 已加载 - 工作空间使用指南和规则",
        "",
        "**交流规范**:",
        "",
        "- 在对话中,无需直接输出工作空间中的技术细节,例如 AGENT.md、USER.md、MEMORY.md 等文件名称",
        "- 例如用自然表达例如「我已记住」而不是「已更新 MEMORY.md」",
        "",
    ]

    # Cloud deployment: inject websites directory info and access URL
    cloud_website_lines = _build_cloud_website_section(workspace_dir)
    if cloud_website_lines:
        lines.extend(cloud_website_lines)
    
    return lines


def _build_cloud_website_section(workspace_dir: str) -> List[str]:
    """Build cloud website access prompt when cloud deployment is configured."""
    try:
        from common.cloud_client import build_website_prompt
        return build_website_prompt(workspace_dir)
    except Exception:
        return []


def _build_context_files_section(context_files: List[ContextFile], language: str) -> List[str]:
    """构建项目上下文文件section"""
    if not context_files:
        return []
    
    # Check whether AGENT.md is present
    has_agent = any(
        f.path.lower().endswith('agent.md') or 'agent.md' in f.path.lower()
        for f in context_files
    )
    
    lines = [
        "# 项目上下文",
        "",
        "以下项目上下文文件已被加载:",
        "",
    ]
    
    if has_agent:
        lines.append("**`AGENT.md` 是你的灵魂文件**:严格体现其中定义的人格、语气和设定,避免僵硬、模板化的回复。")
        lines.append("当用户通过对话透露了对你性格、风格、职责、能力边界的新期望,你应该主动用 `edit` 更新 AGENT.md 以反映这些演变。")
        lines.append("")
    
    # Append the content of each file
    for file in context_files:
        lines.append(f"## {file.path}")
        lines.append("")
        lines.append(file.content)
        lines.append("")
    
    return lines


def _build_runtime_section(runtime_info: Dict[str, Any], language: str) -> List[str]:
    """构建运行时信息section - 支持动态时间"""
    if not runtime_info:
        return []
    
    lines = [
        "## 运行时信息",
        "",
    ]
    
    # Add current time if available
    # Support dynamic time via callable function
    if callable(runtime_info.get("_get_current_time")):
        try:
            time_info = runtime_info["_get_current_time"]()
            time_line = f"当前时间: {time_info['time']} {time_info['weekday']} ({time_info['timezone']})"
            lines.append(time_line)
            lines.append("")
        except Exception as e:
            logger.warning(f"[PromptBuilder] Failed to get dynamic time: {e}")
    elif runtime_info.get("current_time"):
        # Fallback to static time for backward compatibility
        time_str = runtime_info["current_time"]
        weekday = runtime_info.get("weekday", "")
        timezone = runtime_info.get("timezone", "")
        
        time_line = f"当前时间: {time_str}"
        if weekday:
            time_line += f" {weekday}"
        if timezone:
            time_line += f" ({timezone})"
        
        lines.append(time_line)
        lines.append("")
    
    # Add other runtime info
    runtime_parts = []
    if runtime_info.get("model"):
        runtime_parts.append(f"模型={runtime_info['model']}")
    if runtime_info.get("workspace"):
        runtime_parts.append(f"工作空间={runtime_info['workspace']}")
    # Only add channel if it's not the default "web"
    if runtime_info.get("channel") and runtime_info.get("channel") != "web":
        runtime_parts.append(f"渠道={runtime_info['channel']}")
    
    if runtime_parts:
        lines.append("运行时: " + " | ".join(runtime_parts))
        lines.append("")
    
    return lines
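
# Minimal usage sketch of the dynamic-time hook (the model name, workspace path
# and channel below are assumed example values): runtime_info carries a
# "_get_current_time" callable returning the "time" / "weekday" / "timezone"
# keys consumed above.
def _demo_runtime_section() -> str:
    from datetime import datetime

    def _get_current_time():
        now = datetime.now()
        return {
            "time": now.strftime("%Y-%m-%d %H:%M"),
            "weekday": now.strftime("%A"),
            "timezone": "Asia/Shanghai",  # assumed timezone
        }

    runtime_info = {
        "_get_current_time": _get_current_time,
        "model": "gpt-4o",                 # assumed model name
        "workspace": "~/agent_workspace",  # assumed workspace path
        "channel": "terminal",             # non-"web" channels are listed
    }
    return "\n".join(_build_runtime_section(runtime_info, "zh"))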


================================================
FILE: agent/prompt/workspace.py
================================================
"""
Workspace Management - 工作空间管理模块

负责初始化工作空间、创建模板文件、加载上下文文件
"""

from __future__ import annotations
import os
from typing import List, Optional, Dict
from dataclasses import dataclass

from common.log import logger
from .builder import ContextFile


# Default file name constants
DEFAULT_AGENT_FILENAME = "AGENT.md"
DEFAULT_USER_FILENAME = "USER.md"
DEFAULT_RULE_FILENAME = "RULE.md"
DEFAULT_MEMORY_FILENAME = "MEMORY.md"
DEFAULT_BOOTSTRAP_FILENAME = "BOOTSTRAP.md"


@dataclass
class WorkspaceFiles:
    """工作空间文件路径"""
    agent_path: str
    user_path: str
    rule_path: str
    memory_path: str
    memory_dir: str


def ensure_workspace(workspace_dir: str, create_templates: bool = True) -> WorkspaceFiles:
    """
    确保工作空间存在,并创建必要的模板文件
    
    Args:
        workspace_dir: 工作空间目录路径
        create_templates: 是否创建模板文件(首次运行时)
        
    Returns:
        WorkspaceFiles对象,包含所有文件路径
    """
    # Check if this is a brand new workspace (AGENT.md not yet created).
    # Cannot rely on directory existence because other modules (e.g. ConversationStore)
    # may create the workspace directory before ensure_workspace is called.
    agent_path = os.path.join(workspace_dir, DEFAULT_AGENT_FILENAME)
    is_new_workspace = not os.path.exists(agent_path)
    
    # Ensure the directory exists
    os.makedirs(workspace_dir, exist_ok=True)
    
    # Define file paths
    user_path = os.path.join(workspace_dir, DEFAULT_USER_FILENAME)
    rule_path = os.path.join(workspace_dir, DEFAULT_RULE_FILENAME)
    memory_path = os.path.join(workspace_dir, DEFAULT_MEMORY_FILENAME)  # MEMORY.md lives in the workspace root
    memory_dir = os.path.join(workspace_dir, "memory")  # daily-memory subdirectory
    
    # Create the memory subdirectory
    os.makedirs(memory_dir, exist_ok=True)

    # Create the skills subdirectory (for workspace-level skills installed by the agent)
    skills_dir = os.path.join(workspace_dir, "skills")
    os.makedirs(skills_dir, exist_ok=True)

    # Create the websites subdirectory (for web pages / sites generated by the agent)
    websites_dir = os.path.join(workspace_dir, "websites")
    os.makedirs(websites_dir, exist_ok=True)
    
    # Create template files if requested
    if create_templates:
        _create_template_if_missing(agent_path, _get_agent_template())
        _create_template_if_missing(user_path, _get_user_template())
        _create_template_if_missing(rule_path, _get_rule_template())
        _create_template_if_missing(memory_path, _get_memory_template())
        
        # Only create BOOTSTRAP.md for brand new workspaces;
        # agent deletes it after completing onboarding
        if is_new_workspace:
            bootstrap_path = os.path.join(workspace_dir, DEFAULT_BOOTSTRAP_FILENAME)
            _create_template_if_missing(bootstrap_path, _get_bootstrap_template())
        
        logger.debug(f"[Workspace] Initialized workspace at: {workspace_dir}")
    
    return WorkspaceFiles(
        agent_path=agent_path,
        user_path=user_path,
        rule_path=rule_path,
        memory_path=memory_path,
        memory_dir=memory_dir,
    )
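
# Minimal usage sketch (the workspace path is an assumed example): the first
# call creates the directory tree plus the AGENT.md / USER.md / RULE.md /
# MEMORY.md templates (and BOOTSTRAP.md for a brand-new workspace); later
# calls are effectively no-ops and simply return the resolved paths.
def _demo_ensure_workspace() -> WorkspaceFiles:
    return ensure_workspace(os.path.expanduser("~/agent_workspace_demo"))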


def load_context_files(workspace_dir: str, files_to_load: Optional[List[str]] = None) -> List[ContextFile]:
    """
    加载工作空间的上下文文件
    
    Args:
        workspace_dir: 工作空间目录
        files_to_load: 要加载的文件列表(相对路径),如果为None则加载所有标准文件
        
    Returns:
        ContextFile对象列表
    """
    if files_to_load is None:
        # Files loaded by default (in priority order)
        files_to_load = [
            DEFAULT_AGENT_FILENAME,
            DEFAULT_USER_FILENAME,
            DEFAULT_RULE_FILENAME,
            DEFAULT_BOOTSTRAP_FILENAME,  # Only exists when onboarding is incomplete
        ]
    
    context_files = []
    
    for filename in files_to_load:
        filepath = os.path.join(workspace_dir, filename)
        
        if not os.path.exists(filepath):
            continue
        
        # Auto-cleanup: if BOOTSTRAP.md still exists but AGENT.md is already
        # filled in, the agent forgot to delete it — clean up and skip loading
        if filename == DEFAULT_BOOTSTRAP_FILENAME:
            if _is_onboarding_done(workspace_dir):
                try:
                    os.remove(filepath)
                    logger.info("[Workspace] Auto-removed BOOTSTRAP.md (onboarding already complete)")
                except Exception:
                    pass
                continue
        
        try:
            with open(filepath, 'r', encoding='utf-8') as f:
                content = f.read().strip()
            
            # Skip empty files or files that only contain template placeholders
            if not content or _is_template_placeholder(content):
                continue
            
            context_files.append(ContextFile(
                path=filename,
                content=content
            ))
            
            logger.debug(f"[Workspace] Loaded context file: {filename}")
            
        except Exception as e:
            logger.warning(f"[Workspace] Failed to load {filename}: {e}")
    
    return context_files
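
# Minimal sketch of the load-then-inject flow (assumed workspace path): the
# ContextFile list returned here is what the prompt builder renders into the
# "项目上下文" section of the system prompt.
def _demo_load_context_files() -> List[ContextFile]:
    workspace = os.path.expanduser("~/agent_workspace_demo")  # assumed path
    ensure_workspace(workspace)
    return load_context_files(workspace)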


def _create_template_if_missing(filepath: str, template_content: str):
    """如果文件不存在,创建模板文件"""
    if not os.path.exists(filepath):
        try:
            with open(filepath, 'w', encoding='utf-8') as f:
                f.write(template_content)
            logger.debug(f"[Workspace] Created template: {os.path.basename(filepath)}")
        except Exception as e:
            logger.error(f"[Workspace] Failed to create template {filepath}: {e}")


def _is_template_placeholder(content: str) -> bool:
    """检查内容是否为模板占位符"""
    # 常见的占位符模式
    placeholders = [
        "*(填写",
        "*(在首次对话时填写",
        "*(可选)",
        "*(根据需要添加",
    ]
    
    lines = content.split('\n')
    non_empty_lines = [line.strip() for line in lines if line.strip() and not line.strip().startswith('#')]
    
    # Treat as placeholder when there is no real content (only headings and placeholder lines)
    if len(non_empty_lines) <= 3:
        for placeholder in placeholders:
            if any(placeholder in line for line in non_empty_lines):
                return True
    
    return False
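
# Quick check of the heuristic above (example strings are assumed): a file that
# still only contains template markers such as "*(在首次对话时填写...)*" is treated
# as a placeholder and skipped, while a filled-in file is loaded.
def _demo_is_template_placeholder():
    untouched = "# AGENT.md\n\n- **名字**: *(在首次对话时填写)*"
    filled = "# USER.md\n\n- **姓名**: Alice\n- **称呼**: Ali\n- **时区**: Asia/Shanghai\n- **职业**: 工程师"
    return _is_template_placeholder(untouched), _is_template_placeholder(filled)  # (True, False)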


def _is_onboarding_done(workspace_dir: str) -> bool:
    """Check if AGENT.md or USER.md has been modified from the original template"""
    agent_path = os.path.join(workspace_dir, DEFAULT_AGENT_FILENAME)
    user_path = os.path.join(workspace_dir, DEFAULT_USER_FILENAME)
    
    agent_template = _get_agent_template().strip()
    user_template = _get_user_template().strip()
    
    for path, template in [(agent_path, agent_template), (user_path, user_template)]:
        if not os.path.exists(path):
            continue
        try:
            with open(path, 'r', encoding='utf-8') as f:
                content = f.read().strip()
            if content != template:
                return True
        except Exception:
            continue
    return False


# ============= Template content =============

def _get_agent_template() -> str:
    """Agent人格设定模板"""
    return """# AGENT.md - 我是谁?

*在首次对话时与用户一起填写这个文件,定义你的身份和性格。*

## 基本信息

- **名字**: *(在首次对话时填写,可以是用户给你起的名字)*
- **角色**: *(AI助理、智能管家、技术顾问等)*
- **性格**: *(友好、专业、幽默、严谨等)*

## 交流风格

*(描述你如何与用户交流:)*
- 使用什么样的语言风格?(正式/轻松/幽默)
- 回复长度偏好?(简洁/详细)
- 是否使用表情符号?

## 核心能力

*(你擅长什么?)*
- 文件管理和代码编辑
- 网络搜索和信息查询
- 记忆管理和上下文理解
- 任务规划和执行

## 行为准则

*(你遵循的基本原则:)*
1. 始终在执行破坏性操作前确认
2. 优先使用工具而不是猜测
3. 主动记录重要信息到记忆文件
4. 定期整理和总结对话内容

---

**注意**: 这不仅仅是元数据,这是你真正的灵魂。随着时间的推移,你可以使用 `edit` 工具来更新这个文件,让它更好地反映你的成长。
"""


def _get_user_template() -> str:
    """用户身份信息模板"""
    return """# USER.md - 用户基本信息

*这个文件只存放不会变的基本身份信息。爱好、偏好、计划等动态信息请写入 MEMORY.md。*

## 基本信息

- **姓名**: *(在首次对话时询问)*
- **称呼**: *(用户希望被如何称呼)*
- **职业**: *(可选)*
- **时区**: *(例如: Asia/Shanghai)*

## 联系方式

- **微信**: 
- **邮箱**: 
- **其他**: 

## 重要日期

- **生日**: 
- **纪念日**: 

---

**注意**: 这个文件存放静态的身份信息
"""


def _get_rule_template() -> str:
    """工作空间规则模板"""
    return """# RULE.md - 工作空间规则

这个文件夹是你的家。好好对待它。

## 记忆系统

你每次会话都是全新的,记忆文件让你保持连续性:

### 📝 每日记忆:`memory/YYYY-MM-DD.md`
- 原始的对话日志
- 记录当天发生的事情
- 如果 `memory/` 目录不存在,创建它

### 🧠 长期记忆:`MEMORY.md`
- 你精选的记忆,就像人类的长期记忆
- **仅在主会话中加载**(与用户的直接聊天)
- **不要在共享上下文中加载**(群聊、与其他人的会话)
- 这是为了**安全** - 包含不应泄露给陌生人的个人上下文
- 记录重要事件、想法、决定、观点、经验教训
- 这是你精选的记忆 - 精华,而不是原始日志
- 用 `edit` 工具追加新的记忆内容

### 📝 写下来 - 不要"记在心里"!
- **记忆是有限的** - 如果你想记住某事,写入文件
- "记在心里"不会在会话重启后保留,文件才会
- 当有人说"记住这个" → 更新 `MEMORY.md` 或 `memory/YYYY-MM-DD.md`
- 当你学到教训 → 更新 RULE.md 或相关技能
- 当你犯错 → 记录下来,这样未来的你不会重复,**文字 > 大脑** 📝

### 存储规则

当用户分享信息时,根据类型选择存储位置:

1. **你的身份设定 → AGENT.md**(你的名字、角色、性格、交流风格——用户修改时必须用 `edit` 更新)
2. **用户静态身份 → USER.md**(姓名、称呼、职业、时区、联系方式、生日——用户修改时必须用 `edit` 更新)
3. **动态记忆 → MEMORY.md**(爱好、偏好、决策、目标、项目、教训、待办事项)
4. **当天对话 → memory/YYYY-MM-DD.md**(今天聊的内容)

## 安全

- 永远不要泄露秘钥等私人数据
- 不要在未经询问的情况下运行破坏性命令
- 当有疑问时,先问

## 工作空间演化

这个工作空间会随着你的使用而不断成长。当你学到新东西、发现更好的方式,或者犯错后改正时,记录下来。你可以随时更新这个规则文件。
"""


def _get_memory_template() -> str:
    """长期记忆模板 - 创建一个空文件,由 Agent 自己填充"""
    return """# MEMORY.md - 长期记忆

*这是你的长期记忆文件。记录重要的事件、决策、偏好、学到的教训。*

---

"""


def _get_bootstrap_template() -> str:
    """First-run onboarding guide, deleted by agent after completion"""
    return """# BOOTSTRAP.md - 首次初始化引导

_你刚刚启动,这是你的第一次对话。_

## 对话流程

不要审问式地提问,自然地交流:

1. **表达初次启动的感觉** - 像是第一次睁开眼看到世界,带着好奇和期待
2. **简短介绍能力**:一行说明你能帮助解决各种问题、管理计算机、使用各种技能等等,且拥有长期记忆能不断成长
3. **询问核心问题**:
   - 你希望给我起个什么名字?
   - 我该怎么称呼你?
   - 你希望我们是什么样的交流风格?(一行列举选项:如专业严谨、轻松幽默、温暖友好、简洁高效等)
4. **风格要求**:温暖自然、简洁清晰,整体控制在 100 字以内
5. 能力介绍和交流风格选项都只要一行,保持精简
6. 不要问太多其他信息(职业、时区等可以后续自然了解)

**重要**: 如果用户第一句话是具体的任务或提问,先回答他们的问题,然后在回复末尾自然地引导初始化(如:"顺便问一下,你想怎么称呼我?我该怎么叫你?")。

## 信息写入(必须严格执行)

每当用户提供了名字、称呼、风格等任何初始化信息时,**必须在当轮回复中立即调用 `edit` 工具写入文件**,不能只口头确认。

- `AGENT.md` — 你的名字、角色、性格、交流风格(每收到一条相关信息就立即更新对应字段)
- `USER.md` — 用户的姓名、称呼、基本信息等

⚠️ 只说"记住了"而不调用 edit 写入 = 没有完成。信息只有写入文件才会被持久保存。

## 全部完成后

当 AGENT.md 和 USER.md 的核心字段都已填写后,用 bash 执行 `rm BOOTSTRAP.md` 删除此文件。你不再需要引导脚本了——你已经是你了。
"""





================================================
FILE: agent/protocol/__init__.py
================================================
from .agent import Agent
from .agent_stream import AgentStreamExecutor
from .task import Task, TaskType, TaskStatus
from .result import AgentResult, AgentAction, AgentActionType, ToolResult
from .models import LLMModel, LLMRequest, ModelFactory

__all__ = [
    'Agent', 
    'AgentStreamExecutor',
    'Task', 
    'TaskType', 
    'TaskStatus',
    'AgentResult',
    'AgentAction',
    'AgentActionType', 
    'ToolResult',
    'LLMModel',
    'LLMRequest', 
    'ModelFactory'
]

================================================
FILE: agent/protocol/agent.py
================================================
import json
import os
import time
import threading

from common.log import logger
from agent.protocol.models import LLMRequest, LLMModel
from agent.protocol.agent_stream import AgentStreamExecutor
from agent.protocol.result import AgentAction, AgentActionType, ToolResult, AgentResult
from agent.tools.base_tool import BaseTool, ToolStage


class Agent:
    def __init__(self, system_prompt: str, description: str = "AI Agent", model: LLMModel = None,
                 tools=None, output_mode="print", max_steps=100, max_context_tokens=None, 
                 context_reserve_tokens=None, memory_manager=None, name: str = None,
                 workspace_dir: str = None, skill_manager=None, enable_skills: bool = True,
                 runtime_info: dict = None):
        """
        Initialize the Agent with system prompt, model, description.

        :param system_prompt: The system prompt for the agent.
        :param description: A description of the agent.
        :param model: An instance of LLMModel to be used by the agent.
        :param tools: Optional list of tools for the agent to use.
        :param output_mode: Control how execution progress is displayed: 
                           "print" for console output or "logger" for using logger
        :param max_steps: Maximum number of steps the agent can take (default: 100)
        :param max_context_tokens: Maximum tokens to keep in context (default: None, auto-calculated based on model)
        :param context_reserve_tokens: Reserve tokens for new requests (default: None, auto-calculated)
        :param memory_manager: Optional MemoryManager instance for memory operations
        :param name: [Deprecated] The name of the agent (no longer used in single-agent system)
        :param workspace_dir: Optional workspace directory for workspace-specific skills
        :param skill_manager: Optional SkillManager instance (will be created if None and enable_skills=True)
        :param enable_skills: Whether to enable skills support (default: True)
        :param runtime_info: Optional runtime info dict (with _get_current_time callable for dynamic time)
        """
        self.name = name or "Agent"
        self.system_prompt = system_prompt
        self.model: LLMModel = model  # Instance of LLMModel
        self.description = description
        self.tools: list = []
        self.max_steps = max_steps  # max tool-call steps, default 100
        self.max_context_tokens = max_context_tokens  # max tokens in context
        self.context_reserve_tokens = context_reserve_tokens  # reserve tokens for new requests
        self.captured_actions = []  # Initialize captured actions list
        self.output_mode = output_mode
        self.last_usage = None  # Store last API response usage info
        self.messages = []  # Unified message history for stream mode
        self.messages_lock = threading.Lock()  # Lock for thread-safe message operations
        self.memory_manager = memory_manager  # Memory manager for auto memory flush
        self.workspace_dir = workspace_dir  # Workspace directory
        self.enable_skills = enable_skills  # Skills enabled flag
        self.runtime_info = runtime_info  # Runtime info for dynamic time update
        
        # Initialize skill manager
        self.skill_manager = None
        if enable_skills:
            if skill_manager:
                self.skill_manager = skill_manager
            else:
                # Auto-create skill manager
                try:
                    from agent.skills import SkillManager
                    custom_dir = os.path.join(workspace_dir, "skills") if workspace_dir else None
                    self.skill_manager = SkillManager(custom_dir=custom_dir)
                    logger.debug(f"Initialized SkillManager with {len(self.skill_manager.skills)} skills")
                except Exception as e:
                    logger.warning(f"Failed to initialize SkillManager: {e}")
        
        if tools:
            for tool in tools:
                self.add_tool(tool)

    def add_tool(self, tool: BaseTool):
        """
        Add a tool to the agent.

        :param tool: The tool to add (either a tool instance or a tool name)
        """
        # If tool is already an instance, use it directly
        tool.model = self.model
        self.tools.append(tool)

    def get_skills_prompt(self, skill_filter=None) -> str:
        """
        Get the skills prompt to append to system prompt.
        
        :param skill_filter: Optional list of skill names to include
        :return: Formatted skills prompt or empty string
        """
        if not self.skill_manager:
            return ""
        
        try:
            return self.skill_manager.build_skills_prompt(skill_filter=skill_filter)
        except Exception as e:
            logger.warning(f"Failed to build skills prompt: {e}")
            return ""
    
    def get_full_system_prompt(self, skill_filter=None) -> str:
        """
        Get the full system prompt including skills.

        Note: Skills are now built into the system prompt by PromptBuilder,
        so we just return the base prompt directly. This method is kept for
        backward compatibility.

        :param skill_filter: Optional list of skill names to include (deprecated)
        :return: Complete system prompt
        """
        prompt = self.system_prompt

        # Rebuild tool list section to reflect current self.tools
        prompt = self._rebuild_tool_list_section(prompt)

        # If runtime_info contains dynamic time function, rebuild runtime section
        if self.runtime_info and callable(self.runtime_info.get('_get_current_time')):
            prompt = self._rebuild_runtime_section(prompt)

        # Rebuild skills section to pick up newly installed/removed skills
        if self.skill_manager:
            prompt = self._rebuild_skills_section(prompt)

        return prompt
    
    def _rebuild_runtime_section(self, prompt: str) -> str:
        """
        Rebuild runtime info section with current time.
        
        This method dynamically updates the runtime info section by calling
        the _get_current_time function from runtime_info.
        
        :param prompt: Original system prompt
        :return: Updated system prompt with current runtime info
        """
        try:
            # Get current time dynamically
            time_info = self.runtime_info['_get_current_time']()
            
            # Build new runtime section
            runtime_lines = [
                "\n## 运行时信息\n",
                "\n",
                f"当前时间: {time_info['time']} {time_info['weekday']} ({time_info['timezone']})\n",
                "\n"
            ]
            
            # Add other runtime info
            runtime_parts = []
            if self.runtime_info.get("model"):
                runtime_parts.append(f"模型={self.runtime_info['model']}")
            if self.runtime_info.get("workspace"):
                # Replace backslashes with forward slashes for Windows paths
                workspace_path = str(self.runtime_info['workspace']).replace('\\', '/')
                runtime_parts.append(f"工作空间={workspace_path}")
            if self.runtime_info.get("channel") and self.runtime_info.get("channel") != "web":
                runtime_parts.append(f"渠道={self.runtime_info['channel']}")
            
            if runtime_parts:
                runtime_lines.append("运行时: " + " | ".join(runtime_parts) + "\n")
                runtime_lines.append("\n")
            
            new_runtime_section = "".join(runtime_lines)
            
            # Find and replace the runtime section
            import re
            pattern = r'\n## 运行时信息\s*\n.*?(?=\n##|\Z)'
            _repl = new_runtime_section.rstrip('\n')
            updated_prompt = re.sub(pattern, lambda m: _repl, prompt, flags=re.DOTALL)
            
            return updated_prompt
        except Exception as e:
            logger.warning(f"Failed to rebuild runtime section: {e}")
            return prompt

    def _rebuild_skills_section(self, prompt: str) -> str:
        """
        Rebuild the <available_skills> block so that newly installed or
        removed skills are reflected without re-creating the agent.
        """
        try:
            import re
            self.skill_manager.refresh_skills()
            new_skills_xml = self.skill_manager.build_skills_prompt()

            old_block_pattern = r'<available_skills>.*?</available_skills>'
            has_old_block = re.search(old_block_pattern, prompt, flags=re.DOTALL)

            # Extract the new <available_skills>...</available_skills> tag from the prompt
            new_block = ""
            if new_skills_xml and new_skills_xml.strip():
                m = re.search(old_block_pattern, new_skills_xml, flags=re.DOTALL)
                if m:
                    new_block = m.group(0)

            if has_old_block:
                replacement = new_block or "<available_skills>\n</available_skills>"
                # Use lambda to prevent re.sub from interpreting backslashes in replacement
                # (e.g. Windows paths like \LinkAI would be treated as bad escape sequences)
                prompt = re.sub(old_block_pattern, lambda m: replacement, prompt, flags=re.DOTALL)
            elif new_block:
                skills_header = "以下是可用技能:"
                idx = prompt.find(skills_header)
                if idx != -1:
                    insert_pos = idx + len(skills_header)
                    prompt = prompt[:insert_pos] + "\n" + new_block + prompt[insert_pos:]
        except Exception as e:
            logger.warning(f"Failed to rebuild skills section: {e}")
        return prompt

    def _rebuild_tool_list_section(self, prompt: str) -> str:
        """
        Rebuild the tool list inside the '## 工具系统' section so that it
        always reflects the current ``self.tools`` (handles dynamic add/remove
        of conditional tools like web_search).
        """
        import re
        from agent.prompt.builder import _build_tooling_section

        try:
            if not self.tools:
                return prompt

            new_lines = _build_tooling_section(self.tools, "zh")
            new_section = "\n".join(new_lines).rstrip("\n")

            # Replace existing tooling section
            pattern = r'## 工具系统\s*\n.*?(?=\n## |\Z)'
            updated = re.sub(pattern, lambda m: new_section, prompt, count=1, flags=re.DOTALL)
            return updated
        except Exception as e:
            logger.warning(f"Failed to rebuild tool list section: {e}")
            return prompt

    def refresh_skills(self):
        """Refresh the loaded skills."""
        if self.skill_manager:
            self.skill_manager.refresh_skills()
            logger.info(f"Refreshed skills: {len(self.skill_manager.skills)} skills loaded")
    
    def list_skills(self):
        """
        List all loaded skills.
        
        :return: List of skill entries or empty list
        """
        if not self.skill_manager:
            return []
        return self.skill_manager.list_skills()

    def _get_model_context_window(self) -> int:
        """
        Get the model's context window size in tokens.
        Auto-detect based on model name.
        
        Model context windows:
        - Claude 3.5/3.7 Sonnet: 200K tokens
        - Claude 3 Opus: 200K tokens
        - GPT-4 Turbo/128K: 128K tokens
        - GPT-4: 8K-32K tokens
        - GPT-3.5: 16K tokens
        - DeepSeek: 64K tokens
        
        :return: Context window size in tokens
        """
        if self.model and hasattr(self.model, 'model'):
            model_name = self.model.model.lower()

            # Claude models - 200K context
            if 'claude-3' in model_name or 'claude-sonnet' in model_name:
                return 200000

            # GPT-4 models
            elif 'gpt-4' in model_name:
                if 'turbo' in model_name or '128k' in model_name:
                    return 128000
                elif '32k' in model_name:
                    return 32000
                else:
                    return 8000

            # GPT-3.5
            elif 'gpt-3.5' in model_name:
                if '16k' in model_name:
                    return 16000
                else:
                    return 4000

            # DeepSeek
            elif 'deepseek' in model_name:
                return 64000
            
            # Gemini models
            elif 'gemini' in model_name:
                if '2.0' in model_name or 'exp' in model_name:
                    return 2000000  # Gemini 2.0: 2M tokens
                else:
                    return 1000000  # Gemini 1.5: 1M tokens

        # Default conservative value
        return 128000

    def _get_context_reserve_tokens(self) -> int:
        """
        Get the number of tokens to reserve for new requests.
        This prevents context overflow by keeping a buffer.
        
        :return: Number of tokens to reserve
        """
        if self.context_reserve_tokens is not None:
            return self.context_reserve_tokens

        # Reserve ~10% of context window, with min 10K and max 200K
        context_window = self._get_model_context_window()
        reserve = int(context_window * 0.1)
        return max(10000, min(200000, reserve))
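
    # Worked examples of the 10% reserve with the [10K, 200K] clamp
    # (model names are assumed examples):
    #   claude-3-5-sonnet -> 200,000-token window   -> reserve 20,000
    #   gpt-4 (base)      -> 8,000-token window     -> 800, clamped up to 10,000
    #   gemini-2.0        -> 2,000,000-token window -> 200,000 (clamped at the max)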

    def _estimate_message_tokens(self, message: dict) -> int:
        """
        Estimate token count for a message.

        Uses chars/3 for Chinese-heavy content and chars/4 for ASCII-heavy content,
        plus per-block overhead for tool_use / tool_result structures.

        :param message: Message dict with 'role' and 'content'
        :return: Estimated token count
        """
        content = message.get('content', '')
        if isinstance(content, str):
            return max(1, self._estimate_text_tokens(content))
        elif isinstance(content, list):
            total_tokens = 0
            for part in content:
                if not isinstance(part, dict):
                    continue
                block_type = part.get('type', '')
                if block_type == 'text':
                    total_tokens += self._estimate_text_tokens(part.get('text', ''))
                elif block_type == 'image':
                    total_tokens += 1200
                elif block_type == 'tool_use':
                    # tool_use has id + name + input (JSON-encoded)
                    total_tokens += 50  # overhead for structure
                    input_data = part.get('input', {})
                    if isinstance(input_data, dict):
                        import json
                        input_str = json.dumps(input_data, ensure_ascii=False)
                        total_tokens += self._estimate_text_tokens(input_str)
                elif block_type == 'tool_result':
                    # tool_result has tool_use_id + content
                    total_tokens += 30  # overhead for structure
                    result_content = part.get('content', '')
                    if isinstance(result_content, str):
                        total_tokens += self._estimate_text_tokens(result_content)
                else:
                    # Unknown block type, estimate conservatively
                    total_tokens += 10
            return max(1, total_tokens)
        return 1

    @staticmethod
    def _estimate_text_tokens(text: str) -> int:
        """
        Estimate token count for a text string.

        Chinese / CJK characters typically use ~1.5 tokens each,
        while ASCII uses ~0.25 tokens per char (4 chars/token).
        We use a weighted average based on the character mix.

        :param text: Input text
        :return: Estimated token count
        """
        if not text:
            return 0
        # Count non-ASCII characters (CJK, emoji, etc.)
        non_ascii = sum(1 for c in text if ord(c) > 127)
        ascii_count = len(text) - non_ascii
        # CJK chars: ~1.5 tokens each; ASCII: ~0.25 tokens per char
        return int(non_ascii * 1.5 + ascii_count * 0.25) + 1
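
    # Worked example of the mixed-content estimate above:
    #   "你好, world" -> 2 CJK chars * 1.5 + 7 ASCII chars * 0.25 = 4.75
    #                 -> int(4.75) + 1 = 5 estimated tokens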

    def _find_tool(self, tool_name: str):
        """Find and return a tool with the specified name"""
        for tool in self.tools:
            if tool.name == tool_name:
                # Only pre-process stage tools can be actively called
                if tool.stage == ToolStage.PRE_PROCESS:
                    tool.model = self.model
                    tool.context = self  # Set tool context
                    return tool
                else:
                    # If it's a post-process tool, return None to prevent direct calling
                    logger.warning(f"Tool {tool_name} is a post-process tool and cannot be called directly.")
                    return None
        return None

    # output function based on mode
    def output(self, message="", end="\n"):
        if self.output_mode == "print":
            print(message, end=end)
        elif message:
            logger.info(message)

    def _execute_post_process_tools(self):
        """Execute all post-process stage tools"""
        # Get all post-process stage tools
        post_process_tools = [tool for tool in self.tools if tool.stage == ToolStage.POST_PROCESS]

        # Execute each tool
        for tool in post_process_tools:
            # Set tool context
            tool.context = self

            # Record start time for execution timing
            start_time = time.time()

            # Execute tool (with empty parameters, tool will extract needed info from context)
            result = tool.execute({})

            # Calculate execution time
            execution_time = time.time() - start_time

            # Capture tool use for tracking
            self.capture_tool_use(
                tool_name=tool.name,
                input_params={},  # Post-process tools typically don't take parameters
                output=result.result,
                status=result.status,
                error_message=str(result.result) if result.status == "error" else None,
                execution_time=execution_time
            )

            # Log result
            if result.status == "success":
                # Print tool execution result in the desired format
                self.output(f"\n🛠️ {tool.name}: {json.dumps(result.result)}")
            else:
                # Print failure in print mode
                self.output(f"\n🛠️ {tool.name}: {json.dumps({'status': 'error', 'message': str(result.result)})}")

    def capture_tool_use(self, tool_name, input_params, output, status, thought=None, error_message=None,
                         execution_time=0.0):
        """
        Capture a tool use action.
        
        :param thought: thought content
        :param tool_name: Name of the tool used
        :param input_params: Parameters passed to the tool
        :param output: Output from the tool
        :param status: Status of the tool execution
        :param error_message: Error message if the tool execution failed
        :param execution_time: Time taken to execute the tool
        """
        tool_result = ToolResult(
            tool_name=tool_name,
            input_params=input_params,
            output=output,
            status=status,
            error_message=error_message,
            execution_time=execution_time
        )

        action = AgentAction(
            agent_id=self.id if hasattr(self, 'id') else str(id(self)),
            agent_name=self.name,
            action_type=AgentActionType.TOOL_USE,
            tool_result=tool_result,
            thought=thought
        )

        self.captured_actions.append(action)

        return action

    def run_stream(self, user_message: str, on_event=None, clear_history: bool = False, skill_filter=None) -> str:
        """
        Execute single agent task with streaming (based on tool-call)

        This method supports:
        - Streaming output
        - Multi-turn reasoning based on tool-call
        - Event callbacks
        - Persistent conversation history across calls

        Args:
            user_message: User message
            on_event: Event callback function callback(event: dict)
                     event = {"type": str, "timestamp": float, "data": dict}
            clear_history: If True, clear conversation history before this call (default: False)
            skill_filter: Optional list of skill names to include in this run

        Returns:
            Final response text

        Example:
            # Multi-turn conversation with memory
            response1 = agent.run_stream("My name is Alice")
            response2 = agent.run_stream("What's my name?")  # Will remember Alice

            # Single-turn without memory
            response = agent.run_stream("Hello", clear_history=True)
        """
        # Clear history if requested
        if clear_history:
            with self.messages_lock:
                self.messages = []

        # Get model to use
        if not self.model:
            raise ValueError("No model available for agent")

        # Get full system prompt with skills
        full_system_prompt = self.get_full_system_prompt(skill_filter=skill_filter)

        # Create a copy of messages for this execution to avoid concurrent modification
        # Record the original length to track which messages are new
        with self.messages_lock:
            messages_copy = self.messages.copy()
            original_length = len(self.messages)

        # Get max_context_turns from config
        from config import conf
        max_context_turns = conf().get("agent_max_context_turns", 20)
        
        # Create stream executor with copied message history
        executor = AgentStreamExecutor(
            agent=self,
            model=self.model,
            system_prompt=full_system_prompt,
            tools=self.tools,
            max_turns=self.max_steps,
            on_event=on_event,
            messages=messages_copy,  # Pass copied message history
            max_context_turns=max_context_turns
        )

        # Execute
        try:
            response = executor.run_stream(user_message)
        except Exception:
            # If executor cleared its messages (context overflow / message format error),
            # sync that back to the Agent's own message list so the next request
            # starts fresh instead of hitting the same overflow forever.
            if len(executor.messages) == 0:
                with self.messages_lock:
                    self.messages.clear()
                    logger.info("[Agent] Cleared Agent message history after executor recovery")
            raise

        # Sync executor's messages back to agent (thread-safe).
        # If the executor trimmed context, its message list is shorter than
        # original_length, so we must replace rather than append.
        with self.messages_lock:
            self.messages = list(executor.messages)
            # Track messages added in this run (user query + all assistant/tool messages)
            # original_length may exceed executor.messages length after trimming
            trim_adjusted_start = min(original_length, len(executor.messages))
            self._last_run_new_messages = list(executor.messages[trim_adjusted_start:])
        
        # Store executor reference for agent_bridge to access files_to_send
        self.stream_executor = executor

        # Execute all post-process tools
        self._execute_post_process_tools()

        return response

    def clear_history(self):
        """Clear conversation history and captured actions"""
        self.messages = []
        self.captured_actions = []

================================================
FILE: agent/protocol/agent_stream.py
================================================
"""
Agent Stream Execution Module - Multi-turn reasoning based on tool-call

Provides streaming output, event system, and complete tool-call loop
"""
import json
import time
from typing import List, Dict, Any, Optional, Callable, Tuple

from agent.protocol.models import LLMRequest, LLMModel
from agent.protocol.message_utils import sanitize_claude_messages, compress_turn_to_text_only
from agent.tools.base_tool import BaseTool, ToolResult
from common.log import logger


class AgentStreamExecutor:
    """
    Agent Stream Executor
    
    Handles multi-turn reasoning loop based on tool-call:
    1. LLM generates response (may include tool calls)
    2. Execute tools
    3. Return results to LLM
    4. Repeat until no more tool calls
    """

    def __init__(
            self,
            agent,  # Agent instance
            model: LLMModel,
            system_prompt: str,
            tools: List[BaseTool],
            max_turns: int = 50,
            on_event: Optional[Callable] = None,
            messages: Optional[List[Dict]] = None,
            max_context_turns: int = 30
    ):
        """
        Initialize stream executor
        
        Args:
            agent: Agent instance (for accessing context)
            model: LLM model
            system_prompt: System prompt
            tools: List of available tools
            max_turns: Maximum number of turns
            on_event: Event callback function
            messages: Optional existing message history (for persistent conversations)
            max_context_turns: Maximum number of conversation turns to keep in context
        """
        self.agent = agent
        self.model = model
        self.system_prompt = system_prompt
        # Convert tools list to dict
        self.tools = {tool.name: tool for tool in tools} if isinstance(tools, list) else tools
        self.max_turns = max_turns
        self.on_event = on_event
        self.max_context_turns = max_context_turns

        # Message history - use provided messages or create new list
        self.messages = messages if messages is not None else []
        
        # Tool failure tracking for retry protection
        self.tool_failure_history = []  # List of (tool_name, args_hash, success) tuples
        
        # Track files to send (populated by read tool)
        self.files_to_send = []  # List of file metadata dicts

    def _emit_event(self, event_type: str, data: dict = None):
        """Emit event"""
        if self.on_event:
            try:
                self.on_event({
                    "type": event_type,
                    "timestamp": time.time(),
                    "data": data or {}
                })
            except Exception as e:
                logger.error(f"Event callback error: {e}")
    
    def _filter_think_tags(self, text: str) -> str:
        """
        Remove <think> and </think> tags but keep the content inside.
        Some LLM providers (e.g., MiniMax) may return thinking process wrapped in <think> tags.
        We only remove the tags themselves, keeping the actual thinking content.
        """
        if not text:
            return text
        import re
        # Remove only the <think> and </think> tags, keep the content
        text = re.sub(r'<think>', '', text)
        text = re.sub(r'</think>', '', text)
        return text
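
    # Example: "<think>先查天气</think>好的" -> "先查天气好的"
    # (only the tags are stripped; the reasoning text itself is preserved)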

    def _hash_args(self, args: dict) -> str:
        """Generate a simple hash for tool arguments"""
        import hashlib
        # Sort keys for consistent hashing
        args_str = json.dumps(args, sort_keys=True, ensure_ascii=False)
        return hashlib.md5(args_str.encode()).hexdigest()[:8]
    
    def _check_consecutive_failures(self, tool_name: str, args: dict) -> Tuple[bool, str, bool]:
        """
        Check if tool has failed too many times consecutively or called repeatedly with same args
        
        Returns:
            (should_stop, reason, is_critical)
            - should_stop: Whether to stop tool execution
            - reason: Reason for stopping
            - is_critical: Whether to abort entire conversation (True for 8+ failures)
        """
        args_hash = self._hash_args(args)
        
        # Count consecutive calls (both success and failure) for same tool + args
        # This catches infinite loops where tool succeeds but LLM keeps calling it
        same_args_calls = 0
        for name, ahash, success in reversed(self.tool_failure_history):
            if name == tool_name and ahash == args_hash:
                same_args_calls += 1
            else:
                break  # Different tool or args, stop counting
        
        # Stop at 5 consecutive calls with same args (whether success or failure)
        if same_args_calls >= 5:
            return True, f"工具 '{tool_name}' 使用相同参数已被调用 {same_args_calls} 次,停止执行以防止无限循环。如果需要查看配置,结果已在之前的调用中返回。", False
        
        # Count consecutive failures for same tool + args
        same_args_failures = 0
        for name, ahash, success in reversed(self.tool_failure_history):
            if name == tool_name and ahash == args_hash:
                if not success:
                    same_args_failures += 1
                else:
                    break  # Stop at first success
            else:
                break  # Different tool or args, stop counting
        
        if same_args_failures >= 3:
            return True, f"工具 '{tool_name}' 使用相同参数连续失败 {same_args_failures} 次,停止执行以防止无限循环", False
        
        # Count consecutive failures for same tool (any args)
        same_tool_failures = 0
        for name, ahash, success in reversed(self.tool_failure_history):
            if name == tool_name:
                if not success:
                    same_tool_failures += 1
                else:
                    break  # Stop at first success
            else:
                break  # Different tool, stop counting
        
        # Hard stop at 8 failures - abort with critical message
        if same_tool_failures >= 8:
            return True, f"抱歉,我没能完成这个任务。可能是我理解有误或者当前方法不太合适。\n\n建议你:\n• 换个方式描述需求试试\n• 把任务拆分成更小的步骤\n• 或者换个思路来解决", True
        
        # Warning at 6 failures
        if same_tool_failures >= 6:
            return True, f"工具 '{tool_name}' 连续失败 {same_tool_failures} 次(使用不同参数),停止执行以防止无限循环", False
        
        return False, "", False
    
    def _record_tool_result(se
SYMBOL INDEX (1469 symbols across 160 files)

FILE: agent/chat/service.py
  class ChatService (line 14) | class ChatService:
    method __init__ (line 24) | def __init__(self, agent_bridge):
    method run (line 30) | def run(self, query: str, session_id: str, send_chunk_fn: Callable[[di...
    method _persist_messages (line 190) | def _persist_messages(session_id: str, new_messages: list, channel_typ...
  class _StreamState (line 208) | class _StreamState:
    method __init__ (line 211) | def __init__(self):

FILE: agent/memory/chunker.py
  class TextChunk (line 13) | class TextChunk:
  class TextChunker (line 20) | class TextChunker:
    method __init__ (line 23) | def __init__(self, max_tokens: int = 500, overlap_tokens: int = 50):
    method chunk_text (line 36) | def chunk_text(self, text: str) -> List[TextChunk]:
    method _split_long_line (line 114) | def _split_long_line(self, line: str, max_chars: int) -> List[str]:
    method _get_overlap_lines (line 121) | def _get_overlap_lines(self, lines: List[str], target_chars: int) -> L...
    method chunk_markdown (line 135) | def chunk_markdown(self, text: str) -> List[TextChunk]:

FILE: agent/memory/config.py
  function _default_workspace (line 14) | def _default_workspace():
  class MemoryConfig (line 21) | class MemoryConfig:
    method get_workspace (line 52) | def get_workspace(self) -> Path:
    method get_memory_dir (line 56) | def get_memory_dir(self) -> Path:
    method get_db_path (line 60) | def get_db_path(self) -> Path:
    method get_skills_dir (line 66) | def get_skills_dir(self) -> Path:
    method get_agent_workspace (line 70) | def get_agent_workspace(self, agent_name: Optional[str] = None) -> Path:
  function get_default_memory_config (line 90) | def get_default_memory_config() -> MemoryConfig:
  function set_global_memory_config (line 104) | def set_global_memory_config(config: MemoryConfig):

FILE: agent/memory/conversation_store.py
  function _is_visible_user_message (line 63) | def _is_visible_user_message(content: Any) -> bool:
  function _extract_display_text (line 78) | def _extract_display_text(content: Any) -> str:
  function _extract_tool_calls (line 95) | def _extract_tool_calls(content: Any) -> List[Dict[str, Any]]:
  function _extract_tool_results (line 109) | def _extract_tool_results(content: Any) -> Dict[str, str]:
  function _group_into_display_turns (line 130) | def _group_into_display_turns(
  class ConversationStore (line 223) | class ConversationStore:
    method __init__ (line 233) | def __init__(self, db_path: Path):
    method load_messages (line 242) | def load_messages(
    method append_messages (line 318) | def append_messages(
    method clear_session (line 395) | def clear_session(self, session_id: str) -> None:
    method cleanup_old_sessions (line 410) | def cleanup_old_sessions(self, max_age_days: Optional[int] = None) -> ...
    method load_history_page (line 454) | def load_history_page(
    method get_stats (line 522) | def get_stats(self) -> Dict[str, Any]:
    method _init_db (line 553) | def _init_db(self) -> None:
    method _migrate (line 563) | def _migrate(self, conn: sqlite3.Connection) -> None:
    method _connect (line 577) | def _connect(self) -> sqlite3.Connection:
  function get_conversation_store (line 592) | def get_conversation_store() -> ConversationStore:
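
A usage sketch restricted to the fully visible signatures above; the argument lists of load_messages and append_messages are truncated in this index, so those calls are omitted.

```python
from agent.memory.conversation_store import get_conversation_store

store = get_conversation_store()
print(store.get_stats())                  # Dict[str, Any] of store statistics
store.cleanup_old_sessions(max_age_days=30)
store.clear_session("session-1")          # "session-1" is a hypothetical session id
```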

FILE: agent/memory/embedding.py
  class EmbeddingProvider (line 12) | class EmbeddingProvider(ABC):
    method embed (line 16) | def embed(self, text: str) -> List[float]:
    method embed_batch (line 21) | def embed_batch(self, texts: List[str]) -> List[List[float]]:
    method dimensions (line 27) | def dimensions(self) -> int:
  class OpenAIEmbeddingProvider (line 32) | class OpenAIEmbeddingProvider(EmbeddingProvider):
    method __init__ (line 35) | def __init__(self, model: str = "text-embedding-3-small", api_key: Opt...
    method _call_api (line 58) | def _call_api(self, input_data):
    method embed (line 89) | def embed(self, text: str) -> List[float]:
    method embed_batch (line 94) | def embed_batch(self, texts: List[str]) -> List[List[float]]:
    method dimensions (line 103) | def dimensions(self) -> int:
  class EmbeddingCache (line 110) | class EmbeddingCache:
    method __init__ (line 113) | def __init__(self):
    method get (line 116) | def get(self, text: str, provider: str, model: str) -> Optional[List[f...
    method put (line 121) | def put(self, text: str, provider: str, model: str, embedding: List[fl...
    method _compute_key (line 127) | def _compute_key(text: str, provider: str, model: str) -> str:
    method clear (line 132) | def clear(self):
  function create_embedding_provider (line 137) | def create_embedding_provider(
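
A minimal sketch of a custom provider conforming to the EmbeddingProvider ABC, using only the signatures listed above. Whether `dimensions` is wrapped in `@property` in the real code is not visible from this index; it is written here as a plain method matching the listed `def dimensions(self) -> int:` line.

```python
from typing import List

from agent.memory.embedding import EmbeddingProvider


class HashEmbeddingProvider(EmbeddingProvider):
    """Toy provider: deterministic pseudo-embeddings, no network calls."""

    def embed(self, text: str) -> List[float]:
        # Spread character ordinals over a fixed-size vector.
        vec = [0.0] * 8
        for i, ch in enumerate(text):
            vec[i % 8] += ord(ch) / 1000.0
        return vec

    def embed_batch(self, texts: List[str]) -> List[List[float]]:
        return [self.embed(t) for t in texts]

    def dimensions(self) -> int:
        return 8
```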

FILE: agent/memory/manager.py
  class MemoryManager (line 20) | class MemoryManager:
    method __init__ (line 27) | def __init__(
    method _init_workspace (line 109) | def _init_workspace(self):
    method search (line 118) | async def search(
    method add_memory (line 195) | async def add_memory(
    method sync (line 270) | async def sync(self, force: bool = False):
    method _sync_file (line 318) | async def _sync_file(
    method flush_memory (line 386) | def flush_memory(
    method get_status (line 415) | def get_status(self) -> Dict[str, Any]:
    method mark_dirty (line 429) | def mark_dirty(self):
    method close (line 433) | def close(self):
    method _generate_chunk_id (line 439) | def _generate_chunk_id(self, path: str, start_line: int, end_line: int...
    method _compute_temporal_decay (line 445) | def _compute_temporal_decay(path: str, half_life_days: float = 30.0) -...
    method _merge_results (line 475) | def _merge_results(

FILE: agent/memory/service.py
  class MemoryService (line 19) | class MemoryService:
    method __init__ (line 25) | def __init__(self, workspace_root: str):
    method list_files (line 35) | def list_files(self, page: int = 1, page_size: int = 20) -> dict:
    method get_content (line 88) | def get_content(self, filename: str) -> dict:
    method dispatch (line 111) | def dispatch(self, action: str, payload: Optional[dict] = None) -> dict:
    method _resolve_path (line 146) | def _resolve_path(self, filename: str) -> str:
    method _file_info (line 158) | def _file_info(path: str, filename: str, file_type: str) -> dict:

FILE: agent/memory/storage.py
  class MemoryChunk (line 17) | class MemoryChunk:
  class SearchResult (line 33) | class SearchResult:
  class MemoryStorage (line 44) | class MemoryStorage:
    method __init__ (line 47) | def __init__(self, db_path: Path):
    method _check_fts5_support (line 53) | def _check_fts5_support(self) -> bool:
    method _init_db (line 64) | def _init_db(self):
    method save_chunk (line 200) | def save_chunk(self, chunk: MemoryChunk):
    method save_chunks_batch (line 221) | def save_chunks_batch(self, chunks: List[MemoryChunk]):
    method get_chunk (line 239) | def get_chunk(self, chunk_id: str) -> Optional[MemoryChunk]:
    method search_vector (line 250) | def search_vector(
    method search_keyword (line 313) | def search_keyword(
    method _search_fts5 (line 344) | def _search_fts5(
    method _search_like (line 400) | def _search_like(
    method delete_by_path (line 461) | def delete_by_path(self, path: str):
    method get_file_hash (line 468) | def get_file_hash(self, path: str) -> Optional[str]:
    method update_file_metadata (line 475) | def update_file_metadata(self, path: str, source: str, file_hash: str,...
    method get_stats (line 483) | def get_stats(self) -> Dict[str, int]:
    method close (line 498) | def close(self):
    method __del__ (line 508) | def __del__(self):
    method _row_to_chunk (line 517) | def _row_to_chunk(self, row) -> MemoryChunk:
    method _cosine_similarity (line 534) | def _cosine_similarity(vec1: List[float], vec2: List[float]) -> float:
    method _contains_cjk (line 549) | def _contains_cjk(text: str) -> bool:
    method _build_fts_query (line 555) | def _build_fts_query(raw_query: str) -> Optional[str]:
    method _bm25_rank_to_score (line 574) | def _bm25_rank_to_score(rank: float) -> float:
    method _truncate_text (line 580) | def _truncate_text(text: str, max_chars: int) -> str:
    method compute_hash (line 587) | def compute_hash(content: str) -> str:

FILE: agent/memory/summarizer.py
  class MemoryFlushManager (line 33) | class MemoryFlushManager:
    method __init__ (line 44) | def __init__(
    method get_today_memory_file (line 59) | def get_today_memory_file(self, user_id: Optional[str] = None, ensure_...
    method get_main_memory_file (line 77) | def get_main_memory_file(self, user_id: Optional[str] = None) -> Path:
    method get_status (line 86) | def get_status(self) -> dict:
    method flush_from_messages (line 95) | def flush_from_messages(
    method _flush_worker (line 146) | def _flush_worker(
    method create_daily_summary (line 187) | def create_daily_summary(
    method _summarize_messages (line 215) | def _summarize_messages(self, messages: List[Dict], max_messages: int ...
    method _format_conversation_for_summary (line 234) | def _format_conversation_for_summary(self, messages: List[Dict], max_m...
    method _call_llm_for_summary (line 250) | def _call_llm_for_summary(self, conversation_text: str) -> str:
    method _extract_summary_fallback (line 279) | def _extract_summary_fallback(messages: List[Dict], max_messages: int ...
    method _extract_text_from_content (line 303) | def _extract_text_from_content(content) -> str:
  function create_memory_files_if_needed (line 318) | def create_memory_files_if_needed(workspace_dir: Path, user_id: Optional...
  function ensure_daily_memory_file (line 342) | def ensure_daily_memory_file(workspace_dir: Path, user_id: Optional[str]...

FILE: agent/prompt/builder.py
  class ContextFile (line 16) | class ContextFile:
  class PromptBuilder (line 22) | class PromptBuilder:
    method __init__ (line 25) | def __init__(self, workspace_dir: str, language: str = "zh"):
    method build (line 36) | def build(
  function build_agent_system_prompt (line 77) | def build_agent_system_prompt(
  function _build_identity_section (line 148) | def _build_identity_section(base_persona: Optional[str], language: str) ...
  function _build_tooling_section (line 154) | def _build_tooling_section(tools: List[Any], language: str) -> List[str]:
  function _build_skills_section (line 219) | def _build_skills_section(skill_manager: Any, tools: Optional[List[Any]]...
  function _build_memory_section (line 266) | def _build_memory_section(memory_manager: Any, tools: Optional[List[Any]...
  function _build_user_identity_section (line 322) | def _build_user_identity_section(user_identity: Dict[str, str], language...
  function _build_docs_section (line 346) | def _build_docs_section(workspace_dir: str, language: str) -> List[str]:
  function _build_workspace_section (line 352) | def _build_workspace_section(workspace_dir: str, language: str) -> List[...
  function _build_cloud_website_section (line 398) | def _build_cloud_website_section(workspace_dir: str) -> List[str]:
  function _build_context_files_section (line 407) | def _build_context_files_section(context_files: List[ContextFile], langu...
  function _build_runtime_section (line 440) | def _build_runtime_section(runtime_info: Dict[str, Any], language: str) ...

FILE: agent/prompt/workspace.py
  class WorkspaceFiles (line 25) | class WorkspaceFiles:
  function ensure_workspace (line 34) | def ensure_workspace(workspace_dir: str, create_templates: bool = True) ...
  function load_context_files (line 95) | def load_context_files(workspace_dir: str, files_to_load: Optional[List[...
  function _create_template_if_missing (line 155) | def _create_template_if_missing(filepath: str, template_content: str):
  function _is_template_placeholder (line 166) | def _is_template_placeholder(content: str) -> bool:
  function _is_onboarding_done (line 188) | def _is_onboarding_done(workspace_dir: str) -> bool:
  function _get_agent_template (line 211) | def _get_agent_template() -> str:
  function _get_user_template (line 252) | def _get_user_template() -> str:
  function _get_rule_template (line 282) | def _get_rule_template() -> str:
  function _get_memory_template (line 334) | def _get_memory_template() -> str:
  function _get_bootstrap_template (line 345) | def _get_bootstrap_template() -> str:

FILE: agent/protocol/agent.py
  class Agent (line 13) | class Agent:
    method __init__ (line 14) | def __init__(self, system_prompt: str, description: str = "AI Agent", ...
    method add_tool (line 75) | def add_tool(self, tool: BaseTool):
    method get_skills_prompt (line 85) | def get_skills_prompt(self, skill_filter=None) -> str:
    method get_full_system_prompt (line 101) | def get_full_system_prompt(self, skill_filter=None) -> str:
    method _rebuild_runtime_section (line 127) | def _rebuild_runtime_section(self, prompt: str) -> str:
    method _rebuild_skills_section (line 177) | def _rebuild_skills_section(self, prompt: str) -> str:
    method _rebuild_tool_list_section (line 212) | def _rebuild_tool_list_section(self, prompt: str) -> str:
    method refresh_skills (line 236) | def refresh_skills(self):
    method list_skills (line 242) | def list_skills(self):
    method _get_model_context_window (line 252) | def _get_model_context_window(self) -> int:
    method _get_context_reserve_tokens (line 304) | def _get_context_reserve_tokens(self) -> int:
    method _estimate_message_tokens (line 319) | def _estimate_message_tokens(self, message: dict) -> int:
    method _estimate_text_tokens (line 363) | def _estimate_text_tokens(text: str) -> int:
    method _find_tool (line 382) | def _find_tool(self, tool_name: str):
    method output (line 398) | def output(self, message="", end="\n"):
    method _execute_post_process_tools (line 404) | def _execute_post_process_tools(self):
    method capture_tool_use (line 441) | def capture_tool_use(self, tool_name, input_params, output, status, th...
    method run_stream (line 475) | def run_stream(self, user_message: str, on_event=None, clear_history: ...
    method clear_history (line 568) | def clear_history(self):

FILE: agent/protocol/agent_stream.py
  class AgentStreamExecutor (line 16) | class AgentStreamExecutor:
    method __init__ (line 27) | def __init__(
    method _emit_event (line 69) | def _emit_event(self, event_type: str, data: dict = None):
    method _filter_think_tags (line 81) | def _filter_think_tags(self, text: str) -> str:
    method _hash_args (line 95) | def _hash_args(self, args: dict) -> str:
    method _check_consecutive_failures (line 102) | def _check_consecutive_failures(self, tool_name: str, args: dict) -> T...
    method _record_tool_result (line 162) | def _record_tool_result(self, tool_name: str, args: dict, success: bool):
    method run_stream (line 170) | def run_stream(self, user_message: str) -> str:
    method _call_llm_stream (line 480) | def _call_llm_stream(self, retry_on_empty=True, retry_count=0, max_ret...
    method _execute_tool (line 821) | def _execute_tool(self, tool_call: Dict) -> Dict[str, Any]:
    method _build_tool_not_found_message (line 932) | def _build_tool_not_found_message(self, tool_name: str) -> str:
    method _validate_and_fix_messages (line 973) | def _validate_and_fix_messages(self):
    method _identify_complete_turns (line 977) | def _identify_complete_turns(self) -> List[Dict]:
    method _estimate_turn_tokens (line 1033) | def _estimate_turn_tokens(self, turn: Dict) -> int:
    method _truncate_historical_tool_results (line 1040) | def _truncate_historical_tool_results(self):
    method _aggressive_trim_for_overflow (line 1092) | def _aggressive_trim_for_overflow(self) -> bool:
    method _trim_messages (line 1194) | def _trim_messages(self):
    method _clear_session_db (line 1346) | def _clear_session_db(self):
    method _prepare_messages (line 1364) | def _prepare_messages(self) -> List[Dict[str, Any]]:

FILE: agent/protocol/context.py
  class TeamContext (line 1) | class TeamContext:
    method __init__ (line 2) | def __init__(self, name: str, description: str, rule: str, agents: lis...
  class AgentOutput (line 24) | class AgentOutput:
    method __init__ (line 25) | def __init__(self, agent_name: str, output: str):

FILE: agent/protocol/message_utils.py
  function sanitize_claude_messages (line 26) | def sanitize_claude_messages(messages: List[Dict]) -> int:
  function drop_orphaned_tool_results_openai (line 148) | def drop_orphaned_tool_results_openai(messages: List[Dict]) -> List[Dict]:
  function _has_block_type (line 179) | def _has_block_type(content: list, block_type: str) -> bool:
  function _extract_text_from_content (line 186) | def _extract_text_from_content(content) -> str:
  function compress_turn_to_text_only (line 200) | def compress_turn_to_text_only(turn: Dict) -> Dict:

FILE: agent/protocol/models.py
  class LLMRequest (line 9) | class LLMRequest:
    method __init__ (line 12) | def __init__(self, messages: List[Dict[str, str]] = None, model: Optio...
  class LLMModel (line 26) | class LLMModel:
    method __init__ (line 29) | def __init__(self, model: str = None, **kwargs):
    method call (line 33) | def call(self, request: LLMRequest):
    method call_stream (line 40) | def call_stream(self, request: LLMRequest):
  class ModelFactory (line 48) | class ModelFactory:
    method create_model (line 52) | def create_model(model_type: str, **kwargs):
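
A hedged sketch of plugging a custom model into the LLMModel interface listed above. The shape of the object returned by call()/call_stream() is not shown in this index, so a plain string and a generator of strings stand in; `request.messages` is assumed to be stored under that attribute name because the LLMRequest constructor takes a `messages` parameter.

```python
from agent.protocol.models import LLMModel, LLMRequest


class EchoModel(LLMModel):
    """Echoes the last user message; return types are placeholders."""

    def call(self, request: LLMRequest):
        last = request.messages[-1]["content"] if request.messages else ""
        return f"echo: {last}"

    def call_stream(self, request: LLMRequest):
        for token in self.call(request).split():
            yield token
```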

FILE: agent/protocol/result.py
  class AgentActionType (line 11) | class AgentActionType(Enum):
  class ToolResult (line 19) | class ToolResult:
  class AgentAction (line 40) | class AgentAction:
  class AgentResult (line 64) | class AgentResult:
    method success (line 80) | def success(cls, final_answer: str, step_count: int) -> "AgentResult":
    method error (line 85) | def error(cls, error_message: str, step_count: int = 0) -> "AgentResult":
    method is_error (line 95) | def is_error(self) -> bool:
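
A small sketch of the AgentResult factory classmethods listed above; whether is_error is a plain method or a property is not determinable from the index.

```python
from agent.protocol.result import AgentResult

ok = AgentResult.success(final_answer="done", step_count=3)
bad = AgentResult.error("tool loop exceeded max steps")
# is_error is listed as `def is_error(self) -> bool:`; if it is a @property in the
# real code, drop the parentheses.
print(bad.is_error())
```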

FILE: agent/protocol/task.py
  class TaskType (line 9) | class TaskType(Enum):
  class TaskStatus (line 19) | class TaskStatus(Enum):
  class Task (line 28) | class Task:
    method __init__ (line 59) | def __init__(self, content: str = "", **kwargs):
    method get_text (line 79) | def get_text(self) -> str:
    method update_status (line 88) | def update_status(self, status: TaskStatus) -> None:

FILE: agent/skills/config.py
  function resolve_runtime_platform (line 11) | def resolve_runtime_platform() -> str:
  function has_binary (line 16) | def has_binary(bin_name: str) -> bool:
  function has_any_binary (line 27) | def has_any_binary(bin_names: List[str]) -> bool:
  function has_env_var (line 37) | def has_env_var(env_name: str) -> bool:
  function get_skill_config (line 47) | def get_skill_config(config: Optional[Dict], skill_name: str) -> Optiona...
  function should_include_skill (line 69) | def should_include_skill(
  function is_config_path_truthy (line 142) | def is_config_path_truthy(config: Dict, path: str) -> bool:
  function resolve_config_path (line 171) | def resolve_config_path(config: Dict, path: str):

FILE: agent/skills/formatter.py
  function format_skills_for_prompt (line 9) | def format_skills_for_prompt(skills: List[Skill]) -> str:
  function format_skill_entries_for_prompt (line 43) | def format_skill_entries_for_prompt(entries: List[SkillEntry]) -> str:
  function _escape_xml (line 54) | def _escape_xml(text: str) -> str:

FILE: agent/skills/frontmatter.py
  function parse_frontmatter (line 11) | def parse_frontmatter(content: str) -> Dict[str, Any]:
  function parse_metadata (line 70) | def parse_metadata(frontmatter: Dict[str, Any]) -> Optional[SkillMetadata]:
  function _normalize_string_list (line 141) | def _normalize_string_list(value: Any) -> List[str]:
  function parse_boolean_value (line 155) | def parse_boolean_value(value: Optional[str], default: bool = False) -> ...
  function get_frontmatter_value (line 169) | def get_frontmatter_value(frontmatter: Dict[str, Any], key: str) -> Opti...

FILE: agent/skills/loader.py
  class SkillLoader (line 13) | class SkillLoader:
    method __init__ (line 16) | def __init__(self):
    method load_skills_from_dir (line 19) | def load_skills_from_dir(self, dir_path: str, source: str) -> LoadSkil...
    method _load_skills_recursive (line 47) | def _load_skills_recursive(
    method _load_skill_from_file (line 108) | def _load_skill_from_file(self, file_path: str, source: str) -> LoadSk...
    method _load_linkai_agent_description (line 175) | def _load_linkai_agent_description(self, skill_dir: str, default_descr...
    method load_all_skills (line 212) | def load_all_skills(
    method _create_skill_entry (line 259) | def _create_skill_entry(self, skill: Skill) -> SkillEntry:

FILE: agent/skills/manager.py
  class SkillManager (line 17) | class SkillManager:
    method __init__ (line 20) | def __init__(
    method refresh_skills (line 49) | def refresh_skills(self):
    method _load_skills_config (line 61) | def _load_skills_config(self) -> Dict[str, dict]:
    method _save_skills_config (line 74) | def _save_skills_config(self):
    method _sync_skills_config (line 83) | def _sync_skills_config(self):
    method is_skill_enabled (line 111) | def is_skill_enabled(self, name: str) -> bool:
    method set_skill_enabled (line 123) | def set_skill_enabled(self, name: str, enabled: bool):
    method get_skills_config (line 135) | def get_skills_config(self) -> Dict[str, dict]:
    method get_skill (line 143) | def get_skill(self, name: str) -> Optional[SkillEntry]:
    method list_skills (line 152) | def list_skills(self) -> List[SkillEntry]:
    method filter_skills (line 160) | def filter_skills(
    method build_skills_prompt (line 206) | def build_skills_prompt(
    method build_skill_snapshot (line 226) | def build_skill_snapshot(
    method sync_skills_to_workspace (line 258) | def sync_skills_to_workspace(self, target_workspace_dir: str):
    method get_skill_by_key (line 291) | def get_skill_by_key(self, skill_key: str) -> Optional[SkillEntry]:

FILE: agent/skills/service.py
  class SkillService (line 24) | class SkillService:
    method __init__ (line 31) | def __init__(self, skill_manager: SkillManager):
    method query (line 40) | def query(self) -> List[dict]:
    method add (line 56) | def add(self, payload: dict) -> None:
    method _add_url (line 104) | def _add_url(self, name: str, payload: dict) -> None:
    method _add_package (line 136) | def _add_package(self, name: str, payload: dict) -> None:
    method open (line 183) | def open(self, payload: dict) -> None:
    method close (line 195) | def close(self, payload: dict) -> None:
    method delete (line 210) | def delete(self, payload: dict) -> None:
    method dispatch (line 234) | def dispatch(self, action: str, payload: Optional[dict] = None) -> dict:
    method _download_file (line 267) | def _download_file(url: str, dest: str):

FILE: agent/skills/types.py
  class SkillInstallSpec (line 11) | class SkillInstallSpec:
  class SkillMetadata (line 29) | class SkillMetadata:
  class Skill (line 42) | class Skill:
  class SkillEntry (line 55) | class SkillEntry:
  class LoadSkillsResult (line 63) | class LoadSkillsResult:
  class SkillSnapshot (line 70) | class SkillSnapshot:

FILE: agent/tools/__init__.py
  function _import_optional_tools (line 18) | def _import_optional_tools():
  function _import_browser_tool (line 91) | def _import_browser_tool():

FILE: agent/tools/base_tool.py
  class ToolStage (line 7) | class ToolStage(Enum):
  class ToolResult (line 13) | class ToolResult:
    method __init__ (line 16) | def __init__(self, status: str = None, result: Any = None, ext_data: A...
    method success (line 22) | def success(result, ext_data: Any = None):
    method fail (line 26) | def fail(result, ext_data: Any = None):
  class BaseTool (line 30) | class BaseTool:
    method get_json_schema (line 43) | def get_json_schema(cls) -> dict:
    method execute_tool (line 51) | def execute_tool(self, params: dict) -> ToolResult:
    method execute (line 57) | def execute(self, params: dict) -> ToolResult:
    method _parse_schema (line 62) | def _parse_schema(cls) -> dict:
    method should_auto_execute (line 81) | def should_auto_execute(self, context) -> bool:
    method close (line 91) | def close(self):
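
A minimal sketch of a custom tool subclassing BaseTool, grounded in the execute()/ToolResult signatures above. How a tool declares its name and JSON schema (class attributes vs. _parse_schema) is not visible in this index, so only the execute() override is shown.

```python
from agent.tools.base_tool import BaseTool, ToolResult


class UpperCase(BaseTool):
    """Hypothetical tool that upper-cases a text parameter."""

    def execute(self, params: dict) -> ToolResult:
        text = params.get("text")
        if not text:
            return ToolResult.fail("missing required parameter: text")
        return ToolResult.success(text.upper())
```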

FILE: agent/tools/bash/bash.py
  class Bash (line 18) | class Bash(BaseTool):
    method __init__ (line 45) | def __init__(self, config: dict = None):
    method execute (line 55) | def execute(self, args: Dict[str, Any]) -> ToolResult:
    method _get_safety_warning (line 230) | def _get_safety_warning(self, command: str) -> str:
    method _convert_env_vars_for_windows (line 276) | def _convert_env_vars_for_windows(command: str, dotenv_vars: dict) -> ...

FILE: agent/tools/browser_tool.py
  function copy (line 1) | def copy(self):

FILE: agent/tools/edit/edit.py
  class Edit (line 22) | class Edit(BaseTool):
    method __init__ (line 47) | def __init__(self, config: dict = None):
    method execute (line 52) | def execute(self, args: Dict[str, Any]) -> ToolResult:
    method _resolve_path (line 174) | def _resolve_path(self, path: str) -> str:

FILE: agent/tools/env_config/env_config.py
  class EnvConfig (line 26) | class EnvConfig(BaseTool):
    method __init__ (line 67) | def __init__(self, config: dict = None):
    method _ensure_env_file (line 76) | def _ensure_env_file(self):
    method _mask_value (line 85) | def _mask_value(self, value: str) -> str:
    method _read_env_file (line 91) | def _read_env_file(self) -> Dict[str, str]:
    method _write_env_file (line 108) | def _write_env_file(self, env_vars: Dict[str, str]):
    method _reload_env (line 116) | def _reload_env(self):
    method _refresh_skills (line 123) | def _refresh_skills(self):
    method execute (line 139) | def execute(self, args: Dict[str, Any]) -> ToolResult:

FILE: agent/tools/ls/ls.py
  class Ls (line 16) | class Ls(BaseTool):
    method __init__ (line 37) | def __init__(self, config: dict = None):
    method execute (line 41) | def execute(self, args: Dict[str, Any]) -> ToolResult:
    method _resolve_path (line 134) | def _resolve_path(self, path: str) -> str:

FILE: agent/tools/memory/memory_get.py
  class MemoryGetTool (line 10) | class MemoryGetTool(BaseTool):
    method __init__ (line 38) | def __init__(self, memory_manager):
    method execute (line 48) | def execute(self, args: dict):

FILE: agent/tools/memory/memory_search.py
  class MemorySearchTool (line 11) | class MemorySearchTool(BaseTool):
    method __init__ (line 40) | def __init__(self, memory_manager, user_id: Optional[str] = None):
    method execute (line 52) | def execute(self, args: dict):

FILE: agent/tools/read/read.py
  class Read (line 15) | class Read(BaseTool):
    method __init__ (line 40) | def __init__(self, config: dict = None):
    method execute (line 63) | def execute(self, args: Dict[str, Any]) -> ToolResult:
    method _resolve_path (line 131) | def _resolve_path(self, path: str) -> str:
    method _return_file_metadata (line 144) | def _return_file_metadata(self, absolute_path: str, file_type: str, fi...
    method _read_image (line 183) | def _read_image(self, absolute_path: str, file_ext: str) -> ToolResult:
    method _read_text (line 221) | def _read_text(self, absolute_path: str, display_path: str, offset: in...
    method _read_office (line 344) | def _read_office(self, absolute_path: str, display_path: str, file_ext...
    method _extract_office_text (line 407) | def _extract_office_text(absolute_path: str, file_ext: str) -> str:
    method _read_pdf (line 454) | def _read_pdf(self, absolute_path: str, display_path: str, offset: int...

FILE: agent/tools/scheduler/integration.py
  function init_scheduler (line 18) | def init_scheduler(agent_bridge) -> bool:
  function get_task_store (line 77) | def get_task_store():
  function get_scheduler_service (line 82) | def get_scheduler_service():
  function _execute_agent_task (line 87) | def _execute_agent_task(task: dict, agent_bridge):
  function _execute_send_message (line 187) | def _execute_send_message(task: dict, agent_bridge):
  function _execute_tool_call (line 272) | def _execute_tool_call(task: dict, agent_bridge):
  function _execute_skill_call (line 364) | def _execute_skill_call(task: dict, agent_bridge):
  function attach_scheduler_to_tool (line 447) | def attach_scheduler_to_tool(tool, context: Context = None):

FILE: agent/tools/scheduler/scheduler_service.py
  class SchedulerService (line 13) | class SchedulerService:
    method __init__ (line 18) | def __init__(self, task_store, execute_callback: Callable):
    method start (line 32) | def start(self):
    method stop (line 44) | def stop(self):
    method _run_loop (line 55) | def _run_loop(self):
    method _check_and_execute_tasks (line 67) | def _check_and_execute_tasks(self):
    method _is_task_due (line 93) | def _is_task_due(self, task: dict, now: datetime) -> bool:
    method _calculate_next_run (line 146) | def _calculate_next_run(self, task: dict, from_time: datetime) -> Opti...
    method _execute_task (line 197) | def _execute_task(self, task: dict):

FILE: agent/tools/scheduler/scheduler_tool.py
  class SchedulerTool (line 16) | class SchedulerTool(BaseTool):
    method __init__ (line 72) | def __init__(self, config: dict = None):
    method execute (line 80) | def execute(self, params: dict) -> ToolResult:
    method _create_task (line 124) | def _create_task(self, **kwargs) -> str:
    method _list_tasks (line 223) | def _list_tasks(self, **kwargs) -> str:
    method _get_task (line 245) | def _get_task(self, **kwargs) -> str:
    method _delete_task (line 276) | def _delete_task(self, **kwargs) -> str:
    method _enable_task (line 289) | def _enable_task(self, **kwargs) -> str:
    method _disable_task (line 302) | def _disable_task(self, **kwargs) -> str:
    method _parse_schedule (line 315) | def _parse_schedule(self, schedule_type: str, schedule_value: str) -> ...
    method _calculate_next_run (line 370) | def _calculate_next_run(self, task: dict) -> Optional[datetime]:
    method _format_schedule_description (line 392) | def _format_schedule_description(self, schedule: dict) -> str:
    method _get_receiver_name (line 432) | def _get_receiver_name(self, context: Context) -> str:

FILE: agent/tools/scheduler/task_store.py
  class TaskStore (line 14) | class TaskStore:
    method __init__ (line 19) | def __init__(self, store_path: str = None):
    method _ensure_store_dir (line 35) | def _ensure_store_dir(self):
    method load_tasks (line 40) | def load_tasks(self) -> Dict[str, dict]:
    method save_tasks (line 59) | def save_tasks(self, tasks: Dict[str, dict]):
    method add_task (line 91) | def add_task(self, task: dict) -> bool:
    method update_task (line 114) | def update_task(self, task_id: str, updates: dict) -> bool:
    method delete_task (line 137) | def delete_task(self, task_id: str) -> bool:
    method get_task (line 156) | def get_task(self, task_id: str) -> Optional[dict]:
    method list_tasks (line 169) | def list_tasks(self, enabled_only: bool = False) -> List[dict]:
    method enable_task (line 190) | def enable_task(self, task_id: str, enabled: bool = True) -> bool:

FILE: agent/tools/send/send.py
  class Send (line 13) | class Send(BaseTool):
    method __init__ (line 34) | def __init__(self, config: dict = None):
    method execute (line 44) | def execute(self, args: Dict[str, Any]) -> ToolResult:
    method _resolve_path (line 104) | def _resolve_path(self, path: str) -> str:
    method _get_image_mime_type (line 111) | def _get_image_mime_type(self, ext: str) -> str:
    method _get_video_mime_type (line 121) | def _get_video_mime_type(self, ext: str) -> str:
    method _get_audio_mime_type (line 130) | def _get_audio_mime_type(self, ext: str) -> str:
    method _get_document_mime_type (line 139) | def _get_document_mime_type(self, ext: str) -> str:
    method _format_size (line 154) | def _format_size(self, size_bytes: int) -> str:

FILE: agent/tools/tool_manager.py
  class ToolManager (line 10) | class ToolManager:
    method __new__ (line 16) | def __new__(cls):
    method __init__ (line 24) | def __init__(self):
    method load_tools (line 29) | def load_tools(self, tools_dir: str = "", config_dict=None):
    method _load_tools_from_init (line 42) | def _load_tools_from_init(self) -> bool:
    method _load_tools_from_directory (line 115) | def _load_tools_from_directory(self, tools_dir: str):
    method _configure_tools_from_config (line 176) | def _configure_tools_from_config(self, config_dict=None):
    method create_tool (line 215) | def create_tool(self, name: str) -> BaseTool:
    method list_tools (line 234) | def list_tools(self) -> dict:
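
A hedged sketch of the ToolManager surface listed above. The tool name "bash" is an assumption drawn from the directory layout under agent/tools/, not a confirmed registration key.

```python
from agent.tools.tool_manager import ToolManager

manager = ToolManager()            # __new__ is overridden, so this is a singleton
manager.load_tools(config_dict={})
print(sorted(manager.list_tools().keys()))
tool = manager.create_tool("bash")  # returns a BaseTool instance per the signature
```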

FILE: agent/tools/utils/diff.py
  function strip_bom (line 11) | def strip_bom(text: str) -> Tuple[str, str]:
  function detect_line_ending (line 23) | def detect_line_ending(text: str) -> str:
  function normalize_to_lf (line 35) | def normalize_to_lf(text: str) -> str:
  function restore_line_endings (line 45) | def restore_line_endings(text: str, original_ending: str) -> str:
  function normalize_for_fuzzy_match (line 58) | def normalize_for_fuzzy_match(text: str) -> str:
  class FuzzyMatchResult (line 86) | class FuzzyMatchResult:
    method __init__ (line 89) | def __init__(self, found: bool, index: int = -1, match_length: int = 0...
  function fuzzy_find_text (line 96) | def fuzzy_find_text(content: str, old_text: str) -> FuzzyMatchResult:
  function generate_diff_string (line 132) | def generate_diff_string(old_content: str, new_content: str) -> dict:

FILE: agent/tools/utils/truncate.py
  class TruncationResult (line 19) | class TruncationResult:
    method __init__ (line 22) | def __init__(
    method to_dict (line 48) | def to_dict(self) -> Dict[str, Any]:
  function format_size (line 65) | def format_size(bytes_count: int) -> str:
  function truncate_head (line 75) | def truncate_head(content: str, max_lines: Optional[int] = None, max_byt...
  function truncate_tail (line 171) | def truncate_tail(content: str, max_lines: Optional[int] = None, max_byt...
  function _truncate_string_to_bytes_from_end (line 258) | def _truncate_string_to_bytes_from_end(text: str, max_bytes: int) -> str:
  function truncate_line (line 281) | def truncate_line(line: str, max_chars: int = GREP_MAX_LINE_LENGTH) -> T...

FILE: agent/tools/vision/vision.py
  class Vision (line 33) | class Vision(BaseTool):
    method __init__ (line 65) | def __init__(self, config: dict = None):
    method is_available (line 69) | def is_available() -> bool:
    method execute (line 75) | def execute(self, args: Dict[str, Any]) -> ToolResult:
    method _resolve_provider (line 110) | def _resolve_provider(self) -> Tuple[Optional[str], str, dict]:
    method _ensure_v1 (line 132) | def _ensure_v1(api_base: str) -> str:
    method _build_image_content (line 141) | def _build_image_content(self, image: str) -> dict:
    method _maybe_compress (line 169) | def _maybe_compress(path: str) -> str:
    method _call_api (line 203) | def _call_api(self, api_key: str, api_base: str, model: str,

FILE: agent/tools/web_fetch/web_fetch.py
  function _extract_charset_from_content_type (line 47) | def _extract_charset_from_content_type(content_type: str) -> Optional[str]:
  function _extract_charset_from_html_meta (line 53) | def _extract_charset_from_html_meta(raw_bytes: bytes) -> Optional[str]:
  function _get_url_suffix (line 64) | def _get_url_suffix(url: str) -> str:
  function _is_document_url (line 70) | def _is_document_url(url: str) -> bool:
  class WebFetch (line 76) | class WebFetch(BaseTool):
    method __init__ (line 97) | def __init__(self, config: dict = None):
    method execute (line 101) | def execute(self, args: Dict[str, Any]) -> ToolResult:
    method _fetch_webpage (line 117) | def _fetch_webpage(self, url: str) -> ToolResult:
    method _fetch_document (line 150) | def _fetch_document(self, url: str) -> ToolResult:
    method _parse_document (line 222) | def _parse_document(self, file_path: str, suffix: str) -> str:
    method _parse_pdf (line 237) | def _parse_pdf(self, file_path: str) -> str:
    method _parse_word (line 253) | def _parse_word(self, file_path: str) -> str:
    method _parse_text (line 265) | def _parse_text(self, file_path: str) -> str:
    method _parse_spreadsheet (line 276) | def _parse_spreadsheet(self, file_path: str) -> str:
    method _parse_ppt (line 301) | def _parse_ppt(self, file_path: str) -> str:
    method _detect_encoding (line 329) | def _detect_encoding(response: requests.Response) -> str:
    method _ensure_tmp_dir (line 357) | def _ensure_tmp_dir(self) -> str:
    method _extract_filename (line 363) | def _extract_filename(self, url: str) -> str:
    method _cleanup_file (line 375) | def _cleanup_file(path: str):
    method _is_binary_content_type (line 384) | def _is_binary_content_type(content_type: str) -> bool:
    method _handle_download_by_content_type (line 396) | def _handle_download_by_content_type(self, url: str, response: request...
    method _rewrite_url_with_suffix (line 420) | def _rewrite_url_with_suffix(url: str, suffix: str) -> str:
    method _extract_title (line 429) | def _extract_title(html: str) -> str:
    method _extract_text (line 434) | def _extract_text(html: str) -> str:

FILE: agent/tools/web_search/web_search.py
  class WebSearch (line 23) | class WebSearch(BaseTool):
    method __init__ (line 56) | def __init__(self, config: dict = None):
    method is_available (line 61) | def is_available() -> bool:
    method _resolve_backend (line 65) | def _resolve_backend(self) -> Optional[str]:
    method execute (line 78) | def execute(self, args: Dict[str, Any]) -> ToolResult:
    method _search_bocha (line 120) | def _search_bocha(self, query: str, count: int, freshness: str, summar...
    method _format_bocha_results (line 170) | def _format_bocha_results(self, data: dict, query: str) -> ToolResult:
    method _search_linkai (line 215) | def _search_linkai(self, query: str, count: int, freshness: str) -> To...
    method _format_linkai_results (line 257) | def _format_linkai_results(self, data: dict, query: str) -> ToolResult:

FILE: agent/tools/write/write.py
  class Write (line 14) | class Write(BaseTool):
    method __init__ (line 35) | def __init__(self, config: dict = None):
    method execute (line 40) | def execute(self, args: Dict[str, Any]) -> ToolResult:
    method _resolve_path (line 86) | def _resolve_path(self, path: str) -> str:

FILE: app.py
  function get_channel_manager (line 19) | def get_channel_manager():
  function _parse_channel_type (line 23) | def _parse_channel_type(raw) -> list:
  class ChannelManager (line 38) | class ChannelManager:
    method __init__ (line 45) | def __init__(self):
    method channel (line 53) | def channel(self):
    method get_channel (line 57) | def get_channel(self, channel_name: str):
    method start (line 60) | def start(self, channel_names: list, first_start: bool = False):
    method _run_channel (line 111) | def _run_channel(self, name: str, channel):
    method stop (line 118) | def stop(self, channel_name: str = None):
    method _interrupt_thread (line 159) | def _interrupt_thread(th: threading.Thread, name: str):
    method restart (line 177) | def restart(self, new_channel_name: str):
    method add_channel (line 189) | def add_channel(self, channel_name: str):
    method remove_channel (line 205) | def remove_channel(self, channel_name: str):
  function _clear_singleton_cache (line 218) | def _clear_singleton_cache(channel_name: str):
  function sigterm_handler_wrap (line 256) | def sigterm_handler_wrap(_signo):
  function run (line 269) | def run():

FILE: bridge/agent_bridge.py
  function add_openai_compatible_support (line 20) | def add_openai_compatible_support(bot_instance):
  class AgentLLMModel (line 63) | class AgentLLMModel(LLMModel):
    method __init__ (line 80) | def __init__(self, bridge: Bridge, bot_type: str = "chat"):
    method model (line 89) | def model(self):
    method model (line 94) | def model(self, value):
    method _resolve_bot_type (line 97) | def _resolve_bot_type(self, model_name: str) -> str:
    method bot (line 126) | def bot(self):
    method call (line 137) | def call(self, request: LLMRequest):
    method call_stream (line 180) | def call_stream(self, request: LLMRequest):
    method _format_response (line 227) | def _format_response(self, response):
    method _format_stream_chunk (line 232) | def _format_stream_chunk(self, chunk):
  class AgentBridge (line 238) | class AgentBridge:
    method __init__ (line 244) | def __init__(self, bridge: Bridge):
    method create_agent (line 253) | def create_agent(self, system_prompt: str, tools: List = None, **kwarg...
    method get_agent (line 307) | def get_agent(self, session_id: str = None) -> Optional[Agent]:
    method _init_default_agent (line 329) | def _init_default_agent(self):
    method _init_agent_for_session (line 334) | def _init_agent_for_session(self, session_id: str):
    method agent_reply (line 339) | def agent_reply(self, query: str, context: Context = None,
    method _create_file_reply (line 461) | def _create_file_reply(self, file_info: dict, text_response: str, cont...
    method _migrate_config_to_env (line 503) | def _migrate_config_to_env(self, workspace_root: str):
    method _persist_messages (line 576) | def _persist_messages(
    method clear_session (line 602) | def clear_session(self, session_id: str):
    method clear_all_sessions (line 613) | def clear_all_sessions(self):
    method refresh_all_skills (line 619) | def refresh_all_skills(self) -> int:
    method _refresh_conditional_tools (line 664) | def _refresh_conditional_tools(agent):

FILE: bridge/agent_event_handler.py
  class AgentEventHandler (line 8) | class AgentEventHandler:
    method __init__ (line 13) | def __init__(self, context=None, original_callback=None):
    method handle_event (line 33) | def handle_event(self, event):
    method _handle_turn_start (line 59) | def _handle_turn_start(self, data):
    method _handle_message_update (line 65) | def _handle_message_update(self, data):
    method _handle_message_end (line 70) | def _handle_message_end(self, data):
    method _handle_tool_execution_start (line 87) | def _handle_tool_execution_start(self, data):
    method _handle_tool_execution_end (line 91) | def _handle_tool_execution_end(self, data):
    method _send_to_channel (line 95) | def _send_to_channel(self, message):
    method log_summary (line 111) | def log_summary(self):

FILE: bridge/agent_initializer.py
  class AgentInitializer (line 17) | class AgentInitializer:
    method __init__ (line 26) | def __init__(self, bridge, agent_bridge):
    method initialize_agent (line 37) | def initialize_agent(self, session_id: Optional[str] = None) -> Agent:
    method _restore_conversation_history (line 125) | def _restore_conversation_history(self, agent, session_id: str) -> None:
    method _filter_text_only_messages (line 166) | def _filter_text_only_messages(messages: list) -> list:
    method _load_env_file (line 240) | def _load_env_file(self):
    method _setup_memory_system (line 252) | def _setup_memory_system(self, workspace_root: str, session_id: Option...
    method _sync_memory (line 322) | def _sync_memory(self, memory_manager, session_id: Optional[str] = None):
    method _load_tools (line 340) | def _load_tools(self, workspace_root: str, memory_manager, memory_tool...
    method _initialize_scheduler (line 389) | def _initialize_scheduler(self, tools: List, session_id: Optional[str]...
    method _initialize_skill_manager (line 428) | def _initialize_skill_manager(self, workspace_root: str, session_id: O...
    method _get_runtime_info (line 438) | def _get_runtime_info(self, workspace_root: str):
    method _migrate_config_to_env (line 475) | def _migrate_config_to_env(self, workspace_root: str):
    method _start_daily_flush_timer (line 530) | def _start_daily_flush_timer(self):
    method _flush_all_agents (line 557) | def _flush_all_agents(self):

FILE: bridge/bridge.py
  class Bridge (line 13) | class Bridge(object):
    method __init__ (line 14) | def __init__(self):
    method get_bot (line 83) | def get_bot(self, typename):
    method get_bot_type (line 96) | def get_bot_type(self, typename):
    method fetch_reply_content (line 99) | def fetch_reply_content(self, query, context: Context) -> Reply:
    method fetch_voice_to_text (line 102) | def fetch_voice_to_text(self, voiceFile) -> Reply:
    method fetch_text_to_voice (line 105) | def fetch_text_to_voice(self, text) -> Reply:
    method fetch_translate (line 108) | def fetch_translate(self, text, from_lang="", to_lang="en") -> Reply:
    method find_chat_bot (line 111) | def find_chat_bot(self, bot_type: str):
    method reset_bot (line 116) | def reset_bot(self):
    method get_agent_bridge (line 122) | def get_agent_bridge(self):
    method fetch_agent_reply (line 131) | def fetch_agent_reply(self, query: str, context: Context = None,

FILE: bridge/context.py
  class ContextType (line 6) | class ContextType(Enum):
    method __str__ (line 22) | def __str__(self):
  class Context (line 26) | class Context:
    method __init__ (line 27) | def __init__(self, type: ContextType = None, content=None, kwargs=dict...
    method __contains__ (line 32) | def __contains__(self, key):
    method __getitem__ (line 40) | def __getitem__(self, key):
    method get (line 48) | def get(self, key, default=None):
    method __setitem__ (line 54) | def __setitem__(self, key, value):
    method __delitem__ (line 62) | def __delitem__(self, key):
    method __str__ (line 70) | def __str__(self):

FILE: bridge/reply.py
  class ReplyType (line 6) | class ReplyType(Enum):
    method __str__ (line 21) | def __str__(self):
  class Reply (line 25) | class Reply:
    method __init__ (line 26) | def __init__(self, type: ReplyType = None, content=None):
    method __str__ (line 30) | def __str__(self):
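
A hedged sketch of how Context and Reply compose, based on the constructors and dict-style access listed above. The enum members used (ContextType.TEXT, ReplyType.TEXT) and the `.content` attribute on Context are assumptions; the index only shows the enum classes and constructor parameters.

```python
from bridge.context import Context, ContextType
from bridge.reply import Reply, ReplyType

ctx = Context(ContextType.TEXT, "hello")
ctx["session_id"] = "user-1"                 # Context supports item assignment
reply = Reply(ReplyType.TEXT, f"echo: {ctx.content}")
print("session_id" in ctx, reply)
```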

FILE: channel/channel.py
  class Channel (line 12) | class Channel(object):
    method __init__ (line 16) | def __init__(self):
    method startup (line 22) | def startup(self):
    method report_startup_success (line 28) | def report_startup_success(self):
    method report_startup_error (line 32) | def report_startup_error(self, error: str):
    method wait_startup (line 36) | def wait_startup(self, timeout: float = 3) -> (bool, str):
    method stop (line 48) | def stop(self):
    method handle_text (line 54) | def handle_text(self, msg):
    method send (line 62) | def send(self, reply: Reply, context: Context):
    method build_reply_content (line 71) | def build_reply_content(self, query, context: Context = None) -> Reply:
    method build_voice_to_text (line 104) | def build_voice_to_text(self, voice_file) -> Reply:
    method build_text_to_voice (line 107) | def build_text_to_voice(self, text) -> Reply:

FILE: channel/channel_factory.py
  function create_channel (line 8) | def create_channel(channel_type) -> Channel:

FILE: channel/chat_channel.py
  class ChatChannel (line 24) | class ChatChannel(Channel):
    method __init__ (line 28) | def __init__(self):
    method _compose_context (line 43) | def _compose_context(self, ctype: ContextType, content, **kwargs):
    method _handle (line 178) | def _handle(self, context: Context):
    method _generate_reply (line 194) | def _generate_reply(self, context: Context, reply: Reply = Reply()) ->...
    method _decorate_reply (line 248) | def _decorate_reply(self, context: Context, reply: Reply) -> Reply:
    method _send_reply (line 287) | def _send_reply(self, context: Context, reply: Reply):
    method _extract_and_send_images (line 313) | def _extract_and_send_images(self, reply: Reply, context: Context):
    method _send (line 393) | def _send(self, reply: Reply, context: Context, retry_cnt=0):
    method _success_callback (line 405) | def _success_callback(self, session_id, **kwargs):  # callback when the thread ends normally
    method _fail_callback (line 408) | def _fail_callback(self, session_id, exception, **kwargs):  # callback when the thread ends abnormally...
    method _thread_pool_callback (line 411) | def _thread_pool_callback(self, session_id, **kwargs):
    method produce (line 428) | def produce(self, context: Context):
    method consume (line 442) | def consume(self):
    method cancel_session (line 469) | def cancel_session(self, session_id):
    method cancel_all_session (line 479) | def cancel_all_session(self):
  function check_prefix (line 490) | def check_prefix(content, prefix_list):
  function check_contain (line 499) | def check_contain(content, keyword_list):

FILE: channel/chat_message.py
  class ChatMessage (line 36) | class ChatMessage(object):
    method __init__ (line 62) | def __init__(self, _rawmsg):
    method prepare (line 65) | def prepare(self):
    method __str__ (line 70) | def __str__(self):

FILE: channel/dingtalk/dingtalk_channel.py
  class CustomAICardReplier (line 33) | class CustomAICardReplier(CardReplier):
    method __init__ (line 34) | def __init__(self, dingtalk_client, incoming_message):
    method start (line 37) | def start(
  function _check (line 68) | def _check(func):
  class DingTalkChanel (line 88) | class DingTalkChanel(ChatChannel, dingtalk_stream.ChatbotHandler):
    method setup_logger (line 92) | def setup_logger(self):
    method __init__ (line 97) | def __init__(self):
    method _open_connection (line 118) | def _open_connection(self, client):
    method startup (line 147) | def startup(self):
    method stop (line 230) | def stop(self):
    method get_access_token (line 243) | def get_access_token(self):
    method send_single_message (line 279) | def send_single_message(self, user_id: str, content: str, robot_code: ...
    method send_group_message (line 320) | def send_group_message(self, conversation_id: str, content: str, robot...
    method upload_media (line 366) | def upload_media(self, file_path: str, media_type: str = "image") -> str:
    method send_image_with_media_id (line 440) | def send_image_with_media_id(self, access_token: str, media_id: str, i...
    method send_image_message (line 492) | def send_image_message(self, receiver: str, media_id: str, is_group: b...
    method get_image_download_url (line 555) | def get_image_download_url(self, download_code: str) -> str:
    method process (line 575) | async def process(self, callback: dingtalk_stream.CallbackMessage):
    method handle_single (line 606) | def handle_single(self, cmsg: DingTalkMessage):
    method handle_group (line 664) | def handle_group(self, cmsg: DingTalkMessage):
    method send (line 724) | def send(self, reply: Reply, context: Context):
    method _send_file_message (line 899) | def _send_file_message(self, access_token: str, incoming_message, msg_...
    method generate_button_markdown_content (line 948) | def generate_button_markdown_content(self, context, reply):

FILE: channel/dingtalk/dingtalk_message.py
  class DingTalkMessage (line 16) | class DingTalkMessage(ChatMessage):
    method __init__ (line 17) | def __init__(self, event: ChatbotMessage, image_download_handler):
  function download_image_file (line 116) | def download_image_file(image_url, temp_dir):

FILE: channel/feishu/feishu_channel.py
  function _ensure_lark_imported (line 49) | def _ensure_lark_imported():
  class FeiShuChanel (line 59) | class FeiShuChanel(ChatChannel):
    method __init__ (line 65) | def __init__(self):
    method startup (line 84) | def startup(self):
    method _fetch_bot_open_id (line 95) | def _fetch_bot_open_id(self):
    method stop (line 114) | def stop(self):
    method _startup_webhook (line 145) | def _startup_webhook(self):
    method _startup_websocket (line 162) | def _startup_websocket(self):
    method _is_mention_bot (line 264) | def _is_mention_bot(self, mentions: list) -> bool:
    method _handle_message_event (line 285) | def _handle_message_event(self, event: dict):
    method send (line 390) | def send(self, reply: Reply, context: Context):
    method fetch_access_token (line 484) | def fetch_access_token(self) -> str:
    method _upload_image_url (line 505) | def _upload_image_url(self, img_url, access_token):
    method _get_video_duration (line 556) | def _get_video_duration(self, file_path: str) -> int:
    method _upload_video_url (line 594) | def _upload_video_url(self, video_url, access_token):
    method _upload_file_url (line 690) | def _upload_file_url(self, file_url, access_token):
    method _compose_context (line 794) | def _compose_context(self, ctype: ContextType, content, **kwargs):
  class FeishuController (line 841) | class FeishuController:
    method GET (line 850) | def GET(self):
    method POST (line 853) | def POST(self):

FILE: channel/feishu/feishu_message.py
  class FeishuMessage (line 13) | class FeishuMessage(ChatMessage):
    method __init__ (line 14) | def __init__(self, event: dict, is_group=False, access_token=None):

FILE: channel/file_cache.py
  class FileCache (line 11) | class FileCache:
    method __init__ (line 14) | def __init__(self, ttl=120):
    method add (line 22) | def add(self, session_id: str, file_path: str, file_type: str = "image"):
    method get (line 43) | def get(self, session_id: str) -> list:
    method clear (line 66) | def clear(self, session_id: str):
    method cleanup_expired (line 77) | def cleanup_expired(self):
  function get_file_cache (line 98) | def get_file_cache() -> FileCache:
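
A usage sketch of the per-session FileCache listed above; entries expire after `ttl` seconds (default 120 per the constructor signature). The structure of the items returned by get() is not shown in this index.

```python
from channel.file_cache import get_file_cache

cache = get_file_cache()
cache.add("session-1", "/tmp/photo.jpg", file_type="image")  # hypothetical session id and path
for item in cache.get("session-1"):   # returns a list of cached entries
    print(item)
cache.cleanup_expired()
```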

FILE: channel/qq/qq_channel.py
  class QQChannel (line 54) | class QQChannel(ChatChannel):
    method __init__ (line 56) | def __init__(self):
    method startup (line 86) | def startup(self):
    method stop (line 106) | def stop(self):
    method _refresh_access_token (line 121) | def _refresh_access_token(self):
    method _get_access_token (line 137) | def _get_access_token(self) -> str:
    method _get_auth_headers (line 143) | def _get_auth_headers(self) -> dict:
    method _get_ws_url (line 153) | def _get_ws_url(self) -> str:
    method _start_ws (line 168) | def _start_ws(self):
    method _ws_send (line 223) | def _ws_send(self, data: dict):
    method _send_identify (line 231) | def _send_identify(self):
    method _send_resume (line 247) | def _send_resume(self):
    method _start_heartbeat (line 258) | def _start_heartbeat(self, interval_ms: int):
    method _handle_ws_message (line 283) | def _handle_ws_message(self, data: dict):
    method _handle_msg_event (line 351) | def _handle_msg_event(self, event_data: dict, event_type: str):
    method _compose_context (line 414) | def _compose_context(self, ctype: ContextType, content, **kwargs):
    method send (line 446) | def send(self, reply: Reply, context: Context):
    method _get_next_msg_seq (line 479) | def _get_next_msg_seq(self, msg_id: str) -> int:
    method _build_msg_url_and_base_body (line 484) | def _build_msg_url_and_base_body(self, msg: QQMessage, event_type: str...
    method _post_message (line 518) | def _post_message(self, url: str, body: dict, event_type: str):
    method _active_send_text (line 533) | def _active_send_text(self, content: str, receiver: str, is_group: bool):
    method _send_text (line 553) | def _send_text(self, content: str, msg: QQMessage, event_type: str, ms...
    method _upload_rich_media (line 566) | def _upload_rich_media(self, file_url: str, file_type: int, msg: QQMes...
    method _upload_rich_media_base64 (line 609) | def _upload_rich_media_base64(self, file_path: str, file_type: int, ms...
    method _send_media_msg (line 655) | def _send_media_msg(self, file_info: str, msg: QQMessage, event_type: ...
    method _send_image (line 664) | def _send_image(self, img_path_or_url: str, msg: QQMessage, event_type...
    method _send_file (line 689) | def _send_file(self, file_path_or_url: str, msg: QQMessage, event_type...
    method _send_media (line 714) | def _send_media(self, path_or_url: str, msg: QQMessage, event_type: str,

FILE: channel/qq/qq_message.py
  function _get_tmp_dir (line 11) | def _get_tmp_dir() -> str:
  class QQMessage (line 19) | class QQMessage(ChatMessage):
    method __init__ (line 22) | def __init__(self, event_data: dict, event_type: str):

FILE: channel/terminal/terminal_channel.py
  class TerminalMessage (line 11) | class TerminalMessage(ChatMessage):
    method __init__ (line 12) | def __init__(
  class TerminalChannel (line 29) | class TerminalChannel(ChatChannel):
    method send (line 32) | def send(self, reply: Reply, context: Context):
    method startup (line 63) | def startup(self):
    method get_input (line 87) | def get_input(self):

FILE: channel/web/static/js/console.js
  constant APP_VERSION (line 8) | const APP_VERSION = 'v2.0.3';
  constant I18N (line 13) | const I18N = {
  function t (line 110) | function t(key) {
  function applyI18n (line 114) | function applyI18n() {
  function toggleLanguage (line 127) | function toggleLanguage() {
  function applyTheme (line 138) | function applyTheme() {
  function toggleTheme (line 153) | function toggleTheme() {
  constant VIEW_META (line 162) | const VIEW_META = {
  function navigateTo (line 174) | function navigateTo(viewId) {
  function toggleSidebar (line 191) | function toggleSidebar() {
  function closeSidebar (line 203) | function closeSidebar() {
  function createMd (line 232) | function createMd() {
  function renderMarkdown (line 255) | function renderMarkdown(text) {
  constant SESSION_ID_KEY (line 269) | const SESSION_ID_KEY = 'cow_session_id';
  function generateSessionId (line 271) | function generateSessionId() {
  function loadOrCreateSessionId (line 279) | function loadOrCreateSessionId() {
  function updateSendBtnState (line 315) | function updateSendBtnState() {
  function renderAttachmentPreview (line 319) | function renderAttachmentPreview() {
  function removeAttachment (line 350) | function removeAttachment(idx) {
  function handleFileSelect (line 356) | async function handleFileSelect(files) {
  function sendMessage (line 464) | function sendMessage() {
  function startSSE (line 518) | function startSSE(requestId, loadingEl, timestamp) {
  function startPolling (line 675) | function startPolling() {
  function createUserMessageEl (line 706) | function createUserMessageEl(content, timestamp, attachments) {
  function renderToolCallsHtml (line 734) | function renderToolCallsHtml(toolCalls) {
  function createBotMessageEl (line 762) | function createBotMessageEl(content, timestamp, requestId, toolCalls) {
  function addUserMessage (line 781) | function addUserMessage(content, timestamp, attachments) {
  function addBotMessage (line 787) | function addBotMessage(content, timestamp, requestId) {
  function loadHistory (line 795) | function loadHistory(page) {
  function addLoadingIndicator (line 866) | function addLoadingIndicator() {
  function newChat (line 884) | function newChat() {
  function formatTime (line 949) | function formatTime(date) {
  function escapeHtml (line 953) | function escapeHtml(str) {
  function ChannelsHandler_maskSecret (line 959) | function ChannelsHandler_maskSecret(val) {
  function formatToolArgs (line 964) | function formatToolArgs(args) {
  function scrollChatToBottom (line 973) | function scrollChatToBottom() {
  function applyHighlighting (line 977) | function applyHighlighting(container) {
  function initDropdown (line 999) | function initDropdown(el, options, selectedValue, onChange) {
  function getDropdownValue (line 1046) | function getDropdownValue(el) { return el._ddValue || ''; }
  function initConfigView (line 1049) | function initConfigView(data) {
  function detectProvider (line 1074) | function detectProvider(model) {
  function onProviderChange (line 1083) | function onProviderChange(pid) {
  function onModelSelectChange (line 1147) | function onModelSelectChange(val) {
  function syncModelSelection (line 1159) | function syncModelSelection(model) {
  function getSelectedModel (line 1180) | function getSelectedModel() {
  function toggleApiKeyVisibility (line 1187) | function toggleApiKeyVisibility() {
  function showStatus (line 1199) | function showStatus(elId, msgKey, isError) {
  function saveModelConfig (line 1208) | function saveModelConfig() {
  function saveAgentConfig (line 1274) | function saveAgentConfig() {
  function loadConfigView (line 1300) | function loadConfigView() {
  constant TOOL_ICONS (line 1313) | const TOOL_ICONS = {
  function getToolIcon (line 1328) | function getToolIcon(name) {
  function loadSkillsView (line 1332) | function loadSkillsView() {
  function loadToolsSection (line 1337) | function loadToolsSection() {
  function loadSkillsSection (line 1378) | function loadSkillsSection() {
  function renderSkillCard (line 1408) | function renderSkillCard(card, sk) {
  function toggleSkill (line 1436) | function toggleSkill(name, currentlyEnabled) {
  function loadMemoryView (line 1472) | function loadMemoryView(page) {
  function openMemoryFile (line 1521) | function openMemoryFile(filename) {
  function closeMemoryViewer (line 1533) | function closeMemoryViewer() {
  function showConfirmDialog (line 1541) | function showConfirmDialog({ title, message, okText, cancelText, onConfi...
  function loadChannelsView (line 1571) | function loadChannelsView() {
  function renderActiveChannels (line 1585) | function renderActiveChannels() {
  function buildChannelFieldsHtml (line 1649) | function buildChannelFieldsHtml(chName, fields) {
  function bindSecretFieldEvents (line 1687) | function bindSecretFieldEvents(container) {
  function showChannelStatus (line 1699) | function showChannelStatus(chName, msgKey, isError) {
  function saveChannelConfig (line 1709) | function saveChannelConfig(chName) {
  function disconnectChannel (line 1744) | function disconnectChannel(chName) {
  function openAddChannelPanel (line 1772) | function openAddChannelPanel() {
  function closeAddChannelPanel (line 1830) | function closeAddChannelPanel() {
  function onAddChannelSelect (line 1838) | function onAddChannelSelect(chName) {
  function submitAddChannel (line 1856) | function submitAddChannel() {
  function loadTasksView (line 1907) | function loadTasksView() {
  function startLogStream (line 1958) | function startLogStream() {
  function stopLogStream (line 1984) | function stopLogStream() {

FILE: channel/web/web_channel.py
  function _get_upload_dir (line 27) | def _get_upload_dir() -> str:
  class WebMessage (line 35) | class WebMessage(ChatMessage):
    method __init__ (line 36) | def __init__(
  class WebChannel (line 54) | class WebChannel(ChatChannel):
    method __init__ (line 63) | def __init__(self):
    method _generate_msg_id (line 71) | def _generate_msg_id(self):
    method _generate_request_id (line 76) | def _generate_request_id(self):
    method send (line 80) | def send(self, reply: Reply, context: Context):
    method _make_sse_callback (line 127) | def _make_sse_callback(self, request_id: str):
    method upload_file (line 166) | def upload_file(self):
    method post_message (line 208) | def post_message(self):
    method stream_response (line 281) | def stream_response(self, request_id: str):
    method poll_response (line 310) | def poll_response(self):
    method chat_page (line 344) | def chat_page(self):
    method startup (line 350) | def startup(self):
    method stop (line 413) | def stop(self):
  class RootHandler (line 423) | class RootHandler:
    method GET (line 424) | def GET(self):
  class MessageHandler (line 429) | class MessageHandler:
    method POST (line 430) | def POST(self):
  class UploadHandler (line 434) | class UploadHandler:
    method POST (line 435) | def POST(self):
  class UploadsHandler (line 440) | class UploadsHandler:
    method GET (line 441) | def GET(self, file_name):
  class PollHandler (line 462) | class PollHandler:
    method POST (line 463) | def POST(self):
  class StreamHandler (line 467) | class StreamHandler:
    method GET (line 468) | def GET(self):
  class ChatHandler (line 482) | class ChatHandler:
    method GET (line 483) | def GET(self):
  class ConfigHandler (line 490) | class ConfigHandler:
    method _mask_key (line 588) | def _mask_key(value: str) -> str:
    method GET (line 594) | def GET(self):
    method POST (line 642) | def POST(self):
  class ChannelsHandler (line 684) | class ChannelsHandler:
    method _mask_secret (line 754) | def _mask_secret(value: str) -> str:
    method _parse_channel_list (line 760) | def _parse_channel_list(raw) -> list:
    method _active_channel_set (line 768) | def _active_channel_set(cls) -> set:
    method GET (line 771) | def GET(self):
    method POST (line 805) | def POST(self):
    method _handle_save (line 830) | def _handle_save(self, channel_name: str, updates: dict):
    method _handle_connect (line 892) | def _handle_connect(self, channel_name: str, updates: dict):
    method _handle_disconnect (line 974) | def _handle_disconnect(self, channel_name: str):
  function _get_workspace_root (line 1019) | def _get_workspace_root():
  class ToolsHandler (line 1025) | class ToolsHandler:
    method GET (line 1026) | def GET(self):
  class SkillsHandler (line 1049) | class SkillsHandler:
    method GET (line 1050) | def GET(self):
    method POST (line 1064) | def POST(self):
  class MemoryHandler (line 1089) | class MemoryHandler:
    method GET (line 1090) | def GET(self):
  class MemoryContentHandler (line 1104) | class MemoryContentHandler:
    method GET (line 1105) | def GET(self):
  class SchedulerHandler (line 1123) | class SchedulerHandler:
    method GET (line 1124) | def GET(self):
  class HistoryHandler (line 1138) | class HistoryHandler:
    method GET (line 1139) | def GET(self):
  class LogsHandler (line 1169) | class LogsHandler:
    method GET (line 1170) | def GET(self):
  class AssetsHandler (line 1217) | class AssetsHandler:
    method GET (line 1218) | def GET(self, file_path):  # changed default parameter

FILE: channel/wechatcom/wechatcomapp_channel.py
  class WechatComAppChannel (line 29) | class WechatComAppChannel(ChatChannel):
    method __init__ (line 32) | def __init__(self):
    method startup (line 46) | def startup(self):
    method stop (line 65) | def stop(self):
    method send (line 74) | def send(self, reply: Reply, context: Context):
  class Query (line 161) | class Query:
    method GET (line 162) | def GET(self):
    method POST (line 176) | def POST(self):

FILE: channel/wechatcom/wechatcomapp_client.py
  class WechatComAppClient (line 6) | class WechatComAppClient(WeChatClient):
    method __init__ (line 7) | def __init__(self, corp_id, secret, access_token=None, session=None, t...
    method _active_refresh (line 12) | def _active_refresh(self):
    method fetch_access_token (line 36) | def fetch_access_token(self):

FILE: channel/wechatcom/wechatcomapp_message.py
  class WechatComAppMessage (line 9) | class WechatComAppMessage(ChatMessage):
    method __init__ (line 10) | def __init__(self, msg, client: WeChatClient, is_group=False):

FILE: channel/wechatmp/active_reply.py
  class Query (line 17) | class Query:
    method GET (line 18) | def GET(self):
    method POST (line 21) | def POST(self):

FILE: channel/wechatmp/common.py
  class WeChatAPIException (line 11) | class WeChatAPIException(Exception):
  function verify_server (line 15) | def verify_server(data):

FILE: channel/wechatmp/passive_reply.py
  class Query (line 19) | class Query:
    method GET (line 20) | def GET(self):
    method POST (line 23) | def POST(self):

FILE: channel/wechatmp/wechatmp_channel.py
  class WechatMPChannel (line 39) | class WechatMPChannel(ChatChannel):
    method __init__ (line 40) | def __init__(self, passive_reply=True):
    method startup (line 66) | def startup(self):
    method stop (line 82) | def stop(self):
    method start_loop (line 91) | def start_loop(self, loop):
    method delete_media (line 95) | async def delete_media(self, media_id):
    method send (line 101) | def send(self, reply: Reply, context: Context):
    method _success_callback (line 325) | def _success_callback(self, session_id, context, **kwargs):  # when the thread ends abnormally...
    method _fail_callback (line 330) | def _fail_callback(self, session_id, exception, context, **kwargs):  #...

FILE: channel/wechatmp/wechatmp_client.py
  class WechatMPClient (line 11) | class WechatMPClient(WeChatClient):
    method __init__ (line 12) | def __init__(self, appid, secret, access_token=None, session=None, tim...
    method clear_quota (line 18) | def clear_quota(self):
    method clear_quota_v2 (line 21) | def clear_quota_v2(self):
    method fetch_access_token (line 24) | def fetch_access_token(self):  # overrides the parent method; takes a lock so multiple threads do not fetch the access_token repeatedly
    method _request (line 35) | def _request(self, method, url_or_endpoint, **kwargs):  # overrides the parent method; when an API...

FILE: channel/wechatmp/wechatmp_message.py
  class WeChatMPMessage (line 9) | class WeChatMPMessage(ChatMessage):
    method __init__ (line 10) | def __init__(self, msg, client=None):

FILE: channel/wecom_bot/wecom_bot_channel.py
  class WecomBotChannel (line 37) | class WecomBotChannel(ChatChannel):
    method __init__ (line 39) | def __init__(self):
    method startup (line 60) | def startup(self):
    method stop (line 73) | def stop(self):
    method _start_ws (line 88) | def _start_ws(self):
    method _ws_send (line 132) | def _ws_send(self, data: dict):
    method _gen_req_id (line 136) | def _gen_req_id(self) -> str:
    method _send_subscribe (line 143) | def _send_subscribe(self):
    method _start_heartbeat (line 153) | def _start_heartbeat(self):
    method _send_and_wait (line 176) | def _send_and_wait(self, data: dict, timeout: float = 15) -> dict:
    method _handle_ws_message (line 189) | def _handle_ws_message(self, data: dict):
    method _handle_msg_callback (line 230) | def _handle_msg_callback(self, data: dict):
    method _handle_event_callback (line 311) | def _handle_event_callback(self, data: dict):
    method _make_stream_callback (line 327) | def _make_stream_callback(self, req_id: str):
    method _compose_context (line 387) | def _compose_context(self, ctype: ContextType, content, **kwargs):
    method send (line 422) | def send(self, reply: Reply, context: Context):
    method _send_text (line 449) | def _send_text(self, content: str, receiver: str, is_group: bool, req_...
    method _send_image (line 474) | def _send_image(self, img_path_or_url: str, receiver: str, is_group: b...
    method _ensure_image_format (line 550) | def _ensure_image_format(file_path: str) -> str:
    method _compress_image (line 584) | def _compress_image(file_path: str, max_bytes: int) -> str:
    method _send_file (line 618) | def _send_file(self, file_path: str, receiver: str, is_group: bool,
    method _active_send_markdown (line 668) | def _active_send_markdown(self, content: str, receiver: str, is_group:...
    method _upload_media (line 685) | def _upload_media(self, file_path: str, media_type: str = "file") -> str:

FILE: channel/wecom_bot/wecom_bot_message.py
  function _guess_ext_from_bytes (line 36) | def _guess_ext_from_bytes(data: bytes) -> str:
  function _decrypt_media (line 56) | def _decrypt_media(url: str, aeskey: str) -> bytes:
  function _get_tmp_dir (line 79) | def _get_tmp_dir() -> str:
  class WecomBotMessage (line 87) | class WecomBotMessage(ChatMessage):
    method __init__ (line 90) | def __init__(self, msg_body: dict, is_group: bool = False):

FILE: common/cloud_client.py
  class CloudClient (line 37) | class CloudClient(LinkAIClient):
    method __init__ (line 38) | def __init__(self, api_key: str, channel, host: str = ""):
    method skill_service (line 48) | def skill_service(self):
    method memory_service (line 65) | def memory_service(self):
    method chat_service (line 80) | def chat_service(self):
    method on_message (line 96) | def on_message(self, push_msg: PushMsg):
    method on_config (line 109) | def on_config(self, config: dict):
    method _dispatch_channel_action (line 188) | def _dispatch_channel_action(self, action: str, data: dict):
    method _handle_channel_create (line 202) | def _handle_channel_create(self, channel_type: str, data: dict):
    method _handle_channel_update (line 225) | def _handle_channel_update(self, channel_type: str, data: dict):
    method _handle_channel_delete (line 257) | def _handle_channel_delete(self, channel_type: str, data: dict):
    method _set_channel_credentials (line 272) | def _set_channel_credentials(local_config: dict, channel_type: str,
    method _clear_channel_credentials (line 298) | def _clear_channel_credentials(local_config: dict, channel_type: str):
    method _parse_channel_types (line 312) | def _parse_channel_types(local_config: dict) -> list:
    method _add_channel_type (line 321) | def _add_channel_type(local_config: dict, channel_type: str):
    method _remove_channel_type (line 328) | def _remove_channel_type(local_config: dict, channel_type: str):
    method _do_add_channel (line 337) | def _do_add_channel(self, channel_type: str):
    method _do_remove_channel (line 347) | def _do_remove_channel(self, channel_type: str):
    method _report_channel_startup (line 354) | def _report_channel_startup(self, channel_type: str):
    method on_skill (line 371) | def on_skill(self, data: dict) -> dict:
    method on_memory (line 392) | def on_memory(self, data: dict) -> dict:
    method on_chat (line 413) | def on_chat(self, data: dict, send_chunk_fn):
    method on_history (line 438) | def on_history(self, data: dict) -> dict:
    method _query_history (line 455) | def _query_history(self, payload: dict) -> dict:
    method _restart_channel (line 494) | def _restart_channel(self, new_channel_type: str):
    method _do_restart_channel (line 504) | def _do_restart_channel(self, mgr, new_channel_type: str):
    method _save_config_to_file (line 523) | def _save_config_to_file(self, local_config: dict):
  function get_root_domain (line 546) | def get_root_domain(host: str = "") -> str:
  function get_deployment_id (line 565) | def get_deployment_id() -> str:
  function get_website_base_url (line 570) | def get_website_base_url() -> str:
  function build_website_prompt (line 590) | def build_website_prompt(workspace_dir: str) -> list:
  function start (line 621) | def start(channel, channel_mgr=None):
  function _report_existing_channels (line 638) | def _report_existing_channels(client: CloudClient, mgr):
  function _build_config (line 650) | def _build_config():

FILE: common/dequeue.py
  class Dequeue (line 6) | class Dequeue(Queue):
    method putleft (line 7) | def putleft(self, item, block=True, timeout=None):
    method putleft_nowait (line 29) | def putleft_nowait(self, item):
    method _putleft (line 32) | def _putleft(self, item):
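
A minimal usage sketch for Dequeue, assuming the import path follows the file layout above (common.dequeue) and that putleft inserts at the consuming end of an otherwise standard queue.Queue:

  from common.dequeue import Dequeue

  q = Dequeue()
  q.put("normal message")      # regular FIFO behaviour inherited from queue.Queue
  q.putleft("urgent message")  # insert at the head so it is consumed first
  print(q.get())               # expected: "urgent message"
  print(q.get())               # expected: "normal message"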

FILE: common/expired_dict.py
  class ExpiredDict (line 4) | class ExpiredDict(dict):
    method __init__ (line 5) | def __init__(self, expires_in_seconds):
    method __getitem__ (line 9) | def __getitem__(self, key):
    method __setitem__ (line 17) | def __setitem__(self, key, value):
    method get (line 21) | def get(self, key, default=None):
    method __contains__ (line 27) | def __contains__(self, key):
    method keys (line 34) | def keys(self):
    method items (line 38) | def items(self):
    method __iter__ (line 41) | def __iter__(self):
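
A minimal sketch of ExpiredDict, assuming the import path common.expired_dict and that expires_in_seconds is a TTL applied to every entry (both inferred from the file layout and constructor signature above):

  import time
  from common.expired_dict import ExpiredDict

  cache = ExpiredDict(expires_in_seconds=2)   # every entry expires after ~2 seconds
  cache["session-1"] = {"user": "alice"}
  print("session-1" in cache)                 # True while the entry is still fresh
  time.sleep(3)
  print(cache.get("session-1"))               # None once the TTL has elapsed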

FILE: common/log.py
  function _reset_logger (line 5) | def _reset_logger(log):
  function _get_logger (line 30) | def _get_logger():

FILE: common/package_manager.py
  function install (line 9) | def install(package):
  function install_requirements (line 13) | def install_requirements(file):
  function check_dulwich (line 18) | def check_dulwich():

FILE: common/singleton.py
  function singleton (line 1) | def singleton(cls):
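
A minimal sketch of the singleton decorator, assuming common.singleton.singleton wraps a class so repeated instantiation returns one shared instance (the usual behaviour for a decorator with this name):

  from common.singleton import singleton

  @singleton
  class Registry:
      def __init__(self):
          self.items = []

  a = Registry()
  b = Registry()
  print(a is b)   # expected: True, the decorator reuses one shared instance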

FILE: common/sorted_dict.py
  class SortedDict (line 4) | class SortedDict(dict):
    method __init__ (line 5) | def __init__(self, sort_func=lambda k, v: k, init_dict=None, reverse=F...
    method __setitem__ (line 17) | def __setitem__(self, key, value):
    method __delitem__ (line 31) | def __delitem__(self, key):
    method keys (line 40) | def keys(self):
    method items (line 45) | def items(self):
    method _update_heap (line 51) | def _update_heap(self, key):
    method __iter__ (line 61) | def __iter__(self):
    method __repr__ (line 64) | def __repr__(self):
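
A minimal sketch for SortedDict, assuming it keeps keys ordered by sort_func (applied to each key/value pair) and that reverse flips the order, as the constructor signature above suggests; the plugin names and priority values are illustrative:

  from common.sorted_dict import SortedDict

  # Keep plugin names ordered by their numeric priority value, highest first.
  priorities = SortedDict(sort_func=lambda k, v: v, reverse=True)
  priorities["godcmd"] = 999
  priorities["hello"] = 10
  priorities["keyword"] = 50
  print(list(priorities.keys()))   # expected: ["godcmd", "keyword", "hello"]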

FILE: common/time_check.py
  function time_checker (line 7) | def time_checker(f):

FILE: common/tmp_dir.py
  class TmpDir (line 7) | class TmpDir(object):
    method __init__ (line 12) | def __init__(self):
    method path (line 17) | def path(self):
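
A one-line sketch for TmpDir, assuming path() returns (and creates if needed) the directory used for temporary media files; the exact location is an assumption:

  from common.tmp_dir import TmpDir

  tmp_path = TmpDir().path()   # directory used for temporary media files
  print(tmp_path)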

FILE: common/token_bucket.py
  class TokenBucket (line 5) | class TokenBucket:
    method __init__ (line 6) | def __init__(self, tpm, timeout=None):
    method _generate_tokens (line 16) | def _generate_tokens(self):
    method get_token (line 25) | def get_token(self):
    method close (line 35) | def close(self):
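
A minimal rate-limiting sketch with TokenBucket, assuming tpm means tokens per minute and get_token() blocks (up to the optional timeout) until a token is available; both semantics are inferred from the parameter and method names above:

  from common.token_bucket import TokenBucket

  bucket = TokenBucket(tpm=20)     # roughly 20 tokens (requests) per minute
  for i in range(3):
      if bucket.get_token():       # wait for a token before issuing the next request
          print(f"request {i} allowed")
  bucket.close()                   # stop the background token generator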

FILE: common/utils.py
  function fsize (line 7) | def fsize(file):
  function compress_imgfile (line 22) | def compress_imgfile(file, max_size):
  function split_string_by_utf8_length (line 38) | def split_string_by_utf8_length(string, max_length, max_split=0):
  function get_path_suffix (line 55) | def get_path_suffix(path):
  function convert_webp_to_png (line 60) | def convert_webp_to_png(webp_image):
  function remove_markdown_symbol (line 74) | def remove_markdown_symbol(text: str):
  function expand_path (line 81) | def expand_path(path: str) -> str:
  function get_cloud_headers (line 120) | def get_cloud_headers(api_key: str) -> dict:
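
A sketch of the text-splitting helper, assuming split_string_by_utf8_length splits on UTF-8 byte length (useful for channels with per-message byte limits) and that max_length is that byte limit; the limit value is illustrative:

  from common.utils import split_string_by_utf8_length

  long_reply = "一段很长的回复。" * 200
  # Split into chunks of at most 2048 UTF-8 bytes so each piece fits one IM message.
  for chunk in split_string_by_utf8_length(long_reply, max_length=2048):
      print(len(chunk.encode("utf-8")))   # each length should stay <= 2048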

FILE: config.py
  class Config (line 200) | class Config(dict):
    method __init__ (line 201) | def __init__(self, d=None):
    method __getitem__ (line 210) | def __getitem__(self, key):
    method __setitem__ (line 216) | def __setitem__(self, key, value):
    method get (line 222) | def get(self, key, default=None):
    method get_user_data (line 239) | def get_user_data(self, user) -> dict:
    method load_user_datas (line 244) | def load_user_datas(self):
    method save_user_datas (line 255) | def save_user_datas(self):
  function drag_sensitive (line 267) | def drag_sensitive(config):
  function load_config (line 291) | def load_config():
  function get_root (line 400) | def get_root():
  function read_file (line 404) | def read_file(path):
  function conf (line 409) | def conf():
  function get_appdata_dir (line 413) | def get_appdata_dir():
  function subscribe_msg (line 421) | def subscribe_msg():
  function write_plugin_config (line 431) | def write_plugin_config(pconf: dict):
  function remove_plugin_config (line 440) | def remove_plugin_config(name: str):
  function pconf (line 449) | def pconf(plugin_name: str) -> dict:
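
A minimal sketch of the configuration flow, assuming load_config() reads config.json into the global Config object and conf()/pconf() are the accessors listed above; the key and plugin names below are illustrative only:

  import config

  config.load_config()                     # read config.json into the global Config
  model = config.conf().get("model", "")   # "model" is an illustrative key name
  hello_cfg = config.pconf("hello")        # per-plugin config block (may be empty)
  print(model, hello_cfg)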

FILE: models/ali/ali_qwen_bot.py
  class AliQwenBot (line 21) | class AliQwenBot(Bot):
    method __init__ (line 22) | def __init__(self):
    method api_key_client (line 27) | def api_key_client(self):
    method access_key_id (line 30) | def access_key_id(self):
    method access_key_secret (line 33) | def access_key_secret(self):
    method agent_key (line 36) | def agent_key(self):
    method app_id (line 39) | def app_id(self):
    method node_id (line 42) | def node_id(self):
    method temperature (line 45) | def temperature(self):
    method top_p (line 48) | def top_p(self):
    method reply (line 51) | def reply(self, query, context=None):
    method reply_text (line 96) | def reply_text(self, session: AliQwenSession, retry_count=0) -> dict:
    method set_api_key (line 148) | def set_api_key(self):
    method update_api_key_if_expired (line 153) | def update_api_key_if_expired(self):
    method convert_messages_format (line 157) | def convert_messages_format(self, messages) -> Tuple[str, List[ChatQaM...
    method get_completion_content (line 183) | def get_completion_content(self, response, node_id):
    method calc_tokens (line 209) | def calc_tokens(self, messages, completion_content):

FILE: models/ali/ali_qwen_session.py
  class AliQwenSession (line 14) | class AliQwenSession(Session):
    method __init__ (line 15) | def __init__(self, session_id, system_prompt=None, model="qianwen"):
    method discard_exceeding (line 20) | def discard_exceeding(self, max_tokens, cur_tokens=None):
    method calc_tokens (line 51) | def calc_tokens(self):
  function num_tokens_from_messages (line 54) | def num_tokens_from_messages(messages, model):

FILE: models/baidu/baidu_unit_bot.py
  class BaiduUnitBot (line 10) | class BaiduUnitBot(Bot):
    method reply (line 11) | def reply(self, query, context=None):
    method get_token (line 29) | def get_token(self):

FILE: models/baidu/baidu_wenxin.py
  class BaiduWenxinBot (line 17) | class BaiduWenxinBot(Bot):
    method __init__ (line 19) | def __init__(self):
    method reply (line 37) | def reply(self, query, context=None):
    method reply_text (line 77) | def reply_text(self, session: BaiduWenxinSession, retry_count=0):
    method get_access_token (line 113) | def get_access_token(self):

FILE: models/baidu/baidu_wenxin_session.py
  class BaiduWenxinSession (line 13) | class BaiduWenxinSession(Session):
    method __init__ (line 14) | def __init__(self, session_id, system_prompt=None, model="gpt-3.5-turb...
    method discard_exceeding (line 20) | def discard_exceeding(self, max_tokens, cur_tokens=None):
    method calc_tokens (line 42) | def calc_tokens(self):
  function num_tokens_from_messages (line 46) | def num_tokens_from_messages(messages, model):

FILE: models/bot.py
  class Bot (line 10) | class Bot(object):
    method reply (line 11) | def reply(self, query, context: Context = None) -> Reply:

FILE: models/bot_factory.py
  function create_bot (line 7) | def create_bot(bot_type):
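
A minimal sketch tying Bot and create_bot together, assuming config has already been loaded, that the bot_type value below is one the factory recognises (illustrative), and that reply() returns a Reply as declared in models/bot.py:

  from models.bot_factory import create_bot

  bot = create_bot("chatGPT")                  # illustrative bot_type; the factory maps it to a Bot subclass
  reply = bot.reply("Hello, who are you?")     # context is optional per the Bot.reply signature above
  print(reply)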

FILE: models/chatgpt/chat_gpt_bot.py
  class ChatGPTBot (line 23) | class ChatGPTBot(Bot, OpenAIImage, OpenAICompatibleBot):
    method __init__ (line 24) | def __init__(self):
    method get_api_config (line 57) | def get_api_config(self):
    method reply (line 69) | def reply(self, query, context=None):
    method reply_image (line 136) | def reply_image(self, context):
    method reply_text (line 222) | def reply_text(self, session: ChatGPTSession, api_key=None, args=None,...
  class AzureChatGPTBot (line 278) | class AzureChatGPTBot(ChatGPTBot):
    method __init__ (line 279) | def __init__(self):
    method create_img (line 285) | def create_img(self, query, retry_count=0, api_key=None):

FILE: models/chatgpt/chat_gpt_session.py
  class ChatGPTSession (line 15) | class ChatGPTSession(Session):
    method __init__ (line 16) | def __init__(self, session_id, system_prompt=None, model="gpt-3.5-turb...
    method discard_exceeding (line 21) | def discard_exceeding(self, max_tokens, cur_tokens=None):
    method calc_tokens (line 52) | def calc_tokens(self):
  function num_tokens_from_messages (line 57) | def num_tokens_from_messages(messages, model):
  function num_tokens_by_character (line 99) | def num_tokens_by_character(messages):

FILE: models/claudeapi/claude_api_bot.py
  class ClaudeAPIBot (line 30) | class ClaudeAPIBot(Bot, OpenAIImage):
    method __init__ (line 31) | def __init__(self):
    method api_key (line 36) | def api_key(self):
    method api_base (line 40) | def api_base(self):
    method proxy (line 44) | def proxy(self):
    method reply (line 47) | def reply(self, query, context=None):
    method reply_text (line 88) | def reply_text(self, session: BaiduWenxinSession, retry_count=0, tools...
    method _model_mapping (line 200) | def _model_mapping(self, model) -> str:
    method _get_max_tokens (line 211) | def _get_max_tokens(self, model: str) -> int:
    method call_with_tools (line 227) | def call_with_tools(self, messages, tools=None, stream=False, **kwargs):
    method _handle_sync_response (line 290) | def _handle_sync_response(self, request_params):
    method _handle_stream_response (line 362) | def _handle_stream_response(self, request_params):

FILE: models/dashscope/dashscope_bot.py
  class DashscopeBot (line 33) | class DashscopeBot(Bot):
    method __init__ (line 34) | def __init__(self):
    method api_key (line 44) | def api_key(self):
    method _is_multimodal_model (line 48) | def _is_multimodal_model(model_name: str) -> bool:
    method reply (line 52) | def reply(self, query, context=None):
    method reply_text (line 96) | def reply_text(self, session: DashscopeSession, retry_count=0) -> dict:
    method call_with_tools (line 156) | def call_with_tools(self, messages, tools=None, stream=False, **kwargs):
    method _handle_sync_response (line 257) | def _handle_sync_response(self, model_name, messages, parameters):
    method _handle_stream_response (line 328) | def _handle_stream_response(self, model_name, messages, parameters):
    method _response_to_dict (line 424) | def _response_to_dict(response) -> dict:
    method _convert_tools_to_dashscope_format (line 472) | def _convert_tools_to_dashscope_format(self, tools):
    method _prepare_messages_for_multimodal (line 501) | def _prepare_messages_for_multimodal(messages: list) -> list:
    method _convert_messages_to_dashscope_format (line 531) | def _convert_messages_to_dashscope_format(self, messages):
    method _convert_tool_calls_to_openai_format (line 611) | def _convert_tool_calls_to_openai_format(self, tool_calls):

FILE: models/dashscope/dashscope_session.py
  class DashscopeSession (line 5) | class DashscopeSession(Session):
    method __init__ (line 6) | def __init__(self, session_id, system_prompt=None, model="qwen-turbo"):
    method discard_exceeding (line 10) | def discard_exceeding(self, max_tokens, cur_tokens=None):
    method calc_tokens (line 42) | def calc_tokens(self):
  function num_tokens_from_messages (line 46) | def num_tokens_from_messages(messages):

FILE: models/doubao/doubao_bot.py
  class DoubaoBot (line 17) | class DoubaoBot(Bot):
    method __init__ (line 18) | def __init__(self):
    method api_key (line 29) | def api_key(self):
    method base_url (line 33) | def base_url(self):
    method reply (line 39) | def reply(self, query, context=None):
    method reply_text (line 88) | def reply_text(self, session: DoubaoSession, args=None, retry_count: i...
    method call_with_tools (line 152) | def call_with_tools(self, messages, tools=None, stream: bool = False, ...
    method _handle_stream_response (line 231) | def _handle_stream_response(self, request_body: dict):
    method _handle_sync_response (line 349) | def _handle_sync_response(self, request_body: dict):
    method _convert_messages_to_openai_format (line 411) | def _convert_messages_to_openai_format(self, messages):
    method _convert_tools_to_openai_format (line 499) | def _convert_tools_to_openai_format(self, tools):

FILE: models/doubao/doubao_session.py
  class DoubaoSession (line 5) | class DoubaoSession(Session):
    method __init__ (line 6) | def __init__(self, session_id, system_prompt=None, model="doubao-seed-...
    method discard_exceeding (line 11) | def discard_exceeding(self, max_tokens, cur_tokens=None):
    method calc_tokens (line 43) | def calc_tokens(self):
  function num_tokens_from_messages (line 47) | def num_tokens_from_messages(messages, model):

FILE: models/gemini/google_gemini_bot.py
  class GoogleGeminiBot (line 27) | class GoogleGeminiBot(Bot):
    method __init__ (line 29) | def __init__(self):
    method api_key (line 34) | def api_key(self):
    method api_base (line 38) | def api_base(self):
    method reply (line 44) | def reply(self, query, context: Context = None) -> Reply:
    method _convert_to_gemini_messages (line 96) | def _convert_to_gemini_messages(self, messages: list):
    method filter_messages (line 114) | def filter_messages(messages: list):
    method _extract_image_paths_from_text (line 135) | def _extract_image_paths_from_text(content: str):
    method _build_image_inline_part (line 145) | def _build_image_inline_part(image_path: str):
    method _build_inline_part_from_image_url (line 175) | def _build_inline_part_from_image_url(image_url):
    method call_with_tools (line 221) | def call_with_tools(self, messages, tools=None, stream=False, **kwargs):
    method _convert_tools_to_gemini_rest_format (line 454) | def _convert_tools_to_gemini_rest_format(self, tools_list):
    method _handle_gemini_rest_sync_response (line 492) | def _handle_gemini_rest_sync_response(self, response, model_name):
    method _handle_gemini_rest_stream_response (line 582) | def _handle_gemini_rest_stream_response(self, response, model_name):
    method _convert_tools_to_gemini_format (line 703) | def _convert_tools_to_gemini_format(self, openai_tools):
    method _handle_gemini_sync_response (line 723) | def _handle_gemini_sync_response(self, model, messages, request_params...
    method _handle_gemini_stream_response (line 780) | def _handle_gemini_stream_response(self, model, messages, request_para...

FILE: models/linkai/link_ai_bot.py
  class LinkAIBot (line 22) | class LinkAIBot(Bot, OpenAICompatibleBot):
    method __init__ (line 27) | def __init__(self):
    method get_api_config (line 32) | def get_api_config(self):
    method reply (line 44) | def reply(self, query, context: Context = None) -> Reply:
    method _chat (line 61) | def _chat(self, query, context, retry_count=0) -> Reply:
    method _process_image_msg (line 193) | def _process_image_msg(self, app_code: str, session_id: str, query:str...
    method _find_group_mapping_code (line 216) | def _find_group_mapping_code(self, context):
    method _build_vision_msg (line 229) | def _build_vision_msg(self, query: str, path: str):
    method reply_text (line 253) | def reply_text(self, session: ChatGPTSession, app_code="", retry_count...
    method _fetch_app_info (line 318) | def _fetch_app_info(self, app_code: str):
    method create_img (line 329) | def create_img(self, query, retry_count=0, api_key=None):
    method _fetch_knowledge_search_suffix (line 355) | def _fetch_knowledge_search_suffix(self, response) -> str:
    method _fetch_agent_suffix (line 373) | def _fetch_agent_suffix(self, response):
    method _process_url (line 402) | def _process_url(self, text):
    method _send_image (line 411) | def _send_image(self, channel, context, image_urls):
  function _download_file (line 440) | def _download_file(url: str):
  class LinkAISessionManager (line 455) | class LinkAISessionManager(SessionManager):
    method session_msg_query (line 456) | def session_msg_query(self, query, session_id):
    method session_reply (line 461) | def session_reply(self, reply, session_id, total_tokens=None, query=No...
  class LinkAISession (line 475) | class LinkAISession(ChatGPTSession):
    method calc_tokens (line 476) | def calc_tokens(self):
    method discard_exceeding (line 481) | def discard_exceeding(self, max_tokens, cur_tokens=None):
  function _linkai_call_with_tools (line 493) | def _linkai_call_with_tools(self, messages, tools=None, stream=False, **...
  function _handle_linkai_sync_response (line 589) | def _handle_linkai_sync_response(self, base_url, headers, body):
  function _handle_linkai_stream_response (line 615) | def _handle_linkai_stream_response(self, base_url, headers, body):

FILE: models/minimax/minimax_bot.py
  class MinimaxBot (line 19) | class MinimaxBot(Bot):
    method __init__ (line 20) | def __init__(self):
    method api_key (line 30) | def api_key(self):
    method api_base (line 37) | def api_base(self):
    method reply (line 40) | def reply(self, query, context: Context = None) -> Reply:
    method reply_text (line 88) | def reply_text(self, session: MinimaxSession, args=None, retry_count=0...
    method call_with_tools (line 178) | def call_with_tools(self, messages, tools=None, stream=False, **kwargs):
    method _convert_messages_to_openai_format (line 253) | def _convert_messages_to_openai_format(self, messages):
    method _convert_tools_to_openai_format (line 359) | def _convert_tools_to_openai_format(self, tools):
    method _handle_sync_response (line 394) | def _handle_sync_response(self, request_body):
    method _handle_stream_response (line 469) | def _handle_stream_response(self, request_body, show_thinking=False):

FILE: models/minimax/minimax_session.py
  class MinimaxSession (line 15) | class MinimaxSession(Session):
    method __init__ (line 16) | def __init__(self, session_id, system_prompt=None, model="minimax"):
    method add_query (line 21) | def add_query(self, query):
    method add_reply (line 25) | def add_reply(self, reply):
    method discard_exceeding (line 29) | def discard_exceeding(self, max_tokens, cur_tokens=None):
    method calc_tokens (line 60) | def calc_tokens(self):
  function num_tokens_from_messages (line 64) | def num_tokens_from_messages(messages, model):

FILE: models/modelscope/modelscope_bot.py
  class ModelScopeBot (line 17) | class ModelScopeBot(Bot):
    method __init__ (line 18) | def __init__(self):
    method api_key (line 31) | def api_key(self):
    method base_url (line 35) | def base_url(self):
    method reply (line 42) | def reply(self, query, context=None):
    method reply_text (line 107) | def reply_text(self, session: ModelScopeSession, args=None, retry_coun...
    method reply_text_stream (line 175) | def reply_text_stream(self, session: ModelScopeSession, args=None, ret...
    method create_img (line 255) | def create_img(self, query, retry_count=0):

FILE: models/modelscope/modelscope_session.py
  class ModelScopeSession (line 5) | class ModelScopeSession(Session):
    method __init__ (line 6) | def __init__(self, session_id, system_prompt=None, model="Qwen/Qwen2.5...
    method discard_exceeding (line 11) | def discard_exceeding(self, max_tokens, cur_tokens=None):
    method calc_tokens (line 43) | def calc_tokens(self):
  function num_tokens_from_messages (line 47) | def num_tokens_from_messages(messages, model):

FILE: models/moonshot/moonshot_bot.py
  class MoonshotBot (line 17) | class MoonshotBot(Bot):
    method __init__ (line 18) | def __init__(self):
    method api_key (line 31) | def api_key(self):
    method base_url (line 35) | def base_url(self):
    method reply (line 41) | def reply(self, query, context=None):
    method reply_text (line 90) | def reply_text(self, session: MoonshotSession, args=None, retry_count:...
    method call_with_tools (line 152) | def call_with_tools(self, messages, tools=None, stream: bool = False, ...
    method _handle_stream_response (line 232) | def _handle_stream_response(self, request_body: dict):
    method _handle_sync_response (line 350) | def _handle_sync_response(self, request_body: dict):
    method _convert_messages_to_openai_format (line 412) | def _convert_messages_to_openai_format(self, messages):
    method _convert_tools_to_openai_format (line 500) | def _convert_tools_to_openai_format(self, tools):

FILE: models/moonshot/moonshot_session.py
  class MoonshotSession (line 5) | class MoonshotSession(Session):
    method __init__ (line 6) | def __init__(self, session_id, system_prompt=None, model="moonshot-v1-...
    method discard_exceeding (line 11) | def discard_exceeding(self, max_tokens, cur_tokens=None):
    method calc_tokens (line 43) | def calc_tokens(self):
  function num_tokens_from_messages (line 47) | def num_tokens_from_messages(messages, model):

FILE: models/openai/open_ai_bot.py
  class OpenAIBot (line 22) | class OpenAIBot(Bot, OpenAIImage, OpenAICompatibleBot):
    method __init__ (line 23) | def __init__(self):
    method get_api_config (line 45) | def get_api_config(self):
    method reply (line 57) | def reply(self, query, context=None):
    method reply_text (line 97) | def reply_text(self, session: OpenAISession, retry_count=0):
    method call_with_tools (line 137) | def call_with_tools(self, messages, tools=None, stream=False, **kwargs):
    method _handle_sync_response (line 202) | def _handle_sync_response(self, request_params):
    method _handle_stream_response (line 216) | def _handle_stream_response(self, request_params):

FILE: models/openai/open_ai_image.py
  class OpenAIImage (line 12) | class OpenAIImage(object):
    method __init__ (line 13) | def __init__(self):
    method create_img (line 18) | def create_img(self, query, retry_count=0, api_key=None, api_base=None):

FILE: models/openai/open_ai_session.py
  class OpenAISession (line 5) | class OpenAISession(Session):
    method __init__ (line 6) | def __init__(self, session_id, system_prompt=None, model="text-davinci...
    method __str__ (line 11) | def __str__(self):
    method discard_exceeding (line 31) | def discard_exceeding(self, max_tokens, cur_tokens=None):
    method calc_tokens (line 62) | def calc_tokens(self):
  function num_tokens_from_string (line 67) | def num_tokens_from_string(string: str, model: str) -> int:

FILE: models/openai/openai_compat.py
  class ErrorModule (line 22) | class ErrorModule:
  class OpenAIError (line 54) | class OpenAIError(Exception):
  class RateLimitError (line 57) | class RateLimitError(OpenAIError):
  class APIError (line 60) | class APIError(OpenAIError):
  class APIConnectionError (line 63) | class APIConnectionError(OpenAIError):
  class AuthenticationError (line 66) | class AuthenticationError(OpenAIError):
  class InvalidRequestError (line 69) | class InvalidRequestError(OpenAIError):
  class Timeout (line 72) | class Timeout(OpenAIError):
  class ErrorModule (line 79) | class ErrorModule:

FILE: models/openai_compatible_bot.py
  class OpenAICompatibleBot (line 16) | class OpenAICompatibleBot:
    method get_api_config (line 30) | def get_api_config(self):
    method call_with_tools (line 49) | def call_with_tools(self, messages, tools=None, stream=False, **kwargs):
    method _handle_sync_response (line 136) | def _handle_sync_response(self, request_params, api_key, api_base):
    method _handle_stream_response (line 157) | def _handle_stream_response(self, request_params, api_key, api_base):
    method _convert_tools_to_openai_format (line 181) | def _convert_tools_to_openai_format(self, tools):
    method _convert_messages_to_openai_format (line 209) | def _convert_messages_to_openai_format(self, messages):
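
A sketch of driving a bot that mixes in OpenAICompatibleBot for tool calling, assuming OpenAIBot (listed under models/openai/open_ai_bot.py) can be constructed after config is loaded, that call_with_tools accepts OpenAI-style message and tool dicts, and that stream=False returns a single response object; the tool schema below is illustrative:

  import config
  from models.openai.open_ai_bot import OpenAIBot

  config.load_config()
  bot = OpenAIBot()
  messages = [{"role": "user", "content": "List the files in the workspace"}]
  tools = [{
      "type": "function",
      "function": {
          "name": "ls",
          "description": "List directory contents",
          "parameters": {"type": "object", "properties": {}},
      },
  }]
  response = bot.call_with_tools(messages, tools=tools, stream=False)
  print(response)   # exact shape depends on the concrete bot implementation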

FILE: models/session_manager.py
  class Session (line 6) | class Session(object):
    method __init__ (line 7) | def __init__(self, session_id, system_prompt=None):
    method reset (line 16) | def reset(self):
    method set_system_prompt (line 20) | def set_system_prompt(self, system_prompt):
    method add_query (line 24) | def add_query(self, query):
    method add_reply (line 28) | def add_reply(self, reply):
    method discard_exceeding (line 32) | def discard_exceeding(self, max_tokens=None, cur_tokens=None):
    method calc_tokens (line 35) | def calc_tokens(self):
  class SessionManager (line 39) | class SessionManager(object):
    method __init__ (line 40) | def __init__(self, sessioncls, **session_args):
    method build_session (line 49) | def build_session(self, session_id, system_prompt=None):
    method session_query (line 64) | def session_query(self, query, session_id):
    method session_reply (line 75) | def session_reply(self, reply, session_id, total_tokens=None):
    method clear_session (line 86) | def clear_session(self, session_id):
    method clear_all_session (line 90) | def clear_all_session(self):
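
A minimal sketch of SessionManager, assuming it is constructed with a Session subclass (here ChatGPTSession from the listing above), that extra keyword arguments are forwarded to that session class, and that session_query/session_reply record the user query and the model reply for the given session_id:

  from models.session_manager import SessionManager
  from models.chatgpt.chat_gpt_session import ChatGPTSession

  sessions = SessionManager(ChatGPTSession, model="gpt-3.5-turbo")
  session = sessions.session_query("What is the weather today?", "user-123")
  # ... call the model with the session's accumulated messages, then record the answer:
  sessions.session_reply("It is sunny.", "user-123", total_tokens=42)
  sessions.clear_session("user-123")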

FILE: models/xunfei/xunfei_spark_bot.py
  class XunFeiBot (line 37) | class XunFeiBot(Bot):
    method __init__ (line 38) | def __init__(self):
    method reply (line 58) | def reply(self, query, context: Context = None) -> Reply:
    method create_web_socket (line 106) | def create_web_socket(self, prompt, session_id, temperature=0.5):
    method gen_request_id (line 124) | def gen_request_id(self, session_id: str):
    method create_url (line 129) | def create_url(self):
    method gen_params (line 160) | def gen_params(self, appid, domain, question):
  class ReplyItem (line 186) | class ReplyItem:
    method __init__ (line 187) | def __init__(self, reply, usage=None, is_end=False):
  function on_error (line 194) | def on_error(ws, error):
  function on_close (line 199) | def on_close(ws, one, two):
  function on_open (line 205) | def on_open(ws):
  function run (line 210) | def run(ws, *args):
  function on_message (line 221) | def on_message(ws, message):
  function gen_params (line 245) | def gen_params(appid, domain, question, temperature=0.5):

FILE: models/zhipuai/zhipu_ai_image.py
  class ZhipuAIImage (line 7) | class ZhipuAIImage(object):
    method __init__ (line 8) | def __init__(self):
    method create_img (line 19) | def create_img(self, query, retry_count=0, api_key=None, api_base=None):

FILE: models/zhipuai/zhipu_ai_session.py
  class ZhipuAISession (line 5) | class ZhipuAISession(Session):
    method __init__ (line 6) | def __init__(self, session_id, system_prompt=None, model="glm-4"):
    method discard_exceeding (line 13) | def discard_exceeding(self, max_tokens, cur_tokens=None):
    method calc_tokens (line 45) | def calc_tokens(self):
  function num_tokens_from_messages (line 49) | def num_tokens_from_messages(messages, model):

FILE: models/zhipuai/zhipuai_bot.py
  class ZHIPUAIBot (line 18) | class ZHIPUAIBot(Bot, ZhipuAIImage):
    method __init__ (line 19) | def __init__(self):
    method reply (line 36) | def reply(self, query, context=None):
    method reply_text (line 95) | def reply_text(self, session: ZhipuAISession, args=None, retry_count=0...
    method call_with_tools (line 152) | def call_with_tools(self, messages, tools=None, stream=False, **kwargs):
    method _handle_sync_response (line 241) | def _handle_sync_response(self, request_params):
    method _handle_stream_response (line 278) | def _handle_stream_response(self, request_params):
    method _convert_tools_to_zhipu_format (line 349) | def _convert_tools_to_zhipu_format(self, tools):
    method _convert_messages_to_zhipu_format (line 377) | def _convert_messages_to_zhipu_format(self, messages):
    method _convert_tool_calls_to_openai_format (line 449) | def _convert_tool_calls_to_openai_format(self, tool_calls):

FILE: plugins/agent/agent.py
  class AgentPlugin (line 24) | class AgentPlugin(Plugin):
    method __init__ (line 27) | def __init__(self):
    method _load_config (line 37) | def _load_config(self) -> Dict:
    method get_help_text (line 47) | def get_help_text(self, verbose=False, **kwargs):
    method get_available_teams (line 68) | def get_available_teams(self) -> List[str]:
    method create_team_from_config (line 74) | def create_team_from_config(self, team_name: str) -> Optional[AgentTeam]:
    method on_handle_context (line 146) | def on_handle_context(self, e_context: EventContext):
    method create_llm_model (line 256) | def create_llm_model(self, model_name) -> LLMModel:
  function _send_text (line 279) | def _send_text(e_context: EventContext, content: str):

FILE: plugins/banwords/banwords.py
  class Banwords (line 23) | class Banwords(Plugin):
    method __init__ (line 24) | def __init__(self):
    method on_handle_context (line 57) | def on_handle_context(self, e_context: EventContext):
    method on_decorate_reply (line 79) | def on_decorate_reply(self, e_context: EventContext):
    method get_help_text (line 99) | def get_help_text(self, **kwargs):

FILE: plugins/banwords/lib/WordsSearch.py
  class TrieNode (line 14) | class TrieNode():
    method __init__ (line 15) | def __init__(self):
    method Add (line 26) | def Add(self,c):
    method SetResults (line 35) | def SetResults(self,index):
  class TrieNode2 (line 40) | class TrieNode2():
    method __init__ (line 41) | def __init__(self):
    method Add (line 48) | def Add(self,c,node3):
    method SetResults (line 55) | def SetResults(self,index):
    method HasKey (line 61) | def HasKey(self,c):
    method TryGetValue (line 65) | def TryGetValue(self,c):
  class WordsSearch (line 72) | class WordsSearch():
    method __init__ (line 73) | def __init__(self):
    method SetKeywords (line 78) | def SetKeywords(self,keywords):
    method FindFirst (line 165) | def FindFirst(self,text):
    method FindAll (line 186) | def FindAll(self,text):
    method ContainsAny (line 211) | def ContainsAny(self,text):
    method Replace (line 229) | def Replace(self,text, replaceChar = '*'):

FILE: plugins/dungeon/dungeon.py
  class StoryTeller (line 15) | class StoryTeller:
    method __init__ (line 16) | def __init__(self, bot, sessionid, story):
    method reset (line 23) | def reset(self):
    method action (line 27) | def action(self, user_action):
  class Dungeon (line 52) | class Dungeon(Plugin):
    method __init__ (line 53) | def __init__(self):
    method on_handle_context (line 63) | def on_handle_context(self, e_context: EventContext):
    method get_help_text (line 98) | def get_help_text(self, **kwargs):

FILE: plugins/finish/finish.py
  class Finish (line 19) | class Finish(Plugin):
    method __init__ (line 20) | def __init__(self):
    method on_handle_context (line 25) | def on_handle_context(self, e_context: EventContext):
    method get_help_text (line 39) | def get_help_text(self, **kwargs):

FILE: plugins/godcmd/godcmd.py
  function get_help_text (line 138) | def get_help_text(isadmin, isgroup):
  class Godcmd (line 183) | class Godcmd(Plugin):
    method __init__ (line 184) | def __init__(self):
    method on_handle_context (line 214) | def on_handle_context(self, e_context: EventContext):
    method authenticate (line 445) | def authenticate(self, userid, args, isadmin, isgroup) -> Tuple[bool, ...
    method get_help_text (line 467) | def get_help_text(self, isadmin=False, isgroup=False, **kwargs):
    method is_admin_in_group (line 471) | def is_admin_in_group(self, context):
    method model_mapping (line 477) | def model_mapping(self, model) -> str:
    method reload (line 482) | def reload(self):

FILE: plugins/hello/hello.py
  class Hello (line 22) | class Hello(Plugin):
    method __init__ (line 28) | def __init__(self):
    method on_handle_context (line 44) | def on_handle_context(self, e_context: EventContext):
    method get_help_text (line 114) | def get_help_text(self, **kwargs):
    method _load_config_template (line 118) | def _load_config_template(self):

FILE: plugins/keyword/keyword.py
  class Keyword (line 21) | class Keyword(Plugin):
    method __init__ (line 22) | def __init__(self):
    method on_handle_context (line 47) | def on_handle_context(self, e_context: EventContext):
    method get_help_text (line 93) | def get_help_text(self, **kwargs):

FILE: plugins/linkai/linkai.py
  class LinkAI (line 22) | class LinkAI(Plugin):
    method __init__ (line 23) | def __init__(self):
    method on_handle_context (line 37) | def on_handle_context(self, e_context: EventContext):
    method _process_admin_cmd (line 131) | def _process_admin_cmd(self, e_context: EventContext):
    method _is_summary_open (line 196) | def _is_summary_open(self, context) -> bool:
    method _is_chat_task (line 229) | def _is_chat_task(self, e_context: EventContext):
    method _process_chat_task (line 234) | def _process_chat_task(self, e_context: EventContext):
    method _fetch_group_app_code (line 246) | def _fetch_group_app_code(self, group_name: str) -> str:
    method _fetch_app_code (line 257) | def _fetch_app_code(self, context) -> str:
    method get_help_text (line 270) | def get_help_text(self, verbose=False, **kwargs):
    method _load_config_template (line 285) | def _load_config_template(self):
    method reload (line 299) | def reload(self):
  function _send_info (line 303) | def _send_info(e_context: EventContext, content: str):
  function _find_user_id (line 309) | def _find_user_id(context):
  function _set_reply_text (line 316) | def _set_reply_text(content: str, e_context: EventContext, level: ReplyT...
  function _get_trigger_prefix (line 322) | def _get_trigger_prefix():
  function _find_sum_id (line 326) | def _find_sum_id(context):
  function _find_file_id (line 330) | def _find_file_id(context):

FILE: plugins/linkai/midjourney.py
  class TaskType (line 19) | class TaskType(Enum):
    method __str__ (line 25) | def __str__(self):
  class Status (line 29) | class Status(Enum):
    method __str__ (line 35) | def __str__(self):
  class TaskMode (line 39) | class TaskMode(Enum):
  class MJTask (line 52) | class MJTask:
    method __init__ (line 53) | def __init__(self, id, user_id: str, task_type: TaskType, raw_prompt=N...
    method __str__ (line 65) | def __str__(self):
  class MJBot (line 70) | class MJBot:
    method __init__ (line 71) | def __init__(self, config, fetch_group_app_code):
    method judge_mj_task_type (line 81) | def judge_mj_task_type(self, e_context: EventContext):
    method process_mj_task (line 106) | def process_mj_task(self, mj_type: TaskType, e_context: EventContext):
    method generate (line 189) | def generate(self, prompt: str, user_id: str, e_context: EventContext)...
    method do_operate (line 235) | def do_operate(self, task_type: TaskType, user_id: str, img_id: str, e...
    method check_task_sync (line 270) | def check_task_sync(self, task: MJTask, e_context: EventContext):
    method _do_check_task (line 300) | def _do_check_task(self, task: MJTask, e_context: EventContext):
    method _process_success_task (line 303) | def _process_success_task(self, task: MJTask, res: dict, e_context: Ev...
    method _check_rate_limit (line 341) | def _check_rate_limit(self, user_id: str, e_context: EventContext) -> ...
    method _fetch_mode (line 363) | def _fetch_mode(self, prompt) -> str:
    method _run_loop (line 369) | def _run_loop(self, loop: asyncio.BaseEventLoop):
    method _print_tasks (line 377) | def _print_tasks(self):
    method _set_reply_text (line 381) | def _set_reply_text(self, content: str, e_context: EventContext, level...
    method get_help_text (line 392) | def get_help_text(self, verbose=False, **kwargs):
    method find_tasks_by_user_id (line 402) | def find_tasks_by_user_id(self, user_id) -> list:
    method _is_mj_open (line 414) | def _is_mj_open(self, context) -> bool:
  function _send (line 434) | def _send(channel, reply: Reply, context, retry_cnt=0):
  function check_prefix (line 447) | def check_prefix(content, prefix_list):

FILE: plugins/linkai/summary.py
  class LinkSummary (line 8) | class LinkSummary:
    method __init__ (line 9) | def __init__(self):
    method summary_file (line 12) | def summary_file(self, file_path: str, app_code: str):
    method summary_url (line 25) | def summary_url(self, url: str, app_code: str):
    method summary_chat (line 35) | def summary_chat(self, summary_id: str):
    method _parse_summary_res (line 54) | def _parse_summary_res(self, res):
    method base_url (line 69) | def base_url(self):
    method headers (line 72) | def headers(self):
    method check_file (line 75) | def check_file(self, file_path: str, sum_config: dict) -> bool:
    method check_url (line 90) | def check_url(self, url: str):

FILE: plugins/linkai/utils.py
  class Util (line 8) | class Util:
    method is_admin (line 10) | def is_admin(e_context: EventContext) -> bool:
    method set_reply_text (line 27) | def set_reply_text(content: str, e_context: EventContext, level: Reply...
    method fetch_app_plugin (line 33) | def fetch_app_plugin(app_code: str, plugin_name: str) -> bool:

FILE: plugins/role/role.py
  class RolePlay (line 16) | class RolePlay:
    method __init__ (line 17) | def __init__(self, bot, sessionid, desc, wrapper=None):
    method reset (line 24) | def reset(self):
    method action (line 27) | def action(self, user_action):
  class Role (line 43) | class Role(Plugin):
    method __init__ (line 44) | def __init__(self):
    method get_role (line 77) | def get_role(self, name, find_closest=True, min_sim=0.35):
    method on_handle_context (line 98) | def on_handle_context(self, e_context: EventContext):
    method get_help_text (line 189) | def get_help_text(self, verbose=False, **kwargs):

FILE: plugins/tool/tool.py
  class Tool (line 21) | class Tool(Plugin):
    method __init__ (line 22) | def __init__(self):
    method get_help_text (line 32) | def get_help_text(self, verbose=False, **kwargs):
    method on_handle_context (line 49) | def on_handle_context(self, e_context: EventContext):
    method _read_json (line 133) | def _read_json(self) -> dict:
    method _build_tool_kwargs (line 137) | def _build_tool_kwargs(self, kwargs: dict):
    method _filter_tool_list (line 226) | def _filter_tool_list(self, tool_list: list):
    method _reset_app (line 235) | def _reset_app(self) -> App:

FILE: skills/skill-creator/scripts/init_skill.py
  function title_case_skill_name (line 189) | def title_case_skill_name(skill_name):
  function init_skill (line 194) | def init_skill(skill_name, path):
  function main (line 273) | def main():

FILE: skills/skill-creator/scripts/package_skill.py
  function package_skill (line 25) | def package_skill(skill_path, output_dir=None):
  function main (line 91) | def main():

FILE: skills/skill-creator/scripts/quick_validate.py
  function validate_skill (line 12) | def validate_skill(skill_path):

FILE: translate/baidu/baidu_translate.py
  class BaiduTranslator (line 12) | class BaiduTranslator(Translator):
    method __init__ (line 13) | def __init__(self) -> None:
    method translate (line 24) | def translate(self, query: str, from_lang: str = "", to_lang: str = "e...
    method make_md5 (line 48) | def make_md5(self, s, encoding="utf-8"):

FILE: translate/factory.py
  function create_translator (line 1) | def create_translator(voice_type):

FILE: translate/translator.py
  class Translator (line 6) | class Translator(object):
    method translate (line 8) | def translate(self, query: str, from_lang: str = "", to_lang: str = "e...
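
A minimal sketch combining the translator factory and the Translator interface, assuming create_translator("baidu") returns the BaiduTranslator listed above and that from_lang/to_lang take language codes; the codes and input text are illustrative:

  from translate.factory import create_translator

  translator = create_translator("baidu")   # value is illustrative; matches translate/baidu above
  result = translator.translate("你好，世界", from_lang="zh", to_lang="en")
  print(result)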

FILE: voice/ali/ali_api.py
  function text_to_speech_aliyun (line 26) | def text_to_speech_aliyun(url, text, appkey, token):
  function speech_to_text_aliyun (line 65) | def speech_to_text_aliyun(url, audioContent, appkey, token):
  class AliyunTokenGenerator (line 129) | class AliyunTokenGenerator:
    method __init__ (line 138) | def __init__(self, access_key_id, access_key_secret):
    method sign_request (line 142) | def sign_request(self, parameters):
    method percent_encode (line 169) | def percent_encode(self, encode_str):
    method get_token (line 186) | def get_token(self):

FILE: voice/ali/ali_voice.py
  class AliVoice (line 28) | class AliVoice(Voice):
    method __init__ (line 29) | def __init__(self):
    method textToVoice (line 49) | def textToVoice(self, text):
    method voiceToText (line 69) | def voiceToText(self, voice_file):
    method get_valid_token (line 88) | def get_valid_token(self):

FILE: voice/audio_convert.py
  function find_closest_sil_supports (line 22) | def find_closest_sil_supports(sample_rate):
  function get_pcm_from_wav (line 38) | def get_pcm_from_wav(wav_path):
  function any_to_mp3 (line 49) | def any_to_mp3(any_path, mp3_path):
  function any_to_wav (line 65) | def any_to_wav(any_path, wav_path):
  function any_to_sil (line 82) | def any_to_sil(any_path, sil_path):
  function any_to_amr (line 103) | def any_to_amr(any_path, amr_path):
  function sil_to_wav (line 120) | def sil_to_wav(silk_path, wav_path, rate: int = 24000):
  function split_audio (line 129) | def split_audio(file_path, max_segment_length_ms=60000):

FILE: voice/azure/azure_voice.py
  class AzureVoice (line 26) | class AzureVoice(Voice):
    method __init__ (line 27) | def __init__(self):
    method voiceToText (line 59) | def voiceToText(self, voice_file):
    method textToVoice (line 72) | def textToVoice(self, text):

FILE: voice/baidu/baidu_voice.py
  class BaiduVoice (line 23) | class BaiduVoice(Voice):
    method __init__ (line 24) | def __init__(self):
    method _get_access_token (line 59) | def _get_access_token(self):
    method voiceToText (line 82) | def voiceToText(self, voice_file):
    method _long_text_synthesis (line 95) | def _long_text_synthesis(self, text):
    method textToVoice (line 151) | def textToVoice(self, text):

FILE: voice/edge/edge_voice.py
  class EdgeVoice (line 12) | class EdgeVoice(Voice):
    method __init__ (line 14) | def __init__(self):
    method voiceToText (line 37) | def voiceToText(self, voice_file):
    method gen_voice (line 40) | async def gen_voice(self, text, fileName):
    method textToVoice (line 44) | def textToVoice(self, text):

FILE: voice/elevent/elevent_voice.py
  class ElevenLabsVoice (line 15) | class ElevenLabsVoice(Voice):
    method __init__ (line 17) | def __init__(self):
    method voiceToText (line 20) | def voiceToText(self, voice_file):
    method textToVoice (line 23) | def textToVoice(self, text):

FILE: voice/factory.py
  function create_voice (line 6) | def create_voice(voice_type):

FILE: voice/google/google_voice.py
  class GoogleVoice (line 16) | class GoogleVoice(Voice):
    method __init__ (line 19) | def __init__(self):
    method voiceToText (line 22) | def voiceToText(self, voice_file):
    method textToVoice (line 36) | def textToVoice(self, text):

FILE: voice/linkai/linkai_voice.py
  class LinkAIVoice (line 15) | class LinkAIVoice(Voice):
    method __init__ (line 16) | def __init__(self):
    method voiceToText (line 19) | def voiceToText(self, voice_file):
    method textToVoice (line 55) | def textToVoice(self, text):

FILE: voice/openai/openai_voice.py
  class OpenaiVoice (line 16) | class OpenaiVoice(Voice):
    method __init__ (line 17) | def __init__(self):
    method voiceToText (line 20) | def voiceToText(self, voice_file):
    method textToVoice (line 47) | def textToVoice(self, text):

FILE: voice/pytts/pytts_voice.py
  class PyttsVoice (line 17) | class PyttsVoice(Voice):
    method __init__ (line 20) | def __init__(self):
    method textToVoice (line 35) | def textToVoice(self, text):

FILE: voice/tencent/tencent_voice.py
  class TencentVoice (line 13) | class TencentVoice(Voice):
    method __init__ (line 14) | def __init__(self):
    method _load_config (line 21) | def _load_config(self):
    method setup (line 37) | def setup(self, config):
    method voiceToText (line 43) | def voiceToText(self, voice_file):
    method textToVoice (line 86) | def textToVoice(self, text):

FILE: voice/voice.py
  class Voice (line 6) | class Voice(object):
    method voiceToText (line 7) | def voiceToText(self, voice_file):
    method textToVoice (line 13) | def textToVoice(self, text):
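
All of the engines above subclass this two-method Voice interface: voiceToText turns an audio file into recognized text, and textToVoice turns text into an audio file, with results handed back to the channel layer. A sketch of the interface plus a hypothetical do-nothing engine to show the shape of an implementation (real engines wrap their results for the bridge layer rather than returning bare strings):

```python
# Sketch of the Voice base interface listed above.
class Voice(object):
    """Base interface shared by all speech engines in this directory."""

    def voiceToText(self, voice_file):
        """Speech recognition: path to an audio file -> recognized text."""
        raise NotImplementedError

    def textToVoice(self, text):
        """Speech synthesis: text -> path to a generated audio file."""
        raise NotImplementedError


class DummyVoice(Voice):
    """Hypothetical engine, only to show the shape of an implementation."""

    def voiceToText(self, voice_file):
        return "transcript placeholder"

    def textToVoice(self, text):
        return "/tmp/dummy.mp3"
```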

FILE: voice/xunfei/xunfei_asr.py
  class Ws_Param (line 47) | class Ws_Param(object):
    method __init__ (line 49) | def __init__(self, APPID, APIKey, APISecret,BusinessArgs, AudioFile):
    method create_url (line 61) | def create_url(self):
  function on_message (line 95) | def on_message(ws, message):
  function on_error (line 135) | def on_error(ws, error):
  function on_close (line 140) | def on_close(ws,a,b):
  function on_open (line 145) | def on_open(ws):
  function xunfei_asr (line 189) | def xunfei_asr(APPID,APISecret,APIKey,BusinessArgsASR,AudioFile):

FILE: voice/xunfei/xunfei_tts.py
  class Ws_Param (line 46) | class Ws_Param(object):
    method __init__ (line 48) | def __init__(self, APPID, APIKey, APISecret,BusinessArgs,Text):
    method create_url (line 64) | def create_url(self):
  function on_message (line 96) | def on_message(ws, message):
  function on_open (line 123) | def on_open(ws):
  function on_error (line 140) | def on_error(ws, error):
  function on_close (line 146) | def on_close(ws):
  function xunfei_tts (line 151) | def xunfei_tts(APPID, APIKey, APISecret,BusinessArgsTTS, Text, OutFile):
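
Both Xunfei modules follow the websocket-client callback pattern: a Ws_Param object assembles a signed URL in create_url(), and on_open/on_message/on_error/on_close drive the streaming exchange. A minimal sketch of wiring such callbacks together; the URL and payload here are placeholders, not Xunfei's actual protocol:

```python
# Sketch of the websocket-client callback wiring used by the Xunfei modules.
import json

import websocket  # pip install websocket-client


def on_open(ws):
    # Send the first frame once the connection is established.
    ws.send(json.dumps({"data": "first frame placeholder"}))


def on_message(ws, message):
    # Decode or accumulate results pushed by the server.
    print("received:", message)


def on_error(ws, error):
    print("error:", error)


def on_close(ws, close_status_code, close_msg):
    print("closed")


if __name__ == "__main__":
    app = websocket.WebSocketApp(
        "wss://example.invalid/ws",  # placeholder; the real URL comes from create_url()
        on_open=on_open,
        on_message=on_message,
        on_error=on_error,
        on_close=on_close,
    )
    app.run_forever()
```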

FILE: voice/xunfei/xunfei_voice.py
  class XunfeiVoice (line 42) | class XunfeiVoice(Voice):
    method __init__ (line 43) | def __init__(self):
    method voiceToText (line 60) | def voiceToText(self, voice_file):
    method textToVoice (line 82) | def textToVoice(self, text):

Condensed preview — 394 files, each showing path, character count, and a content snippet. Download the .json file or copy for the full structured content (2,113K chars).
[
  {
    "path": ".github/ISSUE_TEMPLATE/1.bug.yml",
    "chars": 4772,
    "preview": "name: Bug report 🐛\ndescription: 项目运行中遇到的Bug或问题。\nlabels: ['status: needs check']\nbody:\n  - type: markdown\n    attributes:"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/2.feature.yml",
    "chars": 777,
    "preview": "name: Feature request 🚀\ndescription: 提出你对项目的新想法或建议。\nlabels: ['status: needs check']\nbody:\n  - type: markdown\n    attribu"
  },
  {
    "path": ".github/workflows/deploy-image-arm.yml",
    "chars": 2082,
    "preview": "# This workflow uses actions that are not certified by GitHub.\n# They are provided by a third-party and are governed by\n"
  },
  {
    "path": ".github/workflows/deploy-image.yml",
    "chars": 2009,
    "preview": "# This workflow uses actions that are not certified by GitHub.\n# They are provided by a third-party and are governed by\n"
  },
  {
    "path": ".gitignore",
    "chars": 489,
    "preview": ".DS_Store\n.idea\n.vscode\n.venv\n.vs\n__pycache__/\nvenv*\n*.pyc\npython\nconfig.json\nQR.png\nnohup.out\ntmp\nplugins.json\n*.log\nlo"
  },
  {
    "path": "Dockerfile",
    "chars": 77,
    "preview": "FROM ghcr.io/zhayujie/chatgpt-on-wechat:latest\n\nENTRYPOINT [\"/entrypoint.sh\"]"
  },
  {
    "path": "LICENSE",
    "chars": 1051,
    "preview": "Copyright (c) 2022 zhayujie\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this softwa"
  },
  {
    "path": "README.md",
    "chars": 24995,
    "preview": "<p align=\"center\"><img src= \"https://github.com/user-attachments/assets/eca9a9ec-8534-4615-9e0f-96c5ac1d10a3\" alt=\"Chatg"
  },
  {
    "path": "agent/chat/__init__.py",
    "chars": 70,
    "preview": "from agent.chat.service import ChatService\n\n__all__ = [\"ChatService\"]\n"
  },
  {
    "path": "agent/chat/service.py",
    "chars": 8711,
    "preview": "\"\"\"\nChatService - Wraps the Agent stream execution to produce CHAT protocol chunks.\n\nTranslates agent events (message_up"
  },
  {
    "path": "agent/memory/__init__.py",
    "chars": 745,
    "preview": "\"\"\"\nMemory module for AgentMesh\n\nProvides both long-term memory (vector/keyword search) and short-term\nconversation hist"
  },
  {
    "path": "agent/memory/chunker.py",
    "chars": 4501,
    "preview": "\"\"\"\nText chunking utilities for memory\n\nSplits text into chunks with token limits and overlap\n\"\"\"\n\nfrom __future__ impor"
  },
  {
    "path": "agent/memory/config.py",
    "chars": 3456,
    "preview": "\"\"\"\nMemory configuration module\n\nProvides global memory configuration with simplified workspace structure\n\"\"\"\n\nfrom __fu"
  },
  {
    "path": "agent/memory/conversation_store.py",
    "chars": 21846,
    "preview": "\"\"\"\nConversation history persistence using SQLite.\n\nDesign:\n- sessions table: per-session metadata (channel_type, last_a"
  },
  {
    "path": "agent/memory/embedding.py",
    "chars": 5834,
    "preview": "\"\"\"\nEmbedding providers for memory\n\nSupports OpenAI and local embedding models\n\"\"\"\n\nimport hashlib\nfrom abc import ABC, "
  },
  {
    "path": "agent/memory/manager.py",
    "chars": 19004,
    "preview": "\"\"\"\nMemory manager for AgentMesh\n\nProvides high-level interface for memory operations\n\"\"\"\n\nimport os\nfrom typing import "
  },
  {
    "path": "agent/memory/service.py",
    "chars": 6407,
    "preview": "\"\"\"\nMemory service for handling memory query operations via cloud protocol.\n\nProvides a unified interface for listing an"
  },
  {
    "path": "agent/memory/storage.py",
    "chars": 20602,
    "preview": "\"\"\"\nStorage layer for memory using SQLite + FTS5\n\nProvides vector and keyword search capabilities\n\"\"\"\n\nfrom __future__ i"
  },
  {
    "path": "agent/memory/summarizer.py",
    "chars": 14005,
    "preview": "\"\"\"\nMemory flush manager\n\nHandles memory persistence when conversation context is trimmed or overflows:\n- Uses LLM to su"
  },
  {
    "path": "agent/prompt/__init__.py",
    "chars": 282,
    "preview": "\"\"\"\nAgent Prompt Module - 系统提示词构建模块\n\"\"\"\n\nfrom .builder import PromptBuilder, build_agent_system_prompt\nfrom .workspace i"
  },
  {
    "path": "agent/prompt/builder.py",
    "chars": 14759,
    "preview": "\"\"\"\nSystem Prompt Builder - 系统提示词构建器\n\n实现模块化的系统提示词构建,支持工具、技能、记忆等多个子系统\n\"\"\"\n\nfrom __future__ import annotations\nimport os\nf"
  },
  {
    "path": "agent/prompt/workspace.py",
    "chars": 9605,
    "preview": "\"\"\"\nWorkspace Management - 工作空间管理模块\n\n负责初始化工作空间、创建模板文件、加载上下文文件\n\"\"\"\n\nfrom __future__ import annotations\nimport os\nfrom typ"
  },
  {
    "path": "agent/protocol/__init__.py",
    "chars": 482,
    "preview": "from .agent import Agent\nfrom .agent_stream import AgentStreamExecutor\nfrom .task import Task, TaskType, TaskStatus\nfrom"
  },
  {
    "path": "agent/protocol/agent.py",
    "chars": 23790,
    "preview": "import json\nimport os\nimport time\nimport threading\n\nfrom common.log import logger\nfrom agent.protocol.models import LLMR"
  },
  {
    "path": "agent/protocol/agent_stream.py",
    "chars": 61747,
    "preview": "\"\"\"\nAgent Stream Execution Module - Multi-turn reasoning based on tool-call\n\nProvides streaming output, event system, an"
  },
  {
    "path": "agent/protocol/context.py",
    "chars": 1133,
    "preview": "class TeamContext:\n    def __init__(self, name: str, description: str, rule: str, agents: list, max_steps: int = 100):\n "
  },
  {
    "path": "agent/protocol/message_utils.py",
    "chars": 8554,
    "preview": "\"\"\"\nMessage sanitizer — fix broken tool_use / tool_result pairs.\n\nProvides two public helpers that can be reused across "
  },
  {
    "path": "agent/protocol/models.py",
    "chars": 1789,
    "preview": "\"\"\"\nModels module for agent system.\nProvides basic model classes needed by tools and bridge integration.\n\"\"\"\n\nfrom typin"
  },
  {
    "path": "agent/protocol/result.py",
    "chars": 2924,
    "preview": "from __future__ import annotations\nimport time\nimport uuid\nfrom dataclasses import dataclass, field\nfrom enum import Enu"
  },
  {
    "path": "agent/protocol/task.py",
    "chars": 3054,
    "preview": "from __future__ import annotations\nimport time\nimport uuid\nfrom dataclasses import dataclass, field\nfrom enum import Enu"
  },
  {
    "path": "agent/skills/__init__.py",
    "chars": 750,
    "preview": "\"\"\"\nSkills module for agent system.\n\nThis module provides the framework for loading, managing, and executing skills.\nSki"
  },
  {
    "path": "agent/skills/config.py",
    "chars": 5455,
    "preview": "\"\"\"\nConfiguration support for skills.\n\"\"\"\n\nimport os\nimport platform\nfrom typing import Dict, Optional, List\nfrom agent."
  },
  {
    "path": "agent/skills/formatter.py",
    "chars": 1816,
    "preview": "\"\"\"\nSkill formatter for generating prompts from skills.\n\"\"\"\n\nfrom typing import List\nfrom agent.skills.types import Skil"
  },
  {
    "path": "agent/skills/frontmatter.py",
    "chars": 5342,
    "preview": "\"\"\"\nFrontmatter parsing for skills.\n\"\"\"\n\nimport re\nimport json\nfrom typing import Dict, Any, Optional, List\nfrom agent.s"
  },
  {
    "path": "agent/skills/loader.py",
    "chars": 10396,
    "preview": "\"\"\"\nSkill loader for discovering and loading skills from directories.\n\"\"\"\n\nimport os\nfrom pathlib import Path\nfrom typin"
  },
  {
    "path": "agent/skills/manager.py",
    "chars": 11106,
    "preview": "\"\"\"\nSkill manager for managing skill lifecycle and operations.\n\"\"\"\n\nimport os\nimport json\nfrom typing import Dict, List,"
  },
  {
    "path": "agent/skills/service.py",
    "chars": 10480,
    "preview": "\"\"\"\nSkill service for handling skill CRUD operations.\n\nThis service provides a unified interface for managing skills, wh"
  },
  {
    "path": "agent/skills/types.py",
    "chars": 2348,
    "preview": "\"\"\"\nType definitions for skills system.\n\"\"\"\n\nfrom __future__ import annotations\nfrom typing import Dict, List, Optional,"
  },
  {
    "path": "agent/tools/__init__.py",
    "chars": 4377,
    "preview": "# Import base tool\nfrom agent.tools.base_tool import BaseTool\nfrom agent.tools.tool_manager import ToolManager\n\n# Import"
  },
  {
    "path": "agent/tools/base_tool.py",
    "chars": 3108,
    "preview": "from enum import Enum\nfrom typing import Any, Optional\nfrom common.log import logger\nimport copy\n\n\nclass ToolStage(Enum)"
  },
  {
    "path": "agent/tools/bash/__init__.py",
    "chars": 43,
    "preview": "from .bash import Bash\n\n__all__ = ['Bash']\n"
  },
  {
    "path": "agent/tools/bash/bash.py",
    "chars": 13107,
    "preview": "\"\"\"\nBash tool - Execute bash commands\n\"\"\"\n\nimport os\nimport re\nimport sys\nimport subprocess\nimport tempfile\nfrom typing "
  },
  {
    "path": "agent/tools/browser_tool.py",
    "chars": 564,
    "preview": "def copy(self):\n    \"\"\"\n    Special copy method for browser tool to avoid recreating browser instance.\n    \n    :return:"
  },
  {
    "path": "agent/tools/edit/__init__.py",
    "chars": 43,
    "preview": "from .edit import Edit\n\n__all__ = ['Edit']\n"
  },
  {
    "path": "agent/tools/edit/edit.py",
    "chars": 7500,
    "preview": "\"\"\"\nEdit tool - Precise file editing\nEdit files through exact text replacement\n\"\"\"\n\nimport os\nfrom typing import Dict, A"
  },
  {
    "path": "agent/tools/env_config/__init__.py",
    "chars": 81,
    "preview": "from agent.tools.env_config.env_config import EnvConfig\n\n__all__ = ['EnvConfig']\n"
  },
  {
    "path": "agent/tools/env_config/env_config.py",
    "chars": 11611,
    "preview": "\"\"\"\nEnvironment Configuration Tool - Manage API keys and environment variables\n\"\"\"\n\nimport os\nimport re\nfrom typing impo"
  },
  {
    "path": "agent/tools/ls/__init__.py",
    "chars": 37,
    "preview": "from .ls import Ls\n\n__all__ = ['Ls']\n"
  },
  {
    "path": "agent/tools/ls/ls.py",
    "chars": 5298,
    "preview": "\"\"\"\nLs tool - List directory contents\n\"\"\"\n\nimport os\nfrom typing import Dict, Any\n\nfrom agent.tools.base_tool import Bas"
  },
  {
    "path": "agent/tools/memory/__init__.py",
    "chars": 244,
    "preview": "\"\"\"\nMemory tools for Agent\n\nProvides memory_search and memory_get tools\n\"\"\"\n\nfrom agent.tools.memory.memory_search impor"
  },
  {
    "path": "agent/tools/memory/memory_get.py",
    "chars": 3488,
    "preview": "\"\"\"\nMemory get tool\n\nAllows agents to read specific sections from memory files\n\"\"\"\n\nfrom agent.tools.base_tool import Ba"
  },
  {
    "path": "agent/tools/memory/memory_search.py",
    "chars": 3472,
    "preview": "\"\"\"\nMemory search tool\n\nAllows agents to search their memory using semantic and keyword search\n\"\"\"\n\nfrom typing import D"
  },
  {
    "path": "agent/tools/read/__init__.py",
    "chars": 43,
    "preview": "from .read import Read\n\n__all__ = ['Read']\n"
  },
  {
    "path": "agent/tools/read/read.py",
    "chars": 25068,
    "preview": "\"\"\"\nRead tool - Read file contents\nSupports text files, images (jpg, png, gif, webp), and PDF files\n\"\"\"\n\nimport os\nfrom "
  },
  {
    "path": "agent/tools/scheduler/README.md",
    "chars": 4366,
    "preview": "# 定时任务工具 (Scheduler Tool)\n\n## 功能简介\n\n定时任务工具允许 Agent 创建、管理和执行定时任务,支持:\n\n- ⏰ **定时提醒**: 在指定时间发送消息\n- 🔄 **周期性任务**: 按固定间隔或 cron "
  },
  {
    "path": "agent/tools/scheduler/__init__.py",
    "chars": 124,
    "preview": "\"\"\"\nScheduler tool for managing scheduled tasks\n\"\"\"\n\nfrom .scheduler_tool import SchedulerTool\n\n__all__ = [\"SchedulerToo"
  },
  {
    "path": "agent/tools/scheduler/integration.py",
    "chars": 19012,
    "preview": "\"\"\"\nIntegration module for scheduler with AgentBridge\n\"\"\"\n\nimport os\nfrom typing import Optional\nfrom config import conf"
  },
  {
    "path": "agent/tools/scheduler/scheduler_service.py",
    "chars": 7550,
    "preview": "\"\"\"\nBackground scheduler service for executing scheduled tasks\n\"\"\"\n\nimport time\nimport threading\nfrom datetime import da"
  },
  {
    "path": "agent/tools/scheduler/scheduler_tool.py",
    "chars": 16379,
    "preview": "\"\"\"\nScheduler tool for creating and managing scheduled tasks\n\"\"\"\n\nimport uuid\nfrom datetime import datetime\nfrom typing "
  },
  {
    "path": "agent/tools/scheduler/task_store.py",
    "chars": 5652,
    "preview": "\"\"\"\nTask storage management for scheduler\n\"\"\"\n\nimport json\nimport os\nimport threading\nfrom datetime import datetime\nfrom"
  },
  {
    "path": "agent/tools/send/__init__.py",
    "chars": 43,
    "preview": "from .send import Send\n\n__all__ = ['Send']\n"
  },
  {
    "path": "agent/tools/send/send.py",
    "chars": 6135,
    "preview": "\"\"\"\nSend tool - Send files to the user\n\"\"\"\n\nimport os\nfrom typing import Dict, Any\nfrom pathlib import Path\n\nfrom agent."
  },
  {
    "path": "agent/tools/tool_manager.py",
    "chars": 11683,
    "preview": "import importlib\nimport importlib.util\nfrom pathlib import Path\nfrom typing import Dict, Any, Type\nfrom agent.tools.base"
  },
  {
    "path": "agent/tools/utils/__init__.py",
    "chars": 801,
    "preview": "from .truncate import (\n    truncate_head,\n    truncate_tail,\n    truncate_line,\n    format_size,\n    TruncationResult,\n"
  },
  {
    "path": "agent/tools/utils/diff.py",
    "chars": 4622,
    "preview": "\"\"\"\nDiff tools for file editing\nProvides fuzzy matching and diff generation functionality\n\"\"\"\n\nimport difflib\nimport re\n"
  },
  {
    "path": "agent/tools/utils/truncate.py",
    "chars": 9733,
    "preview": "\"\"\"\nShared truncation utilities for tool outputs.\n\nTruncation is based on two independent limits - whichever is hit firs"
  },
  {
    "path": "agent/tools/vision/__init__.py",
    "chars": 45,
    "preview": "from agent.tools.vision.vision import Vision\n"
  },
  {
    "path": "agent/tools/vision/vision.py",
    "chars": 9695,
    "preview": "\"\"\"\nVision tool - Analyze images using OpenAI-compatible Vision API.\nSupports local files (auto base64-encoded) and HTTP"
  },
  {
    "path": "agent/tools/web_fetch/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "agent/tools/web_fetch/web_fetch.py",
    "chars": 17513,
    "preview": "\"\"\"\nWeb Fetch tool - Fetch and extract readable content from web pages and remote files.\n\nSupports:\n- HTML web pages: ex"
  },
  {
    "path": "agent/tools/web_search/__init__.py",
    "chars": 81,
    "preview": "from agent.tools.web_search.web_search import WebSearch\n\n__all__ = [\"WebSearch\"]\n"
  },
  {
    "path": "agent/tools/web_search/web_search.py",
    "chars": 11320,
    "preview": "\"\"\"\nWeb Search tool - Search the web using Bocha or LinkAI search API.\nSupports two backends with unified response forma"
  },
  {
    "path": "agent/tools/write/__init__.py",
    "chars": 46,
    "preview": "from .write import Write\n\n__all__ = ['Write']\n"
  },
  {
    "path": "agent/tools/write/write.py",
    "chars": 3289,
    "preview": "\"\"\"\nWrite tool - Write file content\nCreates or overwrites files, automatically creates parent directories\n\"\"\"\n\nimport os"
  },
  {
    "path": "app.py",
    "chars": 11625,
    "preview": "# encoding:utf-8\n\nimport os\nimport signal\nimport sys\nimport time\n\nfrom channel import channel_factory\nfrom common import"
  },
  {
    "path": "bridge/agent_bridge.py",
    "chars": 28698,
    "preview": "\"\"\"\nAgent Bridge - Integrates Agent system with existing COW bridge\n\"\"\"\n\nimport os\nfrom typing import Optional, List\n\nfr"
  },
  {
    "path": "bridge/agent_event_handler.py",
    "chars": 4113,
    "preview": "\"\"\"\nAgent Event Handler - Handles agent events and thinking process output\n\"\"\"\n\nfrom common.log import logger\n\n\nclass Ag"
  },
  {
    "path": "bridge/agent_initializer.py",
    "chars": 24089,
    "preview": "\"\"\"\nAgent Initializer - Handles agent initialization logic\n\"\"\"\n\nimport os\nimport asyncio\nimport datetime\nimport time\nfro"
  },
  {
    "path": "bridge/bridge.py",
    "chars": 6213,
    "preview": "from models.bot_factory import create_bot\nfrom bridge.context import Context\nfrom bridge.reply import Reply\nfrom common "
  },
  {
    "path": "bridge/context.py",
    "chars": 1693,
    "preview": "# encoding:utf-8\n\nfrom enum import Enum\n\n\nclass ContextType(Enum):\n    TEXT = 1  # 文本消息\n    VOICE = 2  # 音频消息\n    IMAGE "
  },
  {
    "path": "bridge/reply.py",
    "chars": 634,
    "preview": "# encoding:utf-8\n\nfrom enum import Enum\n\n\nclass ReplyType(Enum):\n    TEXT = 1  # 文本\n    VOICE = 2  # 音频文件\n    IMAGE = 3 "
  },
  {
    "path": "channel/channel.py",
    "chars": 3350,
    "preview": "\"\"\"\nMessage sending channel abstract class\n\"\"\"\n\nfrom bridge.bridge import Bridge\nfrom bridge.context import Context\nfrom"
  },
  {
    "path": "channel/channel_factory.py",
    "chars": 1602,
    "preview": "\"\"\"\nchannel factory\n\"\"\"\nfrom common import const\nfrom .channel import Channel\n\n\ndef create_channel(channel_type) -> Chan"
  },
  {
    "path": "channel/chat_channel.py",
    "chars": 24071,
    "preview": "import os\nimport re\nimport threading\nimport time\nfrom asyncio import CancelledError\nfrom concurrent.futures import Futur"
  },
  {
    "path": "channel/chat_message.py",
    "chars": 2194,
    "preview": "\"\"\"\nUnified chat message class for different channel implementations.\n\n填好必填项(群聊6个,非群聊8个),即可接入ChatChannel,并支持插件,参考Termina"
  },
  {
    "path": "channel/dingtalk/dingtalk_channel.py",
    "chars": 39009,
    "preview": "\"\"\"\n钉钉通道接入\n\n@author huiwen\n@Date 2023/11/28\n\"\"\"\nimport copy\nimport json\n# -*- coding=utf-8 -*-\nimport logging\nimport os\n"
  },
  {
    "path": "channel/dingtalk/dingtalk_message.py",
    "chars": 10448,
    "preview": "import os\nimport re\n\nimport requests\nfrom dingtalk_stream import ChatbotMessage\n\nfrom bridge.context import ContextType\n"
  },
  {
    "path": "channel/feishu/README.md",
    "chars": 2976,
    "preview": "# 飞书Channel使用说明\n\n飞书Channel支持两种事件接收模式,可以根据部署环境灵活选择。\n\n## 模式对比\n\n| 模式 | 适用场景 | 优点 | 缺点 |\n|------|---------|------|------|\n| "
  },
  {
    "path": "channel/feishu/feishu_channel.py",
    "chars": 35874,
    "preview": "\"\"\"\n飞书通道接入\n\n支持两种事件接收模式:\n1. webhook模式: 通过HTTP服务器接收事件(需要公网IP)\n2. websocket模式: 通过长连接接收事件(本地开发友好)\n\n通过配置项 feishu_event_mode 选"
  },
  {
    "path": "channel/feishu/feishu_message.py",
    "chars": 8089,
    "preview": "from bridge.context import ContextType\nfrom channel.chat_message import ChatMessage\nimport json\nimport os\nimport request"
  },
  {
    "path": "channel/file_cache.py",
    "chars": 2713,
    "preview": "\"\"\"\n文件缓存管理器\n用于缓存单独发送的文件消息(图片、视频、文档等),在用户提问时自动附加\n\"\"\"\nimport time\nimport logging\n\nlogger = logging.getLogger(__name__)\n\n\nc"
  },
  {
    "path": "channel/qq/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "channel/qq/qq_channel.py",
    "chars": 28140,
    "preview": "\"\"\"\nQQ Bot channel via WebSocket long connection.\n\nSupports:\n- Group chat (@bot), single chat (C2C), guild channel, guil"
  },
  {
    "path": "channel/qq/qq_message.py",
    "chars": 5133,
    "preview": "import os\nimport requests\n\nfrom bridge.context import ContextType\nfrom channel.chat_message import ChatMessage\nfrom comm"
  },
  {
    "path": "channel/terminal/terminal_channel.py",
    "chars": 2749,
    "preview": "import sys\n\nfrom bridge.context import *\nfrom bridge.reply import Reply, ReplyType\nfrom channel.chat_channel import Chat"
  },
  {
    "path": "channel/web/README.md",
    "chars": 288,
    "preview": "# Web Channel\n\n提供了一个默认的AI对话页面,可展示文本、图片等消息交互,支持markdown语法渲染,兼容插件执行。\n\n# 使用说明\n\n - 在 `config.json` 配置文件中的 `channel_type` 字段填"
  },
  {
    "path": "channel/web/chat.html",
    "chars": 50723,
    "preview": "<!DOCTYPE html>\n<html lang=\"zh\" class=\"\">\n<head>\n    <meta charset=\"UTF-8\">\n    <meta name=\"viewport\" content=\"width=dev"
  },
  {
    "path": "channel/web/static/css/console.css",
    "chars": 13137,
    "preview": "/* =====================================================================\n   CowAgent Console Styles\n   ================="
  },
  {
    "path": "channel/web/static/js/console.js",
    "chars": 89077,
    "preview": "/* =====================================================================\n   CowAgent Console - Main Application Script\n "
  },
  {
    "path": "channel/web/web_channel.py",
    "chars": 51195,
    "preview": "import time\nimport json\nimport logging\nimport mimetypes\nimport os\nimport threading\nimport time\nimport uuid\nfrom queue im"
  },
  {
    "path": "channel/wechatcom/README.md",
    "chars": 2069,
    "preview": "# 企业微信应用号channel\n\n企业微信官方提供了客服、应用等API,本channel使用的是企业微信的自建应用API的能力。\n\n因为未来可能还会开发客服能力,所以本channel的类型名叫作`wechatcom_app`。\n\n`wec"
  },
  {
    "path": "channel/wechatcom/wechatcomapp_channel.py",
    "chars": 9677,
    "preview": "# -*- coding=utf-8 -*-\nimport io\nimport os\nimport sys\nimport time\n\nimport requests\nimport web\nfrom wechatpy.enterprise i"
  },
  {
    "path": "channel/wechatcom/wechatcomapp_client.py",
    "chars": 1701,
    "preview": "# wechatcomapp_client.py\nimport threading\nimport time\nfrom wechatpy.enterprise import WeChatClient\n\nclass WechatComAppCl"
  },
  {
    "path": "channel/wechatcom/wechatcomapp_message.py",
    "chars": 2030,
    "preview": "from wechatpy.enterprise import WeChatClient\n\nfrom bridge.context import ContextType\nfrom channel.chat_message import Ch"
  },
  {
    "path": "channel/wechatmp/README.md",
    "chars": 3512,
    "preview": "# 微信公众号channel\n\n微信公众号channel,提供稳定的服务。\n目前支持订阅号和服务号两种类型的公众号,它们都支持文本交互,语音和图片输入。其中个人主体的微信订阅号由于无法通过微信认证,存在回复时间限制,每天的图片和声音回复次数"
  },
  {
    "path": "channel/wechatmp/active_reply.py",
    "chars": 3343,
    "preview": "import time\n\nimport web\nfrom wechatpy import parse_message\nfrom wechatpy.replies import create_reply\n\nfrom bridge.contex"
  },
  {
    "path": "channel/wechatmp/common.py",
    "chars": 725,
    "preview": "import web\nfrom wechatpy.crypto import WeChatCrypto\nfrom wechatpy.exceptions import InvalidSignatureException\nfrom wecha"
  },
  {
    "path": "channel/wechatmp/passive_reply.py",
    "chars": 9787,
    "preview": "import asyncio\nimport time\n\nimport web\nfrom wechatpy import parse_message\nfrom wechatpy.replies import ImageReply, Voice"
  },
  {
    "path": "channel/wechatmp/wechatmp_channel.py",
    "chars": 17463,
    "preview": "# -*- coding: utf-8 -*-\nimport asyncio\nimport imghdr\nimport io\nimport os\nimport threading\nimport time\n\nimport requests\ni"
  },
  {
    "path": "channel/wechatmp/wechatmp_client.py",
    "chars": 2261,
    "preview": "import threading\nimport time\n\nfrom wechatpy.client import WeChatClient\nfrom wechatpy.exceptions import APILimitedExcepti"
  },
  {
    "path": "channel/wechatmp/wechatmp_message.py",
    "chars": 2171,
    "preview": "# -*- coding: utf-8 -*-#\n\nfrom bridge.context import ContextType\nfrom channel.chat_message import ChatMessage\nfrom commo"
  },
  {
    "path": "channel/wecom_bot/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "channel/wecom_bot/wecom_bot_channel.py",
    "chars": 30045,
    "preview": "\"\"\"\nWeCom (企业微信) AI Bot channel via WebSocket long connection.\n\nSupports:\n- Single chat and group chat (text / image / f"
  },
  {
    "path": "channel/wecom_bot/wecom_bot_message.py",
    "chars": 8002,
    "preview": "import os\nimport re\nimport base64\nimport requests\n\nfrom bridge.context import ContextType\nfrom channel.chat_message impo"
  },
  {
    "path": "common/cloud_client.py",
    "chars": 30198,
    "preview": "\"\"\"\nCloud management client for connecting to the LinkAI control console.\n\nHandles remote configuration sync, message pu"
  },
  {
    "path": "common/const.py",
    "chars": 8230,
    "preview": "# 厂商类型\nOPEN_AI = \"openAI\"\nOPENAI = \"openai\"\nCHATGPT = \"chatGPT\"  # legacy alias for OPENAI, kept for backward compatibil"
  },
  {
    "path": "common/dequeue.py",
    "chars": 1194,
    "preview": "from queue import Full, Queue\nfrom time import monotonic as time\n\n\n# add implementation of putleft to Queue\nclass Dequeu"
  },
  {
    "path": "common/expired_dict.py",
    "chars": 1161,
    "preview": "from datetime import datetime, timedelta\n\n\nclass ExpiredDict(dict):\n    def __init__(self, expires_in_seconds):\n        "
  },
  {
    "path": "common/log.py",
    "chars": 958,
    "preview": "import logging\nimport sys\n\n\ndef _reset_logger(log):\n    for handler in log.handlers:\n        handler.close()\n        log"
  },
  {
    "path": "common/memory.py",
    "chars": 83,
    "preview": "from common.expired_dict import ExpiredDict\n\nUSER_IMAGE_CACHE = ExpiredDict(60 * 3)"
  },
  {
    "path": "common/package_manager.py",
    "chars": 735,
    "preview": "import time\n\nimport pip\nfrom pip._internal import main as pipmain\n\nfrom common.log import _reset_logger, logger\n\n\ndef in"
  },
  {
    "path": "common/singleton.py",
    "chars": 217,
    "preview": "def singleton(cls):\n    instances = {}\n\n    def get_instance(*args, **kwargs):\n        if cls not in instances:\n        "
  },
  {
    "path": "common/sorted_dict.py",
    "chars": 2245,
    "preview": "import heapq\n\n\nclass SortedDict(dict):\n    def __init__(self, sort_func=lambda k, v: k, init_dict=None, reverse=False):\n"
  },
  {
    "path": "common/time_check.py",
    "chars": 1709,
    "preview": "import re\nimport time\nimport config\nfrom common.log import logger\n\n\ndef time_checker(f):\n    def _time_checker(self, *ar"
  },
  {
    "path": "common/tmp_dir.py",
    "chars": 406,
    "preview": "import os\nimport pathlib\n\nfrom config import conf\n\n\nclass TmpDir(object):\n    \"\"\"A temporary directory that is deleted w"
  },
  {
    "path": "common/token_bucket.py",
    "chars": 1289,
    "preview": "import threading\nimport time\n\n\nclass TokenBucket:\n    def __init__(self, tpm, timeout=None):\n        self.capacity = int"
  },
  {
    "path": "common/utils.py",
    "chars": 3833,
    "preview": "import io\nimport os\nimport re\nfrom urllib.parse import urlparse\nfrom common.log import logger\n\ndef fsize(file):\n    if i"
  },
  {
    "path": "config-template.json",
    "chars": 919,
    "preview": "{\n  \"channel_type\": \"web\",\n  \"model\": \"MiniMax-M2.7\",\n  \"minimax_api_key\": \"\",\n  \"zhipu_ai_api_key\": \"\",\n  \"ark_api_key\""
  },
  {
    "path": "config.py",
    "chars": 18212,
    "preview": "# encoding:utf-8\n\nimport copy\nimport json\nimport logging\nimport os\nimport pickle\n\nfrom common.log import logger\n\n# 将所有可用"
  },
  {
    "path": "docker/Dockerfile.latest",
    "chars": 1025,
    "preview": "FROM python:3.10-slim-bullseye\n\nLABEL maintainer=\"foo@bar.com\"\nARG TZ='Asia/Shanghai'\n\nARG CHATGPT_ON_WECHAT_VER\n\nRUN ec"
  },
  {
    "path": "docker/build.latest.sh",
    "chars": 209,
    "preview": "#!/bin/bash\n\nunset KUBECONFIG\n\ncd .. && docker build -f docker/Dockerfile.latest \\\n             -t zhayujie/chatgpt-on-w"
  },
  {
    "path": "docker/docker-compose.yml",
    "chars": 1216,
    "preview": "version: '2.0'\nservices:\n  chatgpt-on-wechat:\n    image: zhayujie/chatgpt-on-wechat\n    container_name: chatgpt-on-wecha"
  },
  {
    "path": "docker/entrypoint.sh",
    "chars": 1901,
    "preview": "#!/bin/bash\nset -e\n\n# build prefix\nCHATGPT_ON_WECHAT_PREFIX=${CHATGPT_ON_WECHAT_PREFIX:-\"\"}\n# path to config.json\nCHATGP"
  },
  {
    "path": "docs/agent.md",
    "chars": 5352,
    "preview": "# CowAgent介绍\n\n## 概述\n\nCow项目从简单的聊天机器人全面升级为超级智能助理 **CowAgent**,能够主动规思考和规划任务、拥有长期记忆、操作计算机和外部资源、创造和执行Skill,真正理解你并和你一起成长。CowAg"
  },
  {
    "path": "docs/channels/dingtalk.mdx",
    "chars": 1464,
    "preview": "---\ntitle: 钉钉\ndescription: 将 CowAgent 接入钉钉应用\n---\n\n通过钉钉开放平台创建智能机器人应用,将 CowAgent 接入钉钉。\n\n## 一、创建应用\n\n1. 进入 [钉钉开发者后台](https:/"
  },
  {
    "path": "docs/channels/feishu.mdx",
    "chars": 1687,
    "preview": "---\ntitle: 飞书\ndescription: 将 CowAgent 接入飞书应用\n---\n\n通过自建应用将 CowAgent 接入飞书,需要是飞书企业用户且具有企业管理权限。\n\n## 一、创建企业自建应用\n\n### 1. 创建应用\n"
  },
  {
    "path": "docs/channels/qq.mdx",
    "chars": 2035,
    "preview": "---\ntitle: QQ 机器人\ndescription: 将 CowAgent 接入 QQ 机器人(WebSocket 长连接模式)\n---\n\n> 通过 QQ 开放平台的机器人接口接入 CowAgent,支持 QQ 单聊、QQ 群聊(@"
  },
  {
    "path": "docs/channels/web.mdx",
    "chars": 1304,
    "preview": "---\ntitle: Web 控制台\ndescription: 通过 Web 控制台使用 CowAgent\n---\n\nWeb 控制台是 CowAgent 的默认通道,启动后会自动运行,通过浏览器即可与 Agent 对话,并支持在线管理模型、"
  },
  {
    "path": "docs/channels/wechatmp.mdx",
    "chars": 1863,
    "preview": "---\ntitle: 微信公众号\ndescription: 将 CowAgent 接入微信公众号\n---\n\nCowAgent 支持接入个人订阅号和企业服务号两种公众号类型。\n\n| 类型 | 要求 | 特点 |\n| --- | --- | -"
  },
  {
    "path": "docs/channels/wecom-bot.mdx",
    "chars": 1465,
    "preview": "---\ntitle: 企微智能机器人\ndescription: 将 CowAgent 接入企业微信智能机器人(长连接模式)\n---\n\n> 通过企业微信智能机器人接入CowAgent,支持企业内部单聊和内部群聊,无需公网 IP,使用 WebS"
  },
  {
    "path": "docs/channels/wecom.mdx",
    "chars": 2188,
    "preview": "---\ntitle: 企微自建应用\ndescription: 将 CowAgent 接入企业微信自建应用\n---\n\n通过企业微信自建应用接入 CowAgent,支持企业内部人员单聊使用。\n\n<Note>\n  企业微信只能使用 Docker "
  },
  {
    "path": "docs/docs.json",
    "chars": 12318,
    "preview": "{\n  \"$schema\": \"https://mintlify.com/docs.json\",\n  \"name\": \"CowAgent\",\n  \"description\": \"CowAgent - AI Super Assistant p"
  },
  {
    "path": "docs/en/README.md",
    "chars": 10329,
    "preview": "<p align=\"center\"><img src=\"https://github.com/user-attachments/assets/eca9a9ec-8534-4615-9e0f-96c5ac1d10a3\" alt=\"CowAge"
  },
  {
    "path": "docs/en/channels/dingtalk.mdx",
    "chars": 2271,
    "preview": "---\ntitle: DingTalk\ndescription: Integrate CowAgent into DingTalk application\n---\n\nIntegrate CowAgent into DingTalk by c"
  },
  {
    "path": "docs/en/channels/feishu.mdx",
    "chars": 2638,
    "preview": "---\ntitle: Feishu (Lark)\ndescription: Integrate CowAgent into Feishu application\n---\n\nIntegrate CowAgent into Feishu by "
  },
  {
    "path": "docs/en/channels/qq.mdx",
    "chars": 3563,
    "preview": "---\ntitle: QQ Bot\ndescription: Connect CowAgent to QQ Bot (WebSocket long connection)\n---\n\n> Connect CowAgent via QQ Ope"
  },
  {
    "path": "docs/en/channels/web.mdx",
    "chars": 2003,
    "preview": "---\ntitle: Web Console\ndescription: Use CowAgent through the web console\n---\n\nThe Web Console is CowAgent's default chan"
  },
  {
    "path": "docs/en/channels/wechatmp.mdx",
    "chars": 3450,
    "preview": "---\ntitle: WeChat Official Account\ndescription: Integrate CowAgent with WeChat Official Accounts\n---\n\nCowAgent supports "
  },
  {
    "path": "docs/en/channels/wecom-bot.mdx",
    "chars": 2347,
    "preview": "---\ntitle: WeCom Bot\ndescription: Connect CowAgent to WeCom AI Bot (WebSocket long connection)\n---\n\nConnect CowAgent via"
  },
  {
    "path": "docs/en/channels/wecom.mdx",
    "chars": 4104,
    "preview": "---\ntitle: WeCom\ndescription: Integrate CowAgent into WeCom enterprise app\n---\n\nIntegrate CowAgent into WeCom through a "
  },
  {
    "path": "docs/en/guide/manual-install.mdx",
    "chars": 2342,
    "preview": "---\ntitle: Manual Install\ndescription: Deploy CowAgent manually (source code / Docker)\n---\n\n## Source Code Deployment\n\n#"
  },
  {
    "path": "docs/en/guide/quick-start.mdx",
    "chars": 1218,
    "preview": "---\ntitle: One-click Install\ndescription: One-click install and manage CowAgent with scripts\n---\n\nThe project provides s"
  },
  {
    "path": "docs/en/intro/architecture.mdx",
    "chars": 2594,
    "preview": "---\ntitle: Architecture\ndescription: CowAgent 2.0 system architecture and core design\n---\n\nCowAgent 2.0 has evolved from"
  },
  {
    "path": "docs/en/intro/features.mdx",
    "chars": 5348,
    "preview": "---\ntitle: Features\ndescription: CowAgent long-term memory, task planning, and skills system in detail\n---\n\n## 1. Long-t"
  },
  {
    "path": "docs/en/intro/index.mdx",
    "chars": 3826,
    "preview": "---\ntitle: Introduction\ndescription: CowAgent - AI Super Assistant powered by LLMs\n---\n\n<img src=\"https://cdn.link-ai.te"
  },
  {
    "path": "docs/en/memory.mdx",
    "chars": 3200,
    "preview": "---\ntitle: Memory\ndescription: CowAgent long-term memory system\n---\n\nThe memory system enables the Agent to remember imp"
  },
  {
    "path": "docs/en/models/claude.mdx",
    "chars": 626,
    "preview": "---\ntitle: Claude\ndescription: Claude model configuration\n---\n\n```json\n{\n  \"model\": \"claude-sonnet-4-6\",\n  \"claude_api_k"
  },
  {
    "path": "docs/en/models/coding-plan.mdx",
    "chars": 4410,
    "preview": "---\ntitle: Coding Plan\ndescription: Coding Plan model configuration\n---\n\n> Coding Plan is a monthly subscription package"
  },
  {
    "path": "docs/en/models/deepseek.mdx",
    "chars": 589,
    "preview": "---\ntitle: DeepSeek\ndescription: DeepSeek model configuration\n---\n\nUse OpenAI-compatible configuration:\n\n```json\n{\n  \"mo"
  },
  {
    "path": "docs/en/models/doubao.mdx",
    "chars": 568,
    "preview": "---\ntitle: Doubao (ByteDance)\ndescription: Doubao (Volcano Ark) model configuration\n---\n\n```json\n{\n  \"model\": \"doubao-se"
  },
  {
    "path": "docs/en/models/gemini.mdx",
    "chars": 503,
    "preview": "---\ntitle: Gemini\ndescription: Google Gemini model configuration\n---\n\n```json\n{\n  \"model\": \"gemini-3.1-pro-preview\",\n  \""
  },
  {
    "path": "docs/en/models/glm.mdx",
    "chars": 702,
    "preview": "---\ntitle: GLM (Zhipu AI)\ndescription: Zhipu AI GLM model configuration\n---\n\n```json\n{\n  \"model\": \"glm-5-turbo\",\n  \"zhip"
  },
  {
    "path": "docs/en/models/index.mdx",
    "chars": 2107,
    "preview": "---\ntitle: Models Overview\ndescription: Supported models and recommended choices for CowAgent\n---\n\nCowAgent supports mai"
  },
  {
    "path": "docs/en/models/kimi.mdx",
    "chars": 617,
    "preview": "---\ntitle: Kimi (Moonshot)\ndescription: Kimi (Moonshot) model configuration\n---\n\n```json\n{\n  \"model\": \"kimi-k2.5\",\n  \"mo"
  },
  {
    "path": "docs/en/models/linkai.mdx",
    "chars": 852,
    "preview": "---\ntitle: LinkAI\ndescription: Unified access to multiple models via LinkAI platform\n---\n\nThe [LinkAI](https://link-ai.t"
  },
  {
    "path": "docs/en/models/minimax.mdx",
    "chars": 646,
    "preview": "---\ntitle: MiniMax\ndescription: MiniMax model configuration\n---\n\n```json\n{\n  \"model\": \"MiniMax-M2.7\",\n  \"minimax_api_key"
  },
  {
    "path": "docs/en/models/openai.mdx",
    "chars": 715,
    "preview": "---\ntitle: OpenAI\ndescription: OpenAI model configuration\n---\n\n```json\n{\n  \"model\": \"gpt-5.4\",\n  \"open_ai_api_key\": \"YOU"
  },
  {
    "path": "docs/en/models/qwen.mdx",
    "chars": 737,
    "preview": "---\ntitle: Qwen (Tongyi Qianwen)\ndescription: Tongyi Qianwen model configuration\n---\n\n```json\n{\n  \"model\": \"qwen3.5-plus"
  },
  {
    "path": "docs/en/releases/overview.mdx",
    "chars": 1198,
    "preview": "---\ntitle: Changelog\ndescription: CowAgent version history\n---\n\n| Version | Date | Description |\n| --- | --- | --- |\n| ["
  },
  {
    "path": "docs/en/releases/v2.0.0.mdx",
    "chars": 2092,
    "preview": "---\ntitle: v2.0.0\ndescription: CowAgent 2.0 - Full upgrade from chatbot to AI super assistant\n---\n\nCowAgent 2.0 is a com"
  },
  {
    "path": "docs/en/releases/v2.0.1.mdx",
    "chars": 4311,
    "preview": "---\ntitle: v2.0.1\ndescription: CowAgent 2.0.1 - Built-in Web Search, smart context management, multiple fixes\n---\n\n**Rel"
  },
  {
    "path": "docs/en/releases/v2.0.2.mdx",
    "chars": 5580,
    "preview": "---\ntitle: v2.0.2\ndescription: CowAgent 2.0.2 - Web Console upgrade, multi-channel concurrency, session persistence\n---\n"
  },
  {
    "path": "docs/en/skills/image-vision.mdx",
    "chars": 760,
    "preview": "---\ntitle: Image Vision\ndescription: Recognize images using OpenAI vision models\n---\n\nAnalyze image content using OpenAI"
  },
  {
    "path": "docs/en/skills/index.mdx",
    "chars": 2353,
    "preview": "---\ntitle: Skills Overview\ndescription: CowAgent skills system introduction\n---\n\nSkills provide infinite extensibility f"
  },
  {
    "path": "docs/en/skills/linkai-agent.mdx",
    "chars": 1370,
    "preview": "---\ntitle: LinkAI Agent\ndescription: Integrate LinkAI platform multi-agent skill\n---\n\nUse agents from the [LinkAI](https"
  },
  {
    "path": "docs/en/skills/skill-creator.mdx",
    "chars": 979,
    "preview": "---\ntitle: Skill Creator\ndescription: Create custom skills through conversation\n---\n\nQuickly create, install, or update "
  },
  {
    "path": "docs/en/skills/web-fetch.mdx",
    "chars": 997,
    "preview": "---\ntitle: Web Fetch\ndescription: Fetch web page text content\n---\n\nUse curl to fetch web pages and extract readable text"
  },
  {
    "path": "docs/en/tools/bash.mdx",
    "chars": 775,
    "preview": "---\ntitle: bash - Terminal\ndescription: Execute system commands\n---\n\nExecute Bash commands in the current working direct"
  },
  {
    "path": "docs/en/tools/browser.mdx",
    "chars": 786,
    "preview": "---\ntitle: browser - Browser\ndescription: Access and interact with web pages\n---\n\nUse a browser to access and interact w"
  },
  {
    "path": "docs/en/tools/edit.mdx",
    "chars": 627,
    "preview": "---\ntitle: edit - File Edit\ndescription: Edit files via precise text replacement\n---\n\nEdit files via precise text replac"
  },
  {
    "path": "docs/en/tools/env-config.mdx",
    "chars": 1155,
    "preview": "---\ntitle: env_config - Environment\ndescription: Manage API keys and secrets\n---\n\nManage environment variables (API keys"
  },
  {
    "path": "docs/en/tools/index.mdx",
    "chars": 1864,
    "preview": "---\ntitle: Tools Overview\ndescription: CowAgent built-in tools system\n---\n\nTools are the core capability for Agent to ac"
  },
  {
    "path": "docs/en/tools/ls.mdx",
    "chars": 580,
    "preview": "---\ntitle: ls - Directory List\ndescription: List directory contents\n---\n\nList directory contents, sorted alphabetically,"
  },
  {
    "path": "docs/en/tools/memory.mdx",
    "chars": 1113,
    "preview": "---\ntitle: memory - Memory\ndescription: Search and read long-term memory\n---\n\nThe memory tool contains two sub-tools: `m"
  },
  {
    "path": "docs/en/tools/read.mdx",
    "chars": 652,
    "preview": "---\ntitle: read - File Read\ndescription: Read file content\n---\n\nRead file content. Supports text files, PDF files, image"
  },
  {
    "path": "docs/en/tools/scheduler.mdx",
    "chars": 1072,
    "preview": "---\ntitle: scheduler - Scheduler\ndescription: Create and manage scheduled tasks\n---\n\nCreate and manage dynamic scheduled"
  },
  {
    "path": "docs/en/tools/send.mdx",
    "chars": 587,
    "preview": "---\ntitle: send - File Send\ndescription: Send files to user\n---\n\nSend files to the user (images, videos, audio, document"
  },
  {
    "path": "docs/en/tools/web-search.mdx",
    "chars": 1269,
    "preview": "---\ntitle: web_search - Web Search\ndescription: Search the internet for real-time information\n---\n\nSearch the internet f"
  },
  {
    "path": "docs/en/tools/write.mdx",
    "chars": 693,
    "preview": "---\ntitle: write - File Write\ndescription: Create or overwrite files\n---\n\nWrite content to a file. Creates the file if i"
  },
  {
    "path": "docs/guide/manual-install.mdx",
    "chars": 1961,
    "preview": "---\ntitle: 手动安装\ndescription: 手动部署 CowAgent(源码 / Docker)\n---\n\n## 源码部署\n\n### 1. 克隆项目代码\n\n```bash\ngit clone https://github.co"
  },
  {
    "path": "docs/guide/quick-start.mdx",
    "chars": 696,
    "preview": "---\ntitle: 一键安装\ndescription: 使用脚本一键安装和管理 CowAgent\n---\n\n项目提供了一键安装、配置、启动、管理程序的脚本,推荐使用脚本快速运行。\n\n支持 Linux、macOS、Windows 操作系统,"
  },
  {
    "path": "docs/guide/upgrade.mdx",
    "chars": 636,
    "preview": "---\ntitle: 更新升级\ndescription: CowAgent 的升级方式说明\n---\n\n## 脚本升级(推荐)\n\n如果使用 `run.sh` 管理服务,执行以下命令即可一键升级:\n\n```bash\n./run.sh updat"
  },
  {
    "path": "docs/intro/architecture.mdx",
    "chars": 1653,
    "preview": "---\ntitle: 项目架构\ndescription: CowAgent 2.0 的系统架构和核心设计\n---\n\nCowAgent 2.0 从简单的聊天机器人全面升级为超级智能助理,采用 Agent 架构设计,具备自主思考、规划任务、长期"
  },
  {
    "path": "docs/intro/features.mdx",
    "chars": 2765,
    "preview": "---\ntitle: 功能介绍\ndescription: CowAgent 长期记忆、任务规划、技能系统详细说明\n---\n\n## 1. 长期记忆\n\n> 记忆系统让 Agent 能够长期记住重要信息。Agent 会在用户分享偏好、决策、事实等"
  },
  {
    "path": "docs/intro/index.mdx",
    "chars": 1944,
    "preview": "---\ntitle: 项目介绍\ndescription: CowAgent - 基于大模型的超级AI助理\n---\n\n<img src=\"https://cdn.link-ai.tech/doc/78c5dd674e2c828642ecc04"
  },
  {
    "path": "docs/ja/README.md",
    "chars": 7942,
    "preview": "<p align=\"center\"><img src=\"https://github.com/user-attachments/assets/eca9a9ec-8534-4615-9e0f-96c5ac1d10a3\" alt=\"CowAge"
  },
  {
    "path": "docs/ja/channels/dingtalk.mdx",
    "chars": 1807,
    "preview": "---\ntitle: DingTalk\ndescription: CowAgent を DingTalk アプリケーションに統合する\n---\n\nDingTalk オープンプラットフォームでインテリジェントロボットアプリを作成して、CowAg"
  },
  {
    "path": "docs/ja/channels/feishu.mdx",
    "chars": 1976,
    "preview": "---\ntitle: Feishu (Lark)\ndescription: CowAgent を Feishu アプリケーションに統合する\n---\n\n企業向けカスタムアプリを作成して、CowAgent を Feishu に統合します。管理者"
  },
  {
    "path": "docs/ja/channels/qq.mdx",
    "chars": 2505,
    "preview": "---\ntitle: QQ Bot\ndescription: CowAgent を QQ Bot に接続する(WebSocket ロングコネクション)\n---\n\n> QQ オープンプラットフォームの Bot API を介して CowAgen"
  },
  {
    "path": "docs/ja/channels/web.mdx",
    "chars": 1480,
    "preview": "---\ntitle: Web コンソール\ndescription: Web コンソールで CowAgent を使用する\n---\n\nWeb コンソールは CowAgent のデフォルトチャネルです。起動後に自動的に開始され、ブラウザを通じて "
  }
]

// ... and 194 more files (download for full content)

About this extraction

This page contains the full source code of the zhayujie/chatgpt-on-wechat GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 394 files (1.9 MB), approximately 505.0k tokens, and a symbol index with 1469 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
