[
  {
    "path": ".gitignore",
    "content": "# Python\n__pycache__/\n*.py[cod]\n*$py.class\n*.so\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\n*.egg-info/\n.installed.cfg\n*.egg\n\n# Virtual Environment\nvenv/\nenv/\nENV/\n\n# IDE\n.idea/\n.vscode/\n*.swp\n*.swo\n\n# OS specific\n.DS_Store\nThumbs.db\n\n# LangSmith\n.langchain.db\n.langsmith/\n\n# Logs\n*.log\n\n# Env\n.env\n\n# output\nexampels/logs/\nexampels/output/\nexamples/output/sandbox_test"
  },
  {
    "path": "README.md",
    "content": "# Mentis - Agent Development Kit\n\n[![Python Version](https://img.shields.io/badge/python-3.11+-blue.svg)](https://www.python.org/downloads/)\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) \n## 概述 (Overview)\n\nMentis 是一个基于 LangGraph 构建的、可扩展的多 Agent ADK(Agent Development Kit)。它的核心是一个**状态驱动的规划型 Supervisor Agent**，负责理解用户复杂请求、制定执行计划，并智能地协调一组具有不同专业能力的子 Agent (Specialist Agents) 来共同完成任务。\n\n此框架旨在实现复杂任务的自动化处理，通过 Agent 间的协作提供比单一 Agent 更强大、更灵活的问题解决能力。\n\n## 核心特性 (Core Features)\n\n* **Multi-Agent 架构**: 采用中心化的 Supervisor 协调多个专门的子 Agent (如 Research, Coder, Reporter, Designer, Data Analyst)。\n* **State-Based Planning**: 引入独立的 `Planner` 节点负责初始规划，`Supervisor` 专注于基于计划状态的执行和调度，`Evaluator` 节点负责评估子 Agent 结果并更新状态。计划状态通过 LangGraph 持久化（需配置 Checkpointer）。\n* **模块化 Agent 设计**: 基于 `BaseAgent` 和 `ReactAgent` 构建，易于添加或修改具有不同能力的子 Agent。\n* **工具注册与管理**: 通过 `core/tools/registry.py` 实现工具的集中注册、分类和动态加载。\n* **可配置 LLM**: 支持通过 `LLMManager` (或环境变量) 配置和切换不同的 LLM Provider (OpenAI, DeepSeek, XAI Grok via compatible endpoint) 和模型。\n* **持久化支持**: 基于 LangGraph 的 Checkpointer 机制，可以实现对话状态和计划的持久化。\n* **清晰的执行流程**: Planner -> Supervisor -> (Handoff -> Agent -> Evaluator -> Supervisor 循环) -> 最终输出/Reporter。\n* **A2A 协议支持**: 实现了 Google 的 Agent-to-Agent (A2A) 协议，使 Mentis Agents 能够与其他支持 A2A 协议的系统进行互操作。\n\n## 架构概览 (Architecture Overview)\n\n1.  **用户请求 (Input)**: 用户通过入口点 (`main.py` 或 API) 提交任务请求。\n2.  **规划节点 (Planner Node)**: 分析请求，生成一个包含任务步骤、建议 Agent 的初始计划 (`Plan`)，并更新到图状态 (`PlanningAgentState`)。\n3.  **主管节点 (Supervisor Node)**: 接收带有计划的状态，根据计划状态和消息历史决定下一步行动：\n    * 启动新任务 (标记 'in_progress')。\n    * 委派 'in_progress' 的任务给合适的子 Agent (通过 Handoff 工具)。\n    * 等待子 Agent 完成。\n    * 判断计划是否最终完成。\n    * 决定最终输出方式（自己总结或调用 Reporter）。\n4.  **切换执行器 (Handoff Executor)**: 处理 Supervisor 发出的 `transfer_to_` 工具调用，并将控制权和状态传递给目标子 Agent。\n5.  **子 Agent 节点 (Specialist Agent Nodes)**: 继承自 `ReactAgent` 或 `BaseAgent`，执行具体的任务（研究、编码、生成报告/图像、数据分析），可能调用其自身的工具。\n6.  
**评估节点 (Evaluate Result Node)**: 接收子 Agent 的执行结果，进行确定性评估（成功/失败），更新对应任务的状态和 Plan 的整体状态。\n7.  **循环与结束**: 流程在 Evaluator -> Supervisor 之间循环，直到 Supervisor 判断 Plan 完成，然后路由到 `END` 或 `ReporterAgent`。\n\n## 快速开始 (Getting Started)\n\n### 1. 环境设置 (Prerequisites)\n\n* Python 3.11+\n* 使用 `pip` 或 `uv` 等工具管理依赖。\n\n### 2. 安装依赖 (Installation)\n\n在项目根目录运行：\n建议使用 uv 管理\n```bash\nuv venv\nsource .venv/bin/activate\nuv sync\n```\n\n```bash\n# pip install -r requirements.txt \n# 或者 uv pip install -r requirements.txt\n```\n(requirements.txt 我没维护，请确保 `requirements.txt` 文件包含了所有必要的库，如 `langchain`, `langgraph`, `langchain-openai`, `e2b` (如果使用 E2B), `replicate` (如果使用 Replicate), `tavily-python`, `exa-py`, `python-dotenv`, `anyio`, `tiktoken` 等)。\n\n### 3. 配置环境 (Configuration)\n\n* 复制 `.env.example` 文件为 `.env`。\n* 在 `.env` 文件中填入您所需的 API Keys/Tokens：\n    * `OPENAI_API_KEY` (如果使用 OpenAI 模型)\n    * `DEEPSEEK_API_KEY` (如果使用 DeepSeek 模型)\n    * `XAI_API_KEY` (如果使用 XAI Grok，并确认 Base URL)\n    * `REPLICATE_API_TOKEN` (如果使用 Replicate 工具)\n    * `E2B_API_KEY` (如果使用 E2B Code Interpreter，推荐！)\n    * `TAVILY_API_KEY` (如果使用 Tavily 搜索，推荐！)\n    * `EXA_API_KEY` (如果使用 Exa 搜索)\n    * `LANGCHAIN_TRACING_V2=\"true\"` (强烈推荐，用于 LangSmith 调试)\n    * `LANGCHAIN_API_KEY=\"ls_...\"` (您的 LangSmith Key)\n    * `LANGCHAIN_PROJECT=\"Your_Project_Name\"` (您在 LangSmith 上的项目名)\n* **LLM 配置**:\n    * 如果您使用了 `LLMManager`（如示例所示），请检查并配置其读取的模型配置文件（例如 `config/models.yaml`，路径可能不同）。\n    * 如果您在 `tools.py` 中直接根据环境变量初始化 LLM，请确保设置了对应的环境变量，如 `LLM_PROVIDER`, `LLM_MODEL_NAME`, `LLM_BASE_URL` (用于兼容 API)。\n* **工具配置**: 确保 `core/tools/__init__.py` 或 `registry.py` 中的工具预注册逻辑能够正确找到并初始化您需要的工具。\n\n### 4. 
Running Examples\n\nThe project ships with example scripts that demonstrate the framework:\n```bash\n# Run from the project root (mentis/)\npython examples/state_based_supervisor_examples/03_multi_agents.py\n```\nThe script prompts you for an initial request. Some simple things to try:\n\n* `\"What is the capital of France?\"` (basic test)\n* `\"Write a short, four-line poem about spring.\"` (tests the Reporter)\n* `\"Generate an image of a cat wearing a top hat, oil painting style.\"` (tests the Designer)\n* `\"Write a Python function to calculate factorial and run it for 5.\"` (tests the Coder)\n\n## Project Structure\n\n```\nmentis/\n├── api/             # (optional) API service code\n├── core/            # Core framework code\n│   ├── a2a/         # A2A protocol client and server implementations\n│   ├── agents/      # Agent definitions (base, react, supervisor, sub-agents)\n│   │   ├── base/\n│   │   ├── state_based_supervisor/ # Supervisor internals (graph, node, planner, evaluator)\n│   │   ├── sub_agents/             # Concrete sub-agent implementations (research, coder, etc.)\n│   │   └── sb_supervisor_agent.py  # SupervisorAgent class definition\n│   ├── llm/         # (optional) LLM management and configuration\n│   ├── tools/       # Tool definitions and registry (registry, e2b, replicate, etc.)\n│   └── utils/       # Shared helper functions\n├── examples/        # Example and test scripts\n│   └── state_based_supervisor_examples/\n│       └── 03_multi_agents.py # The test script used above\n├── super_agents/    # Standalone, full-featured agent implementations\n│   └── deep_research/ # DeepResearch Agent implementation\n│       └── a2a_adapter/ # A2A protocol adapter for DeepResearch\n├── web/             # (optional) Web client code\n├── web_for_a2a/     # Web UI built on the A2A protocol\n├── .env.example     # Example environment variables\n├── requirements.txt # Python dependencies\n└── README.md        # This file\n```\n\n## Super Agents\n\nBeyond the single-skill Specialist Agents coordinated by the Supervisor (Coder, Researcher, etc.), the framework also supports building and integrating more complex **\"Super Agents\"**.\n\nA Super Agent is an **independent agent graph with end-to-end capability for a relatively complete, complex task**. It can contain its own planning, execution, and even internal coordination logic.\n\nSuper Agents can **run standalone** to complete large tasks, or be treated by a higher-level coordinator (such as our Supervisor Agent) as a powerful **\"capability\" or \"tool\"** for handling a step in a complex plan.\n\n### DeepResearch Agent (the first instance)\n\n\nhttps://github.com/user-attachments/assets/2a685709-5be0-43a3-9e2d-934ef5fa3315\n\n\n`DeepResearch Agent` is the first Super Agent built on these ideas (an early version of it was the foundation of this multi-agent framework).\n\n* **Core function**: Automates a **deep research** workflow for **any topic** the user provides.\n* **Internal workflow**: It runs its own complete pipeline, roughly:\n    1.  **Plan Research**: Analyze the topic and generate initial search queries and analysis angles.\n    2.  **Multi-Source Search**: Gather information via web search (Tavily), academic search (Exa), and other tools.\n    3.  **(Optional) Perform Analysis**: Run a first-pass analysis of the results (sentiment, SWOT, etc.).\n    4.  **Gap Analysis**: Assess the collected information and identify knowledge gaps and limitations.\n    5.  **(Optional) Gap Filling**: Run additional, more targeted searches for the identified gaps.\n    6.  **Final Synthesis**: Integrate all information and distill key findings and open uncertainties.\n    7.  **Report Generation**: Turn the synthesis and context into a detailed, cited Markdown research report.\n* **Current status**: The agent's core logic and nodes are implemented, and it now supports the A2A protocol and a dedicated web UI.\n\n#### A2A Protocol Support\n\nWe implemented a full A2A protocol adapter for the DeepResearch Agent, so it can:\n\n* Be discovered and invoked as a standard A2A service\n* Accept research tasks via the `tasks/send` and `tasks/sendSubscribe` endpoints\n* Stream real-time research progress updates\n* Return structured research results\n* Support push notifications\n\nThis makes it easy to integrate the DeepResearch Agent with other A2A-compatible systems (such as Google Assistant) or to call it from custom front-end applications.\n\n#### Dedicated Web UI\n\n\nhttps://github.com/user-attachments/assets/640365c7-839b-4765-b9ac-ee0ac961ceb8\n\n\nWe also built a modern Next.js web UI specifically for interacting with the DeepResearch A2A service:\n\n* An intuitive interface for entering a research topic and starting a research task\n* Real-time progress and intermediate updates (via Server-Sent Events)\n* A polished rendering of the final research report\n* A demonstration of handling A2A streaming responses with native browser APIs\n\n**Trying the DeepResearch Agent:**\n\n1. **Standalone mode**:\n   * Check your environment: make sure `.env` contains all required API keys (e.g. `OPENAI_API_KEY`/`DEEPSEEK_API_KEY`, `TAVILY_API_KEY`, `EXA_API_KEY`).\n   * Run the script from the project root:\n     ```bash\n     python super_agents/deep_research/main.py\n     ```\n   * Enter a topic and inspect the result: generated reports are usually saved under `output/`.\n\n2. **A2A service mode**:\n   * Start the A2A server:\n     ```bash\n     cd super_agents/deep_research/a2a_adapter\n     python run_server.py\n     ```\n   * The server starts on the default port (usually 8000) and exposes A2A-compliant API endpoints.\n\n3. **Web UI mode**:\n   * Make sure the A2A server is running\n   * Start the web UI:\n     ```bash\n     cd web_for_a2a\n     npm install\n     npm run dev\n     ```\n   * Open http://localhost:3000/deepresearch in your browser to interact with the DeepResearch Agent through the UI.\n\n## Future Work / Contributing\n\n* Improve the sub-agents' toolsets and prompts.\n* Strengthen the Evaluator node's evaluation logic.\n* Add richer handling of task dependencies.\n* Improve management of long conversation histories.\n* Integrate a persistent Checkpointer (e.g. SQLite, Redis).\n* Issues and pull requests are welcome!\n* You can also reach me on WeChat: brown🩷cony999\n\n\n## License\n\nThis project is licensed under the MIT License - see the LICENSE file for details.\n"
  },
  {
    "path": "__init__.py",
    "content": "# Project package initialization\n"
  },
  {
    "path": "api/__init__.py",
    "content": ""
  },
  {
    "path": "api/agent/__init__.py",
    "content": ""
  },
  {
    "path": "api/agent/loader.py",
    "content": "# Agent Loader Module\n# This module is responsible for loading agents from the web_agents directory\n\nimport importlib\nimport os\nimport sys\nfrom typing import Dict, Optional, Any, List\nfrom langgraph.graph import StateGraph\nfrom langgraph.graph.graph import CompiledGraph  # Add this import\n\n# Try to import deep_research_app\ntry:\n    # Adjust this import path based on your project structure\n    from super_agents.deep_research.reason_graph.graph import web_app as deep_research_app\nexcept ImportError:\n    print(\"Warning: Failed to import deep_research_app. DeepResearchAgent will be unavailable.\")\n    deep_research_app = None\n\n# Add examples directory to Python path to allow importing web_agents\nexamples_path = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(__file__))), 'examples')\nif examples_path not in sys.path:\n    sys.path.append(examples_path)\n\n\ndef list_available_agents() -> Dict[str, str]:\n    \"\"\"List all available agents in the web_agents directory\n    \n    Returns:\n        Dict[str, str]: A dictionary mapping agent names to their descriptions\n    \"\"\"\n    agents = {}\n    web_agents_dir = os.path.join(examples_path, 'web_agents')\n    \n    # Check if web_agents directory exists\n    if not os.path.exists(web_agents_dir) or not os.path.isdir(web_agents_dir):\n        pass  # Continue with empty agents dict\n    else:\n        # Iterate through subdirectories in web_agents\n        for item in os.listdir(web_agents_dir):\n            agent_dir = os.path.join(web_agents_dir, item)\n            \n            # Skip non-directories and special directories\n            if not os.path.isdir(agent_dir) or item.startswith('__') or item.startswith('.'):\n                continue\n            \n            # Check if the directory contains an __init__.py file with get_graph function\n            init_file = os.path.join(agent_dir, '__init__.py')\n            if os.path.exists(init_file):\n                
# Try to get description from README.md\n                readme_file = os.path.join(agent_dir, 'README.md')\n                description = item  # Default description is the directory name\n                \n                if os.path.exists(readme_file):\n                    try:\n                        with open(readme_file, 'r', encoding='utf-8') as f:\n                            first_line = f.readline().strip()\n                            if first_line.startswith('# '):\n                                description = first_line[2:]\n                    except Exception:\n                        pass\n                \n                agents[item] = description\n    \n    # Add deep_research to available agents if it's imported successfully\n    if deep_research_app is not None:\n        agents[\"deep_research\"] = \"Deep Research Agent for in-depth topic exploration\"\n    \n    return agents\n\n\ndef load_agent(agent_name: str) -> Optional[CompiledGraph]:\n    \"\"\"Load an agent from the web_agents directory or special agents\n    \n    Args:\n        agent_name (str): The name of the agent to load\n        \n    Returns:\n        Optional[CompiledGraph]: The compiled graph for the agent, or None if the agent could not be loaded\n    \"\"\"\n    # Special case for deep_research agent\n    if agent_name == \"deep_research\":\n        if deep_research_app:\n            return deep_research_app\n        else:\n            print(f\"ERROR: DeepResearchAgent requested but not available.\")\n            return None\n    \n    # Standard agents from web_agents directory\n    try:\n        # Import the agent module\n        module = importlib.import_module(f'web_agents.{agent_name}')\n        \n        # Check if the module has a get_graph function\n        if hasattr(module, 'get_graph'):\n            # Call the get_graph function to get the compiled graph\n            return module.get_graph()\n        else:\n            print(f\"Error: Agent '{agent_name}' does 
not have a get_graph function\")\n            return None\n    except ImportError as e:\n        print(f\"Error importing agent '{agent_name}': {e}\")\n        return None\n    except Exception as e:\n        print(f\"Error loading agent '{agent_name}': {e}\")\n        return None\n\n\n# Default agent to use if none is specified\nDEFAULT_AGENT = 'research_assistant'\n# DEFAULT_AGENT = 'weather_agent'\n\n\ndef get_default_agent() -> Optional[CompiledGraph]:\n    \"\"\"Get the default agent\n    \n    Returns:\n        Optional[CompiledGraph]: The compiled graph for the default agent, or None if it could not be loaded\n    \"\"\"\n    return load_agent(DEFAULT_AGENT)"
  },
  {
    "path": "api/server.py",
    "content": "import uvicorn\nfrom langgraph.types import Command, Interrupt\nfrom fastapi import FastAPI, Request, HTTPException, Query\nfrom fastapi.middleware.cors import CORSMiddleware\nfrom sse_starlette.sse import EventSourceResponse\nfrom typing import AsyncGenerator, Dict, Optional, Union, Any\nfrom api.utils import message_chunk_event, interrupt_event, custom_event, checkpoint_event, format_state_snapshot, stream_update_event\nimport asyncio\nimport traceback\nimport json\nfrom langchain_core.messages import HumanMessage\nfrom langchain_core.runnables import RunnableConfig\n\n# Import the agent loader\nfrom api.agent.loader import load_agent, list_available_agents, get_default_agent\n\n# Load the default agent\ngraph = get_default_agent()\n\n# Track active connections\nactive_connections: Dict[str, asyncio.Event] = {}\n\napp = FastAPI(\n    title=\"LangGraph API\",\n    description=\"API for LangGraph interactions\",\n    version=\"0.1.0\"\n)\n\n# Configure CORS\napp.add_middleware(\n    CORSMiddleware,\n    allow_origins=[\"*\"],  # In production, replace with specific origins\n    allow_credentials=True,\n    allow_methods=[\"*\"],\n    allow_headers=[\"*\"],\n)\n\n\n@app.get(\"/agents\")\nasync def list_agents():\n    \"\"\"Endpoint returning a list of available agents.\"\"\"\n    return list_available_agents()\n\n\n@app.get(\"/state\")\nasync def state(thread_id: str | None = None, agent: Optional[str] = Query(None)):\n    \"\"\"Endpoint returning current graph state.\"\"\"\n    if not thread_id:\n        raise HTTPException(status_code=400, detail=\"thread_id is required\")\n    \n    # Load the specified agent if provided\n    current_graph = load_agent(agent) if agent else graph\n    if not current_graph:\n        raise HTTPException(status_code=404, detail=f\"Agent '{agent}' not found\")\n\n    config: RunnableConfig = {\"configurable\": {\"thread_id\": thread_id}}\n\n    state = await current_graph.aget_state(config)\n    return 
format_state_snapshot(state)\n\n\n@app.get(\"/history\")\nasync def history(thread_id: str | None = None, agent: Optional[str] = Query(None)):\n    \"\"\"Endpoint returning complete state history. Used for restoring graph.\"\"\"\n    if not thread_id:\n        raise HTTPException(status_code=400, detail=\"thread_id is required\")\n    \n    # Load the specified agent if provided\n    current_graph = load_agent(agent) if agent else graph\n    if not current_graph:\n        raise HTTPException(status_code=404, detail=f\"Agent '{agent}' not found\")\n\n    config: RunnableConfig  = {\"configurable\": {\"thread_id\": thread_id}}\n\n    records = []\n    async for state in current_graph.aget_state_history(config):\n        records.append(format_state_snapshot(state))\n    return records\n\n\n@app.post(\"/agent/stop\")\nasync def stop_agent(request: Request):\n    \"\"\"Endpoint for stopping the running agent.\"\"\"\n    body = await request.json()\n    thread_id = body.get(\"thread_id\")\n    if not thread_id:\n        raise HTTPException(status_code=400, detail=\"thread_id is required\")\n\n    if thread_id in active_connections:\n        active_connections[thread_id].set()\n        return {\"status\": \"stopped\", \"thread_id\": thread_id}\n    raise HTTPException(status_code=404, detail=\"Thread is not running\")\n\n\n@app.post(\"/agent\")\nasync def agent(request: Request):\n    \"\"\"Endpoint for running the agent.\"\"\"\n    body = await request.json()\n\n    request_type = body.get(\"type\")\n    if not request_type:\n        raise HTTPException(status_code=400, detail=\"type is required\")\n\n    thread_id = body.get(\"thread_id\")\n    if not thread_id:\n        raise HTTPException(status_code=400, detail=\"thread_id is required\")\n\n    # Get the agent name if provided\n    agent_name = body.get(\"agent\")\n    \n    # Load the specified agent if provided\n    current_graph = load_agent(agent_name) if agent_name else graph\n    if not current_graph:\n        
raise HTTPException(status_code=404, detail=f\"Agent '{agent_name or 'default'}' not found\")\n\n    stop_event = asyncio.Event()\n    active_connections[thread_id] = stop_event\n\n    config: RunnableConfig = {\"configurable\": {\"thread_id\": thread_id}}\n    initial_graph_state: Dict[str, Any] = {}\n    input_for_astream: Optional[Union[Dict, Command]] = None  # input for astream\n\n    # Get initial state or messages from frontend\n    initial_state_input = body.get(\"state\", {\"messages\": []})\n    if not isinstance(initial_state_input, dict):\n        raise HTTPException(status_code=400, detail=\"state must be a dictionary\")\n\n    if agent_name == \"deep_research\":\n        # --- Prepare state for DeepResearch Agent ---\n        print(\"Preparing state for DeepResearchAgent...\")\n        # Extract topic from the first message in state['messages']\n        first_message_content = \"\"\n        try:\n            # Ensure initial_state_input['messages'] is a list and not empty\n            if isinstance(initial_state_input.get('messages'), list) and initial_state_input['messages']:\n                # Assume the first message's content is the topic\n                first_message_content = initial_state_input['messages'][0]['content']\n            else:\n                # Try to get topic from other fields in state (alternative)\n                first_message_content = initial_state_input.get('topic', '')\n                \n        except Exception as e:\n            print(f\"Warning: Could not extract topic from initial state input: {e}\")\n\n        if not first_message_content or not isinstance(first_message_content, str):\n            raise HTTPException(status_code=400, detail=\"A valid 'topic' string is required for deep_research agent, expected in state.messages[0].content or state.topic\")\n\n        # Build the ResearchState needed by DeepResearch Agent (at least topic and depth)\n        initial_graph_state = {\n            \"topic\": 
first_message_content,\n            \"depth\": initial_state_input.get(\"depth\", \"advanced\"),  # Optional: allow frontend to specify depth\n            \"messages\": [],  # DeepResearch manages its own message history\n            \"stream_updates\": [],  # Initialize stream_updates\n            # Initialize other ResearchState fields to None or default values\n            \"plan\": None, \"research_plan\": None, \"search_results\": [], \n            \"gap_analysis\": None, \"final_synthesis\": None, \n            \"final_report_markdown\": None,\n        }\n        print(f\"Initial ResearchState: {{'topic': '{initial_graph_state['topic']}', 'depth': '{initial_graph_state['depth']}', ...}}\")\n        \n        # DeepResearch Agent's astream input is the complete initial state\n        if request_type == \"run\":\n            input_for_astream = initial_graph_state\n        elif request_type == \"resume\":\n            # DeepResearch Agent might not support or need different resume approach\n            print(\"Warning: 'resume' might not be fully supported for DeepResearchAgent yet.\")\n            # Assume resume Command can be understood by the graph\n            input_for_astream = Command(resume=body.get(\"resume\"))\n            config[\"configurable\"][\"checkpoint_id\"] = body.get(\"resume\")  # Resume usually needs checkpoint ID\n        else:  # Fork, Replay typically only need config\n            config_from_request = body.get(\"config\")\n            if not config_from_request:\n                raise HTTPException(status_code=400, detail=\"config is required for fork/replay\")\n            config = config_from_request  # Use complete config provided in the request\n            input_for_astream = None\n\n    else:  # For Supervisor or other Agents (assume using PlanningAgentState)\n        print(\"Preparing state for Supervisor/Other Agent...\")\n        # --- Prepare PlanningAgentState ---\n        # Ensure messages list contains correct BaseMessage 
objects (or let BaseAgent preprocess)\n        initial_messages = initial_state_input.get(\"messages\", [])\n\n        initial_graph_state = {\n            \"messages\": initial_messages,\n            \"plan\": None,  # Planner node will create it\n            \"error\": None\n            # Add other fields needed by PlanningAgentState and set to None or default values\n        }\n        \n        # --- Set astream input (logic similar to before) ---\n        if request_type == \"run\":\n            # For PlanningAgentState, initial input typically only contains messages\n            input_for_astream = {\"messages\": initial_messages}\n        elif request_type == \"resume\":\n            resume_val = body.get(\"resume\")\n            if not resume_val:\n                raise HTTPException(status_code=400, detail=\"resume value is required\")\n            input_for_astream = Command(resume=resume_val)\n            # Ensure config includes checkpoint_id for resuming\n            if \"configurable\" not in config:\n                config[\"configurable\"] = {}\n            config[\"configurable\"][\"checkpoint_id\"] = resume_val \n        elif request_type == \"fork\": \n            config_from_request = body.get(\"config\")\n            if not config_from_request:\n                raise HTTPException(status_code=400, detail=\"config is required for fork\")\n            config = config_from_request  # Fork uses complete config provided\n            # Fork typically starts from specified checkpoint, no extra state dict input needed\n            input_for_astream = None \n        elif request_type == \"replay\": \n            config_from_request = body.get(\"config\")\n            if not config_from_request:\n                raise HTTPException(status_code=400, detail=\"config is required for replay\")\n            config = config_from_request\n            input_for_astream = None\n        else:\n            raise HTTPException(status_code=400, detail=\"invalid 
request type\")\n             \n    # Ensure config always has thread_id (important for all agents)\n    if \"configurable\" not in config:\n        config[\"configurable\"] = {}\n    config[\"configurable\"][\"thread_id\"] = thread_id\n\n    # --- State and Input preparation complete ---\n\n    async def generate_events() -> AsyncGenerator[dict, None]:\n        try:\n            # 设置recursion_limit为100，解决深度研究时的递归限制问题\n            if agent_name == \"deep_research\" and \"recursion_limit\" not in config:\n                config[\"recursion_limit\"] = 100\n                \n            async for chunk in current_graph.astream(\n                input_for_astream,  # Use prepared input\n                config,             # Use prepared config\n                stream_mode=[\"debug\", \"messages\", \"updates\", \"custom\"],\n            ):\n                if stop_event.is_set():\n                    break\n\n                chunk_type, chunk_data = chunk\n\n                if chunk_type == \"debug\":\n                    # type can be checkpoint, task, task_result\n                    if isinstance(chunk_data, dict) and \"type\" in chunk_data:\n                        debug_type = chunk_data[\"type\"]\n                        if debug_type == \"checkpoint\":\n                            yield checkpoint_event(chunk_data)\n                        elif debug_type == \"task_result\":\n                            interrupts = chunk_data[\"payload\"].get(\n                                \"interrupts\", [])\n                            if interrupts and len(interrupts) > 0:\n                                yield interrupt_event(interrupts)\n                elif chunk_type == \"messages\":\n                    # 确保chunk_data是一个包含至少两个元素的列表/元组，并且第二个元素是一个包含langgraph_node的字典\n                    if isinstance(chunk_data, (list, tuple)) and len(chunk_data) > 1 and isinstance(chunk_data[1], dict) and \"langgraph_node\" in chunk_data[1]:\n                        yield 
message_chunk_event(chunk_data[1][\"langgraph_node\"], chunk_data[0])\n                    else:\n                        print(f\"Warning: Unexpected messages chunk_data format: {chunk_data}\")\n                        # Fall back to safe defaults\n                        node_name = chunk_data[1].get(\"langgraph_node\", \"unknown\") if isinstance(chunk_data, (list, tuple)) and len(chunk_data) > 1 and isinstance(chunk_data[1], dict) else \"unknown\"\n                        message = chunk_data[0] if isinstance(chunk_data, (list, tuple)) and len(chunk_data) > 0 else None\n                        if message is not None:\n                            yield message_chunk_event(node_name, message)\n                elif chunk_type == \"custom\":\n                    # Check if this is a StreamUpdate\n                    if isinstance(chunk_data, dict) and all(k in chunk_data for k in ['id', 'type', 'status', 'title']):\n                        yield stream_update_event(chunk_data)\n                    else:\n                        yield custom_event(chunk_data)\n                elif chunk_type == \"updates\":\n                    # Handle state update events (e.g., real-time Plan updates)\n                    pass  # Currently ignore updates events, rely on checkpoint or custom\n            \n            # --- Loop ended ---\n            yield {\"event\": \"end\", \"data\": \"{}\"}  # Send an end event to frontend\n\n        except Exception as e:\n            print(f\"Error during agent execution stream: {e}\")\n            traceback.print_exc()\n            # Send error event to frontend\n            yield {\"event\": \"error\", \"data\": json.dumps({\"message\": f\"Agent execution error: {e}\"})}\n        finally:\n            if thread_id in active_connections:\n                del active_connections[thread_id]\n\n    return EventSourceResponse(generate_events())\n\n\ndef main():\n    uvicorn.run(\"api.server:app\", host=\"0.0.0.0\", port=8000, reload=True)\n\n\nif __name__ == 
\"__main__\":\n    import sys\n    import os\n    # 将项目根目录添加到 Python 路径中\n    sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))\n    main()\n"
  },
  {
    "path": "api/utils.py",
    "content": "import json\nfrom typing import Dict, Any, List, Optional\nfrom langchain_core.messages import BaseMessage, AIMessage, HumanMessage, ToolMessage\nfrom langgraph.types import StateSnapshot\n\n\ndef checkpoint_event(value):\n    \"\"\"Create a checkpoint event for the client.\"\"\"\n\n    def format_values(values: dict):\n        formatted_values = values.copy()\n        if \"messages\" in formatted_values:\n            formatted_values[\"messages\"] = [\n                {\n                    \"type\": msg.get(\"type\") if isinstance(msg, dict) else msg.type,\n                    \"content\": msg.get(\"content\") if isinstance(msg, dict) else msg.content,\n                    \"id\": msg.get(\"id\") if isinstance(msg, dict) else msg.id,\n                    \"tool_calls\": msg.get(\"tool_calls\") if isinstance(msg, dict) else (msg.tool_calls if hasattr(msg, 'tool_calls') else None)\n                }\n                for msg in formatted_values[\"messages\"]\n            ]\n        return formatted_values\n\n    def format_writes(writes: dict):\n        if writes is None:\n            return None\n        formatted_writes = {}\n        for key, value in writes.items():\n            if isinstance(value, dict):\n                formatted_writes[key] = format_values(value)\n            elif isinstance(value, list):\n                formatted_writes[key] = [format_values(item) if isinstance(\n                    item, dict) else item for item in value]\n            else:\n                formatted_writes[key] = value\n        return formatted_writes\n\n    configurable = value[\"payload\"][\"config\"][\"configurable\"]\n    data = {\n        \"next\": value[\"payload\"][\"next\"],\n        \"values\": format_values(value[\"payload\"][\"values\"]),\n        \"config\": {\n            \"configurable\": {\n                \"checkpoint_id\": configurable[\"checkpoint_id\"],\n                \"checkpoint_ns\": configurable[\"checkpoint_ns\"],\n                
\"thread_id\": configurable[\"thread_id\"]\n            }\n        },\n        \"metadata\": {\n            \"source\": value[\"payload\"][\"metadata\"][\"source\"],\n            \"step\": value[\"payload\"][\"metadata\"][\"step\"],\n            \"writes\": format_writes(value[\"payload\"][\"metadata\"][\"writes\"]),\n            \"parents\": value[\"payload\"][\"metadata\"][\"parents\"]\n        }\n    }\n    return {\n        \"event\": \"checkpoint\",\n        \"data\": json.dumps(data)\n    }\n\n\ndef message_chunk_event(node_name, message_chunk):\n    \"\"\"Create a message chunk event for the client.\"\"\"\n\n    def format_messages(value):\n        \"\"\"Format message chunk into a serializable dictionary. \n        This is needed because the message class is not serializable.\n        \"\"\"\n        return {\n            \"content\": value.content,\n            \"id\": value.id,\n            \"tool_calls\": value.tool_calls if hasattr(value, 'tool_calls') else None,\n            \"tool_call_chunks\": value.tool_call_chunks if hasattr(value, 'tool_call_chunks') else None\n        }\n\n    return {\n        \"event\": \"message_chunk\",\n        \"data\": json.dumps({\n            \"node_name\": node_name,\n            \"message_chunk\": format_messages(message_chunk)\n        })\n    }\n\n\ndef interrupt_event(interrupts):\n    \"\"\"Create an interrupt event for the client.\"\"\"\n    formatted_interrupts = [{\"value\": interrupt[\"value\"]}\n                            for interrupt in interrupts]\n    return {\n        \"event\": \"interrupt\",\n        \"data\": json.dumps(formatted_interrupts)\n    }\n\n\ndef custom_event(value):\n    \"\"\"Create a custom event for the client.\"\"\"\n    return {\n        \"event\": \"custom\",\n        \"data\": json.dumps(value)\n    }\n\n\ndef format_state_snapshot(snapshot: StateSnapshot):\n    interrupts = []\n    for task in snapshot.tasks:\n        for interrupt in task.interrupts:\n            
interrupts.append({\"value\": interrupt.value})\n    return {\n        \"values\": snapshot.values,\n        \"next\": snapshot.next,\n        \"config\": snapshot.config,\n        \"interrupts\": interrupts,\n        \"parent_config\": snapshot.parent_config,\n        \"metadata\": snapshot.metadata\n    }\n\n\ndef stream_update_event(data: dict):\n    \"\"\"为 DeepResearch Agent 的 StreamUpdateData 创建一个 stream_update 事件。\n\n    Args:\n        data: 从 add_stream_update 产生的、符合 StreamUpdateData 结构的字典。\n\n    Returns:\n        符合 SSE EventSourceResponse 格式的字典。\n    \"\"\"\n    if not isinstance(data, dict):\n        # 如果传入的不是字典，返回一个错误事件\n        return {\n            \"event\": \"error\",\n            \"data\": json.dumps({\"message\": \"Internal server error: Invalid stream update data type.\"})\n        }\n    \n    return {\n        \"event\": \"stream_update\",\n        \"data\": json.dumps(data, default=str)\n    }\n"
  },
  {
    "path": "core/__init__.py",
    "content": "# Core module initialization"
  },
  {
    "path": "core/a2a/README.md",
    "content": "# Mentis A2A (Agent2Agent) Protocol Integration\n\nThis directory (`core/a2a/`) contains the client and server implementations of the Agent2Agent (A2A) protocol, enabling Mentis Agents to communicate and collaborate with other agent systems that support A2A.\n\n## Background\n\nA2A is an open standard initiated by Google that lets AI agents built on different frameworks (such as LangGraph, CrewAI, Google ADK, Genkit) or by different vendors discover each other's capabilities, negotiate interaction modes (text, files, data, etc.), and collaborate on tasks.\n\n## Core Components\n\n### 1. A2A Client (`A2AClient`)\n\nThe `A2AClient` class (in `client/client.py`) provides functionality for interacting with servers that support the A2A protocol:\n\n* **Agent discovery:** Automatically discovers agent capabilities (the Agent Card) via the `.well-known/agent.json` endpoint.\n* **Task management:** Methods for sending, getting, and canceling tasks.\n* **Push notifications:** Supports setting and getting a task's push notification configuration.\n* **Streaming responses:** Receives real-time task execution updates through a streaming API.\n* **Async architecture:** Built on `asyncio` and `httpx`, suitable for asynchronous applications.\n\n### 2. A2A Server (`A2AServer`)\n\nThe `A2AServer` class (in `server/server.py`) exposes an existing Mentis Agent as a service that supports the A2A protocol:\n\n* **Starlette-based:** Uses the Starlette framework to serve HTTP and SSE endpoints.\n* **Task handling:** Supports task creation, execution, and status tracking.\n* **Streaming updates:** Provides real-time task execution updates via Server-Sent Events (SSE).\n* **Agent Card:** Publishes agent capabilities through the `.well-known/agent.json` endpoint.\n\n### 3. Utilities\n\n#### Push Notification Auth (`PushNotificationAuth`)\n\nThe `PushNotificationAuth` classes (in `utils/push_notification_auth.py`) provide a secure push notification mechanism:\n\n* **Sender auth (`PushNotificationSenderAuth`):** \n  - Generates and manages JWT key pairs\n  - Verifies push notification URLs\n  - Signs and sends push notifications\n  - Serves a JWKS endpoint from which receivers fetch the public key\n\n* **Receiver auth (`PushNotificationReceiverAuth`):** \n  - Loads public keys from a JWKS URL\n  - Verifies the integrity and freshness of received push notifications\n  - Prevents replay attacks\n\n#### In-Memory Cache (`InMemoryCache`)\n\nThe `InMemoryCache` class (in `utils/in_memory_cache.py`) provides a thread-safe in-memory cache implementation:\n\n* **Singleton pattern:** Ensures a single cache instance per application\n* **TTL support:** Entries can be given an expiration time\n* **Thread safety:** Uses locking to ensure concurrency safety\n\n## Data Types\n\nThe A2A protocol defines several key data types (in `types.py`):\n\n* **AgentCard:** Metadata describing an agent, including name, description, URL, capabilities, and skills.\n* **Task:** A task executed by an agent, containing status, content, and artifacts.\n* **Part:** One piece of content; can be text, a file, or data.\n* **Artifact:** Output produced by an agent, such as results or generated files.\n* **TaskState:** Task state enum (submitted, working, input required, completed, canceled, failed).\n* **PushNotificationConfig:** Push notification configuration, containing the callback URL and auth info.\n\n## Usage\n\n### 1. 
Creating and Using the A2A Client\n\n```python\nimport asyncio\nfrom core.a2a.types import AgentCard\nfrom core.a2a.client.client import A2AClient\n\nasync def main():\n    # Option 1: create a client directly from a URL\n    async with A2AClient(url=\"http://localhost:8000/a2a\") as client:\n        # Send a task\n        response = await client.send_task({\"text\": \"Please research artificial intelligence for me\"})\n        task_id = response[\"result\"][\"taskId\"]\n        \n        # Fetch the task result\n        task_response = await client.get_task({\"id\": task_id})\n        \n        # Set up push notifications\n        await client.set_task_callback({\n            \"taskId\": task_id,\n            \"callbackUrl\": \"https://your-callback-url.com/webhook\"\n        })\n        \n    # Option 2: create a client from an Agent Card\n    agent_card = AgentCard(name=\"Example Agent\", url=\"http://localhost:8000/a2a\")\n    async with A2AClient(agent_card=agent_card) as client:\n        # Receive real-time updates via the streaming API\n        async for update in client.send_task_streaming({\"text\": \"Analyze the latest AI trends\"}):\n            print(update)\n\n# Run\nasyncio.run(main())\n```\n\n### 2. Creating an A2A Server\n\n```python\nfrom core.a2a.server.server import A2AServer\nfrom core.a2a.server.task_manager import InMemoryTaskManager\nfrom core.a2a.types import AgentCard\n\n# Create the agent card\nagent_card = AgentCard(\n    name=\"My Agent\",\n    description=\"An example agent\",\n    url=\"http://localhost:5000\"\n)\n\n# Create the task manager\ntask_manager = InMemoryTaskManager()\n\n# Create the server\nserver = A2AServer(\n    host=\"0.0.0.0\",\n    port=5000,\n    endpoint=\"/\",\n    agent_card=agent_card,\n    task_manager=task_manager\n)\n\n# Start the server\nserver.start()\n```\n\n### 3. 
Configuring Push Notifications\n\n#### Sender Configuration\n\n```python\nfrom core.a2a.utils.push_notification_auth import PushNotificationSenderAuth\n\n# Create sender auth\nsender_auth = PushNotificationSenderAuth()\n\n# Generate the key pair\nsender_auth.generate_jwk()\n\n# Add the JWKS endpoint to your server\napp.add_route(\"/.well-known/jwks.json\", sender_auth.handle_jwks_endpoint)\n\n# Verify the receiver URL\nis_valid = await sender_auth.verify_push_notification_url(\"https://receiver-url.com/webhook\")\n\n# Send a push notification\nif is_valid:\n    await sender_auth.send_push_notification(\n        \"https://receiver-url.com/webhook\",\n        {\"event\": \"task_completed\", \"taskId\": \"123\"}\n    )\n```\n\n#### Receiver Configuration\n\n```python\nfrom core.a2a.utils.push_notification_auth import PushNotificationReceiverAuth\nfrom starlette.requests import Request\n\n# Create receiver auth\nreceiver_auth = PushNotificationReceiverAuth()\n\n# Load the sender's public key\nawait receiver_auth.load_jwks(\"https://sender-url.com/.well-known/jwks.json\")\n\n# Verify push notifications in your webhook handler\nasync def webhook_handler(request: Request):\n    is_valid = await receiver_auth.verify_push_notification(request)\n    if is_valid:\n        # Handle the push notification...\n        data = await request.json()\n        print(f\"Received valid push notification: {data}\")\n```\n\n### 4. Using the In-Memory Cache\n\n```python\nfrom core.a2a.utils.in_memory_cache import InMemoryCache\n\n# Get the cache instance\ncache = InMemoryCache()\n\n# Set a cache entry (with TTL)\ncache.set(\"api_result\", {\"data\": \"some_value\"}, ttl=300)  # expires in 5 minutes\n\n# Get a cache entry\nresult = cache.get(\"api_result\")\nif result:\n    print(f\"Cache hit: {result}\")\nelse:\n    print(\"Cache entry expired or missing\")\n    \n# Delete a cache entry\ncache.delete(\"api_result\")\n\n# Clear the whole cache\ncache.clear()\n```\n\n## Complete Example\n\nSee `examples/16_a2a_integration_test.py` for a complete integration example, including:\n\n1. Creating an A2A server that exposes an existing Agent as an A2A service\n2. Using the A2A client to connect to the A2A server\n3. 
Creating an Agent that uses the A2A client as a tool\n\nRunning the examples:\n\n```bash\n# Start the A2A server\npython -m examples.16_a2a_integration_test server\n\n# Run the A2A client\npython -m examples.16_a2a_integration_test client\n\n# Run the Agent with the A2A tool\npython -m examples.16_a2a_integration_test agent\n```\n\n## Relationship to MCP\n\nMentis supports both MCP (Model Context Protocol) and A2A (Agent2Agent):\n\n* **MCP:** Focuses on interaction between AI models and external tools/services, mainly used to extend a single Agent's capabilities.\n* **A2A:** Focuses on communication and collaboration between different Agents, letting multiple Agents work together.\n\nThe two protocols are complementary and can be combined to build powerful Agent systems."
  },
  {
    "path": "core/a2a/__init__.py",
    "content": ""
  },
  {
    "path": "core/a2a/agent_task_manager.py",
    "content": "import asyncio\nimport logging\nimport traceback\nfrom typing import Dict, Any, Union, AsyncIterable, Optional\nfrom core.a2a.types import (\n    TaskState, TaskStatus, Task, Artifact, Message, TextPart,\n    SendTaskRequest, SendTaskResponse, GetTaskRequest, GetTaskResponse,\n    CancelTaskRequest, CancelTaskResponse, SendTaskStreamingRequest, SendTaskStreamingResponse,\n    SetTaskPushNotificationRequest, SetTaskPushNotificationResponse,\n    GetTaskPushNotificationRequest, GetTaskPushNotificationResponse,\n    TaskResubscriptionRequest, TaskSendParams, JSONRPCResponse, InvalidParamsError,\n    TaskNotFoundError, TaskNotCancelableError, PushNotificationNotSupportedError,\n    TaskArtifactUpdateEvent, TaskStatusUpdateEvent, InternalError, TaskIdParams,\n    PushNotificationConfig\n)\nfrom core.a2a.server.task_manager import TaskManager, InMemoryTaskManager\nfrom core.a2a.server import utils\n\nlogger = logging.getLogger(__name__)\n\nclass AgentTaskManager(InMemoryTaskManager):\n    \"\"\"\n    AgentTaskManager is the key component bridging a LangGraph Agent and the A2A protocol.\n    It manages the task lifecycle, handles streaming responses, updates task state,\n    and sends push notifications.\n    \"\"\"\n    def __init__(self, agent, notification_sender_auth=None):\n        \"\"\"\n        Initialize the AgentTaskManager.\n        \n        Args:\n            agent: LangGraph Agent instance\n            notification_sender_auth: push notification auth (optional)\n        \"\"\"\n        super().__init__()\n        self.agent = agent\n        self.notification_sender_auth = notification_sender_auth\n    \n    async def _run_streaming_agent(self, request: SendTaskStreamingRequest):\n        \"\"\"\n        Run the agent in streaming mode and process its responses.\n        \n        Args:\n            request: streaming task request\n        \"\"\"\n        task_send_params: TaskSendParams = request.params\n        query = self._get_user_query(task_send_params)\n        try:\n            async for item in self.agent.stream(query, task_send_params.sessionId):\n                is_task_complete = item[\"is_task_complete\"]\n                require_user_input = 
item[\"require_user_input\"]\n                artifact = None\n                message = None\n                parts = [{\"type\": \"text\", \"text\": item[\"content\"]}]\n                end_stream = False\n                \n                if not is_task_complete and not require_user_input:\n                    task_state = TaskState.WORKING\n                    message = Message(role=\"agent\", parts=parts)\n                elif require_user_input:\n                    task_state = TaskState.INPUT_REQUIRED\n                    message = Message(role=\"agent\", parts=parts)\n                    end_stream = True\n                else:\n                    task_state = TaskState.COMPLETED\n                    artifact = Artifact(parts=parts, index=0, append=False)\n                    end_stream = True\n                \n                task_status = TaskStatus(state=task_state, message=message)\n                latest_task = await self.update_store(\n                    task_send_params.id,\n                    task_status,\n                    None if artifact is None else [artifact],\n                )\n                await self.send_task_notification(latest_task)\n\n                if artifact:\n                    task_artifact_update_event = TaskArtifactUpdateEvent(\n                        id=task_send_params.id, artifact=artifact\n                    )\n                    await self.enqueue_events_for_sse(\n                        task_send_params.id, task_artifact_update_event\n                    )                    \n                task_update_event = TaskStatusUpdateEvent(\n                    id=task_send_params.id, status=task_status, final=end_stream\n                )\n                await self.enqueue_events_for_sse(\n                    task_send_params.id, task_update_event\n                )\n        except Exception as e:\n            logger.error(f\"An error occurred while streaming the response: {e}\")\n            await 
self.enqueue_events_for_sse(\n                task_send_params.id,\n                InternalError(message=f\"An error occurred while streaming the response: {e}\")                \n            )\n\n    def _get_user_query(self, task_send_params: TaskSendParams) -> str:\n        \"\"\"\n        Extract the user query from the task parameters (following the strict approach of the Google demo).\n\n        Args:\n            task_send_params: task send parameters\n\n        Returns:\n            str: the user query text\n        \"\"\"\n        if not task_send_params.message or not task_send_params.message.parts:\n            logger.warning(f\"[_get_user_query] Message or parts are empty for task {task_send_params.id}\")\n            return \"\"  # alternatively raise an error, depending on your design\n\n        # Take the first part directly\n        part = task_send_params.message.parts[0]\n        logger.debug(f\"[_get_user_query] First part: type={type(part)}, value={part!r}\")  # keep for debugging\n\n        # Strictly check that the first part is a TextPart instance\n        if not isinstance(part, TextPart):\n            logger.error(f\"[_get_user_query] First part is not a TextPart instance! Type: {type(part)}\")\n            # Raise immediately; this aborts the flow with a clear message\n            raise ValueError(f\"Expected first message part to be TextPart, but got {type(part)}\")\n\n        # Check passed; return the text\n        logger.debug(f\"[_get_user_query] Extracted query from TextPart: '{part.text}'\")\n        return part.text\n\n\n    def _validate_request(\n        self, request: Union[SendTaskRequest, SendTaskStreamingRequest]\n    ) -> JSONRPCResponse | None:\n        \"\"\"\n        Validate the request parameters.\n        \n        Args:\n            request: task request\n            \n        Returns:\n            JSONRPCResponse | None: an error response, or None\n        \"\"\"\n        task_send_params: TaskSendParams = request.params\n        if not utils.are_modalities_compatible(\n            task_send_params.acceptedOutputModes, self.agent.SUPPORTED_CONTENT_TYPES\n        ):\n            logger.warning(\n                \"Unsupported output mode. 
Received %s, Support %s\",\n                task_send_params.acceptedOutputModes,\n                self.agent.SUPPORTED_CONTENT_TYPES,\n            )\n            return utils.new_incompatible_types_error(request.id)\n        \n        if task_send_params.pushNotification and not task_send_params.pushNotification.url:\n            logger.warning(\"Push notification URL is missing\")\n            return JSONRPCResponse(id=request.id, error=InvalidParamsError(message=\"Push notification URL is missing\"))\n        \n        return None\n        \n    async def on_send_task(self, request: SendTaskRequest) -> SendTaskResponse:\n        \"\"\"\n        Handle a send-task request.\n        \n        Args:\n            request: task request\n            \n        Returns:\n            SendTaskResponse: task response\n        \"\"\"\n        validation_error = self._validate_request(request)\n        if validation_error:\n            return SendTaskResponse(id=request.id, error=validation_error.error)\n        \n        if request.params.pushNotification:\n            if not await self.set_push_notification_info(request.params.id, request.params.pushNotification):\n                return SendTaskResponse(id=request.id, error=InvalidParamsError(message=\"Push notification URL is invalid\"))\n\n        await self.upsert_task(request.params)\n        task = await self.update_store(\n            request.params.id, TaskStatus(state=TaskState.WORKING), None\n        )\n        await self.send_task_notification(task)\n\n        task_send_params: TaskSendParams = request.params\n        query = self._get_user_query(task_send_params)\n        try:\n            agent_response = self.agent.invoke(query, task_send_params.sessionId)\n            # Process the agent response and update the task state\n            parts = [{\"type\": \"text\", \"text\": agent_response}]\n            artifact = Artifact(parts=parts, index=0, append=False)\n            task = await self.update_store(\n                task_send_params.id, \n                
TaskStatus(state=TaskState.COMPLETED), \n                [artifact]\n            )\n            await self.send_task_notification(task)\n            return SendTaskResponse(id=request.id, result=task)\n\n        except Exception as e:\n            # Log the error and mark the task as failed\n            logger.error(f\"Error during agent invocation or task processing: {e}\", exc_info=True)\n            try:\n                # Make sure the state is updated even inside the exception handler\n                task_failed: Task = await self.update_store(\n                    task_send_params.id,\n                    TaskStatus(state=TaskState.FAILED, error={\"message\": str(e)}),\n                    None\n                )\n                await self.send_task_notification(task_failed)\n            except Exception as update_err:\n                # If updating the state also fails, log it\n                logger.error(f\"Failed to update task status to FAILED after initial error: {update_err}\", exc_info=True)\n\n            # InternalError is the appropriate error type here, since the failure\n            # occurred inside server-side processing\n            return SendTaskResponse(id=request.id, error=InternalError(message=f\"Error processing task: {str(e) or type(e).__name__}\"))\n\n    \n    async def on_send_task_subscribe(\n        self, request: SendTaskStreamingRequest\n    ) -> AsyncIterable[SendTaskStreamingResponse] | JSONRPCResponse:\n        \"\"\"\n        Handle a streaming task request.\n        \n        Args:\n            request: streaming task request\n            \n        Returns:\n            AsyncIterable[SendTaskStreamingResponse] | JSONRPCResponse: a stream of responses, or an error\n        \"\"\"\n        try:\n            error = self._validate_request(request)\n            if error:\n                return error\n            \n            await self.upsert_task(request.params)\n            \n            if request.params.pushNotification:\n                if not await 
self.set_push_notification_info(request.params.id, request.params.pushNotification):\n                    return JSONRPCResponse(id=request.id, error=InvalidParamsError(message=\"Push notification URL is invalid\"))\n            \n            task_send_params: TaskSendParams = request.params\n            sse_event_queue = await self.setup_sse_consumer(task_send_params.id, False)            \n            asyncio.create_task(self._run_streaming_agent(request))\n            \n            return self.dequeue_events_for_sse(\n                request.id, task_send_params.id, sse_event_queue\n            )\n        except Exception as e:\n            logger.error(f\"Error in SSE stream: {e}\")\n            print(traceback.format_exc())\n            return JSONRPCResponse(\n                id=request.id,\n                error=InternalError(\n                    message=\"An error occurred while streaming the response\"\n                ),\n            )\n\n    async def _process_agent_response(\n        self, request: SendTaskRequest, agent_response: dict\n    ) -> SendTaskResponse:\n        \"\"\"Processes the agent's response and updates the task store.\"\"\"\n        task_send_params: TaskSendParams = request.params\n        task_id = task_send_params.id\n        history_length = task_send_params.historyLength\n        task_status = None\n\n        parts = [{\"type\": \"text\", \"text\": agent_response[\"content\"]}]\n        artifact = None\n        if agent_response[\"require_user_input\"]:\n            task_status = TaskStatus(\n                state=TaskState.INPUT_REQUIRED,\n                message=Message(role=\"agent\", parts=parts),\n            )\n        else:\n            task_status = TaskStatus(state=TaskState.COMPLETED)\n            artifact = Artifact(parts=parts)\n        task = await self.update_store(\n            task_id, task_status, None if artifact is None else [artifact]\n        )\n        task_result = self.append_task_history(task, 
history_length)\n        await self.send_task_notification(task)\n        return SendTaskResponse(id=request.id, result=task_result)\n    \n    async def on_resubscribe_to_task(\n        self, request: TaskResubscriptionRequest\n    ) -> AsyncIterable[SendTaskStreamingResponse] | JSONRPCResponse:\n        task_id_params: TaskIdParams = request.params\n        try:\n            sse_event_queue = await self.setup_sse_consumer(task_id_params.id, True)\n            return self.dequeue_events_for_sse(request.id, task_id_params.id, sse_event_queue)\n        except Exception as e:\n            logger.error(f\"Error while reconnecting to SSE stream: {e}\")\n            return JSONRPCResponse(\n                id=request.id,\n                error=InternalError(\n                    message=f\"An error occurred while reconnecting to stream: {e}\"\n                ),\n            )\n    \n    async def send_task_notification(self, task: Task):\n        if not await self.has_push_notification_info(task.id):\n            logger.info(f\"No push notification info found for task {task.id}\")\n            return\n        push_info = await self.get_push_notification_info(task.id)\n\n        logger.info(f\"Notifying for task {task.id} => {task.status.state}\")\n        await self.notification_sender_auth.send_push_notification(\n            push_info.url,\n            data=task.model_dump(exclude_none=True)\n        )\n\n    async def set_push_notification_info(self, task_id: str, push_notification_config: PushNotificationConfig):\n        # Verify the ownership of notification URL by issuing a challenge request.\n        if self.notification_sender_auth:\n            is_verified = await self.notification_sender_auth.verify_push_notification_url(push_notification_config.url)\n            if not is_verified:\n                return False\n        \n        await super().set_push_notification_info(task_id, push_notification_config)\n        return True"
  },
  {
    "path": "core/a2a/client/__init__.py",
    "content": ""
  },
  {
    "path": "core/a2a/client/card_resolver.py",
    "content": "import httpx\nfrom core.a2a.types import (\n    AgentCard,\n    A2AClientJSONError,\n)\nimport json\n\n\nclass A2ACardResolver:\n    def __init__(self, base_url, agent_card_path=\"/.well-known/agent.json\"):\n        self.base_url = base_url.rstrip(\"/\")\n        self.agent_card_path = agent_card_path.lstrip(\"/\")\n\n    def get_agent_card(self) -> AgentCard:\n        with httpx.Client() as client:\n            response = client.get(self.base_url + \"/\" + self.agent_card_path)\n            response.raise_for_status()\n            try:\n                return AgentCard(**response.json())\n            except json.JSONDecodeError as e:\n                raise A2AClientJSONError(str(e)) from e"
  },
  {
    "path": "core/a2a/client/client.py",
    "content": "import httpx\nfrom httpx_sse import aconnect_sse\nfrom typing import Any, AsyncIterable\nfrom core.a2a.types import (\n    AgentCard,\n    GetTaskRequest,\n    SendTaskRequest,\n    SendTaskResponse,\n    JSONRPCRequest,\n    GetTaskResponse,\n    CancelTaskResponse,\n    CancelTaskRequest,\n    SetTaskPushNotificationRequest,\n    SetTaskPushNotificationResponse,\n    GetTaskPushNotificationRequest,\n    GetTaskPushNotificationResponse,\n    A2AClientHTTPError,\n    A2AClientJSONError,\n    SendTaskStreamingRequest,\n    SendTaskStreamingResponse,\n)\nimport json\n\n\nclass A2AClient:\n    def __init__(self, agent_card: AgentCard = None, url: str = None):\n        if agent_card:\n            self.url = agent_card.url\n        elif url:\n            self.url = url\n        else:\n            raise ValueError(\"Must provide either agent_card or url\")\n\n    async def send_task(self, payload: dict[str, Any]) -> SendTaskResponse:\n        request = SendTaskRequest(params=payload)\n        return SendTaskResponse(**await self._send_request(request))\n\n    async def send_task_streaming(\n        self, payload: dict[str, Any]\n    ) -> AsyncIterable[SendTaskStreamingResponse]:\n        request = SendTaskStreamingRequest(params=payload)\n        # Use the async SSE client so the event loop is not blocked while streaming\n        async with httpx.AsyncClient(timeout=None) as client:\n            async with aconnect_sse(\n                client, \"POST\", self.url, json=request.model_dump()\n            ) as event_source:\n                try:\n                    async for sse in event_source.aiter_sse():\n                        yield SendTaskStreamingResponse(**json.loads(sse.data))\n                except json.JSONDecodeError as e:\n                    raise A2AClientJSONError(str(e)) from e\n                except httpx.RequestError as e:\n                    raise A2AClientHTTPError(400, str(e)) from e\n\n    async def _send_request(self, request: JSONRPCRequest) -> dict[str, Any]:\n        async with httpx.AsyncClient() as client:\n            try:\n         
       # Image generation could take time, adding timeout\n                response = await client.post(\n                    self.url, json=request.model_dump(), timeout=30\n                )\n                response.raise_for_status()\n                return response.json()\n            except httpx.HTTPStatusError as e:\n                raise A2AClientHTTPError(e.response.status_code, str(e)) from e\n            except json.JSONDecodeError as e:\n                raise A2AClientJSONError(str(e)) from e\n\n    async def get_task(self, payload: dict[str, Any]) -> GetTaskResponse:\n        request = GetTaskRequest(params=payload)\n        return GetTaskResponse(**await self._send_request(request))\n\n    async def cancel_task(self, payload: dict[str, Any]) -> CancelTaskResponse:\n        request = CancelTaskRequest(params=payload)\n        return CancelTaskResponse(**await self._send_request(request))\n\n    async def set_task_callback(\n        self, payload: dict[str, Any]\n    ) -> SetTaskPushNotificationResponse:\n        request = SetTaskPushNotificationRequest(params=payload)\n        return SetTaskPushNotificationResponse(**await self._send_request(request))\n\n    async def get_task_callback(\n        self, payload: dict[str, Any]\n    ) -> GetTaskPushNotificationResponse:\n        request = GetTaskPushNotificationRequest(params=payload)\n        return GetTaskPushNotificationResponse(**await self._send_request(request))"
  },
  {
    "path": "core/a2a/config.json",
    "content": "{\n  \"local_agent\": {\n    \"url\": \"http://127.0.0.1:8000/\",\n    \"auth\": {\n      \"type\": \"none\"\n    }\n  }\n}"
  },
  {
    "path": "core/a2a/server/__init__.py",
    "content": ""
  },
  {
    "path": "core/a2a/server/server.py",
    "content": "# core/a2a/server/server.py\nfrom starlette.applications import Starlette\nfrom starlette.responses import JSONResponse\nfrom sse_starlette.sse import EventSourceResponse\nfrom starlette.requests import Request\nfrom starlette.middleware import Middleware\nfrom starlette.middleware.cors import CORSMiddleware\n\n# ValidationError comes from Pydantic; it is intentionally not imported from core.a2a.types\nfrom pydantic import ValidationError\n\nfrom core.a2a.types import (\n    A2ARequest,\n    JSONRPCResponse,\n    JSONRPCError,\n    InvalidRequestError,\n    JSONParseError,\n    GetTaskRequest,\n    CancelTaskRequest,\n    SendTaskRequest,\n    SetTaskPushNotificationRequest,\n    GetTaskPushNotificationRequest,\n    InternalError,\n    AgentCard,\n    TaskResubscriptionRequest,\n    SendTaskStreamingRequest,\n    MethodNotFoundError,\n)\nimport json\nfrom typing import AsyncIterable, Any, Optional, Union\nfrom core.a2a.server.task_manager import TaskManager\n\nimport logging\n\nlogger = logging.getLogger(__name__)\n\n\nclass A2AServer:\n    def __init__(\n        self,\n        host=\"0.0.0.0\",\n        port=5000,\n        endpoint=\"/\",\n        agent_card: AgentCard = None,\n        task_manager: TaskManager = None,\n        allowed_origins: Optional[list[str]] = None,\n    ):\n        self.host = host\n        self.port = port\n        self.endpoint = endpoint\n        self.task_manager = task_manager\n        self.agent_card = agent_card\n\n        if allowed_origins is None:\n            # For local development, default to allowing only localhost:3000\n            allowed_origins = [\"http://localhost:3000\"]\n            logger.warning(\"CORS allow_origins set to 'http://localhost:3000' for local development.\")\n        else:\n            logger.info(f\"CORS allow_origins configured: {allowed_origins}\")\n\n        middleware = [\n            Middleware(\n                CORSMiddleware,\n                allow_origins=allowed_origins,\n                allow_credentials=True,\n                
allow_methods=[\"*\"],\n                allow_headers=[\"*\"],\n            )\n        ]\n        self.app = Starlette(middleware=middleware, debug=True)\n        self.app.add_route(self.endpoint, self._process_request, methods=[\"POST\"])\n        self.app.add_route(\n            \"/.well-known/agent.json\", self._get_agent_card, methods=[\"GET\"]\n        )\n        logger.info(f\"A2AServer initialized. Endpoint: {self.endpoint}, Agent Card Endpoint: /.well-known/agent.json\")\n\n    def start(self):\n        if self.agent_card is None: raise ValueError(\"agent_card must be provided to A2AServer\")\n        if self.task_manager is None: raise ValueError(\"task_manager must be provided to A2AServer\")\n        import uvicorn\n        logger.info(f\"Starting Uvicorn server on {self.host}:{self.port}...\")\n        uvicorn.run(self.app, host=self.host, port=self.port)\n\n    def _get_agent_card(self, request: Request) -> JSONResponse:\n        logger.debug(\"Received request for /.well-known/agent.json\")\n        if not self.agent_card:\n             logger.error(\"Agent card requested but not configured in A2AServer.\")\n             return JSONResponse({\"error\": \"Agent card not configured\"}, status_code=500)\n        return JSONResponse(self.agent_card.model_dump(exclude_none=True))\n\n    async def _process_request(self, request: Request) -> Union[JSONResponse, EventSourceResponse]:\n        result = None; json_rpc_request = None; request_id_for_error = None\n        try:\n            try: body = await request.json(); logger.debug(f\"Received request body: {body}\")\n            except json.JSONDecodeError as e: logger.error(f\"JSON decoding failed: {e}\"); raise JSONParseError()\n\n            try:\n                json_rpc_request = A2ARequest.validate_python(body); request_id_for_error = getattr(json_rpc_request, 'id', None)\n                logger.info(f\"Processing valid A2A request: Method='{json_rpc_request.method}', ID='{request_id_for_error}', 
TaskID='{getattr(json_rpc_request.params, 'id', 'N/A')}'\")\n            except ValidationError as e:\n                logger.error(f\"A2A request validation failed: {e}\")\n                # Fall back to the raw body's id so the error response still carries a request id\n                request_id_for_error = body.get('id') if isinstance(body, dict) else None\n                # Note: the InvalidRequestError raised here is caught by the except Exception block below\n                raise InvalidRequestError(data=json.loads(e.json())) from e\n\n            # Dispatch to the TaskManager\n            if isinstance(json_rpc_request, GetTaskRequest): result = await self.task_manager.on_get_task(json_rpc_request)\n            elif isinstance(json_rpc_request, SendTaskRequest): result = await self.task_manager.on_send_task(json_rpc_request)\n            elif isinstance(json_rpc_request, SendTaskStreamingRequest): result = await self.task_manager.on_send_task_subscribe(json_rpc_request)\n            elif isinstance(json_rpc_request, CancelTaskRequest): result = await self.task_manager.on_cancel_task(json_rpc_request)\n            elif isinstance(json_rpc_request, SetTaskPushNotificationRequest): result = await self.task_manager.on_set_task_push_notification(json_rpc_request)\n            elif isinstance(json_rpc_request, GetTaskPushNotificationRequest): result = await self.task_manager.on_get_task_push_notification(json_rpc_request)\n            elif isinstance(json_rpc_request, TaskResubscriptionRequest): result = await self.task_manager.on_resubscribe_to_task(json_rpc_request)\n            else: logger.warning(f\"Unhandled validated request type: {type(json_rpc_request)}\"); raise MethodNotFoundError(data={\"method\": getattr(json_rpc_request, 'method', 'unknown')})\n\n            logger.debug(f\"[A2AServer] Result from TaskManager method '{json_rpc_request.method}': type={type(result)}\")\n            return self._create_response(result)\n\n        except Exception as e:\n            # Handle all exceptions raised during request processing (validation and task manager calls alike) in one place\n            logger.error(f\"Exception during request processing: {e}\", 
exc_info=True)\n            return self._handle_exception(e, request_id=request_id_for_error)\n\n    def _handle_exception(self, e: Exception, request_id: Optional[Union[str, int]] = None) -> JSONResponse:\n        status_code = 500\n        json_rpc_error: Optional[JSONRPCError] = None\n        if isinstance(e, JSONParseError): json_rpc_error = e; status_code = 400\n        elif isinstance(e, InvalidRequestError): json_rpc_error = e; status_code = 400\n        elif isinstance(e, MethodNotFoundError): json_rpc_error = e; status_code = 404  # or 501\n        elif isinstance(e, ValidationError):\n            # Pydantic's ValidationError is caught here as well\n            logger.warning(f\"Pydantic Validation error caught in handler: {e}\")\n            error_data = str(e)\n            try:\n                error_data = json.loads(e.json())\n            except Exception:\n                pass\n            # A Pydantic validation error during request handling is a kind of InvalidRequestError;\n            # if it happens while building the response it is closer to an InternalError\n            json_rpc_error = InvalidRequestError(message=\"Request/Response data validation failed\", data=error_data)\n            status_code = 400  # treated as a problem with the request (or response) data structure\n        elif isinstance(e, ValueError) and \"Unexpected result type\" in str(e):\n             logger.error(f\"Internal error due to unexpected result type: {e}\", exc_info=False)\n             json_rpc_error = InternalError(message=\"Server error: Unexpected result type from handler.\")\n             status_code = 500\n        elif isinstance(e, NotImplementedError):\n             logger.error(f\"Method not implemented: {e}\", exc_info=True)\n             json_rpc_error = MethodNotFoundError(message=f\"Method not implemented: {e}\")\n             status_code = 501\n        else:\n            logger.error(f\"Unhandled internal exception: {e}\", exc_info=True)\n            json_rpc_error = InternalError(message=f\"An internal server error occurred: {type(e).__name__}\")\n            status_code = 500\n\n        
response = JSONRPCResponse(id=request_id, error=json_rpc_error)\n        logger.debug(f\"Returning error response: {response.model_dump(exclude_none=True)}\")\n        return JSONResponse(response.model_dump(exclude_none=True), status_code=status_code)\n\n    def _create_response(self, result: Any) -> Union[JSONResponse, EventSourceResponse]:\n        if isinstance(result, AsyncIterable):\n            logger.debug(\"[A2AServer] Creating EventSourceResponse (text/event-stream)\")\n            async def event_generator(stream_result: AsyncIterable) -> AsyncIterable[dict[str, str]]:\n                try:\n                    async for item in stream_result:\n                        if hasattr(item, 'model_dump_json'):\n                            json_data = item.model_dump_json(exclude_none=True)\n                            logger.debug(f\"A2AServer yielding SSE data: {json_data}\")\n                            yield {\"data\": json_data}\n                        else:\n                            logger.warning(f\"Yielding non-Pydantic object in event stream: {type(item)}\")\n                            yield {\"data\": json.dumps(str(item))}\n                except Exception as gen_err:\n                    logger.error(f\"Error during SSE event generation: {gen_err}\", exc_info=True)\n                    try:\n                        # Try to yield a standard JSON-RPC error event\n                        error_payload = JSONRPCResponse(id=None, error=InternalError(message=f\"Streaming generation error: {gen_err}\"))\n                        yield {\"event\": \"error\", \"data\": error_payload.model_dump_json(exclude_none=True)}\n                    except Exception as yield_err:\n                         logger.error(f\"Failed to yield error event to SSE stream: {yield_err}\", exc_info=True)\n\n            return EventSourceResponse(event_generator(result))\n        elif isinstance(result, JSONRPCResponse):\n            logger.debug(\"[A2AServer] Creating JSONResponse 
(application/json)\")\n            return JSONResponse(result.model_dump(exclude_none=True))\n        else:\n            logger.error(f\"Unexpected result type received by _create_response: {type(result)}\")\n            raise ValueError(f\"Unexpected result type: {type(result)}\")"
  },
  {
    "path": "core/a2a/server/task_manager.py",
    "content": "from abc import ABC, abstractmethod\nfrom typing import Union, AsyncIterable, List\nfrom core.a2a.types import Task\nfrom core.a2a.types import (\n    JSONRPCResponse,\n    TaskIdParams,\n    TaskQueryParams,\n    GetTaskRequest,\n    TaskNotFoundError,\n    SendTaskRequest,\n    CancelTaskRequest,\n    TaskNotCancelableError,\n    SetTaskPushNotificationRequest,\n    GetTaskPushNotificationRequest,\n    GetTaskResponse,\n    CancelTaskResponse,\n    SendTaskResponse,\n    SetTaskPushNotificationResponse,\n    GetTaskPushNotificationResponse,\n    PushNotificationNotSupportedError,\n    TaskSendParams,\n    TaskStatus,\n    TaskState,\n    TaskResubscriptionRequest,\n    SendTaskStreamingRequest,\n    SendTaskStreamingResponse,\n    Artifact,\n    PushNotificationConfig,\n    TaskStatusUpdateEvent,\n    JSONRPCError,\n    TaskPushNotificationConfig,\n    InternalError,\n)\nfrom core.a2a.server.utils import new_not_implemented_error\nimport asyncio\nimport logging\n\nlogger = logging.getLogger(__name__)\n\nclass TaskManager(ABC):\n    @abstractmethod\n    async def on_get_task(self, request: GetTaskRequest) -> GetTaskResponse:\n        pass\n\n    @abstractmethod\n    async def on_cancel_task(self, request: CancelTaskRequest) -> CancelTaskResponse:\n        pass\n\n    @abstractmethod\n    async def on_send_task(self, request: SendTaskRequest) -> SendTaskResponse:\n        pass\n\n    @abstractmethod\n    async def on_send_task_subscribe(\n        self, request: SendTaskStreamingRequest\n    ) -> Union[AsyncIterable[SendTaskStreamingResponse], JSONRPCResponse]:\n        pass\n\n    @abstractmethod\n    async def on_set_task_push_notification(\n        self, request: SetTaskPushNotificationRequest\n    ) -> SetTaskPushNotificationResponse:\n        pass\n\n    @abstractmethod\n    async def on_get_task_push_notification(\n        self, request: GetTaskPushNotificationRequest\n    ) -> GetTaskPushNotificationResponse:\n        pass\n\n    
@abstractmethod\n    async def on_resubscribe_to_task(\n        self, request: TaskResubscriptionRequest\n    ) -> Union[AsyncIterable[SendTaskResponse], JSONRPCResponse]:\n        pass\n\n\nclass InMemoryTaskManager(TaskManager):\n    def __init__(self):\n        self.tasks: dict[str, Task] = {}\n        self.push_notification_infos: dict[str, PushNotificationConfig] = {}\n        self.lock = asyncio.Lock()\n        self.task_sse_subscribers: dict[str, List[asyncio.Queue]] = {}\n        self.subscriber_lock = asyncio.Lock()\n\n    async def on_get_task(self, request: GetTaskRequest) -> GetTaskResponse:\n        logger.info(f\"Getting task {request.params.id}\")\n        task_query_params: TaskQueryParams = request.params\n\n        async with self.lock:\n            task = self.tasks.get(task_query_params.id)\n            if task is None:\n                return GetTaskResponse(id=request.id, error=TaskNotFoundError())\n\n            task_result = self.append_task_history(\n                task, task_query_params.historyLength\n            )\n\n        return GetTaskResponse(id=request.id, result=task_result)\n\n    async def on_cancel_task(self, request: CancelTaskRequest) -> CancelTaskResponse:\n        logger.info(f\"Cancelling task {request.params.id}\")\n        task_id_params: TaskIdParams = request.params\n\n        async with self.lock:\n            task = self.tasks.get(task_id_params.id)\n            if task is None:\n                return CancelTaskResponse(id=request.id, error=TaskNotFoundError())\n\n        return CancelTaskResponse(id=request.id, error=TaskNotCancelableError())\n\n    @abstractmethod\n    async def on_send_task(self, request: SendTaskRequest) -> SendTaskResponse:\n        pass\n\n    @abstractmethod\n    async def on_send_task_subscribe(\n        self, request: SendTaskStreamingRequest\n    ) -> Union[AsyncIterable[SendTaskStreamingResponse], JSONRPCResponse]:\n        pass\n\n    async def set_push_notification_info(self, task_id: 
str, notification_config: PushNotificationConfig):\n        async with self.lock:\n            task = self.tasks.get(task_id)\n            if task is None:\n                raise ValueError(f\"Task not found for {task_id}\")\n\n            self.push_notification_infos[task_id] = notification_config\n\n    async def get_push_notification_info(self, task_id: str) -> PushNotificationConfig:\n        async with self.lock:\n            task = self.tasks.get(task_id)\n            if task is None:\n                raise ValueError(f\"Task not found for {task_id}\")\n\n            return self.push_notification_infos[task_id]\n\n    async def has_push_notification_info(self, task_id: str) -> bool:\n        async with self.lock:\n            return task_id in self.push_notification_infos\n\n    async def on_set_task_push_notification(\n        self, request: SetTaskPushNotificationRequest\n    ) -> SetTaskPushNotificationResponse:\n        logger.info(f\"Setting task push notification {request.params.id}\")\n        task_notification_params: TaskPushNotificationConfig = request.params\n\n        try:\n            await self.set_push_notification_info(task_notification_params.id, task_notification_params.pushNotificationConfig)\n        except Exception as e:\n            logger.error(f\"Error while setting push notification info: {e}\")\n            return SetTaskPushNotificationResponse(\n                id=request.id,\n                error=InternalError(\n                    message=\"An error occurred while setting push notification info\"\n                ),\n            )\n\n        return SetTaskPushNotificationResponse(id=request.id, result=task_notification_params)\n\n    async def on_get_task_push_notification(\n        self, request: GetTaskPushNotificationRequest\n    ) -> GetTaskPushNotificationResponse:\n        logger.info(f\"Getting task push notification {request.params.id}\")\n        
task_params: TaskIdParams = request.params\n\n        try:\n            notification_info = await self.get_push_notification_info(task_params.id)\n        except Exception as e:\n            logger.error(f\"Error while getting push notification info: {e}\")\n            return GetTaskPushNotificationResponse(\n                id=request.id,\n                error=InternalError(\n                    message=\"An error occurred while getting push notification info\"\n                ),\n            )\n        \n        return GetTaskPushNotificationResponse(id=request.id, result=TaskPushNotificationConfig(id=task_params.id, pushNotificationConfig=notification_info))\n\n    async def upsert_task(self, task_send_params: TaskSendParams) -> Task:\n        logger.info(f\"Upserting task {task_send_params.id}\")\n        async with self.lock:\n            task = self.tasks.get(task_send_params.id)\n            if task is None:\n                task = Task(\n                    id=task_send_params.id,\n                    sessionId = task_send_params.sessionId,\n                    messages=[task_send_params.message],\n                    status=TaskStatus(state=TaskState.SUBMITTED),\n                    history=[task_send_params.message],\n                )\n                self.tasks[task_send_params.id] = task\n            else:\n                task.history.append(task_send_params.message)\n\n            return task\n\n    async def on_resubscribe_to_task(\n        self, request: TaskResubscriptionRequest\n    ) -> Union[AsyncIterable[SendTaskStreamingResponse], JSONRPCResponse]:\n        return new_not_implemented_error(request.id)\n\n    async def update_store(\n        self, task_id: str, status: TaskStatus, artifacts: list[Artifact]\n    ) -> Task:\n        async with self.lock:\n            try:\n                task = self.tasks[task_id]\n            except KeyError:\n                logger.error(f\"Task {task_id} not found for updating the task\")\n                
raise ValueError(f\"Task {task_id} not found\")\n\n            task.status = status\n\n            if status.message is not None:\n                task.history.append(status.message)\n\n            if artifacts is not None:\n                if task.artifacts is None:\n                    task.artifacts = []\n                task.artifacts.extend(artifacts)\n\n            return task\n\n    def append_task_history(self, task: Task, historyLength: int | None):\n        new_task = task.model_copy()\n        if historyLength is not None and historyLength > 0:\n            new_task.history = new_task.history[-historyLength:]\n        else:\n            new_task.history = []\n\n        return new_task        \n\n    async def setup_sse_consumer(self, task_id: str, is_resubscribe: bool = False):\n        async with self.subscriber_lock:\n            if task_id not in self.task_sse_subscribers:\n                if is_resubscribe:\n                    raise ValueError(\"Task not found for resubscription\")\n                else:\n                    self.task_sse_subscribers[task_id] = []\n\n            sse_event_queue = asyncio.Queue(maxsize=0) # <=0 is unlimited\n            self.task_sse_subscribers[task_id].append(sse_event_queue)\n            return sse_event_queue\n\n    async def enqueue_events_for_sse(self, task_id, task_update_event):\n        async with self.subscriber_lock:\n            if task_id not in self.task_sse_subscribers:\n                return\n\n            current_subscribers = self.task_sse_subscribers[task_id]\n            for subscriber in current_subscribers:\n                await subscriber.put(task_update_event)\n\n    async def dequeue_events_for_sse(\n        self, request_id, task_id, sse_event_queue: asyncio.Queue\n    ) -> AsyncIterable[SendTaskStreamingResponse] | JSONRPCResponse:\n        try:\n            while True:                \n                event = await sse_event_queue.get()\n                if isinstance(event, 
JSONRPCError):\n                    yield SendTaskStreamingResponse(id=request_id, error=event)\n                    break\n                                                \n                yield SendTaskStreamingResponse(id=request_id, result=event)\n                if isinstance(event, TaskStatusUpdateEvent) and event.final:\n                    break\n        finally:\n            async with self.subscriber_lock:\n                if task_id in self.task_sse_subscribers:\n                    self.task_sse_subscribers[task_id].remove(sse_event_queue)\n"
  },
  {
    "path": "core/a2a/server/utils.py",
    "content": "from core.a2a.types import (\n    JSONRPCResponse,\n    ContentTypeNotSupportedError,\n    UnsupportedOperationError,\n)\nfrom typing import List\n\n\ndef are_modalities_compatible(\n    server_output_modes: List[str], client_output_modes: List[str]\n):\n    \"\"\"Modalities are compatible if they are both non-empty\n    and there is at least one common element.\"\"\"\n    if client_output_modes is None or len(client_output_modes) == 0:\n        return True\n\n    if server_output_modes is None or len(server_output_modes) == 0:\n        return True\n\n    return any(x in server_output_modes for x in client_output_modes)\n\n\ndef new_incompatible_types_error(request_id):\n    return JSONRPCResponse(id=request_id, error=ContentTypeNotSupportedError())\n\n\ndef new_not_implemented_error(request_id):\n    return JSONRPCResponse(id=request_id, error=UnsupportedOperationError())"
  },
  {
    "path": "core/a2a/types.py",
    "content": "from typing import Union, Any\nfrom pydantic import BaseModel, Field, TypeAdapter\nfrom typing import Literal, List, Annotated, Optional\nfrom datetime import datetime\nfrom pydantic import model_validator, ConfigDict, field_serializer\nfrom uuid import uuid4\nfrom enum import Enum\nfrom typing_extensions import Self\n\n\nclass TaskState(str, Enum):\n    SUBMITTED = \"submitted\"\n    WORKING = \"working\"\n    INPUT_REQUIRED = \"input-required\"\n    COMPLETED = \"completed\"\n    CANCELED = \"canceled\"\n    FAILED = \"failed\"\n    UNKNOWN = \"unknown\"\n\n\nclass TextPart(BaseModel):\n    type: Literal[\"text\"] = \"text\"\n    text: str\n    metadata: dict[str, Any] | None = None\n\n\nclass FileContent(BaseModel):\n    name: str | None = None\n    mimeType: str | None = None\n    bytes: str | None = None\n    uri: str | None = None\n\n    @model_validator(mode=\"after\")\n    def check_content(self) -> Self:\n        if not (self.bytes or self.uri):\n            raise ValueError(\"Either 'bytes' or 'uri' must be present in the file data\")\n        if self.bytes and self.uri:\n            raise ValueError(\n                \"Only one of 'bytes' or 'uri' can be present in the file data\"\n            )\n        return self\n\n\nclass FilePart(BaseModel):\n    type: Literal[\"file\"] = \"file\"\n    file: FileContent\n    metadata: dict[str, Any] | None = None\n\n\nclass DataPart(BaseModel):\n    type: Literal[\"data\"] = \"data\"\n    data: dict[str, Any]\n    metadata: dict[str, Any] | None = None\n\n\nPart = Annotated[Union[TextPart, FilePart, DataPart], Field(discriminator=\"type\")]\n\n\nclass Message(BaseModel):\n    role: Literal[\"user\", \"agent\"]\n    parts: List[Part]\n    metadata: dict[str, Any] | None = None\n\n\nclass TaskStatus(BaseModel):\n    state: TaskState\n    message: Message | None = None\n    timestamp: datetime = Field(default_factory=datetime.now)\n\n    @field_serializer(\"timestamp\")\n    def serialize_dt(self, dt: 
datetime, _info):\n        return dt.isoformat()\n\n\nclass Artifact(BaseModel):\n    name: str | None = None\n    description: str | None = None\n    parts: List[Part]\n    metadata: dict[str, Any] | None = None\n    index: int = 0\n    append: bool | None = None\n    lastChunk: bool | None = None\n\n\nclass Task(BaseModel):\n    id: str\n    sessionId: str | None = None\n    status: TaskStatus\n    artifacts: List[Artifact] | None = None\n    history: List[Message] | None = None\n    metadata: dict[str, Any] | None = None\n\n\nclass TaskStatusUpdateEvent(BaseModel):\n    id: str\n    status: TaskStatus\n    final: bool = False\n    metadata: dict[str, Any] | None = None\n\n\nclass TaskArtifactUpdateEvent(BaseModel):\n    id: str\n    artifact: Artifact    \n    metadata: dict[str, Any] | None = None\n\n\nclass AuthenticationInfo(BaseModel):\n    model_config = ConfigDict(extra=\"allow\")\n\n    schemes: List[str]\n    credentials: str | None = None\n\n\nclass PushNotificationConfig(BaseModel):\n    url: str\n    token: str | None = None\n    authentication: AuthenticationInfo | None = None\n\n\nclass TaskIdParams(BaseModel):\n    id: str\n    metadata: dict[str, Any] | None = None\n\n\nclass TaskQueryParams(TaskIdParams):\n    historyLength: int | None = None\n\n\nclass TaskSendParams(BaseModel):\n    id: str\n    sessionId: str = Field(default_factory=lambda: uuid4().hex)\n    message: Message\n    acceptedOutputModes: Optional[List[str]] = None\n    pushNotification: PushNotificationConfig | None = None\n    historyLength: int | None = None\n    metadata: dict[str, Any] | None = None\n\n\nclass TaskPushNotificationConfig(BaseModel):\n    id: str\n    pushNotificationConfig: PushNotificationConfig\n\n\n## RPC Messages\n\n\nclass JSONRPCMessage(BaseModel):\n    jsonrpc: Literal[\"2.0\"] = \"2.0\"\n    id: int | str | None = Field(default_factory=lambda: uuid4().hex)\n\n\nclass JSONRPCRequest(JSONRPCMessage):\n    method: str\n    params: dict[str, Any] | None = 
None\n\n\nclass JSONRPCError(BaseModel):\n    code: int\n    message: str\n    data: Any | None = None\n\n\nclass JSONRPCResponse(JSONRPCMessage):\n    result: Any | None = None\n    error: JSONRPCError | None = None\n\n\nclass SendTaskRequest(JSONRPCRequest):\n    method: Literal[\"tasks/send\"] = \"tasks/send\"\n    params: TaskSendParams\n\n\nclass SendTaskResponse(JSONRPCResponse):\n    result: Task | None = None\n\n\nclass SendTaskStreamingRequest(JSONRPCRequest):\n    method: Literal[\"tasks/sendSubscribe\"] = \"tasks/sendSubscribe\"\n    params: TaskSendParams\n\n\nclass SendTaskStreamingResponse(JSONRPCResponse):\n    result: TaskStatusUpdateEvent | TaskArtifactUpdateEvent | None = None\n\n\nclass GetTaskRequest(JSONRPCRequest):\n    method: Literal[\"tasks/get\"] = \"tasks/get\"\n    params: TaskQueryParams\n\n\nclass GetTaskResponse(JSONRPCResponse):\n    result: Task | None = None\n\n\nclass CancelTaskRequest(JSONRPCRequest):\n    method: Literal[\"tasks/cancel\",] = \"tasks/cancel\"\n    params: TaskIdParams\n\n\nclass CancelTaskResponse(JSONRPCResponse):\n    result: Task | None = None\n\n\nclass SetTaskPushNotificationRequest(JSONRPCRequest):\n    method: Literal[\"tasks/pushNotification/set\",] = \"tasks/pushNotification/set\"\n    params: TaskPushNotificationConfig\n\n\nclass SetTaskPushNotificationResponse(JSONRPCResponse):\n    result: TaskPushNotificationConfig | None = None\n\n\nclass GetTaskPushNotificationRequest(JSONRPCRequest):\n    method: Literal[\"tasks/pushNotification/get\",] = \"tasks/pushNotification/get\"\n    params: TaskIdParams\n\n\nclass GetTaskPushNotificationResponse(JSONRPCResponse):\n    result: TaskPushNotificationConfig | None = None\n\n\nclass TaskResubscriptionRequest(JSONRPCRequest):\n    method: Literal[\"tasks/resubscribe\",] = \"tasks/resubscribe\"\n    params: TaskIdParams\n\n\nA2ARequest = TypeAdapter(\n    Annotated[\n        Union[\n            SendTaskRequest,\n            GetTaskRequest,\n            
CancelTaskRequest,\n            SetTaskPushNotificationRequest,\n            GetTaskPushNotificationRequest,\n            TaskResubscriptionRequest,\n            SendTaskStreamingRequest,\n        ],\n        Field(discriminator=\"method\"),\n    ]\n)\n\n## Error types\n\n\nclass JSONParseError(JSONRPCError):\n    code: int = -32700\n    message: str = \"Invalid JSON payload\"\n    data: Any | None = None\n\n\nclass InvalidRequestError(JSONRPCError):\n    code: int = -32600\n    message: str = \"Request payload validation error\"\n    data: Any | None = None\n\n\nclass MethodNotFoundError(JSONRPCError):\n    code: int = -32601\n    message: str = \"Method not found\"\n    data: None = None\n\n\nclass InvalidParamsError(JSONRPCError):\n    code: int = -32602\n    message: str = \"Invalid parameters\"\n    data: Any | None = None\n\n\nclass InternalError(JSONRPCError):\n    code: int = -32603\n    message: str = \"Internal error\"\n    data: Any | None = None\n\n\nclass TaskNotFoundError(JSONRPCError):\n    code: int = -32001\n    message: str = \"Task not found\"\n    data: None = None\n\n\nclass TaskNotCancelableError(JSONRPCError):\n    code: int = -32002\n    message: str = \"Task cannot be canceled\"\n    data: None = None\n\n\nclass PushNotificationNotSupportedError(JSONRPCError):\n    code: int = -32003\n    message: str = \"Push Notification is not supported\"\n    data: None = None\n\n\nclass UnsupportedOperationError(JSONRPCError):\n    code: int = -32004\n    message: str = \"This operation is not supported\"\n    data: None = None\n\n\nclass ContentTypeNotSupportedError(JSONRPCError):\n    code: int = -32005\n    message: str = \"Incompatible content types\"\n    data: None = None\n\n\nclass AgentProvider(BaseModel):\n    organization: str\n    url: str | None = None\n\n\nclass AgentCapabilities(BaseModel):\n    streaming: bool = False\n    pushNotifications: bool = False\n    stateTransitionHistory: bool = False\n\n\nclass 
AgentAuthentication(BaseModel):\n    schemes: List[str]\n    credentials: str | None = None\n\n\nclass AgentSkill(BaseModel):\n    id: str\n    name: str\n    description: str | None = None\n    tags: List[str] | None = None\n    examples: List[str] | None = None\n    inputModes: List[str] | None = None\n    outputModes: List[str] | None = None\n\n\nclass AgentCard(BaseModel):\n    name: str\n    description: str | None = None\n    url: str\n    provider: AgentProvider | None = None\n    version: str\n    documentationUrl: str | None = None\n    capabilities: AgentCapabilities\n    authentication: AgentAuthentication | None = None\n    defaultInputModes: List[str] = [\"text\"]\n    defaultOutputModes: List[str] = [\"text\"]\n    skills: List[AgentSkill]\n\n\nclass A2AClientError(Exception):\n    pass\n\n\nclass A2AClientHTTPError(A2AClientError):\n    def __init__(self, status_code: int, message: str):\n        self.status_code = status_code\n        self.message = message\n        super().__init__(f\"HTTP Error {status_code}: {message}\")\n\n\nclass A2AClientJSONError(A2AClientError):\n    def __init__(self, message: str):\n        self.message = message\n        super().__init__(f\"JSON Error: {message}\")\n\n\nclass MissingAPIKeyError(Exception):\n    \"\"\"Exception for missing API key.\"\"\"\n\n    pass"
  },
  {
    "path": "core/a2a/utils/__init__.py",
    "content": ""
  },
  {
    "path": "core/a2a/utils/in_memory_cache.py",
    "content": "\"\"\"In Memory Cache utility.\"\"\"\n\nimport threading\nimport time\nfrom typing import Any, Dict, Optional\n\n\nclass InMemoryCache:\n    \"\"\"A thread-safe Singleton class to manage cache data.\n\n    Ensures only one instance of the cache exists across the application.\n    \"\"\"\n\n    _instance: Optional[\"InMemoryCache\"] = None\n    _lock: threading.Lock = threading.Lock()\n    _initialized: bool = False\n\n    def __new__(cls):\n        \"\"\"Override __new__ to control instance creation (Singleton pattern).\n\n        Uses a lock to ensure thread safety during the first instantiation.\n\n        Returns:\n            The singleton instance of InMemoryCache.\n        \"\"\"\n        if cls._instance is None:\n            with cls._lock:\n                if cls._instance is None:\n                    cls._instance = super().__new__(cls)\n        return cls._instance\n\n    def __init__(self):\n        \"\"\"Initialize the cache storage.\n\n        Uses a flag (_initialized) to ensure this logic runs only on the very first\n        creation of the singleton instance.\n        \"\"\"\n        if not self._initialized:\n            with self._lock:\n                if not self._initialized:\n                    # print(\"Initializing SessionCache storage\")\n                    self._cache_data: Dict[str, Dict[str, Any]] = {}\n                    self._ttl: Dict[str, float] = {}\n                    self._data_lock: threading.Lock = threading.Lock()\n                    self._initialized = True\n\n    def set(self, key: str, value: Any, ttl: Optional[int] = None) -> None:\n        \"\"\"Set a key-value pair.\n\n        Args:\n            key: The key for the data.\n            value: The data to store.\n            ttl: Time to live in seconds. 
If None, data will not expire.\n        \"\"\"\n        with self._data_lock:\n            self._cache_data[key] = value\n\n            if ttl is not None:\n                self._ttl[key] = time.time() + ttl\n            else:\n                if key in self._ttl:\n                    del self._ttl[key]\n\n    def get(self, key: str, default: Any = None) -> Any:\n        \"\"\"Get the value associated with a key.\n\n        Args:\n            key: The key for the data.\n            default: The value to return if the key is not found or has expired.\n\n        Returns:\n            The cached value, or the default value if not found.\n        \"\"\"\n        with self._data_lock:\n            if key in self._ttl and time.time() > self._ttl[key]:\n                del self._cache_data[key]\n                del self._ttl[key]\n                return default\n            return self._cache_data.get(key, default)\n\n    def delete(self, key: str) -> bool:\n        \"\"\"Delete a specific key-value pair from the cache.\n\n        Args:\n            key: The key to delete.\n\n        Returns:\n            True if the key was found and deleted, False otherwise.\n        \"\"\"\n        with self._data_lock:\n            if key in self._cache_data:\n                del self._cache_data[key]\n                if key in self._ttl:\n                    del self._ttl[key]\n                return True\n            return False\n\n    def clear(self) -> bool:\n        \"\"\"Remove all data.\n\n        Returns:\n            True once the data has been cleared.\n        \"\"\"\n        with self._data_lock:\n            self._cache_data.clear()\n            self._ttl.clear()\n            return True"
  },
  {
    "path": "core/a2a/utils/push_notification_auth.py",
"content": "from jwcrypto import jwk\nimport uuid\nfrom starlette.responses import JSONResponse\nfrom starlette.requests import Request\nfrom typing import Any, Optional\n\nimport jwt\nimport time\nimport json\nimport hashlib\nimport httpx\nimport logging\n\nfrom jwt import PyJWK, PyJWKClient\n\nlogger = logging.getLogger(__name__)\nAUTH_HEADER_PREFIX = 'Bearer '\n\nclass PushNotificationAuth:\n    def _calculate_request_body_sha256(self, data: dict[str, Any]):\n        \"\"\"Calculates the SHA256 hash of a request body.\n\n        This logic needs to be the same for both the agent that signs the payload and the client that verifies it.\n        \"\"\"\n        body_str = json.dumps(\n            data,\n            ensure_ascii=False,\n            allow_nan=False,\n            indent=None,\n            separators=(\",\", \":\"),\n        )\n        return hashlib.sha256(body_str.encode()).hexdigest()\n\nclass PushNotificationSenderAuth(PushNotificationAuth):\n    def __init__(self):\n        self.public_keys = []\n        self.private_key_jwk: Optional[PyJWK] = None\n\n    @staticmethod\n    async def verify_push_notification_url(url: str) -> bool:\n        async with httpx.AsyncClient(timeout=10) as client:\n            try:\n                validation_token = str(uuid.uuid4())\n                response = await client.get(\n                    url,\n                    params={\"validationToken\": validation_token}\n                )\n                response.raise_for_status()\n                is_verified = response.text == validation_token\n\n                logger.info(f\"Verified push-notification URL: {url} => {is_verified}\")\n                return is_verified\n            except Exception as e:\n                logger.warning(f\"Error while verifying push-notification URL {url}: {e}\")\n\n        return False\n\n    def generate_jwk(self):\n        key = jwk.JWK.generate(kty='RSA', size=2048, kid=str(uuid.uuid4()), use=\"sig\")\n        
self.public_keys.append(key.export_public(as_dict=True))\n        self.private_key_jwk = PyJWK.from_json(key.export_private())\n\n    def handle_jwks_endpoint(self, _request: Request):\n        \"\"\"Allow clients to fetch the public keys.\"\"\"\n        return JSONResponse({\n            \"keys\": self.public_keys\n        })\n\n    def _generate_jwt(self, data: dict[str, Any]):\n        \"\"\"Generate a JWT by signing both the SHA-256 digest of the request payload and the token generation time.\n\n        The payload is signed with the private key, which lets the client verify its integrity.\n        Including iat guards against replay attacks.\n        \"\"\"\n        iat = int(time.time())\n\n        return jwt.encode(\n            {\"iat\": iat, \"request_body_sha256\": self._calculate_request_body_sha256(data)},\n            key=self.private_key_jwk,\n            headers={\"kid\": self.private_key_jwk.key_id},\n            algorithm=\"RS256\"\n        )\n\n    async def send_push_notification(self, url: str, data: dict[str, Any]):\n        jwt_token = self._generate_jwt(data)\n        headers = {'Authorization': f\"Bearer {jwt_token}\"}\n        async with httpx.AsyncClient(timeout=10) as client:\n            try:\n                response = await client.post(\n                    url,\n                    json=data,\n                    headers=headers\n                )\n                response.raise_for_status()\n                logger.info(f\"Push-notification sent for URL: {url}\")\n            except Exception as e:\n                logger.warning(f\"Error during sending push-notification for URL {url}: {e}\")\n\nclass PushNotificationReceiverAuth(PushNotificationAuth):\n    def __init__(self):\n        self.public_keys_jwks = []\n        self.jwks_client = None\n\n    async def load_jwks(self, jwks_url: str):\n        self.jwks_client = PyJWKClient(jwks_url)\n\n    async def verify_push_notification(self, 
request: Request) -> bool:\n        auth_header = request.headers.get(\"Authorization\")\n        if not auth_header or not auth_header.startswith(AUTH_HEADER_PREFIX):\n            logger.warning(\"Invalid authorization header\")\n            return False\n\n        token = auth_header[len(AUTH_HEADER_PREFIX):]\n        signing_key = self.jwks_client.get_signing_key_from_jwt(token)\n\n        decode_token = jwt.decode(\n            token,\n            signing_key,\n            options={\"require\": [\"iat\", \"request_body_sha256\"]},\n            algorithms=[\"RS256\"],\n        )\n\n        actual_body_sha256 = self._calculate_request_body_sha256(await request.json())\n        if actual_body_sha256 != decode_token[\"request_body_sha256\"]:\n            # Payload digest does not match the digest in the signed token.\n            raise ValueError(\"Invalid request body\")\n\n        if time.time() - decode_token[\"iat\"] > 60 * 5:\n            # Reject push-notifications older than 5 minutes to prevent replay attacks.\n            raise ValueError(\"Token is expired\")\n\n        return True"
  },
  {
    "path": "core/agents/__init__.py",
    "content": "# Agents module initialization"
  },
  {
    "path": "core/agents/base/base_agent.py",
    "content": "import json\nfrom typing import List, Dict, Any, Optional, Union, Callable, Sequence, TypeVar, cast\nfrom langchain_core.language_models.chat_models import BaseChatModel\nfrom langchain_core.language_models import LanguageModelLike\nfrom langchain_core.messages import BaseMessage, SystemMessage, HumanMessage, AIMessage, ToolMessage\nfrom langchain_core.tools import BaseTool\nfrom langchain_core.runnables import RunnableConfig\nfrom langgraph.graph import StateGraph\nfrom langgraph.types import Checkpointer\nfrom langgraph.graph.graph import CompiledGraph\nfrom langgraph.graph.state import CompiledStateGraph\nimport logging\ntry:\n    import tiktoken\n    TIKTOKEN_AVAILABLE = True\nexcept ImportError:\n    TIKTOKEN_AVAILABLE = False\n    print(\"Warning: Tiktoken not installed. Using naive token estimation.\")\n\nlogger = logging.getLogger(__name__)\nDEFAULT_MODEL_NAME = \"gpt-4o-mini\"\n\nStateSchema = TypeVar(\"StateSchema\", bound=Union[dict, Any])\n\nclass BaseAgent:\n    def __init__(\n        self,\n        name: str,\n        model: Union[BaseChatModel, LanguageModelLike],\n        tools: Optional[List[Union[BaseTool, Callable]]] = None,\n        prompt: Optional[Union[str, SystemMessage, Callable]] = None,\n        checkpointer: Optional[Checkpointer] = None,\n        max_context_messages: Optional[int] = None,  # Limit number of recent messages\n        max_context_tokens: Optional[int] = None,    # Limit total estimated tokens\n        model_name: Optional[str] = \"gpt-4o-mini\", # Optional, used for future token estimation improvements\n        description: str = \"No description provided.\"\n        \n    ):\n        if max_context_messages and max_context_tokens:\n            raise ValueError(\"Only one of max_context_messages or max_context_tokens should be set.\")\n        if name is None or name == \"LangGraph\":\n             raise ValueError(\"Agent name must be specified.\")\n\n        self.name = name\n        self.model = model\n 
       self.tools = tools or []\n        self.base_prompt = prompt\n        self.checkpointer = checkpointer\n        self.max_context_messages = max_context_messages\n        self.max_context_tokens = max_context_tokens\n        self.model_name = model_name or getattr(model, \"model_name\", DEFAULT_MODEL_NAME)\n        self.description = description\n        \n        self._workflow: Optional[StateGraph] = None\n        self._compiled_agent: Optional[CompiledGraph] = None # Stores the final compiled graph\n\n        self._tokenizer = None\n        if TIKTOKEN_AVAILABLE:\n            try: self._tokenizer = tiktoken.encoding_for_model(self.model_name)\n            except KeyError:\n                try:\n                     self._tokenizer = tiktoken.get_encoding(\"cl100k_base\")\n                     # print(f\"Warning: Tiktoken encoding for model '{self.model_name}' not found. Using 'cl100k_base'.\")\n                except Exception as e: print(f\"Error getting tiktoken encoding 'cl100k_base': {e}.\")\n            except Exception as e: print(f\"Error initializing tiktoken for model '{self.model_name}': {e}.\")\n\n\n    def _estimate_tokens(self, message: BaseMessage) -> int:\n        content_to_encode = \"\"\n        if isinstance(message, (HumanMessage, SystemMessage, AIMessage)):\n            if isinstance(message.content, str): content_to_encode = message.content\n            elif isinstance(message.content, list):\n                 for block in message.content:\n                     if isinstance(block, dict) and block.get(\"type\") == \"text\": content_to_encode += block.get(\"text\", \"\") + \"\\n\"\n        elif isinstance(message, ToolMessage):\n             content_to_encode = message.content if isinstance(message.content, str) else json.dumps(message.content)\n        else: content_to_encode = str(message)\n        if self._tokenizer:\n            try: return len(self._tokenizer.encode(content_to_encode, disallowed_special=()))\n            except 
Exception: pass\n        return len(content_to_encode) // 2\n\n   \n    def _truncate_by_tokens(self, messages: Sequence[BaseMessage]) -> List[BaseMessage]:\n        if not self.max_context_tokens: return list(messages)\n        truncated_messages: List[BaseMessage] = []\n        total_tokens = 0\n        preserved_system_message: Optional[SystemMessage] = None\n        # Check if the first message is a SystemMessage, preserve it if so\n        # Note: This assumes only ONE leading SystemMessage should be preserved.\n        if messages and isinstance(messages[0], SystemMessage):\n            preserved_system_message = messages[0]\n            messages_to_truncate = messages[1:]\n            try: \n                system_tokens = self._estimate_tokens(preserved_system_message)\n                # Only count if it doesn't exceed limit by itself\n                if system_tokens <= self.max_context_tokens:\n                     total_tokens += system_tokens\n                else:\n                     print(f\"Warning: System message alone ({system_tokens} tokens) exceeds token limit ({self.max_context_tokens}). 
It might be truncated if context grows.\")\n                     # Don't add to total_tokens yet, let truncation logic handle it.\n                     preserved_system_message = None # Don't preserve if it's too big initially\n\n            except Exception: pass # Ignore errors estimating system message\n        else:\n            messages_to_truncate = messages\n\n        # Iterate backwards from the most recent message\n        for msg in reversed(messages_to_truncate):\n            try:\n                msg_tokens = self._estimate_tokens(msg)\n                # Check if adding this message exceeds the limit\n                if total_tokens + msg_tokens <= self.max_context_tokens:\n                    truncated_messages.append(msg)\n                    total_tokens += msg_tokens\n                else:\n                    print(f\"Context Token Limit ({self.max_context_tokens}) reached. Truncating older messages.\")\n                    break # Limit reached\n            except Exception as e:\n                print(f\"Warning: Failed to estimate tokens for message, skipping: {e}\")\n                continue\n\n        # Re-add the system message at the beginning if it was preserved.\n        # Its token count was already included in total_tokens above, so the\n        # budget check in the loop has accounted for it.\n        final_list = list(reversed(truncated_messages))\n        if preserved_system_message:\n            final_list.insert(0, preserved_system_message)\n\n        return final_list\n\n\n    def _truncate_messages(self, messages: Sequence[BaseMessage]) -> List[BaseMessage]:\n        \"\"\"Truncate the message history based on configuration (token limit takes precedence over message count).\"\"\"\n        if self.max_context_tokens is not None:\n            return self._truncate_by_tokens(messages)\n        elif self.max_context_messages is not None:\n            if messages and isinstance(messages[0], SystemMessage):\n                # Keep the system message plus the last N-1 messages\n                keep_count = self.max_context_messages - 1\n                if keep_count > 0 and len(messages) > 1:\n                    return [messages[0]] + list(messages[-keep_count:])\n                return [messages[0]]\n            else:\n                return list(messages[-self.max_context_messages:])\n        return list(messages)\n\n    def _get_state_value(self, state: StateSchema, key: str, default: Any = None) -> Any:\n         return state.get(key, default) if isinstance(state, dict) else getattr(state, key, default)\n    \n    def _format_tools_for_prompt(self, tools: List[Union[BaseTool, Callable]]) -> str:\n        \"\"\"Formats the tool list for inclusion in the prompt.\"\"\"\n        if not tools:\n            return \"No tools available for use.\"\n        # Use getattr to safely access name and description\n        return \"\\n\".join([\n            f\"- **{getattr(t, 'name', 'Unnamed Tool')}**: {getattr(t, 'description', 'No description available.')}\"\n            for t in tools\n        ])\n        \n    # --- build/compile/get_agent ---\n    def build(self) -> Optional[StateGraph]:\n        \"\"\"Build the agent's LangGraph workflow definition. Subclasses should implement this.\"\"\"\n        raise NotImplementedError(\"Subclasses must implement build() or override compile() directly.\")\n\n    def compile(self) -> CompiledGraph:\n        \"\"\"Compile the agent workflow.\"\"\"\n        if self._compiled_agent is not None:\n            return self._compiled_agent\n\n        # Try build() to obtain a StateGraph\n        workflow = self.build()\n\n        if workflow is None or not isinstance(workflow, StateGraph):\n             # If build() does not return a StateGraph (e.g. ReactAgent),\n             # the subclass must override compile() to handle compilation itself\n             raise ValueError(\n                 f\"Agent '{self.name}': build() did not return a valid StateGraph, \"\n                 \"and compile() was not overridden to handle direct compilation.\"\n             )\n\n        print(f\"Compiling graph for agent: {self.name}\")\n        try:\n            # Compile the StateGraph and store the result\n            self._compiled_agent = workflow.compile(\n                 checkpointer=self.checkpointer,\n                 debug=getattr(self, 'debug', False) # pass the debug flag\n            )\n            print(f\"Graph compiled successfully for agent: {self.name}\")\n            return self._compiled_agent\n        except Exception as e:\n             print(f\"!!! Error compiling graph for agent {self.name}: {e}\")\n             import traceback\n             traceback.print_exc()\n             raise e\n\n    def get_agent(self) -> CompiledGraph:\n         \"\"\"Return the compiled core graph, compiling it first if needed.\"\"\"\n         if self._compiled_agent is None:\n              print(f\"Agent '{self.name}' not compiled yet. Compiling now.\")\n              self.compile()\n         if self._compiled_agent is None:\n              raise RuntimeError(f\"Failed to get compiled agent for '{self.name}'.\")\n         return self._compiled_agent\n        \n    # --- invoke/ainvoke: standard entry points that call the compiled graph ---\n    def invoke(self, state: Dict[str, Any], config: Optional[RunnableConfig] = None) -> Dict[str, Any]:\n        \"\"\"Synchronously invoke the compiled agent graph.\"\"\"\n        try:\n            compiled_agent = self.get_agent() # Get (or compile) the graph\n            print(f\"--- Invoking Agent: {self.name} ---\")\n            # Call the compiled graph directly; preprocessing is handled by the graph's prompt callable (for ReactAgent)\n            # or by the Supervisor node logic (for a custom Supervisor)\n            result = compiled_agent.invoke(state, config=config)\n            print(f\"--- Agent Invocation Complete: {self.name} ---\")\n            return cast(Dict[str, Any], result) # assume a dict is returned\n        except Exception as e:\n            print(f\"!!! Error during {self.name} agent invocation: {e}\")\n            import traceback\n            traceback.print_exc()\n            # Return the state with an error marker (may be the input state)\n            state[\"error\"] = f\"Agent invocation failed: {e}\"\n            return state\n\n    async def ainvoke(self, state: Dict[str, Any], config: Optional[RunnableConfig] = None) -> Dict[str, Any]:\n        \"\"\"Asynchronously invoke the compiled agent graph.\"\"\"\n        try:\n            compiled_agent = self.get_agent() # Get (or compile) the graph\n            print(f\"--- Invoking Agent Async: {self.name} ---\")\n            # Call the compiled graph directly\n            result = await compiled_agent.ainvoke(state, config=config)\n            print(f\"--- Agent Invocation Complete Async: {self.name} ---\")\n            return cast(Dict[str, Any], result) # assume a dict is returned\n        except Exception as e:\n            print(f\"!!! Error during {self.name} agent async invocation: {e}\")\n            import traceback\n            traceback.print_exc()\n            state[\"error\"] = f\"Agent async invocation failed: {e}\"\n            return state\n\n    def run(self, state: Dict[str, Any]) -> Dict[str, Any]:\n        \"\"\"Run the agent workflow synchronously.\n\n        Args:\n            state: The input state for the workflow\n\n        Returns:\n            The output state from the workflow\n        \"\"\"\n        return self.invoke(state)\n    \n    async def arun(self, state: Dict[str, Any]) -> Dict[str, Any]:\n        \"\"\"Run the agent workflow asynchronously.\n        Args:\n            state: The input state for the workflow\n        Returns:\n            The output state from the workflow\n        \"\"\"\n        return await self.ainvoke(state)\n\n    def reset(self):\n        \"\"\"Reset the compiled state, forcing recompilation on next use.\"\"\"\n        print(f\"Resetting compiled graph for agent '{self.name}'. Will recompile on next use.\")\n        self._compiled_agent = None\n        self._workflow = None\n\n    def add_tools(self, tools: List[Union[BaseTool, Callable]]) -> None:\n        \"\"\"Add tools to the agent's tool list.\"\"\"\n        print(f\"Warning: Adding tools to {self.name} post-initialization. Agent needs recompilation.\")\n        self.tools.extend(tools)\n        self.reset()\n"
  },
  {
    "path": "core/agents/base/create_react_agent_wrapper.py",
    "content": "import logging\nfrom typing import Awaitable, Callable, Dict, Optional\nfrom langgraph.utils.runnable import RunnableCallable\nfrom langchain_core.runnables.config import RunnableConfig\n\nlogger = logging.getLogger(__name__)\n\nclass CreateReactAgentWrapper(RunnableCallable):\n    def __init__(\n        self, \n        agent, \n        name: str = \"agent\", \n        before_invoke: Optional[Callable[[dict], dict]] = None,\n        before_ainvoke: Optional[Callable[[dict], Awaitable[dict]]] = None,\n        after_invoke: Optional[Callable[[dict, dict], None]] = None,\n        after_ainvoke: Optional[Callable[[dict, dict], Awaitable[None]]] = None\n    ):\n        \"\"\"\n        :param agent: The underlying compiled graph or runnable\n        :param name: Unique name for this wrapper (avoid duplicates)\n        :param before_invoke: A sync callback that modifies the state before the wrapped agent call\n        :param before_ainvoke: An async callback that modifies the state before the wrapped agent call\n        :param after_invoke: A sync callback that inspects (state, output) after the wrapped call\n        :param after_ainvoke: An async callback that inspects (state, output) after the wrapped call\n        \"\"\"\n        self._agent = agent\n        self.name = name or getattr(agent, \"name\", \"agent\")\n        self.before_invoke = before_invoke\n        self.after_invoke = after_invoke\n        self.before_ainvoke = before_ainvoke\n        self.after_ainvoke = after_ainvoke\n\n        # We define the sync/async \"call\" functions for RunnableCallable\n        def call(state: Dict, config: Optional[RunnableConfig] = None, **kwargs) -> Dict:\n            logger.info(f\"[{self.name}] (sync) call() - started. 
State keys: {list(state.keys())}\")\n            # Or use print if you prefer\n            # print(f\"🟢 [Sync] Invoking wrapper: {self.name}, state keys: {list(state.keys())}\")\n\n            # before_invoke callback\n            if self.before_invoke:\n                state = self.before_invoke(state)\n\n            # Call the underlying agent\n            output = self._agent.invoke(state, config, **kwargs)\n\n            # after_invoke callback\n            if self.after_invoke:\n                self.after_invoke(state, output)\n\n            logger.info(f\"[{self.name}] (sync) call() - finished. Output keys: {list(output.keys())}\")\n            return output\n\n        async def acall(state: Dict, config: Optional[RunnableConfig] = None, **kwargs) -> Dict:\n            logger.info(f\"[{self.name}] (async) acall() - started. State keys: {list(state.keys())}\")\n            # print(f\"🟢 [Async] Invoking wrapper: {self.name}, state keys: {list(state.keys())}\")\n\n            if self.before_ainvoke:\n                state = await self.before_ainvoke(state)\n\n            output = await self._agent.ainvoke(state, config, **kwargs)\n\n            if self.after_ainvoke:\n                await self.after_ainvoke(state, output)\n\n            logger.info(f\"[{self.name}] (async) acall() - finished. Output keys: {list(output.keys())}\")\n            return output\n\n        # Pass these to RunnableCallable\n        super().__init__(call, acall, name=self.name)"
  },
  {
    "path": "core/agents/base/react_agent.py",
    "content": "from typing import Any, Callable, Dict, List, Optional, Type, Union, Literal, Sequence\n\nfrom langchain_core.language_models import LanguageModelLike, LanguageModelInput\nfrom langchain_core.tools import BaseTool\nfrom langgraph.graph import StateGraph\nfrom langgraph.graph.graph import CompiledGraph\nfrom langgraph.types import Checkpointer\nfrom langgraph.store.base import BaseStore\nfrom langchain_core.messages import BaseMessage, SystemMessage # 导入 SystemMessage\nfrom langgraph.prebuilt import create_react_agent\nfrom langgraph.prebuilt.chat_agent_executor import (\n    AgentState,\n    StateSchemaType,\n    StructuredResponseSchema,\n)\nfrom core.agents.base.base_agent import BaseAgent\nimport logging\nlogger = logging.getLogger(__name__)\n\nclass ReactAgent(BaseAgent):\n    \"\"\"ReAct Agent class for reasoning and acting with tools.\n    \n    This class provides a high-level interface for creating a ReAct agent workflow\n    that can perform multi-step reasoning and tool calling.\n    \"\"\"\n    \n    def __init__(\n        self,\n        model: LanguageModelLike,\n        tools: Optional[List[Union[BaseTool, Callable]]] = None,\n        prompt: Optional[str] = None,\n        response_format: Optional[\n            Union[StructuredResponseSchema, tuple[str, StructuredResponseSchema]]\n        ] = None,\n        state_schema: StateSchemaType = AgentState,\n        config_schema: Type[Any] = None,\n        checkpointer: Optional[Checkpointer] = None,\n        store: Optional[BaseStore] = None,\n        interrupt_before: Optional[List[str]] = None,\n        interrupt_after: Optional[List[str]] = None,\n        debug: bool = False,\n        version: Literal[\"v1\", \"v2\"] = \"v1\",\n        name: str = \"react_agent\",\n        description: str = \"ReAct agent for reasoning and acting with tools.\",\n        max_context_messages: Optional[int] = None,\n        max_context_tokens: Optional[int] = None,\n        model_name: Optional[str] = 
\"gpt-4o-mini\",\n    ):\n        \"\"\"Initialize a ReAct agent.\n        \n        Args:\n            model: Language model to use for the agent\n            tools: Optional list of tools available to the agent\n            prompt: Optional prompt to use for the agent\n            response_format: Optional schema for structured output\n            state_schema: State schema to use for the agent graph\n            config_schema: Optional schema for configuration\n            interrupt_before: Optional list of nodes to interrupt before execution\n            interrupt_after: Optional list of nodes to interrupt after execution\n            debug: Whether to enable debug mode\n            version: Version of the ReAct agent (\"v1\" or \"v2\")\n            name: Name of the agent\n            max_context_messages: Optional limit on number of recent messages\n            max_context_tokens: Optional limit on total estimated tokens\n            model_name: Optional model name for token estimation\n        \"\"\"\n        # Call BaseAgent's __init__ to initialize parent class attributes\n        super().__init__(\n            name=name,\n            model=model,\n            tools=tools or [],\n            prompt=prompt,\n            description=description,\n            checkpointer=checkpointer,\n            max_context_messages=max_context_messages,\n            max_context_tokens=max_context_tokens,\n            model_name=model_name\n        )\n        \n        # Initialize ReactAgent specific attributes\n        self.response_format = response_format\n        self.react_state_schema = state_schema\n        self.react_config_schema = config_schema\n        self.react_store = store\n        self.react_interrupt_before = interrupt_before\n        self.react_interrupt_after = interrupt_after\n        self.react_debug = debug\n        self.react_version = version\n\n    def _prepare_llm_input(self, state: Dict[str, Any]) -> LanguageModelInput:\n        \"\"\"\n        
Prepare the LLM input: truncate the message history and prepend the base System Prompt (if any).\n        Passed as the prompt callable to create_react_agent.\n        \"\"\"\n        # 1. Get messages from the state (BaseAgent helper)\n        messages = self._get_state_value(state, \"messages\", [])\n        \n        # 2. Truncate messages (BaseAgent helper)\n        # Note: only the list sent to the LLM is truncated; the full history in the checkpointer is unaffected\n        truncated_messages = self._truncate_messages(messages)\n        \n        # 3. Prepend the base System Prompt (if any)\n        final_messages: List[BaseMessage] = []\n        if self.base_prompt:\n            if isinstance(self.base_prompt, str):\n                final_messages.append(SystemMessage(content=self.base_prompt))\n            elif isinstance(self.base_prompt, SystemMessage):\n                 final_messages.append(self.base_prompt)\n            # If self.base_prompt is some other Runnable or Callable it would need special handling,\n            # but create_react_agent's prompt is usually a str or SystemMessage\n            \n        final_messages.extend(truncated_messages)\n        \n        # Return the final message list to the LLM\n        return final_messages\n    \n    def build(self) -> Optional[StateGraph]:\n        \"\"\"For ReactAgent the core graph is created directly by create_react_agent, so no build step is needed.\"\"\"\n        print(f\"Note: ReactAgent '{self.name}' uses create_react_agent in compile(). Build returns None.\")\n        self._workflow = None\n        return None\n    \n    def compile(self) -> CompiledGraph:\n        \"\"\"Build and compile the core ReAct workflow with create_react_agent, storing it in _compiled_agent.\"\"\"\n        if self._compiled_agent is not None:\n            return self._compiled_agent\n\n        print(f\"Compiling core ReAct agent for: {self.name} using create_react_agent\")\n        try:\n            # Create the compiled graph with create_react_agent,\n            # passing self._prepare_llm_input as the prompt callable\n            compiled_agent = create_react_agent(\n                model=self.model,\n                tools=self.tools,\n                prompt=self._prepare_llm_input, # <-- key point: pass the preparation function\n                state_schema=self.react_state_schema,\n                config_schema=self.react_config_schema,\n                checkpointer=self.checkpointer,\n                store=self.react_store,\n                interrupt_before=self.react_interrupt_before,\n                interrupt_after=self.react_interrupt_after,\n                debug=self.react_debug,\n                version=self.react_version,\n                name=self.name,\n            )\n            # Store the compiled graph\n            self._compiled_agent = compiled_agent\n            print(f\"Core ReAct graph compiled successfully for agent: {self.name}\")\n            return self._compiled_agent\n        except Exception as e:\n             print(f\"!!! Error compiling graph for agent {self.name} using create_react_agent: {e}\")\n             import traceback\n             traceback.print_exc()\n             self._compiled_agent = None\n             raise e"
  },
  {
    "path": "core/agents/react_based_supervisor/__init__.py",
    "content": "# Import create_supervisor from the current package\nfrom .supervisor import create_supervisor\n\n__all__ = [\"create_supervisor\"]\n"
  },
  {
    "path": "core/agents/react_based_supervisor/agent_name.py",
    "content": "import re\nfrom typing import Literal\n\nfrom langchain_core.language_models import LanguageModelLike\nfrom langchain_core.messages import AIMessage, BaseMessage\nfrom langchain_core.runnables import RunnableLambda\n\nNAME_PATTERN = re.compile(r\"<name>(.*?)</name>\", re.DOTALL)\nCONTENT_PATTERN = re.compile(r\"<content>(.*?)</content>\", re.DOTALL)\n\nAgentNameMode = Literal[\"inline\"]\n\n\ndef _is_content_blocks_content(content: list[dict] | str) -> bool:\n    return (\n        isinstance(content, list)\n        and len(content) > 0\n        and isinstance(content[0], dict)\n        and \"type\" in content[0]\n    )\n\n\ndef add_inline_agent_name(message: BaseMessage) -> BaseMessage:\n    \"\"\"Add name and content XML tags to the message content.\n\n    Examples:\n\n        >>> add_inline_agent_name(AIMessage(content=\"Hello\", name=\"assistant\"))\n        AIMessage(content=\"<name>assistant</name><content>Hello</content>\", name=\"assistant\")\n\n        >>> add_inline_agent_name(AIMessage(content=[{\"type\": \"text\", \"text\": \"Hello\"}], name=\"assistant\"))\n        AIMessage(content=[{\"type\": \"text\", \"text\": \"<name>assistant</name><content>Hello</content>\"}], name=\"assistant\")\n    \"\"\"\n    if not isinstance(message, AIMessage) or not message.name:\n        return message\n\n    formatted_message = message.model_copy()\n    if _is_content_blocks_content(formatted_message.content):\n        text_blocks = [block for block in message.content if block[\"type\"] == \"text\"]\n        non_text_blocks = [block for block in message.content if block[\"type\"] != \"text\"]\n        content = text_blocks[0][\"text\"] if text_blocks else \"\"\n        formatted_content = f\"<name>{message.name}</name><content>{content}</content>\"\n        formatted_message.content = non_text_blocks + [{\"type\": \"text\", \"text\": formatted_content}]\n    else:\n        formatted_message.content = (\n            
f\"<name>{message.name}</name><content>{formatted_message.content}</content>\"\n        )\n    return formatted_message\n\n\ndef remove_inline_agent_name(message: BaseMessage) -> BaseMessage:\n    \"\"\"Remove explicit name and content XML tags from the AI message content.\n\n    Examples:\n\n        >>> remove_inline_agent_name(AIMessage(content=\"<name>assistant</name><content>Hello</content>\", name=\"assistant\"))\n        AIMessage(content=\"Hello\", name=\"assistant\")\n\n        >>> remove_inline_agent_name(AIMessage(content=[{\"type\": \"text\", \"text\": \"<name>assistant</name><content>Hello</content>\"}], name=\"assistant\"))\n        AIMessage(content=[{\"type\": \"text\", \"text\": \"Hello\"}], name=\"assistant\")\n    \"\"\"\n    if not isinstance(message, AIMessage) or not message.name:\n        return message\n\n    is_content_blocks_content = _is_content_blocks_content(message.content)\n    if is_content_blocks_content:\n        text_blocks = [block for block in message.content if block[\"type\"] == \"text\"]\n        if not text_blocks:\n            return message\n\n        non_text_blocks = [block for block in message.content if block[\"type\"] != \"text\"]\n        content = text_blocks[0][\"text\"]\n    else:\n        content = message.content\n\n    name_match: re.Match | None = NAME_PATTERN.search(content)\n    content_match: re.Match | None = CONTENT_PATTERN.search(content)\n    if not name_match or not content_match:\n        return message\n\n    if name_match.group(1) != message.name:\n        return message\n\n    parsed_content = content_match.group(1)\n    parsed_message = message.model_copy()\n    if is_content_blocks_content:\n        content_blocks = non_text_blocks\n        if parsed_content:\n            content_blocks.append({\"type\": \"text\", \"text\": parsed_content})\n\n        parsed_message.content = content_blocks\n    else:\n        parsed_message.content = parsed_content\n    return parsed_message\n\n\ndef 
with_agent_name(\n    model: LanguageModelLike,\n    agent_name_mode: AgentNameMode,\n) -> LanguageModelLike:\n    \"\"\"Attach formatted agent names to the messages passed to and from a language model.\n\n    This is useful for making a message history with multiple agents more coherent.\n\n    NOTE: agent name is consumed from the message.name field.\n        If you're using an agent built with create_react_agent, name is automatically set.\n        If you're building a custom agent, make sure to set the name on the AI message returned by the LLM.\n\n    Args:\n        model: Language model to add agent name formatting to.\n        agent_name_mode: Use to specify how to expose the agent name to the LLM.\n            - \"inline\": Add the agent name directly into the content field of the AI message using XML-style tags.\n                Example: \"How can I help you\" -> \"<name>agent_name</name><content>How can I help you?</content>\".\n    \"\"\"\n    if agent_name_mode == \"inline\":\n        process_input_message = add_inline_agent_name\n        process_output_message = remove_inline_agent_name\n    else:\n        raise ValueError(\n            f\"Invalid agent name mode: {agent_name_mode}. Needs to be one of: {AgentNameMode.__args__}\"\n        )\n\n    def process_input_messages(messages: list[BaseMessage]) -> list[BaseMessage]:\n        return [process_input_message(message) for message in messages]\n\n    model = (\n        process_input_messages\n        | model\n        | RunnableLambda(process_output_message, name=\"process_output_message\")\n    )\n    return model\n"
  },
  {
    "path": "core/agents/react_based_supervisor/handoff.py",
    "content": "import re\nimport uuid\n\nfrom langchain_core.messages import AIMessage, ToolCall, ToolMessage\nfrom langchain_core.tools import BaseTool, InjectedToolCallId, tool\nfrom langgraph.prebuilt import InjectedState\nfrom langgraph.types import Command\nfrom typing_extensions import Annotated\n\nWHITESPACE_RE = re.compile(r\"\\s+\")\n\n\ndef _normalize_agent_name(agent_name: str) -> str:\n    \"\"\"Normalize an agent name to be used inside the tool name.\"\"\"\n    return WHITESPACE_RE.sub(\"_\", agent_name.strip()).lower()\n\n\ndef create_handoff_tool(*, agent_name: str) -> BaseTool:\n    \"\"\"Create a tool that can handoff control to the requested agent.\n\n    Args:\n        agent_name: The name of the agent to handoff control to, i.e.\n            the name of the agent node in the multi-agent graph.\n            Agent names should be simple, clear and unique, preferably in snake_case,\n            although you are only limited to the names accepted by LangGraph\n            nodes as well as the tool names accepted by LLM providers\n            (the tool name will look like this: `transfer_to_<agent_name>`).\n    \"\"\"\n    tool_name = f\"transfer_to_{_normalize_agent_name(agent_name)}\"\n\n    @tool(tool_name)\n    def handoff_to_agent(\n        state: Annotated[dict, InjectedState],\n        tool_call_id: Annotated[str, InjectedToolCallId],\n    ):\n        \"\"\"Ask another agent for help.\"\"\"\n        tool_message = ToolMessage(\n            content=f\"Successfully transferred to {agent_name}\",\n            name=tool_name,\n            tool_call_id=tool_call_id,\n        )\n        return Command(\n            goto=agent_name,\n            graph=Command.PARENT,\n            update={\"messages\": state[\"messages\"] + [tool_message]},\n        )\n\n    return handoff_to_agent\n\n\ndef create_handoff_back_messages(\n    agent_name: str, supervisor_name: str\n) -> tuple[AIMessage, ToolMessage]:\n    \"\"\"Create a pair of (AIMessage, ToolMessage) 
to add to the message history when returning control to the supervisor.\"\"\"\n    tool_call_id = str(uuid.uuid4())\n    tool_name = f\"transfer_back_to_{_normalize_agent_name(supervisor_name)}\"\n    tool_calls = [ToolCall(name=tool_name, args={}, id=tool_call_id)]\n    return (\n        AIMessage(\n            content=f\"Transferring back to {supervisor_name}\",\n            tool_calls=tool_calls,\n            name=agent_name,\n        ),\n        ToolMessage(\n            content=f\"Successfully transferred back to {supervisor_name}\",\n            name=tool_name,\n            tool_call_id=tool_call_id,\n        ),\n    )\n"
  },
  {
    "path": "core/agents/react_based_supervisor/planning_handler.py",
    "content": "import uuid\nimport datetime\nfrom typing import List, Dict, Optional\n\nclass PlanningStateHandler:\n    \"\"\"\n    Manages a project plan.\n    A plan is a dict with:\n      - title (str)\n      - description (str)\n      - status (str): \"planning\", \"in_progress\", or \"completed\"\n      - tasks (list): each task is a dict with:\n           id, description, status, agent, notes, evaluation\n      - current_task_id (str or None)\n      - created_at (str)\n      - updated_at (str)\n    \"\"\"\n\n    @staticmethod\n    def _now() -> str:\n        return datetime.datetime.now().isoformat()\n\n    @staticmethod\n    def _gen_id() -> str:\n        return str(uuid.uuid4())\n\n    @staticmethod\n    def create_plan(title: str, description: str) -> Dict:\n        now = PlanningStateHandler._now()\n        return {\n            \"title\": title,\n            \"description\": description,\n            \"status\": \"planning\",  # initial status\n            \"tasks\": [],\n            \"current_task_id\": None,\n            \"created_at\": now,\n            \"updated_at\": now\n        }\n\n    @staticmethod\n    def create_task(description: str,\n                    status: str = \"pending\",\n                    agent: str = \"\",\n                    notes: str = \"\",\n                    evaluation: str = \"\") -> Dict:\n        return {\n            \"id\": PlanningStateHandler._gen_id(),\n            \"description\": description.strip(),\n            \"status\": status.strip() if status else \"pending\",\n            \"agent\": agent.strip(),\n            \"notes\": notes.strip(),\n            \"evaluation\": evaluation.strip()\n        }\n\n    @staticmethod\n    def add_tasks(plan: Dict, tasks_data: List[Dict]) -> Dict:\n        for tinfo in tasks_data:\n            desc = tinfo.get(\"description\", \"Untitled Task\")\n            status = tinfo.get(\"status\", \"pending\")\n            agent = tinfo.get(\"agent\", \"\")\n            notes = 
tinfo.get(\"notes\", \"\")\n            eval_ = tinfo.get(\"evaluation\", \"\")\n            task = PlanningStateHandler.create_task(desc, status, agent, notes, eval_)\n            plan[\"tasks\"].append(task)\n        plan[\"updated_at\"] = PlanningStateHandler._now()\n        return plan\n\n    @staticmethod\n    def update_task(plan: Dict,\n                    by_id: Optional[str] = None,\n                    new_desc: Optional[str] = None,\n                    new_status: Optional[str] = None,\n                    new_agent: Optional[str] = None,\n                    new_notes: Optional[str] = None,\n                    new_evaluation: Optional[str] = None) -> Dict:\n        \"\"\"\n        Update a task identified by by_id.\n        \"\"\"\n        if not by_id:\n            raise ValueError(\"Must provide 'by_id' to update a task.\")\n        task = next((t for t in plan[\"tasks\"] if t[\"id\"] == by_id), None)\n        if not task:\n            raise ValueError(\"No matching task found with the given ID.\")\n\n        if new_desc is not None:\n            task[\"description\"] = new_desc.strip()\n        if new_status is not None:\n            task[\"status\"] = new_status.strip()\n        if new_agent is not None:\n            task[\"agent\"] = new_agent.strip()\n        if new_notes is not None:\n            task[\"notes\"] = new_notes.strip()\n        if new_evaluation is not None:\n            task[\"evaluation\"] = new_evaluation.strip()\n\n        plan[\"updated_at\"] = PlanningStateHandler._now()\n\n        # Determine overall plan status\n        if any(t[\"status\"] == \"in_progress\" for t in plan[\"tasks\"]):\n            plan[\"status\"] = \"in_progress\"\n        if all(t[\"status\"] == \"completed\" for t in plan[\"tasks\"]) and plan[\"tasks\"]:\n            plan[\"status\"] = \"completed\"\n\n        return plan\n\n    @staticmethod\n    def set_current_task(plan: Dict, task_id: str) -> Dict:\n        found = any(t[\"id\"] == task_id for t in 
plan[\"tasks\"])\n        if not found:\n            raise ValueError(\"Task ID not found in plan.\")\n        plan[\"current_task_id\"] = task_id\n        plan[\"updated_at\"] = PlanningStateHandler._now()\n        return plan\n\n    @staticmethod\n    def finish_plan(plan: Dict) -> Dict:\n        \"\"\"\n        Forcefully mark the plan as completed.\n        \"\"\"\n        plan[\"status\"] = \"completed\"\n        plan[\"updated_at\"] = PlanningStateHandler._now()\n        return plan"
  },
  {
    "path": "core/agents/react_based_supervisor/simple_planning_tool.py",
    "content": "import json\nfrom typing import Dict, List, Optional\nfrom langchain_core.tools import BaseTool\nfrom core.agents.react_based_supervisor.planning_handler import PlanningStateHandler\n\nclass SimplePlanningTool(BaseTool):\n    \"\"\"\n    A tool that manages a single project plan in memory.\n    It supports creating, viewing, adding tasks, updating tasks, setting the current task,\n    and finishing the plan. All operations return a JSON string.\n    \"\"\"\n    name: str = \"planning\"\n    description: str = (\"Manage a project plan with actions to create, view, add tasks, update tasks, \"\n                        \"set current task, and finish the plan. All data is stored in JSON.\")\n\n    def __init__(self):\n        super().__init__()\n        self._plan: Optional[Dict] = None\n\n    def _run(self, action: str, **kwargs) -> str:\n        try:\n            if action == \"create_plan\":\n                return self._handle_create_plan(**kwargs)\n            elif action == \"view_plan\":\n                return self._handle_view_plan()\n            elif action == \"add_tasks\":\n                return self._handle_add_tasks(**kwargs)\n            elif action == \"update_task\":\n                return self._handle_update_task(**kwargs)\n            elif action == \"set_current_task\":\n                return self._handle_set_current_task(**kwargs)\n            elif action == \"finish_plan\":\n                return self._handle_finish_plan()\n            else:\n                return self._json_error(f\"Unknown action: {action}\")\n        except Exception as e:\n            return self._json_error(str(e))\n\n    async def _arun(self, action: str, **kwargs) -> str:\n        return self._run(action, **kwargs)\n\n    def _handle_create_plan(self, **kwargs) -> str:\n        title = kwargs.get(\"title\", \"Untitled Plan\")\n        description = kwargs.get(\"description\", \"\")\n        tasks_data = kwargs.get(\"tasks\", [])\n        new_plan = 
PlanningStateHandler.create_plan(title, description)\n        PlanningStateHandler.add_tasks(new_plan, tasks_data)\n        self._plan = new_plan\n        return self._json_ok(self._plan)\n\n    def _handle_view_plan(self) -> str:\n        if not self._plan:\n            self._plan = PlanningStateHandler.create_plan(\"Untitled\", \"\")\n        return self._json_ok(self._plan)\n\n    def _handle_add_tasks(self, **kwargs) -> str:\n        if not self._plan:\n            self._plan = PlanningStateHandler.create_plan(\"Untitled\", \"\")\n        tasks_data = kwargs.get(\"tasks\", [])\n        PlanningStateHandler.add_tasks(self._plan, tasks_data)\n        return self._json_ok(self._plan)\n\n    def _handle_update_task(self, **kwargs) -> str:\n        if not self._plan:\n            raise ValueError(\"No plan exists. Please create a plan first.\")\n        # Use 'by_id' instead of 'task_id'\n        by_id = kwargs.get(\"by_id\")\n        new_desc = kwargs.get(\"description\")\n        new_status = kwargs.get(\"status\")\n        new_agent = kwargs.get(\"agent\")\n        new_notes = kwargs.get(\"notes\")\n        new_evaluation = kwargs.get(\"evaluation\")\n        updated = PlanningStateHandler.update_task(\n            self._plan,\n            by_id=by_id,\n            new_desc=new_desc,\n            new_status=new_status,\n            new_agent=new_agent,\n            new_notes=new_notes,\n            new_evaluation=new_evaluation\n        )\n        self._plan = updated\n        return self._json_ok(self._plan)\n\n    def _handle_set_current_task(self, **kwargs) -> str:\n        if not self._plan:\n            raise ValueError(\"No plan available to set current task.\")\n        tid = kwargs.get(\"task_id\")\n        if not tid:\n            raise ValueError(\"Must provide 'task_id' for set_current_task.\")\n        PlanningStateHandler.set_current_task(self._plan, tid)\n        return self._json_ok(self._plan)\n\n    def _handle_finish_plan(self) -> str:\n        
if not self._plan:\n            raise ValueError(\"No plan exists to finish.\")\n        finished_plan = PlanningStateHandler.finish_plan(self._plan)\n        self._plan = finished_plan\n        return self._json_ok(finished_plan)\n\n    def _json_ok(self, plan_data: Dict) -> str:\n        return json.dumps({\"ok\": True, \"plan\": plan_data}, ensure_ascii=False, indent=2)\n\n    def _json_error(self, message: str) -> str:\n        return json.dumps({\"ok\": False, \"error\": message}, ensure_ascii=False, indent=2)"
  },
  {
    "path": "core/agents/react_based_supervisor/state_schema.py",
    "content": "from typing import Dict, List, Optional, Any, Literal, TypedDict, Union\nfrom langchain_core.messages import BaseMessage\nfrom langgraph.prebuilt.chat_agent_executor import AgentState\n\n# Plan status type\nPlanningStatus = Literal[\"not_started\", \"planning\", \"executing\", \"completed\", \"failed\"]\n\n# Task status type\nTaskStatus = Literal[\"pending\", \"in_progress\", \"completed\", \"failed\"]\n\n# Task item\nclass Task(TypedDict, total=False):\n    \"\"\"Task item definition.\n    \n    Represents a single task within a plan, including its description,\n    status, assigned agent, and related metadata.\n    \"\"\"\n    id: str  # Unique task identifier\n    description: str  # Task description\n    status: TaskStatus  # Task status\n    agent: Optional[str]  # Name of the assigned agent\n    created_at: str  # Creation timestamp\n    updated_at: str  # Last-update timestamp\n    completed_at: Optional[str]  # Completion timestamp\n    dependencies: Optional[List[str]]  # IDs of tasks this task depends on\n    notes: Optional[str]  # Task notes\n\n# Plan\nclass Plan(TypedDict, total=False):\n    \"\"\"Plan definition.\n    \n    Represents a complete plan, including its status and task list.\n    \"\"\"\n    status: PlanningStatus  # Plan status\n    tasks: List[Task]  # Task list\n    current_task_id: Optional[str]  # ID of the task currently being executed\n    created_at: str  # Creation timestamp\n    updated_at: str  # Last-update timestamp\n    completed_at: Optional[str]  # Completion timestamp\n    title: Optional[str]  # Plan title\n    description: Optional[str]  # Plan description\n\n# Extend AgentState with planning support\nclass PlanningAgentState(AgentState):\n    \"\"\"Agent state with planning support.\n    \n    Extends AgentState with a `plan` field that stores the plan.\n    TypedDict fields cannot declare defaults, so `plan` has no `= None`;\n    the key is simply absent until a plan is created.\n    \"\"\"\n    plan: Optional[Plan]"
  },
  {
    "path": "core/agents/react_based_supervisor/supervisor.py",
    "content": "import inspect\nfrom typing import Any, Callable, Literal, Optional, Type, Union\n\nfrom langchain_core.language_models import BaseChatModel, LanguageModelLike\nfrom langchain_core.tools import BaseTool\nfrom langgraph.graph import END, START, StateGraph\nfrom langgraph.prebuilt.chat_agent_executor import (\n    create_react_agent,\n    AgentState,\n    Prompt,\n    StateSchemaType,\n    StructuredResponseSchema,\n)\nfrom langgraph.pregel import Pregel\nfrom langgraph.utils.runnable import RunnableCallable\nfrom core.agents.base.react_agent import ReactAgent\nfrom core.agents.react_based_supervisor.agent_name import AgentNameMode, with_agent_name\nfrom core.agents.react_based_supervisor.handoff import (\n    create_handoff_back_messages,\n    create_handoff_tool,\n)\n\nOutputMode = Literal[\"full_history\", \"last_message\"]\n\"\"\"Mode for adding agent outputs to the message history in the multi-agent workflow\n\n- `full_history`: add the entire agent message history\n- `last_message`: add only the last message\n\"\"\"\n\n\nMODELS_NO_PARALLEL_TOOL_CALLS = {\"o3-mini\"}\n\n\ndef _supports_disable_parallel_tool_calls(model: LanguageModelLike) -> bool:\n    if not isinstance(model, BaseChatModel):\n        return False\n\n    if hasattr(model, \"model_name\") and model.model_name in MODELS_NO_PARALLEL_TOOL_CALLS:\n        return False\n\n    if not hasattr(model, \"bind_tools\"):\n        return False\n\n    if \"parallel_tool_calls\" not in inspect.signature(model.bind_tools).parameters:\n        return False\n\n    return True\n\n\ndef _make_call_agent(\n    agent: Pregel,\n    output_mode: OutputMode,\n    add_handoff_back_messages: bool,\n    supervisor_name: str,\n) -> Callable[[dict], dict] | RunnableCallable:\n    if output_mode not in OutputMode.__args__:\n        raise ValueError(\n            f\"Invalid agent output mode: {output_mode}. 
Needs to be one of {OutputMode.__args__}\"\n        )\n\n    def _process_output(output: dict) -> dict:\n        messages = output[\"messages\"]\n        if output_mode == \"full_history\":\n            pass\n        elif output_mode == \"last_message\":\n            messages = messages[-1:]\n        else:\n            raise ValueError(\n                f\"Invalid agent output mode: {output_mode}. \"\n                f\"Needs to be one of {OutputMode.__args__}\"\n            )\n\n        if add_handoff_back_messages:\n            messages.extend(create_handoff_back_messages(agent.name, supervisor_name))\n\n        return {\n            **output,\n            \"messages\": messages,\n        }\n\n    def call_agent(state: dict) -> dict:\n        #print(f\"🟡 [Sync invoke] Handoff to agent '{agent.name}' with state keys: {list(state.keys())}\")\n        output = agent.invoke(state)\n        #print(f\"✅ [Sync invoke] Agent '{agent.name}' completed.\")\n        return _process_output(output)\n\n    async def acall_agent(state: dict) -> dict:\n        #print(f\"🟡 [Async invoke] Handoff to agent '{agent.name}' with state keys: {list(state.keys())}\")\n        output = await agent.ainvoke(state)\n        #print(f\"✅ [Async invoke] Agent '{agent.name}' completed.\")\n        return _process_output(output)\n\n    return RunnableCallable(call_agent, acall_agent)\n\n\ndef create_supervisor(\n    agents: list[Pregel],\n    *,\n    model: LanguageModelLike,\n    tools: list[BaseTool | Callable] | None = None,\n    prompt: Prompt | None = None,\n    response_format: Optional[\n        Union[StructuredResponseSchema, tuple[str, StructuredResponseSchema]]\n    ] = None,\n    state_schema: StateSchemaType = AgentState,\n    config_schema: Type[Any] | None = None,\n    output_mode: OutputMode = \"last_message\",\n    add_handoff_back_messages: bool = True,\n    supervisor_name: str = \"supervisor\",\n    include_agent_name: AgentNameMode | None = None,\n) -> StateGraph:\n    
\"\"\"Create a multi-agent supervisor.\n\n    Args:\n        agents: List of agents to manage\n        model: Language model to use for the supervisor\n        tools: Tools to use for the supervisor\n        prompt: Optional prompt to use for the supervisor. Can be one of:\n            - str: This is converted to a SystemMessage and added to the beginning of the list of messages in state[\"messages\"].\n            - SystemMessage: this is added to the beginning of the list of messages in state[\"messages\"].\n            - Callable: This function should take in full graph state and the output is then passed to the language model.\n            - Runnable: This runnable should take in full graph state and the output is then passed to the language model.\n        response_format: An optional schema for the final supervisor output.\n\n            If provided, output will be formatted to match the given schema and returned in the 'structured_response' state key.\n            If not provided, `structured_response` will not be present in the output state.\n            Can be passed in as:\n\n                - an OpenAI function/tool schema,\n                - a JSON Schema,\n                - a TypedDict class,\n                - or a Pydantic class.\n                - a tuple (prompt, schema), where schema is one of the above.\n                    The prompt will be used together with the model that is being used to generate the structured response.\n\n            !!! Important\n                `response_format` requires the model to support `.with_structured_output`\n\n            !!! 
Note\n                `response_format` requires `structured_response` key in your state schema.\n                You can use the prebuilt `langgraph.prebuilt.chat_agent_executor.AgentStateWithStructuredResponse`.\n        state_schema: State schema to use for the supervisor graph.\n        config_schema: An optional schema for configuration.\n            Use this to expose configurable parameters via supervisor.config_specs.\n        output_mode: Mode for adding managed agents' outputs to the message history in the multi-agent workflow.\n            Can be one of:\n            - `full_history`: add the entire agent message history\n            - `last_message`: add only the last message (default)\n        add_handoff_back_messages: Whether to add a pair of (AIMessage, ToolMessage) to the message history\n            when returning control to the supervisor to indicate that a handoff has occurred.\n        supervisor_name: Name of the supervisor node.\n        include_agent_name: Use to specify how to expose the agent name to the underlying supervisor LLM.\n\n            - None: Relies on the LLM provider using the name attribute on the AI message. Currently, only OpenAI supports this.\n            - \"inline\": Add the agent name directly into the content field of the AI message using XML-style tags.\n                Example: \"How can I help you\" -> \"<name>agent_name</name><content>How can I help you?</content>\"\n    \"\"\"\n    agent_names = set()\n    for agent in agents:\n        if agent.name is None or agent.name == \"LangGraph\":\n            raise ValueError(\n                \"Please specify a name when you create your agent, either via `create_react_agent(..., name=agent_name)` \"\n                \"or via `graph.compile(name=name)`.\"\n            )\n\n        if agent.name in agent_names:\n            raise ValueError(\n                f\"Agent with name '{agent.name}' already exists. 
Agent names must be unique.\"\n            )\n\n        agent_names.add(agent.name)\n\n    handoff_tools = [create_handoff_tool(agent_name=agent.name) for agent in agents]\n    all_tools = (tools or []) + handoff_tools\n\n    if _supports_disable_parallel_tool_calls(model):\n        model = model.bind_tools(all_tools, parallel_tool_calls=False)\n    else:\n        model = model.bind_tools(all_tools)\n\n    if include_agent_name:\n        model = with_agent_name(model, include_agent_name)\n                \n    supervisor = create_react_agent(\n        name=supervisor_name,\n        model=model,\n        tools=all_tools,\n        prompt=prompt,\n        state_schema=state_schema,\n        response_format=response_format,\n        debug=False,\n    )\n    # Build the multi-agent supervisor graph using the langgraph StateGraph setup\n    builder = StateGraph(state_schema, config_schema=config_schema)\n    builder.add_node(supervisor, destinations=tuple(agent_names) + (END,))\n    builder.add_edge(START, supervisor.name)\n    for agent in agents:\n        # If agent is a \"ReactAgent\" or similar\n        if hasattr(agent, \"get_agent\") and callable(agent.get_agent):\n            agent = agent.get_agent()  # retrieve the compiled subgraph\n       \n        builder.add_node(\n            agent.name,\n            _make_call_agent(\n                agent,\n                output_mode,\n                add_handoff_back_messages,\n                supervisor_name,\n            ),\n        )\n        builder.add_edge(agent.name, supervisor.name)\n\n    return builder\n"
  },
  {
    "path": "core/agents/react_supervisor_agent.py",
    "content": "from typing import Any, Callable, Dict, List, Optional, Union\nimport re\n\nfrom langchain_core.language_models import LanguageModelLike\nfrom langchain_core.tools import BaseTool\nfrom langgraph.graph import StateGraph\nfrom langgraph.graph.state import CompiledStateGraph\nfrom langgraph.types import Checkpointer\nfrom langgraph.prebuilt.chat_agent_executor import (\n    AgentState,\n    StateSchemaType,\n)\nfrom langgraph.utils.runnable import RunnableCallable\nfrom core.agents.react_based_supervisor import create_supervisor\nfrom core.agents.react_based_supervisor.simple_planning_tool import SimplePlanningTool\nfrom core.agents.base.base_agent import BaseAgent\nfrom core.agents.react_based_supervisor.state_schema import PlanningAgentState\nimport logging\n\nlogger = logging.getLogger(__name__)\n\nclass SupervisorAgent(BaseAgent):\n    \"\"\"Supervisor class for managing multiple agents with planning capabilities.\n    \n    This class provides a high-level interface for creating a supervisor workflow\n    that can manage and coordinate multiple agents. It also includes planning capabilities\n    to create and manage a plan for complex tasks using a state-driven approach.\n    \n    The planning functionality is implemented using PlanningStateHandler and PlanningTool,\n    which provide a more structured and flexible way to manage tasks compared to the\n    previous TodolistTool approach.\n    \"\"\"\n    _PROMPT_TEMPLATE = \"\"\"You are a Supervisor Agent. 
Your job is to analyze user requests and coordinate multiple agents to complete tasks.\n\n## Task Approach Methodology\n\n### Understanding Requirements\n- Analyzing user requests to identify core needs\n- Asking clarifying questions when requirements are ambiguous\n- Breaking down complex requests into manageable components\n- Identifying potential challenges before beginning work\n\n### Coordination\n- Identifying appropriate agents for each task\n- Delegating tasks to specialized agents\n- Tracking progress and ensuring task completion\n- Synthesizing information from multiple agents\n\nRemember: Effective coordination is essential for successful task completion. Take time to understand the request and delegate appropriately.\n {tools}\n\"\"\"\n\n    _PLANNING_PROMPT_TEMPLATE = \"\"\"You are a Supervisor agent. Your role is to analyze user requests, break them down into actionable tasks, and coordinate specialized agents (e.g., research_expert, coder_expert, reporter_expert) to complete them.\n\n# Working with Complex Requests\n1. FIRST, carefully analyze the user's request and break it down into clear, actionable tasks\n2. Identify which agent is best suited for each part of the task\n3. Use the handoff tools to delegate tasks to appropriate agents ONE AT A TIME\n4. WAIT for each agent to COMPLETELY FINISH their assigned task before proceeding\n5. Review the output from each agent before delegating the next task\n6. Maintain a sequential workflow - never delegate multiple tasks simultaneously\n7. Synthesize the results and provide a coherent response to the user\n8. Provide a final summary when all tasks are done\n\"\"\"\n\n    _PLANNING_TOOL_TEMPLATE = \"\"\"\n# Planning Tool Instructions\nYou have access to a \"planning\" tool that uses JSON for all operations. Do NOT include any \"state\" field in your calls. Use the following actions exactly as defined:\n\n1. 
\"create_plan\": Create a new plan.\n   - Required fields:\n     - title (string)\n     - description (string)\n     - tasks (list of task objects). Each task object must include:\n         \"description\": string,\n         \"status\": \"pending\" (all tasks must have \"status\": \"pending\" initially),\n         \"agent\": string (empty if not assigned),\n         \"notes\": string (empty if none),\n         \"evaluation\": string (empty if none)\n   - Example:\n   {\n     \"action\": \"create_plan\",\n     \"title\": \"Python Scraper for Tech News\",\n     \"description\": \"Build a Python scraper to fetch the latest tech news and save it as CSV\",\n     \"tasks\": [\n       {\"description\": \"Research Python scraping libraries\", \"status\": \"pending\", \"agent\": \"\", \"notes\": \"\", \"evaluation\": \"\"},\n       {\"description\": \"Implement the scraper\", \"status\": \"pending\", \"agent\": \"\", \"notes\": \"\", \"evaluation\": \"\"},\n       {\"description\": \"Test the code\", \"status\": \"pending\", \"agent\": \"\", \"notes\": \"\", \"evaluation\": \"\"}\n     ]\n   }\n\n2. \"view_plan\": Retrieve the current plan.\n   - Example:\n   {\n     \"action\": \"view_plan\"\n   }\n\n3. \"add_tasks\": Add additional tasks to the plan.\n   - Required:\n     - tasks: list of task objects (same format as above)\n   - Example:\n   {\n     \"action\": \"add_tasks\",\n     \"tasks\": [\n       {\"description\": \"Write documentation\", \"status\": \"pending\", \"agent\": \"\", \"notes\": \"\", \"evaluation\": \"\"}\n     ]\n   }\n\n4. \"update_task\": Update an existing task.\n   - Identify the task by \"by_id\" (the task's unique ID from the plan).\n   - You may update any of: \"description\", \"status\", \"agent\", \"notes\", \"evaluation\".\n   - Example:\n   {\n     \"action\": \"update_task\",\n     \"by_id\": \"TASK-UUID\",\n     \"status\": \"completed\",\n     \"evaluation\": \"The scraper works perfectly.\"\n   }\n\n5. 
\"set_current_task\": Set the current task by its ID.\n   - Example:\n   {\n     \"action\": \"set_current_task\",\n     \"task_id\": \"TASK-UUID\"\n   }\n\n6. \"finish_plan\": Mark the entire plan as completed.\n   - Example:\n   {\n     \"action\": \"finish_plan\"\n   }\n\nImportant:\n- Always produce valid JSON for your tool calls.\n- Continuously update and monitor the plan until every task's status is \"completed\" before delivering your final answer.\n- If the plan is not fully completed, do not stop; keep updating the plan with appropriate calls.\n\"\"\"\n    def __init__(\n        self,\n        agents: List[BaseAgent],\n        model: LanguageModelLike,\n        tools: Optional[List[Union[BaseTool, Callable]]] = None,\n        prompt: Optional[str] = None,\n        state_schema: StateSchemaType = AgentState,\n        supervisor_name: str = \"supervisor\",\n        checkpointer: Optional[Checkpointer] = None,\n        output_mode: str = \"last_message\",  # \"full_history\" or \"last_message\"\n        enable_planning: bool = True,\n    ):\n        \"\"\"Initialize a supervisor.\n        \n        Args:\n            agents: List of agents to manage\n            model: Language model to use for the supervisor\n            tools: Optional list of tools available to the supervisor\n            prompt: Optional prompt override (currently unused; the built-in templates are applied)\n            state_schema: State schema to use for the supervisor graph\n            supervisor_name: Name of the supervisor node\n            checkpointer: Optional checkpointer to use for the supervisor\n            output_mode: Mode for adding agent outputs to the message history\n                (\"full_history\" or \"last_message\")\n            enable_planning: Whether to enable planning capabilities\n        \"\"\"\n        # Planning-related attributes\n        self._enable_planning = enable_planning\n        \n        # When planning is enabled, upgrade the state schema to PlanningAgentState\n        if self._enable_planning and state_schema == AgentState:\n            state_schema = PlanningAgentState\n            \n        # Store agent-specific attributes before super().__init__\n        self.agents = agents\n        self.output_mode = output_mode\n        self.supervisor_name = supervisor_name\n        self.state_schema = state_schema\n        self.checkpointer = checkpointer\n        # Copy the caller's tool list so it is never mutated in place\n        self.tools = list(tools) if tools else []\n        self._workflow = None\n        \n        # Assemble the supervisor prompt\n        _final_prompt = (\n            self._PLANNING_PROMPT_TEMPLATE + \"\\n\\n\" + self._PLANNING_TOOL_TEMPLATE\n            if self._enable_planning\n            else self._PROMPT_TEMPLATE\n        )\n        \n        # When planning is enabled, register the planning tool\n        if self._enable_planning:\n            self.tools.append(SimplePlanningTool())\n        \n        # Initialize the BaseAgent parent class\n        super().__init__(\n            name=supervisor_name,\n            model=model,\n            tools=self.tools,\n            checkpointer=checkpointer,\n            prompt=_final_prompt,\n        )\n    \n    def build(self) -> StateGraph:\n        \"\"\"Build the supervisor workflow.\n        \n        Returns:\n            The built StateGraph\n        \"\"\"\n        \n        if self._workflow is not None:\n            return self._workflow\n            \n        self._workflow = create_supervisor(\n            agents=self.agents,\n            model=self.model,\n            tools=self.tools,\n            prompt=self.base_prompt,\n            state_schema=self.state_schema,\n            supervisor_name=self.supervisor_name,\n            output_mode=self.output_mode,\n        )\n        \n        return self._workflow"
  },
  {
    "path": "core/agents/sb_supervisor_agent.py",
    "content": "# core/agents/sb_supervisor_agent.py\nfrom typing import Callable, List, Optional, Union, cast, Literal\nfrom langchain_core.language_models import LanguageModelLike\nfrom langchain_core.tools import BaseTool\nfrom langgraph.graph import StateGraph\nfrom langgraph.types import Checkpointer\n\n# Internal imports\nfrom core.agents.base.base_agent import BaseAgent\nfrom core.agents.state_based_supervisor.state_schema import PlanningAgentState, StateSchemaType  # import PlanningAgentState\n# Import the refactored create_supervisor function\nfrom core.agents.state_based_supervisor.supervisor_graph import create_supervisor\nfrom core.agents.state_based_supervisor.agent_name import AgentNameMode\n\nimport logging\nlogger = logging.getLogger(__name__)\n\nclass SupervisorAgent(BaseAgent):\n    \"\"\"\n    Supervisor Agent (final version).\n    Coordinates sub-agents and manages planning using a state-driven approach.\n    invoke/ainvoke are inherited from BaseAgent and drive the complete workflow.\n    \"\"\"\n\n    def __init__(\n        self,\n        agents: List[BaseAgent],  # List of sub-agent instances\n        model: LanguageModelLike,  # LLM used by the supervisor\n        tools: Optional[List[Union[BaseTool, Callable]]] = None,  # Supervisor-specific tools\n        state_schema: StateSchemaType = PlanningAgentState,\n        supervisor_name: str = \"supervisor\",\n        checkpointer: Optional[Checkpointer] = None,\n        output_mode: str = \"last_message\",\n        # enable_planning: bool = True,  # No longer needed; planning is always enabled\n        include_agent_name: Optional[str] = \"inline\",\n        # BaseAgent parameters\n        max_context_messages: Optional[int] = None,\n        max_context_tokens: Optional[int] = None,\n        model_name: Optional[str] = None,\n    ):\n        \"\"\"Initialize the Supervisor Agent.\"\"\"\n        if state_schema != PlanningAgentState:\n            logger.warning(\"SupervisorAgent forces state_schema to PlanningAgentState.\")\n            state_schema = PlanningAgentState\n\n        self.sub_agents = agents\n        self.output_mode = output_mode\n        self.include_agent_name = cast(Optional[AgentNameMode], include_agent_name)\n\n        # Initialize the BaseAgent parent class\n        super().__init__(\n            name=supervisor_name,\n            model=model,\n            tools=tools or [],\n            checkpointer=checkpointer,\n            prompt=None,  # The core prompt is handled inside supervisor_node_logic\n            max_context_messages=max_context_messages,\n            max_context_tokens=max_context_tokens,\n            model_name=model_name,\n        )\n        # _workflow_definition and _executable_agent are managed by BaseAgent\n\n    def build(self) -> Optional[StateGraph]:\n        \"\"\"Build the Supervisor's LangGraph workflow graph definition.\"\"\"\n        # Call the refactored create_supervisor function to obtain the StateGraph\n        # definition, which contains the hand-written supervisor_node_logic.\n        if self._workflow:\n            return self._workflow\n\n        logger.info(f\"Building supervisor graph definition for '{self.name}'...\")\n        try:\n            graph_definition = create_supervisor(\n                model=self.model,\n                sub_agents=self.sub_agents,\n                state_schema=PlanningAgentState,  # enforced\n                tools=self.tools,\n                output_mode=cast(Literal[\"full_history\", \"last_message\"], self.output_mode),\n                supervisor_name=self.name,\n                include_agent_name=self.include_agent_name,\n            )\n            self._workflow = graph_definition  # store the graph definition\n            logger.info(f\"Supervisor graph definition built for '{self.name}'.\")\n            return self._workflow\n        except Exception:\n            logger.exception(f\"Error building supervisor graph definition '{self.name}'\")\n            self._workflow = None\n            raise\n\n    # compile() is inherited from BaseAgent.\n    # It calls build() above to obtain the StateGraph definition, compiles it,\n    # and creates the final _executable_agent including the preprocessing steps.\n\n    # invoke, ainvoke, get_agent, reset are inherited from BaseAgent"
  },
  {
    "path": "core/agents/state_based_supervisor/__init__.py",
    "content": ""
  },
  {
    "path": "core/agents/state_based_supervisor/agent_name.py",
    "content": "import re\nfrom typing import Literal\n\nfrom langchain_core.language_models import LanguageModelLike\nfrom langchain_core.messages import AIMessage, BaseMessage\nfrom langchain_core.runnables import RunnableLambda\n\nNAME_PATTERN = re.compile(r\"<name>(.*?)</name>\", re.DOTALL)\nCONTENT_PATTERN = re.compile(r\"<content>(.*?)</content>\", re.DOTALL)\n\nAgentNameMode = Literal[\"inline\"]\n\n\ndef _is_content_blocks_content(content: list[dict] | str) -> bool:\n    return (\n        isinstance(content, list)\n        and len(content) > 0\n        and isinstance(content[0], dict)\n        and \"type\" in content[0]\n    )\n\n\ndef add_inline_agent_name(message: BaseMessage) -> BaseMessage:\n    \"\"\"Add name and content XML tags to the message content.\n\n    Examples:\n\n        >>> add_inline_agent_name(AIMessage(content=\"Hello\", name=\"assistant\"))\n        AIMessage(content=\"<name>assistant</name><content>Hello</content>\", name=\"assistant\")\n\n        >>> add_inline_agent_name(AIMessage(content=[{\"type\": \"text\", \"text\": \"Hello\"}], name=\"assistant\"))\n        AIMessage(content=[{\"type\": \"text\", \"text\": \"<name>assistant</name><content>Hello</content>\"}], name=\"assistant\")\n    \"\"\"\n    if not isinstance(message, AIMessage) or not message.name:\n        return message\n\n    formatted_message = message.model_copy()\n    if _is_content_blocks_content(formatted_message.content):\n        text_blocks = [block for block in message.content if block[\"type\"] == \"text\"]\n        non_text_blocks = [block for block in message.content if block[\"type\"] != \"text\"]\n        content = text_blocks[0][\"text\"] if text_blocks else \"\"\n        formatted_content = f\"<name>{message.name}</name><content>{content}</content>\"\n        formatted_message.content = non_text_blocks + [{\"type\": \"text\", \"text\": formatted_content}]\n    else:\n        formatted_message.content = (\n            
f\"<name>{message.name}</name><content>{formatted_message.content}</content>\"\n        )\n    return formatted_message\n\n\ndef remove_inline_agent_name(message: BaseMessage) -> BaseMessage:\n    \"\"\"Remove explicit name and content XML tags from the AI message content.\n\n    Examples:\n\n        >>> remove_inline_agent_name(AIMessage(content=\"<name>assistant</name><content>Hello</content>\", name=\"assistant\"))\n        AIMessage(content=\"Hello\", name=\"assistant\")\n\n        >>> remove_inline_agent_name(AIMessage(content=[{\"type\": \"text\", \"text\": \"<name>assistant</name><content>Hello</content>\"}], name=\"assistant\"))\n        AIMessage(content=[{\"type\": \"text\", \"text\": \"Hello\"}], name=\"assistant\")\n    \"\"\"\n    if not isinstance(message, AIMessage) or not message.name:\n        return message\n\n    is_content_blocks_content = _is_content_blocks_content(message.content)\n    if is_content_blocks_content:\n        text_blocks = [block for block in message.content if block[\"type\"] == \"text\"]\n        if not text_blocks:\n            return message\n\n        non_text_blocks = [block for block in message.content if block[\"type\"] != \"text\"]\n        content = text_blocks[0][\"text\"]\n    else:\n        content = message.content\n\n    name_match: re.Match | None = NAME_PATTERN.search(content)\n    content_match: re.Match | None = CONTENT_PATTERN.search(content)\n    if not name_match or not content_match:\n        return message\n\n    if name_match.group(1) != message.name:\n        return message\n\n    parsed_content = content_match.group(1)\n    parsed_message = message.model_copy()\n    if is_content_blocks_content:\n        content_blocks = non_text_blocks\n        if parsed_content:\n            content_blocks.append({\"type\": \"text\", \"text\": parsed_content})\n\n        parsed_message.content = content_blocks\n    else:\n        parsed_message.content = parsed_content\n    return parsed_message\n\n\ndef 
with_agent_name(\n    model: LanguageModelLike,\n    agent_name_mode: AgentNameMode,\n) -> LanguageModelLike:\n    \"\"\"Attach formatted agent names to the messages passed to and from a language model.\n\n    This is useful for making a message history with multiple agents more coherent.\n\n    NOTE: agent name is consumed from the message.name field.\n        If you're using an agent built with create_react_agent, name is automatically set.\n        If you're building a custom agent, make sure to set the name on the AI message returned by the LLM.\n\n    Args:\n        model: Language model to add agent name formatting to.\n        agent_name_mode: Use to specify how to expose the agent name to the LLM.\n            - \"inline\": Add the agent name directly into the content field of the AI message using XML-style tags.\n                Example: \"How can I help you\" -> \"<name>agent_name</name><content>How can I help you?</content>\".\n    \"\"\"\n    if agent_name_mode == \"inline\":\n        process_input_message = add_inline_agent_name\n        process_output_message = remove_inline_agent_name\n    else:\n        raise ValueError(\n            f\"Invalid agent name mode: {agent_name_mode}. Needs to be one of: {AgentNameMode.__args__}\"\n        )\n\n    def process_input_messages(messages: list[BaseMessage]) -> list[BaseMessage]:\n        return [process_input_message(message) for message in messages]\n\n    model = (\n        process_input_messages\n        | model\n        | RunnableLambda(process_output_message, name=\"process_output_message\")\n    )\n    return model\n"
  },
  {
    "path": "core/agents/state_based_supervisor/evaluate_result_node.py",
    "content": "# reason_graph/evaluate_result_node.py\n\nimport json\nimport time\nimport copy\nimport traceback\nimport anyio \nfrom typing import Dict, Any, List, Optional, Union\nfrom langchain_core.messages import BaseMessage, AIMessage, ToolMessage\nfrom langchain_core.runnables import RunnableConfig\n\n# 内部导入 (确保路径正确)\ntry:\n    from .state_schema import PlanningAgentState, TaskStatus, Plan, Task\n    from .planning_handler import PlanningStateHandler\nexcept ImportError as e:\n    print(f\"Error importing modules in evaluate_result_node.py: {e}\")\n    # Fallbacks\n    class PlanningAgentState(Dict): pass; \n    class Plan(Dict): pass; \n    class Task(Dict): pass\n    TaskStatus = str \n    class PlanningStateHandler: # Dummy\n        @staticmethod \n        def update_task(plan, by_id, **kwargs): return plan\n        @staticmethod\n        def set_current_task(plan, task_id): return plan\n        @staticmethod\n        def get_task(plan, task_id): return None\n        @staticmethod\n        def update_plan_status(plan): return plan\n\n\nasync def evaluate_result_node_logic(state: PlanningAgentState, config: Optional[RunnableConfig] = None) -> Dict[str, Any]:\n    \"\"\"\n    评估子 Agent 返回结果并更新计划状态的节点逻辑 (异步, 优化评估逻辑)。\n    \"\"\"\n    print(f\"--- Entering Evaluate Result Node ---\")\n    messages: List[BaseMessage] = state.get('messages', [])\n    plan: Optional[Plan] = state.get('plan')\n    last_message = messages[-1] if messages else None\n    error_message: Optional[str] = None\n    plan_updated: bool = False\n    updated_plan: Optional[Plan] = copy.deepcopy(plan) if plan else None \n\n    if not updated_plan:\n        print(\"Evaluate Result Node: No plan found in state. 
Skipping.\")\n        return {} \n\n    current_task_id = updated_plan.get(\"current_task_id\")\n    if not current_task_id:\n        # Fallback logic for finding current task (不变)\n        print(\"Warning: Evaluate Result Node - No current_task_id found in plan...\")\n        in_progress_tasks = [t for t in updated_plan.get('tasks', []) if t.get('status') == 'in_progress']\n        if len(in_progress_tasks) == 1: current_task_id = in_progress_tasks[0].get('id'); print(f\"  Fallback: Found task {current_task_id}\")\n        else: error_message = \"Evaluation failed: Cannot determine finished task.\"; print(f\"ERROR: {error_message}\"); return {\"plan\": updated_plan, \"error\": error_message, \"messages\": []}\n\n    agent_result_content: Optional[str] = None\n    agent_name: Optional[str] = None\n    if isinstance(last_message, AIMessage): \n        agent_result_content = str(last_message.content) if last_message.content is not None else \"\" # Ensure string\n        agent_name = last_message.name or \"SubAgent\"\n        print(f\"  Evaluating result from: {agent_name} for task ID: {current_task_id}\")\n    else:\n        agent_result_content = f\"Error: Expected AIMessage result, got {type(last_message).__name__}.\"\n        agent_name = \"System/Error\"\n        print(f\"Warning: Last message not AIMessage. Assuming task failed for {current_task_id}.\")\n\n\n    # --- 优化的评估逻辑 ---\n    new_status: TaskStatus = \"completed\" # 默认成功\n    evaluation_notes = f\"Result received from {agent_name}.\"\n    \n    # 1. 检查是否为空内容 (或只有空白符)\n    if agent_result_content is None or not agent_result_content.strip():\n        new_status = \"failed\"\n        evaluation_notes = f\"Task failed: Agent {agent_name} returned empty content.\"\n        print(f\"  Task {current_task_id} evaluated as FAILED (Empty Result).\")\n    # 2. 
检查是否以明确的错误标识开头 (需要工具配合)\n    #    假设工具出错时会在返回字符串前加上 \"Error: \" 或 \"Execution Failed: \"\n    elif agent_result_content.strip().startswith((\"Error:\", \"Execution Failed:\", \"Tool Error:\")):\n        new_status = \"failed\"\n        evaluation_notes = f\"Task failed: Agent {agent_name} reported an error: {agent_result_content[:150]}...\"\n        print(f\"  Task {current_task_id} evaluated as FAILED (Explicit Error Signal).\")\n    # 3. (可选) 添加其他特定检查，例如检查是否只是\"我不明白\"之类的回复\n    elif len(agent_result_content) < 50 and any(kw in agent_result_content.lower() for kw in [\"don't know\", \"cannot fulfill\", \"无法回答\", \"不明白\"]):\n         new_status = \"failed\" # 或 \"pending_review\" ? 暂时设为 failed\n         evaluation_notes = f\"Task likely failed: Agent {agent_name} indicated inability to fulfill request.\"\n         print(f\"  Task {current_task_id} evaluated as FAILED (Agent Indicated Inability).\")\n    else:\n        # 如果以上都不是，则认为是成功\n        new_status = \"completed\"\n        print(f\"  Task {current_task_id} evaluated as COMPLETED.\")\n    # --- 评估逻辑结束 ---\n\n\n    # --- 更新 Plan 状态 (逻辑不变) ---\n    try:\n        update_kwargs = {\n            \"new_status\": new_status, \n            \"new_evaluation\": evaluation_notes,\n            \"new_notes\": agent_result_content[:1000] + \"...\" if agent_result_content and len(agent_result_content) > 1000 else agent_result_content \n        }\n        print(f\"  Updating task {current_task_id} with: {{'status': '{new_status}', ...}}\")\n        \n        if updated_plan and PlanningStateHandler.get_task(updated_plan, current_task_id):\n             updated_plan = PlanningStateHandler.update_task(updated_plan, by_id=current_task_id, **update_kwargs)\n             updated_plan = PlanningStateHandler.set_current_task(updated_plan, None) \n             updated_plan = PlanningStateHandler.update_plan_status(updated_plan)\n             print(f\"  Plan status after evaluation update: {updated_plan.get('status')}\")\n             
plan_updated = True\n        else:\n             raise ValueError(f\"Task ID '{current_task_id}' not found or plan invalid before update.\")\n\n    except ValueError as ve: error_message = f\"Error updating plan: {ve}\"; print(f\"ERROR: {error_message}\"); traceback.print_exc()\n    except Exception as e: error_message = f\"Unexpected error updating plan: {e}\"; print(f\"ERROR: {error_message}\"); traceback.print_exc()\n\n    # --- 准备返回字典 (逻辑不变) ---\n    updates: Dict[str, Any] = {}\n    if updated_plan is not None: updates[\"plan\"] = updated_plan \n    elif plan is not None: updates[\"plan\"] = plan \n    \n    # 记录本节点错误，或清除旧错误\n    current_state_error = state.get(\"error\") \n    if error_message: updates[\"error\"] = error_message \n    elif current_state_error: updates[\"error\"] = None \n\n    updates[\"messages\"] = [] # Evaluator 不添加消息\n\n    print(f\"--- Exiting Evaluate Result Node. Plan updated: {plan_updated} ---\")\n    return updates\n\n# --- 同步包装器 (保持不变) ---\ndef evaluate_result_node_logic_sync(state: PlanningAgentState, config: Optional[RunnableConfig] = None) -> Dict[str, Any]:\n    \"\"\"evaluate_result_node_logic 的同步包装器\"\"\"\n    print(f\"--- Entering Evaluate Result Node (Sync Wrapper) ---\")\n    try:\n        import anyio \n        return anyio.run(evaluate_result_node_logic, state, config) # type: ignore\n    except Exception as e:\n        print(f\"Error running evaluate_result_node_logic synchronously: {e}\")\n        traceback.print_exc()\n        return {\"error\": f\"Evaluate Result sync execution failed: {e}\", \"plan\": state.get(\"plan\"), \"messages\": []}"
  },
  {
    "path": "core/agents/state_based_supervisor/handoff.py",
    "content": "# reason_graph/handoff.py\n# (Paste the code user provided for handoff.py here)\nimport re\nimport uuid\nfrom typing import List, Tuple # Import Tuple\n\nfrom langchain_core.messages import AIMessage, ToolCall, ToolMessage, BaseMessage # Import BaseMessage\nfrom langchain_core.tools import BaseTool, InjectedToolCallId, tool\nfrom langgraph.prebuilt import InjectedState\nfrom langgraph.types import Command\nfrom typing_extensions import Annotated\n\nWHITESPACE_RE = re.compile(r\"\\s+\")\n\ndef _normalize_agent_name(agent_name: str) -> str:\n    \"\"\"Normalize an agent name to be used inside the tool name.\"\"\"\n    if not agent_name: return \"unknown_agent\"\n    return WHITESPACE_RE.sub(\"_\", agent_name.strip()).lower()\n\n# Note: The original code uses @tool decorator which requires function arguments.\n# To inject state, the decorated function needs the Annotated state argument.\n# Let's define the function first and then apply the decorator, or use functools.partial.\n# Using the function approach first for clarity.\n\ndef _handoff_to_agent_implementation(\n    state: Annotated[dict, InjectedState], # Inject state here\n    tool_call_id: Annotated[str, InjectedToolCallId], # Inject tool_call_id\n    target_agent_name: str, # Pass the target agent name\n    tool_name: str # Pass the specific tool name for the ToolMessage\n) -> Command:\n    \"\"\"Ask another agent for help. This is the core logic.\"\"\"\n    # Create the ToolMessage confirming the handoff BEFORE generating the Command\n    \"\"\"Handoff 核心逻辑，添加日志\"\"\"\n    print(f\"\\n--- DEBUG: Entering _handoff_to_agent_implementation ---\")\n    print(f\"  - Target Agent: {target_agent_name}\")\n    print(f\"  - Tool Name: {tool_name}\")\n    print(f\"  - Tool Call ID: {tool_call_id}\")\n    # print(f\"  - Current State Keys: {list(state.keys())}\") # 可选：打印状态键\n    tool_message = ToolMessage(\n        content=f\"Okay, handing off to {target_agent_name}. 
The current state and task context have been passed.\",\n        name=tool_name,\n        tool_call_id=tool_call_id,\n    )\n    print(f\"  - Created ToolMessage: ID={tool_message.tool_call_id}, Name={tool_message.name}\")\n    # The Command tells LangGraph to route to the target agent node\n    # It also includes the ToolMessage in the state update for the next step\n    command_obj = Command(\n        goto=target_agent_name,\n        # graph=Command.PARENT, # PARENT is default, usually not needed unless nested graphs\n        update={\"messages\": [tool_message]}, # Return only the NEW message to be added\n    )\n    print(f\"  - Created Command: goto='{command_obj.goto}', update contains {len(command_obj.update.get('messages',[]))} message(s)\")\n    print(f\"--- DEBUG: Exiting _handoff_to_agent_implementation ---\")\n    return command_obj\n\ndef create_handoff_tool(*, agent_name: str) -> BaseTool:\n    \"\"\"Create a tool that can handoff control to the requested agent.\"\"\"\n    if not agent_name:\n         raise ValueError(\"agent_name cannot be empty for create_handoff_tool\")\n\n    normalized_name = _normalize_agent_name(agent_name)\n    tool_name = f\"transfer_to_{normalized_name}\"\n\n    # Use functools.partial to fix the target_agent_name and tool_name arguments\n    import functools\n    specific_handoff_logic = functools.partial(\n        _handoff_to_agent_implementation,\n        target_agent_name=agent_name,\n        tool_name=tool_name\n    )\n\n    # Decorate the partial function\n    # The arguments 'state' and 'tool_call_id' will be automatically injected by LangGraph\n    # when the tool is called due to the Annotations used in _handoff_to_agent_implementation\n    @tool(tool_name)\n    def handoff_tool_wrapper(\n         state: Annotated[dict, InjectedState],\n         tool_call_id: Annotated[str, InjectedToolCallId]\n     ) -> Command:\n        \"\"\"Dynamically generated tool description: Ask the '{agent_name}' agent for help with the 
current task or question.\"\"\"\n        # --- 添加 Debug 日志 ---\n        print(f\"\\n--- DEBUG: Handoff Tool '{tool_name}' (wrapper) CALLED ---\")\n        # ---\n        return specific_handoff_logic(state=state, tool_call_id=tool_call_id) # type: ignore\n\n    # Set a more descriptive description\n    handoff_tool_wrapper.description = f\"Use this tool to delegate the current task or ask a question to the '{agent_name}' agent. Pass the necessary context or instructions in your reasoning before calling this tool.\"\n\n    return handoff_tool_wrapper\n\n\ndef create_handoff_back_messages(\n    agent_name: str, supervisor_name: str\n) -> Tuple[AIMessage, ToolMessage]:\n    \"\"\"Create a pair of (AIMessage, ToolMessage) to add to the message history when returning control to the supervisor.\"\"\"\n    tool_call_id = str(uuid.uuid4())\n    # Although no tool exists for transferring back, we simulate the pattern\n    # The AIMessage signals intent, the ToolMessage confirms the transition occurred in the graph logic\n    simulated_tool_name = f\"transfer_back_to_{_normalize_agent_name(supervisor_name)}\"\n\n    # The AIMessage contains the *final output* of the sub-agent in its content field\n    # It should also indicate the intent to hand back, though the graph logic forces this anyway.\n    # The content here is just a placeholder - the actual content comes from the agent's final response.\n    ai_message_content = f\"Task completed. 
Transferring back to {supervisor_name}.\"\n\n    # We still generate a ToolCall structure for consistency in the AIMessage, even if no real tool is called on supervisor side for hand-back.\n    tool_calls = [ToolCall(name=simulated_tool_name, args={}, id=tool_call_id)]\n\n    # Create the AIMessage - crucial to include the sub-agent's name\n    ai_message = AIMessage(\n            content=ai_message_content, # Placeholder - see note below\n            tool_calls=tool_calls,\n            name=agent_name, # Identify which agent is responding\n        )\n\n    # The ToolMessage confirms the transition happened from the graph's perspective\n    tool_message = ToolMessage(\n            content=f\"Successfully transferred back to {supervisor_name} from {agent_name}.\",\n            name=simulated_tool_name,\n            tool_call_id=tool_call_id,\n        )\n\n    # IMPORTANT NOTE: The `_make_call_agent` helper function should populate the\n    # `ai_message.content` with the *actual* final response message(s) from the sub-agent,\n    # replacing the placeholder content above. It keeps the tool_calls structure.\n    # The code provided for `_make_call_agent` seems to handle extracting `output['messages']`.\n    # We need to ensure it correctly structures the AIMessage part of the tuple returned here.\n    # Let's refine create_handoff_back_messages to just create the ToolMessage,\n    # as the AIMessage content comes from the sub-agent's actual final output.\n\n    # Refined approach: _make_call_agent gets the final AI response, we only need the ToolMessage here?\n    # No, the pattern expects both. Let's assume _make_call_agent takes the *last* message from the\n    # sub-agent's output and packages it into this AIMessage structure.\n\n    return ai_message, tool_message # Return both for the standard pattern"
  },
  {
    "path": "core/agents/state_based_supervisor/planner_node.py",
    "content": "import re\nimport json\nimport time\nimport copy\nimport ast\nimport traceback\nimport anyio # <--- 导入 anyio\nfrom typing import Dict, Any, List, Optional, Union\nfrom datetime import datetime\nfrom langchain_core.messages import BaseMessage, AIMessage, SystemMessage, HumanMessage\nfrom langchain_core.runnables import RunnableConfig\n\n# 内部导入\ntry:\n    from .state_schema import PlanningAgentState, Plan\n    from .planning_handler import PlanningStateHandler\n    from .prompt import PLANNER_SYSTEM_PROMPT_TEMPLATE\nexcept ImportError as e:\n    print(f\"Error importing modules in planner_node.py: {e}\")\n    class PlanningAgentState(Dict): pass; \n    class Plan(Dict): pass; \n    class PlanningStateHandler: pass\n    PLANNER_SYSTEM_PROMPT_TEMPLATE = \"Fallback Planner Prompt: Error loading template. Args: {agent_descriptions}\"\n\n# --- Planner 节点核心逻辑 (异步) ---\nasync def planner_node_logic(\n    state: PlanningAgentState,\n    config: Optional[RunnableConfig],\n    model: Any, # Planner 使用的 LLM\n    agent_description_map: Dict[str, str] # 需要 Agent 描述来分配任务\n) -> Dict[str, Any]:\n    \"\"\"Planner 节点逻辑：分析请求，生成初始计划\"\"\"\n    print(f\"--- Entering Planner Node ---\")\n    messages: List[BaseMessage] = state.get('messages', [])\n    # Planner 通常在 plan 为空时运行\n    plan: Optional[Plan] = state.get('plan')\n    if plan:\n         print(\"Planner Node: Plan already exists. Skipping plan creation.\")\n         # 如果计划已存在，Planner 不应再执行，直接返回当前状态？\n         # 或者返回一个空更新，让图流向 Supervisor？\n         # 返回空更新更安全，让 Supervisor 继续\n         return {} # 返回空字典，状态不变\n\n    if not messages:\n         print(\"Planner Node: No messages found to create a plan from.\")\n         return {\"error\": \"Planner received no messages.\"}\n\n    # --- 1. 
准备 Planner Prompt ---\n    # Planner 只需要 Agent 描述，不需要 plan_json 或 current_date?\n    # 可以让它知道日期\n    desc_list = [f\"- {name}: {desc}\" for name, desc in agent_description_map.items()]\n    agent_descriptions_str = \"\\n\".join(desc_list)\n    current_date_str = datetime.now().strftime(\"%a, %b %d, %Y\") # Planner 也可能需要日期\n\n    system_prompt_text = \"Error: Planner prompt template could not be loaded/formatted.\"\n    try:\n        # 加载 Planner 的模板\n        from .prompt import PLANNER_SYSTEM_PROMPT_TEMPLATE\n        system_prompt_text = PLANNER_SYSTEM_PROMPT_TEMPLATE.format(\n            agent_descriptions=agent_descriptions_str,\n            # 如果 Planner Prompt 需要日期：\n            current_date=current_date_str\n        )\n    except ImportError: print(\"ERROR: Could not import PLANNER_SYSTEM_PROMPT_TEMPLATE\")\n    except KeyError as e: print(f\"ERROR: Missing key in planner prompt formatting: {e}\")\n    except Exception as e: print(f\"ERROR: Unexpected error loading/formatting planner prompt: {e}\")\n\n    # Planner 的输入只需要 System Prompt 和用户的初始请求（通常是第一条）\n    # 或者传递最后几条消息？为了简单，先只用第一条 HumanMessage\n    initial_user_request = next((m for m in messages if isinstance(m, HumanMessage)), None)\n    if not initial_user_request:\n         print(\"Planner Node: No HumanMessage found in initial state.\")\n         return {\"error\": \"Planner did not find initial user request.\"}\n\n    llm_input_messages = [SystemMessage(content=system_prompt_text), initial_user_request]\n\n    # --- 2. 
调用 Planner LLM ---\n    print(\"--- Calling Planner LLM ---\")\n    response: Optional[AIMessage] = None\n    llm_error_msg: Optional[str] = None\n    try:\n        response = await model.ainvoke(llm_input_messages, config=config)\n        if not isinstance(response, AIMessage): raise TypeError(\"Planner LLM returned non-AIMessage.\")\n        # Planner 的回复主要是指令，可以不设置 name\n        print(f\"Planner LLM Raw Response Content: {response.content[:300]}...\")\n        # Planner 不应该调用工具\n        if response.tool_calls: print(\"Warning: Planner LLM unexpectedly generated tool calls!\")\n        messages_to_add: List[BaseMessage] = [response] # 可以选择是否将 Planner 的思考过程加入 history\n    except Exception as e:\n        print(f\"!!! Error invoking Planner LLM: {e}\"); traceback.print_exc()\n        llm_error_msg = f\"Planner LLM invocation failed: {e}\"\n        messages_to_add = []\n        response = None\n\n    # --- 3. 处理 Planner LLM 回复 (解析 CREATE_PLAN) ---\n    new_plan: Optional[Plan] = None\n    plan_updated: bool = False # 标记计划是否在本节点成功创建\n    directive_error_msg: Optional[str] = None\n\n    if response and isinstance(response.content, str):\n        try:\n            plan_match = re.search(r\"PLAN_UPDATE:\\s*CREATE_PLAN\\s*(\\{.*?\\})\\s*$\", response.content, re.IGNORECASE | re.DOTALL | re.MULTILINE)\n            if plan_match:\n                args_json_str = plan_match.group(1)\n                print(f\"Planner directive found: CREATE_PLAN with args: {args_json_str[:100]}...\")\n                try:\n                     args = json.loads(args_json_str)\n                     if not isinstance(args, dict): raise ValueError(\"Args JSON not a dict.\")\n                     \n                     title=args.get(\"title\", \"Plan\"); desc=args.get(\"description\",\"\"); tasks=args.get(\"tasks\",[])\n                     if isinstance(tasks, list) and all(isinstance(t, dict) and 'description' in t for t in tasks):\n                          for task_data in tasks: 
task_data['status'] = 'pending' # 强制状态\n                          new_plan = PlanningStateHandler.create_plan(title, desc)\n                          new_plan = PlanningStateHandler.add_tasks(new_plan, tasks); plan_updated = True\n                          print(\"DEBUG: Plan successfully created by Planner node.\")\n                     else: raise ValueError(\"Invalid 'tasks' format (must be list of dicts with 'description').\")\n\n                except (json.JSONDecodeError, ValueError, KeyError, TypeError) as e:\n                     err_msg = f\"Error processing CREATE_PLAN directive: {type(e).__name__} - {e}\"\n                     print(err_msg); traceback.print_exc(); directive_error_msg = err_msg\n                except Exception as e:\n                     err_msg = f\"Unexpected error processing CREATE_PLAN: {type(e).__name__} - {e}\"\n                     print(err_msg); traceback.print_exc(); directive_error_msg = err_msg\n            else:\n                 directive_error_msg = \"Planner LLM did not output a valid PLAN_UPDATE: CREATE_PLAN directive.\"\n                 print(f\"Warning: {directive_error_msg}\")\n                 # 即使没有指令，也可能需要返回 Planner 的回复消息\n                 # 但如果没有 plan，流程可能无法继续，所以记录错误\n\n        except Exception as outer_e:\n             directive_error_msg = f\"Error searching for PLAN_UPDATE directive: {outer_e}\"\n             print(f\"ERROR: {directive_error_msg}\"); traceback.print_exc()\n\n    # --- 4. 准备返回的状态更新 ---\n    updates: Dict[str, Any] = {\"messages\": messages_to_add} # 添加 Planner 的回复消息\n    if plan_updated and new_plan:\n        updates[\"plan\"] = new_plan # 返回新创建的 Plan\n    \n    final_error = llm_error_msg or directive_error_msg\n    if final_error: # 记录 Planner 步骤中遇到的第一个错误\n        updates[\"error\"] = final_error\n\n    print(f\"--- Exiting Planner Node. 
Plan created: {plan_updated} ---\")\n    return updates\n\n\n# --- Planner 节点的同步包装器 (使用 anyio) ---\ndef planner_node_logic_sync(\n    state: PlanningAgentState,\n    config: Optional[RunnableConfig],\n    model: Any,\n    agent_description_map: Dict[str, str]\n) -> Dict[str, Any]:\n    \"\"\"planner_node_logic 的同步包装器\"\"\"\n    print(f\"--- Entering Planner Node (Sync Wrapper) ---\")\n    try:\n        # 使用 anyio 在同步函数中运行异步函数\n        return anyio.run( # type: ignore\n            planner_node_logic, state, config, model, agent_description_map\n        )\n    except Exception as e:\n        print(f\"Error running planner_node_logic synchronously: {e}\")\n        traceback.print_exc()\n        return {\"error\": f\"Planner sync execution failed: {e}\", \"messages\": state.get(\"messages\",[])}"
  },
  {
    "path": "core/agents/state_based_supervisor/planning_handler.py",
    "content": "# reason_graph/planning_handler.py\nimport uuid\nimport datetime\nfrom typing import List, Dict, Optional, Any\nfrom .state_schema import TaskStatus, PlanningStatus, Task, Plan # 从 state_schema 导入类型\n\nclass PlanningStateHandler:\n    \"\"\"\n    使用静态方法管理一个表示项目计划的字典。\n    计划现在存储在 LangGraph 的状态中，此类提供操作该字典的函数。\n    \"\"\"\n\n    @staticmethod\n    def _now() -> str:\n        return datetime.datetime.now(datetime.timezone.utc).isoformat()\n\n    @staticmethod\n    def _gen_id() -> str:\n        # 生成更易读的任务 ID (可选)\n        # return f\"task_{str(uuid.uuid4())[:8]}\"\n        return str(uuid.uuid4())\n\n    @staticmethod\n    def create_plan(title: str, description: str) -> Plan:\n        \"\"\"创建一个新的 Plan 字典\"\"\"\n        now = PlanningStateHandler._now()\n        return Plan(\n            title=title,\n            description=description,\n            status=\"planning\",  # 初始状态为规划中\n            tasks=[],\n            current_task_id=None,\n            created_at=now,\n            updated_at=now,\n            completed_at=None,\n        )\n\n    @staticmethod\n    def create_task(description: str,\n                    agent: Optional[str] = None,\n                    dependencies: Optional[List[str]] = None) -> Task:\n        \"\"\"创建一个新的 Task 字典\"\"\"\n        now = PlanningStateHandler._now()\n        return Task(\n            id=PlanningStateHandler._gen_id(),\n            description=description.strip(),\n            status=\"pending\", # 初始状态为待处理\n            agent=agent.strip() if agent else None,\n            created_at=now,\n            updated_at=now,\n            completed_at=None,\n            dependencies=dependencies or [],\n            notes=None,\n            evaluation=None,\n            result=None,\n        )\n\n    @staticmethod\n    def add_tasks(plan: Plan, tasks_data: List[Dict[str, Any]]) -> Plan:\n        \"\"\"向 Plan 字典中添加任务\"\"\"\n        if not isinstance(plan, dict) or \"tasks\" not in plan:\n             raise 
ValueError(\"Invalid plan structure provided.\")\n        if not isinstance(tasks_data, list):\n             raise ValueError(\"tasks_data must be a list of task dictionaries.\")\n\n        for tinfo in tasks_data:\n            desc = tinfo.get(\"description\")\n            if not desc: continue # 跳过没有描述的任务\n            agent = tinfo.get(\"agent\")\n            deps = tinfo.get(\"dependencies\")\n            task = PlanningStateHandler.create_task(desc, agent, deps)\n            plan[\"tasks\"].append(task)\n\n        # 如果添加任务时计划仍在 planning 阶段，可以转为 ready\n        if plan.get(\"status\") == \"planning\":\n             plan[\"status\"] = \"ready\"\n\n        plan[\"updated_at\"] = PlanningStateHandler._now()\n        return plan\n\n    @staticmethod\n    def update_task(plan: Plan,\n                    by_id: Optional[str] = None,\n                    new_desc: Optional[str] = None,\n                    new_status: Optional[TaskStatus] = None,\n                    new_agent: Optional[str] = None,\n                    new_notes: Optional[str] = None,\n                    new_evaluation: Optional[str] = None,\n                    new_result: Optional[Any] = None) -> Plan:\n        \"\"\"更新 Plan 字典中指定 ID 的任务\"\"\"\n        if not isinstance(plan, dict) or \"tasks\" not in plan:\n             raise ValueError(\"Invalid plan structure provided.\")\n        if not by_id:\n            raise ValueError(\"Must provide 'by_id' to update a task.\")\n\n        task = next((t for t in plan[\"tasks\"] if t.get(\"id\") == by_id), None)\n        if not task:\n            raise ValueError(f\"No matching task found with ID: {by_id}\")\n\n        updated = False\n        if new_desc is not None and task.get(\"description\") != new_desc.strip():\n            task[\"description\"] = new_desc.strip()\n            updated = True\n        if new_status is not None and task.get(\"status\") != new_status.strip():\n            task[\"status\"] = new_status.strip()\n            if 
new_status.strip() == \"completed\":\n                task[\"completed_at\"] = PlanningStateHandler._now()\n            updated = True\n        if new_agent is not None and task.get(\"agent\") != new_agent.strip():\n            task[\"agent\"] = new_agent.strip()\n            updated = True\n        if new_notes is not None and task.get(\"notes\") != new_notes.strip():\n            task[\"notes\"] = new_notes.strip()\n            updated = True\n        if new_evaluation is not None and task.get(\"evaluation\") != new_evaluation.strip():\n            task[\"evaluation\"] = new_evaluation.strip()\n            updated = True\n        if new_result is not None: # 直接更新结果（谨慎使用，可能很大）\n             task[\"result\"] = new_result\n             updated = True\n\n        if updated:\n            task[\"updated_at\"] = PlanningStateHandler._now()\n            plan[\"updated_at\"] = PlanningStateHandler._now() # 更新整个计划的更新时间\n\n        # 检查并更新整个计划的状态\n        plan = PlanningStateHandler.update_plan_status(plan)\n\n        return plan\n\n    @staticmethod\n    def update_plan_status(plan: Plan) -> Plan:\n         \"\"\"根据任务状态自动更新计划状态\"\"\"\n         if not isinstance(plan, dict) or \"tasks\" not in plan:\n              return plan # Return as is if invalid\n\n         tasks = plan[\"tasks\"]\n         if not tasks: # 没有任务\n              if plan.get(\"status\") not in [\"completed\", \"failed\", \"error\"]:\n                   plan[\"status\"] = \"ready\" # 或 \"completed\" 如果没有任务就算完成? 
设为 ready 似乎更合理\n              return plan\n\n         all_completed = all(t.get(\"status\") == \"completed\" for t in tasks)\n         any_failed = any(t.get(\"status\") == \"failed\" for t in tasks)\n         any_in_progress = any(t.get(\"status\") in [\"in_progress\", \"pending_review\"] for t in tasks)\n         any_pending = any(t.get(\"status\") == \"pending\" for t in tasks)\n\n         current_status = plan.get(\"status\")\n         new_status = current_status\n\n         if any_failed:\n             new_status = \"failed\" # 或 \"error\"\n         elif all_completed:\n             new_status = \"completed\"\n             plan[\"completed_at\"] = PlanningStateHandler._now()\n         elif any_in_progress:\n             new_status = \"executing\"\n         elif any_pending or not any_in_progress: # 如果还有 pending 或所有任务都结束了但不是 completed/failed\n              if current_status not in [\"completed\", \"failed\", \"error\"]: # 避免覆盖最终状态\n                 new_status = \"ready\" # 准备好执行或等待新任务\n\n         if new_status != current_status:\n              plan[\"status\"] = new_status\n              plan[\"updated_at\"] = PlanningStateHandler._now()\n\n         return plan\n\n    @staticmethod\n    def set_current_task(plan: Plan, task_id: Optional[str]) -> Plan:\n        \"\"\"设置 Plan 字典中的当前任务 ID\"\"\"\n        if not isinstance(plan, dict):\n             raise ValueError(\"Invalid plan structure provided.\")\n\n        if task_id is None:\n             plan[\"current_task_id\"] = None\n             plan[\"updated_at\"] = PlanningStateHandler._now()\n             return plan\n\n        found = any(t.get(\"id\") == task_id for t in plan.get(\"tasks\", []))\n        if not found:\n            raise ValueError(f\"Task ID '{task_id}' not found in plan.\")\n\n        if plan.get(\"current_task_id\") != task_id:\n            plan[\"current_task_id\"] = task_id\n            plan[\"updated_at\"] = PlanningStateHandler._now()\n        return plan\n\n    @staticmethod\n    def 
get_task(plan: Plan, task_id: str) -> Optional[Task]:\n         \"\"\"根据 ID 获取任务字典\"\"\"\n         if not isinstance(plan, dict) or \"tasks\" not in plan:\n              return None\n         return next((t for t in plan[\"tasks\"] if t.get(\"id\") == task_id), None)\n\n    @staticmethod\n    def get_next_pending_task(plan: Plan) -> Optional[Task]:\n         \"\"\"获取下一个处于 pending 状态且所有依赖已完成的任务\"\"\"\n         if not isinstance(plan, dict) or \"tasks\" not in plan:\n              return None\n\n         completed_task_ids = {t[\"id\"] for t in plan[\"tasks\"] if t.get(\"status\") == \"completed\"}\n\n         for task in plan[\"tasks\"]:\n              if task.get(\"status\") == \"pending\":\n                   dependencies = task.get(\"dependencies\", [])\n                   if not dependencies or all(dep_id in completed_task_ids for dep_id in dependencies):\n                        return task\n         return None # 没有找到合适的下一个任务\n\n    @staticmethod\n    def finish_plan(plan: Plan) -> Plan:\n        \"\"\"强制将 Plan 标记为完成\"\"\"\n        if not isinstance(plan, dict):\n             raise ValueError(\"Invalid plan structure provided.\")\n        if plan.get(\"status\") != \"completed\":\n            plan[\"status\"] = \"completed\"\n            plan[\"completed_at\"] = PlanningStateHandler._now()\n            plan[\"updated_at\"] = PlanningStateHandler._now()\n        return plan"
  },
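  {
    "path": "examples/planning_handler_demo.py",
    "content": "# examples/planning_handler_demo.py\n# 说明: 这是一个假设性的演示脚本 (文件名与导入路径均为假设, 并非项目已有文件),\n# 展示用 PlanningStateHandler 管理 Plan 字典的典型生命周期。\nfrom core.agents.state_based_supervisor.planning_handler import PlanningStateHandler\n\n# 1. 创建计划 (初始状态为 \"planning\")\nplan = PlanningStateHandler.create_plan(\"Demo Plan\", \"Research then report\")\n\n# 2. 添加任务后, 计划自动从 \"planning\" 转为 \"ready\"\nplan = PlanningStateHandler.add_tasks(plan, [\n    {\"description\": \"Research the topic\", \"agent\": \"research_expert\"},\n    {\"description\": \"Write the report\", \"agent\": \"reporter_expert\"},\n])\nresearch_id = plan[\"tasks\"][0][\"id\"]\nreport_id = plan[\"tasks\"][1][\"id\"]\n# 任务 ID (UUID) 在创建后才可知, 依赖通常在此时补充\nplan[\"tasks\"][1][\"dependencies\"] = [research_id]\n\n# 3. get_next_pending_task 只返回依赖已全部完成的 pending 任务\nassert PlanningStateHandler.get_next_pending_task(plan)[\"id\"] == research_id\n\n# 4. 完成第一个任务; update_task 内部会调用 update_plan_status 联动计划状态\nplan = PlanningStateHandler.update_task(plan, by_id=research_id, new_status=\"completed\", new_evaluation=\"ok\")\nassert PlanningStateHandler.get_next_pending_task(plan)[\"id\"] == report_id\n\n# 5. 全部任务完成后, 计划状态自动变为 \"completed\"\nplan = PlanningStateHandler.update_task(plan, by_id=report_id, new_status=\"completed\")\nassert plan[\"status\"] == \"completed\"\n"
  },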
  {
    "path": "core/agents/state_based_supervisor/prompt.py",
    "content": "# # --- Planner Agent System Prompt (新增) ---\n# PLANNER_SYSTEM_PROMPT_TEMPLATE = \"\"\"You are an expert planning agent. Your sole responsibility is to analyze a user request and create a detailed, step-by-step plan to fulfill it by coordinating specialized agents.\n\n# The current date is {current_date}.\n\n# ## Agent Descriptions:\n# {agent_descriptions}\n# *(This list includes the capabilities of available specialist agents.)*\n\n# ## Task:\n# Analyze the user request provided in the message history. Break it down into a sequence of logical tasks. For each task, determine the most suitable agent from the descriptions provided.\n\n# ## Output Format:\n# You MUST output **ONLY** a single `PLAN_UPDATE: CREATE_PLAN <JSON_ARGS>` directive in your response content. The JSON arguments MUST be valid and contain:\n# - \"title\": A concise title for the overall plan.\n# - \"description\": A brief description summarizing the user's goal.\n# - \"tasks\": A list of task objects. Each task object MUST contain:\n#     - \"description\": A clear and actionable description of the specific sub-task.\n#     - \"agent\": The name of the MOST SUITABLE agent from the Agent Descriptions to perform this task. 
Leave empty (\"\") if unsure or if it's a general task.\n#     - \"status\": Set **all** initial tasks to **\"pending\"**.\n#     - (Optional) \"dependencies\": A list of task IDs (UUIDs that will be generated later) this task depends on, if any (usually empty for initial plan).\n\n# **Example JSON Args:**\n# `{{\"title\": \"Research and Report on AI Ethics\", \"description\": \"User wants a report on AI ethics, including research and writing.\", \"tasks\": [{{\"description\": \"Research current trends in AI ethics using web search\", \"agent\": \"research_expert\", \"status\": \"pending\"}}, {{\"description\": \"Write a structured report summarizing the findings\", \"agent\": \"reporter_expert\", \"status\": \"pending\", \"dependencies\": [\"<ID_of_research_task>\"]}}]}}` \n# *(Note: Actual IDs are UUIDs generated later, dependencies often added via UPDATE_TASK)*\n\n# **CRITICAL**: Output **ONLY** the `PLAN_UPDATE: CREATE_PLAN <JSON_ARGS>` directive and nothing else. Do not add conversational text. Make sure the JSON is valid.\n# \"\"\"\n\n# SUPERVISOR_PLANNING_PROMPT_TEMPLATE = \"\"\"You are a meticulous top-level Supervisor agent responsible for executing an existing plan, coordinating specialist agents, and managing task execution based on the provided state. You rely on an external evaluator node to assess task completion after agents run.\n\n# The current date is {current_date}.\n\n# ## Current Plan State:\n# ```json\n# {plan_json}\n# ```\n# *(Review plan status and individual task statuses and IDs (UUIDs). Your main goal is to drive the plan status to 'completed'.)*\n\n# ## Agent Descriptions:\n# {agent_descriptions}\n\n# ## Your Goal:\n# Execute the **existing plan** strictly step-by-step towards 'completed' status. Make **exactly one** logical primary decision per turn. **Do NOT evaluate agent results or mark tasks 'completed'/'failed' yourself.**\n\n# ## Workflow & Decision Process (Strict Sequence):\n# 1.  
**Analyze State**: Review the latest messages and the 'Current Plan State'. (Note: If the last message is from a sub-agent, an evaluator node has already processed it and updated the plan state before your turn).\n# 2.  **Determine ONE Next Action**: Execute the FIRST matching condition below and **IMMEDIATELY END YOUR TURN**:\n\n#     * **A. Initiate Next Task**: If the plan is 'ready' or 'executing', AND no task is currently 'in_progress', AND a 'pending' task is ready (dependencies met):\n#         * **Action**: Find the FIRST such task. Output **ONLY** `PLAN_UPDATE: UPDATE_TASK <JSON_ARGS_status_in_progress>`. **CRITICAL: Use the exact UUID for `by_id`!** JSON Args should be ` {{\"by_id\": \"<task_uuid>\", \"status\": \"in_progress\"}}`.\n#     * **B. Delegate In-Progress Task**: If a task **currently has status 'in_progress'** (check plan state):\n#         * **Action**: Identify the best agent. Output **ONLY** the `transfer_to_<agent_name>` tool call. **CRITICAL**: Tool call args **MUST** include `\"task_id\": \"<TASK_UUID_FROM_PLAN>\"` and clear `\"instructions\"`.\n#     * **C. Finish Plan**: If **ALL** tasks in the plan now have status 'completed' AND the plan status is NOT 'completed' yet (check plan state provided):\n#         * **Action**: Output **ONLY** `PLAN_UPDATE: FINISH_PLAN {{}}`.\n#     * **D. Generate Final Output**: If the **Plan Status IS 'completed'** (check plan state provided):\n#         * **Action**: Decide final output format based on original request. EITHER call `transfer_to_reporter_expert` (passing context in args, like relevant task IDs) OR generate the final `AIMessage` content yourself summarizing the overall result.\n#     * **E. 
Waiting/Blocked/Failed**: If no other action is appropriate (e.g., plan status 'failed', or waiting for dependencies):\n#         * **Action**: Output a brief waiting or status message explaining the situation.\n\n# ## Output Constraints:\n# - Your response MUST contain exactly ONE primary action (ONE PLAN_UPDATE directive OR ONE transfer_to tool call OR the final answer OR a status message).\n# - `PLAN_UPDATE:` directives MUST be in the text content with **valid JSON arguments**.\n# - **CRITICAL**: `UPDATE_TASK` **MUST** use the correct Task UUID string for `\"by_id\"`.\n\n# ## Planning Directives Format (Mandatory - JSON Args in text):\n# - `PLAN_UPDATE: ADD_TASKS {{\"tasks\": [...]}}` # You can still add tasks if needed mid-plan\n# - `PLAN_UPDATE: UPDATE_TASK {{\"by_id\": \"<task-uuid-from-plan>\", \"status\": \"in_progress\", \"notes\": \"<optional notes>\"}}` (**UUID!** Only use non-terminal statuses).\n# - `PLAN_UPDATE: FINISH_PLAN {{}}`\n\n# ## Tool Usage:\n# - Only `transfer_to_<agent_name>` tools. Args **MUST** include `\"task_id\"` and `\"instructions\"`.\n\n# Now, analyze the current state (which reflects any recent evaluations) and the LAST message, and determine the single next action based strictly on the workflow for **executing the existing plan**. Remember, you do **not** evaluate results or mark tasks complete/failed.\n# \"\"\"\n\n# --- Planner Agent System Prompt  ---\nPLANNER_SYSTEM_PROMPT_TEMPLATE = \"\"\"You are an expert planning agent. Your sole responsibility is to analyze a user request and create a detailed, step-by-step plan to fulfill it by coordinating specialized agents.\n\nThe current date is {current_date}.\n\n## Agent Descriptions:\n{agent_descriptions}\n*(This list includes the capabilities of available specialist agents.)*\n\n## Task:\nAnalyze the user request provided in the message history. Break it down into a sequence of logical tasks. 
For each task, determine the most suitable agent from the descriptions provided.\n\n## Task Granularity Guidelines:\n- **IMPORTANT**: Maintain appropriate task granularity based on complexity:\n  - For simple requests, create just 1-2 tasks that can be completed by a single agent\n  - For complex requests, break down into 3-5 logical steps\n  - Avoid excessive fragmentation of simple tasks\n  - Each task should represent a meaningful unit of work\n\n## Output Format:\nYou MUST output **ONLY** a single `PLAN_UPDATE: CREATE_PLAN <JSON_ARGS>` directive in your response content. The JSON arguments MUST be valid and contain:\n- \"title\": A concise title for the overall plan.\n- \"description\": A brief description summarizing the user's goal.\n- \"tasks\": A list of task objects. Each task object MUST contain:\n    - \"description\": A clear and actionable description of the specific sub-task.\n    - \"agent\": The name of the MOST SUITABLE agent from the Agent Descriptions to perform this task. 
Leave empty (\"\") if unsure or if it's a general task.\n    - \"status\": Set **all** initial tasks to **\"pending\"**.\n    - (Optional) \"dependencies\": Usually empty for initial plan.\n\n**Example JSON Args for SIMPLE request:**\n`{{\"title\": \"Answer Question About Python\", \"description\": \"User wants to know how to use list comprehensions in Python\", \"tasks\": [{{\"description\": \"Provide a comprehensive explanation of Python list comprehensions with examples\", \"agent\": \"coder_expert\", \"status\": \"pending\"}}]}}`\n\n**Example JSON Args for COMPLEX request:**\n`{{\"title\": \"Research and Report on AI Ethics\", \"description\": \"User wants a detailed report on AI ethics\", \"tasks\": [{{\"description\": \"Research current trends in AI ethics using web search\", \"agent\": \"research_expert\", \"status\": \"pending\"}}, {{\"description\": \"Write a structured report summarizing the findings\", \"agent\": \"reporter_expert\", \"status\": \"pending\"}}]}}`\n\n**CRITICAL**: Output **ONLY** the `PLAN_UPDATE: CREATE_PLAN <JSON_ARGS>` directive and nothing else. Do not add conversational text. Make sure the JSON is valid.\n\"\"\"\n\n# --- Supervisor Planning Prompt (允许动作组合 + 强制UUID/JSON) ---\nSUPERVISOR_PLANNING_PROMPT_TEMPLATE = \"\"\"You are a meticulous top-level Supervisor agent responsible for executing an existing plan, coordinating specialist agents, and managing task execution based on the provided state.\n\nThe current date is {current_date}.\n\n## Current Plan State:\n```json\n{plan_json}\n```\n*(Review plan status and individual task statuses and IDs (UUIDs). Your main goal is to drive the plan status to 'completed'.)*\n\n## Agent Descriptions:\n{agent_descriptions}\n*(This list includes specialist agents and yourself.)*\n\n## Your Goal:\nExecute the **existing plan** step-by-step towards 'completed' status by making logical decisions and issuing appropriate directives and tool calls.\n\n## Workflow & Decision Guidelines:\n1.  
**Analyze State**: Review the latest messages (especially agent results) and the 'Current Plan State'.\n2.  **Determine Next Action(s)**: Based on the analysis, decide the next logical step(s).\n\n    * **If a sub-agent just returned results**:\n        a. Evaluate the result against the task.\n        b. Issue the `PLAN_UPDATE: UPDATE_TASK <JSON_ARGS_status_completed_or_other>`. **CRITICAL: Use the exact Task UUID for `by_id`!** Include `evaluation` and `notes`.\n        c. **After** the update directive, **if** more tasks are pending and ready, you **CAN** identify the next task, issue `PLAN_UPDATE: UPDATE_TASK <JSON_ARGS_status_in_progress>` (using its UUID), **AND** issue the corresponding `transfer_to_<agent_name>` tool call **in the same response**.\n    * **If no agent just returned, AND a 'pending' task is ready**:\n        a. Identify the *next* suitable 'pending' task.\n        b. Issue `PLAN_UPDATE: UPDATE_TASK <JSON_ARGS_status_in_progress>` (using its UUID).\n        c. **Immediately following** the directive in the same response, issue the corresponding `transfer_to_<agent_name>` tool call with instructions (including Task UUID).\n    * **If ALL tasks are 'completed' AND plan status is NOT 'completed' yet**:\n        a. Issue `PLAN_UPDATE: FINISH_PLAN {{}}`.\n        b. **In the same response**, decide the final output: EITHER call `transfer_to_reporter_expert` OR generate the final `AIMessage` content yourself.\n    * **If Plan Status IS 'completed'**:\n        a. Your job is done. 
Generate the final `AIMessage` content if you didn't call the reporter in the previous step.\n    * **If Waiting/Blocked/Failed**: Output a status message explaining the situation.\n\n## Output Constraints:\n- Your response **CAN** contain **both** a `PLAN_UPDATE:` directive (in content) and a `transfer_to_` tool call if logically appropriate (e.g., completing one task and starting the next).\n- Your response **CAN** contain **both** `PLAN_UPDATE: FINISH_PLAN` and the final action (call reporter or final answer).\n- **NEVER** delegate to more than one agent simultaneously (only one `transfer_to_` tool call per response).\n- `PLAN_UPDATE:` directives MUST be in the text content with **valid JSON arguments**.\n- **CRITICAL**: `UPDATE_TASK` **MUST** use the correct Task UUID string for `\"by_id\"`.\n\n## Planning Directives Format (Mandatory - JSON Args in text):\nUse these exact formats **within your response content**. Arguments **MUST** be a valid JSON string.\n- `PLAN_UPDATE: ADD_TASKS {{\"tasks\": [...]}}`\n- `PLAN_UPDATE: UPDATE_TASK {{\"by_id\": \"<task-uuid-from-plan>\", \"status\": \"<new_status>\", \"evaluation\": \"<text>\", \"notes\": \"<text>\"}}` (**UUID!**)\n- `PLAN_UPDATE: FINISH_PLAN {{}}`\n*(Note: CREATE_PLAN is handled by the Planner Agent)*\n\n## Tool Usage:\n- Only `transfer_to_<agent_name>` tools are callable by you. Args **MUST** include `\"task_id\"` and `\"instructions\"`.\n\nNow, analyze the current state and messages, and determine the necessary action(s) for this turn.\n\"\"\"\n\n\n# **主要调整说明:**\n\n# 1.  **允许动作组合**: 修改了 Workflow 和 Output Constraints，明确允许 Supervisor 在一个回合中既更新 Plan 状态（通过 `PLAN_UPDATE:` 指令）又委派任务（通过 `transfer_to_` 工具调用），或者在结束计划的同时进行最终输出操作。这给予 LLM 更大的灵活性，可能更符合它的“思考习惯”。\n# 2.  **保留核心要求**: 仍然**强制要求** `PLAN_UPDATE` 的参数必须是有效的 JSON，并且 `UPDATE_TASK` **必须**使用正确的 Task UUID。同时，**仍然禁止**一次委派多个 Agent。\n# 3.  
**移除了严格的 `STOP` 指令**: 不再强制要求 LLM 在发出 `PLAN_UPDATE` 后必须结束当前回合。\n\n# **预期效果:**\n\n# * Supervisor LLM 在处理完子 Agent 的结果并更新任务状态后，如果发现下一个任务已准备就绪，它可能会在同一个回复中直接发出 `transfer_to_` 指令，从而减少一个交互回合，提高效率。\n# * 在所有任务完成后，它可以一步到位地发出 `FINISH_PLAN` 并同时决定最终输出（调用 Reporter 或自己总结）。\n# * **潜在风险**: 这种灵活性也可能使得 LLM 在复杂情况下更容易出错（例如，忘记更新状态就去委派，或者错误地组合了动作）。但鉴于之前严格分步也遇到了问题，这种方式值得一试。"
  },
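  {
    "path": "examples/parse_plan_update_demo.py",
    "content": "# examples/parse_plan_update_demo.py\n# 说明: 假设性示例 (文件名与实现均为示意, 并非项目的真实解析代码),\n# 演示如何按照 prompt 约定的 `PLAN_UPDATE: <ACTION> <JSON_ARGS>` 格式,\n# 从 Planner/Supervisor 的文本输出中解析指令。实际解析逻辑位于各节点实现中。\nimport json\nimport re\n\nDIRECTIVE_RE = re.compile(\n    r\"PLAN_UPDATE:\\s*(CREATE_PLAN|ADD_TASKS|UPDATE_TASK|FINISH_PLAN)\\s*(\\{.*\\})?\",\n    re.DOTALL,\n)\n\ndef parse_plan_update(content: str):\n    \"\"\"返回 (action, args) 元组; 未找到指令时返回 None, JSON 无效时抛出 ValueError\"\"\"\n    match = DIRECTIVE_RE.search(content)\n    if match is None:\n        return None\n    raw_args = match.group(2)\n    try:\n        args = json.loads(raw_args) if raw_args else {}\n    except json.JSONDecodeError as e:\n        raise ValueError(f\"Invalid JSON args in PLAN_UPDATE directive: {e}\")\n    return match.group(1), args\n\nif __name__ == \"__main__\":\n    text = 'PLAN_UPDATE: UPDATE_TASK {\"by_id\": \"<task-uuid>\", \"status\": \"in_progress\"}'\n    print(parse_plan_update(text))\n"
  },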
  {
    "path": "core/agents/state_based_supervisor/state_schema.py",
    "content": "# reason_graph/state_schema.py\nimport operator\nfrom typing import Dict, List, Optional, Any, Literal, TypedDict, Sequence, Annotated, Union\nfrom langchain_core.messages import BaseMessage\nfrom langgraph.graph.message import add_messages\nfrom langgraph.managed import IsLastStep, RemainingSteps\n\n# 定义计划状态类型\nPlanningStatus = Literal[\"not_started\", \"planning\", \"ready\", \"executing\", \"completed\", \"failed\", \"error\"]\n\n# 定义任务状态类型\nTaskStatus = Literal[\"pending\", \"ready\", \"in_progress\", \"completed\", \"failed\", \"skipped\", \"pending_review\", \"revision_needed\"]\n\n# 定义任务项\nclass Task(TypedDict, total=False):\n    \"\"\"任务项定义\n\n    表示计划中的一个任务项，包含任务描述、状态、分配的代理等信息\n    \"\"\"\n    id: str  # 任务唯一标识符\n    description: str  # 任务描述\n    status: TaskStatus  # 任务状态\n    agent: Optional[str]  # 分配的代理名称 (建议的执行者)\n    created_at: str  # 创建时间 (ISO 格式)\n    updated_at: str  # 更新时间 (ISO 格式)\n    completed_at: Optional[str]  # 完成时间 (ISO 格式)\n    dependencies: Optional[List[str]]  # 依赖的任务ID列表\n    notes: Optional[str]  # 关于任务执行情况的备注 (可由 Agent 或 Supervisor 更新)\n    evaluation: Optional[str] # 对任务完成情况的评估 (可由 Supervisor LLM 或 Evaluator Agent 更新)\n    result: Optional[Any] # (可选) 存储任务的直接输出结果摘要\n\n# 定义计划\nclass Plan(TypedDict, total=False):\n    \"\"\"计划定义\n\n    表示一个完整的计划，包含计划状态、任务列表等信息\n    \"\"\"\n    status: PlanningStatus  # 计划状态\n    tasks: List[Task]  # 任务列表\n    current_task_id: Optional[str]  # 当前 Supervisor 关注或正在处理的任务ID\n    created_at: str  # 创建时间 (ISO 格式)\n    updated_at: str  # 更新时间 (ISO 格式)\n    completed_at: Optional[str]  # 完成时间 (ISO 格式)\n    title: Optional[str]  # 计划标题\n    description: Optional[str]  # 计划描述 (通常是用户原始请求)\n\n# 扩展基础 AgentState 以支持计划功能\nclass PlanningAgentState(TypedDict):\n    \"\"\"支持计划功能的、用于 Supervisor 图的状态定义\"\"\"\n    messages: Annotated[Sequence[BaseMessage], add_messages] # 消息历史\n    plan: Optional[Plan] = None # 存储计划对象\n    # last_agent_result: Optional[Dict[str, Any]] = None # 存储刚结束的子 Agent 的 {name: ..., 
content: ...}\n    is_last_step: IsLastStep # LangGraph 内部状态\n    remaining_steps: RemainingSteps # LangGraph 内部状态, 用于防止无限循环\n    error: Optional[str] = None # 用于记录执行中发生的错误信息\n    # 可以根据需要添加其他全局共享的状态字段\n    # 例如: shared_context: Optional[Dict] = None\n\n# 可以为子 Agent 定义一个稍微不同的状态（如果它们不需要 plan）\nclass BasicAgentState(TypedDict):\n    \"\"\"基础 Agent 状态，仅包含消息历史\"\"\"\n    messages: Annotated[Sequence[BaseMessage], add_messages]\n    is_last_step: IsLastStep\n    remaining_steps: RemainingSteps\n    error: Optional[str] = None\n\n# 方便类型提示\nStateSchemaType = Union[Dict[str, Any], PlanningAgentState, BasicAgentState]"
  },
  {
    "path": "core/agents/state_based_supervisor/supervisor_graph.py",
    "content": "# reason_graph/supervisor_graph.py\nimport inspect\nimport re\nimport functools\nimport uuid\nimport asyncio\nimport anyio\nimport traceback \nfrom typing import Any, Callable, List, Optional, Type, Union, Dict, Literal, Sequence, cast # <--- 导入 cast\n\nfrom langchain_core.language_models import BaseChatModel, LanguageModelLike\nfrom langchain_core.tools import BaseTool\nfrom langchain_core.messages import AIMessage, ToolMessage, BaseMessage, ToolCall, SystemMessage # <--- 导入 SystemMessage\nfrom langchain_core.runnables import RunnableConfig\nfrom langgraph.utils.runnable import RunnableCallable\n\nfrom langgraph.graph import END, START, StateGraph\nfrom langgraph.graph.state import CompiledStateGraph\nfrom langgraph.prebuilt import ToolNode\nfrom langgraph.pregel import Pregel\n\n# 内部导入\ntry:\n    from core.agents.base.base_agent import BaseAgent\n    from .handoff import create_handoff_tool, _normalize_agent_name # 确保导入 _normalize_agent_name\n    from .state_schema import PlanningAgentState, Plan # 导入 PlanningAgentState 和 Plan\n    from .supervisor_node import supervisor_node_logic # 导入异步节点逻辑\n    from .planner_node import planner_node_logic, planner_node_logic_sync # <--- 导入 Planner 逻辑\n    from .evaluate_result_node import evaluate_result_node_logic, evaluate_result_node_logic_sync # <--- 导入 Evaluator 逻辑\n    from .agent_name import AgentNameMode, with_agent_name\nexcept ImportError as e:\n     print(f\"Error importing modules in supervisor_graph.py: {e}\")\n     # Add Dummy classes for type hints if needed\n     class BaseAgent: pass\n     class PlanningAgentState(Dict): pass\n     class Plan(Dict): pass\n     class Pregel: pass\n     AgentNameMode = Literal[\"inline\"]\n     def create_handoff_tool(*args, **kwargs): return None # type: ignore\n     def _normalize_agent_name(s: str) -> str: return s\n     async def supervisor_node_logic(*args, **kwargs): return {}\n     async def planner_node_logic(*args, **kwargs): return {} # <--- 添加 
planner_node_logic\n     def planner_node_logic_sync(*args, **kwargs): return {} # <--- 添加 planner_node_logic_sync\n     async def evaluate_result_node_logic(*args, **kwargs): return {} # 添加 evaluate_result_node_logic  \n     def evaluate_result_node_logic_sync(*args, **kwargs): return {} # 添加 evaluate_result_node_logic_sync\n     def with_agent_name(model, mode): return model\n\n\n# 定义 OutputMode, MODELS_NO_PARALLEL_TOOL_CALLS, _supports_disable_parallel_tool_calls (保持不变)\nOutputMode = Literal[\"full_history\", \"last_message\"]\nMODELS_NO_PARALLEL_TOOL_CALLS = {\"o3-mini\"}\ndef _supports_disable_parallel_tool_calls(model: LanguageModelLike) -> bool:\n    if not isinstance(model, BaseChatModel): return False\n    if hasattr(model, \"model_name\") and model.model_name in MODELS_NO_PARALLEL_TOOL_CALLS: return False\n    if not hasattr(model, \"bind_tools\"): return False\n    if \"parallel_tool_calls\" not in inspect.signature(model.bind_tools).parameters: return False\n    return True\n\n\n# _make_call_agent (保持不变 - 已支持同步/异步)\ndef _make_call_agent(\n    agent_graph: Pregel, \n    output_mode: OutputMode,\n    add_handoff_back_messages: bool, \n    supervisor_name: str,\n) -> RunnableCallable:\n    if output_mode not in [\"full_history\", \"last_message\"]: raise ValueError(...)\n\n    async def acall_agent(state: Dict, config: Optional[RunnableConfig] = None) -> Dict:\n        agent_name = getattr(agent_graph, 'name', 'sub_agent')\n        print(f\"🟡 [Async invoke] Handoff to agent '{agent_name}'\")\n        sub_agent_input = {\"messages\": state.get(\"messages\", [])}\n        output: Dict[str, Any] = {}\n        agent_error: Optional[str] = None\n\n        try:\n             output = await agent_graph.ainvoke(sub_agent_input, config=config)\n             print(f\"✅ [Async invoke] Agent '{agent_name}' completed.\")\n        except Exception as e:\n             print(f\"!!! 
Error during sub-agent {agent_name} ainvoke: {e}\"); traceback.print_exc()\n             agent_error = f\"Error executing agent '{agent_name}': {type(e).__name__}\"\n\n        sub_agent_messages: List[BaseMessage] = output.get(\"messages\", [])\n        returned_messages: List[BaseMessage] = []\n        if not sub_agent_messages and not agent_error:\n             returned_messages = [AIMessage(content=\"(No output received from agent)\", name=agent_name)]\n        elif output_mode == \"last_message\":\n             last_ai_message = next((m for m in reversed(sub_agent_messages) if isinstance(m, AIMessage)), None)\n             returned_messages = [last_ai_message] if last_ai_message else sub_agent_messages[-1:]\n        else:\n             returned_messages = sub_agent_messages\n             \n        last_content = agent_error\n        if not last_content and returned_messages:\n             last_content = str(returned_messages[-1].content) if hasattr(returned_messages[-1], 'content') else \"(No textual content)\"\n\n        return {\n            \"messages\": returned_messages,\n            \"last_agent_result\": {\n                 \"agent_name\": agent_name,\n                 \"content\": last_content or \"(Agent execution finished without specific output or error)\"\n            }\n        }\n\n    def call_agent(state: Dict, config: Optional[RunnableConfig] = None) -> Dict:\n        agent_name = getattr(agent_graph, 'name', 'sub_agent')\n        print(f\"🟡 [Sync invoke] Handoff to agent '{agent_name}'\")\n        sub_agent_input = {\"messages\": state.get(\"messages\", [])}\n        output: Dict[str, Any] = {}\n        agent_error: Optional[str] = None\n\n        try: output = agent_graph.invoke(sub_agent_input, config=config); print(f\"✅ [Sync invoke] Agent '{agent_name}' completed.\")\n        except NotImplementedError: agent_error = f\"Error: Sync invoke not supported by agent '{agent_name}'.\"; print(agent_error)\n        except Exception as e: 
agent_error = f\"Error during sub-agent {agent_name} invoke: {e}\"; print(f\"!!! {agent_error}\")\n        \n        sub_agent_messages: List[BaseMessage] = output.get(\"messages\", [])\n        returned_messages: List[BaseMessage] = []\n        if not sub_agent_messages and not agent_error: returned_messages = [AIMessage(content=\"(No output received)\", name=agent_name)]\n        elif output_mode == \"last_message\":\n             last_ai_message = next((m for m in reversed(sub_agent_messages) if isinstance(m, AIMessage)), None)\n             returned_messages = [last_ai_message] if last_ai_message else sub_agent_messages[-1:]\n        else: returned_messages = sub_agent_messages\n        \n        last_content = agent_error\n        if not last_content and returned_messages: last_content = str(returned_messages[-1].content) if hasattr(returned_messages[-1], 'content') else \"(No content)\"\n\n        return {\n            \"messages\": returned_messages,\n            \"last_agent_result\": {\n                 \"agent_name\": agent_name,\n                 \"content\": last_content or \"(Agent sync execution finished)\"\n            }\n        }\n\n    return RunnableCallable(func=call_agent, afunc=acall_agent, name=f\"Call_{getattr(agent_graph, 'name', 'sub_agent')}\")\n\n\ndef supervisor_node_logic_sync(\n    state: PlanningAgentState,\n    config: Optional[RunnableConfig],\n    model: Any,\n    supervisor_name: str,\n    agent_description_map: Dict[str, str]\n) -> Dict[str, Any]:\n    print(f\"--- Entering Supervisor Node (Sync Wrapper) ---\")\n    try:\n        return anyio.run(\n            supervisor_node_logic, state, config, model, supervisor_name, agent_description_map\n        )\n    except Exception as e:\n        print(f\"Error running supervisor_node_logic synchronously using anyio: {e}\")\n        import traceback\n        traceback.print_exc()\n        return {\"error\": f\"Sync execution wrapper failed: {e}\", \"messages\": 
state.get(\"messages\",[])}\n\n\ndef create_supervisor(\n    model: LanguageModelLike,\n    sub_agents: List[BaseAgent],\n    state_schema: Type[PlanningAgentState] = PlanningAgentState,\n    config_schema: Type[Any] | None = None,\n    tools: list[BaseTool | Callable] | None = None,\n    output_mode: OutputMode = \"last_message\",\n    add_handoff_back_messages: bool = False,\n    supervisor_name: str = \"supervisor\",\n    planner_node_name: str = \"planner\",\n    evaluator_node_name: str = \"evaluate_result\",\n    handoff_executor_name: str = \"handoff_executor\",\n    include_agent_name: AgentNameMode | None = \"inline\",\n) -> StateGraph:\n    agent_graphs: Dict[str, Pregel] = {}\n    agent_names: List[str] = []\n    agent_description_map: Dict[str, str] = {}\n    # --- 1. 提取 Agent 信息  ---\n    for agent in sub_agents:\n        if not isinstance(agent, BaseAgent): raise TypeError(...)\n        if not agent.name or agent.name == \"LangGraph\": raise ValueError(...)\n        if agent.name in agent_graphs: raise ValueError(...)\n        agent_names.append(agent.name)\n        agent_description_map[agent.name] = getattr(agent, 'description', '...')\n        try:\n            compiled_graph = agent.get_agent()\n            if not isinstance(compiled_graph, Pregel): \n                 core_graph = getattr(compiled_graph, 'last', None)\n                 if isinstance(core_graph, Pregel):\n                      compiled_graph = core_graph\n                 else:\n                      raise TypeError(f\"Could not retrieve Pregel instance from agent '{agent.name}'.get_agent()\")\n            agent_graphs[agent.name] = compiled_graph\n        except Exception as e: raise e\n\n     # --- 2. 
创建 Handoff 工具 ---\n    handoff_tools = [create_handoff_tool(agent_name=name) for name in agent_names]\n    supervisor_callable_tools = (tools or []) + handoff_tools\n    # tools 参数允许传入普通 Callable, 用 getattr 兜底, 避免对象缺少 .name 属性时报错\n    print(f\"Supervisor '{supervisor_name}' bound with tools: {[getattr(t, 'name', getattr(t, '__name__', str(t))) for t in supervisor_callable_tools]}\")\n\n    # --- 3. 绑定工具到 Supervisor 模型 ---\n    bound_supervisor_model: LanguageModelLike\n    if not supervisor_callable_tools:\n         print(f\"Warning: Supervisor '{supervisor_name}' has no tools bound.\")\n         bound_supervisor_model = model\n    elif _supports_disable_parallel_tool_calls(model):\n        bound_supervisor_model = model.bind_tools(supervisor_callable_tools, parallel_tool_calls=False)\n    else:\n        bound_supervisor_model = model.bind_tools(supervisor_callable_tools)\n    if include_agent_name:\n        bound_supervisor_model = with_agent_name(bound_supervisor_model, include_agent_name)\n\n    # --- 4. 构建 StateGraph ---\n    builder = StateGraph(state_schema, config_schema=config_schema)\n\n    # --- 5. 添加 Planner 节点 (使用同步/异步包装) ---\n    planner_logic_partial_async = functools.partial(\n        planner_node_logic,\n        model=model,\n        agent_description_map=agent_description_map,\n    )\n    planner_logic_partial_sync = functools.partial(\n        planner_node_logic_sync,\n        model=model,\n        agent_description_map=agent_description_map,\n    )\n    planner_runnable = RunnableCallable(\n        func=planner_logic_partial_sync,\n        afunc=planner_logic_partial_async,\n        name=planner_node_name\n    )\n    builder.add_node(planner_node_name, planner_runnable)\n\n    # --- 6. 
Add the supervisor node (sync/async wrappers) ---\n    supervisor_logic_partial_async = functools.partial(\n        supervisor_node_logic,\n        model=bound_supervisor_model,\n        supervisor_name=supervisor_name,\n        agent_description_map=agent_description_map,\n    )\n    supervisor_logic_partial_sync = functools.partial(\n        supervisor_node_logic_sync,\n        model=bound_supervisor_model,\n        supervisor_name=supervisor_name,\n        agent_description_map=agent_description_map,\n    )\n    supervisor_runnable = RunnableCallable(\n        func=supervisor_logic_partial_sync,\n        afunc=supervisor_logic_partial_async,\n        name=supervisor_name\n    )\n    builder.add_node(supervisor_name, supervisor_runnable)\n\n    # --- 7. Add sub-agent nodes (their edges to the evaluator are wired further below) ---\n    for name, compiled_graph in agent_graphs.items():\n        builder.add_node(name, _make_call_agent(compiled_graph, output_mode, add_handoff_back_messages, supervisor_name))\n\n    # --- 8. Add the handoff tool executor node ---\n    handoff_executor_node = ToolNode(handoff_tools, name=handoff_executor_name)\n    builder.add_node(handoff_executor_name, handoff_executor_node)\n    \n    # --- 9. Add the evaluate-result node ---\n    evaluator_runnable = RunnableCallable(func=evaluate_result_node_logic_sync, afunc=evaluate_result_node_logic, name=evaluator_node_name)\n    # The evaluator does not take the model or agent descriptions as direct parameters\n    builder.add_node(evaluator_node_name, evaluator_runnable) # type: ignore\n    # --- 10. 
Wire up the entry point and edges ---\n    builder.set_entry_point(planner_node_name)\n    builder.add_edge(planner_node_name, supervisor_name)\n\n    def route_from_supervisor(state: PlanningAgentState) -> str:\n        messages = state.get('messages', [])\n        plan = state.get('plan')\n        last_message = messages[-1] if messages else None\n\n        if not isinstance(last_message, AIMessage):\n            print(\"Routing: Last message not AIMessage, looping supervisor.\")\n            return supervisor_name\n\n        if last_message.tool_calls:\n            tool_call = last_message.tool_calls[0]\n            agent_name_match = re.match(r\"transfer_to_(\\w+)\", tool_call[\"name\"])\n            # Bind the extracted name up front so the else branch below cannot raise a NameError\n            extracted_name = agent_name_match.group(1) if agent_name_match else None\n            if extracted_name and extracted_name in agent_names:\n                 print(f\"DEBUG route_from_supervisor: Tool Call Name = {repr(tool_call['name'])}\")\n                 print(f\"DEBUG route_from_supervisor: Extracted Target Name = {repr(extracted_name)}\")\n                 print(f\"DEBUG route_from_supervisor: Available Agent Names = {repr(agent_names)}\")\n                 print(f\"Routing: Supervisor -> HandoffExecutor (for {extracted_name})\")\n                 return handoff_executor_name\n            else:\n                 print(f\"DEBUG route_from_supervisor: Membership check failed! ({repr(extracted_name)} in {repr(agent_names)}) is False.\")\n                 print(f\"Warning: Supervisor called unknown/invalid tool: {tool_call['name']}. Looping supervisor.\")\n                 return supervisor_name\n\n        if plan and plan.get(\"status\") == \"completed\":\n             print(\"Routing: Plan completed -> END\")\n             return END\n\n        print(f\"Routing: No tool call and plan not completed (status: {plan.get('status') if plan else 'None'}). 
Looping supervisor.\")\n        return supervisor_name\n\n    builder.add_conditional_edges(\n        supervisor_name,\n        route_from_supervisor,\n        {\n            handoff_executor_name: handoff_executor_name,\n            supervisor_name: supervisor_name,\n            END: END,\n        }\n    )\n    \n    # After the handoff executor runs, LangGraph handles Command(goto=...) and routes\n    # directly to the target sub-agent, so no explicit edge from the executor is needed.\n\n    # --- Key change: when a sub-agent finishes -> Evaluator ---\n    for name in agent_names:\n        builder.add_edge(name, evaluator_node_name) # <--- changed: route to the evaluator\n\n    # --- New: when the evaluator finishes -> Supervisor ---\n    builder.add_edge(evaluator_node_name, supervisor_name) # <--- new: evaluator loops back to the supervisor\n\n    print(\"Supervisor graph definition created with Planner and Evaluator nodes.\")\n    return builder # return the StateGraph definition"
  },
  {
    "path": "core/agents/state_based_supervisor/supervisor_node.py",
"content": "# reason_graph/supervisor_node.py\n\nimport re\nimport json\nimport time\nimport copy\nimport ast \nimport traceback\nfrom typing import Dict, Any, List, Optional, Union, cast\nfrom datetime import datetime \nfrom langchain_core.messages import BaseMessage, AIMessage, SystemMessage, HumanMessage, ToolMessage\nfrom langchain_core.messages import ToolCall  # make sure this import is present\nfrom langchain_core.runnables import RunnableConfig\nfrom langgraph.graph import END\n\n# Internal imports (verify the paths are correct)\ntry:\n    from .state_schema import PlanningAgentState, TaskStatus, Plan\n    from .planning_handler import PlanningStateHandler\n    from .prompt import SUPERVISOR_PLANNING_PROMPT_TEMPLATE\nexcept ImportError as e:\n    print(f\"Error importing modules in supervisor_node.py: {e}\")\n    # Fallbacks\n    class PlanningAgentState(Dict): pass\n    class Plan(Dict): pass\n    class PlanningStateHandler: \n        @staticmethod\n        def update_task(*args, **kwargs): return kwargs.get('plan')\n        @staticmethod\n        def create_plan(*args, **kwargs): return {}\n        @staticmethod\n        def add_tasks(*args, **kwargs): return kwargs.get('plan')\n        @staticmethod\n        def finish_plan(*args, **kwargs): return kwargs.get('plan')\n        @staticmethod\n        def get_task(*args, **kwargs): return None\n        @staticmethod\n        def update_plan_status(*args, **kwargs): return kwargs.get('plan')\n        @staticmethod\n        def set_current_task(*args, **kwargs): return kwargs.get('plan')\n    SUPERVISOR_PLANNING_PROMPT_TEMPLATE = \"Fallback Prompt: Error loading template.\"\n\n\n# --- Argument parsing (JSON first, ast.literal_eval as fallback) ---\ndef parse_directive_args(directive_str: str) -> Dict[str, Any]:\n    \"\"\"Parse JSON arguments out of a directive string.\"\"\"\n    args = {}\n    # Treat everything from the first '{' through the trailing '}' as the JSON payload\n    json_match = re.search(r\"(\\{.*?\\})\\s*$\", directive_str.split(maxsplit=1)[1] if len(directive_str.split(maxsplit=1)) > 1 else \"\", re.DOTALL)\n    if json_match:\n        
args_json_str = json_match.group(1)\n        try:\n            args = json.loads(args_json_str)\n            if not isinstance(args, dict): raise ValueError(\"Args JSON not a dict.\")\n            print(f\"DEBUG: Parsed args via JSON: {args}\")\n            return args\n        except json.JSONDecodeError as json_err:\n            print(f\"Warning: JSON parsing failed ({json_err}), trying ast.literal_eval...\")\n            try:\n                 args = ast.literal_eval(args_json_str)\n                 if not isinstance(args, dict): raise ValueError(\"ast.literal_eval didn't return dict.\")\n                 print(f\"DEBUG: Parsed args via ast.literal_eval: {args}\")\n                 return args\n            except Exception as ast_err:\n                 raise ValueError(f\"Failed to parse args: {ast_err}. Raw: '{args_json_str}'\") from ast_err\n    elif directive_str.strip().upper().endswith(\"{}\"): # handle e.g. 'FINISH_PLAN {}'\n         return {} # return an empty dict\n    else:\n         # No valid JSON arguments found; warn and fall back to an empty dict\n         print(f\"Warning: Could not find valid JSON arguments in directive: '{directive_str}'. Returning empty args.\")\n         return {}\n\n\n# --- Supervisor node core logic (result handling removed; now sets current_task_id) ---\nasync def supervisor_node_logic(\n    state: PlanningAgentState,\n    config: Optional[RunnableConfig],\n    model: Any,\n    supervisor_name: str,\n    agent_description_map: Dict[str, str]\n) -> Dict[str, Any]:\n    \"\"\"Supervisor node core logic (no longer updates task status from agent results).\"\"\"\n    print(f\"--- Entering Supervisor Node ({supervisor_name}) ---\")\n    messages: List[BaseMessage] = state.get('messages', [])\n    plan: Optional[Plan] = state.get('plan')\n    current_error = state.get('error'); state['error'] = None\n    if current_error: print(f\"  Supervisor saw previous error: {current_error}\")\n\n    # --- 0. 
Check that a plan exists (unchanged) ---\n    if not plan:\n         print(\"ERROR: Supervisor node requires a plan, but none found in state.\")\n         return {\"error\": \"Plan is missing.\", \"messages\": []}\n\n    # --- 1. Build the prompt (unchanged) ---\n    plan_json_str = json.dumps(plan, indent=2, ensure_ascii=False)\n    desc_list = [f\"- {name}: {desc}\" for name, desc in agent_description_map.items()]\n    desc_list.append(f\"- {supervisor_name}: Coordinates tasks...\")\n    agent_descriptions_str = \"\\n\".join(desc_list)\n    system_prompt_text = \"Error loading/formatting prompt\"\n    try:\n         current_date_str = datetime.now().strftime(\"%a, %b %d, %Y\")\n         system_prompt_text = SUPERVISOR_PLANNING_PROMPT_TEMPLATE.format(\n             plan_json=plan_json_str, \n             agent_descriptions=agent_descriptions_str, \n             current_date=current_date_str\n         )\n    except Exception as e: print(f\"ERROR loading/formatting prompt: {e}\")\n    llm_input_messages = [SystemMessage(content=system_prompt_text)] + messages\n\n    # --- 2. Invoke the supervisor LLM (unchanged) ---\n    print(\"--- Calling Supervisor LLM ---\"); response=None; llm_error_msg=None\n    try: \n        response = await model.ainvoke(llm_input_messages, config=config)\n        if not isinstance(response, AIMessage): raise TypeError(f\"LLM returned non-AIMessage: {type(response)}\")\n        if not response.name: response.name = supervisor_name\n        print(f\"Supervisor LLM Raw Response Content: {response.content[:300]}...\")\n        if response.tool_calls: print(f\"Supervisor LLM Tool Calls: {response.tool_calls}\")\n        messages_to_add = [response]\n    except Exception as e: \n        print(f\"!!! Error invoking Supervisor LLM: {e}\"); traceback.print_exc()\n        llm_error_msg = f\"LLM failed: {e}\"; messages_to_add = []; response = None\n\n    # --- 3. 
Process the LLM reply ---\n    plan_updated: bool = False\n    updated_plan: Optional[Plan] = copy.deepcopy(plan) # start from the current plan\n    directive_error_msg: Optional[str] = None\n    task_id_to_delegate: Optional[str] = None # <-- task ID to delegate this round\n\n    if response and isinstance(response.content, str):\n        # --- A. Parse and apply all PLAN_UPDATE directives first (status='completed/failed' handling removed) ---\n        try:\n            plan_directives = re.findall(r\"PLAN_UPDATE:\\s*(\\w+)\\s*(\\{.*?\\})\\s*$\", response.content, re.IGNORECASE | re.DOTALL | re.MULTILINE)\n            plan_directives.extend(re.findall(r\"PLAN_UPDATE:\\s*(FINISH_PLAN)\\s*(\\{\\})\\s*$\", response.content, re.IGNORECASE | re.DOTALL | re.MULTILINE))\n\n            if plan_directives:\n                 print(f\"Found {len(plan_directives)} PLAN_UPDATE directive(s).\")\n                 for command, args_json_str in plan_directives:\n                      command = command.upper(); args_json_str = args_json_str if args_json_str else \"{}\"\n                      print(f\"Processing directive: {command} with args JSON: {args_json_str[:100]}...\")\n                      try:\n                           args = json.loads(args_json_str) # parse args as JSON\n                           if not isinstance(args, dict): raise ValueError(\"Args not dict.\")\n\n                           # --- Apply the planning directive ---\n                           if command == \"ADD_TASKS\":\n                                if not updated_plan: raise ValueError(\"No plan.\")\n                                tasks = args.get(\"tasks\", [])\n                                if isinstance(tasks, list): \n                                    # force new tasks to start as 'pending'\n                                    for task_data in tasks: task_data['status'] = 'pending'\n                                    updated_plan = PlanningStateHandler.add_tasks(updated_plan, tasks); plan_updated = True\n                                else: raise ValueError(\"Invalid 'tasks'.\")\n\n                           elif command == \"UPDATE_TASK\":\n        
                        if not updated_plan: raise ValueError(\"No plan.\")\n                                by_id=args.get(\"by_id\")\n                                if not by_id or not isinstance(by_id, str): raise ValueError(\"Requires string 'by_id'.\")\n                                by_id = by_id.strip()\n                                task_exists = PlanningStateHandler.get_task(updated_plan, by_id)\n                                if not task_exists: raise ValueError(f\"Task ID '{by_id}' not found!\")\n                                \n                                # Only handle transitions to 'in_progress' (or other non-terminal states), plus notes/evaluation\n                                new_status=args.get(\"status\"); notes_text=args.get(\"notes\"); eval_text=args.get(\"evaluation\") # keep 'evaluation' to record the LLM's reasoning\n                                update_kwargs = {}\n                                # Deliberately does NOT set \"completed\", \"failed\", or \"pending_review\" here\n                                if new_status and new_status == \"in_progress\": \n                                     update_kwargs['new_status'] = \"in_progress\"\n                                     task_id_to_delegate = by_id # remember this ID; it becomes current_task_id before handoff\n                                # notes and evaluation can always be updated (when the LLM provides them)\n                                if notes_text is not None: update_kwargs['new_notes'] = notes_text\n                                if eval_text is not None: update_kwargs['new_evaluation'] = eval_text \n\n                                if update_kwargs: # only call when there is actually something to update\n                                     print(f\"Updating task {by_id} with: {update_kwargs}\")\n                                     updated_plan = PlanningStateHandler.update_task(updated_plan, by_id=by_id, **update_kwargs); plan_updated = True\n\n                           elif command == \"FINISH_PLAN\":\n                                if not updated_plan: raise ValueError(\"No plan.\")\n                                updated_plan = 
PlanningStateHandler.finish_plan(updated_plan); plan_updated = True\n                           \n                           else: print(f\"Warning: Unknown PLAN_UPDATE command '{command}' ignored by Supervisor.\")\n\n                      except (json.JSONDecodeError, ValueError, KeyError, TypeError) as e:\n                           err_msg = f\"Error processing plan directive '{command} {args_json_str}': {type(e).__name__} - {e}\"\n                           print(err_msg); traceback.print_exc()\n                           if not directive_error_msg: directive_error_msg = err_msg # record only the first error\n                      except Exception as e:\n                           err_msg = f\"Unexpected error processing directive '{command} {args_json_str}': {type(e).__name__} - {e}\"\n                           print(err_msg); traceback.print_exc()\n                           if not directive_error_msg: directive_error_msg = err_msg\n                 \n                 # --- Recompute overall plan status ---\n                 if plan_updated and updated_plan:\n                      updated_plan = PlanningStateHandler.update_plan_status(updated_plan)\n                      print(f\"Plan status after updates by Supervisor: {updated_plan.get('status')}\")\n\n        except Exception as outer_e:\n             err_msg = f\"Error occurred while searching for PLAN_UPDATE directives: {outer_e}\"\n             print(err_msg); traceback.print_exc()\n             if not directive_error_msg: directive_error_msg = err_msg\n\n    # --- B. 
Check tool calls and set the current task ID ---\n    handoff_tool_call: Optional[Dict] = None # explicit initialization\n    if response and response.tool_calls:\n        for tool_call in response.tool_calls:\n             agent_name_match = re.match(r\"transfer_to_(\\w+)\", tool_call[\"name\"])\n             # validate against agent_description_map.keys()\n             if agent_name_match and agent_name_match.group(1) in agent_description_map.keys():\n                  handoff_tool_call = cast(Dict, tool_call) # use the first valid handoff call\n                  break\n\n    # If handing off, try to set current_task_id in the plan\n    if handoff_tool_call and updated_plan:\n         # KEY: try to read task_id from the tool call args (the prompt requires the LLM to provide it)\n         tool_args = handoff_tool_call.get(\"args\", {})\n         task_id_from_tool = tool_args.get(\"task_id\") if isinstance(tool_args, dict) else None\n         \n         # If absent from the tool args, fall back to the task_id_to_delegate recorded earlier (the one marked 'in_progress')\n         effective_task_id = task_id_from_tool or task_id_to_delegate \n\n         if effective_task_id:\n             print(f\"Setting current_task_id in plan to: {effective_task_id}\")\n             try:\n                 # verify the ID exists\n                 if PlanningStateHandler.get_task(updated_plan, effective_task_id):\n                      updated_plan = PlanningStateHandler.set_current_task(updated_plan, effective_task_id)\n                      # plan_updated may already be set by a plan directive; no need to set it again here\n                 else:\n                      print(f\"Warning: Task ID '{effective_task_id}' provided for delegation not found. Cannot set current_task_id.\")\n                      # Record the error. Should we block the handoff, 
or route back to the supervisor?\n                      directive_error_msg = directive_error_msg or f\"Invalid Task ID '{effective_task_id}' for delegation.\"\n             except Exception as e:\n                   err_msg = f\"Error setting current_task_id to '{effective_task_id}': {e}\"\n                   print(f\"ERROR: {err_msg}\")\n                   if not directive_error_msg: directive_error_msg = err_msg\n\n    # --- 4. Assemble the final state-update dict ---\n    updates: Dict[str, Any] = {\"messages\": messages_to_add}\n    if updated_plan is not None: updates[\"plan\"] = updated_plan\n    elif plan is not None: updates[\"plan\"] = plan\n    \n    final_error = llm_error_msg or directive_error_msg\n    if final_error: updates[\"error\"] = final_error\n    elif current_error: updates[\"error\"] = None # clear the stale error from the previous step\n\n    print(f\"--- Exiting Supervisor Node. Plan updated this step: {plan_updated} ---\")\n    return updates"
  },
  {
    "path": "core/agents/sub_agents/__init__.py",
    "content": ""
  },
  {
    "path": "core/agents/sub_agents/coder_agent.py",
    "content": "# Refactored coder_agent.py\nfrom typing import Any, List, Optional, Union, Callable, Type\nfrom langchain_core.language_models import LanguageModelLike\nfrom langchain_core.tools import BaseTool\nfrom langchain_core.messages import SystemMessage\nfrom langgraph.types import Checkpointer\n\nfrom core.agents.base.react_agent import ReactAgent\nfrom core.tools.registry import get_tools_by_category, ToolCategory, get_tool_instance # Import get_tool_instance\n\nimport logging\nlogger = logging.getLogger(__name__)\n\nclass CoderAgent(ReactAgent):\n    \"\"\"\n    Coder Agent (Refactored)\n    - Interacts with a sandboxed Linux environment via code execution tools.\n    \"\"\"\n\n    def __init__(\n        self,\n        name: str = \"coder_expert\",\n        model: LanguageModelLike = None,\n        tools: Optional[List[Union[BaseTool, Callable]]] = None,\n        checkpointer: Optional[Checkpointer] = None,\n        max_context_messages: Optional[int] = None,\n        max_context_tokens: Optional[int] = 100000, # Coding might need more context\n        **kwargs\n    ):\n        # 1. Define Description\n        description = \"Writes, executes, tests, and debugs Python code and Linux shell commands within a secure sandboxed environment. Can install packages, manage files, and interact with the network.\"\n\n        # 2. 
Get Tools from Registry\n        agent_tools = []\n        default_tool_name = \"e2b_code_interpreter\" # Expected tool name\n        try:\n            # Code-interpreter and file-system tools are both loaded here\n            code_tools = get_tools_by_category(ToolCategory.CODE_INTERPRETER) + get_tools_by_category(ToolCategory.FILE_SYSTEM)\n            agent_tools.extend(code_tools)\n            print(f\"[{name}] Loaded tools from registry: {[t.name for t in agent_tools if hasattr(t,'name')]}\")\n            # Verify the main execution tool is present\n            if not any(getattr(t,'name', None) == default_tool_name for t in agent_tools):\n                 print(f\"CRITICAL Warning: CoderAgent '{name}' is missing the primary '{default_tool_name}' tool!\")\n                 # Fall back to fetching it from the registry directly\n                 specific_tool = get_tool_instance(default_tool_name)\n                 if specific_tool: agent_tools.append(specific_tool)\n\n        except Exception as e:\n             print(f\"Warning: Failed to get tools from registry for {name}: {e}\")\n\n        if tools: # Merge extra tools\n             existing_names = {t.name for t in agent_tools if hasattr(t,'name')}\n             agent_tools.extend([t for t in tools if getattr(t, 'name', None) not in existing_names])\n\n        if not agent_tools:\n             print(f\"CRITICAL Warning: CoderAgent '{name}' initialized with NO tools!\")\n\n        # 3. Define System Prompt (using the capabilities)\n        tool_name_for_prompt = next((t.name for t in agent_tools if hasattr(t, 'name') and 'code' in t.name.lower()), default_tool_name) # Try to get actual tool name\n\n        base_prompt = f\"\"\"You are an expert Coder Agent interacting with a secure, sandboxed Linux environment provided by the '{tool_name_for_prompt}' tool. 
Your goal is to fulfill coding, file manipulation, or shell command requests by generating and executing appropriate code or commands within this sandbox.\n\nAvailable Tools:\n{self._format_tools_for_prompt(agent_tools)}\n- **{tool_name_for_prompt}**: Executes Python code or shell commands within the sandboxed Linux environment. Returns stdout, stderr, execution errors, and potentially file outputs or structured results (like image data). To run shell commands, generate Python code that uses the 'subprocess' module OR if the tool directly supports it, prefix the command with '!'. Always prefer generating Python code for complex shell operations or when needing output capture.\n\nKey Capabilities of the Sandbox Environment (via the tool):\n- Execute Python 3 code.\n- Install Python packages using pip (generate code like `import subprocess; subprocess.run(['pip', 'install', 'requests'], check=True)`).\n- Run standard Linux shell commands (e.g., `ls`, `pwd`, `mkdir`, `curl`, `git`, etc. using Python's subprocess).\n- Access and manipulate a persistent filesystem within the sandbox (typically starting in `/home/user/` or `/`). Create, read, write, delete files and directories.\n- Access the internet from within the sandbox for tasks like cloning repos or fetching data.\n\nWorkflow & Instructions:\n1.  **Analyze Request**: Understand the goal, constraints, and required inputs/outputs.\n2.  **Plan Steps**: Outline the necessary code or commands. Consider file paths, dependencies, and error handling.\n3.  **Generate Code/Command**: Write the Python code or shell command sequence needed. For non-trivial Python, include comments.\n4.  **Execute using Tool**: Prepare the arguments for the '{tool_name_for_prompt}' tool (usually the code string or command string) and invoke the tool.\n5.  **Analyze Output**: Carefully review the stdout, stderr, errors, and any results returned by the tool.\n6.  
**Debug/Iterate**: If errors occurred or the output is not as expected, analyze the error, revise the code/command, and execute again using the tool.\n7.  **Final Output**: Once the task is successfully completed, provide the final working code (if relevant), a summary of the execution results (stdout/stderr highlights), confirmation of file operations, and any requested explanation. If the task cannot be completed, explain why.\n8.  **File Handling**: If generating files (code, data, images), clearly state the full path within the sandbox where the file was saved (e.g., `/home/user/my_script.py`, `/home/user/output.csv`). Do not attempt to display images directly in your response.\n\nFocus strictly on tasks achievable within the sandboxed environment using the provided tool. Be precise and careful with file paths and commands.\n\"\"\"\n\n        # 4. Call super().__init__\n        super().__init__(\n            name=name,\n            model=model,\n            tools=agent_tools,\n            prompt=base_prompt,\n            description=description,\n            checkpointer=checkpointer,\n            max_context_messages=max_context_messages,\n            max_context_tokens=max_context_tokens,\n            **kwargs\n        )\n        print(f\"CoderAgent '{self.name}' initialized with tools: {[t.name for t in self.tools if hasattr(t,'name')]}\")\n\n    # Inherits _format_tools_for_prompt and other methods from BaseAgent/ReactAgent"
  },
  {
    "path": "core/agents/sub_agents/data_analyst_agent.py",
    "content": "# data_analyst_agent.py (or in main.py)\n\nfrom typing import Any, List, Optional, Union, Callable, Type\nfrom langchain_core.language_models import LanguageModelLike\nfrom langchain_core.tools import BaseTool\nfrom langchain_core.messages import SystemMessage\nfrom langgraph.types import Checkpointer\n\n# Internal imports - ensure paths are correct\nfrom core.agents.base.react_agent import ReactAgent\nfrom core.tools.registry import get_tools_by_category, ToolCategory, get_tool_instance # Import necessary functions\n\nimport logging\nlogger = logging.getLogger(__name__)\n\n# Assume ToolCategory.CODE_INTERPRETER exists\n# Assume ToolCategory.FILE_SYSTEM exists if needed\n\nclass DataAnalystAgent(ReactAgent):\n    \"\"\"\n    Data Analyst Agent (Refactored)\n    - Focuses on analyzing structured data using code execution sandbox.\n    - Generates insights and saves visualizations to files.\n    \"\"\"\n\n    def __init__(\n        self,\n        name: str = \"data_analyst_expert\",\n        model: LanguageModelLike = None,\n        tools: Optional[List[Union[BaseTool, Callable]]] = None,\n        checkpointer: Optional[Checkpointer] = None,\n        max_context_messages: Optional[int] = None,\n        max_context_tokens: Optional[int] = 120000, # Analysis might need decent context\n        debug: bool = False,\n        **kwargs\n    ):\n        # 1. Define Description for Supervisor\n        description = \"Analyzes structured data (provided in context or potentially read from sandbox files) using Python (Pandas, NumPy, Matplotlib, Seaborn) within a secure code execution environment. Performs statistical analysis, identifies trends, generates insights, and creates data visualizations (saved as files in the sandbox).\"\n\n        # 2. 
Get Tools from Registry\n        agent_tools = []\n        default_tool_name = \"e2b_code_interpreter\" # Tool needed for execution\n        try:\n            # Needs both code-interpreter and file-system tools\n            code_tools = get_tools_by_category(ToolCategory.CODE_INTERPRETER) + get_tools_by_category(ToolCategory.FILE_SYSTEM)\n            agent_tools.extend(code_tools)\n            print(f\"[{name}] Loaded tools from registry: {[t.name for t in agent_tools if hasattr(t,'name')]}\")\n            # Verify the main execution tool is present\n            if not any(getattr(t,'name', None) == default_tool_name for t in agent_tools):\n                 print(f\"CRITICAL Warning: DataAnalystAgent '{name}' is missing the primary '{default_tool_name}' tool!\")\n                 specific_tool = get_tool_instance(default_tool_name)\n                 if specific_tool: agent_tools.append(specific_tool)\n\n        except Exception as e:\n             print(f\"Warning: Failed to get tools from registry for {name}: {e}\")\n\n        if tools: # Merge extra tools\n             existing_names = {t.name for t in agent_tools if hasattr(t,'name')}\n             agent_tools.extend([t for t in tools if getattr(t, 'name', None) not in existing_names])\n\n        if not agent_tools:\n             print(f\"CRITICAL Warning: DataAnalystAgent '{name}' initialized with NO execution tools!\")\n\n        # 3. Define System Prompt\n        tool_name_for_prompt = next((t.name for t in agent_tools if hasattr(t, 'name') and 'code' in t.name.lower()), default_tool_name)\n\n        base_prompt = f\"\"\"You are an expert Data Analyst. Your task is to analyze data using Python code within a secure sandbox environment accessed via the '{tool_name_for_prompt}' tool. 
Libraries like Pandas, NumPy, Matplotlib, and Seaborn are available (install if needed using pip in your code).\n\nAvailable Tools:\n{self._format_tools_for_prompt(agent_tools)}\n- **{tool_name_for_prompt}**: Executes Python code in the sandbox. Returns stdout, stderr, errors, and potentially structured results.\n\nKey Instructions:\n1.  **Understand Data & Goal**: Identify the data source (likely provided in previous messages or mentioned as a sandbox file path like '/home/user/data.csv') and the specific analysis question or goal.\n2.  **Plan Analysis**: Briefly outline the Python code steps (e.g., load data into Pandas DataFrame, clean/transform data, perform calculations, generate plot).\n3.  **Write Python Code**: Generate the necessary Python code. Use libraries effectively. Import necessary libraries (e.g., `import pandas as pd`, `import matplotlib.pyplot as plt`).\n4.  **Handle Files (If Needed)**: If reading/writing files within the sandbox, use standard Python file I/O within your code (e.g., `pd.read_csv('/home/user/data.csv')`, `df.to_csv('/home/user/output.csv')`).\n5.  **Handle Visualizations**: If asked to create plots:\n    * Generate the plot using Matplotlib/Seaborn.\n    * **MUST save the plot to a file** inside the sandbox (e.g., `/home/user/plots/my_plot.png`). Use `plt.savefig('/home/user/plots/my_plot.png')`. Create directories if necessary (`os.makedirs('/home/user/plots', exist_ok=True)`).\n    * Use `plt.show()` or `plt.close()` after saving to clear the plot buffer.\n    * **DO NOT attempt to return image data directly.** Images cannot be displayed in the response.\n    * In your response, **state that the plot was generated and provide the full path** where it was saved in the sandbox (e.g., \"I have generated a scatter plot and saved it to /home/user/plots/scatter_plot.png\").\n6.  **Execute Code**: Use the '{tool_name_for_prompt}' tool to run your complete Python script.\n7.  
**Analyze Results**: Interpret the output (stdout, numerical results, errors) from the tool execution.\n8.  **Present Findings**: Summarize your analysis and findings clearly. Use Markdown tables for structured data if helpful. Mention any plots saved and their paths. If errors occurred, explain them.\n9.  **Focus**: Concentrate on data analysis using code execution. Do not perform web searches unless specifically instructed and given tools for it.\n\"\"\"\n\n        # 4. Call super().__init__\n        super().__init__(\n            name=name,\n            model=model,\n            tools=agent_tools,\n            prompt=base_prompt,\n            description=description,\n            checkpointer=checkpointer,\n            max_context_messages=max_context_messages,\n            max_context_tokens=max_context_tokens,\n            debug=debug,\n            **kwargs\n        )\n        print(f\"DataAnalystAgent '{self.name}' initialized.\")\n\n    # Inherits _format_tools_for_prompt and other methods"
  },
  {
    "path": "core/agents/sub_agents/designer_agent.py",
"content": "# Example file path: reason_graph/designer_agent.py\n\nfrom typing import Any, List, Optional, Union, Callable, Type\nfrom langchain_core.language_models import LanguageModelLike # ensure the correct type is imported\nfrom langchain_core.tools import BaseTool\nfrom langchain_core.messages import SystemMessage\nfrom langgraph.types import Checkpointer\n\n# Internal imports\nfrom core.agents.base.react_agent import ReactAgent\nfrom core.tools.registry import get_tools_by_category, ToolCategory # import the registry\n# Assumes your Flux tool is registered; otherwise import it here\n# from core.tools.flux_image_tool import FluxImageGeneratorTool \n\nimport logging\nlogger = logging.getLogger(__name__)\n\n# Fallback in case ToolCategory.IMAGE_GENERATION is not defined\nif not hasattr(ToolCategory, 'IMAGE_GENERATION'):\n     ToolCategory.IMAGE_GENERATION = ToolCategory.OTHER\n\nclass DesignerAgent(ReactAgent):\n    \"\"\"\n    Designer Agent (refactored)\n    - Understands image context and uses tools to generate new visual content.\n    - Applies design principles to tasks such as poster and web page design.\n    \"\"\"\n\n    def __init__(\n        self,\n        name: str = \"designer_expert\",\n        model: LanguageModelLike = None, # <--- must be a multimodal model (e.g., gpt-4o)\n        tools: Optional[List[Union[BaseTool, Callable]]] = None,\n        checkpointer: Optional[Checkpointer] = None,\n        max_context_messages: Optional[int] = None,\n        max_context_tokens: Optional[int] = 8000, # adjusted context budget\n        debug: bool = False,\n        **kwargs\n    ):\n        # 1. Define the agent description\n        description = \"Understands images provided in context and generates new visual content (images, mockups, diagrams) using specialized image generation tools (like Flux). Can apply design thinking for tasks like poster or web page layout design.\"\n\n        # 2. 
获取工具 (主要是图像生成工具)\n        agent_tools = []\n        try:\n            # 从 Registry 获取图像生成工具\n            img_tools = get_tools_by_category(ToolCategory.IMAGE_GENERATION)\n            agent_tools.extend(img_tools)\n            # 也可以直接实例化\n            # agent_tools.append(FluxImageGeneratorTool()) # 如果不使用 Registry\n            print(f\"[{name}] Loaded tools: {[t.name for t in agent_tools if hasattr(t,'name')]}\")\n        except Exception as e:\n             print(f\"Warning: Failed to get IMAGE_GENERATION tools for {name}: {e}\")\n\n        if tools: # 合并额外工具\n             existing_names = {t.name for t in agent_tools if hasattr(t,'name')}\n             agent_tools.extend([t for t in tools if getattr(t, 'name', None) not in existing_names])\n\n        if not agent_tools:\n             print(f\"CRITICAL Warning: DesignerAgent '{name}' initialized with NO generation tools!\")\n\n        # 3. 定义 System Prompt\n        tool_name_for_prompt = next((t.name for t in agent_tools if hasattr(t, 'name') and 'generat' in t.name.lower()), \"image_generator_tool\") # 获取工具名\n\n        base_prompt = f\"\"\"You are an expert Visual Designer and Creative Assistant. Your capabilities include understanding images provided in the conversation history and generating new images using available tools based on detailed text prompts.\n\nAvailable Tools:\n{self._format_tools_for_prompt(agent_tools)}\n- **{tool_name_for_prompt}**: Use this tool to generate images. Input requires a detailed 'prompt'.\n\nKey Instructions & Workflow:\n\n1.  **Understand Request**: Analyze the user request, paying attention to both text and any images provided in the message history. Identify the core visual goal (e.g., analyze image, generate image, design layout).\n2.  **Image Understanding (If Applicable)**: If the request involves analyzing or describing an existing image from the history, provide your analysis directly based on your multimodal understanding.\n3.  
**Design Thinking (For Generation/Design Tasks)**:\n    * **Clarify**: If the request is vague (e.g., \"design a logo\"), think about necessary elements: target audience, brand feeling, key symbols, color preferences, desired style (minimalist, vintage, futuristic, etc.). You might need to state assumptions if details are missing.\n    * **Conceptualize**: Describe the visual elements, layout, color palette, and overall composition you plan to generate.\n    * **Formulate Prompt for Tool**: Translate your design concept into a **highly detailed and descriptive text prompt** suitable for the `{tool_name_for_prompt}`. Include style, mood, composition, colors, and specific objects.\n4.  **Use Generation Tool**: Call the `{tool_name_for_prompt}` with the detailed prompt you formulated.\n5.  **Present Result**:\n    * State that you have generated the image.\n    * Provide the result from the tool (e.g., the image URL or identifier).\n    * Briefly describe the generated image and how it matches the design concept or request.\n    * **Important**: Do NOT attempt to display the image directly in your text response. Only provide the URL or description.\n6.  **Handle Errors**: If the tool fails, report the error clearly.\n\nFocus on visual design and generation tasks. Use your understanding of design principles when conceptualizing visuals for requests like posters or web mockups.\n\"\"\"\n\n        # 4. 调用父类 __init__\n        super().__init__(\n            name=name,\n            model=model, # 必须是多模态模型\n            tools=agent_tools,\n            prompt=base_prompt,\n            description=description,\n            checkpointer=checkpointer,\n            max_context_messages=max_context_messages,\n            max_context_tokens=max_context_tokens,\n            debug=debug,\n            **kwargs\n        )\n        print(f\"DesignerAgent '{self.name}' initialized.\")\n\n    # 继承 _format_tools_for_prompt 和其他 BaseAgent/ReactAgent 方法"
  },
  {
    "path": "core/agents/sub_agents/reporter_agent.py",
    "content": "# 文件路径: reason_graph/reporter_agent.py\n\nimport json\nimport time\nfrom datetime import datetime\nfrom typing import Dict, Any, List, Optional, Union, Type, cast, Sequence\n\n# --- LangChain / LangGraph ---\nfrom langchain_core.language_models import LanguageModelLike\nfrom langchain_core.tools import BaseTool\nfrom langchain_core.messages import SystemMessage, HumanMessage, BaseMessage, AIMessage\nfrom langchain_core.runnables import RunnableConfig, Runnable\nfrom langgraph.graph import StateGraph, END, START # 导入 StateGraph, END, START\nfrom langgraph.graph.graph import CompiledGraph\nfrom langgraph.types import Checkpointer\n\n# --- 内部导入 ---\nfrom core.agents.base.base_agent import BaseAgent # 导入最终版 BaseAgent\n# 导入最终报告的 Prompt 模板\n\nimport logging\nlogger = logging.getLogger(__name__)\n\nclass ReporterAgent(BaseAgent):\n    \"\"\"\n    报告 Agent (最终版)\n    - 继承自 BaseAgent。\n    - 负责基于完整的消息历史和明确指令生成最终 Markdown 报告。\n    - 内部包含一个简单的图用于执行报告生成任务。\n    \"\"\"\n\n    FINAL_REPORT_SYSTEM_PROMPT_TEMPLATE = \"\"\"You are a professional writer and editor AI assistant. Your primary goal is to generate high-quality, well-structured text content based on the specific instructions provided in the latest message and the relevant information available in the preceding conversation history.\n\nThe current date is {current_date}.\n\n**Your Task Execution Workflow:**\n1.  **Identify Instructions:** Carefully read the **last message** you received, which contains the specific writing task assigned to you by the supervisor. Understand the desired output (e.g., summary, report section, full report), format, tone, and any other requirements.\n2.  **Gather Context:** Review the preceding messages in the conversation history to find the necessary information, data points, findings, or creative elements needed to complete the assigned task.\n3.  
**Compose Output:** Write the text according to the instructions.\n    * If asked for creative content (like a poem), focus on fulfilling the creative request.\n    * If asked for a summary or section, synthesize the relevant information concisely and accurately.\n    * If asked to compile a **full report**, structure it logically (e.g., Introduction, Body, Conclusion), use Markdown formatting effectively, and incorporate information/citations from the history as instructed. Adhere to any specified length or style guidelines.\n4.  **Final Response:** Your output should be **only** the requested written text. Do not add extra conversational phrases unless necessary for context. Do not include planning directives or attempt to call tools (unless a specific writing/editing tool was provided and instructed for use). If you cannot fulfill the request due to missing information in the history, state that clearly.\n\"\"\"\n\n\n    def __init__(\n        self,\n        name: str = \"reporter_expert\",\n        model: LanguageModelLike = None, # 应传入适合长文本生成的模型\n        checkpointer: Optional[Checkpointer] = None,\n        max_context_messages: Optional[int] = None,\n        max_context_tokens: Optional[int] = 16000, # 报告生成可能需要处理长上下文\n        debug: bool = False,\n        prompt_template: str = FINAL_REPORT_SYSTEM_PROMPT_TEMPLATE, # 使用最终报告模板\n        **kwargs # 接收其他 BaseAgent 参数\n    ):\n        # 1. 定义 Agent 描述 (给 Supervisor 看)\n        description = \"Synthesizes information from the complete conversation history and task results into a final, comprehensive, well-structured, and potentially cited Markdown research report, following specific instructions.\"\n\n        # 2. 定义工具列表 (Reporter 通常不需要工具)\n        agent_tools = []\n\n        # 3. 存储基础 Prompt 模板 (将在节点逻辑中使用)\n        # 注意：我们将模板本身（或其引用）存储起来，而不是格式化后的 prompt\n        self.report_prompt_template = prompt_template\n\n        # 4. 
调用父类 __init__\n        super().__init__(\n            name=name,\n            model=model, # 传入用于报告生成的 LLM\n            tools=agent_tools,\n            prompt=None, # BaseAgent 的 prompt 字段不直接用于此 Agent 的核心逻辑\n            description=description,\n            checkpointer=checkpointer,\n            max_context_messages=max_context_messages,\n            max_context_tokens=max_context_tokens,\n            debug=debug, # debug 是显式参数，不会出现在 **kwargs 中，需显式转发\n            **kwargs\n        )\n        print(f\"ReporterAgent '{self.name}' initialized.\")\n\n\n    async def _generate_report_node_logic(self, state: Dict[str, Any], config: RunnableConfig) -> Dict[str, Any]:\n        \"\"\"报告生成节点的核心逻辑\"\"\"\n        # 注意：这里的 state 已经是经过 BaseAgent._preprocess_state 处理后的状态\n        print(f\"--- Entering Node: {self.name}._generate_report_node_logic ---\")\n\n        messages: List[BaseMessage] = state.get(\"messages\", [])\n        # 理论上，所有需要的信息都应该在 messages 历史中，\n        # 特别是 Supervisor 委派时的最后一条指令消息。\n\n        if not messages:\n             error_msg = \"Error: No messages found in state for report generation.\"\n             print(error_msg)\n             return {\"messages\": [AIMessage(content=f\"# Report Generation Failed\\n\\n{error_msg}\", name=self.name)]}\n\n        # --- 格式化 System Prompt (包含日期) ---\n        try:\n            current_date_str = datetime.now().strftime(\"%a, %b %d, %Y\")\n            system_prompt = self.report_prompt_template.format(current_date=current_date_str)\n        except Exception as e:\n            print(f\"Error formatting report system prompt: {e}\")\n            system_prompt = \"You are a report writing assistant. 
Synthesize the provided messages into a final report.\" # Fallback\n\n        # --- 准备 LLM 输入 ---\n        # 输入是 System Prompt + 完整的、经过预处理（截断）的消息历史\n        # BaseAgent 的 _preprocess_state 已经处理了截断\n        llm_input_messages = [SystemMessage(content=system_prompt)] + messages\n\n        # --- 调用 LLM 生成报告 ---\n        final_report_markdown = \"\"\n        llm_error = None\n        try:\n            print(f\"--- Calling LLM for Final Report Generation ({self.name}) ---\")\n            # 使用 self.model (初始化时传入的 LLM 实例)\n            response = await self.model.ainvoke(llm_input_messages, config=config)\n            final_report_markdown = response.content\n            print(f\"--- Report Generation LLM Call Successful ({self.name}). Length: {len(final_report_markdown)} chars ---\")\n        except Exception as e:\n             print(f\"!!! Error during Report Generation LLM call ({self.name}): {e}\")\n             llm_error = f\"Report generation failed due to LLM error: {e}\"\n             final_report_markdown = f\"# Report Generation Failed\\n\\nError: {str(e)}\"\n             # 可以在这里打印更详细的 traceback\n             # import traceback\n             # traceback.print_exc()\n\n        # --- 返回包含报告或错误的状态更新 ---\n        # Reporter 的最终输出就是报告本身，放入 messages 中，替换掉历史？\n        # 不，应该追加，让调用者（Supervisor 或 main）能看到完整历史和最终报告\n        # 使用 AIMessage 返回报告\n        return {\n            \"messages\": [AIMessage(content=final_report_markdown, name=self.name)],\n            \"error\": state.get(\"error\") or llm_error # 保留或记录错误\n        }\n\n    def build(self) -> Optional[StateGraph]:\n        \"\"\"构建 Reporter Agent 的简单工作流： Start -> GenerateReport -> End \"\"\"\n        if self._workflow: return self._workflow\n\n        print(f\"Building internal graph for ReporterAgent '{self.name}'\")\n        # Reporter 通常使用 BasicAgentState，因为它不直接操作 Plan\n        # 但为了兼容 Supervisor 可能传递 PlanningAgentState，这里可以暂时用 Any\n        # 或者定义一个 ReporterState\n        workflow = StateGraph(Dict[str, Any]) # 
使用通用字典状态，因为它只关心 messages\n\n        # 添加报告生成节点，确保它能访问 self.model\n        # functools.partial 不能直接用于异步实例方法，需要包装\n        async def node_wrapper(state, config):\n             return await self._generate_report_node_logic(state, config)\n\n        workflow.add_node(\"generate_report\", node_wrapper) # type: ignore\n        workflow.add_edge(START, \"generate_report\")\n        workflow.add_edge(\"generate_report\", END)\n\n        self._workflow = workflow\n        return workflow\n\n    # compile 方法继承自 BaseAgent\n    # 它会调用上面的 build() 获取 StateGraph 定义，然后编译它，\n    # 并创建包含预处理步骤 (_preprocess_state) 的最终 _executable_agent\n\n    # invoke, ainvoke, get_agent (get_executable_agent), reset 继承自 BaseAgent"
  },
  {
    "path": "core/agents/sub_agents/research_agent.py",
    "content": "# 文件路径示例: reason_graph/research_agent.py\n\nfrom typing import Any, List, Optional, Union, Callable, Type, cast\nfrom langchain_core.language_models import LanguageModelLike\nfrom langchain_core.tools import BaseTool\nfrom langchain_core.messages import SystemMessage\nfrom langgraph.types import Checkpointer\n\n# 内部导入 - 请确保路径正确\nfrom core.agents.base.react_agent import ReactAgent\n# 导入工具 Registry 相关 - 只需要 get_tools_by_category 和 ToolCategory\nfrom core.tools.registry import get_tools_by_category, ToolCategory\n# *** 不再需要导入 get_tool 或 get_registered_tools ***\n\nimport logging\nlogger = logging.getLogger(__name__)\n\n# 假设 ToolCategory 包含 SEARCH 和 WEB_Browse\nif not hasattr(ToolCategory, 'SEARCH'): ToolCategory.SEARCH = ToolCategory.OTHER\nif not hasattr(ToolCategory, 'WEB_Browse'): ToolCategory.WEB_Browse = ToolCategory.OTHER\n\n\nclass ResearchAgent(ReactAgent):\n    \"\"\"\n    研究 Agent (重构版)\n    - 继承自新的 ReactAgent\n    - 专注于定义自身工具和 Prompt\n    - 移除了自定义的状态管理和方法\n    \"\"\"\n\n    def __init__(\n        self,\n        name: str = \"research_expert\",\n        model: LanguageModelLike = None,\n        tools: Optional[List[Union[BaseTool, Callable]]] = None,\n        checkpointer: Optional[Checkpointer] = None,\n        max_context_messages: Optional[int] = None,\n        max_context_tokens: Optional[int] = 8000,\n        debug: bool = False,\n        **kwargs\n    ):\n\n        # 1. 定义 Agent 描述 (不变)\n        description = \"Expert at finding, extracting, and synthesizing the latest information, data, and background knowledge on specific topics using search engines (like Tavily, Google Search) and web Browse tools (like Firecrawl, Arxiv). Capable of providing source links and content summaries.\"\n\n        # 2. 
--- 从 Registry 获取和合并工具 ---\n        agent_tools: List[Union[BaseTool, Callable]] = []\n        search_tools_loaded: List[Union[BaseTool, Callable]] = [] # 用于后续检查\n        Browse_tools_loaded: List[Union[BaseTool, Callable]] = []\n\n        try:\n            search_tools_loaded = get_tools_by_category(ToolCategory.SEARCH)\n            agent_tools.extend(search_tools_loaded)\n            try:\n                 Browse_tools_loaded = get_tools_by_category(ToolCategory.WEB_Browse)\n                 agent_tools.extend(Browse_tools_loaded)\n            except Exception as e:\n                 if debug: print(f\"[{name}] Info: Failed to get WEB_Browse tools: {e}\")\n            print(f\"[{name}] Loaded tools from registry: {[t.name for t in agent_tools if hasattr(t,'name')]}\")\n\n            # --- 简化核心工具检查 ---\n            if not search_tools_loaded: # 直接检查从 Registry 加载的搜索工具列表是否为空\n                 print(f\"CRITICAL Warning: ResearchAgent '{name}' initialized without any SEARCH tools from registry!\")\n            # ------------------------\n\n        except Exception as e:\n             print(f\"Warning: Failed to get tools from registry for {name}: {e}\")\n\n        # 合并外部传入的 `tools` 参数 (逻辑不变)\n        if tools:\n            # ... 
(合并逻辑不变) ...\n             existing_tool_names = {t.name for t in agent_tools if hasattr(t, 'name')}\n             added_external_count = 0\n             for tool in tools:\n                 tool_name = getattr(tool, 'name', None)\n                 if tool_name and tool_name not in existing_tool_names:\n                      agent_tools.append(tool)\n                      existing_tool_names.add(tool_name)\n                      added_external_count +=1\n                 elif not tool_name: \n                      agent_tools.append(tool)\n                      added_external_count += 1\n             if added_external_count > 0: print(f\"[{name}] Merged {added_external_count} external tool(s).\")\n\n\n        # --- 简化最终工具检查 ---\n        if not agent_tools:\n             print(f\"CRITICAL Warning: ResearchAgent '{name}' initialized with NO tools configured!\")\n        # 不再需要那个复杂的 any(...) 检查\n        # ----------------------\n\n        # 3. 定义 Agent 的 System Prompt (逻辑不变)\n        base_prompt = f\"\"\"You are a professional Research Analyst expert...\nAvailable Tools:\n{self._format_tools_for_prompt(agent_tools)} \nInstructions:\n\n- Analyze the request in the message history.\n\n- If the request requires searching for current information, facts, data, or background knowledge, you MUST use one of your search tools (like 'tavily_search_results').\n\n- When using tools, formulate concise and effective search queries based on the request.\n\n- Synthesize the information found from the tools into a clear and informative answer.\n\n- If you use information from a tool, cite the source implicitly in your response (e.g., \"According to [Source Title], ...\").\n\n- If the initial search is insufficient, analyze the results and decide if further searches with refined queries or different tools are needed.\n\n- If you cannot find the information after thorough searching, or if the tools return errors, clearly state the limitations encountered. 
Do not invent information.\n\"\"\"\n\n        # 4. 调用父类 __init__ (逻辑不变)\n        super().__init__(\n            name=name,\n            model=model,\n            tools=agent_tools,\n            prompt=base_prompt,\n            description=description,\n            checkpointer=checkpointer,\n            max_context_messages=max_context_messages,\n            max_context_tokens=max_context_tokens,\n            debug=debug,\n            **kwargs\n        )\n        print(f\"ResearchAgent '{self.name}' initialized with final tools: {[t.name for t in self.tools if hasattr(t,'name')]}\")\n"
  },
  {
    "path": "core/llm/llm_manager.py",
    "content": "# reason_graph/llm_manager.py\nimport os\nfrom enum import Enum, auto\nfrom typing import Any, Dict, List, Optional, Type, Union, Callable, Tuple\nfrom langchain_core.language_models import BaseChatModel, LanguageModelLike\nfrom langchain_openai import ChatOpenAI\n# (移除 ChatGroq 导入)\n\nfrom dotenv import load_dotenv\n\n# 加载环境变量\nload_dotenv()\n\nclass ModelType(Enum):\n    \"\"\"模型提供商类型枚举\"\"\"\n    OPENAI = auto()\n    XAI = auto()\n    DEEPSEEK = auto()\n    CUSTOM = auto() # 保持用于其他 OpenAI 兼容 API\n\nclass ModelCapability(Enum):\n    \"\"\"模型能力枚举\"\"\"\n    GENERAL = auto(); PLANNING = auto(); REASONING = auto()\n    CREATIVE = auto(); RESEARCH = auto(); CODE = auto()\n    LONG_CONTEXT = auto()\n\nclass LLMManager:\n    \"\"\"\n    模型管理器 (融合版 V2)\n    - 在初始化时根据配置自动注册模型。\n    - 支持按能力获取模型。\n    - 支持延迟实例化。\n    - 从环境变量加载 API Keys/Base URLs。\n    \"\"\"\n\n    def __init__(self):\n        \"\"\"初始化模型管理器，加载配置并自动注册模型\"\"\"\n        self._models_config: Dict[str, Dict[str, Any]] = {}\n        self._models_instance: Dict[str, BaseChatModel] = {}\n        self._default_model_id: Optional[str] = None\n        self._capability_models: Dict[ModelCapability, str] = {}\n\n        # 加载 API Keys 和 Base URLs (保持不变)\n        self._loaded_api_keys = {\n            ModelType.OPENAI: os.getenv(\"OPENAI_API_KEY\"),\n            ModelType.XAI: os.getenv(\"XAI_API_KEY\"),\n            ModelType.DEEPSEEK: os.getenv(\"DEEPSEEK_API_KEY\"),\n            ModelType.CUSTOM: os.getenv(\"LLM_API_KEY\"),\n        }\n        self._loaded_base_urls = {\n            ModelType.OPENAI: os.getenv(\"OPENAI_BASE_URL\"),\n            ModelType.XAI: os.getenv(\"XAI_BASE_URL\"),\n            ModelType.DEEPSEEK: os.getenv(\"DEEPSEEK_BASE_URL\", \"https://api.deepseek.com/v1\"),\n            ModelType.CUSTOM: os.getenv(\"LLM_BASE_URL\"),\n        }\n        print(\"LLMManager initialized.\")\n        print(\"Loaded API Keys for:\", [k.name for k, v in self._loaded_api_keys.items() if v])\n      
  print(\"Loaded Base URLs for:\", {k.name: v for k, v in self._loaded_base_urls.items() if v})\n\n        # --- 自动注册模型 ---\n        try:\n            from .model_config import SUPPORTED_MODELS_CONFIG # 从配置文件导入\n            \n            print(\"Registering models from config...\")\n            for model_id, config in SUPPORTED_MODELS_CONFIG.items():\n                # 检查所需 Key/URL 是否存在，如果不存在则跳过注册并警告\n                model_type = config.get(\"model_type\")\n                api_key = config.get(\"config_override\", {}).get(\"api_key\") or self._loaded_api_keys.get(model_type)\n                base_url = config.get(\"config_override\", {}).get(\"base_url\") or self._loaded_base_urls.get(model_type)\n                \n                # OpenAI 可以只依赖 OPENAI_API_KEY 环境变量\n                if model_type == ModelType.OPENAI and not api_key:\n                    api_key = os.getenv(\"OPENAI_API_KEY\") # 再次检查 OpenAI 专用 Key\n\n                # 对于需要 Key 的类型进行检查\n                key_required = model_type not in [ModelType.CUSTOM] # 假设 CUSTOM 可能匿名\n                url_required = model_type in [ModelType.XAI, ModelType.CUSTOM] # DeepSeek 有默认值\n\n                if key_required and not api_key:\n                    print(f\"  Skipping registration for '{model_id}': Required API key for type '{model_type.name}' not found.\")\n                    continue\n                if url_required and not base_url:\n                     print(f\"  Skipping registration for '{model_id}': Required Base URL for type '{model_type.name}' not found.\")\n                     continue\n\n                # 调用内部注册方法\n                self._register_model(\n                    model_id=model_id,\n                    model_type=config[\"model_type\"],\n                    model_name=config[\"model_name\"],\n                    model_class=config.get(\"model_class\"), # 可能为 None\n                    capabilities=config.get(\"capabilities\", [ModelCapability.GENERAL]),\n                    
set_as_default=config.get(\"is_default\", False),\n                    config_override=config.get(\"config_override\"),\n                    **config.get(\"kwargs\", {})\n                )\n            print(\"Model registration complete.\")\n            # 可以在这里设置一个环境变量的默认模型 ID，如果配置中没有 is_default=True\n            if not self._default_model_id and self._models_config:\n                 fallback_default = list(self._models_config.keys())[0]\n                 print(f\"Warning: No default model marked in config. Falling back to first registered: '{fallback_default}'\")\n                 self._default_model_id = fallback_default\n\n\n        except ImportError:\n            print(\"Warning: Could not import model_config.py. No models registered automatically.\")\n        except Exception as e:\n            print(f\"Error during automatic model registration: {e}\")\n\n        print(f\"Default model set to: {self._default_model_id}\")\n        print(f\"Capability mapping: {self.list_capabilities()}\")\n        print(\"-\" * 20)\n\n\n    # register_model 现在是内部方法\n    def _register_model(\n        self, model_id: str, model_type: ModelType, model_name: str,\n        model_class: Optional[Type[BaseChatModel]] = None,\n        capabilities: List[ModelCapability] = [ModelCapability.GENERAL],\n        set_as_default: bool = False,\n        config_override: Optional[Dict[str, Any]] = None,\n        **kwargs\n    ) -> None:\n        \"\"\"(Internal) Registers a model configuration.\"\"\"\n        if model_id in self._models_config:\n            # Decide on behavior: overwrite or ignore? 
Let's overwrite with warning.\n            print(f\"  Overwriting registration for existing model_id: '{model_id}'\")\n            # pass # If ignore is preferred\n\n        if model_class is None:\n            model_class = ChatOpenAI\n\n        self._models_config[model_id] = {\n            \"type\": model_type, \"name\": model_name, \"class\": model_class,\n            \"capabilities\": list(set(capabilities)),\n            \"config_override\": config_override or {},\n            \"kwargs\": kwargs,\n        }\n        print(f\"  Registered model config: '{model_id}' (Type: {model_type.name}, Class: {model_class.__name__})\")\n\n        if set_as_default:\n            self._default_model_id = model_id\n            print(f\"    Set '{model_id}' as default.\")\n\n        for capability in capabilities:\n            if capability not in self._capability_models:\n                self._capability_models[capability] = model_id\n                print(f\"    Mapped capability '{capability.name}' to '{model_id}'.\")\n\n    def set_default_model(self, model_id: str) -> None:\n        \"\"\"设置默认模型\"\"\"\n        if model_id not in self._models_config: raise ValueError(f\"Model ID '{model_id}' is not registered.\")\n        self._default_model_id = model_id\n\n    def set_capability_model(self, capability: ModelCapability, model_id: str) -> None:\n        \"\"\"设置特定能力的模型\"\"\"\n        if model_id not in self._models_config: raise ValueError(f\"Model ID '{model_id}' is not registered.\")\n        model_info = self._models_config[model_id]\n        if capability not in model_info.get(\"capabilities\", []):\n            print(f\"Warning: Model '{model_id}' not registered with capability '{capability.name}'.\")\n        self._capability_models[capability] = model_id\n\n    # _get_instance (核心实例化逻辑)\n    def _get_instance(self, model_id: str) -> BaseChatModel:\n        \"\"\"(Internal) Gets or creates a model instance.\"\"\"\n        if model_id in self._models_instance:\n            return self._models_instance[model_id]\n\n        if model_id not in 
self._models_config:\n            raise ValueError(f\"Model ID '{model_id}' not registered or registration skipped due to missing config.\")\n\n        config = self._models_config[model_id]\n        model_type = config[\"type\"]\n        model_name = config[\"name\"]\n        model_class = config[\"class\"]\n        config_override = config[\"config_override\"]\n        kwargs = config[\"kwargs\"]\n\n        # 确定 Key/URL (优先 override, 其次 env)\n        api_key = config_override.get(\"api_key\", self._loaded_api_keys.get(model_type))\n        base_url = config_override.get(\"base_url\", self._loaded_base_urls.get(model_type))\n\n        # OpenAI 特殊 Key 处理\n        if model_type == ModelType.OPENAI and not api_key:\n            api_key = os.getenv(\"OPENAI_API_KEY\")\n\n        # 检查必要配置\n        key_required = model_type not in [ModelType.CUSTOM]\n        url_required = model_type in [ModelType.XAI, ModelType.DEEPSEEK, ModelType.CUSTOM]\n        if key_required and not api_key:\n            raise ValueError(f\"API key required but not found for '{model_id}' (Type: {model_type.name}). Set in .env or config_override.\")\n        if url_required and not base_url:\n            raise ValueError(f\"Base URL required but not found for '{model_id}' (Type: {model_type.name}). Set in .env or config_override.\")\n\n        print(f\"Instantiating model: ID='{model_id}', Type='{model_type.name}', Name='{model_name}'\")\n\n        # 准备构造函数参数\n        init_kwargs = kwargs.copy()\n        if model_class == ChatOpenAI:\n             init_kwargs['model'] = model_name\n             if api_key: init_kwargs['openai_api_key'] = api_key\n             if base_url: init_kwargs['openai_api_base'] = base_url\n        # elif model_class == ChatGroq: ... 
# Removed\n        else: # 尝试通用参数\n             init_kwargs['model'] = model_name # 很多兼容类可能也认 model\n             init_kwargs['model_name'] = model_name\n             if api_key: init_kwargs['api_key'] = api_key\n             if base_url: init_kwargs['base_url'] = base_url\n\n        # 移除内部配置键\n        for k in [\"config_override\", \"capabilities\", \"type\", \"class\", \"name\", \"instance\"]:\n            init_kwargs.pop(k, None)\n            \n        # 实例化\n        try:\n            instance = model_class(**init_kwargs)\n            self._models_instance[model_id] = instance\n            return instance\n        except Exception as e:\n            print(f\"!!! Failed to instantiate model '{model_id}'\")\n            raise e\n\n    # get_model 和 get_model_for_capability (保持不变, 调用 _get_instance)\n    def get_model(self, model_id: Optional[str] = None) -> BaseChatModel:\n        \"\"\"获取模型实例 (通过 ID 或默认)\"\"\"\n        target_id = model_id\n        if target_id is None:\n            if self._default_model_id is None: raise ValueError(\"No default model set.\")\n            target_id = self._default_model_id\n        if target_id not in self._models_config: raise ValueError(f\"Model ID '{target_id}' not registered.\")\n        return self._get_instance(target_id)\n\n    def get_model_for_capability(self, capability: ModelCapability) -> BaseChatModel:\n        \"\"\"获取具有特定能力的模型实例\"\"\"\n        if capability not in self._capability_models:\n            print(f\"No preferred model for '{capability.name}'. 
Falling back to default.\")\n            if self._default_model_id is None: raise ValueError(f\"No model for '{capability.name}' and no default set.\")\n            model_id = self._default_model_id\n        else: model_id = self._capability_models[capability]\n        print(f\"Using model '{model_id}' for capability '{capability.name}'.\")\n        return self.get_model(model_id)\n\n    # list_models 和 list_capabilities (保持不变)\n    def list_models(self) -> Dict[str, Dict[str, Any]]:\n        \"\"\"列出所有注册的模型及其配置\"\"\"\n        result: Dict[str, Dict[str, Any]] = {}\n        for model_id, model_info in self._models_config.items():\n            result[model_id] = {\n                \"type\": model_info[\"type\"].name,\n                \"name\": model_info[\"name\"],\n                \"class\": model_info[\"class\"].__name__,\n                \"capabilities\": [c.name for c in model_info.get(\"capabilities\", [])],\n                \"is_default\": model_id == self._default_model_id,\n                \"kwargs\": model_info.get(\"kwargs\"),\n                \"config_override\": model_info.get(\"config_override\"),\n            }\n        return result\n\n    def list_capabilities(self) -> Dict[str, str]:\n        return {capability.name: model_id for capability, model_id in self._capability_models.items()}"
  },
  {
    "path": "core/llm/model_config.py",
"content": "# reason_graph/model_config.py\nfrom langchain_openai import ChatOpenAI\n# from langchain_groq import ChatGroq # 不再需要\n# (如果未来支持其他非 OpenAI 兼容的，在这里 import)\n\nfrom .llm_manager import ModelType, ModelCapability # 从同级 llm_manager 导入枚举\n\n# 定义支持的模型及其配置\n# key 是我们内部使用的 model_id\nSUPPORTED_MODELS_CONFIG = {\n    \"openai_gpt4o\": {\n        \"model_type\": ModelType.OPENAI,\n        \"model_name\": \"gpt-4o\", # API 调用名\n        \"model_class\": ChatOpenAI,\n        \"capabilities\": [\n            ModelCapability.GENERAL, ModelCapability.PLANNING, ModelCapability.REASONING,\n            ModelCapability.CREATIVE, ModelCapability.LONG_CONTEXT, ModelCapability.CODE,\n            ModelCapability.RESEARCH # GPT-4o 也能做一定研究\n        ],\n        \"is_default\": False, # 不设为默认\n        \"config_override\": {}, # 允许覆盖 env vars, e.g., {'api_key': '...'}\n        \"kwargs\": {\"temperature\": 0.1} # 传递给构造函数的额外参数\n    },\n    \"openai_gpt4o_mini\": {\n        \"model_type\": ModelType.OPENAI,\n        \"model_name\": \"gpt-4o-mini\",\n        \"model_class\": ChatOpenAI,\n        \"capabilities\": [ModelCapability.GENERAL, ModelCapability.REASONING, ModelCapability.CREATIVE],\n        \"is_default\": True, # <--- 将其设为默认模型\n        \"config_override\": {},\n        \"kwargs\": {\"temperature\": 0.0}\n    },\n    \"xai_grok\": { # 假设 ID 命名为 xai_grok\n        \"model_type\": ModelType.XAI,\n        \"model_name\": \"grok-2-latest\", # 或者是 xAI API 实际接受的模型名\n        \"model_class\": ChatOpenAI, # 假设使用兼容 OpenAI 的方式连接\n        \"capabilities\": [ModelCapability.GENERAL, ModelCapability.REASONING, ModelCapability.LONG_CONTEXT, ModelCapability.CREATIVE],\n        \"is_default\": False,\n        \"config_override\": {}, # Key/URL 将从 env (XAI_API_KEY, XAI_BASE_URL) 加载\n        \"kwargs\": {\"temperature\": 0.2}\n    },\n    \"deepseek_v3\": { # ID 命名为 deepseek_v3\n        \"model_type\": ModelType.DEEPSEEK,\n        \"model_name\": \"deepseek/deepseek-v3-0324\", # 
DeepSeek Chat 模型 API 名\n        \"model_class\": ChatOpenAI, # 使用兼容 OpenAI 的方式连接\n        \"capabilities\": [ModelCapability.GENERAL, ModelCapability.REASONING, ModelCapability.CODE, ModelCapability.LONG_CONTEXT],\n        \"is_default\": False,\n        \"config_override\": {}, # Key/URL 将从 env (DEEPSEEK_API_KEY, DEEPSEEK_BASE_URL) 加载\n        \"kwargs\": {\"temperature\": 0.0}\n    },\n    # --- 可以继续添加其他模型配置 ---\n    # \"groq_llama3_70b\": {\n    #     \"model_type\": ModelType.GROQ,\n    #     \"model_name\": \"llama3-70b-8192\",\n    #     \"model_class\": ChatGroq, # 需要导入 ChatGroq\n    #     \"capabilities\": [...],\n    #     \"is_default\": False,\n    #     \"config_override\": {},\n    #     \"kwargs\": {\"temperature\": 0.1}\n    # },\n}"
  },
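The `is_default` flag and the `capabilities` lists in `core/llm/model_config.py` above drive model selection. Below is a minimal, self-contained sketch of that lookup logic, using a simplified stand-in for `SUPPORTED_MODELS_CONFIG` (real entries also carry `model_class`, `config_override`, and constructor `kwargs`, and `LLMManager`'s actual selection code may differ):

```python
from enum import Enum

class ModelCapability(Enum):
    GENERAL = "general"
    CODE = "code"
    PLANNING = "planning"

# Simplified stand-in for SUPPORTED_MODELS_CONFIG.
CONFIG = {
    "openai_gpt4o": {
        "capabilities": [ModelCapability.GENERAL, ModelCapability.CODE, ModelCapability.PLANNING],
        "is_default": False,
    },
    "openai_gpt4o_mini": {
        "capabilities": [ModelCapability.GENERAL],
        "is_default": True,
    },
}

def default_model_id(config: dict) -> str:
    """Return the model_id whose entry sets is_default=True."""
    for model_id, entry in config.items():
        if entry.get("is_default"):
            return model_id
    raise ValueError("No default model configured")

def models_with(config: dict, cap: ModelCapability) -> list[str]:
    """All model_ids advertising a given capability."""
    return [mid for mid, entry in config.items() if cap in entry["capabilities"]]

print(default_model_id(CONFIG))                    # openai_gpt4o_mini
print(models_with(CONFIG, ModelCapability.CODE))   # ['openai_gpt4o']
```

Presumably `LLMManager` falls back to the default entry when a caller does not request a specific capability.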
  {
    "path": "core/mcp/README.md",
    "content": "# Mentis MCP 客户端与配置指南\n\n本目录 (`core/mcp/`) 包含用于与模型上下文协议 (MCP - Model Context Protocol) 服务器进行交互的 Python 客户端实现。\n\n## 背景\n\nMCP 旨在为 AI 模型（如 LLM Agent）提供一个标准的、与外部工具或服务进行交互的协议。本客户端的目标是提供一种灵活、可配置的方式来连接这些 MCP 服务器，并将它们提供的工具集成到 LangChain Agent 中。\n\n## 客户端 (`MCPClient`)\n\n核心实现是 `MCPClient` 类 (位于 `client.py`)，它具备以下特性：\n\n* **配置驱动:** 通过读取一个位于 `core/mcp/config.json` 的 JSON 文件来管理一个或多个服务器的连接/启动信息。兼容 \"Cursor 风格\" 的配置格式。\n* **灵活连接:**\n    * **启动本地服务 (stdio):** 如果配置文件中提供了 `command` 和 `args`，客户端会尝试执行该命令启动服务器进程，并通过 **STDIO** 建立通信。这对于使用 `uvx` 或 `python -m` 启动的标准 MCP 服务器很有用。\n    * **连接远程服务 (sse):** 如果配置文件中提供了 `url`，客户端会直接通过 **SSE** 连接到该 URL 对应的、已在运行的 MCP 服务器。\n* **异步架构:** 基于 `asyncio` 构建，适合异步应用。\n* **健壮的资源管理:** 使用 `contextlib.AsyncExitStack` 管理连接和会话，旨在提高关闭时的稳定性。\n* **LangChain 集成支持:** 提供了加载 MCP 工具为 LangChain `BaseTool` 对象的基础（尽管存在适配器问题，见下文）。\n\n## 如何使用\n\n### 1. 配置服务器 (`core/mcp/config.json`)\n\n你需要在此目录下创建一个 `config.json` 文件，定义你想要连接的 MCP 服务器。文件是一个 JSON 对象，键是服务器的逻辑名称，值是该服务器的配置详情。\n\n**示例 `config.json` (只包含外部标准服务器；注意 JSON 不支持注释，以下示例可被 `json.load` 直接解析):**\n\n```json\n{\n  \"fetch_via_uvx\": {\n    \"id\": \"fetch-uvx-stdio\",\n    \"type\": \"mcp-server\",\n    \"description\": \"Fetch Server launched by uvx via stdio\",\n    \"connection\": {\n      \"transport\": \"stdio\",\n      \"command\": \"uvx\",\n      \"args\": [ \"mcp-server-fetch\" ],\n      \"timeout\": 45\n    }\n  },\n  \"everything\": {\n    \"id\": \"everything-stdio\",\n    \"type\": \"mcp-server\",\n    \"description\": \"Everything Server launched by npx via stdio\",\n    \"connection\": {\n      \"transport\": \"stdio\",\n      \"command\": \"npx\",\n      \"args\": [ \"-y\", \"@modelcontextprotocol/server-everything\" ],\n      \"env\": {},\n      \"timeout\": 60\n    }\n  },\n  \"external_sse_example\": {\n    \"id\": \"external-sse\",\n    \"type\": \"mcp-server\",\n    \"description\": \"Connect to a pre-running SSE server (Example)\",\n    \"connection\": {\n        \"transport\": \"sse\",\n        \"url\": \"http://localhost:9001/sse\"\n    }\n  }\n}\n```\n\n**重要:**\n\n* 使用 `command` 启动服务器时，确保 `command` (如 `uvx`, `npx`, `python`) 在你的环境中可用。\n* 如果服务器需要 API Keys（如 `OPENAI_API_KEY`、`TAVILY_API_KEY`），请通过 `env` 字段提供，或确保运行客户端脚本的环境变量会被继承。\n* `transport: \"stdio\"` 告诉我们的客户端使用 stdio 连接，`transport: \"sse\"` 告诉它使用 sse 连接。\n\n### 2. 客户端代码示例\n\n使用 `config_loader.py` 加载配置，并通过 `async with` 语句使用 `MCPClient`。\n\n```python\nimport asyncio\nimport os\nfrom core.mcp.client import MCPClient\nfrom core.mcp.config_loader import load_config\n# 导入 LangChain 相关 (如果需要 Agent)\nfrom langchain_openai import ChatOpenAI\nfrom langgraph.prebuilt import create_react_agent\nfrom langchain_core.tools import BaseTool, Tool\n# 导入工具的 Pydantic Schema (用于手动创建 Tool)\nfrom pydantic.v1 import BaseModel, Field # 或 v2\n\n# --- Fetch Schema 示例 ---\nclass FetchInputSchema(BaseModel):\n    url: str = Field(..., description=\"URL to fetch\")\n    # ... 其他字段 ...\n\nasync def main():\n    # --- 加载配置 ---\n    config_path = os.path.join(os.path.dirname(__file__), \"config.json\") # 假设 config 在同目录\n    try:\n        all_configs = load_config(config_path)\n        # 选择要使用的配置\n        server_key = \"fetch_via_uvx\" # 或 \"everything\", \"e2b_stdio\" 等\n        mcp_config = all_configs.get(server_key)\n        if not mcp_config:\n            print(f\"Config '{server_key}' not found.\")\n            return\n    except Exception as e:\n        print(f\"Failed to load config: {e}\")\n        return\n\n    # --- 使用 MCPClient ---\n    async with MCPClient(mcp_config) as client:\n        print(f\"Connected to MCP Server '{server_key}'. 
Session active: {client.session is not None}\")\n        if not client.session: return\n\n        # --- 获取和使用工具 ---\n\n        # 方式一: 标准方式 (但存在已知问题)\n        # print(\"\\nAttempting standard tool loading via load_mcp_tools...\")\n        # loaded_tools = client.get_tools() # 内部调用 load_mcp_tools\n        # print(f\"load_mcp_tools returned {len(loaded_tools)} tools.\")\n        # # !! 注意：对于某些服务器实现 (如此处之前的 MentisMCPServer),\n        # # !! load_mcp_tools 返回的工具对象的 args_schema 可能是错误的！\n        # # !! 这会导致 Agent 调用失败。但对于 Fetch Server 这样的标准服务器，\n        # # !! 它加载的 Schema 可能是正确的。需要根据打印的 Schema 判断。\n\n        # 方式二: 【当前推荐】手动创建 Tool 对象 (绕过 load_mcp_tools 问题)\n        print(\"\\nManually creating Tool object with correct schema...\")\n        tool_name = \"fetch\" # 假设测试 Fetch Server\n        tool_description = \"Fetches URL content.\" # 可以从服务器获取或手写\n        correct_schema = FetchInputSchema # 使用正确的 Pydantic 模型\n\n        # 定义调用逻辑 (内部使用 client.session.call_tool 发送请求)\n        # 参考 examples/14_mcp_fetch_test.py 中的实现\n        async def call_mcp_tool_wrapper(**kwargs) -> str:\n            if not client or not client.session: return \"ERROR: Session lost.\"\n            try:\n                # ClientSession.call_tool 直接接受工具名和参数字典\n                result = await client.session.call_tool(tool_name, arguments=kwargs)\n                if getattr(result, 'isError', False):\n                    return f\"Tool Error: {result.content}\"\n                texts = [c.text for c in result.content if hasattr(c, 'text')]\n                return \"\\n\".join(texts) if texts else str(result.content)\n            except Exception as e: return f\"Error: {e}\"\n\n        # 创建 LangChain Tool (多参数 Schema 建议用 StructuredTool)\n        from langchain_core.tools import StructuredTool\n        manual_tool = StructuredTool.from_function(\n            func=None,\n            coroutine=call_mcp_tool_wrapper,\n            name=tool_name,\n            description=tool_description,\n            args_schema=correct_schema\n        )\n        tools_for_agent = [manual_tool]\n        print(f\"Manual tool '{manual_tool.name}' created.\")\n\n        # --- 使用 Agent ---\n        try:\n            # model = llm_manager.get_model(\"openai_gpt4o_mini\") # 获取 LLM\n            # agent = create_react_agent(model, tools_for_agent)\n            # response = await agent.ainvoke(...)\n            # print(\"Agent Response:\", response)\n            print(\"\\nAgent execution part skipped in README example.\")\n            print(\"Refer to examples/14_mcp_fetch_test.py for full Agent integration.\")\n        except Exception as e:\n            print(f\"Agent execution error: {e}\")\n\n# if __name__ == \"__main__\":\n#     asyncio.run(main())\n```\n\n## 关于自建 MCP Server (MentisMCPServer)\n\n我们在之前的开发中，尝试在 `core/mcp/server.py` 中构建了一个 `MentisMCPServer` 类，目的是将我们内部工具注册表 (`core/tools/registry.py`) 中的 LangChain `BaseTool` 动态包装成 MCP 工具。\n\n**当前遇到的主要挑战：**\n\n我们发现，当使用 `FastMCP` 库的 `@mcp.tool` 装饰器来动态注册这些包装器时，服务器未能正确地向客户端广播这些工具的**输入模式 
(Schema)**。这导致客户端的 `load_mcp_tools` 收到了错误的 Schema 信息，进而使 LangChain Agent 在调用工具时因参数错误而失败。\n\n虽然我们通过重构服务器的注册逻辑（改为在 `run_server.py` 中直接使用 `FastMCP` 实例注册顶层包装函数）**成功解决**了 Schema 广播的问题，使得 `load_mcp_tools` 能够获取到正确的 Schema，但后续测试发现 Agent (`create_react_agent`) 在调用这些工具时仍可能出现内部错误 (`TypeError`)。\n\n**结论与建议：**\n\n由于在结合 LangChain 工具、动态包装、`FastMCP` 和 LangChain Agent 时遇到了较深的库交互和调试障碍，我们**目前不建议**将 `MentisMCPServer` 作为稳定可靠的方案对外提供服务。\n\n**推荐使用以下方式来提供或使用 MCP Server:**\n\n1.  **使用社区标准服务器:** 直接使用像 `mcp-server-fetch`, `@modelcontextprotocol/server-everything` 这样由社区或官方提供的、预构建好的 MCP 服务器。通过 `config.json` 配置 `command` (如 `uvx`, `npx`, `python -m`) 或 `url` 来使用它们。\n2.  **采用简单服务器模式:** 如果你需要自己实现 MCP Server 来暴露特定功能，建议参考 `modelcontextprotocol/servers` 仓库中的简单示例（如 `math_server`, `time_server`），采用**直接注册工具函数**（用 `@mcp_instance.tool` 装饰顶层 `async def` 函数）的模式，避免复杂的动态包装层。"
  },
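The "simple server pattern" recommended at the end of the README above registers plain top-level `async def` functions, so a tool's input schema can be derived from the function signature itself. The toy decorator below is a stand-in for `FastMCP`'s `@mcp_instance.tool` (not the real API) that illustrates why this avoids the opaque-`kwargs` schema problem described in that README:

```python
import asyncio
import inspect

class ToyMCP:
    """Minimal stand-in for FastMCP: records name, description, and a
    parameter schema derived from the decorated function's signature."""
    def __init__(self):
        self.tools = {}

    def tool(self, name=None, description=None):
        def decorator(fn):
            sig = inspect.signature(fn)
            # Parameter names/types come straight from the signature,
            # so clients see a structured schema, not a single 'kwargs' blob.
            schema = {p.name: p.annotation.__name__ for p in sig.parameters.values()}
            self.tools[name or fn.__name__] = {
                "description": description or (fn.__doc__ or ""),
                "schema": schema,
                "fn": fn,
            }
            return fn
        return decorator

mcp = ToyMCP()

@mcp.tool(description="Add two integers.")
async def add(a: int, b: int) -> int:
    return a + b

print(mcp.tools["add"]["schema"])                      # {'a': 'int', 'b': 'int'}
print(asyncio.run(mcp.tools["add"]["fn"](a=2, b=3)))   # 5
```

The dynamically generated wrappers in `MentisMCPServer` all had a `**kwargs` signature, which is exactly what collapsed the advertised schema.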
  {
    "path": "core/mcp/__init__.py",
    "content": "# core/mcp/__init__.py\n\"\"\"\nMCP (Model Context Protocol) support module.\n\"\"\""
  },
  {
    "path": "core/mcp/client.py",
    "content": "import os\nimport asyncio\nfrom pathlib import Path\nfrom typing import List, Dict, Any, Optional, Union, Type, Literal, TypedDict, cast\nfrom types import TracebackType\nimport re\nimport sys\nimport json\nimport traceback\nfrom contextlib import asynccontextmanager, AsyncExitStack\n\n# --- MCP Imports ---\nfrom mcp import ClientSession, StdioServerParameters\nfrom mcp.client.stdio import stdio_client\nfrom mcp.client.sse import sse_client\n# --- Adapter Import ---\ntry:\n     from langchain_mcp_adapters.tools import load_mcp_tools\n     LOAD_MCP_TOOLS_AVAILABLE = True\nexcept ImportError:\n     print(\"警告: 未找到 langchain-mcp-adapters。 load_mcp_tools 将不可用。\")\n     async def load_mcp_tools(session: ClientSession) -> list: return []\n     LOAD_MCP_TOOLS_AVAILABLE = False\n# --- LangChain / Pydantic Imports ---\nfrom langchain_core.tools import BaseTool\ntry: from pydantic.v1 import BaseModel as BaseModelV1\nexcept ImportError: from pydantic import BaseModel as BaseModelV1 # Fallback\n# --- Config Loader Import ---\ntry: from .config_loader import MCPConfig, StdioConfig, SSEConfig\nexcept ImportError: print(\"WARNING: Could not import config models from .config_loader.\"); MCPConfig=Any; StdioConfig=Any; SSEConfig=Any # Placeholders\n\nprint(\"--- DEBUG: Loading FINAL client.py (Config-Driven + AsyncExitStack) ---\")\n\nclass MCPClient:\n    \"\"\"Config-driven MCP Client using AsyncExitStack.\"\"\"\n    def __init__(self, config: MCPConfig):\n        self.config = config\n        self.session: Optional[ClientSession] = None\n        self.tools: List[BaseTool] = []\n        self._stack: AsyncExitStack = AsyncExitStack()\n        self._server_process: Optional[asyncio.subprocess.Process] = None\n\n    async def __aenter__(self) -> \"MCPClient\":\n        print(f\"DEBUG: MCPClient entering context for config ID: {getattr(self.config, 'id', 'N/A')}\")\n        try:\n            connection_config = self.config.connection\n            transport_ctx = 
None\n            reader = None\n            writer = None\n\n            if isinstance(connection_config, SSEConfig) and connection_config.url:\n                # --- Direct SSE ---\n                print(f\"DEBUG: Connecting via SSE to {connection_config.url}\")\n                transport_ctx = sse_client(\n                    connection_config.url, getattr(connection_config,'headers', None),\n                    getattr(connection_config,'timeout', 5.0), getattr(connection_config,'sse_read_timeout', 300.0)\n                )\n                reader, writer = await self._stack.enter_async_context(transport_ctx)\n                print(\"DEBUG: SSE transport context entered.\")\n\n            elif isinstance(connection_config, StdioConfig) and connection_config.command:\n                # --- Launch via Command + STDIO ---\n                print(f\"DEBUG: Launching command via STDIO: {connection_config.command} {' '.join(connection_config.args)}\")\n                merged_env = os.environ.copy();\n                if connection_config.env: merged_env.update(connection_config.env)\n                server_params = StdioServerParameters(\n                    command=connection_config.command, args=connection_config.args, env=merged_env,\n                    cwd=connection_config.cwd, encoding=connection_config.encoding,\n                    encoding_error_handler=connection_config.encoding_error_handler,\n                    startup_timeout=connection_config.timeout\n                )\n                transport_ctx = stdio_client(server_params)\n                reader, writer = await self._stack.enter_async_context(transport_ctx)\n                print(\"DEBUG: STDIO transport context entered.\")\n\n            else: # Fallback/Error - Handle case where config might be wrong or transport missing\n                 # Added check for command presence before assuming SSE launch\n                 if hasattr(connection_config, 'command') and connection_config.command:\n      
                # This is the complex \"launch then connect SSE\" case from the guide\n                      # Keeping it simple for now - if transport isn't 'stdio', it must be 'sse' with a URL\n                      raise NotImplementedError(\"Launching command for SSE connection (URL capture) not implemented in this client version. Use direct SSE URL or STDIO command.\")\n                 else:\n                      raise ValueError(\"Invalid configuration: must have 'url' for SSE or 'command' for STDIO.\")\n\n\n            # --- Establish ClientSession ---\n            session_kwargs = getattr(connection_config, 'session_kwargs', None) or {}\n            session_ctx = ClientSession(reader, writer, **session_kwargs)\n            self.session = await self._stack.enter_async_context(session_ctx)\n            print(\"DEBUG: ClientSession context entered.\")\n\n            # --- Initialize and Load Tools (with Schema Check) ---\n            print(\"Initializing MCP session...\")\n            await asyncio.wait_for(self.session.initialize(), timeout=30.0)\n            print(\"MCP session initialized.\")\n\n            if LOAD_MCP_TOOLS_AVAILABLE:\n                print(\"Loading MCP tools (via langchain-mcp-adapters)...\")\n                loaded_tools_from_mcp = await load_mcp_tools(self.session)\n                print(f\"Successfully loaded {len(loaded_tools_from_mcp)} tool descriptions.\")\n                print(\"--- Loaded Tools & Args Schema (Diagnostic) ---\")\n                self.tools = []\n                for i, tool in enumerate(loaded_tools_from_mcp):\n                     schema = getattr(tool, 'args_schema', 'N/A'); tool_name = getattr(tool, 'name', f'Tool_{i+1}')\n                     print(f\"{i+1}. 
Tool Name: {tool_name}\")\n                     schema_detail = \"N/A\"\n                     is_correct = None # Undetermined\n                     if schema != 'N/A': # Schema printing and basic check\n                          schema_dict = None\n                          if isinstance(schema, type) and issubclass(schema, BaseModelV1):\n                               try: schema_dict = schema.schema(); schema_detail = f\"(PydanticV1): {json.dumps(schema_dict, indent=2)}\"\n                               except Exception as e_schema: schema_detail = f\"(PydanticV1): Error - {e_schema}\"\n                          elif hasattr(schema, 'model_json_schema'):\n                               try: schema_dict = schema.model_json_schema(); schema_detail = f\"(PydanticV2): {json.dumps(schema_dict, indent=2)}\"\n                               except Exception as e_schema: schema_detail = f\"(PydanticV2): Error - {e_schema}\"\n                          else: schema_detail = f\"(Unknown Type): {schema}\"\n                          # Basic check: does it look like the faulty kwargs schema?\n                          if isinstance(schema_dict, dict):\n                               props = schema_dict.get('properties', {})\n                               if list(props.keys()) == ['kwargs'] and props['kwargs'].get('type') == 'string':\n                                    is_correct = False\n                                    schema_detail += \" <-- LOOKS WRONG (kwargs only!)\"\n                               elif props:\n                                    is_correct = True # Has properties other than just kwargs\n                                    schema_detail += \" <-- Looks structured correctly\"\n                               else:\n                                     is_correct = True # No properties, might be simple input\n                                     schema_detail += \" <-- No properties defined\"\n                     else: is_correct = False # No schema 
is usually wrong\n                     print(f\"   Args Schema: {schema_detail}\")\n                     print(\"-\" * 15); self.tools.append(tool)\n                print(f\"Schema Check Result: {'All schemas look structured correctly.' if all(s is not False for s in [getattr(t, 'args_schema', None) != 'N/A' and 'kwargs' not in str(getattr(t, 'args_schema', '')).lower() for t in self.tools]) else 'One or more schemas look incorrect (kwargs only or missing)!'}\")\n                print(\"-------------------------------------------\")\n            else: print(\"Warning: load_mcp_tools unavailable.\"); self.tools = []\n            print(f\"MCPClient ready. Loaded {len(self.tools)} tools via adapter.\")\n            return self\n        except Exception as enter_err:\n            print(f\"ERROR: Failed during MCPClient __aenter__: {type(enter_err).__name__}: {enter_err}\")\n            await self.close(); raise\n\n    async def __aexit__(self, exc_type: Optional[Type[BaseException]], exc_val: Optional[BaseException], exc_tb: Optional[TracebackType]):\n        print(\"DEBUG: MCPClient exiting context...\"); await self.close(); print(\"DEBUG: MCPClient context exited.\")\n\n    async def close(self):\n        \"\"\"Closes connections and resets state using AsyncExitStack.\"\"\"\n        print(\"Closing MCP Client...\");\n        if hasattr(self, '_stack') and self._stack:\n            print(\"  Closing managed async contexts (via AsyncExitStack)...\")\n            try: await self._stack.aclose(); print(\"  AsyncExitStack closed.\")\n            except Exception as e: print(f\"WARNING: Error closing AsyncExitStack: {type(e).__name__}: {e}\")\n            finally: self._stack = None\n        else: print(\"  No active AsyncExitStack.\")\n        self.session = None; self.tools = []; self._transport_ctx = None; self._server_process = None\n        print(\"MCP Client state reset.\")\n\n    def get_tools(self) -> List[BaseTool]:\n        \"\"\"Returns the list of tools loaded 
by load_mcp_tools.\"\"\"\n        return self.tools"
  },
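`MCPClient` in `client.py` above enters the transport and session contexts onto a single `contextlib.AsyncExitStack`, so one `aclose()` unwinds everything in reverse (LIFO) order even when a later setup step fails. A stripped-down illustration of that pattern with dummy async contexts (no MCP involved):

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager

events = []

@asynccontextmanager
async def resource(name):
    events.append(f"open {name}")
    try:
        yield name
    finally:
        events.append(f"close {name}")

async def main():
    async with AsyncExitStack() as stack:
        # Mirrors MCPClient.__aenter__: transport first, then session.
        await stack.enter_async_context(resource("transport"))
        await stack.enter_async_context(resource("session"))
    # On exit the stack closes in LIFO order: session before transport.

asyncio.run(main())
print(events)  # ['open transport', 'open session', 'close session', 'close transport']
```

This is why `MCPClient.close()` only has to call `self._stack.aclose()` rather than tearing down each resource by hand.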
  {
    "path": "core/mcp/config_loader.py",
    "content": "# core/mcp/config_loader.py (修改 load_config 返回类型)\nimport json\nimport os\nfrom pathlib import Path\nfrom typing import Dict, Any, Optional, List, Literal, Union, Type # 导入 Type\ntry:\n    from pydantic.v1 import BaseModel, Field, ValidationError, validator\n    PYDANTIC_V = 1\nexcept ImportError:\n    try:\n        from pydantic import BaseModel, Field, ValidationError, validator # type: ignore\n        PYDANTIC_V = 2\n    except ImportError: raise ImportError(\"Pydantic (v1 or v2) required.\")\nfrom typing_extensions import TypedDict\n\nEncodingErrorHandler = Literal[\"strict\", \"ignore\", \"replace\"]\n\nclass StdioConfig(BaseModel):\n    transport: Literal[\"stdio\"] = \"stdio\"; command: str = Field(...)\n    args: List[str] = Field(default_factory=list); env: Optional[Dict[str, str]] = None\n    cwd: Optional[Union[str, Path]] = None; encoding: str = Field(default=\"utf-8\")\n    encoding_error_handler: EncodingErrorHandler = Field(default=\"strict\")\n    timeout: int = Field(default=30, gt=0); session_kwargs: Optional[Dict[str, Any]] = None\n    if PYDANTIC_V == 1: \n        class Config: extra = 'forbid'\n    else: model_config = {'extra': 'forbid'}\n\nclass SSEConfig(BaseModel):\n    transport: Literal[\"sse\"] = \"sse\"; url: str = Field(...)\n    headers: Optional[Dict[str, Any]] = None; timeout: float = Field(default=5.0, gt=0)\n    sse_read_timeout: float = Field(default=300.0, gt=0); session_kwargs: Optional[Dict[str, Any]] = None\n    if PYDANTIC_V == 1: \n        class Config: extra = 'forbid'\n    else: model_config = {'extra': 'forbid'}\n\nclass MCPConfig(BaseModel):\n    \"\"\"Represents the structure for a single server configuration.\"\"\"\n    id: Optional[str] = Field(default=None)\n    type: Literal[\"mcp-server\"] = Field(default=\"mcp-server\")\n    description: Optional[str] = Field(default=None)\n    connection: Union[StdioConfig, SSEConfig] = Field(..., discriminator='transport')\n    if PYDANTIC_V == 1: \n        
class Config: extra = 'forbid'\n    else: model_config = {'extra': 'forbid'}\n\n\n# --- 修改 load_config ---\ndef load_config(config_path: Union[str, Path]) -> Dict[str, MCPConfig]:\n    \"\"\"\n    Loads the central MCP configuration JSON file and validates each server entry.\n\n    Args:\n        config_path: Path to the central config.json file.\n\n    Returns:\n        A dictionary where keys are server names and values are validated MCPConfig objects.\n    \"\"\"\n    config_p = Path(config_path).resolve()\n    if not config_p.is_file():\n        raise FileNotFoundError(f\"Configuration file not found at: {config_p}\")\n\n    print(f\"DEBUG: Loading central MCP configuration from: {config_p}\")\n    validated_configs: Dict[str, MCPConfig] = {}\n    try:\n        with open(config_p, 'r', encoding='utf-8') as f:\n            raw_config_dict = json.load(f)\n\n        if not isinstance(raw_config_dict, dict):\n            raise TypeError(\"Root configuration must be a JSON object (dictionary).\")\n\n        # 遍历字典中的每个服务器配置并验证\n        for server_name, config_data in raw_config_dict.items():\n            print(f\"DEBUG: Validating config for server: '{server_name}'\")\n            if not isinstance(config_data, dict):\n                 print(f\"WARNING: Entry for '{server_name}' is not a dictionary. 
Skipping.\")\n                 continue\n            try:\n                 # 确保 connection 和 transport 存在\n                 if 'connection' not in config_data: raise ValueError(\"Missing 'connection'\")\n                 if 'transport' not in config_data.get('connection', {}): raise ValueError(\"Missing 'transport' in connection\")\n\n                 if PYDANTIC_V == 2:\n                      validated_config = MCPConfig.model_validate(config_data)\n                 else: # Pydantic V1\n                      validated_config = MCPConfig.parse_obj(config_data)\n                 validated_configs[server_name] = validated_config\n                 print(f\"DEBUG: Config for '{server_name}' validated successfully.\")\n            except (ValidationError, ValueError) as e_val:\n                 print(f\"ERROR: Validation failed for server '{server_name}' config:\\n{e_val}\\nSkipping this server.\")\n                 #可以选择继续加载其他配置，或者在这里 raise 让整个加载失败\n\n        if not validated_configs:\n             print(\"WARNING: No valid server configurations were loaded.\")\n\n        print(f\"DEBUG: Central configuration loaded. Found {len(validated_configs)} valid server configs.\")\n        return validated_configs\n    except json.JSONDecodeError as e:\n        print(f\"ERROR: Failed to decode JSON from {config_p}: {e}\"); raise\n    except Exception as e:\n        print(f\"ERROR: An unexpected error occurred loading config {config_p}: {e}\"); raise"
  },
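In `config_loader.py` above, the `Union[StdioConfig, SSEConfig]` field discriminated on `transport` is what lets one `connection` object carry either shape. A rough stdlib-only sketch of the same dispatch-and-validate step (the real code delegates this to pydantic's discriminated unions and `extra='forbid'`):

```python
def validate_connection(conn: dict) -> dict:
    """Pick required fields based on the 'transport' discriminator,
    mirroring the StdioConfig / SSEConfig split (sketch only)."""
    transport = conn.get("transport")
    if transport == "stdio":
        if "command" not in conn:
            raise ValueError("stdio connection requires 'command'")
    elif transport == "sse":
        if "url" not in conn:
            raise ValueError("sse connection requires 'url'")
    else:
        raise ValueError(f"unknown transport: {transport!r}")
    return conn

ok = validate_connection({"transport": "stdio", "command": "uvx", "args": ["mcp-server-fetch"]})
try:
    validate_connection({"transport": "sse"})  # missing url
except ValueError as e:
    print(e)  # sse connection requires 'url'
```

As in `load_config`, a failed entry can be skipped with a warning instead of aborting the whole file.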
  {
    "path": "core/mcp/mcp_server_config.json",
    "content": "{\n    \"fetch_via_uvx\": {\n      \"id\": \"fetch-uvx-stdio\",\n      \"type\": \"mcp-server\",\n      \"description\": \"Fetch Server launched by uvx via stdio\",\n      \"connection\": {\n        \"transport\": \"stdio\",\n        \"command\": \"uvx\",\n        \"args\": [\n          \"mcp-server-fetch\"\n        ],\n        \"env\": null,\n        \"cwd\": null,\n        \"encoding\": \"utf-8\",\n        \"encoding_error_handler\": \"strict\",\n        \"timeout\": 45\n      }\n    },\n    \"everything\": {\n      \"id\": \"everything-stdio\",\n      \"type\": \"mcp-server\",\n      \"description\": \"Everything Server\",\n      \"connection\": {\n        \"transport\": \"stdio\",\n        \"command\": \"npx\",\n        \"args\": [\n          \"-y\",\n          \"@modelcontextprotocol/server-everything\"\n        ],\n        \"env\": null,\n        \"cwd\": null,\n        \"encoding\": \"utf-8\",\n        \"encoding_error_handler\": \"strict\",\n        \"timeout\": 45\n      }\n    }\n  }"
  },
  {
    "path": "core/mcp/run_server.py",
    "content": "# core/mcp/run_server.py (FINAL - Direct FastMCP Registration)\nimport os\nimport sys\nimport argparse\nimport traceback\nimport logging\nfrom typing import List, Dict, Any, Optional, Type\n\n# --- Standard Setup ---\nlogging.basicConfig(level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')\nlogger = logging.getLogger(\"mcp_server_direct\")\n\ncurrent_dir = os.path.dirname(os.path.abspath(__file__))\nproject_root = os.path.dirname(os.path.dirname(current_dir))  # core/mcp -> core -> project root\nsys.path.insert(0, project_root)\n\n# --- Imports ---\nfrom mcp.server.fastmcp import FastMCP # Import FastMCP directly\n# Assume registry is populated correctly by preregister_core_tools\nfrom core.tools.registry import get_registered_tools, get_tool_instance\ntry: \n    from core.tools import preregister_core_tools; \n    PREREGISTER_AVAILABLE = True\nexcept ImportError: \n    print(\"WARNING: preregister_core_tools not found\"); \n    def preregister_core_tools(): pass; \n    PREREGISTER_AVAILABLE = False\nfrom langchain_core.tools import BaseTool\nimport asyncio\nimport time\nimport json\nimport functools\nimport inspect # Needed for func_metadata potentially\n\nprint(\"--- DEBUG: Loading FINAL run_server.py (Direct FastMCP Registration) ---\")\n\n# --- Tool Wrapper Creation Logic (as a standalone function) ---\ndef create_tool_wrapper(tool_instance: BaseTool):\n    \"\"\"\n    Creates the async wrapper function for a given tool instance.\n    This function will be decorated LATER by the mcp_instance.\n    \"\"\"\n    tool_name = getattr(tool_instance, 'name', 'unknown_tool')\n    print(f\"    DEBUG: Defining wrapper function for tool: '{tool_name}'\")\n\n    # Define the actual wrapper coroutine\n    async def dynamic_tool_wrapper(tool_to_run=tool_instance, **kwargs): # Bind instance\n        _tool_name = tool_to_run.name\n        log_file = \"/tmp/mcp_wrapper.log\"; \n        timestamp = time.strftime(\"%Y-%m-%d %H:%M:%S\"); \n        
log_prefix = f\"--- {timestamp} WRAPPER for '{_tool_name}' ---\"\n        log_lines = [f\"{log_prefix} START\", f\"Received kwargs: {kwargs}\"]\n        try: # Main execution block\n            result = None\n            if hasattr(tool_to_run, '_arun'):\n                log_lines.append(f\"Calling await tool_to_run._arun(**kwargs)\")\n                result = await tool_to_run._arun(**kwargs)\n                log_lines.append(f\"Await _arun completed.\")\n            elif hasattr(tool_to_run, '_run'):\n                log_lines.append(f\"Calling tool_to_run._run(**kwargs) via run_in_executor\")\n                loop = asyncio.get_running_loop()\n                sync_func_with_args = functools.partial(tool_to_run._run, **kwargs)\n                result = await loop.run_in_executor(None, sync_func_with_args)\n                log_lines.append(f\"Executor _run completed.\")\n            else: log_lines.append(\"ERROR: Tool no _arun/_run!\"); raise NotImplementedError(f\"Tool {_tool_name} no method.\")\n\n            log_lines.append(f\"Raw result type: {type(result)}\"); log_lines.append(f\"Raw value snippet: {str(result)[:500]}...\")\n            final_result = result\n            try: json.dumps(result); log_lines.append(\"Result JSON serializable.\")\n            except TypeError: log_lines.append(f\"WARN: Non-JSON type {type(result)}.->str.\"); final_result = str(result)\n            log_lines.append(f\"Returning final (type {type(final_result)}).\"); log_lines.append(f\"{log_prefix} END (Success)\")\n            return {\"result\": final_result}\n        except Exception as e: # Catch execution errors\n            log_lines.append(f\"!!! 
EXCEPTION in tool exec for '{_tool_name}': {e} !!!\"); tb_lines = traceback.format_exc().splitlines(); log_lines.append(\"--- Traceback ---\"); log_lines.extend(tb_lines); log_lines.append(\"-----------------\"); log_lines.append(f\"{log_prefix} END (Exception)\")\n            return f\"ERROR_EXECUTING_TOOL_{_tool_name}: {str(e)}\" # Return error string\n        finally: # Ensure logging\n            try:\n                for line in log_lines: print(line, flush=True, file=sys.stderr)\n                with open(log_file, \"a\") as f: f.write(\"\\n\".join(log_lines) + \"\\n\\n\")\n            except Exception as log_e: print(f\"!!! Logging Error for tool {_tool_name}: {log_e} !!!\", flush=True, file=sys.stderr)\n\n    # Return the created wrapper function AND the original tool's metadata\n    return dynamic_tool_wrapper, tool_name, getattr(tool_instance, 'description', f\"Tool {tool_name}\")\n\n# --- Main Execution Logic ---\ndef main():\n    parser = argparse.ArgumentParser(description='Start Mentis MCP Server (Direct Registration)')\n    parser.add_argument('--transport', type=str, choices=['stdio', 'sse'], default='stdio'); parser.add_argument('--host', type=str, default='0.0.0.0'); parser.add_argument('--port', type=int, default=8000); parser.add_argument('--name', type=str, default='MentisMCP'); parser.add_argument('--tools', nargs='+'); parser.add_argument('--debug', action='store_true')\n    args = parser.parse_args()\n\n    if args.debug: logger.setLevel(logging.DEBUG); print(\"DEBUG Logging Enabled\")\n\n    try:\n        # --- 1. Preregister tools into the central registry ---\n        if PREREGISTER_AVAILABLE:\n             print(\"DEBUG: Calling preregister_core_tools...\")\n             preregister_core_tools() # This populates the registry\n             print(\"DEBUG: preregister_core_tools finished.\")\n        else: print(\"DEBUG: Skipping preregister_core_tools (unavailable).\")\n\n        # --- 2. 
Create FastMCP instance ---\n        print(f\"DEBUG: Creating FastMCP instance: name='{args.name}'\")\n        fastmcp_kwargs = {}\n        if args.transport == 'sse':\n            if args.host: fastmcp_kwargs['host'] = args.host\n            if args.port: fastmcp_kwargs['port'] = args.port\n        mcp_instance = FastMCP(args.name, **fastmcp_kwargs) # Create instance directly\n        print(f\"DEBUG: FastMCP instance created.\")\n\n        # --- 3. Load tools from registry and register wrappers with FastMCP ---\n        registered_count = 0\n        target_tools = args.tools # List of names, or None for all\n\n        # Get all tools first if needed\n        all_tools_dict = get_registered_tools(as_dict=True)\n\n        tools_to_register = {}\n        if target_tools: # Filter if specific tools requested\n             print(f\"DEBUG: Filtering for specific tools: {target_tools}\")\n             for name in target_tools:\n                  if name in all_tools_dict:\n                       tools_to_register[name] = all_tools_dict[name]\n                  else:\n                       print(f\"ERROR: Requested tool '{name}' not found in registry.\")\n        else: # Register all tools found in registry\n             print(\"DEBUG: Registering all tools found in registry...\")\n             tools_to_register = all_tools_dict\n\n        # Iterate and register the selected tools\n        print(f\"DEBUG: Attempting to register {len(tools_to_register)} tools with FastMCP...\")\n        for tool_name, tool_info in tools_to_register.items():\n            tool_instance = tool_info.get(\"tool\")\n            if isinstance(tool_instance, BaseTool):\n                 try:\n                      # Create the wrapper function and get metadata\n                      wrapper_func, name, description = create_tool_wrapper(tool_instance)\n                      # Register the wrapper directly using the mcp_instance decorator method\n                      mcp_instance.tool(name=name, 
description=description)(wrapper_func)\n                      print(f\"DEBUG: Successfully registered '{name}' with FastMCP.\")\n                      registered_count += 1\n                 except Exception as e_register:\n                      print(f\"ERROR: Failed to register wrapper for tool '{tool_name}': {e_register}\")\n                      traceback.print_exc()\n            else:\n                 print(f\"WARNING: Item '{tool_name}' not a BaseTool, skipping.\")\n\n        print(f\"DEBUG: Tool registration complete. {registered_count} tools registered with FastMCP.\")\n        if registered_count == 0: print(\"WARNING: No tools were registered!\")\n\n        # --- 4. Run the FastMCP server ---\n        print(f\"Starting MCP Server '{args.name}' (Transport: {args.transport})...\")\n        mcp_instance.run(transport=args.transport)\n\n    except KeyboardInterrupt: print(\"Server shutting down...\"); sys.exit(0)\n    except Exception as e: print(f\"Error starting server: {e}\"); traceback.print_exc(); sys.exit(1)\n\nif __name__ == \"__main__\":\n    main()"
  },
  {
    "path": "core/mcp/server.py",
    "content": "import os\nimport sys\nimport traceback\nimport asyncio\nimport time\nimport json\nimport functools\nfrom typing import Dict, Any, Optional, List\n\n# mcp & fastmcp\nfrom mcp.server.fastmcp import FastMCP\nfrom mcp.types import CallToolResult, TextContent, ErrorData  # <-- 关键导入\n\n# 修正路径，导入你自己的工具与 BaseTool\nsys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(__file__))))\nfrom core.tools.registry import get_registered_tools, get_tool_instance\nfrom langchain_core.tools import BaseTool\n\nprint(\"--- DEBUG: Loading REFACTORED server.py (Fix InvalidSignature) ---\")\n\nclass MentisMCPServer:\n    def __init__(self, name: str = \"MentisMCP\", host: Optional[str] = None, port: Optional[int] = None):\n        print(f\"DEBUG: Initializing MentisMCPServer(name='{name}', host={host}, port={port})\")\n        fastmcp_kwargs = {}\n        if host is not None:\n            fastmcp_kwargs['host'] = host\n        if port is not None:\n            fastmcp_kwargs['port'] = port\n\n        try:\n            print(f\"DEBUG: Calling FastMCP(name='{name}', **{fastmcp_kwargs})\")\n            self.mcp = FastMCP(name, **fastmcp_kwargs)\n            print(\"DEBUG: FastMCP initialized successfully.\")\n        except Exception as e_fastmcp:\n            print(\"ERROR: Failed to initialize FastMCP!\")\n            print(traceback.format_exc())\n            raise\n\n        # 记录注册成功的工具包装器\n        self.registered_tools_wrappers = {}\n\n    def register_all_tools(self):\n        \"\"\"批量注册所有在 registry 中找到的 BaseTool\"\"\"\n        tools_dict = get_registered_tools(as_dict=True)\n        print(f\"DEBUG: Registering all tools ({len(tools_dict)} found)...\")\n        registered_count = 0\n        for tool_name, tool_info in tools_dict.items():\n            tool_instance = tool_info.get(\"tool\")\n            if isinstance(tool_instance, BaseTool):\n                if self._register_tool_with_simplified_wrapper(tool_instance):\n                    registered_count += 
1\n            else:\n                print(f\"WARNING: Item '{tool_name}' not BaseTool, skipping.\")\n        print(f\"DEBUG: Finished registering all tools. Registered: {registered_count}\")\n\n    def register_single_tool(self, tool_name: str):\n        \"\"\"仅注册特定名称的一个工具\"\"\"\n        print(f\"DEBUG: Attempting to register single tool: {tool_name}\")\n        try:\n            tool_instance = get_tool_instance(tool_name)\n            if not tool_instance:\n                print(f\"ERROR: Tool '{tool_name}' not found in registry.\")\n                return\n            if isinstance(tool_instance, BaseTool):\n                if self._register_tool_with_simplified_wrapper(tool_instance):\n                    print(f\"DEBUG: Successfully registered single tool: {tool_instance.name}\")\n                else:\n                    print(f\"ERROR: Failed wrapper registration for: {tool_instance.name}\")\n            else:\n                print(f\"WARNING: Tool '{tool_name}' not BaseTool, skipping.\")\n        except Exception as e:\n            print(f\"ERROR during register_single_tool for '{tool_name}': {e}\")\n            print(traceback.format_exc())\n\n    def _register_tool_with_simplified_wrapper(self, tool: BaseTool) -> bool:\n        \"\"\"\n        为工具创建并注册一个简化的包装器 (Fix InvalidSignature),\n        并确保返回的数据符合 CallToolResult，以便客户端解析.\n        \"\"\"\n        try:\n            tool_name = getattr(tool, 'name', None)\n            tool_description = getattr(tool, 'description', None)\n            if not tool_name or not isinstance(tool_name, str):\n                print(f\"ERROR: Invalid tool name: {tool_name}. 
Skip.\")\n                return False\n            if not tool_description or not isinstance(tool_description, str):\n                print(f\"WARNING: Empty/invalid description for '{tool_name}'.\")\n                tool_description = f\"Tool {tool_name}\"\n\n            print(f\"DEBUG: Defining wrapper for tool: '{tool_name}'\")\n\n            @self.mcp.tool(name=tool_name, description=tool_description)\n            async def simplified_tool_wrapper(tool_for_wrapper=tool, **kwargs):\n                \"\"\"\n                同步或异步地调用 tool_for_wrapper，并将结果包装到\n                CallToolResult 中返回给客户端，以匹配 .content 或 .error.\n                \"\"\"\n                _tool_name = tool_for_wrapper.name\n                log_file = \"/tmp/mcp_wrapper.log\"\n                timestamp = time.strftime(\"%Y-%m-%d %H:%M:%S\")\n                log_prefix = f\"--- {timestamp} WRAPPER for '{_tool_name}' ---\"\n                log_lines = [f\"{log_prefix} START\", f\"Received kwargs: {kwargs}\"]\n\n                try:\n                    # 根据工具方法签名决定调用 _arun (异步) 或 _run (同步)\n                    result = None\n                    if hasattr(tool_for_wrapper, '_arun'):\n                        log_lines.append(\"Calling await tool_for_wrapper._arun(**kwargs)\")\n                        result = await tool_for_wrapper._arun(**kwargs)\n                        log_lines.append(\"Await _arun completed.\")\n                    elif hasattr(tool_for_wrapper, '_run'):\n                        log_lines.append(\"Calling tool_for_wrapper._run(**kwargs) via run_in_executor\")\n                        loop = asyncio.get_running_loop()\n                        sync_func_with_args = functools.partial(tool_for_wrapper._run, **kwargs)\n                        result = await loop.run_in_executor(None, sync_func_with_args)\n                        log_lines.append(\"Executor _run completed.\")\n                    else:\n                        log_lines.append(f\"ERROR: Tool '{_tool_name}' has no 
_arun/_run!\")\n                        raise NotImplementedError(f\"Tool '{_tool_name}' cannot be invoked directly.\")\n\n                    # Log the result type and a snippet of its content\n                    log_lines.append(f\"Raw result type: {type(result)}\")\n                    log_lines.append(f\"Raw value snippet: {str(result)[:500]}...\")\n\n                    # Key step: wrap the result in a CallToolResult so the client can read .content\n                    # (mcp.types.TextContent requires the literal type=\"text\" field)\n                    call_result = CallToolResult(\n                        content=[TextContent(type=\"text\", text=str(result))]\n                    )\n                    log_lines.append(\"Returning standard CallToolResult with .content.\")\n                    log_lines.append(f\"{log_prefix} END (Success)\")\n                    return call_result\n\n                except Exception as e:\n                    # On exception, return a CallToolResult flagged with isError;\n                    # mcp.types.CallToolResult carries content + isError, not an 'error' field\n                    log_lines.append(f\"!!! EXCEPTION in tool exec for '{_tool_name}': {e} !!!\")\n                    tb_lines = traceback.format_exc().splitlines()\n                    log_lines.append(\"--- Traceback ---\")\n                    log_lines.extend(tb_lines)\n                    log_lines.append(\"-----------------\")\n                    log_lines.append(f\"{log_prefix} END (Exception)\")\n                    err_msg = f\"ERROR_EXECUTING_TOOL_{_tool_name}: {str(e)}\"\n                    return CallToolResult(\n                        content=[TextContent(type=\"text\", text=err_msg)],\n                        isError=True,\n                    )\n\n                finally:\n                    # Write the log lines to stderr and to the log file\n                    try:\n                        for line in log_lines:\n                            print(line, flush=True, file=sys.stderr)\n                        with open(log_file, \"a\") as f:\n                            f.write(\"\\n\".join(log_lines) + \"\\n\\n\")\n                    except Exception as log_e:\n                        print(f\"!!! 
Logging Error for '{_tool_name}': {log_e} !!!\",\n                              flush=True, file=sys.stderr)\n\n            # 修正下包装器的名字，避免重复\n            simplified_tool_wrapper.__name__ = f\"{tool_name}_simplified_wrapper\"\n            self.registered_tools_wrappers[tool_name] = simplified_tool_wrapper\n            print(f\"DEBUG: Registered simplified wrapper for tool: '{tool_name}'\")\n            return True\n\n        except Exception as registration_error:\n            failed_tool_name = getattr(tool, 'name', 'unknown')\n            print(f\"ERROR: Failed to create/register wrapper for tool '{failed_tool_name}': {registration_error}\")\n            print(traceback.format_exc())\n            return False\n\n    def run(self, transport: str = \"stdio\"):\n        \"\"\"运行 MCP 服务器 (签名中移除了 host/port)\"\"\"\n        print(f\"DEBUG: MentisMCPServer.run(transport='{transport}') called.\")\n        print(f\"正在启动 MCP 服务器，传输方式: {transport}\")\n\n        if transport == \"sse\":\n            # SSE 方式\n            host = 'N/A'\n            port = 'N/A'\n            if hasattr(self.mcp, 'settings'):\n                host = getattr(self.mcp.settings, 'host', 'N/A')\n                port = getattr(self.mcp.settings, 'port', 'N/A')\n            print(f\"配置 SSE 服务器监听在: http://{host}:{port} (如果 N/A 表示未配置或获取失败)\")\n            try:\n                import importlib\n                try:\n                    fastmcp_module = importlib.import_module('mcp.server.fastmcp')\n                    print(f\"FastMCP version: {getattr(fastmcp_module, '__version__', '未知')}\")\n                except:\n                    pass\n                import uvicorn\n                import fastapi\n                print(f\"FastAPI: {fastapi.__version__}, Uvicorn: {uvicorn.__version__}\")\n                print(f\"DEBUG: Calling self.mcp.run(transport='{transport}') for SSE\")\n                self.mcp.run(transport=transport)\n            except Exception as e:\n                print(f\"SSE 
server failed to start: {e}\")\n                print(traceback.format_exc())\n                raise\n        else:\n            # default stdio mode\n            print(\"Starting stdio-mode server...\")\n            try:\n                print(f\"DEBUG: Calling self.mcp.run(transport='{transport}') for STDIO\")\n                self.mcp.run(transport=transport)\n            except Exception as e:\n                print(f\"stdio server failed to start: {e}\")\n                print(traceback.format_exc())\n                raise\n"
  },
  {
    "path": "core/mcp/test/README.md",
    "content": "# MCP 测试框架说明\n\n## 概述\n\nMCP（Machine Conversation Protocol）是一个用于机器对话的协议框架，它允许不同的系统通过标准化的接口进行通信。本测试框架提供了一种方式来测试MCP服务器的功能和性能。\n\n## 测试文件结构\n\n测试框架包含以下主要文件：\n\n### 1. minimal_fastmcp_test.py\n\n这是一个最小化的FastMCP服务器实现，用于测试基本功能：\n\n- 创建FastMCP实例\n- 注册简单的工具函数（ping工具）\n- 通过STDIO传输方式运行服务器\n\n该文件可以独立运行，也可以被其他测试脚本作为子进程启动。\n\n### 2. test_minimal_client.py\n\n这个脚本使用MCP客户端库来测试minimal_fastmcp_test.py：\n\n- 导入必要的MCP客户端库（ClientSession, stdio_client等）\n- 连接到minimal_fastmcp_test.py并测试ping工具\n- 展示如何使用客户端API进行工具调用\n\n## 测试方法\n\n### 客户端库测试（test_minimal_client.py）\n\n这种测试方法使用MCP客户端库与MCP服务器通信，展示了如何在实际应用中使用MCP客户端。测试流程如下：\n\n1. 创建ClientSession对象\n2. 连接到MCP服务器\n3. 调用工具并处理结果\n\n## 运行测试\n\n### 运行客户端库测试\n\n```bash\npython core/mcp/test/test_minimal_client.py\n```\n\n## 扩展测试\n\n### 添加新工具\n\n要在minimal_fastmcp_test.py中添加新工具，可以按照以下步骤操作：\n\n1. 定义新的异步工具函数\n2. 使用FastMCP实例的装饰器注册工具\n\n示例：\n```python\nasync def new_tool(param1: str, param2: int = 0) -> str:\n    \"\"\"A new tool description.\"\"\"\n    # 工具实现\n    return f\"Result: {param1}, {param2}\"\n\nmcp_server.tool(name=\"new_tool\", description=\"New tool description.\")(new_tool)\n```\n\n### 创建新的测试脚本\n\n可以参考现有的测试脚本创建新的测试脚本，测试不同的功能或场景。\n\n## 常见问题\n\n### 服务器无响应\n\n- 确保服务器进程正在运行\n- 检查传输方式是否正确（stdio或sse）\n- 检查客户端连接参数是否正确\n\n### 工具调用失败\n\n- 确保工具名称正确\n- 检查参数是否符合工具的要求\n- 查看服务器日志以获取更多信息\n\n## 总结\n\nMCP测试框架提供了使用MCP客户端库测试MCP服务器功能的方法。通过这些测试，可以验证MCP服务器的基本功能和性能，为开发和调试提供支持。"
  },
  {
    "path": "core/mcp/test/__init__.py",
    "content": "# MCP测试模块\n# 包含用于测试MCP（Message Control Protocol）功能的各种测试脚本"
  },
  {
    "path": "core/mcp/test/minimal_fastmcp_test.py",
    "content": "import asyncio\nfrom mcp.server.fastmcp import FastMCP\nimport logging\n\n# 配置基本日志，看FastMCP内部是否有更多信息\nlogging.basicConfig(level=logging.INFO)\nlogger = logging.getLogger(\"minimal_test\")\n\nprint(\"--- Minimal FastMCP Server Test ---\")\n\n# 1. 创建 FastMCP 实例\n# (假设 FastMCP 对于 stdio 不需要 host/port in __init__)\nmcp_server = FastMCP(name=\"MinimalServer\")\nprint(\"FastMCP instance created.\")\n\n# 2. 定义一个简单的 async 工具函数\nasync def ping_tool(query: str = \"default ping\") -> str:\n    \"\"\"A very basic tool that just returns pong.\"\"\"\n    print(f\"\\n--- PING TOOL CALLED! ---\") # 在工具内部打印日志\n    print(f\"Received query: {query}\")\n    result = f\"pong: {query}\"\n    print(f\"Returning: {result}\")\n    print(f\"--- PING TOOL END ---\")\n    return result\n\n# 3. 直接用 FastMCP 实例的装饰器注册\ntry:\n    mcp_server.tool(name=\"ping\", description=\"Returns pong plus the query.\")(ping_tool)\n    # 上一行等价于:\n    # @mcp_server.tool(name=\"ping\", description=\"Returns pong plus the query.\")\n    # async def ping_tool(...) ...\n    print(\"Tool 'ping' registered directly with FastMCP.\")\nexcept Exception as e_reg:\n    print(f\"Error registering tool directly: {e_reg}\")\n    import traceback\n    traceback.print_exc()\n    exit(1)\n\n# 4. 运行服务器 (使用 STDIO)\ntry:\n    print(\"Starting minimal server with STDIO transport...\")\n    # 假设 run() 只需 transport 参数对 stdio 有效\n    mcp_server.run(transport=\"stdio\")\n    print(\"Server finished.\") # 理应不会执行到，除非服务器停止\nexcept Exception as e_run:\n    print(f\"Error running minimal server: {e_run}\")\n    import traceback\n    traceback.print_exc()\n    exit(1)"
  },
  {
    "path": "core/mcp/test/test_minimal_client.py",
    "content": "# test_minimal_client_fixed.py - 用于测试minimal_fastmcp_test.py的客户端脚本（修复版）\nimport os\nimport sys\nimport asyncio\nimport json\nimport traceback\nfrom typing import Optional, Dict, Any\n\n# 添加项目根目录到路径\nsys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\n# 导入必要的MCP客户端库\ntry:\n    from mcp import ClientSession\n    from mcp.client.stdio import stdio_client, StdioServerParameters\n    from mcp.types import CallToolRequest\n    DEPS_OK = True\nexcept ImportError as e:\n    print(f\"错误: 缺少必要的依赖: {e}\")\n    print(\"请确保已安装mcp库: pip install mcp\")\n    DEPS_OK = False\n\nasync def main():\n    \"\"\"连接到minimal_fastmcp_test.py并测试ping工具\"\"\"\n    print(\"=== MCP最小客户端测试（修复版）===\\n\")\n    \n    if not DEPS_OK:\n        print(\"缺少必要的依赖，无法继续。\")\n        return\n    \n    # 准备minimal_fastmcp_test.py的路径\n    script_path = os.path.join(os.path.dirname(__file__), \"minimal_fastmcp_test.py\")\n    cmd = [sys.executable, script_path]\n    print(f\"准备连接到服务器: {script_path}\")\n    \n    try:\n        # 创建StdioServerParameters对象\n        server_params = StdioServerParameters(\n            command=sys.executable,\n            args=[script_path],\n            # 可以根据需要添加其他参数，如env, cwd等\n        )\n        print(\"已创建服务器参数配置。\")\n        \n        # 创建STDIO客户端连接\n        print(\"\\n创建STDIO客户端连接...\")\n        async with stdio_client(server_params) as (reader, writer):\n            print(\"STDIO连接已建立。创建ClientSession...\")\n            async with ClientSession(reader, writer) as session:\n                print(\"ClientSession已创建。初始化会话...\")\n                await session.initialize()\n                print(\"会话已初始化。\")\n                \n                # 获取服务器支持的工具列表\n                print(\"\\n获取服务器支持的工具列表...\")\n                tools_result = await session.list_tools()\n                print(f\"服务器支持的工具: {tools_result}\")\n                \n                # 调用ping工具\n                print(\"\\n调用ping工具...\")\n                
try:\n                    # session.call_tool builds the tools/call request internally,\n                    # so constructing a CallToolRequest by hand is unnecessary\n                    result = await session.call_tool(\"ping\", {\"query\": \"Hello, MCP!\"})\n                    print(f\"\\nResponse received: {result}\")\n                    # CallToolResult exposes .content (a list of content parts) and .isError\n                    if getattr(result, 'isError', False):\n                        print(f\"Error: {result.content}\")\n                    elif getattr(result, 'content', None):\n                        print(f\"Result: {result.content}\")\n                    else:\n                        print(f\"Unexpected response format: {result}\")\n                except Exception as e:\n                    print(f\"Error calling tool: {e}\")\n                    print(traceback.format_exc())\n    \n    except Exception as e:\n        print(f\"Error running test: {e}\")\n        print(traceback.format_exc())\n\nif __name__ == \"__main__\":\n    asyncio.run(main())"
  },
  {
    "path": "core/tools/__init__.py",
    "content": "# Tools package initialization\nfrom langchain_community.agent_toolkits.load_tools import load_tools\nfrom core.tools.registry import register_tool, ToolCategory, get_registered_tools\nfrom core.tools.firecrawl_tool import FireCrawlTool\nfrom core.tools.e2b_tool import E2BCodeInterpreterTool\nimport os\nimport importlib\nimport inspect\nfrom typing import Any, Dict, List, Type, Optional\nfrom langchain_core.tools import BaseTool\n\n# 导入预注册所需的工具\nfrom langchain_community.tools import (\n    TavilySearchResults,\n    ArxivQueryRun,\n)\nfrom langchain_community.agent_toolkits import FileManagementToolkit\nfrom langchain_community.agent_toolkits.openapi.toolkit import RequestsToolkit,TextRequestsWrapper\nfrom langchain_community.tools.riza.command import ExecPython, ExecJavaScript\n\nfrom dotenv import load_dotenv\nload_dotenv()  # 自动加载 .env 文件\n\n# 预注册核心工具列表 - 定义需要预注册的核心工具\ndef preregister_core_tools():\n    \"\"\"预注册核心工具，确保系统启动时这些工具已经可用\"\"\"\n    print(\"开始预注册核心工具...\")\n    \n    # 注册搜索类工具\n    try:\n        # Tavily搜索工具\n        tavily_search = TavilySearchResults()\n        register_tool(tavily_search, ToolCategory.SEARCH)\n        print(f\"已预注册工具: {tavily_search.name} (类别: {ToolCategory.SEARCH.value})\")\n    except Exception as e:\n        print(f\"预注册Tavily搜索工具失败: {e}\")\n    \n    # 注册网页浏览类工具\n    try:\n        # Arxiv查询工具\n        arxiv_tool = ArxivQueryRun()\n        register_tool(arxiv_tool, ToolCategory.WEB_BROWSING)\n        print(f\"已预注册工具: {arxiv_tool.name} (类别: {ToolCategory.WEB_BROWSING.value})\")\n    except Exception as e:\n        print(f\"预注册Arxiv查询工具失败: {e}\")\n    \n    try:\n        # RequestoolKit请求工具\n        # 创建TextRequestsWrapper实例作为请求包装器\n        requests_wrapper = TextRequestsWrapper(headers={})\n        # 初始化RequestsToolkit，提供必要的参数\n        requests_toolkit = RequestsToolkit(\n            requests_wrapper=requests_wrapper,\n            allow_dangerous_requests=True  # 允许危险请求，使工具可用\n        )\n        for req_tool in 
requests_toolkit.get_tools():\n            register_tool(req_tool, ToolCategory.WEB_BROWSING)\n            print(f\"已预注册工具: {req_tool.name} (类别: {ToolCategory.WEB_BROWSING.value})\")\n    except Exception as e:\n        print(f\"预注册 RequestoolKit请求工具失败: {e}\")\n    \n    # 注册文件系统工具\n    try:\n        # 获取当前目录作为文件系统工具的根目录\n        current_dir = os.getcwd()\n        # 创建文件系统工具集\n        filesystem_toolkit = FileManagementToolkit(\n            root_dir=current_dir,\n            selected_tools=[\"write_file\", \"read_file\", \"list_directory\"]\n        )\n        # 获取文件系统工具并注册\n        for fs_tool in filesystem_toolkit.get_tools():\n            register_tool(fs_tool, ToolCategory.FILE_SYSTEM)\n            print(f\"已预注册工具: {fs_tool.name} (类别: {ToolCategory.FILE_SYSTEM.value})\")\n    except Exception as e:\n        print(f\"预注册文件系统工具失败: {e}\")\n    \n    # 注册代码解释器工具\n    # try:\n    #     # Python REPL工具\n    #     python_repl = ExecPython()\n    #     register_tool(python_repl, ToolCategory.CODE_INTERPRETER)\n    #     print(f\"已预注册工具: {python_repl.name} (类别: {ToolCategory.CODE_INTERPRETER.value})\")\n    # except Exception as e:\n    #     print(f\"预注册Python REPL工具失败: {e}\")\n\n    # # 注册代码解释器工具\n    # try:\n    #     # Python REPL工具\n    #     javascript_repl = ExecJavaScript()\n    #     register_tool(javascript_repl, ToolCategory.CODE_INTERPRETER)\n    #     print(f\"已预注册工具: {javascript_repl.name} (类别: {ToolCategory.CODE_INTERPRETER.value})\")\n    # except Exception as e:\n    #     print(f\"预注册Python REPL工具失败: {e}\")\n    \n    # 注册自定义工具 - FireCrawl工具\n    try:\n        firecrawl_tool = FireCrawlTool()\n        register_tool(firecrawl_tool, ToolCategory.WEB_BROWSING)\n        print(f\"已预注册工具: {firecrawl_tool.name} (类别: {ToolCategory.WEB_BROWSING.value})\")\n    except Exception as e:\n        print(f\"预注册FireCrawl工具失败: {e}\")\n    \n    # 注册E2B代码解释器工具\n    try:\n        e2b_tool = E2BCodeInterpreterTool()\n        register_tool(e2b_tool, 
ToolCategory.CODE_INTERPRETER)\n        print(f\"已预注册工具: {e2b_tool.name} (类别: {ToolCategory.CODE_INTERPRETER.value})\")\n    except Exception as e:\n        print(f\"预注册E2B代码解释器工具失败: {e}\")\n\n\n    from .replicate_flux_tool import ReplicateFluxImageTool, category \n    try:\n        flux_tool = ReplicateFluxImageTool()\n        if flux_tool._is_available:\n            register_tool(flux_tool, category)\n    except Exception as e:\n        print(f\"Failed to register ReplicateFluxImageTool: {e}\")\n\nprint(\"核心工具预注册完成\")\n\n# 执行预注册\npreregister_core_tools()\n\n# 注册 LangChain 工具 - 使用load_tools加载的工具列表\ntry:\n    langchain_tools = load_tools([\"serpapi\"])\n    for tool in langchain_tools:\n        register_tool(tool, ToolCategory.SEARCH)\n        print(f\"已注册LangChain工具: {tool.name} (类别: {ToolCategory.SEARCH.value})\")\nexcept Exception as e:\n    print(f\"加载LangChain工具失败: {e}\")\n\n# 工具类别映射 - 用于自动分类直接导入的工具\ntool_category_mapping = {\n    # 搜索类工具\n    \"TavilySearchResults\": ToolCategory.SEARCH,\n    \"GoogleSearchResults\": ToolCategory.SEARCH,\n    \"GoogleSerperResults\": ToolCategory.SEARCH,\n    \"WikipediaQueryRun\": ToolCategory.SEARCH,\n    \"FireCrawl\": ToolCategory.SEARCH,\n    \n    # 网页浏览类工具\n    \"WebBrowser\": ToolCategory.WEB_BROWSING,\n    \"ArxivQueryRun\": ToolCategory.WEB_BROWSING,\n    \"RequestsGet\": ToolCategory.WEB_BROWSING,\n    \"RequestsPost\": ToolCategory.WEB_BROWSING,\n    \n    # 文件系统类工具\n    \"WriteFile\": ToolCategory.FILE_SYSTEM,\n    \"ReadFile\": ToolCategory.FILE_SYSTEM,\n    \"ListDirectory\": ToolCategory.FILE_SYSTEM,\n    \n    # 代码解释器类工具\n    \"PythonREPL\": ToolCategory.CODE_INTERPRETER,\n    \"ShellTool\": ToolCategory.CODE_INTERPRETER,\n    \"E2BCodeInterpreterTool\": ToolCategory.CODE_INTERPRETER,\n    \n    # 数据库类工具\n    \"SQLDatabaseTool\": ToolCategory.DATABASE,\n    \n    # 默认为其他类别\n    \"default\": ToolCategory.OTHER\n}\n\ndef register_direct_tool(tool_instance: BaseTool, category: ToolCategory = None) -> None:\n    
\"\"\"注册直接从langchain_community.tools导入的工具\n    \n    Args:\n        tool_instance: 工具实例\n        category: 工具类别，如果为None则自动根据工具名称判断类别\n    \"\"\"\n    if not category:\n        # 获取工具类名\n        tool_class_name = tool_instance.__class__.__name__\n        # 根据工具类名自动判断类别\n        category = tool_category_mapping.get(tool_class_name, tool_category_mapping[\"default\"])\n    \n    # 注册工具\n    register_tool(tool_instance, category)\n    print(f\"已注册工具: {tool_instance.name} (类别: {category.value})\")\n\n# 获取 tools 目录路径\ntools_dir = os.path.dirname(__file__)\n\n# 遍历目录中的所有文件，注册自定义工具\nfor filename in os.listdir(tools_dir):\n    # 只处理 .py 文件，且排除 __init__.py 和 registry.py\n    if filename.endswith('.py') and filename not in ['__init__.py', 'registry.py']:\n        # 提取模块名（去掉 .py 后缀）\n        module_name = filename[:-3]\n        try:\n            # 动态导入模块\n            module = importlib.import_module(f'.{module_name}', package='core.tools')\n            \n            # 查找模块中的工具类（继承自BaseTool的类）\n            for name, obj in inspect.getmembers(module):\n                # 检查是否是类且是BaseTool的子类\n                if inspect.isclass(obj) and issubclass(obj, BaseTool) and obj != BaseTool:\n                    # 检查该类是否已经被实例化并注册\n                    tool_name = getattr(obj, 'name', None)\n                    if tool_name and tool_name not in [info['tool'].name for info in get_registered_tools().values()]:\n                        # 确定工具类别\n                        category = getattr(module, 'category', ToolCategory.OTHER)\n                        # 实例化并注册工具\n                        try:\n                            tool_instance = obj()\n                            register_tool(tool_instance, category)\n                            print(f\"已注册工具类: {name} (工具名: {tool_instance.name}, 类别: {category.value})\")\n                        except Exception as e:\n                            print(f\"实例化工具类 {name} 时出错: {e}\")\n        except Exception as e:\n            print(f\"导入 {module_name} 时出错: 
{e}\")"
  },
  {
    "path": "core/tools/e2b_tool.py",
    "content": "# core/tools/e2b_tool.py\n\nimport os\nimport json\nimport asyncio\nimport traceback\nfrom typing import Dict, Any, Optional, Type, List # 确保导入 List\nfrom pydantic import BaseModel, Field, PrivateAttr\nfrom langchain_core.tools import BaseTool\n\n\n# --- E2B Imports ---\ntry:\n    from e2b_code_interpreter import Sandbox\n    from e2b_code_interpreter.exceptions import TimeoutException\n    E2B_AVAILABLE = True\nexcept ImportError:\n    Sandbox = None # type: ignore\n    SandboxException = Exception # type: ignore # Fallback to base Exception\n    TimeoutException = TimeoutError # type: ignore # Fallback to base TimeoutError\n    E2B_AVAILABLE = False\n    print(\"Warning: 'e2b' package not installed (pip install e2b). E2BCodeInterpreterTool will not work.\")\n\n# --- Tool Category ---\ntry:\n    from .registry import ToolCategory, register_tool\n    if not hasattr(ToolCategory, 'CODE_INTERPRETER'):\n         ToolCategory.CODE_INTERPRETER = ToolCategory.OTHER\n    category = ToolCategory.CODE_INTERPRETER\nexcept ImportError:\n    category = None\n    print(\"Tool registry not found.\")\n\n# --- Input Schema (保持不变) ---\nclass E2BCodeInterpreterToolInput(BaseModel):\n    code: str = Field(description=\"要执行的Python代码\")\n\n# --- Tool Class (优化版) ---\nclass E2BCodeInterpreterTool(BaseTool):\n    \"\"\"\n    使用 E2B SDK 在安全沙箱中执行 Python 代码的工具 (修正异常处理版)。\n    返回执行结果的字符串摘要。\n    \"\"\"\n    name: str = \"e2b_code_interpreter\"\n    description: str = ( # 可以稍微调整描述，强调是 Python 执行环境\n        \"Executes Python code in a sandboxed environment. \"\n        \"Input MUST be a JSON object with a 'code' key containing the Python code string. \"\n        \"Libraries like matplotlib, pandas, numpy, sympy are available. Install others using pip (e.g., `import subprocess; subprocess.run(['pip', 'install', 'requests'])`). \"\n        \"Use 'print()' to output results. 
For plots, save them to a file (e.g., '/home/user/plot.png') and state the path; do not return raw image data. \"\n        \"Returns a string summarizing execution status, stdout, stderr, and any errors.\"\n    )\n    args_schema: Type[BaseModel] = E2BCodeInterpreterToolInput\n\n    _sandbox: Optional[Any] = PrivateAttr(default=None)\n    _is_available: bool = PrivateAttr(default=False)\n    _init_error: Optional[str] = PrivateAttr(default=None)\n    # 不再需要 self.ExceptionClass\n\n    def __init__(self, **kwargs):\n        super().__init__(**kwargs)\n        self._initialize_sandbox()\n\n    def _initialize_sandbox(self):\n        \"\"\"初始化沙箱环境\"\"\"\n        if not E2B_AVAILABLE:\n            self._init_error = \"Package 'e2b' not installed.\"\n            print(f\"ERROR: {self._init_error}\")\n            return\n\n        if \"E2B_API_KEY\" not in os.environ:\n            self._init_error = \"Environment variable E2B_API_KEY not set.\"\n            print(f\"ERROR: {self._init_error}\")\n            return\n\n        try:\n            print(\"Initializing E2B Sandbox...\")\n            # 实例化 Sandbox\n            self._sandbox = Sandbox() # 使用导入的 Sandbox 类\n            print(\"E2B Sandbox initialized successfully!\")\n            self._is_available = True\n            self._init_error = None\n        except (SandboxException, TimeoutException) as e: # <--- 捕获特定的 E2B 异常\n            self._init_error = f\"Failed to initialize E2B Sandbox (E2B Error): {e}\"\n            print(f\"ERROR: {self._init_error}\")\n            self._is_available = False\n        except Exception as e: # 捕获其他意外错误\n            self._init_error = f\"An unexpected error occurred during E2B Sandbox initialization: {e}\"\n            print(f\"ERROR: {self._init_error}\")\n            self._is_available = False\n\n    def _run(self, code: str, **kwargs) -> str:\n        \"\"\"同步执行 Python 代码并返回结果摘要字符串\"\"\"\n        if not self._is_available or self._sandbox is None:\n            # ... 
(return an error message including the setup guide; unchanged) ...\n            error_message = \"E2B Sandbox is not available\"\n            if self._init_error: error_message += f\": {self._init_error}\"\n            setup_guide = \"\\n\\nSetup: pip install e2b; export E2B_API_KEY='...'\"\n            return f\"Execution Failed: {error_message}{setup_guide}\"\n\n        output_summary = \"\"\n        try:\n            print(f\"--- E2B: Executing code synchronously ---\\n{code}\\n--------------------------------------\")\n            # Use the sandbox's run_code method\n            execution = self._sandbox.run_code(code)\n\n            # Build the result summary string\n            if execution.error:\n                output_summary += f\"Execution Failed!\\nError Name: {execution.error.name}\\nError Value: {execution.error.value}\\n\"\n                if execution.error.traceback:\n                     traceback_lines = execution.error.traceback.splitlines()\n                     output_summary += f\"Traceback (last few lines):\\n...\\n\" + \"\\n\".join(traceback_lines[-5:])\n            else:\n                 output_summary += \"Execution Successful.\\n\"\n            # logs.stdout / logs.stderr are lists of strings, so join them first\n            if execution.logs.stdout: output_summary += \"\\nSTDOUT:\\n\" + \"\".join(execution.logs.stdout)\n            if execution.logs.stderr: output_summary += \"\\nSTDERR:\\n\" + \"\".join(execution.logs.stderr)\n            if execution.results: output_summary += \"\\n\\nNote: Execution produced structured results (e.g., plots saved as files).\"\n            if not output_summary.strip() or output_summary.strip() == \"Execution Successful.\": output_summary = \"Code executed successfully with no textual output.\"\n\n            print(f\"--- E2B: Execution finished ---\\nResult Summary:\\n{output_summary[:500]}...\\n-----------------------------\")\n            return output_summary.strip()\n\n        except (SandboxException, TimeoutException) as e: # <--- catch E2B-specific exceptions\n             error_str = f\"Execution Failed (E2B Error)!\\nError Name: {getattr(e, 'name', type(e).__name__)}\\nDetails: {e}\"\n             # TimeoutException may not carry a traceback attribute; SandboxException usually does\n             tb = getattr(e, 'traceback', traceback.format_exc())\n             if tb:\n                 tb_lines = tb.splitlines()\n                 error_str += f\"\\nTraceback (last few lines):\\n...\\n\" + \"\\n\".join(tb_lines[-5:])\n             print(f\"ERROR during E2B execution: {error_str}\")\n             return error_str\n        except Exception as e: # any other error\n            error_str = f\"Execution Failed (Unexpected Error)!\\nError Type: {type(e).__name__}\\nError Details: {str(e)}\\nTraceback:\\n{traceback.format_exc()}\"\n            print(f\"ERROR during E2B execution: {error_str}\")\n            return error_str\n\n    async def _arun(self, code: str, **kwargs) -> str:\n        \"\"\"Execute Python code asynchronously and return a result summary string.\"\"\"\n        if not self._is_available or self._sandbox is None:\n             # ... (return an error message) ...\n             error_message = f\"E2B Sandbox is not available: {self._init_error}\"\n             return f\"Execution Failed: {error_message}\"\n\n        try:\n            loop = asyncio.get_running_loop()\n            import functools\n            # run_in_executor needs a zero-argument callable; _run is an instance\n            # method, so bind the code argument with functools.partial\n            sync_run_with_args = functools.partial(self._run, code=code, **kwargs)\n\n            print(f\"--- E2B: Executing code asynchronously via executor ---\\n{code}\\n--------------------------------------\")\n            result_summary = await loop.run_in_executor(\n                None, sync_run_with_args\n            )\n            print(f\"--- E2B: Async execution finished ---\")\n            return result_summary\n        except Exception as e: # exceptions from run_in_executor or _run surface here\n            error_str = f\"Execution Failed (Async Wrapper Error)!\\nError Type: {type(e).__name__}\\nError Details: {str(e)}\"\n            # try to capture the traceback\n            tb = traceback.format_exc()\n            error_str += f\"\\nTraceback:\\n{tb}\"\n            print(f\"ERROR during E2B async execution: {error_str}\")\n            return error_str\n\n\n    def close(self):\n        \"\"\"Close the sandbox and release its resources.\"\"\"\n        if hasattr(self, \"_sandbox\") and self._is_available and self._sandbox is not None:\n            try:\n                print(\"Attempting to close E2B Sandbox...\")\n                self._sandbox.kill()\n                print(\"E2B Sandbox closed successfully.\")\n                self._is_available = False\n                self._sandbox = None\n            except (SandboxException, TimeoutException) as e: # catch E2B-specific exceptions\n                print(f\"Error closing E2B Sandbox (E2B Error): {e}\")\n            except Exception as e:\n                print(f\"An unexpected error occurred while closing E2B Sandbox: {e}\")\n\n    model_config = {\n        \"arbitrary_types_allowed\": True\n    }\n\n    # __del__ runs at object destruction and is not guaranteed to execute;\n    # do not rely on it to release resources\n    # def __del__(self): self.close()"
  },
  {
    "path": "core/tools/firecrawl_tool.py",
"content": "# File path: core/tools/firecrawl_tool.py (or wherever you keep your tools)\n\nimport os\nimport json # not returned directly, but may be used when handling metadata\nfrom typing import Dict, List, Literal, Optional, Tuple, Type, Union, Any # make sure these are imported\nfrom pydantic import BaseModel, Field, PrivateAttr # import PrivateAttr\nfrom langchain_core.callbacks import (\n    AsyncCallbackManagerForToolRun,\n    CallbackManagerForToolRun,\n)\nfrom langchain_core.tools import BaseTool\nfrom dotenv import load_dotenv\nload_dotenv()  # automatically load the .env file\n\n\n# Try to import FireCrawlLoader; flag it as unavailable on failure\ntry:\n    from langchain_community.document_loaders import FireCrawlLoader\n    FIRECRAWL_LOADER_AVAILABLE = True\nexcept ImportError:\n    FireCrawlLoader = None # type: ignore\n    FIRECRAWL_LOADER_AVAILABLE = False\n    print(\"Warning: langchain_community or firecrawl-py not installed; FireCrawlLoader unavailable.\")\n    print(\"Run: pip install -U langchain-community firecrawl-py\")\n\n# Define the input schema\nclass FireCrawlInput(BaseModel):\n    \"\"\"Input for the FireCrawl tool.\"\"\"\n    url: str = Field(description=\"URL to crawl or scrape\")\n    mode: str = Field(\n        default=\"scrape\", # <-- 'scrape' is the more commonly useful default mode\n        description=\"Mode: 'scrape' (single page), 'crawl' (multiple pages). Default: 'scrape'\",\n    )\n    # A params field could be added here if the LLM should control more parameters\n    # params: Optional[Dict[str, Any]] = Field(default=None, description=\"Optional dictionary of additional FireCrawl parameters (e.g., {'pageOptions': {'onlyMainContent': True}})\")\n\n\nclass FireCrawlTool(BaseTool):\n    \"\"\"\n    Tool that uses the FireCrawl API to crawl or scrape web content and return a summary.\n\n    Setup:\n        pip install -U langchain-community firecrawl-py\n        export FIRECRAWL_API_KEY=\"your-api-key\"\n\n    Instantiate:\n        tool = FireCrawlTool() # Reads API key from env\n        # Or explicitly: tool = FireCrawlTool(api_key=\"...\")\n\n    Invoke:\n        tool.invoke({\"url\": \"https://example.com\", \"mode\": \"scrape\"})\n    \"\"\"\n\n    name: str = \"firecrawl_web_content\" # a descriptive name is recommended\n    description: str = (\n        \"Fetches and extracts the main textual content from a given URL. \"\n        \"Use 'scrape' mode (default) for a single page, or 'crawl' mode to follow links (use sparingly). \"\n        \"Input should be a URL. Returns a textual summary of the content.\"\n    )\n    args_schema: Type[BaseModel] = FireCrawlInput\n\n    # --- Configuration attributes ---\n    # The API key can be passed via __init__, or left unset so the loader reads it from the environment\n    _api_key: Optional[str] = PrivateAttr(default=None) # PrivateAttr keeps it out of Pydantic validation\n    _api_url: Optional[str] = PrivateAttr(default=None)\n    # Default mode and params can be set in __init__, or handled in _run/_arun\n    default_mode: str = \"scrape\" # tool-level default mode\n    default_params: Dict[str, Any] = Field(default_factory=dict) # tool-level default parameters\n\n    # __init__ allows an api_key to be passed in (optional)\n    def __init__(self, api_key: Optional[str] = None, api_url: Optional[str] = None,\n                 mode: str = \"scrape\", params: Optional[Dict[str, Any]] = None, **kwargs):\n        super().__init__(**kwargs)\n        # In Pydantic V2, non-model fields must use PrivateAttr or be allowed via model_config\n        self._api_key = api_key\n        self._api_url = api_url\n        self.default_mode = mode\n        self.default_params = params or {}\n        # Check that the loader is available\n        if not FIRECRAWL_LOADER_AVAILABLE:\n            print(\"ERROR: FireCrawlLoader is unavailable. Please install required packages.\")\n\n    def _run(\n        self,\n        url: str,\n        mode: Optional[str] = None,\n        run_manager: Optional[CallbackManagerForToolRun] = None,\n    ) -> str: # <--- the return value must be a string\n        \"\"\"Fetch web page content synchronously.\"\"\"\n        if not FIRECRAWL_LOADER_AVAILABLE:\n            return \"Error: FireCrawlLoader is not available. Required packages might be missing.\"\n            \n        # Resolve the API key (instance attribute first, then environment variable)\n        key = self._api_key or os.getenv('FIRECRAWL_API_KEY')\n        if not key:\n             return \"Error: FIRECRAWL_API_KEY not found in environment variables or instantiation.\"\n        \n        # Optional debug output\n        print(f\"DEBUG [FireCrawlTool]: Running for URL='{url}', Mode='{mode or self.default_mode}'\")\n        # print(f\"DEBUG [FireCrawlTool]: Effective API Key = {'*' * (len(key) - 4) + key[-4:] if key else 'None'}\")\n\n        try:\n            current_mode = mode or self.default_mode\n            loader = FireCrawlLoader(\n                url=url,\n                api_key=key, # pass the resolved key\n                api_url=self._api_url, # instance attribute or None\n                mode=current_mode,\n                params=self.default_params, # instance default parameters\n            )\n\n            print(f\"--- Calling FireCrawl API (Sync) for: '{url}' ---\")\n            docs = loader.load()\n            print(f\"--- FireCrawl API call successful for: '{url}', received {len(docs)} document(s) ---\")\n\n            # --- Format the result as a string ---\n            if not docs:\n                return f\"FireCrawl successful but returned no content from {url} (Mode: {current_mode}). The page might be empty or restricted.\"\n\n            summary_parts = [f\"Content summary from {url} (Mode: {current_mode}):\"]\n            content_limit = 4000 # cap on total characters returned to the LLM (adjustable)\n            current_length = len(summary_parts[0])\n            doc_count = 0\n\n            for doc in docs:\n                 # If there are many documents, consider returning only the first one\n                 # if doc_count >= 1 and current_mode == 'scrape': break \n                 \n                 source_info = f\"\\n\\n--- Source: {doc.metadata.get('sourceURL', url)} ---\"\n                 page_content = doc.page_content or \"\"\n                 \n                 available_length = content_limit - current_length - len(source_info) - 20 # leave some headroom\n                 if available_length <= 0 and doc_count > 0: # content already added and no room left\n                      summary_parts.append(\"\\n\\n... (further content truncated)\")\n                      break\n\n                 content = source_info + \"\\n\" + page_content\n                 \n                 if len(content) > available_length:\n                      content = content[:available_length] + \"... 
(truncated)\"\n                 \n                 summary_parts.append(content)\n                 current_length += len(content)\n                 doc_count += 1\n                 if current_length >= content_limit: break # overall length limit reached\n\n            return \"\\n\".join(summary_parts).strip()\n            # --- End formatting ---\n\n        except Exception as e:\n            error_msg = f\"Error during FireCrawl for {url} (Mode: {mode or self.default_mode}): {repr(e)}\"\n            print(f\"ERROR: {error_msg}\")\n            return error_msg # return the error message string\n\n    async def _arun(\n        self,\n        url: str,\n        mode: Optional[str] = None,\n        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n    ) -> str: # <--- the return value must be a string\n        \"\"\"Fetch web page content asynchronously.\"\"\"\n        if not FIRECRAWL_LOADER_AVAILABLE:\n            return \"Error: FireCrawlLoader is not available.\"\n            \n        # Use the private attribute (self.api_key does not exist on this class)\n        key = self._api_key or os.getenv('FIRECRAWL_API_KEY')\n        if not key:\n             return \"Error: FIRECRAWL_API_KEY not found.\"\n        \n        print(f\"DEBUG [FireCrawlTool]: Running async for URL='{url}', Mode='{mode or self.default_mode}'\")\n\n        try:\n            current_mode = mode or self.default_mode\n            loader = FireCrawlLoader(\n                url=url, api_key=key, api_url=self._api_url,\n                mode=current_mode, params=self.default_params,\n            )\n\n            print(f\"--- Calling FireCrawl API (Async) for: '{url}' ---\")\n            # Load asynchronously via aload\n            docs = await loader.aload()\n            print(f\"--- FireCrawl API call successful for: '{url}', received {len(docs)} document(s) ---\")\n\n            # --- Format the result as a string (same logic as _run) ---\n            if not docs: return f\"FireCrawl successful but returned no content from {url} (Mode: {current_mode}).\"\n            summary_parts = [f\"Content summary from {url} (Mode: {current_mode}):\"]\n            content_limit = 4000\n            current_length = len(summary_parts[0])\n            doc_count = 0\n            for doc in docs:\n                 # if doc_count >= 1 and current_mode == 'scrape': break\n                 source_info = f\"\\n\\n--- Source: {doc.metadata.get('sourceURL', url)} ---\"\n                 page_content = doc.page_content or \"\"\n                 available_length = content_limit - current_length - len(source_info) - 20\n                 if available_length <= 0 and doc_count > 0:\n                      summary_parts.append(\"\\n\\n... (further content truncated)\")\n                      break\n                 content = source_info + \"\\n\" + page_content\n                 if len(content) > available_length: content = content[:available_length] + \"... (truncated)\"\n                 summary_parts.append(content)\n                 current_length += len(content)\n                 doc_count += 1\n                 if current_length >= content_limit: break\n            return \"\\n\".join(summary_parts).strip()\n            # --- End formatting ---\n\n        except Exception as e:\n            error_msg = f\"Error during Async FireCrawl for {url} (Mode: {mode or self.default_mode}): {repr(e)}\"\n            print(f\"ERROR: {error_msg}\")\n            return error_msg\n\n    # Pydantic V2: allow extra private attributes\n    model_config = {\n        \"arbitrary_types_allowed\": True\n    }"
  },
  {
    "path": "core/tools/registry.py",
"content": "from enum import Enum\nfrom typing import List, Dict, Union, Optional\nfrom langchain_core.tools import Tool\n\n# Tool category enum\nclass ToolCategory(Enum):\n    SEARCH = \"Search\"\n    CODE_INTERPRETER = \"Code Interpreter\"\n    WEB_BROWSING = \"Web Browsing\"\n    DATABASE = \"Database\"\n    FILE_SYSTEM = \"FileSystem\"\n    IMAGE_GENERATION = \"Image Generation\"\n    OTHER = \"Other\"\n\n# Global tool registry\n_registered_tools = {}\n\ndef register_tool(tool: Tool, category: ToolCategory) -> None:\n    \"\"\"Register a tool in the global registry with its category.\n    \n    If the tool name already exists, the existing registration is overwritten.\n    \"\"\"\n    if tool.name in _registered_tools:\n        print(f\"Warning: tool name {tool.name} already registered; overwriting existing registration\")\n    _registered_tools[tool.name] = {\n        \"tool\": tool,\n        \"category\": category\n    }\n\ndef get_registered_tools(as_dict: bool = False) -> Union[List[Tool], Dict[str, Dict]]:\n    \"\"\"Return all registered tools.\n    \n    Args:\n        as_dict: If True, return the raw registry dict; if False, return a list of tools.\n        \n    Returns:\n        The raw registry dict if as_dict is True, otherwise a list of tool instances.\n    \"\"\"\n    if as_dict:\n        return _registered_tools\n    return [info[\"tool\"] for info in _registered_tools.values()]\n\ndef get_tools_list() -> List[Tool]:\n    \"\"\"Return all registered tools as a list, ready for Agent initialization.\n    \n    Returns:\n        A list of all registered tool instances.\n    \"\"\"\n    return [info[\"tool\"] for info in _registered_tools.values()]\n\ndef get_tools_dict() -> Dict[str, Tool]:\n    \"\"\"Return a mapping from tool name to tool instance.\n    \n    Returns:\n        A dict mapping tool names to tool instances.\n    \"\"\"\n    return {name: info[\"tool\"] for name, info in _registered_tools.items()}\n\ndef get_tool(name: str) -> Optional[Dict]:\n    \"\"\"Look up a tool and its category by name.\n    \n    Args:\n        name: Tool name.\n        \n    Returns:\n        A dict with the tool and its category, or None if the tool does not exist.\n    \"\"\"\n    tool_info = _registered_tools.get(name)\n    if tool_info:\n        return {\n            \"tool\": tool_info[\"tool\"],\n            \"category\": tool_info[\"category\"].value\n        }\n    return None\n\ndef get_tool_instance(name: str) -> Optional[Tool]:\n    \"\"\"Get a tool instance directly by name.\n    \n    Args:\n        name: Tool name.\n        \n    Returns:\n        The tool instance, or None if the tool does not exist.\n    \"\"\"\n    tool_info = _registered_tools.get(name)\n    return tool_info[\"tool\"] if tool_info else None\n\ndef get_tools_by_category(category: ToolCategory, return_instances: bool = True) -> List[Union[str, Tool]]:\n    \"\"\"Return the tools in a given category.\n    \n    Args:\n        category: Tool category.\n        return_instances: If True, return tool instances; if False, return tool names.\n        \n    Returns:\n        A list of tool instances or tool names.\n    \"\"\"\n    if return_instances:\n        return [info[\"tool\"] for name, info in _registered_tools.items() if info[\"category\"] == category]\n    return [name for name, info in _registered_tools.items() if info[\"category\"] == category]"
  },
  {
    "path": "core/tools/replicate_flux_tool.py",
"content": "# File path: core/tools/replicate_flux_tool.py (or similar)\n\nimport os\nimport asyncio\nimport json\nfrom typing import Dict, Any, Optional, Type, List, Literal\nfrom pydantic import BaseModel, Field, PrivateAttr\nfrom langchain_core.tools import BaseTool\nfrom langchain_core.callbacks import (\n    AsyncCallbackManagerForToolRun,\n    CallbackManagerForToolRun,\n)\n\n# --- Replicate Client ---\ntry:\n    import replicate\n    REPLICATE_AVAILABLE = True\nexcept ImportError:\n    replicate = None # type: ignore\n    REPLICATE_AVAILABLE = False\n    print(\"Warning: 'replicate' package not installed (pip install replicate). ReplicateFluxImageTool will not work.\")\n\n# --- Tool Category (optional, for the Registry) ---\ntry:\n    from .registry import ToolCategory, register_tool\n    if not hasattr(ToolCategory, 'IMAGE_GENERATION'):\n         ToolCategory.IMAGE_GENERATION = ToolCategory.OTHER\n    category = ToolCategory.IMAGE_GENERATION\nexcept ImportError:\n    category = None\n    print(\"Tool registry not found. Cannot auto-register ReplicateFluxImageTool.\")\n\n\n# --- Input Schema based on flux-dev ---\nclass ReplicateFluxToolInput(BaseModel):\n    \"\"\"Input schema for the Replicate Flux Image Generator Tool.\"\"\"\n    prompt: str = Field(description=\"Required. Detailed text description of the image to be generated.\")\n    aspect_ratio: Literal[\"1:1\", \"16:9\", \"21:9\", \"3:2\", \"2:3\", \"4:5\", \"5:4\", \"3:4\", \"4:3\", \"9:16\", \"9:21\"] = Field(\n        default=\"1:1\", description=\"Aspect ratio for the generated image.\"\n    )\n    num_outputs: int = Field(\n        default=1, description=\"Number of images to generate (1-4).\", ge=1, le=4\n    )\n    guidance: float = Field(\n        default=3.0, description=\"Guidance scale (0-10).\", ge=0, le=10\n    )\n    num_inference_steps: int = Field(\n        default=28, description=\"Number of denoising steps (1-50). Lower is faster, lower quality.\", ge=1, le=50\n    )\n    seed: Optional[int] = Field(default=None, description=\"Random seed for reproducible generation.\")\n    # Add other relevant fields from the schema if needed, e.g., megapixels, output_format\n    # megapixels: Literal[\"1\", \"0.25\"] = Field(default=\"1\", description=\"Approximate megapixels for output.\")\n    # output_format: Literal[\"webp\", \"jpg\", \"png\"] = Field(default=\"webp\", description=\"Output image format.\")\n\n\n# --- Tool Class (with corrected return-value handling) ---\nclass ReplicateFluxImageTool(BaseTool):\n    \"\"\"Generates images using 'black-forest-labs/flux-dev' on Replicate.\"\"\"\n    name: str = \"replicate_flux_image_generator\"\n    description: str = (\n        \"Generates high-quality images based on a detailed text prompt using the Flux model on Replicate. \"\n        \"Specify 'prompt' and optionally other parameters like 'aspect_ratio'. \"\n        \"Returns a string containing the URL(s) of the generated image(s).\"\n    )\n    args_schema: Type[BaseModel] = ReplicateFluxToolInput\n    _client: Any = PrivateAttr(default=None)\n    _is_available: bool = PrivateAttr(default=False)\n    _init_error: Optional[str] = PrivateAttr(default=None)\n    model_identifier: str = \"black-forest-labs/flux-dev\"\n\n    def __init__(self, api_token: Optional[str] = None, model_id: Optional[str] = None, **kwargs):\n        \"\"\"Initialize the Replicate client.\"\"\"\n        super().__init__(**kwargs)\n        if not REPLICATE_AVAILABLE:\n            self._init_error = \"'replicate' package is not installed (pip install replicate)\"\n            print(f\"ERROR: {self._init_error}\")\n            return\n        token = api_token or os.getenv(\"REPLICATE_API_TOKEN\")\n        if not token:\n            self._init_error = \"REPLICATE_API_TOKEN not found in arguments or environment\"\n            print(f\"ERROR: {self._init_error}\")\n            return\n        try:\n            print(\"Initializing Replicate client...\")\n            self._client = replicate.Client(api_token=token)\n            print(\"Replicate client initialized successfully.\")\n            self._is_available = True\n            self._init_error = None\n            if model_id: self.model_identifier = model_id\n        except Exception as e:\n            self._init_error = f\"Failed to initialize Replicate client: {e}\"\n            print(f\"ERROR: {self._init_error}\")\n            self._is_available = False\n\n    def _run( self, run_manager: Optional[CallbackManagerForToolRun] = None, **kwargs: Any ) -> str:\n        \"\"\"Generates image(s) synchronously.\"\"\"\n        if not self._is_available or self._client is None:\n             error_message = f\"Replicate client unavailable: {self._init_error}\"\n             print(f\"ERROR: {error_message}\")\n             return f\"Error: {error_message}\"\n\n        input_data = {k: v for k, v in kwargs.items() if v is not None and k in self.args_schema.model_fields}\n        prompt_short = str(input_data.get('prompt', ''))[:100]\n        print(f\"--- TOOL CALL: {self.name} ---\")\n        print(f\"   Input: Prompt='{prompt_short}...', Args={ {k:v for k,v in input_data.items() if k != 'prompt'} }\")\n\n        try:\n            # output is expected to be a list of special objects (e.g., FileOutput) or URL strings\n            output: List[Any] = self._client.run(self.model_identifier, input=input_data)\n\n            if not output or not isinstance(output, list):\n                result_str = \"Image generation failed: Replicate API returned no output or unexpected format.\"\n                print(f\"   Warning: {result_str}\")\n                return f\"Error: {result_str}\"\n\n            # --- Extract URLs from the returned objects ---\n            image_urls: List[str] = []\n            for item in output:\n                if isinstance(item, str): # a URL string was returned directly\n                    image_urls.append(item)\n                elif hasattr(item, 'url') and isinstance(getattr(item, 'url'), str): # item has a .url attribute that is a string\n                    image_urls.append(getattr(item, 'url'))\n                elif hasattr(item, 'read'): # file-like objects need separate handling or an error\n                     print(f\"Warning: Received file-like object from Replicate, cannot directly get URL: {item}\")\n                     # Trying other attributes here would depend on the replicate library's concrete FileOutput type\n                else:\n                     print(f\"Warning: Unknown item type in Replicate output list: {type(item)}\")\n\n            if not image_urls:\n                 result_str = \"Image generation succeeded but failed to extract image URLs from the response.\"\n                 print(f\"   Warning: {result_str}\")\n                 return f\"Error: {result_str}\"\n            # --- End extraction ---\n\n            # Format the URL list as a string\n            url_list_str = \"\\n\".join(image_urls)\n            result_str = f\"Successfully generated {len(image_urls)} image(s):\\n{url_list_str}\"\n            print(f\"   Result: {result_str}\")\n            return result_str\n\n        except Exception as e: # catch Replicate API errors etc.\n            # If this is a ReplicateError, extract more specific details\n            error_detail = str(e)\n            if REPLICATE_AVAILABLE and isinstance(e, replicate.exceptions.ReplicateError):\n                 error_detail = f\"ReplicateError (Status: {e.status}): {e.title} - {e.detail}\"\n\n            error_msg = f\"Error calling Replicate API ({self.model_identifier}): {error_detail}\"\n            print(f\"   Error: {error_msg}\")\n            # traceback.print_exc() # uncomment while debugging\n            return f\"Error: {error_msg}\" # return the error message to the LLM\n\n    async def _arun( self, run_manager: Optional[AsyncCallbackManagerForToolRun] = None, **kwargs: Any ) -> str:\n        \"\"\"Generates image(s) asynchronously using run_in_executor.\"\"\"\n        if not self._is_available or self._client is None:\n             error_message = f\"Replicate client unavailable: {self._init_error}\"\n             print(f\"ERROR: {error_message}\")\n             return f\"Error: {error_message}\"\n\n        input_data = {k: v for k, v in kwargs.items() if v is not None and k in self.args_schema.model_fields}\n        prompt_short = str(input_data.get('prompt', ''))[:100]\n        print(f\"--- TOOL CALL (Async): {self.name} ---\")\n        print(f\"   Input: Prompt='{prompt_short}...', Args={ {k:v for k,v in input_data.items() if k != 'prompt'} }\")\n\n        try:\n            loop = asyncio.get_running_loop()\n            import functools\n            sync_call_with_args = functools.partial( self._client.run, self.model_identifier, input=input_data )\n            output: List[Any] = await loop.run_in_executor( None, sync_call_with_args )\n\n            if not output or not isinstance(output, list):\n                result_str = \"Async image generation failed: Replicate API returned no output or unexpected format.\"\n                print(f\"   Warning: {result_str}\")\n                return f\"Error: {result_str}\"\n\n            # --- Extract URLs from the returned objects (same logic as _run) ---\n            image_urls: List[str] = []\n            for item in output:\n                if isinstance(item, str): image_urls.append(item)\n                elif hasattr(item, 'url') and isinstance(getattr(item, 'url'), str): image_urls.append(getattr(item, 'url'))\n                else: print(f\"Warning: Unknown item type in async Replicate output list: {type(item)}\")\n            if not image_urls:\n                 result_str = \"Async image generation succeeded but failed to extract image URLs.\"\n                 print(f\"   Warning: {result_str}\")\n                 return f\"Error: {result_str}\"\n            # --- End extraction ---\n\n            url_list_str = \"\\n\".join(image_urls)\n            result_str = f\"Successfully generated {len(image_urls)} image(s) asynchronously:\\n{url_list_str}\"\n            print(f\"   Result: {result_str}\")\n            return result_str\n\n        except Exception as e: # catch Replicate API errors etc.\n            error_detail = str(e)\n            if REPLICATE_AVAILABLE and isinstance(e, replicate.exceptions.ReplicateError):\n                 error_detail = f\"ReplicateError (Status: {e.status}): {e.title} - {e.detail}\"\n            error_msg = f\"Error calling Replicate API asynchronously ({self.model_identifier}): {error_detail}\"\n            print(f\"   Error: {error_msg}\")\n            # traceback.print_exc()\n            return f\"Error: {error_msg}\"\n\n\n    def close(self):\n        \"\"\"Release resources if needed. The Replicate client normally requires no explicit close.\"\"\"\n        print(f\"Info: Replicate client for '{self.name}' does not require explicit closing.\")\n        pass # the Replicate client normally needs no explicit close\n\n    model_config = {\"arbitrary_types_allowed\": True}\n"
  },
  {
    "path": "core/utils/agent_utils.py",
"content": "import os\nfrom typing import Dict, Any, Optional, Literal\nfrom langchain_core.messages import AIMessage, ToolMessage\nimport inspect\n\ndef log_agent_actions(state: Dict[str, Any]) -> None:\n    \"\"\"Log the Agent's reasoning and actions.\n    \n    Prints the Agent's reasoning, tool calls, and tool results to the console,\n    which makes the Agent's behavior easier to observe and debug.\n    \n    Args:\n        state: State dict containing the message history\n    \"\"\"\n    print(\"\\n\" + \"=\" * 50)\n    print(\"Current state:\")\n    \n    # Print the latest message\n    if state.get(\"messages\") and len(state[\"messages\"]) > 0:\n        latest_message = state[\"messages\"][-1]\n        \n        if isinstance(latest_message, AIMessage):\n            print(f\"\\nAI reasoning:\")\n            print(latest_message.content)\n            \n            # If there are tool calls, print them\n            if latest_message.tool_calls:\n                print(f\"\\nTool calls:\")\n                for tool_call in latest_message.tool_calls:\n                    print(f\"- Tool: {tool_call['name']}\")\n                    print(f\"- Args: {tool_call['args']}\")\n        \n        elif isinstance(latest_message, ToolMessage):\n            print(f\"\\nTool result:\")\n            print(f\"- Tool: {latest_message.name}\")\n            # Print only the first 500 characters to keep the output short\n            content = latest_message.content\n            if len(content) > 500:\n                content = content[:500] + \"... (remainder omitted)\"\n            print(f\"- Result: {content}\")\n    \n    print(\"=\" * 50)\n\ndef save_agent_graph(\n    agent, \n    caller_file_path: Optional[str] = None,\n    output_format: Literal[\"png\", \"svg\", \"mermaid\"] = \"png\",\n    custom_filename: Optional[str] = None,\n    output_dir: Optional[str] = None\n) -> str:\n    \"\"\"Save the Agent's graph to a directory.\n    \n    Renders the Agent's graph and saves it; by default the file name matches\n    the caller's file name (without extension).\n    \n    Args:\n        agent: Agent object; must have a get_graph method\n        caller_file_path: Caller's file path; derived from the call stack if None\n        output_format: Output format, one of \"png\", \"svg\", or \"mermaid\"\n        custom_filename: Custom file name (without extension), used if provided\n        output_dir: Custom output directory, used if provided\n        \n    Returns:\n        str: Path of the saved graph\n    \"\"\"\n    # If no caller file path was provided, derive it from the call stack\n    if caller_file_path is None:\n        # Get the caller's stack frame\n        frame = inspect.currentframe().f_back\n        caller_file_path = frame.f_code.co_filename\n    \n    try:\n        # Get the graph object\n        graph = agent.get_graph()\n    except AttributeError:\n        raise ValueError(\"The provided agent object has no get_graph method\") \n    except Exception as e:\n        raise RuntimeError(f\"Error while getting the graph: {str(e)}\")\n    \n    # Determine the file name\n    if custom_filename:\n        file_name_without_ext = custom_filename\n    else:\n        # Current file name (without path and extension)\n        current_file = os.path.basename(caller_file_path)\n        file_name_without_ext = os.path.splitext(current_file)[0]\n    \n    # Determine the output directory\n    if output_dir:\n        graph_dir = output_dir\n    else:\n        # If the caller lives under examples, use examples/graphs;\n        # otherwise create a graphs subdirectory next to the caller\n        if 'examples' in caller_file_path:\n            base_dir = os.path.dirname(os.path.dirname(caller_file_path))\n            graph_dir = os.path.join(base_dir, \"examples\", \"graphs\")\n        else:\n            graph_dir = os.path.join(os.path.dirname(caller_file_path), \"graphs\")\n    \n    # Make sure the graphs directory exists\n    os.makedirs(graph_dir, exist_ok=True)\n    \n    # Generate the file for the requested output format\n    try:\n        if output_format == \"png\":\n            image_data = graph.draw_mermaid_png()\n            graph_path = os.path.join(graph_dir, f\"{file_name_without_ext}.png\")\n            with open(graph_path, \"wb\") as f:\n                f.write(image_data)\n                \n        elif output_format == \"svg\":\n            image_data = graph.draw_mermaid_svg()\n            graph_path = os.path.join(graph_dir, f\"{file_name_without_ext}.svg\")\n            with open(graph_path, \"wb\") as f:\n                f.write(image_data)\n                \n        elif output_format == \"mermaid\":\n            mermaid_code = graph.draw_mermaid()\n            graph_path = os.path.join(graph_dir, f\"{file_name_without_ext}.mmd\")\n            with open(graph_path, \"w\") as f:\n                f.write(mermaid_code)\n        else:\n            raise ValueError(f\"Unsupported output format: {output_format}\")\n            \n    except Exception as e:\n        raise RuntimeError(f\"Error while saving the graph: {str(e)}\")\n        \n    print(f\"Graph saved to {graph_path}\")\n    return graph_path\n\ndef visualize_agent(agent, **kwargs):\n    \"\"\"Convenience wrapper for visualizing an Agent.\n    \n    A thin wrapper around save_agent_graph for quick Agent visualization.\n    \n    Args:\n        agent: Agent object\n        **kwargs: Additional arguments passed to save_agent_graph\n        \n    Returns:\n        str: Path of the saved graph\n    \"\"\"\n    # Get the caller's stack frame\n    frame = inspect.currentframe().f_back\n    caller_file_path = frame.f_code.co_filename\n    \n    return save_agent_graph(agent, caller_file_path=caller_file_path, **kwargs)"
  },
  {
    "path": "core/utils/timezone.py",
    "content": "from datetime import datetime\nimport os\nfrom typing import Optional\nfrom zoneinfo import ZoneInfo\n\ndef get_timezone() -> str:\n    \"\"\"Get timezone from environment variable or use default.\n    \n    Returns:\n        str: Timezone string (e.g. 'Asia/Shanghai')\n    \"\"\"\n    return os.getenv('TZ', 'UTC')\n\ndef get_formatted_date(timezone: Optional[str] = None) -> str:\n    \"\"\"Get formatted date string with timezone awareness.\n    \n    Args:\n        timezone: Optional timezone string. If not provided, uses TZ from env or UTC.\n        \n    Returns:\n        str: Formatted date string (e.g. 'Today's Date: Mon, Jan 01, 2024')\n    \"\"\"\n    tz = ZoneInfo(timezone or get_timezone())\n    now = datetime.now(tz)\n    return f\"Today's Date: {now.strftime('%a, %b %d, %Y')}\"\n\ndef get_current_time(timezone: Optional[str] = None) -> datetime:\n    \"\"\"Get current time with timezone awareness.\n    \n    Args:\n        timezone: Optional timezone string. If not provided, uses TZ from env or UTC.\n        \n    Returns:\n        datetime: Current time with timezone information\n    \"\"\"\n    tz = ZoneInfo(timezone or get_timezone())\n    return datetime.now(tz)"
  },
  {
    "path": "examples/01_supervisor_test.py",
"content": "from langgraph.prebuilt import create_react_agent\nfrom core.agents.supervisor import create_supervisor\nfrom langchain_openai import ChatOpenAI\nfrom langgraph.func import entrypoint, task\nfrom langgraph.graph import add_messages\nfrom dotenv import load_dotenv\nfrom core.utils.agent_utils import visualize_agent\n\nload_dotenv()  # automatically load the .env file\n# 1. Initialize the LLM\nmodel = ChatOpenAI(model=\"gpt-4o-mini\")\n\n##############################################################################\n# Agent 1: Joke Generator (Functional API)\n##############################################################################\n\n@task\ndef generate_joke(messages):\n    \"\"\"Generate a short joke (no tool calls).\"\"\"\n    system_message = {\n        \"role\": \"system\", \n        \"content\": \"You are a witty comedian. Write a short joke.\"\n    }\n    # Call model.invoke directly, prepending system_message to the user messages\n    msg = model.invoke([system_message] + messages)\n    return msg\n\n@entrypoint()\ndef joke_agent(state):\n    # Invoke the functional task defined above\n    joke = generate_joke(state['messages']).result()\n    # Append the result to the message list\n    messages = add_messages(state[\"messages\"], [joke])\n    return {\"messages\": messages}\n\njoke_agent.name = \"joke_agent\"\n\n##############################################################################\n# Agent 2: Research Expert (Graph API)\n##############################################################################\n\ndef web_search(query: str) -> str:\n    \"\"\"Search the web for information. (Mocked data here)\"\"\"\n    return (\n        \"Here are the headcounts for each of the FAANG companies in 2024:\\n\"\n        \"1. **Facebook (Meta)**: 67,317 employees.\\n\"\n        \"2. **Apple**: 164,000 employees.\\n\"\n        \"3. **Amazon**: 1,551,000 employees.\\n\"\n        \"4. **Netflix**: 14,000 employees.\\n\"\n        \"5. **Google (Alphabet)**: 181,269 employees.\"\n    )\n\nresearch_agent = create_react_agent(\n    model=model,\n    tools=[web_search],\n    name=\"research_expert\",\n    # The prompt frames it as a research agent that can call web_search\n    prompt=(\n        \"You are a world-class researcher. You have access to a 'web_search(query: str)' tool. \"\n        \"Do not do any complicated math, just provide factual info from the web_search if needed.\"\n    ),\n)\n\n##############################################################################\n# Supervisor Workflow\n##############################################################################\n\n# Let the Supervisor call joke_agent and research_expert over multiple turns in one conversation.\n# The prompt tells it: if the user asks for \"a joke first, then some information\", call joke_agent\n# first and research_expert second, so both agents run sequentially within one user request.\n# This is the simplest example, meant only to demonstrate basic create_supervisor usage; the\n# workflow is not wrapped as an Agent and has no planning capability.\nworkflow = create_supervisor(\n    [research_agent, joke_agent],\n    model=model,\n    prompt=(\n        \"You are the overall supervisor. You manage two specialized agents:\\n\"\n        \"1) joke_agent: for telling jokes.\\n\"\n        \"2) research_expert: for factual or data-related questions.\\n\\n\"\n        \"If the user wants a joke AND some research data in the same query, \"\n        \"you MUST call joke_agent first, get the joke, then call research_expert for the data. \"\n        \"After both calls, provide a final combined response. \"\n        \"Do not call more than one agent in a single LLM message; do it step by step.\"\n    ),\n)\n\n# Compile into an invocable \"App\"\nagent = workflow.compile()\n# Save a visualization of the graph\n# visualize_agent(agent)\n##############################################################################\n# Test: a single user request asking for \"a joke first, then Apple's 2024 headcount\", combined\n##############################################################################\nresult = agent.invoke({\n    \"messages\": [\n        {\n            \"role\": \"user\",\n            \"content\": (\n                \"Hi! I'd like to start with a short joke to lighten the mood, \"\n                \"then please check Apple's headcount in 2024. Summarize both.\"\n            )\n        }\n    ]\n})\n\n##############################################################################\n# Print the final conversation messages\n##############################################################################\nfor m in result[\"messages\"]:\n    m.pretty_print()"
  },
  {
    "path": "examples/02_supervisor_agent_test.py",
    "content": "from langgraph.prebuilt import create_react_agent\nfrom core.agents.base.react_agent import ReactAgent\nfrom core.agents.react_supervisor_agent import SupervisorAgent\nfrom langchain_openai import ChatOpenAI\nfrom langgraph.func import entrypoint, task\nfrom langgraph.graph import add_messages\nfrom dotenv import load_dotenv\nload_dotenv()  # 自动加载 .env 文件\n# 1. 初始化大模型\nmodel = ChatOpenAI(model=\"gpt-4o-mini\")\n\n##############################################################################\n# Agent 1: Joke Generator (Functional API)\n##############################################################################\n\n@task\ndef generate_joke(messages):\n    \"\"\"Generate a short joke (no tool calls).\"\"\"\n    system_message = {\n        \"role\": \"system\", \n        \"content\": \"You are a witty comedian. Write a short joke.\"\n    }\n    # 直接调用 model.invoke，拼接 system_message + 用户消息\n    msg = model.invoke([system_message] + messages)\n    return msg\n\n@entrypoint()\ndef joke_agent(state):\n    # 调用上面的函数型任务\n    joke = generate_joke(state['messages']).result()\n    # 将产物插入消息列表\n    messages = add_messages(state[\"messages\"], [joke])\n    return {\"messages\": messages}\n\njoke_agent.name = \"joke_agent\"\n\n##############################################################################\n# Agent 2: Research Expert (Graph API)\n##############################################################################\n\ndef web_search(query: str) -> str:\n    \"\"\"Search the web for information. (Mocked data here)\"\"\"\n    return (\n        \"Here are the headcounts for each of the FAANG companies in 2024:\\n\"\n        \"1. **Facebook (Meta)**: 67,317 employees.\\n\"\n        \"2. **Apple**: 164,000 employees.\\n\"\n        \"3. **Amazon**: 1,551,000 employees.\\n\"\n        \"4. **Netflix**: 14,000 employees.\\n\"\n        \"5. 
**Google (Alphabet)**: 181,269 employees.\"\n    )\n\n# research_agent = create_react_agent(\n#     model=model,\n#     tools=[web_search],\n#     name=\"research_expert\",\n#     # Prompt 告诉它是一个研究型 Agent，可调用 web_search\n#     prompt=(\n#         \"You are a world-class researcher. You have access to a 'web_search(query: str)' tool. \"\n#         \"Do not do any complicated math, just provide factual info from the web_search if needed.\"\n#     ),\n# )\nresearch_agent = ReactAgent(\n    model=model,\n    tools=[web_search],\n    name=\"research_expert\",\n    # Prompt 告诉它是一个研究型 Agent，可调用 web_search\n    prompt=(\n        \"You are a world-class researcher. You have access to a 'web_search(query: str)' tool. \"\n        \"Do not do any complicated math, just provide factual info from the web_search if needed.\"\n    ),\n)\n\n##############################################################################\n# 使用 SupervisorAgent 类替代直接调用 create_supervisor 函数\n##############################################################################\n\n# 创建 SupervisorAgent 实例（注册 research_agent 与 joke_agent，与下方测试请求保持一致）\nsupervisor = SupervisorAgent(\n    agents=[research_agent, joke_agent],\n    model=model,\n    # prompt=(\n    #     \"You are the overall supervisor. You manage two specialized agents:\\n\"\n    #     \"1) joke_agent: for telling jokes.\\n\"\n    #     \"2) research_expert: for factual or data-related questions.\\n\\n\"\n    #     \"If the user wants a joke AND some research data in the same query, \"\n    #     \"you MUST call joke_agent first, get the joke, then call research_expert for the data. \"\n    #     \"After both calls, provide a final combined response. 
\"\n    #     \"Do not call more than one agent in a single LLM message; do it step by step.\"\n    # ),\n)\n##############################################################################\n# 测试：单个用户请求想要 \"先讲笑话，再查Apple的2024年人数\" 并合并结果\n##############################################################################\nresult = supervisor.invoke({\n    \"messages\": [\n        {\n            \"role\": \"user\",\n            \"content\": (\n                \"Hi! I'd like to start with a short joke to lighten the mood, \"\n                \"then please check Apple's headcount in 2024. Summarize both.\"\n            )\n        }\n    ]\n})\n\n##############################################################################\n# 打印最终对话消息\n##############################################################################\nfor m in result[\"messages\"]:\n    m.pretty_print()"
  },
  {
    "path": "examples/03_tavily_tools_test.py",
    "content": "import os\nfrom langgraph.prebuilt import create_react_agent\nfrom core.agents.react_supervisor_agent import SupervisorAgent\nfrom langchain_openai import ChatOpenAI\nfrom langgraph.func import entrypoint, task\nfrom langgraph.graph import add_messages\nfrom langchain_community.tools import TavilySearchResults\nfrom dotenv import load_dotenv\nload_dotenv()  # 自动加载 .env 文件\n# 1. 初始化大模型\nmodel = ChatOpenAI(model=\"gpt-4o-mini\")\n\n##############################################################################\n# Agent 1: Joke Generator (Functional API)\n##############################################################################\n\n@task\ndef generate_joke(messages):\n    \"\"\"Generate a short joke (no tool calls).\"\"\"\n    system_message = {\n        \"role\": \"system\", \n        \"content\": \"You are a witty comedian. Write a short joke.\"\n    }\n    # 直接调用 model.invoke，拼接 system_message + 用户消息\n    msg = model.invoke([system_message] + messages)\n    return msg\n\n@entrypoint()\ndef joke_agent(state):\n    # 调用上面的函数型任务\n    joke = generate_joke(state['messages']).result()\n    # 将产物插入消息列表\n    messages = add_messages(state[\"messages\"], [joke])\n    return {\"messages\": messages}\n\njoke_agent.name = \"joke_agent\"\n\n##############################################################################\n# Agent 2: Research Expert with Tavily Search (Graph API)\n##############################################################################\n\n# 创建Tavily搜索工具\ntavily_search = TavilySearchResults(\n    max_results=3,\n    include_answer=True,\n    include_raw_content=False,\n    include_images=False,\n    search_depth=\"advanced\"\n)\n\nresearch_agent = create_react_agent(\n    model=model,\n    tools=[tavily_search],\n    name=\"research_expert\",\n    # Prompt 告诉它是一个研究型 Agent，可调用 tavily_search\n    prompt=(\n        \"You are a world-class researcher. 
You have access to the 'tavily_search_results_json' tool \"\n        \"which can search the web for real-time information. \"\n        \"When asked a question, use this tool to find accurate and up-to-date information. \"\n        \"Summarize the search results in a clear and concise manner. \"\n        \"Always cite your sources by including the URLs from the search results.\"\n    ),\n)\n\n##############################################################################\n# 使用 SupervisorAgent 类来协调多个智能体\n##############################################################################\n\n# 创建 SupervisorAgent 实例\nsupervisor = SupervisorAgent(\n    agents=[research_agent, joke_agent],\n    model=model,\n    prompt=(\n        \"You are the overall supervisor. You manage two specialized agents:\\n\"\n        \"1) joke_agent: for telling jokes.\\n\"\n        \"2) research_expert: for factual or data-related questions using real-time web search.\\n\\n\"\n        \"If the user wants a joke, call joke_agent.\\n\"\n        \"If the user wants factual information or research data, call research_expert.\\n\"\n        \"If the user wants a joke AND some research data in the same query, \"\n        \"you MUST call joke_agent first, get the joke, then call research_expert for the data. \"\n        \"After both calls, provide a final combined response. 
\"\n        \"Do not call more than one agent in a single LLM message; do it step by step.\"\n    ),\n)\n\n# 编译得到一个可调用的\"App\"\napp = supervisor.compile()\n\n# # 获取当前文件名（不含路径和扩展名）\n# current_file = os.path.basename(__file__)\n# file_name_without_ext = os.path.splitext(current_file)[0]\n# graph_dir = os.path.join(os.path.dirname(__file__), \"graphs\")\n\n# # 确保 graphs 目录存在\n# os.makedirs(graph_dir, exist_ok=True)\n\n# # 生成与文件名一致的图片名，并保存到 examples/graphs 目录\n# image_data = app.get_graph().draw_mermaid_png()\n# graph_path = os.path.join(graph_dir, f\"{file_name_without_ext}.png\")\n\n# # 保存图片（如果已存在则覆盖）\n# with open(graph_path, \"wb\") as f:\n#     f.write(image_data)\n\n# print(f\"Image saved as {graph_path}\")\n\n# 使用示例\nif __name__ == \"__main__\":\n    # 示例1：只询问笑话\n    result1 = app.invoke({\"messages\": [{\"role\": \"user\", \"content\": \"讲个笑话\"}]})\n    print(\"\\n示例1 - 只询问笑话:\")\n    for message in result1[\"messages\"]:\n        message.pretty_print()\n    \n    # 示例2：只询问研究数据\n    result2 = app.invoke({\"messages\": [{\"role\": \"user\", \"content\": \"谁是现任美国总统？\"}]})\n    print(\"\\n示例2 - 只询问研究数据:\")\n    for message in result2[\"messages\"]:\n        message.pretty_print()\n    \n    # 示例3：同时询问笑话和研究数据\n    result3 = app.invoke({\"messages\": [{\"role\": \"user\", \"content\": \"讲个关于人工智能的笑话，然后告诉我什么是大型语言模型\"}]})\n    print(\"\\n示例3 - 同时询问笑话和研究数据:\")\n    for message in result3[\"messages\"]:\n        message.pretty_print()"
  },
  {
    "path": "examples/04_react_agent_test.py",
    "content": "import os\nimport json\nfrom langgraph.prebuilt import create_react_agent\nfrom langchain_openai import ChatOpenAI\nfrom langchain_community.tools import TavilySearchResults\nfrom typing import Dict, Any\nfrom langchain_core.messages import AIMessage, HumanMessage, ToolMessage\nfrom dotenv import load_dotenv\nfrom core.utils.agent_utils import log_agent_actions, save_agent_graph\nload_dotenv()  # 自动加载 .env 文件\n# 初始化大模型\nmodel = ChatOpenAI(model=\"gpt-4o-mini\")\n\n##############################################################################\n# 创建Tavily搜索工具 - 配置为深度搜索模式\n##############################################################################\n\ntavily_search = TavilySearchResults(\n    max_results=3,\n    include_answer=True,\n    include_raw_content=True,  # 包含原始内容，便于分析\n    include_images=False,\n    search_depth=\"advanced\"  # 使用高级搜索深度\n)\n\n##############################################################################\n# 创建REACT Agent - 使用更详细的提示词引导多步思考\n##############################################################################\n\nreact_agent = create_react_agent(\n    model=model,\n    tools=[tavily_search],\n    name=\"tesla_research_expert\",\n    # 提示词强调分解问题、多步思考和综合信息\n    prompt=(\n        \"你是一位专业的研究分析师，擅长分析复杂问题并提供深入见解。\\n\"\n        \"你有一个强大的工具'tavily_search_results_json'可以搜索网络获取实时信息。\\n\\n\"\n        \"当面对复杂问题时，请遵循以下REACT方法论：\\n\"\n        \"1. 分解问题：将复杂问题分解为更小的子问题\\n\"\n        \"2. 制定计划：确定需要搜索哪些信息，以及搜索的顺序\\n\"\n        \"3. 执行搜索：使用tavily_search_results_json工具执行搜索\\n\"\n        \"4. 分析结果：分析搜索结果，确定是否需要进一步搜索\\n\"\n        \"5. 
综合信息：将所有搜索结果综合成一个连贯的回答\\n\\n\"\n        \"重要提示：\\n\"\n        \"- 不要一次性搜索过于宽泛的问题\\n\"\n        \"- 对于复杂问题，进行多次有针对性的搜索\\n\"\n        \"- 每次搜索后评估结果，决定下一步行动\\n\"\n        \"- 在最终回答中引用来源，包括搜索结果中的URL\\n\"\n        \"- 清晰地展示你的思考过程，包括问题分解和计划制定\\n\"\n    ),\n)\n\n# 保存Agent图表\n# save_agent_graph(react_agent)\n\n##############################################################################\n# 测试：查询\"特斯拉2025年的发展预期\"\n##############################################################################\n\nif __name__ == \"__main__\":\n    # 复杂查询测试\n    print(\"\\n开始测试REACT Agent处理复杂查询的能力...\\n\")\n    print(\"查询: 特斯拉2025年的发展预期\")\n    \n    # 定义输入\n    inputs = {\n        \"messages\": [\n            {\"role\": \"user\", \"content\": \"分析特斯拉2025年的发展预期，包括新车型计划、销量目标、技术创新和市场扩张战略。\"}\n        ]\n    }\n    \n    # 使用stream方法逐步获取中间状态\n    final_state = None\n    for partial_state in react_agent.stream(inputs, stream_mode=\"values\"):\n        # 保存最终状态\n        final_state = partial_state\n        \n        # 获取消息列表\n        messages = partial_state.get(\"messages\", [])\n        if not messages:\n            continue\n            \n        # 获取最新消息\n        latest_message = messages[-1]\n        \n        # 使用原有的log_agent_actions函数记录状态\n        log_agent_actions({\"messages\": [latest_message]})\n    \n    # 打印最终回答\n    print(\"\\n最终回答:\")\n    if final_state and final_state.get(\"messages\"):\n        for message in final_state[\"messages\"]:\n            if isinstance(message, AIMessage) and not message.tool_calls:\n                message.pretty_print()"
  },
  {
    "path": "examples/05_react_agent_user_input.py",
    "content": "import asyncio\nimport os\nfrom typing import Dict, Any\n\nfrom langchain_openai import ChatOpenAI\nfrom langchain_core.messages import AIMessage, HumanMessage, ToolMessage\n\nfrom core.agents.base.react_agent import ReactAgent\nfrom langchain_community.tools import TavilySearchResults\nfrom dotenv import load_dotenv\nload_dotenv()  # 自动加载 .env 文件\n# 初始化大模型\nmodel = ChatOpenAI(model=\"gpt-4o-mini\")\n\n##############################################################################\n# 创建一个记录Agent思考过程的函数\n##############################################################################\n\ndef log_agent_actions(state: Dict[str, Any]) -> None:\n    \"\"\"记录Agent的思考过程和行动\"\"\"\n    print(\"\\n\" + \"=\" * 50)\n    print(\"当前状态:\")\n    \n    # 打印最新消息\n    if state.get(\"messages\") and len(state[\"messages\"]) > 0:\n        latest_message = state[\"messages\"][-1]\n        \n        if isinstance(latest_message, AIMessage):\n            print(f\"\\nAI思考过程:\")\n            print(latest_message.content)\n            \n            # 如果有工具调用，打印工具调用信息\n            if latest_message.tool_calls:\n                print(f\"\\n工具调用:\")\n                for tool_call in latest_message.tool_calls:\n                    print(f\"- 工具: {tool_call['name']}\")\n                    print(f\"- 参数: {tool_call['args']}\")\n        \n        elif isinstance(latest_message, ToolMessage):\n            print(f\"\\n工具返回结果:\")\n            print(f\"- 工具: {latest_message.name}\")\n            # 只打印结果的前200个字符，避免输出过长\n            content = latest_message.content\n            if len(content) > 200:\n                content = content[:200] + \"... 
(更多内容省略)\"\n            print(f\"- 结果: {content}\")\n    \n    print(\"=\" * 50)\n\n##############################################################################\n# 创建Tavily搜索工具 - 配置为深度搜索模式\n##############################################################################\n\ntavily_search = TavilySearchResults(\n    max_results=3,\n    include_answer=True,\n    include_raw_content=True,  # 包含原始内容，便于分析\n    include_images=False,\n    search_depth=\"advanced\"  # 使用高级搜索深度\n)\n\n##############################################################################\n# 创建ReactAgent实例\n##############################################################################\n\ndef create_react_agent_instance():\n    \"\"\"创建并返回ReactAgent实例\"\"\"\n    react_agent = ReactAgent(\n        model=model,\n        tools=[tavily_search],\n        name=\"research_assistant\",\n        # 提示词强调分解问题、多步思考和综合信息\n        prompt=(\n            \"你是一位专业的研究分析师，擅长分析复杂问题并提供深入见解。\\n\"\n            \"你有一个强大的工具'tavily_search_results_json'可以搜索网络获取实时信息。\\n\\n\"\n            \"当面对复杂问题时，请遵循以下REACT方法论：\\n\"\n            \"1. 分解问题：将复杂问题分解为更小的子问题\\n\"\n            \"2. 制定计划：确定需要搜索哪些信息，以及搜索的顺序\\n\"\n            \"3. 执行搜索：使用tavily_search_results_json工具执行搜索\\n\"\n            \"4. 分析结果：分析搜索结果，确定是否需要进一步搜索\\n\"\n            \"5. 
综合信息：将所有搜索结果综合成一个连贯的回答\\n\\n\"\n            \"重要提示：\\n\"\n            \"- 不要一次性搜索过于宽泛的问题\\n\"\n            \"- 对于复杂问题，进行多次有针对性的搜索\\n\"\n            \"- 每次搜索后评估结果，决定下一步行动\\n\"\n            \"- 在最终回答中引用来源，包括搜索结果中的URL\\n\"\n            \"- 清晰地展示你的思考过程，包括问题分解和计划制定\\n\"\n        ),\n    )\n    \n    # 获取图对象并保存\n    agent = react_agent.compile()    \n    return agent\n\n##############################################################################\n# 主函数 - 处理用户输入\n##############################################################################\n\nasync def main():\n    # 创建ReactAgent实例\n    react_agent = create_react_agent_instance()\n    \n    while True:\n        # 获取用户输入\n        user_input = await asyncio.to_thread(input, \"\\n请输入您想了解的问题 (输入'退出'结束): \")\n        \n        # 检查是否退出\n        if user_input.lower() in ['退出', 'exit', 'quit']:\n            print(\"\\n感谢使用，再见！\")\n            break\n        \n        # 准备初始状态\n        initial_state = {\n            \"messages\": [HumanMessage(content=user_input)]\n        }\n        \n        try:\n            print(\"\\n=== 🔍 开始研究 ===\\n\")\n            \n            # 使用stream方法逐步获取中间状态\n            final_state = None\n            for partial_state in react_agent.stream(initial_state, stream_mode=\"values\"):\n                # 保存最终状态\n                final_state = partial_state\n                \n                # 获取消息列表\n                messages = partial_state.get(\"messages\", [])\n                if not messages:\n                    continue\n                    \n                # 获取最新消息\n                latest_message = messages[-1]\n                \n                # 使用log_agent_actions函数记录状态\n                log_agent_actions({\"messages\": [latest_message]})\n            \n            # 打印最终回答\n            print(\"\\n最终回答:\")\n            if final_state and final_state.get(\"messages\"):\n                for message in final_state[\"messages\"]:\n                    if isinstance(message, AIMessage) and not 
message.tool_calls:\n                        print(\"\\n\" + \"=\" * 80)\n                        print(message.content)\n                        print(\"=\" * 80 + \"\\n\")\n        \n        except Exception as e:\n            print(f\"\\n处理查询时出错: {e}\")\n\n##############################################################################\n# 程序入口\n##############################################################################\n\nif __name__ == \"__main__\":\n    print(\"\\n欢迎使用ReactAgent研究助手！\")\n    print(\"这个助手可以帮助您研究各种问题，使用Tavily搜索工具获取最新信息。\")\n    print(\"您可以输入任何问题，助手将使用REACT方法论进行分析和回答。\")\n    \n    # 运行主函数\n    asyncio.run(main())"
  },
  {
    "path": "examples/06_web_extraction_tools_test.py",
    "content": "import os\nimport sys\nfrom langgraph.prebuilt import create_react_agent\nfrom langchain_openai import ChatOpenAI\nimport json\nfrom typing import Dict, Any\nfrom langchain_core.messages import AIMessage, HumanMessage, ToolMessage\nfrom dotenv import load_dotenv\nfrom langchain_community.tools import JinaSearch\nfrom core.tools.firecrawl_tool import FireCrawlTool\n\n\nload_dotenv()  # 自动加载 .env 文件\n# 初始化大模型\nmodel = ChatOpenAI(model=\"gpt-4o-mini\")\n\n\n\n##############################################################################\n# 创建一个记录Agent思考过程的函数\n##############################################################################\n\ndef log_agent_actions(state: Dict[str, Any]) -> None:\n    \"\"\"记录Agent的思考过程和行动\"\"\"\n    print(\"\\n\" + \"=\" * 50)\n    print(\"当前状态:\")\n    \n    # 打印最新消息\n    if state.get(\"messages\") and len(state[\"messages\"]) > 0:\n        latest_message = state[\"messages\"][-1]\n        \n        if isinstance(latest_message, AIMessage):\n            print(f\"\\nAI思考过程:\")\n            print(latest_message.content)\n            \n            # 如果有工具调用，打印工具调用信息\n            if latest_message.tool_calls:\n                print(f\"\\n工具调用:\")\n                for tool_call in latest_message.tool_calls:\n                    print(f\"- 工具: {tool_call['name']}\")\n                    print(f\"- 参数: {tool_call['args']}\")\n        \n        elif isinstance(latest_message, ToolMessage):\n            print(f\"\\n工具返回结果:\")\n            print(f\"- 工具: {latest_message.name}\")\n            # 只打印结果的前200个字符，避免输出过长\n            content = latest_message.content\n            if len(content) > 300:\n                content = content[:300] + \"... 
(更多内容省略)\"\n            print(f\"- 结果: {content}\")\n    \n    print(\"=\" * 50)\n\n##############################################################################\n# 创建Web提取工具 - FireCrawl用于网站结构，Jina用于内容提取\n##############################################################################\n\n# 创建FireCrawl工具 - 用于网站结构分析\nfirecrawl_tool = FireCrawlTool(\n    mode=\"crawl\",  # 使用爬取模式\n    params={\"max_pages\": 10}  # 限制爬取页面数量\n)\n\n# 创建Jina Reader工具 - 用于内容提取\njina_reader_tool = JinaSearch()\n\n##############################################################################\n# 创建REACT Agent - 使用更详细的提示词引导多步思考\n##############################################################################\n\nreact_agent = create_react_agent(\n    model=model,\n    tools=[firecrawl_tool, jina_reader_tool],\n    name=\"web_extraction_expert\",\n    # 提示词强调分解问题、多步思考和综合信息\n    prompt=(\n        \"你是一位专业的网页内容分析专家，擅长提取和分析网站结构与内容。\\n\"\n        \"你有两个强大的工具:\\n\"\n        \"1. 'firecrawl_tool': 用于爬取网站结构和下级页面\\n\"\n        \"2. 'jina_reader_tool': 用于从特定URL提取结构化内容，获取干净可读的内容\\n\\n\"\n        \"当面对网站分析任务时，请遵循以下方法论:\\n\"\n        \"1. 分析任务: 明确需要从网站获取什么信息\\n\"\n        \"2. 网站结构分析: 使用firecrawl_tool爬取网站结构，了解可用页面\\n\"\n        \"3. 内容提取: 根据网站结构，使用jina_reader_tool从关键页面提取内容\\n\"\n        \"4. 
信息整合: 将提取的内容整合成有条理的分析结果\\n\\n\"\n        \"重要提示:\\n\"\n        \"- 先使用firecrawl_tool了解网站结构，再使用jina_reader_tool提取具体内容\\n\"\n        \"- 对于大型网站，先分析网站结构，再有针对性地选择重要页面进行内容提取\\n\"\n        \"- 每次工具使用后评估结果，决定下一步行动\\n\"\n        \"- 在最终回答中提供结构化的分析，包括网站组织方式和关键内容摘要\\n\"\n        \"- 清晰地展示你的思考过程，包括为什么选择特定页面进行分析\\n\"\n    ),\n)\n\n##############################################################################\n# 测试：分析LangGraph文档网站\n##############################################################################\n\nif __name__ == \"__main__\":\n    # 测试网站分析\n    print(\"\\n开始测试Web提取Agent分析网站的能力...\\n\")\n    print(\"分析目标: LangGraph文档网站\")\n    \n    # 定义输入\n    inputs = {\n        \"messages\": [\n            {\"role\": \"user\", \"content\": \"爬取LangGraph文档网站的每个章节的内容(https://langchain-ai.github.io/langgraph/how-tos/) \"}\n        ]\n    }\n    \n    # 使用stream方法逐步获取中间状态\n    final_state = None\n    for partial_state in react_agent.stream(inputs, stream_mode=\"values\"):\n        # 保存最终状态\n        final_state = partial_state\n        \n        # 获取消息列表\n        messages = partial_state.get(\"messages\", [])\n        if not messages:\n            continue\n            \n        # 获取最新消息\n        latest_message = messages[-1]\n        \n        # 使用原有的log_agent_actions函数记录状态\n        log_agent_actions({\"messages\": [latest_message]})\n    \n    # 打印最终回答\n    print(\"\\n最终分析结果:\")\n    if final_state and final_state.get(\"messages\"):\n        for message in final_state[\"messages\"]:\n            if isinstance(message, AIMessage) and not message.tool_calls:\n                message.pretty_print()"
  },
  {
    "path": "examples/07_web_extraction_with_filesystem.py",
    "content": "import os\nimport sys\nimport json\nimport asyncio\nfrom datetime import datetime\nfrom typing import Dict, Any, List\n\nfrom langchain_openai import ChatOpenAI\nfrom langchain_core.messages import AIMessage, HumanMessage, ToolMessage\nfrom langchain_community.agent_toolkits import FileManagementToolkit\nfrom langgraph.prebuilt import create_react_agent\nfrom langgraph.checkpoint.memory import MemorySaver\nfrom dotenv import load_dotenv\nfrom langchain_community.tools import TavilySearchResults\nfrom core.agents.react_supervisor_agent import SupervisorAgent\n\nload_dotenv()  # 自动加载 .env 文件\n\n# 初始化大模型\nmodel = ChatOpenAI(model=\"gpt-4o-mini\")\n\n##############################################################################\n# 创建一个记录Agent思考过程的函数\n##############################################################################\n\ndef log_agent_actions(state: Dict[str, Any]) -> None:\n    \"\"\"记录Agent的思考过程和行动\"\"\"\n    print(\"\\n\" + \"=\" * 50)\n    print(\"当前状态:\")\n    \n    # 打印最新消息\n    if state.get(\"messages\") and len(state[\"messages\"]) > 0:\n        latest_message = state[\"messages\"][-1]\n        \n        if isinstance(latest_message, AIMessage):\n            print(f\"\\nAI思考过程:\")\n            # 限制内容长度，避免过长输出\n            content = latest_message.content\n            if len(content) > 500:\n                content = content[:250] + \"\\n... (内容过长，已截断) ...\\n\" + content[-250:]\n            print(content)\n            \n            # 如果有工具调用，打印工具调用信息\n            if latest_message.tool_calls:\n                print(f\"\\n工具调用:\")\n                for tool_call in latest_message.tool_calls:\n                    print(f\"- 工具: {tool_call['name']}\")\n                    # 限制参数输出长度\n                    args = str(tool_call['args'])\n                    if len(args) > 100:\n                        args = args[:100] + \"... 
(参数过长，已截断)\"\n                    print(f\"- 参数: {args}\")\n        \n        elif isinstance(latest_message, ToolMessage):\n            print(f\"\\n工具返回结果:\")\n            print(f\"- 工具: {latest_message.name}\")\n            # 只打印结果的前200个字符，避免输出过长\n            content = latest_message.content\n            if len(content) > 200:\n                content = content[:100] + \"\\n... (更多内容省略) ...\\n\" + content[-100:]\n            print(f\"- 结果: {content}\")\n    \n    print(\"=\" * 50)\n\n##############################################################################\n# 创建Web提取工具\n##############################################################################\n# 创建Tavily搜索工具\n\ntavily_search = TavilySearchResults(\n    max_results=3,\n    include_answer=True,\n    include_raw_content=False,\n    include_images=False,\n    search_depth=\"advanced\"\n)\n\n##############################################################################\n# 创建文件系统工具 - 用于保存提取的内容\n##############################################################################\n\n# 设置文件系统工具的根目录为examples/output\noutput_dir = os.path.join(os.path.dirname(__file__), \"output\")\nos.makedirs(output_dir, exist_ok=True)\n\n# 创建文件系统工具集\nfilesystem_toolkit = FileManagementToolkit(\n    root_dir=output_dir,\n    selected_tools=[\"write_file\", \"read_file\", \"list_directory\"]\n)\n\n# 获取文件系统工具\nfilesystem_tools = filesystem_toolkit.get_tools()\n\n##############################################################################\n# 创建Research Agent - 用于网站内容提取\n##############################################################################\n\nresearch_agent = create_react_agent(\n    model=model,\n    tools=[tavily_search],\n    name=\"research_agent\",\n    # 提示词强调分解问题、多步思考和综合信息\n    prompt=(\n        \"You are a world-class researcher. You have access to the 'tavily_search_results_json' tool \"\n        \"which can search the web for real-time information. 
\"\n        \"When asked a question, use this tool to find accurate and up-to-date information. \"\n        \"Summarize the search results in a clear and concise manner. \"\n        \"Always cite your sources by including the URLs from the search results.\"\n    ),\n    debug=False)\n\n##############################################################################\n# 创建FileSystem Agent - 用于保存提取的内容\n##############################################################################\n\nfilesystem_agent = create_react_agent(\n    model=model,\n    tools=filesystem_tools,\n    name=\"filesystem_agent\",\n    # 提示词强调文件操作和内容保存\n    prompt=(\n        \"你是一位专业的文件系统管理专家，负责将网页内容保存到本地文件系统。\\n\"\n        \"你有以下工具可以使用:\\n\"\n        \"1. 'write_file': 用于将内容写入文件\\n\"\n        \"2. 'read_file': 用于读取文件内容\\n\"\n        \"3. 'list_directory': 用于列出目录内容\\n\\n\"\n        \"当接收到保存内容的请求时，请遵循以下方法论:\\n\"\n        \"1. 分析内容: 确定内容的类型和结构\\n\"\n        \"2. 确定文件名: 根据内容类型和来源创建合适的文件名\\n\"\n        \"3. 保存内容: 使用write_file工具将内容保存到文件中\\n\"\n        \"4. 验证保存: 使用read_file或list_directory工具验证内容已正确保存\\n\\n\"\n        \"重要提示:\\n\"\n        \"- 为文件创建有意义的名称，包含日期和内容描述\\n\"\n        \"- 对于结构化数据，优先使用JSON格式保存\\n\"\n        \"- 对于文本内容，使用TXT或MD格式保存\\n\"\n        \"- 确保文件名不包含非法字符\\n\"\n        \"- 在保存前，检查是否已存在同名文件，避免覆盖重要内容\\n\"\n    ),\n)\n\n##############################################################################\n# 创建Supervisor Agent - 协调Research Agent和FileSystem Agent\n##############################################################################\n# 创建内存存储器用于保存对话状态\nmemory_saver = MemorySaver()\nsupervisor = SupervisorAgent(\n    agents=[research_agent, filesystem_agent],\n    model=model,\n    prompt=(\n        \"你是一个智能助手的总协调者，负责管理两个专业智能体:\\n\"\n        \"1) research_agent: 网页内容分析专家，可以爬取和分析网站内容\\n\"\n        \"2) filesystem_agent: 文件系统管理专家，可以将内容保存到本地文件系统\\n\\n\"\n        \"你的工作流程如下:\\n\"\n        \"1. 分析用户请求，确定是需要网页内容提取还是文件操作，或两者都需要\\n\"\n        \"2. 如果需要网页内容提取，调用research_agent获取网页内容\\n\"\n        \"3. 
如果需要将提取的内容保存到文件，调用filesystem_agent进行保存\\n\"\n        \"4. 如果用户同时需要提取内容并保存，先调用research_agent获取内容，再调用filesystem_agent保存内容\\n\\n\"\n        \"重要规则:\\n\"\n        \"- 不要在一个消息中同时调用多个智能体，必须一步一步来\\n\"\n        \"- 当调用filesystem_agent保存内容时，必须提供完整的内容和建议的文件名\\n\"\n        \"- 确保在最终回复中告知用户内容已成功提取和/或保存\\n\"\n        \"- 如果用户只想提取内容而不保存，只调用research_agent\\n\"\n        \"- 如果用户只想操作文件而不提取新内容，只调用filesystem_agent\\n\\n\"\n        \"上下文管理指南:\\n\"\n        \"- 当处理大型网站或多个页面时，指导research_agent采用分批处理策略\\n\"\n        \"- 对于大型内容提取任务，先让research_agent获取网站结构，再逐步处理各个页面\\n\"\n        \"- 当发现research_agent返回的内容过大时，指导它进行内容摘要或分批处理\\n\"\n        \"- 如果research_agent一次性尝试处理过多页面导致上下文超限，指导它减少并行处理的页面数量\\n\"\n        \"- 对于需要保存的大型内容，考虑将其分割成多个小文件，而不是一个大文件\\n\"\n        \"- 在处理多页面内容时，可以采用先保存再处理的策略，减轻上下文负担\\n\"\n    ),\n    checkpointer=memory_saver\n)\n\n\n\n# 编译得到一个可调用的\"App\"，添加checkpointer实现记忆功能\napp = supervisor.compile()\n\n# # 获取当前文件名（不含路径和扩展名）\n# current_file = os.path.basename(__file__)\n# file_name_without_ext = os.path.splitext(current_file)[0]\n# graph_dir = os.path.join(os.path.dirname(__file__), \"graphs\")\n\n# # 确保 graphs 目录存在\n# os.makedirs(graph_dir, exist_ok=True)\n\n# # 生成与文件名一致的图片名，并保存到 examples/graphs 目录\n# image_data = app.get_graph().draw_mermaid_png()\n# graph_path = os.path.join(graph_dir, f\"{file_name_without_ext}.png\")\n\n# # 保存图片（如果已存在则覆盖）\n# with open(graph_path, \"wb\") as f:\n#     f.write(image_data)\n\n# print(f\"图表已保存为 {graph_path}\")\n\n##############################################################################\n# 主函数 - 处理用户输入\n##############################################################################\n\nasync def main():\n    # 创建一个固定的thread_id用于保持对话上下文\n    thread_id = \"user_session_1\"\n    \n    # 创建配置对象，包含thread_id\n    config = {\"configurable\": {\"thread_id\": thread_id}}\n    \n    print(\"\\n当前会话ID:\", thread_id)\n    print(\"(所有对话将在同一会话中保持上下文记忆)\")\n    \n    while True:\n        # 获取用户输入\n        user_input = await asyncio.to_thread(input, 
\"\\n请输入您想了解的问题 (输入'退出'结束): \")\n        \n        # 检查是否退出\n        if user_input.lower() in ['退出', 'exit', 'quit']:\n            print(\"\\n感谢使用，再见！\")\n            break\n        \n        # 准备初始状态 - 只包含当前用户消息\n        initial_state = {\n            \"messages\": [HumanMessage(content=user_input)]\n        }\n        \n        try:\n            print(\"\\n=== 🔍 开始研究 ===\\n\")\n            \n            # 使用stream方法逐步获取中间状态，传入config以使用相同的thread_id\n            for partial_state in app.stream(initial_state, config, stream_mode=\"values\"):\n                # 保存最终状态\n                final_state = partial_state\n                \n                # 获取消息列表\n                messages = partial_state.get(\"messages\", [])\n                if not messages:\n                    continue\n                    \n                # 获取最新消息\n                latest_message = messages[-1]\n                \n                # 使用log_agent_actions函数记录状态\n                log_agent_actions({\"messages\": [latest_message]})\n        \n        except Exception as e:\n            print(f\"\\n处理查询时出错: {e}\")\n            print(\"可能是由于上下文长度超出限制，请尝试缩小查询范围后重试\")\n\n##############################################################################\n# 程序入口\n##############################################################################\n\nif __name__ == \"__main__\":\n    print(\"\\n欢迎使用具有记忆功能的网页爬取助手！\")\n    print(\"本助手可以记住您之前的对话内容，实现连续对话体验。\")\n    print(\"您可以询问之前提到过的内容，助手会根据上下文理解您的问题。\")\n    \n    # 运行主函数\n    asyncio.run(main())"
  },
  {
    "path": "examples/08_react_agent_tool_registry_test.py",
    "content": "import os\nimport sys\nimport json\nfrom typing import Dict, Any, List\n\nfrom langchain_openai import ChatOpenAI\nfrom langchain_core.messages import AIMessage, HumanMessage, ToolMessage\nfrom langchain_community.tools import JinaSearch, WikipediaQueryRun\nfrom langchain_community.utilities import WikipediaAPIWrapper\nfrom dotenv import load_dotenv\n\nfrom core.agents.base.react_agent import ReactAgent\nfrom core.tools import register_direct_tool\nfrom core.tools.registry import get_registered_tools, ToolCategory\nfrom core.tools.firecrawl_tool import FireCrawlTool\n\nload_dotenv()  # 自动加载 .env 文件\n\n##############################################################################\n# 工具注册和ReactAgent测试 - 美联储研究任务\n##############################################################################\n\ndef print_separator(title):\n    \"\"\"打印分隔符\"\"\"\n    print(\"\\n\" + \"=\" * 80)\n    print(f\" {title} \".center(80, \"=\"))\n    print(\"=\" * 80)\n\n##############################################################################\n# 创建一个记录Agent思考过程的函数\n##############################################################################\n\ndef log_agent_actions(state: Dict[str, Any]) -> None:\n    \"\"\"记录Agent的思考过程和行动\"\"\"\n    print(\"\\n\" + \"-\" * 50)\n    print(\"当前状态:\")\n    \n    # 打印最新消息\n    if state.get(\"messages\") and len(state[\"messages\"]) > 0:\n        latest_message = state[\"messages\"][-1]\n        \n        if isinstance(latest_message, AIMessage):\n            print(f\"\\nAI思考过程:\")\n            print(latest_message.content)\n            \n            # 如果有工具调用，打印工具调用信息\n            if latest_message.tool_calls:\n                print(f\"\\n工具调用:\")\n                for tool_call in latest_message.tool_calls:\n                    print(f\"- 工具: {tool_call['name']}\")\n                    print(f\"- 参数: {tool_call['args']}\")\n        \n        elif isinstance(latest_message, ToolMessage):\n            print(f\"\\n工具返回结果:\")\n            
print(f\"- 工具: {latest_message.name}\")\n            # 只打印结果的前200个字符，避免输出过长\n            content = latest_message.content\n            if len(content) > 200:\n                content = content[:200] + \"... (更多内容省略)\"\n            print(f\"- 结果: {content}\")\n    \n    print(\"-\" * 50)\n\n##############################################################################\n# 注册工具\n##############################################################################\n\nprint_separator(\"注册搜索工具\")\n\n# 创建JinaSearch工具实例\njina_search = JinaSearch()\n\n# 创建Wikipedia工具实例\n# wiki_tool = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())\n\nfirecrawl_tool = FireCrawlTool()\n\n# 使用register_direct_tool函数注册工具\nregister_direct_tool(jina_search)\nregister_direct_tool(firecrawl_tool)\n# 注册自定义工具 - FireCrawlTool\n\n# 获取所有已注册的工具（以字典格式）\nregistered_tools = get_registered_tools(as_dict=True)\n\n# 打印所有已注册的工具\nprint(\"\\n已注册的工具:\")\nfor name, info in registered_tools.items():\n    print(f\"- {name} (类别: {info['category'].value})\")\n\n##############################################################################\n# 创建ReactAgent实例\n##############################################################################\n\nprint_separator(\"创建ReactAgent实例\")\n\n# 初始化大模型\nmodel = ChatOpenAI(model=\"gpt-4o-mini\")\n\n# 从注册表中只获取搜索类工具列表\nfrom core.tools.registry import get_tools_by_category, ToolCategory\ntools_list = get_tools_by_category(ToolCategory.SEARCH)\n\n# 创建ReactAgent实例\nreact_agent = ReactAgent(\n    model=model,\n    tools=tools_list,\n    name=\"fed_research_agent\",\n    # 提示词强调分解问题、多步思考和综合信息\n    prompt=(\n        \"你是一位专业的经济研究分析师，擅长分析复杂的经济问题并提供深入见解。\\n\"\n        \"你有多个强大的工具可以搜索网络获取实时信息：\\n\"\n        \"当面对复杂问题时，请遵循以下方法论：\\n\"\n        \"1. 分解问题：将复杂问题分解为更小的子问题\\n\"\n        \"2. 制定计划：确定需要搜索哪些信息，以及使用哪些工具\\n\"\n        \"3. 执行搜索：使用适当的工具执行搜索\\n\"\n        \"4. 分析结果：分析搜索结果，确定是否需要进一步搜索\\n\"\n        \"5. 
综合信息：将所有搜索结果综合成一个连贯的回答\\n\\n\"\n        \"重要提示：\\n\"\n        \"- 每次搜索后评估结果，决定下一步行动\\n\"\n        \"- 在最终回答中引用来源\\n\"\n        \"- 清晰地展示你的思考过程，包括问题分解和计划制定\\n\"\n    ),\n)\n\n# agent = react_agent.compile()\n# 获取图对象\n# graph = agent.get_graph()\n\n# # 获取当前文件名（不含路径和扩展名）\n# current_file = os.path.basename(__file__)\n# file_name_without_ext = os.path.splitext(current_file)[0]\n# graph_dir = os.path.join(os.path.dirname(__file__), \"graphs\")\n\n# # 确保 graphs 目录存在\n# os.makedirs(graph_dir, exist_ok=True)\n\n# # 生成与文件名一致的图片名，并保存到 examples/graphs 目录\n# image_data = graph.draw_mermaid_png()\n# graph_path = os.path.join(graph_dir, f\"{file_name_without_ext}.png\")\n\n# # 保存图片（如果已存在则覆盖）\n# with open(graph_path, \"wb\") as f:\n#     f.write(image_data)\n\n# print(f\"工作流图已保存为 {graph_path}\")\n\n##############################################################################\n# 测试：查询\"美联储的详细介绍和它如何影响全球经济\"\n##############################################################################\n\nif __name__ == \"__main__\":\n    print_separator(\"开始测试ReactAgent处理美联储研究任务\")\n    print(\"\\n查询: 美联储的详细介绍和它如何影响全球经济\")\n    \n    # 定义输入\n    inputs = {\n        \"messages\": [\n            HumanMessage(content=\"请提供2025年美联储(Federal Reserve)的详细介绍，包括其历史、结构、职能，以及它如何通过货币政策影响全球经济。\")\n        ]\n    }\n    result = react_agent.run(inputs)\n##############################################################################\n# 打印最终对话消息\n##############################################################################\n    for m in result[\"messages\"]:\n        m.pretty_print()"
  },
  {
    "path": "examples/09_e2b_code_interpreter_test.py",
    "content": "import os\nimport sys\nimport json\nfrom typing import Dict, Any, List\n\nfrom langchain_openai import ChatOpenAI\nfrom langchain_core.messages import AIMessage, HumanMessage, ToolMessage\nfrom dotenv import load_dotenv\n\nfrom core.agents.base.react_agent import ReactAgent\nfrom core.tools.registry import get_registered_tools, ToolCategory, get_tools_by_category\nfrom core.tools.e2b_tool import E2BCodeInterpreterTool\n\nload_dotenv()  # 自动加载 .env 文件\n\n##############################################################################\n# E2B代码解释器工具测试\n##############################################################################\n\ndef print_separator(title):\n    \"\"\"打印分隔符\"\"\"\n    print(\"\\n\" + \"=\" * 80)\n    print(f\" {title} \".center(80, \"=\"))\n    print(\"=\" * 80)\n\n##############################################################################\n# 检查E2B代码解释器工具是否已注册\n##############################################################################\n\nprint_separator(\"检查E2B代码解释器工具是否已注册\")\n\n# 获取所有已注册的工具（以字典格式）\nregistered_tools = get_registered_tools(as_dict=True)\n\n# 打印所有已注册的工具\nprint(\"\\n已注册的工具:\")\nfor name, info in registered_tools.items():\n    print(f\"- {name} (类别: {info['category'].value})\")\n\n# 检查E2B代码解释器工具是否已注册\ne2b_tool_name = \"e2b_code_interpreter\"\nif e2b_tool_name in registered_tools:\n    print(f\"\\nE2B代码解释器工具已成功注册: {e2b_tool_name}\")\nelse:\n    print(f\"\\n警告: E2B代码解释器工具未注册\")\n    # 手动注册E2B代码解释器工具\n    print(\"尝试手动注册E2B代码解释器工具...\")\n    try:\n        from core.tools.registry import register_tool\n        e2b_tool = E2BCodeInterpreterTool()\n        register_tool(e2b_tool, ToolCategory.CODE_INTERPRETER)\n        print(f\"已手动注册工具: {e2b_tool.name}\")\n    except Exception as e:\n        print(f\"手动注册E2B代码解释器工具失败: {e}\")\n\n##############################################################################\n# 
创建ReactAgent实例\n##############################################################################\n\nprint_separator(\"创建ReactAgent实例\")\n\n# 初始化大模型\nmodel = ChatOpenAI(model=\"gpt-4o-mini\")\n\n# 从注册表中只获取代码解释器类工具列表\ntools_list = get_tools_by_category(ToolCategory.CODE_INTERPRETER)\n\n# 打印获取到的代码解释器工具\nprint(\"\\n获取到的代码解释器工具:\")\nfor tool in tools_list:\n    print(f\"- {tool.name}: {tool.description}\")\n\n# 创建ReactAgent实例\nreact_agent = ReactAgent(\n    model=model,\n    tools=tools_list,\n    name=\"code_interpreter_agent\",\n    # 提示词强调使用代码解释器工具进行数据分析和可视化\n    prompt=(\n        \"你是一位专业的数据分析师和编程助手，擅长使用Python进行数据分析和可视化。\\n\"\n        \"你有多个强大的代码执行工具可以使用：\\n\"\n        \"- e2b_code_interpreter: 用于执行Python代码，支持数据分析和可视化\\n\"\n        \"当面对编程和数据分析问题时，请遵循以下方法论：\\n\"\n        \"1. 分析问题：理解用户的需求和问题本质\\n\"\n        \"2. 制定计划：确定解决方案和需要使用的工具\\n\"\n        \"3. 编写代码：使用适当的工具编写和执行代码\\n\"\n        \"4. 分析结果：解释代码执行结果，提供见解\\n\"\n        \"5. 优化方案：如有必要，优化代码或提供改进建议\\n\\n\"\n        \"重要提示：\\n\"\n        \"- 优先使用e2b_code_interpreter工具执行Python代码\\n\"\n        \"- 对于数据分析和可视化任务，确保导入必要的库（如pandas, matplotlib, numpy等）\\n\"\n        \"- 对于不存在的库，工具会自动尝试使用pip install进行安装\\n\"\n        \"- 在代码中添加详细注释，解释关键步骤\\n\"\n        \"- 执行代码后，解释结果含义和见解\\n\"\n    ),\n)\n\n# 编译Agent\nagent = react_agent.compile()\n\n# # 获取图对象\n# graph = agent.get_graph()\n\n# # 获取当前文件名（不含路径和扩展名）\n# current_file = os.path.basename(__file__)\n# file_name_without_ext = os.path.splitext(current_file)[0]\n# graph_dir = os.path.join(os.path.dirname(__file__), \"graphs\")\n\n# # 确保 graphs 目录存在\n# os.makedirs(graph_dir, exist_ok=True)\n\n# # 生成与文件名一致的图片名，并保存到 examples/graphs 目录\n# image_data = graph.draw_mermaid_png()\n# graph_path = os.path.join(graph_dir, f\"{file_name_without_ext}.png\")\n\n# # 保存图片（如果已存在则覆盖）\n# with open(graph_path, \"wb\") as f:\n#     f.write(image_data)\n\n# print(f\"工作流图已保存为 {graph_path}\")\n\n##############################################################################\n# 
测试：使用E2B代码解释器执行简单的数据分析任务\n##############################################################################\n\nif __name__ == \"__main__\":\n    print_separator(\"开始测试ReactAgent使用E2B代码解释器\")\n    print(\"\\n查询: 使用Python生成一个简单的正弦波图形\")\n    \n    # 定义输入\n    inputs = {\n        \"messages\": [\n            HumanMessage(content=\"使用Python生成一个简单的正弦波图形，如果有找不到的模块，需要自动安装\")\n        ]\n    }\n    result = agent.invoke(inputs)\n\n    for m in result[\"messages\"]:\n        m.pretty_print()"
  },
  {
    "path": "examples/10_financial_data_analysis.py",
    "content": "import os\nimport sys\nimport json\nfrom typing import Dict, Any, List\n\nfrom langchain_openai import ChatOpenAI\nfrom langchain_core.messages import AIMessage, HumanMessage, ToolMessage\nfrom dotenv import load_dotenv\n\nfrom core.agents.base.react_agent import ReactAgent\nfrom core.tools.registry import get_registered_tools, ToolCategory, get_tools_by_category\nfrom core.tools.e2b_tool import E2BCodeInterpreterTool\n\nload_dotenv()  # 自动加载 .env 文件\n\n##############################################################################\n# 财务数据分析报表生成示例\n##############################################################################\n\ndef print_separator(title):\n    \"\"\"打印分隔符\"\"\"\n    print(\"\\n\" + \"=\" * 80)\n    print(f\" {title} \".center(80, \"=\"))\n    print(\"=\" * 80)\n\n##############################################################################\n# 创建一个记录Agent思考过程的函数\n##############################################################################\n\ndef log_agent_actions(state: Dict[str, Any]) -> None:\n    \"\"\"记录Agent的思考过程和行动\"\"\"\n    print(\"\\n\" + \"-\" * 50)\n    print(\"当前状态:\")\n    \n    # 打印最新消息\n    if state.get(\"messages\") and len(state[\"messages\"]) > 0:\n        latest_message = state[\"messages\"][-1]\n        \n        if isinstance(latest_message, AIMessage):\n            print(f\"\\nAI思考过程:\")\n            print(latest_message.content)\n            \n            # 如果有工具调用，打印工具调用信息\n            if latest_message.tool_calls:\n                print(f\"\\n工具调用:\")\n                for tool_call in latest_message.tool_calls:\n                    print(f\"- 工具: {tool_call['name']}\")\n                    print(f\"- 参数: {tool_call['args']}\")\n        \n        elif isinstance(latest_message, ToolMessage):\n            print(f\"\\n工具返回结果:\")\n            print(f\"- 工具: {latest_message.name}\")\n            content = latest_message.content\n            print(f\"- 结果: {content}\")\n    \n    print(\"-\" * 
50)\n\n##############################################################################\n# 检查E2B代码解释器工具是否已注册\n##############################################################################\n\nprint_separator(\"检查E2B代码解释器工具是否已注册\")\n\n# 获取所有已注册的工具（以字典格式）\nregistered_tools = get_registered_tools(as_dict=True)\n\n# 打印所有已注册的工具\nprint(\"\\n已注册的工具:\")\nfor name, info in registered_tools.items():\n    print(f\"- {name} (类别: {info['category'].value})\")\n\n# 检查E2B代码解释器工具是否已注册\ne2b_tool_name = \"e2b_code_interpreter\"\nif e2b_tool_name in registered_tools:\n    print(f\"\\nE2B代码解释器工具已成功注册: {e2b_tool_name}\")\nelse:\n    print(f\"\\n警告: E2B代码解释器工具未注册\")\n    # 手动注册E2B代码解释器工具\n    print(\"尝试手动注册E2B代码解释器工具...\")\n    try:\n        from core.tools.registry import register_tool\n        e2b_tool = E2BCodeInterpreterTool()\n        register_tool(e2b_tool, ToolCategory.CODE_INTERPRETER)\n        print(f\"已手动注册工具: {e2b_tool.name}\")\n    except Exception as e:\n        print(f\"手动注册E2B代码解释器工具失败: {e}\")\n\n##############################################################################\n# 创建ReactAgent实例\n##############################################################################\n\nprint_separator(\"创建ReactAgent实例\")\n\n# 初始化大模型\nmodel = ChatOpenAI(model=\"gpt-4o-mini\")\n\n# 从注册表中只获取代码解释器类工具列表\ntools_list = get_tools_by_category(ToolCategory.CODE_INTERPRETER)\n\n# 打印获取到的代码解释器工具\nprint(\"\\n获取到的代码解释器工具:\")\nfor tool in tools_list:\n    print(f\"- {tool.name}: {tool.description}\")\n\n# 创建ReactAgent实例\nreact_agent = ReactAgent(\n    model=model,\n    tools=tools_list,\n    name=\"financial_data_analyst\",\n    # 提示词强调使用代码解释器工具进行财务数据分析和可视化\n    prompt=(\n        \"你是一位专业的财务数据分析师，擅长使用Python进行财务数据分析和可视化。\\n\"\n        \"你有强大的代码执行工具可以使用：\\n\"\n        \"- e2b_code_interpreter: 用于执行Python代码，支持数据分析和可视化\\n\\n\"\n        \"当面对财务数据分析问题时，请遵循以下方法论：\\n\"\n        \"1. 分析问题：理解用户的需求和问题本质\\n\"\n        \"2. 制定计划：确定解决方案和需要使用的工具\\n\"\n        \"3. 编写代码：使用适当的工具编写和执行代码\\n\"\n        \"4. 
分析结果：解释代码执行结果，提供财务见解\\n\"\n        \"5. 优化方案：如有必要，优化代码或提供改进建议\\n\\n\"\n        \"重要提示：\\n\"\n        \"- 优先使用e2b_code_interpreter工具执行Python代码\\n\"\n        \"- 对于财务数据分析和可视化任务，确保导入必要的库（如pandas, matplotlib, numpy等）\\n\"\n        \"- 对于不存在的库，工具会自动尝试使用pip install进行安装\\n\"\n        \"- 在代码中添加详细注释，解释关键步骤\\n\"\n        \"- 执行代码后，解释结果含义和财务见解\\n\"\n    ),\n)\n\n# # 编译Agent\n# agent = react_agent.compile()\n\n# # 获取图对象\n# graph = agent.get_graph()\n\n# # 获取当前文件名（不含路径和扩展名）\n# current_file = os.path.basename(__file__)\n# file_name_without_ext = os.path.splitext(current_file)[0]\n# graph_dir = os.path.join(os.path.dirname(__file__), \"graphs\")\n\n# # 确保 graphs 目录存在\n# os.makedirs(graph_dir, exist_ok=True)\n\n# # 生成与文件名一致的图片名，并保存到 examples/graphs 目录\n# image_data = graph.draw_mermaid_png()\n# graph_path = os.path.join(graph_dir, f\"{file_name_without_ext}.png\")\n\n# # 保存图片（如果已存在则覆盖）\n# with open(graph_path, \"wb\") as f:\n#     f.write(image_data)\n\n# print(f\"工作流图已保存为 {graph_path}\")\n\n##############################################################################\n# 从沙箱下载文件到本地的函数\n##############################################################################\nimport os\n\ndef download_file_from_sandbox(sandbox, sandbox_path, local_path):\n    \"\"\"从 e2b 沙箱中下载文件并保存到本地，自动区分文本和二进制文件\"\"\"\n    try:\n        print(f\"读取文件: {sandbox_path}\")\n\n        # 判断是否为常见二进制文件类型（可自行扩展）\n        binary_extensions = (\n            '.png', '.jpg', '.jpeg', '.gif', '.pdf', '.svg',\n            '.xlsx', '.xls', '.zip', '.bin', '.pyc', '.pyd',\n            '.pptx', '.docx', '.mp3', '.mp4', '.avi', '.mov',\n        )\n        is_binary = sandbox_path.lower().endswith(binary_extensions)\n\n        # 创建目录\n        os.makedirs(os.path.dirname(local_path), exist_ok=True)\n\n        if is_binary:\n            print(\"📦 识别为二进制文件，以二进制方式写入 sandbox.files.read() 的结果\")\n            content = sandbox.files.read(sandbox_path)  # 返回 bytes\n            with open(local_path, 'wb') as f:\n                
f.write(content)\n        else:\n            print(\"📄 识别为文本文件，使用 sandbox.files.read()\")\n            content = sandbox.files.read(sandbox_path)  # 返回 str\n            with open(local_path, 'w', encoding='utf-8') as f:\n                f.write(content)\n\n        print(f\"✅ 文件已保存到本地: {local_path}\")\n        return True\n\n    except Exception as e:\n        print(f\"❌ 下载失败: {e}\")\n        return False\n    \ndef download_directory_from_sandbox(sandbox, sandbox_dir_path, local_dir_path):\n    \"\"\"从沙箱下载整个目录内容到本地\n    \n    Args:\n        sandbox: 沙箱实例\n        sandbox_dir_path: 沙箱中的目录路径\n        local_dir_path: 本地保存目录路径\n    \n    Returns:\n        bool: 是否成功下载所有文件\n    \"\"\"\n    try:\n        print(f\"尝试下载目录: {sandbox_dir_path} -> {local_dir_path}\")\n        \n        # 确保本地目录存在\n        os.makedirs(local_dir_path, exist_ok=True)\n        \n        # 列出沙箱中指定目录下的所有文件\n        try:\n            files = sandbox.files.list(sandbox_dir_path)\n            # print(f\"获取到文件列表: {sandbox_dir_path}, 类型: {type(files)}\")\n            # if files and len(files) > 0:\n            #     print(f\"第一个文件类型: {type(files[0])}, 内容: {files[0]}\")\n            #     # 检查对象属性\n            #     print(f\"文件对象可用属性: {dir(files[0])}\")\n        except Exception as e:\n            print(f\"列出文件时出错: {sandbox_dir_path}, 错误: {str(e)}\")\n            return False\n        \n        if not files:\n            print(f\"沙箱中目录 {sandbox_dir_path} 为空或不存在\")\n            return False\n            \n        downloaded_count = 0\n        # 定义需要跳过的系统文件\n        skip_files = {'.bashrc', '.bash_logout', '.profile'}\n        \n        # 遍历并下载每个文件\n        for file_info in files:\n            try:\n                # 使用dir()查看对象有哪些属性\n                print(f\"文件信息对象属性: {dir(file_info)}\")\n                \n                # 尝试安全获取name和type属性\n                file_name = getattr(file_info, \"name\", None)\n                if file_name is None:\n                    print(f\"警告: 无法获取文件名, 跳过此文件\")\n           
         continue\n                    \n                file_type = getattr(file_info, \"type\", \"file\")  # 默认为文件类型\n                # 如果 file_type 是枚举, 使用其 value 进行判断\n                type_value = file_type.value if hasattr(file_type, \"value\") else file_type\n                \n                # 跳过不需要的系统文件或系统目录（隐藏文件/目录）\n                if file_name in skip_files or (file_name.startswith('.') and type_value == 'dir'):\n                    print(f\"跳过系统文件或目录: {file_name}\")\n                    continue\n                \n                print(f\"处理文件: {file_name}, 类型: {type_value}\")\n                \n                sandbox_file_path = f\"{sandbox_dir_path}/{file_name}\"\n                local_file_path = os.path.join(local_dir_path, file_name)\n                \n                if type_value == 'dir':\n                    # 递归下载子目录\n                    print(f\"发现子目录: {sandbox_file_path}\")\n                    if download_directory_from_sandbox(sandbox, sandbox_file_path, local_file_path):\n                        downloaded_count += 1\n                else:\n                    # 下载文件\n                    print(f\"下载文件: {sandbox_file_path} -> {local_file_path}\")\n                    if download_file_from_sandbox(sandbox, sandbox_file_path, local_file_path):\n                        downloaded_count += 1\n            except Exception as e:\n                print(f\"处理文件时出错: {str(e)}\")\n                import traceback\n                print(f\"详细错误跟踪: {traceback.format_exc()}\")\n                continue\n        \n        if downloaded_count > 0:\n            print(f\"从 {sandbox_dir_path} 下载了 {downloaded_count} 个文件/目录到 {local_dir_path}\")\n            return True\n        return False\n        \n    except Exception as e:\n        print(f\"从沙箱下载目录时出错: {str(e)}\")\n        import traceback\n        print(f\"详细错误跟踪: {traceback.format_exc()}\")\n        return False\n\n##############################################################################\n# 
测试：使用E2B代码解释器生成财务数据分析报表\n##############################################################################\n\nif __name__ == \"__main__\":\n    print_separator(\"开始测试ReactAgent使用E2B代码解释器进行财务数据分析\")\n    print(\"\\n查询: 生成模拟财务数据并进行分析，生成财务报表\")\n    \n    # 定义输入\n    inputs = {\n        \"messages\": [\n            HumanMessage(content=\"请生成一组模拟的公司财务数据（包括收入、支出、利润等），对数据进行分析，将处理过程（代码）和最终生成的结果保存到本地。\")\n        ]\n    }\n    result = react_agent.run(inputs)\n\n    for m in result[\"messages\"]:\n        m.pretty_print()\n\n    print(\"\\n下载沙盒里的文件\")\n    try:\n        # 遍历 react_agent.tools 以查找 E2B 相关工具\n        sandbox = None\n        for tool in react_agent.tools:\n            if hasattr(tool, \"sandbox\"):\n                sandbox = tool.sandbox\n                break  # 找到后就退出循环\n\n        if sandbox:\n            # 设定输出目录\n            output_dir = os.path.join(os.getcwd(), \"examples/output/sandbox_files\")\n            os.makedirs(output_dir, exist_ok=True)\n\n            # 直接下载主要工作目录\n            print(\"\\n从沙箱下载文件到本地...\")\n            download_directory_from_sandbox(sandbox, \"/home/user\", output_dir)\n\n            # 下载临时目录中可能的图表和数据文件\n            # download_directory_from_sandbox(sandbox, \"/tmp\", output_dir)\n\n            print(f\"\\n文件已保存到目录: {output_dir}\")\n            sandbox.close()\n    except Exception as e:\n        print(f\"从沙箱下载文件时出错: {str(e)}\")"
  },
  {
    "path": "examples/11_e2b_sandbox_test.py",
    "content": "import os\nimport sys\nimport json\nfrom typing import Dict, Any, List\nfrom datetime import datetime\n\nfrom langchain_openai import ChatOpenAI\nfrom langchain_core.messages import AIMessage, HumanMessage, ToolMessage\nfrom dotenv import load_dotenv\n\nfrom core.agents.base.react_agent import ReactAgent\nfrom core.tools.registry import get_registered_tools, ToolCategory, get_tools_by_category\nfrom core.tools.e2b_tool import E2BCodeInterpreterTool\n\nload_dotenv()  # 自动加载 .env 文件\n\n##############################################################################\n# E2B沙盒环境测试程序\n##############################################################################\n\ndef print_separator(title):\n    \"\"\"打印分隔符\"\"\"\n    print(\"\\n\" + \"=\" * 80)\n    print(f\" {title} \".center(80, \"=\"))\n    print(\"=\" * 80)\n\n##############################################################################\n# 创建一个记录Agent思考过程的函数\n##############################################################################\n\ndef log_agent_actions(state: Dict[str, Any]) -> None:\n    \"\"\"记录Agent的思考过程和行动\"\"\"\n    print(\"\\n\" + \"-\" * 50)\n    print(\"当前状态:\")\n    \n    # 打印最新消息\n    if state.get(\"messages\") and len(state[\"messages\"]) > 0:\n        latest_message = state[\"messages\"][-1]\n        \n        if isinstance(latest_message, AIMessage):\n            print(f\"\\nAI思考过程:\")\n            print(latest_message.content)\n            \n            # 如果有工具调用，打印工具调用信息\n            if latest_message.tool_calls:\n                print(f\"\\n工具调用:\")\n                for tool_call in latest_message.tool_calls:\n                    print(f\"- 工具: {tool_call['name']}\")\n                    print(f\"- 参数: {tool_call['args']}\")\n        \n        elif isinstance(latest_message, ToolMessage):\n            print(f\"\\n工具返回结果:\")\n            print(f\"- 工具: {latest_message.name}\")\n            content = latest_message.content\n            if len(content) > 500:\n                
content = content[:250] + \"\\n... (内容过长，已截断) ...\\n\" + content[-250:]\n            print(f\"- 结果: {content}\")\n    \n    print(\"-\" * 50)\n\n##############################################################################\n# 从沙箱下载文件到本地的函数\n##############################################################################\n\ndef download_file_from_sandbox(sandbox, sandbox_path, local_path):\n    \"\"\"从 e2b 沙箱中下载文件并保存到本地，自动区分文本和二进制文件\"\"\"\n    try:\n        print(f\"读取文件: {sandbox_path}\")\n\n        # 判断是否为常见二进制文件类型（可自行扩展）\n        binary_extensions = (\n            '.png', '.jpg', '.jpeg', '.gif', '.pdf', '.svg',\n            '.xlsx', '.xls', '.zip', '.bin', '.pyc', '.pyd',\n            '.pptx', '.docx', '.mp3', '.mp4', '.avi', '.mov',\n        )\n        is_binary = sandbox_path.lower().endswith(binary_extensions)\n\n        # 创建目录\n        os.makedirs(os.path.dirname(local_path), exist_ok=True)\n\n        if is_binary:\n            print(\"📦 识别为二进制文件，以二进制方式写入 sandbox.files.read() 的结果\")\n            content = sandbox.files.read(sandbox_path)  # 返回 bytes\n            with open(local_path, 'wb') as f:\n                f.write(content)\n        else:\n            print(\"📄 识别为文本文件，使用 sandbox.files.read()\")\n            content = sandbox.files.read(sandbox_path)  # 返回 str\n            with open(local_path, 'w', encoding='utf-8') as f:\n                f.write(content)\n\n        print(f\"✅ 文件已保存到本地: {local_path}\")\n        return True\n\n    except Exception as e:\n        print(f\"❌ 下载失败: {e}\")\n        return False\n\ndef run_ai_generated_code(sandbox, code: str, save_results_dir=None):\n    \"\"\"在 E2B 沙箱中执行 AI 生成的代码\n    \n    Args:\n        sandbox: 沙箱实例\n        code: AI 生成的代码字符串\n        save_results_dir: 用于保存结果文件的本地目录路径（可选）\n    \n    Returns:\n        dict: 包含执行结果的字典\n    \"\"\"\n    try:\n        print(\"在沙箱中执行 AI 生成的代码...\")\n        # 确保代码是字符串类型\n        if not isinstance(code, str):\n            code = str(code)\n            \n        # 执行代码\n        
execution = sandbox.run_code(code)\n        print(\"代码执行完成!\")\n        \n        # 准备结果字典\n        result = {\n            \"success\": True,\n            \"stdout\": \"\",\n            \"results\": []\n        }\n        \n        # 提取标准输出\n        if hasattr(execution, \"stdout\"):\n            result[\"stdout\"] = execution.stdout\n            \n        # 检查代码是否执行成功\n        if hasattr(execution, \"error\") and execution.error:\n            error_name = getattr(execution.error, \"name\", \"Unknown\")\n            error_value = getattr(execution.error, \"value\", \"Unknown error\")\n            error_traceback = getattr(execution.error, \"traceback\", \"\")\n            \n            print(\"AI 生成的代码执行出错:\")\n            print(f\"错误类型: {error_name}\")\n            print(f\"错误信息: {error_value}\")\n            if error_traceback:\n                print(f\"错误追踪: {error_traceback}\")\n                \n            result[\"success\"] = False\n            result[\"error\"] = {\n                \"name\": error_name,\n                \"value\": error_value,\n                \"traceback\": error_traceback\n            }\n            return result\n        \n        # 处理执行结果\n        if hasattr(execution, \"results\") and execution.results:\n            import base64\n            result_idx = 0\n            \n            for res in execution.results:\n                # 默认为文本结果\n                result_data = {\"type\": \"text\", \"value\": str(res)}\n                \n                # 检查是否有PNG图像\n                if hasattr(res, \"png\") and res.png:\n                    result_data[\"type\"] = \"png\"\n                    result_data[\"value\"] = res.png  # base64编码的字符串\n                    \n                    # 如果指定了保存目录，保存图像到本地\n                    if save_results_dir:\n                        try:\n                            os.makedirs(save_results_dir, exist_ok=True)\n                            image_path = os.path.join(save_results_dir, 
f\"result-{result_idx}.png\")\n                            \n                            # 解码并保存图像\n                            with open(image_path, 'wb') as f:\n                                f.write(base64.b64decode(res.png))\n                            print(f\"图像已保存到: {image_path}\")\n                            result_data[\"local_path\"] = image_path\n                        except Exception as img_err:\n                            print(f\"保存图像时出错: {str(img_err)}\")\n                \n                result[\"results\"].append(result_data)\n                result_idx += 1\n        \n        return result\n        \n    except Exception as e:\n        print(f\"执行AI生成的代码时出错: {str(e)}\")\n        import traceback\n        print(f\"详细错误: {traceback.format_exc()}\")\n        return {\n            \"success\": False,\n            \"error\": {\n                \"name\": type(e).__name__,\n                \"value\": str(e),\n                \"traceback\": traceback.format_exc()\n            }\n        }\n\ndef download_directory_from_sandbox(sandbox, sandbox_dir_path, local_dir_path):\n    \"\"\"从沙箱下载整个目录内容到本地\n    \n    Args:\n        sandbox: 沙箱实例\n        sandbox_dir_path: 沙箱中的目录路径\n        local_dir_path: 本地保存目录路径\n    \n    Returns:\n        bool: 是否成功下载所有文件\n    \"\"\"\n    try:\n        print(f\"尝试下载目录: {sandbox_dir_path} -> {local_dir_path}\")\n        \n        # 确保本地目录存在\n        os.makedirs(local_dir_path, exist_ok=True)\n        \n        # 列出沙箱中指定目录下的所有文件\n        try:\n            files = sandbox.files.list(sandbox_dir_path)\n            # print(f\"获取到文件列表: {sandbox_dir_path}, 类型: {type(files)}\")\n            # if files and len(files) > 0:\n            #     print(f\"第一个文件类型: {type(files[0])}, 内容: {files[0]}\")\n            #     # 检查对象属性\n            #     print(f\"文件对象可用属性: {dir(files[0])}\")\n        except Exception as e:\n            print(f\"列出文件时出错: {sandbox_dir_path}, 错误: {str(e)}\")\n            return False\n        \n        if not 
files:\n            print(f\"沙箱中目录 {sandbox_dir_path} 为空或不存在\")\n            return False\n            \n        downloaded_count = 0\n        # 定义需要跳过的系统文件\n        skip_files = {'.bashrc', '.bash_logout', '.profile'}\n        \n        # 遍历并下载每个文件\n        for file_info in files:\n            try:\n                # 使用dir()查看对象有哪些属性\n                print(f\"文件信息对象属性: {dir(file_info)}\")\n                \n                # 尝试安全获取name和type属性\n                file_name = getattr(file_info, \"name\", None)\n                if file_name is None:\n                    print(f\"警告: 无法获取文件名, 跳过此文件\")\n                    continue\n                    \n                file_type = getattr(file_info, \"type\", \"file\")  # 默认为文件类型\n                # 如果 file_type 是枚举, 使用其 value 进行判断\n                type_value = file_type.value if hasattr(file_type, \"value\") else file_type\n                \n                # 跳过不需要的系统文件或系统目录（隐藏文件/目录）\n                if file_name in skip_files or (file_name.startswith('.') and type_value == 'dir'):\n                    print(f\"跳过系统文件或目录: {file_name}\")\n                    continue\n                \n                print(f\"处理文件: {file_name}, 类型: {type_value}\")\n                \n                sandbox_file_path = f\"{sandbox_dir_path}/{file_name}\"\n                local_file_path = os.path.join(local_dir_path, file_name)\n                \n                if type_value == 'dir':\n                    # 递归下载子目录\n                    print(f\"发现子目录: {sandbox_file_path}\")\n                    if download_directory_from_sandbox(sandbox, sandbox_file_path, local_file_path):\n                        downloaded_count += 1\n                else:\n                    # 下载文件\n                    print(f\"下载文件: {sandbox_file_path} -> {local_file_path}\")\n                    if download_file_from_sandbox(sandbox, sandbox_file_path, local_file_path):\n                        downloaded_count += 1\n            except Exception as e:\n            
    print(f\"处理文件时出错: {str(e)}\")\n                import traceback\n                print(f\"详细错误跟踪: {traceback.format_exc()}\")\n                continue\n        \n        if downloaded_count > 0:\n            print(f\"从 {sandbox_dir_path} 下载了 {downloaded_count} 个文件/目录到 {local_dir_path}\")\n            return True\n        return False\n        \n    except Exception as e:\n        print(f\"下载整个目录时出错: {str(e)}\")\n        import traceback\n        print(f\"详细错误跟踪: {traceback.format_exc()}\")\n        return False\n\n##############################################################################\n# 检查E2B代码解释器工具是否已注册\n##############################################################################\n\nprint_separator(\"检查E2B代码解释器工具是否已注册\")\n\n# 获取所有已注册的工具（以字典格式）\nregistered_tools = get_registered_tools(as_dict=True)\n\n# 打印所有已注册的工具\nprint(\"\\n已注册的工具:\")\nfor name, info in registered_tools.items():\n    print(f\"- {name} (类别: {info['category'].value})\")\n\n# 检查E2B代码解释器工具是否已注册\ne2b_tool_name = \"e2b_code_interpreter\"\nif e2b_tool_name in registered_tools:\n    print(f\"\\nE2B代码解释器工具已成功注册: {e2b_tool_name}\")\nelse:\n    print(f\"\\n警告: E2B代码解释器工具未注册\")\n    # 手动注册E2B代码解释器工具\n    print(\"尝试手动注册E2B代码解释器工具...\")\n    try:\n        from core.tools.registry import register_tool\n        e2b_tool = E2BCodeInterpreterTool()\n        register_tool(e2b_tool, ToolCategory.CODE_INTERPRETER)\n        print(f\"已手动注册工具: {e2b_tool.name}\")\n    except Exception as e:\n        print(f\"手动注册E2B代码解释器工具失败: {e}\")\n\n##############################################################################\n# 创建ReactAgent实例\n##############################################################################\n\nprint_separator(\"创建ReactAgent实例\")\n\n# 初始化大模型\nmodel = ChatOpenAI(model=\"gpt-4o-mini\")\n\n# 从注册表中只获取代码解释器类工具列表\ntools_list = get_tools_by_category(ToolCategory.CODE_INTERPRETER)\n\n# 打印获取到的代码解释器工具\nprint(\"\\n获取到的代码解释器工具:\")\nfor tool in tools_list:\n    print(f\"- {tool.name}: {tool.description}\")\n\n# 
创建ReactAgent实例\nreact_agent = ReactAgent(\n    model=model,\n    tools=tools_list,\n    name=\"sandbox_test_agent\",\n    # 提示词强调测试沙箱环境的各种功能\n    prompt=(\n        \"你是一位专业的沙箱环境测试专家，负责测试E2B代码解释器沙箱环境的各种功能。\\n\"\n        \"你有强大的代码执行工具可以使用：\\n\"\n        \"- e2b_code_interpreter: 用于在沙箱环境中执行Python代码\\n\\n\"\n        \"当进行沙箱环境测试时，请遵循以下方法论：\\n\"\n        \"1. 分析测试需求：理解需要测试的沙箱功能\\n\"\n        \"2. 设计测试用例：针对特定功能设计测试代码\\n\"\n        \"3. 执行测试：使用e2b_code_interpreter工具执行测试代码\\n\"\n        \"4. 分析结果：解释测试结果，判断功能是否正常\\n\"\n        \"5. 记录问题：如有异常，记录问题并提供详细信息\\n\\n\"\n        \"重要提示：\\n\"\n        \"- 优先使用e2b_code_interpreter工具执行Python代码\\n\"\n        \"- 测试代码应包含详细注释，解释测试目的和预期结果\\n\"\n        \"- 所有文件和图片必须保存在沙盒环境中的特定目录，不要直接返回图片\\n\"\n        \"- 图片不允许在回复中展示！Images are not allowed in the response!\\n\"\n        \"- 测试应覆盖沙箱的各种功能，包括但不限于：\\n\"\n        \"  * 基本Python代码执行\\n\"\n        \"  * 文件系统操作（创建、读取、写入文件）\\n\"\n        \"  * 包管理（安装和使用第三方包）\\n\"\n        \"  * 系统命令执行（使用!前缀执行shell命令）\\n\"\n        \"  * 数据处理和可视化\\n\"\n        \"  * 异常处理和错误恢复\\n\"\n    ),\n)\n\n# 添加调试信息，验证工具列表和沙箱实例的初始状态\nprint(\"\\n验证ReactAgent工具列表和沙箱实例初始状态:\")\nprint(f\"react_agent.tools类型: {type(react_agent.tools)}\")\nprint(f\"react_agent.tools长度: {len(react_agent.tools)}\")\n\n# 遍历所有工具，检查是否有sandbox属性\nfor i, tool in enumerate(react_agent.tools):\n    print(f\"\\n工具[{i}]类型: {type(tool)}\")\n    print(f\"工具[{i}]名称: {getattr(tool, 'name', '未知')}\")\n    print(f\"工具[{i}]是否有sandbox属性: {'sandbox' in dir(tool)}\")\n    \n    # 如果有sandbox属性，打印沙箱实例信息\n    if 'sandbox' in dir(tool):\n        print(f\"工具[{i}]的sandbox类型: {type(tool.sandbox)}\")\n        print(f\"工具[{i}]的sandbox是否可用: {getattr(tool, '_is_available', False)}\")\n        print(f\"工具[{i}]的初始化错误: {getattr(tool, '_init_error', None)}\")\n\n# 编译Agent\nagent = react_agent.compile()\n\n# # 获取图对象\n# graph = agent.get_graph()\n\n# # 获取当前文件名（不含路径和扩展名）\n# current_file = os.path.basename(__file__)\n# file_name_without_ext = os.path.splitext(current_file)[0]\n# graph_dir = 
os.path.join(os.path.dirname(__file__), \"graphs\")\n\n# # 确保 graphs 目录存在\n# os.makedirs(graph_dir, exist_ok=True)\n\n# # 生成与文件名一致的图片名，并保存到 examples/graphs 目录\n# image_data = graph.draw_mermaid_png()\n# graph_path = os.path.join(graph_dir, f\"{file_name_without_ext}.png\")\n\n# # 保存图片（如果已存在则覆盖）\n# with open(graph_path, \"wb\") as f:\n#     f.write(image_data)\n\n# print(f\"工作流图已保存为 {graph_path}\")\n\n##############################################################################\n# 测试用例1：基本Python代码执行和环境信息\n##############################################################################\n\ndef run_test_case_1():\n    print_separator(\"测试用例1：基本Python代码执行和环境信息\")\n    print(\"\\n查询: 测试基本Python代码执行和获取环境信息\")\n    \n    # 定义输入\n    inputs = {\n        \"messages\": [\n            HumanMessage(content=\"请执行一段Python代码，测试基本的数学运算、字符串操作，并获取沙箱环境的系统信息（Python版本、操作系统信息等）。\")\n        ]\n    }\n    \n    # 使用stream方法逐步获取中间状态\n    final_state = None\n    for partial_state in agent.stream(inputs, stream_mode=\"values\"):\n        # 保存最终状态\n        final_state = partial_state\n        \n        # 获取消息列表\n        messages = partial_state.get(\"messages\", [])\n        if not messages:\n            continue\n            \n        # 获取最新消息\n        latest_message = messages[-1]\n        \n        # 使用log_agent_actions函数记录状态\n        log_agent_actions({\"messages\": [latest_message]})\n    \n    # 打印最终回答\n    print_separator(\"测试用例1结果\")\n    if final_state and final_state.get(\"messages\"):\n        for message in final_state[\"messages\"]:\n            if isinstance(message, AIMessage) and not message.tool_calls:\n                print(message.content)\n\n##############################################################################\n# 测试用例2：文件系统操作\n##############################################################################\n\ndef run_test_case_2():\n    print_separator(\"测试用例2：文件系统操作\")\n    print(\"\\n查询: 测试沙箱环境的文件系统操作\")\n    \n    # 定义输入\n    inputs = {\n        \"messages\": [\n  
          HumanMessage(content=\"请测试沙箱环境的文件系统操作，包括创建目录、创建文件、写入内容、读取内容、列出目录内容等。创建一个测试目录结构，并将操作结果保存到文件中。文件保存到 /home/user/test_dir\")\n        ]\n    }\n    \n    # 使用stream方法逐步获取中间状态\n    final_state = None\n    for partial_state in agent.stream(inputs, stream_mode=\"values\"):\n        # 保存最终状态\n        final_state = partial_state\n        \n        # 获取消息列表\n        messages = partial_state.get(\"messages\", [])\n        if not messages:\n            continue\n            \n        # 获取最新消息\n        latest_message = messages[-1]\n        \n        # 使用log_agent_actions函数记录状态\n        log_agent_actions({\"messages\": [latest_message]})\n    \n    # 打印最终回答\n    print_separator(\"测试用例2结果\")\n    if final_state and final_state.get(\"messages\"):\n        for message in final_state[\"messages\"]:\n            if isinstance(message, AIMessage) and not message.tool_calls:\n                print(message.content)\n                \n                # 检查是否有E2B沙箱实例，尝试下载生成的文件\n                for msg in final_state[\"messages\"]:\n                    if isinstance(msg, ToolMessage) and msg.name == \"e2b_code_interpreter\":\n                        try:\n                            # 尝试解析工具消息内容\n                            tool_output = json.loads(msg.content)\n                            print(f\"\\n工具消息内容解析成功: {type(tool_output)}\")\n                            \n                            # 检查是否有原始输出\n                            if hasattr(msg, 'raw_output') and msg.raw_output:\n                                print(f\"\\n消息包含raw_output属性: {type(msg.raw_output)}\")\n                                \n                                # 打印react_agent.tools的信息\n                                print(f\"\\nreact_agent.tools类型: {type(react_agent.tools)}\")\n                                print(f\"react_agent.tools长度: {len(react_agent.tools)}\")\n                                \n                                # 遍历所有工具，检查是否有sandbox属性\n                                for i, tool in 
enumerate(react_agent.tools):\n                                    print(f\"\\n工具[{i}]类型: {type(tool)}\")\n                                    print(f\"工具[{i}]名称: {getattr(tool, 'name', '未知')}\")\n                                    print(f\"工具[{i}]是否有sandbox属性: {'sandbox' in dir(tool)}\")\n                                    if 'sandbox' in dir(tool):\n                                        print(f\"工具[{i}]的sandbox类型: {type(tool.sandbox)}\")\n                                \n                                # 遍历 react_agent.tools 以查找 E2B 相关工具\n                                sandbox = None\n                                for tool in react_agent.tools:\n                                    if hasattr(tool, \"sandbox\"):\n                                        sandbox = tool.sandbox\n                                        break  # 找到后就退出循环\n                                \n                                if sandbox:\n                                    print(\"\\n成功获取沙箱实例!\")\n                                    print(f\"沙箱实例类型: {type(sandbox)}\")\n                                    \n                                    # 从沙箱下载生成的文件\n                                    output_dir = os.path.join(os.path.dirname(__file__), \"output\", \"sandbox_test\")\n                                    os.makedirs(output_dir, exist_ok=True)\n                                    print(f\"输出目录已创建: {output_dir}\")\n                                    \n                                    # 尝试下载测试目录，路径和提示中保持一致\n                                    sandbox_test_path = \"/home/user/test_dir\"\n                                    print(f\"尝试从沙箱下载目录: {sandbox_test_path}\")\n                                    download_directory_from_sandbox(sandbox, sandbox_test_path, os.path.join(output_dir, \"test_dir\"))\n                                else:\n                                    print(\"\\n错误: 无法获取沙箱实例，没有找到具有sandbox属性的工具\")\n                            else:\n                         
       print(\"\\n错误: 消息没有raw_output属性\")\n                        except Exception as e:\n                            print(f\"处理工具消息时出错: {str(e)}\")\n\n##############################################################################\n# 测试用例3：包管理和第三方库使用\n##############################################################################\n\ndef run_test_case_3():\n    print_separator(\"测试用例3：包管理和第三方库使用\")\n    print(\"\\n查询: 测试沙箱环境的包管理和第三方库使用\")\n    \n    # 定义输入\n    inputs = {\n        \"messages\": [\n            HumanMessage(content=\"请测试沙箱环境的包管理功能，安装一个不常见的第三方库（如wordcloud、pycountry等），并使用该库编写一个简单的示例程序。验证包安装和使用是否正常。\")\n        ]\n    }\n    \n    # 使用stream方法逐步获取中间状态\n    final_state = None\n    for partial_state in agent.stream(inputs, stream_mode=\"values\"):\n        # 保存最终状态\n        final_state = partial_state\n        \n        # 获取消息列表\n        messages = partial_state.get(\"messages\", [])\n        if not messages:\n            continue\n            \n        # 获取最新消息\n        latest_message = messages[-1]\n        \n        # 使用log_agent_actions函数记录状态\n        log_agent_actions({\"messages\": [latest_message]})\n    \n    # 打印最终回答\n    print_separator(\"测试用例3结果\")\n    if final_state and final_state.get(\"messages\"):\n        for message in final_state[\"messages\"]:\n            if isinstance(message, AIMessage) and not message.tool_calls:\n                print(message.content)\n\n##############################################################################\n# 测试用例4：Shell命令执行\n##############################################################################\n\ndef run_test_case_4():\n    print_separator(\"测试用例4：Shell命令执行\")\n    print(\"\\n查询: 测试沙箱环境的Shell命令执行\")\n    \n    # 定义输入\n    inputs = {\n        \"messages\": [\n            HumanMessage(content=\"请测试沙箱环境中执行Shell命令的功能，使用!前缀执行一系列Linux命令，包括系统信息查询、目录操作、文件查找等。将命令执行结果保存到文件（/home/user/shell_commands_results.txt）中。\")\n        ]\n    }\n    \n    # 使用stream方法逐步获取中间状态\n    final_state = None\n    for 
partial_state in agent.stream(inputs, stream_mode=\"values\"):\n        # 保存最终状态\n        final_state = partial_state\n        \n        # 获取消息列表\n        messages = partial_state.get(\"messages\", [])\n        if not messages:\n            continue\n            \n        # 获取最新消息\n        latest_message = messages[-1]\n        \n        # 使用log_agent_actions函数记录状态\n        log_agent_actions({\"messages\": [latest_message]})\n    \n    # 打印最终回答\n    print_separator(\"测试用例4结果\")\n    if final_state and final_state.get(\"messages\"):\n        for message in final_state[\"messages\"]:\n            if isinstance(message, AIMessage) and not message.tool_calls:\n                print(message.content)\n                \n                # 尝试下载生成的文件\n                for msg in final_state[\"messages\"]:\n                    if isinstance(msg, ToolMessage) and msg.name == \"e2b_code_interpreter\":\n                        try:\n                            print(f\"\\n测试用例4: 检查工具消息类型: {type(msg)}\")\n                            print(f\"测试用例4: 工具消息名称: {msg.name}\")\n                            \n                            # 检查react_agent.tools的信息\n                            print(f\"\\n测试用例4: react_agent.tools类型: {type(react_agent.tools)}\")\n                            print(f\"测试用例4: react_agent.tools长度: {len(react_agent.tools)}\")\n                            \n                            # 遍历 react_agent.tools 以查找 E2B 相关工具\n                            sandbox = None\n                            for tool in react_agent.tools:\n                                if hasattr(tool, \"sandbox\"):\n                                    sandbox = tool.sandbox\n                                    break  # 找到后就退出循环\n                            \n                            if sandbox:\n                                print(\"\\n测试用例4: 成功获取沙箱实例!\")\n                                print(f\"测试用例4: 沙箱实例类型: {type(sandbox)}\")\n                                print(f\"测试用例4: 沙箱实例属性: 
{dir(sandbox)[:10]}...\")\n                                \n                                output_dir = os.path.join(os.path.dirname(__file__), \"output\", \"sandbox_test\")\n                                os.makedirs(output_dir, exist_ok=True)\n                                print(f\"测试用例4: 输出目录已创建: {output_dir}\")\n                                \n                                # 尝试下载shell命令结果文件，路径和提示中保持一致\n                                sandbox_file_path = \"/home/user/shell_commands_results.txt\"\n                                local_file_path = os.path.join(output_dir, \"shell_commands_results.txt\")\n                                print(f\"测试用例4: 尝试下载文件: {sandbox_file_path} -> {local_file_path}\")\n                                download_file_from_sandbox(sandbox, sandbox_file_path, local_file_path)\n                            else:\n                                print(\"\\n测试用例4: 错误: 无法获取沙箱实例，没有找到具有sandbox属性的工具\")\n                                print(f\"测试用例4: react_agent.tools的类型和长度: {type(react_agent.tools)}, {len(react_agent.tools)}\")\n                        except Exception as e:\n                            print(f\"下载文件时出错: {str(e)}\")\n\n##############################################################################\n# 测试用例5：数据处理和可视化\n##############################################################################\n\ndef run_test_case_5():\n    print_separator(\"测试用例5：数据处理和可视化\")\n    print(\"\\n查询: 测试沙箱环境的数据处理和可视化功能\")\n    \n    # 定义输入\n    inputs = {\n        \"messages\": [\n            HumanMessage(content=(\n                \"请测试沙箱环境的数据处理和可视化功能，生成一些随机数据，使用pandas进行数据处理，\"\n                \"然后使用matplotlib创建多种类型的图表（折线图、柱状图、散点图等）。\\n\"\n                \"严格按照以下要求:\\n\"\n                \"1. 将所有图表保存到 /home/user/visualizations 目录\\n\"\n                \"2. 不要在回复中包含图片 - 图片直接保存到上述目录即可\\n\"\n                \"3. Images are not allowed in the response!\\n\"\n                \"4. 只需描述你做了什么，创建了哪些图表，并说明它们保存在哪里\\n\"\n                \"5. 
请确保目录存在后再保存图片\\n\"\n            ))\n        ]\n    }\n    \n    # 使用stream方法逐步获取中间状态\n    final_state = None\n    for partial_state in agent.stream(inputs, stream_mode=\"values\"):\n        # 保存最终状态\n        final_state = partial_state\n        \n        # 获取消息列表\n        messages = partial_state.get(\"messages\", [])\n        if not messages:\n            continue\n            \n        # 获取最新消息\n        latest_message = messages[-1]\n        \n        # 使用log_agent_actions函数记录状态\n        log_agent_actions({\"messages\": [latest_message]})\n    \n    # 打印最终回答\n    print_separator(\"测试用例5结果\")\n    if final_state and final_state.get(\"messages\"):\n        for message in final_state[\"messages\"]:\n            if isinstance(message, AIMessage) and not message.tool_calls:\n                print(message.content)\n                \n                # 尝试下载生成的图表文件\n                for msg in final_state[\"messages\"]:\n                    if isinstance(msg, ToolMessage) and msg.name == \"e2b_code_interpreter\":\n                        try:\n                            # 遍历 react_agent.tools 以查找 E2B 相关工具\n                            sandbox = None\n                            for tool in react_agent.tools:\n                                if hasattr(tool, \"sandbox\"):\n                                    sandbox = tool.sandbox\n                                    break  # 找到后就退出循环\n                            \n                            if sandbox:\n                                output_dir = os.path.join(os.path.dirname(__file__), \"output\", \"sandbox_test\")\n                                os.makedirs(output_dir, exist_ok=True)\n                                \n                                # 针对性地下载可视化目录中的图表\n                                vis_dir = \"/home/user/visualizations\"\n                                local_vis_dir = os.path.join(output_dir, \"visualizations\")\n                                os.makedirs(local_vis_dir, exist_ok=True)\n              
                  print(f\"测试用例5: 下载可视化图表目录: {vis_dir} -> {local_vis_dir}\")\n                                \n                                # 尝试列出可视化目录中的文件\n                                try:\n                                    files = sandbox.files.list(vis_dir)\n                                    if files:\n                                        print(f\"找到图表文件:\")\n                                        for file_info in files:\n                                            file_name = getattr(file_info, \"name\", \"未知文件\")\n                                            print(f\"- {file_name}\")\n                                    else:\n                                        print(f\"警告: 可视化目录为空或不存在\")\n                                except Exception as e:\n                                    print(f\"列出可视化目录文件时出错: {str(e)}\")\n                                \n                                # 执行下载\n                                success = download_directory_from_sandbox(sandbox, vis_dir, local_vis_dir)\n                                if success:\n                                    print(f\"✅ 成功下载可视化图表\")\n                                else:\n                                    print(f\"⚠️ 下载可视化图表失败，尝试下载整个用户目录作为备份\")\n                                    download_directory_from_sandbox(sandbox, \"/home/user\", output_dir)\n                            else:\n                                print(\"\\n错误: 无法获取沙箱实例，没有找到具有sandbox属性的工具\")\n                        except Exception as e:\n                            print(f\"下载文件时出错: {str(e)}\")\n                            import traceback\n                            print(f\"错误详情: {traceback.format_exc()}\")\n\n##############################################################################\n# 测试用例6：异常处理和错误恢复\n##############################################################################\n\ndef run_test_case_6():\n    print_separator(\"测试用例6：异常处理和错误恢复\")\n    print(\"\\n查询: 测试沙箱环境的异常处理和错误恢复能力\")\n    \n  
  # 定义输入\n    inputs = {\n        \"messages\": [\n            HumanMessage(content=\"请测试沙箱环境的异常处理和错误恢复能力。编写一段包含各种常见错误的Python代码（如语法错误、除零错误、文件不存在错误等），然后展示如何捕获和处理这些异常。验证沙箱环境是否能正确报告错误并继续执行后续代码。\")\n        ]\n    }\n    \n    # 使用stream方法逐步获取中间状态\n    final_state = None\n    for partial_state in agent.stream(inputs, stream_mode=\"values\"):\n        # 保存最终状态\n        final_state = partial_state\n        \n        # 获取消息列表\n        messages = partial_state.get(\"messages\", [])\n        if not messages:\n            continue\n            \n        # 获取最新消息\n        latest_message = messages[-1]\n        \n        # 使用log_agent_actions函数记录状态\n        log_agent_actions({\"messages\": [latest_message]})\n    \n    # 打印最终回答\n    print_separator(\"测试用例6结果\")\n    if final_state and final_state.get(\"messages\"):\n        for message in final_state[\"messages\"]:\n            if isinstance(message, AIMessage) and not message.tool_calls:\n                print(message.content)\n\n##############################################################################\n# 主函数 - 运行所有测试用例\n##############################################################################\n\nif __name__ == \"__main__\":\n    print_separator(\"开始测试E2B沙箱环境\")\n    \n    try:\n        # 确保输出目录存在\n        output_dir = os.path.join(os.path.dirname(__file__), \"output\", \"sandbox_test\")\n        os.makedirs(output_dir, exist_ok=True)\n        print(f\"创建输出目录: {output_dir}\")\n        \n        # 确保可视化输出目录存在\n        vis_output_dir = os.path.join(output_dir, \"visualizations\")\n        os.makedirs(vis_output_dir, exist_ok=True)\n        print(f\"创建可视化输出目录: {vis_output_dir}\")\n        \n        # # 运行测试用例\n        # # 运行测试用例1：基本Python代码执行和环境信息\n        # run_test_case_1()\n        \n        # # 运行测试用例2：文件系统操作\n        # run_test_case_2()\n        \n        # # 运行测试用例3：包管理和第三方库使用\n        # run_test_case_3()\n        \n        # # 运行测试用例4：Shell命令执行\n        # run_test_case_4()\n        \n        # 运行测试用例5：数据处理和可视化\n 
       run_test_case_5()\n        \n        # # 运行测试用例6：异常处理和错误恢复\n        # run_test_case_6()\n        \n        print_separator(\"E2B沙箱环境测试完成\")\n        print(\"测试结果已保存到 examples/output/sandbox_test 目录\")\n        \n    except Exception as e:\n        print(f\"测试过程中出错: {str(e)}\")\n    finally:\n        # 关闭E2B沙箱\n        print(\"\\n正在关闭E2B沙箱...\")\n        for tool in react_agent.tools:\n            if hasattr(tool, 'close'):\n                tool.close()"
  },
  {
    "path": "examples/12_planning_supervisor_test.py",
    "content": "from langgraph.prebuilt import create_react_agent\nfrom core.agents.react_supervisor_agent import SupervisorAgent\nfrom core.agents.research_agent import ResearchAgent\nfrom core.agents.base.react_agent import ReactAgent\nfrom langchain_openai import ChatOpenAI\nfrom langgraph.func import entrypoint, task\nfrom langgraph.graph import add_messages\nfrom dotenv import load_dotenv\nfrom langchain_community.tools import TavilySearchResults\nload_dotenv()  # 自动加载 .env 文件\n\n# 1. 初始化大模型\nmodel = ChatOpenAI(model=\"gpt-4o-mini\")\n\n##############################################################################\n# Agent 1: Joke Generator (Functional API)\n##############################################################################\n\n@task\ndef generate_joke(messages):\n    \"\"\"Generate a short joke (no tool calls).\"\"\"\n    system_message = {\n        \"role\": \"system\", \n        \"content\": \"You are a witty comedian. Write a short joke.\"\n    }\n    # 直接调用 model.invoke，拼接 system_message + 用户消息\n    msg = model.invoke([system_message] + messages)\n    return msg\n\n@entrypoint()\ndef joke_agent(state):\n    # 调用上面的函数型任务\n    joke = generate_joke(state['messages']).result()\n    # 将产物插入消息列表\n    messages = add_messages(state[\"messages\"], [joke])\n    return {\"messages\": messages}\n\njoke_agent.name = \"joke_agent\"\n\n##############################################################################\n# Agent 2: Research Expert with Tavily Search (Graph API)\n##############################################################################\n\n# 创建Tavily搜索工具\ntavily_search = TavilySearchResults(\n    max_results=3,\n    include_answer=True,\n    include_raw_content=False,\n    include_images=False,\n    search_depth=\"advanced\"\n)\n\n# 使用我们自定义的ResearchAgent替代create_react_agent创建的agent\nresearch_agent = ResearchAgent(\n    name=\"research_expert\",\n    model=model,\n    max_iterations=5,\n    cache_enabled=True,\n    debug=False\n)\nresearch_agent_2 
= ReactAgent(\n    name=\"research_expert\",\n    model=model,\n    tools=[tavily_search])\n\n##############################################################################\n# 使用带有Planning功能的SupervisorAgent\n##############################################################################\n\n# 创建 SupervisorAgent 实例，启用Planning功能\nsupervisor = SupervisorAgent(\n    agents=[joke_agent, research_agent_2],\n    model=model,\n    enable_planning=True,\n)\n##############################################################################\n# 测试：复杂请求需要规划和多个步骤\n##############################################################################\nresult = supervisor.run({\n    \"messages\": [\n        {\n            \"role\": \"user\",\n            \"content\": (\n                \"I'm preparing a presentation about tech companies. I need three things: \"\n                \"1) A joke about tech companies to start with, \"\n                \"2) The employee count for FAANG, and \"\n                \"3) A comparison of which company has the most employees.\"\n            )\n        }\n    ]\n})\n\n##############################################################################\n# 打印最终对话消息\n##############################################################################\nfor m in result[\"messages\"]:\n    m.pretty_print()\n\n# 打印任务列表\nprint(\"\\n##############################################################################\")\nprint(\"# 最终任务列表\")\nprint(\"##############################################################################\")\nif \"plan\" in result and result[\"plan\"] and \"tasks\" in result[\"plan\"]:\n    tasks = result[\"plan\"][\"tasks\"]\n    print(f\"总共 {len(tasks)} 个任务:\")\n    for i, task in enumerate(tasks):\n        print(f\"\\n任务 {i+1}: {task['description']}\")\n        print(f\"  状态: {task['status']}\")\n        print(f\"  代理: {task['agent'] if task['agent'] else '未分配'}\")\n        print(f\"  创建时间: {task['created_at']}\")\n        print(f\"  完成时间: {task['completed_at'] if task['completed_at'] else '未完成'}\")\nelse:\n    print(\"没有任务列表信息\")\n\n# 打印原始任务列表（如果存在）\nif \"tasks\" in result:\n    print(\"\\n原始任务列表:\")\n    for t in result[\"tasks\"]:\n        t.pretty_print()"
  },
  {
    "path": "examples/13_multi_agent_roles_test.py",
    "content": "from langgraph.prebuilt import create_react_agent\nfrom core.agents.react_supervisor_agent import SupervisorAgent\nfrom core.agents.sub_agents.research_agent import ResearchAgent\nfrom core.agents.sub_agents.coder_agent import CoderAgent\nfrom core.agents.sub_agents.reporter_agent import ReporterAgent\nfrom core.agents.sub_agents.designer_agent import DesignerAgent\nfrom core.agents.sub_agents.data_analyst_agent import DataAnalystAgent\nfrom langchain_openai import ChatOpenAI\nfrom langchain_core.messages import AIMessage, HumanMessage, ToolMessage\nfrom langgraph.func import entrypoint, task\nfrom langgraph.graph import add_messages\nfrom dotenv import load_dotenv\nfrom langchain_community.tools import TavilySearchResults\nimport os\nimport logging\nimport sys\nimport io\nimport json\nfrom contextlib import redirect_stdout, redirect_stderr\n\nload_dotenv()  # 自动加载 .env 文件\n\n# 1. 初始化大模型\nmodel = ChatOpenAI(model=\"gpt-4o-mini\")\n# 设置日志捕获\nclass LogCapture:\n    def __init__(self):\n        self.log_buffer = io.StringIO()\n        self.log_content = []\n    \n    def start_capture(self):\n        self.log_buffer = io.StringIO()\n        return self.log_buffer\n    \n    def stop_capture(self):\n        output = self.log_buffer.getvalue()\n        self.log_content.append(output)\n        return output\n    \n    def get_content(self):\n        return \"\\n\".join(self.log_content)\n\nlog_capture = LogCapture()\n\n##############################################################################\n# 从沙箱下载文件到本地的函数\n##############################################################################\n\ndef download_file_from_sandbox(sandbox, sandbox_path, local_path):\n    \"\"\"从 e2b 沙箱中下载文件并保存到本地，自动区分文本和二进制文件\"\"\"\n    try:\n        print(f\"读取文件: {sandbox_path}\")\n\n        # 判断是否为常见二进制文件类型（可自行扩展）\n        binary_extensions = (\n            '.png', '.jpg', '.jpeg', '.gif', '.pdf', '.svg',\n            '.xlsx', '.xls', '.zip', '.bin', '.pyc', '.pyd',\n        
    '.pptx', '.docx', '.mp3', '.mp4', '.avi', '.mov',\n        )\n        is_binary = sandbox_path.lower().endswith(binary_extensions)\n\n        # 创建目录\n        os.makedirs(os.path.dirname(local_path), exist_ok=True)\n\n        if is_binary:\n            print(\"📦 识别为二进制文件，以二进制模式写入本地\")\n            content = sandbox.files.read(sandbox_path)  # 期望返回 bytes\n            with open(local_path, 'wb') as f:\n                f.write(content)\n        else:\n            print(\"📄 识别为文本文件，使用 sandbox.files.read()\")\n            content = sandbox.files.read(sandbox_path)  # 返回 str\n            with open(local_path, 'w', encoding='utf-8') as f:\n                f.write(content)\n\n        print(f\"✅ 文件已保存到本地: {local_path}\")\n        return True\n\n    except Exception as e:\n        print(f\"❌ 下载失败: {e}\")\n        return False\n\ndef download_directory_from_sandbox(sandbox, sandbox_dir_path, local_dir_path):\n    \"\"\"从沙箱下载整个目录内容到本地\n    \n    Args:\n        sandbox: 沙箱实例\n        sandbox_dir_path: 沙箱中的目录路径\n        local_dir_path: 本地保存目录路径\n    \n    Returns:\n        bool: 是否成功下载所有文件\n    \"\"\"\n    try:\n        print(f\"尝试下载目录: {sandbox_dir_path} -> {local_dir_path}\")\n        \n        # 确保本地目录存在\n        os.makedirs(local_dir_path, exist_ok=True)\n        \n        # 列出沙箱中指定目录下的所有文件\n        try:\n            files = sandbox.files.list(sandbox_dir_path)\n        except Exception as e:\n            print(f\"列出文件时出错: {sandbox_dir_path}, 错误: {str(e)}\")\n            return False\n        \n        if not files:\n            print(f\"沙箱中目录 {sandbox_dir_path} 为空或不存在\")\n            return False\n            \n        downloaded_count = 0\n        # 定义需要跳过的系统文件\n        skip_files = {'.bashrc', '.bash_logout', '.profile'}\n        \n        # 遍历并下载每个文件\n        for file_info in files:\n            try:\n                # 尝试安全获取name和type属性\n                file_name = getattr(file_info, \"name\", None)\n                if file_name is None:\n                    print(\"警告: 无法获取文件名, 跳过此文件\")\n                    continue\n                    \n                file_type = getattr(file_info, \"type\", \"file\")  # 默认为文件类型\n                # 如果 file_type 是枚举, 使用其 value 进行判断\n                type_value = file_type.value if hasattr(file_type, \"value\") else file_type\n                \n                # 跳过不需要的系统文件或系统目录（隐藏文件/目录）\n                if file_name in skip_files or (file_name.startswith('.') and type_value == 'dir'):\n                    print(f\"跳过系统文件或目录: {file_name}\")\n                    continue\n                \n                print(f\"处理文件: {file_name}, 类型: {type_value}\")\n                \n                sandbox_file_path = f\"{sandbox_dir_path}/{file_name}\"\n                local_file_path = os.path.join(local_dir_path, file_name)\n                \n                if type_value == 'dir':\n                    # 递归下载子目录\n                    print(f\"发现子目录: {sandbox_file_path}\")\n                    if download_directory_from_sandbox(sandbox, sandbox_file_path, local_file_path):\n                        downloaded_count += 1\n                else:\n                    # 下载文件\n                    print(f\"下载文件: {sandbox_file_path} -> {local_file_path}\")\n                    if download_file_from_sandbox(sandbox, sandbox_file_path, local_file_path):\n                        downloaded_count += 1\n            except Exception as e:\n                print(f\"处理文件时出错: {str(e)}\")\n                import traceback\n                print(f\"详细错误跟踪: {traceback.format_exc()}\")\n                continue\n        \n        if downloaded_count > 0:\n            print(f\"从 {sandbox_dir_path} 下载了 {downloaded_count} 个文件/目录到 {local_dir_path}\")\n            return True\n        return False\n        \n    except Exception as e:\n        print(f\"下载整个目录时出错: {str(e)}\")\n        import traceback\n        print(f\"详细错误跟踪: 
{traceback.format_exc()}\")\n\n\n##############################################################################\n# Agent 2: Research Expert - 使用自定义的ResearchAgent\n##############################################################################\n\nresearch_agent = ResearchAgent(\n    name=\"research_expert\",\n    model=model,\n    max_iterations=5,\n    cache_enabled=True,\n    debug=True\n)\n\n##############################################################################\n# Agent 3: Coder - 使用自定义的CoderAgent\n##############################################################################\nfrom core.tools.e2b_tool import E2BCodeInterpreterTool\ne2b_tool = E2BCodeInterpreterTool()\n\ncoder_agent = CoderAgent(\n    name=\"coder_expert\",\n    model=model,\n    tools=[e2b_tool],\n    max_iterations=5,\n    cache_enabled=True,\n    debug=True\n)\n\n##############################################################################\n# Agent 4: Reporter - 使用自定义的ReporterAgent\n##############################################################################\n\nreporter_agent = ReporterAgent(\n    name=\"reporter_expert\",\n    model=model,\n    max_iterations=5,\n    cache_enabled=True,\n)\n\n##############################################################################\n# Agent 5: Designer - 使用自定义的DesignerAgent\n##############################################################################\n\ndesigner_agent = DesignerAgent(\n    name=\"designer_expert\",\n    model=model,\n    max_iterations=5,\n    cache_enabled=True,\n)\n\n##############################################################################\n# Agent 6: Data Analyst - 使用自定义的DataAnalystAgent\n##############################################################################\n\ndata_analyst_agent = DataAnalystAgent(\n    name=\"data_analyst_expert\",\n    model=model,\n    max_iterations=5,\n    cache_enabled=True,\n)\n\n##############################################################################\n# 
使用带有Planning功能的SupervisorAgent协调所有角色\n##############################################################################\n\n# 创建 SupervisorAgent 实例，启用Planning功能\nsupervisor = SupervisorAgent(\n    agents=[\n        research_agent,\n        coder_agent,\n        reporter_agent,\n        designer_agent,\n        data_analyst_agent,\n    ],\n    model=model,\n    enable_planning=True,\n    output_mode=\"last_message\"\n)\n\n# 获取当前文件名（不含路径和扩展名）\ncurrent_file = os.path.basename(__file__)\nfile_name_without_ext = os.path.splitext(current_file)[0]\nlogs_dir = os.path.join(os.path.dirname(__file__), \"logs\")\n# 确保日志目录存在\nos.makedirs(logs_dir, exist_ok=True)\n# 创建Markdown输出文件路径\nmarkdown_path = os.path.join(logs_dir, f\"{file_name_without_ext}.md\")\n\n##############################################################################\n# 测试：复杂请求需要规划和多个步骤\n##############################################################################\n\ndef save_markdown_log():\n    \"\"\"将执行结果保存为Markdown文件\"\"\"\n    with open(markdown_path, \"w\", encoding=\"utf-8\") as f:\n        f.write(f\"# 执行结果: {file_name_without_ext}\\n\\n\")\n        f.write(\"## 图表\\n\\n\")\n        f.write(\"## 执行日志\\n\\n\")\n        f.write(\"```\\n\")\n        f.write(log_capture.get_content())\n        f.write(\"\\n```\\n\")\n    print(f\"执行日志已保存到 {markdown_path}\")\n\nif __name__ == \"__main__\":\n    try:\n        # 开始捕获输出\n        log_buffer = log_capture.start_capture()\n        \n        with redirect_stdout(log_buffer), redirect_stderr(log_buffer):\n            print(f\"开始执行 {current_file} 测试...\")\n            \n            # 测试1：需要研究和编码的任务\n            print(\"\\n## 测试1：需要研究和编码的任务\")\n            final_state = supervisor.run({\n                \"messages\": [\n                    {\n                        \"role\": \"user\",\n                        \"content\": (\n                            \"我需要一个Python爬虫来获取 https://www.paulgraham.com/articles.html 所有articles列表，并将结果保存为CSV文件,放在/home/user下面。\"\n            
                \"并将你测试通过的爬虫代码返回给我。\"\n                            \"请确保你的代码能够正常运行。\"\n                            \"如果遇到问题，请重试。\"\n                        )\n                    }\n                ]\n            })\n            \n            print(\"\\n测试1结果:\")\n            for m in final_state[\"messages\"]:\n                m.pretty_print()\n            \n            # 遍历 coder_agent.tools 以查找带有 sandbox 属性的 E2B 工具\n            try:\n                sandbox = None\n                for tool in coder_agent.tools:\n                    if hasattr(tool, \"sandbox\"):\n                        sandbox = tool.sandbox\n                        break  # 找到后就退出循环\n\n                if sandbox:\n                    # 设定输出目录\n                    output_dir = os.path.join(os.getcwd(), \"examples/output/sandbox_files\")\n                    os.makedirs(output_dir, exist_ok=True)\n\n                    # 直接下载主要工作目录\n                    print(\"\\n从沙箱下载文件到本地...\")\n                    download_directory_from_sandbox(sandbox, \"/home/user\", output_dir)\n\n                    # 下载临时目录中可能的图表和数据文件\n                    # download_directory_from_sandbox(sandbox, \"/tmp\", output_dir)\n\n                    print(f\"\\n文件已保存到目录: {output_dir}\")\n                    sandbox.close()\n            except Exception as e:\n                print(f\"从沙箱下载文件时出错: {str(e)}\")\n\n    finally:\n        # 停止捕获并保存结果\n        log_capture.stop_capture()\n        save_markdown_log()
  },
  {
    "path": "examples/14_mcp_client_fetch_test.py",
    "content": "import os\nimport sys\nimport asyncio\nimport traceback\nfrom typing import Dict, Optional, Type\n\nfrom dotenv import load_dotenv\n\n# 在这里添加项目根目录到路径，方便导入\nsys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))\nload_dotenv()\n\nfrom langchain_openai import ChatOpenAI\nfrom langgraph.prebuilt import create_react_agent\nfrom langchain_core.tools import BaseTool\nfrom langchain_core.messages import HumanMessage\n\ntry:\n    from pydantic.v1 import BaseModel, Field\nexcept ImportError:\n    from pydantic import BaseModel, Field  # type: ignore\n\nfrom core.mcp.client import MCPClient\nfrom core.mcp.config_loader import load_config, MCPConfig, StdioConfig\nfrom core.llm.llm_manager import LLMManager\n\ntry:\n    from mcp.types import CallToolRequest\n    CALL_TOOL_REQ_AVAILABLE = True\nexcept ImportError:\n    CallToolRequest = None\n    CALL_TOOL_REQ_AVAILABLE = False\n\n# 这是唯一保留的 fetch schema\ntry:\n    class FetchInputSchema(BaseModel):\n        url: str = Field(..., description=\"URL to fetch\")\n        max_length: Optional[int] = Field(default=5000)\n        start_index: Optional[int] = Field(default=0)\n        raw: Optional[bool] = Field(default=False)\n    FETCH_SCHEMA_AVAILABLE = True\nexcept Exception:\n    FetchInputSchema = None\n    FETCH_SCHEMA_AVAILABLE = False\n\nCENTRAL_CONFIG_PATH = os.path.join(os.path.dirname(__file__), \"..\", \"core\", \"mcp\", \"mcp_server_config.json\")\nLLM_ID_FOR_TESTING = \"openai_gpt4o_mini\"\nllm_manager = LLMManager()\n\nclass MCPToolRunner(BaseTool):\n    name: str = \"needs_override\"\n    description: str = \"needs_override\"\n    args_schema: Optional[Type[BaseModel]] = None\n\n    client: MCPClient = Field(exclude=True)\n\n    class Config:\n        arbitrary_types_allowed = True\n\n    async def _arun(self, **kwargs) -> str:\n        if not self.client or not self.client.session:\n            return f\"ERROR: MCP Client session inactive for {self.name}.\"\n        if not 
CALL_TOOL_REQ_AVAILABLE:\n            return \"ERROR: CallToolRequest unavailable.\"\n\n        try:\n            print(f\"    [_arun:{self.name}] Sending MCP request with args: {kwargs}\")\n            result_message = await asyncio.wait_for(\n                self.client.session.call_tool(self.name, kwargs),\n                timeout=120.0\n            )\n            # call_tool 的返回结果通常带有 content 属性 (TextContent 列表)，\n            # 其次才检查 result 和 error\n            if hasattr(result_message, \"content\"):\n                content = result_message.content\n                if isinstance(content, list) and content:\n                    first = content[0]\n                    return first.text if hasattr(first, \"text\") else str(first)\n                return str(content)\n            elif hasattr(result_message, \"result\"):\n                return str(result_message.result)\n            elif hasattr(result_message, \"error\"):\n                return f\"Tool Error: {result_message.error.message}\"\n            else:\n                return \"Unknown response\"\n        except asyncio.TimeoutError:\n            return \"Error: Timeout.\"\n        except Exception as e:\n            return f\"Error: {e}\\n{traceback.format_exc()}\"\n\n    def _run(self, **kwargs) -> str:\n        print(f\"    [_run:{self.name}] Running async method via asyncio.run()...\")\n        try:\n            return asyncio.run(self._arun(**kwargs))\n        except Exception as e:\n            return f\"Error in sync wrapper: {e}\"\n\nasync def run_fetch_test(server_config_key: str, all_configs: Dict[str, MCPConfig]):\n    print(f\"\\n=== Running STDIO BaseTool Test for Server '{server_config_key}' (Tool: 'fetch') ===\")\n    if not FETCH_SCHEMA_AVAILABLE:\n        print(\"ERROR: Fetch Schema missing.\")\n        return False\n    if not CALL_TOOL_REQ_AVAILABLE:\n        print(\"ERROR: CallToolRequest unavailable.\")\n        return False\n\n    server_config = all_configs.get(server_config_key)\n    if not server_config:\n        print(f\"ERROR: Config for '{server_config_key}' not found.\")\n        return False\n    if not isinstance(server_config.connection, StdioConfig):\n        print(f\"ERROR: Config '{server_config_key}' not STDIO.\")\n        return False\n\n    try:\n        model = llm_manager.get_model(LLM_ID_FOR_TESTING)\n        print(f\"Using LLM: 
{getattr(model, 'model_name', LLM_ID_FOR_TESTING)}\")\n    except ValueError as e:\n        print(f\"获取 LLM 出错: {e}.\")\n        return False\n\n    test_success = False\n    async with MCPClient(server_config) as client:\n        if not client.session:\n            print(\"ERROR: MCP session not established!\")\n            return False\n\n        try:\n            runner = MCPToolRunner(\n                client=client,\n                name=\"fetch\",\n                description=\"Fetches URL content as markdown.\",\n                args_schema=FetchInputSchema\n            )\n            tools = [runner]\n        except Exception as e_inst:\n            print(f\"ERROR: Failed to instantiate MCPToolRunner: {e_inst}\")\n            return False\n\n        agent = create_react_agent(model, tools)\n        query = (\n            \"Use the fetch tool to get the content of https://www.google.com \"\n            \"and tell me its title (first 50 chars).\"\n        )\n        print(f\"\\nQuery: {query}\")\n\n        try:\n            response = await asyncio.wait_for(\n                agent.ainvoke({\"messages\": [{\"role\": \"user\", \"content\": query}]}),\n                timeout=180.0\n            )\n            print(f\"\\nAgent Final Response:\")\n            if response and \"messages\" in response and response[\"messages\"]:\n                response_content = response[\"messages\"][-1].content\n                print(response_content)\n                if \"google\" in response_content.lower():\n                    print(\"\\n✅ Test PASS\")\n                    test_success = True\n                else:\n                    print(\"\\n❌ Test FAIL (title not found)\")\n                    test_success = False\n            else:\n                print(\"No valid response from agent.\")\n                test_success = False\n        except Exception as e:\n            print(f\"Exception: {e}\")\n            test_success = False\n\n    return test_success\n\nasync 
def main():\n    print(\"Starting a simplified MCP Integration Test for 'fetch_via_uvx' only...\")\n    try:\n        all_configs = load_config(CENTRAL_CONFIG_PATH)\n        print(f\"Loaded {len(all_configs)} server configs.\")\n    except Exception as e:\n        print(f\"Error loading config: {e}\")\n        return\n\n    # 只测试 fetch_via_uvx\n    result = await run_fetch_test(\"fetch_via_uvx\", all_configs)\n    if result:\n        print(\"\\nALL GOOD: 'fetch' test passed.\")\n    else:\n        print(\"\\nTEST FAILED: 'fetch' test didn't pass.\")\n    print(\"Done.\")\n\nif __name__ == \"__main__\":\n    asyncio.run(main())"
  },
  {
    "path": "examples/15_mcp_agent_test.py",
    "content": "# examples/14_mcp_fetch_basetool_test.py (最终版 - BaseTool 子类)\nimport os\nimport sys\nimport asyncio\nimport json\nfrom dotenv import load_dotenv\nimport traceback\nfrom typing import List, Dict, Any, Optional, Type\n\n# --- 前置要求 ---\n# 1. 确保 core/mcp/client.py 和 core/mcp/config_loader.py 是最新版本 (含 AsyncExitStack 和导入修复)。\n# 2. 确保 core/mcp/config.json 文件存在，并包含 \"fetch_via_uvx\" 配置 (使用 uvx + stdio)。\n# 3. 确保已安装 uv (`pip install uv`) 和 mcp-server-fetch。\n# 4. 确保 OpenAI API Key (或其他 LLM Key) 在 .env 或环境变量中设置。\n# 5. 推荐设置 LangSmith 环境变量用于详细追踪 Agent 行为。\n# ---\n\n# 添加项目根目录到路径\nsys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))\nload_dotenv()\n\n# --- 核心依赖导入 ---\n# LangChain\nfrom langchain_openai import ChatOpenAI\nfrom langgraph.prebuilt import create_react_agent\nfrom langchain_core.tools import BaseTool\nfrom langchain_core.messages import HumanMessage\ntry:\n    # 尝试导入 Pydantic v1 (LangChain 常用的版本)\n    from langchain_core.pydantic_v1 import BaseModel, Field\nexcept ImportError:\n    try:\n        # 如果 V1 不可用，尝试导入 V2\n        from pydantic import BaseModel, Field # type: ignore\n    except ImportError:\n         print(\"CRITICAL ERROR: Pydantic (v1 or v2) not found.\")\n         sys.exit(1)\n# MCP Client/Config\ntry: from core.mcp.client import MCPClient\nexcept ImportError: print(\"CRITICAL ERROR: Cannot import MCPClient.\"); sys.exit(1)\ntry: from core.mcp.config_loader import load_config, MCPConfig, StdioConfig\nexcept ImportError: print(\"CRITICAL ERROR: Cannot import config loader.\"); sys.exit(1)\n# LLM\nfrom core.llm.llm_manager import LLMManager\n# MCP Types\ntry: from mcp.types import CallToolRequest; CALL_TOOL_REQ_AVAILABLE = True\nexcept ImportError: CallToolRequest = None; CALL_TOOL_REQ_AVAILABLE = False\n# ---\n\n# --- Fetch Tool Schema 定义 ---\nFETCH_SCHEMA_AVAILABLE = False\nFetchInputSchema = None\ntry:\n    class FetchInputSchema(BaseModel): # 使用导入的 BaseModel\n         url: str = Field(..., description=\"URL to 
fetch\")\n         max_length: Optional[int] = Field(default=5000, description=\"Maximum number of characters to return\")\n         start_index: Optional[int] = Field(default=0, description=\"Start content from this character index\")\n         raw: Optional[bool] = Field(default=False, description=\"Get raw content without markdown conversion\")\n    FETCH_SCHEMA_AVAILABLE = True\nexcept Exception as e_pyd_fetch: print(f\"ERROR defining FetchInputSchema: {e_pyd_fetch}\")\n# ---\n\n# --- 全局设置 ---\n# **重要**: 确认此路径指向你的中央配置文件\nCENTRAL_CONFIG_PATH = os.path.join(os.path.dirname(__file__), \"..\", \"core\", \"mcp\", \"mcp_server_config.json\")\n# 使用 OpenAI 模型通常更稳定\nLLM_ID_FOR_TESTING = \"openai_gpt4o_mini\"\n# 要测试的服务器在 config.json 中的 key\nSERVER_KEY_TO_TEST = \"fetch_via_uvx\"\n# 要测试的工具名称\nTOOL_NAME_TO_TEST = \"fetch\"\n# 要测试的工具的正确 Schema\nCORRECT_SCHEMA_FOR_TOOL = FetchInputSchema\n# 要测试的工具的描述\nTOOL_DESCRIPTION = \"Fetches web content as markdown. Input requires 'url' (string) and optional 'max_length', 'start_index', 'raw'.\"\n\n# --- Everything MCP 服务器设置 ---\nEVERYTHING_SERVER_KEY = \"everything\"\nEVERYTHING_ECHO_TOOL = \"echo\"\nEVERYTHING_ADD_TOOL = \"add\"\n\n# --- Everything MCP 工具 Schema 定义 ---\nECHO_SCHEMA_AVAILABLE = False\nEchoInputSchema = None\ntry:\n    class EchoInputSchema(BaseModel):\n        message: str = Field(..., description=\"Message to echo back\")\n    ECHO_SCHEMA_AVAILABLE = True\nexcept Exception as e_pyd_echo: print(f\"ERROR defining EchoInputSchema: {e_pyd_echo}\")\n\nADD_SCHEMA_AVAILABLE = False\nAddInputSchema = None\ntry:\n    class AddInputSchema(BaseModel):\n        a: float = Field(..., description=\"First number\")\n        b: float = Field(..., description=\"Second number\")\n    ADD_SCHEMA_AVAILABLE = True\nexcept Exception as e_pyd_add: print(f\"ERROR defining AddInputSchema: {e_pyd_add}\")\n\nllm_manager = LLMManager()\n\n# --- 标准 BaseTool 子类定义，用于桥接 MCP 调用 ---\nclass MCPToolRunner(BaseTool):\n    \"\"\"\n    通过 MCP 调用服务器上工具的标准 
BaseTool 实现。\n    \"\"\"\n    # --- 类属性 (将在实例化时被覆盖) ---\n    name: str = \"mcp_tool_runner\" # Default name\n    description: str = \"Runs a tool via MCP\"\n    args_schema: Optional[Type[BaseModel]] = None\n\n    # --- 实例属性 ---\n    client: MCPClient = Field(exclude=True) # 存储客户端引用\n\n    # Pydantic 配置 (根据你使用的 BaseModel 版本)\n    class Config: arbitrary_types_allowed = True\n\n    async def _arun(self, **kwargs) -> str:\n        \"\"\"异步执行：构造 MCP 请求并调用 client.session.call_tool\"\"\"\n        if not self.client or not self.client.session: return f\"ERROR: MCP Client session inactive for {self.name}.\"\n        if not CALL_TOOL_REQ_AVAILABLE: return \"ERROR: CallToolRequest unavailable.\"\n\n        try:\n            # kwargs 应该是 LangChain 根据 args_schema 验证和准备好的参数\n            print(f\"    [_arun:{self.name}] Preparing MCP request with args: {kwargs}\")\n            # 不再需要构造CallToolRequest对象，直接传递工具名称和参数\n            print(f\"    [_arun:{self.name}] Calling tool '{self.name}' with args: {kwargs}\")\n\n            # 调用 MCP session - 直接传递工具名称和参数\n            result_message = await asyncio.wait_for(\n                self.client.session.call_tool(self.name, kwargs),\n                timeout=120.0 # 给予足够的网络和执行超时\n            )\n\n            # 处理结果 - 简化处理逻辑，直接检查content属性\n            print(f\"    [_arun:{self.name}] MCP Response received, type: {type(result_message)}\")\n            \n            # 直接检查是否有content属性（根据日志显示的响应结构）\n            if hasattr(result_message, 'content'):\n                content = result_message.content\n                print(f\"    [_arun:{self.name}] Found content attribute, type: {type(content)}\")\n                \n                # 如果content是列表且不为空\n                if isinstance(content, list) and len(content) > 0:\n                    first_item = content[0]\n                    print(f\"    [_arun:{self.name}] Content is a list, first item type: {type(first_item)}\")\n                    \n                    # 尝试获取text属性\n                  
  if hasattr(first_item, 'text'):\n                        print(f\"    [_arun:{self.name}] First item has text attribute, returning text\")\n                        return first_item.text\n                    else:\n                        print(f\"    [_arun:{self.name}] First item has no text attribute, converting to string\")\n                        return str(first_item)\n                elif hasattr(content, 'text'):\n                    print(f\"    [_arun:{self.name}] Content has text attribute, returning text\")\n                    return content.text\n                else:\n                    print(f\"    [_arun:{self.name}] Content has no text attribute, converting to string\")\n                    return str(content)\n            # 如果没有content属性，回退到检查result属性\n            elif hasattr(result_message, 'result'):\n                res_val = result_message.result\n                print(f\"    [_arun:{self.name}] Found result attribute: {str(res_val)[:500]}...\")\n                return str(res_val) if not isinstance(res_val, str) else res_val\n            elif hasattr(result_message, 'error'):\n                err_msg = result_message.error.message\n                print(f\"    [_arun:{self.name}] MCP Tool Error: {err_msg}\")\n                # 对于 Agent，返回错误通常比抛出异常更好处理\n                return f\"Tool Error: {err_msg}\"\n            else:\n                # 打印完整的响应对象，帮助诊断问题\n                print(f\"    [_arun:{self.name}] Unknown MCP response format. 
Full response object: {result_message}\")\n                print(f\"    [_arun:{self.name}] Response type: {type(result_message)}\")\n                print(f\"    [_arun:{self.name}] Response dir: {dir(result_message)}\")\n                \n                # 尝试处理特殊的响应格式\n                if hasattr(result_message, 'content'):\n                    content = result_message.content\n                    print(f\"    [_arun:{self.name}] Found content attribute in response\")\n                    \n                    # 处理content是列表的情况\n                    if isinstance(content, list) and len(content) > 0:\n                        print(f\"    [_arun:{self.name}] Content is a list with {len(content)} items\")\n                        first_item = content[0]\n                        if hasattr(first_item, 'text'):\n                            print(f\"    [_arun:{self.name}] First item has text attribute, returning text\")\n                            return first_item.text\n                        elif hasattr(first_item, 'type') and hasattr(first_item, 'text'):\n                            print(f\"    [_arun:{self.name}] First item has type and text attributes, returning text\")\n                            return first_item.text\n                        else:\n                            print(f\"    [_arun:{self.name}] First item has no text attribute, converting to string\")\n                            return str(first_item)\n                    # 处理content是单个对象的情况\n                    elif hasattr(content, 'text'):\n                        print(f\"    [_arun:{self.name}] Content has text attribute, returning text\")\n                        return content.text\n                    else:\n                        print(f\"    [_arun:{self.name}] Content has no text attribute, converting to string\")\n                        return str(content)\n                \n                # 尝试提取更多信息\n                response_details = \"\"\n                for attr in 
dir(result_message):\n                    if not attr.startswith('_'):\n                        try:\n                            value = getattr(result_message, attr)\n                            if not callable(value):\n                                response_details += f\"\\n    - {attr}: {value}\"\n                        except Exception as attr_err:\n                            response_details += f\"\\n    - {attr}: [Error accessing: {attr_err}]\"\n                print(f\"    [_arun:{self.name}] Response details: {response_details}\")\n                return f\"Unknown response from MCP tool {self.name}. Details: {response_details}\"\n        except asyncio.TimeoutError:\n            print(f\"    [_arun:{self.name}] MCP call timeout.\")\n            return f\"Error: Timeout calling MCP tool {self.name}.\"\n        except Exception as e:\n            print(f\"    [_arun:{self.name}] Unexpected error during MCP call: {e}\")\n            print(traceback.format_exc())\n            # 返回包含 Traceback 的错误，方便调试\n            return f\"Unexpected Error calling {self.name}: {e}\\n{traceback.format_exc()}\"\n\n    def _run(self, **kwargs) -> str:\n        \"\"\"同步执行 (简单实现，通过运行异步方法)\"\"\"\n        print(f\"    [_run:{self.name}] Running async method via asyncio.run()...\")\n        try:\n            # 注意: 在已运行的事件循环中调用 asyncio.run 会报错\n            # 更好的方法是检查当前循环或使用 anyio/nest_asyncio\n            # 但为了满足 BaseTool 要求，先用简单方式，如果 Agent 只用 async 就没问题\n            # 如果 Agent 强制用 sync，可能需要更复杂的处理\n            # return asyncio.run(self._arun(**kwargs))\n            # 更安全的方式是提示不支持或使用更复杂的同步转异步\n             return \"Synchronous execution not fully supported, please use async.\"\n        except Exception as e:\n             print(f\"    [_run:{self.name}] Error: {e}\")\n             return f\"Error in sync wrapper: {e}\"\n# ---\n\n# --- 主要测试逻辑 ---\nasync def run_fetch_test():\n    \"\"\"运行 Fetch Server 测试 (使用 BaseTool 子类)\"\"\"\n    print(f\"\\n=== Running Fetch Server Test 
(BaseTool Subclass Method) ===\")\n\n    # 检查依赖和 Schema 定义\n    if not FETCH_SCHEMA_AVAILABLE: print(\"ERROR: FetchInputSchema not available.\"); return False\n    if not CALL_TOOL_REQ_AVAILABLE: print(\"ERROR: CallToolRequest unavailable.\"); return False\n\n    # 加载配置\n    config: Optional[MCPConfig] = None\n    try:\n        all_configs = load_config(CENTRAL_CONFIG_PATH)\n        config = all_configs.get(SERVER_KEY_TO_TEST)\n        if not config: print(f\"ERROR: Config key '{SERVER_KEY_TO_TEST}' not found in '{CENTRAL_CONFIG_PATH}'.\"); return False\n        if not isinstance(config.connection, StdioConfig): print(\"ERROR: Config connection is not STDIO.\"); return False\n        print(f\"Successfully loaded config for '{SERVER_KEY_TO_TEST}'.\")\n    except Exception as e_load: print(f\"ERROR loading config: {e_load}\"); return False\n\n    # 获取 LLM\n    try: model = llm_manager.get_model(LLM_ID_FOR_TESTING); print(f\"Using LLM: {getattr(model, 'model_name', LLM_ID_FOR_TESTING)}\")\n    except ValueError as e: print(f\"获取 LLM 出错: {e}.\"); return False\n\n    test_success = False\n    # 使用 MCPClient 连接 (它会根据 config 启动服务器)\n    async with MCPClient(config) as client:\n        print(\"\\nMCPClient context entered.\")\n        if not client.session: print(\"ERROR: MCP session not established!\"); return False\n\n        # --- 实例化我们定义的 MCPToolRunner ---\n        try:\n            print(f\"Instantiating MCPToolRunner for '{TOOL_NAME_TO_TEST}'...\")\n            mcp_tool_instance = MCPToolRunner(\n                client=client, # 注入 client\n                name=TOOL_NAME_TO_TEST,\n                description=TOOL_DESCRIPTION,\n                args_schema=CORRECT_SCHEMA_FOR_TOOL\n            )\n            tools = [mcp_tool_instance]\n            print(f\"Tool instance created successfully.\")\n        except Exception as e_inst: print(f\"ERROR instantiating MCPToolRunner: {e_inst}\"); return False\n        # ---\n\n        # --- Agent 执行 ---\n        agent = 
create_react_agent(model, tools) # Agent 使用这个标准工具\n        query = \"Use the fetch tool to get the main content (first 2000 chars) from https://developer.mozilla.org/en-US/docs/Web/HTML\"\n        print(f\"\\nRunning Agent Query...\")\n        print(f\"Query: {query}\")\n        print(\"--- NOTE: Enable LangSmith for detailed tracing! ---\")\n        try:\n            response = await asyncio.wait_for( agent.ainvoke({\"messages\": [{\"role\": \"user\",\"content\": query}]}), timeout=180.0 )\n            print(f\"\\nAgent Final Response:\")\n            if response and \"messages\" in response and response[\"messages\"]:\n                 response_content = response[\"messages\"][-1].content; print(response_content)\n                 # 检查是否成功获取内容且无报错\n                 contains_error = \"error\" in response_content.lower() or \"fail\" in response_content.lower() or \"issue\" in response_content.lower() or \"apologi\" in response_content.lower() or \"unable\" in response_content.lower() or \"tool error\" in response_content.lower()\n                 contains_expected = \"HTML\" in response_content\n\n                 if not contains_error and contains_expected:\n                      print(f\"\\n✅ Test PASS: Agent successfully used tool and got expected content.\")\n                      test_success = True\n                 else: print(f\"\\n❌ Test FAIL: Agent reported error or didn't get expected content.\"); test_success = False\n            else: print(\"Agent returned no valid response.\"); test_success = False\n        except asyncio.TimeoutError: print(f\"Agent execution timed out\"); test_success = False\n        except Exception as e: print(f\"Agent execution failed: {e}\"); print(f\"Traceback:\\n{traceback.format_exc()}\"); test_success = False\n        # ---\n\n    # async with 会自动调用 client.close()\n    print(f\"\\n--- Fetch Server Test Result: {'PASS' if test_success else 'FAIL'} ---\")\n    return test_success\n\nasync def run_everything_test():\n    
\"\"\"运行 Everything MCP Server 测试 (使用 BaseTool 子类)\"\"\"\n    print(f\"\\n=== Running Everything MCP Server Test (BaseTool Subclass Method) ===\")\n\n    # 检查依赖和 Schema 定义\n    if not ECHO_SCHEMA_AVAILABLE: print(\"ERROR: EchoInputSchema not available.\"); return False\n    if not ADD_SCHEMA_AVAILABLE: print(\"ERROR: AddInputSchema not available.\"); return False\n    if not CALL_TOOL_REQ_AVAILABLE: print(\"ERROR: CallToolRequest unavailable.\"); return False\n\n    # 加载配置\n    config: Optional[MCPConfig] = None\n    try:\n        all_configs = load_config(CENTRAL_CONFIG_PATH)\n        config = all_configs.get(EVERYTHING_SERVER_KEY)\n        if not config: print(f\"ERROR: Config key '{EVERYTHING_SERVER_KEY}' not found in '{CENTRAL_CONFIG_PATH}'.\"); return False\n        if not isinstance(config.connection, StdioConfig): print(\"ERROR: Config connection is not STDIO.\"); return False\n        print(f\"Successfully loaded config for '{EVERYTHING_SERVER_KEY}'.\")\n    except Exception as e_load: print(f\"ERROR loading config: {e_load}\"); return False\n\n    # 获取 LLM\n    try: model = llm_manager.get_model(LLM_ID_FOR_TESTING); print(f\"Using LLM: {getattr(model, 'model_name', LLM_ID_FOR_TESTING)}\")\n    except ValueError as e: print(f\"获取 LLM 出错: {e}.\"); return False\n\n    test_success = False\n    # 使用 MCPClient 连接 (它会根据 config 启动服务器)\n    async with MCPClient(config) as client:\n        print(\"\\nMCPClient context entered for Everything MCP.\")\n        if not client.session: print(\"ERROR: MCP session not established!\"); return False\n\n        # --- 实例化我们定义的 MCPToolRunner 用于 echo 工具 ---\n        try:\n            print(f\"Instantiating MCPToolRunner for '{EVERYTHING_ECHO_TOOL}'...\")\n            echo_tool = MCPToolRunner(\n                client=client, # 注入 client\n                name=EVERYTHING_ECHO_TOOL,\n                description=\"Echoes back the input message\",\n                args_schema=EchoInputSchema\n            )\n            \n            
print(f\"Instantiating MCPToolRunner for '{EVERYTHING_ADD_TOOL}'...\")\n            add_tool = MCPToolRunner(\n                client=client, # 注入 client\n                name=EVERYTHING_ADD_TOOL,\n                description=\"Adds two numbers together\",\n                args_schema=AddInputSchema\n            )\n            \n            tools = [echo_tool, add_tool]\n            print(f\"Tool instances created successfully.\")\n        except Exception as e_inst: print(f\"ERROR instantiating MCPToolRunner: {e_inst}\"); return False\n        # ---\n\n        # --- Agent 执行 ---\n        agent = create_react_agent(model, tools) # Agent 使用这些工具\n        query = \"First, use the echo tool to echo back the message 'Hello from Everything MCP!'. Then, use the add tool to calculate 42 + 58.\"\n        print(f\"\\nRunning Agent Query...\")\n        print(f\"Query: {query}\")\n        print(\"--- NOTE: Enable LangSmith for detailed tracing! ---\")\n        try:\n            response = await asyncio.wait_for(agent.ainvoke({\"messages\": [{\"role\": \"user\",\"content\": query}]}), timeout=180.0)\n            print(f\"\\nAgent Final Response:\")\n            if response and \"messages\" in response and response[\"messages\"]:\n                response_content = response[\"messages\"][-1].content; print(response_content)\n                # 检查是否成功获取内容且无报错\n                contains_error = \"error\" in response_content.lower() or \"fail\" in response_content.lower() or \"issue\" in response_content.lower() or \"apologi\" in response_content.lower() or \"unable\" in response_content.lower() or \"tool error\" in response_content.lower()\n                contains_echo = \"Hello from Everything MCP!\" in response_content\n                contains_add = \"100\" in response_content\n\n                if not contains_error and contains_echo and contains_add:\n                    print(f\"\\n✅ Test PASS: Agent successfully used both tools and got expected content.\")\n                  
  test_success = True\n                else: \n                    print(f\"\\n❌ Test FAIL: Agent reported error or didn't get expected content.\")\n                    print(f\"  - Contains error: {contains_error}\")\n                    print(f\"  - Contains echo response: {contains_echo}\")\n                    print(f\"  - Contains add result: {contains_add}\")\n                    test_success = False\n            else: print(\"Agent returned no valid response.\"); test_success = False\n        except asyncio.TimeoutError: print(f\"Agent execution timed out\"); test_success = False\n        except Exception as e: print(f\"Agent execution failed: {e}\"); print(f\"Traceback:\\n{traceback.format_exc()}\"); test_success = False\n        # ---\n\n    # async with 会自动调用 client.close()\n    print(f\"\\n--- Everything MCP Server Test Result: {'PASS' if test_success else 'FAIL'} ---\")\n    return test_success\n\nasync def main():\n    \"\"\"主函数 - 运行所有测试\"\"\"\n    print(\"Starting MCP Integration Tests...\")\n    \n    # 运行 Fetch 测试\n    fetch_success = await run_fetch_test()\n    \n    # 运行 Everything MCP 测试\n    everything_success = await run_everything_test()\n    \n    print(\"\\n\" + \"=\"*20 + \" FINAL TEST SUMMARY \" + \"=\"*20);\n    print(f\"  Fetch Server Test: {'PASS' if fetch_success else 'FAIL'}\")\n    print(f\"  Everything MCP Test: {'PASS' if everything_success else 'FAIL'}\")\n    print(\"=\"*20 + \" MCP Integration Test Finished \" + \"=\"*20)\n\nif __name__ == \"__main__\":\n    # 简化依赖检查\n    print(\"--- Dependency Check ---\")\n    deps_ok = True\n    try: import mcp; print(\"mcp available: True\")\n    except ImportError: print(\"mcp available: False\"); deps_ok = False\n    if CALL_TOOL_REQ_AVAILABLE: print(\"CallToolRequest available: True\")\n    else: print(\"CallToolRequest available: False\"); deps_ok = False # 需要它\n    try: import langgraph; print(\"langgraph available: True\")\n    except ImportError: print(\"langgraph available: False\"); 
deps_ok = False\n    try: import langchain_openai; print(\"langchain_openai available: True\")\n    except ImportError: print(\"langchain_openai available: False\"); deps_ok = False\n    try: import dotenv; print(\"dotenv available: True\")\n    except ImportError: print(\"dotenv available: False\"); deps_ok = False\n    try: import pydantic; print(\"pydantic available: True\")\n    except ImportError: print(\"pydantic available: False\"); deps_ok = False\n    try: from core.mcp.client import MCPClient; print(\"MCPClient available: True\")\n    except ImportError: print(\"MCPClient available: False\"); deps_ok = False\n    try: from core.mcp.config_loader import load_config; print(\"config_loader available: True\")\n    except ImportError: print(\"config_loader available: False\"); deps_ok = False\n    if not FETCH_SCHEMA_AVAILABLE: print(\"FetchInputSchema available: False\"); deps_ok=False\n    else: print(\"FetchInputSchema available: True\")\n    if not ECHO_SCHEMA_AVAILABLE: print(\"EchoInputSchema available: False\"); deps_ok=False\n    else: print(\"EchoInputSchema available: True\")\n    if not ADD_SCHEMA_AVAILABLE: print(\"AddInputSchema available: False\"); deps_ok=False\n    else: print(\"AddInputSchema available: True\")\n    print(f\"------------------------\")\n\n    if not deps_ok:\n        print(\"CRITICAL ERROR: Necessary dependencies missing.\")\n        sys.exit(1)\n\n    asyncio.run(main())"
  },
  {
    "path": "examples/16_google_a2a/README.md",
    "content": "# LangGraph Agent 与 A2A 协议集成框架\n\n## 概述\n\n本项目提供了一个将 **LangGraph Agent**（特别是基于 ReAct 模式并能调用工具的 Agent）与 **A2A (Agent-to-Agent) 协议** 相集成的框架和示例。目标是展示如何将一个用 LangGraph 构建的复杂 Agent 能力，通过标准化的 A2A 接口暴露给外部客户端或其他 Agent。\n\n此框架的核心在于 `AgentTaskManager`，它充当了 A2A 协议层与具体 Agent 实现之间的桥梁。项目包含了一个完整的端到端示例，其中 `CurrencyAgent`（使用 `create_react_agent` 构建，并带有计算器和搜索工具）通过 `A2AServer` 提供服务，并提供了两个不同的客户端示例 (`client_example.py` 和 `currency_agent_test.py`) 来演示如何与之交互。\n\n关键技术栈包括：\n* **A2A 协议:** 定义交互规范。\n* **LangGraph:** 用于构建具备状态管理和工具调用能力的 Agent。\n* **`create_react_agent`:** LangGraph 提供的预构建 ReAct Agent 实现（作为示例）。\n* **Pydantic:** 用于定义和验证 A2A 协议中的数据结构 (`core/a2a/types.py`)。\n* **Starlette/Uvicorn:** 作为底层 Web 框架运行 A2A 服务器 (`core/a2a/server/server.py`)。\n* **OpenAI API:** 作为 LangGraph Agent 使用的后端大语言模型（可替换）。\n\n## 特性\n\n* **A2A 协议兼容:** 提供符合 A2A 规范的服务端点 (`/.well-known/agent.json` 和主任务端点)。\n* **LangGraph Agent 集成:** 可将任意（满足特定接口要求的）LangGraph Agent 作为 A2A 服务的核心处理逻辑。\n* **工具使用:** 集成的 Agent 能够根据需要调用外部工具（示例中为计算器和搜索）。\n* **同步任务处理:** 支持客户端发送任务并等待最终结果。\n* **流式基础:** 包含了处理流式请求和响应的框架（Agent 端流式逻辑需开发者实现）。\n* **类型安全:** 使用 Pydantic 进行严格的数据校验。\n* **环境配置:** 支持通过 `.env` 文件配置 API 密钥等敏感信息。\n* **客户端示例:** 提供了基础和场景化的客户端示例代码。\n\n## 目录结构\n\n```\n.\n├── core/                           # 核心 A2A 协议实现\n│   └── a2a/\n│       ├── client/\n│       │   └── client.py           # A2AClient 客户端库实现\n│       ├── server/\n│       │   ├── server.py           # A2AServer HTTP 服务器实现\n│       │   └── task_manager.py     # TaskManager 基础接口 (被 AgentTaskManager 使用)\n│       ├── agent_task_manager.py     # AgentTaskManager 实现 (连接 A2A 与 LangGraph)\n│       └── types.py                # A2A 协议的 Pydantic 模型定义\n├── examples/                       # 示例代码\n│   └── a2a/\n│       ├── langgraph_integration.py # 服务端设置和示例 LangGraph Agent (CurrencyAgent) 定义\n│       ├── client_example.py          # 基础 A2A 客户端使用示例脚本\n│       └── currency_agent_test.py     # 场景化 A2A 客户端测试脚本\n├── .env                            # 存储环境变量 (例如 OPENAI_API_KEY) - 
*需要自行创建*\n├── requirements.txt                # Python 依赖项列表 (假设存在)\n└── README.md                       # 本文档\n```\n\n## 核心组件说明\n\n* **`core/a2a/types.py`:** 定义所有 A2A 数据结构，是协议的基础和校验依据。\n* **`core/a2a/server/server.py` (`A2AServer`):** 基于 Starlette 的 HTTP 服务器，处理 A2A JSON-RPC 请求路由，将请求交给 `AgentTaskManager`。通过 `.start()` 方法启动。\n* **`core/a2a/agent_task_manager.py` (`AgentTaskManager`):** **核心适配器**。连接 A2A 层和 Agent 层。它接收来自 `A2AServer` 的请求，管理任务状态，并调用注入的 Agent 实例的 `invoke` 或 `stream` 方法。\n* **`examples/a2a/langgraph_integration.py`:** 包含 `CurrencyAgent` (使用 `create_react_agent` 的示例 Agent) 的定义，以及如何配置和启动 `A2AServer` 来运行这个 Agent 的完整脚本。\n* **`core/a2a/client/client.py` (`A2AClient`):** 基础 A2A 客户端库。\n* **`examples/a2a/client_example.py`:** 一个简单的脚本，演示如何使用 `A2AClient` 发送基本请求。\n* **`examples/a2a/currency_agent_test.py`:** 一个更复杂的客户端脚本，包含多个测试场景，用于测试服务器端 Agent 的不同交互模式。\n\n## 先决条件\n\n* Python (推荐 3.10 或更高版本)\n* `pip` (Python 包安装器)\n* 虚拟环境 (强烈推荐)\n* 大语言模型 API Key (例如 OpenAI API Key)\n\n## 安装与设置\n\n1.  **克隆仓库:**\n    ```bash\n    git clone <your-repo-url>\n    cd <your-repo-directory>\n    ```\n2.  **创建并激活虚拟环境:**\n    ```bash\n    uv venv\n    source .venv/bin/activate\n    ```\n3.  **安装依赖项:**\n    ```bash\n    uv sync\n    ```\n4.  **设置环境变量:**\n    * 在项目根目录下创建 `.env` 文件。\n    * 添加所需的 API Key，例如：\n        ```dotenv\n        OPENAI_API_KEY=\"sk-...\"\n        ```\n\n## 运行示例\n\n1.  **启动 A2A 服务器:**\n    * 在终端中，激活虚拟环境后运行：\n        ```bash\n        python -m examples.a2a.langgraph_integration\n        ```\n    * 服务器将在 `http://127.0.0.1:8000` 启动并监听。\n\n2.  **运行 A2A 客户端:**\n    * 打开**新的**终端，激活虚拟环境。\n    * 你可以选择运行任一客户端示例：\n        * **基础示例:**\n            ```bash\n            python -m examples.a2a.client_example\n            ```\n        * **场景化测试:**\n            ```bash\n            python -m examples.a2a.currency_agent_test\n            ```\n\n3.  
**预期输出:**\n    * **服务器终端**会显示接收请求、调用 LLM 和工具（如果被触发）的日志。\n    * **客户端终端**会显示发送任务、轮询状态（对于同步任务）、接收结果或（模拟的）流式事件的输出。`currency_agent_test.py` 会按场景输出结果。\n\n---\n\n## **重要：集成新的 LangGraph Agent 指南**\n\n如果你创建了一个新的基于 LangGraph 的 Agent，并希望将其接入到这个 A2A 框架中，你需要遵循以下步骤和约定：\n\n### 1. Agent 类必须实现的接口\n\n你的新 Agent 类（例如 `MyNewAgent`）需要被 `AgentTaskManager` 调用。为此，它**必须**实现以下方法和属性：\n\n* **`__init__(self, llm, ...)`:**\n    * 构造函数，用于初始化 Agent 所需的资源，例如 LLM 实例、工具列表等。\n    * **关键:** 在这里构建或获取你的 LangGraph **Runnable** 实例（例如通过 `create_react_agent` 或手动构建 `StateGraph().compile()`），并将其存储为类的成员（例如 `self.agent_runnable`）。\n\n* **`invoke(self, query: str, session_id: Optional[str] = None) -> str:`**\n    * 处理 A2A 的**同步** `tasks/send` 请求。\n    * 接收从 `AgentTaskManager` 传递过来的纯文本用户查询 `query` 和可选的 `session_id`。\n    * **内部逻辑:**\n        * 将 `query` 包装成你的 LangGraph Runnable 所需的输入格式。对于基于 `create_react_agent` 或类似使用消息列表的 Agent，通常是 `{\"messages\": [(\"user\", query)]}`。如果需要 `session_id`，也应包含在内。\n        * 调用 LangGraph Runnable 的 `.invoke()` 方法，传入构造好的输入字典。\n        * 处理 Runnable 返回的结果字典。对于 ReAct Agent，最终的文本答案通常位于结果字典内 `messages` 列表的最后一条消息的内容中。你需要编写逻辑来提取这个最终答案。\n    * **返回值:** **必须**返回一个包含最终答案的**字符串**。\n\n* **`stream(self, query: str, session_id: Optional[str] = None) -> AsyncIterable[Dict[str, Any]]:`**\n    * 处理 A2A 的**流式** `tasks/sendSubscribe` 请求。\n    * 接收 `query` 和 `session_id`。\n    * **必须**是一个**异步生成器** (`async def` 包含 `yield`)。\n    * **内部逻辑:**\n        * 准备 LangGraph Runnable 流式调用所需的输入（通常与 `invoke` 类似，例如 `{\"messages\": [(\"user\", query)]}`）。\n        * 调用 LangGraph Runnable 的流式方法，例如 `self.agent_runnable.astream(...)` 或 `self.agent_runnable.astream_log(...)`。\n        * 使用 `async for chunk in ...:` 迭代 LangGraph Runnable 返回的流式数据块 (`chunk`)。\n        * **解析 `chunk`**: LangGraph 流式输出的 `chunk` 格式取决于你调用的方法（`astream` vs `astream_log`）和图的结构。你需要解析这些 `chunk`（可能是状态变更、日志补丁等）来获取有意义的中间或最终内容。\n        * **`yield` 符合格式的字典**: 对于每个希望发送给客户端的更新，你需要 `yield` 一个字典。这个字典**必须**包含以下键（供 `AgentTaskManager._run_streaming_agent` 
使用）：\n            * `\"content\"`: `str` - 当前步骤生成的文本内容。\n            * `\"is_task_complete\"`: `bool` - 指示这是否是任务的最终产物/结束信号。\n            * `\"require_user_input\"`: `bool` - 指示任务是否暂停并需要用户输入。\n    * **返回值:** 返回一个异步可迭代对象（由 `async def` + `yield` 自动创建）。\n\n* **`SUPPORTED_CONTENT_TYPES: List[str]` (类属性):**\n    * 一个包含 Agent 支持的输出内容类型的列表。对于主要处理文本的 Agent，通常是 `[\"text\"]`。`AgentTaskManager` 会用它来验证客户端请求的 `acceptedOutputModes`。\n\n### 2. `AgentState` 的一致性\n\n如果你手动构建 LangGraph 图，你定义的 `AgentState`（传递给 `StateGraph`）需要与你的 `invoke` 和 `stream` 方法处理输入/输出的方式保持一致。特别是，如果你依赖 `messages` 列表来管理对话历史或传递输入/输出，`AgentState` 中需要正确定义它。\n\n### 3. 集成步骤\n\n1.  **创建 Agent 类:**\n    * 在你的项目中创建一个新的 Python 文件（例如 `my_new_agent.py`）。\n    * 定义你的 Agent 类（例如 `MyNewAgent`），确保它实现了上面描述的 `__init__`, `invoke`, `stream` 方法和 `SUPPORTED_CONTENT_TYPES` 属性。\n    * 在 `__init__` 中构建或加载你的 LangGraph Runnable。\n\n2.  **修改服务器启动脚本 (例如 `examples/a2a/langgraph_integration.py`):**\n    * **导入**你的新 Agent 类：`from my_new_agent import MyNewAgent`。\n    * **实例化**你的新 Agent：`my_agent = MyNewAgent(llm)` (确保传递了所需的依赖，如 `llm`)。\n    * **更新 `AgentCard`**: 修改 `name`, `description` 和 `skills` 列表以反映新 Agent 的信息。确保 `AgentSkill` 具有唯一的 `id` 和正确的 `name`。\n    * **实例化 `AgentTaskManager`**: 使用你的新 Agent 实例：`task_manager = AgentTaskManager(my_agent)`。\n    * **实例化 `A2AServer`**: 使用更新后的 `agent_card` 和 `task_manager`。\n\n3.  **运行服务器:**\n    * 启动修改后的服务器脚本：`python -m examples.a2a.your_server_script`。\n\n4.  **测试:**\n    * 使用 `client_example.py` 或 `currency_agent_test.py`（可能需要修改发送的查询或 `metadata` 中的 `skill_name`）来向新启动的服务器发送请求，验证你的新 Agent 是否能通过 A2A 协议正常工作。\n\n### 示例 Agent 骨架\n\n```python\n# my_new_agent.py\nimport logging\nfrom typing import List, Optional, AsyncIterable, Dict, Any, Tuple\nfrom langchain_core.language_models import BaseChatModel # 示例 LLM 类型\nfrom langgraph.graph.state import StateGraph # 如果手动构建图\n# from langgraph.prebuilt import create_some_agent # 如果使用预构建\nfrom typing import TypedDict\n\nlogger = logging.getLogger(__name__)\n\n# 1. 
定义你 Agent 使用的 State (如果需要)\nclass MyAgentState(TypedDict):\n    messages: List[Tuple[str, str]]\n    # ... 其他状态字段\n\nclass MyNewAgent:\n    SUPPORTED_CONTENT_TYPES: List[str] = [\"text\"]\n\n    def __init__(self, llm: BaseChatModel):\n        self.llm = llm\n        # TODO: 在这里构建或加载你的 LangGraph Runnable\n        # 例如: self.agent_runnable = self._build_my_graph()\n        # 或者: self.agent_runnable = create_some_agent(llm, tools)\n        self.agent_runnable = self._get_placeholder_runnable() # 示例\n        logger.info(\"MyNewAgent initialized.\")\n\n    def _get_placeholder_runnable(self):\n        # 这是一个模拟的 Runnable，你需要替换成真实的 LangGraph Runnable\n        class PlaceholderRunnable:\n            def invoke(self, input_dict):\n                logger.info(f\"PlaceholderRunnable received invoke: {input_dict}\")\n                query = input_dict.get(\"messages\", [(\"\", \"\")])[-1][1]\n                return {\"messages\": [(\"assistant\", f\"模拟回应 '{query}'\")]}\n            async def astream(self, input_dict):\n                logger.info(f\"PlaceholderRunnable received astream: {input_dict}\")\n                query = input_dict.get(\"messages\", [(\"\", \"\")])[-1][1]\n                yield {\"messages\": [(\"assistant\", f\"模拟流式回应1 '{query}' ...\")]}\n                await asyncio.sleep(0.5)\n                yield {\"messages\": [(\"assistant\", f\"模拟流式回应2 '{query}' 完毕。\")]}\n        return PlaceholderRunnable()\n\n    # def _build_my_graph(self):\n    #     # 如果你手动构建图，在这里实现\n    #     # workflow = StateGraph(MyAgentState)\n    #     # ... add nodes, edges ...\n    #     # return workflow.compile()\n    #     pass\n\n    def invoke(self, query: str, session_id: Optional[str] = None) -> str:\n        logger.debug(f\"[MyNewAgent.invoke] query: '{query}', session_id: '{session_id}'\")\n        # 1. 准备输入\n        invoke_input = {\"messages\": [(\"user\", query)]}\n        # 2. 
调用 Runnable\n        try:\n            result = self.agent_runnable.invoke(invoke_input)\n            logger.debug(f\"[MyNewAgent.invoke] Runnable result: {result}\")\n            # 3. 解析结果\n            final_output = \"错误：未能解析 Agent 响应。\"\n            if isinstance(result, dict) and isinstance(result.get(\"messages\"), list) and result[\"messages\"]:\n                 last_message = result[\"messages\"][-1]\n                 if isinstance(last_message, tuple) and len(last_message) == 2:\n                     final_output = last_message[1]\n                 elif hasattr(last_message, 'content'):\n                      final_output = last_message.content\n            return str(final_output)\n        except Exception as e:\n            logger.error(f\"[MyNewAgent.invoke] Error: {e}\", exc_info=True)\n            raise # 重新抛出异常，让 TaskManager 处理\n\n    async def stream(self, query: str, session_id: Optional[str] = None) -> AsyncIterable[Dict[str, Any]]:\n        logger.debug(f\"[MyNewAgent.stream] query: '{query}', session_id: '{session_id}'\")\n        # 1. 准备输入\n        stream_input = {\"messages\": [(\"user\", query)]}\n        # 2. 调用 Runnable 的流式方法\n        try:\n            # 使用 astream 或 astream_log\n            async for chunk in self.agent_runnable.astream(stream_input):\n                logger.debug(f\"[MyNewAgent.stream] Received chunk: {chunk}\")\n                # 3. 
解析 chunk 并 yield 符合格式的字典\n                #    这里的解析逻辑高度依赖于你的图和使用的流式方法\n                #    你需要根据实际的 chunk 内容提取 content, is_task_complete, require_user_input\n                # --- 这是一个 **高度简化** 的示例解析 ---\n                content_to_yield = \"\"\n                is_complete = False # 你需要根据 chunk 判断任务是否真的结束\n                is_input_required = False # 你需要根据 chunk 判断是否需要输入\n\n                # 尝试从 chunk 中提取 'messages' 的最新内容作为 content\n                if isinstance(chunk, dict) and isinstance(chunk.get(\"messages\"), list) and chunk[\"messages\"]:\n                    last_message = chunk[\"messages\"][-1]\n                    if isinstance(last_message, tuple) and len(last_message) == 2:\n                        content_to_yield = last_message[1]\n                    elif hasattr(last_message, 'content'):\n                        content_to_yield = last_message.content\n\n                if content_to_yield: # 只在有内容时 yield\n                    # 在实际应用中，你需要更复杂的逻辑判断 is_task_complete\n                    # 例如，检查 LangGraph 图是否到达了 END 节点，或者某个特定的最终节点状态\n                    # is_complete = ???\n                    yield {\n                        \"content\": content_to_yield,\n                        \"is_task_complete\": is_complete, # 需要正确设置\n                        \"require_user_input\": is_input_required # 需要正确设置\n                    }\n                # --- 简化示例结束 ---\n\n            # **重要**: 在循环结束后，如果任务确实完成了，需要再 yield 一个最终状态\n            # (除非上面的循环中最后一个 yield 的 is_task_complete 已经是 True)\n            # 例如:\n            # final_result = await self.agent_runnable.ainvoke(stream_input) # 可能需要再调用一次 invoke 获取最终确认状态\n            # final_text = ... 
# 解析最终文本\n            # yield {\"content\": final_text, \"is_task_complete\": True, \"require_user_input\": False}\n\n        except Exception as e:\n            logger.error(f\"[MyNewAgent.stream] Error: {e}\", exc_info=True)\n            # 在流中抛出异常可能会中断 SSE 连接，或者你可以 yield 一个错误信息\n            yield {\n                \"content\": f\"处理流式请求时出错: {e}\",\n                \"is_task_complete\": True, # 标记任务失败并结束\n                \"require_user_input\": False\n            }\n```\n\n## 当前状态与限制\n\n* 同步任务执行，包括 LangGraph Agent 调用 LLM 和工具，已成功实现并验证。\n* A2A 协议的服务端和客户端基础结构已建立。\n* **Agent 端的流式处理 (`CurrencyAgent.stream`) 目前是模拟的**，并未真正调用 LangGraph 的流式接口。真实的流式更新尚未实现。\n* 当前 Agent 实现 (`CurrencyAgent`) 不支持需要跨请求保持状态的多轮对话澄清。\n* 错误处理可以进一步增强。\n* 任务存储仅在内存中 (`InMemoryTaskManager`)。\n\n## 未来方向\n\n* **实现真实流式输出:** 按照上述指南，在 Agent 类中实现 `stream` 方法，调用 LangGraph 的 `astream` 或 `astream_log`，并正确解析和 `yield` A2A 所需格式的字典。\n* **支持多轮对话:** 修改 `AgentState` 以包含可累加的消息历史 (例如使用 `Annotated[List[BaseMessage], operator.add]`)，并调整 Agent 的 `invoke` 和 `stream` 方法以处理和利用这个历史记录。可能还需要 Agent 能返回 `input-required` 状态。\n* **增强错误处理:** 为网络问题、Agent 执行错误、工具调用失败、类型验证错误等提供更详细、用户友好的错误报告。\n* **持久化任务存储:** 替换 `InMemoryTaskManager`。\n* **配置管理:** 外部化配置。\n* **多技能支持:** 添加路由逻辑。\n"
  },
  {
    "path": "examples/16_google_a2a/__init__.py",
    "content": "# examples/a2a/__init__.py\n\n\"\"\"\nA2A协议与LangGraph集成示例\n\n本目录包含了A2A协议与LangGraph Agent集成的示例和文档。\n\"\"\""
  },
  {
    "path": "examples/16_google_a2a/agent_task_manager_test.py",
    "content": "# examples/a2a/agent_task_manager_test.py\n\nimport os\nimport sys\nimport asyncio\nimport logging\nfrom typing import TypedDict, Any, List, Optional,Tuple\n\n# 添加项目根目录到路径\nsys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\n# 导入环境变量\nfrom dotenv import load_dotenv\nload_dotenv()\n\n# 导入A2A相关组件\nfrom core.a2a.types import (\n    TaskState, TaskStatus, Task, Artifact, Message,\n    SendTaskRequest, SendTaskResponse, SendTaskStreamingRequest,\n    TaskSendParams, JSONRPCResponse\n)\nfrom core.a2a.agent_task_manager import AgentTaskManager\n\n# 导入LangChain和LLM相关组件\nfrom langchain_core.tools import tool\nfrom langchain_openai import ChatOpenAI\nfrom langgraph.graph import END, StateGraph\nfrom langgraph.prebuilt import create_react_agent\n\n# 配置日志\nlogging.basicConfig(level=logging.INFO)\nlogger = logging.getLogger(__name__)\n\n# 定义一个简单的工具\n@tool\ndef search(query: str) -> str:\n    \"\"\"搜索互联网获取信息\"\"\"\n    return f\"这是关于 '{query}' 的搜索结果。\"\n\n@tool\ndef calculator(expression: str) -> str:\n    \"\"\"计算数学表达式\"\"\"\n    try:\n        result = eval(expression)\n        return f\"计算结果: {result}\"\n    except Exception as e:\n        return f\"计算错误: {e}\"\n\n# 定义一个简单的LangGraph Agent\n\nclass AgentState(TypedDict):\n    messages: List[Tuple[str, str]]\n    session_id: Optional[str] # 保留 session_id\n\nclass TestAgent:\n    \"\"\"测试用Agent\"\"\"\n    \n    # 支持的内容类型\n    SUPPORTED_CONTENT_TYPES = [\"text\"]\n    \n    def __init__(self, llm=None):\n        if llm is None:\n            try:\n                llm = ChatOpenAI(model=\"gpt-4o-mini\")\n            except Exception as e:\n                print(f\"警告: 无法创建OpenAI LLM ({e})，使用模拟模式\")\n                from langchain.llms.fake import FakeListLLM\n                llm = FakeListLLM(responses=[\"这是一个模拟的LLM响应\"])\n                \n        self.tools = [search, calculator]\n        self.agent = create_react_agent(llm, self.tools)\n        self.graph = 
self._build_graph()\n    \n    def _build_graph(self):\n        \"\"\"构建Agent的工作流图\"\"\"\n        workflow = StateGraph(AgentState)\n        workflow.add_node(\"agent\", self.agent)\n        workflow.set_entry_point(\"agent\")\n        workflow.add_edge(\"agent\", END)\n        return workflow.compile()\n    \n    def invoke(self, query: str, session_id: str = None) -> str:\n        \"\"\"同步调用Agent\"\"\"\n        # AgentState 以 messages 列表为输入/输出，而非 input/output 键\n        result = self.graph.invoke({\"messages\": [(\"user\", query)], \"session_id\": session_id})\n        last_message = result[\"messages\"][-1]\n        # create_react_agent 返回 BaseMessage 对象；兼容 (role, content) 元组\n        return last_message.content if hasattr(last_message, \"content\") else str(last_message[1])\n    \n    async def stream(self, query: str, session_id: str = None):\n        \"\"\"流式调用Agent\"\"\"\n        # 模拟流式输出\n        chunks = [\n            \"正在处理您的请求...\",\n            \"正在搜索相关信息...\",\n            \"找到了一些结果，正在整理...\",\n            f\"关于 '{query}' 的信息如下：这是一个模拟的流式响应。\"\n        ]\n        \n        for i, chunk in enumerate(chunks):\n            is_last = i == len(chunks) - 1\n            yield {\n                \"content\": chunk,\n                \"is_task_complete\": is_last,\n                \"require_user_input\": False\n            }\n            await asyncio.sleep(0.5)  # 模拟延迟\n\n# 测试AgentTaskManager的同步任务处理\nasync def test_sync_task():\n    print(\"\\n=== 测试同步任务处理 ===\\n\")\n    \n    # 创建Agent和AgentTaskManager\n    agent = TestAgent()\n    task_manager = AgentTaskManager(agent)\n    \n    # 创建任务请求\n    task_id = \"test_sync_task_1\"\n    session_id = \"test_session_1\"\n    content = [{\"type\": \"text\", \"text\": \"计算 123 + 456 的结果\"}]\n    \n    task_params = TaskSendParams(\n        id=task_id,\n        sessionId=session_id,\n        message=Message(role=\"user\", parts=content),\n        acceptedOutputModes=[\"text\"],\n        historyLength=10\n    )\n    \n    request = SendTaskRequest(id=\"req1\", params=task_params)\n    \n    # 发送任务\n    response = await task_manager.on_send_task(request)\n    \n    # 打印结果\n    print(f\"任务ID: {task_id}\")\n    print(f\"响应类型: {type(response)}\")\n    \n    if hasattr(response, 
\"error\") and response.error:\n        print(f\"错误: {response.error}\")\n    else:\n        print(\"任务成功完成\")\n        \n        # 获取任务\n        task = task_manager.tasks.get(task_id)\n        if task:\n            print(f\"任务状态: {task.status.state}\")\n            if task.artifacts:\n                for artifact in task.artifacts:\n                    for part in artifact.parts:\n                        # Part 是 Pydantic 对象，使用属性访问而非 dict.get\n                        if getattr(part, \"type\", None) == \"text\":\n                            print(f\"任务结果: {getattr(part, 'text', '')}\")\n\n# 测试AgentTaskManager的流式任务处理\nasync def test_streaming_task():\n    print(\"\\n=== 测试流式任务处理 ===\\n\")\n    \n    # 创建Agent和AgentTaskManager\n    agent = TestAgent()\n    task_manager = AgentTaskManager(agent)\n    \n    # 创建任务请求\n    task_id = \"test_stream_task_1\"\n    session_id = \"test_session_1\"\n    content = [{\"type\": \"text\", \"text\": \"搜索关于人工智能的信息\"}]\n    \n    task_params = TaskSendParams(\n        id=task_id,\n        sessionId=session_id,\n        message=Message(role=\"user\", parts=content),\n        acceptedOutputModes=[\"text\"],\n        historyLength=10\n    )\n    \n    request = SendTaskStreamingRequest(id=\"req2\", params=task_params)\n    \n    # 发送流式任务\n    response_generator = await task_manager.on_send_task_subscribe(request)\n    \n    # 检查响应类型\n    if isinstance(response_generator, JSONRPCResponse):\n        print(f\"错误: {response_generator.error}\")\n        return\n    \n    # 处理流式响应\n    print(\"开始接收流式响应:\")\n    async for response in response_generator:\n        if hasattr(response, \"error\") and response.error:\n            print(f\"流式响应错误: {response.error}\")\n        else:\n            result = response.result\n            if hasattr(result, \"status\") and result.status and result.status.message:\n                for part in result.status.message.parts:\n                    # 直接访问 Part 对象的属性 type 和 text\n                    if hasattr(part, 'type') and part.type == \"text\":\n                        text_content = getattr(part, 'text', '')\n                        print(f\"流式更新: {text_content}\")\n\n            if hasattr(result, \"artifact\") and result.artifact:\n                for part in result.artifact.parts:\n                    if hasattr(part, 'type') and part.type == \"text\":\n                        text_content = getattr(part, 'text', '')\n                        print(f\"流式结果: {text_content}\")\n\n            if hasattr(result, \"final\") and result.final:\n                print(\"流式响应结束\")\n\n# 主函数\nasync def main():\n    print(\"=== AgentTaskManager 测试 ===\\n\")\n    \n    # 测试同步任务\n    await test_sync_task()\n    \n    # 测试流式任务\n    await test_streaming_task()\n\n# 运行测试\nif __name__ == \"__main__\":\n    asyncio.run(main())"
  },
  {
    "path": "examples/16_google_a2a/client_example.py",
    "content": "# examples/a2a/client_example.py\n\nimport os\nimport sys\nimport asyncio\nimport json\nimport logging # 添加 logging\nfrom typing import Dict, Any, List, Optional\nfrom uuid import uuid4 # 用于生成示例 Task ID\n\n# 添加项目根目录到路径\nsys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\n# 导入环境变量\nfrom dotenv import load_dotenv\nload_dotenv()\n\n# 导入A2A客户端和类型\nfrom core.a2a.client.client import A2AClient\n# 导入 Message 和 TextPart 以构建请求，导入响应类型以进行类型提示\nfrom core.a2a.types import (\n    Part, TextPart, Message, TaskState, # 添加 TaskState\n    SendTaskResponse, GetTaskResponse, SendTaskStreamingResponse, Task, # 添加 Task\n    JSONRPCError # 添加 JSONRPCError\n)\n\n# 配置日志\nlogging.basicConfig(level=logging.INFO) # 可以改为 DEBUG 获取更详细客户端日志\nlogger = logging.getLogger(__name__)\n\n# 示例: 使用A2A客户端连接到A2A服务器\nasync def run_a2a_client():\n    print(\"\\n=== 运行A2A客户端示例 ===\\n\")\n\n    # 创建A2A客户端\n    client = A2AClient(url=\"http://127.0.0.1:8000\") # 指向你的服务器地址\n\n    # 发送同步任务\n    await send_sync_task(client)\n\n    # 发送流式任务\n    await send_streaming_task(client)\n\n# --- 修正发送同步任务 ---\nasync def send_sync_task(client: A2AClient):\n    print(\"\\n=== 发送同步任务 ===\\n\")\n    query = \"请计算 123 + 456 的结果\"\n    task_id = \"client_sync_\" + uuid4().hex # 生成一个唯一的任务 ID\n    try:\n        # 1. 构建 Message 对象\n        message = Message(role=\"user\", parts=[TextPart(text=query)])\n\n        # 2. 构建 TaskSendParams 对应的 payload 字典 (添加 id)\n        payload_dict = {\n            \"id\": task_id, # --- 添加必需的 id 字段 ---\n            \"sessionId\": \"client_session_sync_1\",\n            \"message\": message.model_dump(),\n            \"acceptedOutputModes\": [\"text\"],\n            \"metadata\": {\"skill_name\": \"react_query\"}\n        }\n        logger.debug(f\"Sending sync task with payload: {payload_dict}\")\n\n        # 3. 
调用 send_task，传入 payload 字典\n        response: SendTaskResponse = await client.send_task(payload=payload_dict)\n        logger.debug(f\"Send task response: {response}\")\n\n        # 4. 处理响应\n        if response.error:\n            # 类型提示帮助访问属性\n            error: JSONRPCError = response.error\n            print(f\"发送任务时出错: Code={error.code}, Message={error.message}\")\n            return\n        # SendTaskResponse 的 result 是 Task 对象或 None\n        if not response.result:\n             print(f\"发送任务成功，但响应中未包含任务详情: {response}\")\n             # 我们可以继续使用我们发送的 task_id 来查询状态\n        elif response.result.id != task_id:\n            # 理论上服务器应该使用或确认客户端提供的 ID\n             logger.warning(f\"服务器返回的任务ID '{response.result.id}' 与客户端发送的ID '{task_id}' 不匹配。\")\n             task_id = response.result.id # 以服务器返回的为准（如果存在）\n\n\n        print(f\"任务已发送，ID: {task_id}\")\n\n        # --- 轮询等待任务完成 ---\n        print(\"等待任务完成...\")\n        task_result: Optional[Task] = None # 用于存储最终的任务对象\n        for attempt in range(10): # 最多尝试 10 次\n            await asyncio.sleep(2) # 等待 2 秒\n\n            # 5. 构建 get_task 的 payload\n            get_payload = {\"id\": task_id}\n            logger.debug(f\"Getting task with payload: {get_payload} (Attempt {attempt+1})\")\n\n            # 6. 
获取任务结果 (传入 payload 字典)\n            get_response: GetTaskResponse = await client.get_task(payload=get_payload)\n            logger.debug(f\"Get task response: {get_response}\")\n\n            if get_response.error:\n                 error: JSONRPCError = get_response.error\n                 print(f\"获取任务时出错: Code={error.code}, Message={error.message}\")\n                 return # 出错则停止轮询\n            if not get_response.result:\n                 print(f\"获取任务成功，但未收到任务详情: {get_response}\")\n                 continue # 继续轮询\n\n            task_result = get_response.result # 获取任务对象\n            print(f\"  当前任务状态: {task_result.status.state}\")\n            # 检查任务是否完成或失败\n            if task_result.status.state in [TaskState.COMPLETED, TaskState.FAILED, TaskState.CANCELED, TaskState.INPUT_REQUIRED]:\n                break\n        else:\n            print(\"任务在限定时间内未完成。\")\n            return\n\n        # 7. 处理最终任务结果 (使用属性访问)\n        if task_result.status.state == TaskState.COMPLETED and task_result.artifacts:\n            print(\"任务成功完成。结果:\")\n            for artifact in task_result.artifacts:\n                 if artifact.parts:\n                    for part in artifact.parts:\n                        if isinstance(part, TextPart):\n                             print(f\"  - {part.text}\")\n        elif task_result.status.state == TaskState.FAILED:\n             error_msg = \"未知错误\"\n             if task_result.status.message and task_result.status.message.parts:\n                 # 假设错误信息在第一个 TextPart\n                 if isinstance(task_result.status.message.parts[0], TextPart):\n                    error_msg = task_result.status.message.parts[0].text\n             print(f\"任务失败: {error_msg}\")\n        else:\n             print(f\"任务最终状态为: {task_result.status.state}\")\n\n    except Exception as e:\n        logger.error(f\"发送或处理同步任务时发生异常: {e}\", exc_info=True)\n        print(f\"发送同步任务失败: {e}\")\n\n# --- 修正发送流式任务 ---\nasync def send_streaming_task(client: 
A2AClient):\n    print(\"\\n=== 发送流式任务 ===\\n\")\n    query = \"请搜索关于人工智能的最新进展\"\n    task_id = \"client_stream_\" + uuid4().hex # 为流式任务生成 ID\n    try:\n        # 1. 构建 Message 对象\n        message = Message(role=\"user\", parts=[TextPart(text=query)])\n\n        # 2. 构建 TaskSendParams 对应的 payload 字典 (添加 id)\n        payload_dict = {\n            \"id\": task_id, # --- 添加必需的 id 字段 ---\n            \"sessionId\": \"client_session_stream_1\",\n            \"message\": message.model_dump(),\n            \"acceptedOutputModes\": [\"text\"],\n            \"metadata\": {\"skill_name\": \"react_query\"}\n        }\n        logger.debug(f\"Sending streaming task with payload: {payload_dict}\")\n        print(f\"任务已发送，ID: {task_id}\") # 流式任务 ID 在发送时就已知\n\n        # 3. 调用 send_task_streaming (不再使用 await)\n        # 它返回一个异步生成器\n        event_stream_generator = client.send_task_streaming(payload=payload_dict)\n\n        # 4. 使用 async for 处理流式事件\n        print(\"开始接收流式响应:\")\n        async for event_response in event_stream_generator: # 正确迭代异步生成器\n            logger.debug(f\"Received stream event: {event_response}\")\n\n            # 检查整个响应是否有错误\n            if event_response.error:\n                 error: JSONRPCError = event_response.error\n                 print(f\"流式传输中出错: Code={error.code}, Message={error.message}\")\n                 continue # 或 break\n\n            # 获取事件具体内容\n            event = event_response.result\n            if not event:\n                 logger.warning(\"Received stream response with empty result.\")\n                 continue\n\n            # 处理状态更新事件中的消息部分\n            if hasattr(event, \"status\") and event.status and event.status.message:\n                 if event.status.message.parts:\n                    for part in event.status.message.parts:\n                        if isinstance(part, TextPart):\n                            print(f\"  流式更新: {part.text}\")\n\n            # 处理制品更新事件\n            if hasattr(event, \"artifact\") and 
event.artifact:\n                 print(\"  收到 Artifact:\")\n                 if event.artifact.parts:\n                    for part in event.artifact.parts:\n                        if isinstance(part, TextPart):\n                            print(f\"    流式结果 (TextPart): {part.text}\")\n\n            # 检查流结束标志\n            if hasattr(event, \"final\") and event.final:\n                 print(\"流式响应结束标志收到。\")\n\n        print(\"流式任务处理完成。\")\n\n    except Exception as e:\n        logger.error(f\"发送或处理流式任务时发生异常: {e}\", exc_info=True)\n        print(f\"发送流式任务失败: {e}\")\n\n# 主函数\nif __name__ == \"__main__\":\n    # 使用 asyncio.run 运行顶层异步函数\n    asyncio.run(run_a2a_client())"
  },
  {
    "path": "examples/16_google_a2a/currency_agent_test.py",
    "content": "# examples/a2a/currency_agent_test.py\n\nimport os\nimport sys\nimport asyncio\nimport json\nimport logging\nfrom typing import Dict, Any, List, Optional\nfrom uuid import uuid4 # Import uuid\n\n# 添加项目根目录到路径\nsys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\n# 导入环境变量\nfrom dotenv import load_dotenv\nload_dotenv()\n\n# 导入A2A客户端和所需类型\nfrom core.a2a.client.client import A2AClient\n# 导入 Message, TextPart, TaskState, SendTaskResponse, GetTaskResponse, Task, JSONRPCError\nfrom core.a2a.types import (\n    Part, TextPart, Message, TaskState,\n    SendTaskResponse, GetTaskResponse, Task, JSONRPCError,\n    SendTaskStreamingResponse # 导入流式响应类型\n)\n\n# 配置日志\nlogging.basicConfig(level=logging.INFO) # 可以改为 DEBUG 获取详细日志\nlogger = logging.getLogger(__name__)\n\n# 测试场景1: 同步请求 - 货币转换查询 (修正)\nasync def test_sync_currency_conversion(client: A2AClient):\n    print(\"\\n=== 测试场景1: 同步请求 - Agent 调用 (计算器) ===\")\n    # query = \"How much is the exchange rate for 1 USD to INR?\" # 这个查询可能需要搜索工具\n    query = \"计算 58 * 34 的结果\" # 使用计算器工具确保能得到结果\n    task_id = \"test_sync_\" + uuid4().hex # 客户端生成任务ID\n    try:\n        # 1. 构建 Message 对象\n        message = Message(role=\"user\", parts=[TextPart(text=query)])\n\n        # 2. 构建 TaskSendParams 对应的 payload 字典\n        payload_dict = {\n            \"id\": task_id,\n            \"sessionId\": \"test_session_sync_1\",\n            \"message\": message.model_dump(), # 序列化为字典\n            \"acceptedOutputModes\": [\"text\"],\n            \"metadata\": {\"skill_name\": \"react_query\"} # 与 AgentCard 中的 skill name/id 对应\n        }\n        logger.debug(f\"Sending sync task with payload: {payload_dict}\")\n\n        # 3. 调用 send_task，传入 payload 字典\n        response: SendTaskResponse = await client.send_task(payload=payload_dict)\n        logger.debug(f\"Send task response: {response}\")\n\n        # 4. 
处理响应\n        if response.error:\n            error: JSONRPCError = response.error\n            print(f\"发送任务时出错: Code={error.code}, Message={error.message}\")\n            return None\n        if not response.result:\n             print(f\"发送任务成功，但未收到任务详情: {response}\")\n             # 继续使用我们发送的 task_id 查询\n        elif response.result.id != task_id:\n            logger.warning(f\"服务器返回的任务ID '{response.result.id}' 与客户端发送的ID '{task_id}' 不匹配。\")\n            task_id = response.result.id # 以服务器返回的为准\n\n        print(f\"任务已发送，ID: {task_id}\")\n\n        # 5. 轮询等待任务完成\n        print(\"等待任务完成...\")\n        task_result: Optional[Task] = None\n        for attempt in range(10):\n            await asyncio.sleep(2)\n            get_payload = {\"id\": task_id}\n            logger.debug(f\"Getting task with payload: {get_payload} (Attempt {attempt+1})\")\n            get_response: GetTaskResponse = await client.get_task(payload=get_payload)\n            logger.debug(f\"Get task response: {get_response}\")\n\n            if get_response.error:\n                 error: JSONRPCError = get_response.error\n                 print(f\"获取任务时出错: Code={error.code}, Message={error.message}\")\n                 return None\n            if not get_response.result:\n                 print(f\"获取任务成功，但未收到任务详情: {get_response}\")\n                 continue\n\n            task_result = get_response.result\n            print(f\"  当前任务状态: {task_result.status.state.value}\") # 使用 .value 获取枚举值\n            if task_result.status.state in [TaskState.COMPLETED, TaskState.FAILED, TaskState.CANCELED]:\n                break\n        else:\n            print(\"任务在限定时间内未完成。\")\n            return None\n\n        # 6. 
处理最终任务结果 (使用属性访问)\n        if task_result.status.state == TaskState.COMPLETED and task_result.artifacts:\n            print(\"任务成功完成。结果:\")\n            for artifact in task_result.artifacts:\n                 if artifact.parts:\n                    for part in artifact.parts:\n                        if isinstance(part, TextPart): # 检查类型\n                             print(f\"  - {part.text}\") # 访问属性\n        elif task_result.status.state == TaskState.FAILED:\n             error_msg = \"未知错误\"\n             if task_result.status.message and task_result.status.message.parts:\n                 if isinstance(task_result.status.message.parts[0], TextPart):\n                    error_msg = task_result.status.message.parts[0].text\n             print(f\"任务失败: {error_msg}\")\n        else:\n             print(f\"任务最终状态为: {task_result.status.state.value}\")\n\n        return task_result\n\n    except Exception as e:\n        logger.error(f\"处理同步任务时发生异常: {e}\", exc_info=True)\n        print(f\"发送同步任务失败: {e}\")\n        return None\n\n# 测试场景2: 多轮对话 - 不完整信息 (修正，但有局限性)\nasync def test_multi_turn_conversation(client: A2AClient):\n    print(\"\\n=== 测试场景2: 多轮对话 (Agent 可能不支持) ===\")\n    print(\"注意：当前服务器端的 Agent 实现可能不支持真正的多轮状态保持。\")\n\n    # --- 第一轮对话 ---\n    session_id = \"test_session_multi_\" + uuid4().hex # 为多轮对话创建唯一 session ID\n    query1 = \"100美元等于多少\" # 故意不指定目标货币\n    task_id_1 = \"test_multi_1_\" + uuid4().hex\n\n    try:\n        print(f\"\\n第一轮对话 (Session: {session_id}): 发送 '{query1}'\")\n        # 1a. 
构建 Message 和 Payload\n        message1 = Message(role=\"user\", parts=[TextPart(text=query1)])\n        payload_dict1 = {\n            \"id\": task_id_1,\n            \"sessionId\": session_id, # 传递 session ID\n            \"message\": message1.model_dump(),\n            \"acceptedOutputModes\": [\"text\"],\n            \"metadata\": {\"skill_name\": \"react_query\"}\n        }\n        logger.debug(f\"Sending multi-turn task 1 with payload: {payload_dict1}\")\n\n        # 1b. 发送任务\n        response1: SendTaskResponse = await client.send_task(payload=payload_dict1)\n        logger.debug(f\"Send task 1 response: {response1}\")\n\n        if response1.error:\n            error: JSONRPCError = response1.error\n            print(f\"发送第一轮任务时出错: Code={error.code}, Message={error.message}\")\n            return None\n        if response1.result:\n            task_id_1 = response1.result.id # Use server-confirmed ID\n\n        print(f\"第一轮任务已发送，ID: {task_id_1}\")\n\n        # 1c. 轮询获取结果\n        print(\"等待第一轮任务响应...\")\n        task1_result: Optional[Task] = None\n        for attempt in range(5): # 减少轮询次数\n            await asyncio.sleep(2)\n            get_payload1 = {\"id\": task_id_1}\n            get_response1: GetTaskResponse = await client.get_task(payload=get_payload1)\n            if get_response1.result:\n                task1_result = get_response1.result\n                print(f\"  当前任务状态: {task1_result.status.state.value}\")\n                if task1_result.status.state != TaskState.WORKING:\n                    break\n        else:\n            print(\"第一轮任务在限定时间内未完成或未开始。\")\n            return None\n\n        # 1d. 
检查 Agent 是否要求输入 (当前 Agent 可能直接完成或失败)\n        if task1_result.status.state == TaskState.INPUT_REQUIRED and task1_result.status.message:\n             print(\"Agent 要求更多信息:\")\n             for part in task1_result.status.message.parts:\n                 if isinstance(part, TextPart):\n                     print(f\"  Agent: {part.text}\")\n\n             # --- 第二轮对话 ---\n             query2 = \"日元\" # 提供目标货币\n             task_id_2 = \"test_multi_2_\" + uuid4().hex\n             print(f\"\\n第二轮对话 (Session: {session_id}): 发送 '{query2}'\")\n\n             # 2a. 构建 Message 和 Payload\n             message2 = Message(role=\"user\", parts=[TextPart(text=query2)])\n             payload_dict2 = {\n                 \"id\": task_id_2,\n                 \"sessionId\": session_id, # 必须使用相同的 session ID\n                 \"message\": message2.model_dump(),\n                 \"acceptedOutputModes\": [\"text\"],\n                 \"metadata\": {\"skill_name\": \"react_query\"}\n             }\n             logger.debug(f\"Sending multi-turn task 2 with payload: {payload_dict2}\")\n\n             # 2b. 发送任务\n             response2: SendTaskResponse = await client.send_task(payload=payload_dict2)\n             logger.debug(f\"Send task 2 response: {response2}\")\n\n             if response2.error:\n                 error: JSONRPCError = response2.error\n                 print(f\"发送第二轮任务时出错: Code={error.code}, Message={error.message}\")\n                 return None\n             if response2.result:\n                 task_id_2 = response2.result.id\n\n             print(f\"第二轮任务已发送，ID: {task_id_2}\")\n\n             # 2c. 
轮询获取最终结果\n             print(\"等待第二轮任务完成...\")\n             task2_result: Optional[Task] = None\n             for attempt in range(10):\n                 await asyncio.sleep(2)\n                 get_payload2 = {\"id\": task_id_2}\n                 get_response2: GetTaskResponse = await client.get_task(payload=get_payload2)\n                 if get_response2.result:\n                     task2_result = get_response2.result\n                     print(f\"  当前任务状态: {task2_result.status.state.value}\")\n                     if task2_result.status.state != TaskState.WORKING:\n                         break\n             else:\n                 print(\"第二轮任务在限定时间内未完成。\")\n                 return None\n\n             # 2d. 处理最终结果\n             if task2_result.status.state == TaskState.COMPLETED and task2_result.artifacts:\n                 print(\"多轮任务成功完成。最终结果:\")\n                 for artifact in task2_result.artifacts:\n                      if artifact.parts:\n                         for part in artifact.parts:\n                             if isinstance(part, TextPart):\n                                  print(f\"  - {part.text}\")\n             else:\n                  print(f\"第二轮任务最终状态为: {task2_result.status.state.value}\")\n\n             return task2_result\n\n        elif task1_result.status.state == TaskState.COMPLETED:\n            print(\"Agent 在第一轮就已完成任务 (可能直接使用了默认货币或无法处理):\")\n            if task1_result.artifacts:\n                for artifact in task1_result.artifacts:\n                     if artifact.parts:\n                        for part in artifact.parts:\n                            if isinstance(part, TextPart):\n                                print(f\"  - {part.text}\")\n            return task1_result\n        else:\n            print(f\"第一轮任务未要求输入，最终状态为: {task1_result.status.state.value}\")\n            return task1_result\n\n    except Exception as e:\n        logger.error(f\"处理多轮对话时发生异常: {e}\", exc_info=True)\n        print(f\"多轮对话测试失败: 
{e}\")\n        return None\n\n\n# 测试场景3: 流式响应 (修正)\nasync def test_streaming_response(client: A2AClient):\n    print(\"\\n=== 测试场景3: 流式响应 (Agent 端为模拟) ===\")\n\n    # query = \"What are the current exchange rates between USD, EUR, and JPY?\"\n    query = \"用中文写一首关于春天的短诗\" # 更适合流式输出的查询\n    task_id = \"test_stream_\" + uuid4().hex\n    try:\n        # 1. 构建 Message 和 Payload\n        message = Message(role=\"user\", parts=[TextPart(text=query)])\n        payload_dict = {\n            \"id\": task_id,\n            \"sessionId\": \"test_session_stream_1\",\n            \"message\": message.model_dump(),\n            \"acceptedOutputModes\": [\"text\"],\n            \"metadata\": {\"skill_name\": \"react_query\"}\n        }\n        logger.debug(f\"Sending streaming task with payload: {payload_dict}\")\n        print(f\"任务已发送，ID: {task_id}\")\n\n        # 2. 调用 send_task_streaming (不使用 await) 并使用 async for 迭代\n        event_stream_generator = client.send_task_streaming(payload=payload_dict)\n\n        print(\"开始接收流式响应:\")\n        async for event_response in event_stream_generator:\n            logger.debug(f\"Received stream event: {event_response}\")\n\n            if event_response.error:\n                 error: JSONRPCError = event_response.error\n                 print(f\"流式传输中出错: Code={error.code}, Message={error.message}\")\n                 continue\n\n            event = event_response.result\n            if not event:\n                 logger.warning(\"Received stream response with empty result.\")\n                 continue\n\n            # 处理状态更新事件\n            if hasattr(event, \"status\") and event.status and event.status.message:\n                 if event.status.message.parts:\n                    for part in event.status.message.parts:\n                        if isinstance(part, TextPart):\n                            print(f\"  流式更新: {part.text}\")\n\n            # 处理 Artifact 事件\n            if hasattr(event, \"artifact\") and event.artifact:\n    
             # print(\"  收到 Artifact:\") # 打印多次可能比较干扰，注释掉\n                 if event.artifact.parts:\n                    for part in event.artifact.parts:\n                        if isinstance(part, TextPart):\n                            print(f\"  流式结果: {part.text}\")\n\n            # 检查结束标志\n            if hasattr(event, \"final\") and event.final:\n                 print(\"流式响应结束标志收到。\")\n\n        print(\"流式任务处理完成。\")\n        return True\n\n    except Exception as e:\n        logger.error(f\"处理流式任务时发生异常: {e}\", exc_info=True)\n        print(f\"发送流式任务失败: {e}\")\n        return False\n\n# 主函数 (修正)\nasync def main():\n    print(\"=== LangGraph Agent A2A协议测试 ===\\n\")\n    # print(\"此测试脚本将测试LangGraph Currency Agent通过A2A协议的三种交互场景:\")\n    # print(\"1. 同步请求 - Agent 调用 (计算器)\")\n    # print(\"2. 多轮对话 - 处理不完整信息 (Agent 可能不支持)\")\n    # print(\"3. 流式响应 - 实时状态更新 (Agent 端为模拟)\")\n\n    # 创建A2A客户端\n    client = A2AClient(url=\"http://127.0.0.1:8000\")\n\n    # --- 移除了 get_agent_info 调用 ---\n    # (如果需要验证服务器是否在线，可以尝试发送一个简单的任务)\n    print(\"尝试连接到服务器并运行测试...\")\n    print(\"-\" * 30)\n\n    # 执行测试场景\n    await test_sync_currency_conversion(client)\n    print(\"-\" * 30)\n    # 注意：多轮对话测试依赖于 Agent 对话状态的处理能力\n    await test_multi_turn_conversation(client)\n    print(\"-\" * 30)\n    await test_streaming_response(client)\n    print(\"-\" * 30)\n    print(\"所有测试场景执行完毕。\")\n\n# 运行测试\nif __name__ == \"__main__\":\n    asyncio.run(main())"
  },
  {
    "path": "examples/16_google_a2a/currency_agent_test_README.md",
    "content": "# LangGraph Agent A2A协议交互测试\n\n## 概述\n\n本测试脚本 (`examples/a2a/currency_agent_test.py`) 旨在通过具体的交互场景，测试和演示如何使用 A2A 客户端与先前通过 `langgraph_integration.py` 启动的 LangGraph Agent 服务进行通信。它覆盖了同步请求（涉及工具调用）、尝试进行多轮对话以及接收（模拟的）流式响应等场景。\n\n## 测试场景说明\n\n此脚本包含以下三个主要测试场景：\n\n1.  **场景 1: 同步请求 - Agent 调用 (涉及工具)**\n    * **目的:** 测试发送一个需要 Agent 调用内部工具（如此示例中的计算器）才能完成的请求。\n    * **流程:** 客户端发送一个计算任务 -> 服务器端 Agent (LangGraph ReAct) 解析任务 -> 调用 `calculator` 工具 -> 获取结果 -> LLM 整合答案 -> 服务器返回最终结果 -> 客户端轮询获取并显示结果。\n    * **预期:** 客户端能成功获取到 Agent 计算后的准确结果。\n\n2.  **场景 2: 多轮对话尝试 (Agent 当前实现有限)**\n    * **目的:** 测试客户端在需要多步交互时的请求发送方式（使用 `sessionId`），并观察当前 Agent 的响应行为。\n    * **流程:**\n        * 第一轮：客户端发送一个信息不明确的查询（例如 \"100美元等于多少\"，缺少目标货币），并附带 `sessionId`。\n        * 客户端轮询获取结果。\n        * **注意:** *根据我们当前的 Agent 实现 (`CurrencyAgent` 使用 `create_react_agent` 且 `invoke` 未特殊处理对话历史)，Agent 很可能不会返回 `input-required` 状态来请求更多信息，而是会直接尝试处理或告知无法处理，然后将任务标记为 `completed` 或 `failed`。*\n        * (理想流程中，如果 Agent 返回 `input-required`，客户端会发送第二轮请求补充信息，使用相同的 `sessionId`。)\n    * **预期:** 客户端能够正确发送带 `sessionId` 的请求，并能处理 Agent 的最终响应（即使它没有按预期进入多轮澄清状态）。此测试主要验证客户端的多轮请求发送能力和对 Agent 当前行为的观察。\n\n3.  **场景 3: 流式响应 (Agent 端模拟)**\n    * **目的:** 测试客户端接收 A2A 流式响应 (Server-Sent Events) 的能力。\n    * **流程:** 客户端发送一个适合流式输出的查询 -> 服务器端的 `AgentTaskManager` 调用 `CurrencyAgent.stream` 方法 -> **注意:** *`CurrencyAgent.stream` 当前是一个模拟实现，它会发送预设的文本块，而不是真正调用 LangGraph 的流式接口。* -> 客户端接收并打印这些模拟的流式事件。\n    * **预期:** 客户端能够成功连接 SSE 端点，并接收、打印服务器发送的（模拟）流式事件。\n\n## 运行测试\n\n### 前提条件\n\n* Python (推荐 3.10 或更高版本)\n* 已根据项目 `requirements.txt` 安装所有必需的 Python 依赖库。\n* 在项目根目录下的 `.env` 文件中配置了有效的 `OPENAI_API_KEY` (或其他所需的 LLM API 密钥)。\n\n### 步骤\n\n1.  **启动 A2A 服务器:**\n    * 确保你位于项目的根目录。\n    * 在终端中运行 (如果尚未运行):\n        ```bash\n        python -m examples.a2a.langgraph_integration\n        ```\n    * 服务器应成功启动并监听在 `http://127.0.0.1:8000`。\n\n2.  
**运行本测试脚本:**\n    * 打开 **另一个** 终端。\n    * 确保你位于项目的根目录并激活了相同的虚拟环境。\n    * 运行测试脚本:\n        ```bash\n        python -m examples.a2a.currency_agent_test\n        ```\n\n## 测试输出示例 (基于实际运行结果)\n\n以下是运行此测试脚本时预期的输出格式，反映了当前 Agent 的实际行为：\n\n### 同步请求示例 (计算器调用)\n\n```\n=== 测试场景1: 同步请求 - Agent 调用 (计算器) ===\n\n任务已发送，ID: test_sync_...\n等待任务完成...\n  当前任务状态: completed\n任务成功完成。结果:\n  - 58 * 34 的结果是 1972。\n```\n\n### 多轮对话示例 (Agent 第一轮即完成)\n\n```\n=== 测试场景2: 多轮对话 (Agent 可能不支持) ===\n注意：当前服务器端的 Agent 实现可能不支持真正的多轮状态保持。\n\n第一轮对话 (Session: test_session_multi_...): 发送 '100美元等于多少'\n第一轮任务已发送，ID: test_multi_1_...\n等待第一轮任务响应...\n  当前任务状态: completed\nAgent 在第一轮就已完成任务 (可能直接使用了默认货币或无法处理):\n  - 目前无法提供100美元等于多少人民币的具体信息。你可以查阅最新的汇率数据或使用汇率转换工具来获取准确的结果。\n```\n*(注意：Agent 的具体回复可能因 LLM 的不同调用而略有差异)*\n\n### 流式响应示例 (Agent 端模拟)\n\n```\n=== 测试场景3: 流式响应 (Agent 端为模拟) ===\n\n任务已发送，ID: test_stream_...\n开始接收流式响应:\n  流式更新: 正在处理您的请求...\n  流式结果: 关于 '用中文写一首关于春天的短诗' 的信息如下：这是一个模拟的回应，因为真实流未实现。\n流式响应结束标志收到。\n流式任务处理完成。\n```\n\n## 注意事项\n\n* 确保在运行测试前已正确设置 `.env` 文件中的环境变量。\n* 测试脚本默认连接 `http://127.0.0.1:8000`。如果服务器地址或端口不同，请修改脚本中的 `A2AClient` 初始化 URL。\n* 如果连接失败或测试出错，请优先检查 A2A 服务器是否已正确启动且正在运行，并查看服务器端的日志输出。\n\n---\n\n## 两个客户端示例的命名与区别\n\n你项目中有两个客户端示例文件，我们可以为它们命名并说明其侧重点：\n\n1.  **`examples/a2a/client_example.py` -> \"基础客户端示例 (Basic Client Example)\"**\n    * **目的:** 这个脚本更侧重于**基础演示**，展示了调用 `A2AClient` 库中几个核心方法（`send_task`, `get_task`, `send_task_streaming`）的最基本用法。\n    * **特点:** 代码相对简洁，逻辑直接，主要目的是让使用者快速了解如何发起不同类型的 A2A 请求并处理最简单的成功响应。它包含了一个简单的轮询逻辑。\n\n2.  
**`examples/a2a/currency_agent_test.py` -> \"场景化测试客户端 (Scenario-based Test Client)\"**\n    * **目的:** 这个脚本的定位是**功能测试和场景演示**。它针对我们集成的 LangGraph Agent 设计了几个具体的交互场景（同步工具调用、尝试多轮对话、流式接收），以验证端到端的流程和观察 Agent 在特定情况下的行为。\n    * **特点:** 结构更清晰地划分为不同的测试函数，包含了更具体的业务逻辑查询（尽管有些是模拟的或揭示了 Agent 的局限性），并且其输出更侧重于展示每个测试场景的结果。它也使用了轮询，并尝试了多轮交互的状态传递（通过 `sessionId`）。\n\n**主要区别总结:**\n\n| 特性         | `client_example.py` (基础示例)                 | `currency_agent_test.py` (场景化测试)                 |\n| :----------- | :--------------------------------------------- | :---------------------------------------------------- |\n| **目标** | 演示 Client API 基本用法                       | 测试/演示特定交互场景                                 |\n| **结构** | 简单的顺序调用                                 | 按测试场景划分函数                                    |\n| **复杂度** | 较低，核心 API 调用                            | 略高，包含场景逻辑（如尝试多轮）                      |\n| **查询内容** | 通用示例（计算、搜索）                         | 针对场景设计（计算、不完整查询、适合流式的查询）        |\n| **侧重点** | 如何调用 API                                   | Agent 在特定场景下的行为和端到端流程验证              |\n"
  },
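README 中场景 3 描述的流式接收方式，关键点在于 `send_task_streaming` 返回的是异步生成器：调用时不 `await`，而是直接用 `async for` 迭代，并在收到结束标志后退出。下面用一个与 A2A 无关的最小示意演示这一模式（`fake_event_stream`、`consume_stream` 为示例假设，事件字段也做了简化）：

```python
import asyncio
from typing import AsyncIterator, Dict, List


async def fake_event_stream() -> AsyncIterator[Dict[str, object]]:
    """模拟 send_task_streaming() 返回的事件流 (服务器端同样是模拟实现)。"""
    yield {"text": "正在处理您的请求...", "final": False}
    await asyncio.sleep(0)
    yield {"text": "这是模拟的最终结果。", "final": True}


async def consume_stream(stream: AsyncIterator[Dict[str, object]]) -> List[str]:
    """注意: 生成器本身不 await, 直接用 async for 迭代; 收到 final 标志即停止。"""
    received: List[str] = []
    async for event in stream:
        received.append(str(event["text"]))
        if event.get("final"):
            break
    return received
```

这也解释了测试脚本里 `event_stream_generator = client.send_task_streaming(payload=...)` 一行不带 `await` 的原因。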
  {
    "path": "examples/16_google_a2a/langgraph_integration.py",
    "content": "# examples/a2a/langgraph_integration.py\n\nimport os\nimport sys\nimport asyncio # asyncio 仍然可能被依赖库使用，保留导入\nimport logging\n# 确保导入了 List, Tuple, Optional, TypedDict\nfrom typing import Dict, Any, List, Optional, AsyncIterable, Union, TypedDict, Tuple\n\n# 添加项目根目录到路径\nsys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\n# 导入环境变量\nfrom dotenv import load_dotenv\nload_dotenv()\n\n# 导入A2A相关组件\n# 从你的项目结构导入\nfrom core.a2a.types import (\n    AgentCard, AgentCapabilities, AgentSkill,\n    Task, TaskState, TaskStatus, Artifact, Message, TextPart, # TextPart 可能不再直接使用\n    JSONRPCResponse, InvalidParamsError, InternalError,\n    SendTaskRequest, SendTaskResponse, TaskSendParams\n)\nfrom core.a2a.server.server import A2AServer\nfrom core.a2a.agent_task_manager import AgentTaskManager\n\n# 导入LangChain和LLM相关组件\nfrom langchain_openai import ChatOpenAI\nfrom langchain_core.tools import tool\n# StateGraph 和 END 不再直接使用，但保留导入\nfrom langgraph.graph import END, StateGraph\nfrom langgraph.prebuilt import create_react_agent\n\n# 配置日志\nlogging.basicConfig(level=logging.INFO) # 可以改为 DEBUG 获取更详细日志\nlogger = logging.getLogger(__name__)\n\n# --- 定义工具 (保持不变) ---\n@tool\ndef search(query: str) -> str:\n    \"\"\"搜索互联网获取信息\"\"\"\n    # 实际应用中应调用真实搜索引擎 API\n    logger.info(f\"Tool 'search' called with query: {query}\")\n    return f\"这是关于 '{query}' 的模拟搜索结果。\"\n\n@tool\ndef calculator(expression: str) -> str:\n    \"\"\"计算数学表达式\"\"\"\n    logger.info(f\"Tool 'calculator' called with expression: {expression}\")\n    try:\n        # 注意：生产环境中使用 eval 非常危险，这里仅作示例\n        # 限制 eval 的能力，只允许简单的数学运算\n        allowed_names = {\n            k: v for k, v in __import__(\"math\").__dict__.items() if not k.startswith(\"_\")\n        }\n        allowed_names.update({\"abs\": abs, \"int\": int, \"float\": float}) # 添加常用函数\n        code = compile(expression, \"<string>\", \"eval\")\n\n        for name in code.co_names:\n             if name not in 
allowed_names:\n                  # 显式传入 name 关键字, 使下方异常处理器能读取 e.name (手动 raise 的 NameError 否则 name 为 None; 需 Python 3.10+)\n                  raise NameError(f\"Use of name '{name}' not allowed\", name=name)\n\n        result = eval(code, {\"__builtins__\": {}}, allowed_names)\n        return f\"计算结果: {result}\"\n    except NameError as e:\n         logger.error(f\"Calculation error (NameError): {e} in expression '{expression}'\")\n         return f\"计算错误: 不允许的名称 '{e.name}'\"\n    except Exception as e:\n        logger.error(f\"Calculation error: {e} in expression '{expression}'\")\n        return f\"计算错误: {e}\"\n\n# --- 修正 AgentState 定义 ---\nclass AgentState(TypedDict):\n    # 使用 'messages' 字段来传递对话内容\n    # 格式为 (角色, 内容) 的元组列表\n    messages: List[Tuple[str, str]]\n    # session_id 可以保留，如果Agent内部逻辑需要的话 (create_react_agent 通常不需要)\n    # session_id: Optional[str]\n    # 注意: ReAct Agent 运行时可能会在状态中添加其他键 (例如 intermediate_steps)\n\n# --- 修正 CurrencyAgent 类 ---\nclass CurrencyAgent:\n    \"\"\"一个简单的货币转换和信息查询Agent (已修正)\"\"\"\n\n    # 支持的内容类型 (保持不变)\n    SUPPORTED_CONTENT_TYPES = [\"text\"]\n\n    def __init__(self, llm):\n        \"\"\"初始化Agent，直接使用 create_react_agent 创建的 Runnable\"\"\"\n        self.tools = [search, calculator]\n        # create_react_agent 返回一个可直接调用的 Runnable (图)\n        self.agent_runnable = create_react_agent(llm, self.tools)\n        logger.info(\"CurrencyAgent initialized with ReAct runnable.\")\n\n    def invoke(self, query: str, session_id: Optional[str] = None) -> str:\n        \"\"\"同步调用Agent Runnable\"\"\"\n        # (session_id 在此实现中未传递给 agent_runnable，如果需要可以添加)\n        logger.debug(f\"[CurrencyAgent.invoke] Received query: '{query}', session_id: '{session_id}'\")\n        if not query:\n             logger.error(\"[CurrencyAgent.invoke] Query is empty!\")\n             return \"错误：输入查询为空。\"\n\n        # 准备 ReAct Agent Runnable 所需的输入\n        invoke_input = {\"messages\": [(\"user\", query)]}\n\n        logger.debug(f\"[CurrencyAgent.invoke] Invoking agent runnable with input: {invoke_input}\")\n        try:\n            # 直接调用 create_react_agent 返回的 runnable\n            result = self.agent_runnable.invoke(invoke_input)\n            logger.debug(f\"[CurrencyAgent.invoke] Agent runnable result: {result}\")\n\n            # 提取最终响应\n            final_output = \"错误：未能从Agent获取有效响应。\"\n            if isinstance(result, dict) and isinstance(result.get(\"messages\"), list) and result[\"messages\"]:\n                last_message = result[\"messages\"][-1]\n                if isinstance(last_message, tuple) and len(last_message) == 2:\n                    final_output = last_message[1]\n                elif hasattr(last_message, 'content'):\n                     final_output = last_message.content\n                else:\n                     logger.warning(f\"[CurrencyAgent.invoke] Last message format unexpected: {last_message!r}\")\n            else:\n                 logger.warning(f\"[CurrencyAgent.invoke] Could not find 'messages' list in result: {result}\")\n\n            logger.debug(f\"[CurrencyAgent.invoke] Returning output: {final_output}\")\n            return str(final_output)\n        except Exception as e:\n             logger.error(f\"[CurrencyAgent.invoke] Exception during agent invocation: {e}\", exc_info=True)\n             raise\n\n    async def ainvoke(self, inputs: dict) -> dict:\n        \"\"\"异步调用Agent Runnable (输入格式也需调整)\"\"\"\n        # TODO: 确认这里的输入格式是否也需要转换为 {\"messages\": [...]}\n        logger.debug(f\"[CurrencyAgent.ainvoke] Invoking agent runnable async with input: {inputs}\")\n        # 假设输入字典已经包含了正确的 \"messages\" 键\n        return await self.agent_runnable.ainvoke(inputs)\n\n    async def stream(self, query: str, session_id: Optional[str] = None):\n        \"\"\"流式调用Agent (当前为模拟)\"\"\"\n        # TODO: 实现真实的流式调用\n        logger.warning(\"[CurrencyAgent.stream] Stream method is currently mocked.\")\n        # --- 模拟实现 ---\n        yield { \"content\": \"正在处理您的请求...\", \"is_task_complete\": False, \"require_user_input\": False }\n        await asyncio.sleep(0.5)\n        final_simulated_answer = f\"关于 '{query}' 
的信息如下：这是一个模拟的回应，因为真实流未实现。\"\n        yield { \"content\": final_simulated_answer, \"is_task_complete\": True, \"require_user_input\": False }\n        # --- 模拟结束 ---\n\n\n# --- A2A 服务器设置 (修正函数定义和 AgentCard) ---\n# 将函数改为同步定义 (def 而不是 async def)\ndef setup_a2a_server():\n    \"\"\"设置并返回 A2A 服务器实例 (同步函数)\"\"\"\n    print(\"\\n=== 配置 LangGraph A2A 服务器 ===\\n\")\n\n    # 创建LLM\n    try:\n        llm = ChatOpenAI(model=\"gpt-4o-mini\")\n        logger.info(\"Using OpenAI LLM: gpt-4o-mini\")\n    except Exception as e:\n        print(f\"警告: 无法创建OpenAI LLM ({e})，将使用模拟模式\")\n        from langchain.llms.fake import FakeListLLM\n        llm = FakeListLLM(responses=[\"这是一个模拟的LLM响应\"])\n        logger.info(\"Using FakeListLLM (simulation mode)\")\n\n    # 创建 Agent 实例\n    agent = CurrencyAgent(llm)\n\n    # 创建 Agent 卡片 (添加缺失字段)\n    agent_card = AgentCard(\n        name=\"LangGraph ReAct Agent\",\n        description=\"一个使用LangGraph ReAct处理查询并调用工具的Agent\",\n        url=\"http://127.0.0.1:8000/agent\", # Agent 的访问 URL (示例)\n        version=\"0.1.0\",                  # Agent 的版本号\n        capabilities=AgentCapabilities(   # 设置 Agent 的能力\n            streaming=False,              # 当前 stream 是模拟的，设为 False\n            pushNotifications=False       # 假设不支持推送\n        ),\n        skills=[                          # skills 列表在 AgentCard 顶层\n            AgentSkill(\n                id=\"react_query_skill\",   # 技能的唯一 ID\n                name=\"react_query\",\n                description=\"处理自然语言查询，可使用搜索和计算器工具\",\n                inputModes=[\"text\"],\n                outputModes=[\"text\"]\n            )\n        ]\n        # 其他可选字段可以按需添加\n    )\n\n    # 创建 AgentTaskManager\n    task_manager = AgentTaskManager(agent)\n\n    # 创建A2A服务器实例 (不在此处设置 host/port)\n    server = A2AServer(agent_card=agent_card, task_manager=task_manager)\n    print(\"A2A服务器实例已创建。\")\n    return server # 返回实例\n\n\n# --- 主函数入口 (修正启动逻辑) ---\nif __name__ == \"__main__\":\n    try:\n        # 调用同步函数来设置服务器\n        
server_instance = setup_a2a_server()\n\n        # 定义 HOST 和 PORT\n        HOST = \"127.0.0.1\"\n        PORT = 8000\n        print(f\"准备启动A2A服务器，监听地址 http://{HOST}:{PORT}\")\n\n        # 在调用 start 前设置 host 和 port\n        # (或者修改 A2AServer 的 __init__ 让其接受 host/port)\n        server_instance.host = HOST\n        server_instance.port = PORT\n\n        # 启动服务器 (调用同步的 start 方法)\n        server_instance.start()\n\n    except KeyboardInterrupt:\n        print(\"\\n服务器已手动停止。\")\n    except Exception as e:\n        # 捕获设置或启动过程中的其他异常\n        logger.error(f\"启动服务器时发生未处理的异常: {e}\", exc_info=True)"
  },
  {
    "path": "examples/TODO_computer_tool_demo.py",
    "content": "from typing import Annotated\nfrom langchain_core.messages import HumanMessage\nfrom langchain_openai import ChatOpenAI\nfrom langgraph.graph import END, START, StateGraph\nfrom langgraph.prebuilt import InjectedState, ToolNode\nfrom langgraph.types import Command\n\n# Import our custom computer tool\n# TODO: MarinaBox - Import our custom computer tool\nfrom marinabox import mb_start_computer, mb_stop_computer, mb_use_computer_tool\n\n# Set up model with tools\nmodel = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\ntools = [mb_use_computer_tool()]\nmodel_with_tools = model.bind_tools(tools)\n\n# Define workflow nodes\ndef should_continue(state: Annotated[dict, InjectedState()]):\n    messages = state[\"messages\"]\n    if len(messages) > 0:\n        last_message = messages[-1]\n        if last_message.tool_calls:\n            return Command(goto=\"tool_node\")\n    return Command(goto=\"stop_computer\")\n\ndef call_model(state: Annotated[dict, InjectedState()]):\n    input_message = input(\"Enter your message: \")\n    if input_message != \"stop_computer\":\n        messages = [HumanMessage(content=input_message)]\n        response = model_with_tools.invoke(messages)\n        return {\"messages\": [response], \"session_id\": state.get(\"session_id\")}\n    return {\"messages\": [], \"session_id\": state.get(\"session_id\")}\n\n# Set up workflow\nworkflow = StateGraph(dict)\nworkflow.add_node(\"start_computer\", mb_start_computer)\nworkflow.add_node(\"agent\", call_model)\nworkflow.add_node(\"tool_node\", ToolNode(tools=tools))\nworkflow.add_node(\"stop_computer\", mb_stop_computer)\nworkflow.add_node(\"should_continue\", should_continue)\n\n# Define workflow edges\nworkflow.add_edge(START, \"start_computer\")\nworkflow.add_edge(\"start_computer\", \"agent\")\nworkflow.add_edge(\"tool_node\", \"agent\")\nworkflow.add_edge(\"agent\", \"should_continue\")\nworkflow.add_edge(\"stop_computer\", END)\n\n# Compile and run workflow\napp = workflow.compile()\nif __name__ == \"__main__\":\n    app.invoke({\"messages\": []})\n"
  },
  {
    "path": "examples/__init__.py",
    "content": ""
  },
  {
    "path": "examples/state_based_supervisor_examples/01_simple.py",
    "content": "import asyncio\nimport json\nimport os\nimport re\nimport time \nfrom datetime import datetime \nfrom typing import Literal, List, Dict, Any, Optional, cast\n\n# --- LangChain / LangGraph ---\ntry:\n    # 使用 langchain_openai (或你选择的模型提供商)\n    from langchain_openai import ChatOpenAI \nexcept ImportError:\n     ChatOpenAI = None \n     print(\"Warning: langchain_openai not installed.\")\n\n# 核心消息类型\nfrom langchain_core.messages import HumanMessage, AIMessage, BaseMessage, ToolMessage \n# LangChain 工具相关\nfrom langchain_core.tools import tool, BaseTool \n\n# --- OpenAI 错误处理 ---\ntry:\n    from openai import RateLimitError\nexcept ImportError:\n    class RateLimitError(Exception): pass\n\n# --- 内部模块导入 (请确保路径正确) ---\ntry:\n    # 假设这些是你当前的路径\n    from core.agents.sb_supervisor_agent import SupervisorAgent \n    from core.agents.supervisor.state_schema import PlanningAgentState\n    from core.agents.base.react_agent import ReactAgent # 导入 ReactAgent\n    # 导入 StreamUpdate (如果需要在最终状态中检查它，但这里主要关注消息)\n    # from core.agents.supervisor.schemas import StreamUpdate \n\nexcept ImportError as e:\n    print(f\"Error importing agent components: {e}\")\n    print(\"Please ensure paths like 'core.agents.sb_supervisor_agent' are correct.\")\n\nimport traceback\n\n# --- 定义 Web Search 工具 ---\n# 使用 @tool 装饰器明确这是一个工具\n@tool\ndef web_search(query: str) -> str:\n    \"\"\"Search the web for current information about a given query. 
Use this for recent events, data, or facts.\"\"\"\n    print(f\"--- TOOL CALLED: web_search(query='{query}') ---\") # 添加日志确认工具被调用\n    # Mocked data - 实际使用时会调用 Tavily 或其他搜索引擎\n    if \"apple\" in query.lower() and \"headcount\" in query.lower() and \"2024\" in query:\n        return (\n            \"According to recent (mocked) reports for 2024, Apple's headcount is approximately 164,000 employees globally.\"\n        )\n    elif \"joke\" in query.lower():\n         # 这个工具不适合讲笑话\n         return \"I am a web search tool, I cannot tell jokes.\"\n    else:\n        return f\"Mock search results for query: '{query}'. Found relevant information on various websites.\"\n\n# --- 主执行逻辑 ---\nasync def main():\n     # --- 初始化 LLM (确保 API Key 在环境中) ---\n     try:\n        model_name = os.getenv(\"LLM_MODEL_NAME\", \"gpt-4o\") \n        print(f\"Using LLM: {model_name}\")\n        if not ChatOpenAI: raise ImportError(\"ChatOpenAI not available.\")\n        # 使用温度稍高的模型可能有助于 ReAct 思考和调用工具\n        model = ChatOpenAI(model=model_name, temperature=0.2) \n     except Exception as e:\n         print(f\"Failed to initialize ChatOpenAI model: {e}\")\n         return\n\n     # --- 实例化 Agents ---\n     try:\n        # research_agent 现在有了一个明确定义的 web_search 工具\n        research_agent = ReactAgent(\n             name=\"research_expert\", \n             tools=[web_search], # <--- 传入工具列表\n             model=model,\n             # 添加明确的 Prompt 引导工具使用\n             prompt=(\n                 \"You are a research expert. Use available tools to find information. \"\n                 \"You have access to 'web_search'. 
Use it for questions about current data, facts, or events.\"\n             ),\n             max_context_tokens=8000 \n         ) \n         \n        all_agents = [research_agent]\n\n        # --- 实例化 Supervisor ---\n        supervisor = SupervisorAgent(\n             agents=all_agents,\n             model=model, # Supervisor 使用相同的模型\n             state_schema=PlanningAgentState, \n             include_agent_name=\"inline\" \n             # checkpointer=... \n         )\n     except Exception as e:\n         print(f\"Failed to initialize agents or supervisor: {e}\")\n         traceback.print_exc()\n         return\n\n     # --- 准备初始请求 ---\n     # 用户请求包含两个意图：讲笑话 + 查信息\n     user_request = (\n                \"Hi! I'd like to start with a short joke to lighten the mood, \"\n                \"then please check Apple's headcount in 2024. Summarize both.\"\n            )\n     print(f\"Initial Request: '{user_request}'\")\n\n     # --- 准备初始状态 ---\n     initial_graph_state: PlanningAgentState = {\n         \"messages\": [HumanMessage(content=user_request)], # 使用 HumanMessage\n         \"plan\": None,\n         \"error\": None\n     }\n\n     # --- 执行 Supervisor (使用 ainvoke) ---\n     final_state: Optional[Dict[str, Any]] = None\n     error_occurred: Optional[Exception] = None\n     config = {\"recursion_limit\": 100} \n\n     try:\n         print(\"\\n--- Invoking Supervisor Agent (ainvoke) ---\")\n         final_state = await supervisor.ainvoke(initial_graph_state, config=config)\n         print(\"\\n--- Supervisor Agent Invocation Complete ---\")\n\n     # --- 错误处理 ---\n     except RateLimitError as e: error_occurred = e; print(f\"\\n!!! OpenAI Quota Error: {e}\")\n     except Exception as e: error_occurred = e; print(f\"\\n!!! 
Error during graph execution: {e}\"); traceback.print_exc()\n\n     # --- 处理并打印最终结果 ---\n     if error_occurred: print(\"\\n--- Graph Execution INTERRUPTED or FAILED ---\")\n     else: print(\"\\n--- Graph Execution Finished ---\")\n\n     if not final_state:\n         print(\"Error: No final state available.\")\n         return\n\n     print(\"\\n--- FINAL STATE ---\")\n     # 打印错误（如果在状态中记录了）\n     if final_state.get(\"error\"): print(f\"\\nERROR RECORDED IN STATE: {final_state['error']}\")\n     # 打印计划\n     final_plan = final_state.get('plan')\n     if final_plan: print(\"\\nFinal Plan State:\", json.dumps(final_plan, indent=2, default=str))\n     else: print(\"\\nFinal Plan State: Not available.\")\n     # 打印消息历史\n     final_messages = final_state.get(\"messages\", [])\n     if final_messages:\n         print(\"\\nFinal Message History (Last 10):\")\n         for m in final_messages[-10:]:\n             try:\n                 if hasattr(m, 'pretty_print'): m.pretty_print()\n                 else: print(json.dumps(m, indent=2, default=str))\n                 print(\"-\" * 10)\n             except Exception as print_err: print(f\"Error printing final message: {print_err}\")\n     else: print(\"\\nFinal Message History: Empty.\")\n\n     print(\"\\n--- END OF TEST ---\")\n\n\nif __name__ == \"__main__\":\n    try:\n        asyncio.run(main())\n    except KeyboardInterrupt:\n        print(\"\\nExecution interrupted by user.\")\n    except Exception as e:\n         print(f\"\\nAn unexpected top-level error occurred: {e}\")\n         traceback.print_exc()"
  },
  {
    "path": "examples/state_based_supervisor_examples/02_tavily.py",
    "content": "# main.py (用于测试 State-Based Supervisor 和 ReactAgent)\n\nimport asyncio\nimport json\nimport os\nfrom typing import Dict, Any, Optional\nfrom langchain_community.tools import TavilySearchResults\n# --- LangChain / LangGraph ---\n# 假设模型直接在此初始化或从别处导入\nfrom dotenv import load_dotenv\nload_dotenv()  # 自动加载 .env 文件\ntry:\n    from langchain_openai import ChatOpenAI # 或者你使用的其他模型\nexcept ImportError:\n     ChatOpenAI = None\n     print(\"Warning: langchain_openai not installed.\")\n\n# 核心消息类型\nfrom langchain_core.messages import HumanMessage, AIMessage, BaseMessage, ToolMessage\n\n# --- OpenAI 错误处理 ---\ntry:\n    from openai import RateLimitError\nexcept ImportError:\n    class RateLimitError(Exception): pass\n\n# --- 内部模块导入 (请确保路径正确) ---\ntry:\n    # 从你提供的 core.agents... 路径导入\n    from core.agents.sb_supervisor_agent import SupervisorAgent # 你的 Supervisor 实现\n    from core.agents.state_based_supervisor.state_schema import PlanningAgentState # 包含 Plan 的状态\n    from core.agents.base.react_agent import ReactAgent # 你的 ReactAgent 实现\n    from core.llm.llm_manager import LLMManager # LLM 管理器\n    # (如果你的子 Agent 有更具体的类，在这里导入它们)\n    # 例如:\n    # from core.agents.researcher import ResearchAgent\n    # from core.agents.coder import CoderAgent\n\n    # --- 如果没有具体子 Agent 类，使用 ReactAgent 作为示例 ---\n    # (确保 ReactAgent 可以被直接实例化用于测试)\n    if not issubclass(ReactAgent, object): # 简单检查 ReactAgent 是否有效\n         raise ImportError(\"ReactAgent class not found or invalid.\")\n\nexcept ImportError as e:\n    print(f\"Error importing agent components: {e}\")\n    print(\"Please ensure paths like 'core.agents.sb_supervisor_agent' are correct relative to your execution path.\")\n\nimport traceback\n\n# --- 主执行函数 (简化版，只关注最终结果) ---\nasync def run_supervisor_test(supervisor_agent: SupervisorAgent, initial_state: Dict[str, Any]):\n    \"\"\"Executes the supervisor graph using ainvoke and prints the final state.\"\"\"\n\n    print(\"--- Starting Supervisor Graph Test ---\")\n    # 
获取初始消息列表，检查是否为空\n    messages_list = initial_state.get(\"messages\", [])\n    initial_query = \"N/A\" # 默认值\n    if messages_list:\n        first_message = messages_list[0]\n        # 检查第一个消息是否有 content 属性 (更健壮)\n        if hasattr(first_message, 'content'):\n            initial_query = first_message.content\n        else:\n             # 如果第一个元素不是预期的消息对象，记录一下\n             print(f\"Warning: First item in initial messages is not a standard message object: {type(first_message)}\")\n             initial_query = str(first_message) # 尝试转换为字符串\n    print(f\"Initial Query: '{initial_query}'\")\n    print(\"-\" * 30)\n\n    config = {\"recursion_limit\": 100} # 使用较高的递归限制\n    final_state: Optional[Dict[str, Any]] = None\n    error_occurred: Optional[Exception] = None\n\n    try:\n        print(\"--- Invoking Supervisor Agent (ainvoke) ---\")\n        # 直接调用 ainvoke 获取最终状态\n        final_state = await supervisor_agent.ainvoke(initial_state, config=config)\n        print(\"--- Supervisor Agent Invocation Complete ---\")\n\n    # --- 错误处理 ---\n    except RateLimitError as e:\n        error_occurred = e\n        print(\"\\n\" + \"=\"*40 + \"\\n!!! OpenAI API Error: Insufficient Quota !!!\\n\" + \"=\"*40)\n        print(\"Execution stopped. Check OpenAI plan/billing.\")\n        print(f\"Original error: {e}\")\n    except TypeError as e:\n         error_occurred = e\n         print(\"\\n\" + \"=\"*40 + \"\\n!!! TypeError During Graph Execution !!!\\n\" + \"=\"*40)\n         print(f\"Error details: {e}\")\n         if \"synchronous function provided\" in str(e):\n              print(\"Hint: Ensure all graph nodes support async or run the graph synchronously if needed.\")\n         traceback.print_exc()\n    except Exception as e:\n         error_occurred = e\n         print(\"\\n\" + \"=\"*40 + \"\\n!!! 
An Unexpected Error Occurred !!!\\n\" + \"=\"*40)\n         print(f\"Error type: {type(e).__name__}\\nError details: {e}\")\n         traceback.print_exc()\n\n    # --- Process Final State ---\n    if error_occurred: print(\"\\n--- Graph Execution INTERRUPTED or FAILED ---\")\n    else: print(\"\\n--- Graph Execution Finished ---\")\n\n    if not final_state:\n         # 如果 ainvoke 返回 None 或在出错前未赋值 (理论上 ainvoke 会抛错或返回字典)\n         print(\"Error: No final state available (Execution might have failed early).\")\n         # 尝试从 supervisor agent 获取最后状态 (如果 checkpointer 可用且实现了 get_state)\n         if hasattr(supervisor_agent, 'checkpointer') and supervisor_agent.checkpointer and hasattr(supervisor_agent.checkpointer, 'get'):\n             try:\n                 # 需要知道配置中的 thread_id (这里假设是 'test_thread')\n                 last_checkpoint = supervisor_agent.checkpointer.get({\"configurable\": {\"thread_id\": \"test_thread\"}})\n                 if last_checkpoint:\n                      print(\"Attempting to display last known checkpoint state:\")\n                      final_state = last_checkpoint.get('channel_values', {})\n                 else:\n                      print(\"Could not retrieve last checkpoint state.\")\n             except Exception as cp_err:\n                  print(f\"Error retrieving checkpoint state: {cp_err}\")\n\n    # 即使出错，也尝试打印 final_state (可能是包含错误信息的状态)\n    if final_state and isinstance(final_state, dict):\n        print(\"\\n--- FINAL STATE ---\")\n\n        # 1. 打印错误信息 (如果存在)\n        if final_state.get(\"error\"):\n             print(f\"\\nERROR RECORDED IN STATE: {final_state['error']}\")\n\n        # 2. 
打印最终消息历史 (尝试 pretty_print)\n        final_messages = final_state.get(\"messages\", [])\n        if final_messages and isinstance(final_messages, list):\n             print(\"\\nFinal Message History (Last ~10):\")\n             for m in final_messages[-10:]: # 只打印最后一部分\n                  try:\n                       if hasattr(m, 'pretty_print'):\n                            m.pretty_print()\n                       else: # Fallback for dict or other types\n                            print(json.dumps(m, indent=2, default=str))\n                       print(\"-\" * 10)\n                  except Exception as print_err:\n                       print(f\"Error printing final message: {print_err}\")\n        else:\n             print(\"\\nFinal Message History: Not available or empty.\")\n\n        # 3. 打印最终计划状态\n        final_plan = final_state.get('plan')\n        if final_plan and isinstance(final_plan, dict):\n            print(\"\\nFinal Plan State:\")\n            print(json.dumps(final_plan, indent=2, default=str))\n        else:\n            print(\"\\nFinal Plan State: Not available or not generated.\")\n\n    else:\n        print(\"\\n--- No Final State Could Be Displayed ---\")\n\n\n    print(\"\\n--- END OF TEST ---\")\n    return final_state\n\n# --- Main Execution Block ---\nasync def main():\n    # --- 1. 初始化 LLM 管理器 (它会自动注册配置好的模型) ---\n    try:\n        model_manager = LLMManager()\n         # 可以选择打印一下注册了哪些模型\n        print(\"Registered Models:\", json.dumps(model_manager.list_models(), indent=2))\n        print(\"Capability Mapping:\", model_manager.list_capabilities())\n    except Exception as e:\n        print(f\"Failed to initialize LLMManager: {e}\")\n        return\n\n     # --- 2. 
实例化 Agents (使用 ModelManager 获取模型) ---\n    try:\n         # 获取默认模型用于基础任务\n        grok = model_manager.get_model(\"xai_grok\") # 获取 ID 由 config 或第一个注册的决定\n        deepseek_v3 = model_manager.get_model(\"deepseek_v3\") # 获取 DeepSeek 模型\n         # 创建Tavily搜索工具\n        tavily_search = TavilySearchResults(\n            max_results=3,\n            include_answer=True,\n            include_raw_content=False,\n            include_images=False,\n            search_depth=\"advanced\"\n        )\n\n         # 确保 ReactAgent 使用与 Supervisor 兼容的状态 (例如 BasicAgentState)\n         # 或者 Supervisor 能够处理不同类型的子 Agent 状态返回\n        researcher_system_prompt = \"\"\"You are a research expert. Use available tools to find the most up-to-date information to answer the user's query. You have access to a 'tavily_search_results_json' tool.\"\"\"\n\n        research_agent = ReactAgent(\n            name=\"research_expert\", \n            tools=[tavily_search],\n            description=\"Research expert with access to Tavily search.\",\n            model=deepseek_v3,\n            prompt=researcher_system_prompt,\n         ) \n         \n        all_agents = [research_agent] # 只包含一个子 Agent 用于测试\n\n         # --- 实例化 Supervisor (使用 PlanningAgentState) ---\n        supervisor = SupervisorAgent(\n             agents=all_agents,\n             model=deepseek_v3, # Supervisor 使用的 LLM\n             state_schema=PlanningAgentState, # 明确 Supervisor 使用 Planning 状态\n             # enable_planning=True, # 不再需要此参数，因为 state_schema 暗示了规划\n             include_agent_name=\"inline\" # 推荐\n             # checkpointer=... # 添加 Checkpointer 以测试持久化\n         )\n    except Exception as e:\n         print(f\"Failed to initialize agents or supervisor: {e}\")\n         import traceback\n         traceback.print_exc()\n         return\n\n     # --- 获取用户输入 ---\n    topic = input(\"Please enter the initial request for the supervisor: \")\n    if not topic:\n         print(\"No request entered. 
Exiting.\")\n         return\n\n     # --- 准备初始状态 (使用 PlanningAgentState) ---\n    initial_graph_state: PlanningAgentState = {\n         \"messages\": [HumanMessage(content=topic)], # 确保是 HumanMessage 对象\n         \"plan\": None, # 初始没有计划\n         \"error\": None\n     }\n\n     # --- 运行测试 ---\n    await run_supervisor_test(supervisor, initial_graph_state)\n\n\nif __name__ == \"__main__\":\n    try:\n        asyncio.run(main())\n    except KeyboardInterrupt:\n        print(\"\\nExecution interrupted by user.\")\n    except Exception as e:\n         print(f\"\\nAn unexpected top-level error occurred: {e}\")\n         traceback.print_exc()"
  },
  {
    "path": "examples/state_based_supervisor_examples/03_multi_agents.py",
    "content": "# main.py (Multi-Agent Test with State-Based Supervisor)\n\nimport asyncio\nimport json\nimport os\nimport re\nimport time\nimport traceback # 导入 traceback\nfrom datetime import datetime\nfrom typing import Dict, Any, Optional, List, Literal, cast\n\n# --- LangChain / LangGraph / OpenAI Imports ---\nfrom langchain_core.messages import HumanMessage, AIMessage, BaseMessage, ToolMessage\n\n\n\n# --- Agent 和工具导入 (确保路径正确) ---\ntry:\n    from core.agents.sb_supervisor_agent import SupervisorAgent # 替换为你的 SupervisorAgent 类路径\n    from core.agents.state_based_supervisor.state_schema import PlanningAgentState\n    from core.agents.base.react_agent import ReactAgent # 导入 ReactAgent 基类\n\n    # 导入所有重构后的子 Agent 类\n    from core.agents.sub_agents.research_agent import ResearchAgent # 假设路径\n    from core.agents.sub_agents.coder_agent import CoderAgent       # 假设路径\n    from core.agents.sub_agents.reporter_agent import ReporterAgent   # 假设路径\n    from core.agents.sub_agents.designer_agent import DesignerAgent   # 假设路径\n    from core.agents.sub_agents.data_analyst_agent import DataAnalystAgent # 假设路径\n\n    # 导入工具注册表函数和枚举\n    from core.tools.registry import get_tools_by_category, ToolCategory, register_tool # 导入 register_tool\n    from core.llm.llm_manager import LLMManager # LLM 管理器\n    # 导入特定工具实例或类 (如果 Registry 没有预注册所有工具)\n    from langchain_community.tools.tavily_search import TavilySearchResults # 示例\n    # from core.tools.e2b_tool import E2BCodeInterpreterTool # 示例\n    # from core.tools.replicate_flux_tool import ReplicateFluxImageTool # 示例\n\n    # --- 确保工具已注册 ---\n    # 运行 registry 初始化 (通常在 core/tools/__init__.py 中完成)\n    try:\n        import core.tools # 尝试导入以触发 __init__.py 中的注册\n        print(\"Tool registry potentially initialized.\")\n    except ImportError:\n        print(\"Warning: Could not import 'core.tools' to initialize registry.\")\n    except Exception as reg_err:\n         print(f\"Error during tool registry initialization: {reg_err}\")\n   
      \n    # (可选) 在这里可以检查或手动注册缺失的核心工具\n    # Example: Check and register Tavily if not present\n    if not any(getattr(t, 'name', '') == 'tavily_search_results_json' for t in get_tools_by_category(ToolCategory.SEARCH)):\n        try:\n            print(\"Attempting to register TavilySearchResults...\")\n            tavily_tool = TavilySearchResults(max_results=3)\n            register_tool(tavily_tool, ToolCategory.SEARCH)\n        except Exception as e:\n            print(f\"Warning: Failed to register TavilySearchResults manually: {e}\")\n            \n    # ... 检查并注册其他必要的工具 ...\n\n\nexcept ImportError as e:\n    print(f\"Error importing agent/tool components: {e}\")\n    print(\"Please ensure all agent/tool class paths and registry setup are correct.\")\n    exit(1)\n\n\n# --- 助手函数 ---\ndef slugify(text: str) -> str:\n    \"\"\"Converts text to a safe filename part.\"\"\"\n    # ... (保持不变) ...\n    if not text: return \"no_topic\"\n    text = text.lower(); text = re.sub(r'\\s+', '_', text)\n    text = re.sub(r'[^\\w\\-]+', '', text); text = text.strip('_')\n    return text[:100] if text else \"sanitized_topic\"\n\n# --- 主研究/测试函数 ---\nasync def run_supervisor_test(supervisor_agent: SupervisorAgent, initial_state: Dict[str, Any]):\n    \"\"\"Executes the supervisor graph using ainvoke and prints the final state.\"\"\"\n\n    print(\"\\n--- Starting Supervisor Graph Execution ---\")\n    initial_query = initial_state.get(\"messages\", [{}])[0].content if initial_state.get(\"messages\") and hasattr(initial_state.get(\"messages\")[0], 'content') else \"N/A\"\n    print(f\"Initial Query: '{initial_query}'\")\n    print(\"-\" * 30)\n\n    config = {\"recursion_limit\": 100} # 保持较高的递归限制\n\n    final_state: Optional[Dict[str, Any]] = None\n    error_occurred: Optional[Exception] = None\n\n    try:\n        print(\"--- Invoking Supervisor Agent (ainvoke) ---\")\n        # 直接调用 ainvoke 获取最终状态\n        final_state = await supervisor_agent.ainvoke(initial_state, 
config=config)\n        print(\"--- Supervisor Agent Invocation Complete ---\")\n\n    # --- 错误处理 ---\n    except Exception as e: error_occurred = e; print(f\"\\n!!! Error during graph execution: {e}\"); traceback.print_exc()\n\n\n    # --- 处理最终状态 ---\n    if error_occurred: print(\"\\n--- Graph Execution INTERRUPTED or FAILED ---\")\n    else: print(\"\\n--- Graph Execution Finished ---\")\n\n    if not final_state:\n         print(\"Error: No final state available (Execution might have failed early).\")\n         # 尝试从 checkpointer 获取最后状态 (如果配置了)\n         # ... (checkpoint retrieval logic - optional) ...\n         return None\n\n    print(\"\\n--- FINAL STATE ---\")\n    # 打印错误 (如果在状态中记录了)\n    if final_state.get(\"error\"): print(f\"\\nERROR RECORDED IN STATE: {final_state['error']}\")\n\n    # 打印计划\n    final_plan = final_state.get('plan')\n    if final_plan and isinstance(final_plan, dict):\n        print(\"\\nFinal Plan State:\")\n        print(json.dumps(final_plan, indent=2, default=str))\n    else: print(\"\\nFinal Plan State: Not available or not generated.\")\n\n    # 打印最终消息历史\n    final_messages = final_state.get(\"messages\", [])\n    if final_messages and isinstance(final_messages, list):\n         print(\"\\nFinal Message History (Last 10):\")\n         for m in final_messages[-10:]:\n              try:\n                   if hasattr(m, 'pretty_print'): m.pretty_print()\n                   else: print(json.dumps(m, indent=2, default=str)) # Fallback\n                   print(\"-\" * 10)\n              except Exception as print_err: print(f\"Error printing final message: {print_err}\")\n    else: print(\"\\nFinal Message History: Empty.\")\n\n    # --- 保存最终报告 (如果 Reporter Agent 被调用且成功) ---\n    # 检查最后一条消息是否来自 Reporter\n    final_report_content = None\n    if final_messages and isinstance(final_messages[-1], AIMessage) and final_messages[-1].name == \"reporter_expert\":\n         final_report_content = final_messages[-1].content\n         
print(\"\\n--- Final Report Found from Reporter Agent ---\")\n\n    if not error_occurred and final_report_content and isinstance(final_report_content, str) and \"Failed\" not in final_report_content:\n        print(\"\\n--- Saving Final Output to Markdown ---\")\n        try:\n            markdown_content = final_report_content\n            # 获取原始请求作为文件名基础\n            initial_query_text = final_state.get('messages', [{}])[0].content if final_state.get('messages') and hasattr(final_state.get('messages')[0], 'content') else 'unknown_request'\n            topic_slug = slugify(initial_query_text)\n            timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n            filename = f\"multi_agent_report_{topic_slug}_{timestamp}.md\"\n\n            script_dir = os.path.dirname(os.path.abspath(__file__))\n            output_dir = os.path.join(script_dir, \"Output\")\n            os.makedirs(output_dir, exist_ok=True)\n            filepath = os.path.join(output_dir, filename)\n\n            with open(filepath, \"w\", encoding=\"utf-8\") as f: f.write(markdown_content)\n            print(f\"Successfully saved output to: {filepath}\")\n        except Exception as e: print(f\"Error saving output to Markdown: {e}\")\n    elif error_occurred: print(\"\\nFinal Report: Not saved due to execution error.\")\n    else: print(\"\\nFinal Report: Not generated or not found.\")\n\n    print(\"\\n--- END OF TEST ---\")\n    return final_state\n\n\n# --- Main Execution Block ---\nasync def main():\n    # --- 1. 初始化 LLM 管理器 ---\n    try:\n        model_manager = LLMManager()\n        print(\"Registered Models:\", json.dumps(model_manager.list_models(), indent=2))\n    except Exception as e:\n        print(f\"Failed to initialize LLMManager: {e}\")\n        return\n\n    # --- 2. 
实例化所有 Agents ---\n    try:\n        # 获取模型实例\n        # 确保 'deepseek_v3' 和 'gpt-4o' 是你 LLMManager 中有效的 ID\n        deepseek_model = model_manager.get_model(\"deepseek_v3\")\n        gpt4o_model = model_manager.get_model(\"openai_gpt4o\") # 多模态模型\n\n        # 实例化 ResearchAgent\n        research_agent = ResearchAgent(\n            model=deepseek_model,\n        )\n\n        # 实例化 CoderAgent\n        coder_agent = CoderAgent(\n            model=deepseek_model,\n        )\n\n        # 实例化 ReporterAgent\n        reporter_agent = ReporterAgent(\n            model=deepseek_model\n        )\n\n        # 实例化 DesignerAgent\n        designer_agent = DesignerAgent(\n            model=gpt4o_model,\n        )\n\n        # 实例化 DataAnalystAgent\n        data_analyst_agent = DataAnalystAgent(\n            model=deepseek_model,\n        )\n\n        # --- 3. 组合 Agent 列表 ---\n        all_agents = [\n            research_agent,\n            coder_agent,\n            reporter_agent,\n            designer_agent,\n            data_analyst_agent,\n        ]\n\n        # --- 4. 实例化 Supervisor ---\n        supervisor = SupervisorAgent(\n             agents=all_agents,\n             model=deepseek_model, # Supervisor 自身使用的模型\n             # model = gpt4o_model,\n             state_schema=PlanningAgentState,\n             include_agent_name=\"inline\"\n             # checkpointer=... # 可选: 添加 Checkpointer 实现持久化\n        )\n\n    except Exception as e:\n         print(f\"Failed to initialize agents or supervisor: {e}\")\n         traceback.print_exc()\n         return\n\n    # --- 5. 获取用户输入 ---\n    topic = input(\"Please enter the initial request for the supervisor: \")\n    if not topic:\n         print(\"No request entered. Using default topic.\")\n         topic = \"\"\"我需要获取法国巴黎当前的实时气温。请按以下步骤操作：\n1. 首先，帮我调研一个可以免费获取巴黎当前天气数据的 API (例如 Open-Meteo, WeatherAPI.com 或其他类似的)，重点是找到获取当前气温的 API 端点(endpoint URL)以及如何构造请求（如果可能，选择不需要 API key 的）。\n2. 
然后，编写一个 Python 脚本，使用 'requests' 库来调用上一步找到的 API 端点，并从中提取出巴黎当前的温度（摄氏度）。\n3. 使用你的代码执行工具来运行这个 Python 脚本。\n4. 最后，告诉我你找到的当前巴黎温度是多少。\"\"\"\n\n    # --- 6. 准备初始状态 ---\n    initial_graph_state: PlanningAgentState = {\n         \"messages\": [HumanMessage(content=topic)], \n         \"plan\": None,\n         \"error\": None\n    }\n\n    # --- 7. 运行测试 ---\n    await run_supervisor_test(supervisor, initial_graph_state)\n\n\nif __name__ == \"__main__\":\n    try:\n        asyncio.run(main())\n    except KeyboardInterrupt:\n        print(\"\\nExecution interrupted by user.\")\n    except Exception as e:\n         print(f\"\\nAn unexpected top-level error occurred: {e}\")\n         traceback.print_exc()"
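As a standalone reference, the `slugify` helper defined in `03_multi_agents.py` (which determines the saved report's filename) can be exercised on its own; this copy is verbatim from the script and stdlib-only:

```python
import re

def slugify(text: str) -> str:
    """Converts text to a safe filename part (copy of the helper in 03_multi_agents.py)."""
    if not text:
        return "no_topic"
    text = text.lower()
    text = re.sub(r'\s+', '_', text)      # collapse whitespace runs into underscores
    text = re.sub(r'[^\w\-]+', '', text)  # drop anything that is not a word char or hyphen
    text = text.strip('_')
    return text[:100] if text else "sanitized_topic"

print(slugify("Paris Weather: Live Temp!"))  # -> paris_weather_live_temp
print(slugify(""))                           # -> no_topic
```

The empty-string and all-punctuation fallbacks matter because the slug is interpolated into `multi_agent_report_{slug}_{timestamp}.md`.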
  },
  {
    "path": "examples/web_agents/README.md",
    "content": "# Web Agents\n\n这个目录包含可以通过web界面加载的代理示例。每个子目录代表一个独立的代理实现，可以被server.py动态加载。\n\n## 目录结构\n\n每个代理应遵循以下结构：\n\n```\nagent_name/\n  __init__.py  # 包含get_graph()函数，返回编译好的LangGraph\n  README.md    # 代理的说明文档\n```\n\n## 接口规范\n\n每个代理必须实现以下接口：\n\n```python\ndef get_graph():\n    \"\"\"返回编译好的LangGraph实例\"\"\"\n    # 构建并返回图\n    return compiled_graph\n```"
  },
  {
    "path": "examples/web_agents/README_SPEC.md",
    "content": "# Web Agent 开发规范\n\n## 1. 概述\n\n本规范旨在统一Web Agent的开发流程和命名约定，确保前后端协同工作，避免出现前端组件无法正确显示后端数据的问题。本文档基于实际开发经验，特别强调前后端节点命名一致性的重要性。\n\n## 2. 节点命名规范\n\n## 2. 前后端交互核心机制\n\n### 2.1 关键概念\n\n- **节点名称匹配**: 前端渲染组件时，会根据后端节点的名称来选择对应的组件进行渲染\n- **状态数据结构**: 后端节点生成的状态数据必须符合前端组件期望的结构\n- **渲染函数**: 前端的`renderNode`函数是连接后端节点和前端组件的关键桥梁\n\n### 2.2 渲染流程\n\n1. 后端节点执行并生成状态数据\n2. 前端通过`useLangGraphAgent`钩子接收节点数据\n3. 前端的`renderNode`函数根据节点名称选择对应组件\n4. 组件根据状态数据进行渲染\n\n## 3. 节点命名规范\n\n### 3.1 关键节点命名\n\n所有Web Agent必须在图结构中包含处理消息的节点，这些节点名称必须与前端`renderNode`函数中的case语句匹配：\n\n```python\n# 后端节点命名 - 必须与前端renderNode函数中的case匹配\nbuilder.add_node(\"agent\", agent_function)  # 或其他在前端已注册的节点名称\n```\n\n**重要提示**: 前端`page.tsx`中的`renderNode`函数定义了可识别的节点名称。目前支持的节点名称有：\n- `__start__`\n- `agent` (替代了原来的`chatbot`)\n- `weather`\n- `reminder`\n- `research`\n- `search`\n- `report`\n\n如果后端使用了其他节点名称，必须在前端的`renderNode`函数中添加对应的case语句。\n\n### 3.2 状态字段命名\n\n- 状态字段名称应与前端组件期望的字段名称保持一致\n- 使用蛇形命名法（snake_case）命名状态字段\n- 复杂数据结构应使用数组形式，即使只有一个元素\n\n### 3.3 必要的状态字段\n\n每个Web Agent必须在`agent-types.ts`文件中定义其状态接口，并确保后端发送的状态与此接口匹配：\n\n```typescript\nexport interface AgentState extends WithMessages {\n  // 定义Agent特有的状态字段\n  weather_forecast?: WeatherForecast[];\n  research_status?: ResearchStatus[];\n  // 其他状态字段\n}\n```\n\n## 4. 
前端组件规范\n\n### 4.1 组件结构\n\n- 主组件应根据节点名称渲染不同的子组件\n- 子组件应检查所需状态字段是否存在，并提供合理的默认行为\n\n```typescript\nexport default function renderNode(checkpoint, node) {\n  switch (node.name) {\n    case '__start__':\n    case 'agent':  // 注意：这里使用'agent'替代了原来的'chatbot'\n      return <ChatbotNode nodeState={node.state} />;\n    case 'weather':\n      return <WeatherNode nodeState={node.state} />;\n    // 其他节点类型\n    default:\n      return null;\n  }\n}\n```\n\n### 4.2 组件注册\n\n所有Web Agent的节点组件必须在`page.tsx`的`renderNode`函数中正确注册：\n\n```typescript\nconst renderNode = (checkpoint, node) => {\n  switch (node.name) {\n    // 确保这里的节点名称与后端图定义中的节点名称一致\n    case '__start__':\n    case 'agent':  // 注意：这里使用'agent'替代了原来的'chatbot'\n      return <ChatbotNode nodeState={node.state} />;\n    case 'weather':\n      return <WeatherNode nodeState={node.state} />;\n    case 'reminder':\n      return <Reminder interruptValue={checkpoint.interruptValue} onResume={handleResume} />;\n    case 'research':\n    case 'search':\n    case 'report':\n      return <ResearchNode nodeState={node.state} />;\n    // 添加新节点类型的渲染逻辑\n    default:\n      return null;\n  }\n}\n```\n\n## 5. 后端图结构规范\n\n### 5.1 节点函数\n\n- 节点函数应使用适当的参数来处理状态\n- 消息处理必须在与前端匹配的节点中进行\n\n```python\nasync def agent(state):  # 注意：这里使用'agent'替代了原来的'chatbot'\n    # 处理消息并返回结果\n    return {\"messages\": [...]}  # 必须包含messages字段\n```\n\n### 5.2 图构建\n\n- 图必须包含与前端匹配的节点，用于处理消息\n- 必须实现`get_graph()`函数返回编译好的图实例\n\n```python\ndef get_graph():\n    \"\"\"返回编译好的LangGraph实例\"\"\"\n    builder = StateGraph()\n    builder.add_node(\"agent\", agent)  # 注意：这里使用'agent'替代了原来的'chatbot'\n    # 添加边和其他节点\n    graph = builder.compile(checkpointer=MemorySaver())\n    return graph\n```\n\n## 6. 开发流程\n\n### 6.1 新建Web Agent流程\n\n1. 在`examples/web_agents/`下创建新的Agent目录\n2. 创建`graph.py`文件，实现Agent的图结构，确保节点名称与前端`renderNode`函数中的case语句匹配\n3. 在`web/app/chat/[id]/agent-types.ts`中添加Agent所需的状态接口\n4. 在`web/app/chat/[id]/components/`下创建Agent的组件\n5. 
在`web/app/chat/[id]/page.tsx`的`renderNode`函数中注册Agent的节点组件（如果使用新的节点名称）\n\n### 6.2 测试验证\n\n在提交代码前，必须进行以下测试：\n\n1. 确认后端图结构中的节点名称与前端`renderNode`函数中的case语句匹配\n2. 验证前端组件能正确渲染不同类型的节点\n3. 检查状态字段名称与前端组件期望的字段名称一致\n\n## 7. 常见问题与解决方案\n\n### 7.1 前端不显示消息问题\n\n如果前端不显示消息内容，请检查：\n\n1. 后端图结构中的节点名称是否与前端`renderNode`函数中的case语句匹配\n2. 前端`renderNode`函数是否正确处理了对应的节点名称\n3. 消息是否正确包含在state的messages字段中\n\n### 7.2 状态更新不生效\n\n确保状态更新时，字段名称与前端期望的字段名称一致，并且数据结构符合前端组件的预期。\n\n### 7.3 添加新节点类型\n\n如果需要添加新的节点类型，必须：\n\n1. 在后端图结构中定义新节点\n2. 在前端`page.tsx`的`renderNode`函数中添加对应的case语句\n3. 创建新节点对应的前端组件\n4. 在`agent-types.ts`中添加新节点所需的状态接口\n\n---\n\n遵循本规范可以有效避免前后端不一致导致的显示问题，提高Web Agent的开发效率和质量。"
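To make the field-shape rules of sections 3.2 and 7.2 mechanically checkable, here is an illustrative stdlib-only helper (not part of the repo) that flags the two most common mistakes: non-snake_case field names, and list-typed fields sent as single objects:

```python
def check_state_shape(state: dict, list_fields: set[str]) -> list[str]:
    """Return human-readable problems with a backend state update."""
    problems = []
    for key, value in state.items():
        if any(ch.isupper() for ch in key):
            problems.append(f"field '{key}' is not snake_case")
        if key in list_fields and not isinstance(value, list):
            problems.append(f"field '{key}' must be a list (even with one element)")
    return problems

# A weather forecast sent camelCase and as a bare object instead of a one-element array:
bad = check_state_shape({"weatherForecast": {"location": "Paris"}}, {"weatherForecast"})
good = check_state_shape({"weather_forecast": [{"location": "Paris"}]}, {"weather_forecast"})
print(bad)   # two problems reported
print(good)  # []
```

Running such a check in a backend unit test catches frontend rendering failures before they reach `renderNode`.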
  },
  {
    "path": "examples/web_agents/__init__.py",
    "content": "# Web Agents Package\n# This package contains web agents that can be loaded by the server"
  },
  {
    "path": "examples/web_agents/research_assistant/README.md",
    "content": "# 研究助手\n\n这是一个强大的研究助手代理，可以帮助用户进行在线研究、信息收集和报告生成。\n\n## 功能\n\n- 在线搜索信息\n- 提取和总结网页内容\n- 生成研究报告\n- 实时显示研究进度\n\n## 使用方法\n\n用户可以通过自然语言与代理交互，例如：\n\n- \"帮我研究人工智能在医疗领域的应用\"\n- \"查找关于气候变化的最新研究\"\n- \"总结量子计算的基本原理\"\n\n## 技术实现\n\n该代理使用LangGraph构建，结合了Supervisor和React模式，包含以下节点：\n\n- supervisor: 协调整个研究流程\n- search: 执行在线搜索\n- extract: 提取网页内容\n- analyze: 分析收集的信息\n- report: 生成研究报告\n\n研究过程中会实时更新状态，让用户了解当前进度。"
  },
  {
    "path": "examples/web_agents/research_assistant/__init__.py",
    "content": "# Research Assistant Agent\n# This module provides a research assistant agent that can crawl websites and extract content\n\nfrom .graph import get_graph\n\n__all__ = [\"get_graph\"]"
  },
  {
    "path": "examples/web_agents/research_assistant/graph.py",
    "content": "from langgraph.prebuilt import create_react_agent\nfrom langchain_openai import ChatOpenAI\nfrom typing import Dict, Any\nfrom dotenv import load_dotenv\nfrom langchain_community.tools import TavilySearchResults\nfrom langgraph.checkpoint.memory import MemorySaver\nfrom core.tools.e2b_tool import E2BCodeInterpreterTool\nfrom core.tools.registry import register_tool, ToolCategory\nfrom core.llm.llm_manager import LLMManager\n\nload_dotenv()  # 自动加载 .env 文件\n# 初始化大模型\nmodel = LLMManager().get_model(\"deepseek_v3\")\n\n# 创建Tavily搜索工具\ntavily_search = TavilySearchResults(\n    max_results=3,\n    include_answer=True,\n    include_raw_content=False,\n    include_images=False,\n    search_depth=\"advanced\"\n)\n\n# 创建E2B代码解释器工具\ne2b_code_interpreter = E2BCodeInterpreterTool()\n\n\nresearch_agent = create_react_agent(\n    model=model,\n    tools=[tavily_search, e2b_code_interpreter],\n    name=\"research_expert\",\n    # Prompt 告诉它是一个研究型 Agent，可调用 tavily_search 和 e2b_code_interpreter\n    prompt=(\n        \"你是一位世界级的研究专家和数据分析师，擅长信息检索和数据分析。你有两个强大的工具可以使用：\\n\"\n        \"1. 'tavily_search_results_json'：用于搜索网络获取实时信息\\n\"\n        \"2. 'e2b_code_interpreter'：用于执行Python代码，支持数据分析和可视化\\n\\n\"\n        \"当面对问题时，请遵循以下方法论：\\n\"\n        \"1. 分析问题：理解用户的需求和问题本质\\n\"\n        \"2. 制定计划：确定需要搜索哪些信息，以及是否需要进行数据分析\\n\"\n        \"3. 执行搜索：使用tavily_search_results_json工具获取最新信息\\n\"\n        \"4. 数据分析：如果需要，使用e2b_code_interpreter工具编写和执行Python代码进行数据分析和可视化\\n\"\n        \"5. 综合信息：将搜索结果和数据分析结果综合成一个连贯的回答\\n\\n\"\n        \"重要提示：\\n\"\n        \"- 对于信息检索任务，使用tavily_search_results_json工具，并在回答中引用来源URL\\n\"\n        \"- 对于数据分析和可视化任务，使用e2b_code_interpreter工具执行Python代码\\n\"\n        \"- 在使用代码解释器时，确保导入必要的库（如pandas, matplotlib, numpy等）\\n\"\n        \"- 在代码中添加详细注释，解释关键步骤\\n\"\n        \"- 执行代码后，解释结果含义和见解\"\n    ),\n    checkpointer=MemorySaver(),\n)\n\n\ndef get_graph():\n    return research_agent"
  },
  {
    "path": "examples/web_agents/weather_agent/README.md",
    "content": "# 天气代理\n\n这是一个简单的天气查询代理，可以回答用户关于天气的问题，并提供天气预报信息。\n\n## 功能\n\n- 查询当前天气\n- 创建提醒\n\n## 使用方法\n\n用户可以通过自然语言与代理交互，例如：\n\n- \"今天北京的天气怎么样？\"\n- \"帮我设置一个提醒，明天早上8点去开会\"\n\n## 技术实现\n\n该代理使用LangGraph构建，包含以下节点：\n\n- chatbot: 处理用户输入并生成回复\n- weather: 处理天气查询请求\n- reminder: 处理提醒创建请求"
  },
  {
    "path": "examples/web_agents/weather_agent/__init__.py",
    "content": "# Weather Agent Example\n# This is a simple weather agent that can be loaded by the server\n\nimport operator\nfrom typing import Literal, TypedDict, Any, Annotated\nfrom dotenv import load_dotenv\nfrom langchain_openai import ChatOpenAI\nfrom langgraph.graph import StateGraph, MessagesState, START, END\nfrom langgraph.checkpoint.memory import MemorySaver\nfrom langgraph.types import StreamWriter, interrupt, Send\nfrom langchain_core.messages import ToolMessage\nfrom langchain_core.tools import tool\nimport random\nimport asyncio\n\nload_dotenv()\n\n\nclass Weather(TypedDict):\n    location: str\n    search_status: str\n    result: str\n\n\nclass State(MessagesState):\n    weather_forecast: Annotated[list[Weather], operator.add]\n\n\nclass WeatherInput(TypedDict):\n    location: str\n    tool_call_id: str\n\n\nclass ToolNodeArgs(TypedDict):\n    name: str\n    args: dict[str, Any]\n    id: str\n\n\n@tool\nasync def weather_tool(query: str) -> str:\n    \"\"\"Call to get current weather\"\"\"\n    return \"Sunny\"\n\n\n@tool\nasync def create_reminder_tool(reminder_text: str) -> str:\n    \"\"\"Call to create a reminder\"\"\"\n    return \"Reminder created\"\n\n\nasync def weather(input: WeatherInput, writer: StreamWriter):\n    location = input[\"location\"]\n    tool_call_id = input[\"tool_call_id\"]\n\n    # Send custom event to the client. It will update the state of the last checkpoint and all child nodes.\n    # Note: if there are multiple child nodes (e.g. 
parallel nodes), the state will be updated for all of them.\n    writer({\"weather_forecast\": [\n           {\"location\": location, \"search_status\": f\"Checking weather in {location}\"}]})\n\n    await asyncio.sleep(2)\n    weather = random.choice([\"Sunny\", \"Cloudy\", \"Rainy\", \"Snowy\"])\n\n    return {\"messages\": [ToolMessage(content=weather, tool_call_id=tool_call_id)], \"weather_forecast\": [{\"location\": location, \"search_status\": \"\", \"result\": weather}]}\n\n\nasync def reminder(input: ToolNodeArgs):\n    res = interrupt(input['args']['reminder_text'])\n\n    tool_answer = \"Reminder created.\" if res == 'approve' else \"Reminder creation cancelled by user.\"\n\n    return {\"messages\": [ToolMessage(content=tool_answer, tool_call_id=input[\"id\"])]}\n\n\nasync def chatbot(state: State):\n    llm = ChatOpenAI(\n        model=\"gpt-4o-mini\").bind_tools([weather_tool, create_reminder_tool])\n    response = await llm.ainvoke(state[\"messages\"])\n    return {\"messages\": [response]}\n\n\ndef tool_router(state: State) -> Literal[\"weather\", \"reminder\", \"__end__\"]:\n    messages = state[\"messages\"]\n    last_message = messages[-1]\n    if last_message.tool_calls:\n        if last_message.tool_calls[0][\"name\"] == \"weather_tool\":\n            return \"weather\"\n        elif last_message.tool_calls[0][\"name\"] == \"create_reminder_tool\":\n            return \"reminder\"\n    return \"__end__\"\n\n\n# Chatbot node router. 
Based on tool calls, creates the list of the next parallel nodes.\ndef assign_tool(state: State) -> list[Send] | Literal[\"__end__\"]:\n    messages = state[\"messages\"]\n    last_message = messages[-1]\n    if last_message.tool_calls:\n        send_list = []\n        for tool in last_message.tool_calls:\n            if tool[\"name\"] == 'weather_tool':\n                send_list.append(\n                    Send('weather', {'location': tool['args']['query'], 'tool_call_id': tool['id']}))\n            elif tool[\"name\"] == 'create_reminder_tool':\n                send_list.append(Send('reminder', tool))\n        return send_list if len(send_list) > 0 else \"__end__\"\n    return \"__end__\"\n\n\ndef get_graph():\n    \"\"\"Return the compiled graph for this agent\"\"\"\n    builder = StateGraph(State)\n\n    builder.add_node(\"chatbot\", chatbot)\n    builder.add_node(\"weather\", weather)\n    builder.add_node(\"reminder\", reminder)\n\n    builder.add_edge(START, \"chatbot\")\n    # __end__ is reached via assign_tool when there are no tool calls; an extra\n    # unconditional chatbot -> END edge would make END a successor on every turn.\n    builder.add_conditional_edges(\"chatbot\", assign_tool)\n    builder.add_edge(\"weather\", \"chatbot\")\n    builder.add_edge(\"reminder\", \"chatbot\")\n\n    memory = MemorySaver()\n    return builder.compile(checkpointer=memory)"
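The fan-out decision in `assign_tool` can be exercised in isolation; the plain-dict re-creation below (illustrative only — the real function inspects LangChain messages and returns langgraph `Send` objects) mirrors its mapping from tool calls to target nodes and payloads:

```python
def route_tool_calls(tool_calls: list[dict]) -> list[tuple[str, dict]]:
    """Stand-in for assign_tool's fan-out: map each tool call to a (node, payload)
    pair, where the pair takes the place of a langgraph Send object."""
    sends = []
    for call in tool_calls:
        if call["name"] == "weather_tool":
            sends.append(("weather",
                          {"location": call["args"]["query"], "tool_call_id": call["id"]}))
        elif call["name"] == "create_reminder_tool":
            sends.append(("reminder", call))
    return sends

# Two tool calls in one model turn produce two parallel node dispatches.
calls = [
    {"name": "weather_tool", "args": {"query": "Beijing"}, "id": "call_1"},
    {"name": "create_reminder_tool", "args": {"reminder_text": "meeting at 8"}, "id": "call_2"},
]
print(route_tool_calls(calls))
```

Keeping this mapping pure makes the routing behavior unit-testable without running the graph or an LLM.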
  },
  {
    "path": "instructions/00.Langgraph 和 React Agent.md",
    "content": "# 一、LangGraph 的核心思想\n\nLangGraph 是一个可以让开发者以**图（Graph）**的方式来编排对话式AI流程的库，提供了以下能力：\n\n1. **状态驱动**：在传统的对话模型中，我们经常需要维护对话上下文、剩余步骤等各种内部变量。LangGraph 将这些变量统一到一个“状态(State)”里，并约定任何节点的输入/输出都以“状态(State)”的形式表示。\n   \n2. **可视化执行流**：LangGraph 将对话/工具调用/自定义逻辑封装成“节点(Node)”与“边(Edge)”。当图被编译后，执行流会在节点之间穿梭，处理对话消息、调用工具、终止或转向某些分支。\n\n3. **可组合**：你可以把一个复杂的对话逻辑拆分为多个可复用的子图，每个子图都可以独立进行单元测试或复用在更大的图中。\n\n4. **多步思考 + 工具调用**：通过 ReAct Agent（一个经典的多步推理+工具调用范式），LangGraph 可以帮助你自动管理**多次**调用语言模型及其衍生工具的过程——只要你把“工具”注册到图里。\n\n在使用时，你基本会经历如下步骤：\n\n1. **定义状态模式（state schema）**：说明 state 中必须包含哪些字段（如：对话消息 `messages`，剩余可用步骤 `remaining_steps`，等等）。\n2. **定义节点（Node）**：比如一个负责调用LLM的节点、一个负责执行特定工具的节点、或者你自定义的Python逻辑节点。\n3. **连接边（Edges）**：决定每个节点之后，下一步走到哪个节点；也可以做条件分支或循环。\n4. **编译图（Compile）**：LangGraph 会把你的“编排逻辑”转换为一个 LangChain 兼容的“可调用对象(CompiledGraph)”。\n5. **执行或流式执行**：可以直接一次性 `graph.invoke(...)` 得到最终结果，也可以使用 `graph.stream(...)` 流式获取每个“阶段性状态（partial state）” 。\n\n---\n\n# 二、LangGraph 核心概念详解\n\nLangGraph 构建的是一个\"流程图\"，每个智能体（agent）或功能模块（tool调用、分支逻辑）都是这个流程图的一个节点（node）。让我们深入理解其中的核心概念：\n\n## 2.1 Graph：有状态的数据流图\n\nGraph 是整个 Agent 系统的执行框架，定义了哪些模块怎么串联、怎么流转。你构建的 graph 是一个有向图：\n\n```python\ngraph = StateGraph(state_schema=MyState)\n\n# 添加节点\ngraph.add_node(\"supervisor\", supervisor_runnable)\ngraph.add_node(\"writer\", writer_runnable)\n\n# 添加边来连接节点\ngraph.add_edge(\"supervisor\", \"writer\")\ngraph.add_edge(\"writer\", \"supervisor\")\n```\n\nLangGraph 根据这些连接关系来控制执行流程，决定在某个节点执行完后下一步应该去哪里。\n\n## 2.2 Node：图中的\"一个执行单元\"\n\n每个 node 是图中的一个处理模块（通常就是一个智能体）。它接受一个输入 state，做点事情，然后返回一个新的 state：\n\n```python\ndef my_node(state: dict) -> dict:\n    # 处理 state 中的数据\n    new_state = state.copy()\n    # 修改状态内容\n    new_state[\"some_key\"] = \"new_value\"\n    return new_state\n```\n\n节点可以是：\n- 函数（同步或异步）\n- LLM Agent（如 create_react_agent 返回的）\n- 包装后的 Agent（如 MemorySlidingReactAgent）\n\n## 2.3 State：每一轮节点处理的输入/输出\n\n每轮调用，LangGraph 会传递一个 \"state\"（字典类型）给当前节点。这个 state 中可以包含：\n- `messages`: 当前对话历史（主上下文）【默认】\n- `memory`: 
你自定义的长期记忆（可以注入系统提示）\n- `todo_list`, `current_task`: 其他任务状态\n- 任何你自定义的字段\n\n每个节点执行后，返回新的 state：\n\n```python\ndef writer(state):\n    new_msg = generate_chapter(state[\"current_task\"])\n    state[\"messages\"].append({\"role\": \"assistant\", \"content\": new_msg})\n    return state\n```\n\n## 2.4 Runnable：Node 的运行接口\n\nLangGraph 要求，每个节点（node）必须是可以运行的，也就是说：你交给 `add_node()` 的对象必须有 `.invoke(state)` 或 `.ainvoke(state)` 方法。\n\n比如：\n- 函数本身（它会自动包装成 Runnable）\n- Agent（React agent 本身就支持 `.invoke`）\n- `RunnableCallable(...)` 是 LangGraph 用来显式包装函数的工具\n\n举个例子：\n\n```python\ndef my_function(state: dict) -> dict:\n    # 处理逻辑\n    return state\n\nrunnable = RunnableCallable(my_function, async_version)\ngraph.add_node(\"writer\", runnable)\n```\n\n## 2.5 执行流程\n\nLangGraph 的执行流程大致如下：\n\n```\nLangGraph Graph:\n   [START]\n      ↓\n  [Supervisor Node]\n      ↓\n  [Writer Node]\n      ↓\n  [Supervisor Node]\n      ↓\n   [END]\n```\n\n每次节点执行时：\n1. 传入当前 state\n2. `.invoke(state)` 被调用\n3. 返回更新后的 state\n4. 下一节点接着执行\n\n## 2.6 概念类比\n\n| LangGraph 概念 | 类比 |\n|----------------|------|\n| Graph | 工作流程图/数据流图 |\n| Node | 每个处理步骤/智能体 |\n| State | 当前上下文与执行状态（黑匣子） |\n| Runnable | 每个节点\"能被执行\"的接口定义 |\n\n---\n\n# 三、ReAct Agent 与 create_react_agent 概念\n\n## 3.1 什么是 ReAct Agent\n\n“ReAct” 是一种典型的LLM多步推理与工具调用策略。它主要包含两部分：\n\n1. **Reasoning**：先让语言模型（LLM）进行一步推理，产出一个潜在的思考过程以及可能的工具调用。\n2. 
**Acting**：如果模型说“我要调用某个工具”，则执行该工具，得到结果，再把结果加入对话，然后让模型再次 Reason，看看是否还需要再调用工具，或输出最后的答案。\n\n这个循环可以**多次往返**，直到模型不再调用工具，输出最终结果。\n\n## 3.2 create_react_agent 做了什么\n\n`create_react_agent(...)` 是 LangGraph 中的一个快捷方法，用于**快速创建**一个可执行的“ReAct风格”图（Graph）：\n\n- **自动添加“agent节点”**：用来调用你的语言模型（并在对话中发出可能的工具调用）。\n- **自动添加“tools节点”**：如果 agent 的输出中含有工具调用（tool_calls），则会交给 tools 节点逐个执行，并把执行结果以 `ToolMessage` 的形式返回到对话中。\n- **自动在 agent ↔ tools 之间连线**：只要 agent 产生了工具调用，就进入 tools；tools 执行完返回消息后，再回到 agent；直到不再有工具调用为止。\n- **可选 structured output**：如果你传入了 `response_format` 参数，LangGraph 会在最后一步生成一个结构化输出(“JSON Schema”、“Pydantic”、“OpenAI function schema”等)，以便你获取可解析的最终结果。\n- **控制“剩余步骤”**：Agent 每次回答后会检查是否还可以继续调用工具，或者是否需要中止并返回错误（“抱歉，需要更多步骤”）。\n\n因此，调用 `create_react_agent(...)` 得到的结果，是一个**已经配置好**的 “CompiledGraph”。这个图中带有 “agent” 节点（LLM） 和 “tools” 节点（调用工具），以及检查**是否还有工具要调**的逻辑。你可以直接拿这个对象执行，获得一个 ReAct 流程的多轮对话+工具使用。\n\n---\n\n# 四、执行流程：从输入到输出\n\n创建好 ReAct 图后，你给它一个输入状态（最少包含 `\"messages\"`，如 `{\"messages\": [(\"user\", \"Hello!\")]}`）。执行过程大体是：\n\n1. **entry point: \"agent\"**  \n   进入 agent 节点，它会从 state[\"messages\"] 中取出消息，交给 LLM 生成一个 AIMessage。如果 AIMessage 包含 tool_calls，那么 state 会更新多一些字段，比如 `messages` 后面多了这个 AIMessage。\n\n2. **检查是否要调用工具**  \n   - 如果 `tool_calls` 不为空，则顺着 edges 进入 \"tools\" 节点。\n   - 如果没有 tool_calls，则表示 agent 没有想调用任何工具 -> 流程会判断是否要去 “generate_structured_response” 或 “END”。\n   \n3. **tools 节点执行**  \n   \"tools\" 节点会去匹配 agent 要调用的工具，比如：  \n   ```json\n   {\n     \"name\": \"search_tool\",\n     \"args\": {\"query\": \"something\"},\n     \"id\": \"call_abc123\"\n   }\n   ```\n   然后运行相应的 Python 函数，得到结果后，包装成 `ToolMessage`，附加回 state[\"messages\"] 列表里。\n   - 如果 agent 一次性请求了多个工具，在 v1 版本中则会并行执行，再把返回结果依次追加到 messages 里。\n   - 在 v2 版本中，LangGraph 会拆分 tool_calls 分批执行。\n\n4. **回到 agent**  \n   现在 agent 再次拿到新的 state[\"messages\"]（多了“ToolMessage”），就会针对最新的对话上下文重新进行思考——是否要再调用别的工具、或者是否直接产出最终回答？\n\n5. 
**循环，直到不再调用工具**  \n   只要 AIMessage 继续发出 tool_calls，就进入 Tools 节点；Tools 执行完再回到 Agent 节点。这一过程可能多次往返。（如果你设定了 `remaining_steps`，LangGraph 在每一轮都会减少1，直到不足时终止或报错，避免死循环。）\n\n6. **可选：结构化输出**  \n   在最后如果 `response_format` 存在，图会跳到“generate_structured_response”节点，再次对(几乎)所有对话做一次 LLM 调用，要求 LLM 产出符合**你给定schema**的 JSON，并存入 `structured_response` 字段中。然后再返回 END。\n\n7. **结束**  \n   整个 ReAct 流程完成后，图会返回一个最终状态，如：\n   ```python\n   {\n     \"messages\": [  # 所有对话消息(包含了Human/AI/Tools等),\n       ...,\n       AIMessage(content=\"Here is the final answer\", tool_calls=[])\n     ],\n     \"remaining_steps\": 2,\n     \"structured_response\": { ... }  # 如果使用了response_format\n   }\n   ```\n   你可以从中拿到想要的最终 AI 回答。\n\n---\n\n# 五、如何查看“中间推理”或“工具调用”？\n\n从 **langgraph 0.3** 开始，`create_react_agent` 及其返回的 Graph 已经**不再支持** `graph.add_state_change_listener` 或在函数参数里传入 `callbacks`。如果你想**监听**或**打印** Agent 的中间思考、工具调用等过程，最好的方式是 **使用 `graph.stream(...)`**——它会在每一小步执行结束后产出一个“部分状态( partial_state )”，你可以在循环里进行日志记录、可视化或其他操作。示例：\n\n```python\ngraph = create_react_agent(model, tools=[...], prompt=\"...\")\n\ninputs = {\n    \"messages\": [\n       (\"user\", \"请分析特斯拉2025年的发展预期，包括新车型计划、销量目标、技术创新和市场扩张战略。\")\n    ]\n}\n\nfor partial_state in graph.stream(inputs, stream_mode=\"values\"):\n    messages = partial_state[\"messages\"]\n    last_msg = messages[-1]\n    if last_msg.type == \"ai\":\n        print(\"[AIMessage] => \", last_msg.content)\n        if last_msg.tool_calls:\n            print(\"AI wants to call tools:\", last_msg.tool_calls)\n    elif last_msg.type == \"tool\":\n        print(\"[ToolMessage] => Name:\", last_msg.name, \"Content:\", last_msg.content)\n    elif last_msg.type == \"human\":\n        print(\"[User] => \", last_msg.content)\n\n# 最后一次迭代时，partial_state 就是最终结果\nfinal_answer = partial_state[\"messages\"][-1].content\nprint(\"最终回答:\", final_answer)\n```\n\n这样就能够**在每一次** Agent 或 Tools 完成后都获取状态，不需要“回调监听器”。\n\n---\n\n# 六、关于一些进阶用法\n\n1. 
**`interrupt_before` / `interrupt_after`**  \n   如果你希望在“agent”节点**执行前**或者**后**打断，可以设置这两个可选参数，比如：\n   ```python\n   create_react_agent(\n       model,\n       tools=[...],\n       interrupt_before=[\"tools\"],\n       interrupt_after=[\"agent\"],\n       ...\n   )\n   ```\n   当执行流程跑到 agent 或 tools 时，会先/后给你一个“交互点”机会，你可以在**流式**执行中察觉到这个点，或者抛出异常提前终止等。但是它比较适合做“用户确认”或“调试介入”，而不是实时日志。\n\n2. **`checkpointer` / `store`**  \n   - `checkpointer` 主要用来将单个“线程”（单条对话）的状态进行保存、恢复，可以在多回合对话里保留上下文。\n   - `store` 提供了更跨线程或跨用户的持久化能力。  \n   通过把 `store` 绑定到 Graph，工具调用里还可以使用 `InjectedStore`，把数据写入或读取到 store 中（如相当于“全局数据库”）。\n\n3. **`response_format`**  \n   如果你想让最终输出符合某种 JSON Schema 或 Pydantic 验证，可以这样写：\n   ```python\n   from pydantic import BaseModel, Field\n\n   class TeslaPlan(BaseModel):\n       new_models: list[str] = Field(..., description=\"新车型列表\")\n       sales_target: int = Field(..., description=\"预计销量\")\n       technology_innovations: str\n       market_strategy: str\n\n   my_response_format = TeslaPlan\n\n   graph = create_react_agent(\n       model, \n       tools, \n       prompt=\"你是一个专业汽车分析师。\",\n       response_format=my_response_format\n   )\n   ```\n   当 ReAct 流程结束后，LangGraph 会调用一次 LLM 并要求它返回符合 `TeslaPlan` 的 JSON。最终的 `state[\"structured_response\"]` 就是一个 Python 字典或 Pydantic 实例。\n\n4. **`version=\"v1\" / \"v2\"`**  \n   - **v1**: 工具调用是“把当前 AIMessage 中的所有 tool_calls 一次性并行执行” → tools → 再回到 agent。  \n   - **v2**: 更细粒度地把每个 tool_call 拆开，每个都进入一个独立的 ToolNode 实例。如果一个 AIMessage 里有 3 个 tool_calls，就会做 3 次独立的“tools执行→回到agent”循环**（通过 Send API）**。这种方式可以在多工具协作里更灵活，也可以插入更多自定义逻辑，但要做好相应的结构化处理。\n\n---\n\n# 七、常见问题与答疑\n\n1. **Q: 我在旧版本使用 `graph.add_state_change_listener` 或 `callbacks`，现在为什么报错？**  \n   A: 因为新的 LangGraph 0.3 取消了这种回调API，推荐使用 `graph.stream(...)` 在每一步迭代中自行处理日志或监听逻辑。\n\n2. **Q: 如果不想每次都多轮循环，而只想 LLM 接受一次输入就结束，怎么做？**  \n   A: 你可以传递 `tools=[]`（空）到 `create_react_agent`，这样它就生成一个不支持工具调用的图；agent 只会输出一次，然后就结束。此时相当于纯LLM调用。\n\n3. 
**Q: 要怎么限制调用工具的次数？**  \n   A: 你可以在输入的 `state` 里设置 `remaining_steps`，或自定义 `AgentState` 包含 `remaining_steps=3` 一类初始值，每次 agent节点执行后，LangGraph 会自动减少1。用完就不会再允许工具调用了。\n\n4. **Q: ReAct 会在同一个消息里多次请求调用工具吗？**  \n   A: 是可能的。尤其是当 LLM 在回答中生成多个 tool_calls，就会全部执行。你可以在 `v1` 模式下并行运行它们，也可以在 `v2` 模式下逐个执行。\n\n5. **Q: structured response 里的提示是如何工作的？**  \n   A: 当 `response_format` 是 `(system_prompt, schema)` 这种 tuple 时，LangGraph 会在最后的 LLM 调用里给一个额外的 system_prompt，引导 LLM 返回符合 schema 的 JSON。这样可以做更严格的结构化要求。\n\n---\n\n# 八、总结\n\n- **LangGraph** 是一个以“图”来编排对话和工具调用的框架；  \n- **create_react_agent** 是“快捷构造 ReAct 风格图”的核心函数，一次性帮你搭建“agent(LLM) ↔ tools(工具节点) ↔ agent”循环；  \n- 执行时默认从 `agent` 开始，如果 `AIMessage` 包含 `tool_calls` 就调用 `tools` 并注入结果，直到不再有工具调用；  \n- 可以**流式**(`graph.stream(...)`) 或**一次性**(`graph.invoke(...)`)获取结果；  \n- 要想查看中间推理和调用日志，使用 `stream` 在每一步循环里记录；  \n- 可选地，你能通过 `interrupt_before` / `interrupt_after` 或 `checkpointer` / `store` 等更高级特性进一步定制执行流程或存储/恢复状态。\n\n这就是从**原理**到**源码**再到**执行流程**的完整解析。希望能帮助你在实际项目里更好地运用 `create_react_agent` 和 LangGraph！"
  },
  {
    "path": "instructions/01.supervisor_pattern.md",
    "content": "# Supervisor 模式：多智能体协作的核心实现\n\n## 1. 引言\n\n在人工智能领域，多智能体系统（Multi-Agent System）是一种将复杂任务分解为多个专业智能体协同完成的架构模式。本文将详细介绍我们在 Mentis 项目中实现的 Supervisor（监督者）模式，这是一种高效组织和协调多个智能体的方法。\n\n## 2. 多智能体系统的基本概念\n\n多智能体系统由多个具有不同专业能力的智能体组成，每个智能体负责特定的任务领域。在这种系统中，智能体之间需要有效地协作和通信，以完成复杂的任务。\n\n在我们的实现中，主要包含以下角色：\n\n- **Supervisor（监督者）**：负责任务分发、协调和结果整合的中央控制智能体\n- **Specialized Agents（专业智能体）**：具有特定领域专长的执行智能体\n\n## 3. Supervisor 模式的工作流程\n\n### 3.1 基本工作流程\n\nSupervisor 模式的工作流程如下：\n\n1. 用户向系统提交请求\n2. Supervisor 接收请求并进行任务分析\n3. Supervisor 决定调用哪个专业智能体处理任务\n4. 专业智能体执行任务并返回结果\n5. Supervisor 接收结果，可能进一步调用其他智能体\n6. Supervisor 整合所有结果并返回给用户\n\n### 3.2 控制权转移机制\n\nSupervisor 模式的核心是控制权的转移机制。在我们的实现中，这通过 `handoff` 工具实现：\n\n1. Supervisor 通过调用特定的 `handoff` 工具将控制权转移给目标智能体\n2. 目标智能体完成任务后，通过 `handoff_back_messages` 将控制权返回给 Supervisor\n3. 这种机制确保了在任何时刻只有一个智能体在处理任务，避免了冲突\n\n## 4. Supervisor 的核心实现\n\n### 4.1 核心代码分析\n\n在 `supervisor.py` 中，`create_supervisor` 函数是实现 Supervisor 模式的核心：\n\n```python\ndef create_supervisor(\n    agents: list[Pregel],\n    *,\n    model: LanguageModelLike,\n    tools: list[BaseTool | Callable] | None = None,\n    prompt: Prompt | None = None,\n    # ... 其他参数 ...\n) -> StateGraph:\n    # 检查智能体名称唯一性\n    agent_names = set()\n    for agent in agents:\n        if agent.name is None or agent.name == \"LangGraph\":\n            raise ValueError(\"Please specify a name when you create your agent...\")\n        if agent.name in agent_names:\n            raise ValueError(f\"Agent with name '{agent.name}' already exists...\")\n        agent_names.add(agent.name)\n    \n    # 为每个智能体创建 handoff 工具\n    handoff_tools = [create_handoff_tool(agent_name=agent.name) for agent in agents]\n    all_tools = (tools or []) + handoff_tools\n    \n    # 绑定工具到模型\n    model = model.bind_tools(all_tools)\n    \n    # 创建 supervisor 智能体\n    supervisor_agent = create_react_agent(\n        name=supervisor_name,\n        model=model,\n        tools=all_tools,\n        prompt=prompt,\n        # ... 
其他参数 ...\n    )\n    \n    # 构建状态图\n    builder = StateGraph(state_schema, config_schema=config_schema)\n    builder.add_node(supervisor_agent, destinations=tuple(agent_names) + (END,))\n    builder.add_edge(START, supervisor_agent.name)\n    \n    # 添加智能体节点和边\n    for agent in agents:\n        builder.add_node(\n            agent.name,\n            _make_call_agent(\n                agent,\n                output_mode,\n                add_handoff_back_messages,\n                supervisor_name,\n            ),\n        )\n        builder.add_edge(agent.name, supervisor_agent.name)\n    \n    return builder\n```\n\n### 4.2 智能体调用机制\n\n`_make_call_agent` 函数负责创建智能体调用的包装函数：\n\n```python\ndef _make_call_agent(\n    agent: Pregel,\n    output_mode: OutputMode,\n    add_handoff_back_messages: bool,\n    supervisor_name: str,\n) -> Callable[[dict], dict] | RunnableCallable:\n    # ... 参数验证 ...\n    \n    def _process_output(output: dict) -> dict:\n        messages = output[\"messages\"]\n        # 根据输出模式处理消息\n        if output_mode == \"full_history\":\n            pass\n        elif output_mode == \"last_message\":\n            messages = messages[-1:]\n        \n        # 添加控制权返回消息\n        if add_handoff_back_messages:\n            messages.extend(create_handoff_back_messages(agent.name, supervisor_name))\n        \n        return {\n            **output,\n            \"messages\": messages,\n        }\n    \n    def call_agent(state: dict) -> dict:\n        output = agent.invoke(state)\n        return _process_output(output)\n    \n    # ... 
异步版本 ...\n    \n    return RunnableCallable(call_agent, acall_agent)\n```\n\n### 4.3 设计亮点与最佳实践\n\nSupervisor 模式的实现包含了多个多智能体系统设计的黄金经验，以下是关键设计亮点：\n\n#### 4.3.1 自动控制权回传机制\n\n`_make_call_agent` 中的自动 handoff back 机制非常巧妙：\n\n```python\nif add_handoff_back_messages:\n    messages.extend(create_handoff_back_messages(agent.name, supervisor_name))\n```\n\n这种设计的优势在于：\n- **隐式交接**：专业智能体无需知道 supervisor 的存在\n- **自动转发**：智能体完成任务后，系统自动将结果打包并转交回 supervisor\n- **消息插入**：在消息历史中自动插入 AIMessage 和 ToolMessage，表明控制权已转移\n- **零侵入性**：对智能体代码没有任何侵入，实现了完全的关注点分离\n\n#### 4.3.2 智能的上下文管理策略\n\n`output_mode` 参数提供了对消息历史的精确控制：\n\n```python\nif output_mode == \"last_message\":\n    messages = messages[-1:]\n```\n\n这允许开发者灵活选择：\n- **全量历史模式**（`full_history`）：保留智能体输出的完整历史，提供完整上下文\n- **最后消息模式**（`last_message`）：仅保留最后一条消息，有效节省 token 消耗\n\n这种灵活的上下文压缩策略，在长对话或多轮智能体调用场景中尤为重要，可以有效防止上下文爆炸。\n\n#### 4.3.3 动态工具生成与绑定\n\n系统会自动为每个智能体创建对应的 handoff 工具：\n\n```python\nhandoff_tools = [create_handoff_tool(agent_name=agent.name) for agent in agents]\n```\n\n这些工具允许 supervisor 通过类似 `transfer_to_writer()` 或 `transfer_to_researcher()` 的函数调用来转移控制权，实现了：\n- **声明式调度**：调度逻辑由 LLM 决定，而非硬编码规则\n- **可解释性**：每次转移都有明确的工具调用，便于追踪和调试\n- **灵活性**：可以根据当前状态动态决定下一步调用哪个智能体\n\n#### 4.3.4 统一的 Runnable 接口封装\n\n每个智能体都被统一封装为 `RunnableCallable`：\n\n```python\nbuilder.add_node(agent.name, _make_call_agent(...))\n```\n\n这种封装提供了多种优势：\n- **统一接口**：所有智能体都遵循相同的调用接口\n- **状态管理**：状态由 LangGraph 自动管理，无需手动处理\n- **异步支持**：同时支持同步和异步调用，适应不同场景\n- **自动处理**：输入/输出状态转换自动完成，减少样板代码\n\n#### 4.3.5 灵活的配置选项\n\n系统支持多种配置选项，适应不同需求：\n- **多种提示格式**：支持字符串、SystemMessage 或可调用函数作为提示\n- **结构化输出**：支持 JSON schema、TypedDict 或 Pydantic 类作为输出格式\n- **状态模式**：可自定义状态结构，支持复杂的状态追踪和管理\n- **并行工具调用控制**：可以针对不同模型配置是否支持并行工具调用\n\n## 5. 
实践案例：笑话生成与研究专家\n\n在 `01_supervisor_test.py` 中，我们实现了一个包含两个专业智能体的系统：\n\n### 5.1 智能体创建\n\n我们使用了两种不同的方式创建智能体：\n\n#### 5.1.1 功能型 API（Functional API）\n\n笑话生成器使用功能型 API 创建：\n\n```python\n@task\ndef generate_joke(messages):\n    \"\"\"Generate a short joke (no tool calls).\"\"\"\n    system_message = {\n        \"role\": \"system\", \n        \"content\": \"You are a witty comedian. Write a short joke.\"\n    }\n    msg = model.invoke([system_message] + messages)\n    return msg\n\n@entrypoint()\ndef joke_agent(state):\n    joke = generate_joke(state['messages']).result()\n    messages = add_messages(state[\"messages\"], [joke])\n    return {\"messages\": messages}\n\njoke_agent.name = \"joke_agent\"\n```\n\n#### 5.1.2 图形 API（Graph API）\n\n研究专家使用图形 API 创建：\n\n```python\ndef web_search(query: str) -> str:\n    \"\"\"Search the web for information. (Mocked data here)\"\"\"\n    return (\n        \"Here are the headcounts for each of the FAANG companies in 2024:\\n\"\n        # ... 模拟数据 ...\n    )\n\nresearch_agent = create_react_agent(\n    model=model,\n    tools=[web_search],\n    name=\"research_expert\",\n    prompt=(\n        \"You are a world-class researcher. You have access to a 'web_search(query: str)' tool. \"\n        \"Do not do any complicated math, just provide factual info from the web_search if needed.\"\n    ),\n)\n```\n\n### 5.2 Supervisor 配置\n\n我们创建了一个 Supervisor 来协调这两个智能体：\n\n```python\nworkflow = create_supervisor(\n    [research_agent, joke_agent],\n    model=model,\n    prompt=(\n        \"You are the overall supervisor. You manage two specialized agents:\\n\"\n        \"1) joke_agent: for telling jokes.\\n\"\n        \"2) research_expert: for factual or data-related questions.\\n\\n\"\n        \"If the user wants a joke AND some research data in the same query, \"\n        \"you MUST call joke_agent first, get the joke, then call research_expert for the data. \"\n        \"After both calls, provide a final combined response. 
\"\n        \"Do not call more than one agent in a single LLM message; do it step by step.\"\n    ),\n)\n```\n\n### 5.3 执行流程\n\n当用户请求同时需要笑话和研究数据时，执行流程如下：\n\n1. Supervisor 接收用户请求\n2. Supervisor 分析请求，决定先调用 joke_agent\n3. joke_agent 生成笑话并返回结果\n4. Supervisor 接收笑话，然后调用 research_expert\n5. research_expert 查询数据并返回结果\n6. Supervisor 整合两个结果，生成最终回复\n\n## 6. 可视化与调试\n\n我们使用 LangGraph 的可视化功能生成了工作流图表，保存在 `examples/graphs/1_supervisor_test_01.png`，这有助于理解和调试多智能体系统的工作流程。\n\n## 7. 总结\n\nSupervisor 模式是一种高效组织多智能体系统的方法，它通过中央控制智能体协调专业智能体的工作，实现复杂任务的分解与协作。在我们的实现中，通过精心设计的 handoff 机制实现了智能体之间的控制权转移，确保系统的有序运行。\n\n这种模式的优势在于：\n\n1. **模块化**：每个智能体专注于特定领域，便于开发和维护\n2. **可扩展性**：可以方便地添加新的专业智能体\n3. **灵活性**：Supervisor 可以根据任务需求动态调用不同的智能体\n4. **结果整合**：Supervisor 负责整合各个智能体的结果，提供一致的用户体验\n5. **低耦合**：智能体之间通过消息传递交互，减少直接依赖\n6. **可追踪性**：每次控制权转移都有明确的工具调用记录，便于调试和监控\n7. **资源优化**：通过上下文管理策略，有效控制 token 消耗\n8. **开发便捷**：统一的接口和自动化的状态管理，减少样板代码\n\n通过本文的实践案例和深入分析，我们不仅展示了如何使用 LangGraph 和 LangChain 框架实现 Supervisor 模式，更揭示了背后的设计思想和最佳实践，为构建复杂的多智能体系统提供了宝贵参考。这些设计模式和技巧可以帮助开发者构建更加健壮、可维护和高效的智能体系统。"
  },
  {
    "path": "instructions/02.supervisor_pattern_agent.md",
    "content": "# Supervisor 模式：多智能体协作的核心实现 （Agent 封装模式）\n\n## 1. 引言\n\n在人工智能领域，多智能体系统（Multi-Agent System）是一种将复杂任务分解为多个专业智能体协同完成的架构模式。本文将详细介绍我们在 Mentis 项目中实现的 Supervisor（监督者）模式，这是一种高效组织和协调多个智能体的方法。\n\n## 2. 多智能体系统的基本概念\n\n多智能体系统由多个具有不同专业能力的智能体组成，每个智能体负责特定的任务领域。在这种系统中，智能体之间需要有效地协作和通信，以完成复杂的任务。\n\n在我们的实现中，主要包含以下角色：\n\n- **Supervisor（监督者）**：负责任务分发、协调和结果整合的中央控制智能体\n- **Specialized Agents（专业智能体）**：具有特定领域专长的执行智能体\n\n## 3. Supervisor 模式的工作流程\n\n### 3.1 基本工作流程\n\nSupervisor 模式的工作流程如下：\n\n1. 用户向系统提交请求\n2. Supervisor 接收请求并进行任务分析\n3. Supervisor 决定调用哪个专业智能体处理任务\n4. 专业智能体执行任务并返回结果\n5. Supervisor 接收结果，可能进一步调用其他智能体\n6. Supervisor 整合所有结果并返回给用户\n\n### 3.2 控制权转移机制\n\nSupervisor 模式的核心是控制权的转移机制。在我们的实现中，这通过 `handoff` 工具实现：\n\n1. Supervisor 通过调用特定的 `handoff` 工具将控制权转移给目标智能体\n2. 目标智能体完成任务后，通过 `handoff_back_messages` 将控制权返回给 Supervisor\n3. 这种机制确保了在任何时刻只有一个智能体在处理任务，避免了冲突\n\n## 4. 基础架构：BaseAgent 类\n\n在我们的重构中，我们引入了 `BaseAgent` 基类，作为所有智能体的基础。这种设计使得不同类型的智能体可以共享通用功能，同时保持各自的特性。\n\n### 4.1 BaseAgent 核心实现\n\n```python\nclass BaseAgent:\n    _PROMPT_TEMPLATE = \"\"\"\n    You have access to the following tools:\n    {tools}\n    Use the above tools to answer the question at the end.\n    \"\"\"\n    def __init__(\n        self,\n        name: str,\n        model: Union[BaseChatModel, LanguageModelLike],\n        tools: Optional[List[Union[BaseTool, Callable]]] = None,\n        prompt: Optional[Union[str, SystemMessage, Callable]] = None,\n        checkpointer: Optional[Checkpointer] = None,\n        max_context_messages: Optional[int] = None,  # 限制最近消息数量\n        max_context_tokens: Optional[int] = None,    # 限制总估计token数\n        model_name: Optional[str] = \"gpt-4o-mini\", # 用于未来token估计改进\n    ):\n        # 初始化基本属性\n        self.name = name\n        self.model = model\n        self.tools = tools or []\n        self.prompt = prompt\n        self.checkpointer = checkpointer\n        self.max_context_messages = max_context_messages\n        self.max_context_tokens = max_context_tokens\n        
self.model_name = model_name\n        self._workflow = None\n        self._agent = None\n```\n\n### 4.2 上下文管理机制\n\n`BaseAgent` 提供了智能的上下文管理机制，可以根据配置自动截断消息历史：\n\n```python\ndef _inject_context(self, state: Dict[str, Any]) -> Dict[str, Any]:\n    \"\"\"注入记忆并根据配置截断消息。\"\"\"\n    memory = state.get(\"memory\") or []\n    messages = state.get(\"messages\", [])\n    messages = self._truncate_messages(messages)\n    memory_messages = [SystemMessage(content=chunk) for chunk in memory]\n    state[\"messages\"] = memory_messages + messages\n    return state\n```\n\n### 4.3 通用方法接口\n\n`BaseAgent` 定义了所有智能体共享的核心方法接口：\n\n```python\ndef build(self) -> StateGraph:\n    \"\"\"构建工作流。\"\"\"\n    \ndef compile(self) -> CompiledStateGraph:\n    \"\"\"编译工作流。\"\"\"\n    \ndef invoke(self, state: Dict[str, Any]) -> Dict[str, Any]:\n    \"\"\"同步调用工作流。\"\"\"\n    \nasync def ainvoke(self, state: Dict[str, Any]) -> Dict[str, Any]:\n    \"\"\"异步调用工作流。\"\"\"\n```\n\n## 5. ReactAgent 类实现\n\n`ReactAgent` 是我们实现的基于 ReAct（Reasoning and Acting）模式的智能体，它继承自 `BaseAgent`，专注于推理和工具调用。\n\n### 5.1 ReactAgent 类设计\n\n```python\nclass ReactAgent(BaseAgent):\n    \"\"\"ReAct Agent class for reasoning and acting with tools.\n    \n    This class provides a high-level interface for creating a ReAct agent workflow\n    that can perform multi-step reasoning and tool calling.\n    \"\"\"\n    \n    def __init__(\n        self,\n        model: LanguageModelLike,\n        tools: Optional[List[Union[BaseTool, Callable]]] = None,\n        prompt: Optional[str] = None,\n        response_format: Optional[\n            Union[StructuredResponseSchema, tuple[str, StructuredResponseSchema]]\n        ] = None,\n        state_schema: StateSchemaType = AgentState,\n        config_schema: Type[Any] = None,\n        checkpointer: Optional[Checkpointer] = None,\n        store: Optional[BaseStore] = None,\n        interrupt_before: Optional[List[str]] = None,\n        interrupt_after: Optional[List[str]] = None,\n        debug: bool = 
False,\n        version: Literal[\"v1\", \"v2\"] = \"v1\",\n        name: str = \"react_agent\",\n        max_context_messages: Optional[int] = None,\n        max_context_tokens: Optional[int] = None,\n        model_name: Optional[str] = \"gpt-4o-mini\",\n    ):\n        # 调用父类初始化\n        super().__init__(\n            name=name,\n            model=model,\n            tools=tools or [],\n            prompt=prompt,\n            checkpointer=checkpointer,\n            max_context_messages=max_context_messages,\n            max_context_tokens=max_context_tokens,\n            model_name=model_name\n        )\n        \n        # 初始化ReactAgent特有属性\n        self.response_format = response_format\n        self.state_schema = state_schema\n        self.config_schema = config_schema\n        self.store = store\n        self.interrupt_before = interrupt_before\n        self.interrupt_after = interrupt_after\n        self.debug = debug\n        self.version = version\n        self._agent = None\n```\n\n### 5.2 核心方法实现\n\n#### 5.2.1 compile 方法\n\n`compile` 方法负责编译 ReactAgent 工作流：\n\n```python\ndef compile(self) -> CompiledGraph:\n    \"\"\"构建 ReAct agent 工作流。\n    \n    Returns:\n        编译后的 CompiledGraph\n    \"\"\"\n    # 如果_agent已经存在，直接返回，避免重复构建\n    if self._agent is not None:\n        return self._agent\n        \n    _react_agent = create_react_agent(\n        model=self.model,\n        tools=self.tools,\n        prompt=self.prompt,\n        response_format=self.response_format,\n        state_schema=self.state_schema,\n        config_schema=self.config_schema,\n        checkpointer=self.checkpointer,\n        store=self.store,\n        interrupt_before=self.interrupt_before,\n        interrupt_after=self.interrupt_after,\n        debug=self.debug,\n        version=self.version,\n        name=self.name,\n    )\n    \n    self._agent = CreateReactAgentWrapper(_react_agent, \n                                          name=self.name,\n                                        
before_invoke=self.invoke,\n                                          before_ainvoke=self.ainvoke)\n    return self._agent\n```\n\n#### 5.2.2 invoke 和 ainvoke 方法\n\n`invoke` 和 `ainvoke` 方法负责调用 ReactAgent 处理用户请求，并提供调试信息：\n\n```python\ndef invoke(self, state: Dict[str, Any]) -> Dict[str, Any]:\n    \"\"\"同步调用入口 (真正的 Agent 执行逻辑).\"\"\"\n    # 打印调试信息\n    messages = state.get(\"messages\", [])\n    if messages:\n        for i, msg in enumerate(messages, 1):\n            type_str = type(msg).__name__\n            print(f\"第 {i} 条消息 - {type_str} (Name: {msg.name}):\")\n            msg.pretty_print()\n\n    # 上下文注入\n    state = self._inject_context(state)\n    return state\n\nasync def ainvoke(self, state: Dict[str, Any]) -> Dict[str, Any]:\n    \"\"\"异步调用入口.\"\"\"\n    # 上下文注入（_inject_context 是同步方法，直接调用即可）\n    state = self._inject_context(state)\n    return state\n```\n\n## 6. SupervisorAgent 类实现\n\n`SupervisorAgent` 类继承自 `BaseAgent`，专注于协调多个智能体的工作。在重构后，它增加了规划功能，可以更有效地管理复杂任务。\n\n### 6.1 SupervisorAgent 类设计\n\n```python\nclass SupervisorAgent(BaseAgent):\n    \"\"\"Supervisor class for managing multiple agents with planning capabilities.\n    \n    This class provides a high-level interface for creating a supervisor workflow\n    that can manage and coordinate multiple agents. 
It also includes planning capabilities\n    to create and manage a plan for complex tasks using a state-driven approach.\n    \n    The planning functionality is implemented using PlanningStateHandler and PlanningTool,\n    which provide a more structured and flexible way to manage tasks compared to the\n    previous TodolistTool approach.\n    \"\"\"\n    \n    def __init__(\n        self,\n        agents: List[BaseAgent],\n        model: LanguageModelLike,\n        tools: Optional[List[Union[BaseTool, Callable]]] = None,\n        prompt: Optional[str] = None,\n        state_schema: StateSchemaType = AgentState,\n        supervisor_name: str = \"supervisor\",\n        checkpointer: Optional[Checkpointer] = None,\n        output_mode: str = \"last_message\", # * full_history or last_message *\n        enable_planning: bool = True, # * True or False *\n    ):\n        # 设置规划相关属性\n        self._enable_planning = enable_planning\n        \n        # 如果启用规划功能，设置状态模式为PlanningAgentState\n        if self._enable_planning and state_schema == AgentState:\n            state_schema = PlanningAgentState\n            \n        # 存储特定于智能体的属性\n        self.agents = agents\n        self.output_mode = output_mode\n        self.supervisor_name = supervisor_name\n        self.state_schema = state_schema\n        self.checkpointer = checkpointer\n        self.tools = tools or []\n        self._workflow = None\n        self._agent = None\n            \n        # 生成基础提示词\n        _final_prompt = self._PLANNING_PROMPT_TEMPLATE + \"\\n\\n\" + self._PLANNING_TOOL_TEMPLATE if self._enable_planning else self._PROMPT_TEMPLATE\n        \n        # 如果启用规划功能，添加规划工具\n        if self._enable_planning:\n            tools = tools or []\n            tools.append(SimplePlanningTool())\n        \n        # 初始化BaseAgent父类\n        super().__init__(\n            name=supervisor_name,\n            model=model,\n            tools=tools,\n            checkpointer=checkpointer,\n            
prompt=_final_prompt,\n        )\n```\n\n### 6.2 核心方法实现\n\n#### 6.2.1 build 方法\n\n`build` 方法负责构建 Supervisor 工作流：\n\n```python\ndef build(self) -> StateGraph:\n    \"\"\"构建 supervisor 工作流。\n    \n    Returns:\n        构建的 StateGraph\n    \"\"\"\n    \n    if self._workflow is not None:\n        return self._workflow\n        \n    self._workflow = create_supervisor(\n        agents=self.agents,\n        model=self.model,\n        tools=self.tools,\n        prompt=self.prompt,\n        state_schema=self.state_schema,\n        supervisor_name=self.supervisor_name,\n        output_mode=self.output_mode,\n    )\n    \n    return self._workflow\n```\n\n## 7. create_supervisor 函数实现\n\n`create_supervisor` 函数是 SupervisorAgent 的核心依赖，它负责创建多智能体协作的工作流。\n\n```python\ndef create_supervisor(\n    agents: list[Pregel],\n    *,\n    model: LanguageModelLike,\n    tools: list[BaseTool | Callable] | None = None,\n    prompt: Prompt | None = None,\n    response_format: Optional[\n        Union[StructuredResponseSchema, tuple[str, StructuredResponseSchema]]\n    ] = None,\n    state_schema: StateSchemaType = AgentState,\n    config_schema: Type[Any] | None = None,\n    output_mode: OutputMode = \"last_message\",\n    add_handoff_back_messages: bool = True,\n    supervisor_name: str = \"supervisor\",\n    include_agent_name: AgentNameMode | None = None,\n) -> StateGraph:\n    # 检查智能体名称唯一性\n    agent_names = set()\n    for agent in agents:\n        if agent.name is None or agent.name == \"LangGraph\":\n            raise ValueError(\n                \"Please specify a name when you create your agent...\"\n            )\n\n        if agent.name in agent_names:\n            raise ValueError(\n                f\"Agent with name '{agent.name}' already exists. 
Agent names must be unique.\"\n            )\n\n        agent_names.add(agent.name)\n\n    # 为每个智能体创建 handoff 工具\n    handoff_tools = [create_handoff_tool(agent_name=agent.name) for agent in agents]\n    all_tools = (tools or []) + handoff_tools\n\n    # 绑定工具到模型\n    if _supports_disable_parallel_tool_calls(model):\n        model = model.bind_tools(all_tools, parallel_tool_calls=False)\n    else:\n        model = model.bind_tools(all_tools)\n\n    # 处理智能体名称显示方式\n    if include_agent_name:\n        model = with_agent_name(model, include_agent_name)\n                \n    # 创建 supervisor 智能体\n    _react_agent = ReactAgent(\n        name=supervisor_name,\n        model=model,\n        tools=all_tools,\n        prompt=prompt,\n        state_schema=state_schema,\n        response_format=response_format,\n        debug=False,\n    )\n    supervisor_agent = _react_agent.compile()\n    \n    # 构建状态图\n    builder = StateGraph(state_schema, config_schema=config_schema)\n    builder.add_node(supervisor_agent, destinations=tuple(agent_names) + (END,))\n    builder.add_edge(START, supervisor_agent.name)\n    \n    # 添加智能体节点和边\n    for agent in agents:\n        # 如果智能体是 \"ReactAgent\" 或类似类型\n        if hasattr(agent, \"get_agent\") and callable(agent.get_agent):\n            agent = agent.get_agent()  # 获取编译后的子图\n            \n        builder.add_node(\n            agent.name,\n            _make_call_agent(\n                agent,\n                output_mode,\n                add_handoff_back_messages,\n                supervisor_name,\n            ),\n        )\n        builder.add_edge(agent.name, supervisor_agent.name)\n\n    return builder\n```\n\n## 8. 实践案例\n\n### 8.1 使用 create_supervisor 函数（原始方式）\n\n在 `01_supervisor_test.py` 中，我们使用原始的 `create_supervisor` 函数实现了一个包含两个专业智能体的系统：\n\n```python\nworkflow = create_supervisor(\n    [research_agent, joke_agent],\n    model=model,\n    prompt=(\n        \"You are the overall supervisor. 
You manage two specialized agents:\\n\"\n        \"1) joke_agent: for telling jokes.\\n\"\n        \"2) research_expert: for factual or data-related questions.\\n\\n\"\n        \"If the user wants a joke AND some research data in the same query, \"\n        \"you MUST call joke_agent first, get the joke, then call research_expert for the data. \"\n        \"After both calls, provide a final combined response. \"\n        \"Do not call more than one agent in a single LLM message; do it step by step.\"\n    ),\n)\n\n# 编译得到一个可调用的\"App\"\napp = workflow.compile()\n```\n\n### 8.2 使用 SupervisorAgent 类（封装方式）\n\n在 `02_supervisor_agent_test.py` 中，我们使用封装的 `SupervisorAgent` 类实现了相同的功能，但增加了规划能力：\n\n```python\n# 创建 SupervisorAgent 实例\nsupervisor = SupervisorAgent(\n    agents=[research_agent, joke_agent],\n    model=model,\n    prompt=(\n        \"You are the overall supervisor. You manage two specialized agents:\\n\"\n        \"1) joke_agent: for telling jokes.\\n\"\n        \"2) research_expert: for factual or data-related questions.\\n\\n\"\n        \"If the user wants a joke AND some research data in the same query, \"\n        \"you MUST call joke_agent first, get the joke, then call research_expert for the data. \"\n        \"After both calls, provide a final combined response. \"\n        \"Do not call more than one agent in a single LLM message; do it step by step.\"\n    ),\n    enable_planning=True,  # 启用规划功能\n)\n\n# 编译得到一个可调用的\"App\"\napp = supervisor.compile()\n```\n\n### 8.3 两种方式的比较\n\n两种实现方式在基本功能上相似，但使用 `SupervisorAgent` 类的方式有以下优势：\n\n1. **更简洁的 API**：封装了复杂的参数和配置，提供了更简洁的接口\n2. **更好的封装性**：将相关功能封装在一个类中，便于维护和扩展\n3. **更好的可读性**：代码结构更清晰，意图更明确\n4. **更好的可重用性**：可以方便地在不同项目中复用\n5. **规划功能**：内置了任务规划能力，可以更有效地管理复杂任务\n6. **上下文管理**：通过 BaseAgent 继承了智能的上下文管理机制\n\n## 9. 总结\n\n在重构后的实现中，我们引入了以下关键改进：\n\n1. **BaseAgent 基类**：提供了所有智能体共享的基础功能，如上下文管理、工作流构建等\n2. **ReactAgent 重构**：现在继承自 BaseAgent，使用 CreateReactAgentWrapper 增强功能\n3. **SupervisorAgent 重构**：现在继承自 BaseAgent，增加了规划功能\n4. 
**统一的接口**：所有智能体类型现在共享相同的核心方法接口\n5. **智能上下文管理**：可以根据配置自动截断消息历史，优化性能\n\nSupervisor 模式是一种高效组织多智能体系统的方法，它通过中央控制智能体协调专业智能体的工作，实现复杂任务的分解与协作。在我们的重构实现中，通过引入 BaseAgent 基类和增强 SupervisorAgent 的规划能力，使得多智能体系统更加灵活、高效，同时保持了良好的可维护性和可扩展性。\n\n这种模式特别适合以下场景：\n- 需要多种专业知识协作的复杂任务\n- 需要动态决策调用不同专家的场景\n- 需要结果整合和质量控制的任务流程\n- 需要有计划地执行多步骤任务的场景\n\n未来，我们将继续优化 Supervisor 模式的实现，增强其灵活性和可扩展性，并探索更多的应用场景。"
  },
  {
    "path": "instructions/03.tavily_search_integration.md",
    "content": "# Tavily搜索工具集成：为多智能体系统提供实时信息能力\n\n## 1. 引言\n\n在多智能体系统中，获取实时、准确的外部信息是提升系统实用性的关键因素。本文将详细介绍我们在 Mentis 项目中集成 Tavily 搜索工具的实现，这使得我们的智能体系统能够获取最新的网络信息，大幅提升了系统的实用价值。\n\n## 2. Tavily 搜索工具概述\n\nTavily 是一个专为 AI 应用设计的搜索 API，它提供了高质量、结构化的网络搜索结果。在我们的实现中，Tavily 工具具有以下特点：\n\n- **实时性**：能够获取最新的网络信息\n- **结构化输出**：返回格式化的搜索结果，便于智能体处理\n- **可配置性**：支持多种参数配置，如搜索深度、结果数量等\n- **多媒体支持**：可选择性地包含图片等多媒体内容\n\n## 3. Tavily 工具的实现\n\n### 3.1 核心代码分析\n\n在 `tavily_tools.py` 中，我们实现了 `TavilySearchResults` 类，它继承自 LangChain 的 `BaseTool`：\n\n```python\nclass TavilySearchResults(BaseTool):\n    \"\"\"Tool that queries the Tavily Search API and gets back json.\"\"\"\n    \n    name: str = \"tavily_search_results_json\"\n    description: str = (\n        \"A search engine optimized for comprehensive, accurate, and trusted results. \"\n        \"Useful for when you need to answer questions about current events. \"\n        \"Input should be a search query.\"\n    )\n    args_schema: Type[BaseModel] = TavilyInput\n    \n    max_results: int = 5\n    \"\"\"Max search results to return, default is 5\"\"\"\n    search_depth: str = \"advanced\"\n    \"\"\"The depth of the search. 
It can be \"basic\" or \"advanced\"\"\"\"\n    include_domains: List[str] = []\n    \"\"\"A list of domains to specifically include in the search results.\"\"\"\n    exclude_domains: List[str] = []\n    \"\"\"A list of domains to specifically exclude from the search results.\"\"\"\n    include_answer: bool = False\n    \"\"\"Include a short answer to original query in the search results.\"\"\"\n    include_raw_content: bool = False\n    \"\"\"Include cleaned and parsed HTML of each site search results.\"\"\"\n    include_images: bool = False\n    \"\"\"Include a list of query related images in the response.\"\"\"\n    \n    api_wrapper: TavilySearchAPIWrapper = Field(default_factory=TavilySearchAPIWrapper)\n    response_format: Literal[\"content_and_artifact\"] = \"content_and_artifact\"\n```\n\n### 3.2 搜索执行方法\n\n`TavilySearchResults` 类提供了同步和异步两种搜索方法：\n\n```python\ndef _run(\n    self,\n    query: str,\n    run_manager: Optional[CallbackManagerForToolRun] = None,\n) -> Tuple[Union[List[Dict[str, str]], str], Dict]:\n    \"\"\"Use the tool.\"\"\"\n    try:\n        raw_results = self.api_wrapper.raw_results(\n            query,\n            self.max_results,\n            self.search_depth,\n            self.include_domains,\n            self.exclude_domains,\n            self.include_answer,\n            self.include_raw_content,\n            self.include_images,\n        )\n    except Exception as e:\n        return repr(e), {}\n    return self.api_wrapper.clean_results(raw_results[\"results\"]), raw_results\n\nasync def _arun(\n    self,\n    query: str,\n    run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n) -> Tuple[Union[List[Dict[str, str]], str], Dict]:\n    \"\"\"Use the tool asynchronously.\"\"\"\n    # 异步实现...\n```\n\n## 4. 
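在多智能体系统中集成 Tavily 工具

在接入智能体之前，可以先用一段纯 Python 代码直观理解 `_run` 返回的 `(content, artifact)` 二元组契约。下面是一个不依赖 Tavily API 的最小示意，其中 `fake_raw_results` 等名称与数据结构均为演示用假设，并非真实 API 的返回格式：

```python
from typing import Dict, List, Tuple

def clean_results(results: List[Dict]) -> List[Dict[str, str]]:
    # 模拟 api_wrapper.clean_results：只保留 url 与 content 两个字段
    return [{'url': r['url'], 'content': r['content']} for r in results]

def run_search(raw_results: Dict) -> Tuple[List[Dict[str, str]], Dict]:
    # 模拟 TavilySearchResults._run 的返回契约：
    # 第一个元素是给模型消费的精简内容，第二个元素是完整的原始结果（artifact）
    return clean_results(raw_results['results']), raw_results

# 演示用的假设数据
fake_raw_results = {
    'query': 'LangGraph 是什么',
    'results': [
        {'url': 'https://example.com/a', 'content': '摘要 A', 'score': 0.9},
        {'url': 'https://example.com/b', 'content': '摘要 B', 'score': 0.7},
    ],
}

content, artifact = run_search(fake_raw_results)
print(content)            # 精简后的结果列表
print(artifact['query'])  # 完整原始结果仍可追溯
```

这与 `response_format` 取值 `content_and_artifact` 的语义一致：模型只消费精简内容，完整原始结果保留在 artifact 中供后续追溯。接下来按步骤演示如何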
在多智能体系统中集成 Tavily 工具\n\n### 4.1 创建研究型智能体\n\n在我们的多智能体系统中，我们创建了一个专门的研究型智能体，它使用 Tavily 搜索工具获取实时信息：\n\n```python\n# 创建Tavily搜索工具\ntavily_search = TavilySearchResults(\n    max_results=3,\n    include_answer=True,\n    include_raw_content=False,\n    include_images=False,\n    search_depth=\"advanced\"\n)\n\nresearch_agent = create_react_agent(\n    model=model,\n    tools=[tavily_search],\n    name=\"research_expert\",\n    prompt=(\n        \"You are a world-class researcher. You have access to the 'tavily_search_results_json' tool \"\n        \"which can search the web for real-time information. \"\n        \"When asked a question, use this tool to find accurate and up-to-date information. \"\n        \"Summarize the search results in a clear and concise manner. \"\n        \"Always cite your sources by including the URLs from the search results.\"\n    ),\n)\n```\n\n### 4.2 与 Supervisor 集成\n\n研究型智能体作为专业智能体，被集成到 Supervisor 模式中：\n\n```python\n# 创建 SupervisorAgent 实例\nsupervisor = SupervisorAgent(\n    agents=[research_agent, joke_agent],\n    model=model,\n    prompt=(\n        \"You are the overall supervisor. You manage two specialized agents:\\n\"\n        \"1) joke_agent: for telling jokes.\\n\"\n        \"2) research_expert: for factual or data-related questions using real-time web search.\\n\\n\"\n        \"If the user wants a joke, call joke_agent.\\n\"\n        \"If the user wants factual information or research data, call research_expert.\\n\"\n        \"If the user wants a joke AND some research data in the same query, \"\n        \"you MUST call joke_agent first, get the joke, then call research_expert for the data. \"\n        \"After both calls, provide a final combined response. \"\n        \"Do not call more than one agent in a single LLM message; do it step by step.\"\n    ),\n)\n```\n\n## 5. 
实践案例\n\n### 5.1 只询问研究数据\n\n当用户只询问研究数据时，Supervisor 会直接调用研究型智能体：\n\n```python\n# 示例2：只询问研究数据\nresult2 = app.invoke({\"messages\": [{\"role\": \"user\", \"content\": \"谁是现任美国总统？\"}]})\n```\n\n在这种情况下，研究型智能体会使用 Tavily 搜索工具获取最新信息，并返回结构化的回答，包括引用的来源。\n\n### 5.2 混合查询\n\n当用户同时需要笑话和研究数据时，Supervisor 会先调用笑话智能体，然后调用研究型智能体：\n\n```python\n# 示例3：同时询问笑话和研究数据\nresult3 = app.invoke({\"messages\": [{\"role\": \"user\", \"content\": \"讲个关于人工智能的笑话，然后告诉我什么是大型语言模型\"}]})\n```\n\n这种情况下，Supervisor 会协调两个智能体的工作，并整合它们的结果。\n\n## 6. 可视化与调试\n\n我们使用 LangGraph 的可视化功能生成了工作流图表，保存在 `examples/graphs/03_tavily_tools_test.png`。这个图表展示了包含 Tavily 搜索工具的多智能体系统的工作流程，有助于理解和调试系统。\n\n## 7. 总结\n\nTavily 搜索工具的集成为我们的多智能体系统带来了以下优势：\n\n1. **实时信息获取**：系统能够获取最新的网络信息，不再局限于模型训练数据的时间范围\n2. **信息准确性提升**：通过引用可靠的网络来源，提高了系统回答的准确性\n3. **功能扩展**：使系统能够回答关于最新事件、数据和信息的问题\n4. **灵活配置**：可以根据需要调整搜索参数，优化搜索结果\n\n通过 Tavily 搜索工具的集成，我们的多智能体系统从一个封闭的知识系统转变为一个能够获取实时信息的开放系统，大大提升了系统的实用价值和应用范围。\n\n未来，我们计划进一步优化搜索工具的使用策略，提高搜索效率和结果质量，并探索更多外部工具的集成，使系统能够处理更复杂的任务。"
  },
  {
    "path": "instructions/04.react_agent.md",
    "content": "# ReactAgent：基于ReAct方法论的多步推理与工具调用框架\n\n## 1. 引言\n\nReactAgent是一个基于ReAct方法论的智能体框架，它能够通过多步推理和工具调用来解决复杂问题。本文将详细介绍ReactAgent的核心概念、工作原理、实现方式以及在实际应用中的使用方法。\n\n## 2. ReactAgent的核心概念\n\n### 2.1 什么是ReAct方法论\n\nReAct（Reasoning + Acting）是一种结合推理和行动的AI问题解决方法论，它包含两个核心步骤：\n\n1. **推理（Reasoning）**：让语言模型进行思考，分析问题，并决定下一步行动。\n2. **行动（Acting）**：执行具体的工具调用，获取外部信息或执行特定操作。\n\n这两个步骤可以多次循环往复，直到问题被解决。ReAct方法论特别适合处理需要多步骤、多工具协作的复杂问题。\n\n### 2.2 ReactAgent与LangGraph的关系\n\nReactAgent是基于LangGraph框架实现的，它利用LangGraph的图结构来编排推理和行动的流程。在LangGraph中，ReactAgent被表示为一个包含多个节点和边的有向图：\n\n- **节点（Node）**：包括Agent节点（负责推理）和Tools节点（负责执行工具调用）\n- **边（Edge）**：定义节点之间的转换条件，例如当Agent生成工具调用时，流程转向Tools节点\n\n## 3. ReactAgent的实现\n\n### 3.1 ReactAgent类的设计\n\n在我们的实现中，ReactAgent类继承自LangGraph的Pregel类，提供了一个高级接口来创建和管理ReAct工作流：\n\n```python\nclass ReactAgent(Pregel):\n    \"\"\"ReAct Agent class for reasoning and acting with tools.\n    \n    This class provides a high-level interface for creating a ReAct agent workflow\n    that can perform multi-step reasoning and tool calling.\n    \"\"\"\n    \n    def __init__(\n        self,\n        model: LanguageModelLike,\n        tools: Optional[List[Union[BaseTool, Callable]]] = None,\n        prompt: Optional[str] = None,\n        response_format: Optional[\n            Union[StructuredResponseSchema, tuple[str, StructuredResponseSchema]]\n        ] = None,\n        state_schema: StateSchemaType = AgentState,\n        config_schema: Type[Any] = None,\n        interrupt_before: Optional[List[str]] = None,\n        interrupt_after: Optional[List[str]] = None,\n        debug: bool = False,\n        version: Literal[\"v1\", \"v2\"] = \"v1\",\n        name: str = \"react_agent\",\n    ):\n        # 初始化代码...\n```\n\n### 3.2 核心方法\n\nReactAgent类提供了以下核心方法：\n\n1. **build()**: 构建ReAct工作流图\n2. **compile()**: 编译工作流为可执行应用\n3. **invoke()**: 同步执行ReAct工作流\n4. **ainvoke()**: 异步执行ReAct工作流\n5. **stream()**: 流式执行，可以获取中间状态\n6. 
**get_graph()**: 获取底层图结构，用于可视化或调试\n\n### 3.3 与create_react_agent的关系\n\nReactAgent类内部使用了LangGraph提供的`create_react_agent`函数来构建工作流图。这个函数自动处理了：\n\n- 创建Agent节点（用于调用语言模型）\n- 创建Tools节点（用于执行工具调用）\n- 在节点之间建立连接\n- 处理状态管理和流程控制\n\n## 4. 使用ReactAgent解决复杂问题\n\n### 4.1 基本使用流程\n\n使用ReactAgent的基本流程如下：\n\n1. **初始化ReactAgent**：提供语言模型和工具\n2. **编译工作流**：调用compile()方法\n3. **准备初始状态**：通常包含用户的问题\n4. **执行或流式执行**：使用invoke()或stream()方法\n5. **处理结果**：分析最终状态或中间状态\n\n### 4.2 集成Tavily搜索工具\n\n在实际应用中，我们经常将ReactAgent与Tavily搜索工具集成，使其能够获取实时网络信息：\n\n```python\n# 创建Tavily搜索工具\ntavily_search = TavilySearchResults(\n    max_results=3,\n    include_answer=True,\n    include_raw_content=True,\n    include_images=False,\n    search_depth=\"advanced\"\n)\n\n# 创建ReactAgent实例\nreact_agent = ReactAgent(\n    model=model,\n    tools=[tavily_search],\n    prompt=(\n        \"你是一位专业的研究分析师，擅长分析复杂问题并提供深入见解。\\n\"\n        \"当面对复杂问题时，请遵循以下REACT方法论：\\n\"\n        \"1. 分解问题：将复杂问题分解为更小的子问题\\n\"\n        \"2. 制定计划：确定需要搜索哪些信息，以及搜索的顺序\\n\"\n        \"3. 执行搜索：使用tavily_search_results_json工具执行搜索\\n\"\n        \"4. 分析结果：分析搜索结果，确定是否需要进一步搜索\\n\"\n        \"5. 综合信息：将所有搜索结果综合成一个连贯的回答\\n\"\n    ),\n)\n\n# 编译工作流\nagent = react_agent.compile()\n```\n\n### 4.3 处理用户输入\n\n以下是处理用户输入的示例代码：\n\n```python\n# 准备初始状态\ninitial_state = {\n    \"messages\": [HumanMessage(content=user_input)]\n}\n\n# 流式执行并获取中间状态\nfor partial_state in react_agent.stream(initial_state, stream_mode=\"values\"):\n    # 处理中间状态\n    messages = partial_state.get(\"messages\", [])\n    if messages:\n        latest_message = messages[-1]\n        # 记录或显示最新消息\n        log_agent_actions({\"messages\": [latest_message]})\n\n# 处理最终结果\nfinal_state = partial_state  # 最后一个状态就是最终状态\n```\n\n## 5. 
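ReactAgent的优势与应用场景

在总结优势之前，先用一段不依赖任何 LLM 或网络服务的纯 Python 代码，直观展示「推理 + 行动」交替循环的骨架。其中 `scripted_model`、`fake_search` 等名称均为演示用假设，模型的决策用预设脚本代替：

```python
def fake_search(query):
    # 模拟搜索工具
    return f'关于 {query} 的搜索结果'

TOOLS = {'fake_search': fake_search}

def scripted_model(messages):
    # 模拟语言模型：尚无工具观察时先决定调用工具（Acting），
    # 拿到观察后给出最终回答（Reasoning 收尾）
    if not any(m['role'] == 'tool' for m in messages):
        return {'tool': 'fake_search', 'args': 'LangGraph'}
    observation = [m for m in messages if m['role'] == 'tool'][-1]['content']
    return {'answer': f'根据观察（{observation}）得出的结论'}

def react_loop(question, max_steps=5):
    messages = [{'role': 'user', 'content': question}]
    for _ in range(max_steps):
        decision = scripted_model(messages)
        if 'answer' in decision:
            return decision['answer']
        # 执行工具调用，并把观察结果追加到消息历史
        result = TOOLS[decision['tool']](decision['args'])
        messages.append({'role': 'tool', 'content': result})
    return '超出最大步数'

print(react_loop('什么是 LangGraph？'))
```

这个骨架与 `create_react_agent` 构建的 Agent 节点 / Tools 节点循环在结构上是对应的：Agent 节点产生工具调用则流程转向 Tools 节点，否则输出最终回答。基于这个直观认识，下面总结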
ReactAgent的优势与应用场景\n\n### 5.1 优势\n\n- **多步推理**：能够分解复杂问题，逐步解决\n- **工具调用**：可以集成各种外部工具，扩展能力边界\n- **状态管理**：自动管理对话状态和中间结果\n- **可视化**：支持工作流可视化，便于调试和理解\n- **流式执行**：可以获取中间状态，实现更好的用户体验\n\n### 5.2 应用场景\n\n- **研究助手**：帮助用户研究复杂问题，获取最新信息\n- **数据分析**：分步骤处理数据分析任务\n- **决策支持**：通过多步推理和信息收集辅助决策\n- **教育辅导**：分解复杂概念，逐步引导学习\n\n## 6. 实际案例：研究特斯拉2025年发展预期\n\n以下是使用ReactAgent研究特斯拉2025年发展预期的实际案例：\n\n1. **问题分解**：将问题分解为新车型计划、销量目标、技术创新和市场扩张战略\n2. **执行搜索**：针对每个子问题执行Tavily搜索\n3. **分析结果**：分析每个搜索的结果，提取关键信息\n4. **综合信息**：将所有信息整合为一个全面的分析报告\n\n通过这种方式，ReactAgent能够提供比单次查询更全面、更深入的分析结果。\n\n## 7. 总结\n\nReactAgent是一个强大的基于ReAct方法论的智能体框架，它通过多步推理和工具调用来解决复杂问题。在实际应用中，ReactAgent特别适合需要分步骤思考、收集信息和综合分析的任务。通过与Tavily等工具的集成，ReactAgent能够获取实时信息，大幅提升其实用价值。\n\n在未来的开发中，我们将继续优化ReactAgent的性能，增强其推理能力，并集成更多实用工具，使其能够应对更广泛的应用场景。"
  },
  {
    "path": "instructions/05.react_agent_user_input.md",
    "content": "# ReactAgent与用户交互：构建交互式研究助手\n\n## 1. 引言\n\n本文将介绍如何使用ReactAgent构建一个能够与用户进行交互的研究助手，该助手能够接收用户输入，使用搜索工具获取信息，并提供深入的分析结果。这种交互式助手特别适合需要实时信息和多轮对话的场景。\n\n## 2. 交互式研究助手的核心概念\n\n### 2.1 用户输入处理\n\n交互式研究助手需要能够处理用户的自然语言输入，理解用户的意图，并将其转化为可执行的搜索查询或其他操作。这涉及到：\n\n1. **输入解析**：分析用户输入，提取关键信息和查询意图\n2. **查询重构**：将用户的自然语言问题转化为更有效的搜索查询\n3. **上下文维护**：在多轮对话中保持对话上下文的连贯性\n\n### 2.2 搜索工具集成\n\n研究助手的核心功能是能够获取和分析信息，这通常通过集成各种搜索工具来实现：\n\n1. **Tavily搜索**：提供高质量的网络搜索结果，支持深度搜索模式\n2. **结果处理**：对搜索结果进行过滤、排序和整合，提取最相关的信息\n3. **多次搜索策略**：对复杂问题进行分解，执行多次有针对性的搜索\n\n## 3. 实现交互式研究助手\n\n### 3.1 基本架构\n\n交互式研究助手的基本架构包括：\n\n```\n用户输入 → ReactAgent → 搜索工具 → 结果分析 → 回复生成 → 用户\n```\n\n这个流程可以多次循环，形成多轮对话。\n\n### 3.2 ReactAgent配置\n\n以下是创建交互式研究助手的核心代码：\n\n```python\ndef create_react_agent_instance():\n    \"\"\"创建并返回ReactAgent实例\"\"\"\n    react_agent = ReactAgent(\n        model=model,\n        tools=[tavily_search],\n        name=\"research_assistant\",\n        # 提示词强调分解问题、多步思考和综合信息\n        prompt=(\n            \"你是一位专业的研究分析师，擅长分析复杂问题并提供深入见解。\\n\"\n            \"你有一个强大的工具'tavily_search_results_json'可以搜索网络获取实时信息。\\n\\n\"\n            \"当面对复杂问题时，请遵循以下REACT方法论：\\n\"\n            \"1. 分解问题：将复杂问题分解为更小的子问题\\n\"\n            \"2. 制定计划：确定需要搜索哪些信息，以及搜索的顺序\\n\"\n            \"3. 执行搜索：使用tavily_search_results_json工具执行搜索\\n\"\n            \"4. 分析结果：分析搜索结果，确定是否需要进一步搜索\\n\"\n            \"5. 
综合信息：将所有搜索结果综合成一个连贯的回答\\n\\n\"\n            \"重要提示：\\n\"\n            \"- 不要一次性搜索过于宽泛的问题\\n\"\n            \"- 对于复杂问题，进行多次有针对性的搜索\\n\"\n            \"- 每次搜索后评估结果，决定下一步行动\\n\"\n            \"- 在最终回答中引用来源，包括搜索结果中的URL\\n\"\n            \"- 清晰地展示你的思考过程，包括问题分解和计划制定\\n\"\n        ),\n    )\n    \n    return react_agent\n```\n\n### 3.3 Tavily搜索工具配置\n\n```python\ntavily_search = TavilySearchResults(\n    max_results=3,\n    include_answer=True,\n    include_raw_content=True,  # 包含原始内容，便于分析\n    include_images=False,\n    search_depth=\"advanced\"  # 使用高级搜索深度\n)\n```\n\n### 3.4 用户交互循环\n\n用户交互循环的核心是通过`stream`方法获取中间状态，并实时显示Agent的思考过程：\n\n```python\ndef process_user_query(query):\n    # 创建ReactAgent实例\n    react_agent = create_react_agent_instance()\n    agent = react_agent.compile()\n    \n    # 准备输入\n    inputs = {\n        \"messages\": [HumanMessage(content=query)]\n    }\n    \n    # 使用stream方法逐步获取中间状态\n    final_state = None\n    for partial_state in react_agent.stream(inputs, stream_mode=\"values\"):\n        # 保存最终状态\n        final_state = partial_state\n        \n        # 获取最新消息并记录\n        messages = partial_state.get(\"messages\", [])\n        if messages:\n            latest_message = messages[-1]\n            log_agent_actions({\"messages\": [latest_message]})\n    \n    # 返回最终回答\n    return final_state\n```\n\n## 4. 最佳实践与优化策略\n\n### 4.1 提示词优化\n\n提示词对研究助手的性能至关重要，应包含以下要素：\n\n1. **角色定义**：明确助手的专业身份和能力\n2. **方法论指导**：提供结构化的问题解决方法\n3. **工具使用指南**：说明如何有效使用搜索工具\n4. **输出格式要求**：规定回答的结构和引用方式\n\n### 4.2 搜索策略优化\n\n为提高搜索效率和结果质量，可采用以下策略：\n\n1. **渐进式搜索**：从一般到具体，逐步缩小搜索范围\n2. **多角度查询**：使用不同的关键词和表述方式进行搜索\n3. **结果验证**：通过交叉检查多个来源验证信息的准确性\n4. **深度参数调整**：根据问题复杂度调整搜索深度参数\n\n### 4.3 用户体验优化\n\n提升用户体验的关键点包括：\n\n1. **透明的思考过程**：展示Agent的推理过程，增强可信度\n2. **实时反馈**：通过流式输出提供即时反馈\n3. **引用来源**：清晰标注信息来源，便于用户进一步探索\n4. **交互式引导**：在复杂问题上引导用户提供更多上下文或澄清问题\n\n## 5. 应用场景\n\n交互式研究助手适用于多种场景：\n\n1. **学术研究**：帮助研究人员快速获取和分析相关文献\n2. **市场分析**：收集和整合市场趋势、竞争对手信息\n3. **新闻摘要**：汇总和分析最新新闻事件\n4. **技术调研**：探索新技术、框架或工具的特性和评价\n5. 
**教育辅助**：为学生提供学习资料和解答问题\n\n## 6. 总结\n\nReactAgent结合用户交互和搜索工具，可以构建功能强大的研究助手，能够处理复杂查询并提供深入分析。通过优化提示词、搜索策略和用户体验，可以进一步提升助手的性能和实用性。未来的发展方向包括集成更多专业数据源、增强多模态能力，以及提供更个性化的信息服务。"
  },
  {
    "path": "instructions/06.web_extraction_tools.md",
    "content": "# 网页提取工具：FireCrawl与Jina的集成与应用\n\n## 1. 引言\n\n网页内容提取是智能体系统中的重要能力，它使智能体能够从互联网获取、分析和处理结构化和非结构化的网页内容。本文将详细介绍如何在Mentis框架中集成和使用FireCrawl和Jina两种强大的网页提取工具，以实现高效的网站结构分析和内容提取。\n\n## 2. 网页提取工具的核心概念\n\n### 2.1 网页提取的两个关键步骤\n\n高效的网页内容提取通常包含两个关键步骤：\n\n1. **网站结构分析**：了解网站的组织结构、页面之间的链接关系，以及重要页面的位置。\n2. **内容提取**：从特定页面中提取有价值的文本、图像或其他结构化信息。\n\n### 2.2 FireCrawl与Jina的角色分工\n\n在Mentis框架中，我们使用两种工具来分别处理这两个步骤：\n\n1. **FireCrawl**：专注于网站结构分析，能够爬取网站的页面结构和链接关系。\n2. **Jina**：专注于内容提取，能够从特定URL获取干净、结构化的内容。\n\n## 3. FireCrawlTool的实现与使用\n\n### 3.1 FireCrawlTool的基本结构\n\nFireCrawlTool是对FireCrawl API的封装，提供了网站爬取和内容分析的能力：\n\n```python\nclass FireCrawlTool(BaseTool):\n    \"\"\"Tool that uses FireCrawl API to crawl or scrape web content.\"\"\"\n\n    name: str = \"firecrawl_tool\"\n    description: str = (\n        \"A web crawler and scraper that extracts content from websites. \"\n        \"Useful for when you need to analyze the content of a specific website or webpage. \"\n        \"Input should be a URL to crawl or scrape.\"\n    )\n    args_schema: Type[BaseModel] = FireCrawlInput\n    \n    api_key: Optional[str] = None\n    api_url: Optional[str] = None\n    mode: str = \"crawl\"\n    params: Dict[str, Any] = Field(default_factory=dict)\n```\n\n### 3.2 FireCrawlTool的配置选项\n\nFireCrawlTool提供了多种配置选项：\n\n1. **mode**：工作模式，可选值包括：\n   - `crawl`：爬取网站结构和链接\n   - `scrape`：提取特定页面的内容\n   - `map`：生成网站地图\n\n2. **params**：额外参数，常用的包括：\n   - `max_pages`：限制爬取的最大页面数量\n   - `max_depth`：限制爬取的最大深度\n   - `follow_links`：是否跟踪页面中的链接\n\n### 3.3 使用FireCrawlTool爬取网站结构\n\n以下是使用FireCrawlTool爬取网站结构的示例代码：\n\n```python\n# 创建FireCrawl工具 - 用于网站结构分析\nfirecrawl_tool = FireCrawlTool(\n    mode=\"crawl\",  # 使用爬取模式\n    params={\n        \"max_pages\": 5,  # 限制爬取页面数量\n    }\n)\n\n# 在Agent中使用该工具\nreact_agent = create_react_agent(\n    model=model,\n    tools=[firecrawl_tool],\n    name=\"web_crawler\",\n    prompt=\"你是一位网站结构分析专家...\"\n)\n```\n\n## 4. 
JinaSearch的实现与使用\n\n### 4.1 JinaSearch的基本功能\n\nJinaSearch是LangChain提供的一个工具，能够从网页中提取干净、可读的内容，去除广告、导航栏等干扰元素：\n\n```python\nfrom langchain_community.tools import JinaSearch\n\n# 创建Jina Reader工具 - 用于内容提取\njina_reader_tool = JinaSearch()\n```\n\n### 4.2 使用JinaSearch提取网页内容\n\nJinaSearch特别适合在确定了目标页面后，提取其中的核心内容：\n\n```python\n# 在Agent中结合FireCrawl和Jina\nreact_agent = create_react_agent(\n    model=model,\n    tools=[firecrawl_tool, jina_reader_tool],\n    name=\"web_extraction_expert\",\n    prompt=\"你是一位网页内容分析专家...\"\n)\n```\n\n## 5. 网页提取的最佳实践\n\n### 5.1 两阶段提取策略\n\n为了高效地提取网页内容，建议采用两阶段策略：\n\n1. **第一阶段**：使用FireCrawlTool爬取网站结构，了解网站的组织方式和重要页面。\n2. **第二阶段**：根据第一阶段的结果，使用JinaSearch有针对性地提取重要页面的内容。\n\n### 5.2 提示词优化\n\n为了引导Agent正确使用这两个工具，提示词应该明确指出工具的使用顺序和方法：\n\n```python\nprompt = (\n    \"你是一位专业的网页内容分析专家，擅长提取和分析网站结构与内容。\\n\"\n    \"你有两个强大的工具:\\n\"\n    \"1. 'firecrawl_tool': 用于爬取网站结构和下级页面\\n\"\n    \"2. 'jina_reader_tool': 用于从特定URL提取结构化内容\\n\\n\"\n    \"当面对网站分析任务时，请遵循以下方法论:\\n\"\n    \"1. 先使用firecrawl_tool了解网站结构\\n\"\n    \"2. 再使用jina_reader_tool提取关键页面内容\\n\"\n    \"3. 最后整合信息提供分析结果\"\n)\n```\n\n### 5.3 处理大型网站的策略\n\n对于大型网站，可以采用以下策略：\n\n1. **限制爬取范围**：设置合理的`max_pages`和`max_depth`参数\n2. **分批处理**：先获取网站结构，然后每次只处理1-3个重要页面\n3. **内容摘要**：对提取的内容进行摘要，减少token消耗\n\n## 6. 实际应用案例\n\n### 6.1 分析LangGraph文档网站\n\n以下是使用FireCrawl和Jina分析LangGraph文档网站的示例：\n\n```python\n# 定义输入\ninputs = {\n    \"messages\": [\n        {\"role\": \"user\", \"content\": \"爬取LangGraph文档网站的每个章节的内容(https://langchain-ai.github.io/langgraph/how-tos/) \"}\n    ]\n}\n\n# 使用stream方法逐步获取中间状态\nfinal_state = None\nfor partial_state in react_agent.stream(inputs, stream_mode=\"values\"):\n    # 处理中间状态...\n    pass\n```\n\n### 6.2 结果分析与处理\n\nAgent会首先使用FireCrawl获取网站结构，然后使用Jina提取重要页面的内容，最后整合信息提供分析结果：\n\n1. **网站结构分析**：识别主要章节和子页面\n2. **内容提取**：获取每个章节的详细内容\n3. **信息整合**：将内容组织成结构化的文档或摘要\n\n## 7. 
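两阶段提取策略的最小示意

把第 5 节的两阶段策略落成代码骨架，有助于理解整体流程。下面是一个纯 Python 示意，其中 `crawl_site` 与 `read_page` 为演示用桩函数（返回固定数据），实际系统中分别对应 `firecrawl_tool` 与 `jina_reader_tool`：

```python
def crawl_site(url, max_pages=5):
    # 第一阶段：返回站点结构（页面 URL 列表），此处用固定数据模拟
    site_map = {
        'https://docs.example.com': [
            'https://docs.example.com/intro',
            'https://docs.example.com/howto',
            'https://docs.example.com/faq',
        ],
    }
    return site_map.get(url, [])[:max_pages]

def read_page(url):
    # 第二阶段：提取单页正文，此处用 URL 拼接模拟
    return f'{url} 的正文内容'

def two_stage_extract(root_url, important=lambda u: True, max_pages=5):
    # 先爬结构，再只对重要页面做内容提取
    pages = crawl_site(root_url, max_pages=max_pages)
    return {u: read_page(u) for u in pages if important(u)}

docs = two_stage_extract('https://docs.example.com',
                         important=lambda u: 'faq' not in u)
for url, text in docs.items():
    print(url, '->', text)
```

通过 `important` 过滤函数，可以只对关键页面做第二阶段提取，从而控制 token 消耗。

## 8. 全文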
总结\n\nFireCrawl和Jina的结合为智能体提供了强大的网页内容提取能力。通过两阶段提取策略，可以高效地分析网站结构并提取有价值的内容。这种能力使智能体能够从互联网获取实时信息，为用户提供更加全面和准确的回答。\n\n未来的发展方向包括增强对JavaScript渲染页面的支持、提高内容提取的准确性，以及集成更多专业领域的内容分析能力。"
  },
  {
    "path": "instructions/07.web_extraction_with_filesystem.md",
    "content": "# 网页提取与文件系统集成：构建内容采集与存储系统\n\n## 1. 引言\n\n在智能体系统中，网页内容提取通常需要与文件系统操作相结合，以便将提取的内容持久化存储。本文将详细介绍如何在Mentis框架中集成网页提取工具和文件系统工具，并使用SupervisorAgent协调多个专业智能体，构建一个完整的内容采集与存储系统。\n\n## 2. 系统架构设计\n\n### 2.1 三层架构模式\n\n我们采用三层架构设计，包括：\n\n1. **Supervisor层**：负责协调和管理其他智能体，接收用户指令并分配任务\n2. **Research层**：负责网页内容提取，包括网站结构分析和内容提取\n3. **FileSystem层**：负责文件操作，包括内容保存、读取和目录管理\n\n### 2.2 智能体角色分工\n\n系统中的三个智能体各自承担不同的职责：\n\n1. **SupervisorAgent**：总协调者，负责理解用户需求，并将任务分配给适当的专业智能体\n2. **Research Agent**：网页内容分析专家，负责使用FireCrawl和Jina工具提取网页内容\n3. **FileSystem Agent**：文件系统管理专家，负责将提取的内容保存到本地文件系统\n\n## 3. 组件实现\n\n### 3.1 Research Agent实现\n\nResearch Agent负责网页内容提取，使用FireCrawl和Jina工具：\n\n```python\n# 创建FireCrawl工具 - 用于网站结构分析\nfirecrawl_tool = FireCrawlTool(\n    mode=\"crawl\",  # 使用爬取模式\n    params={\n        \"max_pages\": 5,  # 限制爬取页面数量\n    }\n)\n\n# 创建Jina Reader工具 - 用于内容提取\njina_reader_tool = JinaSearch()\n\n# 创建Research Agent\nresearch_agent = create_react_agent(\n    model=model,\n    tools=[firecrawl_tool, jina_reader_tool],\n    name=\"research_agent\",\n    prompt=(\n        \"你是一位专业的网页内容分析专家，擅长提取和分析网站结构与内容。\\n\"\n        \"你有两个强大的工具...\\n\"\n        # 提示词内容\n    ),\n)\n```\n\n### 3.2 FileSystem Agent实现\n\nFileSystem Agent负责文件操作，使用LangChain的FileManagementToolkit：\n\n```python\n# 设置文件系统工具的根目录\noutput_dir = os.path.join(os.path.dirname(__file__), \"output\")\nos.makedirs(output_dir, exist_ok=True)\n\n# 创建文件系统工具集\nfilesystem_toolkit = FileManagementToolkit(\n    root_dir=output_dir,\n    selected_tools=[\"write_file\", \"read_file\", \"list_directory\"]\n)\n\n# 获取文件系统工具\nfilesystem_tools = filesystem_toolkit.get_tools()\n\n# 创建FileSystem Agent\nfilesystem_agent = create_react_agent(\n    model=model,\n    tools=filesystem_tools,\n    name=\"filesystem_agent\",\n    prompt=(\n        \"你是一位专业的文件系统管理专家，负责将网页内容保存到本地文件系统。\\n\"\n        \"你有以下工具可以使用...\\n\"\n        # 提示词内容\n    ),\n)\n```\n\n### 3.3 SupervisorAgent实现\n\nSupervisorAgent负责协调Research Agent和FileSystem Agent：\n\n```python\n# 
创建Supervisor Agent\nsupervisor = SupervisorAgent(\n    agents=[research_agent, filesystem_agent],\n    model=model,\n    prompt=(\n        \"你是一个智能助手的总协调者，负责管理两个专业智能体:\\n\"\n        \"1) research_agent: 网页内容分析专家，可以爬取和分析网站内容\\n\"\n        \"2) filesystem_agent: 文件系统管理专家，可以将内容保存到本地文件系统\\n\\n\"\n        # 提示词内容\n    ),\n)\n\n# 创建内存存储器用于保存对话状态\nmemory_saver = MemorySaver()\n\n# 编译得到一个可调用的\"App\"，添加checkpointer实现记忆功能\napp = supervisor.compile(checkpointer=memory_saver)\n```\n\n## 4. 工作流程\n\n### 4.1 基本工作流程\n\n系统的基本工作流程如下：\n\n1. **用户请求**：用户提出网页内容提取和保存的请求\n2. **Supervisor分析**：SupervisorAgent分析用户请求，确定需要调用哪个专业智能体\n3. **内容提取**：如果需要提取网页内容，SupervisorAgent调用Research Agent\n4. **内容保存**：如果需要保存内容，SupervisorAgent将Research Agent的结果传递给FileSystem Agent\n5. **结果返回**：SupervisorAgent将最终结果返回给用户\n\n### 4.2 上下文管理策略\n\n为了有效管理上下文长度，系统采用以下策略：\n\n1. **分批处理**：对于大型网站，采用分批处理策略，每次只处理少量页面\n2. **内容摘要**：对于大型内容，进行摘要处理，减少传递的token数量\n3. **先保存再处理**：对于多页面内容，采用先保存再处理的策略，减轻上下文负担\n\n## 5. 提示词设计\n\n### 5.1 SupervisorAgent提示词\n\nSupervisorAgent的提示词强调任务分配和协调：\n\n```\n你是一个智能助手的总协调者，负责管理两个专业智能体:\n1) research_agent: 网页内容分析专家，可以爬取和分析网站内容\n2) filesystem_agent: 文件系统管理专家，可以将内容保存到本地文件系统\n\n你的工作流程如下:\n1. 分析用户请求，确定是需要网页内容提取还是文件操作，或两者都需要\n2. 如果需要网页内容提取，调用research_agent获取网页内容\n3. 如果需要将提取的内容保存到文件，调用filesystem_agent进行保存\n4. 如果用户同时需要提取内容并保存，先调用research_agent获取内容，再调用filesystem_agent保存内容\n\n重要规则:\n- 不要在一个消息中同时调用多个智能体，必须一步一步来\n- 当调用filesystem_agent保存内容时，必须提供完整的内容和建议的文件名\n- 确保在最终回复中告知用户内容已成功提取和/或保存\n```\n\n### 5.2 Research Agent提示词\n\nResearch Agent的提示词强调网页内容提取的方法论：\n\n```\n你是一位专业的网页内容分析专家，擅长提取和分析网站结构与内容。\n你有两个强大的工具:\n1. 'firecrawl_tool': 用于爬取网站结构和下级页面\n2. 'jina_reader_tool': 用于从特定URL提取结构化内容，获取干净可读的内容\n\n当面对网站分析任务时，请遵循以下方法论:\n1. 分析任务: 明确需要从网站获取什么信息\n2. 网站结构分析: 使用firecrawl_tool爬取网站结构，了解可用页面\n3. 内容提取: 根据网站结构，使用jina_reader_tool从关键页面提取内容\n4. 信息整合: 将提取的内容整合成有条理的分析结果\n```\n\n### 5.3 FileSystem Agent提示词\n\nFileSystem Agent的提示词强调文件操作和内容保存：\n\n```\n你是一位专业的文件系统管理专家，负责将网页内容保存到本地文件系统。\n你有以下工具可以使用:\n1. 'write_file': 用于将内容写入文件\n2. 
'read_file': 用于读取文件内容\n3. 'list_directory': 用于列出目录内容\n\n当接收到保存内容的请求时，请遵循以下方法论:\n1. 分析内容: 确定内容的类型和结构\n2. 确定文件名: 根据内容类型和来源创建合适的文件名\n3. 保存内容: 使用write_file工具将内容保存到文件中\n4. 验证保存: 使用read_file或list_directory工具验证内容已正确保存\n```\n\n## 6. 记忆功能实现\n\n### 6.1 使用MemorySaver实现记忆\n\n系统使用LangGraph的MemorySaver实现对话状态的持久化：\n\n```python\n# 创建内存存储器用于保存对话状态\nmemory_saver = MemorySaver()\n\n# 编译得到一个可调用的\"App\"，添加checkpointer实现记忆功能\napp = supervisor.compile(checkpointer=memory_saver)\n```\n\n### 6.2 记忆功能的应用场景\n\n记忆功能在以下场景中特别有用：\n\n1. **多轮对话**：在多轮对话中保持上下文连贯性\n2. **长时间任务**：对于需要长时间处理的任务，可以保存中间状态\n3. **断点续传**：支持任务的暂停和恢复\n\n## 7. 应用案例\n\n### 7.1 提取并保存LangGraph文档\n\n以下是一个完整的应用案例，提取并保存LangGraph文档：\n\n```python\n# 用户请求\ninputs = {\n    \"messages\": [\n        HumanMessage(content=\"请爬取LangGraph文档网站(https://langchain-ai.github.io/langgraph/how-tos/)的内容，并保存为Markdown文件\")\n    ]\n}\n\n# 执行工作流\nfinal_state = None\nfor partial_state in app.stream(inputs, stream_mode=\"values\"):\n    # 处理中间状态...\n    final_state = partial_state\n    # 记录状态\n    log_agent_actions(partial_state)\n\n# 最终结果\nprint(\"\\n最终结果:\")\nif final_state and final_state.get(\"messages\"):\n    for message in final_state[\"messages\"]:\n        if isinstance(message, AIMessage) and not message.tool_calls:\n            print(message.content)\n```\n\n## 8. 总结\n\n网页提取与文件系统集成是构建完整内容采集系统的关键。通过SupervisorAgent协调Research Agent和FileSystem Agent，我们可以实现网页内容的提取、分析和持久化存储。这种多智能体协作模式不仅提高了系统的模块化程度，也使得每个智能体可以专注于自己的专业领域，从而提高整体系统的效率和质量。\n\n未来的发展方向包括增强对复杂网站的处理能力、支持更多文件格式的存储和处理，以及集成数据库存储以支持更大规模的内容管理。"
  },
  {
    "path": "instructions/08.react_agent_tool_registry.md",
    "content": "# 工具注册机制与ReactAgent集成：构建可扩展的智能体系统\n\n## 1. 引言\n\n工具注册机制是构建可扩展智能体系统的关键组件，它允许我们以统一的方式管理和使用各种工具，并将这些工具与ReactAgent集成。本文将详细介绍Mentis框架中的工具注册机制，包括工具注册、分类管理以及与ReactAgent的集成方式。\n\n## 2. 工具注册机制的核心概念\n\n### 2.1 工具注册的意义\n\n工具注册机制提供了以下优势：\n\n1. **统一管理**：集中管理所有可用工具，避免重复创建和配置\n2. **分类组织**：按功能和用途对工具进行分类，便于查找和使用\n3. **动态加载**：支持动态注册和加载工具，提高系统的灵活性\n4. **简化集成**：简化工具与Agent的集成过程，只需从注册表中获取工具列表\n\n### 2.2 工具分类体系\n\n在Mentis框架中，我们使用`ToolCategory`枚举定义了工具的分类体系：\n\n```python\nclass ToolCategory(Enum):\n    SEARCH = \"Search\"\n    CODE_INTERPRETER = \"Code Interpreter\"\n    WEB_BROWSING = \"Web Browsing\"\n    DATABASE = \"Database\"\n    FILE_SYSTEM = \"FileSystem\"\n    OTHER = \"Other\"\n```\n\n这种分类体系使我们能够根据任务需求选择特定类别的工具，提高工具使用的针对性和效率。\n\n## 3. 工具注册机制的实现\n\n### 3.1 全局工具注册表\n\n工具注册机制的核心是一个全局工具注册表，它是一个字典，用于存储所有已注册的工具及其分类信息：\n\n```python\n# 全局工具注册表\n_registered_tools = {}\n```\n\n### 3.2 工具注册函数\n\n`register_tool`函数用于将工具注册到全局注册表中：\n\n```python\ndef register_tool(tool: Tool, category: ToolCategory) -> None:\n    \"\"\"注册一个工具到全局字典中，带有分类信息\"\"\"\n    if tool.name in _registered_tools:\n        raise ValueError(f\"工具名 {tool.name} 已存在，请确保工具名唯一\")\n    _registered_tools[tool.name] = {\n        \"tool\": tool,\n        \"category\": category\n    }\n```\n\n### 3.3 工具获取函数\n\n框架提供了多种函数来获取已注册的工具：\n\n```python\ndef get_registered_tools(as_dict: bool = False) -> Union[List[Tool], Dict[str, Dict]]:\n    \"\"\"返回所有已注册的工具\"\"\"\n    if as_dict:\n        return _registered_tools\n    return [info[\"tool\"] for info in _registered_tools.values()]\n\ndef get_tools_by_category(category: ToolCategory, return_instances: bool = True) -> List[Union[str, Tool]]:\n    \"\"\"返回指定分类的工具列表\"\"\"\n    if return_instances:\n        return [info[\"tool\"] for name, info in _registered_tools.items() if info[\"category\"] == category]\n    return [name for name, info in _registered_tools.items() if info[\"category\"] == category]\n```\n\n## 4. 
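简化工具注册的辅助函数

为了直观理解上一节的注册表机制，这里先给出一个可独立运行的最小示意。它与 3.1–3.3 的代码同构，仅用一个简单的 `Tool` 类代替 langchain 的 `BaseTool`：

```python
from enum import Enum

class ToolCategory(Enum):
    SEARCH = 'Search'
    OTHER = 'Other'

class Tool:
    # 演示用的最小 Tool 类，只保留注册逻辑需要的 name 属性
    def __init__(self, name):
        self.name = name

# 全局工具注册表
_registered_tools = {}

def register_tool(tool, category):
    # 与 3.2 中相同：工具名重复时抛出异常
    if tool.name in _registered_tools:
        raise ValueError(f'工具名 {tool.name} 已存在，请确保工具名唯一')
    _registered_tools[tool.name] = {'tool': tool, 'category': category}

def get_tools_by_category(category):
    # 与 3.3 中相同：按分类筛选工具实例
    return [info['tool'] for info in _registered_tools.values()
            if info['category'] == category]

register_tool(Tool('jina_search'), ToolCategory.SEARCH)
register_tool(Tool('misc_tool'), ToolCategory.OTHER)
print([t.name for t in get_tools_by_category(ToolCategory.SEARCH)])
```

重复注册同名工具会抛出 ValueError，这正是集中管理避免命名冲突的意义所在。在此基础上，下面具体介绍这些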
简化工具注册的辅助函数\n\n### 4.1 直接注册工具的函数\n\n为了简化工具注册过程，框架提供了`register_direct_tool`函数，它可以根据工具类名自动判断工具类别：\n\n```python\ndef register_direct_tool(tool_instance: BaseTool, category: ToolCategory = None) -> None:\n    \"\"\"注册直接从langchain_community.tools导入的工具\"\"\"\n    if not category:\n        # 获取工具类名\n        tool_class_name = tool_instance.__class__.__name__\n        # 根据工具类名自动判断类别\n        category = tool_category_mapping.get(tool_class_name, tool_category_mapping[\"default\"])\n    \n    # 注册工具\n    register_tool(tool_instance, category)\n    print(f\"已注册工具: {tool_instance.name} (类别: {category.value})\")\n```\n\n### 4.2 自动注册自定义工具\n\n框架还支持自动扫描和注册自定义工具。在`__init__.py`中，我们使用以下代码自动注册自定义工具：\n\n```python\n# 遍历目录中的所有文件，注册自定义工具\nfor filename in os.listdir(tools_dir):\n    # 只处理 .py 文件，且排除 __init__.py 和 registry.py\n    if filename.endswith('.py') and filename not in ['__init__.py', 'registry.py']:\n        # 提取模块名（去掉 .py 后缀）\n        module_name = filename[:-3]\n        try:\n            # 动态导入模块\n            module = importlib.import_module(f'.{module_name}', package='core.tools')\n            \n            # 查找模块中的工具类（继承自BaseTool的类）\n            for name, obj in inspect.getmembers(module):\n                # 检查是否是类且是BaseTool的子类\n                if inspect.isclass(obj) and issubclass(obj, BaseTool) and obj != BaseTool:\n                    # 检查该类是否已经被实例化并注册\n                    tool_name = getattr(obj, 'name', None)\n                    if tool_name and tool_name not in [info['tool'].name for info in get_registered_tools().values()]:\n                        # 确定工具类别\n                        category = getattr(module, 'category', ToolCategory.OTHER)\n                        # 实例化并注册工具\n                        try:\n                            tool_instance = obj()\n                            register_tool(tool_instance, category)\n                            print(f\"已注册工具类: {name} (工具名: {tool_instance.name}, 类别: {category.value})\")\n                        except Exception as 
e:\n                            print(f\"实例化工具类 {name} 时出错: {e}\")\n        except Exception as e:\n            print(f\"导入 {module_name} 时出错: {e}\")\n```\n\n这段代码会自动扫描`core/tools`目录中的所有Python文件，查找继承自`BaseTool`的类，并自动实例化和注册这些工具。\n\n## 5. 与ReactAgent的集成\n\n### 5.1 从注册表获取工具列表\n\n在创建ReactAgent实例时，我们可以从注册表中获取工具列表：\n\n```python\n# 从注册表中获取工具列表\ntools_list = [info[\"tool\"] for info in registered_tools.values()]\n\n# 创建ReactAgent实例\nreact_agent = ReactAgent(\n    model=model,\n    tools=tools_list,\n    name=\"fed_research_agent\",\n    prompt=(\n        \"你是一位专业的经济研究分析师，擅长分析复杂的经济问题并提供深入见解。\\n\"\n        \"你有多个强大的工具可以搜索网络获取实时信息：\\n\"\n        \"- jina_search: 用于进行网络搜索获取最新信息\\n\"\n        \"- wikipedia_query_run: 用于查询维基百科获取基础知识\\n\"\n        \"- firecrawl_tool: 用于抓取和分析特定网页内容\\n\\n\"\n        # 提示词内容\n    ),\n)\n```\n\n### 5.2 按类别选择工具\n\n在某些场景下，我们可能只需要特定类别的工具。这时，可以使用`get_tools_by_category`函数：\n\n```python\n# 获取所有搜索类工具\nsearch_tools = get_tools_by_category(ToolCategory.SEARCH)\n\n# 创建专注于搜索的ReactAgent\nsearch_agent = ReactAgent(\n    model=model,\n    tools=search_tools,\n    name=\"search_agent\",\n    prompt=\"你是一位专业的信息搜索专家...\"\n)\n```\n\n## 6. 
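实际应用案例

前文 4.1 的 `register_direct_tool` 引用了 `tool_category_mapping`，但文中并未给出其定义。下面补一个假设性的最小实现（类名到类别的映射内容纯属演示；另外注意按字典遍历注册表时应使用 `get_registered_tools(as_dict=True)`）：

```python
from enum import Enum

class ToolCategory(Enum):
    SEARCH = 'Search'
    WEB_BROWSING = 'Web Browsing'
    OTHER = 'Other'

# 假设性的「类名 -> 类别」映射，'default' 为兜底值
tool_category_mapping = {
    'JinaSearch': ToolCategory.SEARCH,
    'WikipediaQueryRun': ToolCategory.SEARCH,
    'FireCrawlTool': ToolCategory.WEB_BROWSING,
    'default': ToolCategory.OTHER,
}

def infer_category(tool_instance):
    # 与 register_direct_tool 相同的推断逻辑：按类名查表，查不到用兜底值
    class_name = tool_instance.__class__.__name__
    return tool_category_mapping.get(class_name, tool_category_mapping['default'])

class JinaSearch:
    # 演示用的空壳类，仅为复现类名
    name = 'jina_search'

class SomeCustomTool:
    name = 'custom_tool'

print(infer_category(JinaSearch()))
print(infer_category(SomeCustomTool()))
```

这样，直接从 `langchain_community.tools` 导入的工具无需手工指定类别即可自动归类。准备好类别映射后，来看一个完整的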
实际应用案例\n\n### 6.1 美联储研究任务\n\n以下是一个完整的应用案例，使用工具注册机制和ReactAgent进行美联储研究：\n\n```python\n# 注册搜索工具\njina_search = JinaSearch()\nwiki_tool = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())\n\n# 使用register_direct_tool函数注册工具\nregister_direct_tool(jina_search)\nregister_direct_tool(wiki_tool)\n\n# 注意：FireCrawlTool已经在core/tools/__init__.py中被注册，这里不需要再次注册\n\n# 获取所有已注册的工具（以字典格式）\nregistered_tools = get_registered_tools(as_dict=True)\n\n# 从注册表中获取工具列表\ntools_list = [info[\"tool\"] for info in registered_tools.values()]\n\n# 创建ReactAgent实例\nreact_agent = ReactAgent(\n    model=model,\n    tools=tools_list,\n    name=\"fed_research_agent\",\n    prompt=(\n        \"你是一位专业的经济研究分析师，擅长分析复杂的经济问题并提供深入见解。\\n\"\n        # 提示词内容\n    ),\n)\n\n# 编译Agent\nagent = react_agent.compile()\n\n# 定义输入\ninputs = {\n    \"messages\": [\n        HumanMessage(content=\"请提供美联储(Federal Reserve)的详细介绍，包括其历史、结构、职能，以及它如何通过货币政策影响全球经济。\")\n    ]\n}\n\n# 执行Agent\nfinal_state = None\nfor partial_state in react_agent.stream(inputs, stream_mode=\"values\"):\n    # 处理中间状态...\n    pass\n```\n\n### 6.2 结果保存\n\n执行完成后，我们可以将结果保存到文件：\n\n```python\n# 打印最终回答\nif final_state and final_state.get(\"messages\"):\n    for message in final_state[\"messages\"]:\n        if isinstance(message, AIMessage) and not message.tool_calls:\n            print(message.content)\n            \n            # 将结果保存到文件\n            output_dir = os.path.join(os.path.dirname(__file__), \"output\")\n            os.makedirs(output_dir, exist_ok=True)\n            output_file = os.path.join(output_dir, \"fed_research_report.md\")\n            \n            with open(output_file, \"w\", encoding=\"utf-8\") as f:\n                f.write(\"# 美联储研究报告\\n\\n\")\n                f.write(message.content)\n            \n            print(f\"\\n研究报告已保存到: {output_file}\")\n```\n\n## 7. 最佳实践\n\n### 7.1 工具命名规范\n\n为了避免工具名冲突，建议遵循以下命名规范：\n\n1. 使用有意义的名称，反映工具的功能\n2. 对于同一类别的工具，使用统一的前缀或后缀\n3. 
避免使用过于通用的名称，如`search`、`get`等\n\n### 7.2 工具分类策略\n\n合理的工具分类策略可以提高工具使用的效率：\n\n1. 根据工具的主要功能进行分类，而不是实现方式\n2. 对于多功能工具，根据其主要功能进行分类\n3. 只有在无法确定主要功能时，才将工具归类为`OTHER`\n\n### 7.3 提示词优化\n\n在提示词中明确说明可用工具及其用途，可以提高Agent的工具使用效率：\n\n```\n你是一位专业的经济研究分析师，擅长分析复杂的经济问题并提供深入见解。\n你有多个强大的工具可以搜索网络获取实时信息：\n- jina_search: 用于进行网络搜索获取最新信息\n- wikipedia_query_run: 用于查询维基百科获取基础知识\n- firecrawl_tool: 用于抓取和分析特定网页内容\n\n当面对复杂问题时，请遵循以下方法论：\n1. 分解问题：将复杂问题分解为更小的子问题\n2. 制定计划：确定需要搜索哪些信息，以及使用哪些工具\n3. 执行搜索：使用适当的工具执行搜索\n4. 分析结果：分析搜索结果，确定是否需要进一步搜索\n5. 综合信息：将所有搜索结果综合成一个连贯的回答\n```\n\n## 8. 总结\n\n工具注册机制为Mentis框架提供了强大的可扩展性，使得智能体系统能够轻松集成各种工具，并根据任务需求灵活选择合适的工具组合。通过分类管理和自动注册，工具注册机制简化了工具的管理和使用流程，提高了开发效率。\n\n结合ReactAgent，工具注册机制使得智能体能够访问丰富的外部功能，从而处理更复杂的任务。未来的发展方向包括支持更多类型的工具、增强工具的自动发现和选择能力，以及提供更细粒度的工具权限控制。"
  },
  {
    "path": "instructions/09.e2b_sandbox_integration.md",
    "content": "# E2B沙箱环境与智能代理集成指南\n\n## 1. 引言\n\nE2B沙箱环境是一个强大的代码执行工具，它提供了安全、隔离的环境来运行Python代码和Shell命令。将E2B沙箱与智能代理（如ReactAgent）集成，可以显著增强代理的能力，使其能够执行代码、处理数据、创建可视化，甚至与文件系统交互。本文将详细介绍E2B沙箱的核心概念、工作原理、实现方式以及在智能代理系统中的应用。\n\n## 2. E2B沙箱环境的核心概念\n\n### 2.1 什么是E2B沙箱\n\nE2B（Execution Environment for Bots）是一个专为AI代理设计的代码执行环境，它提供以下核心功能：\n\n1. **安全隔离**：在隔离的容器中执行代码，防止恶意代码影响宿主系统\n2. **多语言支持**：主要支持Python，同时可通过Shell命令执行其他语言代码\n3. **文件系统操作**：允许创建、读取、写入和管理文件\n4. **包管理**：支持安装和使用第三方Python库\n5. **持久化**：可以在会话之间保持状态和文件\n\n### 2.2 E2B沙箱与代码解释器的关系\n\nE2B沙箱是一种特殊的代码解释器实现，它不仅能执行代码，还提供了完整的操作系统环境（基于Debian）。这使得它比简单的代码解释器功能更强大，能够：\n\n- 执行系统命令\n- 管理文件和目录\n- 安装和使用各种软件包\n- 运行网络服务\n- 处理复杂的数据分析和可视化任务\n\n## 3. E2B沙箱的实现\n\n### 3.1 E2BCodeInterpreterTool类的设计\n\n在我们的实现中，`E2BCodeInterpreterTool`类继承自LangChain的`BaseTool`，提供了与E2B沙箱交互的接口：\n\n```python\nclass E2BCodeInterpreterTool(BaseTool):\n    \"\"\"使用E2B SDK执行Python代码的工具\n    \n    该工具创建一个安全的沙箱环境，用于执行Python代码，并返回执行结果、\n    标准输出、标准错误和任何错误信息。\n    \"\"\"\n    \n    name: str = \"e2b_code_interpreter\"\n    description: str = (\n        \"在安全的 Debian 基础沙箱环境中执行 Python 代码或 shell 命令，并返回结果。\"\n        \"适用于数据分析、可视化、复杂计算以及系统操作。\"\n        \"输入应为有效的 Python 代码字符串，或以 '!' 开头的 shell 命令。\"\n        \"常见 Python 库（如 numpy、pandas 和 matplotlib）已预装，若需其他库，可通过 pip 安装。\"\n        \"沙箱环境充分利用 Debian 系统的强大功能，支持广泛的操作。\"\n    )\n```\n\n### 3.2 核心方法\n\n`E2BCodeInterpreterTool`类提供了以下核心方法：\n\n1. **_initialize_sandbox()**: 初始化沙箱环境\n2. **_run()**: 在沙箱中执行代码并返回结果\n3. **close()**: 关闭沙箱并释放资源\n4. **format_to_tool_message()**: 将执行结果格式化为工具消息\n\n### 3.3 沙箱初始化与资源管理\n\n沙箱初始化过程包括：\n\n1. 检查是否安装了`e2b_code_interpreter`包\n2. 验证是否设置了`E2B_API_KEY`环境变量\n3. 创建`Sandbox`实例\n4. 
设置沙箱状态标志\n\n资源管理方面，工具提供了`close()`方法来释放沙箱资源：\n\n```python\ndef close(self):\n    \"\"\"关闭沙箱，释放资源\"\"\"\n    if hasattr(self, \"sandbox\") and self._is_available and self.sandbox is not None:\n        try:\n            print(\"正在关闭E2B沙箱并释放资源...\")\n            self.sandbox.kill()\n            print(\"E2B沙箱已成功关闭\")\n        except Exception as e:\n            print(f\"关闭E2B沙箱时出错: {str(e)}\")\n```\n\n## 4. 将E2B沙箱与ReactAgent集成\n\n### 4.1 基本集成流程\n\n将E2B沙箱与ReactAgent集成的基本流程如下：\n\n1. **注册E2B工具**：将`E2BCodeInterpreterTool`注册到工具注册表中\n2. **创建ReactAgent**：使用包含E2B工具的工具列表初始化ReactAgent\n3. **设计提示词**：编写强调代码执行能力的提示词\n4. **执行工作流**：让Agent使用E2B工具执行代码并处理结果\n\n### 4.2 代码示例\n\n以下是一个基本的集成示例：\n\n```python\n# 导入必要的库\nfrom core.agents.react_agent import ReactAgent\nfrom core.tools.registry import get_tools_by_category, ToolCategory\nfrom langchain_openai import ChatOpenAI\n\n# 获取代码解释器工具\ntools_list = get_tools_by_category(ToolCategory.CODE_INTERPRETER)\n\n# 创建ReactAgent实例\nreact_agent = ReactAgent(\n    model=ChatOpenAI(model=\"gpt-4o-mini\"),\n    tools=tools_list,\n    prompt=(\n        \"你是一位专业的数据分析师，可以使用Python代码解决问题。\\n\"\n        \"你有强大的代码执行工具可以使用：\\n\"\n        \"- e2b_code_interpreter: 用于执行Python代码和shell命令\\n\"\n    ),\n)\n\n# 编译Agent\nagent = react_agent.compile()\n\n# 执行任务\nresult = agent.invoke({\"messages\": [HumanMessage(content=\"分析以下数据并创建可视化...\")]})\n```\n\n## 5. 
E2B沙箱的高级功能\n\n### 5.1 文件系统操作\n\nE2B沙箱提供了完整的文件系统操作能力，可以：\n\n- 创建和管理目录结构\n- 读写文本和二进制文件\n- 列出目录内容\n- 移动和删除文件\n\n示例代码：\n\n```python\n# 在沙箱中创建目录和文件\ncode = \"\"\"\n# 创建目录\nimport os\nos.makedirs('test_dir/subdir', exist_ok=True)\n\n# 创建并写入文件\nwith open('test_dir/example.txt', 'w') as f:\n    f.write('Hello from E2B sandbox!')\n    \n# 列出目录内容\nprint(os.listdir('test_dir'))\n\n# 读取文件内容\nwith open('test_dir/example.txt', 'r') as f:\n    content = f.read()\n    print(f'文件内容: {content}')\n\"\"\"\n\n# 执行代码\nresult = e2b_tool.invoke({\"code\": code})\n```\n\n### 5.2 包管理\n\nE2B沙箱允许安装和使用第三方Python库：\n\n```python\n# 安装并使用第三方库\ncode = \"\"\"\n# 安装pandas库\n!pip install pandas matplotlib\n\n# 使用pandas进行数据分析\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n# 创建示例数据\ndata = {'Category': ['A', 'B', 'C', 'D'], 'Values': [10, 25, 15, 30]}\ndf = pd.DataFrame(data)\n\n# 打印数据\nprint(df)\n\n# 创建可视化\nplt.figure(figsize=(8, 4))\nplt.bar(df['Category'], df['Values'])\nplt.title('Sample Bar Chart')\nplt.savefig('chart.png')\nprint('图表已保存为chart.png')\n\"\"\"\n\n# 执行代码\nresult = e2b_tool.invoke({\"code\": code})\n```\n\n### 5.3 从沙箱下载文件\n\n可以将沙箱中生成的文件下载到本地系统：\n\n```python\ndef download_file_from_sandbox(sandbox, sandbox_path, local_path):\n    \"\"\"从沙箱下载文件到本地\"\"\"\n    try:\n        # 从沙箱读取文件内容\n        content = sandbox.files.read(sandbox_path)\n        \n        # 确保目标目录存在\n        os.makedirs(os.path.dirname(local_path), exist_ok=True)\n        \n        # 写入本地文件\n        with open(local_path, 'w', encoding='utf-8') as file:\n            file.write(content)\n            \n        print(f\"文件已从沙箱下载到本地: {local_path}\")\n        return True\n    except Exception as e:\n        print(f\"从沙箱下载文件时出错: {str(e)}\")\n        return False\n```\n\n## 6. 
实际应用案例\n\n### 6.1 数据分析与可视化\n\nE2B沙箱特别适合数据分析和可视化任务，可以：\n\n- 加载和处理各种格式的数据（CSV、JSON、Excel等）\n- 使用pandas进行数据清洗和转换\n- 使用matplotlib、seaborn等创建可视化\n- 生成分析报告\n\n### 6.2 文件处理与转换\n\nE2B沙箱可以处理各种文件格式的转换和处理：\n\n- 文本文件处理（如日志分析）\n- 图像处理和转换\n- 数据格式转换（如CSV到JSON）\n- 文档生成（如生成HTML或PDF报告）\n\n### 6.3 Web爬虫与API调用\n\nE2B沙箱可以执行网络相关任务：\n\n- 使用requests或BeautifulSoup进行网页爬取\n- 调用各种API并处理响应\n- 下载和处理网络资源\n\n## 7. 最佳实践与注意事项\n\n### 7.1 安全考虑\n\n虽然E2B沙箱提供了隔离环境，但在使用时仍需注意：\n\n- 不要在沙箱中处理敏感数据\n- 避免执行未经验证的用户输入代码\n- 限制沙箱的网络访问权限\n- 定期关闭和重新创建沙箱实例\n\n### 7.2 资源管理\n\nE2B沙箱会消耗系统资源，因此：\n\n- 在不需要时关闭沙箱（使用`close()`方法）\n- 避免在单个沙箱中运行过多或过大的任务\n- 监控沙箱的内存和CPU使用情况\n\n### 7.3 错误处理\n\n在与E2B沙箱交互时，应当实施健壮的错误处理：\n\n- 捕获并处理代码执行异常\n- 验证沙箱初始化是否成功\n- 提供有意义的错误消息给用户\n- 实现重试机制处理临时故障\n\n## 8. 总结\n\nE2B沙箱为智能代理提供了强大的代码执行能力，使其能够处理各种复杂任务。通过将E2B沙箱与ReactAgent集成，我们可以创建能够执行代码、处理数据、创建可视化，甚至与文件系统交互的智能系统。\n\n正确使用E2B沙箱需要理解其核心概念、实现方式和最佳实践。通过本文的指导，开发者应能够有效地将E2B沙箱集成到自己的智能代理系统中，并充分利用其强大功能。\n\n## 9. 参考资源\n\n- [E2B官方文档](https://e2b.dev/docs)\n- [E2B Code Interpreter SDK](https://github.com/e2b-dev/code-interpreter)\n- [LangChain工具集成指南](https://python.langchain.com/docs/integrations/tools)\n- [ReactAgent文档](https://python.langchain.com/docs/modules/agents/agent_types/react)"
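Section 7.3 recommends a retry mechanism for transient failures. A minimal sketch of such a wrapper, assuming any callable with the shape of `e2b_tool.invoke`; the flaky function below only simulates transient sandbox errors:

```python
import time


def invoke_with_retry(tool_invoke, payload, max_retries=3, backoff_s=0.0):
    """Call a sandbox tool, retrying on exceptions up to max_retries times."""
    last_err = None
    for attempt in range(1, max_retries + 1):
        try:
            return tool_invoke(payload)
        except Exception as e:  # in real code, catch narrower exception types
            last_err = e
            if attempt < max_retries and backoff_s:
                time.sleep(backoff_s)
    raise RuntimeError(f"Tool failed after {max_retries} attempts: {last_err}")


# Simulated flaky tool: fails twice, then succeeds
calls = {"n": 0}

def flaky_invoke(payload):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient sandbox error")
    return {"stdout": "ok"}


result = invoke_with_retry(flaky_invoke, {"code": "print('hi')"})
print(result)  # {'stdout': 'ok'}
```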
  },
  {
    "path": "log_analyzer.py",
    "content": "import re\nimport sys\nimport argparse\nfrom collections import defaultdict\nimport json\n\ndef parse_log_file(file_path):\n    \"\"\"Parse the execution log file and extract agent interactions.\"\"\"\n    with open(file_path, 'r', encoding='utf-8') as f:\n        content = f.read()\n    \n    # Extract different sections of the log\n    sections = content.split(\"================================ Human Message =================================\")\n    if len(sections) > 1:\n        main_content = sections[1]  # Skip header\n    else:\n        main_content = content\n    \n    # Extract messages\n    messages = []\n    \n    # Pattern for AI messages\n    ai_pattern = r\"================================== Ai Message ==================================\\nName: (\\w+)\\n\\n(.*?)(?=(==================================|$))\"\n    ai_matches = re.finditer(ai_pattern, main_content, re.DOTALL)\n    \n    for match in ai_matches:\n        agent_name = match.group(1)\n        message_content = match.group(2).strip()\n        \n        # Check if message has tool calls\n        tool_calls = []\n        tool_call_pattern = r\"Tool Calls:\\n(.*?)(?=\\n==================================|$)\"\n        tool_call_match = re.search(tool_call_pattern, message_content, re.DOTALL)\n        if tool_call_match:\n            # Extract tool calls\n            tool_calls_text = tool_call_match.group(1)\n            tool_call_entries = re.findall(r\"  (\\w+) \\(([^)]+)\\)\", tool_calls_text)\n            tool_calls = [{\"name\": name, \"id\": call_id} for name, call_id in tool_call_entries]\n            \n            # Remove tool calls from the message content\n            message_content = re.sub(r\"Tool Calls:.*?(?=\\n==================================|$)\", \"\", message_content, flags=re.DOTALL).strip()\n        \n        messages.append({\n            \"role\": \"agent\",\n            \"agent\": agent_name,\n            \"content\": message_content,\n            
\"tool_calls\": tool_calls\n        })\n    \n    # Pattern for Tool messages\n    tool_pattern = r\"================================= Tool Message =================================\\nName: (\\w+)\\n\\n(.*?)(?=(==================================|$))\"\n    tool_matches = re.finditer(tool_pattern, main_content, re.DOTALL)\n    \n    for match in tool_matches:\n        tool_name = match.group(1)\n        tool_content = match.group(2).strip()\n        \n        messages.append({\n            \"role\": \"tool\",\n            \"tool\": tool_name,\n            \"content\": tool_content\n        })\n    \n    # Sort messages by their position in the log\n    messages.sort(key=lambda x: main_content.find(x[\"content\"]))\n    \n    return messages\n\ndef analyze_agent_interactions(messages):\n    \"\"\"Analyze the interactions between agents.\"\"\"\n    interactions = []\n    current_sender = None\n    tool_call_map = {}\n    \n    for i, msg in enumerate(messages):\n        if msg[\"role\"] == \"agent\":\n            current_sender = msg[\"agent\"]\n            # Check if this agent is using tool calls\n            for tool_call in msg.get(\"tool_calls\", []):\n                tool_name = tool_call[\"name\"]\n                tool_id = tool_call[\"id\"]\n                tool_call_map[tool_id] = {\n                    \"sender\": current_sender,\n                    \"tool\": tool_name\n                }\n                interactions.append({\n                    \"step\": i,\n                    \"from\": current_sender,\n                    \"to\": f\"SYSTEM ({tool_name})\",\n                    \"action\": f\"Called tool {tool_name}\",\n                    \"content\": f\"Tool call ID: {tool_id}\"\n                })\n        elif msg[\"role\"] == \"tool\":\n            # Find which agent invoked this tool\n            for prev_msg in reversed(messages[:i]):\n                if prev_msg[\"role\"] == \"agent\" and any(tc[\"name\"] == msg[\"tool\"] for tc in 
prev_msg.get(\"tool_calls\", [])):\n                    sender = prev_msg[\"agent\"]\n                    break\n            else:\n                sender = \"SYSTEM\"\n            \n            interactions.append({\n                \"step\": i,\n                \"from\": f\"SYSTEM ({msg['tool']})\",\n                \"to\": sender,\n                \"action\": f\"Tool response\",\n                \"content\": msg[\"content\"]\n            })\n    \n    return interactions\n\ndef visualize_interactions(interactions):\n    \"\"\"Visualize the interactions between agents.\"\"\"\n    print(\"\\n\" + \"=\"*100)\n    print(\" \"*40 + \"AGENT INTERACTIONS SUMMARY\")\n    print(\"=\"*100 + \"\\n\")\n    \n    for idx, interaction in enumerate(interactions):\n        print(f\"[{idx+1}] {interaction['from']} → {interaction['to']}\")\n        print(f\"    Action: {interaction['action']}\")\n        content = interaction['content']\n        if len(content) > 100:\n            content = content[:97] + \"...\"\n        print(f\"    Content: {content}\\n\")\n\ndef visualize_conversation_flow(messages):\n    \"\"\"Visualize the conversation flow between agents.\"\"\"\n    print(\"\\n\" + \"=\"*100)\n    print(\" \"*40 + \"CONVERSATION FLOW\")\n    print(\"=\"*100 + \"\\n\")\n    \n    for idx, message in enumerate(messages):\n        if message[\"role\"] == \"agent\":\n            agent_name = message[\"agent\"]\n            print(f\"[{idx+1}] Agent: {agent_name}\")\n            content = message[\"content\"]\n            if len(content) > 150:\n                content = content[:147] + \"...\"\n            print(f\"    Content: {content}\")\n            \n            if message.get(\"tool_calls\"):\n                tools = \", \".join([tc[\"name\"] for tc in message[\"tool_calls\"]])\n                print(f\"    Tools Called: {tools}\")\n        else:\n            print(f\"[{idx+1}] Tool: {message['tool']}\")\n            content = message[\"content\"]\n            if 
len(content) > 100:\n                content = content[:97] + \"...\"\n            print(f\"    Response: {content}\")\n        print()\n\ndef main():\n    parser = argparse.ArgumentParser(description='Analyze Mentis execution logs.')\n    parser.add_argument('log_file', help='Path to the log file')\n    parser.add_argument('--format', choices=['interactions', 'flow', 'all'], default='all',\n                      help='Output format: interactions, flow, or all')\n    \n    args = parser.parse_args()\n    \n    try:\n        messages = parse_log_file(args.log_file)\n        interactions = analyze_agent_interactions(messages)\n        \n        if args.format in ['interactions', 'all']:\n            visualize_interactions(interactions)\n        \n        if args.format in ['flow', 'all']:\n            visualize_conversation_flow(messages)\n            \n    except Exception as e:\n        print(f\"Error: {e}\")\n        sys.exit(1)\n\nif __name__ == \"__main__\":\n    main()\n"
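The AI-message pattern used in `parse_log_file` can be exercised on a tiny synthetic log snippet (the agent name and message below are invented for illustration; the separator is built once so the sample and the pattern stay in sync):

```python
import re

# Separator line in the LangChain pretty-print format the analyzer parses
header = "=" * 34 + " Ai Message " + "=" * 34

# Synthetic log snippet
sample = header + "\nName: supervisor\n\nDelegating research task.\n"

# Same shape as parse_log_file's ai_pattern: capture agent name and body
ai_pattern = header + r"\nName: (\w+)\n\n(.*?)(?=(==================================|$))"

m = re.search(ai_pattern, sample, re.DOTALL)
print(m.group(1), "|", m.group(2).strip())  # supervisor | Delegating research task.
```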
  },
  {
    "path": "pyproject.toml",
    "content": "[build-system]\nrequires = [\"setuptools>=42\", \"wheel\"]\nbuild-backend = \"setuptools.build_meta\"\nreadme = \"README.md\"\nrequires-python = \">=3.11\"\n\n[project]\nname = \"mentis\"\nversion = \"0.1.0\"\ndescription = \"A Multi-Agents project based on langgraph\"\nrequires-python = \">=3.11\"\ndependencies = [\n    \"dotenv>=0.9.9\",\n    \"langchain-community>=0.3.19\",\n    \"langchain-core>=0.3.45\",\n    \"langchain-openai>=0.3.8\",\n    \"langgraph>=0.3.11\",\n    \"pydantic>=2.10.6\",\n    \"typing-extensions>=4.12.2\",\n    \"python-dotenv>=1.0.0\",\n    \"firecrawl-py\",\n    \"wikipedia>=1.4.0\",\n    \"serpapi>=0.1.5\",\n    \"google-search-results>=2.4.2\",\n    \"duckduckgo-search>=7.5.2\",\n    \"arxiv>=2.1.3\",\n    \"rizaio>=0.9.0\",\n    \"e2b-code-interpreter>=1.1.0\",\n    \"fastapi>=0.115.11\",\n    \"uvicorn>=0.34.0\",\n    \"sse-starlette>=2.2.1\",\n    \"exa-py>=1.9.1\",\n    \"tavily-python>=0.5.1\",\n    \"replicate>=1.0.4\",\n    \"langchain-mcp-adapters>=0.0.7\",\n    \"mcp>=1.6.0\",\n    \"playwright>=1.51.0\",\n    \"pillow>=11.2.1\",\n    \"yfinance>=0.2.55\",\n]\n\n[tool.setuptools]\npackages = [\"core\"]\n"
  },
  {
    "path": "requirements.txt",
    "content": "# This file was autogenerated by uv via the following command:\n#    uv pip compile pyproject.toml -o requirements.txt\naiohappyeyeballs==2.6.1\n    # via aiohttp\naiohttp==3.11.14\n    # via langchain-community\naiosignal==1.3.2\n    # via aiohttp\nannotated-types==0.7.0\n    # via pydantic\nanyio==4.9.0\n    # via\n    #   httpx\n    #   openai\n    #   rizaio\narxiv==2.1.3\n    # via mentis (pyproject.toml)\nattrs==25.3.0\n    # via\n    #   aiohttp\n    #   e2b\n    #   e2b-code-interpreter\nbeautifulsoup4==4.13.3\n    # via wikipedia\ncertifi==2025.1.31\n    # via\n    #   httpcore\n    #   httpx\n    #   requests\ncharset-normalizer==3.4.1\n    # via requests\nclick==8.1.8\n    # via duckduckgo-search\ndataclasses-json==0.6.7\n    # via langchain-community\ndistro==1.9.0\n    # via\n    #   openai\n    #   rizaio\ndotenv==0.9.9\n    # via mentis (pyproject.toml)\nduckduckgo-search==7.5.2\n    # via mentis (pyproject.toml)\ne2b==1.1.0\n    # via e2b-code-interpreter\ne2b-code-interpreter==1.1.0\n    # via mentis (pyproject.toml)\nfeedparser==6.0.11\n    # via arxiv\nfirecrawl-py==1.14.1\n    # via mentis (pyproject.toml)\nfrozenlist==1.5.0\n    # via\n    #   aiohttp\n    #   aiosignal\ngoogle-search-results==2.4.2\n    # via mentis (pyproject.toml)\ngreenlet==3.1.1\n    # via sqlalchemy\nh11==0.14.0\n    # via httpcore\nhttpcore==1.0.7\n    # via\n    #   e2b\n    #   httpx\nhttpx==0.28.1\n    # via\n    #   e2b\n    #   e2b-code-interpreter\n    #   langgraph-sdk\n    #   langsmith\n    #   openai\n    #   rizaio\nhttpx-sse==0.4.0\n    # via langchain-community\nidna==3.10\n    # via\n    #   anyio\n    #   httpx\n    #   requests\n    #   yarl\njiter==0.9.0\n    # via openai\njsonpatch==1.33\n    # via langchain-core\njsonpointer==3.0.0\n    # via jsonpatch\nlangchain==0.3.20\n    # via langchain-community\nlangchain-community==0.3.19\n    # via mentis (pyproject.toml)\nlangchain-core==0.3.45\n    # via\n    #   mentis (pyproject.toml)\n    
#   langchain\n    #   langchain-community\n    #   langchain-openai\n    #   langchain-text-splitters\n    #   langgraph\n    #   langgraph-checkpoint\n    #   langgraph-prebuilt\nlangchain-openai==0.3.9\n    # via mentis (pyproject.toml)\nlangchain-text-splitters==0.3.6\n    # via langchain\nlanggraph==0.3.11\n    # via mentis (pyproject.toml)\nlanggraph-checkpoint==2.0.20\n    # via\n    #   langgraph\n    #   langgraph-prebuilt\nlanggraph-prebuilt==0.1.3\n    # via langgraph\nlanggraph-sdk==0.1.57\n    # via langgraph\nlangsmith==0.3.15\n    # via\n    #   langchain\n    #   langchain-community\n    #   langchain-core\nlxml==5.3.1\n    # via duckduckgo-search\nmarshmallow==3.26.1\n    # via dataclasses-json\nmsgpack==1.1.0\n    # via langgraph-checkpoint\nmultidict==6.2.0\n    # via\n    #   aiohttp\n    #   yarl\nmypy-extensions==1.0.0\n    # via typing-inspect\nnest-asyncio==1.6.0\n    # via firecrawl-py\nnumpy==2.2.4\n    # via langchain-community\nopenai==1.66.3\n    # via langchain-openai\norjson==3.10.15\n    # via\n    #   langgraph-sdk\n    #   langsmith\npackaging==24.2\n    # via\n    #   e2b\n    #   langchain-core\n    #   langsmith\n    #   marshmallow\nprimp==0.14.0\n    # via duckduckgo-search\npropcache==0.3.0\n    # via\n    #   aiohttp\n    #   yarl\nprotobuf==5.29.3\n    # via e2b\npydantic==2.10.6\n    # via\n    #   mentis (pyproject.toml)\n    #   firecrawl-py\n    #   langchain\n    #   langchain-core\n    #   langsmith\n    #   openai\n    #   pydantic-settings\n    #   rizaio\npydantic-core==2.27.2\n    # via pydantic\npydantic-settings==2.8.1\n    # via langchain-community\npython-dateutil==2.9.0.post0\n    # via e2b\npython-dotenv==1.0.1\n    # via\n    #   mentis (pyproject.toml)\n    #   dotenv\n    #   firecrawl-py\n    #   pydantic-settings\npyyaml==6.0.2\n    # via\n    #   langchain\n    #   langchain-community\n    #   langchain-core\nregex==2024.11.6\n    # via tiktoken\nrequests==2.32.3\n    # via\n    #   arxiv\n    #   
firecrawl-py\n    #   google-search-results\n    #   langchain\n    #   langchain-community\n    #   langsmith\n    #   requests-toolbelt\n    #   serpapi\n    #   tiktoken\n    #   wikipedia\nrequests-toolbelt==1.0.0\n    # via langsmith\nrizaio==0.9.0\n    # via mentis (pyproject.toml)\nserpapi==0.1.5\n    # via mentis (pyproject.toml)\nsgmllib3k==1.0.0\n    # via feedparser\nsix==1.17.0\n    # via python-dateutil\nsniffio==1.3.1\n    # via\n    #   anyio\n    #   openai\n    #   rizaio\nsoupsieve==2.6\n    # via beautifulsoup4\nsqlalchemy==2.0.39\n    # via\n    #   langchain\n    #   langchain-community\ntenacity==9.0.0\n    # via\n    #   langchain-community\n    #   langchain-core\ntiktoken==0.9.0\n    # via langchain-openai\ntqdm==4.67.1\n    # via openai\ntyping-extensions==4.12.2\n    # via\n    #   mentis (pyproject.toml)\n    #   anyio\n    #   beautifulsoup4\n    #   e2b\n    #   langchain-core\n    #   openai\n    #   pydantic\n    #   pydantic-core\n    #   rizaio\n    #   sqlalchemy\n    #   typing-inspect\ntyping-inspect==0.9.0\n    # via dataclasses-json\nurllib3==2.3.0\n    # via requests\nwebsockets==15.0.1\n    # via firecrawl-py\nwikipedia==1.4.0\n    # via mentis (pyproject.toml)\nyarl==1.18.3\n    # via aiohttp\nzstandard==0.23.0\n    # via langsmith\n"
  },
  {
    "path": "setup.py",
    "content": "from setuptools import setup\n\nsetup()\n"
  },
  {
    "path": "super_agents/__init__.py",
    "content": ""
  },
  {
    "path": "super_agents/browser_use/README.md",
    "content": "n# Browser Agent (基于 LangGraph) - super_agents/browser_use\n\n## 概述\n\n本项目实现了一个基于 LangGraph 框架的 Web 浏览和交互 Agent。其核心目标是让一个大型语言模型 (LLM) 能够像人一样理解任务指令，自主地控制浏览器（通过 Playwright）来访问网页、分析内容、与页面元素交互（点击、输入、滚动等），并最终完成用户指定的任务，例如信息提取、表单填写、在线搜索等。\n\n该 Agent 采用了多模态感知的设计思路，结合了传统的 DOM/Accessibility Tree 分析和可选的视觉语言模型 (VLM) 分析，以期在复杂网页上获得更鲁棒的理解和定位能力。\n\n## 核心技术栈\n\n* **流程编排:** LangGraph (LangChain 的状态图编排库)\n* **浏览器自动化:** Playwright (异步 Python 版本)\n* **模型调用:** LangChain ChatModels (`langchain-openai`, `langchain-community` 等)\n* **语言模型 (LLM/VLM):**\n    * **规划/决策 LLM:** 可配置，支持 OpenAI, Groq, xAI (Grok), 及其他 OpenAI 兼容 API (通过 `llm.py` 和 `.env` 配置)。\n    * **视觉分析 VLM:** 可选，通过 OpenRouter 调用支持 Vision 的模型 (如 Qwen-VL, GPT-4o, Claude 3.5 Sonnet 等) (通过 `detector.py` 和 `.env` 配置)。\n* **依赖管理:** `uv` (或 `pip`)\n* **配置:** `.env` 文件\n\n## 项目架构\n\n项目主要文件和目录结构如下：\n\n```\nsuper_agents/\n└── browser_use/              # Agent 根目录\n    ├── agent/                # LangGraph 核心实现\n    │   ├── __init__.py\n    │   ├── graph.py          # 定义 LangGraph 图结构、节点连接、条件边\n    │   ├── nodes.py          # 定义图中各节点 (Node) 的具体执行逻辑 (AgentNodes 类)\n    │   ├── state.py          # 定义 Agent 在图中流转的状态 (AgentState)\n    │   ├── schemas.py        # 定义数据模型 (如动作指令 Action Schema, VLM 输出 Schema)\n    │   └── prompts.py        # 管理发送给规划 LLM 和 VLM 的 Prompt 模板\n    │\n    ├── browser/              # 浏览器交互底层实现 (基于原始项目代码)\n    │   ├── __init__.py\n    │   ├── browser.py        # 核心 Browser 类，封装 Playwright 操作、感知方法 (get_content, update_state)\n    │   ├── detector.py       # 视觉检测器类，实现 VLM 调用逻辑\n    │   ├── models.py         # 定义浏览器状态、元素等 Pydantic 模型\n    │   ├── utils.py          # 浏览器相关的工具函数\n    │   └── findVisibleInteractiveElements.js # 用于 DOM 元素检测的 JS 脚本\n    │\n    ├── llm/                  # LLM 相关实现\n    │   ├── __init__.py\n    │   └── llm.py            # 定义 ChatOpenRouter (VLM 调用), initialize_llms (规划 LLM 初始化), generate_structured_output\n    │\n    ├── main.py               # Agent 的主入口脚本\n    ├── requirements.txt 
     # Python 依赖列表\n    ├── README.md             # 本文件\n    └── .env                  # 环境变量配置文件 (需要手动创建)\n```\n\n## 核心概念与设计\n\n### 1. LangGraph 状态机\n\nAgent 的核心控制流由 LangGraph 管理。它被实现为一个状态机 (`StateGraph`)：\n\n* **状态 (State):** `agent/state.py` 中的 `AgentState` (TypedDict) 定义了在节点间传递的数据，包含当前任务、浏览器内容/状态、LLM 解析出的动作、历史记录、错误信息等。\n* **节点 (Nodes):** `agent/nodes.py` 中的 `AgentNodes` 类定义了主要的处理步骤，作为图的节点：\n    * `get_browser_state`: 调用 `Browser` 类的感知方法 (当前是 `get_content`) 获取页面信息。\n    * `plan_action`: 将感知信息和任务包装成 Prompt，调用**规划 LLM** (通过 `llm.py` 的 `generate_structured_output`) 获取结构化的下一步动作 JSON。\n    * `execute_action`: 解析 `plan_action` 返回的动作 JSON，并调用 `Browser` 类中相应的交互方法 (如 `Maps_to`, `click`, `type`, `scroll`, `wait`) 执行操作。\n* **边 (Edges):** `agent/graph.py` 定义了节点间的固定跳转（如 `get_browser_state` -> `plan_action`）和条件跳转（如 `execute_action` 后根据 `should_end` 函数判断是结束 `END` 还是回到 `get_browser_state`）。\n\n### 2. 感知 (Perception)\n\nAgent 通过 `browser.py` 中的 `Browser.get_content()` 方法（被 `get_browser_state` 节点调用）来理解当前网页状态。该方法整合了多种信息源，旨在为 LLM 提供丰富且相对简洁的页面表示：\n\n* **简化 DOM:** 通过注入并执行 `SIMPLIFY_PAGE_SCRIPT` JavaScript，移除无关标签（脚本、样式等），提取关键交互元素及其属性，并为这些元素添加 `x-pw-id` 唯一标识。结果以伪 HTML 字符串形式返回。\n* **可访问性树 (AX Tree):** (当前实现中暂时禁用/存在错误) 理论上通过 `page.accessibility.snapshot()` 获取页面的语义结构信息（角色、名称等），以 JSON 字符串形式返回。\n* **视觉元素 (VLM):** (可选，需配置)\n    * 如果 `.env` 文件中配置了 VLM (`OPENROUTER_API_KEY`, `VLM_API_MODEL`)，`get_content` 会调用 `Detector` 实例。\n    * `Detector` (在 `browser/detector.py` 中) 使用 LangChain 的 `ChatOpenRouter` (在 `llm.py` 中定义) 调用配置的 VLM API。\n    * 通过精心设计的 Prompt (`VLM_PROMPT_TEMPLATE`) 请求 VLM 返回页面交互元素的**描述、类型和边界框百分比坐标** (JSON 格式)。\n    * `Detector` 解析 VLM 返回的 JSON，创建 `InteractiveElement` 对象列表（目前坐标是占位符）。\n    * `get_content` 将这些视觉元素信息格式化为**文本摘要** (包含 VLM 分配的 ID 和边界框信息)。\n* **合并与截断:** `get_content` 将 URL、简化 DOM、AX Tree (如果成功)、视觉元素摘要合并为一个长的文本字符串，并在超过 `max_length` 时进行截断，最后返回给 `plan_action` 节点。\n\n### 3. 
规划 (Planning)\n\n* `plan_action` 节点接收 `get_content` 返回的**混合文本字符串**。\n* `agent/prompts.py` 中的 `create_agent_prompt` 函数将任务描述、历史记录、错误信息（如果有）和这段混合文本整合成一个 Prompt。\n* 该 Prompt 被发送给**规划 LLM**（通过 `llm.py` 中的 `generate_structured_output` 函数，该函数使用 LangChain 的 `.with_structured_output()` 功能）。\n* LLM 被要求分析输入信息，决定下一步动作，并**严格按照 `agent/schemas.py` 中定义的 `LLMResponse` Pydantic 模型返回一个包含具体动作指令的 JSON**。Prompt 中包含了对生成**健壮 CSS 选择器**（优先使用稳定 ID、aria-label、文本内容，结合 `x-pw-id`）的明确指导。\n\n### 4. 行动 (Action Execution)\n\n* `execute_action` 节点接收规划 LLM 返回的结构化动作 JSON (存储在 `state['parsed_action']`)。\n* 它解析出动作类型 (`type`) 和参数 (`selector`, `url`, `text`, `direction` 等)。\n* 根据动作类型，调用 `browser/browser.py` 中 `Browser` 类对应的**简单交互方法**:\n    * `Maps_to(url)`\n    * `click(selector)`\n    * `type(selector, text)`\n    * `scroll(direction)`\n    * `wait(milliseconds)`\n* 这些方法内部使用 Playwright 的 `page.goto`, `page.locator(...).click`, `page.locator(...).fill`, `page.evaluate(...)` 等函数执行实际的浏览器操作。\n* 如果动作是 `finish` 或 `error`，图流程会根据 `graph.py` 中的 `should_end` 函数判断并终止。\n\n## 安装与配置\n\n1.  **环境:** 推荐使用 Python 3.10+。\n2.  **依赖安装:**\n    * 克隆项目。\n    * 进入 `super_agents/browser_use/` 目录。\n    * 创建并激活虚拟环境 (使用 uv):\n        ```bash\n        uv venv\n        source .venv/bin/activate  # Linux/macOS\n        # 或者 .venv\\Scripts\\activate # Windows\n        ```\n    * 安装依赖项 (使用 uv):\n        ```bash\n        uv sync\n        ```\n3.  **Playwright 浏览器:** 运行 `playwright install` (至少需要 `playwright install chromium`) 来下载浏览器驱动。\n4.  **环境变量 (`.env` 文件):**\n    * 在 `super_agents/browser_use/` 目录下创建一个名为 `.env` 的文件。\n    * 参考我们之前讨论的 `.env` 示例，**至少需要配置**：\n        * **规划 LLM:** 选择一个 Provider (如 `openai`), 设置 `LLM_PROVIDER`, `LLM_MODEL_NAME`, 以及对应的 API Key (如 `OPENAI_API_KEY`)。\n        * **VLM (可选):** 如果要启用视觉分析，设置 `OPENROUTER_API_KEY` 和 `VLM_API_MODEL` (设置为 OpenRouter 上支持视觉的模型 ID，如 `openai/gpt-4.1`等)。\n    * 确保 `.env` 文件被正确加载（`main.py` 和 `llm.py` 中包含 `load_dotenv()`）。\n\n## 如何运行\n\n1.  确保已完成安装和配置。\n2.  激活虚拟环境。\n3.  
从 `super_agents/` 目录（即 `browser_use` 的**上级**目录）运行 `main.py`：\n\n    ```bash\n    # 基本运行\n    python -m browser_use.main \"您的任务描述\"\n\n    # 示例：访问 Hacker News 并获取导航栏信息\n    python -m browser_use.main \"访问 news.ycombinator.com，返回页面导航栏信息\"\n\n    # 示例：使用其他命令行参数（如果有定义，如下面的最大步骤数）\n    python -m browser_use.main \"您的任务描述\" --max-steps 30\n    ```\n\n## 当前状态、局限性与未来工作\n\n* **核心流程:** Agent 的基本 LangGraph 流程（感知-规划-行动循环）、浏览器操作（导航、点击、输入、滚动、等待）、规划 LLM 调用、可选的 VLM 调用**已经跑通**，能够完成一些多步骤的 Web 任务。\n* **视觉集成 (部分):** VLM 调用流程已集成到 `Detector` 类并通过 `get_content` 触发（需配置 API Key 和 Model）。VLM 能够返回 JSON 格式的检测结果，并且可以被成功解析为内部数据结构 (`InteractiveElement`)。\n* **局限性 & 待完善:**\n    1.  **VLM 坐标处理:** VLM 返回的是百分比坐标，但在解析时 (`_parse_vlm_detections`) 目前使用的是**占位符像素坐标**。需要获取截图的实际尺寸，实现准确的百分比到像素的转换，才能真正利用视觉信息进行定位。\n    2.  **动作执行方式:** 当前 `execute_action` 仍然**完全依赖规划 LLM 生成的 CSS 选择器**。尚未实现基于 VLM 的元素 ID 或坐标进行点击/输入的操作，这限制了视觉能力的实际应用，特别是在 CSS 选择器不可靠的复杂页面上。\n    3.  **感知信息完整性:**\n        * **内容截断:** `get_content` 方法返回的内容会因为 `max_length` 限制而被截断，影响需要完整页面信息的任务（如“摘录全文”）。需要增大 `max_length` 或实现更智能的内容提取/滚动策略。\n        * **AX Tree 缺失:** 获取 Accessibility Tree 的代码目前被注释或存在错误，导致缺少重要的语义信息。需要修复 `page.accessibility.snapshot()` 调用。\n    4.  **滚动策略:** 当前依靠 Prompt 指示 LLM 进行滚动。可能需要更鲁棒的机制来处理长页面，例如 Agent 内部判断是否需要滚动，或者让 LLM 能获取滚动状态信息。\n    5.  **Pydantic V1 警告:** 调用规划 LLM 的 `with_structured_output` 时仍然出现 Pydantic V1 警告，建议保持 LangChain 相关库和 Pydantic 为最新版本。\n    6.  **错误处理:** 当前错误处理相对简单（例如 VLM 解析失败直接跳过，执行错误直接终止图），可以增加更复杂的重试、回退或用户介入机制。\n    7.  **VLM 稳定性:** VLM 能否稳定、准确地返回所需的 JSON 格式和边界框，高度依赖所选模型和 Prompt，可能需要进一步调优。\n\n* **未来工作:**\n    * 修复 AX Tree 获取。\n    * 实现 VLM 百分比坐标到像素坐标的准确转换。\n    * 增强 `execute_action` 和 `Browser` 类以支持基于坐标的交互。\n    * 优化 Prompt，指导 LLM 输出 VLM 元素 ID 或在 CSS 选择器失败时提供坐标作为备选。\n    * 实现更智能的滚动策略以处理长页面和完整内容提取。\n    * 持续更新依赖库，解决 Pydantic 警告。\n    * 增强错误处理和恢复能力。"
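Limitation 1 above (percentage-to-pixel conversion) is mechanically simple once the screenshot dimensions are known. A minimal sketch, assuming the VLM reports boxes as (x, y, width, height) percentages in the 0-100 range; the exact shape `_parse_vlm_detections` expects may differ:

```python
def bbox_percent_to_pixels(bbox_pct, viewport_width, viewport_height):
    """Convert an (x, y, w, h) bounding box in 0-100 percentages into
    integer pixel coordinates for a screenshot of the given size."""
    x_pct, y_pct, w_pct, h_pct = bbox_pct
    x = round(x_pct / 100 * viewport_width)
    y = round(y_pct / 100 * viewport_height)
    w = round(w_pct / 100 * viewport_width)
    h = round(h_pct / 100 * viewport_height)
    return {
        "x": x, "y": y, "width": w, "height": h,
        # The center point is a convenient click target for coordinate-based actions
        "center": (x + w // 2, y + h // 2),
    }


# Example: a box at 10%/20% with a 30% x 5% extent on a 1280x800 screenshot
print(bbox_percent_to_pixels((10, 20, 30, 5), 1280, 800))
```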
  },
  {
    "path": "super_agents/browser_use/__init__.py",
    "content": ""
  },
  {
    "path": "super_agents/browser_use/agent/__init__.py",
    "content": "# super_agents/browser_use/agent/__init__.py\n\"\"\"\nBrowser agent module that handles browser automation using LLM guidance.\n\"\"\"\n"
  },
  {
    "path": "super_agents/browser_use/agent/graph.py",
    "content": "# super_agents/browser_use/agent/graph.py\nimport logging\nfrom typing import Dict, Any\n\nfrom langchain_core.runnables.base import RunnableSerializable\nfrom langgraph.graph import StateGraph, END\n\nfrom .state import AgentState\nfrom .nodes import AgentNodes\nfrom ..browser.browser import Browser\n\nlogger = logging.getLogger(__name__)\n\nNODE_GET_BROWSER_STATE = \"get_browser_state\"\nNODE_PLAN_ACTION = \"plan_action\"\nNODE_EXECUTE_ACTION = \"execute_action\"\n\n# --- UPDATED Conditional Edge Logic ---\ndef should_end(state: AgentState) -> bool:\n    \"\"\"Determines if the graph should end.\"\"\"\n    action = state.get(\"parsed_action\", {})\n    action_type = action.get(\"type\")\n    error_occurred = state.get(\"error\") is not None # Check if execute_action reported an error\n\n    # End if the LLM planned action is 'finish' or 'error'\n    if action_type == \"finish\":\n        logger.info(\"Graph execution: 'finish' action planned. Ending.\")\n        return True\n    if action_type == \"error\":\n        # Log the error message from the action payload\n        logger.error(f\"Graph execution: 'error' action planned by LLM: {action.get('message', 'Unknown error')}. Ending.\")\n        return True\n\n    # End if the execute_action node reported an error in the state\n    # Note: Depending on desired behavior, you might want to retry instead of ending on execution errors\n    # if error_occurred:\n    #     logger.error(f\"Graph execution: Error occurred during execution: {state['error']}. 
Ending.\")\n    #     return True # Uncomment this line if ANY execution error should terminate the graph\n\n    return False # Continue otherwise\n\ndef create_graph_app(browser: Browser, llm: RunnableSerializable):\n    \"\"\"\n    Creates the LangGraph application using class-based nodes.\n    \"\"\"\n    agent_nodes = AgentNodes(browser=browser, llm=llm)\n    workflow = StateGraph(AgentState)\n\n    workflow.add_node(NODE_GET_BROWSER_STATE, agent_nodes.get_browser_state)\n    workflow.add_node(NODE_PLAN_ACTION, agent_nodes.plan_action)\n    workflow.add_node(NODE_EXECUTE_ACTION, agent_nodes.execute_action)\n\n    workflow.set_entry_point(NODE_GET_BROWSER_STATE)\n    workflow.add_edge(NODE_GET_BROWSER_STATE, NODE_PLAN_ACTION)\n    workflow.add_edge(NODE_PLAN_ACTION, NODE_EXECUTE_ACTION)\n\n    # After executing action, decide whether to end or loop back\n    workflow.add_conditional_edges(\n        NODE_EXECUTE_ACTION,\n        # Function to decide the next step based on the state *after* execution\n        lambda state: END if should_end(state) else NODE_GET_BROWSER_STATE,\n        {\n            END: END,\n            NODE_GET_BROWSER_STATE: NODE_GET_BROWSER_STATE\n        }\n    )\n\n    logger.info(\"Compiling LangGraph workflow...\")\n    app = workflow.compile()\n    logger.info(\"LangGraph workflow compiled successfully.\")\n    return app"
  },
  {
    "path": "super_agents/browser_use/agent/nodes.py",
    "content": "# super_agents/browser_use/agent/nodes.py\nimport asyncio\nimport logging\nfrom typing import Dict, Any, Optional\n\n# --- LangChain Core Import for Type Hint ---\nfrom langchain_core.runnables.base import RunnableSerializable # <--- Import this\n\nfrom .state import AgentState\nfrom .schemas import (\n    BaseAction, LLMResponse\n)\nfrom .prompts import create_agent_prompt\n# --- CORRECTED LLM IMPORT ---\n# Import only the necessary functions/classes that actually exist in llm.py\nfrom ..llm import generate_structured_output\n\n# Import the correct Browser from the browser subdirectory\nfrom ..browser.browser import Browser\n\nlogger = logging.getLogger(__name__)\n\n# --- Class to hold nodes and dependencies ---\nclass AgentNodes:\n    \"\"\"Encapsulates agent nodes and their dependencies (browser, llm).\"\"\"\n    # --- CORRECTED TYPE HINT for llm ---\n    def __init__(self, browser: Browser, llm: RunnableSerializable): # <--- Use RunnableSerializable\n        if not isinstance(llm, RunnableSerializable):\n             logger.warning(f\"LLM instance provided to AgentNodes is not of type RunnableSerializable (actual type: {type(llm)}).\")\n        self.browser = browser\n        self.llm = llm\n        logger.info(\"AgentNodes initialized with browser and llm instances.\")\n\n    # --- Node method implementations remain the same ---\n    async def get_browser_state(self, state: AgentState) -> Dict[str, Any]:\n        \"\"\"Node method to get the current state of the browser page.\"\"\"\n        logger.info(\"Node: get_browser_state\")\n        try:\n            content = await self.browser.get_content()\n            return {\"browser_content\": content, \"error\": None}\n        except Exception as e:\n            logger.error(f\"Error getting browser state: {e}\", exc_info=True)\n            return {\"error\": f\"Failed to get browser state: {e}\"}\n\n    async def plan_action(self, state: AgentState) -> Dict[str, Any]:\n        \"\"\"Node method 
to decide the next action using the LLM's structured output.\"\"\"\n        logger.info(\"Node: plan_action\")\n        if state.get(\"error\"):\n            logger.warning(f\"Planning action with existing error: {state['error']}\")\n\n        prompt = create_agent_prompt(\n            task=state[\"task\"],\n            current_browser_content=state[\"browser_content\"],\n            history=state.get(\"history\", []),\n            error_message=state.get(\"error\")\n        )\n        system_message = \"You are an AI agent controlling a web browser. Respond with the single next action formatted as JSON matching the required schema.\"\n\n        try:\n            llm_response: Optional[LLMResponse] = await generate_structured_output(\n                model=self.llm, # Pass the llm instance\n                schema=LLMResponse,\n                prompt=prompt,\n                system_message=system_message\n            )\n\n            if llm_response and isinstance(llm_response, LLMResponse):\n                parsed_action_model: BaseAction = llm_response.action\n                parsed_action_dict = parsed_action_model.dict()\n                logger.info(f\"LLM proposed action: {parsed_action_dict.get('type', 'unknown')}\")\n                return {\"parsed_action\": parsed_action_dict, \"error\": None}\n            else:\n                logger.error(\"Failed to get valid structured output from LLM.\")\n                error_action_dict = {\"type\": \"error\", \"message\": \"Failed to get valid structured output from LLM.\"}\n                return {\"parsed_action\": error_action_dict, \"error\": \"LLM did not return valid structured output.\"}\n\n        except Exception as e:\n            logger.error(f\"Error during structured action planning: {e}\", exc_info=True)\n            error_action_dict = {\"type\": \"error\", \"message\": f\"LLM planning exception: {e}\"}\n            return {\"parsed_action\": error_action_dict, \"error\": f\"LLM planning exception: 
{e}\"}\n\n\n    async def execute_action(self, state: AgentState) -> Dict[str, Any]:\n        \"\"\"Node method to execute the action dictionary from the state.\"\"\"\n        logger.info(\"Node: execute_action\")\n        action_dict = state.get(\"parsed_action\")\n        history = state.get(\"history\", [])\n\n        if not action_dict or not isinstance(action_dict, dict) or \"type\" not in action_dict:\n            error_msg = \"No valid action dictionary provided to execute.\"\n            logger.error(error_msg)\n            return {\"error\": error_msg}\n\n        action_type = action_dict.get(\"type\")\n        action_repr = f\"Action: {action_type}, Details: { {k:v for k,v in action_dict.items() if k != 'type'} }\"\n        logger.info(f\"Executing {action_repr}\")\n\n        new_history = history + [action_repr]\n\n        try:\n            if action_type == \"navigate\":\n                await self.browser.navigate_to(action_dict[\"url\"]) # Check if method name/args match Browser class\n            elif action_type == \"click\":\n                 await self.browser.click(action_dict[\"selector\"]) # Check Browser class for click method/args\n            elif action_type == \"type\":\n                  await self.browser.type(action_dict[\"selector\"], action_dict[\"text\"]) # Check Browser class for type method/args\n            elif action_type == \"scroll\":\n                  await self.browser.scroll(action_dict[\"direction\"]) # Check Browser class for scroll method/args\n            elif action_type == \"wait\":\n                  await self.browser.wait(action_dict[\"milliseconds\"]) # Check Browser class for wait method/args\n            elif action_type == \"get_content\":\n                 logger.info(\"Action 'get_content' requested (will be handled by next cycle)\")\n                 pass\n            elif action_type == \"finish\":\n                logger.info(f\"Action 'finish' received. 
Result: {action_dict.get('result')}\")\n                pass\n            elif action_type == \"error\":\n                 error_msg = action_dict.get(\"message\", \"LLM signaled an error.\")\n                 logger.error(f\"Executing 'error' action from LLM: {error_msg}\")\n                 return {\"error\": error_msg, \"history\": new_history}\n            else:\n                error_msg = f\"Attempted to execute unknown/unhandled action type: {action_type}\"\n                logger.error(error_msg)\n                return {\"error\": error_msg, \"history\": new_history}\n\n            return {\"error\": None, \"history\": new_history}\n\n        except Exception as e:\n            logger.error(f\"Error executing action '{action_type}': {e}\", exc_info=True)\n            return {\"error\": f\"Failed to execute action '{action_type}': {e}\", \"history\": new_history}"
  },
  {
    "path": "super_agents/browser_use/agent/prompts.py",
    "content": "from typing import List\n\ndef create_agent_prompt(\n    task: str,\n    current_browser_content: str, # This string now potentially contains URL, DOM, AX Tree, and Visual Elements\n    history: List[str],\n    error_message: str = None\n) -> str:\n    \"\"\"\n    Generates the prompt to be sent to the LLM based on the current state.\n    Includes sections for Simplified DOM, Accessibility Tree, and Visual Elements.\n    \"\"\"\n    prompt_parts = []\n    prompt_parts.append(\"You are an AI agent controlling a web browser to complete a task.\")\n    prompt_parts.append(f\"Your current task is: {task}\")\n\n    if error_message:\n        prompt_parts.append(f\"\\nAn error occurred in the previous step: {error_message}\")\n        prompt_parts.append(\"Please analyze the error and the current browser state, then decide the next best action.\")\n\n    prompt_parts.append(\"\\n\\n# Current Browser Perception:\")\n    # The browser_content string now contains multiple sections, as generated by get_content\n    prompt_parts.append(current_browser_content)\n\n    if history:\n        prompt_parts.append(\"\\n\\n# History of Previous Actions:\")\n        for i, item in enumerate(history[-5:], 1):\n            prompt_parts.append(f\"{i}. {item}\")\n\n    # --- Instructions with guidance on using all perception data ---\n    instructions = \"\"\"\n\n# Instructions:\nAnalyze the **Current Browser Perception** section above, which includes:\n1.  **Page URL:** The current web address.\n2.  **Simplified DOM:** A structural view of the page with interactive elements marked with `x-pw-id` attributes.\n3.  **Accessibility Tree:** Semantic information about elements (roles, names).\n4.  
**Visual Elements:** Elements detected visually via Computer Vision (CV), including their bounding boxes `[L:left, T:top, R:right, B:bottom]` and IDs (e.g., `cv-0`, `cv-1`).\n\nBased on the task and ALL available perception information, decide the single next action to take.\nYour response MUST be a JSON object with a single top-level key named \"action\".\nThe value of the \"action\" key MUST be an object matching one of the following action schemas:\n\n- Navigate: {{\"type\": \"navigate\", \"url\": \"<url_string>\"}}\n- Click: {{\"type\": \"click\", \"selector\": \"<css_selector>\", \"description\": \"<element_description: optional>\"}}\n- Type: {{\"type\": \"type\", \"selector\": \"<css_selector>\", \"text\": \"<text_to_type>\", \"description\": \"<element_description: optional>\"}}\n- Scroll: {{\"type\": \"scroll\", \"direction\": \"<up|down|left|right>\"}}\n- Finish: {{\"type\": \"finish\", \"result\": \"<final_answer_or_summary>\"}}\n- Error: {{\"type\": \"error\", \"message\": \"<error_description>\"}} (Use if you detect an unrecoverable error or loop)\n- GetContent: {{\"type\": \"get_content\", \"description\": \"<reason>\"}}\n\n**Important Task Handling Guidance:**\n1.  **Identify elements** using the DOM, AX Tree (if available), and Visual Elements. Use robust selectors as previously guided.\n2.  **If the task requires reading or extracting content that might extend beyond the current view (e.g., '摘录全文', 'find all items', 'read the article'), and you haven't finished scrolling, your next action should likely be to SCROLL DOWN.** Use: `{{\"action\": {{\"type\": \"scroll\", \"direction\": \"down\"}}}}`\n3.  Only use `get_content` if you believe scrolling will not help or if you need to re-analyze after a non-scroll action.\n4.  Once you believe you have scrolled enough and have all necessary information visible in the content provided, proceed with the extraction or final action.\n5.  
If the task is complete, use the 'finish' action.\n\nExample Responses (each is a complete, standalone response):\n```json\n{{\n  \"action\": {{\n    \"type\": \"click\",\n    \"selector\": \"a[x-pw-id='pw-16']:has-text('new')\",\n    \"description\": \"Click the 'new' link, corresponds to visual element cv-3\"\n  }}\n}}\n```\n```json\n{{\n  \"action\": {{\n    \"type\": \"scroll\",\n    \"direction\": \"down\"\n  }}\n}}\n```\n\nProvide ONLY the JSON object containing the 'action' key in a ```json ... ``` block.\nThink step-by-step. Correlate information from the DOM, AX Tree, and Visual Elements if possible. Choose the most precise and stable selector.\n\"\"\"\n    prompt_parts.append(instructions)\n    # --- End Instructions ---\n\n    final_prompt = \"\\n\".join(prompt_parts)\n    return final_prompt"
  },
  {
    "path": "super_agents/browser_use/agent/schemas.py",
    "content": "# super_agents/browser_use/agent/schemas.py\nfrom typing import Literal, Optional, Union, List, Dict, Any, Type\n# Use Pydantic V2+ if installed, otherwise V1 syntax\ntry:\n    from pydantic.v1 import BaseModel, Field\nexcept ImportError:\n    from pydantic import BaseModel, Field # Fallback to V2\n\n# --- Action Type ---\nActionTypeLiteral = Literal[\n    \"navigate\",\n    \"click\",\n    \"type\",\n    \"scroll\",\n    \"wait\",\n    \"get_content\",\n    \"finish\",\n    \"error\"\n]\n\n# --- Pydantic Schemas for Actions ---\n# Using Pydantic allows for better validation and compatibility\n# with LangChain's structured output features.\n\nclass BaseAction(BaseModel):\n    \"\"\"Base schema for all actions, containing the type.\"\"\"\n    type: ActionTypeLiteral = Field(..., description=\"The type of action to perform.\")\n\nclass NavigateAction(BaseAction):\n    type: Literal[\"navigate\"] = \"navigate\"\n    url: str = Field(..., description=\"The URL to navigate to.\")\n\nclass ClickAction(BaseAction):\n    type: Literal[\"click\"] = \"click\"\n    selector: str = Field(..., description=\"CSS selector for the element to click.\")\n    description: Optional[str] = Field(None, description=\"Optional description of the element being clicked.\")\n\nclass TypeAction(BaseAction):\n    type: Literal[\"type\"] = \"type\"\n    selector: str = Field(..., description=\"CSS selector for the input field.\")\n    text: str = Field(..., description=\"The text to type into the field.\")\n    description: Optional[str] = Field(None, description=\"Optional description of the element being typed into.\")\n\nclass ScrollAction(BaseAction):\n    type: Literal[\"scroll\"] = \"scroll\"\n    direction: Literal[\"up\", \"down\", \"left\", \"right\"] = Field(..., description=\"The direction to scroll the page.\")\n    # selector: Optional[str] = Field(None, description=\"Optional CSS selector of element to scroll within.\") # Add if needed\n\nclass 
WaitAction(BaseAction):\n    type: Literal[\"wait\"] = \"wait\"\n    milliseconds: int = Field(..., description=\"Duration to wait in milliseconds.\")\n\nclass GetContentAction(BaseAction):\n    type: Literal[\"get_content\"] = \"get_content\"\n    # No extra fields needed, just signifies intent to refresh state\n    description: Optional[str] = Field(\"Requesting updated browser content\", description=\"Reason for requesting content.\")\n\nclass FinishAction(BaseAction):\n    type: Literal[\"finish\"] = \"finish\"\n    result: str = Field(..., description=\"The final answer or summary of the completed task.\")\n\nclass ErrorAction(BaseAction):\n    type: Literal[\"error\"] = \"error\"\n    message: str = Field(..., description=\"Description of the error encountered or signaled by the LLM.\")\n\n# --- Union for Parsing ---\n# LangChain's with_structured_output often works best when targeting a single Pydantic model\n# that uses discriminated unions (if available in your Pydantic version) or by prompting\n# the LLM clearly to only output ONE type of action JSON matching the base structure.\n# For simplicity here, we define the *expected output structure* the LLM should generate.\n# The parsing function might need refinement based on how the LLM structures the output.\n\n# Define the overall structure the LLM should output, which includes one of the actions.\n# This structure helps `with_structured_output`.\nclass LLMResponse(BaseModel):\n    action: Union[\n        NavigateAction,\n        ClickAction,\n        TypeAction,\n        ScrollAction,\n        WaitAction,\n        GetContentAction,\n        FinishAction,\n        ErrorAction\n    ] = Field(..., description=\"The specific action determined by the LLM.\")\n\n# --- Parsing Function (Placeholder/Example) ---\n# The `generate_structured_output` function in llm.py now handles the parsing\n# directly into the Pydantic schema (LLMResponse).\n# So, we might not need a separate manual parsing function here if using 
that.\n\n# If you need manual parsing from raw text (less reliable):\n# def parse_llm_response_manual(response: str) -> Optional[BaseAction]:\n#     # ... (complex logic using regex or JSON parsing as in previous example)\n#     # This would return one of the action models (NavigateAction, ClickAction, etc.)\n#     pass\n"
  },
  {
    "path": "super_agents/browser_use/agent/state.py",
    "content": "# super_agents/browser_use/agent/state.py\nfrom typing import Dict, List, Optional, Any, TypedDict\n\n# Define the state structure using TypedDict for type hinting\nclass AgentState(TypedDict, total=False):\n    \"\"\"\n    TypedDict representing the state of the browser agent during execution.\n    \n    Attributes:\n        task: The user task description\n        browser_content: The current HTML content of the browser\n        parsed_action: The last action parsed from LLM response\n        history: List of previous actions taken\n        error: Any error message from the last operation\n    \"\"\"\n    task: str\n    browser_content: str\n    parsed_action: Dict[str, Any]\n    history: List[str]\n    error: Optional[str]\n"
  },
  {
    "path": "super_agents/browser_use/agent/tools.py",
    "content": ""
  },
  {
    "path": "super_agents/browser_use/agent.py",
    "content": "# super_agents/browser_use/agent.py\n\"\"\"\nAgent API for browser-based task execution.\nProvides a simplified interface similar to the original implementation.\n\"\"\"\n\nimport asyncio\nimport logging\nfrom typing import Any, Dict, Optional\n\nfrom .agent.graph import create_graph_app\nfrom .agent.state import AgentState\nfrom .browser.browser import Browser\nfrom .browser.config import BrowserConfig\nfrom .llm import initialize_llms\n\nlogger = logging.getLogger(__name__)\n\nclass Agent:\n    \"\"\"\n    Agent class that provides a simple interface for browser automation with LLM.\n    \n    This implementation is similar to the original API but uses the current\n    browser automation stack with LangGraph.\n    \"\"\"\n    \n    def __init__(\n        self, \n        llm=None,\n        browser_config: Optional[BrowserConfig] = None,\n        max_steps: int = 50\n    ):\n        \"\"\"\n        Initialize the Agent with optional LLM and browser configuration.\n        \n        Args:\n            llm: LLM instance to use (if None, will initialize from environment)\n            browser_config: Browser configuration options\n            max_steps: Maximum number of steps the agent can take\n        \"\"\"\n        self.browser_config = browser_config or BrowserConfig()\n        self.llm = llm\n        self.max_steps = max_steps\n        self.browser = None\n        self._app = None\n    \n    async def _initialize(self):\n        \"\"\"Initialize the browser and LLM if not already initialized.\"\"\"\n        # Initialize LLM if not provided\n        if self.llm is None:\n            logger.info(\"Initializing LLM from environment variables\")\n            self.llm, _ = initialize_llms()\n            \n        if self.llm is None:\n            raise ValueError(\"Failed to initialize LLM. 
Check API keys and .env settings.\")\n        \n        # Initialize browser\n        self.browser = Browser(config=self.browser_config)\n        await self.browser.initialize()\n        \n        # Initialize LangGraph app\n        self._app = create_graph_app(browser=self.browser, llm=self.llm)\n    \n    async def run(self, prompt: str) -> Dict[str, Any]:\n        \"\"\"\n        Run the agent with the given prompt/task.\n        \n        Args:\n            prompt: The task description or prompt for the agent\n        \n        Returns:\n            Dictionary containing the execution result\n        \"\"\"\n        # Ensure initialization\n        if self.browser is None or self._app is None:\n            await self._initialize()\n        \n        # Define the initial state\n        initial_state = AgentState(\n            task=prompt,\n            browser_content=\"\",\n            parsed_action={},\n            history=[],\n            error=None\n        )\n        \n        # Run the graph\n        logger.info(f\"Starting agent execution for task: {prompt}\")\n        try:\n            final_state = await self._app.ainvoke(\n                initial_state, \n                config={\"recursion_limit\": self.max_steps}\n            )\n            \n            # Process result\n            if final_state.get(\"error\"):\n                logger.error(f\"Agent finished with error: {final_state['error']}\")\n                return {\"result\": f\"Error: {final_state['error']}\", \"success\": False}\n            elif final_state.get(\"parsed_action\", {}).get(\"type\") == \"finish\":\n                result = final_state[\"parsed_action\"].get(\"result\", \"Task finished, but no result extracted.\")\n                logger.info(f\"Agent finished successfully. 
Result: {result}\")\n                return {\"result\": result, \"success\": True}\n            else:\n                logger.warning(\"Agent finished without a 'finish' action or error.\")\n                return {\n                    \"result\": \"Agent stopped without producing a final answer.\",\n                    \"success\": False,\n                    \"state\": final_state\n                }\n\n        except Exception as e:\n            logger.error(f\"Agent execution failed: {e}\", exc_info=True)\n            return {\"result\": f\"Error during execution: {str(e)}\", \"success\": False}\n        finally:\n            # Clean up resources\n            if self.browser:\n                await self.browser.close()\n                self.browser = None\n            self._app = None\n    \n    def __del__(self):\n        \"\"\"Best-effort cleanup; run() normally closes the browser itself.\"\"\"\n        if self.browser:\n            try:\n                # create_task() needs a running event loop; during interpreter\n                # shutdown there usually is none, so swallow the RuntimeError.\n                asyncio.get_running_loop().create_task(self.browser.close())\n            except RuntimeError:\n                pass\n\n\n# Provider classes for compatibility with original API\nclass OpenAIProvider:\n    \"\"\"OpenAI provider compatible with the interface\"\"\"\n    \n    def __init__(self, model=\"gpt-4o-mini\", api_key=None, temperature=0.1):\n        \"\"\"\n        Initialize OpenAI provider.\n        \n        Args:\n            model: Model name to use\n            api_key: OpenAI API key (if None, will use from environment)\n            temperature: Temperature for generation\n        \"\"\"\n        self.model = model\n        self.api_key = api_key\n        self.temperature = temperature\n        \n        # These parameters will be used by initialize_llms() internally\n        import os\n        if api_key:\n            os.environ[\"OPENAI_API_KEY\"] = api_key\n        os.environ[\"LLM_PROVIDER\"] = \"openai\"\n        os.environ[\"LLM_MODEL_NAME\"] = model\n        os.environ[\"LLM_TEMPERATURE\"] = str(temperature)\n\n\nclass AnthropicProvider:\n    \"\"\"Anthropic provider compatible with the 
interface\"\"\"\n    \n    def __init__(self, model=\"claude-3-opus-20240229\", api_key=None, temperature=0.1, \n                 enable_thinking=False, thinking_token_budget=None):\n        \"\"\"\n        Initialize Anthropic provider.\n        \n        Args:\n            model: Model name to use\n            api_key: Anthropic API key (if None, will use from environment)\n            temperature: Temperature for generation\n            enable_thinking: Enable thinking step (not fully supported in current implementation)\n            thinking_token_budget: Tokens for thinking (not fully supported)\n        \"\"\"\n        self.model = model\n        self.api_key = api_key\n        self.temperature = temperature\n        self.enable_thinking = enable_thinking\n        self.thinking_token_budget = thinking_token_budget\n        \n        # These parameters will be used by initialize_llms() internally\n        import os\n        if api_key:\n            os.environ[\"ANTHROPIC_API_KEY\"] = api_key\n        os.environ[\"LLM_PROVIDER\"] = \"anthropic\" \n        os.environ[\"LLM_MODEL_NAME\"] = model\n        os.environ[\"LLM_TEMPERATURE\"] = str(temperature)\n\n\n# Add convenience imports to __init__.py\n# This will allow: from super_agents.browser_use import Agent, OpenAIProvider, BrowserConfig\n"
  },
  {
    "path": "super_agents/browser_use/browser/browser.py",
    "content": "# super_agents/browser_use/browser/browser.py\n\"\"\"\nStreamlined Playwright browser implementation with integrated perception capabilities.\nIncludes DOM/AX Tree/Visual analysis and basic interaction methods.\n\"\"\"\n\nimport asyncio\nimport json\nimport logging\nimport functools \nimport base64\nimport os\nfrom dataclasses import dataclass, field\n# from importlib import resources # Not used\nfrom typing import Any, Optional, TypedDict, List, Dict # Added List, Dict\n\n# --- Local Imports (Ensure these files exist in the same directory) ---\ntry:\n    from .observe_helper import observe\nexcept ImportError:\n    def observe(name, ignore_input=False, ignore_output=False):\n        def decorator(func): return func\n        return decorator\n    logging.basicConfig(level=logging.WARNING) # Setup basic logging if needed\n    logger_observe = logging.getLogger(__name__)\n    logger_observe.warning(\"observe_helper not found, using dummy decorator.\")\n\ntry:\n    from .detector import Detector\n    from .models import (\n        BrowserError,\n        BrowserState,\n        InteractiveElementsData,\n        TabInfo,\n        InteractiveElement,\n    )\n    from .utils import (\n        combine_and_filter_elements,\n        put_highlight_elements_on_screenshot,\n    )\nexcept ImportError as e:\n     logging.basicConfig(level=logging.ERROR)\n     logger_import = logging.getLogger(__name__)\n     logger_import.error(f\"Failed to import local browser dependencies (detector, models, utils): {e}. 
Browser class may not function correctly.\", exc_info=True)\n     # Define dummy classes to allow file loading, but functionality will be broken\n     class Detector: enabled=False\n     class BrowserError(Exception): pass\n     class BrowserState: pass\n     class InteractiveElementsData: elements=[]; viewport={}\n     class TabInfo: pass\n     class InteractiveElement: pass\n     def combine_and_filter_elements(a, b): return []\n     def put_highlight_elements_on_screenshot(a, b): return None\n# --- End Local Imports ---\n\n# --- Playwright Imports ---\nfrom playwright.async_api import (\n    Browser as PlaywrightBrowser,\n    BrowserContext as PlaywrightBrowserContext,\n    Page,\n    Playwright,\n    StorageState,\n    async_playwright,\n    Error as PlaywrightError\n)\n# --- Tenacity Import ---\nfrom tenacity import (\n    retry,\n    retry_if_exception_type,\n    stop_after_attempt,\n    wait_exponential,\n)\n\nlogger = logging.getLogger(__name__)\n# Ensure basic logging is configured if not done elsewhere\nif not logger.hasHandlers():\n     logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')\n\n\n# --- Load JavaScript Files ---\nINTERACTIVE_ELEMENTS_JS_CODE = \"\"\nSIMPLIFY_PAGE_SCRIPT = \"\"\ntry:\n    current_dir = os.path.dirname(os.path.abspath(__file__))\n    # JS for DOM-based interactive elements used in get_interactive_elements_data\n    js_file_path_interactive = os.path.join(current_dir, 'findVisibleInteractiveElements.js')\n    with open(js_file_path_interactive, 'r', encoding='utf-8') as js_file:\n        INTERACTIVE_ELEMENTS_JS_CODE = js_file.read()\n\n    # JS for DOM simplification used in get_content\n    # (Re-paste the script here for completeness)\n    SIMPLIFY_PAGE_SCRIPT = \"\"\"\n    (() => {\n        const MAX_ELEMENTS = 250; const MAX_TEXT_LENGTH = 200;\n        const INTERACTIVE_TAGS = ['a', 'button', 'input', 'textarea', 'select', 'option', 'details', 'summary', 'label'];\n        
const EXCLUDED_TAGS = ['script', 'style', 'noscript', 'svg', 'link', 'meta', 'head', 'embed', 'object', 'path', 'canvas', 'iframe', 'video', 'audio'];\n        let elementCount = 0; let uniqueIdCounter = 0;\n        function isVisible(el) { if (!el || !el.checkVisibility) return false; return el.checkVisibility({checkOpacity: true, checkVisibilityCSS: true}); }\n        function truncateText(text, maxLength = MAX_TEXT_LENGTH) { if (typeof text !== 'string') return text; return text.length > maxLength ? text.substring(0, maxLength) + '...' : text; }\n        function getElementData(el) {\n            const data = { tag: el.tagName.toLowerCase(), attributes: {}, text: '', children: [], pw_id: `pw-${uniqueIdCounter++}` };\n            try { if (document.body.contains(el)) el.setAttribute('x-pw-id', data.pw_id); } catch(e){}\n            const attrsToKeep = ['id', 'class', 'role', 'aria-label', 'aria-labelledby', 'aria-describedby', 'aria-hidden', 'aria-invalid', 'aria-required', 'placeholder', 'title', 'alt', 'for', 'name', 'type', 'href', 'value', 'selected', 'checked', 'disabled', 'readonly', 'open'];\n            for (const attr of attrsToKeep) {\n                if (el.hasAttribute(attr)) {\n                    let value = el.getAttribute(attr);\n                    if (attr === 'class' && value) value = value.split(' ').filter(c => c && c.length > 1 && c.length < 30 && !/^[0-9]+$/.test(c)).slice(0, 5).join(' ');\n                    if (value !== null && value !== '') data.attributes[attr] = truncateText(String(value), 80);\n                }\n            }\n            if (['button', 'a', 'label', 'summary'].includes(data.tag) && !data.attributes['aria-label'] && el.textContent) data.attributes['aria-label'] = truncateText(el.textContent.trim(), 80);\n            try {\n                if (el.tagName.toLowerCase() === 'input' && !data.attributes.value && el.value) data.attributes.value = truncateText(el.value);\n                else if (el.tagName.toLowerCase() 
=== 'textarea' && !data.attributes.value && el.value) data.attributes.value = truncateText(el.value);\n                else if (el.tagName.toLowerCase() === 'select' && el.options && el.selectedIndex !== -1 && !data.attributes.value) data.attributes.value = truncateText(el.options[el.selectedIndex].text);\n            } catch (e) {}\n            try {\n                const directText = Array.from(el.childNodes).filter(node => node.nodeType === Node.TEXT_NODE && node.textContent.trim().length > 0).map(node => node.textContent.trim()).join(' ').replace(/\\s+/g, ' ');\n                if (directText) data.text = truncateText(directText);\n            } catch (e) {}\n            return data;\n        }\n        function simplifyNode(node) {\n            if (elementCount >= MAX_ELEMENTS) return null;\n            if (node.nodeType !== Node.ELEMENT_NODE || EXCLUDED_TAGS.includes(node.tagName.toLowerCase())) { if(node.nodeType === Node.TEXT_NODE && node.textContent.trim().length === 0) return null; return null; }\n            elementCount++; const elementData = getElementData(node);\n            if (node.hasChildNodes()) {\n                Array.from(node.childNodes).forEach(child => {\n                    if (INTERACTIVE_TAGS.includes(node.tagName.toLowerCase()) && child.nodeType === Node.ELEMENT_NODE) return;\n                    const simplifiedChild = simplifyNode(child); if (simplifiedChild) elementData.children.push(simplifiedChild);\n                });\n            }\n            const isInteractive = INTERACTIVE_TAGS.includes(elementData.tag); const hasMeaningfulAttrs = Object.keys(elementData.attributes).some(k => k !== 'x-pw-id');\n            if (!isInteractive && !hasMeaningfulAttrs && elementData.children.length === 0 && !elementData.text) { try { if (document.body.contains(node)) node.removeAttribute('x-pw-id'); } catch(e){} return null; }\n            return elementData;\n        }\n        if (!document.body) return \"<body> element not found.\"; const 
simplifiedBody = simplifyNode(document.body);\n        function convertToPseudoHTML(node) {\n            if (!node) return ''; let attrs = `x-pw-id=\"${node.pw_id}\"`;\n            for (const [key, value] of Object.entries(node.attributes)) attrs += ` ${key}=\"${String(value).replace(/\"/g, '&quot;')}\"`;\n            let childrenHTML = node.children.map(convertToPseudoHTML).join('');\n            let textContent = node.text ? String(node.text).replace(/</g, '&lt;').replace(/>/g, '&gt;') : '';\n            if (['input', 'img', 'br', 'hr'].includes(node.tag)) return `<${node.tag} ${attrs} />`;\n            else return `<${node.tag} ${attrs}>${textContent}${childrenHTML}</${node.tag}>`;\n        }\n        return convertToPseudoHTML(simplifiedBody);\n    })()\n    \"\"\"\nexcept FileNotFoundError:\n    logger.error(f\"JavaScript file 'findVisibleInteractiveElements.js' not found in {current_dir}. Interactive element detection (JS based) will fail.\")\n    INTERACTIVE_ELEMENTS_JS_CODE = \"() => ({ viewport: { width: window.innerWidth, height: window.innerHeight }, elements: [] });\" # Provide fallback\nexcept Exception as e:\n     logger.error(f\"Error loading JavaScript file(s): {e}\", exc_info=True)\n     INTERACTIVE_ELEMENTS_JS_CODE = \"() => ({ viewport: { width: window.innerWidth, height: window.innerHeight }, elements: [] });\"\n     SIMPLIFY_PAGE_SCRIPT = \"() => 'Error loading simplification script.';\"\n\n\n# --- TypedDict for Viewport Size ---\nclass ViewportSize(TypedDict):\n    width: int\n    height: int\n\n# --- BrowserConfig Dataclass (Corrected: No CV Endpoints) ---\n@dataclass\nclass BrowserConfig:\n    \"\"\"\n    Configuration for the Browser.\n    \"\"\"\n    cdp_url: Optional[str] = None\n    viewport_size: ViewportSize = field(default_factory=lambda: {\"width\": 1200, \"height\": 900})\n    storage_state: Optional[StorageState] = None\n    # CV/Sheets Endpoints Removed\n\n# --- Main Browser Class ---\nclass Browser:\n    \"\"\"\n    Unified 
Browser responsible for interacting with the browser via Playwright.\n    Includes methods for navigation, simple actions, perception (DOM, AX Tree, optional VLM),\n    and state management. Initializes its own VLM detector based on environment variables.\n    \"\"\"\n    def __init__(self, config: Optional[BrowserConfig] = None, close_context: bool = True):\n        \"\"\"\n        Initializes the Browser instance.\n\n        A fresh BrowserConfig is created per instance when none is given; a\n        `config=BrowserConfig()` default argument would be evaluated once at\n        definition time and shared by every Browser object.\n        \"\"\"\n        logger.debug('Initializing browser')\n        self.config = config if config is not None else BrowserConfig()\n        self.close_context = close_context\n        # Playwright attributes\n        self.playwright: Optional[Playwright] = None\n        self.playwright_browser: Optional[PlaywrightBrowser] = None\n        self.context: Optional[PlaywrightBrowserContext] = None\n        # Page and state management\n        self.current_page: Optional[Page] = None\n        self._state: Optional[BrowserState] = None # Holds the rich state produced by update_state\n        self._cdp_session = None\n        # Initialize Detector internally\n        try:\n            self.detector: Optional[Detector] = Detector()\n            if not self.detector.enabled:\n                self.detector = None\n                logger.warning(\"Detector initialized but disabled due to missing config/errors.\")\n            else:\n                logger.info(\"Detector initialized successfully.\")\n        except NameError:\n             logger.error(\"Detector class not found (likely due to import errors). 
Vision disabled.\")\n             self.detector = None\n        except Exception as e:\n             logger.error(f\"Unexpected error initializing Detector: {e}\", exc_info=True)\n             self.detector = None\n        # REMOVED self._init_state() call as method doesn't exist / state init is implicit\n\n    # --- Context Management Methods ---\n    async def __aenter__(self):\n        await self.initialize()\n        return self\n\n    async def __aexit__(self, exc_type, exc_val, exc_tb):\n        if self.close_context:\n            await self.close()\n\n    # --- Public Initialization and Closing ---\n    async def initialize(self):\n        \"\"\"Initializes browser, context, page if not already done.\"\"\"\n        if self.current_page and self.context and self.playwright_browser and self.playwright:\n             logger.debug(\"Browser already initialized.\")\n             return self\n        logger.info(\"Initializing browser instance via initialize()\") # Changed level\n        await self._init_browser()\n        return self\n\n    async def close(self):\n        \"\"\"Closes the browser and cleans up Playwright resources.\"\"\"\n        if not self.playwright: return\n        logger.info('Closing browser...')\n        try:\n            self._cdp_session = None\n            if self.context:\n                try: await self.context.close()\n                except Exception as e: logger.warning(f'Failed to close context: {e}')\n            if self.playwright_browser and not self.config.cdp_url:\n                try: await self.playwright_browser.close()\n                except Exception as e: logger.warning(f'Failed to close browser: {e}')\n            if self.playwright:\n                try: await self.playwright.stop()\n                except Exception as e: logger.warning(f'Failed to stop Playwright: {e}')\n        except Exception as e:\n            logger.error(f'Error during browser cleanup: {e}', exc_info=True)\n        finally: # Ensure attributes 
are cleared\n            self.context = None; self.current_page = None; self._state = None\n            self.playwright_browser = None; self.playwright = None; self._cdp_session = None\n            logger.info(\"Browser closed.\")\n\n    # --- Internal Initialization Helper ---\n    async def _init_browser(self):\n        \"\"\"Internal method to initialize Playwright components.\"\"\"\n        if self.current_page and self.context: return # Avoid re-init if basics exist\n        logger.debug('Running internal browser context initialization _init_browser()')\n        try:\n            if self.playwright is None: self.playwright = await async_playwright().start()\n            if self.playwright_browser is None:\n                if self.config.cdp_url:\n                    logger.info(f'Connecting to remote browser via CDP {self.config.cdp_url}')\n                    self.playwright_browser = await self.playwright.chromium.connect_over_cdp(self.config.cdp_url, timeout=5000)\n                else:\n                    logger.info(f'Launching new browser instance (headless=False assumed)')\n                    # Note: Headless mode might need to be configurable via BrowserConfig again if needed\n                    self.playwright_browser = await self.playwright.chromium.launch(\n                        headless=False,\n                        args=[ # Common args for stability/anti-detection\n                            '--no-sandbox', '--disable-setuid-sandbox', '--disable-infobars',\n                            '--disable-blink-features=AutomationControlled',\n                            '--disable-dev-shm-usage', '--disable-gpu', '--window-size=1200,900', # Use configured size later\n                            # '--disable-web-security', # Use with caution\n                            # '--disable-site-isolation-trials',\n                            # '--disable-features=IsolateOrigins,site-per-process',\n                        ]\n                    )\n          
  if self.context is None:\n                existing_contexts = self.playwright_browser.contexts\n                if existing_contexts and not self.config.cdp_url: # Reuse only if we launched it? Be careful.\n                    self.context = existing_contexts[0]\n                    logger.info(\"Reusing existing browser context.\")\n                else:\n                    logger.info(\"Creating new browser context.\")\n                    self.context = await self.playwright_browser.new_context(\n                        viewport=self.config.viewport_size,\n                        user_agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36',\n                        java_script_enabled=True, bypass_csp=True, ignore_https_errors=True,\n                        storage_state=self.config.storage_state if self.config.storage_state else None\n                    )\n                    await self._apply_anti_detection_scripts() # Apply only to new contexts\n                self.context.on('page', self._on_page_change) # Attach listener\n\n            if self.current_page is None:\n                if len(self.context.pages) > 0:\n                    self.current_page = self.context.pages[-1] # Default to last open page\n                    logger.info(f\"Using existing page: {self.current_page.url}\")\n                else:\n                    self.current_page = await self.context.new_page()\n                    logger.info(\"Created new page.\")\n                # Ensure viewport is applied regardless\n                try: await self.current_page.set_viewport_size(self.config.viewport_size)\n                except Exception as vp_err: logger.warning(f\"Failed to set viewport: {vp_err}\")\n\n            if not self.current_page: raise BrowserError(\"Failed to get or create a page.\")\n            await self.get_cdp_session() # Initialize CDP session for current page\n\n        except PlaywrightError as 
pe:\n            logger.error(f\"Playwright Error during browser init: {pe}\", exc_info=True)\n            await self.close(); raise BrowserError(f\"Playwright initialization failed: {pe}\") from pe\n        except Exception as e:\n            logger.error(f\"Unexpected error during browser init: {e}\", exc_info=True)\n            await self.close(); raise BrowserError(f\"Unexpected browser initialization failed: {e}\") from e\n\n    # --- Method Implementations (Ensure ALL referenced methods are defined) ---\n\n    async def _apply_anti_detection_scripts(self):\n        \"\"\"Apply scripts to avoid detection as automation\"\"\"\n        if self.context is None: return # Should not happen if called from _init_browser correctly\n        try:\n            await self.context.add_init_script(\n                \"\"\"\n                Object.defineProperty(navigator, 'webdriver', { get: () => undefined });\n                Object.defineProperty(navigator, 'languages', { get: () => ['en-US', 'en'] });\n                Object.defineProperty(navigator, 'plugins', { get: () => [] }); // Empty is safer\n                // ... other scripts from previous version ...\n                const originalQuery = window.navigator.permissions.query;\n                window.navigator.permissions.query = (parameters) => (\n                    parameters.name === 'notifications' ?\n                        Promise.resolve({ state: Notification.permission }) :\n                        originalQuery(parameters)\n                );\n                \"\"\"\n            )\n            logger.debug(\"Applied anti-detection init script.\")\n        except Exception as e:\n             logger.error(f\"Failed to add anti-detection init script: {e}\", exc_info=True)\n\n    async def _on_page_change(self, page: Page):\n        \"\"\"Handle page creation/popup events.\"\"\"\n        # Don't automatically switch current page, just log\n        logger.info(f'Page event detected. 
New/Popup URL: {page.url}')\n        self._cdp_session = None # Invalidate CDP session as context changed\n\n    async def get_current_page(self) -> Page:\n        \"\"\"Get the current page, ensuring browser is initialized.\"\"\"\n        if self.current_page is None or self.current_page.is_closed():\n            logger.warning(\"Current page is None or closed, re-initializing.\")\n            await self._init_browser()\n            if self.current_page is None: raise BrowserError(\"Unable to get a valid page.\")\n        return self.current_page\n\n    # Inside Browser class in browser.py\n    async def get_cdp_session(self):\n        \"\"\"Get or create a CDP session for the *current* page.\"\"\"\n        page = await self.get_current_page()\n        session_invalid = True # Assume invalid\n        if self._cdp_session:\n            # More robust check: try a simple CDP command to see if session is active\n            try:\n                # Example: Get cookies via CDP (relatively harmless check)\n                await self._cdp_session.send(\"Network.getAllCookies\")\n                # Check if session page matches current page (using internal attr is risky)\n                if hasattr(self._cdp_session, '_client') and hasattr(self._cdp_session._client, '_page') and self._cdp_session._client._page == page:\n                   session_invalid = False # Session seems alive and for the correct page\n                else:\n                   logger.debug(\"CDP session page mismatch or internals unclear, recreating.\")\n            except Exception as session_check_err:\n                 logger.debug(f\"Existing CDP session check failed ({session_check_err}), recreating.\")\n                 session_invalid = True\n\n        if session_invalid:\n            try:\n                if self.context is None: await self._init_browser()\n                logger.debug(f\"Attempting to create new CDP session for page: {page.url}\")\n                self._cdp_session = await 
self.context.new_cdp_session(page)\n                logger.debug(f\"Created new CDP session successfully.\")\n            except Exception as e:\n                logger.error(f\"Failed to create CDP session: {e}\", exc_info=True)\n                self._cdp_session = None\n                raise BrowserError(f\"Failed to create CDP session: {e}\") from e\n        return self._cdp_session\n\n    @observe(name='browser.fast_screenshot', ignore_output=True)\n    async def fast_screenshot(self) -> str:\n        \"\"\"Returns a base64 encoded screenshot using CDP.\"\"\"\n        cdp_session = await self.get_cdp_session()\n        try:\n            screenshot_data = await cdp_session.send(\"Page.captureScreenshot\", {\"format\": \"png\", \"fromSurface\": False, \"captureBeyondViewport\": False})\n            return screenshot_data[\"data\"]\n        except Exception as e:\n             logger.error(f\"Failed to capture screenshot via CDP: {e}\")\n             # Fallback to playwright's screenshot? Or raise error?\n             page = await self.get_current_page()\n             try:\n                 logger.warning(\"CDP screenshot failed, falling back to Playwright screenshot.\")\n                 buffer = await page.screenshot()\n                 return base64.b64encode(buffer).decode()\n             except Exception as pw_e:\n                  logger.error(f\"Fallback Playwright screenshot also failed: {pw_e}\")\n                  raise BrowserError(f\"Failed to take screenshot: {e}\") from e\n\n    # --- Simple Action Methods ---\n    @observe(name='browser.navigate_to')\n    async def navigate_to(self, url: str):\n        page = await self.get_current_page()\n        logger.info(f\"Navigating to: {url}\")\n        try:\n            await page.goto(url, wait_until='domcontentloaded', timeout=60000)\n            logger.info(f\"Navigation successful. 
Current URL: {page.url}\")\n        except PlaywrightError as e: raise BrowserError(f\"Navigation failed: {e}\") from e\n        except Exception as e: raise BrowserError(f\"Navigation failed unexpectedly: {e}\") from e\n\n    @observe(name='browser.click')\n    async def click(self, selector: str):\n        page = await self.get_current_page()\n        logger.info(f\"Attempting to click element: '{selector}'\")\n        try:\n            element = page.locator(selector).first\n            await element.wait_for(state=\"visible\", timeout=15000)\n            await element.scroll_into_view_if_needed(timeout=10000)\n            await element.click(timeout=15000, delay=50)\n            logger.info(f\"Successfully clicked element: '{selector}'\")\n        except PlaywrightError as e: raise BrowserError(f\"Click action failed: {e}\") from e\n        except Exception as e: raise BrowserError(f\"Click action failed unexpectedly: {e}\") from e\n\n    @observe(name='browser.type')\n    async def type(self, selector: str, text: str):\n        page = await self.get_current_page()\n        log_text = '***' if 'password' in selector.lower() else text\n        logger.info(f\"Attempting to type into element: '{selector}', Text: '{log_text}'\")\n        try:\n            element = page.locator(selector).first\n            await element.wait_for(state=\"visible\", timeout=15000)\n            await element.scroll_into_view_if_needed(timeout=10000)\n            await element.fill(text, timeout=15000)\n            logger.info(f\"Successfully typed into element: '{selector}'\")\n        except PlaywrightError as e: raise BrowserError(f\"Type action failed: {e}\") from e\n        except Exception as e: raise BrowserError(f\"Type action failed unexpectedly: {e}\") from e\n\n    @observe(name='browser.scroll')\n    async def scroll(self, direction: str):\n        page = await self.get_current_page()\n        logger.info(f\"Scrolling page {direction}\")\n        try:\n            if 
direction == \"down\": await page.evaluate(\"window.scrollBy(0, window.innerHeight)\")\n            elif direction == \"up\": await page.evaluate(\"window.scrollBy(0, -window.innerHeight)\")\n            elif direction == \"left\": await page.evaluate(\"window.scrollBy(-window.innerWidth, 0)\")\n            elif direction == \"right\": await page.evaluate(\"window.scrollBy(window.innerWidth, 0)\")\n            else: logger.warning(f\"Unknown scroll direction: {direction}\"); return\n            await asyncio.sleep(0.3)\n            logger.info(f\"Scrolled page {direction}\")\n        except PlaywrightError as e: raise BrowserError(f\"Scroll action failed: {e}\") from e\n        except Exception as e: raise BrowserError(f\"Scroll action failed unexpectedly: {e}\") from e\n\n    async def wait(self, milliseconds: int):\n        logger.info(f\"Waiting for {milliseconds} ms\")\n        if milliseconds <= 0: return\n        await asyncio.sleep(milliseconds / 1000.0)\n        logger.info(\"Wait finished\")\n\n    # --- Perception & State Methods ---\n    async def get_content(self, max_length: int = 120000) -> str:\n        \"\"\"Gets comprehensive text representation: URL, DOM, AX Tree, VLM Elements.\"\"\"\n        page = await self.get_current_page()\n        logger.info(\"Getting comprehensive page content with vision...\")\n        combined_content = \"\"\n        error_messages = []\n        current_url = \"Unknown\"\n        screenshot_b64 = None\n        try:\n            current_url = page.url\n            combined_content += f\"# Page URL:\\n{current_url}\\n\\n\"\n            try:\n                screenshot_b64 = await self.fast_screenshot()\n                logger.debug(f\"Screenshot captured (size: {len(screenshot_b64) if screenshot_b64 else 0})\")\n            except Exception as ss_err: error_messages.append(f\"Screenshot Error: {ss_err}\"); logger.error(\"Screenshot error\", exc_info=False); combined_content += \"# Screenshot Error\\n\"\n            try:\n 
               if SIMPLIFY_PAGE_SCRIPT:\n                     simplified_dom = await page.evaluate(SIMPLIFY_PAGE_SCRIPT)\n                     if simplified_dom: combined_content += f\"# Simplified DOM:\\n```html\\n{simplified_dom}\\n```\\n\\n\"; logger.debug(f\"DOM length: {len(simplified_dom)}\")\n                     else: combined_content += \"# Simplified DOM:\\n(Empty)\\n\\n\"; logger.warning(\"JS simplification empty.\")\n                else: combined_content += \"# Simplified DOM:\\n(JS Script Error)\\n\\n\"; logger.error(\"SIMPLIFY_PAGE_SCRIPT empty.\")\n            except Exception as js_err: error_messages.append(f\"JS Error: {js_err}\"); logger.error(\"JS Simp. Error\", exc_info=False); combined_content += f\"# Simplified DOM Error: {js_err}\\n\"\n            try:\n                ax_tree = await page.accessibility.snapshot(interesting_only=False) # No root arg\n                if ax_tree:\n                    try:\n                        ax_tree_str = json.dumps(ax_tree, separators=(',', ':')) # Compact\n                        ax_max_len = 2000\n                        if len(ax_tree_str) > ax_max_len: ax_tree_str = ax_tree_str[:ax_max_len] + \"...(AX Tree truncated)\"\n                        combined_content += f\"# Accessibility Tree (JSON, Partial):\\n```json\\n{ax_tree_str}\\n```\\n\\n\"; logger.debug(f\"AX Tree length: {len(ax_tree_str)}\")\n                    except Exception as json_err: error_messages.append(f\"AX JSON Error: {json_err}\"); logger.error(\"AX JSON Error\", exc_info=False); combined_content += \"# AX Tree Error (JSON)\\n\"\n                else: combined_content += \"# Accessibility Tree:\\n(Empty)\\n\\n\"; logger.warning(\"AX snapshot empty.\")\n            except Exception as ax_err: error_messages.append(f\"AX Tree Error: {ax_err}\"); logger.error(\"AX Tree Error\", exc_info=False); combined_content += f\"# Accessibility Tree Error: {ax_err}\\n\"\n\n            if self.detector and screenshot_b64:\n                
logger.info(\"Attempting visual detection via Detector...\")\n                try:\n                    detect_sheets = 'docs.google.com/spreadsheets/d' in current_url\n                    visual_elements = await self.detector.detect_from_image(screenshot_b64, detect_sheets)\n                    if visual_elements:\n                        formatted = [f\"- ID: {el.browser_agent_id}, Box: [L:{el.rect.get('left',0)}, T:{el.rect.get('top',0)}, R:{el.rect.get('right',0)}, B:{el.rect.get('bottom',0)}] (Tag: {el.tag_name})\" for el in visual_elements[:20]]\n                        combined_content += f\"# Visual Elements (Detected via CV, Max 20):\\n{chr(10).join(formatted)}\\n\\n\"; logger.info(f\"Added {len(formatted)} visual elements.\") # Use chr(10) for newline\n                    else: combined_content += \"# Visual Elements:\\n(None detected or VLM error)\\n\\n\"; logger.info(\"No visual elements detected.\")\n                except Exception as cv_err: error_messages.append(f\"CV Error: {cv_err}\"); logger.error(\"CV Detector Error\", exc_info=True); combined_content += f\"# Visual Elements Error: {cv_err}\\n\"\n            else:\n                 if not self.detector: logger.info(\"CV Detector not available.\")\n                 if not screenshot_b64: logger.info(\"Screenshot missing.\")\n                 combined_content += \"# Visual Elements:\\n(Not Run)\\n\\n\"\n\n            if len(combined_content) > max_length:\n                logger.warning(f\"Combined content ({len(combined_content)}) exceeds limit ({max_length}). Truncating.\")\n                reserve = len(\"\\n\\n# Content Retrieval Errors:\\n- \") + sum(len(str(e)) + 4 for e in error_messages) + 50\n                trunc_len = max(0, max_length - reserve); combined_content = combined_content[:trunc_len].rstrip() + \"\\n\\n... 
(Content truncated)\"\n            if error_messages: combined_content += \"\\n\\n# Content Retrieval Errors:\\n- \" + \"\\n- \".join(map(str, error_messages))\n            logger.info(f\"Finished getting content (final length: {len(combined_content)})\")\n            return combined_content\n        except Exception as e: logger.error(f\"General error in get_content: {e}\", exc_info=True); return f\"# Page URL:\\n{current_url}\\n# Error:\\nFailed to get content: {e}\"\n\n    # --- Other Methods from Original Code ---\n\n    async def get_cookies(self) -> list[dict[str, Any]]:\n        \"\"\"Get cookies from the current browser context.\"\"\"\n        if self.context:\n            try: return await self.context.cookies()\n            except Exception as e: logger.error(f\"Failed to get cookies: {e}\"); return []\n        return []\n\n    async def get_storage_state(self) -> dict[str, Any]:\n        \"\"\"Get storage state (currently only cookies) from the browser.\"\"\"\n        # Playwright's get_storage_state includes local/session storage too,\n        # but might require more careful handling or filtering if large.\n        # Sticking to cookies for simplicity based on original user code structure.\n        if self.context:\n            try:\n                 # cookies = await self.context.cookies() # Redundant if get_cookies exists\n                 # return {'cookies': cookies}\n                 # Or use the full state function if available and needed\n                 state = await self.context.storage_state()\n                 return state\n            except Exception as e:\n                 logger.error(f\"Failed to get storage state: {e}\")\n                 return {}\n        return {}\n\n    async def get_tabs_info(self) -> list[TabInfo]:\n        \"\"\"Get information about all open tabs in the current context.\"\"\"\n        tabs_info = []\n        if not self.context: return []\n        try:\n            # Ensure pages list is accessed correctly\n   
         pages = self.context.pages\n            for i, page in enumerate(pages):\n                 if not page.is_closed(): # Check if page is open\n                     try:\n                         url = page.url\n                         title = await page.title()\n                         # Ensure TabInfo model is available\n                         tabs_info.append(TabInfo(page_id=i, url=url, title=title))\n                     except Exception as page_err:\n                          logger.warning(f\"Failed to get info for tab {i}: {page_err}\")\n                          # Add placeholder if needed?\n                          tabs_info.append(TabInfo(page_id=i, url=\"Error\", title=\"Error retrieving info\"))\n\n        except Exception as e:\n             logger.error(f\"Failed to get tabs info: {e}\")\n        return tabs_info\n\n    async def switch_to_tab(self, page_id: int) -> None:\n        \"\"\"Switch focus to a specific tab by its index.\"\"\"\n        if self.context is None: await self._init_browser()\n        pages = self.context.pages\n        if not 0 <= page_id < len(pages):\n            raise BrowserError(f'Invalid page_id: {page_id}. 
Available pages: {len(pages)}')\n        if pages[page_id].is_closed():\n            raise BrowserError(f'Page with page_id {page_id} is closed.')\n\n        logger.info(f\"Switching to tab (page_id): {page_id}\")\n        self.current_page = pages[page_id]\n        try:\n            await self.current_page.bring_to_front()\n            # Wait briefly for potential state changes after switch\n            await self.current_page.wait_for_load_state('domcontentloaded', timeout=5000)\n        except Exception as e:\n             logger.warning(f\"Error during tab switch finalization for page {page_id}: {e}\")\n             # Continue anyway, page is switched internally\n\n    async def create_new_tab(self, url: str | None = None) -> None:\n        \"\"\"Create a new tab, optionally navigating to a URL, and switch to it.\"\"\"\n        if self.context is None: await self._init_browser()\n        logger.info(f\"Creating new tab. Navigate to: {url if url else 'about:blank'}\")\n        try:\n            new_page = await self.context.new_page()\n            self.current_page = new_page # Switch focus to the new page\n            if url:\n                await self.navigate_to(url) # Reuse navigate method\n            else:\n                 await new_page.wait_for_load_state('domcontentloaded') # Wait for about:blank load\n            logger.info(f\"Switched to new tab. URL: {self.current_page.url}\")\n        except Exception as e:\n             logger.error(f\"Failed to create new tab: {e}\")\n             raise BrowserError(f\"Failed to create new tab: {e}\") from e\n\n\n    async def close_current_tab(self):\n        \"\"\"Close the currently focused tab.\"\"\"\n        if self.current_page is None: logger.warning(\"No current page to close.\"); return\n        if len(self.context.pages) <= 1: logger.warning(\"Cannot close the last remaining tab.\"); return # Prevent closing last tab? 
Or allow context close?\n\n        logger.info(f\"Closing current tab: {self.current_page.url}\")\n        page_to_close = self.current_page\n        # Find index to switch to after closing (e.g., previous or first)\n        pages = self.context.pages\n        current_index = pages.index(page_to_close) if page_to_close in pages else -1\n        switch_to_index = 0 if current_index != 0 else 1 # Switch to first unless closing first\n        if switch_to_index >= len(pages): switch_to_index = 0 # Fallback\n\n        try:\n            await page_to_close.close()\n            logger.info(\"Tab closed.\")\n            # Need to wait briefly for context.pages to update sometimes\n            await asyncio.sleep(0.1)\n            # Switch to another tab if possible\n            if self.context and self.context.pages:\n                 new_current_page = self.context.pages[min(switch_to_index, len(self.context.pages)-1)]\n                 self.current_page = new_current_page\n                 await self.current_page.bring_to_front()\n                 logger.info(f\"Switched to tab index {min(switch_to_index, len(self.context.pages)-1)} after closing.\")\n            else:\n                 self.current_page = None # No pages left\n                 logger.info(\"Closed the last tab.\")\n\n        except Exception as e:\n             logger.error(f\"Error closing tab or switching: {e}\")\n             # Attempt to recover current page if possible\n             if self.context and self.context.pages: self.current_page = self.context.pages[0]\n             else: self.current_page = None\n\n    async def refresh_page(self):\n        \"\"\"Refresh the current page.\"\"\"\n        page = await self.get_current_page()\n        logger.info(f\"Refreshing page: {page.url}\")\n        try:\n             await page.reload(wait_until='domcontentloaded')\n             logger.info(\"Page refreshed.\")\n        except Exception as e:\n             logger.error(f\"Failed to refresh page: 
{e}\")\n             raise BrowserError(f\"Failed to refresh page: {e}\") from e\n\n    async def go_forward(self):\n        \"\"\"Navigate forward in the current page's history.\"\"\"\n        page = await self.get_current_page()\n        logger.info(f\"Going forward in history for: {page.url}\")\n        try:\n            await page.go_forward(wait_until='domcontentloaded', timeout=10000) # Added timeout\n            logger.info(f\"Navigated forward. New URL: {page.url}\")\n        except Exception as e:\n            # Often fails if no forward history exists, log as warning\n            logger.warning(f'Failed to go forward (might be end of history): {e}')\n            # raise BrowserError(f\"Failed to go forward: {e}\") from e # Option: re-raise if needed\n\n    # --- State Update Methods (using CV potentially) ---\n    def get_state(self) -> Optional[BrowserState]:\n        \"\"\"Get the last updated internal browser state.\"\"\"\n        # Returns the state cached from the last update_state call\n        logger.debug(f\"Returning cached browser state (URL: {self._state.url if self._state else 'None'})\")\n        return self._state\n\n    @observe(name='browser.update_state', ignore_output=True)\n    async def update_state(self) -> BrowserState:\n        \"\"\"Update the internal browser state by re-evaluating the page (incl. 
CV if enabled).\"\"\"\n        logger.info(\"Updating browser state...\")\n        try:\n            self._state = await self._update_state()\n            logger.info(\"Browser state updated successfully.\")\n            if not self._state: raise BrowserError(\"State update returned None unexpectedly.\") # Should not happen if _update_state raises\n            return self._state\n        except Exception as e:\n             logger.error(f\"Failed to update browser state: {e}\", exc_info=True)\n             # Decide whether to return old state or raise error\n             # Raising error seems more appropriate if update fails\n             raise BrowserError(f\"Failed to update state: {e}\") from e\n\n\n    @observe(name='browser._update_state', ignore_output=True)\n    async def _update_state(self) -> BrowserState:\n        \"\"\"Internal method to get comprehensive state with retry logic.\"\"\"\n        @retry(\n            stop=stop_after_attempt(3),\n            wait=wait_exponential(multiplier=0.5, min=0.5, max=2),\n            retry=retry_if_exception_type((Exception)), # Retry on any exception during state fetch\n            reraise=True # Re-raise the exception after retries fail\n        )\n        async def get_stable_state():\n            page = await self.get_current_page() # Ensures page exists\n            url = page.url\n            detect_sheets = 'docs.google.com/spreadsheets/d' in url\n            screenshot_b64 = await self.fast_screenshot() # Get screenshot\n\n            interactive_elements_data: Optional[InteractiveElementsData] = None\n            # Get combined elements using CV if detector is enabled\n            if self.detector and screenshot_b64:\n                logger.debug(\"Getting interactive elements with CV...\")\n                interactive_elements_data = await self.get_interactive_elements_with_cv(screenshot_b64, detect_sheets)\n            # Fallback to browser-only if detector disabled or screenshot failed\n            elif 
INTERACTIVE_ELEMENTS_JS_CODE: # Ensure JS code loaded\n                 logger.debug(\"Getting interactive elements with browser JS only...\")\n                 interactive_elements_data = await self.get_interactive_elements_data()\n            else:\n                 logger.error(\"Cannot get interactive elements: Detector disabled/failed and JS code missing.\")\n                 interactive_elements_data = InteractiveElementsData(viewport={\"width\":0,\"height\":0}, elements=[]) # Return empty state\n\n            # Check if interactive_elements_data is valid before proceeding\n            if interactive_elements_data is None or not hasattr(interactive_elements_data, 'elements'):\n                 raise BrowserError(\"Failed to retrieve valid interactive elements data.\")\n\n            # Process elements into a dictionary keyed by agent id for the state\n            interactive_elements = {element.browser_agent_id: element for element in interactive_elements_data.elements}\n\n            # Generate highlighted screenshot\n            screenshot_with_highlights = None\n            if screenshot_b64 and 'put_highlight_elements_on_screenshot' in globals():\n                try:\n                     screenshot_with_highlights = put_highlight_elements_on_screenshot(\n                         list(interactive_elements.values()), # Pass list of elements\n                         screenshot_b64\n                     )\n                except Exception as high_err:\n                     logger.warning(f\"Failed to generate highlighted screenshot: {high_err}\")\n\n            # Get tab info\n            tabs = await self.get_tabs_info()\n\n            # Guard against a missing module-level import of BrowserState.\n            # (Checking locals() here would always fail: module-level imports\n            # live in globals(), never in this function's locals().)\n            if 'BrowserState' not in globals():\n                 raise ImportError(\"BrowserState model is not defined or imported.\")\n\n            # Create and return the state object\n            return BrowserState(\n                url=url,\n                
tabs=tabs,\n                screenshot_with_highlights=screenshot_with_highlights,\n                screenshot=screenshot_b64,\n                viewport=interactive_elements_data.viewport, # Use viewport from data\n                interactive_elements=interactive_elements,\n            )\n\n        # Execute the retry logic\n        try:\n            new_state = await get_stable_state()\n            self._state = new_state # Cache the new state\n            return new_state\n        except Exception as e:\n            logger.error(f'Failed to update state after multiple attempts: {e}', exc_info=True)\n            # Don't return potentially stale state, let error propagate\n            raise BrowserError(f\"Failed to update state definitively: {e}\") from e\n\n    @observe(name='browser.get_interactive_elements')\n    async def get_interactive_elements_data(self) -> InteractiveElementsData:\n        \"\"\"Gets interactive elements using only in-browser JavaScript.\"\"\"\n        page = await self.get_current_page()\n        if not INTERACTIVE_ELEMENTS_JS_CODE:\n             logger.error(\"INTERACTIVE_ELEMENTS_JS_CODE is empty. 
Cannot get elements.\")\n             # Return default empty structure (page.viewport_size is a property, not a coroutine)\n             vp = page.viewport_size or {\"width\":0, \"height\":0}\n             return InteractiveElementsData(viewport=vp, elements=[])\n        try:\n            result = await page.evaluate(INTERACTIVE_ELEMENTS_JS_CODE)\n            # Validate result basic structure\n            if not isinstance(result, dict) or 'viewport' not in result or 'elements' not in result:\n                 logger.error(f\"JS evaluation returned unexpected structure: {type(result)}\")\n                 vp = page.viewport_size or {\"width\":0, \"height\":0}\n                 return InteractiveElementsData(viewport=vp, elements=[])\n            # Parse using the Pydantic model; validation errors are caught below\n            return InteractiveElementsData(**result)\n        except Exception as e:\n             logger.error(f\"Error evaluating INTERACTIVE_ELEMENTS_JS_CODE: {e}\", exc_info=True)\n             vp = page.viewport_size or {\"width\":0, \"height\":0}\n             return InteractiveElementsData(viewport=vp, elements=[])\n\n    @observe(name='browser.get_interactive_elements_with_cv')\n    async def get_interactive_elements_with_cv(self, screenshot_b64: Optional[str] = None, detect_sheets: bool = False) -> InteractiveElementsData:\n        \"\"\"Combines browser JS element detection with VLM detection.\"\"\"\n        if self.detector is None:\n            logger.warning(\"CV detector not available. 
Falling back to browser-only detection.\")\n            return await self.get_interactive_elements_data()\n\n        # Ensure screenshot exists\n        current_screenshot_b64 = screenshot_b64 or await self.fast_screenshot()\n        if not current_screenshot_b64:\n             logger.error(\"Screenshot unavailable for CV detection.\")\n             return await self.get_interactive_elements_data() # Fallback\n\n        logger.debug(\"Getting combined browser + CV elements...\")\n        try:\n            # Run browser JS detection and VLM detection concurrently\n            browser_elements_data_task = asyncio.create_task(self.get_interactive_elements_data())\n            cv_elements_task = asyncio.create_task(self.detector.detect_from_image(current_screenshot_b64, detect_sheets))\n\n            browser_elements_data = await browser_elements_data_task\n            cv_elements = await cv_elements_task\n\n            # Ensure results are valid before combining\n            if not browser_elements_data or not hasattr(browser_elements_data, 'elements'):\n                 logger.warning(\"Browser element data invalid or missing for combine step.\")\n                 browser_elements = []\n                 page = await self.get_current_page()  # get_current_page is a coroutine; await it before reading the viewport_size property\n                 viewport = page.viewport_size or {\"width\":0,\"height\":0}\n            else:\n                 browser_elements = browser_elements_data.elements\n                 viewport = browser_elements_data.viewport # Use viewport from browser data\n\n            if not isinstance(cv_elements, list):\n                 logger.warning(\"CV elements result is not a list.\")\n                 cv_elements = []\n\n            # Combine results using utility function\n            if 'combine_and_filter_elements' in globals():\n                 combined_elements = combine_and_filter_elements(browser_elements, cv_elements)\n                 logger.info(f\"Combined browser ({len(browser_elements)}) and CV ({len(cv_elements)}) elements into 
{len(combined_elements)}.\")\n            else:\n                 logger.error(\"combine_and_filter_elements utility function not found. Returning only browser elements.\")\n                 combined_elements = browser_elements # Fallback\n\n            # Return combined data in the expected structure ('or', not 'and': a module-level import never appears in a method's locals())\n            if 'InteractiveElementsData' in globals() or 'InteractiveElementsData' in locals():\n                 return InteractiveElementsData(viewport=viewport, elements=combined_elements)\n            else:\n                 logger.error(\"InteractiveElementsData model missing, returning raw combined list.\")\n                 # This fallback is problematic, structure is needed downstream\n                 return {\"viewport\": viewport, \"elements\": combined_elements} # type: ignore\n\n        except Exception as e:\n            logger.error(f\"Error during combined CV+Browser element detection: {e}\", exc_info=True)\n            # Fall back gracefully to browser-only detection if possible\n            try: return await self.get_interactive_elements_data()\n            except Exception: return InteractiveElementsData(viewport={\"width\":0,\"height\":0}, elements=[]) # Final fallback"
  },
  {
    "path": "super_agents/browser_use/browser/detector.py",
    "content": "# super_agents/browser_use/browser/detector.py\nimport os\nimport json\nimport logging\nimport base64\nfrom typing import List, Optional, Dict, Any\n\n# LangChain Core Imports\nfrom langchain_core.messages import HumanMessage, SystemMessage\nfrom langchain_core.runnables.base import RunnableSerializable\n# Pydantic for schema\ntry:\n    from pydantic.v1 import BaseModel\nexcept ImportError:\n    from pydantic import BaseModel\n\nfrom tenacity import (\n    retry,\n    retry_if_exception_type,\n    stop_after_attempt,\n    wait_exponential,\n)\n\n# Local imports (ensure they exist)\ntry:\n    from .observe_helper import observe\nexcept ImportError:\n    def observe(name, ignore_input=False, ignore_output=False):\n        def decorator(func): return func\n        return decorator\n    # Setup basic logger if not configured by main app yet\n    logging.basicConfig(level=logging.WARNING)\n    logger = logging.getLogger(__name__)\n    logger.warning(\"observe_helper not found, using dummy decorator.\")\ntry:\n    from .models import InteractiveElement\n    # Define the expected VLM output schema here or import from agent.schemas\n    # Let's define it here for clarity in this step\n    class VLMJsonOutput(BaseModel):\n        detected_elements: List[Dict[str, Any]] = []\nexcept ImportError:\n    class InteractiveElement: pass\n    class VLMJsonOutput(BaseModel): detected_elements: List = []\n    # Setup basic logger if not configured by main app yet\n    logging.basicConfig(level=logging.WARNING)\n    logger = logging.getLogger(__name__)\n    logger.error(\"Failed to import InteractiveElement or define VLMJsonOutput! Detector parsing will fail.\")\n\n# Import the specific ChatOpenRouter class from the updated llm.py\n# Adjust path if llm.py is elsewhere relative to detector.py\ntry:\n    from ..llm import ChatOpenRouter # Assumes llm.py is one level up\nexcept ImportError:\n     logger.error(\"Failed to import ChatOpenRouter from ..llm. 
Ensure llm.py is in the parent directory.\")\n     # Define a dummy class to allow loading, but it won't work\n     class ChatOpenRouter: pass\n\nlogger = logging.getLogger(__name__)\n\n# --- VLM Configuration (Read by Detector's __init__ via ChatOpenRouter) ---\nVLM_API_MODEL = os.getenv(\"VLM_API_MODEL\", \"openai/gpt-4o\") # Read desired VLM model from .env\n\n# --- VLM Prompt Template ---\nVLM_PROMPT_TEMPLATE = \"\"\"\nAnalyze the provided screenshot of a webpage. Your task is to identify all significant interactive elements visible on the screen. Interactive elements include: buttons, links (<a> tags), text input fields (<input type='text'>, <input type='search'>, etc.), password fields (<input type='password'>), text areas (<textarea>), select dropdowns (<select>), checkboxes (<input type='checkbox'>), radio buttons (<input type='radio'>), and any other clearly clickable areas (e.g., some <div>s or <span>s styled as buttons).\n\nFor EACH identified interactive element, provide the following information:\n1.  `type`: A string indicating the type of the element (e.g., \"button\", \"link\", \"input-text\", \"input-password\", \"textarea\", \"select\", \"checkbox\", \"radio\", \"clickable-area\").\n2.  `description`: A brief string describing the element, preferably using its visible text label or aria-label. If no text is available, describe its appearance or function (e.g., \"Search icon button\", \"Dropdown menu arrow\").\n3.  `box_percent`: A list of four floating-point numbers `[xmin, ymin, xmax, ymax]`, representing the bounding box coordinates as percentages of the image's total width and height. Each value must be between 0.0 and 1.0. `xmin` is the left edge, `ymin` is the top edge, `xmax` is the right edge, and `ymax` is the bottom edge, all relative to the image dimensions.\n\nYour response MUST be a single, valid JSON object. This object must contain exactly one key: `\"detected_elements\"`. 
The value associated with this key must be a list (`[]`) where each item in the list is an object containing the `type`, `description`, and `box_percent` for one detected element.\n\nExample of the required EXACT output format:\n```json\n{{\n  \"detected_elements\": [\n    {{\n      \"type\": \"link\",\n      \"description\": \"new\",\n      \"box_percent\": [0.152, 0.015, 0.180, 0.035]\n    }},\n    {{\n      \"type\": \"input-text\",\n      \"description\": \"Search query input\",\n      \"box_percent\": [0.3, 0.1, 0.7, 0.15]\n    }},\n    {{\n       \"type\": \"button\",\n       \"description\": \"Login\",\n       \"box_percent\": [0.85, 0.1, 0.95, 0.15]\n    }}\n  ]\n}}\n```\n\nOutput ONLY the JSON object within a ```json ... ``` block. Do not include any other explanatory text before or after the JSON block. Be precise with the bounding box percentages.\n\"\"\"\n\nclass Detector:\n    \"\"\"\n    Uses ChatOpenRouter (LangChain) to call a VLM for visual element detection.\n    Initializes its own VLM client based on environment variables.\n    \"\"\"\n    def __init__(self):\n        \"\"\"\n        Initialize the detector by creating a ChatOpenRouter instance.\n        Reads OPENROUTER_API_KEY and VLM_API_MODEL from environment variables.\n        \"\"\"\n        self.vlm_client: Optional[ChatOpenRouter] = None\n        self.enabled = False\n        openrouter_key = os.getenv(\"OPENROUTER_API_KEY\")\n\n        if not openrouter_key:\n            logger.error(\"OPENROUTER_API_KEY environment variable not set. Vision detector disabled.\")\n        elif not VLM_API_MODEL:\n            logger.error(\"VLM_API_MODEL environment variable not set (e.g., 'alibaba/qwen-vl-max'). 
Vision detector disabled.\")\n        else:\n            try:\n                # Instantiate ChatOpenRouter using the VLM model from env var\n                # It reads OPENROUTER_API_KEY internally via its Field definition\n                self.vlm_client = ChatOpenRouter(\n                    model_name=VLM_API_MODEL,\n                    temperature=0.05,\n                    max_tokens=2048,\n                    # Note: API key is handled by ChatOpenRouter's default_factory\n                )\n                self.enabled = True\n                logger.info(f\"ChatOpenRouter VLM Detector initialized. Enabled: {self.enabled}. Model: {VLM_API_MODEL}\")\n            except Exception as e:\n                 logger.error(f\"Failed to initialize ChatOpenRouter in Detector: {e}\", exc_info=True)\n                 self.enabled = False # Ensure disabled if init fails\n\n\n    @observe(name=\"detector.detect_from_image\", ignore_input=True)\n    @retry(\n        stop=stop_after_attempt(3),\n        wait=wait_exponential(multiplier=1, min=1, max=10),\n        retry=retry_if_exception_type(Exception), # Retry on LangChain exceptions too\n        reraise=True,\n    )\n    async def detect_from_image(self, image_b64: str, detect_sheets: bool = False) -> List[InteractiveElement]:\n        \"\"\"\n        Sends a base64 encoded image to the configured VLM via ChatOpenRouter.\n\n        Args:\n            image_b64: Base64 encoded image.\n            detect_sheets: Currently ignored.\n\n        Returns:\n            List of InteractiveElement objects parsed from the VLM response.\n        \"\"\"\n        if not self.enabled or not self.vlm_client or not image_b64:\n            logger.warning(\"Detector disabled, VLM client not initialized, or image missing. 
Skipping detection.\")\n            return []\n\n        logger.info(f\"Calling VLM {VLM_API_MODEL} via ChatOpenRouter...\")\n        image_url_data = f\"data:image/png;base64,{image_b64}\"\n\n        prompt_text = VLM_PROMPT_TEMPLATE\n        # Optional: Modify prompt if detect_sheets is True\n\n        messages = [\n            HumanMessage(\n                content=[\n                    {\"type\": \"text\", \"text\": prompt_text},\n                    {\"type\": \"image_url\", \"image_url\": {\"url\": image_url_data}}\n                ]\n            )\n        ]\n\n        try:\n            # Use with_structured_output targeting the VLMJsonOutput schema\n            # Ensure VLMJsonOutput is correctly defined/imported\n            structured_llm_vlm = self.vlm_client.with_structured_output(VLMJsonOutput)\n            vlm_output: Optional[VLMJsonOutput] = await structured_llm_vlm.ainvoke(messages)\n\n            if vlm_output and isinstance(vlm_output, VLMJsonOutput):\n                detection_result = vlm_output.detected_elements\n                if not isinstance(detection_result, list): # Add validation\n                    logger.error(f\"Parsed VLM output 'detected_elements' is not a list: {detection_result}\")\n                    return []\n                logger.info(f\"Successfully received and parsed VLM JSON with {len(detection_result)} potential elements.\")\n                elements = self._parse_vlm_detections(detection_result)\n                logger.info(f\"Created {len(elements)} InteractiveElement objects from VLM detections.\")\n                return elements\n            else:\n                logger.error(\"VLM response failed validation against VLMJsonOutput schema or returned None.\")\n                return []\n\n        except Exception as e:\n            logger.error(f\"Error calling VLM or processing structured output: {e}\", exc_info=True)\n            raise # Re-raise to trigger tenacity retry or fail the node\n\n    def _parse_vlm_detections(self, detections: List[Dict[str, Any]]) -> List[InteractiveElement]:\n        \"\"\"\n        Parses VLM JSON output into InteractiveElement objects, populating\n        top-level VLM fields instead of nested attributes.\n        NOTE: Still needs image dimensions for pixel coordinates.\n        \"\"\"\n        elements = []\n        if not isinstance(detections, list):\n            logger.warning(f\"VLM detections expected to be a list, but got {type(detections)}\")\n            return []\n\n        # Placeholder dimensions\n        img_w, img_h = 100, 100\n\n        for i, pred in enumerate(detections):\n            if not isinstance(pred, dict):\n                logger.warning(f\"Skipping detection item as it's not a dict: {pred}\")\n                continue\n\n            try:\n                box_percent = pred.get('box_percent')\n                vlm_description = pred.get('description', '') # Get VLM description\n                vlm_type = pred.get('type', 'unknown') # Get VLM suggested type\n\n                if not isinstance(box_percent, list) or len(box_percent) != 4 or not all(isinstance(n, (int, float)) for n in box_percent):\n                     logger.warning(f\"Skipping detection due to invalid box_percent format: {box_percent}\")\n                     continue\n                box_percent_clamped = [max(0.0, min(1.0, p)) for p in box_percent]\n\n                # Calculate placeholder pixel values\n                xmin = round(box_percent_clamped[0] * img_w); ymin = round(box_percent_clamped[1] * img_h)\n                xmax = round(box_percent_clamped[2] * img_w); ymax = round(box_percent_clamped[3] * img_h)\n                if xmax < xmin: xmax = xmin\n                if ymax < ymin: ymax = ymin\n                width = xmax - xmin; height = ymax - ymin\n\n                index_id = f\"vlm-{i}\"\n                # Use VLM type as tag_name, or maybe default to 'div'?\n                tag_name = vlm_type # Or 'div'\n\n                if 'InteractiveElement' not in globals() and 'InteractiveElement' not in locals(): continue\n\n                element = InteractiveElement(\n                    index=i,\n                    browser_agent_id=index_id,\n                    tag_name=tag_name,\n                    # Basic attributes remain empty for pure VLM detections for now\n                    attributes={},\n                    weight=0.8, # VLM weight\n                    # Use calculated placeholder pixel values\n                    viewport={\"x\": xmin, \"y\": ymin, \"width\": width, \"height\": height},\n                    page={\"x\": xmin, \"y\": ymin, \"width\": width, \"height\": height},\n                    center={\"x\": xmin + width//2, \"y\": ymin + height//2},\n                    rect={\"left\": xmin, \"top\": ymin, \"right\": xmax, \"bottom\": ymax, \"width\": width, \"height\": height},\n                    z_index=0,\n                    # --- Populate NEW VLM specific fields ---\n                    vlm_description=vlm_description,\n                    vlm_type=vlm_type,\n                    box_percent=box_percent_clamped\n                    # --- End VLM specific fields ---\n                )\n                elements.append(element)\n\n            except Exception as e:\n                logger.warning(f\"Error parsing individual VLM detection: {e} - Data: {pred}\", exc_info=False)\n\n        return elements"
  },
  {
    "path": "super_agents/browser_use/browser/findVisibleInteractiveElements.js",
    "content": "() => {\n\n    console.time('totalExecutionTime');\n\n    // Define element weights for interactive likelihood - moved to higher scope\n    const elementWeights = {\n        'button': 10,\n        'a': 10,\n        'input': 10,\n        'select': 10,\n        'textarea': 10,\n        'summary': 8,\n        'details': 7,\n        'label': 5, // Labels are clickable but not always interactive\n        'option': 7,\n        'tr': 4,\n        'th': 3,\n        'td': 3,\n        'li': 8,\n        'div': 2,\n        'span': 1,\n        'img': 2,\n        'svg': 3,\n        'path': 3\n    };\n\n    function generateUniqueId() {\n        const rand = Math.random().toString(36);\n        return `ba-${rand}`;\n    } \n\n    // Add this helper function to check element coverage\n    function isElementTooBig(rect) {\n        const viewportWidth = window.innerWidth || document.documentElement.clientWidth;\n        const viewportHeight = window.innerHeight || document.documentElement.clientHeight;\n        const viewportArea = viewportWidth * viewportHeight;\n\n        // Calculate visible area of the element\n        const visibleWidth = Math.min(rect.right, viewportWidth) - Math.max(rect.left, 0);\n        const visibleHeight = Math.min(rect.bottom, viewportHeight) - Math.max(rect.top, 0);\n        const visibleArea = visibleWidth * visibleHeight;\n\n        // Check if element covers more than 50% of viewport\n        return (visibleArea / viewportArea) > 0.5;\n    }\n\n    // Helper function to check if element is in the visible viewport\n    function isInViewport(rect) {\n        // Get viewport dimensions\n        const viewportWidth = window.innerWidth || document.documentElement.clientWidth;\n        const viewportHeight = window.innerHeight || document.documentElement.clientHeight;\n        \n        // Element must have meaningful size\n        if (rect.width < 2 || rect.height < 2) {\n            return false;\n        }\n        \n        // Check if 
substantial part of the element is in viewport (at least 30%)\n        const visibleWidth = Math.min(rect.right, viewportWidth) - Math.max(rect.left, 0);\n        const visibleHeight = Math.min(rect.bottom, viewportHeight) - Math.max(rect.top, 0);\n        \n        if (visibleWidth <= 0 || visibleHeight <= 0) {\n            return false; // Not in viewport at all\n        }\n        \n        const visibleArea = visibleWidth * visibleHeight;\n        const totalArea = rect.width * rect.height;\n        const visiblePercent = visibleArea / totalArea;\n        \n        return visiblePercent >= 0.3; // At least 30% visible\n    }\n\n    // Helper function to get correct bounding rectangle, accounting for iframes\n    function getAdjustedBoundingClientRect(element, contextInfo = null) {\n        const rect = element.getBoundingClientRect();\n        \n        // If element is in an iframe, adjust coordinates\n        if (contextInfo && contextInfo.iframe) {\n            const iframeRect = contextInfo.iframe.getBoundingClientRect();\n            return {\n                top: rect.top + iframeRect.top,\n                right: rect.right + iframeRect.left,\n                bottom: rect.bottom + iframeRect.top,\n                left: rect.left + iframeRect.left,\n                width: rect.width,\n                height: rect.height\n            };\n        }\n        \n        return rect;\n    }\n\n    // Helper function to check if element is the top element at its position\n    function isTopElement(element) {\n\n        try {\n            const rect = getAdjustedBoundingClientRect(element, element._contextInfo);\n            const centerX = rect.left + rect.width / 2;\n            const centerY = rect.top + rect.height / 2;\n            \n            // Check if the element is visible at its center point\n            const elementsAtPoint = document.elementsFromPoint(centerX, centerY);\n            \n            // Nothing at this point (might be covered by an 
overlay)\n            if (!elementsAtPoint || elementsAtPoint.length === 0) {\n                return false;\n            }\n            \n            // Handle iframe cases\n            if (element._contextInfo && element._contextInfo.iframe) {\n                // For elements in iframes, check if the iframe itself is the top-level element\n                // then check if the element is topmost within that iframe\n                const iframe = element._contextInfo.iframe;\n                \n                // First check if iframe is visible at the adjusted center point\n                const iframeVisibleAtPoint = elementsAtPoint.includes(iframe);\n                if (!iframeVisibleAtPoint) {\n                    return false;\n                }\n                \n                // Then check if element is topmost within the iframe\n                try {\n                    const iframeDoc = iframe.contentDocument || iframe.contentWindow.document;\n                    // Convert coordinates to iframe's local coordinate system\n                    const iframeRect = iframe.getBoundingClientRect();\n                    const localX = centerX - iframeRect.left;\n                    const localY = centerY - iframeRect.top;\n                    \n                    const elementAtPointInIframe = iframeDoc.elementFromPoint(localX, localY);\n\n                    if (!elementAtPointInIframe) return false;\n\n                    return elementAtPointInIframe === element || element.contains(elementAtPointInIframe) || elementAtPointInIframe.contains(element);\n\n                } catch (e) {\n                    console.warn('Error checking element position in iframe:', e);\n                    return false;\n                }\n            }\n            \n            // Handle shadow DOM cases\n            if (element._contextInfo && element._contextInfo.shadowHost) {\n                // For shadow DOM elements, first check if its shadow host is visible\n             
   const shadowHost = element._contextInfo.shadowHost;\n                const shadowHostVisible = elementsAtPoint.includes(shadowHost);\n                \n                if (!shadowHostVisible) {\n                    return false;\n                }\n                \n                // Shadow DOM elements aren't directly accessible via elementsFromPoint\n                // So we're simplifying and assuming visibility based on the host visibility\n                return true;\n            }\n            \n            const elementAtPoint = document.elementFromPoint(centerX, centerY);\n            \n            if (!elementAtPoint) return false;\n            // Check if the element at this point is our element or a descendant/ancestor of our element\n            return element === elementAtPoint || \n                    element.contains(elementAtPoint) || \n                    elementAtPoint.contains(element);\n            \n        } catch (e) {\n            console.warn('Error in isTopElement check:', e);\n            return false;\n        }\n    }\n\n    // Add helper function to get effective z-index\n    function getEffectiveZIndex(element) {\n        let current = element;\n        let zIndex = 'auto';\n        \n        while (current && current !== document) {\n            const style = window.getComputedStyle(current);\n            if (style.position !== 'static' && style.zIndex !== 'auto') {\n                zIndex = parseInt(style.zIndex, 10);\n                break;\n            }\n            current = current.parentElement;\n        }\n        \n        return zIndex === 'auto' ? 
0 : zIndex;\n    }\n\n    // Function to find all interactive elements\n    function findInteractiveElements() {\n        console.time('findInteractiveElements');\n        \n        // Batch selectors for better performance\n        const selectors = {\n            highPriority: 'button, a[href], input:not([type=\"hidden\"]), select, textarea, [role=\"button\"], [role=\"link\"], [role=\"checkbox\"], [role=\"menuitem\"], [role=\"tab\"], li[role=\"option\"], [role=\"switch\"]',\n            mediumPriority: 'details, summary, svg, path, td, [role=\"option\"], [role=\"radio\"], [role=\"switch\"], [tabindex]:not([tabindex=\"-1\"]), [aria-label], [aria-labelledby]',\n            lowPriority: '[onclick], .clickable, .btn, .button, .nav-item, .menu-item'\n        };\n        \n        // Process only elements in viewport for better performance\n        const allElements = [];\n        const processedElements = new Set();\n        const viewportElements = [];\n        \n        // Function to query elements within a document or shadow root\n        function queryElementsInContext(context, selector) {\n            try {\n                return context.querySelectorAll(selector);\n            } catch (e) {\n                console.warn('Error querying for elements:', e);\n                return [];\n            }\n        }\n        \n        // Function to process a document or shadow root\n        function processContext(context, contextInfo = { iframe: null, shadowHost: null }) {\n            // Process elements in priority order\n            Object.keys(selectors).forEach(priority => {\n                try {\n                    const elements = queryElementsInContext(context, selectors[priority]);\n                    \n                    for (let i = 0; i < elements.length; i++) {\n                        const element = elements[i];\n                        \n                        // Skip already processed\n                        if (processedElements.has(element)) 
{\n                            continue;\n                        }\n                        \n                        processedElements.add(element);\n                        \n                        // Add context information to the element\n                        element._contextInfo = contextInfo;\n                        \n                        allElements.push(element);\n                    }\n                } catch (e) {\n                    console.warn(`Error processing ${priority} elements:`, e);\n                }\n            });\n            \n            // Process shadow DOM\n            const shadowHosts = queryElementsInContext(context, '*');\n            for (let i = 0; i < shadowHosts.length; i++) {\n                const host = shadowHosts[i];\n                if (host.shadowRoot) {\n                    processContext(\n                        host.shadowRoot, \n                        { \n                            iframe: contextInfo.iframe, \n                            shadowHost: host \n                        }\n                    );\n                }\n            }\n        }\n        \n        // Process main document\n        processContext(document);\n        \n        // Process iframes\n        try {\n            const iframes = document.querySelectorAll('iframe');\n            for (let i = 0; i < iframes.length; i++) {\n                const iframe = iframes[i];\n                \n                // Skip iframes from different origins\n                try {\n                    // This will throw if cross-origin\n                    const iframeDoc = iframe.contentDocument || iframe.contentWindow.document;\n                    processContext(iframeDoc, { iframe: iframe, shadowHost: null });\n                } catch (e) {\n                    console.warn('Could not access iframe content (likely cross-origin):', e);\n                }\n            }\n        } catch (e) {\n            console.warn('Error processing iframes:', 
e);\n        }\n        \n        // Process cursor:pointer elements in all contexts\n        function processCursorPointerElements(context, contextInfo = { iframe: null, shadowHost: null }) {\n            try {\n                const allElementsInContext = queryElementsInContext(context, '*');\n                \n                for (let i = 0; i < allElementsInContext.length; i++) {\n                    const element = allElementsInContext[i];\n                    \n                    // Skip already processed\n                    if (processedElements.has(element)) {\n                        continue;\n                    }\n                    \n                    // Quick check before expensive operations\n                    const rect = getAdjustedBoundingClientRect(element, contextInfo);\n                    if (!isInViewport(rect)) {\n                        continue;\n                    }\n                    \n                    // Check style\n                    if (isTopElement(element) && window.getComputedStyle(element).cursor === 'pointer') {\n                        // Add context information to the element\n                        element._contextInfo = contextInfo;\n                        \n                        processedElements.add(element);\n                        allElements.push(element);\n                        \n                        viewportElements.push({\n                            element: element,\n                            rect: rect,\n                            weight: 1,\n                            zIndex: getEffectiveZIndex(element)\n                        });\n                    }\n                    \n                    // Process shadow DOM of this element\n                    if (element.shadowRoot) {\n                        processCursorPointerElements(\n                            element.shadowRoot,\n                            {\n                                iframe: contextInfo.iframe,\n             
                   shadowHost: element\n                            }\n                        );\n                    }\n                }\n            } catch (e) {\n                console.warn('Error processing cursor:pointer elements:', e);\n            }\n        }\n        \n        // Process cursor:pointer elements in the main document\n        processCursorPointerElements(document);\n        \n        // Process cursor:pointer elements in iframes\n        try {\n            const iframes = document.querySelectorAll('iframe');\n            for (let i = 0; i < iframes.length; i++) {\n                const iframe = iframes[i];\n                try {\n                    const iframeDoc = iframe.contentDocument || iframe.contentWindow.document;\n                    processCursorPointerElements(iframeDoc, { iframe: iframe, shadowHost: null });\n                } catch (e) {\n                    // Already logged in previous iframe processing\n                }\n            }\n        } catch (e) {\n            // Already logged in previous iframe processing\n        }\n        \n        // Filter for visible elements\n        for (let i = 0; i < allElements.length; i++) {\n            const element = allElements[i];\n            \n            // Skip detailed processing if not in viewport\n            const rect = getAdjustedBoundingClientRect(element, element._contextInfo);\n            if (!isInViewport(rect)) {\n                continue;\n            }\n            \n            // Skip disabled elements\n            if (element.hasAttribute('disabled') || \n                element.getAttribute('aria-disabled') === 'true') {\n                continue;\n            }\n\n            // Add check for too-large elements\n            if (isElementTooBig(rect)) {\n                continue; // Skip elements that cover more than 50% of viewport\n            }\n            \n            // Check if the element is the top element at its position\n            if 
(!isTopElement(element)) {\n                continue;\n            }\n            \n            // Calculate element weight\n            let weight = elementWeights[element.tagName.toLowerCase()] || 1;\n            \n            // Boost weight for elements with specific attributes\n            if (element.getAttribute('role') === 'button') weight = Math.max(weight, 8);\n            if (element.hasAttribute('onclick')) weight = Math.max(weight, 7);\n            if (element.hasAttribute('href')) weight = Math.max(weight, 8);\n            if (window.getComputedStyle(element).cursor === 'pointer') weight = Math.max(weight, 4);\n            \n            // Add to viewport elements\n            viewportElements.push({\n                element: element,\n                rect: rect,\n                weight: weight,\n                zIndex: getEffectiveZIndex(element)\n            });\n\n            // Record this element's scan index as a data attribute\n            element.setAttribute('data-element-index', i);\n\n            // Add a unique identifier attribute to the element\n            const uniqueId = generateUniqueId();\n            element.setAttribute('data-browser-agent-id', uniqueId);\n        }\n        \n        console.timeEnd('findInteractiveElements');\n        console.log(`Found ${viewportElements.length} interactive elements in viewport (out of ${allElements.length} total)`);\n        return viewportElements;\n    }\n\n    // Calculate Intersection over Union (IoU) between two rectangles\n    function calculateIoU(rect1, rect2) {\n        // Calculate area of each rectangle\n        const area1 = (rect1.right - rect1.left) * (rect1.bottom - rect1.top);\n        const area2 = (rect2.right - rect2.left) * (rect2.bottom - rect2.top);\n        \n        // Calculate intersection\n        const intersectLeft = Math.max(rect1.left, rect2.left);\n        const intersectTop = Math.max(rect1.top, rect2.top);\n        const intersectRight = Math.min(rect1.right, 
rect2.right);\n        const intersectBottom = Math.min(rect1.bottom, rect2.bottom);\n        \n        // Check if intersection exists\n        if (intersectRight < intersectLeft || intersectBottom < intersectTop) {\n            return 0; // No intersection\n        }\n        \n        // Calculate area of intersection\n        const intersectionArea = (intersectRight - intersectLeft) * (intersectBottom - intersectTop);\n        \n        // Calculate union area\n        const unionArea = area1 + area2 - intersectionArea;\n        \n        // Calculate IoU\n        return intersectionArea / unionArea;\n    }\n\n    // Check if rect1 is fully contained within rect2\n    function isFullyContained(rect1, rect2) {\n        return rect1.left >= rect2.left && \n               rect1.right <= rect2.right &&\n               rect1.top >= rect2.top &&\n               rect1.bottom <= rect2.bottom;\n    }\n\n    // Filter overlapping elements using weight and IoU\n    function filterOverlappingElements(elements) {\n        console.time('filterOverlappingElements');\n        \n        // Sort by area (descending - larger first), then by weight (descending) for same area\n        elements.sort((a, b) => {\n            // Calculate areas\n            const areaA = a.rect.width * a.rect.height;\n            const areaB = b.rect.width * b.rect.height;\n            \n            // Sort by area first (larger area first)\n            if (areaB !== areaA) {\n                return areaB - areaA; // Larger area first\n            }\n            \n            // For same area, sort by weight (higher weight first)\n            return b.weight - a.weight;\n        });\n        \n        const filteredElements = [];\n        const iouThreshold = 0.7; // Threshold for considering elements as overlapping\n        \n        // Add elements one by one, checking against already added elements\n        for (let i = 0; i < elements.length; i++) {\n            const current = elements[i];\n      
      let shouldAdd = true;\n            \n            // For each element already in our filtered list\n            for (let j = 0; j < filteredElements.length; j++) {\n                const existing = filteredElements[j];\n                \n                // Convert DOMRect to plain object for IoU calculation\n                const currentRect = {\n                    left: current.rect.left,\n                    top: current.rect.top,\n                    right: current.rect.right,\n                    bottom: current.rect.bottom\n                };\n                \n                const existingRect = {\n                    left: existing.rect.left,\n                    top: existing.rect.top,\n                    right: existing.rect.right,\n                    bottom: existing.rect.bottom\n                };\n                \n                // Check for high overlap\n                const iou = calculateIoU(currentRect, existingRect);\n                if (iou > iouThreshold) {\n                    shouldAdd = false;\n                    break;\n                }\n                \n                // Check if current element is fully contained within an existing element with higher weight\n                if (existing.weight > current.weight && \n                    isFullyContained(currentRect, existingRect) && \n                    existing.zIndex === current.zIndex) {\n                    shouldAdd = false;\n                    break;\n                }\n            }\n            \n            if (shouldAdd) {\n                filteredElements.push(current);\n            }\n        }\n        \n        console.timeEnd('filterOverlappingElements');\n        return filteredElements;\n    }\n\n    // Main function to get interactive elements with coordinates\n    function getInteractiveElementsData() {\n        // Find all potential interactive elements\n        const potentialElements = findInteractiveElements();\n        \n        // Filter out 
overlapping elements\n        const filteredElements = filterOverlappingElements(potentialElements);\n        console.log(`Filtered to ${filteredElements.length} non-overlapping elements`);\n        \n        // Sort elements by position (top-to-bottom, left-to-right)\n        const sortedElements = sortElementsByPosition(filteredElements);\n        \n        // Prepare result with viewport metadata\n        const result = {\n            viewport: {\n                width: window.innerWidth,\n                height: window.innerHeight,\n                scrollX: Math.round(window.scrollX),\n                scrollY: Math.round(window.scrollY),\n                devicePixelRatio: window.devicePixelRatio || 1,\n                scrollDistanceAboveViewport: Math.round(window.scrollY),\n                scrollDistanceBelowViewport: Math.round(document.documentElement.scrollHeight - window.scrollY - window.innerHeight)\n            },\n            elements: []\n        };\n        \n        // Process each interactive element (now sorted by position)\n        sortedElements.forEach((item, index) => {\n            const element = item.element;\n            const rect = item.rect;\n            \n            // Ensure each element has a data-browser-agent-id\n            let browserId = element.getAttribute('data-browser-agent-id');\n\n            if (!browserId) {\n                const uniqueId = generateUniqueId();\n                element.setAttribute('data-browser-agent-id', uniqueId);\n                browserId = uniqueId;\n            }\n            \n            // Get element text (direct or from children)\n            let text = element.innerText || '';\n            if (!text) {\n                const textNodes = Array.from(element.childNodes)\n                    .filter(node => node.nodeType === Node.TEXT_NODE)\n                    .map(node => node.textContent.trim())\n                    .filter(content => content.length > 0);\n                text = textNodes.join(' ');\n     
       }\n            \n            // Extract important attributes\n            const attributes = {};\n            ['id', 'class', 'href', 'type', 'name', 'value', 'placeholder', 'aria-label', 'title', 'role'].forEach(attr => {\n                if (element.hasAttribute(attr)) {\n                    attributes[attr] = element.getAttribute(attr);\n                }\n            });\n            \n            // Determine input type and element role more clearly\n            let elementType = element.tagName.toLowerCase();\n            let inputType = null;\n\n            // Handle input elements specifically\n            if (elementType === 'input' && element.hasAttribute('type')) {\n                inputType = element.getAttribute('type').toLowerCase();\n            }\n\n            // Create element data object\n            const elementData = {\n                tagName: elementType,\n                text: text.trim(),\n                attributes,\n                index,\n                weight: item.weight,\n                browserAgentId: browserId,  // Use the guaranteed ID\n                inputType: inputType,  // Add specific input type\n                viewport: {\n                    x: Math.round(rect.left),\n                    y: Math.round(rect.top),\n                    width: Math.round(rect.width),\n                    height: Math.round(rect.height)\n                },\n                page: {\n                    x: Math.round(rect.left + window.scrollX),\n                    y: Math.round(rect.top + window.scrollY),\n                    width: Math.round(rect.width),\n                    height: Math.round(rect.height)\n                },\n                center: {\n                    x: Math.round(rect.left + rect.width/2),\n                    y: Math.round(rect.top + rect.height/2)\n                },\n                rect: {\n                    left: Math.round(rect.left),\n                    top: Math.round(rect.top),\n                   
 right: Math.round(rect.right),\n                    bottom: Math.round(rect.bottom),\n                    width: Math.round(rect.width),\n                    height: Math.round(rect.height)\n                },\n                zIndex: item.zIndex\n            };\n            \n            // Add context information for iframe or shadow DOM if applicable\n            if (element._contextInfo) {\n                elementData.context = {};\n                \n                // Add iframe information if element is within an iframe\n                if (element._contextInfo.iframe) {\n                    const iframeRect = element._contextInfo.iframe.getBoundingClientRect();\n                    elementData.context.iframe = {\n                        id: element._contextInfo.iframe.id || null,\n                        name: element._contextInfo.iframe.name || null,\n                        src: element._contextInfo.iframe.src || null,\n                        rect: {\n                            x: Math.round(iframeRect.left),\n                            y: Math.round(iframeRect.top),\n                            width: Math.round(iframeRect.width),\n                            height: Math.round(iframeRect.height)\n                        }\n                    };\n                }\n                \n                // Add shadow DOM information if element is within a shadow DOM\n                if (element._contextInfo.shadowHost) {\n                    const shadowHost = element._contextInfo.shadowHost;\n                    const shadowHostRect = shadowHost.getBoundingClientRect();\n                    elementData.context.shadowDOM = {\n                        hostTagName: shadowHost.tagName.toLowerCase(),\n                        hostId: shadowHost.id || null,\n                        hostRect: {\n                            x: Math.round(shadowHostRect.left),\n                            y: Math.round(shadowHostRect.top),\n                            width: 
Math.round(shadowHostRect.width),\n                            height: Math.round(shadowHostRect.height)\n                        }\n                    };\n                }\n            }\n            \n            result.elements.push(elementData);\n            \n        });\n        \n        return result;\n    }\n\n    // Add new function to sort elements by position\n    function sortElementsByPosition(elements) {\n        // Define what \"same row\" means (elements within this Y-distance are considered in the same row)\n        const ROW_THRESHOLD = 20; // pixels\n        \n        // First, group elements into rows based on their Y position\n        const rows = [];\n        let currentRow = [];\n        \n        // Copy elements to avoid modifying the original array\n        const sortedByY = [...elements].sort((a, b) => {\n            return a.rect.top - b.rect.top;\n        });\n        \n        // Group into rows\n        sortedByY.forEach(element => {\n            if (currentRow.length === 0) {\n                // Start a new row\n                currentRow.push(element);\n            } else {\n                // Check if this element is in the same row as the previous ones\n                const lastElement = currentRow[currentRow.length - 1];\n                if (Math.abs(element.rect.top - lastElement.rect.top) <= ROW_THRESHOLD) {\n                    // Same row\n                    currentRow.push(element);\n                } else {\n                    // New row\n                    rows.push([...currentRow]);\n                    currentRow = [element];\n                }\n            }\n        });\n        \n        // Add the last row if not empty\n        if (currentRow.length > 0) {\n            rows.push(currentRow);\n        }\n        \n        // Sort each row by X position (left to right)\n        rows.forEach(row => {\n            row.sort((a, b) => a.rect.left - b.rect.left);\n        });\n        \n        // Flatten the rows 
back into a single array\n        return rows.flat();\n    }\n\n    // Execute and measure performance\n    console.time('getInteractiveElements');\n    const result = getInteractiveElementsData();\n    console.timeEnd('getInteractiveElements');\n    console.timeEnd('totalExecutionTime');\n\n    return result;\n};   "
  },
  {
    "path": "super_agents/browser_use/browser/models.py",
    "content": "# super_agents/browser_use/browser/models.py\nfrom typing import List, Dict, Optional, Any\n\n# --- Force Pydantic V2 Import ---\nfrom pydantic import BaseModel, Field, ConfigDict\nfrom pydantic.alias_generators import to_camel\n# --- End Pydantic V2 Import ---\n\n# --- BrowserError Exception ---\nclass BrowserError(Exception): pass\nclass URLNotAllowedError(BrowserError): pass\n\n# --- Data Models ---\nclass TabInfo(BaseModel):\n    page_id: int\n    url: str\n    title: str\n\nclass Coordinates(BaseModel):\n    x: int\n    y: int\n    width: Optional[int] = Field(default=None)\n    height: Optional[int] = Field(default=None)\n\nclass Viewport(BaseModel):\n    model_config = ConfigDict(alias_generator=to_camel, populate_by_name=True, from_attributes=True, extra='ignore')\n    width: int = Field(default=1200)\n    height: int = Field(default=900)\n    scroll_x: int = Field(default=0)\n    scroll_y: int = Field(default=0)\n    device_pixel_ratio: float = Field(default=1.0)\n    scroll_distance_above_viewport: Optional[int] = Field(default=0)\n    scroll_distance_below_viewport: Optional[int] = Field(default=0)\n\nclass InteractiveElement(BaseModel):\n    \"\"\"Represents an interactive element, combining DOM/AX/VLM info.\"\"\"\n    model_config = ConfigDict(\n        alias_generator=to_camel,\n        populate_by_name=True,\n        from_attributes=True,\n        extra='ignore'\n    )\n\n    # Common fields\n    index: int\n    browser_agent_id: str # Unique ID (pw-X or vlm-Y)\n    tag_name: str\n    text: Optional[str] = Field(default=None)\n    attributes: Dict[str, str] = Field(default_factory=dict) # Keep basic DOM attributes as string dict?\n    weight: float = Field(default=1.0)\n    viewport: Optional[Coordinates] = Field(default=None) # Make optional as VLM might not provide perfectly\n    page: Optional[Coordinates] = Field(default=None)     # Make optional\n    center: Optional[Coordinates] = Field(default=None)   # Make optional\n    
input_type: Optional[str] = Field(default=None)\n    rect: Optional[Dict[str, int]] = Field(default=None) # Make optional\n    z_index: int = Field(default=0)\n\n    # --- Fields specifically from VLM (added as top-level optional) ---\n    vlm_description: Optional[str] = Field(default=None, description=\"Description provided by VLM\")\n    vlm_type: Optional[str] = Field(default=None, description=\"Element type suggested by VLM\")\n    box_percent: Optional[List[float]] = Field(default=None, description=\"Bounding box [xmin, ymin, xmax, ymax] as percentages from VLM\")\n    # --- End VLM specific fields ---\n\nclass InteractiveElementsData(BaseModel):\n    model_config = ConfigDict(extra='ignore')\n    viewport: Viewport\n    elements: List[InteractiveElement] = Field(default_factory=list)\n\nclass BrowserState(BaseModel):\n    model_config = ConfigDict(extra='ignore')\n    url: str\n    tabs: List[TabInfo] = Field(default_factory=list)\n    viewport: Optional[Viewport] = Field(default=None)\n    screenshot_with_highlights: Optional[str] = Field(default=None)\n    screenshot: Optional[str] = Field(default=None)\n    # Use str key (browser_agent_id)\n    interactive_elements: Dict[str, InteractiveElement] = Field(default_factory=dict)"
  },
  {
    "path": "super_agents/browser_use/browser/utils.py",
    "content": "import base64\nimport logging\nfrom io import BytesIO\nfrom pathlib import Path\nfrom typing import Dict, List\n\nfrom PIL import Image, ImageDraw, ImageFont\n\n# Fixed: changed the import from index.browser to a local relative import\nfrom .models import InteractiveElement\n\nlogger = logging.getLogger(__name__)\n\ndef put_highlight_elements_on_screenshot(elements: dict[int, InteractiveElement], screenshot_b64: str) -> str:\n    \"\"\"Highlight elements using Pillow instead of OpenCV\"\"\"\n    try:\n        # Decode base64 to PIL Image\n        image_data = base64.b64decode(screenshot_b64)\n        image = Image.open(BytesIO(image_data))\n        draw = ImageDraw.Draw(image)\n        \n        # Colors (RGB format for PIL)\n        colors = [\n            (204, 0, 0),\n            (0, 136, 0),\n            (0, 0, 204),\n            (204, 112, 0),\n            (102, 0, 102),\n            (0, 102, 102),\n            (204, 51, 153),\n            (44, 0, 102),\n            (204, 35, 0), \n            (28, 102, 66),\n            (170, 0, 0),\n            (36, 82, 123)\n        ]\n        placed_labels = []\n        \n        # Load custom font from the package\n        try:\n            # Path to your packaged font\n            font_path = Path(__file__).parent / \"fonts\" / \"OpenSans-Medium.ttf\"\n            font = ImageFont.truetype(str(font_path), 14)\n        except Exception as e:\n            logger.warning(f\"Could not load custom font: {e}, falling back to default\")\n            font = ImageFont.load_default()\n            \n        for idx, element in elements.items():\n\n            # don't draw sheets elements\n            if element.browser_agent_id.startswith(\"row_\") or element.browser_agent_id.startswith(\"column_\"):\n                continue\n\n            color = colors[idx % len(colors)]\n            rect = element.viewport\n            \n            # Draw rectangle\n            draw.rectangle(\n                [(rect.x, rect.y), (rect.x + rect.width, rect.y + 
rect.height)],\n                outline=color,\n                width=2\n            )\n            \n            # Prepare label\n            text = str(idx)\n            \n            # Get precise text dimensions for proper centering\n            text_bbox = draw.textbbox((0, 0), text, font=font)\n            text_width = text_bbox[2] - text_bbox[0]\n            text_height = text_bbox[3] - text_bbox[1]\n            \n            # Make label size exactly proportional for better aesthetics\n            # Square labels look better for single digits as seen in the example image\n            label_width = text_width + 6\n            label_height = text_height + 6\n            \n            # Positioning logic\n            if label_width > rect.width or label_height > rect.height:\n                label_x = rect.x + rect.width\n                label_y = rect.y\n            else:\n                label_x = rect.x + rect.width - label_width\n                label_y = rect.y\n            \n            # Check for overlaps with existing labels\n            label_rect = {\n                'left': label_x, 'top': label_y,\n                'right': label_x + label_width, 'bottom': label_y + label_height\n            }\n            \n            for existing in placed_labels:\n                if not (label_rect['right'] < existing['left'] or \n                        label_rect['left'] > existing['right'] or \n                        label_rect['bottom'] < existing['top'] or \n                        label_rect['top'] > existing['bottom']):\n                    label_y = existing['bottom'] + 2\n                    label_rect['top'] = label_y\n                    label_rect['bottom'] = label_y + label_height\n                    break\n            \n            # Ensure label is visible within image boundaries\n            img_width, img_height = image.size\n            if label_x < 0:\n                label_x = 0\n            elif label_x + label_width >= img_width:\n       
         label_x = img_width - label_width - 1\n                \n            if label_y < 0:\n                label_y = 0\n            elif label_y + label_height >= img_height:\n                label_y = img_height - label_height - 1\n            \n            # Draw label background\n            draw.rectangle(\n                [(label_x, label_y), (label_x + label_width, label_y + label_height)],\n                fill=color\n            )\n                        \n            # magic numbers to center the text\n            text_x = label_x + 3\n            text_y = label_y - 1\n            \n            # Draw text\n            draw.text(\n                (text_x, text_y),\n                text,\n                fill=(255, 255, 255),\n                font=font\n            )\n            \n            placed_labels.append(label_rect)\n        \n        # Convert back to base64\n        buffer = BytesIO()\n        image.save(buffer, format=\"PNG\")\n        new_image_base64 = base64.b64encode(buffer.getvalue()).decode()\n        \n        return new_image_base64\n    \n    except Exception as e:\n        logger.error(f\"Failed to add highlights to screenshot: {str(e)}\")\n        return screenshot_b64\n\n\ndef scale_b64_image(image_b64: str, scale_factor: float) -> str:\n    \"\"\"\n    Scale down a base64 encoded image using Pillow.\n    \n    Args:\n        image_b64: Base64 encoded image string\n        scale_factor: Factor to scale the image by (0.5 = half size)\n    \n    Returns:\n        Base64 encoded scaled image\n    \"\"\"\n    try:\n        # Decode base64 to PIL Image\n        image_data = base64.b64decode(image_b64)\n        image = Image.open(BytesIO(image_data))\n        \n        if image is None:\n            return image_b64\n            \n        # Get original dimensions\n        width, height = image.size\n        \n        # Calculate new dimensions\n        new_width = int(width * scale_factor)\n        new_height = int(height * 
scale_factor)\n        \n        # Resize the image using high quality resampling\n        resized_image = image.resize(\n            (new_width, new_height),\n            Image.LANCZOS\n        )\n        \n        # Convert back to base64\n        buffer = BytesIO()\n        resized_image.save(buffer, format=\"PNG\")\n        resized_image_b64 = base64.b64encode(buffer.getvalue()).decode()\n        \n        return resized_image_b64\n        \n    except Exception:\n        return image_b64\n\n\ndef calculate_iou(rect1: Dict, rect2: Dict) -> float:\n    \"\"\"\n    Calculate Intersection over Union between two rectangles.\n    \n    Args:\n        rect1: First rectangle with left, top, right, bottom keys\n        rect2: Second rectangle with left, top, right, bottom keys\n        \n    Returns:\n        IoU value\n    \"\"\"\n    # Calculate intersection\n    intersect_left = max(rect1[\"left\"], rect2[\"left\"])\n    intersect_top = max(rect1[\"top\"], rect2[\"top\"])\n    intersect_right = min(rect1[\"right\"], rect2[\"right\"])\n    intersect_bottom = min(rect1[\"bottom\"], rect2[\"bottom\"])\n    \n    # Check if intersection exists\n    if intersect_right < intersect_left or intersect_bottom < intersect_top:\n        return 0.0  # No intersection\n    \n    # Calculate area of each rectangle\n    area1 = (rect1[\"right\"] - rect1[\"left\"]) * (rect1[\"bottom\"] - rect1[\"top\"])\n    area2 = (rect2[\"right\"] - rect2[\"left\"]) * (rect2[\"bottom\"] - rect2[\"top\"])\n    \n    # Calculate area of intersection\n    intersection_area = (intersect_right - intersect_left) * (intersect_bottom - intersect_top)\n    \n    # Calculate union area\n    union_area = area1 + area2 - intersection_area\n    \n    # Calculate IoU\n    return intersection_area / union_area if union_area > 0 else 0.0\n\n\ndef is_fully_contained(rect1: Dict, rect2: Dict) -> bool:\n    \"\"\"\n    Check if rect1 is fully contained within rect2.\n    \n    Args:\n        rect1: First rectangle 
with left, top, right, bottom keys\n        rect2: Second rectangle with left, top, right, bottom keys\n        \n    Returns:\n        True if rect1 is fully contained within rect2\n    \"\"\"\n    return (rect1[\"left\"] >= rect2[\"left\"] and\n            rect1[\"right\"] <= rect2[\"right\"] and\n            rect1[\"top\"] >= rect2[\"top\"] and\n            rect1[\"bottom\"] <= rect2[\"bottom\"])\n\n\ndef filter_overlapping_elements(elements: List[InteractiveElement], iou_threshold: float = 0.7) -> List[InteractiveElement]:\n    \"\"\"\n    Filter overlapping elements using weight and IoU.\n    \n    Args:\n        elements: Elements to filter\n        iou_threshold: Threshold for considering elements as overlapping\n        \n    Returns:\n        Filtered elements\n    \"\"\"\n    if not elements:\n        return []\n        \n    # Sort by area (descending), then by weight (descending)\n    elements.sort(key=lambda e: (\n        -(e.rect[\"width\"] * e.rect[\"height\"]),  # Negative area for descending sort\n        -e.weight  # Negative weight for descending sort\n    ))\n    \n    filtered_elements: List[InteractiveElement] = []\n    \n    # Add elements one by one, checking against already added elements\n    for current in elements:\n        should_add = True\n        \n        # For each element already in our filtered list\n        for existing in filtered_elements:\n            # Check overlap with IoU\n            iou = calculate_iou(current.rect, existing.rect)\n            if iou > iou_threshold:\n                should_add = False\n                break\n            \n            # Check if current element is fully contained within an existing element with higher weight\n            if is_fully_contained(current.rect, existing.rect):\n                if existing.weight >= current.weight and existing.z_index == current.z_index:\n                    should_add = False\n                    break\n                else:\n                    # If current 
element has higher weight and is more than 50% of the size of the existing element, remove the existing element\n                    if current.rect[\"width\"] * current.rect[\"height\"] >= existing.rect[\"width\"] * existing.rect[\"height\"] * 0.5:\n                        filtered_elements.remove(existing)\n                        break\n        \n        if should_add:\n            filtered_elements.append(current)\n    \n    return filtered_elements\n\n\ndef sort_elements_by_position(elements: List[InteractiveElement]) -> List[InteractiveElement]:\n    \"\"\"\n    Sort elements by position (top to bottom, left to right).\n    \n    Args:\n        elements: Elements to sort\n        \n    Returns:\n        Sorted elements\n    \"\"\"\n    if not elements:\n        return []\n    \n    # Define what \"same row\" means\n    ROW_THRESHOLD = 20  # pixels\n    \n    # First, group elements into rows based on Y position\n    rows = []\n    current_row = []\n    \n    # Copy and sort elements by Y position\n    sorted_by_y = sorted(elements, key=lambda e: e.rect[\"top\"])\n    \n    # Group into rows\n    for element in sorted_by_y:\n        if not current_row:\n            # Start a new row\n            current_row.append(element)\n        else:\n            # Check if this element is in the same row as the previous ones\n            last_element = current_row[-1]\n            if abs(element.rect[\"top\"] - last_element.rect[\"top\"]) <= ROW_THRESHOLD:\n                # Same row\n                current_row.append(element)\n            else:\n                # New row\n                rows.append(list(current_row))\n                current_row = [element]\n    \n    # Add the last row if not empty\n    if current_row:\n        rows.append(current_row)\n    \n    # Sort each row by X position (left to right)\n    for row in rows:\n        row.sort(key=lambda e: e.rect[\"left\"])\n    \n    # Flatten the rows back into a single array\n    elements = [element for row in 
rows for element in row]\n\n    for i, element in enumerate(elements):\n        element.index = i\n\n    return elements\n\n\ndef combine_and_filter_elements(\n    browser_elements: List[InteractiveElement], \n    cv_elements: List[InteractiveElement],\n    iou_threshold: float = 0.7\n) -> List[InteractiveElement]:\n    \"\"\"\n    Combine browser elements and CV elements and filter duplicates.\n    \n    Args:\n        browser_elements: Browser detection elements\n        cv_elements: CV detection elements\n        iou_threshold: Threshold for considering elements as overlapping\n        \n    Returns:\n        Combined and filtered elements\n    \"\"\"\n    # Combine elements\n    all_elements = list(browser_elements) + cv_elements\n    \n    # Filter overlapping elements\n    filtered = filter_overlapping_elements(all_elements, iou_threshold)\n    \n    # Sort elements by position\n    sorted_elements = sort_elements_by_position(filtered)\n    \n    return sorted_elements"
  },
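The row-grouping sort implemented in `sort_elements_by_position` above can be sanity-checked in isolation. The sketch below is illustrative and standalone (not a repo file): it uses a minimal stand-in for `InteractiveElement`, assuming only the `rect` dict (`top`/`left`) and `index` fields that the sort actually touches, and mirrors the same `ROW_THRESHOLD` logic.

```python
from dataclasses import dataclass
from typing import Dict, List

ROW_THRESHOLD = 20  # pixels; same "same row" tolerance as sort_elements_by_position


@dataclass
class FakeElement:
    # Minimal stand-in for InteractiveElement: only the fields the sort uses.
    rect: Dict[str, int]
    index: int = -1


def sort_by_position(elements: List[FakeElement]) -> List[FakeElement]:
    """Top-to-bottom, left-to-right ordering, grouping elements into rows."""
    rows: List[List[FakeElement]] = []
    current_row: List[FakeElement] = []
    for el in sorted(elements, key=lambda e: e.rect["top"]):
        # Start a new row when the vertical gap to the previous element exceeds the threshold.
        if current_row and abs(el.rect["top"] - current_row[-1].rect["top"]) > ROW_THRESHOLD:
            rows.append(current_row)
            current_row = []
        current_row.append(el)
    if current_row:
        rows.append(current_row)
    for row in rows:
        row.sort(key=lambda e: e.rect["left"])  # left-to-right within each row
    flat = [el for row in rows for el in row]
    for i, el in enumerate(flat):
        el.index = i  # re-index in reading order, as the real function does
    return flat


elements = [
    FakeElement(rect={"top": 105, "left": 300}),  # same visual row as top=100
    FakeElement(rect={"top": 100, "left": 10}),
    FakeElement(rect={"top": 200, "left": 50}),   # clearly a new row
]
ordered = sort_by_position(elements)
print([(e.rect["top"], e.rect["left"], e.index) for e in ordered])
# → [(100, 10, 0), (105, 300, 1), (200, 50, 2)]
```

Note that elements only 5 px apart vertically are treated as one row and ordered by `left`, which is what makes the final `index` a stable reading order for the LLM.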
  {
    "path": "super_agents/browser_use/llm.py",
    "content": "# super_agents/browser_use/llm.py\nimport os\nimport json\nimport asyncio\nfrom typing import Optional, Tuple, Type, Dict\n\n# --- Environment Variable Loading ---\nfrom dotenv import load_dotenv\nload_dotenv()\n\n# --- Pydantic & LangChain Core ---\ntry:\n    # Import necessary Pydantic components if needed elsewhere (e.g., for generate_structured_output)\n    from pydantic.v1 import BaseModel\nexcept ImportError:\n    from pydantic import BaseModel\n\nfrom langchain_core.messages import HumanMessage, SystemMessage\nfrom langchain_core.runnables.base import RunnableSerializable\n# No longer need secret_from_env here if ChatOpenRouter doesn't use Field/SecretStr\n# from langchain_core.utils.utils import secret_from_env\nfrom langchain_openai import ChatOpenAI # Use the standard import\n\n# --- API Key Loading (For initialize_llms) ---\nLLM_API_KEY_FROM_ENV = os.getenv(\"LLM_API_KEY\")\nOPENAI_API_KEY_FROM_ENV = os.getenv(\"OPENAI_API_KEY\")\nGROQ_API_KEY_FROM_ENV = os.getenv(\"GROQ_API_KEY\")\n# OPENROUTER key will be loaded directly in ChatOpenRouter init\nOPENROUTER_API_KEY_DIRECT = os.getenv(\"OPENROUTER_API_KEY\")\n\n# --- ChatOpenRouter Definition (Based on User's Example 1 Logic) ---\nclass ChatOpenRouter(ChatOpenAI):\n    \"\"\"\n    Wrapper for ChatOpenAI configured for OpenRouter.\n    Handles API key loading within __init__ using standard strings\n    to avoid Pydantic V2 SecretStr issues during class definition.\n    \"\"\"\n    # No class-level Field definition for openai_api_key to avoid Pydantic V2 error\n\n    def __init__(self,\n                 model_name: str, # Make model_name required\n                 openai_api_key: Optional[str] = None, # Accept optional string key\n                 openai_api_base: str = \"https://openrouter.ai/api/v1\", # Default OpenRouter base\n                 **kwargs):\n        \"\"\"\n        Initializes the ChatOpenRouter client.\n\n        Args:\n            model_name: The model identifier on 
OpenRouter (e.g., \"alibaba/qwen-vl-max\").\n            openai_api_key: Optional OpenRouter API key (string). If None, reads from\n                             OPENROUTER_API_KEY environment variable.\n            openai_api_base: The API base URL. Defaults to OpenRouter.\n            **kwargs: Additional arguments passed to ChatOpenAI.\n        \"\"\"\n        # Resolve the API key: use passed argument first, then environment variable\n        resolved_key = openai_api_key or OPENROUTER_API_KEY_DIRECT\n        if not resolved_key:\n            # Log warning or raise error if key is missing, depending on desired strictness\n            # Raising an error is safer to prevent unexpected failures later\n            raise ValueError(\"OpenRouter API key not provided directly or via OPENROUTER_API_KEY env var.\")\n\n        # Call the parent __init__ method, passing the resolved string key\n        # Use openai_api_base argument expected by ChatOpenAI\n        super().__init__(\n            openai_api_base=openai_api_base,\n            openai_api_key=resolved_key, # Pass resolved string key\n            model_name=model_name, # Pass model_name\n            **kwargs # Pass other arguments like temperature, max_tokens\n        )\n        # Optional: Log successful initialization\n        # logger.info(f\"ChatOpenRouter initialized for model {model_name}\") # Requires logger setup\n\n# --- Configurable LLM Initialization (For Planning LLM - unchanged) ---\ndef initialize_llms() -> Tuple[Optional[RunnableSerializable], Optional[RunnableSerializable]]:\n    # ... 
(function remains the same as before) ...\n    provider = os.getenv(\"LLM_PROVIDER\", \"openai\").lower()\n    model_name = os.getenv(\"LLM_MODEL_NAME\", \"gpt-4o-mini\")\n    api_key = LLM_API_KEY_FROM_ENV\n    base_url = os.getenv(\"LLM_BASE_URL\")\n    temperature = float(os.getenv(\"LLM_TEMPERATURE\", \"0.1\"))\n    creative_temperature = float(os.getenv(\"LLM_CREATIVE_TEMPERATURE\", \"0.4\"))\n    print(f\"\\n--- Initializing Planning LLM ---\")\n    print(f\"Provider: '{provider}'\")\n    print(f\"Model Name: '{model_name}'\")\n    print(f\"Base URL: {base_url if base_url else 'Default'}\")\n    print(f\"Temperatures: Main={temperature}, Creative={creative_temperature}\")\n    print(f\"-----------------------------\")\n    llm_instance: Optional[RunnableSerializable] = None\n    llm_creative_instance: Optional[RunnableSerializable] = None\n    try:\n        if provider == \"openai\": # ... (rest of provider logic) ...\n             key_to_use = api_key or OPENAI_API_KEY_FROM_ENV\n             if not key_to_use: raise ValueError(\"OpenAI API key not found for planning LLM.\")\n             llm_instance = ChatOpenAI(model=model_name, temperature=temperature, api_key=key_to_use)\n             llm_creative_instance = ChatOpenAI(model=model_name, temperature=creative_temperature, api_key=key_to_use)\n        elif provider == \"groq\": # ...\n             key_to_use = api_key or GROQ_API_KEY_FROM_ENV\n             if not key_to_use: raise ValueError(\"Groq API key not found.\")\n             llm_instance = ChatOpenAI(model=model_name, temperature=temperature, openai_api_key=key_to_use, openai_api_base=\"https://api.groq.com/openai/v1\")\n             llm_creative_instance = ChatOpenAI(model=model_name, temperature=creative_temperature, openai_api_key=key_to_use, openai_api_base=\"https://api.groq.com/openai/v1\")\n        elif provider == \"xai\" or provider == \"grok\": # ...\n             key_to_use = api_key\n             if not key_to_use: raise 
ValueError(f\"LLM_API_KEY required for '{provider}'.\")\n             if not base_url: raise ValueError(f\"LLM_BASE_URL required for '{provider}'.\")\n             if not model_name: raise ValueError(f\"LLM_MODEL_NAME required for '{provider}'.\")\n             llm_instance = ChatOpenAI(model=model_name, temperature=temperature, openai_api_key=key_to_use, openai_api_base=base_url)\n             llm_creative_instance = ChatOpenAI(model=model_name, temperature=creative_temperature, openai_api_key=key_to_use, openai_api_base=base_url)\n        elif provider == \"openai_compatible\": # ...\n             key_to_use = api_key\n             if not key_to_use: raise ValueError(f\"LLM_API_KEY required for '{provider}'.\")\n             if not base_url: raise ValueError(f\"LLM_BASE_URL required for '{provider}'.\")\n             if not model_name: raise ValueError(f\"LLM_MODEL_NAME required for '{provider}'.\")\n             llm_instance = ChatOpenAI(model=model_name, temperature=temperature, openai_api_key=key_to_use, openai_api_base=base_url)\n             llm_creative_instance = ChatOpenAI(model=model_name, temperature=creative_temperature, openai_api_key=key_to_use, openai_api_base=base_url)\n        else:\n            raise ValueError(f\"Unsupported LLM_PROVIDER for planning LLM: '{provider}'.\")\n        print(\"--- Planning LLM Initialization Successful ---\")\n        return llm_instance, llm_creative_instance\n    except Exception as e:\n        print(f\"!!! ERROR during Planning LLM Initialization: {e}\")\n        return None, None\n\n# --- generate_structured_output (Helper used by Planning Node - unchanged) ---\nasync def generate_structured_output(model: Optional[RunnableSerializable], schema: Type[BaseModel], prompt: str, system_message: str = \"\") -> Optional[BaseModel]:\n    # ... 
(function remains the same as before) ...\n    if model is None: return None\n    if not isinstance(model, RunnableSerializable): return None\n    try:\n        # Ensure schema is Pydantic BaseModel (imported from V1 or V2)\n        if not issubclass(schema, BaseModel):\n             print(f\"Error: schema provided to generate_structured_output is not a Pydantic BaseModel (type: {type(schema)})\")\n             return None\n        structured_llm = model.with_structured_output(schema)\n        messages = []\n        if system_message: messages.append(SystemMessage(content=system_message))\n        messages.append(HumanMessage(content=prompt))\n        response = await structured_llm.ainvoke(messages)\n        if isinstance(response, schema): return response\n        else:\n            print(f\"Warning: Structured output did not match expected schema {schema.__name__}. Got type: {type(response)}\")\n            return None\n    except Exception as e:\n        print(f\"Error during structured output generation: {e}\")\n        # import traceback; traceback.print_exc() # Uncomment for full debug trace\n        return None"
  },
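The key-resolution precedence inside `ChatOpenRouter.__init__` (explicit argument first, then the `OPENROUTER_API_KEY` environment variable, else fail fast) can be exercised without any network access. The helper below is an illustrative sketch, not part of `llm.py`; the function name is hypothetical.

```python
import os
from typing import Optional


def resolve_openrouter_key(explicit_key: Optional[str] = None) -> str:
    """Mirror ChatOpenRouter's precedence: explicit argument wins over the
    OPENROUTER_API_KEY environment variable; missing both is a hard error."""
    key = explicit_key or os.getenv("OPENROUTER_API_KEY")
    if not key:
        raise ValueError(
            "OpenRouter API key not provided directly or via OPENROUTER_API_KEY env var."
        )
    return key


# Simulate the environment fallback path, then the explicit-argument path.
os.environ["OPENROUTER_API_KEY"] = "sk-or-env"
print(resolve_openrouter_key())             # → sk-or-env (falls back to the env var)
print(resolve_openrouter_key("sk-or-arg"))  # → sk-or-arg (explicit argument wins)
```

Raising early, as `ChatOpenRouter` does, surfaces a missing key at construction time instead of as an opaque HTTP 401 on the first request.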
  {
    "path": "super_agents/browser_use/main.py",
    "content": "# super_agents/browser_use/main.py\nimport asyncio\nimport argparse\nimport logging\nimport os\nfrom typing import Dict\nfrom dotenv import load_dotenv\n\n# Import components\nfrom .agent.graph import create_graph_app\nfrom .agent.state import AgentState\n# Import CORRECT Browser and BrowserConfig from browser.browser\nfrom .browser.browser import Browser, BrowserConfig\n# Import LLM initializer and type hint\nfrom .llm import initialize_llms, RunnableSerializable\n\n# --- Logging Setup ---\nlogging.basicConfig(level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')\nlogger = logging.getLogger(__name__)\n\n# --- Main Execution Logic ---\nasync def run_agent(task: str, config: Dict):\n    \"\"\"Initializes components and runs the agent graph.\"\"\"\n\n    load_dotenv()\n\n    # 1. Initialize Browser Configuration (Removed CV/Sheets endpoints)\n    browser_config = BrowserConfig( # <--- Uses CORRECT imported BrowserConfig\n        viewport_size=config.get(\"viewport\"),\n        cdp_url=config.get(\"cdp_url\"),\n        storage_state=config.get(\"storage_state\"), # Keep if storage_state is still in your BrowserConfig\n        # cv_model_endpoint=config.get(\"cv_model_endpoint\"), # <--- REMOVED\n        # sheets_model_endpoint=config.get(\"sheets_model_endpoint\"), # <--- REMOVED\n    )\n\n    # 2. Initialize ONLY the Planning LLM Provider\n    llm, _ = initialize_llms() # Use _ to ignore creative llm if not needed\n    if llm is None:\n        logger.error(\"Failed to initialize planning LLM. Exiting.\")\n        return {\"error\": \"Planning LLM Initialization failed.\"}\n\n    # 3. Initialize Browser Tool (No longer needs vlm passed)\n    browser_tool = None\n    try:\n        # Detector is now initialized internally by Browser using env vars\n        browser_tool = Browser(config=browser_config)\n        await browser_tool.initialize()\n\n        # 4. 
Create the LangGraph App\n        app = create_graph_app(browser=browser_tool, llm=llm)\n\n        # 5. Define the initial state\n        initial_state: AgentState = {\n            \"task\": task, \"browser_content\": \"\", \"parsed_action\": {}, \"history\": [], \"error\": None,\n        }\n\n        # 6. Run the graph\n        final_state = None\n        logger.info(f\"Starting agent execution for task: {task}\")\n        final_state = await app.ainvoke(initial_state, config={\"recursion_limit\": config.get(\"max_steps\", 50)})\n        logger.info(\"Agent execution finished.\")\n\n    except Exception as e:\n        logger.error(f\"Agent execution failed: {e}\", exc_info=True)\n        # Ensure error is propagated\n        return {\"error\": f\"Agent execution failed: {e}\"}\n    finally:\n        # 7. Clean up browser instance\n        if browser_tool:\n            await browser_tool.close()\n\n    # 8. Process and return the result\n    if final_state:\n         if final_state.get(\"error\"):\n             logger.error(f\"Agent finished with error: {final_state['error']}\")\n             return {\"error\": final_state['error']}\n         elif final_state.get(\"parsed_action\", {}).get(\"type\") == \"finish\":\n             result = final_state[\"parsed_action\"].get(\"result\", \"Task finished.\")\n             logger.info(f\"Agent finished successfully. 
Result: {result}\")\n             return {\"result\": result}\n         else:\n             logger.warning(\"Agent finished without a 'finish' action or error.\")\n             final_action = final_state.get(\"parsed_action\", {}).get(\"type\", \"N/A\")\n             return {\"result\": f\"Agent stopped unexpectedly after action: {final_action}.\", \"final_state\": final_state}\n    else:\n         # This case typically means an exception occurred before final state was reached\n         # The error should have been returned from the except block\n         return {\"error\": \"Agent execution failed to produce a final state (likely due to earlier exception).\"}\n\n\n# --- Command Line Interface ---\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser(description=\"Run the LangGraph Browser Agent.\")\n    parser.add_argument(\"task\", help=\"The task description for the agent.\")\n    # Browser args (Align with updated BrowserConfig)\n    parser.add_argument(\"--cdp-url\", help=\"CDP URL.\", default=None)\n    parser.add_argument(\"--width\", type=int, default=1200)\n    parser.add_argument(\"--height\", type=int, default=900)\n    # REMOVED CV/Sheets Endpoint Args\n    # parser.add_argument(\"--cv-endpoint\", help=\"CV Model Endpoint.\", default=None)\n    # parser.add_argument(\"--sheets-endpoint\", help=\"Sheets Model Endpoint.\", default=None)\n    # Add storage state path if needed\n    # parser.add_argument(\"--storage-state-path\", help=\"Path to storage state JSON file.\", default=None)\n\n    # Planning LLM args (Optional overrides for .env)\n    parser.add_argument(\"--llm-provider\", help=\"Force planning LLM provider.\")\n    parser.add_argument(\"--llm-model\", help=\"Force planning LLM model name.\")\n    parser.add_argument(\"--llm-api-key\", help=\"Force planning LLM API key (uses LLM_API_KEY env var).\")\n    parser.add_argument(\"--llm-base-url\", help=\"Force planning LLM base URL.\")\n\n    # REMOVED VLM specific CLI args\n\n    # 
Execution args\n    parser.add_argument(\"--max-steps\", type=int, default=50)\n\n    args = parser.parse_args()\n\n    # Prepare config dict for run_agent (Browser config + max_steps)\n    run_config = {\n        \"cdp_url\": args.cdp_url,\n        \"viewport\": {\"width\": args.width, \"height\": args.height},\n        \"max_steps\": args.max_steps,\n        # Load storage state from path if implemented\n        # \"storage_state\": load_storage_state(args.storage_state_path) if args.storage_state_path else None,\n        # REMOVED cv/sheets endpoints from config passed to run_agent\n    }\n\n    # Set environment variables for planning LLM if args provided\n    if args.llm_provider: os.environ['LLM_PROVIDER'] = args.llm_provider\n    if args.llm_model: os.environ['LLM_MODEL_NAME'] = args.llm_model\n    if args.llm_api_key: os.environ['LLM_API_KEY'] = args.llm_api_key # Set generic key\n    if args.llm_base_url: os.environ['LLM_BASE_URL'] = args.llm_base_url\n    # VLM config now solely relies on VLM_* env vars read by Detector/ChatOpenRouter\n\n    # Run the async function\n    result = asyncio.run(run_agent(args.task, run_config))\n\n    # Print result\n    print(\"\\n--- Agent Result ---\")\n    if isinstance(result, dict):\n        if \"result\" in result: print(f\"Result: {result['result']}\")\n        if \"error\" in result: print(f\"Error: {result['error']}\")\n        if \"final_state\" in result and \"result\" not in result and \"error\" not in result:\n             # Limited printing of final state for brevity\n             print(f\"Final State (Debug): Keys={list(result['final_state'].keys())}\")\n    else:\n         print(f\"Output (unexpected format): {result}\")"
  },
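The CLI block at the bottom of `main.py` forwards `--llm-*` flags by writing them into the environment variables that `initialize_llms()` reads (`LLM_PROVIDER`, `LLM_MODEL_NAME`, `LLM_API_KEY`, `LLM_BASE_URL`). That pattern can be captured in one small helper; `apply_llm_overrides` is a hypothetical name for illustration, not a function in the repo.

```python
import os
from typing import Dict, Optional

# CLI argument name -> environment variable consumed by initialize_llms().
_ENV_MAP = {
    "llm_provider": "LLM_PROVIDER",
    "llm_model": "LLM_MODEL_NAME",
    "llm_api_key": "LLM_API_KEY",
    "llm_base_url": "LLM_BASE_URL",
}


def apply_llm_overrides(args: Dict[str, Optional[str]]) -> Dict[str, str]:
    """Push non-empty CLI overrides into the environment; return what was set."""
    applied: Dict[str, str] = {}
    for arg_name, env_name in _ENV_MAP.items():
        value = args.get(arg_name)
        if value:  # None or "" means "keep whatever .env provided"
            os.environ[env_name] = value
            applied[env_name] = value
    return applied


applied = apply_llm_overrides({"llm_provider": "groq", "llm_model": None})
print(applied)  # → {'LLM_PROVIDER': 'groq'}
```

Because `initialize_llms()` reads configuration lazily via `os.getenv`, overrides only take effect if they are written before that call, which is why `main.py` sets them before `asyncio.run(run_agent(...))`.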
  {
    "path": "super_agents/customized_deep_research/PRD_README.md",
    "content": "**M&A DeepResearch Agent - Product Document**\n\n**Version:** 1.0 (Optimized - YF/Web Focus)\n**Date:** 2025年4月21日\n**Status:** Design & Initial Implementation Phase\n\n**Table of Contents:**\n\n1.  Introduction\n    1.1 Product Name\n    1.2 Purpose & Vision\n    1.3 Target Audience\n    1.4 Document Scope\n2.  Project Background & Business Need\n    2.1 The Challenge of Preliminary M&A Research\n    2.2 The Opportunity for Automation\n    2.3 Product Goal\n3.  Business & Functional Requirements\n    3.1 Input Requirements\n    3.2 Core Processing Requirements\n    3.3 Output Requirements\n    3.4 Non-Functional Requirements\n4.  Core Features\n5.  System Architecture & Core Implementation\n    5.1 Overview\n    5.2 Core Framework\n    5.3 State Management & Data Models\n    5.4 Workflow Orchestration (LangGraph)\n    5.5 Task Execution (Nodes)\n    5.6 AI / Large Language Models (LLM)\n    5.7 External Tools & Data Sources\n    5.8 Prompts\n    5.9 Execution Entrypoint\n6.  Workflow Diagram & Description\n7.  Data Requirements & Input Format\n    7.1 Input JSON Specification\n    7.2 Environment Variables & API Keys\n8.  Limitations & Constraints\n9.  Future Work & Potential Enhancements\n\n---\n\n**1. Introduction**\n\n**1.1 Product Name**\n\nM&A DeepResearch Agent (Preliminary Assessment - YF/Web Version)\n\n**1.2 Purpose & Vision**\n\n* **Purpose:** To automate the process of conducting *preliminary* due diligence research on potential Mergers and Acquisitions (M&A) target companies. The agent leverages publicly available data sources – primarily Yahoo Finance for basic financial indicators and extensive web searches for qualitative context – to generate a structured, initial assessment report.\n* **Vision:** To provide M&A professionals with a rapid, scalable, and consistent tool for initial target screening. 
By quickly identifying potential synergies, risks, and critical information gaps, the agent aims to significantly reduce the manual effort involved in the early stages of the M&A pipeline, enabling teams to focus resources on the most promising opportunities requiring deep, official-source due diligence.\n\n**1.3 Target Audience**\n\n* Mergers & Acquisitions (M&A) Analysts\n* Investment Bankers\n* Private Equity & Venture Capital Investment Professionals\n* Corporate Development Teams\n* Strategy Consultants involved in M&A screening\n\n**1.4 Document Scope**\n\nThis document provides a comprehensive overview of the M&A DeepResearch Agent, covering its background, business needs, functional requirements, core features, system architecture, implementation details, workflow, data inputs, inherent limitations, and potential future directions. It reflects the state of the agent after incorporating optimizations focused on handling limited data sources (Yahoo Finance, Web Search) effectively, including JSON input handling and YFinance failure fallback mechanisms.\n\n**2. Project Background & Business Need**\n\n**2.1 The Challenge of Preliminary M&A Research**\n\nThe initial screening and preliminary research phase of the M&A process is critical but often faces significant challenges:\n\n* **Time-Consuming:** Manually gathering information from disparate public sources (news, company websites, basic financial portals, web searches) for numerous potential targets is incredibly time-intensive.\n* **Resource-Intensive:** Requires significant analyst hours, diverting resources from deeper analysis on higher-priority targets.\n* **Data Accessibility Issues:** For many companies, especially non-US listed or private entities, easily accessible, standardized financial filings (like SEC EDGAR) are unavailable. 
Analysts must rely on fragmented, potentially unreliable public data.\n* **Consistency:** Manual research quality and depth can vary significantly depending on the analyst and time constraints.\n* **Information Overload:** Sifting through vast amounts of web search results to find relevant M&A signals is difficult.\n* **Decision Bottleneck:** The difficulty in quickly getting a baseline understanding often delays the crucial decision: \"Is this target worth dedicating serious resources for full due diligence?\"\n\n**2.2 The Opportunity for Automation**\n\nRecent advancements in Large Language Models (LLMs) and workflow automation frameworks (like LangGraph) present an opportunity to address these challenges. An AI agent can be designed to:\n\n* Automate the process of querying APIs (like Yahoo Finance) and performing targeted web searches.\n* Leverage LLMs to understand, analyze, synthesize, and structure information gathered from these diverse, often unstructured sources.\n* Execute a predefined, consistent research workflow across multiple targets.\n* Generate structured reports highlighting key findings, potential red flags, and critical information gaps.\n\n**2.3 Product Goal**\n\nThe primary goal of the M&A DeepResearch Agent is to **provide M&A professionals with a rapid, structured, and appropriately cautious preliminary research report on potential acquisition targets.** This report, based *only* on readily available public data (Yahoo Finance and Web Search), should:\n\n* Offer a baseline understanding of the target's business, market position, and preliminary financial signals.\n* Identify potential (speculative) M&A angles and key risks apparent from public sources.\n* Crucially, highlight the significant limitations of the data used and the specific information gaps that **must** be addressed through deep due diligence using official sources (e.g., audited financial statements, regulatory filings).\n* Ultimately, empower users to make more informed and 
efficient decisions about which targets warrant the significant investment required for a full due diligence process.\n\n**3. Business & Functional Requirements**\n\n**3.1 Input Requirements**\n\n* The agent must accept input identifying the target company via a standardized JSON object.\n* The JSON object **must** contain non-empty `identifier_ric` (e.g., \"AAPL\", \"9417.T\") and `company_name` fields.\n* The JSON object **may** contain optional auxiliary/validation fields: `country_of_exchange`, `market_cap_usd`, `business_description`, `pe_timeseries_ratio`, `ebitda_fy0_usd`, `query_date`.\n* The agent should allow configuration of analysis depth (e.g., 'basic', 'detailed').\n\n**3.2 Core Processing Requirements**\n\n* **Initialization:** Parse JSON input, identify core target info (Ticker/RIC, Name), store all provided input fields in the initial state.\n* **Financial Data Fetch:** Attempt to fetch basic financial data from Yahoo Finance using the provided Ticker/RIC. Handle potential errors gracefully (e.g., invalid ticker, no data). Serialize fetched DataFrame data into JSON-compatible dictionaries. Set a state flag (`yfinance_fetch_failed`) upon significant fetch failure.\n* **Research Planning:** Generate a dynamic research plan based on the target profile and the success/failure of the YFinance fetch.\n    * If YFinance succeeded, plan includes one YF fetch step and multiple targeted web search queries across M&A angles.\n    * If YFinance failed, plan **omits** the YF step and instead includes specific **financial web search queries** (using initial JSON data as context) alongside the general M&A angle web searches.\n    * Plan must define corresponding analysis goals requiring synthesis of available financial data (YF or Web) and general web findings.\n* **Web Searching:** Execute planned web search queries (using Tavily API). Handle both financial fallback searches and general M&A angle searches systematically. 
Store structured results.\n* **Multi-Angle Analysis:** Perform distinct analysis steps based on planned goals:\n    * **Financial Analysis:** Analyze available financial data (either serialized YF dicts or financial web search results), correlate findings with general web context, identify preliminary signals/flags, and note data source limitations.\n    * **Competitive Analysis:** Analyze market niche, competitors, positioning, and potential moat based on YF info hints and web searches.\n    * **Management/Governance Analysis:** Analyze hints about key personnel, ownership (YF), and governance signals from web searches.\n* **Gap Analysis:** Analyze the limitations of the research performed (YF/Web only), identify critical information gaps requiring official sources, and suggest potentially actionable (though uncertain) follow-up web search queries aimed at finding clues or links.\n* **Gap Filling Search (Conditional):** If actionable web follow-up queries were suggested by Gap Analysis, execute them.\n* **Synthesis:** Consolidate findings from all previous steps (initial data, YF/Web financials, web searches, analyses, gaps) into a coherent M&A-focused narrative, highlighting key themes (strengths/risks) and critical remaining uncertainties.\n* **Reporting:** Generate a final Markdown report including:\n    * A structured summary table (generated from state).\n    * All standard report sections (Exec Summary, Intro, Overview, Market, Financials, Mgmt/Gov, Risks/Angles, **Critical Limitations**, Conclusion).\n    * Appropriate tone (analytical, objective, acknowledging limitations without excessive repetition).\n    * Correctly reflecting the source of financial information (YF or Web Fallback).\n\n**3.3 Output Requirements**\n\n* The primary output must be a single Markdown file containing the comprehensive Preliminary Research Briefing.\n* The report must begin with the structured summary table.\n* The report must follow the defined section structure.\n* The 
report must clearly cite sources where appropriate (YF, Web).\n* The report must prominently feature the \"CRITICAL LIMITATIONS & NEXT STEPS\" section, detailing necessary official sources for deep diligence.\n* (Optional) The agent should provide streaming updates (`StreamUpdate` schema) indicating progress through the research workflow steps.\n\n**3.4 Non-Functional Requirements**\n\n* **Scalability:** The architecture should conceptually support processing a large number of targets (e.g., the user's ~1400 inputs) sequentially or potentially in parallel (with infrastructure adjustments).\n* **Configurability:** Allow configuration of LLM provider, model name, API keys, and potentially parameters like search result counts via environment variables (`.env`).\n* **Maintainability:** Code should be modular, well-commented, and use clear variable/function names.\n* **Robustness:** Implement error handling for API calls (LLM, YFinance, Tavily) and potential data parsing issues. The YFinance fallback mechanism enhances robustness.\n\n**4. 
Core Features**\n\n* **Automated M&A Preliminary Research Workflow:** End-to-end execution managed by LangGraph.\n* **JSON Input Processing:** Accepts standardized JSON for target identification and context.\n* **Yahoo Finance Integration:** Fetches and serializes basic financial data.\n* **YFinance Failure Fallback:** Automatically switches to targeted web searches for financial hints if YFinance fails.\n* **Advanced Web Search (Tavily):** Performs targeted web searches for qualitative insights across multiple M&A dimensions.\n* **Multi-Angle LLM Analysis:** Leverages LLMs for Financial, Competitive, and Management/Governance analysis based on combined data.\n* **Automated Gap Analysis:** Identifies key information gaps inherent in YF/Web-only research.\n* **Conditional Gap-Filling Search:** Attempts targeted web searches to address identified gaps (if deemed potentially fruitful).\n* **LLM-Powered Synthesis:** Consolidates all findings into an M&A-focused summary.\n* **Structured Markdown Report Generation:** Produces a standardized, readable report including a summary table and detailed sections.\n* **Configurable LLM Backend:** Supports various LLM providers via environment variables.\n* **Streaming Progress Updates:** Provides real-time feedback on the research process.\n\n**5. System Architecture & Core Implementation**\n\n**5.1 Overview**\n\nThe agent is implemented as a Python application utilizing the LangGraph library to orchestrate a multi-step research process. 
It interacts with external APIs (LLM, YFinance, Tavily) and follows a state-driven execution model.\n\n**5.2 Core Framework**\n\n* **Language:** Python 3.8+\n* **Orchestration:** LangGraph (`StateGraph`)\n\n**5.3 State Management & Data Models**\n\n* **State:** `ResearchState` TypedDict (`state.py`) defines the graph's memory, holding all inputs, intermediate results, and final outputs.\n* **Data Models:** Pydantic models (`schemas.py`) define structured inputs/outputs for LLM calls (e.g., `ResearchPlan`, `GapAnalysisResult`, `FinalSynthesisResult`) and data structures (e.g., `SearchResultItem`, `StreamUpdate`).\n\n**5.4 Workflow Orchestration (LangGraph)**\n\n* `graph.py` defines the `StateGraph` instance.\n* Nodes representing research tasks are added (`workflow.add_node`).\n* Edges define the sequence of execution (`workflow.add_edge`).\n* Conditional edges (`workflow.add_conditional_edges`) control branching based on state evaluation functions (e.g., `should_continue_web_search`, `decide_gap_followup`).\n\n**5.5 Task Execution (Nodes)**\n\n* `nodes.py` implements the core logic for each step as asynchronous Python functions.\n* Each node function receives the current `ResearchState`, performs its task (e.g., calling tools, formatting prompts, invoking LLMs), and returns a dictionary containing updates to the state.\n\n**5.6 AI / Large Language Models (LLM)**\n\n* Configured in `tools.py` via `initialize_llms()`. 
Supports OpenAI, XAI (Grok), Groq (via OpenAI-compatible API), or generic OpenAI-compatible endpoints based on `.env` settings.\n* Uses `langchain_openai.ChatOpenAI` (or potentially provider-specific classes).\n* Two instances typically used: `llm` (lower temperature for analytical tasks) and `llm_creative` (higher temperature for planning, synthesis, report generation).\n* Leverages LangChain's `with_structured_output` for reliable JSON generation based on Pydantic schemas.\n\n**5.7 External Tools & Data Sources**\n\n* **Yahoo Finance:** Accessed via the `yfinance` Python library. A wrapper function `fetch_yfinance_data` in `tools.py` handles API calls, error catching, and **DataFrame serialization into dictionaries**.\n* **Web Search:** Accessed via the `Tavily` Python client. A wrapper function `perform_web_search` in `tools.py` handles API calls and result formatting into `SearchResultItem` schema.\n\n**5.8 Prompts**\n\n* Defined as constants in `prompt.py`.\n* Specifically crafted for each LLM-driven task: Planning, Financial Analysis (adapts based on YF status), Competitive Analysis, Management/Governance Analysis, Gap Analysis, Synthesis, and Final Report Generation.\n* Prompts are designed to guide the LLM, provide context from the state, and request output in specific formats (often structured JSON or Markdown).\n\n**5.9 Execution Entrypoint**\n\n* `main.py` serves as the script's entry point.\n* Handles command-line argument parsing (JSON input).\n* Initializes the `ResearchState` based on JSON input.\n* Retrieves the compiled LangGraph application (`get_mna_app_yfinance` from `graph.py`).\n* Executes the graph using `research_app.astream()`.\n* Processes streaming updates for console output.\n* Handles final state processing and saving the Markdown report to the `./Output/` directory.\n\n**6. 
Workflow Diagram & Description**\n\n```mermaid\ngraph TD\n    A[Start: Input JSON] --> B(Initialize Research State);\n    B --> C{Check Init OK?};\n    C -- Yes --> D(\"Plan Research (Adapts based on YF flag)\");\n    C -- No --> Z(Finalize Basic Research / Error);\n    D --> E{Check Plan OK?};\n    E -- Yes --> F(Prepare Steps);\n    E -- No --> Z;\n    F --> G(\"Fetch YFinance Data (Sets YF Flag)\");\n    G --> H(Execute Search);\n    H --> I{\"Continue Web Search? (Checks Total vs Completed)\"};\n    I -- Yes --> H;\n    I -- No --> J{Analysis Planned?};\n    J -- Yes --> K(Perform Analysis);\n    J -- No --> L(Analyze Gaps);\n    K --> M{\"Continue Analysis? (Checks Index vs Planned/Max)\"};\n    M -- Yes --> K;\n    M -- No --> L;\n    L --> N{\"Actionable Web Gaps Found & Gap Search Not Run?\"};\n    N -- Yes --> O(Execute Gap Search);\n    N -- No --> P(Synthesize Final Report);\n    O --> P;\n    P --> Q{Check Synthesis OK?};\n    Q -- Yes --> R(\"Generate Final Markdown Report (with Table)\");\n    Q -- No --> Z;\n    R --> Y(END);\n    Z --> Y;\n\n    subgraph Web Search Loop\n        H\n        I\n    end\n    subgraph Analysis Loop\n        K\n        M\n    end\n    subgraph Optional Gap Fill\n        N\n        O\n    end\n\n```\n\n**Workflow Description:**\n\n1.  **Initialize:** Start with JSON input, create initial state including company details and flags.\n2.  **Plan Research:** Based on initial info and whether YFinance is expected to work (or has already failed - though flag is set *after* fetch), LLM generates a plan including financial data steps (YF or Web) and general web search queries, plus analysis goals.\n3.  **Prepare Steps:** Creates a list of steps for potential UI display.\n4.  **Fetch YFinance:** Attempts to get data from Yahoo Finance. Sets the `yfinance_fetch_failed` flag in the state if it encounters significant errors. Serializes successful data.\n5.  **Execute Search:** Enters a loop. Checks the `yfinance_fetch_failed` flag. 
If true, it first executes planned *financial* web searches. Once those are done (or if YF succeeded), it executes the *general* M&A angle web searches. It updates a counter (`completed_web_search_count`) after each successful search.\n6.  **Continue Web Search?:** The conditional edge checks if `completed_web_search_count` is less than the total number of *required* web searches (financial fallback + general). If yes, loop back to Execute Search. If no, proceed.\n7.  **Perform Analysis:** If analysis steps were planned, enter a loop. Execute analysis based on the goal (Financial, Competitive, Mgmt/Gov), using appropriate prompts that consider the `yfinance_fetch_failed` flag to select the correct financial context (YF dicts or financial web results). Loop until all planned steps are done or `max_analysis_steps` is reached.\n8.  **Analyze Gaps:** Evaluate all gathered information (YF/Web financials, web search results, analyses) to identify critical limitations requiring official sources and suggest *actionable* web follow-up queries.\n9.  **Decide Gap Follow-up:** Check if actionable web follow-up queries were generated and if the gap search hasn't already run.\n10. **Execute Gap Search:** If needed, run the suggested web follow-up queries.\n11. **Synthesize Report:** Consolidate all information (initial inputs, YF/Web financials, all web search results, all analyses, gap summary) into a final synthesis focused on M&A themes and uncertainties.\n12. **Generate Final Report:** Create the structured summary table from the state. Call the LLM using the final report prompt, providing the table and all synthesized context. Prepend the table to the LLM's generated report body. Save the final Markdown.\n13. **End:** Terminate the process. `Finalize Basic Research` is a fallback endpoint for early termination due to errors.\n\n**7. 
Data Requirements & Input Format**\n\n**7.1 Input JSON Specification**\n\nThe agent expects a JSON object with the following structure:\n\n```json\n{\n  \"identifier_ric\": \"string\", // REQUIRED: Reuters Instrument Code or Ticker (e.g., \"AAPL\", \"9417.T\")\n  \"company_name\": \"string\", // REQUIRED: Full company name\n  \"country_of_exchange\": \"string\", // OPTIONAL: Country where the primary exchange is located (e.g., \"USA\", \"Japan\")\n  \"market_cap_usd\": number, // OPTIONAL: Recent market capitalization in USD\n  \"business_description\": \"string\", // OPTIONAL: A brief description of the company's business\n  \"pe_timeseries_ratio\": number, // OPTIONAL: Recent P/E ratio (note the context if taken from a time series)\n  \"ebitda_fy0_usd\": number, // OPTIONAL: EBITDA for the last full fiscal year (FY0) in USD\n  \"query_date\": \"string\" // OPTIONAL: Date the input data was sourced (e.g., \"YYYY-MM-DD\")\n}\n```\n\n**7.2 Environment Variables & API Keys**\n\nThe agent requires API keys and configuration set via a `.env` file in the project root:\n\n* `LLM_PROVIDER`: e.g., \"openai\", \"xai\", \"groq\"\n* `LLM_MODEL_NAME`: e.g., \"gpt-4-turbo\", \"grok-2\"\n* `LLM_API_KEY`: API Key for the selected LLM provider (or a provider-specific key like `OPENAI_API_KEY`, `XAI_API_KEY`, `GROQ_API_KEY`).\n* `LLM_BASE_URL`: Required for non-default OpenAI endpoints (like XAI).\n* `LLM_TEMPERATURE`, `LLM_CREATIVE_TEMPERATURE`: LLM temperature settings.\n* `TAVILY_API_KEY`: API Key for Tavily web search.\n* *(Optional)* `EXA_API_KEY`: Required only if Exa search tools are enabled.\n\n**8. Limitations & Constraints**\n\n* **Data Source Reliance:** The agent's output quality is fundamentally limited by the accuracy, completeness, and timeliness of data available on Yahoo Finance and public web search. 
It **cannot replace** analysis based on official, audited sources.\n* **No Official Filings Access:** The agent **does not** parse or analyze official financial filings (e.g., SEC EDGAR 10-K/10-Q, local Annual Reports). This is the most significant limitation for deep M&A diligence.\n* **YFinance Data Limitations:** Yahoo Finance data can have gaps, inaccuracies, or delays. It lacks detailed footnotes and Management Discussion & Analysis (MD&A).\n* **Web Search Limitations:** Public web search results can be noisy, biased, outdated, lack context, or miss critical non-public information. Sentiment and opinions found online may not be representative.\n* **LLM Limitations:** Subject to standard LLM risks, including potential inaccuracies (\"hallucinations\"), biases present in training data, and inability to perform complex multi-step reasoning without explicit guidance. Structured output parsing can occasionally fail.\n* **Non-US/Private Company Data:** Publicly available information (especially structured financial data via YF and English web search results) is often significantly less comprehensive for non-US listed companies and practically non-existent for private companies.\n* **Analysis vs. Judgment:** The agent provides analysis and identifies potential signals based on limited data. It does **not** provide investment advice or a definitive judgment on whether a target *should* be acquired. That requires human expertise and deep diligence.\n\n**9. 
Future Work & Potential Enhancements**\n\n* **Official Document Ingestion (High Impact, High Complexity):** Develop capabilities to ingest and parse specific sections of downloaded official documents (e.g., PDF Annual Reports, specific SEC filing sections) if available, to augment YF/Web data.\n* **Premium Data Integration:** Integrate with commercial financial data providers (e.g., Bloomberg API, Refinitiv Eikon Data API, S&P Capital IQ) for more reliable and detailed financial data (requires subscriptions).\n* **Advanced Iteration & Re-planning:** Implement more sophisticated loops where the agent re-evaluates its plan or re-runs specific analyses based on intermediate findings or identified high-priority gaps.\n* **Human-in-the-Loop:** Integrate optional steps for human review and feedback to guide the research process or validate findings.\n* **Knowledge Base Integration:** Connect the agent to internal knowledge bases or databases containing prior research or proprietary company information.\n* **Multi-Lingual Enhancements:** Improve web search and analysis capabilities for targets operating primarily in non-English speaking markets.\n* **Deployment & Scalability:** Package the agent for deployment as a scalable microservice (potentially using the A2A adapter framework mentioned in the README).\n* **UI Development:** Create a dedicated web interface for easier input, configuration, and visualization of streaming results and final reports.\n* **Valuation Module:** Add a preliminary valuation analysis module (e.g., based on comparable companies analysis using YF data or web-found multiples), clearly stating its high-level, indicative nature."
  },
  {
    "path": "super_agents/customized_deep_research/README.md",
    "content": "# M&A DeepResearch Agent (Preliminary Assessment)\n\n这是 Deep Research Agent 的一个定制化版本，旨在简化 M&A 专业人士的研究流程，帮助他们快速评估潜在标的，并为后续的尽职调查提供基础。我认为有效的 Agent 大概率是定制化的，是针对特定任务和特定场景的服务的。\n\n## 概述 (Overview)\n\nM&A DeepResearch Agent 是一个基于 LangGraph 构建的、专注于执行**初步并购目标尽职调查**的自动化研究工具。它利用公开可用的数据源——主要包括 **Yahoo Finance** (用于获取基础财务指标) 和广泛的 **Web 搜索** (通过 Tavily 获取定性信息、市场背景、新闻等)——来为 M&A 专业人士提供支持。\n\n该 Agent 能够针对用户通过 JSON 格式提供的目标公司信息，自动化地执行一个标准化的初步研究流程，涵盖信息规划、数据获取（含 YFinance 失败时的 Web 搜索回退）、多维度分析和报告生成。最终产出是一份结构化的 Markdown 格式初步研究简报，旨在帮助用户快速评估潜在标的，识别关键风险点和信息缺口，并就是否投入资源进行更深入的、基于官方文件的尽职调查做出更明智的决策。\n\n## 核心特性 (Core Features)\n\n* **自动化初步 M&A 研究流程**: 实现了从目标初始化、研究规划、数据获取、多源信息分析到最终报告生成的端到端自动化工作流 (基于 YFinance 和 Web Search)。\n* **JSON 输入**: 通过标准化的 JSON 对象接收目标公司信息（必需：RIC/Ticker、公司名；可选：国家、市值、业务描述等），确保输入稳定性和可扩展性。\n* **Yahoo Finance 集成**: 调用 `yfinance` 库获取基础财务数据（公司信息、财报概览、股东信息等），并进行序列化处理。\n* **YFinance 失败回退**: 当无法从 Yahoo Finance 获取有效数据时，能自动切换到执行针对性的 Web 搜索来尝试获取替代性的财务线索。\n* **定向网络搜索 (Tavily)**: 利用 Tavily 执行高级 Web 搜索，围绕 M&A 关键角度（管理层、产品技术、市场竞争、客户、风险等）收集定性信息和市场背景。\n* **多角度 LLM 分析**: 基于获取的 YF/Web 数据，利用 LLM 进行多个维度的初步分析，包括：\n    * 财务概况与风险分析 (结合 YF 数据或 Web 回退结果与网络信息)\n    * 市场竞争格局与定位分析\n    * 管理层与治理初步评估\n* **M&A 聚焦的 Gap 分析**: 评估当前研究的局限性（强调 YF/Web 数据的不足），识别进行可靠 M&A 决策所必需的关键信息缺口（通常需要官方文件），并尝试提出可行的补充性 Web 搜索建议。\n* **结构化 Markdown 报告输出**: 生成包含标准章节（含关键局限性说明）、初步发现、以及**报告头部结构化摘要表**的研究简报。\n* **可配置 LLM 后端**: 支持通过环境变量配置不同的 LLM 提供商 (OpenAI, XAI Grok, Groq 等兼容 OpenAI API 的模型) 和模型参数。\n* **流式进度更新**: (通过 Agent 内部机制) 支持输出研究过程中的状态更新，便于观察执行进度。\n"
  },
  {
    "path": "super_agents/customized_deep_research/__init__.py",
    "content": ""
  },
  {
    "path": "super_agents/customized_deep_research/main.py",
    "content": "# /Users/peng/Dev/AI_AGENTS/mentis/super_agents/company_deep_research/main.py\r\n# (Optimized Version - Accepts JSON Input)\r\n\r\nimport sys\r\nfrom pathlib import Path\r\nimport asyncio\r\nimport json\r\nimport os\r\nimport re\r\nimport time\r\nfrom datetime import datetime\r\nfrom typing import Literal, List, Dict, Any, Optional # Ensure Optional is imported\r\n\r\n# --- OpenAI RateLimitError Handling ---\r\ntry:\r\n    from openai import RateLimitError\r\nexcept ImportError:\r\n    print(\"Warning: 'openai' package not installed. RateLimitError handling will use a basic Exception.\")\r\n    class RateLimitError(Exception):\r\n        pass\r\n\r\n# --- Dynamic Path Setup (Keep as is) ---\r\ntry:\r\n    # ... (keep existing dynamic path setup code) ...\r\n    current_script_path = Path(__file__).resolve()\r\n    project_root = current_script_path.parent\r\n    while not (project_root / '.git').exists() and project_root.parent != project_root:\r\n        project_root = project_root.parent\r\n    if not (project_root / '.git').exists():\r\n        print(\"Warning: Could not automatically determine project root based on '.git'. Adding script's directory parent.\")\r\n        project_root = current_script_path.parent.parent\r\n    path_to_add = project_root\r\n    if str(path_to_add) not in sys.path:\r\n        sys.path.insert(0, str(path_to_add))\r\n    print(f\"Dynamically added project root to sys.path: {path_to_add}\")\r\nexcept Exception as e:\r\n    print(f\"Error during dynamic path setup: {e}. 
Please ensure script is run from correct location or manually set PYTHONPATH.\")\r\n    sys.exit(1)\r\n\r\n\r\n# --- LangGraph and Internal Module Imports ---\r\ntry:\r\n    from super_agents.customized_deep_research.reason_graph.graph import get_mna_app_yfinance\r\n    from super_agents.customized_deep_research.reason_graph.state import ResearchState # Import updated state\r\n    from super_agents.customized_deep_research.reason_graph.schemas import StreamUpdate\r\nexcept ImportError as e:\r\n    print(f\"Error importing graph components: {e}\")\r\n    print(\"Please ensure all required files exist in 'reason_graph' and dependencies are installed.\")\r\n    sys.exit(1)\r\nexcept Exception as e:\r\n    print(f\"An unexpected error occurred during imports: {e}\")\r\n    sys.exit(1)\r\n\r\n\r\n# --- Helper Function for Filenames (Keep as is) ---\r\ndef slugify(text: str) -> str:\r\n    \"\"\"Converts text into a safe filename component.\"\"\"\r\n    if not text:\r\n        return \"no_topic_provided\"\r\n    core_text = text.split(\" (\")[0].split(\" \")[0]\r\n    if not core_text: core_text = text\r\n    core_text = core_text.lower()\r\n    core_text = re.sub(r'\\s+', '_', core_text)\r\n    core_text = re.sub(r'[^\\w\\-\\.]+', '', core_text)\r\n    core_text = core_text.strip('_.- ')\r\n    return core_text[:50] if core_text else \"sanitized_topic\"\r\n\r\n# --- **NEW**: Function to create initial state from JSON ---\r\ndef create_initial_state_from_json(input_data: Dict[str, Any], depth: Literal['basic', 'detailed']) -> ResearchState:\r\n    \"\"\"Creates the initial ResearchState dictionary from the input JSON data.\"\"\"\r\n    if not input_data.get(\"identifier_ric\") or not input_data.get(\"company_name\"):\r\n        raise ValueError(\"Input JSON must contain non-empty 'identifier_ric' and 'company_name'.\")\r\n\r\n    # Use .get with appropriate defaults (e.g., None or specific like 'N/A', 0.0)\r\n    # Storing None is okay if subsequent nodes handle it correctly.\r\n    state: 
ResearchState = {\r\n        \"identifier_ric\": input_data[\"identifier_ric\"],\r\n        \"company_name\": input_data[\"company_name\"],\r\n        \"country_of_exchange\": input_data.get(\"country_of_exchange\"), # Default is None if not present\r\n        \"market_cap_usd\": input_data.get(\"market_cap_usd\"), # Default is None\r\n        \"input_business_description\": input_data.get(\"business_description\"), # Default is None\r\n        \"input_pe_ratio\": input_data.get(\"pe_timeseries_ratio\"), # Default is None\r\n        \"input_ebitda_usd\": input_data.get(\"ebitda_fy0_usd\"), # Default is None\r\n        \"input_query_date\": input_data.get(\"query_date\"), # Default is None\r\n\r\n        # Initialize other fields\r\n        \"topic\": f\"M&A Research for {input_data['company_name']} ({input_data['identifier_ric']})\",\r\n        \"ticker\": input_data[\"identifier_ric\"],\r\n        \"max_search_iterations\": 3,\r\n        \"max_analysis_steps\": 5,\r\n        \"analysis_depth\": depth,\r\n        \"research_plan\": None,\r\n        \"search_steps_planned\": [],\r\n        \"financial_web_search_steps\": [],\r\n        \"analysis_steps_planned\": [],\r\n        \"current_analysis_step_index\": 0,\r\n        \"completed_web_search_count\": 0, # Initialize counter\r\n        \"yfinance_data\": None,\r\n        \"yfinance_fetch_failed\": False,\r\n        \"search_results\": [],\r\n        \"financial_web_search_results\": [],\r\n        \"analysis_results\": [],\r\n        \"financial_analysis\": None,\r\n        \"competitive_analysis\": None,\r\n        \"management_governance_assessment\": None,\r\n        \"gaps_identified\": None,\r\n        \"gap_search_results\": [],\r\n        \"final_synthesis\": None,\r\n        \"final_report_markdown\": None,\r\n        \"structured_summary_table\": None,\r\n        \"stream_updates\": [],\r\n        \"completed_steps_count\": 0.0,\r\n        \"total_steps\": None,\r\n        \"error_message\": None\r\n    
}\r\n    return state\r\n\r\n# --- Main Research Execution Function ---\r\nasync def run_research(initial_state: ResearchState): # Takes pre-filled state\r\n    \"\"\"\r\n    Runs the M&A research graph using the provided initial state,\r\n    handling streaming output and errors. Saves the final report.\r\n    \"\"\"\r\n    company_name = initial_state['company_name']\r\n    ticker = initial_state['ticker']\r\n    depth = initial_state['analysis_depth']\r\n\r\n    print(\"\\n--- Starting M&A Research Graph (Optimized - JSON Input) ---\")\r\n    print(f\"Company Name: '{company_name}'\")\r\n    print(f\"Ticker/RIC: '{ticker}'\")\r\n    print(f\"Analysis Depth: '{depth}'\")\r\n    print(\"-\" * 30)\r\n\r\n    processed_updates_count = 0\r\n    config = {\"recursion_limit\": 150}\r\n    final_state: Optional[ResearchState] = None\r\n    error_occurred: Optional[Exception] = None\r\n\r\n    # --- Streaming Execution ---\r\n    try:\r\n        research_app = get_mna_app_yfinance(for_web=False)\r\n        async for state_update_chunk in research_app.astream(initial_state, config=config, stream_mode=\"values\"):\r\n            final_state = state_update_chunk\r\n            all_current_updates: List[Dict] = final_state.get(\"stream_updates\", [])\r\n            new_updates_count = len(all_current_updates) - processed_updates_count\r\n\r\n            if new_updates_count > 0:\r\n                newly_added_updates = all_current_updates[processed_updates_count:]\r\n                print(f\"--- Processing {new_updates_count} New Stream Update(s) ---\")\r\n                for update_dict in newly_added_updates:\r\n                    update_data = update_dict.get('data', {})\r\n                    status = update_data.get('status', 'N/A')\r\n                    step_id = update_data.get('id', 'N/A')\r\n                    msg = update_data.get('message', '')\r\n                    update_type = update_data.get('type', 'N/A')\r\n                    title = 
update_data.get('title', '')\r\n                    print(f\"[{datetime.fromtimestamp(update_dict.get('timestamp', time.time())):%H:%M:%S}] \"\r\n                          f\"[{update_type.upper()}|{status.upper()}|ID:{step_id}] \"\r\n                          f\"{title+': ' if title else ''}{msg}\")\r\n                    payload = update_data.get('payload')\r\n                    # (Keep payload preview logic as before)\r\n                    if payload:\r\n                         try:\r\n                             payload_preview = json.dumps(payload, indent=2, default=str, ensure_ascii=False)\r\n                             if len(payload_preview) > 500: payload_preview = payload_preview[:500] + \"...\"\r\n                             print(f\"  Payload Preview: {payload_preview}\")\r\n                         except Exception as json_e: print(f\"  Payload Preview: [Could not serialize: {json_e}]\")\r\n\r\n                print(\"-\" * 30)\r\n                processed_updates_count = len(all_current_updates)\r\n\r\n    except RateLimitError as e:\r\n        error_occurred = e\r\n        print(\"\\n\" + \"=\"*40 + \"\\n!!! OpenAI API Error: Insufficient Quota !!!\\n\" + \"=\"*40 + \"\\n\")\r\n        # (Keep detailed error message)\r\n        print(\"The research process was stopped due to OpenAI quota limits.\")\r\n        print(\"Please check your OpenAI plan and billing details.\")\r\n        print(f\"Original error: {e}\")\r\n    except ImportError as e:\r\n        error_occurred = e\r\n        print(\"\\n\" + \"=\"*40 + \"\\n!!! Python Import Error !!!\\n\" + \"=\"*40 + \"\\n\")\r\n        # (Keep detailed error message)\r\n        print(f\"Could not import necessary modules: {e}\")\r\n        print(\"Please ensure all dependencies are installed and the project structure is correct.\")\r\n    except Exception as e:\r\n        error_occurred = e\r\n        print(\"\\n\" + \"=\"*40 + \"\\n!!! 
An Unexpected Error Occurred During Graph Execution !!!\\n\" + \"=\"*40 + \"\\n\")\r\n        # (Keep detailed error message)\r\n        print(f\"Error type: {type(e).__name__}\")\r\n        print(f\"Error details: {e}\")\r\n        import traceback\r\n        traceback.print_exc()\r\n\r\n\r\n    # --- Process Final State ---\r\n    if error_occurred:\r\n         print(\"\\n--- Graph Execution INTERRUPTED by Error ---\")\r\n         print(\"Attempting to process the last known state (may be incomplete).\")\r\n    else:\r\n         print(\"\\n--- Graph Execution Finished ---\")\r\n\r\n    # Check if final_state is valid before proceeding\r\n    if not final_state or not isinstance(final_state, dict):\r\n         print(\"Error: Final state is invalid or unavailable after execution.\")\r\n         error_report = f\"# Research Failed\\n\\nCompany: {initial_state['company_name']} ({initial_state['ticker']})\\nReason: Workflow execution failed to produce a valid final state.\"\r\n         if error_occurred: error_report += f\"\\nError Details: {type(error_occurred).__name__}: {error_occurred}\"\r\n         # (Keep minimal error report saving logic)\r\n         try:\r\n             topic_slug = slugify(initial_state['ticker']) # Use ticker for filename slug\r\n             timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\r\n             filename = f\"research_ERROR_{topic_slug}_{timestamp}.md\"\r\n             script_dir = Path(__file__).parent\r\n             output_dir = script_dir / \"Output\"\r\n             output_dir.mkdir(parents=True, exist_ok=True)\r\n             filepath = output_dir / filename\r\n             with open(filepath, \"w\", encoding=\"utf-8\") as f: f.write(error_report)\r\n             print(f\"Saved error summary to: {filepath}\")\r\n         except Exception as save_e: print(f\"Could not save error summary report: {save_e}\")\r\n         return None # Indicate failure\r\n\r\n    # --- Print Final State Summary ---\r\n    print(\"\\n--- 
FINAL STATE SUMMARY (May be partial if error occurred) ---\")\r\n    print(f\"Company Name: {final_state.get('company_name', 'N/A')}\")\r\n    print(f\"Ticker/RIC: {final_state.get('ticker', 'N/A')}\")\r\n    print(f\"Depth: {final_state.get('analysis_depth', 'N/A')}\")\r\n    print(f\"Completed Steps Count: {final_state.get('completed_steps_count', 'N/A')}\")\r\n    print(f\"Total Steps Estimated: {final_state.get('total_steps', 'N/A')}\")\r\n    yf_failed = final_state.get('yfinance_fetch_failed', False)\r\n    yf_data = final_state.get('yfinance_data')\r\n    yf_error_msg = \"Fetch Failed/Skipped\" if yf_failed else (yf_data.get('error', 'None') if isinstance(yf_data, dict) else 'N/A')\r\n    print(f\"Yahoo Finance Fetch Status: {'FAILED (Used Web Fallback)' if yf_failed else 'OK'}\")\r\n    if yf_error_msg != 'None': print(f\"  YF Error Message: {yf_error_msg}\")\r\n    print(f\"General Web Searches Planned/Executed: {len(final_state.get('search_steps_planned', []))} / {final_state.get('current_search_step_index', 0)}\")\r\n    print(f\"Financial Web Searches (Fallback) Planned/Executed: {len(final_state.get('financial_web_search_steps', []))} / {final_state.get('current_financial_web_search_index', 0) if yf_failed else 'N/A'}\") # Adjust index key maybe\r\n    print(f\"Analysis Steps Performed: {final_state.get('current_analysis_step_index', 0)}\")\r\n    print(f\"Total Web Results Collected (All): {len(final_state.get('search_results', []) + final_state.get('financial_web_search_results', []) + final_state.get('gap_search_results', []))}\")\r\n    print(f\"Final Synthesis Generated: {'Yes' if final_state.get('final_synthesis') else 'No'}\")\r\n    print(f\"Summary Table Generated: {'Yes' if final_state.get('structured_summary_table') else 'No'}\")\r\n\r\n\r\n    # --- Save Final Report ---\r\n    final_markdown = final_state.get('final_report_markdown')\r\n\r\n    if final_markdown and isinstance(final_markdown, str):\r\n        if \"Report Generation 
Failed\" in final_markdown and not error_occurred:\r\n             print(\"\\n--- Final Report Generation Node Failed ---\")\r\n             print(final_markdown.split('\\n\\n', 1)[-1])\r\n             print(\"Report not saved.\")\r\n        elif not error_occurred:\r\n             print(\"\\n--- Saving Final Report to Markdown ---\")\r\n             try:\r\n                 filename_base = final_state['ticker'] # Use ticker for filename\r\n                 topic_slug = slugify(filename_base)\r\n                 timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\r\n                 filename = f\"research_report_{topic_slug}_{timestamp}.md\"\r\n                 script_dir = Path(__file__).parent\r\n                 output_dir = script_dir / \"Output\"\r\n                 output_dir.mkdir(parents=True, exist_ok=True)\r\n                 filepath = output_dir / filename\r\n                 with open(filepath, \"w\", encoding=\"utf-8\") as f: f.write(final_markdown)\r\n                 print(f\"Successfully saved report to: {filepath}\")\r\n             except Exception as e:\r\n                 print(f\"\\nError saving final report to Markdown: {e}\")\r\n                 print(\"Report content was:\\n\" + final_markdown[:1000] + \"...\")\r\n        else:\r\n             # Error occurred, but report might exist (e.g., from finalize node)\r\n             print(\"\\nFinal Report: Not saved due to earlier execution error.\")\r\n             print(\"Partial/Fallback report content (if available):\\n\" + str(final_markdown)[:1000] + \"...\")\r\n\r\n    elif error_occurred:\r\n         print(\"\\nFinal Report: Not generated or incomplete due to execution error.\")\r\n    else:\r\n         print(\"\\nFinal Report: Not found in final state.\")\r\n\r\n    print(\"\\n--- END OF RESEARCH ---\")\r\n    return final_state\r\n\r\n\r\n# --- Main Execution Block ---\r\nasync def main():\r\n     # **MODIFIED: Accept JSON file path or JSON string as argument**\r\n     if 
len(sys.argv) < 2:\r\n         print(\"Usage: python main.py <path_to_json_file_or_json_string>\")\r\n         print(\"Example (File): python main.py input_data/9417.T.json\")\r\n         print(\"Example (String): python main.py '{\\\"identifier_ric\\\": \\\"AAPL\\\", \\\"company_name\\\": \\\"Apple Inc.\\\"}'\")\r\n         return\r\n\r\n     input_arg = sys.argv[1]\r\n     input_json_data = None\r\n\r\n     try:\r\n         # Try to load as file path first\r\n         input_path = Path(input_arg)\r\n         if input_path.is_file():\r\n             print(f\"Loading input data from file: {input_path}\")\r\n             with open(input_path, 'r', encoding='utf-8') as f:\r\n                 input_json_data = json.load(f)\r\n         else:\r\n             # Try to load as JSON string\r\n             print(\"Input is not a file path, attempting to parse as JSON string.\")\r\n             input_json_data = json.loads(input_arg)\r\n     except json.JSONDecodeError:\r\n         print(f\"Error: Input argument '{input_arg}' is neither a valid file path nor a valid JSON string.\")\r\n         return\r\n     except FileNotFoundError:\r\n         print(f\"Error: Input file not found at '{input_arg}'\")\r\n         return\r\n     except Exception as e:\r\n         print(f\"Error processing input argument: {e}\")\r\n         return\r\n\r\n     if not input_json_data or not isinstance(input_json_data, dict):\r\n         print(\"Error: Parsed input data is not a valid JSON object.\")\r\n         return\r\n\r\n     # Get analysis depth (optional second argument or default)\r\n     depth_input = sys.argv[2].strip().lower() if len(sys.argv) > 2 else 'detailed'\r\n     depth: Literal['basic', 'detailed'] = 'basic' if depth_input == 'basic' else 'detailed'\r\n\r\n     # Create initial state from JSON\r\n     try:\r\n          initial_research_state = create_initial_state_from_json(input_json_data, depth)\r\n     except ValueError as ve:\r\n          print(f\"Error creating initial 
state: {ve}\")\r\n          return\r\n     except Exception as state_e:\r\n          print(f\"Unexpected error creating initial state: {state_e}\")\r\n          return\r\n\r\n\r\n     # Run the research process\r\n     await run_research(initial_research_state)\r\n\r\nif __name__ == \"__main__\":\r\n    try:\r\n        print(\"Starting M&A Deep Research Runner (Optimized)...\")\r\n        if sys.version_info < (3, 8): # Asyncio.run needs 3.7+, some async features better in 3.8+\r\n             print(\"Warning: Python 3.8+ recommended for best asyncio performance.\")\r\n        asyncio.run(main())\r\n    except KeyboardInterrupt:\r\n        print(\"\\nResearch process interrupted by user (Ctrl+C).\")\r\n    except Exception as e:\r\n        print(f\"\\nA critical error occurred in the main execution block: {e}\")\r\n        import traceback\r\n        traceback.print_exc()\r\n    finally:\r\n        print(\"\\nProgram finished.\")"
  },
  {
    "path": "super_agents/customized_deep_research/reason_graph/__init__.py",
    "content": ""
  },
  {
    "path": "super_agents/customized_deep_research/reason_graph/graph.py",
    "content": "# /Users/peng/Dev/AI_AGENTS/mentis/super_agents/company_deep_research/reason_graph/graph.py\r\n# (Optimized Version v2 - Adjusted Conditional Logic)\r\n\r\nfrom typing import Literal, Optional, Dict, Any\r\nfrom langgraph.graph import StateGraph, END, START\r\n\r\n# Use updated state definition\r\nfrom .state import ResearchState\r\n# Import updated node functions\r\nfrom .nodes import (\r\n    initialize_research,\r\n    plan_research,\r\n    prepare_steps,\r\n    fetch_financial_data,\r\n    execute_search, # Handles both financial and general web searches now\r\n    perform_analysis,\r\n    analyze_gaps,\r\n    execute_gap_search,\r\n    synthesize_final_report,\r\n    finalize_basic_research,\r\n    generate_final_markdown_report\r\n)\r\n\r\n# --- Conditional Edge Functions (Revised) ---\r\n\r\ndef check_initialization(state: ResearchState) -> Literal[\"plan_research\", \"finalize_basic_research\"]:\r\n    \"\"\"Decides whether to proceed after initialization.\"\"\"\r\n    # Initialization now primarily uses guaranteed JSON input\r\n    if state.get('ticker') and state.get('company_name'):\r\n        print(\"[Graph Condition] Initialization successful (used JSON input), proceeding to plan.\")\r\n        # Initialize web search count here\r\n        state['completed_web_search_count'] = 0\r\n        return \"plan_research\"\r\n    else:\r\n        # This path should ideally not be hit if main.py enforces JSON input\r\n        print(\"[Graph Condition] Initialization failed (missing core data from state), finalizing.\")\r\n        state['error_message'] = \"Initialization failed: Missing core company info.\"\r\n        return \"finalize_basic_research\"\r\n\r\ndef check_planning(state: ResearchState) -> Literal[\"prepare_steps\", \"finalize_basic_research\"]:\r\n     \"\"\"Checks if the research plan was successfully generated.\"\"\"\r\n     if state.get(\"research_plan\"):\r\n         print(\"[Graph Condition] Planning successful, proceeding to 
prepare steps.\")\r\n         return \"prepare_steps\"\r\n     else:\r\n         print(\"[Graph Condition] Planning failed or plan is empty, finalizing research.\")\r\n         return \"finalize_basic_research\"\r\n\r\n# --- REVISED Web Search Continuation Logic ---\r\ndef should_continue_web_search(state: ResearchState) -> Literal[\"execute_search\", \"perform_analysis\", \"analyze_gaps\"]:\r\n    \"\"\"Decides whether to continue web searching (financial fallback or general) or move to analysis.\"\"\"\r\n    completed_count = state.get('completed_web_search_count', 0)\r\n    yfinance_failed = state.get('yfinance_fetch_failed', False)\r\n\r\n    # Calculate total web searches needed\r\n    financial_searches_planned = state.get('financial_web_search_steps', [])\r\n    general_searches_planned = state.get('search_steps_planned', [])\r\n    total_web_searches_needed = 0\r\n    if yfinance_failed:\r\n        total_web_searches_needed += len(financial_searches_planned)\r\n    total_web_searches_needed += len(general_searches_planned)\r\n\r\n    print(f\"[Graph Condition Check] Web Searches: Completed={completed_count}, Total Needed={total_web_searches_needed}\")\r\n\r\n    if completed_count < total_web_searches_needed:\r\n        # If there are more web searches planned (either type), continue the loop.\r\n        print(f\"[Graph Condition] Continue web search ({completed_count + 1}/{total_web_searches_needed}).\")\r\n        return \"execute_search\"\r\n    else:\r\n        # If all planned web searches are done, check if analysis is needed.\r\n        analysis_steps_planned = state.get('analysis_steps_planned', [])\r\n        if analysis_steps_planned and isinstance(analysis_steps_planned, list) and len(analysis_steps_planned) > 0:\r\n             # If analysis steps exist, move to the analysis phase.\r\n             print(\"[Graph Condition] All applicable web searches complete. 
Moving to analysis.\")\r\n             return \"perform_analysis\"\r\n        else:\r\n             # If no analysis steps were planned, skip analysis and go directly to gap identification.\r\n             print(\"[Graph Condition] All applicable web searches complete, no analysis planned. Moving to gap analysis.\")\r\n             return \"analyze_gaps\"\r\n\r\n\r\ndef should_continue_analysis(state: ResearchState) -> Literal[\"perform_analysis\", \"analyze_gaps\"]:\r\n    \"\"\"Decides whether to continue executing planned analysis steps or move to gap analysis.\"\"\"\r\n    current_analysis_index = state.get('current_analysis_step_index', 0)\r\n    analysis_steps_planned = state.get('analysis_steps_planned', [])\r\n    if not isinstance(analysis_steps_planned, list): analysis_steps_planned = [] # Safety check\r\n    max_steps = state.get('max_analysis_steps', 5) # Use configured max steps\r\n\r\n    if current_analysis_index < len(analysis_steps_planned) and current_analysis_index < max_steps:\r\n        # If more analysis steps are left within plan and limit, continue the loop.\r\n        print(f\"[Graph Condition] Continue analysis ({current_analysis_index + 1}/{len(analysis_steps_planned)}, Max: {max_steps}).\")\r\n        return \"perform_analysis\"\r\n    else:\r\n        if current_analysis_index >= max_steps:\r\n            print(f\"[Graph Condition] Reached max analysis steps ({max_steps}). Moving to gap analysis.\")\r\n        else:\r\n            print(\"[Graph Condition] All planned analysis steps complete. 
Moving to gap analysis.\")\r\n        return \"analyze_gaps\"\r\n\r\ndef decide_gap_followup(state: ResearchState) -> Literal[\"execute_gap_search\", \"synthesize_final_report\"]:\r\n    \"\"\"Decides whether to execute gap-filling web searches or move to synthesis.\"\"\"\r\n    gaps = state.get('gaps_identified')\r\n    # Check if gap analysis suggested *actionable* web follow-up queries\r\n    # AND if the gap search node hasn't already run (check presence/content of gap_search_results)\r\n    has_run_gap_search = len(state.get('gap_search_results', [])) > 0\r\n    follow_up_queries_exist = gaps and gaps.follow_up_queries and isinstance(gaps.follow_up_queries, list) and len(gaps.follow_up_queries) > 0\r\n\r\n    if follow_up_queries_exist and not has_run_gap_search:\r\n         print(\"[Graph Condition] Actionable gaps identified with web search suggestions, proceeding to execute gap search.\")\r\n         return \"execute_gap_search\"\r\n    else:\r\n        if has_run_gap_search:\r\n             print(\"[Graph Condition] Gap search already performed or skipped previously. Moving to synthesis.\")\r\n        elif not follow_up_queries_exist:\r\n             print(\"[Graph Condition] No actionable web follow-up needed based on gap analysis. Moving to synthesis.\")\r\n        else: # Should not happen but safety catch\r\n             print(\"[Graph Condition] Unexpected state in gap decision. 
Moving to synthesis.\")\r\n        return \"synthesize_final_report\"\r\n\r\ndef check_synthesis(state: ResearchState) -> Literal[\"generate_final_markdown_report\", \"finalize_basic_research\"]:\r\n     \"\"\"Checks if the synthesis step was successful before generating the final report.\"\"\"\r\n     final_synthesis = state.get(\"final_synthesis\")\r\n     # Check if synthesis result exists and has non-empty key findings\r\n     if final_synthesis and hasattr(final_synthesis, 'key_findings_summary') and final_synthesis.key_findings_summary and \\\r\n        \"fail\" not in final_synthesis.key_findings_summary.lower(): # Basic check for failure text\r\n         print(\"[Graph Condition] Synthesis successful, proceeding to report generation.\")\r\n         return \"generate_final_markdown_report\"\r\n     else:\r\n         print(\"[Graph Condition] Synthesis failed, missing, or empty, finalizing research.\")\r\n         # Caution: mutating state inside a conditional edge is not a LangGraph channel update and may not persist;\r\n         # finalize_basic_research should also record this error in its returned state.\r\n         state['error_message'] = \"Synthesis failed or produced empty results.\"\r\n         return \"finalize_basic_research\"\r\n\r\n\r\n# --- Build the Optimized M&A Workflow ---\r\ndef build_mna_research_graph_yfinance_optimized(for_web: bool = False) -> StateGraph:\r\n    \"\"\"\r\n    Builds the LangGraph StateGraph for M&A preliminary research (Optimized Version).\r\n    \"\"\"\r\n    workflow = StateGraph(ResearchState)\r\n\r\n    # --- Define Nodes ---\r\n    workflow.add_node(\"initialize_research\", initialize_research)\r\n    workflow.add_node(\"plan_research\", plan_research)\r\n    workflow.add_node(\"prepare_steps\", prepare_steps)\r\n    workflow.add_node(\"fetch_financial_data\", fetch_financial_data)\r\n    workflow.add_node(\"execute_search\", execute_search) # Handles both search types\r\n    workflow.add_node(\"perform_analysis\", perform_analysis)\r\n    workflow.add_node(\"analyze_gaps\", analyze_gaps)\r\n    workflow.add_node(\"execute_gap_search\", execute_gap_search)\r\n    workflow.add_node(\"synthesize_final_report\", 
synthesize_final_report)\r\n    workflow.add_node(\"generate_final_markdown_report\", generate_final_markdown_report)\r\n    workflow.add_node(\"finalize_basic_research\", finalize_basic_research)\r\n\r\n    # --- Define Edges ---\r\n\r\n    # 1. Set Entry Point\r\n    workflow.set_entry_point(\"initialize_research\")\r\n\r\n    # 2. Initialization to Planning (Conditional)\r\n    workflow.add_conditional_edges(\r\n        \"initialize_research\",\r\n        check_initialization,\r\n        {\"plan_research\": \"plan_research\", \"finalize_basic_research\": \"finalize_basic_research\"}\r\n    )\r\n\r\n    # 3. Planning to Prepare Steps (Conditional)\r\n    workflow.add_conditional_edges(\r\n        \"plan_research\",\r\n        check_planning,\r\n        {\"prepare_steps\": \"prepare_steps\", \"finalize_basic_research\": \"finalize_basic_research\"}\r\n    )\r\n\r\n    # 4. Prepare Steps to Fetching Financial Data\r\n    # Always attempt YF fetch after preparing steps (node handles failure flag).\r\n    workflow.add_edge(\"prepare_steps\", \"fetch_financial_data\")\r\n\r\n    # 5. Fetch Financial Data to Starting Web Search\r\n    # Always proceed to execute_search node after fetch attempt.\r\n    # execute_search node internally decides which searches to run based on YF flag.\r\n    workflow.add_edge(\"fetch_financial_data\", \"execute_search\")\r\n\r\n    # 6. 
Web Search Loop (Handles both Financial Fallback and General)\r\n    # **MODIFIED Condition:** Uses the revised condition function.\r\n    workflow.add_conditional_edges(\r\n        \"execute_search\",\r\n        should_continue_web_search, # Uses revised logic checking total searches needed vs completed\r\n        {\r\n            \"execute_search\": \"execute_search\", # Loop back if more searches needed\r\n            \"perform_analysis\": \"perform_analysis\", # Move to analysis if searches done & analysis planned\r\n            \"analyze_gaps\": \"analyze_gaps\" # Move to gaps if searches done & no analysis planned\r\n        }\r\n    )\r\n\r\n    # 7. Analysis Loop to Gap Analysis\r\n    workflow.add_conditional_edges(\r\n        \"perform_analysis\",\r\n        should_continue_analysis, # Function checks if more analysis steps are planned within limits\r\n        {\"perform_analysis\": \"perform_analysis\", \"analyze_gaps\": \"analyze_gaps\"}\r\n    )\r\n\r\n    # 8. Gap Analysis to Gap Search or Synthesis\r\n    workflow.add_conditional_edges(\r\n        \"analyze_gaps\",\r\n        decide_gap_followup, # Checks for *actionable* web follow-ups\r\n        {\"execute_gap_search\": \"execute_gap_search\", \"synthesize_final_report\": \"synthesize_final_report\"}\r\n    )\r\n\r\n    # 9. After Gap Search (if run) to Synthesis\r\n    # Always go to synthesis after attempting gap search.\r\n    workflow.add_edge(\"execute_gap_search\", \"synthesize_final_report\")\r\n\r\n    # 10. Synthesis to Final Report (Conditional)\r\n    workflow.add_conditional_edges(\r\n        \"synthesize_final_report\",\r\n        check_synthesis, # Checks if synthesis result is valid\r\n        {\"generate_final_markdown_report\": \"generate_final_markdown_report\", \"finalize_basic_research\": \"finalize_basic_research\"}\r\n    )\r\n\r\n    # 11. Final Report to END\r\n    workflow.add_edge(\"generate_final_markdown_report\", END)\r\n\r\n    # 12. 
Fallback End Path\r\n    workflow.add_edge(\"finalize_basic_research\", END)\r\n\r\n    print(\"M&A Research Graph Built (Optimized JSON Input & YF Fallback Version).\")\r\n    return workflow\r\n\r\n# --- Build and Compile ---\r\ngraph_app_builder = build_mna_research_graph_yfinance_optimized\r\n\r\n# Compile the graph instance for script execution\r\napp_mna_yf_opt = graph_app_builder(for_web=False).compile()\r\n# Optionally compile for web if needed\r\n# web_app_mna_yf_opt = graph_app_builder(for_web=True).compile()\r\n\r\n# --- Function for main.py to Import ---\r\ndef get_mna_app_yfinance(for_web: bool = False) -> Any:\r\n    \"\"\"Returns the compiled optimized M&A graph.\"\"\"\r\n    print(f\"[Graph Module] Providing compiled OPTIMIZED graph instance (for_web={for_web})...\")\r\n    # if for_web:\r\n    #     return web_app_mna_yf_opt # If you have a web version\r\n    # else:\r\n    return app_mna_yf_opt # Return the optimized version"
  },
  {
    "path": "super_agents/customized_deep_research/reason_graph/nodes.py",
    "content": "# /Users/peng/Dev/AI_AGENTS/mentis/super_agents/company_deep_research/reason_graph/nodes.py\r\n# (Optimized Version)\r\n\r\nimport re\r\nimport asyncio\r\nimport json\r\nimport time\r\nfrom datetime import datetime\r\nfrom typing import Dict, Any, List, Literal, Optional\r\nimport pandas as pd\r\nfrom langchain_core.messages import AIMessage, HumanMessage, ToolMessage\r\n\r\n# --- Internal Imports ---\r\nfrom .state import ResearchState, YFinanceData\r\nfrom .schemas import (\r\n    SearchQuery, RequiredAnalysis, AnalysisResult, GapAnalysisResult, GapFollowUpQuery,\r\n    FinalSynthesisResult, SearchStepResult, SearchResultItem, StreamUpdate, StepInfo, ResearchPlan, KeyFinding\r\n)\r\nfrom .tools import (\r\n    llm, llm_creative, generate_structured_output,\r\n    perform_web_search,\r\n    fetch_yfinance_data,\r\n    create_update # Use the corrected helper\r\n)\r\nfrom .prompt import (\r\n    PLAN_RESEARCH_PROMPT_YFINANCE,\r\n    FINAL_REPORT_SYSTEM_PROMPT_TEMPLATE_YFINANCE_ONLY,\r\n    FINANCIAL_ANALYSIS_PROMPT_YFINANCE,\r\n    COMPETITIVE_ANALYSIS_PROMPT_YFINANCE,\r\n    MANAGEMENT_GOVERNANCE_PROMPT_YFINANCE,\r\n    GAP_ANALYSIS_PROMPT_YFINANCE,\r\n    SYNTHESIS_PROMPT_YFINANCE\r\n)\r\n# Import logger from tools if defined there, or set up locally\r\n# from .tools import logger # Assuming logger is setup in tools.py\r\n# Fallback basic logger if not imported\r\nimport logging\r\nlogger = logging.getLogger(__name__)\r\nlogging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - [%(funcName)s] %(message)s')\r\n\r\n\r\n# --- Node Functions (Optimized Version) ---\r\n\r\nasync def initialize_research(state: ResearchState) -> Dict[str, Any]:\r\n    \"\"\"Initializes research using guaranteed JSON input fields.\"\"\"\r\n    # Assumes state is pre-populated with JSON input by main.py\r\n    identifier_ric = state['identifier_ric'] # Guaranteed\r\n    company_name = state['company_name'] # Guaranteed\r\n    step_id = 
'initialize-research'\r\n\r\n    # Use guaranteed fields directly\r\n    ticker = identifier_ric # Use RIC as the ticker for yfinance\r\n    research_topic = f\"M&A Preliminary Deep Research for {company_name} ({ticker})\"\r\n\r\n    logger.info(f\"--- Running Node: initialize_research ({company_name} / {ticker}) ---\")\r\n    logger.info(f\"Using guaranteed input: Ticker='{ticker}', Name='{company_name}'\")\r\n    # Log optional fields if present\r\n    for key in ['country_of_exchange', 'market_cap_usd', 'input_business_description', 'input_pe_ratio', 'input_ebitda_usd', 'input_query_date']:\r\n        if state.get(key):\r\n            logger.info(f\"Input {key}: {state[key]}\")\r\n\r\n    message = f\"Initialization complete. Target: {company_name} ({ticker})\"\r\n    status = 'completed'\r\n    # Corrected create_update call\r\n    all_updates = create_update(state, {\r\n        'id': step_id,\r\n        'type': 'setup',\r\n        'status': status,\r\n        'title': 'Initialize Research',\r\n        'message': message,\r\n        'overwrite': True\r\n    })\r\n    # Corrected create_update call for progress\r\n    all_updates.extend(create_update(state, {\r\n        'id': 'research-progress',\r\n        'type': 'progress',\r\n        'status': 'running',\r\n        'title': 'Research Progress',\r\n        'completedSteps': 0.5,\r\n        'message':'Initialization complete, planning research...',\r\n        'overwrite': True\r\n    }))\r\n\r\n    logger.info(f\"--- Exiting Node: initialize_research ---\")\r\n    # Return minimal update as core info is already in state\r\n    return {\r\n        \"topic\": research_topic, # Set derived topic\r\n        \"ticker\": ticker, # Ensure ticker is explicitly set from RIC\r\n        \"yfinance_fetch_failed\": False, # Initialize YF status flag\r\n        \"stream_updates\": state.get('stream_updates', []) + all_updates\r\n    }\r\n\r\n\r\nasync def plan_research(state: ResearchState) -> Dict[str, Any]:\r\n    
\"\"\"Generates research plan, adapting based on yfinance fetch status.\"\"\"\r\n    ticker = state['ticker'] # Guaranteed from init\r\n    company_name = state['company_name'] # Guaranteed from init\r\n    topic = state['topic'] # Derived topic string\r\n    yfinance_failed = state.get('yfinance_fetch_failed', False) # Check YF status flag\r\n    step_id = 'research-plan-initial'\r\n\r\n    all_updates = create_update(state, {\r\n        'id': step_id, 'type': 'plan', 'status': 'running',\r\n        'title': 'Research Plan', 'message': 'Creating research plan...', 'overwrite': True\r\n    })\r\n    logger.info(f\"\\n--- Running Node: plan_research (Target: {company_name} / {ticker}) ---\")\r\n    logger.info(f\"Yahoo Finance fetch status (before plan): {'Failed' if yfinance_failed else 'Assumed OK / Pending'}\")\r\n\r\n    # Prepare context for the planning prompt, including initial JSON data\r\n    yfinance_status_text = \"Failed\" if yfinance_failed else \"Successful\" # Text for prompt\r\n    country = state.get('country_of_exchange', 'N/A')\r\n    market_cap = state.get('market_cap_usd', 'N/A')\r\n    ebitda = state.get('input_ebitda_usd', 'N/A')\r\n    query_date = state.get('input_query_date', 'N/A')\r\n    business_desc = state.get('input_business_description', 'N/A')\r\n\r\n\r\n    plan_prompt = PLAN_RESEARCH_PROMPT_YFINANCE.format(\r\n        company_name=company_name,\r\n        ticker=ticker,\r\n        country=country,\r\n        market_cap=market_cap,\r\n        ebitda=ebitda,\r\n        query_date=query_date,\r\n        business_desc=business_desc,\r\n        yfinance_status=yfinance_status_text\r\n        # topic=topic # Topic string might be less useful now\r\n    )\r\n\r\n    try:\r\n        research_plan_result: Optional[ResearchPlan] = await generate_structured_output(\r\n            llm_creative, ResearchPlan, plan_prompt\r\n        )\r\n\r\n        if not research_plan_result:\r\n             raise ValueError(\"Research plan generation failed 
or yielded empty result.\")\r\n\r\n        # Separate planned steps\r\n        search_steps_planned = research_plan_result.search_queries if research_plan_result.search_queries else []\r\n        analysis_steps_planned = research_plan_result.required_analyses if research_plan_result.required_analyses else []\r\n\r\n        # Filter out yfinance step if YF failed - it shouldn't be planned anyway based on prompt, but double-check.\r\n        if yfinance_failed:\r\n            search_steps_planned = [s for s in search_steps_planned if s.tool_hint != 'yfinance']\r\n            num_yfinance_steps = 0\r\n        else:\r\n             num_yfinance_steps = sum(1 for s in search_steps_planned if s.tool_hint == 'yfinance')\r\n\r\n        # Separate financial web searches if YF failed (assuming they are generated by the prompt)\r\n        financial_web_search_steps = []\r\n        other_web_search_steps = []\r\n        if yfinance_failed:\r\n             # Heuristic: Identify financial web searches based on keywords in query\r\n             financial_keywords = ['revenue', 'profit', 'financials', 'market cap', 'ebitda', 'funding', 'financing', 'debt', 'valuation']\r\n             for s in search_steps_planned:\r\n                 if s.tool_hint == 'web_search' and any(keyword in s.query.lower() for keyword in financial_keywords):\r\n                     financial_web_search_steps.append(s)\r\n                 elif s.tool_hint == 'web_search': # Keep other web searches\r\n                     other_web_search_steps.append(s)\r\n             logger.info(f\"YF failed. 
Identified {len(financial_web_search_steps)} potential financial web searches and {len(other_web_search_steps)} other web searches.\")\r\n             search_steps_planned = other_web_search_steps # Main loop handles non-financial web searches\r\n        else:\r\n             search_steps_planned = [s for s in search_steps_planned if s.tool_hint != 'yfinance'] # Remove YF step for web search loop\r\n\r\n        num_web_search_steps = len(search_steps_planned)\r\n        num_financial_web_search_steps = len(financial_web_search_steps)\r\n        num_analysis_steps = len(analysis_steps_planned)\r\n        # Adjust total steps estimate\r\n        total_steps = 1 + 1 + (0 if yfinance_failed else 1) + num_web_search_steps + num_financial_web_search_steps + num_analysis_steps + 1 + 1 + 1 + 1\r\n\r\n        message = f\"Research plan created: {num_web_search_steps} general web searches, {num_financial_web_search_steps} financial web searches (YF fallback), {num_analysis_steps} analyses.\"\r\n        if not yfinance_failed: message = f\"Research plan created: 1 yfinance step, {num_web_search_steps} web searches, {num_analysis_steps} analyses.\"\r\n\r\n        all_updates.extend(create_update(state, {\r\n            'id': step_id, 'type': 'plan', 'status': 'completed', 'title': 'Research Plan',\r\n            'message': message,\r\n            'payload': research_plan_result.model_dump() if research_plan_result else {}, # Pydantic V2 serialization\r\n            'overwrite': True\r\n        }))\r\n        all_updates.extend(create_update(state, {\r\n            'id': 'research-progress', 'type': 'progress', 'status': 'running', 'title': 'Research Progress',\r\n            'message': 'Research plan complete.', 'completedSteps': 1.5, 'totalSteps': total_steps,\r\n            'isComplete': False, 'overwrite': True\r\n        }))\r\n\r\n        logger.info(\"--- Exiting Node: plan_research (Success) ---\")\r\n        return {\r\n            \"research_plan\": research_plan_result,\r\n            
\"search_steps_planned\": search_steps_planned, # General web searches\r\n            \"financial_web_search_steps\": financial_web_search_steps, # Financial web searches (if YF failed)\r\n            \"analysis_steps_planned\": analysis_steps_planned,\r\n            \"current_search_step_index\": 0,\r\n            \"current_analysis_step_index\": 0,\r\n            \"completed_steps_count\": 1.5,\r\n            \"total_steps\": total_steps,\r\n            \"stream_updates\": state.get('stream_updates', []) + all_updates,\r\n        }\r\n    except Exception as e:\r\n        logger.error(f\"Error in plan_research: {e}\", exc_info=True)\r\n        error_updates = create_update(state, {\r\n            'id': step_id, 'type': 'plan', 'status': 'error', 'title': 'Research Plan',\r\n            'message': f\"Failed to create plan: {e}\", 'overwrite': True\r\n            })\r\n        progress_error = create_update(state, {\r\n            'id': 'research-progress', 'type': 'progress', 'status': 'error', 'title': 'Research Progress',\r\n            'message': 'Research planning failed.', 'isComplete': True, 'overwrite': True\r\n            })\r\n        logger.info(\"--- Exiting Node: plan_research (Error) ---\")\r\n        return {\"stream_updates\": state.get('stream_updates', []) + all_updates + error_updates + progress_error, \"research_plan\": None}\r\n\r\n\r\nasync def prepare_steps(state: ResearchState) -> Dict[str, Any]:\r\n    \"\"\"Prepares step info for UI, reflecting dynamic plan.\"\"\"\r\n    # Get planned steps from state\r\n    yfinance_failed = state.get('yfinance_fetch_failed', False)\r\n    web_search_steps = state.get('search_steps_planned', []) # General web searches\r\n    financial_web_searches = state.get('financial_web_search_steps', []) # Financial web searches (if YF failed)\r\n    analysis_steps = state.get('analysis_steps_planned', [])\r\n    steps_info = []\r\n    all_updates = state.get('stream_updates', [])\r\n    logger.info(\"--- Running 
Node: prepare_steps ---\")\r\n\r\n    # Create StepInfo objects for UI display\r\n    steps_info.append(StepInfo(id='initialize-research', type='setup', status='completed', title='Initialize Research', description=f\"Target: {state['company_name']} ({state['ticker']})\"))\r\n    steps_info.append(StepInfo(id='research-plan-initial', type='plan', status='completed', title='Research Plan', description='Plan Created'))\r\n\r\n    # Add YFinance Step OR Financial Web Search Steps\r\n    if not yfinance_failed:\r\n        steps_info.append(StepInfo(id='fetch-yfinance', type='data_fetch', status='pending', title='Fetch Yahoo Finance Data', description=f\"Get financial data for {state['ticker']}\"))\r\n    else:\r\n        for i, step in enumerate(financial_web_searches):\r\n            steps_info.append(StepInfo(id=f'financial-web-search-{i}', type='search', status='pending', title=f\"Financial Web Search #{i+1}\", description=f\"Alt for YF: {step.query[:60]}...\" ))\r\n\r\n    # Add General Web Search Steps\r\n    for i, step in enumerate(web_search_steps):\r\n        steps_info.append(StepInfo(id=f'web-search-{i}', type='search', status='pending', title=f\"Web Search #{i+1}\", description=step.query[:60]+\"...\" ))\r\n\r\n    # Add Analysis Steps\r\n    for i, step in enumerate(analysis_steps):\r\n         steps_info.append(StepInfo(id=f'analysis-{i}', type='analysis', status='pending', title=f\"Analysis #{i+1}\", description=step.analysis_goal[:60]+\"...\" ))\r\n\r\n    # Add Fixed Subsequent Steps\r\n    steps_info.append(StepInfo(id='gap-analysis', type='analysis', status='pending', title='Identify Gaps', description='Analyze limitations.'))\r\n    steps_info.append(StepInfo(id='gap-search', type='search', status='pending', title='Gap Filling Search', description='Follow-up web searches.'))\r\n    steps_info.append(StepInfo(id='synthesis', type='synthesis', status='pending', title='Synthesize Findings', description='Combine all findings.'))\r\n    
steps_info.append(StepInfo(id='final-report', type='report', status='pending', title='Generate Final Report', description='Create final report.'))\r\n\r\n    # Send steps list update\r\n    all_updates.extend(create_update(state, {\r\n        'id': 'research-steps-list',\r\n        'type': 'steps_list',\r\n        'status': 'completed',\r\n        'title': 'Research Steps',\r\n        'payload': [s.model_dump() for s in steps_info] # Pydantic V2 serialization\r\n    }))\r\n\r\n    # Update total steps based on actual steps listed for better accuracy\r\n    total_steps_actual = len(steps_info)\r\n    if state.get('total_steps') != total_steps_actual:\r\n        all_updates.extend(create_update(state, {\r\n            'id': 'research-progress', 'type': 'progress', 'status': 'running',\r\n            'title': 'Research Progress', 'totalSteps': total_steps_actual,\r\n            'message': 'Steps prepared.', 'overwrite': True\r\n            }))\r\n\r\n    logger.info(f\"--- Exiting Node: prepare_steps (Prepared {total_steps_actual} steps) ---\")\r\n    return {\"stream_updates\": all_updates, \"total_steps\": total_steps_actual} # Return updated total_steps\r\n\r\n\r\nasync def fetch_financial_data(state: ResearchState) -> Dict[str, Any]:\r\n    \"\"\"Fetches data using the yfinance tool and sets failure flag.\"\"\"\r\n    ticker = state['ticker'] # Guaranteed from init\r\n    step_id = 'fetch-yfinance'\r\n    yfinance_fetch_failed = False # Default to success initially\r\n    all_updates = create_update(state, {\r\n        'id': step_id, 'type': 'data_fetch', 'status': 'running',\r\n        'title': 'Fetch Yahoo Finance Data', 'message': f\"Fetching Yahoo Finance data for {ticker}...\",\r\n        'overwrite': True\r\n    })\r\n    logger.info(f\"\\n--- Running Node: fetch_financial_data ({ticker}) ---\")\r\n\r\n    yfinance_result: YFinanceData = {\"error\": \"Fetch not attempted.\"} # Default\r\n    status = 'pending'\r\n\r\n    try:\r\n        # Call the tool 
function from tools.py\r\n        yfinance_result = await fetch_yfinance_data(ticker) # Assumes tool is async\r\n        fetch_error = yfinance_result.get('error')\r\n\r\n        if fetch_error:\r\n            # Check if it's a critical failure (e.g., info failed, or many errors)\r\n            if \"Failed to fetch core info\" in fetch_error or \"critical error\" in fetch_error.lower():\r\n                 message = f\"Yahoo Finance critical error: {fetch_error[:150]}...\"\r\n                 status = 'error'\r\n                 yfinance_fetch_failed = True # Set failure flag\r\n                 logger.error(message)\r\n            else:\r\n                 # Treat other errors as warnings, data might be partially useful\r\n                 message = f\"Yahoo Finance fetch completed with non-critical error: {fetch_error[:100]}...\"\r\n                 status = 'warning'\r\n                 # yfinance_fetch_failed = False # Assume partial success is okay unless explicitly critical\r\n                 logger.warning(message)\r\n        else:\r\n             message = \"Yahoo Finance data fetched successfully.\"\r\n             status = 'completed'\r\n             logger.info(message)\r\n\r\n    except Exception as e:\r\n        message = f\"Critical system error in fetch_financial_data node: {e}\"\r\n        logger.error(message, exc_info=True)\r\n        yfinance_result = {\"error\": message}\r\n        status = 'error'\r\n        yfinance_fetch_failed = True # Set failure flag on system error\r\n\r\n    # Update UI for node completion/status\r\n    payload = {'keys': list(yfinance_result.keys()), 'error': yfinance_result.get('error')} if isinstance(yfinance_result, dict) else None\r\n    all_updates.extend(create_update(state, {\r\n        'id': step_id, 'type': 'data_fetch', 'status': status,\r\n        'title': 'Fetch Yahoo Finance Data', 'message': message,\r\n        'payload': payload, 'overwrite': True\r\n    }))\r\n\r\n    # Update progress\r\n    
completed_steps = state.get('completed_steps_count', 0) + 1\r\n    all_updates.extend(create_update(state, {\r\n        'id': 'research-progress', 'type': 'progress', 'status': 'running',\r\n        'title': 'Research Progress', 'completedSteps': completed_steps,\r\n        'message': f'Completed financial data fetch step ({status}).',\r\n        'overwrite': True\r\n    }))\r\n\r\n    logger.info(f\"--- Exiting Node: fetch_financial_data ({status}, YF_Failed={yfinance_fetch_failed}) ---\")\r\n    return {\r\n        \"yfinance_data\": yfinance_result,\r\n        \"yfinance_fetch_failed\": yfinance_fetch_failed, # Pass the flag status\r\n        \"completed_steps_count\": completed_steps,\r\n        \"stream_updates\": state.get('stream_updates', []) + all_updates\r\n    }\r\n\r\n\r\nasync def execute_search(state: ResearchState) -> Dict[str, Any]:\r\n    \"\"\"Executes planned web searches: financial fallback first (if YF failed), then general.\"\"\"\r\n    yfinance_failed = state.get('yfinance_fetch_failed', False)\r\n    completed_web_search_total = state.get('completed_web_search_count', 0) # Use the total count\r\n\r\n    financial_searches_planned = state.get('financial_web_search_steps', [])\r\n    general_searches_planned = state.get('search_steps_planned', [])\r\n\r\n    num_financial_to_do = len(financial_searches_planned) if yfinance_failed else 0\r\n    num_general_to_do = len(general_searches_planned)\r\n\r\n    search_to_execute = None\r\n    list_being_processed = None # 'financial' or 'general'\r\n    current_local_index = -1 # Index within the specific list\r\n    result_key = None # State key to append results to\r\n    step_prefix = None\r\n    step_type = 'search'\r\n    step_title_prefix = None\r\n\r\n    # Determine which search step is next based on the total completed count\r\n    if yfinance_failed and completed_web_search_total < num_financial_to_do:\r\n        list_being_processed = 'financial'\r\n        current_local_index = 
completed_web_search_total\r\n        search_to_execute = financial_searches_planned[current_local_index]\r\n        result_key = 'financial_web_search_results'\r\n        step_prefix = 'financial-web-search-'\r\n        step_title_prefix = \"Financial Web Search #\"\r\n    elif completed_web_search_total < (num_financial_to_do + num_general_to_do):\r\n        list_being_processed = 'general'\r\n        # Adjust index based on whether financial searches were done\r\n        current_local_index = completed_web_search_total - num_financial_to_do\r\n        search_to_execute = general_searches_planned[current_local_index]\r\n        result_key = 'search_results'\r\n        step_prefix = 'web-search-'\r\n        step_title_prefix = \"Web Search #\"\r\n    else:\r\n        # Should not be called if condition in graph is correct, but handle defensively\r\n        logger.warning(\"execute_search called but all web searches seem complete. Check graph logic.\")\r\n        return {\"completed_web_search_count\": completed_web_search_total} # No changes\r\n\r\n\r\n    step_id = f'{step_prefix}{current_local_index}'\r\n    all_updates = create_update(state, {\r\n        'id': step_id, 'type': step_type, 'status': 'running',\r\n        'title': f'{step_title_prefix}{current_local_index + 1}', # Use local index for title numbering\r\n        'message': f\"Executing: {search_to_execute.query[:60]}...\", 'overwrite': True\r\n    })\r\n    logger.info(f\"\\n--- Running Node: execute_search ({step_title_prefix}{current_local_index + 1}) ---\")\r\n    logger.info(f\"Overall Web Step: {completed_web_search_total + 1} / {num_financial_to_do + num_general_to_do}\")\r\n    logger.info(f\"Query: {search_to_execute.query}\")\r\n\r\n    search_step_result = SearchStepResult(query=search_to_execute.query, results=[], tool_used=\"web_search\")\r\n    status = 'error'\r\n\r\n    try:\r\n        web_results = await perform_web_search(search_to_execute.query, max_results=5)\r\n        
search_step_result.results = web_results\r\n        message = f\"{step_title_prefix}{current_local_index + 1} finished, found {len(web_results)} results.\"\r\n        status = 'completed'\r\n        logger.info(message)\r\n    except Exception as e:\r\n        message = f\"{step_title_prefix}{current_local_index + 1} failed: {e}\"\r\n        status = 'error'\r\n        logger.error(f\"Error during web search for query '{search_to_execute.query}': {e}\", exc_info=True)\r\n        search_step_result.results = []\r\n\r\n    # --- Update UI for node completion ---\r\n    all_updates.extend(create_update(state, {\r\n        'id': step_id, 'type': step_type, 'status': status,\r\n        'title': f'{step_title_prefix}{current_local_index + 1}',\r\n        'message': message, 'overwrite': True\r\n    }))\r\n\r\n    # --- Update PROGRESS (Overall step count AND web search count) ---\r\n    completed_steps = state.get('completed_steps_count', 0) + 1\r\n    new_completed_web_search_count = completed_web_search_total + 1 # Increment total web search count\r\n\r\n    all_updates.extend(create_update(state, {\r\n        'id': 'research-progress', 'type': 'progress', 'status': 'running',\r\n        'title': 'Research Progress', 'completedSteps': completed_steps,\r\n        'message': f'Completed Web Search Step {new_completed_web_search_count} ({status}).', # Use total count in message\r\n        'overwrite': True\r\n    }))\r\n\r\n    # --- Append result to the correct list in the state ---\r\n    current_results_list = state.get(result_key, [])\r\n    new_results = current_results_list + [search_step_result]\r\n\r\n    logger.info(f\"--- Exiting Node: execute_search ({step_title_prefix}{current_local_index + 1}) ---\")\r\n\r\n    return {\r\n        result_key: new_results,\r\n        \"completed_web_search_count\": new_completed_web_search_count, # Return updated total count\r\n        \"completed_steps_count\": completed_steps,\r\n        \"stream_updates\": 
state.get('stream_updates', []) + all_updates,\r\n    }\r\n\r\n\r\nasync def perform_analysis(state: ResearchState) -> Dict[str, Any]:\r\n    \"\"\"Performs analysis, adapting prompt context based on YFinance status.\"\"\"\r\n    current_index = state.get('current_analysis_step_index', 0)\r\n    analysis_steps_planned = state.get('analysis_steps_planned', [])\r\n\r\n    if current_index >= len(analysis_steps_planned):\r\n        logger.info(\"No more analysis steps planned.\")\r\n        return {\"current_analysis_step_index\": current_index}\r\n\r\n    analysis_step = analysis_steps_planned[current_index]\r\n    company_name = state['company_name']\r\n    ticker = state['ticker']\r\n    topic = state['topic']\r\n    step_id = f'analysis-{current_index}'\r\n    yfinance_failed = state.get('yfinance_fetch_failed', False)\r\n\r\n    all_updates = create_update(state, {\r\n        'id': step_id, 'type': 'analysis', 'status': 'running',\r\n        'title': f'Analysis #{current_index + 1}',\r\n        'message': f\"Performing: {analysis_step.analysis_goal[:60]}...\", 'overwrite': True\r\n    })\r\n    logger.info(f\"\\n--- Running Node: perform_analysis (Step {current_index + 1}/{len(analysis_steps_planned)}) ---\")\r\n    logger.info(f\"Goal: {analysis_step.analysis_goal}\")\r\n    logger.info(f\"YFinance Status: {'Failed - Using Web Fallback' if yfinance_failed else 'OK - Using YF Data'}\")\r\n\r\n    # --- Gather Context ---\r\n    # Financial Context (Conditional)\r\n    financial_context = \"[Financial Context]\\n\"\r\n    financial_data_source_description = \"N/A\" # Default\r\n    if yfinance_failed:\r\n        financial_web_results = state.get('financial_web_search_results', [])\r\n        if financial_web_results:\r\n             financial_context += \"Source: Financial Web Search Results (Yahoo Finance Failed)\\n\"\r\n             financial_data_source_description = \"financial web search results\"\r\n             for i, res in 
enumerate(financial_web_results):\r\n                 financial_context += f\"Query {i+1}: {res.query}\\n\"\r\n                 for item in res.results[:3]: # Limit snippets (snippet may be None)\r\n                     financial_context += f\"- {item.title}: {(item.snippet or '')[:150]}...\\n\"\r\n             # Include initial JSON financial data if available\r\n             initial_market_cap = state.get('market_cap_usd')\r\n             initial_ebitda = state.get('input_ebitda_usd')\r\n             initial_pe = state.get('input_pe_ratio')\r\n             if initial_market_cap is not None or initial_ebitda is not None or initial_pe is not None: # Explicit None checks so legitimate zero values are kept\r\n                  financial_context += \"\\nInitial Input Data Hints:\\n\"\r\n                  if initial_market_cap is not None: financial_context += f\"- Market Cap (USD): {initial_market_cap}\\n\"\r\n                  if initial_ebitda is not None: financial_context += f\"- EBITDA (USD, FY0): {initial_ebitda}\\n\"\r\n                  if initial_pe is not None: financial_context += f\"- P/E Ratio: {initial_pe}\\n\"\r\n        else:\r\n             financial_context += \"Source: Yahoo Finance Failed and NO financial web search results available.\\n\"\r\n             financial_data_source_description = \"web search (YF failed, limited results)\"\r\n    else:\r\n        yfinance_data = state.get('yfinance_data')\r\n        if yfinance_data and not yfinance_data.get('error'):\r\n             financial_context += \"Source: Yahoo Finance Data (Serialized Dictionaries)\\n\"\r\n             financial_data_source_description = \"Yahoo Finance data\"\r\n             # Summarize available YF data keys/presence\r\n             financial_context += f\"Available YF Keys: {list(yfinance_data.keys())}\\n\"\r\n             # Optionally include snippets of info or structure hints if needed by prompt\r\n             if yfinance_data.get('info'):\r\n                  info_preview = {k: v for k, v in yfinance_data['info'].items() if k in ['sector', 'industry', 'marketCap', 'currency']}\r\n                  financial_context += f\"Info 
Preview: {json.dumps(info_preview)}\\n\"\r\n             # Add note about serialized format\r\n             financial_context += \"(Financial statements are dicts with 'index', 'columns', 'data')\\n\"\r\n        elif yfinance_data and yfinance_data.get('error'):\r\n             financial_context += f\"Source: Yahoo Finance Data (Fetch completed with error: {yfinance_data.get('error')})\\n\"\r\n             financial_data_source_description = \"Yahoo Finance data (with errors)\"\r\n        else:\r\n            financial_context += \"Source: Yahoo Finance Data (Not Available or Fetch Error)\\n\"\r\n            financial_data_source_description = \"Yahoo Finance data (unavailable)\"\r\n\r\n\r\n    # General Web Search Context\r\n    web_search_context = \"[General Web Search Results Context]\\n\"\r\n    general_web_results = state.get('search_results', []) or [] # Guard against None stored in state\r\n    gap_web_results = state.get('gap_search_results', []) or []\r\n    all_web_for_context = general_web_results + gap_web_results\r\n    if all_web_for_context:\r\n        for i, res in enumerate(all_web_for_context):\r\n            web_search_context += f\"Query {i+1}: {res.query}\\n\"\r\n            for item in res.results[:3]: # Limit snippets (snippet may be None)\r\n                web_search_context += f\"- {item.title}: {(item.snippet or '')[:150]}...\\n\"\r\n    else:\r\n        web_search_context += \"N/A\\n\"\r\n\r\n    # Previous Analysis Context\r\n    previous_analysis_context = \"[Previous Analysis Steps Summary]\\n\"\r\n    analyses = state.get('analysis_results', [])\r\n    if isinstance(analyses, list) and analyses:\r\n        formatted_analyses = []\r\n        for idx, ar in enumerate(analyses):\r\n             # Simplified access assuming AnalysisResult objects are stored\r\n             goal_summary = ar.analysis_goal[:60] if isinstance(ar, AnalysisResult) else f'Goal N/A step {idx}'\r\n             result_summary = ar.analysis_result[:200] if isinstance(ar, AnalysisResult) else f'Result N/A step {idx}'\r\n             
formatted_analyses.append(f\"- Step {idx+1} ({goal_summary}...): {result_summary}...\")\r\n        previous_analysis_context += \"\\n\".join(formatted_analyses)\r\n    else:\r\n         previous_analysis_context += \"N/A\\n\"\r\n\r\n    # Company Info Context (YF Info + Input Desc)\r\n    info_context = \"[Company Info Context]\\n\"\r\n    input_desc = state.get('input_business_description') or 'N/A' # Treat explicit None like a missing key\r\n    yf_info_data = (state.get('yfinance_data') or {}).get('info') if not yfinance_failed else None # Guard: state value itself may be None\r\n    info_context += f\"Input Description: {input_desc}\\n\"\r\n    if yf_info_data:\r\n         info_context += f\"YF Info Summary: Sector: {yf_info_data.get('sector', 'N/A')}, Industry: {yf_info_data.get('industry', 'N/A')}, Employees: {yf_info_data.get('fullTimeEmployees', 'N/A')}\\n\"\r\n         info_context += f\"YF Long Description: {(yf_info_data.get('longBusinessSummary') or 'N/A')[:500]}...\\n\" # Limit length; value may be None, so guard before slicing\r\n    else:\r\n         info_context += \"YF Info: Not available or fetch failed.\\n\"\r\n\r\n    # YF Holders Context (if not failed)\r\n    yfinance_info_context = \"[Yahoo Finance Info/Holders Context]\\n\" + info_context # Reuse info part\r\n    if not yfinance_failed and state.get('yfinance_data'):\r\n         holders_summary = \"\"\r\n         major = state['yfinance_data'].get('major_holders')\r\n         inst = state['yfinance_data'].get('institutional_holders')\r\n         if major is not None: holders_summary += f\"Major Holders data present (structure: {major.get('columns') if isinstance(major,dict) else 'N/A'}).\\n\"\r\n         if inst is not None: holders_summary += f\"Institutional Holders data present (structure: {inst.get('columns') if isinstance(inst,dict) else 'N/A'}).\\n\"\r\n         yfinance_info_context += holders_summary if holders_summary else \"Holders data: Not found in YF results.\\n\"\r\n    else:\r\n         yfinance_info_context += \"Holders data: Not applicable (YF fetch failed or data unavailable).\\n\"\r\n\r\n\r\n    # --- 
Determine Prompt & State Key ---\r\n    analysis_prompt_template = None\r\n    state_key_to_update = None # Key in ResearchState to store result\r\n\r\n    analysis_goal_lower = analysis_step.analysis_goal.lower()\r\n    is_financial_analysis_goal = \"financial\" in analysis_goal_lower or \"财务\" in analysis_goal_lower\r\n    is_competitive_analysis_goal = \"competitive\" in analysis_goal_lower or \"竞争\" in analysis_goal_lower or \"market\" in analysis_goal_lower or \"moat\" in analysis_goal_lower\r\n    is_mgmt_gov_analysis_goal = \"management\" in analysis_goal_lower or \"governance\" in analysis_goal_lower or \"管理\" in analysis_goal_lower\r\n\r\n    if is_financial_analysis_goal:\r\n        logger.info(\"Using FINANCIAL_ANALYSIS_PROMPT_YFINANCE...\")\r\n        analysis_prompt_template = FINANCIAL_ANALYSIS_PROMPT_YFINANCE\r\n        state_key_to_update = \"financial_analysis\"\r\n    elif is_competitive_analysis_goal:\r\n         logger.info(\"Using COMPETITIVE_ANALYSIS_PROMPT_YFINANCE...\")\r\n         analysis_prompt_template = COMPETITIVE_ANALYSIS_PROMPT_YFINANCE\r\n         state_key_to_update = \"competitive_analysis\"\r\n    elif is_mgmt_gov_analysis_goal:\r\n         logger.info(\"Using MANAGEMENT_GOVERNANCE_PROMPT_YFINANCE...\")\r\n         analysis_prompt_template = MANAGEMENT_GOVERNANCE_PROMPT_YFINANCE\r\n         state_key_to_update = \"management_governance_assessment\"\r\n    else:\r\n        logger.warning(f\"No specific prompt matched goal: '{analysis_step.analysis_goal}'. 
Using generic approach.\")\r\n        # Fallback generic analysis (less structured)\r\n        analysis_prompt_template = \"\"\"Analyze the provided context for the goal: '{analysis_goal}'.\r\n        Combine information from financial context ({financial_data_source_description}), web searches, company info, and previous analyses.\r\n        Focus on insights relevant to M&A if possible.\r\n\r\n        Goal: {analysis_goal}\r\n\r\n        Financial Context ({financial_data_source_description}):\r\n        {financial_context}\r\n\r\n        General Web Search Context:\r\n        {web_context}\r\n\r\n        Company Info Context:\r\n        {info_context}\r\n\r\n        Previous Analysis Context:\r\n        {previous_analysis_context}\r\n\r\n        Analysis:\r\n        \"\"\"\r\n        state_key_to_update = None # Store in general list\r\n\r\n\r\n    analysis_content = f\"Analysis failed for goal: {analysis_step.analysis_goal}\" # Default content\r\n    status = 'error'\r\n\r\n    # Ensure template exists before formatting\r\n    if analysis_prompt_template:\r\n         try:\r\n             # Format the selected prompt with all gathered context\r\n             prompt = analysis_prompt_template.format(\r\n                 company_name=company_name,\r\n                 ticker=ticker,\r\n                 financial_data_source_description=financial_data_source_description, # Pass the description\r\n                 financial_context=financial_context[:8000], # Limit context\r\n                 web_context=web_search_context[:8000], # Limit context\r\n                 info_context=info_context[:3000],\r\n                 previous_analysis_context=previous_analysis_context[:3000],\r\n                 yfinance_info_context=yfinance_info_context[:6000], # For mgmt/gov prompt\r\n                 analysis_goal=analysis_step.analysis_goal, # For generic prompt\r\n                 market_cap=state.get('market_cap_usd', 'N/A'), # Pass market cap for financial prompt 
context\r\n                 ebitda=state.get('input_ebitda_usd', 'N/A') # Pass EBITDA for financial prompt context\r\n             )\r\n\r\n             # --- Invoke LLM ---\r\n             analysis_response = await llm.ainvoke(prompt) # Use standard LLM for analysis\r\n             analysis_content = analysis_response.content if hasattr(analysis_response, 'content') else str(analysis_response)\r\n             message = f\"Analysis #{current_index + 1} finished.\"\r\n             status = 'completed'\r\n             logger.info(message)\r\n\r\n         except KeyError as ke:\r\n              message = f\"Analysis #{current_index + 1} failed: Missing key in prompt format - {ke}\"\r\n              status = 'error'\r\n              logger.error(message, exc_info=True)\r\n              analysis_content = f\"Analysis prompt formatting failed: {ke}\"\r\n         except Exception as e:\r\n             message = f\"Analysis #{current_index + 1} failed: {e}\"\r\n             status = 'error'\r\n             logger.error(f\"Error during analysis for goal '{analysis_step.analysis_goal}': {e}\", exc_info=True)\r\n             analysis_content = f\"Analysis failed: {e}\"\r\n    else:\r\n         # This case should ideally not happen if generic fallback exists\r\n         message = f\"Analysis #{current_index + 1} skipped: No suitable prompt template found.\"\r\n         status = 'skipped'\r\n         logger.error(message)\r\n         analysis_content = \"Analysis skipped.\"\r\n\r\n\r\n    # --- Prepare State Update ---\r\n    state_update = {}\r\n    if state_key_to_update:\r\n        state_update = {state_key_to_update: analysis_content}\r\n    else:\r\n        # Store generic analysis in the list\r\n        analysis_result_obj = AnalysisResult(analysis_goal=analysis_step.analysis_goal, analysis_result=analysis_content)\r\n        new_analysis_results = state.get('analysis_results', []) + [analysis_result_obj]\r\n        state_update = {\"analysis_results\": 
new_analysis_results}\r\n\r\n\r\n    # Update UI for node completion\r\n    all_updates.extend(create_update(state, {\r\n        'id': step_id, 'type': 'analysis', 'status': status,\r\n        'title': f'Analysis #{current_index + 1}', 'message': message,\r\n        'overwrite': True\r\n    }))\r\n\r\n    # Update progress\r\n    completed_steps = state.get('completed_steps_count', 0) + 1\r\n    all_updates.extend(create_update(state, {\r\n        'id': 'research-progress', 'type': 'progress', 'status': 'running',\r\n        'title': 'Research Progress', 'completedSteps': completed_steps,\r\n        'message': f'Completed analysis step {current_index + 1} ({status}).',\r\n        'overwrite': True\r\n    }))\r\n\r\n    logger.info(f\"--- Exiting Node: perform_analysis (Step {current_index + 1}) ---\")\r\n    # Merge state_update into the return dictionary\r\n    return_state = {\r\n        \"current_analysis_step_index\": current_index + 1,\r\n        \"completed_steps_count\": completed_steps,\r\n        \"stream_updates\": state.get('stream_updates', []) + all_updates,\r\n    }\r\n    return_state.update(state_update)\r\n    return return_state\r\n\r\n\r\nasync def analyze_gaps(state: ResearchState) -> Dict[str, Any]:\r\n    \"\"\"Analyzes gaps, potentially suggesting actionable web searches.\"\"\"\r\n    step_id = 'gap-analysis'\r\n    all_updates = create_update(state, {\r\n        'id': step_id, 'type': 'analysis', 'status': 'running',\r\n        'title': 'Gap Analysis', 'message': 'Analyzing for knowledge gaps & limitations...',\r\n        'overwrite': True\r\n        })\r\n    logger.info(f\"\\n--- Running Node: analyze_gaps ---\")\r\n    yfinance_failed = state.get('yfinance_fetch_failed', False)\r\n    yfinance_status_text = \"Failed (Used Web Fallback)\" if yfinance_failed else \"Successful\"\r\n\r\n    # --- Gather Context ---\r\n    # Consolidate context from various analysis steps and data sources\r\n    context_parts = []\r\n    
context_parts.append(f\"Research Target: {state['company_name']} ({state['ticker']})\")\r\n    context_parts.append(f\"Yahoo Finance Status: {yfinance_status_text}\")\r\n    if state.get('financial_analysis'): context_parts.append(f\"\\n[Financial Analysis Summary]\\n{state['financial_analysis'][:1000]}...\")\r\n    if state.get('competitive_analysis'): context_parts.append(f\"\\n[Competitive Analysis Summary]\\n{state['competitive_analysis'][:1000]}...\")\r\n    if state.get('management_governance_assessment'): context_parts.append(f\"\\n[Mgmt/Gov Assessment Summary]\\n{state['management_governance_assessment'][:1000]}...\")\r\n    # Include snippets from web searches maybe?\r\n    # search_summary = \"\\n[Web Search Snippet Highlights]\\n\"\r\n    # ... logic to add highlights ...\r\n    # context_parts.append(search_summary)\r\n\r\n    context = \"\\n\".join(context_parts)\r\n\r\n    # --- Format Prompt ---\r\n    prompt = GAP_ANALYSIS_PROMPT_YFINANCE.format(\r\n        topic=state['topic'], # Keep original topic for reference if needed\r\n        company_name=state['company_name'],\r\n        ticker=state['ticker'],\r\n        yfinance_status=yfinance_status_text, # Pass status to prompt\r\n        context=context[:10000] # Limit context\r\n    )\r\n\r\n    gap_analysis_result: Optional[GapAnalysisResult] = None # Initialize\r\n    status = 'error' # Default\r\n    message = \"Gap analysis failed before LLM call.\"\r\n\r\n    try:\r\n        gap_analysis_result = await generate_structured_output(\r\n            llm_creative, GapAnalysisResult, prompt\r\n        )\r\n        if not gap_analysis_result:\r\n             gap_analysis_result = GapAnalysisResult(summary=\"Failed to generate structured gap analysis.\", follow_up_queries=[])\r\n             message = \"Gap analysis LLM call succeeded but failed to parse structure.\"\r\n             status = 'warning'\r\n        else:\r\n             # Filter follow-up queries - Keep this filtering\r\n             
original_query_count = len(gap_analysis_result.follow_up_queries)\r\n             gap_analysis_result.follow_up_queries = [\r\n                 q for q in gap_analysis_result.follow_up_queries if isinstance(q, GapFollowUpQuery) and q.tool_hint == 'web_search'\r\n             ]\r\n             filtered_query_count = len(gap_analysis_result.follow_up_queries)\r\n             message = f\"Gap analysis completed. Identified limitations. {filtered_query_count} actionable follow-up web searches suggested (out of {original_query_count} raw suggestions).\"\r\n             status = 'completed'\r\n        logger.info(message)\r\n    except Exception as e:\r\n        logger.error(f\"Error during gap analysis LLM call or parsing: {e}\", exc_info=True)\r\n        gap_analysis_result = GapAnalysisResult(summary=f\"Gap analysis failed: {e}\", follow_up_queries=[])\r\n        message = f\"Gap analysis failed: {e}\"\r\n        status = 'error'\r\n\r\n    # Update UI for node completion\r\n    all_updates.extend(create_update(state, {\r\n        'id': step_id, 'type': 'analysis', 'status': status,\r\n        'title': 'Gap Analysis', 'message': message,\r\n        'payload': gap_analysis_result.dict() if hasattr(gap_analysis_result, 'dict') else {\"summary\": \"Error or N/A\"},\r\n        'overwrite': True\r\n    }))\r\n    # Update progress\r\n    completed_steps = state.get('completed_steps_count', 0) + 1\r\n    all_updates.extend(create_update(state, {\r\n        'id': 'research-progress', 'type': 'progress', 'status': 'running',\r\n        'title': 'Research Progress', 'completedSteps': completed_steps,\r\n        'message': f'Completed gap analysis step ({status}).', 'overwrite': True\r\n    }))\r\n\r\n    logger.info(f\"--- Exiting Node: analyze_gaps ---\")\r\n    return {\r\n        \"gaps_identified\": gap_analysis_result,\r\n        \"completed_steps_count\": completed_steps,\r\n        \"stream_updates\": state.get('stream_updates', []) + all_updates\r\n    
}\r\n\r\n\r\nasync def execute_gap_search(state: ResearchState) -> Dict[str, Any]:\r\n    \"\"\"Executes follow-up *web* searches based on identified gaps.\"\"\"\r\n    step_id = 'gap-search'\r\n    all_updates = create_update(state, {\r\n        'id': step_id, 'type':'search', 'status': 'running',\r\n        'title': 'Gap Filling Web Search', 'message': 'Executing follow-up web searches...',\r\n        'overwrite': True\r\n        })\r\n    logger.info(f\"\\n--- Running Node: execute_gap_search ---\")\r\n\r\n    gaps = state.get('gaps_identified')\r\n    follow_up_web_queries = gaps.follow_up_queries if gaps and hasattr(gaps, 'follow_up_queries') and isinstance(gaps.follow_up_queries, list) else []\r\n    status = 'skipped' # Default if no queries\r\n    message = \"No actionable follow-up web searches suggested by gap analysis.\"\r\n\r\n    gap_search_step_results: List[SearchStepResult] = []\r\n\r\n    if follow_up_web_queries:\r\n        max_gap_queries = 3 # Keep limit or adjust if needed\r\n        queries_to_run = follow_up_web_queries[:max_gap_queries]\r\n        status = 'running' # Will be updated later\r\n        logger.info(f\"Executing {len(queries_to_run)} gap web queries (max {max_gap_queries})...\")\r\n        try:\r\n            for i, gap_query_obj in enumerate(queries_to_run):\r\n                if not isinstance(gap_query_obj, GapFollowUpQuery): continue\r\n                query_text = gap_query_obj.query\r\n                logger.info(f\"Executing Gap Web Query {i+1}/{len(queries_to_run)}: {query_text}\")\r\n                try:\r\n                    web_results = await perform_web_search(query_text, 3) # Use slightly fewer results for gap fill?\r\n                    gap_search_step_results.append(SearchStepResult(query=query_text, results=web_results, tool_used=\"web_search_gap\"))\r\n                except Exception as e_inner:\r\n                    logger.error(f\"Error during specific gap web search for query '{query_text}': 
{e_inner}\")\r\n                    gap_search_step_results.append(SearchStepResult(query=query_text, results=[], tool_used=\"web_search_gap\")) # Add empty result on error\r\n\r\n            message = f\"Gap web search finished. Executed {len(queries_to_run)} queries, found {sum(len(r.results) for r in gap_search_step_results)} total results.\"\r\n            status = 'completed'\r\n            logger.info(message)\r\n        except Exception as e_outer:\r\n            message = f\"Error during gap search execution loop: {e_outer}\"\r\n            status = 'error'\r\n            logger.error(message, exc_info=True)\r\n    else:\r\n        logger.info(message) # Log skip message\r\n\r\n    # Update UI for node completion\r\n    all_updates.extend(create_update(state, {\r\n        'id': step_id, 'type': 'search', 'status': status,\r\n        'title': 'Gap Filling Web Search', 'message': message,\r\n        'overwrite': True\r\n    }))\r\n\r\n    # Update progress - Count as one step overall\r\n    completed_steps = state.get('completed_steps_count', 0) + 1\r\n    all_updates.extend(create_update(state, {\r\n        'id': 'research-progress', 'type': 'progress', 'status': 'running',\r\n        'title': 'Research Progress', 'completedSteps': completed_steps,\r\n        'message': f'Completed gap search step ({status}).', 'overwrite': True\r\n    }))\r\n\r\n    logger.info(f\"--- Exiting Node: execute_gap_search ---\")\r\n    # Append gap search results to the main search results list OR keep separate?\r\n    # Let's keep them separate for now in state, but combine for context later.\r\n    return {\r\n        \"gap_search_results\": gap_search_step_results, # Store gap results separately\r\n        \"completed_steps_count\": completed_steps,\r\n        \"stream_updates\": state.get('stream_updates', []) + all_updates\r\n    }\r\n\r\n\r\nasync def synthesize_final_report(state: ResearchState) -> Dict[str, Any]:\r\n    \"\"\"Synthesizes findings, adapting context based 
on YF status.\"\"\"\r\n    step_id = 'synthesis'\r\n    all_updates = create_update(state, {\r\n        'id': step_id, 'type':'synthesis', 'status': 'running',\r\n        'title': 'Synthesize Findings', 'message': 'Synthesizing all findings...',\r\n        'overwrite': True\r\n        })\r\n    logger.info(f\"\\n--- Running Node: synthesize_final_report ---\")\r\n    yfinance_failed = state.get('yfinance_fetch_failed', False)\r\n    yfinance_status_text = \"Failed (Used Web Fallback)\" if yfinance_failed else \"Successful\"\r\n\r\n    # --- Gather Context (More robust handling of potential None values) ---\r\n    context_parts = []\r\n    context_parts.append(f\"Research Target: {state.get('company_name', 'N/A')} ({state.get('ticker', 'N/A')})\")\r\n    context_parts.append(f\"Yahoo Finance Status: {yfinance_status_text}\")\r\n\r\n    # Add initial input data summary with checks for None\r\n    input_summary = \"\\n[Initial Input Data Summary]\\n\"\r\n    country = state.get('country_of_exchange')\r\n    input_summary += f\"- Country: {country if country else 'N/A'}\\n\"\r\n    market_cap = state.get('market_cap_usd')\r\n    input_summary += f\"- Market Cap (USD, {state.get('input_query_date', 'N/A')}): {market_cap if market_cap is not None else 'N/A'}\\n\"\r\n    ebitda = state.get('input_ebitda_usd')\r\n    input_summary += f\"- EBITDA (USD, FY0, {state.get('input_query_date', 'N/A')}): {ebitda if ebitda is not None else 'N/A'}\\n\"\r\n    input_pe = state.get('input_pe_ratio')\r\n    input_summary += f\"- P/E Ratio ({state.get('input_query_date', 'N/A')}): {input_pe if input_pe is not None else 'N/A'}\\n\"\r\n    # *** FIX: Check if description is None before slicing ***\r\n    business_desc_val = state.get('input_business_description')\r\n    input_summary += f\"- Business Desc: {(business_desc_val[:300] + '...') if business_desc_val else 'N/A'}\\n\"\r\n    context_parts.append(input_summary)\r\n\r\n    # Add analysis summaries (Safely access potentially None 
values)\r\n    financial_analysis_val = state.get('financial_analysis')\r\n    if financial_analysis_val: context_parts.append(f\"\\n[Financial Analysis Summary (Source: {'Web Fallback' if yfinance_failed else 'YF Data'})]\\n{financial_analysis_val[:1500]}...\")\r\n\r\n    competitive_analysis_val = state.get('competitive_analysis')\r\n    if competitive_analysis_val: context_parts.append(f\"\\n[Competitive Analysis Summary]\\n{competitive_analysis_val[:1500]}...\")\r\n\r\n    mgmt_gov_val = state.get('management_governance_assessment')\r\n    if mgmt_gov_val: context_parts.append(f\"\\n[Mgmt/Gov Assessment Summary]\\n{mgmt_gov_val[:1500]}...\")\r\n\r\n    analysis_results_list = state.get('analysis_results')\r\n    if analysis_results_list: # Check if the list itself exists\r\n        generic_analysis_summary = \"\\n[Other Analysis Results]\\n\"\r\n        for ar in analysis_results_list:\r\n            if isinstance(ar, AnalysisResult): # Check type for safety\r\n                 generic_analysis_summary += f\"- {ar.analysis_goal[:50]}...: {ar.analysis_result[:150]}...\\n\"\r\n        context_parts.append(generic_analysis_summary)\r\n\r\n    # Add Gap Analysis Summary (Safely access)\r\n    gaps = state.get('gaps_identified')\r\n    if gaps and isinstance(gaps, GapAnalysisResult): context_parts.append(f\"\\n[Gap Analysis Summary]\\n{gaps.summary[:1000]}...\")\r\n\r\n    # Add Web Search Highlights (Combine all searches safely)\r\n    web_highlights = \"\\n[Web Search Highlights (All Searches)]\\n\"\r\n    search_results = state.get('search_results', []) or []\r\n    financial_web_results = state.get('financial_web_search_results', []) or []\r\n    gap_search_results = state.get('gap_search_results', []) or []\r\n    all_searches = search_results + financial_web_results + gap_search_results\r\n    highlight_count = 0\r\n    max_highlights = 15\r\n    if all_searches: # Check if there are any search results at all\r\n        for res in all_searches:\r\n            
if highlight_count >= max_highlights: break\r\n            if isinstance(res, SearchStepResult): # Check type\r\n                web_highlights += f\"Query: {res.query}\\n\"\r\n                if res.results: # Check if results list exists\r\n                     for item in res.results[:2]:\r\n                         if highlight_count >= max_highlights: break\r\n                         if isinstance(item, SearchResultItem): # Check type\r\n                             title = item.title or \"N/A\"\r\n                             snippet = item.snippet or \"\"\r\n                             web_highlights += f\"- {title}: {snippet[:100]}...\\n\"\r\n                             highlight_count += 1\r\n    context_parts.append(web_highlights if highlight_count > 0 else \"\\n[Web Search Highlights: None available or processed]\\n\")\r\n\r\n    context = \"\\n\".join(context_parts)\r\n\r\n    # --- Use Synthesis Prompt ---\r\n    prompt = SYNTHESIS_PROMPT_YFINANCE.format(\r\n        company_name=state.get('company_name', 'N/A'), # Use .get for safety\r\n        ticker=state.get('ticker', 'N/A'),\r\n        yfinance_status=yfinance_status_text,\r\n        context=context[:20000] # Limit context\r\n    )\r\n\r\n    # ... (Rest of the synthesize_final_report function remains the same: LLM call, error handling, state update) ...\r\n    # ... 
(LLM call and result handling as before) ...\r\n    synthesis_result: Optional[FinalSynthesisResult] = None\r\n    status = 'error'\r\n    message = \"Synthesis failed before LLM call.\"\r\n\r\n    try:\r\n         synthesis_result = await generate_structured_output(\r\n             llm_creative, FinalSynthesisResult, prompt\r\n         )\r\n         if not synthesis_result or not synthesis_result.key_findings_summary: # Check summary content\r\n             synthesis_result = FinalSynthesisResult(\r\n                 key_findings_summary=\"Synthesis generation failed or returned empty summary.\",\r\n                 remaining_uncertainties=[\"Data limitations significantly impacted synthesis.\", \"Error during parsing or generation.\"]\r\n             )\r\n             message = \"Synthesis completed but failed to generate valid/meaningful structure.\"\r\n             status = 'warning'\r\n         else:\r\n             message = \"Synthesis of all findings completed.\"\r\n             status = 'completed'\r\n         logger.info(message)\r\n    except Exception as e:\r\n        logger.error(f\"Error during synthesis: {e}\", exc_info=True)\r\n        synthesis_result = FinalSynthesisResult(key_findings_summary=f\"Synthesis failed: {e}\", remaining_uncertainties=[\"Error during synthesis process.\"])\r\n        message = f\"Synthesis failed: {e}\"\r\n        status = 'error'\r\n\r\n    # Update UI for node completion\r\n    all_updates.extend(create_update(state, {\r\n        'id': step_id, 'type': 'synthesis', 'status': status,\r\n        'title': 'Synthesize Findings', 'message': message,\r\n        'payload': synthesis_result.dict() if hasattr(synthesis_result, 'dict') else {\"key_findings_summary\": \"Error or N/A\"},\r\n        'overwrite': True\r\n    }))\r\n    # Update progress\r\n    completed_steps = state.get('completed_steps_count', 0) + 1\r\n    all_updates.extend(create_update(state, {\r\n        'id': 'research-progress', 'type': 'progress', 
'status': 'running',\r\n        'title': 'Research Progress', 'completedSteps': completed_steps,\r\n        'message': f'Completed synthesis step ({status}).', 'overwrite': True\r\n    }))\r\n\r\n    logger.info(f\"--- Exiting Node: synthesize_final_report ---\")\r\n    return {\r\n        \"final_synthesis\": synthesis_result,\r\n        \"completed_steps_count\": completed_steps,\r\n        \"stream_updates\": state.get('stream_updates', []) + all_updates\r\n    }\r\n\r\n\r\nasync def generate_final_markdown_report(state: ResearchState) -> Dict[str, Any]:\r\n    \"\"\"Generates the final Markdown report, including summary table and adjusted tone.\"\"\"\r\n    step_id = 'final-report-generation'\r\n    all_updates = create_update(state, {\r\n        'id': step_id, 'type':'report', 'status': 'running',\r\n        'title':'Final Report Generation', 'message': 'Generating final report...',\r\n        'overwrite': True\r\n        })\r\n    logger.info(f\"\\n--- Running Node: generate_final_markdown_report ---\")\r\n\r\n    # --- 1. Generate Structured Summary Table ---\r\n    # ... 
(Summary table generation logic remains the same as previous version) ...\r\n    summary_table_md = \"# ERROR: Could not generate summary table.\" # Default\r\n    try:\r\n        # (Keep the table generation logic here)\r\n        company_name = state.get('company_name', 'N/A')\r\n        ticker = state.get('ticker', 'N/A')\r\n        country = state.get('country_of_exchange', 'N/A')\r\n        query_date = state.get('input_query_date', 'N/A')\r\n        market_cap = state.get('market_cap_usd') # Get value, might be None\r\n        market_cap_str = f\"{market_cap:,.2f}\" if isinstance(market_cap, (int, float)) else \"N/A\"\r\n        ebitda = state.get('input_ebitda_usd') # Get value, might be None\r\n        ebitda_str = f\"{ebitda:,.2f}\" if isinstance(ebitda, (int, float)) else \"N/A\"\r\n        input_pe = state.get('input_pe_ratio') # Get value, might be None\r\n        input_pe_str = f\"{input_pe:.2f}\" if isinstance(input_pe, (int, float)) else \"N/A\" # Format if number\r\n\r\n        # Infer Industry (best effort)\r\n        industry = \"N/A\"\r\n        yf_info = (state.get('yfinance_data') or {}).get('info') if not state.get('yfinance_fetch_failed') else None # Guard: state value itself may be None\r\n        if yf_info and yf_info.get('industry'):\r\n            industry = yf_info['industry']\r\n        elif state.get('input_business_description'):\r\n             business_desc_val = state.get('input_business_description') # Check if None later\r\n             if business_desc_val: # Check if not None before using\r\n                 desc_lower = business_desc_val.lower()\r\n                 # ... (industry inference logic) ...\r\n                 if 'cloud service' in desc_lower: industry = \"Cloud Services (from Desc)\"\r\n                 # ... (other heuristics) ...\r\n                 else: industry = business_desc_val[:30] + \"... 
(from Desc)\"\r\n\r\n        # Extract from Synthesis\r\n        synthesis = state.get('final_synthesis')\r\n        prelim_rationale = \"See Exec Summary\" # Default\r\n        key_risks = \"See Exec Summary / Risks Section\" # Default\r\n        if synthesis and isinstance(synthesis, FinalSynthesisResult) and synthesis.key_findings_summary:\r\n             summary_text = synthesis.key_findings_summary.lower()\r\n             rationale_hints = re.findall(r\"(?:potential rationale|attractive aspect|strength).{0,100}\", summary_text)\r\n             if rationale_hints: prelim_rationale = rationale_hints[0][20:].strip() # Basic extraction\r\n\r\n             risk_hints = re.findall(r\"(?:red flag|major risk|key risk|concern).{0,100}\", summary_text)\r\n             if risk_hints: key_risks = risk_hints[0][10:].strip() # Basic extraction\r\n\r\n        # Format Table (Ensure N/A for None values passed)\r\n        summary_table_md = f\"\"\"\r\n| Key Information Item          | Details (Preliminary - Based on YF/Web)                     |\r\n| :---------------------------- | :---------------------------------------------------------- |\r\n| **Company Name** | {company_name}                                              |\r\n| **Ticker / RIC** | {ticker}                                                    |\r\n| **Country of Exchange** | {country if country else 'N/A'}                           |\r\n| **Market Cap (USD)** | {market_cap_str} *(as of {query_date if query_date else 'N/A'})* |\r\n| **Input EBITDA (USD, FY0)** | {ebitda_str} *(as of {query_date if query_date else 'N/A'})* |\r\n| **Input P/E Ratio** | {input_pe_str} *(as of {query_date if query_date else 'N/A'})* |\r\n| **Industry (Inferred)** | {industry}                                                  |\r\n| **Preliminary M&A Rationale** | {prelim_rationale} *(Speculative)* |\r\n| **Key Preliminary Risks** | {key_risks} *(Speculative)* |\r\n| **Data Confidence Level** | **Low (YF/Web Only)** |\r\n| **Next 
Step Recommendation** | **Deep Due Diligence using Official Filings REQUIRED** |\r\n\"\"\"\r\n        logger.info(\"Successfully generated structured summary table.\")\r\n    except Exception as table_e:\r\n        logger.error(f\"Error generating summary table: {table_e}\", exc_info=True)\r\n        summary_table_md = f\"# Error Generating Summary Table: {table_e}\\n\"\r\n        # Ensure it's still a string even on error\r\n        if not isinstance(summary_table_md, str): summary_table_md = \"# Summary Table Error\\n\"\r\n\r\n\r\n    # --- 2. Prepare Context for Final Report LLM (More robust handling of None) ---\r\n    synthesis = state.get('final_synthesis')\r\n    gaps = state.get('gaps_identified')\r\n    yfinance_failed = state.get('yfinance_fetch_failed', False)\r\n    yfinance_status_text = \"Failed (Used Web Fallback)\" if yfinance_failed else \"Successful\"\r\n    financial_data_source = \"Web Search Fallback\" if yfinance_failed else \"Yahoo Finance\"\r\n    financial_section_source_note = f\"Based on {financial_data_source}\"\r\n\r\n    final_report_text = f\"{summary_table_md}\\n\\n# Report Generation Failed\\nSynthesis data missing.\" # Default error\r\n    status = 'error'\r\n    message = \"Report generation failed: Missing synthesis data.\"\r\n\r\n    if synthesis and isinstance(synthesis, FinalSynthesisResult): # Check synthesis exists and is correct type\r\n        context_parts = {\r\n            \"structured_summary_table_context\": summary_table_md, # Pass generated table\r\n            \"synthesis_context\": \"\",\r\n            \"gap_context\": \"\",\r\n            \"analysis_summaries_context\": \"\",\r\n            \"search_results_context\": \"\",\r\n            \"initial_input_context\": \"\" # Will be built below\r\n        }\r\n\r\n        # Synthesis Context\r\n        context_parts[\"synthesis_context\"] = f\"Synthesized Key Findings:\\n{synthesis.key_findings_summary}\\n\\nRemaining Uncertainties:\\n\" + \"\\n\".join(f\"- {u}\" 
for u in (synthesis.remaining_uncertainties or [])) # Handle None\r\n\r\n        # Gap Context\r\n        context_parts[\"gap_context\"] = f\"Gap Analysis Summary:\\n{gaps.summary if gaps and isinstance(gaps, GapAnalysisResult) else 'N/A'}\" # Check gaps type\r\n\r\n        # Analysis Summaries Context (Handle None values safely)\r\n        analysis_summaries = []\r\n        fin_analysis = state.get('financial_analysis')\r\n        if fin_analysis: analysis_summaries.append(f\"### Financial Analysis (Source: {financial_data_source})\\n{fin_analysis}\")\r\n        comp_analysis = state.get('competitive_analysis')\r\n        if comp_analysis: analysis_summaries.append(f\"### Competitive Analysis\\n{comp_analysis}\")\r\n        mgmt_gov = state.get('management_governance_assessment')\r\n        if mgmt_gov: analysis_summaries.append(f\"### Management/Governance Assessment\\n{mgmt_gov}\")\r\n        other_analysis = state.get('analysis_results')\r\n        if other_analysis: # Check list exists\r\n             generic_summary = \"### Other Analysis Results\\n\"\r\n             for ar in other_analysis:\r\n                 if isinstance(ar, AnalysisResult): # Check type\r\n                     generic_summary += f\"- **{ar.analysis_goal}**: {ar.analysis_result}\\n\"\r\n             analysis_summaries.append(generic_summary)\r\n        context_parts[\"analysis_summaries_context\"] = \"\\n\\n\".join(analysis_summaries) if analysis_summaries else \"N/A\"\r\n\r\n        # Search Results Context (Handle None values safely)\r\n        search_context = \"[Web Search Results Context for Reference]\\n\"\r\n        search_results = state.get('search_results', []) or []\r\n        financial_web_results = state.get('financial_web_search_results', []) or []\r\n        gap_search_results = state.get('gap_search_results', []) or []\r\n        all_searches = search_results + financial_web_results + gap_search_results\r\n        search_count = 0\r\n        max_search_items = 20\r\n      
  if all_searches:\r\n            for res in all_searches:\r\n                if search_count >= max_search_items: break\r\n                if isinstance(res, SearchStepResult): # Check type\r\n                    search_context += f\"Query: {res.query}\\n\"\r\n                    if res.results:\r\n                        for item in res.results[:2]:\r\n                            if search_count >= max_search_items: break\r\n                            if isinstance(item, SearchResultItem):\r\n                                title = item.title or \"N/A\"\r\n                                snippet = item.snippet or \"\"\r\n                                url = item.url or \"#\" # Provide fallback URL\r\n                                search_context += f\"- [{title}]({url}): {snippet[:150]}...\\n\"\r\n                                search_count +=1\r\n        context_parts[\"search_results_context\"] = search_context[:15000] if search_count > 0 else \"[Web Search Results Context for Reference]\\nN/A\"\r\n\r\n\r\n        # *** FIX: Build Initial Input Context Safely ***\r\n        input_ctx = \"[Initial Input Data]\\n\"\r\n        company_name_val = state.get('company_name', 'N/A')\r\n        ticker_val = state.get('ticker', 'N/A')\r\n        country_val = state.get('country_of_exchange')\r\n        market_cap_val = state.get('market_cap_usd')\r\n        ebitda_val = state.get('input_ebitda_usd')\r\n        pe_val = state.get('input_pe_ratio')\r\n        desc_val = state.get('input_business_description') # Get the value, could be None\r\n        query_date_val = state.get('input_query_date')\r\n\r\n        input_ctx += f\"- Name: {company_name_val}\\n\"\r\n        input_ctx += f\"- RIC/Ticker: {ticker_val}\\n\"\r\n        input_ctx += f\"- Country: {country_val if country_val else 'N/A'}\\n\"\r\n        input_ctx += f\"- Market Cap (USD, {query_date_val if query_date_val else 'N/A'}): {market_cap_val if market_cap_val is not None else 'N/A'}\\n\"\r\n        
input_ctx += f\"- EBITDA (USD, FY0, {query_date_val if query_date_val else 'N/A'}): {ebitda_val if ebitda_val is not None else 'N/A'}\\n\"\r\n        input_ctx += f\"- P/E Ratio ({query_date_val if query_date_val else 'N/A'}): {pe_val if pe_val is not None else 'N/A'}\\n\"\r\n        # Check desc_val before slicing\r\n        input_ctx += f\"- Business Desc: {(desc_val[:500] + '...') if desc_val else 'N/A'}\\n\"\r\n        context_parts[\"initial_input_context\"] = input_ctx\r\n        # *** END FIX ***\r\n\r\n        # --- 3. Format Final Report Prompt ---\r\n        current_date_str = datetime.now().strftime('%Y-%m-%d')\r\n        try:\r\n            prompt = FINAL_REPORT_SYSTEM_PROMPT_TEMPLATE_YFINANCE_ONLY.format(\r\n                current_date=current_date_str,\r\n                research_topic=state.get('topic', 'N/A'), # Use .get\r\n                yfinance_status=yfinance_status_text,\r\n                financial_section_source_note=financial_section_source_note,\r\n                financial_data_source=financial_data_source,\r\n                **context_parts # Pass all context sections\r\n            )\r\n        except KeyError as ke:\r\n            logger.error(f\"KeyError formatting final report prompt: {ke}. Context keys: {list(context_parts.keys())}\", exc_info=True)\r\n            final_report_text = f\"{summary_table_md}\\n\\n# Report Generation Failed\\n\\nError: Missing key in final report prompt template: {ke}\"\r\n            message = f\"Error formatting report prompt: Missing key {ke}\"\r\n            status = 'error'\r\n            prompt = None # Prevent LLM call\r\n\r\n        # --- 4. 
Invoke LLM for Report Generation (only if prompt formatting succeeded) ---\r\n        if prompt:\r\n            try:\r\n                final_report = await llm_creative.ainvoke(prompt) # Use creative for report writing\r\n                final_report_text = final_report.content if hasattr(final_report, 'content') else str(final_report)\r\n\r\n                if len(final_report_text) < 500 or \"report generation failed\" in final_report_text.lower():\r\n                     logger.warning(\"Final report seems short or indicates internal failure.\")\r\n                     message = \"Final report generated, but may be incomplete or failed.\"\r\n                     status = 'warning'\r\n                     # Keep the potentially faulty report text\r\n                else:\r\n                     message = \"Final research report generated successfully.\"\r\n                     status = 'completed'\r\n                logger.info(message)\r\n\r\n            except Exception as e:\r\n                logger.error(f\"Error generating final report via LLM: {e}\", exc_info=True)\r\n                final_report_text = f\"{summary_table_md}\\n\\n# Report Generation Failed\\n\\nError during LLM call: {str(e)}\"\r\n                message = f\"Error generating report via LLM: {str(e)[:100]}...\"\r\n                status = 'error'\r\n\r\n    else: # Synthesis was missing\r\n         logger.error(\"Cannot generate report: Final synthesis is missing.\")\r\n         final_report_text = f\"{summary_table_md}\\n\\n\" + final_report_text # Include table even if synthesis failed\r\n         message = \"Report generation failed: Missing synthesis data.\"\r\n         status = 'error'\r\n\r\n\r\n    # --- 5. 
Update UI and Progress ---\r\n    all_updates.extend(create_update(state, {\r\n        'id': step_id, 'type': 'report', 'status': status,\r\n        'title': 'Final Report Generation', 'message': message,\r\n        'payload': {'report_preview': final_report_text[:500] + \"...\"} if status != 'error' else None,\r\n        'overwrite': True\r\n        }))\r\n\r\n    completed_steps = state.get('completed_steps_count', 0) + 1\r\n    final_total_steps = state.get('total_steps', completed_steps)\r\n    progress_final = create_update(state, {\r\n        'id': 'research-progress', 'type': 'progress',\r\n        'status': status if status == 'error' else 'completed',\r\n        'title': 'Research Progress', 'message': f'Research finished ({status}).',\r\n        'completedSteps': completed_steps if status == 'completed' else completed_steps - 1,  # Don't count the failed final step as completed\r\n        'totalSteps': final_total_steps, 'isComplete': True, 'overwrite': True\r\n    })\r\n    all_updates.extend(progress_final)\r\n\r\n    logger.info(f\"--- Exiting Node: generate_final_markdown_report ({status}) ---\")\r\n    return {\r\n        \"final_report_markdown\": final_report_text,\r\n        \"structured_summary_table\": summary_table_md,\r\n        \"completed_steps_count\": completed_steps,\r\n        \"stream_updates\": state.get('stream_updates', []) + all_updates,\r\n    }\r\n\r\nasync def finalize_basic_research(state: ResearchState) -> Dict[str, Any]:\r\n    \"\"\"Fallback finalizer, attempts to include summary table.\"\"\"\r\n    step_id = 'finalize-research'\r\n    all_updates = state.get('stream_updates', [])\r\n    final_message = state.get(\"error_message\", \"Research process finalized via fallback path.\")\r\n    all_updates.extend(create_update(state, {\r\n        'id': step_id, 'type': 'end', 'status': 'completed',\r\n        'title': 'Research Finalized', 'message': final_message, 'overwrite': True\r\n        }))\r\n    logger.info(f\"\\n--- Running Node: 
finalize_basic_research ({final_message}) ---\")\r\n\r\n    # Determine final overall progress status\r\n    is_error_final = bool(state.get(\"error_message\"))\r\n    final_status = 'error' if is_error_final else 'completed'\r\n    final_completed_steps = state.get('completed_steps_count', 0)\r\n    final_total_steps = state.get('total_steps', final_completed_steps)\r\n\r\n    progress_final = create_update(state, {\r\n        'id': 'research-progress', 'type': 'progress', 'status': final_status,\r\n        'title': 'Research Progress', 'message': f'Research finished ({final_status} via fallback).',\r\n        'completedSteps': final_completed_steps, 'totalSteps': final_total_steps,\r\n        'isComplete': True, 'overwrite': True\r\n    })\r\n    all_updates.extend(progress_final)\r\n\r\n    # Try to provide a minimal useful report, including summary table if available\r\n    final_report = state.get(\"final_report_markdown\")\r\n    summary_table = state.get(\"structured_summary_table\", \"\\n# Summary Table Generation Failed in Fallback\\n\")\r\n\r\n    if not final_report or \"Report Generation Failed\" in final_report or \"final state.\" in final_report: # Check for various failure states\r\n        fallback_report_content = f\"\\n\\n# Research Finalized ({final_status.upper()})\\n\\n{final_message}\\n\\n\"\r\n        final_synthesis = state.get('final_synthesis')\r\n        if final_synthesis and hasattr(final_synthesis, 'key_findings_summary'):\r\n            fallback_report_content += f\"## Last Available Synthesis Summary\\n{final_synthesis.key_findings_summary}\\n\\n## Remaining Uncertainties\\n\" + \"\\n\".join(f\"- {u}\" for u in (final_synthesis.remaining_uncertainties or []))  # Guard against None, matching the main report path\r\n        else:\r\n            fallback_report_content += \"No usable synthesis or report was generated prior to fallback.\"\r\n        # Prepend summary table to the fallback content\r\n        final_report = summary_table + fallback_report_content\r\n\r\n    return 
{\"final_report_markdown\": final_report, \"stream_updates\": all_updates}"
  },
  {
    "path": "super_agents/customized_deep_research/reason_graph/prompt.py",
    "content": "# --- REVISED Plan Research Prompt ---\r\n# Goal: Generate deeper, more diverse queries, handle YF failure, create actionable analysis steps.\r\nPLAN_RESEARCH_PROMPT_YFINANCE = \"\"\"You are an expert M&A research analyst planning preliminary due diligence for: **{company_name} ({ticker})**.\r\nCountry: {country}. Initial Market Cap (USD): {market_cap}. Initial EBITDA (USD): {ebitda}. Source Date: {query_date}.\r\nBusiness Desc: {business_desc}\r\n\r\n**Constraint:** Rely ONLY on 'yfinance' (if available) and 'web_search'. No direct access to official filings or premium databases.\r\n\r\n**Scenario:** Yahoo Finance data fetch status: **{yfinance_status}**.\r\n\r\n**Goal:** Create a focused research plan combining financial tool usage (if applicable) and deep web searching to uncover M&A-critical insights.\r\n\r\n**Plan Requirements:**\r\n\r\n1.  **Financial Data Step:**\r\n    * **IF `yfinance_status` is 'Successful'**: Include exactly ONE step with `tool_hint: 'yfinance'` for ticker '{ticker}'. Query: \"Fetch comprehensive financial data summary\".\r\n    * **IF `yfinance_status` is 'Failed'**: **DO NOT include a 'yfinance' step.** Instead, generate 3-5 **specific 'web_search' queries** aiming to find alternative financial information online. Use the initial Market Cap ({market_cap}) and EBITDA ({ebitda}) as context/validation points. Examples:\r\n        * `\"{company_name} estimated revenue trend 2023-2025\"`\r\n        * `\"analyst report summary {company_name} profitability OR debt\"`\r\n        * `\"{company_name} market capitalization verification news OR source\"`\r\n        * `\"news {company_name} recent funding OR financing rounds\"`\r\n        * `\"{company_name} EBITDA margin discussion OR competitor comparison\"`\r\n\r\n2.  **Deep Web Search Queries (Generate 8-10 DIVERSE queries minimum, regardless of YF status):** Design **specific, targeted `web_search` queries** for '{company_name}' ({ticker}) covering these angles. 
Aim for queries likely to hit news, industry analysis, forums, reviews, executive mentions, etc.:\r\n    * **Management & Strategy:** Search for **named executive interviews/quotes on strategy, reports on management changes/stability, discussions on company culture (e.g., Glassdoor summary if mentioned), analysis of recent strategic moves (partnerships, M&A).** Examples:\r\n        * `\"Interview OR Quote [CEO Name if known, else 'CEO'] {company_name} future strategy\"`\r\n        * `\"Analysis {company_name} management team effectiveness OR recent changes\"`\r\n    * **Product/Tech Competitiveness & Risk:** Search for **independent reviews of core products/services, technical comparisons vs. specific competitors, user forum discussions on product quality/bugs/features, mentions of technical debt or platform scalability, news on R&D/patents.** Examples:\r\n        * `\"comparison review {company_name} [main product/service] vs [Competitor A]\"`\r\n        * `\"{company_name} product user forum common complaints OR issues\"`\r\n        * `\"Analysis {company_name} technology stack OR technical debt\"`\r\n    * **Market Position & Moat:** Search for **market share estimates (even if in news/blogs), analysis of competitive advantages (moat), discussion of pricing power, recent competitor actions impacting {company_name}, relevant market trends/forecasts.** Examples:\r\n        * `\"{company_name} market share [specific niche derived from Business Desc]\"`\r\n        * `\"Analysis {company_name} competitive advantages OR economic moat\"`\r\n        * `\"Impact of [Market Trend] on {company_name}\"`\r\n    * **Customer Insights:** Search for **mentions of major customer wins/losses, case studies, discussions on customer satisfaction/churn (if public), reviews on B2B sites (if applicable).** Examples:\r\n        * `\"{company_name} major client announcement OR case study\"`\r\n        * `\"{company_name} customer reviews OR satisfaction rating\"`\r\n    * **Key Risks 
(Operational, Legal, etc.):** Search specifically for **news/reports on lawsuits/litigation, regulatory scrutiny/fines in {country} or key markets, supply chain issues, product recalls, negative analyst commentary on risks.** Examples:\r\n        * `\"{company_name} lawsuit OR regulatory action {country}\"`\r\n        * `\"{company_name} operational challenges OR supply chain news\"`\r\n    * **M&A Context:** Search for **M&A rumors/speculation (note source quality), analysis of {company_name} as potential target/acquirer, industry M&A trends relevant to its niche.** Examples:\r\n        * `\"{company_name} acquisition speculation OR target analysis\"`\r\n        * `\"M&A trends {company_name} industry sector\"`\r\n\r\n3.  **Analysis Steps (`required_analyses` - Generate for key M&A themes):** Define analysis goals that EXPLICITLY require **synthesizing insights from AVAILABLE financial data (YF dict OR financial web search results) AND the broader web search findings.** Focus on M&A implications:\r\n    * **Financial Profile & Risks:** \"Analyze the company's financial health signals (growth, profitability, debt) based *solely* on the available **[financial data source - e.g., Yahoo Finance or Web Search]** and corroborating/contradicting web search context. Identify key financial red flags for M&A diligence, noting data limitations.\" # <<< MODIFIED: Removed placeholder, using static description\r\n    * **Competitive Position & Moat:** \"Evaluate {company_name}'s market position, competitive advantages/disadvantages, and potential economic moat based on web search findings (competitors, market share hints, reviews). Assess attractiveness for an M&A acquirer.\"\r\n    * **Management & Execution:** \"Assess apparent management stability, strategic direction hints, and potential governance flags based on web search findings (executive mentions, news, culture hints). 
Consider M&A execution risk implications.\"\r\n    * **Overall Preliminary M&A Assessment:** \"Synthesize all findings into a preliminary view: Is {company_name}, based *only* on this limited YF/Web research, a potentially attractive M&A target? What are the 1-2 biggest perceived strengths and 1-2 biggest red flags requiring immediate deep dive with official data?\"\r\n\r\n**Output Format:** A JSON object adhering to the `ResearchPlan` schema. Ensure high query quality and diversity, and actionable analysis goals.\r\n\"\"\"\r\n\r\n\r\n\r\n# --- REVISED Financial Analysis Prompt ---\r\n# Goal: Analyze available financial data (YF dict or Web results), correlate deeply with web context, infer M&A implications, reduce excessive caution in tone.\r\nFINANCIAL_ANALYSIS_PROMPT_YFINANCE = \"\"\"You are an M&A financial analyst reviewing **{company_name} ({ticker})**.\r\nYour analysis is based ONLY on the provided financial context ({financial_data_source_description}) and qualitative context from general web searches. Be analytical and objective, noting data limitations where relevant.\r\n\r\n**Analysis Goals:**\r\n\r\n1.  **Financial Data Summary:** Briefly summarize key figures and trends observed in the provided `{financial_data_source_description}`. Note any obvious data gaps or inconsistencies within this source. If analyzing serialized YF data (dictionaries with index/columns/data), interpret trends from the 'data' arrays over time periods in 'columns'.\r\n2.  
**Correlation with Web Context:** **Critically connect** the financial signals (e.g., revenue trend, profitability metrics, debt hints, market cap from input {market_cap}) with the narrative found in web searches.\r\n    * Does web news (e.g., product launches, market changes, partnerships) **support or contradict** the financial trends?\r\n    * Are there web discussions (e.g., competition, pricing pressure, operational issues) that **explain** observed financial metrics (e.g., margins, EBITDA {ebitda})?\r\n    * Does the company's reported activity level in web searches seem consistent with its financial scale (Market Cap, Revenue hints)?\r\n    * **Highlight key consistencies and discrepancies.**\r\n3.  **M&A Implications & Potential Red Flags (Inferred):** Based *only* on this combined, limited information:\r\n    * What **potential financial strengths** (e.g., reported growth seemingly validated by web news, potentially manageable debt based on context) might be attractive? (Label as preliminary).\r\n    * What **potential financial RED FLAGS** (e.g., negative trends contradicted by optimistic news, high debt without clear financing context online, discrepancies between reported scale and web presence) demand urgent investigation using official filings?\r\n    * What is the **preliminary assessment of financial viability/risk** from an M&A perspective, acknowledging the data source limitations?\r\n4.  **Key Limitations Note:** Briefly state that this analysis lacks audited figures, footnotes, MD&A, and segment details, which are essential for definitive M&A financial due diligence.\r\n\r\n**Instructions:**\r\n- Focus on **analysis and interpretation**, not just data listing.\r\n- Prioritize connecting the financial data points with the qualitative web narrative.\r\n- Use objective but insightful language. 
Label speculative inferences clearly (e.g., \"This *suggests*...\", \"A potential implication *could be*...\").\r\n- Structure logically (e.g., ## Financial Summary, ## Web Correlation, ## M&A Implications/Flags, ## Limitations Note).\r\n- Output only the analysis text.\r\n\r\n**Provided Financial Context ({financial_data_source_description}):**\r\n{financial_context}\r\n\r\n**Provided General Web Search Context:**\r\n{web_context}\r\n\r\n**Financial Analysis (Preliminary - Based on {financial_data_source_description} & Web Search):**\r\n\"\"\"\r\n\r\n# --- REVISED Competitive Analysis Prompt ---\r\n# Goal: Deeper analysis of positioning, moat hints, M&A implications.\r\nCOMPETITIVE_ANALYSIS_PROMPT_YFINANCE = \"\"\"You are an M&A market analyst assessing the competitive landscape for **{company_name} ({ticker})**.\r\nAnalyze the provided context from its business description, Yahoo Finance profile hints, and general web search results.\r\n\r\n**Analysis Goals:**\r\n\r\n1.  **Market Definition & Niche:** Define the specific market niche(s) {company_name} operates in, based on available info. Estimate market size or growth potential if any hints exist in the context.\r\n2.  **Competitor Landscape:** List key competitors identified. Summarize any available information on their relative size, product focus, or recent strategic moves found in the web context.\r\n3.  **Competitive Positioning & Potential Moat:** Synthesize information to assess {company_name}'s likely market position (e.g., leader, niche player, challenger).\r\n    * What are its apparent **strengths or differentiators** mentioned (e.g., specific tech, strong brand hints, key partnerships)?\r\n    * Are there hints of a **competitive advantage or 'moat'** (e.g., network effects, high switching costs suggested by discussions, unique IP mentions)? (Label as speculative).\r\n    * What **weaknesses or vulnerabilities** are suggested (e.g., negative reviews, limited scale, strong competitor actions)?\r\n4.  
**Market Dynamics & Trends:** Summarize relevant market trends, technological shifts, or regulatory factors mentioned in web searches that could impact {company_name} and its competitors.\r\n5.  **M&A Implications:**\r\n    * How attractive is the target's **apparent market position and potential moat** for an acquirer?\r\n    * What are the **key competitive dynamics or threats** an acquirer needs to consider?\r\n    * Does the competitive landscape suggest **synergy potential** (e.g., consolidation opportunities, cross-selling)?\r\n    * Assess the **difficulty of replicating** the target's position (barrier to entry assessment based on web hints).\r\n6.  **Limitations Note:** Briefly state this analysis relies on public web information and lacks professional market research data.\r\n\r\n**Instructions:**\r\n- Integrate findings cohesively.\r\n- Focus on **competitive strength/weakness assessment** and **M&A relevance**.\r\n- Be specific where evidence allows, label inferences clearly.\r\n- Structure logically (e.g., ## Market Niche, ## Competitors, ## Positioning & Moat Analysis, ## Dynamics, ## M&A Implications, ## Limitations Note).\r\n- Output only the analysis text.\r\n\r\n**Provided Company Info/Description Context:**\r\n{info_context}\r\n\r\n**Provided Web Search Context:**\r\n{web_context}\r\n\r\n**Competitive Landscape Analysis (Preliminary - Based on Public Web/YF Info):**\r\n\"\"\"\r\n\r\n\r\n# --- REVISED Management & Governance Prompt ---\r\n# Goal: Focus on M&A implications of findings, even if limited.\r\nMANAGEMENT_GOVERNANCE_PROMPT_YFINANCE = \"\"\"You are an analyst evaluating management and governance hints for M&A target **{company_name} ({ticker})**.\r\nBase your assessment *only* on provided context from **Yahoo Finance info/holders data** and **general web search results**.\r\n\r\n**Assessment Goals:**\r\n\r\n1.  **Key Personnel:** Identify key executives (from YF 'info' or web searches). 
Summarize any available hints about their background, tenure, or public statements found.\r\n2.  **Ownership Structure Hints (YF):** Summarize basic ownership structure from YF holders data (e.g., % institutions, % insiders if available). Any notable holders mentioned?\r\n3.  **Governance Signals (Web Search):** Summarize any significant governance-related news or discussions found (e.g., board changes, shareholder issues, compensation controversy hints, positive/negative reputation mentions).\r\n4.  **M&A Implications (Inferred):** Based ONLY on these limited signals:\r\n    * Are there preliminary **positive signs** regarding management stability, relevant experience, or alignment that might facilitate an M&A deal? (Label as speculative).\r\n    * Are there potential **red flags** (e.g., high turnover hints, negative press, questionable decisions mentioned online, concentrated ownership issues suggested by YF data) that warrant caution or deeper investigation in M&A diligence? (Label as speculative).\r\n    * Consider potential impact on **integration or post-acquisition strategy**.\r\n5.  
**Critical Limitations Note:** Briefly state this assessment is highly superficial, lacking official proxy statements, detailed board/compensation info, and internal governance documents crucial for M&A.\r\n\r\n**Instructions:**\r\n- Stick strictly to the provided context.\r\n- Focus on potential **M&A relevance** of the limited findings.\r\n- Structure logically (e.g., ## Key Personnel Hints, ## Ownership Overview (YF), ## Governance Signals (Web), ## M&A Implications (Speculative), ## Limitations Note).\r\n- Output only the assessment text.\r\n\r\n**Provided Yahoo Finance Context (Info/Holders):**\r\n{yfinance_info_context}\r\n\r\n**Provided Web Search Context:**\r\n{web_context}\r\n\r\n**Management & Governance Glimpse (Preliminary - Based on YF/Web):**\r\n\"\"\"\r\n\r\n\r\n# --- REVISED Gap Analysis Prompt ---\r\n# Goal: Balance identifying critical official data gaps with suggesting *actionable* creative web searches.\r\nGAP_ANALYSIS_PROMPT_YFINANCE = \"\"\"Analyze the research findings summary provided below for **{company_name} ({ticker})**.\r\nThe research relied ONLY on **Yahoo Finance (YF)** (status: {yfinance_status}) and **general web search**.\r\n\r\n**Goal:**\r\n1.  Identify **critical knowledge gaps** for M&A due diligence that REQUIRE **official company filings** (e.g., Annual Reports, 10-K/10-Q equivalents, Proxy Statements) or specialized databases, which YF/Web cannot reliably provide. List major categories (e.g., Detailed Audited Financials & Footnotes, MD&A, Official Risk Factors, Legal/Compliance Details, Customer Contracts, IP Details, Detailed Governance/Compensation). Briefly explain *why* YF/Web are insufficient for each.\r\n2.  Suggest **1-3 specific, creative follow-up WEB search queries** (`tool_hint: 'web_search'`) **ONLY IF** they have a realistic (even if small) chance of uncovering **partial insights, third-party summaries, links to official sources, or corroborating context** related to the identified gaps. 
**Focus on actionable queries.** Examples:\r\n    * `\"analyst report summary {company_name} key risks OR financial outlook\"`\r\n    * `\"{company_name} investor relations contact OR website link\"`\r\n    * `\"news {company_name} recent patent filing OR litigation update\"`\r\n    * `\"summary {company_name} latest annual report highlights\"`\r\n    * `\"{company_name} corporate governance rating OR report\"`\r\n    **Do NOT suggest searching directly for unobtainable data** like \"detailed financial footnotes\". Prioritize queries likely to yield *some* relevant signal, however indirect. If no plausible web follow-up seems possible for the key gaps, return an empty list for `follow_up_queries`.\r\n\r\n**Instructions:**\r\n- Be specific about the limitations of YF and Web Search for M&A.\r\n- Be realistic but creative in suggesting follow-up *web* queries.\r\n- Output should be structured using the `GapAnalysisResult` schema format (`summary` and `follow_up_queries` list).\r\n\r\n**Provided Research Context Summary:**\r\n{context}\r\n\r\n**Gap Analysis Output (Using GapAnalysisResult Schema):**\r\n\"\"\"\r\n\r\n\r\n# --- REVISED Synthesis Prompt ---\r\n# Goal: Stronger M&A narrative, clearer themes, balanced tone.\r\nSYNTHESIS_PROMPT_YFINANCE = \"\"\"Synthesize the research findings for **{company_name} ({ticker})** from an **M&A preliminary due diligence perspective**.\r\nThe research relied ONLY on **Yahoo Finance** data (status: {yfinance_status}) and **general web search**.\r\n\r\n**Goal:** Create a concise synthesis forming a preliminary M&A narrative. Highlight the most critical **themes** (potential strengths/attractions and red flags/risks) emerging from the combined data. Identify key remaining uncertainties crucial for an M&A decision.\r\n\r\n**Synthesize & Evaluate for M&A Relevance:**\r\n1.  **Preliminary M&A Narrative:** Based *only* on the available YF/Web information, what initial \"story\" emerges about this company as an M&A target? 
(e.g., Is it presented as a growth opportunity needing financial validation? A niche tech asset with unclear market traction? A stable but slow-moving player? A situation with significant red flags needing immediate investigation?).\r\n2.  **Key Themes (Strengths/Attractions - Speculative):** What 2-3 potential strengths or attractive aspects stand out from the analysis (e.g., apparent market niche leadership, positive product reviews found online, seemingly consistent reported growth)? Note the evidence basis (YF hint, Web mention) and the need for verification.\r\n3.  **Key Themes (Risks/Red Flags - Speculative):** What 2-3 major risks or red flags are most prominent (e.g., concerning financial signals from YF/Web, strong competitive threats identified, negative management/governance hints, significant data gaps in critical areas)? Note the evidence basis and the need for verification.\r\n4.  **Remaining Critical Uncertainties:** List the 3-5 most important unanswered questions that *must* be addressed through deep diligence using official sources before any M&A decision could be made.\r\n\r\n**Instructions:**\r\n- Focus on creating a coherent **M&A-focused narrative**.\r\n- Use objective language but draw clear (labeled) preliminary conclusions based on the synthesized themes.\r\n- **Acknowledge the low confidence level** due to data sources concisely within the summary.\r\n- Output using the `FinalSynthesisResult` schema: `key_findings_summary` should contain the narrative synthesis including themes (strengths/risks), and `remaining_uncertainties` lists the critical unanswered questions.\r\n\r\n**Comprehensive Research Context:**\r\n{context}\r\n\r\n**Synthesis Output (Using FinalSynthesisResult Schema):**\r\n\"\"\"\r\n\r\n\r\n# --- REVISED Final Report Prompt Template ---\r\n# Goal: Maintain structure, significantly reduce repetitive warnings, integrate summary table, adjust financial section based on 
source.\r\nFINAL_REPORT_SYSTEM_PROMPT_TEMPLATE_YFINANCE_ONLY = \"\"\"You are an M&A analyst writing a **Preliminary Research Briefing** on **{research_topic}**.\r\nThis briefing is based *only* on **Yahoo Finance aggregated data (Status: {yfinance_status})** and **public web search results**. No official filings or proprietary databases were consulted.\r\nThe purpose is to provide a highly preliminary assessment to inform the decision on whether to commit resources to full due diligence using official sources.\r\nCurrent date: {current_date}.\r\n\r\n**Report Requirements:**\r\n\r\n1.  **Tone & Qualification:** Be analytical and objective. Present findings derived from the provided context. Briefly note the source (YF/Web) for key points where necessary. Acknowledge limitations primarily in the dedicated \"Limitations\" section, rather than excessively throughout. Label clearly speculative conclusions derived from limited data (e.g., \"This *might suggest*...\", \"A *potential* implication...\").\r\n2.  **Structure (M&A Assessment Focus):**\r\n    * **(Optional but Recommended) Structured Summary Table:** (If a pre-formatted table is provided in the context, include it here).\r\n    * `## Executive Summary`: (~2-3 paragraphs) High-level overview: company profile, market context. Briefly mention the preliminary M&A rationale hints (if any) and the most significant potential red flags identified from YF/Web analysis. 
Conclude with a clear statement on the overall confidence level (Low, due to data sources) and the necessity of deep diligence using official sources if proceeding.\r\n    * `## Introduction`: State the report's purpose and the data sources used (YF/Web Only).\r\n    * `## Company & Business Overview (From Input, YF Info & Web Search)`: Describe the business based on initial input description, YF Info, and web search findings.\r\n    * `## Market & Competitive Environment (Web Derived Insights)`: Summarize findings on market niche, competitors, positioning, and dynamics based *only* on web search analysis. Note reliance on public information.\r\n    * `## Financial Overview ({financial_section_source_note})`: **Start with a brief disclaimer acknowledging the data source (YF or Web Fallback).** Present key findings from the financial analysis node (trends, balance sheet signals, web correlations). Discuss potential M&A implications (strengths/flags) identified in the analysis, labeling them as preliminary. Reference `(Source: {financial_data_source})`.\r\n    * `## Management & Governance Glimpse (YF Holders & Web Derived)`: Summarize findings about personnel, ownership hints (YF), and any governance signals from web searches. Note the superficial nature of this information.\r\n    * `## Key Preliminary Risks & Potential M&A Angles (Synthesized)`: Based on the `final_synthesis` context, summarize the key synthesized risks and potential (speculative) M&A angles.\r\n    * `## CRITICAL LIMITATIONS & NEXT STEPS`: **Crucial Section.** Elaborate using the `gap_context`. Clearly explain *why* YF/Web data is insufficient for M&A (lack of audited financials, footnotes, MD&A, verified segment data, detailed risks, governance docs, etc.). 
List the **specific types of information** and **official documents** (e.g., Annual Reports from relevant exchanges, SEC filings, Prospectuses) that *must* be obtained and analyzed for proper due diligence.\r\n    * `## Conclusion`: Briefly reiterate the preliminary nature of the assessment and the **absolute necessity** of deep due diligence using reliable official sources before making any M&A decisions.\r\n3.  **Formatting:** Use Markdown. Use H2 (`##`) for main sections and H3 (`###`) for subsections if needed. Ensure clear paragraphs.\r\n\r\n**Context Sections Provided:**\r\n- Section I: Structured Summary Table (`structured_summary_table_context`) - Optional pre-formatted table.\r\n- Section II: Synthesized Key Findings & Uncertainties (`synthesis_context`) - Narrative synthesis based on YF/Web.\r\n- Section III: Gap Analysis Summary (`gap_context`) - Focused on limitations of YF/Web.\r\n- Section IV: Analysis Summaries Context (`analysis_summaries_context`) - Outputs from financial, competitive, mgmt nodes.\r\n- Section V: Search Results Context (`search_results_context`) - Snippets from web searches for context.\r\n- Section VI: Initial Input Data (`initial_input_context`) - Key fields from the input JSON.\r\n\r\n**Your goal is to deliver an informative preliminary briefing that is objective about findings based on limited data, manages expectations appropriately, and clearly guides the necessary next steps involving official data sources.**\r\n\"\"\""
  },
  {
    "path": "super_agents/customized_deep_research/reason_graph/schemas.py",
    "content": "from typing import List, Optional, Dict, Any, Literal\r\nfrom pydantic import BaseModel, Field\r\nimport time\r\n\r\n# --- Schemas for Planning ---\r\nclass SearchQuery(BaseModel):\r\n    query: str = Field(..., description=\"The specific search query string.\")\r\n    tool_hint: str = Field(\"web_search\", description=\"Hint for which tool to use (e.g., 'yfinance', 'web_search', 'news_api').\")\r\n    # Optional: Add expected information type if needed\r\n\r\nclass RequiredAnalysis(BaseModel):\r\n    analysis_goal: str = Field(..., description=\"The specific question or goal for the analysis step.\")\r\n    required_inputs: List[str] = Field(default_factory=list, description=\"Data types needed for this analysis (e.g., 'yfinance_financials', 'web_search_market_info').\")\r\n\r\nclass ResearchPlan(BaseModel):\r\n    search_queries: List[SearchQuery] = Field(..., description=\"List of planned search queries.\")\r\n    required_analyses: List[RequiredAnalysis] = Field(..., description=\"List of planned analysis steps.\")\r\n\r\n# --- Schemas for Search Results ---\r\nclass SearchResultItem(BaseModel):\r\n    title: str\r\n    url: Optional[str] = None\r\n    snippet: str\r\n\r\nclass SearchStepResult(BaseModel):\r\n    query: str\r\n    results: List[SearchResultItem] = Field(default_factory=list)\r\n    tool_used: Optional[str] = None # Optional: Track which tool generated results\r\n\r\n# --- Schemas for Analysis ---\r\nclass AnalysisResult(BaseModel):\r\n    analysis_goal: str\r\n    analysis_result: str # The textual output of the analysis\r\n\r\n# --- Schemas for Gap Analysis ---\r\nclass GapFollowUpQuery(BaseModel):\r\n     query: str = Field(..., description=\"Specific web search query to fill a gap.\")\r\n     tool_hint: str = Field(\"web_search\", description=\"Should primarily be 'web_search' in this version.\")\r\n     rationale: Optional[str] = Field(None, description=\"Why this query helps fill a gap.\")\r\n\r\nclass 
GapAnalysisResult(BaseModel):\r\n    summary: str = Field(..., description=\"Summary of key limitations and information gaps, focusing on YFinance/Web constraints for M&A.\")\r\n    follow_up_queries: List[GapFollowUpQuery] = Field(default_factory=list, description=\"Suggested *web search* queries to potentially find related info.\")\r\n\r\n# --- Schemas for Synthesis & Reporting ---\r\nclass KeyFinding(BaseModel):\r\n     finding: str = Field(..., description=\"A single key finding or insight.\")\r\n     evidence_source: Optional[str] = Field(None, description=\"Brief note on source (e.g., 'YFinance Trend', 'Web Search Mention').\")\r\n\r\nclass FinalSynthesisResult(BaseModel):\r\n    key_findings_summary: str = Field(..., description=\"Synthesized summary of the most important findings relevant to M&A, based on YFinance/Web.\")\r\n    remaining_uncertainties: List[str] = Field(..., description=\"List of key questions or uncertainties remaining due to data limitations.\")\r\n    # Optional: Add structured key findings list if needed\r\n    # key_findings: List[KeyFinding] = Field(default_factory=list)\r\n\r\n# --- Schemas for UI Streaming & State ---\r\nclass StreamUpdateData(BaseModel):\r\n    id: str # Unique ID for the step/update type\r\n    type: Literal[\"plan\", \"search\", \"analysis\", \"data_fetch\", \"synthesis\", \"report\", \"progress\", \"steps_list\", \"error\", \"info\", \"setup\", \"end\"]\r\n    status: Literal[\"pending\", \"running\", \"completed\", \"error\", \"skipped\", \"warning\"]\r\n    title: Optional[str] = None # User-friendly title for the step\r\n    message: Optional[str] = None # Status message\r\n    payload: Optional[Dict[str, Any] | List[Dict[str, Any]]] = None # Any associated data (e.g., results preview, step list)\r\n    overwrite: bool = False # Whether this update should replace previous updates with the same ID\r\n    isComplete: Optional[bool] = None # For progress updates\r\n    completedSteps: Optional[float] = None # 
For progress updates\r\n    totalSteps: Optional[int] = None # For progress updates\r\n\r\nclass StreamUpdate(BaseModel):\r\n    data: StreamUpdateData\r\n    timestamp: float = Field(default_factory=time.time)\r\n\r\nclass StepInfo(BaseModel):\r\n    id: str\r\n    type: str\r\n    status: str\r\n    title: str\r\n    description: Optional[str] = None"
  },
  {
    "path": "super_agents/customized_deep_research/reason_graph/state.py",
    "content": "# /Users/peng/Dev/AI_AGENTS/mentis/super_agents/company_deep_research/reason_graph/state.py\r\n# (Optimized Version v2 - Adjusted for Graph Logic)\r\n\r\nfrom typing import TypedDict, List, Optional, Dict, Any, Literal\r\nimport pandas as pd\r\nimport time\r\n\r\nfrom .schemas import (\r\n    SearchQuery, RequiredAnalysis, AnalysisResult, GapAnalysisResult,\r\n    FinalSynthesisResult, SearchStepResult, StreamUpdate, StepInfo, ResearchPlan, KeyFinding\r\n)\r\n\r\nclass YFinanceData(TypedDict, total=False):\r\n    info: Optional[Dict[str, Any]]\r\n    financials: Optional[Dict]\r\n    quarterly_financials: Optional[Dict]\r\n    balance_sheet: Optional[Dict]\r\n    quarterly_balance_sheet: Optional[Dict]\r\n    cashflow: Optional[Dict]\r\n    quarterly_cashflow: Optional[Dict]\r\n    major_holders: Optional[Dict]\r\n    institutional_holders: Optional[Dict]\r\n    recommendations: Optional[Dict]\r\n    news: Optional[List[Dict[str, Any]]]\r\n    error: Optional[str]\r\n\r\nclass ResearchState(TypedDict):\r\n    # --- Input Fields ---\r\n    identifier_ric: str\r\n    company_name: str\r\n    country_of_exchange: Optional[str]\r\n    market_cap_usd: Optional[float]\r\n    input_business_description: Optional[str]\r\n    input_pe_ratio: Optional[float]\r\n    input_ebitda_usd: Optional[float]\r\n    input_query_date: Optional[str]\r\n\r\n    # --- Derived/Internal Fields ---\r\n    topic: str\r\n    ticker: str\r\n    max_search_iterations: int # Might not be used with current loop logic\r\n    max_analysis_steps: int # Max steps for the analysis loop\r\n    analysis_depth: Literal[\"basic\", \"detailed\"]\r\n\r\n    # --- Planning ---\r\n    research_plan: Optional[ResearchPlan]\r\n    search_steps_planned: List[SearchQuery] # General web searches\r\n    financial_web_search_steps: List[SearchQuery] # Financial web searches (if YF failed)\r\n    analysis_steps_planned: List[RequiredAnalysis]\r\n\r\n    # --- Data Collection ---\r\n    yfinance_data: 
Optional[YFinanceData]\r\n    yfinance_fetch_failed: bool\r\n\r\n    search_results: List[SearchStepResult] # Stores general web search results\r\n    financial_web_search_results: List[SearchStepResult] # Stores financial web search results\r\n\r\n    # --- Analysis & Synthesis ---\r\n    analysis_results: List[AnalysisResult] # Generic analysis results\r\n    financial_analysis: Optional[str]\r\n    competitive_analysis: Optional[str]\r\n    management_governance_assessment: Optional[str]\r\n\r\n    # --- Gap Analysis & Follow-up ---\r\n    gaps_identified: Optional[GapAnalysisResult]\r\n    gap_search_results: List[SearchStepResult]\r\n\r\n    # --- Final Output ---\r\n    final_synthesis: Optional[FinalSynthesisResult]\r\n    final_report_markdown: Optional[str]\r\n    structured_summary_table: Optional[str]\r\n\r\n    # --- Workflow State Tracking ---\r\n    # REMOVED: current_search_step_index (replaced by completed_web_search_count logic)\r\n    # REMOVED: current_financial_web_search_index (handled internally or via count)\r\n    completed_web_search_count: int # **NEW**: Tracks total web searches completed (both types)\r\n    current_analysis_step_index: int\r\n    completed_steps_count: float # Overall progress counter\r\n    total_steps: Optional[int]\r\n\r\n    # --- UI / Streaming ---\r\n    stream_updates: List[StreamUpdate]\r\n\r\n    # --- Error Tracking ---\r\n    error_message: Optional[str]"
  },
  {
    "path": "super_agents/customized_deep_research/reason_graph/tools.py",
    "content": "import os\r\nimport json\r\nimport time\r\nimport re\r\nimport logging # Use logging instead of just print for warnings/errors\r\nimport asyncio\r\nfrom datetime import datetime\r\nfrom typing import Optional, List, Literal, Dict, Any, Tuple, Set, Type\r\n\r\n# --- Environment Variable Loading ---\r\nfrom dotenv import load_dotenv\r\nload_dotenv()\r\nimport yfinance as yf\r\nimport pandas as pd\r\n\r\n# --- Pydantic & LangChain Core ---\r\nfrom pydantic import BaseModel, ValidationError, Field # Import Field for schema descriptions\r\nfrom langchain_core.prompts import ChatPromptTemplate\r\nfrom langchain_core.messages import HumanMessage, SystemMessage, AIMessage\r\nfrom langchain_core.runnables.base import RunnableSerializable # Type hint for LLM\r\n# Use specific import for ChatOpenAI or other providers as needed\r\nfrom langchain_openai import ChatOpenAI\r\n\r\n# --- Internal Imports ---\r\n# Assuming schemas.py and state.py exist in the same directory or path is correctly set\r\ntry:\r\n    from .schemas import SearchResultItem, SearchQuery, StreamUpdate, StreamUpdateData # Relative import\r\n    from .state import ResearchState, YFinanceData # Relative import\r\nexcept ImportError as e:\r\n    print(f\"Error importing local schemas/state within tools.py: {e}\")\r\n    # Define dummy classes if needed for script loading without full context\r\n    class BaseModel: pass # Basic placeholder\r\n    class SearchResultItem(BaseModel): title: str = \"\"; url: Optional[str] = None; snippet: str = \"\"\r\n    class SearchQuery(BaseModel): query: str = \"\"; tool_hint: str = \"web_search\"\r\n    class StreamUpdateData(BaseModel): id: str = \"\"; type: str = \"\"; status: str = \"\"\r\n    class StreamUpdate(BaseModel): data: Optional[StreamUpdateData] = None; timestamp: float = 0.0\r\n    class ResearchState(dict): pass\r\n    class YFinanceData(dict): pass\r\n\r\n# --- Configure Logging ---\r\nlogging.basicConfig(level=logging.INFO, 
format='%(asctime)s - %(levelname)s - %(message)s')\r\nlogger = logging.getLogger(__name__)\r\n\r\n# --- API Key Loading ---\r\nLLM_API_KEY_FROM_ENV = os.getenv(\"LLM_API_KEY\")\r\nOPENAI_API_KEY_FROM_ENV = os.getenv(\"OPENAI_API_KEY\")\r\nGROQ_API_KEY_FROM_ENV = os.getenv(\"GROQ_API_KEY\")\r\nTAVILY_API_KEY = os.getenv(\"TAVILY_API_KEY\")\r\n# EXA_API_KEY = os.getenv(\"EXA_API_KEY\") # Keep commented unless Exa tools are re-enabled\r\n\r\n# --- Configurable LLM Initialization ---\r\ndef initialize_llms() -> Tuple[Optional[RunnableSerializable], Optional[RunnableSerializable]]:\r\n    \"\"\"\r\n    Initializes and returns the main and creative LLM instances based on environment variables.\r\n    Supports providers: \"openai\", \"groq\", \"xai\"/\"grok\", \"openai_compatible\".\r\n    Returns: (llm, llm_creative) or (None, None) on failure.\r\n    \"\"\"\r\n    provider = os.getenv(\"LLM_PROVIDER\", \"openai\").lower()\r\n    model_name = os.getenv(\"LLM_MODEL_NAME\") # Get model name from env\r\n    api_key = LLM_API_KEY_FROM_ENV\r\n    base_url = os.getenv(\"LLM_BASE_URL\")\r\n\r\n    # Validate essential config based on provider\r\n    if not model_name:\r\n         logger.error(\"LLM_MODEL_NAME environment variable is not set.\")\r\n         return None, None\r\n\r\n    try:\r\n        temperature = float(os.getenv(\"LLM_TEMPERATURE\", \"0.0\"))\r\n        creative_temperature = float(os.getenv(\"LLM_CREATIVE_TEMPERATURE\", \"0.5\"))\r\n    except ValueError:\r\n        logger.warning(\"Invalid LLM temperature value in .env. 
Using defaults (0.0 / 0.5).\")\r\n        temperature = 0.0\r\n        creative_temperature = 0.5\r\n\r\n    logger.info(\"--- Initializing LLM ---\")\r\n    logger.info(f\"Provider: '{provider}'\")\r\n    logger.info(f\"Model Name: '{model_name}'\")\r\n    logger.info(f\"Base URL: {base_url if base_url else 'Default'}\")\r\n    logger.info(f\"Temperatures: Main={temperature}, Creative={creative_temperature}\")\r\n    logger.info(\"------------------------\")\r\n\r\n    llm_instance = None\r\n    llm_creative_instance = None\r\n\r\n    try:\r\n        # Consolidate key logic\r\n        key_to_use = None\r\n        if provider == \"openai\":\r\n            key_to_use = api_key or OPENAI_API_KEY_FROM_ENV\r\n            if not key_to_use: raise ValueError(\"OpenAI API key not found (checked LLM_API_KEY, OPENAI_API_KEY).\")\r\n            # Use default base_url for OpenAI if not provided\r\n            if not base_url: base_url = None # Let ChatOpenAI use default\r\n        elif provider in [\"xai\", \"grok\", \"openai_compatible\"]:\r\n            provider_name = \"xAI/Grok\" if provider in [\"xai\", \"grok\"] else \"OpenAI Compatible\"\r\n            logger.info(f\"Configuring provider '{provider_name}'. 
Assuming OpenAI-compatible API endpoint.\")\r\n            key_to_use = api_key # Must use LLM_API_KEY\r\n            if not key_to_use: raise ValueError(f\"LLM_API_KEY is required for provider '{provider}'.\")\r\n            if not base_url: raise ValueError(f\"LLM_BASE_URL is required for provider '{provider}'.\")\r\n            logger.info(f\"Note: Ensure '{model_name}' is valid for the API at {base_url}.\")\r\n        elif provider == \"groq\":\r\n            key_to_use = api_key or GROQ_API_KEY_FROM_ENV\r\n            if not key_to_use: raise ValueError(\"Groq API key not found (checked LLM_API_KEY, GROQ_API_KEY).\")\r\n            # Groq exposes an OpenAI-compatible endpoint, so we reuse ChatOpenAI here.\r\n            # For full feature support, prefer the dedicated class instead:\r\n            #   from langchain_groq import ChatGroq\r\n            #   llm_instance = ChatGroq(...)\r\n            logger.warning(\"Groq provider selected, using ChatOpenAI assuming compatibility. Consider using ChatGroq.\")\r\n            if not base_url: base_url = \"https://api.groq.com/openai/v1\" # Default Groq-compatible endpoint\r\n        else:\r\n            raise ValueError(f\"Unsupported LLM_PROVIDER: '{provider}'. Check .env. 
Supported: 'openai', 'groq', 'xai'/'grok', 'openai_compatible'.\")\r\n\r\n        # Instantiate LLMs using ChatOpenAI (or specific provider class if needed)\r\n        common_params = {\r\n             \"model\": model_name,\r\n             \"api_key\": key_to_use,\r\n             \"base_url\": base_url, # Pass None if using default OpenAI URL\r\n        }\r\n        # Filter out None values for base_url if using default OpenAI\r\n        if provider == \"openai\" and base_url is None:\r\n            del common_params[\"base_url\"]\r\n\r\n        llm_instance = ChatOpenAI(**common_params, temperature=temperature)\r\n        llm_creative_instance = ChatOpenAI(**common_params, temperature=creative_temperature)\r\n\r\n        logger.info(\"--- LLM Initialization Successful ---\")\r\n        return llm_instance, llm_creative_instance\r\n\r\n    except ImportError as e:\r\n        logger.error(f\"!!! ERROR: Missing required LangChain provider package for '{provider}': {e}\")\r\n        logger.error(\"Please install the necessary package (e.g., 'pip install langchain-openai', 'pip install langchain-groq').\")\r\n        return None, None\r\n    except Exception as e:\r\n        logger.error(f\"!!! 
ERROR during LLM Initialization: {e}\")\r\n        import traceback\r\n        traceback.print_exc() # Print traceback for debugging init errors\r\n        return None, None\r\n\r\n# --- Initialize LLM instances at module level ---\r\nllm, llm_creative = initialize_llms()\r\n\r\n# --- Initialize External Service Clients ---\r\n# Tavily Client (for web search)\r\ntavily_client = None\r\nif TAVILY_API_KEY:\r\n    try:\r\n        from tavily import AsyncTavilyClient\r\n        tavily_client = AsyncTavilyClient(api_key=TAVILY_API_KEY)\r\n        logger.info(\"Tavily client initialized.\")\r\n    except ImportError:\r\n        logger.warning(\"tavily-python not installed, Tavily web search will not be available.\")\r\n    except Exception as e:\r\n        logger.error(f\"Failed to initialize Tavily client: {e}\")\r\nelse:\r\n    logger.warning(\"TAVILY_API_KEY not found in environment variables. Tavily web search will fail.\")\r\n\r\n# Exa Client (Commented out as per simplified plan)\r\n# exa_client = None\r\n# if EXA_API_KEY:\r\n#     try:\r\n#         from exa_py import Exa\r\n#         exa_client = Exa(api_key=EXA_API_KEY)\r\n#         logger.info(\"Exa client initialized.\")\r\n#     except ImportError:\r\n#         logger.warning(\"exa-py not installed, Exa searches will not be available.\")\r\n#     except Exception as e:\r\n#         logger.error(f\"Failed to initialize Exa client: {e}\")\r\n# else:\r\n#     logger.warning(\"EXA_API_KEY not found in environment variables. 
Exa searches will fail.\")\r\n\r\n\r\n# --- Tool Helper Functions ---\r\n\r\nasync def generate_structured_output(\r\n    model: Optional[RunnableSerializable],\r\n    schema: Type[BaseModel], # Use Type[BaseModel] for typing Pydantic models\r\n    prompt: str,\r\n    system_message: str = \"\"\r\n) -> Optional[BaseModel]:\r\n    \"\"\"\r\n    Uses langchain's `.with_structured_output` for reliable JSON generation\r\n    conforming to the provided Pydantic schema.\r\n\r\n    Args:\r\n        model: The LangChain LLM runnable instance (e.g., llm_creative).\r\n        schema: The Pydantic model class to structure the output.\r\n        prompt: The main user prompt for the LLM.\r\n        system_message: Optional system message to guide the LLM.\r\n\r\n    Returns:\r\n        An instance of the Pydantic schema if successful, otherwise None.\r\n    \"\"\"\r\n    if model is None:\r\n        logger.error(\"LLM instance is None, cannot generate structured output.\")\r\n        return None # Return None if LLM failed to initialize\r\n\r\n    logger.info(f\"[Tool] Attempting structured output generation for schema: {schema.__name__}\")\r\n    try:\r\n        # Use with_structured_output - method='function_calling' is often reliable\r\n        # method='json_mode' might be available/preferable for newer models/versions\r\n        structured_llm = model.with_structured_output(schema, method=\"function_calling\")\r\n        # structured_llm = model.with_structured_output(schema, method=\"json_mode\") # Alternative\r\n\r\n        messages = []\r\n        if system_message:\r\n            messages.append(SystemMessage(content=system_message))\r\n        messages.append(HumanMessage(content=prompt))\r\n\r\n        # Use asynchronous invoke if the model supports it (most ChatModels do)\r\n        response = await structured_llm.ainvoke(messages)\r\n\r\n        # Check if the response is of the correct Pydantic type\r\n        if isinstance(response, schema):\r\n             
logger.info(f\"[Tool] Successfully generated structured output for {schema.__name__}.\")\r\n             return response\r\n        else:\r\n             # This case might happen if parsing fails within the LangChain method\r\n             logger.error(f\"[Tool] Structured output generation returned unexpected type: {type(response)}. Expected {schema.__name__}.\")\r\n             # Log the raw response if possible for debugging\r\n             logger.error(f\"Raw response: {response}\")\r\n             return None\r\n\r\n    except NotImplementedError as nie:\r\n        # Handle cases where the model/method combination isn't supported\r\n        logger.error(f\"Structured output method not implemented for this LLM/schema combination: {nie}\")\r\n        logger.error(\"Try switching the 'method' argument in with_structured_output (e.g., 'json_mode').\")\r\n        return None\r\n    except ValidationError as ve:\r\n        # Catch Pydantic validation errors if LangChain parsing returns data that doesn't fit the schema\r\n        logger.error(f\"Pydantic validation failed for structured output: {ve}\")\r\n        # Log the prompt or relevant context if helpful for debugging schema mismatches\r\n        # logger.error(f\"Prompt leading to validation error: {prompt[:500]}...\")\r\n        return None\r\n    except Exception as e:\r\n        logger.error(f\"Error during structured output generation for {schema.__name__}: {e}\")\r\n        import traceback\r\n        traceback.print_exc() # Print full traceback for unexpected errors\r\n        return None\r\n\r\n\r\ndef create_update(state: Dict[str, Any], update_data: Dict[str, Any]) -> List[Dict[str, Any]]:\r\n    \"\"\"\r\n    Helper to create stream update dictionaries adhering to StreamUpdate schema.\r\n    Ensures required keys for StreamUpdateData are present based on schema definition.\r\n    \"\"\"\r\n    # Define REQUIRED fields for StreamUpdateData based on your schemas.py\r\n    # Assuming 'id', 'type', 
'status' are always required\r\n    required_keys = {'id', 'type', 'status'}\r\n\r\n    # Set defaults for optional fields if not provided in update_data\r\n    defaults = {\r\n        'title': None,\r\n        'message': None,\r\n        'payload': None,\r\n        'overwrite': False,\r\n        'isComplete': None,\r\n        'completedSteps': None,\r\n        'totalSteps': None,\r\n    }\r\n    # Merge defaults with provided data\r\n    data_payload = {**defaults, **update_data}\r\n\r\n    # Validate required keys\r\n    missing_keys = required_keys - data_payload.keys()\r\n    if missing_keys:\r\n        logger.warning(f\"create_update missing required keys {missing_keys} in data: {data_payload}\")\r\n        # Fill missing required keys with loud placeholder values so downstream\r\n        # consumers can spot the problem; the warning above records the details.\r\n        for key in missing_keys:\r\n            data_payload[key] = f\"MISSING_{key.upper()}\" # Make missing value obvious\r\n\r\n    # Construct the final update object matching StreamUpdate structure\r\n    timestamp = time.time()\r\n    stream_update_obj = {\r\n        # Assuming StreamUpdate is {'data': StreamUpdateData, 'timestamp': float}\r\n        # If StreamUpdate IS StreamUpdateData + timestamp, adjust structure\r\n        \"data\": data_payload,\r\n        \"timestamp\": timestamp\r\n    }\r\n\r\n    # Validate against Pydantic models if desired (adds overhead but ensures correctness)\r\n    # try:\r\n    #     StreamUpdate(**stream_update_obj) # Validate structure\r\n    # except ValidationError as ve:\r\n    #     logger.error(f\"Validation failed for created StreamUpdate object: {ve}\")\r\n    #     logger.error(f\"Object causing error: {stream_update_obj}\")\r\n    #     return [] # Return empty list on validation failure\r\n\r\n    # Return a list containing the single update dictionary\r\n    return [stream_update_obj]\r\n\r\n# --- Tool Wrappers ---\r\n\r\nasync def perform_web_search(query: str, 
max_results: int = 5) -> List[SearchResultItem]:\r\n    \"\"\"Performs web search using Tavily async client.\"\"\"\r\n    if not tavily_client:\r\n        logger.warning(f\"Tavily client not available. Skipping web search for: '{query}'\")\r\n        return []\r\n\r\n    # Ensure max_results is reasonable\r\n    max_results = max(1, min(max_results, 10)) # Clamp between 1 and 10\r\n\r\n    try:\r\n        logger.info(f\"[Tool] Calling Tavily API for: '{query}' (Max results: {max_results})\")\r\n        # Use include_raw_content=False unless you need the full webpage content\r\n        response = await tavily_client.search(\r\n            query=query,\r\n            search_depth=\"advanced\", # Use advanced for potentially better M&A context\r\n            include_answer=False, # Typically don't need Tavily's generated answer\r\n            max_results=max_results,\r\n            include_raw_content=False,\r\n            # include_images=False, # Don't need images\r\n        )\r\n        logger.info(f\"[Tool] Tavily API call successful for: '{query}'\")\r\n\r\n        results_list = response.get('results', []) if isinstance(response, dict) else []\r\n\r\n        # Convert Tavily results to our internal SearchResultItem schema\r\n        formatted_results = []\r\n        for r in results_list:\r\n             if isinstance(r, dict) and r.get('url'):\r\n                 formatted_results.append(\r\n                     SearchResultItem(\r\n                         # source='tavily_web', # Optional: track source tool\r\n                         title=r.get('title', 'N/A'),\r\n                         url=r.get('url'),\r\n                         snippet=r.get('content', '') # Tavily 'content' is the snippet\r\n                     )\r\n                 )\r\n        logger.info(f\"Formatted {len(formatted_results)} results from Tavily.\")\r\n        return formatted_results\r\n    except Exception as e:\r\n        logger.error(f\"Error during Tavily search for 
'{query}': {e}\")\r\n        return []\r\n\r\n\r\n# --- NEW yfinance Data Fetching Tool ---\r\nasync def fetch_yfinance_data(ticker_symbol: str) -> YFinanceData:\r\n    \"\"\"\r\n    Fetches comprehensive financial data for a given ticker using yfinance.\r\n    Handles potential errors during data retrieval. Returns YFinanceData dict.\r\n    \"\"\"\r\n    if not ticker_symbol or not isinstance(ticker_symbol, str):\r\n        msg = \"Invalid or missing ticker symbol provided for yfinance.\"\r\n        logger.warning(f\"[Tool] {msg}\")\r\n        return {\"error\": msg} # Return error in expected structure\r\n\r\n    logger.info(f\"[Tool] Fetching yfinance data for Ticker: {ticker_symbol}\")\r\n    # Initialize with None or empty structures matching YFinanceData TypedDict\r\n    data: YFinanceData = {\r\n        \"info\": None, \"financials\": None, \"quarterly_financials\": None,\r\n        \"balance_sheet\": None, \"quarterly_balance_sheet\": None,\r\n        \"cashflow\": None, \"quarterly_cashflow\": None,\r\n        \"major_holders\": None, \"institutional_holders\": None,\r\n        \"recommendations\": None, \"news\": [], \"error\": None # Default news to empty list\r\n    }\r\n    fetched_items_count = 0\r\n    total_items_to_fetch = 11 # info, fin*2, bs*2, cf*2, holders*2, recs, news\r\n\r\n    try:\r\n        # Instantiate Ticker object\r\n        ticker = yf.Ticker(ticker_symbol)\r\n\r\n        # Fetch data points individually with error handling\r\n        # Use asyncio.gather to fetch some potentially slow items concurrently?\r\n        # Example: Fetch info first, then others concurrently if info looks valid.\r\n\r\n        # 1. 
Fetch Info (Critical)\r\n        try:\r\n            info_data = ticker.info\r\n            # Basic validation: Check if info dict is not empty and has a common key like 'symbol' or 'longName'\r\n            if info_data and ('symbol' in info_data or 'longName' in info_data):\r\n                data['info'] = info_data\r\n                fetched_items_count += 1\r\n                logger.info(f\"  Fetched .info for {ticker_symbol}\")\r\n            else:\r\n                raise ValueError(f\"ticker.info for {ticker_symbol} is empty or invalid.\")\r\n        except Exception as e:\r\n            logger.error(f\"  Error fetching critical .info for {ticker_symbol}: {e}\")\r\n            data['error'] = f\"Failed to fetch core info for ticker '{ticker_symbol}'. It might be invalid or delisted. Error: {e}\"\r\n            # Core info is required; abort early instead of fetching the rest.\r\n            logger.warning(f\"[Tool] Aborting yfinance fetch for {ticker_symbol} due to critical info error.\")\r\n            return data # Return immediately with error\r\n\r\n        # 2. Fetch the remaining data points concurrently\r\n        async def _fetch_yf(attr_name):\r\n            try:\r\n                # yfinance attribute access is blocking I/O; run it in a worker thread\r\n                # so the asyncio.gather below can actually overlap the fetches.\r\n                result = await asyncio.to_thread(getattr, ticker, attr_name)\r\n                # Treat an empty DataFrame as missing data\r\n                if isinstance(result, pd.DataFrame) and result.empty:\r\n                    logger.warning(f\"  yfinance returned empty DataFrame for .{attr_name}\")\r\n                    return attr_name, None\r\n                elif isinstance(result, list) and not result:\r\n                    logger.warning(f\"  yfinance returned empty list for .{attr_name}\")\r\n                    return attr_name, [] # Return empty list for news\r\n                logger.info(f\"  Successfully fetched .{attr_name}\")\r\n                return attr_name, result\r\n            except Exception as e:\r\n                logger.warning(f\"  Error fetching .{attr_name} for {ticker_symbol}: {e}\")\r\n                return attr_name, None # Return None on error\r\n\r\n        attributes_to_fetch = [\r\n             'financials', 'quarterly_financials', 'balance_sheet', 'quarterly_balance_sheet',\r\n             'cashflow', 'quarterly_cashflow', 'major_holders', 'institutional_holders',\r\n             'recommendations', 'news'\r\n        ]\r\n        # Run fetches concurrently\r\n        results = await asyncio.gather(*[_fetch_yf(attr) for attr in attributes_to_fetch])\r\n\r\n        # Populate the data dictionary from results\r\n        for attr_name, result_value in results:\r\n            if result_value is not None:\r\n                data[attr_name] = result_value # Assign fetched data\r\n                fetched_items_count += 1\r\n\r\n        logger.info(f\"[Tool] Fetched {fetched_items_count}/{total_items_to_fetch} data items total from yfinance for {ticker_symbol}\")\r\n\r\n    except Exception as e:\r\n        # Catch errors during Ticker instantiation or other critical issues\r\n        error_message = f\"Critical error initializing yfinance.Ticker or during fetch process for {ticker_symbol}: {str(e)}\"\r\n        logger.error(f\"[Tool] {error_message}\")\r\n        # Ensure error key exists and is updated, avoid overwriting previous specific errors if possible\r\n        if data.get('error') is None:\r\n             data['error'] = error_message\r\n\r\n    # --- NEW: Convert DataFrames to serializable dict format ---\r\n    serializable_data = 
{}\r\n    for key, value in data.items():\r\n        if isinstance(value, pd.DataFrame):\r\n            try:\r\n                # 'split' orientation is often good for preserving structure\r\n                # Handle potential Timestamp conversion issues in index/columns here if necessary before to_dict\r\n                # Example: Convert index to string if it's Timestamp\r\n                if pd.api.types.is_datetime64_any_dtype(value.index):\r\n                    value.index = value.index.strftime('%Y-%m-%d') # Or another suitable string format\r\n                # Example: Convert columns to string if they are Timestamps (less common for yfinance columns)\r\n                if any(isinstance(col, pd.Timestamp) for col in value.columns):\r\n                    value.columns = [str(col) for col in value.columns]\r\n\r\n                serializable_data[key] = value.to_dict(orient='split')\r\n                logger.debug(f\"  Converted DataFrame '{key}' to dict.\")\r\n            except Exception as convert_e:\r\n                logger.error(f\"  Error converting DataFrame '{key}' to dict: {convert_e}\")\r\n                serializable_data[key] = {\"error\": f\"Failed to serialize DataFrame: {convert_e}\"}\r\n        else:\r\n            # Keep non-DataFrame items (like info dict, news list, error string) as they are\r\n            serializable_data[key] = value\r\n\r\n    if data.get('error'):\r\n        logger.warning(f\"Returning yfinance data for {ticker_symbol} with error: {data['error']}\")\r\n    else:\r\n        logger.info(f\"[Tool] Completed yfinance fetch and serialization for {ticker_symbol} successfully.\")\r\n\r\n    # Return the dictionary with serialized DataFrames\r\n    return serializable_data # Return the modified dictionary\r\n\r\n\r\n# --- Commented out Exa Tools (Keep if desired, ensure EXA_API_KEY is set) ---\r\n# async def perform_academic_search(query: str, max_results: int = 3) -> List[SearchResultItem]:\r\n#      if not 
exa_client:\r\n#          logger.warning(f\"Exa client not available. Skipping academic search for: '{query}'\")\r\n#          return []\r\n#      logger.info(f\"[Tool] Performing Academic Search for: {query} (Using Exa - Requires EXA_API_KEY)\")\r\n#      # ... Implementation using exa_client ...\r\n#      return []\r\n\r\n# async def perform_x_search(query: str, max_results: int = 5) -> List[SearchResultItem]:\r\n#      if not exa_client:\r\n#          logger.warning(f\"Exa client not available. Skipping X search for: '{query}'\")\r\n#          return []\r\n#      logger.info(f\"[Tool] Performing X Search for: {query} (Using Exa - Requires EXA_API_KEY)\")\r\n#      # ... Implementation using exa_client ...\r\n#      return []"
  },
  {
    "path": "super_agents/deep_research/README.md",
    "content": "# DeepResearch Agent\n\n## 概述\n\nDeepResearch Agent 是一个基于 LangGraph 构建的、能够执行深度研究并调用外部工具的复杂 Agent。它能够针对用户提供的任意主题，自动化地执行一个完整的研究流程，从搜索信息到分析数据，最终生成一份详细的研究报告。\n\n最近，我们还实现了与 Google 的 **Agent-to-Agent (A2A) 协议**的集成，使 DeepResearch Agent 可以作为标准的 A2A 服务被发现和调用，响应 A2A 请求，并通过同步或流式方式返回结构化的研究结果。\n\n## 特性\n\n### 核心功能\n\n* **自动化研究流程**：从主题分析、多源搜索到最终报告生成的端到端流程\n* **多工具集成**：集成了 Tavily 搜索、Exa 学术搜索等外部工具\n* **结构化报告**：生成包含引用、章节和关键发现的 Markdown 格式研究报告\n* **状态驱动**：基于 LangGraph 的状态驱动设计，支持复杂的研究流程管理\n\n### A2A 适配器特性\n\n* **解耦设计:** 适配器层 (`DeepResearchTaskManager`) 与 DeepResearch 核心 Agent 逻辑分离，方便维护和扩展\n* **A2A 协议兼容:** 实现了 A2A 协议的核心方法，如 `tasks/send`, `tasks/sendSubscribe`, `tasks/get` 等\n* **类型安全:** 基于 Pydantic 模型进行严格的请求/响应校验\n* **流式响应:** 支持通过 Server-Sent Events (SSE) 实时返回研究进度和中间状态更新\n* **推送通知框架:** 包含了处理和发送推送通知的逻辑框架\n\n## 目录结构\n\n```\n.\n├── a2a_adapter/                # DeepResearch 的 A2A 适配层\n│   ├── README.md              # A2A 适配器的详细文档\n│   ├── client_example.py      # 测试 A2A 适配器的客户端示例\n│   ├── deep_research_task_manager.py # 核心适配器逻辑\n│   ├── run_server.py          # 启动 A2A 服务器的脚本\n│   └── setup.py               # 配置和组装 A2A 服务器\n├── main.py                    # DeepResearch Agent 的主入口点\n├── output/                    # 生成的研究报告输出目录\n└── reason_graph/              # DeepResearch 的 LangGraph 图和状态定义\n    ├── graph.py               # LangGraph 图定义\n    ├── nodes.py               # 图节点实现\n    ├── prompt.py              # 提示模板\n    ├── schemas.py             # 数据模型定义\n    ├── state.py               # 状态定义\n    └── tools.py               # 工具实现\n```\n\n## 安装\n\n确保已安装所有必要的依赖。推荐使用虚拟环境。\n\n1. **创建并激活虚拟环境 (使用 uv):**\n   ```bash\n   uv venv\n   source .venv/bin/activate  # Linux/macOS\n   # 或者 .venv\\Scripts\\activate # Windows\n   ```\n   *(如果未使用 uv, 可用 `python -m venv .venv`)*\n\n2. **安装依赖项 (使用 uv):**\n   ```bash\n   uv sync\n   ```\n   *(如果未使用 uv, 可用 `pip install -r requirements.txt`)*\n\n## 配置\n\n1. 在项目**根目录**下创建 `.env` 文件（如果不存在，可以复制 `.env.example` 并重命名）。\n2. 
确保设置了必要的环境变量：\n   ```dotenv\n   # LLM API 配置 (根据实际使用的 LLM 修改)\n   OPENAI_API_KEY=sk-...  # 如果使用 OpenAI\n   # XAI_API_KEY=...      # 如果使用 Grok\n   # DEEPSEEK_API_KEY=... # 如果使用 DeepSeek\n   # GROQ_API_KEY=...     # 如果使用 Groq\n\n   # 研究工具 API Keys\n   TAVILY_API_KEY=tvly-...\n   EXA_API_KEY=...\n\n   # A2A 服务器配置 (如果使用 A2A 适配器)\n   A2A_HOST=127.0.0.1\n   A2A_PORT=8000\n   ```\n\n## 使用方法\n\n### 直接使用 DeepResearch Agent\n\n在项目根目录下，运行：\n\n```bash\n# 从项目根目录 (mentis/) 运行\npython -m super_agents.deep_research.main\n```\n\n脚本会提示您输入研究主题。输入后，Agent 将开始执行研究流程，并在完成后在 `output/` 目录中生成一份 Markdown 格式的研究报告。\n\n### 使用 A2A 适配器\n\n#### 启动 A2A 服务器\n\n在项目根目录下，运行：\n\n```bash\npython -m super_agents.deep_research.a2a_adapter.run_server\n```\n\n服务器将根据 `.env` 文件中的 `A2A_HOST` 和 `A2A_PORT` 启动，默认监听 `http://127.0.0.1:8000`。\n\n#### 使用客户端示例\n\n项目提供了一个专门测试 DeepResearch A2A 适配器的客户端示例。在服务器运行的情况下，打开**新的终端**并运行：\n\n```bash\npython -m super_agents.deep_research.a2a_adapter.client_example\n```\n\n它会连接服务器，获取 Agent 信息，然后提示你输入研究主题（或使用默认的特斯拉主题），并通过流式方式显示研究进度和最终报告。\n\n#### 在代码中集成（服务端）\n\n如果你想在其他 Python 代码中启动这个服务，可以导入并使用 `setup` 模块：\n\n```python\n# 导入设置函数\nfrom super_agents.deep_research.a2a_adapter.setup import setup_a2a_server\n\n# 配置并获取服务器实例\nserver = setup_a2a_server(host=\"127.0.0.1\", port=8000)\n\n# 启动服务器 (这是一个阻塞调用)\nserver.start()\n```\n\n## 内部工作流程\n\nDeepResearch Agent 执行以下研究步骤：\n\n1. **研究规划 (Plan Research)**: 分析主题，生成初步的搜索查询和分析点\n2. **多源搜索 (Multi-Source Search)**: 调用网页搜索 (Tavily)、学术搜索 (Exa) 等工具获取信息\n3. **(可选) 分析执行 (Perform Analysis)**: 对搜索结果进行初步分析（如情感、SWOT 等）\n4. **差距分析 (Gap Analysis)**: 评估已有信息，识别知识空白和局限性\n5. **(可选) 补充搜索 (Gap Filling)**: 针对知识空白进行额外的、更具针对性的搜索\n6. **最终综合 (Final Synthesis)**: 整合所有信息，提炼关键发现和不确定性\n7. **报告生成 (Report Generation)**: 将综合结果和上下文信息，撰写成一份详细的、带引用的 Markdown 研究报告\n\n## A2A 适配器架构\n\nA2A 适配器主要由以下几部分协作完成：\n\n1. 
**`deep_research_task_manager.py` (`DeepResearchTaskManager`)**:\n   * 核心适配器，继承自通用的 `InMemoryTaskManager`\n   * 实现了处理 A2A 请求的具体逻辑\n   * 将 A2A 请求转换为 DeepResearch Agent 需要的输入格式\n   * 调用 DeepResearch Agent 的流式接口来执行研究任务\n   * 处理中间状态和最终结果，转换为 A2A 协议格式\n\n2. **`setup.py`**:\n   * `setup_a2a_server` 函数：配置和组装 A2A 服务器组件\n   * 创建 `AgentCard`（描述 Agent 能力）\n   * 创建 `DeepResearchTaskManager` 实例\n   * 创建并返回配置好的 `A2AServer` 实例\n\n3. **`run_server.py`**: \n   * 简单的入口脚本，调用 `setup.py` 中的函数来启动服务\n\n## A2A 工作流程 (流式任务示例)\n\n1. **客户端**: 构造请求，调用 `client.send_task_streaming(payload)`\n2. **A2AClient**: 发送 `tasks/sendSubscribe` 的 JSON-RPC 请求到服务器\n3. **A2AServer**: 接收请求，调用 `TaskManager.on_send_task_subscribe`\n4. **DeepResearchTaskManager**: \n   * 验证请求，设置任务初始状态为 `WORKING`\n   * 启动后台任务 `_process_research_task(payload)`\n   * 设置 SSE 队列，返回 `dequeue_events_for_sse` 异步生成器\n5. **A2AServer**: 向客户端发送 HTTP 200 OK 响应，`Content-Type` 为 `text/event-stream`\n6. **客户端**: 建立 SSE 连接，开始等待事件\n7. **服务器 (后台任务)**: 调用 `research_app.astream` 执行 LangGraph 图\n8. **服务器 (后台任务)**: 每次产生状态更新，解析并创建 `TaskStatusUpdateEvent`\n9. **服务器**: 将事件放入队列，然后发送给客户端\n10. **客户端**: 接收事件，处理并显示进度更新\n11. **服务器 (后台任务)**: 研究完成，创建最终 `Artifact` 和 `COMPLETED` 状态\n12. **客户端**: 接收并处理最终报告和状态事件\n\n## 与其他系统集成\n\n由于实现了标准的 A2A 协议，DeepResearch Agent 可以方便地集成到：\n\n* Google Assistant 等支持 A2A 的平台\n* 其他实现了 A2A 客户端的 Agent 或应用程序\n* 需要调用强大研究能力的自定义前端或后端系统\n\n## 故障排除\n\n如果遇到问题：\n\n1. **检查 `.env` 文件:** 确保所有必需的 API 密钥都已正确配置且有效\n2. **检查服务器日志:** 优先查看 `ERROR` 或 `WARNING` 级别的日志\n3. **检查客户端日志:** 客户端脚本的输出可以帮助判断问题发生在请求发送阶段还是响应处理阶段\n4. **端口冲突:** 确保端口 8000 没有被其他应用程序占用\n5. **依赖安装:** 确认所有依赖都已在激活的虚拟环境中正确安装\n\n## 贡献\n\n欢迎对 DeepResearch Agent 或 A2A 适配器贡献代码、报告问题或提出改进建议。"
  },
  {
    "path": "super_agents/deep_research/__init__.py",
    "content": ""
  },
  {
    "path": "super_agents/deep_research/a2a_adapter/README.md",
    "content": "# DeepResearch A2A 适配器\n\n## 概述\n\n本模块提供了一个将 **DeepResearch Agent**（一个基于 LangGraph 构建的、能够执行深度研究并调用外部工具的复杂 Agent）与 Google 的 **Agent-to-Agent (A2A) 协议** 进行集成的适配层。通过这个适配器，强大的 DeepResearch Agent 可以作为一个标准的 A2A 服务被发现和调用，响应 A2A 请求，并通过同步或流式方式返回结构化的研究结果。\n\n## 特性\n\n* **解耦设计:** 适配器层 (`AgentTaskManager`) 与 DeepResearch 核心 Agent 逻辑分离，方便维护和扩展。\n* **A2A 协议兼容:** 实现了 A2A 协议的核心方法，如 `tasks/send`, `tasks/sendSubscribe`, `tasks/get` 等，并提供 `/.well-known/agent.json` 服务发现端点。\n* **类型安全:** 基于 `core/a2a/types.py` 中的 Pydantic 模型进行严格的请求/响应校验。\n* **工具集成:** 支持 DeepResearch Agent 在执行任务时调用外部工具 (如 Tavily, Exa API, LLM)。\n* **流式响应:** 支持通过 Server-Sent Events (SSE) 实时返回研究进度和中间状态更新。*(当前版本的更新详细程度取决于 `_process_stream_updates` 的实现)*\n* **推送通知框架:** 包含了处理和发送推送通知的逻辑框架。*(需要配置真实的推送发送器才能实际发送)*\n\n## 目录结构 (相关部分)\n\n```\n.\n├── core/                           # 核心 A2A 协议实现 (复用)\n│   └── a2a/\n│       ├── client/\n│       │   └── client.py           # A2AClient 客户端库实现\n│       ├── server/\n│       │   ├── server.py           # A2AServer HTTP 服务器实现\n│       │   └── task_manager.py     # TaskManager 基础接口\n│       ├── agent_task_manager.py     # (之前的 LangGraph Agent 任务管理器示例)\n│       └── types.py                # A2A 协议的 Pydantic 模型定义\n├── super_agents/                   # 可能包含多个 Super Agent\n│   └── deep_research/              # DeepResearch Agent 核心代码\n│       ├── a2a_adapter/            # DeepResearch 的 A2A 适配层\n│       │   ├── deep_research_task_manager.py # ★ 本适配器的核心逻辑\n│       │   ├── setup.py              # ★ 配置和组装 A2A 服务器\n│       │   ├── run_server.py         # ★ 启动服务器的脚本\n│       │   └── client_example.py       # ★ 测试本适配器的客户端示例\n│       ├── reason_graph/             # DeepResearch 的 LangGraph 图和状态定义 (假设)\n│       │   ├── graph.py\n│       │   ├── state.py\n│       │   └── schemas.py\n│       └── ...                     
# DeepResearch 的其他模块\n├── .env                            # 存储环境变量 - *需要自行创建*\n├── requirements.txt                # Python 依赖项列表 (假设存在)\n└── README.md                       # 项目主 README (可能)\n```\n*(★ 表示本文档主要涉及的文件)*\n\n## 安装\n\n确保已安装所有必要的依赖。推荐使用虚拟环境。\n\n1.  **创建并激活虚拟环境 (使用 uv):**\n    ```bash\n    uv venv\n    source .venv/bin/activate  # Linux/macOS\n    # 或者 .venv\\Scripts\\activate # Windows\n    ```\n    *(如果未使用 uv, 可用 `python -m venv .venv`)*\n\n2.  **安装依赖项 (使用 uv):**\n    ```bash\n    uv sync\n    ```\n    *(如果未使用 uv, 可用 `pip install -r requirements.txt`)*\n\n## 配置\n\n1.  在项目**根目录**下创建 `.env` 文件（如果不存在，可以复制 `.env.example` 并重命名）。\n2.  确保设置了必要的环境变量。根据服务器日志和 DeepResearch 的可能需求，可能包括：\n    ```dotenv\n    # A2A 服务器配置\n    A2A_HOST=127.0.0.1\n    A2A_PORT=8000\n\n    # LLM API 配置 (示例为 OpenAI/XAI, 根据实际使用的 LLM 修改)\n    # OPENAI_API_KEY=sk-...\n    XAI_API_KEY=your_xai_api_key # 如果使用 Grok\n    # GROQ_API_KEY=... # 如果使用 Groq\n\n    # DeepResearch 可能需要的其他工具 API Keys\n    TAVILY_API_KEY=tvly-...\n    EXA_API_KEY=your_exa_api_key\n    # 其他 DeepResearch 可能需要的 Keys...\n    ```\n\n## 使用方法\n\n### 启动 A2A 服务器\n\n在项目根目录下，运行：\n\n```bash\npython -m super_agents.deep_research.a2a_adapter.run_server\n```\n\n服务器将根据 `.env` 文件中的 `A2A_HOST` 和 `A2A_PORT` 启动，默认监听 `http://127.0.0.1:8000`。\n\n### 客户端示例\n\n项目提供了一个专门测试 DeepResearch A2A 适配器的客户端示例。在服务器运行的情况下，打开**新的终端**并运行：\n\n```bash\npython -m super_agents.deep_research.a2a_adapter.client_example\n```\n\n它会连接服务器，获取 Agent 信息，然后提示你输入研究主题（或使用默认的特斯拉主题），并通过流式方式（如果 AgentCard 声明支持）显示研究进度和最终报告。\n\n### 在代码中集成（服务端）\n\n如果你想在其他 Python 代码中启动这个服务，可以导入并使用 `setup` 模块：\n\n```python\n# 导入设置函数\nfrom super_agents.deep_research.a2a_adapter.setup import setup_a2a_server\n\n# 配置并获取服务器实例 (host/port 可选，会使用 setup 中的默认值或环境变量)\nserver = setup_a2a_server(host=\"127.0.0.1\", port=8000)\n\n# 启动服务器 (这是一个阻塞调用)\nserver.start()\n```\n\n## 架构与核心组件\n\n此 A2A 适配器主要由以下几部分协作完成：\n\n1.  **`core/a2a/types.py`**: 定义 A2A 协议数据结构的 Pydantic 模型，确保类型安全和数据校验。\n2.  
**`core/a2a/server/server.py` (`A2AServer`)**: 通用的 A2A HTTP 服务器，负责接收请求、解析 JSON-RPC、验证方法、调用 TaskManager 处理，并根据 TaskManager 的返回类型（`JSONRPCResponse` 或 `AsyncIterable`）发送正确的 HTTP 响应（`application/json` 或 `text/event-stream`）。\n3.  **`super_agents/deep_research/a2a_adapter/deep_research_task_manager.py` (`DeepResearchTaskManager`)**:\n    * **核心适配器**，继承自通用的 `InMemoryTaskManager`（提供内存任务存储和基础方法）。\n    * 实现了处理 A2A 请求（如 `on_send_task`, `on_send_task_subscribe`）的具体逻辑。\n    * **关键职责:**\n        * 将传入 A2A 请求中的用户查询 (`message.parts`) 转换为 DeepResearch Agent (`research_app`) 需要的输入格式（目前是提取文本放入 `initial_state[\"topic\"]`）。\n        * 调用 DeepResearch Agent 的流式接口 (`research_app.astream`) 来执行研究任务。\n        * **处理中间状态:** 在 `_process_stream_updates` 方法中，解析 `research_app` 流式输出的状态更新 (`StreamUpdate` 对象)，提取信息，并将其转换为 A2A 的 `TaskStatusUpdateEvent`（包含 `TextPart` 和 `DataPart`），通过 SSE 推送给客户端。**此方法的实现质量直接决定了客户端收到的进度信息的丰富程度。**\n        * **处理最终结果:** 在 `_finalize_task` 方法中，从 Agent 的最终状态提取 Markdown 报告，创建 A2A `Artifact`，更新任务状态为 `COMPLETED`，并通过 SSE 推送 `TaskArtifactUpdateEvent` 和最终的 `TaskStatusUpdateEvent`。\n        * **SSE 队列管理:** 实现了 `setup_sse_consumer`, `enqueue_events_for_sse`, `dequeue_events_for_sse` 等方法来管理与客户端的 SSE 连接和事件推送。\n        * **推送通知 (框架):** 实现了 `send_task_notification` 方法框架，但需要注入真实的 `notification_sender_auth` 对象才能实际发送。\n4.  **`super_agents/deep_research/a2a_adapter/setup.py`**:\n    * `setup_a2a_server` 函数：集中配置和组装上述组件。创建 `AgentCard`（描述 Agent 能力，包括是否支持流式和推送），创建 `DeepResearchTaskManager` 实例（并注入模拟的推送通知发送器），最后创建并返回配置好的 `A2AServer` 实例。\n    * `run_server` 函数：调用 `setup_a2a_server` 并启动服务器。\n5.  **`super_agents/deep_research/a2a_adapter/run_server.py`**: 简单的入口脚本，调用 `setup.py` 中的 `run_server` 函数来启动服务。\n\n## 工作流程 (流式任务示例)\n\n1.  **客户端**: 构造 `payload` (符合 `TaskSendParams`, 含 `id`, `message`), 调用 `client.send_task_streaming(payload)`.\n2.  **A2AClient**: 发送 `method: tasks/sendSubscribe` 的 JSON-RPC POST 请求到服务器。\n3.  **A2AServer**: 接收请求，验证，调用 `TaskManager.on_send_task_subscribe`.\n4.  
**DeepResearchTaskManager**: 验证请求，设置任务初始状态为 `WORKING`，**启动后台任务** `_process_research_task(payload)`，设置 SSE 队列，**立即返回 `dequeue_events_for_sse` 异步生成器**。\n5.  **A2AServer**: 检测到返回的是 `AsyncIterable`，向客户端发送 HTTP 200 OK 响应，`Content-Type` 为 `text/event-stream`，保持连接。\n6.  **客户端**: 收到 200 OK 和正确的 `Content-Type`，建立 SSE 连接，开始 `async for` 循环等待事件。\n7.  **服务器 (后台任务 `_process_research_task`)**: 调用 `research_app.astream` 执行 LangGraph 图。\n8.  **服务器 (后台任务 `_process_research_task`)**: 每次 `research_app` 产生状态更新，调用 `_process_stream_updates`。\n9.  **服务器 (`_process_stream_updates`)**: 解析状态更新，创建 `TaskStatusUpdateEvent` (含 `TextPart`/`DataPart`)，调用 `enqueue_events_for_sse` 将事件放入队列。\n10. **服务器 (`dequeue_events_for_sse`)**: 从队列中获取事件，包装成 `SendTaskStreamingResponse`，`yield` 给 `A2AServer`。\n11. **A2AServer**: 将 `SendTaskStreamingResponse` 格式化为 SSE 事件 (`data: {...}\\n\\n`) 发送给客户端。\n12. **客户端**: `async for` 循环接收到事件，解析 `SendTaskStreamingResponse`，处理 `result` 中的 `TaskStatusUpdateEvent` 并打印“进度更新”。\n13. **服务器 (后台任务 `_process_research_task`)**: 研究完成，调用 `_finalize_task`。\n14. **服务器 (`_finalize_task`)**: 创建最终 `Artifact` 和 `COMPLETED` 状态，调用 `enqueue_events_for_sse` 发送 `TaskArtifactUpdateEvent` 和 `TaskStatusUpdateEvent(final=True)`，最后发送 `SSE_CLOSE_SENTINEL`。\n15. 
**客户端**: 接收并处理 `TaskArtifactUpdateEvent`（打印报告），接收到 `final=True` 的状态事件，接收到关闭信号后 `async for` 循环结束。\n\n## 关键实现细节总结\n\n* **Agent 接口:** `AgentTaskManager` 期望注入的 Agent 对象至少实现 `invoke(query, session_id)` 和 `stream(query, session_id)` 方法（后者需为异步生成器）。\n* **流式更新内容:** 客户端看到的流式更新的详细程度，完全取决于 `DeepResearchTaskManager._process_stream_updates` 方法如何解析 Agent 内部状态并构造 `TextPart` 或 `DataPart`。\n* **Pydantic 严格性:** A2A 交互的健壮性很大程度上依赖于 `types.py` 中模型的准确性和双方对这些模型的遵守。任何必需字段的缺失或类型错误都会导致 `ValidationError`。\n* **SSE 实现:** 流式响应依赖于 `AgentTaskManager` 中 SSE 队列的正确实现（`setup_sse_consumer`, `enqueue_events_for_sse`, `dequeue_events_for_sse`）。\n\n## 与其他系统集成\n\n由于实现了标准的 A2A 协议，此 DeepResearch Agent 服务可以方便地集成到：\n\n* Google Assistant 等支持 A2A 的平台。\n* 其他实现了 A2A 客户端的 Agent 或应用程序。\n* 需要调用强大研究能力的自定义前端或后端系统。\n\n## 故障排除\n\n如果遇到问题：\n\n1.  **检查 `.env` 文件:** 确保所有必需的 API 密钥（OpenAI/XAI, Tavily, Exa 等）都已正确配置且有效。\n2.  **检查服务器日志:** `run_server.py` 的输出包含详细的执行信息和错误栈。优先查看 `ERROR` 或 `WARNING` 级别的日志。\n3.  **检查客户端日志:** 客户端脚本的输出可以帮助判断问题发生在请求发送阶段还是响应处理阶段。`httpx` 的日志可以确认网络请求是否成功。\n4.  **端口冲突:** 确保端口 8000 没有被其他应用程序占用。\n5.  **依赖安装:** 确认所有 `requirements.txt` 中的依赖都已在激活的虚拟环境中正确安装。\n\n## 贡献\n\n欢迎对此适配器或 DeepResearch Agent 本身贡献代码、报告问题或提出改进建议。请参考项目（如果公开）的贡献指南。"
  },
  {
    "path": "super_agents/deep_research/a2a_adapter/__init__.py",
    "content": "# super_agents/deep_research/a2a_adapter/__init__.py\n\n# 确保导出关键组件\nfrom super_agents.deep_research.a2a_adapter.deep_research_task_manager import DeepResearchTaskManager\nfrom super_agents.deep_research.a2a_adapter.setup import setup_a2a_server\n\n__all__ = [\"DeepResearchTaskManager\", \"setup_a2a_server\"]"
  },
  {
    "path": "super_agents/deep_research/a2a_adapter/client_example.py",
    "content": "# super_agents/deep_research/a2a_adapter/client_example.py\n\nimport os\nimport sys\nimport asyncio\nimport json\nimport logging\nfrom pathlib import Path\nfrom typing import Optional\nfrom uuid import uuid4\n\n# 添加项目根目录到路径\ncurrent_script_path = Path(__file__).resolve()\nproject_root = current_script_path.parent.parent.parent.parent\nif str(project_root) not in sys.path:\n    sys.path.insert(0, str(project_root))\n\n# 加载环境变量\nfrom dotenv import load_dotenv\nload_dotenv()\n\n# 导入A2A客户端和所需类型\nfrom core.a2a.client.client import A2AClient\nfrom core.a2a.client.card_resolver import A2ACardResolver\n# Import necessary types for requests and responses\nfrom core.a2a.types import (\n    Message, TextPart, AgentCard, Task, TaskState, DataPart,\n    SendTaskResponse, GetTaskResponse, JSONRPCError,\n    SendTaskStreamingResponse, TaskStatusUpdateEvent, TaskArtifactUpdateEvent # Event types\n)\n\n\n# 配置日志\nlogging.basicConfig(level=logging.INFO)\nlogger = logging.getLogger(__name__)\n\nasync def main():\n    \"\"\"\n    DeepResearch A2A 客户端示例\n    \"\"\"\n    # 定义服务器配置\n    HOST = os.getenv(\"A2A_HOST\", \"127.0.0.1\")\n    PORT = int(os.getenv(\"A2A_PORT\", \"8000\"))\n\n    print(f\"\\n=== DeepResearch A2A 客户端示例 ===\\n\")\n    print(f\"连接到服务器: http://{HOST}:{PORT}\")\n    print(\"-\" * 40)\n\n    # 创建A2A客户端\n    client = A2AClient(url=f\"http://{HOST}:{PORT}\")\n\n    # 获取Agent卡片信息\n    agent_card: Optional[AgentCard] = None\n    try:\n        card_resolver = A2ACardResolver(base_url=f\"http://{HOST}:{PORT}\")\n        # 此处假定 get_agent_card() 为同步调用；若你的实现为异步，请改用 await。\n        try:\n          
  agent_card = card_resolver.get_agent_card() # Assuming sync for now\n            print(\"\\n=== Agent卡片信息 ===\\n\")\n            print(json.dumps(agent_card.model_dump(exclude_none=True), indent=2, ensure_ascii=False))\n            print(\"-\" * 40)\n        except Exception as card_err:\n             logger.warning(f\"同步获取Agent卡片失败: {card_err}. 可能需要异步获取或直接请求URL.\")\n             # Fallback or re-raise depending on requirements\n\n    except Exception as e:\n        logger.error(f\"处理Agent卡片时出错: {e}\")\n        # Decide if execution should continue without the card info\n        # return\n\n    # --- 使用 Agent Card 判断是否支持流式 ---\n    # Use a default if agent_card couldn't be fetched\n    supports_streaming = False\n    if agent_card and hasattr(agent_card, 'capabilities'):\n        supports_streaming = agent_card.capabilities.streaming\n    else:\n        logger.warning(\"无法获取 Agent Card 或 Capabilities，将尝试非流式请求。\")\n\n\n    # 发送研究请求\n    research_topic = input(\"\\n请输入研究主题 (或按 Enter 使用默认): \")\n    if not research_topic:\n        research_topic = \"特斯拉电动汽车的市场分析和未来发展趋势\"\n        print(f\"使用默认研究主题: {research_topic}\")\n\n    print(\"\\n=== 发送研究请求 ===\\n\")\n    print(f\"研究主题: {research_topic}\")\n    print(\"正在处理，请稍候...\")\n\n    # 创建消息\n    message = Message(\n        role=\"user\",\n        parts=[TextPart(text=research_topic)] # type=\"text\" is default\n    )\n\n    # 发送任务并获取响应\n    try:\n        # 生成唯一任务ID\n        task_id = \"deep_research_\" + uuid4().hex\n\n        # 构建任务参数字典\n        payload = {\n            \"id\": task_id,\n            \"sessionId\": \"deep_research_session_\" + uuid4().hex, # Unique session per run\n            \"message\": message.model_dump(), # Serialize message to dict\n            \"acceptedOutputModes\": [\"text\"],\n            \"metadata\": {\"skill_name\": \"deep_research\"} # Match skill name/id from setup.py\n        }\n\n        if supports_streaming:\n            # --- 修正流式API调用和处理 ---\n            print(\"\\n=== 流式响应 
===\\n\")\n            print(f\"任务ID: {task_id}\")\n            # 1. 调用 send_task_streaming (不使用 await) 获取异步生成器\n            event_stream_generator = client.send_task_streaming(payload=payload)\n\n            # 2. 使用 async for 迭代生成器\n            async for event_response in event_stream_generator:\n                logger.debug(f\"Received stream event: {event_response}\")\n\n                # 3. 检查整个响应是否有错误\n                if event_response.error:\n                     error: JSONRPCError = event_response.error\n                     print(f\"流式传输中出错: Code={error.code}, Message={error.message}\")\n                     continue # 或者 break\n\n                # 4. 获取事件具体内容 (TaskStatusUpdateEvent 或 TaskArtifactUpdateEvent)\n                event = event_response.result\n                if not event:\n                     logger.warning(\"Received stream response with empty result.\")\n                     continue\n\n                # 5. 根据事件类型处理 (使用 isinstance 或 hasattr)\n                if isinstance(event, TaskStatusUpdateEvent):\n                    if event.status and event.status.message and event.status.message.parts:\n                        readable_summary = \"\"\n                        structured_info = {}\n                        for part in event.status.message.parts:\n                            if isinstance(part, TextPart):\n                                readable_summary = part.text # 获取人类可读的文本\n                            elif isinstance(part, DataPart): # *** 处理 DataPart ***\n                                structured_info = part.data # 获取结构化数据字典\n                                logger.debug(f\"收到结构化数据: {structured_info}\") # 打印原始数据\n\n                        # 你可以根据需要选择性地打印信息\n                        if readable_summary:\n                            print(f\"进度更新 (文本): {readable_summary}\")\n                        # 或者/并且 打印结构化信息\n                        if structured_info:\n                            step = structured_info.get('step', '-')\n     
                       status = structured_info.get('status', '-')\n                            detail = structured_info.get('detail', '-')\n                            query = structured_info.get('query')\n                            source = structured_info.get('source')\n                            count = structured_info.get('results_count')\n\n                            print(f\"进度更新 (结构化): [步骤: {step}, 状态: {status}]\", end=\"\")\n                            if source: print(f\" - 来源: {source}\", end=\"\")\n                            if query: print(f\" - 查询: '{query}'\", end=\"\")\n                            if count is not None: print(f\" - 结果数: {count}\", end=\"\")\n                            print(f\" - 详情: {detail}\")\n\n                elif isinstance(event, TaskArtifactUpdateEvent):\n                    # ... 处理 artifact.parts (也可能包含 DataPart) ...\n                    print(\"\\n收到最终 Artifact:\")\n                    if event.artifact and event.artifact.parts:\n                        full_report = \"\"\n                        for part in event.artifact.parts:\n                            if isinstance(part, TextPart):\n                                print(f\"  研究报告片段 (TextPart): {part.text}\")\n                                full_report += part.text + \"\\n\"\n                            elif isinstance(part, DataPart):\n                                # 如果最终报告也可能在 DataPart 中\n                                print(f\"  研究报告片段 (DataPart): {part.data}\")\n                                # 假设报告主要在 TextPart\n                        # 如果需要打印完整报告\n                        print(f\"\\n=== 最终研究报告 (来自Artifact) ===\\n{full_report.strip()}\")\n\n                else:\n                    logger.warning(f\"收到未知类型的流式事件: {type(event)}\")\n\n            print(\"流式任务处理完成。\")\n\n        else:\n            # --- 修正非流式API调用和处理 ---\n            print(\"\\n=== 非流式响应 ===\\n\")\n            # 1. 
调用 send_task\n            send_response: SendTaskResponse = await client.send_task(payload=payload)\n            logger.debug(f\"Send task response: {send_response}\")\n\n            if send_response.error:\n                error: JSONRPCError = send_response.error\n                print(f\"发送任务时出错: Code={error.code}, Message={error.message}\")\n                return # Exit if sending failed\n            if not send_response.result:\n                print(f\"发送任务成功，但未收到任务详情: {send_response}\")\n                # Use the task_id we sent for polling\n            elif send_response.result.id != task_id:\n                logger.warning(f\"服务器返回的任务ID '{send_response.result.id}' 与客户端发送的ID '{task_id}' 不匹配。\")\n                task_id = send_response.result.id # Use server's ID\n\n            print(f\"任务已发送，ID: {task_id}\")\n\n            # 2. 轮询 get_task\n            print(\"等待任务完成...\")\n            task_result: Optional[Task] = None\n            for attempt in range(20): # Increase attempts for potentially long research tasks\n                await asyncio.sleep(5) # Increase sleep time\n                get_payload = {\"id\": task_id}\n                logger.debug(f\"Getting task with payload: {get_payload} (Attempt {attempt+1})\")\n                get_response: GetTaskResponse = await client.get_task(payload=get_payload)\n                logger.debug(f\"Get task response: {get_response}\")\n\n                if get_response.error:\n                     error: JSONRPCError = get_response.error\n                     print(f\"获取任务时出错: Code={error.code}, Message={error.message}\")\n                     return\n                if not get_response.result:\n                     print(f\"获取任务成功，但未收到任务详情: {get_response}\")\n                     continue\n\n                task_result = get_response.result\n                print(f\"  当前任务状态: {task_result.status.state.value}\")\n                if task_result.status.state in [TaskState.COMPLETED, TaskState.FAILED, 
TaskState.CANCELED]:\n                    break\n            else:\n                print(\"任务在限定时间内未完成。\")\n                return\n\n            # 3. 处理最终结果\n            if task_result.status.state == TaskState.COMPLETED and task_result.artifacts:\n                print(f\"\\n=== 研究报告 ===\")\n                full_report = \"\"\n                for artifact in task_result.artifacts:\n                    if artifact.parts:\n                        for part in artifact.parts:\n                            if isinstance(part, TextPart):\n                                full_report += part.text + \"\\n\" # Concatenate parts\n                print(full_report.strip())\n\n            elif task_result.status.state == TaskState.FAILED:\n                 error_msg = \"未知错误\"\n                 if task_result.status.message and task_result.status.message.parts:\n                     if isinstance(task_result.status.message.parts[0], TextPart):\n                        error_msg = task_result.status.message.parts[0].text\n                 print(f\"任务失败: {error_msg}\")\n            else:\n                 print(f\"任务最终状态为: {task_result.status.state.value}\")\n\n\n    except Exception as e:\n        logger.error(f\"处理任务时发生异常: {e}\", exc_info=True)\n        print(f\"处理任务时出错: {e}\")\n\n    print(\"\\n=== 示例完成 ===\\n\")\n\nif __name__ == \"__main__\":\n    try:\n        asyncio.run(main())\n    except KeyboardInterrupt:\n        print(\"\\n客户端已手动停止。\")\n    except Exception as e:\n        logger.error(f\"运行客户端时发生未处理的异常: {e}\", exc_info=True)"
  },
  {
    "path": "super_agents/deep_research/a2a_adapter/deep_research_task_manager.py",
    "content": "# super_agents/deep_research/a2a_adapter/deep_research_task_manager.py\nimport asyncio\nimport logging\nimport traceback\nfrom typing import Dict, Any, Union, AsyncIterable, Optional, List\nfrom collections import defaultdict # Import defaultdict\n\n# Ensure all necessary types are imported\nfrom core.a2a.types import (\n    TaskState, TaskStatus, Task, Artifact, Message, TextPart, DataPart,\n    SendTaskRequest, SendTaskResponse, GetTaskRequest, GetTaskResponse,\n    CancelTaskRequest, CancelTaskResponse, SendTaskStreamingRequest, SendTaskStreamingResponse,\n    SetTaskPushNotificationRequest, SetTaskPushNotificationResponse,\n    GetTaskPushNotificationRequest, GetTaskPushNotificationResponse,\n    TaskResubscriptionRequest, TaskSendParams, JSONRPCResponse, InvalidParamsError,\n    TaskNotFoundError, TaskNotCancelableError, PushNotificationNotSupportedError,\n    TaskArtifactUpdateEvent, TaskStatusUpdateEvent, InternalError, TaskIdParams,\n    PushNotificationConfig\n)\nfrom core.a2a.server.task_manager import TaskManager, InMemoryTaskManager\nfrom core.a2a.server import utils\n\n# 导入DeepResearch相关组件\nfrom super_agents.deep_research.reason_graph.graph import get_app\nfrom super_agents.deep_research.reason_graph.state import ResearchState\n# Assume StreamUpdate has a specific structure, likely including a 'data' field\nfrom super_agents.deep_research.reason_graph.schemas import StreamUpdate\n\nlogger = logging.getLogger(__name__)\n\n# Sentinel object to signal queue closure\nSSE_CLOSE_SENTINEL = object()\n\nclass DeepResearchTaskManager(InMemoryTaskManager):\n    \"\"\"\n    DeepResearchTaskManager (已修改 _process_stream_updates 以发送更详细的日志)\n    \"\"\"\n    def __init__(self, notification_sender_auth=None):\n        super().__init__()\n        self.notification_sender_auth = notification_sender_auth\n        self.research_app = get_app(for_web=True)\n        self.sse_queues: Dict[str, List[asyncio.Queue]] = defaultdict(list)\n        
self.sse_queues_lock = asyncio.Lock()\n        # --- ADDED: Track last processed stream update index per task ---\n        self.last_stream_update_index: Dict[str, int] = defaultdict(int)\n        # --- END ADDED ---\n\n    # --- send_task_notification method (unchanged) ---\n    async def send_task_notification(self, task: Task):\n        # ... (same as previous version) ...\n        if not task or not task.id: logger.error(\"send_task_notification called with invalid task object.\"); return\n        try:\n            has_info = await self.has_push_notification_info(task.id)\n            if not has_info: logger.debug(f\"No push notification info found for task {task.id}\"); return\n            push_info: Optional[PushNotificationConfig] = await self.get_push_notification_info(task.id)\n            if not push_info or not push_info.url: logger.warning(f\"Push notification info incomplete or URL missing for task {task.id}\"); return\n            if self.notification_sender_auth:\n                logger.info(f\"Sending push notification for task {task.id} to {push_info.url} (State: {task.status.state.value})\")\n                notification_data = task.model_dump(exclude_none=True)\n                await self.notification_sender_auth.send_push_notification(push_info.url, data=notification_data)\n            else: logger.warning(f\"Push notification URL configured for task {task.id} but no 'notification_sender_auth' object was provided.\")\n        except AttributeError as e: logger.error(f\"Push notification methods missing in base class? Error: {e}\", exc_info=True)\n        except Exception as e: logger.error(f\"Failed to send push notification for task {task.id}: {e}\", exc_info=True)\n\n    # --- SSE Management Methods (unchanged) ---\n    async def setup_sse_consumer(self, task_id: str) -> asyncio.Queue:\n        # ... 
(same as previous version) ...\n        queue = asyncio.Queue()\n        async with self.sse_queues_lock: self.sse_queues[task_id].append(queue)\n        logger.debug(f\"SSE consumer queue created and registered for task {task_id}. Total consumers: {len(self.sse_queues[task_id])}\")\n        return queue\n\n    async def enqueue_events_for_sse(self, task_id: str, event: Union[TaskStatusUpdateEvent, TaskArtifactUpdateEvent, object]):\n        # ... (same as previous version) ...\n        async with self.sse_queues_lock:\n            if task_id in self.sse_queues:\n                queues = self.sse_queues[task_id]\n                logger.debug(f\"Enqueuing event for task {task_id} to {len(queues)} consumers. Event: {type(event)}\")\n                put_tasks = [q.put(event) for q in queues]\n                await asyncio.gather(*put_tasks, return_exceptions=True)\n            else: logger.debug(f\"No active SSE consumers found for task {task_id} when enqueuing event.\")\n\n    async def _cleanup_sse_queues(self, task_id: str, queue_to_remove: Optional[asyncio.Queue] = None):\n        # ... 
(same as previous version) ...\n        async with self.sse_queues_lock:\n            if task_id in self.sse_queues:\n                if queue_to_remove:\n                    try: self.sse_queues[task_id].remove(queue_to_remove); logger.debug(f\"Removed specific SSE queue for task {task_id}.\")\n                    except ValueError: logger.warning(f\"Attempted to remove a non-existent SSE queue for task {task_id}.\")\n                else:\n                    queues = self.sse_queues.pop(task_id, []); logger.debug(f\"Cleaning up all {len(queues)} SSE queues for task {task_id}.\")\n                if not self.sse_queues.get(task_id): self.sse_queues.pop(task_id, None); logger.debug(f\"Task ID {task_id} removed from SSE queue registry.\")\n            else: logger.debug(f\"No SSE queues found for task {task_id} during cleanup.\")\n            # --- ADDED: Clean up last processed index ---\n            self.last_stream_update_index.pop(task_id, None)\n            logger.debug(f\"Removed last stream update index tracker for task {task_id}.\")\n            # --- END ADDED ---\n\n    async def dequeue_events_for_sse(self, request_id: str, task_id: str, queue: asyncio.Queue) -> AsyncIterable[SendTaskStreamingResponse]:\n        # ... (same as previous version) ...\n        logger.debug(f\"Starting SSE event dequeuing for task {task_id}, request {request_id}.\")\n        try:\n            while True:\n                event = await queue.get()\n                logger.debug(f\"Dequeued event for task {task_id}, request {request_id}. Event type: {type(event)}\")\n                try:\n                    if event is SSE_CLOSE_SENTINEL: logger.debug(f\"SSE close sentinel received for task {task_id}, request {request_id}. 
Closing stream.\"); break\n                    if isinstance(event, (TaskStatusUpdateEvent, TaskArtifactUpdateEvent)): yield SendTaskStreamingResponse(id=request_id, result=event)\n                    else: logger.warning(f\"Dequeued unexpected event type for SSE: {type(event)} for task {task_id}\")\n                finally:\n                     if hasattr(queue, 'task_done'): queue.task_done()\n                 # Check final flag AFTER processing the event\n                if hasattr(event, 'final') and event.final: logger.debug(f\"Received final event flag for task {task_id}, request {request_id}. Closing stream after yielding.\"); break\n        except asyncio.CancelledError: logger.info(f\"SSE stream cancelled for task {task_id}, request {request_id}.\")\n        except Exception as e: logger.error(f\"Error during SSE event dequeuing for task {task_id}, request {request_id}: {e}\", exc_info=True)\n        finally: logger.debug(f\"Cleaning up SSE queue for task {task_id}, request {request_id}.\"); await self._cleanup_sse_queues(task_id, queue)\n\n    # --- _get_user_query (保持不变) ---\n    def _get_user_query(self, task_send_params: TaskSendParams) -> str:\n        # ... (代码同上一版本) ...\n        if not task_send_params.message or not task_send_params.message.parts: logger.warning(f\"[_get_user_query] Message or parts are empty for task {task_send_params.id}\"); return \"\"\n        part = task_send_params.message.parts[0]; text = \"\"\n        if isinstance(part, TextPart): text = part.text\n        elif isinstance(part, dict) and part.get(\"type\") == \"text\": text = part.get(\"text\", \"\")\n        elif hasattr(part, 'text'): text = part.text\n        else: logger.error(f\"[_get_user_query] First part is not a recognized text part! 
Type: {type(part)}, Value: {part!r}\"); raise ValueError(f\"Expected first message part to contain text, but got {type(part)}\")\n        logger.debug(f\"[_get_user_query] Extracted query: '{text}'\"); return text.strip()\n\n    # --- _validate_request (unchanged) ---\n    def _validate_request(self, request: Union[SendTaskRequest, SendTaskStreamingRequest]) -> JSONRPCResponse | None:\n        # ... (same as previous version) ...\n        task_send_params: TaskSendParams = request.params; supported_content_types = [\"text\"]\n        if not utils.are_modalities_compatible(task_send_params.acceptedOutputModes, supported_content_types): logger.warning(\"Unsupported output mode. Received %s, supported %s\", task_send_params.acceptedOutputModes, supported_content_types); return utils.new_incompatible_types_error(request.id)\n        if task_send_params.pushNotification and not task_send_params.pushNotification.url: logger.warning(\"Push notification URL is missing\"); return JSONRPCResponse(id=request.id, error=InvalidParamsError(message=\"Push notification URL is missing\"))\n        return None\n\n    # --- on_send_task (unchanged) ---\n    async def on_send_task(self, request: SendTaskRequest) -> SendTaskResponse:\n        # ... 
(same as previous version) ...\n        validation_error = self._validate_request(request)\n        if validation_error: return SendTaskResponse(id=request.id, error=validation_error.error)\n        if request.params.pushNotification:\n            try:\n                if not await self.set_push_notification_info(request.params.id, request.params.pushNotification): return SendTaskResponse(id=request.id, error=InvalidParamsError(message=\"Failed to set push notification info\"))\n            except AttributeError: logger.error(\"set_push_notification_info method not found/implemented.\"); return SendTaskResponse(id=request.id, error=InternalError(message=\"Server config error (push notifications setup).\"))\n            except Exception as e: logger.error(f\"Error during set_push_notification_info: {e}\", exc_info=True); return SendTaskResponse(id=request.id, error=InternalError(message=f\"Error setting push notification: {e}\"))\n        await self.upsert_task(request.params)\n        task_working: Optional[Task] = await self.update_store(request.params.id, TaskStatus(state=TaskState.WORKING), None)\n        if not task_working: logger.error(f\"Failed to update task {request.params.id} to WORKING state.\"); return SendTaskResponse(id=request.id, error=InternalError(message=\"Failed to initialize task state.\"))\n        await self.send_task_notification(task_working)\n        asyncio.create_task(self._process_research_task(request.params))\n        return SendTaskResponse(id=request.id, result=task_working)\n\n    # --- _process_research_task (fixed finally block) ---\n    async def _process_research_task(self, task_send_params: TaskSendParams):\n        query = self._get_user_query(task_send_params)\n        task_id = task_send_params.id\n        task_failed = None\n        try:\n            logger.info(f\"Starting research process for task {task_id} with query: '{query}'\")\n            initial_state: ResearchState = { \"topic\": query, \"depth\": \"advanced\", \"research_plan\": 
None, \"search_steps_planned\": [], \"analysis_steps_planned\": [], \"current_search_step_index\": 0, \"current_analysis_step_index\": 0, \"current_gap_search_index\": 0, \"search_results\": [], \"gap_analysis\": None, \"additional_queries_planned\": [], \"final_synthesis\": None, \"final_report_markdown\": None, \"stream_updates\": [], \"completed_steps_count\": 0, \"total_steps\": 0, }\n            config = {\"recursion_limit\": 100}\n\n            async for current_state in self.research_app.astream(initial_state, config=config, stream_mode=\"values\"):\n                await self._process_stream_updates(task_id, current_state) # 将当前状态传递给处理函数\n                if current_state.get(\"final_report_markdown\"):\n                    await self._finalize_task(task_id, current_state)\n                    return # 正常结束\n\n            logger.warning(f\"Research task {task_id} stream finished without producing final report.\")\n            await self._finalize_task(task_id, {\"final_report_markdown\": \"研究过程异常结束，未能生成报告。\"})\n\n        except Exception as e:\n            # ... 
(exception handling unchanged; includes send_task_notification and the SSE close enqueue) ...\n            logger.error(f\"Error during research task processing for task {task_id}: {e}\", exc_info=True)\n            error_message = f\"研究过程中发生错误: {str(e) or type(e).__name__}\"; parts = [TextPart(text=error_message)]\n            task_status = TaskStatus(state=TaskState.FAILED, message=Message(role=\"agent\", parts=parts))\n            try:\n                task_failed = await self.update_store(task_id, task_status, None)\n                if task_failed:\n                    await self.send_task_notification(task_failed)\n                    status_event = TaskStatusUpdateEvent(id=task_id, status=task_status, final=True)\n                    await self.enqueue_events_for_sse(task_id, status_event)\n                    await self.enqueue_events_for_sse(task_id, SSE_CLOSE_SENTINEL)\n                else: logger.error(f\"Failed to update task {task_id} to FAILED state after error.\")\n            except Exception as final_err: logger.error(f\"Further error during task failure handling for {task_id}: {final_err}\", exc_info=True)\n        finally:\n            # --- fixed finally block ---\n            logger.debug(f\"Entering finally block for task {task_id} processing.\")\n            # Access the task store dict self.tasks provided by the base class (assumed to exist)\n            final_task_object: Optional[Task] = self.tasks.get(task_id) # use .get() for a safe lookup\n\n            # Check whether the task ended in a final state\n            if not final_task_object or final_task_object.status.state not in [TaskState.COMPLETED, TaskState.FAILED, TaskState.CANCELED]:\n                logger.warning(f\"Task {task_id} processing ended but task not in final state ({getattr(final_task_object, 'status', None)}). 
Enqueuing SSE close sentinel just in case.\")\n                # Make sure the close signal is sent to all waiting clients\n                await self.enqueue_events_for_sse(task_id, SSE_CLOSE_SENTINEL)\n            else:\n                logger.debug(f\"Task {task_id} processing ended in final state: {final_task_object.status.state.value}. SSE cleanup should be handled by dequeue.\")\n            # No longer need to proactively clean up all queues here\n            # await self._cleanup_sse_queues(task_id)\n            # --- end of fix ---\n\n\n    async def _process_stream_updates(self, task_id: str, current_state: Dict[str, Any]):\n        \"\"\"\n        Process streaming state updates from research_app, extract detailed information, and send A2A events.\n        (Enhanced to emit richer updates.)\n        \"\"\"\n        last_index = self.last_stream_update_index[task_id]\n        stream_updates: List[StreamUpdate] = current_state.get(\"stream_updates\", [])\n        new_updates = stream_updates[last_index:]\n\n        if not new_updates:\n            return\n\n        logger.debug(f\"Processing {len(new_updates)} new stream updates for task {task_id} (from index {last_index})\")\n\n        for update in new_updates:\n            # Try to extract structured info and a detail message from update.data\n            # (the field names 'step', 'status', 'query', 'source', 'message' are guesses based on the logs;\n            # adjust them to match the actual StreamUpdate definition)\n            update_data = getattr(update, 'data', None)\n            structured_data = {}\n            detail_message = None\n\n            if update_data:\n                detail_message = getattr(update_data, 'message', None)\n                structured_data['step'] = getattr(update_data, 'step', getattr(update_data, 'step_name', None)) # try alternative field names\n                structured_data['status'] = getattr(update_data, 'status', None)\n                structured_data['query'] = getattr(update_data, 'query', None)\n                structured_data['source'] = getattr(update_data, 'source', None)\n                structured_data['results_count'] = getattr(update_data, 'results_count', None)\n                # Keep the raw message as a fallback detail\n                
structured_data['detail'] = detail_message if detail_message else str(update_data)[:200] + \"...\"\n            else:\n                # Without a data field, fall back to the string representation of the update itself\n                detail_message = str(update)[:200] + \"...\"\n                structured_data['detail'] = detail_message\n\n            # Drop None values from structured_data\n            structured_data = {k: v for k, v in structured_data.items() if v is not None}\n\n            # Build a human-readable text from the extracted information\n            readable_text = detail_message if detail_message else structured_data.get('detail', 'Processing...')\n            # More information can be added to readable_text, for example:\n            prefix = \"\"\n            if step := structured_data.get('step'): prefix += f\"[{step}] \"\n            if query := structured_data.get('query'): prefix += f\"Query: '{query}' \"\n            if source := structured_data.get('source'): prefix += f\"Source: {source} \"\n            if count := structured_data.get('results_count'): prefix += f\"({count} results) \"\n            if prefix: readable_text = prefix.strip() + (f\": {detail_message}\" if detail_message and not detail_message.startswith(prefix) else \"\")\n\n\n            # If a valid update was extracted\n            if structured_data or readable_text:\n                parts_to_send = []\n                # Add a structured data part (recommended)\n                if structured_data:\n                    logger.debug(f\"Sending DataPart for task {task_id}: {structured_data}\")\n                    parts_to_send.append(DataPart(data=structured_data))\n                # Add a human-readable text part\n                logger.debug(f\"Sending TextPart for task {task_id}: {readable_text}\")\n                parts_to_send.append(TextPart(text=readable_text))\n\n                if parts_to_send:\n                    message = Message(role=\"agent\", parts=parts_to_send)\n                    # The state is always WORKING since these are intermediate updates\n                    task_status = TaskStatus(state=TaskState.WORKING, message=message)\n\n                    # Update the in-memory task state (optional)\n          
          task_updated = await self.update_store(task_id, task_status, None)\n                    if task_updated:\n                        await self.send_task_notification(task_updated) # send a push notification (if configured)\n                    else:\n                        logger.warning(f\"Failed to update store during stream processing for task {task_id}\")\n\n                    # Put the TaskStatusUpdateEvent onto the SSE queues\n                    task_update_event = TaskStatusUpdateEvent(\n                        id=task_id, status=task_status, final=False # final=False marks an intermediate update\n                    )\n                    await self.enqueue_events_for_sse(task_id, task_update_event)\n\n        # Record the latest processed index for this task\n        self.last_stream_update_index[task_id] = len(stream_updates)\n        logger.debug(f\"Updated last stream update index for task {task_id} to {len(stream_updates)}\")\n    # --- end of core changes ---\n\n\n    # --- _finalize_task (added index cleanup) ---\n    async def _finalize_task(self, task_id: str, final_state: Dict[str, Any]):\n        logger.info(f\"Finalizing task {task_id}\")\n        final_report = final_state.get(\"final_report_markdown\", \"未能生成研究报告\")\n        parts = [TextPart(text=final_report)]\n        artifact = Artifact(parts=parts, index=0, append=False)\n        task_status = TaskStatus(state=TaskState.COMPLETED)\n\n        task_completed = await self.update_store(task_id, task_status, [artifact])\n        if task_completed: await self.send_task_notification(task_completed)\n        else: logger.error(f\"Failed to update task {task_id} to COMPLETED state.\")\n\n        # Send the final events to the SSE queues\n        artifact_event = TaskArtifactUpdateEvent(id=task_id, artifact=artifact)\n        await self.enqueue_events_for_sse(task_id, artifact_event)\n        status_event = TaskStatusUpdateEvent(id=task_id, status=task_status, final=True) # mark final=True\n        await self.enqueue_events_for_sse(task_id, status_event)\n        await self.enqueue_events_for_sse(task_id, SSE_CLOSE_SENTINEL) # 
send the close signal\n\n        # Clean up the stream update index tracker\n        async with self.sse_queues_lock: # hold the lock for safety\n            self.last_stream_update_index.pop(task_id, None)\n            logger.debug(f\"Removed last stream update index tracker for completed task {task_id}.\")\n\n    # --- on_send_task_subscribe (unchanged; uses the corrected SSE methods) ---\n    async def on_send_task_subscribe(self, request: SendTaskStreamingRequest) -> Union[AsyncIterable[SendTaskStreamingResponse], JSONRPCResponse]:\n        # ... (same as previous version) ...\n        logger.debug(f\"Received on_send_task_subscribe request: {request.id} for task {request.params.id}\"); validation_error = self._validate_request(request)\n        if validation_error: logger.warning(f\"Validation failed for task {request.params.id}: {validation_error.error}\"); return JSONRPCResponse(id=request.id, error=validation_error.error)\n        if request.params.pushNotification:\n            try:\n                if not await self.set_push_notification_info(request.params.id, request.params.pushNotification): logger.warning(f\"Failed to set push notification info for task {request.params.id}\"); return JSONRPCResponse(id=request.id, error=InvalidParamsError(message=\"Failed to set push notification info\"))\n            except AttributeError: logger.error(\"set_push_notification_info method not found/implemented.\"); return JSONRPCResponse(id=request.id, error=InternalError(message=\"Server config error (push notifications setup).\"))\n            except Exception as e: logger.error(f\"Error during set_push_notification_info for task {request.params.id}: {e}\", exc_info=True); return JSONRPCResponse(id=request.id, error=InternalError(message=f\"Error setting push notification: {e}\"))\n        await self.upsert_task(request.params)\n        task_working: Optional[Task] = await self.update_store(request.params.id, TaskStatus(state=TaskState.WORKING), None)\n        if not task_working: logger.error(f\"Failed to update task {request.params.id} to 
WORKING state.\"); return JSONRPCResponse(id=request.id, error=InternalError(message=\"Failed to initialize task state.\"))\n        await self.send_task_notification(task_working)\n        logger.info(f\"Creating background task for research processing: {request.params.id}\")\n        asyncio.create_task(self._process_research_task(request.params))\n        logger.debug(f\"Attempting to setup SSE for task {request.params.id}, request {request.id}\")\n        try:\n            sse_consumer_queue = await self.setup_sse_consumer(request.params.id); logger.debug(f\"SSE consumer queue setup successfully for task {request.params.id}, request {request.id}\")\n            result_iterable = self.dequeue_events_for_sse(request.id, request.params.id, sse_consumer_queue); logger.debug(f\"[TaskManager DEBUG] Returning from on_send_task_subscribe (Success - SSE Iterable): type={type(result_iterable)}, value={result_iterable!r}\")\n            return result_iterable\n        except Exception as e:\n            logger.error(f\"Fatal error setting up SSE consumer or dequeuing for task {request.params.id}, request {request.id}: {e}\", exc_info=True)\n            error_response = JSONRPCResponse(id=request.id, error=InternalError(message=\"Failed to setup streaming response channel\")); logger.debug(f\"[TaskManager DEBUG] Returning from on_send_task_subscribe (SSE Setup Exception): type={type(error_response)}, value={error_response!r}\")\n            return error_response\n\n    # --- Other methods like on_get_task, on_cancel_task should be inherited ---\n    # Implement set_push_notification_info if not provided by base class and verification is needed\n    # async def set_push_notification_info(self, task_id: str, push_notification_config: PushNotificationConfig):\n    #     if self.notification_sender_auth:\n    #         is_verified = await self.notification_sender_auth.verify_push_notification_url(push_notification_config.url)\n    #         if not is_verified:\n    #           
  return False\n    #     # Assuming base class handles storage\n    #     await super().set_push_notification_info(task_id, push_notification_config)\n    #     return True"
  },
  {
    "path": "super_agents/deep_research/a2a_adapter/dr_terminal_output.md",
    "content": "python3 super_agents/deep_research/a2a_adapter/client_example.py\n\n=== DeepResearch A2A 客户端示例 ===\n\n连接到服务器: http://127.0.0.1:8000\n----------------------------------------\nINFO:httpx:HTTP Request: GET http://127.0.0.1:8000/.well-known/agent.json \"HTTP/1.1 200 OK\"\n\n=== Agent卡片信息 ===\n\n{\n  \"name\": \"DeepResearch Agent\",\n  \"description\": \"一个强大的研究助手，能够执行深度研究并生成详细报告\",\n  \"url\": \"http://127.0.0.1:8000/agent\",\n  \"version\": \"0.1.0\",\n  \"capabilities\": {\n    \"streaming\": true,\n    \"pushNotifications\": true,\n    \"stateTransitionHistory\": false\n  },\n  \"defaultInputModes\": [\n    \"text\"\n  ],\n  \"defaultOutputModes\": [\n    \"text\"\n  ],\n  \"skills\": [\n    {\n      \"id\": \"deep_research_skill\",\n      \"name\": \"deep_research\",\n      \"description\": \"执行深度研究并生成详细报告，包括搜索、分析和综合\",\n      \"inputModes\": [\n        \"text\"\n      ],\n      \"outputModes\": [\n        \"text\"\n      ]\n    }\n  ]\n}\n----------------------------------------\n\n请输入研究主题 (或按 Enter 使用默认): \n使用默认研究主题: 特斯拉电动汽车的市场分析和未来发展趋势\n\n=== 发送研究请求 ===\n\n研究主题: 特斯拉电动汽车的市场分析和未来发展趋势\n正在处理，请稍候...\n\n=== 流式响应 ===\n\n任务ID: deep_research_055a54fdeb8e4a0099ae4c9939ee1968\nINFO:httpx:HTTP Request: POST http://127.0.0.1:8000 \"HTTP/1.1 200 OK\"\n进度更新 (文本): Creating research plan...\n进度更新 (结构化): [步骤: -, 状态: running] - 详情: Creating research plan...\n进度更新 (文本): Research plan created\n进度更新 (结构化): [步骤: -, 状态: completed] - 详情: Research plan created\n进度更新 (文本): Query: '特斯拉电动汽车市场份额': Searching all sources...\n进度更新 (结构化): [步骤: -, 状态: running] - 查询: '特斯拉电动汽车市场份额' - 详情: Searching all sources...\n进度更新 (文本): Query: '特斯拉电动汽车市场份额': Found 4 results\n进度更新 (结构化): [步骤: -, 状态: completed] - 查询: '特斯拉电动汽车市场份额' - 详情: Found 4 results\n进度更新 (文本): Query: '特斯拉电动汽车市场份额': Searching all sources...\n进度更新 (结构化): [步骤: -, 状态: running] - 查询: '特斯拉电动汽车市场份额' - 详情: Searching all sources...\n进度更新 (文本): Query: '特斯拉电动汽车市场份额': Found 4 results\n进度更新 (结构化): [步骤: -, 状态: completed] - 查询: 
'特斯拉电动汽车市场份额' - 详情: Found 4 results\n进度更新 (文本): Query: '特斯拉电动汽车市场份额': Searching all sources...\n进度更新 (结构化): [步骤: -, 状态: running] - 查询: '特斯拉电动汽车市场份额' - 详情: Searching all sources...\n进度更新 (文本): Query: '特斯拉电动汽车市场份额': Found 4 results\n进度更新 (结构化): [步骤: -, 状态: completed] - 查询: '特斯拉电动汽车市场份额' - 详情: Found 4 results\n进度更新 (文本): Query: '特斯拉电动汽车销售数据': Searching web sources...\n进度更新 (结构化): [步骤: -, 状态: running] - 查询: '特斯拉电动汽车销售数据' - 详情: Searching web sources...\n进度更新 (文本): Query: '特斯拉电动汽车销售数据': Found 4 results\n进度更新 (结构化): [步骤: -, 状态: completed] - 查询: '特斯拉电动汽车销售数据' - 详情: Found 4 results\n进度更新 (文本): Query: '特斯拉电动汽车消费者反馈': Searching x sources...\n进度更新 (结构化): [步骤: -, 状态: running] - 查询: '特斯拉电动汽车消费者反馈' - 详情: Searching x sources...\n进度更新 (文本): Query: '特斯拉电动汽车消费者反馈': Found 6 results\n进度更新 (结构化): [步骤: -, 状态: completed] - 查询: '特斯拉电动汽车消费者反馈' - 详情: Found 6 results\n进度更新 (文本): Query: '特斯拉电动汽车技术创新': Searching academic sources...\n进度更新 (结构化): [步骤: -, 状态: running] - 查询: '特斯拉电动汽车技术创新' - 详情: Searching academic sources...\n进度更新 (文本): Query: '特斯拉电动汽车技术创新': Found 3 results\n进度更新 (结构化): [步骤: -, 状态: completed] - 查询: '特斯拉电动汽车技术创新' - 详情: Found 3 results\n进度更新 (文本): Query: '特斯拉电动汽车未来发展策略': Searching all sources...\n进度更新 (结构化): [步骤: -, 状态: running] - 查询: '特斯拉电动汽车未来发展策略' - 详情: Searching all sources...\n进度更新 (文本): Query: '特斯拉电动汽车未来发展策略': Found 2 results\n进度更新 (结构化): [步骤: -, 状态: completed] - 查询: '特斯拉电动汽车未来发展策略' - 详情: Found 2 results\n进度更新 (文本): Query: '特斯拉电动汽车未来发展策略': Searching all sources...\n进度更新 (结构化): [步骤: -, 状态: running] - 查询: '特斯拉电动汽车未来发展策略' - 详情: Searching all sources...\n进度更新 (文本): Query: '特斯拉电动汽车未来发展策略': Found 2 results\n进度更新 (结构化): [步骤: -, 状态: completed] - 查询: '特斯拉电动汽车未来发展策略' - 详情: Found 2 results\n进度更新 (文本): Query: '特斯拉电动汽车未来发展策略': Searching all sources...\n进度更新 (结构化): [步骤: -, 状态: running] - 查询: '特斯拉电动汽车未来发展策略' - 详情: Searching all sources...\n进度更新 (文本): Query: '特斯拉电动汽车未来发展策略': Found 8 results\n进度更新 (结构化): [步骤: -, 状态: completed] - 查询: '特斯拉电动汽车未来发展策略' - 详情: Found 8 results\n进度更新 (文本): Query: 
'特斯拉电动汽车竞争对手分析': Searching web sources...\n进度更新 (结构化): [步骤: -, 状态: running] - 查询: '特斯拉电动汽车竞争对手分析' - 详情: Searching web sources...\n进度更新 (文本): Query: '特斯拉电动汽车竞争对手分析': Found 2 results\n进度更新 (结构化): [步骤: -, 状态: completed] - 查询: '特斯拉电动汽车竞争对手分析' - 详情: Found 2 results\n进度更新 (文本): Analyzing SWOT...\n进度更新 (结构化): [步骤: -, 状态: running] - 详情: Analyzing SWOT...\n进度更新 (文本): Analysis complete\n进度更新 (结构化): [步骤: -, 状态: completed] - 详情: Analysis complete\n进度更新 (文本): Analyzing Comparative...\n进度更新 (结构化): [步骤: -, 状态: running] - 详情: Analyzing Comparative...\n进度更新 (文本): Analysis complete\n进度更新 (结构化): [步骤: -, 状态: completed] - 详情: Analysis complete\n进度更新 (文本): Analyzing Sentiment...\n进度更新 (结构化): [步骤: -, 状态: running] - 详情: Analyzing Sentiment...\n进度更新 (文本): Analysis complete\n进度更新 (结构化): [步骤: -, 状态: completed] - 详情: Analysis complete\n进度更新 (文本): Analyzing Trend...\n进度更新 (结构化): [步骤: -, 状态: running] - 详情: Analyzing Trend...\n进度更新 (文本): Analysis complete\n进度更新 (结构化): [步骤: -, 状态: completed] - 详情: Analysis complete\n进度更新 (文本): Analyzing research gaps and limitations...\n进度更新 (结构化): [步骤: -, 状态: running] - 详情: Analyzing research gaps and limitations...\n进度更新 (文本): Identified 3 limitations and 3 knowledge gaps\n进度更新 (结构化): [步骤: -, 状态: completed] - 详情: Identified 3 limitations and 3 knowledge gaps\n进度更新 (文本): Query: 'Tesla market share in India 2024': Searching web to fill gap: Understanding Tesla's penetration and growth potential in emerging markets is crucial for a comprehensive market analysis, yet the current research lacks detailed data on these regions.\n进度更新 (结构化): [步骤: -, 状态: running] - 查询: 'Tesla market share in India 2024' - 详情: Searching web to fill gap: Understanding Tesla's penetration and growth potential in emerging markets is crucial for a comprehensive market analysis, yet the current research lacks detailed data on these regions.\n进度更新 (文本): Query: 'Tesla market share in India 2024': Searching academic to fill gap: Understanding Tesla's penetration and growth potential in emerging 
markets is crucial for a comprehensive market analysis, yet the current research lacks detailed data on these regions.\n进度更新 (结构化): [步骤: -, 状态: running] - 查询: 'Tesla market share in India 2024' - 详情: Searching academic to fill gap: Understanding Tesla's penetration and growth potential in emerging markets is crucial for a comprehensive market analysis, yet the current research lacks detailed data on these regions.\n进度更新 (文本): Query: 'Tesla market share in India 2024': Searching x to fill gap: Understanding Tesla's penetration and growth potential in emerging markets is crucial for a comprehensive market analysis, yet the current research lacks detailed data on these regions.\n进度更新 (结构化): [步骤: -, 状态: running] - 查询: 'Tesla market share in India 2024' - 详情: Searching x to fill gap: Understanding Tesla's penetration and growth potential in emerging markets is crucial for a comprehensive market analysis, yet the current research lacks detailed data on these regions.\n进度更新 (文本): Query: 'Tesla market share in India 2024': Found 3 results\n进度更新 (结构化): [步骤: -, 状态: completed] - 查询: 'Tesla market share in India 2024' - 详情: Found 3 results\n进度更新 (文本): Query: 'Tesla market share in India 2024': Found 3 results\n进度更新 (结构化): [步骤: -, 状态: completed] - 查询: 'Tesla market share in India 2024' - 详情: Found 3 results\n进度更新 (文本): Query: 'Tesla market share in India 2024': Found 6 results\n进度更新 (结构化): [步骤: -, 状态: completed] - 查询: 'Tesla market share in India 2024' - 详情: Found 6 results\n进度更新 (文本): Query: 'Tesla sales growth in Southeast Asia': Searching academic to fill gap: Understanding Tesla's penetration and growth potential in emerging markets is crucial for a comprehensive market analysis, yet the current research lacks detailed data on these regions.\n进度更新 (结构化): [步骤: -, 状态: running] - 查询: 'Tesla sales growth in Southeast Asia' - 详情: Searching academic to fill gap: Understanding Tesla's penetration and growth potential in emerging markets is crucial for a comprehensive market 
analysis, yet the current research lacks detailed data on these regions.\n进度更新 (文本): Query: 'Tesla sales growth in Southeast Asia': Found 3 results\n进度更新 (结构化): [步骤: -, 状态: completed] - 查询: 'Tesla sales growth in Southeast Asia' - 详情: Found 3 results\n进度更新 (文本): Query: 'Tesla's market entry strategy in Africa': Searching x to fill gap: Understanding Tesla's penetration and growth potential in emerging markets is crucial for a comprehensive market analysis, yet the current research lacks detailed data on these regions.\n进度更新 (结构化): [步骤: -, 状态: running] - 查询: 'Tesla's market entry strategy in Africa' - 详情: Searching x to fill gap: Understanding Tesla's penetration and growth potential in emerging markets is crucial for a comprehensive market analysis, yet the current research lacks detailed data on these regions.\n进度更新 (文本): Query: 'Tesla's market entry strategy in Africa': Found 6 results\n进度更新 (结构化): [步骤: -, 状态: completed] - 查询: 'Tesla's market entry strategy in Africa' - 详情: Found 6 results\n进度更新 (文本): Query: 'Tesla consumer sentiment among millennials': Searching web to fill gap: The sentiment analysis conducted is broad and does not account for variations across different demographic groups, which could influence Tesla's marketing and product development strategies.\n进度更新 (结构化): [步骤: -, 状态: running] - 查询: 'Tesla consumer sentiment among millennials' - 详情: Searching web to fill gap: The sentiment analysis conducted is broad and does not account for variations across different demographic groups, which could influence Tesla's marketing and product development strategies.\n进度更新 (文本): Query: 'Tesla consumer sentiment among millennials': Searching academic to fill gap: The sentiment analysis conducted is broad and does not account for variations across different demographic groups, which could influence Tesla's marketing and product development strategies.\n进度更新 (结构化): [步骤: -, 状态: running] - 查询: 'Tesla consumer sentiment among millennials' - 详情: Searching academic to 
fill gap: The sentiment analysis conducted is broad and does not account for variations across different demographic groups, which could influence Tesla's marketing and product development strategies.\n进度更新 (文本): Query: 'Tesla consumer sentiment among millennials': Searching x to fill gap: The sentiment analysis conducted is broad and does not account for variations across different demographic groups, which could influence Tesla's marketing and product development strategies.\n进度更新 (结构化): [步骤: -, 状态: running] - 查询: 'Tesla consumer sentiment among millennials' - 详情: Searching x to fill gap: The sentiment analysis conducted is broad and does not account for variations across different demographic groups, which could influence Tesla's marketing and product development strategies.\n进度更新 (文本): Query: 'Tesla consumer sentiment among millennials': Found 3 results\n进度更新 (结构化): [步骤: -, 状态: completed] - 查询: 'Tesla consumer sentiment among millennials' - 详情: Found 3 results\n进度更新 (文本): Query: 'Tesla consumer sentiment among millennials': Found 3 results\n进度更新 (结构化): [步骤: -, 状态: completed] - 查询: 'Tesla consumer sentiment among millennials' - 详情: Found 3 results\n进度更新 (文本): Query: 'Tesla consumer sentiment among millennials': Found 6 results\n进度更新 (结构化): [步骤: -, 状态: completed] - 查询: 'Tesla consumer sentiment among millennials' - 详情: Found 6 results\n进度更新 (文本): Query: 'Tesla brand perception among baby boomers': Searching academic to fill gap: The sentiment analysis conducted is broad and does not account for variations across different demographic groups, which could influence Tesla's marketing and product development strategies.\n进度更新 (结构化): [步骤: -, 状态: running] - 查询: 'Tesla brand perception among baby boomers' - 详情: Searching academic to fill gap: The sentiment analysis conducted is broad and does not account for variations across different demographic groups, which could influence Tesla's marketing and product development strategies.\n进度更新 (文本): Query: 'Tesla brand 
perception among baby boomers': Found 3 results\n进度更新 (结构化): [步骤: -, 状态: completed] - 查询: 'Tesla brand perception among baby boomers' - 详情: Found 3 results\n进度更新 (文本): Query: 'Tesla customer feedback from urban vs rural areas': Searching x to fill gap: The sentiment analysis conducted is broad and does not account for variations across different demographic groups, which could influence Tesla's marketing and product development strategies.\n进度更新 (结构化): [步骤: -, 状态: running] - 查询: 'Tesla customer feedback from urban vs rural areas' - 详情: Searching x to fill gap: The sentiment analysis conducted is broad and does not account for variations across different demographic groups, which could influence Tesla's marketing and product development strategies.\n进度更新 (文本): Query: 'Tesla customer feedback from urban vs rural areas': Found 6 results\n进度更新 (结构化): [步骤: -, 状态: completed] - 查询: 'Tesla customer feedback from urban vs rural areas' - 详情: Found 6 results\n进度更新 (文本): Query: 'Effect of EV subsidies on Tesla sales in Europe': Searching web to fill gap: Government policies, such as subsidies and tariffs, significantly affect Tesla's market position, but the research does not delve into this aspect in detail.\n进度更新 (结构化): [步骤: -, 状态: running] - 查询: 'Effect of EV subsidies on Tesla sales in Europe' - 详情: Searching web to fill gap: Government policies, such as subsidies and tariffs, significantly affect Tesla's market position, but the research does not delve into this aspect in detail.\n进度更新 (文本): Query: 'Effect of EV subsidies on Tesla sales in Europe': Searching academic to fill gap: Government policies, such as subsidies and tariffs, significantly affect Tesla's market position, but the research does not delve into this aspect in detail.\n进度更新 (结构化): [步骤: -, 状态: running] - 查询: 'Effect of EV subsidies on Tesla sales in Europe' - 详情: Searching academic to fill gap: Government policies, such as subsidies and tariffs, significantly affect Tesla's market position, but the 
research does not delve into this aspect in detail.\n进度更新 (文本): Query: 'Effect of EV subsidies on Tesla sales in Europe': Searching x to fill gap: Government policies, such as subsidies and tariffs, significantly affect Tesla's market position, but the research does not delve into this aspect in detail.\n进度更新 (结构化): [步骤: -, 状态: running] - 查询: 'Effect of EV subsidies on Tesla sales in Europe' - 详情: Searching x to fill gap: Government policies, such as subsidies and tariffs, significantly affect Tesla's market position, but the research does not delve into this aspect in detail.\n进度更新 (文本): Query: 'Effect of EV subsidies on Tesla sales in Europe': Found 3 results\n进度更新 (结构化): [步骤: -, 状态: completed] - 查询: 'Effect of EV subsidies on Tesla sales in Europe' - 详情: Found 3 results\n进度更新 (文本): Query: 'Effect of EV subsidies on Tesla sales in Europe': Found 3 results\n进度更新 (结构化): [步骤: -, 状态: completed] - 查询: 'Effect of EV subsidies on Tesla sales in Europe' - 详情: Found 3 results\n进度更新 (文本): Query: 'Effect of EV subsidies on Tesla sales in Europe': Found 6 results\n进度更新 (结构化): [步骤: -, 状态: completed] - 查询: 'Effect of EV subsidies on Tesla sales in Europe' - 详情: Found 6 results\n进度更新 (文本): Query: 'Impact of US tariffs on Tesla's competitiveness in China': Searching academic to fill gap: Government policies, such as subsidies and tariffs, significantly affect Tesla's market position, but the research does not delve into this aspect in detail.\n进度更新 (结构化): [步骤: -, 状态: running] - 查询: 'Impact of US tariffs on Tesla's competitiveness in China' - 详情: Searching academic to fill gap: Government policies, such as subsidies and tariffs, significantly affect Tesla's market position, but the research does not delve into this aspect in detail.\n进度更新 (文本): Query: 'Impact of US tariffs on Tesla's competitiveness in China': Found 3 results\n进度更新 (结构化): [步骤: -, 状态: completed] - 查询: 'Impact of US tariffs on Tesla's competitiveness in China' - 详情: Found 3 results\n进度更新 (文本): Query: 'Government 
incentives for Tesla in South America': Searching x to fill gap: Government policies, such as subsidies and tariffs, significantly affect Tesla's market position, but the research does not delve into this aspect in detail.\n进度更新 (结构化): [步骤: -, 状态: running] - 查询: 'Government incentives for Tesla in South America' - 详情: Searching x to fill gap: Government policies, such as subsidies and tariffs, significantly affect Tesla's market position, but the research does not delve into this aspect in detail.\n进度更新 (文本): Query: 'Government incentives for Tesla in South America': Found 6 results\n进度更新 (结构化): [步骤: -, 状态: completed] - 查询: 'Government incentives for Tesla in South America' - 详情: Found 6 results\n进度更新 (文本): Synthesizing all research findings...\n进度更新 (结构化): [步骤: -, 状态: running] - 详情: Synthesizing all research findings...\n进度更新 (文本): Synthesized 6 key findings\n进度更新 (结构化): [步骤: -, 状态: completed] - 详情: Synthesized 6 key findings\n进度更新 (文本): Research complete\n进度更新 (结构化): [步骤: -, 状态: completed] - 详情: Research complete\n进度更新 (文本): Compiling research findings into the final report...\n进度更新 (结构化): [步骤: -, 状态: running] - 详情: Compiling research findings into the final report...\n进度更新 (文本): Successfully generated Markdown report (22103 characters).\n进度更新 (结构化): [步骤: -, 状态: completed] - 详情: Successfully generated Markdown report (22103 characters).\n进度更新 (文本): Research complete\n进度更新 (结构化): [步骤: -, 状态: completed] - 详情: Research complete\n\n收到最终 Artifact:\n  研究报告片段 (TextPart): ## Introduction\n\nTesla, Inc., has emerged as a pivotal player in the global electric vehicle (EV) market, spearheading the transition to sustainable transportation. The company's journey, however, has been marked by dynamic shifts in market dynamics, technological advancements, and varying consumer sentiments across different regions. 
This comprehensive research report delves into Tesla's market performance and future prospects, analyzing key findings from recent data on market share fluctuations, technological innovations, and consumer preferences. The focus is on understanding the intricate factors influencing Tesla's position in the EV industry, including regional market trends, the impact of government policies, and the evolving competitive landscape.\n\nThe report aims to provide a detailed examination of Tesla's market share trends in major regions such as the United States, China, and Europe, where the company has faced varying degrees of success and challenges. Furthermore, it explores Tesla's technological edge, particularly in battery efficiency and autonomous driving capabilities, which continue to shape its competitive advantage. Consumer sentiment, especially among different demographic groups like millennials, is also scrutinized to understand the broader appeal of Tesla's products. Additionally, the report assesses the significant role of government policies, including subsidies and tariffs, in influencing Tesla's market dynamics globally.\n\nBy synthesizing these key findings, this report seeks to offer a nuanced analysis of Tesla's current market position and future development trends, providing insights into the company's strategic responses to the challenges and opportunities it faces in the rapidly evolving EV market.\n\n## Tesla's Market Share Fluctuations\n\n### Finding 1: Tesla's Market Share in the US Electric Vehicle (EV) Market Has Experienced Fluctuations, with a Notable Decline in 2024\n\nTesla's market share in the US electric vehicle market has been subject to significant fluctuations over recent years, culminating in a notable decline in 2024. According to Cox Automotive, Tesla's share of the US EV market fell below 50% for the first time, indicating a shift in the competitive landscape [特斯拉在美国电动汽车市场份额首次跌破50% - NE时代](https://m.ne-time.cn/newindexDetail/33817). 
This decline continued into 2024, with Tesla's sales in the US falling by 5.6%, marking the company's first annual decline since 2011 [Auto: For Tesla, India is a challenge as well as opportunity - Rediff.com](https://www.rediff.com/business/report/auto-for-tesa-india-is-a-challenge-as-well-as-opportunity/20250319.htm). This downturn is particularly significant as it contrasts with the overall growth in the US EV market, suggesting that Tesla's dominance is being challenged by emerging competitors.\n\nThe decline in Tesla's market share in the US can be attributed to several factors. Firstly, the increase in competition from other automakers, such as General Motors and Ford, has eroded Tesla's once-unassailable lead. These competitors have introduced new models and increased production capacity, which has diluted Tesla's market share. Secondly, the aging model lineup, particularly the Model S and Model X, may have contributed to waning consumer interest, as newer models from competitors offer fresh designs and features. Lastly, Tesla's pricing strategies and production challenges have also played a role, as potential buyers may have been deterred by price volatility and delivery delays.\n\nDespite the decline, Tesla remains a significant player in the US EV market, with its vehicles still commanding a substantial portion of total EV sales. The company's focus on technological innovation and brand loyalty continues to be a key factor in maintaining its position, even as it navigates these market fluctuations. However, Tesla must address these challenges head-on, potentially through the introduction of new models and improvements in production efficiency, to regain its footing and reverse the downward trend in its US market share.\n\n### Finding 2: In China, Tesla's Market Share Has Been Decreasing, Despite Record Sales in 2024\n\nIn China, Tesla has experienced a paradoxical situation where its market share has declined despite achieving record sales in 2024. 
According to data from bjx.com.cn, Tesla's market share in China dropped from 7.8% in 2023 to 6% in 2024, even as the company sold over 657,000 cars in the country during the same period [特斯拉汽车2024年在中国市场创销量纪录，但市场份额下降](https://m.bjx.com.cn/mnews/20250110/1422044.shtml). This decline in market share underscores the intensifying competition within the Chinese EV market, where local manufacturers are rapidly gaining ground.\n\nThe decrease in Tesla's market share in China can be attributed to several key factors. Firstly, the rise of domestic competitors, such as BYD and NIO, has put pressure on Tesla's position. These companies have not only increased their production capacities but also introduced new models that cater specifically to Chinese consumer preferences, offering competitive alternatives to Tesla's vehicles. Secondly, Tesla's pricing strategies have faced scrutiny, as the company has engaged in price wars to maintain sales volumes, which may have impacted its brand perception and profitability. Lastly, the lack of new model introductions and updates to existing models has been a point of contention, as consumers seek the latest technology and features.\n\nDespite these challenges, Tesla's record sales in China in 2024 indicate strong underlying demand for its vehicles. The company's focus on expanding its manufacturing capabilities in Shanghai and enhancing its charging infrastructure has been crucial in sustaining sales growth. However, to reverse the decline in market share, Tesla must continue to innovate and adapt to the unique dynamics of the Chinese market. 
This could involve introducing new models tailored to local preferences, enhancing its service network, and possibly adjusting pricing strategies to balance volume and profitability.\n\n### Finding 3: Tesla's Sales in Europe Have Declined Significantly, Influenced by the End of EV Subsidies and Increasing Competition\n\nTesla's sales in Europe have experienced a significant decline in 2024, influenced by the end of EV subsidies and increasing competition from other manufacturers. According to data from bnnbloomberg.ca, Tesla's European sales fell by 13% in 2024 [Tesla Sales Plunge 63% in EU's Second-Biggest EV Market](https://www.bnnbloomberg.ca/business/2025/02/03/tesla-sales-plunge-63-in-france-the-eus-second-biggest-ev-market/). This decline was particularly pronounced in Germany, where the cessation of EV subsidies in December 2023 had a profound impact on Tesla's sales, with a reported 41% drop [Tesla Sales Tumbled In Europe In 2024. But That's Just Part Of The ...](https://insideevs.com/news/747977/tesla-sales-down-europe-2024/).\n\nThe end of government incentives for electric vehicles in several European countries has been a major factor in the decline of Tesla's sales. These subsidies had previously encouraged consumers to opt for electric vehicles, and their removal has led to a decrease in overall EV demand, with Tesla being disproportionately affected due to its significant reliance on these markets. Additionally, increasing competition from European automakers, such as Volkswagen and Stellantis, has further challenged Tesla's position. These companies have introduced new EV models and expanded their production capacities, offering consumers more choices and potentially more appealing options.\n\nDespite these challenges, Tesla continues to hold a significant presence in the European market, with its vehicles still accounting for a notable portion of total EV sales. 
To mitigate the impact of the subsidy cuts and rising competition, Tesla has implemented strategies such as price adjustments and the introduction of new features through over-the-air updates. However, the company must continue to innovate and adapt to the changing market dynamics in Europe, potentially through the introduction of new models and enhanced marketing efforts to maintain and grow its market share.\n\n## Tesla's Technological Innovations\n\n### Finding 4: Tesla's Technological Innovations, Particularly in Battery Efficiency and Autonomous Driving, Continue to Be a Competitive Advantage\n\nTesla's technological innovations, particularly in battery efficiency and autonomous driving, have been pivotal in maintaining its competitive edge in the EV market. The company's advancements in battery technology have significantly improved the range and efficiency of its vehicles, addressing one of the primary concerns for EV consumers. According to naipo.com, Tesla's focus on battery technology has enabled the company to develop high-efficiency lithium-ion battery packs, which have enhanced the driving range and charging speed of its vehicles [北美智权报第151期：特斯拉2024：技术创新与市场挑战的展望](https://www.naipo.com/Portals/11/web_cn/Knowledge_Center/Industry_Insight/IPND_240124_1501.htm).\n\nIn addition to battery technology, Tesla's advancements in autonomous driving have positioned it as a leader in the industry. The company's Autopilot and Full Self-Driving (FSD) systems have attracted significant attention and interest from consumers and investors alike. These systems leverage over-the-air (OTA) software updates to continuously improve vehicle performance and add new features without the need for physical modifications. 
According to tradesmax.com, Tesla's focus on OTA updates and its autonomous driving capabilities have been key differentiators in the market [为什么特斯拉电动车会成功？ - 美股投资网](https://www.tradesmax.com/component/k2/item/20180-why-tesla-is-successful).\n\nTesla's commitment to technological innovation extends beyond just battery and autonomous driving technologies. The company has also made significant strides in other areas, such as electric motor efficiency and vehicle manufacturing processes. For instance, Tesla's use of silicon carbide power devices in its inverters has led to improved energy conversion efficiency, resulting in a 5-10% increase in vehicle range [“平平无奇”特斯拉，身上全是“遥遥领先” - 新浪汽车](https://auto.sina.cn/zz/hy/2023-09-28/detail-imzpfekr3231284.d.html). Additionally, the company's adoption of one-piece casting technology has streamlined its manufacturing process, reducing complexity and costs.\n\nDespite these technological achievements, Tesla faces ongoing challenges in maintaining its lead. The rapid pace of innovation in the EV industry means that competitors are continually catching up, with companies like BYD and NIO making significant investments in battery and autonomous driving technologies. To sustain its competitive advantage, Tesla must continue to invest in research and development, focusing on breakthroughs that can further enhance the performance and appeal of its vehicles.\n\n## Consumer Sentiment Towards Tesla\n\n### Finding 5: Consumer Sentiment Towards Tesla Varies Significantly Across Demographics, with Millennials Showing Strong Interest in Tesla's Products\n\nConsumer sentiment towards Tesla varies significantly across different demographic groups, with millennials demonstrating particularly strong interest in the company's products. 
According to foxbusiness.com, the Tesla Model 3 was rated as the 'most satisfying' car for millennials, indicating a high level of satisfaction and loyalty among this demographic [Both millennials and baby boomers name Tesla Model 3 the 'most satisfying' car](https://www.foxbusiness.com/lifestyle/millenials-baby-boomers-tesla-model-3-most-satisfying-car). This sentiment is driven by Tesla's alignment with millennials' values, such as environmental consciousness and technological innovation.\n\nMillennials' preference for Tesla can be attributed to several factors. Firstly, the company's eco-friendly image resonates with this demographic, as they are more likely to prioritize sustainability and environmental impact in their purchasing decisions. Secondly, Tesla's focus on cutting-edge technology, including features like Autopilot and OTA updates, appeals to tech-savvy millennials who value innovation and connectivity in their vehicles. According to businessinsider.com, Tesla's Model 3 appeals to millennials due to its affordability and alignment with their values [Why Tesla's Model 3 appeals to millennials](https://www.businessinsider.com/why-tesla-model-3-appeals-to-millennials-2018-2).\n\nIn contrast, other demographic groups, such as baby boomers, have shown mixed sentiments towards Tesla. While some baby boomers also rated the Model 3 as the 'most satisfying' car, there is a broader range of opinions among this group, with some expressing concerns about the reliability and practicality of electric vehicles. According to fool.com, baby boomers' perceptions of Tesla are influenced by factors such as brand familiarity and traditional automotive preferences [Why Do Baby Boomers Hate Tesla?](https://www.fool.com/investing/2020/11/24/why-do-baby-boomers-hate-tesla/).\n\nUnderstanding these demographic variations in consumer sentiment is crucial for Tesla's marketing and product development strategies. 
The company must continue to tailor its messaging and offerings to different age groups, emphasizing the aspects of its brand and products that resonate most with each demographic. For millennials, this could involve highlighting Tesla's commitment to sustainability and technological advancement, while for baby boomers, focusing on reliability and performance may be more effective.\n\n## Impact of Government Policies on Tesla's Market Position\n\n### Finding 6: Government Policies, Such as Subsidies and Tariffs, Have a Significant Impact on Tesla's Market Position Globally\n\nGovernment policies, including subsidies and tariffs, have a significant impact on Tesla's market position globally, influencing the company's sales and competitiveness in different regions. In Europe, the end of EV subsidies in countries like Germany has led to a notable decline in Tesla's sales. According to insideevs.com, the cessation of Germany's EV subsidy program in December 2023 resulted in a 41% drop in Tesla's sales in the country [Tesla Sales Tumbled In Europe In 2024. But That's Just Part Of The ...](https://insideevs.com/news/747977/tesla-sales-down-europe-2024/). This highlights the importance of government incentives in driving EV adoption and Tesla's reliance on these markets.\n\nIn contrast, changes in government policies can also create opportunities for Tesla. In India, the government's decision to reduce import duties on EVs to 15% under certain conditions has opened up potential new markets for the company. According to restofworld.org, this policy change could facilitate Tesla's entry into the Indian market, which is expected to grow significantly in the coming years [Tesla looks to India at a moment of crisis - Rest of World](https://restofworld.org/2025/tesla-india-sales-stock-decline/). 
However, the exact impact on Tesla's market share in emerging markets like India remains uncertain due to limited data.\n\nTariffs also play a crucial role in shaping Tesla's market dynamics, particularly in China. The imposition of US tariffs on Chinese imports has affected Tesla's competitiveness in the country, as the company relies heavily on its Shanghai factory for production. According to cnn.com, Tesla stopped taking new orders in China for two imported, US-made models due to these tariffs, which could impact its overall sales in the region [Tesla stops taking new orders in China for two imported, US-made ...](https://www.cnn.com/2025/04/12/business/tesla-china-tariffs-musk/index.html).\n\nTo navigate these challenges and capitalize on opportunities, Tesla must adopt a flexible and strategic approach to government policies. This could involve lobbying for favorable policies in key markets, adjusting pricing strategies to mitigate the impact of subsidy cuts, and exploring new markets where government incentives are more favorable. By doing so, Tesla can maintain and enhance its global market position in the face of varying policy landscapes.\n\n## Scope and Limitations\n\nThis research report on Tesla's market analysis and future development trends is comprehensive, yet it is important to acknowledge its scope and limitations, which stem from the identified gaps in the data and methodology used.\n\n**Source Bias**: The majority of the sources utilized in this research are derived from web articles and social media platforms, which may introduce bias due to the potential for sensationalism or incomplete data. Academic sources, while included, are limited and often focus on specific aspects rather than providing a comprehensive market analysis. This reliance on non-academic sources could skew the findings and affect the reliability of the conclusions drawn [特斯拉电动汽车市场份额](WEB). 
To address this limitation, future research should incorporate more academic and industry reports to balance the data and cross-reference findings with official company statements and financial reports.\n\n**Data Scarcity**: There is a notable lack of detailed, up-to-date data on Tesla's market share in various regions, particularly in emerging markets like India and Southeast Asia. The available data often focuses on established markets such as the US and China, leaving gaps in understanding global market dynamics [特斯拉电动汽车市场份额](ACADEMIC). This scarcity hinders a complete analysis of Tesla's performance and potential in these regions. To overcome this, primary research or surveys in underrepresented regions could be conducted, and international market research databases could be utilized for more comprehensive data.\n\n**Temporal Bias**: The research results are heavily weighted towards recent data, which may overlook long-term trends and historical context that could provide deeper insights into Tesla's market position and future strategies. This temporal bias could lead to an incomplete understanding of the company's trajectory and its response to market changes over time [特斯拉电动汽车市场份额](X). To mitigate this, future studies should include historical data analysis to understand long-term trends and use time-series analysis to predict future market movements based on past performance.\n\n### Identified Knowledge Gaps\n\n**Tesla's Market Share in Emerging Markets**: Understanding Tesla's penetration and growth potential in emerging markets is crucial for a comprehensive market analysis. However, the current research lacks detailed data on these regions, limiting the ability to assess Tesla's global market strategy effectively [特斯拉电动汽车市场份额](WEB). 
Future research should prioritize collecting more data from these markets to fill this gap.\n\n**Consumer Sentiment in Different Demographics**: The sentiment analysis conducted in this report is broad and does not account for variations across different demographic groups beyond millennials and baby boomers. This limitation could influence Tesla's marketing and product development strategies, as understanding these variations is essential for targeted approaches [特斯拉电动汽车消费者反馈](X). Future studies should delve deeper into consumer sentiment across various demographics to provide a more nuanced understanding.\n\n**Impact of Government Policies on Tesla's Market Position**: Government policies, such as subsidies and tariffs, significantly affect Tesla's market position. However, the research does not delve into this aspect in detail, particularly in how these policies influence Tesla's long-term strategies and competitiveness [Effect of EV subsidies on Tesla sales in Europe](WEB). A more thorough analysis of the impact of government policies across different regions would enhance the understanding of Tesla's global market dynamics.\n\nBy acknowledging these limitations and addressing the identified knowledge gaps, future research can provide a more comprehensive and accurate analysis of Tesla's market position and future development trends.\n\n## Conclusion\n\nThis research report has provided a detailed analysis of Tesla's market performance and future development trends, highlighting key findings across different regions and aspects of the company's operations. Tesla's market share in the US and China has experienced fluctuations, with notable declines in 2024, driven by increased competition and the end of EV subsidies in key markets like Europe. 
Despite these challenges, Tesla's technological innovations in battery efficiency and autonomous driving continue to be a significant competitive advantage, attracting strong interest from consumers, particularly among millennials.\n\nGovernment policies, including subsidies and tariffs, play a crucial role in shaping Tesla's market position globally. The end of EV subsidies in Europe has led to a decline in sales, while potential opportunities in emerging markets like India are influenced by favorable policy changes. However, the exact impact of Tesla's market share in these regions remains uncertain due to limited data.\n\nThe report also acknowledges several limitations and knowledge gaps, including source bias, data scarcity in emerging markets, and temporal bias in the analysis. Future research should aim to address these gaps by incorporating more academic sources, conducting primary research in underrepresented regions, and including historical data to provide a more comprehensive understanding of Tesla's market dynamics.\n\nIn conclusion, Tesla faces a complex and evolving market landscape, with challenges and opportunities that require strategic responses. By continuing to innovate and adapt to regional market conditions and government policies, Tesla can navigate these dynamics and maintain its position as a leader in the global EV market. However, the remaining uncertainties, such as the long-term effects of government policies and variations in consumer sentiment across different demographics, highlight the need for ongoing research and analysis to fully understand Tesla's future prospects.\n\n=== 最终研究报告 (来自Artifact) ===\n## Introduction\n\nTesla, Inc., has emerged as a pivotal player in the global electric vehicle (EV) market, spearheading the transition to sustainable transportation. 
The company's journey, however, has been marked by dynamic shifts in market dynamics, technological advancements, and varying consumer sentiments across different regions. This comprehensive research report delves into Tesla's market performance and future prospects, analyzing key findings from recent data on market share fluctuations, technological innovations, and consumer preferences. The focus is on understanding the intricate factors influencing Tesla's position in the EV industry, including regional market trends, the impact of government policies, and the evolving competitive landscape.\n\nThe report aims to provide a detailed examination of Tesla's market share trends in major regions such as the United States, China, and Europe, where the company has faced varying degrees of success and challenges. Furthermore, it explores Tesla's technological edge, particularly in battery efficiency and autonomous driving capabilities, which continue to shape its competitive advantage. Consumer sentiment, especially among different demographic groups like millennials, is also scrutinized to understand the broader appeal of Tesla's products. Additionally, the report assesses the significant role of government policies, including subsidies and tariffs, in influencing Tesla's market dynamics globally.\n\nBy synthesizing these key findings, this report seeks to offer a nuanced analysis of Tesla's current market position and future development trends, providing insights into the company's strategic responses to the challenges and opportunities it faces in the rapidly evolving EV market.\n\n## Tesla's Market Share Fluctuations\n\n### Finding 1: Tesla's Market Share in the US Electric Vehicle (EV) Market Has Experienced Fluctuations, with a Notable Decline in 2024\n\nTesla's market share in the US electric vehicle market has been subject to significant fluctuations over recent years, culminating in a notable decline in 2024. 
According to Cox Automotive, Tesla's US market share dropped to 4.2% in 2023, indicating a shift in the competitive landscape [特斯拉在美国电动汽车市场份额首次跌破50% - NE时代](https://m.ne-time.cn/newindexDetail/33817). This decline continued into 2024, with Tesla's sales in the US falling by 5.6%, marking the company's first annual decline since 2011 [Auto: For Tesla, India is a challenge as well as opportunity - Rediff.com](https://www.rediff.com/business/report/auto-for-tesa-india-is-a-challenge-as-well-as-opportunity/20250319.htm). This downturn is particularly significant as it contrasts with the overall growth in the US EV market, suggesting that Tesla's dominance is being challenged by emerging competitors.\n\nThe decline in Tesla's market share in the US can be attributed to several factors. Firstly, the increase in competition from other automakers, such as General Motors and Ford, has eroded Tesla's once-unassailable lead. These competitors have introduced new models and increased production capacity, which has diluted Tesla's market share. Secondly, the aging model lineup, particularly the Model S and Model X, may have contributed to waning consumer interest, as newer models from competitors offer fresh designs and features. Lastly, Tesla's pricing strategies and production challenges have also played a role, as potential buyers may have been deterred by price volatility and delivery delays.\n\nDespite the decline, Tesla remains a significant player in the US EV market, with its vehicles still commanding a substantial portion of total EV sales. The company's focus on technological innovation and brand loyalty continues to be a key factor in maintaining its position, even as it navigates these market fluctuations. 
However, Tesla must address these challenges head-on, potentially through the introduction of new models and improvements in production efficiency, to regain its footing and reverse the downward trend in its US market share.\n\n### Finding 2: In China, Tesla's Market Share Has Been Decreasing, Despite Record Sales in 2024\n\nIn China, Tesla has experienced a paradoxical situation where its market share has declined despite achieving record sales in 2024. According to data from bjx.com.cn, Tesla's market share in China dropped from 7.8% in 2023 to 6% in 2024, even as the company sold over 657,000 cars in the country during the same period [特斯拉汽车2024年在中国市场创销量纪录，但市场份额下降](https://m.bjx.com.cn/mnews/20250110/1422044.shtml). This decline in market share underscores the intensifying competition within the Chinese EV market, where local manufacturers are rapidly gaining ground.\n\nThe decrease in Tesla's market share in China can be attributed to several key factors. Firstly, the rise of domestic competitors, such as BYD and NIO, has put pressure on Tesla's position. These companies have not only increased their production capacities but also introduced new models that cater specifically to Chinese consumer preferences, offering competitive alternatives to Tesla's vehicles. Secondly, Tesla's pricing strategies have faced scrutiny, as the company has engaged in price wars to maintain sales volumes, which may have impacted its brand perception and profitability. Lastly, the lack of new model introductions and updates to existing models has been a point of contention, as consumers seek the latest technology and features.\n\nDespite these challenges, Tesla's record sales in China in 2024 indicate strong underlying demand for its vehicles. The company's focus on expanding its manufacturing capabilities in Shanghai and enhancing its charging infrastructure has been crucial in sustaining sales growth. 
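The share-versus-volume arithmetic behind this paradox can be checked directly from the figures cited above (657,000 Tesla deliveries at a 6% share in 2024, against a 7.8% share in 2023). A back-of-the-envelope sketch; the derived numbers are implied estimates, not reported data:

```python
# Back-of-the-envelope check of the China figures cited above.
# Inputs come from the report's sources (bjx.com.cn); the derived
# values are implied estimates, not reported data.

tesla_units_2024 = 657_000   # Tesla's record China sales in 2024
share_2024 = 0.06            # Tesla's 2024 market share in China
share_2023 = 0.078           # Tesla's 2023 market share in China

# Implied size of the overall Chinese EV market in 2024
market_2024 = tesla_units_2024 / share_2024
print(f"Implied 2024 China EV market: {market_2024:,.0f} units")  # ~10,950,000

# For Tesla's share to fall from 7.8% to 6% while its own sales grew,
# the total market must have grown at least (0.078 / 0.06) - 1 = 30%
# faster than Tesla did -- which is how record sales and a shrinking
# share can coexist.
min_relative_growth = share_2023 / share_2024 - 1
print(f"Market outgrew Tesla by at least: {min_relative_growth:.0%}")  # 30%
```

This only bounds how much faster the overall market grew than Tesla; an absolute growth rate would additionally require Tesla's 2023 China volume, which the sources cited here do not provide.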
However, to reverse the decline in market share, Tesla must continue to innovate and adapt to the unique dynamics of the Chinese market. This could involve introducing new models tailored to local preferences, enhancing its service network, and possibly adjusting pricing strategies to balance volume and profitability.\n\n### Finding 3: Tesla's Sales in Europe Have Declined Significantly, Influenced by the End of EV Subsidies and Increasing Competition\n\nTesla's sales in Europe have experienced a significant decline in 2024, influenced by the end of EV subsidies and increasing competition from other manufacturers. According to data from bnnbloomberg.ca, Tesla's European sales fell by 13% in 2024 [Tesla Sales Plunge 63% in EU's Second-Biggest EV Market](https://www.bnnbloomberg.ca/business/2025/02/03/tesla-sales-plunge-63-in-france-the-eus-second-biggest-ev-market/). This decline was particularly pronounced in Germany, where the cessation of EV subsidies in December 2023 had a profound impact on Tesla's sales, with a reported 41% drop [Tesla Sales Tumbled In Europe In 2024. But That's Just Part Of The ...](https://insideevs.com/news/747977/tesla-sales-down-europe-2024/).\n\nThe end of government incentives for electric vehicles in several European countries has been a major factor in the decline of Tesla's sales. These subsidies had previously encouraged consumers to opt for electric vehicles, and their removal has led to a decrease in overall EV demand, with Tesla being disproportionately affected due to its significant reliance on these markets. Additionally, increasing competition from European automakers, such as Volkswagen and Stellantis, has further challenged Tesla's position. 
These companies have introduced new EV models and expanded their production capacities, offering consumers more choices and potentially more appealing options.\n\nDespite these challenges, Tesla continues to hold a significant presence in the European market, with its vehicles still accounting for a notable portion of total EV sales. To mitigate the impact of the subsidy cuts and rising competition, Tesla has implemented strategies such as price adjustments and the introduction of new features through over-the-air updates. However, the company must continue to innovate and adapt to the changing market dynamics in Europe, potentially through the introduction of new models and enhanced marketing efforts to maintain and grow its market share.\n\n## Tesla's Technological Innovations\n\n### Finding 4: Tesla's Technological Innovations, Particularly in Battery Efficiency and Autonomous Driving, Continue to Be a Competitive Advantage\n\nTesla's technological innovations, particularly in battery efficiency and autonomous driving, have been pivotal in maintaining its competitive edge in the EV market. The company's advancements in battery technology have significantly improved the range and efficiency of its vehicles, addressing one of the primary concerns for EV consumers. According to naipo.com, Tesla's focus on battery technology has enabled the company to develop high-efficiency lithium-ion battery packs, which have enhanced the driving range and charging speed of its vehicles [北美智权报第151期：特斯拉2024：技术创新与市场挑战的展望](https://www.naipo.com/Portals/11/web_cn/Knowledge_Center/Industry_Insight/IPND_240124_1501.htm).\n\nIn addition to battery technology, Tesla's advancements in autonomous driving have positioned it as a leader in the industry. The company's Autopilot and Full Self-Driving (FSD) systems have attracted significant attention and interest from consumers and investors alike. 
These systems leverage over-the-air (OTA) software updates to continuously improve vehicle performance and add new features without the need for physical modifications. According to tradesmax.com, Tesla's focus on OTA updates and its autonomous driving capabilities have been key differentiators in the market [为什么特斯拉电动车会成功？ - 美股投资网](https://www.tradesmax.com/component/k2/item/20180-why-tesla-is-successful).\n\nTesla's commitment to technological innovation extends beyond just battery and autonomous driving technologies. The company has also made significant strides in other areas, such as electric motor efficiency and vehicle manufacturing processes. For instance, Tesla's use of silicon carbide (SiC) power devices in its inverters has led to improved energy conversion efficiency, resulting in a 5-10% increase in vehicle range [“平平无奇”特斯拉，身上全是“遥遥领先” - 新浪汽车](https://auto.sina.cn/zz/hy/2023-09-28/detail-imzpfekr3231284.d.html). Additionally, the company's adoption of one-piece casting technology has streamlined its manufacturing process, reducing complexity and costs.\n\nDespite these technological achievements, Tesla faces ongoing challenges in maintaining its lead. The rapid pace of innovation in the EV industry means that competitors are continually catching up, with companies like BYD and NIO making significant investments in battery and autonomous driving technologies. To sustain its competitive advantage, Tesla must continue to invest in research and development, focusing on breakthroughs that can further enhance the performance and appeal of its vehicles.\n\n## Consumer Sentiment Towards Tesla\n\n### Finding 5: Consumer Sentiment Towards Tesla Varies Significantly Across Demographics, with Millennials Showing Strong Interest in Tesla's Products\n\nConsumer sentiment towards Tesla varies significantly across different demographic groups, with millennials demonstrating particularly strong interest in the company's products. 
According to foxbusiness.com, the Tesla Model 3 was rated as the 'most satisfying' car for millennials, indicating a high level of satisfaction and loyalty among this demographic [Both millennials and baby boomers name Tesla Model 3 the 'most satisfying' car](https://www.foxbusiness.com/lifestyle/millenials-baby-boomers-tesla-model-3-most-satisfying-car). This sentiment is driven by Tesla's alignment with millennials' values, such as environmental consciousness and technological innovation.\n\nMillennials' preference for Tesla can be attributed to several factors. Firstly, the company's eco-friendly image resonates with this demographic, as they are more likely to prioritize sustainability and environmental impact in their purchasing decisions. Secondly, Tesla's focus on cutting-edge technology, including features like Autopilot and OTA updates, appeals to tech-savvy millennials who value innovation and connectivity in their vehicles. According to businessinsider.com, Tesla's Model 3 appeals to millennials due to its affordability and alignment with their values [Why Tesla's Model 3 appeals to millennials](https://www.businessinsider.com/why-tesla-model-3-appeals-to-millennials-2018-2).\n\nIn contrast, other demographic groups, such as baby boomers, have shown mixed sentiments towards Tesla. While some baby boomers also rated the Model 3 as the 'most satisfying' car, there is a broader range of opinions among this group, with some expressing concerns about the reliability and practicality of electric vehicles. According to fool.com, baby boomers' perceptions of Tesla are influenced by factors such as brand familiarity and traditional automotive preferences [Why Do Baby Boomers Hate Tesla?](https://www.fool.com/investing/2020/11/24/why-do-baby-boomers-hate-tesla/).\n\nUnderstanding these demographic variations in consumer sentiment is crucial for Tesla's marketing and product development strategies. 
The company must continue to tailor its messaging and offerings to different age groups, emphasizing the aspects of its brand and products that resonate most with each demographic. For millennials, this could involve highlighting Tesla's commitment to sustainability and technological advancement, while for baby boomers, focusing on reliability and performance may be more effective.\n\n## Impact of Government Policies on Tesla's Market Position\n\n### Finding 6: Government Policies, Such as Subsidies and Tariffs, Have a Significant Impact on Tesla's Market Position Globally\n\nGovernment policies, including subsidies and tariffs, have a significant impact on Tesla's market position globally, influencing the company's sales and competitiveness in different regions. In Europe, the end of EV subsidies in countries like Germany has led to a notable decline in Tesla's sales. According to insideevs.com, the cessation of Germany's EV subsidy program in December 2023 resulted in a 41% drop in Tesla's sales in the country [Tesla Sales Tumbled In Europe In 2024. But That's Just Part Of The ...](https://insideevs.com/news/747977/tesla-sales-down-europe-2024/). This highlights the importance of government incentives in driving EV adoption and Tesla's reliance on these markets.\n\nIn contrast, changes in government policies can also create opportunities for Tesla. In India, the government's decision to reduce import duties on EVs to 15% under certain conditions has opened up potential new markets for the company. According to restofworld.org, this policy change could facilitate Tesla's entry into the Indian market, which is expected to grow significantly in the coming years [Tesla looks to India at a moment of crisis - Rest of World](https://restofworld.org/2025/tesla-india-sales-stock-decline/). 
However, the exact impact on Tesla's market share in emerging markets like India remains uncertain due to limited data.\n\nTariffs also play a crucial role in shaping Tesla's market dynamics, particularly in China. The escalating tariff dispute between the US and China has affected Tesla's competitiveness in the country: although the company relies heavily on its Shanghai factory for local production, retaliatory Chinese tariffs on US-made vehicles have hit its imported models directly. According to cnn.com, Tesla stopped taking new orders in China for two imported, US-made models due to these tariffs, which could impact its overall sales in the region [Tesla stops taking new orders in China for two imported, US-made ...](https://www.cnn.com/2025/04/12/business/tesla-china-tariffs-musk/index.html).\n\nTo navigate these challenges and capitalize on opportunities, Tesla must adopt a flexible and strategic approach to government policies. This could involve lobbying for favorable policies in key markets, adjusting pricing strategies to mitigate the impact of subsidy cuts, and exploring new markets where government incentives are more favorable. By doing so, Tesla can maintain and enhance its global market position in the face of varying policy landscapes.\n\n## Scope and Limitations\n\nThis research report on Tesla's market analysis and future development trends is comprehensive, yet it is important to acknowledge its scope and limitations, which stem from the identified gaps in the data and methodology used.\n\n**Source Bias**: The majority of the sources utilized in this research are derived from web articles and social media platforms, which may introduce bias due to the potential for sensationalism or incomplete data. Academic sources, while included, are limited and often focus on specific aspects rather than providing a comprehensive market analysis. This reliance on non-academic sources could skew the findings and affect the reliability of the conclusions drawn [特斯拉电动汽车市场份额](WEB). 
To address this limitation, future research should incorporate more academic and industry reports to balance the data and cross-reference findings with official company statements and financial reports.\n\n**Data Scarcity**: There is a notable lack of detailed, up-to-date data on Tesla's market share in various regions, particularly in emerging markets like India and Southeast Asia. The available data often focuses on established markets such as the US and China, leaving gaps in understanding global market dynamics [特斯拉电动汽车市场份额](ACADEMIC). This scarcity hinders a complete analysis of Tesla's performance and potential in these regions. To overcome this, primary research or surveys in underrepresented regions could be conducted, and international market research databases could be utilized for more comprehensive data.\n\n**Temporal Bias**: The research results are heavily weighted towards recent data, which may overlook long-term trends and historical context that could provide deeper insights into Tesla's market position and future strategies. This temporal bias could lead to an incomplete understanding of the company's trajectory and its response to market changes over time [特斯拉电动汽车市场份额](X). To mitigate this, future studies should include historical data analysis to understand long-term trends and use time-series analysis to predict future market movements based on past performance.\n\n### Identified Knowledge Gaps\n\n**Tesla's Market Share in Emerging Markets**: Understanding Tesla's penetration and growth potential in emerging markets is crucial for a comprehensive market analysis. However, the current research lacks detailed data on these regions, limiting the ability to assess Tesla's global market strategy effectively [特斯拉电动汽车市场份额](WEB). 
Future research should prioritize collecting more data from these markets to fill this gap.\n\n**Consumer Sentiment in Different Demographics**: The sentiment analysis conducted in this report is broad and does not account for variations across different demographic groups beyond millennials and baby boomers. This limitation could influence Tesla's marketing and product development strategies, as understanding these variations is essential for targeted approaches [特斯拉电动汽车消费者反馈](X). Future studies should delve deeper into consumer sentiment across various demographics to provide a more nuanced understanding.\n\n**Impact of Government Policies on Tesla's Market Position**: Government policies, such as subsidies and tariffs, significantly affect Tesla's market position. However, the research does not delve into this aspect in detail, particularly in how these policies influence Tesla's long-term strategies and competitiveness [Effect of EV subsidies on Tesla sales in Europe](WEB). A more thorough analysis of the impact of government policies across different regions would enhance the understanding of Tesla's global market dynamics.\n\nBy acknowledging these limitations and addressing the identified knowledge gaps, future research can provide a more comprehensive and accurate analysis of Tesla's market position and future development trends.\n\n## Conclusion\n\nThis research report has provided a detailed analysis of Tesla's market performance and future development trends, highlighting key findings across different regions and aspects of the company's operations. Tesla's market share in the US and China has experienced fluctuations, with notable declines in 2024, driven by increased competition and the end of EV subsidies in key markets like Europe. 
Despite these challenges, Tesla's technological innovations in battery efficiency and autonomous driving continue to be a significant competitive advantage, attracting strong interest from consumers, particularly among millennials.\n\nGovernment policies, including subsidies and tariffs, play a crucial role in shaping Tesla's market position globally. The end of EV subsidies in Europe has led to a decline in sales, while potential opportunities in emerging markets like India are influenced by favorable policy changes. However, the exact impact on Tesla's market share in these regions remains uncertain due to limited data.\n\nThe report also acknowledges several limitations and knowledge gaps, including source bias, data scarcity in emerging markets, and temporal bias in the analysis. Future research should aim to address these gaps by incorporating more academic sources, conducting primary research in underrepresented regions, and including historical data to provide a more comprehensive understanding of Tesla's market dynamics.\n\nIn conclusion, Tesla faces a complex and evolving market landscape, with challenges and opportunities that require strategic responses. By continuing to innovate and adapt to regional market conditions and government policies, Tesla can navigate these dynamics and maintain its position as a leader in the global EV market. However, the remaining uncertainties, such as the long-term effects of government policies and variations in consumer sentiment across different demographics, highlight the need for ongoing research and analysis to fully understand Tesla's future prospects.\n流式任务处理完成。\n\n=== 示例完成 ===\n"
  },
  {
    "path": "super_agents/deep_research/a2a_adapter/run_server.py",
    "content": "# super_agents/deep_research/a2a_adapter/run_server.py\n\nimport os\nimport sys\nimport logging\nfrom pathlib import Path\n\n# 添加项目根目录到路径\ncurrent_script_path = Path(__file__).resolve()\nproject_root = current_script_path.parent.parent.parent.parent\nif str(project_root) not in sys.path:\n    sys.path.insert(0, str(project_root))\n\n# 导入环境变量\nfrom dotenv import load_dotenv\nload_dotenv()\n\n# 导入A2A适配器\nfrom super_agents.deep_research.a2a_adapter.setup import run_server\n\n# 配置日志\nlogging.basicConfig(level=logging.INFO)\nlogger = logging.getLogger(__name__)\n\ndef main():\n    \"\"\"\n    启动DeepResearch A2A服务器的主函数\n    \"\"\"\n    # 定义服务器配置\n    HOST = os.getenv(\"A2A_HOST\", \"127.0.0.1\")\n    PORT = int(os.getenv(\"A2A_PORT\", \"8000\"))\n    \n    print(f\"\\n=== 启动 DeepResearch A2A 服务器 ===\\n\")\n    print(f\"主机: {HOST}\")\n    print(f\"端口: {PORT}\")\n    print(\"-\" * 40)\n    \n    # 运行服务器\n    run_server(HOST, PORT)\n\nif __name__ == \"__main__\":\n    try:\n        main()\n    except KeyboardInterrupt:\n        print(\"\\n服务器已手动停止。\")\n    except Exception as e:\n        logger.error(f\"启动服务器时发生未处理的异常: {e}\", exc_info=True)"
  },
  {
    "path": "super_agents/deep_research/a2a_adapter/setup.py",
    "content": "# super_agents/deep_research/a2a_adapter/setup.py\n\nimport logging\nimport asyncio\nfrom typing import Dict, Any, Optional\n\n# 导入A2A相关组件\nfrom core.a2a.types import (\n    AgentCard, AgentCapabilities, AgentSkill, Task # Import Task for type hinting\n)\nfrom core.a2a.server.server import A2AServer\nfrom starlette.middleware.cors import CORSMiddleware\n\n# 导入DeepResearch适配器\nfrom super_agents.deep_research.a2a_adapter.deep_research_task_manager import DeepResearchTaskManager\n\n# --- Placeholder/Dummy Push Notification Sender ---\n# TODO: Replace this with your actual push notification sender implementation\n# Your real implementation should likely handle HTTP requests, errors, retries,\n# and potentially authentication challenges.\n# It needs to be importable, e.g., from core.a2a.server.push_notification_auth import PushNotificationSenderAuth\nclass DummyPushNotificationSender:\n    \"\"\"这是一个推送通知发送器的占位符/模拟实现，仅记录日志。\"\"\"\n    async def send_push_notification(self, url: str, data: dict):\n        \"\"\"\n        模拟发送推送通知。\n\n        Args:\n            url: 目标推送 URL.\n            data: 要发送的任务数据 (通常是 Task.model_dump()).\n        \"\"\"\n        task_id = data.get(\"id\", \"N/A\")\n        task_state = data.get(\"status\", {}).get(\"state\", \"N/A\")\n        logger.info(\n            f\"[DummyPushNotificationSender] SIMULATING push notification for task {task_id} \"\n            f\"(State: {task_state}) to URL: {url}\"\n        )\n        # 在这里添加实际的 HTTP POST 请求逻辑\n        # 例如:\n        # async with httpx.AsyncClient() as client:\n        #     try:\n        #         response = await client.post(url, json=data, timeout=10.0)\n        #         response.raise_for_status()\n        #         logger.info(f\"Push notification sent successfully for task {task_id}\")\n        #     except Exception as e:\n        #         logger.error(f\"Failed to send push notification for task {task_id} to {url}: {e}\")\n        await asyncio.sleep(0.01) # Simulate 
tiny async delay\n\n    async def verify_push_notification_url(self, url: str) -> bool:\n         \"\"\"\n         模拟验证推送通知URL（例如通过挑战请求）。\n         TODO: 实现真实的验证逻辑。\n         \"\"\"\n         logger.info(f\"[DummyPushNotificationSender] SIMULATING verification for URL: {url} - Returning True\")\n         return True # 假设总是验证成功\n# --- End of Placeholder ---\n\n\nlogger = logging.getLogger(__name__)\n\ndef setup_a2a_server(host: str = \"127.0.0.1\", port: int = 8000) -> A2AServer:\n    \"\"\"\n    设置并返回DeepResearch的A2A服务器实例 (启用推送通知支持)\n\n    Args:\n        host: 服务器主机地址\n        port: 服务器端口\n\n    Returns:\n        A2AServer: 配置好的A2A服务器实例\n    \"\"\"\n    print(\"\\n=== 配置 DeepResearch A2A 服务器 ===\\n\")\n\n    # 创建Agent卡片 (确保 pushNotifications=True)\n    agent_card = AgentCard(\n        name=\"DeepResearch Agent\",\n        description=\"一个强大的研究助手，能够执行深度研究并生成详细报告\",\n        url=f\"http://{host}:{port}/agent\", # 使用传入的 host/port 构建 URL\n        version=\"0.1.0\",\n        capabilities=AgentCapabilities(\n            streaming=True,           # Agent 支持流式\n            pushNotifications=True    # Agent *声明*支持推送通知\n        ),\n        skills=[\n            AgentSkill(\n                id=\"deep_research_skill\",\n                name=\"deep_research\",\n                description=\"执行深度研究并生成详细报告，包括搜索、分析和综合\",\n                inputModes=[\"text\"],\n                outputModes=[\"text\"]\n            )\n        ]\n        # 你可以在这里添加 provider 等可选字段\n        # provider=AgentProvider(organization=\"YourOrg\", url=\"http://yourorg.com\")\n    )\n\n    # --- 实例化 Push Notification Sender ---\n    # 使用上面定义的占位符实现。\n    # TODO: 当你有真实的实现时，替换下面这行\n    notification_sender = DummyPushNotificationSender()\n    logger.info(\"Initialized with DummyPushNotificationSender.\")\n    # --- 实例化结束 ---\n\n    # --- 创建任务管理器，并传入 notification_sender_auth ---\n    task_manager = DeepResearchTaskManager(\n        notification_sender_auth=notification_sender\n    )\n    # --- 创建结束 ---\n\n    # 
创建A2A服务器实例 (传入 host 和 port)\n    server = A2AServer(\n        host=host,\n        port=port,\n        agent_card=agent_card,\n        task_manager=task_manager\n    )\n    \n    # 添加CORS中间件支持\n    server.app.add_middleware(\n        CORSMiddleware,\n        allow_origins=[\"*\"],  # 允许所有前端域名访问，生产环境中应该限制为特定域名\n        allow_credentials=True,\n        allow_methods=[\"*\"],  # 允许所有HTTP方法\n        allow_headers=[\"*\"],  # 允许所有HTTP头\n    )\n    print(\"已添加CORS支持，允许来自所有域的请求\")\n\n    print(f\"DeepResearch A2A服务器实例已创建，监听地址 http://{host}:{port}\")\n    return server\n\n# 示例使用方法 (保持不变)\ndef run_server(host: str = \"127.0.0.1\", port: int = 8000):\n    \"\"\"\n    运行DeepResearch A2A服务器\n\n    Args:\n        host: 服务器主机地址\n        port: 服务器端口\n    \"\"\"\n    try:\n        # 设置服务器\n        server = setup_a2a_server(host, port)\n\n        # 启动服务器\n        print(f\"启动DeepResearch A2A服务器...\") # 移除重复地址信息\n        server.start()\n\n    except KeyboardInterrupt:\n        print(\"\\n服务器已手动停止。\")\n    except Exception as e:\n        logger.error(f\"启动服务器时发生未处理的异常: {e}\", exc_info=True)\n\nif __name__ == \"__main__\":\n    # 直接运行此文件时启动服务器\n    # 你可以从命令行参数或环境变量获取 host 和 port\n    import os  # 本模块顶部未导入 os，这里导入以便使用 os.getenv\n    run_host = os.getenv(\"A2A_HOST\", \"127.0.0.1\")\n    run_port = int(os.getenv(\"A2A_PORT\", \"8000\"))\n    run_server(host=run_host, port=run_port)"
  },
  {
    "path": "super_agents/deep_research/main.py",
    "content": "# main.py\nimport sys\nfrom pathlib import Path\nimport asyncio\nimport json\nimport os # <--- 导入 os 模块\nimport re\nimport time # <--- 确保导入 time (虽然 finalize_basic_research 中没用到，但 add_stream_update 可能需要)\nfrom datetime import datetime\nfrom typing import Literal, List, Dict, Any, Set # <--- 确保导入 List\n\n# --- OpenAI 错误处理 ---\ntry:\n    from openai import RateLimitError\nexcept ImportError:\n    # 如果用户没有安装 openai 包，定义一个基础异常类以便 except 块能工作\n    class RateLimitError(Exception):\n        pass\n\n# 1. 获取当前脚本文件的绝对路径对象\n#    Path(__file__) 获取当前脚本路径\n#    .resolve() 将其转换为绝对路径，并解析任何符号链接\ncurrent_script_path = Path(__file__).resolve()\nproject_root = current_script_path.parent\nwhile not (project_root / '.git').exists() and project_root.parent != project_root:\n    project_root = project_root.parent\nif not (project_root / '.git').exists():\n       # 如果找不到 .git，可能需要用其他标记或给出错误\n    raise FileNotFoundError(\"Could not determine project root based on .git directory.\")\n#    构建需要添加的路径 (例如 'src' 目录)\n#    根据你的实际情况，可能是项目根目录，或者根目录下的 'src', 'lib' 等\npath_to_add = project_root\n# 3. 将计算出的路径添加到 sys.path (如果它还不在里面的话)\n#    使用 str() 将 Path 对象转换为字符串，因为 sys.path 需要字符串\nif str(path_to_add) not in sys.path:\n    # insert(0, ...) 
表示优先搜索这个路径\n    sys.path.insert(0, str(path_to_add))\n\n# (可选) 打印出来确认一下\nprint(f\"Dynamically added to sys.path: {path_to_add}\")\n# print(sys.path)\n\n# --- LangGraph 和内部模块导入 ---\ntry:\n    from super_agents.deep_research.reason_graph.graph import app\n    from super_agents.deep_research.reason_graph.state import ResearchState\n    # 导入需要用到的 Pydantic 模型\n    from super_agents.deep_research.reason_graph.schemas import StreamUpdate, FinalSynthesisResult, KeyFinding\nexcept ImportError as e:\n    print(f\"Error importing graph components: {e}\")\n    print(\"Please ensure 'reason_graph' package and its modules (graph, state, schemas) exist.\")\n    exit(1)\n\n# --- 助手函数 ---\n\ndef slugify(text: str) -> str:\n    \"\"\"将文本转换为安全的文件名部分 (简化版).\"\"\"\n    if not text:\n        return \"no_topic\"\n    text = text.lower()\n    text = re.sub(r'\\s+', '_', text) # 空格替换为下划线\n    text = re.sub(r'[^\\w\\-]+', '', text) # 移除所有非字母、数字、下划线、连字符\n    text = text.strip('_')\n    # 限制文件名长度，避免过长\n    return text[:100] if text else \"sanitized_topic\"\n\n# --- 主研究函数 ---\n\nasync def run_research(topic: str, depth: Literal['basic', 'advanced'] = 'basic'):\n    \"\"\"执行研究图并处理输出和错误。\"\"\"\n    initial_state: ResearchState = {\n        \"topic\": topic,\n        \"depth\": depth,\n        \"research_plan\": None,\n        \"search_steps_planned\": [],\n        \"analysis_steps_planned\": [],\n        \"current_search_step_index\": 0,\n        \"current_analysis_step_index\": 0,\n        \"current_gap_search_index\": 0,\n        \"search_results\": [],\n        \"gap_analysis\": None,\n        \"additional_queries_planned\": [],\n        \"final_synthesis\": None,\n        \"final_report_markdown\": None, # 确保初始状态包含\n        \"stream_updates\": [],\n        \"completed_steps_count\": 0,\n        \"total_steps\": 0,\n    }\n\n    print(\"--- Starting Research Graph ---\")\n    print(f\"Topic: '{topic}'\")\n    print(f\"Depth: '{depth}'\")\n    print(\"-\" * 30)\n\n    processed_updates_count = 
0\n    config = {\"recursion_limit\": 100} # 保持递归限制\n\n    final_state = initial_state.copy() # 初始化 final_state\n    error_occurred: Exception | None = None # 用于标记是否有错误发生\n\n    # --- Streaming Execution with Error Handling ---\n    try:\n        async for current_state in app.astream(\n            initial_state,\n            config=config,\n            stream_mode=\"values\" # 使用 values 模式获取完整状态\n        ):\n            final_state = current_state # 持续更新 final_state 为最新状态\n\n            # 检查并打印新的 stream_updates\n            all_current_updates: List[StreamUpdate] = current_state.get(\"stream_updates\", [])\n            new_updates_count = len(all_current_updates) - processed_updates_count\n\n            if new_updates_count > 0:\n                newly_added_updates = all_current_updates[processed_updates_count:]\n                for update in newly_added_updates:\n                    try:\n                        # 尝试打印详细信息\n                        print(f\"--- STREAM UPDATE (ID: {update.data.id} | Status: {update.data.status}) ---\")\n                        # 使用 model_dump() (Pydantic V2) 而不是 dict()\n                        print(json.dumps(update.model_dump(), indent=2, default=str))\n                        print(\"-\" * 30)\n                    except AttributeError as e:\n                        # 处理可能的意外情况，比如列表中混入了非 StreamUpdate 对象\n                        print(f\"--- Error processing stream update (AttributeError): {e} ---\")\n                        print(f\"Problematic update data: {update}\")\n                        print(\"-\" * 30)\n                    except Exception as e: # 捕获其他可能的打印错误\n                        print(f\"--- Error printing stream update: {e} ---\")\n                        print(f\"Problematic update data: {update}\")\n                        print(\"-\" * 30)\n\n\n                # 更新已处理计数\n                processed_updates_count = len(all_current_updates)\n\n            # --- Optional Current State Summary ---\n            # 
可以取消注释以查看每步状态摘要\n            # print(f\"--- Current State Summary ---\")\n            # print(f\"  Search steps completed: {current_state.get('current_search_step_index', 0)}\")\n            # print(f\"  Analysis steps completed: {current_state.get('current_analysis_step_index', 0)}\")\n            # print(f\"  Total results so far: {len(current_state.get('search_results', []))}\")\n            # print(\"-\" * 30)\n\n\n    except RateLimitError as e: # 捕获特定的 OpenAI Quota 错误\n        error_occurred = e # 标记错误\n        print(\"\\n\" + \"=\"*40)\n        print(\"!!! OpenAI API Error: Insufficient Quota !!!\")\n        print(\"=\"*40)\n        print(\"The research process was stopped because your OpenAI account has exceeded its quota.\")\n        print(\"Please check your OpenAI plan and billing details.\")\n        print(f\"Original error message: {e}\")\n        print(\"Attempting to show partial results obtained before the error...\")\n    except Exception as e: # 捕获其他可能的意外错误\n         error_occurred = e\n         print(\"\\n\" + \"=\"*40)\n         print(\"!!! 
An Unexpected Error Occurred During Graph Execution !!!\")\n         print(\"=\"*40)\n         print(f\"Error type: {type(e).__name__}\")\n         print(f\"Error details: {e}\")\n         # 打印详细的 traceback 以便调试\n         import traceback\n         traceback.print_exc()\n         print(\"Attempting to show partial results obtained before the error...\")\n\n\n    # --- Process Final State ---\n    if error_occurred:\n         print(\"\\n--- Graph Execution INTERRUPTED ---\")\n    else:\n         print(\"\\n--- Graph Execution Finished ---\")\n\n    # 检查 final_state 是否有效\n    if not final_state or not isinstance(final_state, dict):\n         print(\"Error: Invalid or unavailable final state after execution.\")\n         return None\n\n    # --- Print Final State Summary (始终尝试打印) ---\n    print(\"\\n--- FINAL (Possibly Partial) RESEARCH STATE (Summary) ---\") # 调整标题\n    print(f\"Topic: {final_state.get('topic', 'N/A')}\")\n    print(f\"Depth: {final_state.get('depth', 'N/A')}\")\n    research_plan = final_state.get('research_plan')\n    if research_plan and hasattr(research_plan, 'search_queries') and hasattr(research_plan, 'required_analyses'):\n        print(\"\\nResearch Plan:\")\n        print(f\"- {len(research_plan.search_queries)} Search Queries Planned\")\n        print(f\"- {len(research_plan.required_analyses)} Analyses Planned\")\n    search_results = final_state.get('search_results', [])\n    print(f\"\\nTotal Search Results Collected: {len(search_results)}\")\n    gap_analysis = final_state.get('gap_analysis')\n    if gap_analysis and hasattr(gap_analysis, 'limitations') and hasattr(gap_analysis, 'knowledge_gaps'):\n        print(\"\\nGap Analysis:\")\n        print(f\"- {len(gap_analysis.limitations)} Limitations Identified\")\n        print(f\"- {len(gap_analysis.knowledge_gaps)} Knowledge Gaps Identified\")\n\n\n    # --- Save Final Synthesis Report (只有在没有错误且报告存在时才保存) ---\n    final_markdown = final_state.get('final_report_markdown')\n\n    if not 
error_occurred and final_markdown and isinstance(final_markdown, str) and \"Report Generation Failed\" not in final_markdown:\n        # 打印 Synthesis 摘要 (如果存在)\n        final_synthesis_data = final_state.get('final_synthesis')\n        if final_synthesis_data and hasattr(final_synthesis_data, 'key_findings') and hasattr(final_synthesis_data, 'remaining_uncertainties'):\n             print(\"\\nFinal Synthesis Summary:\")\n             print(f\"- {len(final_synthesis_data.key_findings)} Key Findings\")\n             print(f\"- {len(final_synthesis_data.remaining_uncertainties)} Remaining Uncertainties\")\n\n        print(\"\\n--- Saving Final Report to Markdown ---\")\n        try:\n            markdown_content = final_markdown\n            # 使用 .get 提供默认值以防 topic 丢失\n            topic_slug = slugify(final_state.get('topic', 'unknown_topic'))\n            timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n            filename = f\"research_report_{topic_slug}_{timestamp}.md\"\n\n            # --- 保存路径逻辑 ---\n            script_dir = os.path.dirname(os.path.abspath(__file__))\n            output_dir = os.path.join(script_dir, \"Output\")\n            os.makedirs(output_dir, exist_ok=True)\n            filepath = os.path.join(output_dir, filename)\n            # --- 路径逻辑结束 ---\n\n            with open(filepath, \"w\", encoding=\"utf-8\") as f:\n                f.write(markdown_content)\n            print(f\"Successfully saved report to: {filepath}\")\n\n        except Exception as e:\n            print(f\"Error saving report to Markdown: {e}\")\n\n    elif final_markdown and isinstance(final_markdown, str) and \"Report Generation Failed\" in final_markdown:\n         # 如果报告生成节点本身出错并返回了错误信息\n         print(\"\\n--- Final Report Generation Failed ---\")\n         # 只打印错误部分，避免打印整个 Markdown 错误模板\n         print(final_markdown.split('\\n\\n', 1)[-1]) # 尝试只打印 Error: ...\n         print(\"Report not saved.\")\n    elif error_occurred:\n         # 如果是因为 RateLimitError 
等原因中断\n         print(\"\\nFinal Report: Not generated due to execution error.\")\n    else:\n         # 正常结束但没有报告（例如 basic depth 或 synthesis 缺失）\n         print(\"\\nFinal Report: Not generated (flow did not reach or complete the report generation step, or synthesis was missing).\")\n\n\n    print(\"\\n--- END OF RESEARCH ---\")\n    return final_state\n\n# --- Main Execution Block ---\nasync def main():\n    # --- 用户输入 topic ---\n    topic = input(\"Please enter the research topic: \")\n    if not topic:\n        print(\"No topic entered. Exiting.\")\n        return\n    # --- (可选) 用户输入 depth ---\n    depth_input = input(\"Enter search depth (basic/advanced) [Default: advanced]: \").strip().lower()\n    depth: Literal['basic', 'advanced'] = 'basic' if depth_input == 'basic' else 'advanced'\n    # ---\n\n    await run_research(topic, depth=depth)\n\nif __name__ == \"__main__\":\n    # 运行 asyncio 事件循环\n    try:\n        asyncio.run(main())\n    except KeyboardInterrupt:\n        print(\"\\nResearch interrupted by user.\")"
  },
  {
    "path": "super_agents/deep_research/output/research_report_analyze_smartvalue_co_ltds_9417t_core_business_key_productsservices_eg_government_cloud_solutions_mo_20250418_125137.md",
    "content": "## Introduction\n\nSmartvalue Co Ltd (9417.T) stands as a significant player in Japan's IT landscape, focusing on cloud solutions and mobility services. The company's primary target markets include the public sector and mobility industries, where it provides innovative cloud-based platforms and mobility services tailored to meet the specific needs of these sectors. Smartvalue's business model is centered around leveraging cloud technology to address social issues, with a particular emphasis on regional information cloud business, cloud platform business, and mobility service business. The company's strategic approach involves a blend of steady income growth and aggressive pursuit of new business opportunities, aiming for significant increases in operating profit. This report delves into Smartvalue's core business operations, market strategy, financial performance, and technological capabilities, providing a comprehensive analysis based on the available data.\n\nThe analysis begins with an exploration of Smartvalue's key products and services, followed by an examination of its revenue mix, unique value proposition, and market strategy. We then discuss Smartvalue's target markets in Japan, its financial performance over the last five years, and its core technology base. Additionally, we assess the company's intellectual property, R&D capabilities, and key management personnel. The report also covers Smartvalue's major shareholders, corporate governance structure, and financial condition, including key KPIs and cash flow trends. Finally, we evaluate Smartvalue's customer segments, sales and marketing strategies, and potential risks such as customer concentration and legal proceedings. 
This comprehensive analysis aims to provide a detailed understanding of Smartvalue's position in the market and its potential for growth and synergy if acquired by a larger IT services firm.\n\n## Smartvalue Co Ltd's Core Business Focus\n\n### Finding 1: Smartvalue Co Ltd's core business focuses on cloud solutions and mobility services, targeting primarily the public sector and mobility markets in Japan.\n\nSmartvalue Co Ltd is deeply engaged in the provision of resolutions to social issues through its cloud service offerings. The company operates across multiple segments, including the regional information cloud business, cloud platform business, and mobility service business, all of which are geared towards leveraging cloud technology to enhance public sector operations and mobility services in Japan [Smartvalue Co Ltd, 9417:TYO profile - FT.com - Markets data](https://markets.ft.com/data/equities/tearsheet/profile?s=9417:TYO). This strategic focus aligns with the growing demand for digital transformation in these sectors, where efficiency and innovation are paramount.\n\nIn the public sector, Smartvalue provides cloud solutions such as SMART L-Gov, a platform designed to solve various regional issues, and GaaS (Government as a Service), an online government platform for administrative services. These solutions are part of Smartvalue's broader effort to streamline government operations and improve service delivery to citizens. The company also offers the Open-gov platform, which is tailored for smart cities and areas, further demonstrating its commitment to using technology to enhance public administration [Smartvalue Co., Ltd. 
(9417.T) Stock Price, News, Quote & History - Yahoo Finance](https://finance.yahoo.com/quote/9417.T/).\n\nIn the mobility sector, Smartvalue's offerings include the Kuruma Tsunagu Platform, an IoT platform specialized in mobility, and Kuruma Base, a telematic service for corporations that provides white labelled in-vehicle devices, management consoles, and smartphone apps for mobility sharing services such as car sharing and call centers. These services are aimed at companies engaged in Mobility-as-a-Service (MaaS) business, indicating Smartvalue's focus on leveraging technology to transform mobility solutions [Smartvalue - CB Insights](https://www.cbinsights.com/investor/smartvalue). The integration of these technologies into the mobility sector showcases Smartvalue's innovative approach to addressing transportation challenges in Japan.\n\nThe company's emphasis on cloud solutions and mobility services positions it well to capitalize on the digital transformation trends in Japan. By targeting the public sector and mobility markets, Smartvalue not only addresses current market needs but also positions itself for future growth as these sectors continue to evolve and demand more sophisticated technological solutions. This strategic focus on niche markets allows Smartvalue to differentiate itself from competitors and build a strong brand reputation in these areas.\n\n## Revenue Mix of Smartvalue Co Ltd\n\n### Finding 2: Smartvalue's revenue mix includes both recurring and non-recurring revenue, though specific details on the proportions are not available.\n\nSmartvalue Co Ltd's revenue model encompasses both recurring and non-recurring revenue streams, a common strategy among companies in the technology sector. Recurring revenue typically comes from subscription-based services or long-term contracts, providing a stable income source, while non-recurring revenue is generated from one-time sales or project-based services [Recurring revenue vs. 
non-recurring revenue - Calqulate](https://www.calqulate.io/blog/recurring-revenue-vs-non-recurring-revenue). Although specific data on the proportions of these revenue streams for Smartvalue is not publicly available, understanding the general dynamics of these revenue types is crucial for assessing the company's financial stability and growth potential.\n\nRecurring revenue is often considered a key indicator of a company's health, as it provides a predictable and stable income stream that can be used to forecast future earnings. For Smartvalue, recurring revenue likely stems from its cloud service subscriptions, such as those provided through SMART L-Gov and GaaS, which are essential for maintaining and expanding its customer base in the public sector. On the other hand, non-recurring revenue may be derived from one-time sales of mobility solutions or project-specific implementations, which can be significant but less predictable [SaaS Recurring Revenue: A Complete Guide - HubiFi](https://www.hubifi.com/blog/saas-recurring-revenue-guide).\n\nThe absence of detailed data on the revenue mix poses a challenge in fully evaluating Smartvalue's financial strategy. However, the company's financial statements indicate overall revenue figures, suggesting a robust business model that balances these two revenue types. For instance, the company's net sales for fiscal years 2023 and 2024 were reported at 3.87 billion and 3.81 billion yen, respectively, indicating a stable revenue stream despite the lack of breakdown into recurring and non-recurring components [Smartvalue Co., Ltd. 10-Year Income Statement, Financial Data 9417 - MarketScreener](https://www.marketscreener.com/quote/stock/SMARTVALUE-CO-LTD-22468068/finances-income-statement/).\n\nUnderstanding the balance between recurring and non-recurring revenue is essential for investors and potential acquirers, as it impacts the company's valuation and growth strategy. 
A higher proportion of recurring revenue would suggest a more stable business model with predictable cash flows, which is attractive for long-term investments. Conversely, a reliance on non-recurring revenue might indicate higher growth potential but also increased volatility. As such, further detailed financial disclosures from Smartvalue would be beneficial for a more comprehensive analysis.\n\n## Market Strategy of Smartvalue Co Ltd\n\n### Finding 3: Smartvalue's market strategy involves growth through steady income and aggressive pursuit of new business, aiming for increased operating profit.\n\nSmartvalue Co Ltd's market strategy is characterized by a dual approach of maintaining steady income growth while aggressively pursuing new business opportunities. This strategy is outlined in the company's revised second medium-term business plan, where it expresses a goal of achieving a significant increase in operating profit by leveraging both steady income and new business ventures [The Revised Second Medium-term Business Plan - Smartvalue_OM_MTBP2020.pdf](https://www.tokaitokyo.co.jp/japan-gateway/uploads/2020/12/Smartvalue_OM_MTBP2020.pdf). This approach reflects a balanced strategy that aims to ensure stability while capitalizing on growth opportunities.\n\nThe steady income component of Smartvalue's strategy likely stems from its recurring revenue streams, such as subscriptions to its cloud services. These stable revenue sources provide a foundation for the company's financial health, allowing it to invest in new initiatives without compromising its operational stability. The aggressive pursuit of new business, on the other hand, involves expanding into new markets, developing innovative solutions, and potentially acquiring or partnering with other companies to enhance its service offerings. 
This dual strategy is critical in a dynamic market where both stability and innovation are necessary for sustained growth.\n\nSmartvalue's focus on increasing operating profit through this strategy suggests a keen awareness of the need to optimize its cost structure and revenue streams. By combining steady income growth with the pursuit of new business, the company aims to achieve a more efficient and profitable operation. This approach is particularly relevant in the context of Japan's public sector and mobility markets, where digital transformation initiatives are driving demand for innovative cloud and mobility solutions.\n\nThe effectiveness of Smartvalue's market strategy can be assessed by examining its financial performance over time. For instance, the company's net sales figures for fiscal years 2023 and 2024 show a stable revenue base, while the stated goal of increasing operating profit indicates a focus on improving profitability. However, the lack of detailed financial performance data, including specific breakdowns of operating profit and the impact of new business ventures, limits the ability to fully evaluate the success of this strategy.\n\nIn summary, Smartvalue's market strategy of balancing steady income with the aggressive pursuit of new business aligns with its goal of increasing operating profit. This approach positions the company to capitalize on growth opportunities while maintaining financial stability, a critical factor for success in the competitive IT services market in Japan.\n\n## Target Markets of Smartvalue Co Ltd in Japan\n\n### Finding 4: Smartvalue's target markets in Japan include the public sector for cloud solutions and mobility services, with a focus on local governments and mobility-related businesses.\n\nSmartvalue Co Ltd has strategically targeted the public sector and mobility markets in Japan, focusing on local governments and businesses engaged in Mobility-as-a-Service (MaaS). 
The company's regional information cloud business is specifically tailored to provide software as a service (SaaS) solutions to local governments, public institutions, and other specific industries, leveraging an urban data center to enhance service delivery and efficiency [Smartvalue Co Ltd, 9417:TYO profile - FT.com - Markets data](https://markets.ft.com/data/equities/tearsheet/profile?s=9417:TYO). This focus on the public sector aligns with the increasing demand for digital transformation in government operations, where cloud solutions can significantly improve administrative processes and public service delivery.\n\nIn the mobility sector, Smartvalue's offerings such as the Kuruma Tsunagu Platform and Kuruma Base are designed to support companies involved in MaaS. These solutions provide IoT and telematic services that facilitate mobility sharing services, including car sharing and call centers. By targeting this niche market, Smartvalue positions itself as a key player in the evolving mobility landscape in Japan, where the demand for efficient and innovative transportation solutions is growing [Smartvalue - CB Insights](https://www.cbinsights.com/investor/smartvalue).\n\nThe public sector in Japan represents a significant market for Smartvalue, as local governments and public institutions seek to modernize their operations and improve service delivery. The size of this market is substantial, given the number of local governments and public entities across Japan, and its growth is driven by the ongoing digital transformation initiatives. Smartvalue's cloud solutions such as SMART L-Gov, GaaS, and the Open-gov platform are well-suited to meet these needs, offering scalable and efficient platforms that can be customized to various regional requirements [Smartvalue Co., Ltd. 
(9417.T) Stock Price, News, Quote & History - Yahoo Finance](https://finance.yahoo.com/quote/9417.T/).\n\nSimilarly, the mobility market in Japan is experiencing rapid growth, fueled by the rise of MaaS and the demand for sustainable transportation solutions. Smartvalue's mobility services, particularly the Kuruma Tsunagu Platform and Kuruma Base, cater to this market by providing advanced IoT and telematic services that enhance the efficiency and user experience of mobility sharing services. The company's focus on this sector positions it to capitalize on the growing demand for innovative mobility solutions in Japan.\n\nBy targeting these specific markets, Smartvalue not only addresses current market needs but also positions itself for future growth as these sectors continue to evolve. The company's strategic focus on the public sector and mobility markets allows it to differentiate itself from competitors and build a strong brand reputation in these areas. However, the success of this strategy depends on Smartvalue's ability to continuously innovate and adapt its offerings to meet the changing needs of these markets.\n\n## Financial Performance of Smartvalue Co Ltd Over the Last 5 Years\n\n### Finding 5: Smartvalue's financial performance over the last 5 years shows a decline in net income and mixed trends in other financial metrics.\n\nSmartvalue Co Ltd's financial performance over the last five years has exhibited a notable decline in net income alongside mixed trends in other key financial metrics. According to data from the Wall Street Journal, the company experienced a significant net income growth rate of -619.04% over this period, indicating a substantial decrease in profitability. Despite this, Smartvalue's sales or revenue remained relatively stable at 3.81 billion yen, suggesting that while the company has maintained its revenue base, it has struggled to convert this into net income [9417.JP | Smartvalue Co. Ltd. 
Financial Statements - WSJ](https://www.wsj.com/market-data/quotes/JP/XTKS/9417/financials).\n\nThe company's financial statements provide further insights into its performance. Over the five-year period, various financial ratios and trends have shown mixed results. For instance, the inventory turnover ratio increased from 10.51 to 25.44, indicating improved efficiency in managing inventory. However, the current ratio, a measure of short-term liquidity, declined from 2.84 to 1.85, suggesting a potential vulnerability in meeting short-term obligations. Similarly, the quick ratio, which excludes inventory from current assets, decreased from 2.4 to 1.61, further highlighting liquidity concerns [Financial Ratios Smartvalue Co., Ltd. - MarketScreener](https://www.marketscreener.com/quote/stock/SMARTVALUE-CO-LTD-22468068/finances-ratios/).\n\nDespite these challenges, Smartvalue's net sales figures for fiscal years 2023 and 2024 were reported at 3.87 billion and 3.81 billion yen, respectively, indicating a stable revenue stream. However, the company's net income for these years showed significant declines, with figures of -48 million and -348 million yen, respectively. This suggests that while Smartvalue has been able to maintain its revenue, it has faced difficulties in managing costs and achieving profitability [Smartvalue Co., Ltd. 10-Year Income Statement, Financial Data 9417 - MarketScreener](https://www.marketscreener.com/quote/stock/SMARTVALUE-CO-LTD-22468068/finances-income-statement/).\n\nThe decline in net income over the last five years raises concerns about Smartvalue's ability to maintain profitability and manage its cost structure effectively. The mixed trends in other financial metrics, such as the inventory turnover ratio and current ratio, indicate areas where the company has shown improvement and areas where it needs to focus on enhancing its financial health. 
These trends underscore the importance of a detailed financial analysis to understand the underlying factors driving these results.\n\nIn conclusion, Smartvalue's financial performance over the last five years has been characterized by a significant decline in net income and mixed trends in other financial metrics. While the company has maintained a stable revenue base, it has struggled to achieve profitability, highlighting the need for strategic initiatives to improve cost management and operational efficiency. A deeper analysis of the company's financial statements and key performance indicators would provide further insights into its financial health and potential areas for improvement.\n\n## Scope and Limitations\n\n### Scope and Limitations of the Research\n\nThe analysis of Smartvalue Co Ltd's core business, market strategy, and financial performance has been conducted using a variety of publicly available data sources. However, several limitations and gaps in the research must be acknowledged to provide a comprehensive understanding of the scope of this report.\n\n**Source Bias:** The majority of the data used in this analysis is derived from web sources, such as financial reports, company profiles, and market data from platforms like FT.com, Yahoo Finance, and MarketScreener. While these sources provide valuable insights into Smartvalue's operations and financial performance, they may not offer the depth and credibility required for a thorough analysis. The limited availability of academic sources directly addressing Smartvalue's core business aspects further compounds this issue [Smartvalue Co Ltd, 9417:TYO profile - FT.com - Markets data](https://markets.ft.com/data/equities/tearsheet/profile?s=9417:TYO). 
To mitigate this limitation, future research could incorporate more academic and industry-specific reports to gain deeper insights into Smartvalue's technology and market position.\n\n**Data Scarcity:** A significant challenge in this analysis has been the lack of detailed information on Smartvalue's core technology base, intellectual property, and R&D/innovation capabilities. The available data does not provide specific details on these critical areas, which are essential for assessing the company's competitive edge and future growth potential [Smartvalue - CB Insights](https://www.cbinsights.com/investor/smartvalue). To address this gap, targeted searches in patent databases and technology-focused publications could be conducted. Additionally, reaching out to industry experts or analysts might provide further insights into Smartvalue's technological capabilities.\n\n**Relevance:** Many of the search results retrieved during the research process were not directly relevant to Smartvalue Co Ltd (9417.T). Instead, they pertained to other companies with similar names or unrelated topics, diluting the quality of the information gathered [Smartvalue Co., Ltd. (9417.T) Stock Price, News, Quote & History - Yahoo Finance](https://finance.yahoo.com/quote/9417.T/). 
Refining search queries to include more specific identifiers, such as the stock ticker or company registration number, and using advanced search filters to exclude irrelevant results could improve the relevance of future research.\n\n**Identified Knowledge Gaps:** The research has identified several key knowledge gaps that limit the comprehensiveness of the analysis:\n\n- **Core Technology Base:** Detailed information on Smartvalue's cloud platform capabilities, software quality, and scalability is lacking, which is crucial for understanding its competitive edge.\n- **Intellectual Property:** There is a significant lack of data on Smartvalue's key intellectual property, which is essential for assessing its innovation and market protection.\n- **R&D and Innovation:** The research does not cover Smartvalue's R&D efforts and innovation capabilities, which are vital for future growth and competitive positioning.\n- **Financial Performance Details:** While some financial data is available, there is a gap in understanding the detailed financial performance, especially regarding key KPIs and cash flow trends.\n\nTo address these limitations and gaps, future research should aim to incorporate a broader range of sources, including academic publications, industry reports, and direct communication with company representatives or industry experts. This approach would provide a more comprehensive and accurate analysis of Smartvalue Co Ltd's operations and market position.\n\n## Conclusion\n\nSmartvalue Co Ltd's focus on cloud solutions and mobility services in Japan's public sector and mobility markets positions it as a key player in the IT services industry. The company's strategic approach of balancing steady income with the aggressive pursuit of new business opportunities aims to increase operating profit and drive growth. 
However, the analysis reveals a mixed financial performance over the last five years, with a significant decline in net income despite stable revenue figures. This highlights the need for Smartvalue to enhance its cost management and operational efficiency to improve profitability.\n\nSeveral uncertainties remain, including the lack of detailed information on Smartvalue's core technology base, intellectual property, and R&D capabilities. These gaps limit the ability to fully assess the company's competitive edge and potential for future growth. Additionally, the reliance on web-based sources and the scarcity of relevant data pose challenges to the depth and credibility of the analysis. Addressing these limitations through more comprehensive research and data collection would provide a clearer picture of Smartvalue's market position and strategic direction.\n\nIn conclusion, while Smartvalue Co Ltd demonstrates a strong market focus and strategic vision, its financial performance and the identified knowledge gaps suggest areas for improvement and further investigation. Future research should aim to fill these gaps and provide a more detailed understanding of Smartvalue's technological capabilities and financial health, ultimately contributing to a more robust assessment of its potential for growth and synergy within the IT services sector."
  },
  {
    "path": "super_agents/deep_research/output/research_report_id_like_a_thorough_analysis_of_li_auto_stock_including_summary_company_overview_key_metrics_performa_20250327_121800.md",
    "content": "## Introduction\n\nLI Auto Inc., a prominent player in the Chinese electric vehicle (EV) market, has garnered significant attention due to its focus on extended-range electric vehicles (EREVs). Founded in November 2015 by Li Xiang, the company has quickly established itself as a leader in the new energy vehicle (NEV) sector, particularly with its flagship model, the Li ONE. As the global automotive industry shifts towards sustainable transportation, understanding LI Auto's position and potential within this dynamic market is crucial for investors and industry analysts alike. This report aims to provide a comprehensive analysis of LI Auto's stock, encompassing a detailed company overview, financial performance, market sentiment, technical analysis, competitive positioning, and investment thesis.\n\nThe analysis will delve into LI Auto's financial metrics, including revenue trends and profit margins, to assess its financial health and growth trajectory. Additionally, we will explore market sentiment through analyst ratings and recent news impacts, as well as technical indicators to understand short-term trading dynamics. A comparative analysis against key competitors will highlight LI Auto's market share and financial standing, while a value investor's perspective will focus on intrinsic value, growth potential, and risk factors. Finally, a SWOT analysis will provide a structured framework for evaluating LI Auto's strategic position, culminating in tailored investment recommendations for different investor types.\n\n## Company Overview and Key Metrics\n\n### LI Auto Inc.: A Chinese Electric Vehicle Manufacturer\n\nLI Auto Inc. is a Chinese electric vehicle manufacturer that specializes in producing extended-range electric vehicles (EREVs). Founded in November 2015 by Li Xiang, the company is headquartered in Beijing, China. 
LI Auto's mission is to provide premium smart electric vehicles that cater to the growing demand for sustainable transportation in China. The company's focus on EREVs sets it apart in the competitive landscape of the electric vehicle market, as these vehicles combine the benefits of electric propulsion with the convenience of a gasoline engine for extended range [Li Auto Inc. (LI): history, ownership, mission, how it works & makes ...](https://dcfmodeling.com/blogs/history/li-history-mission-ownership).\n\nLI Auto's product lineup includes the Li ONE, a six-seater, large, premium plug-in electric SUV equipped with a range extension system and advanced smart vehicle solutions. The company began volume production of the Li ONE in November 2019 and has since expanded its product offerings to include other models such as the Li L series and Li MEGA. LI Auto's vehicles are designed to appeal to consumers seeking luxury and performance in the NEV market, positioning the company as a premium brand in the industry [Li Auto Inc (LI): A Deep Dive into Its Performance Metrics](https://finance.yahoo.com/news/li-auto-inc-li-deep-161354442.html).\n\nThe company's ownership structure reflects a balanced mix of founder ownership, institutional investors, and public float. As of the latest data, founder ownership accounts for 34.5% of the company, while institutional investors hold 41.3%, and the public float represents 24.2%. This ownership distribution suggests a strong alignment of interests between the company's leadership and its investors, which can be a positive indicator for long-term stability and growth [Li Auto Inc. (LI): history, ownership, mission, how it works & makes ...](https://dcfmodeling.com/blogs/history/li-history-mission-ownership).\n\n## Financial Performance\n\n### Revenue and Profitability in 2023\n\nIn 2023, LI Auto reported total revenue of $12.4 billion, all of which was derived from the Chinese market. 
This figure underscores the company's strong domestic presence and its ability to capture a significant share of the NEV market in China. The exclusive focus on the Chinese market highlights LI Auto's strategic decision to prioritize its home market before expanding internationally, a move that has allowed the company to establish a solid foundation for growth [Li Auto Inc. (LI) SWOT Analysis](https://dcfmodeling.com/products/li-swot-analysis).\n\nDespite the impressive revenue figures, detailed revenue trends and profit margins over time are not available, which limits the ability to assess LI Auto's financial performance comprehensively. However, the company's financial performance in 2023 indicates a robust growth trajectory, with a significant increase in revenue compared to previous years. This growth can be attributed to the increasing demand for NEVs in China and LI Auto's successful market penetration strategies [Li Auto Inc. (LI) SWOT Analysis](https://dcfmodeling.com/products/li-swot-analysis).\n\nThe absence of specific data on profit margins and other financial metrics over time is a notable gap in the analysis. Understanding these trends is essential for evaluating the company's profitability and financial health. Future research should focus on obtaining more detailed financial data to provide a more comprehensive assessment of LI Auto's financial performance [Li Auto Inc. (LI) SWOT Analysis](https://dcfmodeling.com/products/li-swot-analysis).\n\n## Product Lineup and Market Position\n\n### The Li ONE: Flagship Model and Range Capabilities\n\nLI Auto's flagship model, the Li ONE, is a testament to the company's commitment to innovation and performance in the electric vehicle sector. The Li ONE boasts a range of approximately 800 kilometers (about 497 miles) on a single charge when utilizing the range extender. 
This extended range positions LI Auto strongly against competitors in the electric vehicle market, whose vehicles typically offer shorter ranges [Li Auto SWOT Analysis – CanvasBusinessModel.com](https://canvasbusinessmodel.com/products/li-auto-swot-analysis?srsltid=AfmBOooD9nQETJeWHiano9p_PjceSDp39GjTXfWNmrWKCGuqXSSzNiSd).\n\nThe Li ONE's range capability is a significant competitive advantage, as it addresses one of the primary concerns of consumers considering electric vehicles: range anxiety. By offering a vehicle that can travel nearly 500 miles before refueling or recharging, LI Auto appeals to a broader segment of the market, including those who may be hesitant to switch to electric vehicles due to concerns about range limitations [Li Auto SWOT Analysis – CanvasBusinessModel.com](https://canvasbusinessmodel.com/products/li-auto-swot-analysis?srsltid=AfmBOooD9nQETJeWHiano9p_PjceSDp39GjTXfWNmrWKCGuqXSSzNiSd).\n\nThe success of the Li ONE has been instrumental in establishing LI Auto's reputation as a leader in the EREV segment. The model's sales performance and positive consumer feedback have contributed to the company's strong market position and its ability to command a premium price in the NEV market [Li Auto SWOT Analysis – CanvasBusinessModel.com](https://canvasbusinessmodel.com/products/li-auto-swot-analysis?srsltid=AfmBOooD9nQETJeWHiano9p_PjceSDp39GjTXfWNmrWKCGuqXSSzNiSd).\n\n## Market Focus and Competitive Positioning\n\n### Focus on Extended-Range Electric Vehicles\n\nLI Auto's strong focus on extended-range electric vehicles (EREVs) is a strategic decision that positions the company well against competitors in the electric vehicle sector. By offering vehicles that combine electric propulsion with a gasoline engine for extended range, LI Auto addresses the needs of consumers who value both sustainability and convenience. 
This focus on EREVs allows the company to differentiate itself in a crowded market and appeal to a broader customer base [Li Auto SWOT Analysis – CanvasBusinessModel.com](https://canvasbusinessmodel.com/products/li-auto-swot-analysis?srsltid=AfmBOooD9nQETJeWHiano9p_PjceSDp39GjTXfWNmrWKCGuqXSSzNiSd).\n\nThe company's emphasis on EREVs is reflected in its product lineup, which includes models like the Li ONE and the Li L series. These vehicles are designed to offer the benefits of electric propulsion, such as reduced emissions and lower operating costs, while also providing the flexibility of a gasoline engine for longer trips. This dual approach to vehicle design has resonated with consumers in the Chinese market, contributing to LI Auto's strong sales performance and market share [Li Auto SWOT Analysis – CanvasBusinessModel.com](https://canvasbusinessmodel.com/products/li-auto-swot-analysis?srsltid=AfmBOooD9nQETJeWHiano9p_PjceSDp39GjTXfWNmrWKCGuqXSSzNiSd).\n\nLI Auto's competitive positioning in the EREV segment is further enhanced by its commitment to technological innovation and product development. The company's R&D efforts focus on improving the efficiency and performance of its vehicles, ensuring that they remain competitive in a rapidly evolving market. This dedication to innovation is a key strength that sets LI Auto apart from its competitors and positions the company for long-term success [Li Auto SWOT Analysis – CanvasBusinessModel.com](https://canvasbusinessmodel.com/products/li-auto-swot-analysis?srsltid=AfmBOooD9nQETJeWHiano9p_PjceSDp39GjTXfWNmrWKCGuqXSSzNiSd).\n\n## Market Share and International Expansion\n\n### Market Share in the Chinese NEV Market\n\nLI Auto holds approximately 0.2% of the global new energy vehicle (NEV) market, with minimal international expansion. This figure reflects the company's strong focus on the domestic market, where it has achieved significant success. 
As of Q4 2023, LI Auto's operations were primarily concentrated in China, with international sales representing only 0.3% of total company revenue [Li Auto Inc. (LI) SWOT Analysis](https://dcfmodeling.com/products/li-swot-analysis).\n\nThe company's limited international presence is a notable weakness, as it restricts LI Auto's potential for growth and diversification. Expanding into international markets could provide new revenue streams and reduce the company's reliance on the Chinese market. However, the challenges associated with international expansion, such as regulatory compliance and market entry barriers, must be carefully considered [Li Auto Inc. (LI) SWOT Analysis](https://dcfmodeling.com/products/li-swot-analysis).\n\nDespite its limited global market share, LI Auto has demonstrated strong performance in the Chinese NEV market. The company's focus on the domestic market has allowed it to capture a significant share of the premium segment, particularly in the RMB200,000 and above NEV market. In September 2024, LI Auto accounted for over 17% of market share in this segment, ranking first among Chinese automotive brands [Li Auto Inc. September 2024 Delivery Update](https://ir.lixiang.com/news-releases/news-release-details/li-auto-inc-september-2024-delivery-update).\n\n## Analyst Forecasts and Stock Performance\n\n### Analyst Forecasts and Upside Potential\n\nAnalysts have forecasted a 13.43% upside potential for LI Auto's stock based on average price targets. This forecast reflects the positive sentiment among analysts regarding the company's future performance and growth prospects. The average target price of $230.57 suggests that analysts believe LI Auto's stock is undervalued and has room for appreciation [Li Auto (LI) Stock Forecast, Price Targets and Analysts Predictions](https://www.tipranks.com/stocks/li/forecast).\n\nThe positive analyst sentiment is supported by LI Auto's strong financial performance and market position. 
The company's revenue growth and profitability have exceeded market expectations, contributing to the bullish outlook among analysts. Additionally, LI Auto's focus on innovation and product development has been well-received by the investment community, further bolstering confidence in the company's future prospects [Li Auto (LI) Stock Forecast, Price Targets and Analysts Predictions](https://www.tipranks.com/stocks/li/forecast).\n\nHowever, it is important to note that analyst forecasts are subject to change based on new information and market conditions. Investors should consider a range of factors, including macroeconomic trends and industry developments, when evaluating the potential upside of LI Auto's stock [Li Auto (LI) Stock Forecast, Price Targets and Analysts Predictions](https://www.tipranks.com/stocks/li/forecast).\n\n## Stock Volatility and Market Sentiment\n\n### Stock Volatility and Recent News Impact\n\nLI Auto's stock has shown significant volatility, with recent news impacting its performance. The company's stock price has experienced fluctuations due to various factors, including quarterly earnings reports, product announcements, and market sentiment. For example, LI Auto's stock price was affected by the company's decision to lower its sales outlook for the first quarter of 2024, which was attributed to lower-than-expected sales of the Li Mega minivan [Li Auto Options Trading: A Deep Dive into Market Sentiment](https://www.benzinga.com/insights/options/25/03/44500474/li-auto-options-trading-a-deep-dive-into-market-sentiment).\n\nThe impact of recent news on LI Auto's stock highlights the importance of monitoring market sentiment and staying informed about developments that may affect the company's performance. 
Investors should consider the potential for volatility when making investment decisions and be prepared to adjust their strategies based on new information [Li Auto Options Trading: A Deep Dive into Market Sentiment](https://www.benzinga.com/insights/options/25/03/44500474/li-auto-options-trading-a-deep-dive-into-market-sentiment).\n\nIn addition to news events, market sentiment can be influenced by broader economic trends and industry developments. For example, the ongoing trade war between the U.S. and China has had a significant impact on the Chinese automotive industry, contributing to volatility in LI Auto's stock price. Investors should consider these macroeconomic factors when evaluating the company's stock and its potential for growth [Li Auto Options Trading: A Deep Dive into Market Sentiment](https://www.benzinga.com/insights/options/25/03/44500474/li-auto-options-trading-a-deep-dive-into-market-sentiment).\n\n## Strengths and Weaknesses\n\n### Technological Innovation and Product Appeal\n\nLI Auto's strengths include its technological innovation and product appeal, which have been key drivers of the company's success in the Chinese NEV market. The company's focus on developing advanced smart vehicle solutions and extended-range electric vehicles has resonated with consumers, contributing to strong sales performance and market share [Research on the Investment Value of LI Auto based on Multiple ...](https://drpress.org/ojs/index.php/HBEM/article/view/23726).\n\nThe Li ONE, with its impressive range capabilities and premium features, exemplifies LI Auto's commitment to innovation and product excellence. The vehicle's success has helped establish the company as a leader in the EREV segment and has attracted a loyal customer base. 
Additionally, LI Auto's ongoing R&D efforts ensure that the company remains at the forefront of technological advancements in the electric vehicle industry [Research on the Investment Value of LI Auto based on Multiple ...](https://drpress.org/ojs/index.php/HBEM/article/view/23726).\n\nHowever, LI Auto also faces challenges related to its limited international presence and production scale. The company's focus on the Chinese market has limited its potential for growth and diversification, as international expansion could provide new revenue streams and reduce reliance on the domestic market. Additionally, scaling production to meet growing demand is a critical challenge that LI Auto must address to maintain its competitive position [Research on the Investment Value of LI Auto based on Multiple ...](https://drpress.org/ojs/index.php/HBEM/article/view/23726).\n\n## Opportunities and Threats\n\n### International Expansion and Market Competition\n\nOpportunities for LI Auto include expanding into international markets, which could provide new growth avenues and reduce the company's reliance on the Chinese market. The global demand for electric vehicles is increasing, and LI Auto's focus on EREVs positions it well to capitalize on this trend. By entering new markets, the company could diversify its revenue streams and enhance its long-term growth potential [Li Auto SWOT Analysis – CanvasBusinessModel.com](https://canvasbusinessmodel.com/products/li-auto-swot-analysis?srsltid=AfmBOooD9nQETJeWHiano9p_PjceSDp39GjTXfWNmrWKCGuqXSSzNiSd).\n\nHowever, LI Auto also faces threats from intense competition in the Chinese EV market. The industry is characterized by rapid technological advancements and aggressive competition among established players and new entrants. 
Companies like NIO, XPeng, and Tesla are vying for market share, and LI Auto must continue to innovate and differentiate its products to maintain its competitive position [Li Auto SWOT Analysis – CanvasBusinessModel.com](https://canvasbusinessmodel.com/products/li-auto-swot-analysis?srsltid=AfmBOooD9nQETJeWHiano9p_PjceSDp39GjTXfWNmrWKCGuqXSSzNiSd).\n\nAdditionally, macroeconomic factors such as the ongoing trade war between the U.S. and China and economic slowdowns can impact the company's performance. These external factors can affect consumer demand and market sentiment, contributing to volatility in LI Auto's stock price. The company must navigate these challenges while continuing to focus on its core strengths and growth opportunities [Li Auto SWOT Analysis – CanvasBusinessModel.com](https://canvasbusinessmodel.com/products/li-auto-swot-analysis?srsltid=AfmBOooD9nQETJeWHiano9p_PjceSDp39GjTXfWNmrWKCGuqXSSzNiSd).\n\n## Scope and Limitations\n\n### Incomplete Financial Data\n\nThe research results lack comprehensive data on LI Auto's revenue trends, profit margins, balance sheet, and cash flow analysis. This absence severely limits the ability to assess the company's financial health and growth trajectory. Without detailed financial data, it is challenging to evaluate LI Auto's profitability, liquidity, and overall financial stability [Gap Analysis Summary].\n\nTo address this limitation, future research should focus on obtaining LI Auto's financial statements and reports from official sources such as the company's investor relations page or financial databases. Utilizing financial analysis tools to extract and analyze the required data would provide a more comprehensive understanding of the company's financial performance [Gap Analysis Summary].\n\n### Missing Market Sentiment Data\n\nThere is a significant gap in data regarding analyst ratings, sentiment indicators, and the impact of recent news on LI Auto's stock. 
This limits the understanding of market perceptions and potential influences on stock price. Comprehensive market sentiment data is essential for investors to make informed decisions and assess the company's stock performance [Gap Analysis Summary].\n\nTo address this gap, future research should search for recent analyst reports and sentiment analyses from reputable financial news outlets and platforms like Bloomberg, Reuters, or Morningstar. Monitoring social media and financial forums for real-time sentiment indicators would also provide valuable insights into market perceptions [Gap Analysis Summary].\n\n### Lack of Technical Analysis\n\nThe absence of data on price trends, technical indicators, and support/resistance levels for LI Auto's stock hinders the ability to perform a thorough technical analysis, which is crucial for short-term trading strategies. Technical analysis provides insights into market trends and potential price movements, helping traders make informed decisions [Gap Analysis Summary].\n\nTo address this limitation, future research should use financial charting tools and platforms like TradingView or Yahoo Finance to gather historical price data and apply technical indicators. Consulting technical analysis reports from financial analysts specializing in stock market trends would also enhance the understanding of LI Auto's stock performance [Gap Analysis Summary].\n\n### Inadequate Competitive Analysis\n\nThe research lacks a comparison of LI Auto's market share and financial metrics against its key competitors, which is essential for understanding its relative market position. A comprehensive competitive analysis would provide insights into LI Auto's strengths and weaknesses relative to other players in the electric vehicle market [Gap Analysis Summary].\n\nTo address this gap, future research should gather market share data from industry reports and market research firms like Statista or IBISWorld. 
Comparing financial metrics of LI Auto with those of competitors like NIO, XPeng, and Tesla using financial databases would provide a more complete picture of the company's competitive position [Gap Analysis Summary].\n\n### Insufficient Value Investor Analysis\n\nThere is no data on LI Auto's intrinsic value, growth potential, and risk factors, which are critical for value investors to make informed decisions. Understanding these factors is essential for assessing the company's long-term investment potential and identifying potential risks [Gap Analysis Summary].\n\nTo address this limitation, future research should use valuation models like DCF (Discounted Cash Flow) or comparable company analysis to estimate LI Auto's intrinsic value. Analyzing industry reports and economic forecasts to assess growth potential and identify risk factors would provide valuable insights for value investors [Gap Analysis Summary].\n\n### Limited Investment Thesis\n\nThe investment thesis is incomplete without a comprehensive SWOT analysis and tailored recommendations for different investor types, limiting the depth of investment insights. A detailed SWOT analysis and investment recommendations would provide a structured framework for evaluating LI Auto's strategic position and potential for growth [Gap Analysis Summary].\n\nTo address this gap, future research should conduct a detailed SWOT analysis using company reports, industry analyses, and expert opinions. Developing investment recommendations by considering different investor profiles and risk appetites would enhance the investment thesis and provide actionable insights for investors [Gap Analysis Summary].\n\n## Conclusion\n\nLI Auto Inc. has established itself as a significant player in the Chinese electric vehicle market, with a strong focus on extended-range electric vehicles (EREVs). 
The company's flagship model, the Li ONE, offers an impressive combined range of approximately 800 kilometers when its range extender is utilized, positioning LI Auto well against competitors in the electric vehicle sector. In 2023, LI Auto reported total revenue of $12.4 billion, exclusively from the Chinese market, highlighting its strong domestic presence [Li Auto Inc. (LI) SWOT Analysis](https://dcfmodeling.com/products/li-swot-analysis).\n\nAnalysts have forecasted a 13.43% upside potential for LI Auto's stock, reflecting positive sentiment regarding the company's future performance. However, the stock has shown significant volatility, with recent news impacting its performance. LI Auto's strengths include technological innovation and product appeal, while weaknesses include limited international presence and production scale. Opportunities for growth include expanding into international markets, while threats include intense competition in the Chinese EV market [Li Auto SWOT Analysis – CanvasBusinessModel.com](https://canvasbusinessmodel.com/products/li-auto-swot-analysis?srsltid=AfmBOooD9nQETJeWHiano9p_PjceSDp39GjTXfWNmrWKCGuqXSSzNiSd).\n\nDespite these insights, several uncertainties remain. Detailed revenue trends and profit margins over time are not available, limiting the ability to assess financial performance comprehensively. Specific data on LI Auto's balance sheet and cash flow statements are missing, which are crucial for evaluating financial health and liquidity. Comprehensive analyst ratings and sentiment indicators are not provided, which are essential for understanding market perceptions. Technical analysis data such as price trends, technical indicators, and support/resistance levels are absent, hindering short-term trading insights. Comparative analysis of LI Auto's market share and financial metrics against key competitors is not available, impacting the understanding of its competitive position. 
Data on LI Auto's intrinsic value, growth potential, and specific risk factors are missing, which are critical for value investors. A detailed SWOT analysis and tailored investment recommendations for different investor types are not fully developed, limiting the depth of the investment thesis [Remaining Uncertainties].\n\nIn conclusion, while LI Auto has demonstrated strong performance in the Chinese NEV market, addressing these uncertainties and gaps in data will be crucial for a more comprehensive analysis and informed investment decisions."
  },
  {
    "path": "super_agents/deep_research/output/research_report_id_like_a_thorough_analysis_of_xpev_stock_including_summary_company_overview_key_metrics_performance_20250327_105350.md",
    "content": "## Query\nI'd like a thorough analysis of XPEV stock, including: Summary: Company overview, key metrics, performance data and investment recommendations Financial Data: Revenue trends, profit margins, balance sheet and cash flow analysis Market Sentiment: Analyst ratings, sentiment indicators and news impact Technical Analysis: Price trends, technical indicators and support/resistance levels Compare Assets: Market share and financial metrics vs. key competitors Value Investor: Intrinsic value, growth potential and risk factors Investment Thesis: SWOT analysis and recommendations for different investor types.\n\n## Introduction\n\nXPeng Inc. (XPEV) is a prominent player in China's electric vehicle (EV) market, focusing on mid- to high-end segments within the passenger vehicle sector. Established in 2014, XPeng has rapidly expanded its presence both domestically and internationally, driven by a commitment to integrating cutting-edge technology and smart features into its vehicles. This report provides a comprehensive analysis of XPeng's stock, covering various aspects such as company overview, financial performance, market sentiment, technical analysis, competitive positioning, intrinsic value, and investment thesis. The analysis leverages a wide array of data points and insights, ensuring a thorough understanding of XPeng's current market position and future potential.\n\nAs the global EV market continues to grow, XPeng's strategic initiatives and performance metrics have become critical for investors and analysts alike. The company's significant growth in vehicle deliveries and revenue, as well as its financial challenges, provide a complex picture that requires careful examination. This report aims to dissect these elements to offer a detailed view of XPeng's operational and financial health, market sentiment, and investment prospects. 
Through this analysis, stakeholders can gain insights into the company's strengths, weaknesses, opportunities, and threats, as well as its potential for future growth and the associated risks.\n\n## Company Overview and Key Metrics\n\n### XPeng Inc.'s Business Focus and Market Position\n\nXPeng Inc. is strategically positioned within China's burgeoning EV market, targeting the mid- to high-end segment of passenger vehicles. The company's focus on smart electric vehicles (EVs) is underscored by its development of advanced driver-assistance systems (ADAS) and in-car intelligent operating systems. XPeng's product lineup includes the G3 SUV and the P7 sports sedan, which are designed to appeal to tech-savvy consumers seeking environmentally friendly and technologically advanced transportation solutions [XPeng Inc. (XPEV) Stock Price, News, Quote & History](https://finance.yahoo.com/quote/XPEV/). \n\nThe company's market position is further reinforced by its vertical integration strategy, allowing XPeng to control the development and production of core vehicle systems, including powertrain and electrical/electronic architecture. This approach not only enhances the user experience but also differentiates XPeng's offerings from competitors. Moreover, XPeng's expansion into European markets and the successful launch of its XNGP driving technology have contributed to its growing global presence [XPeng Inc. (XPEV) Stock Price, News, Quote & History](https://finance.yahoo.com/quote/XPEV/).\n\n### Key Performance Metrics\n\nXPeng's performance metrics provide a quantitative measure of its growth and operational efficiency. As of the latest data, the company boasts a market capitalization of approximately $19.88 billion, reflecting its significant scale within the EV industry [XPeng Inc. (XPEV) Valuation Measures & Financial Statistics](https://finance.yahoo.com/quote/XPEV/key-statistics/). 
The enterprise value stands at $17.45 billion, reflecting the company's market capitalization plus debt, net of cash and equivalents.\n\nRevenue per share for the trailing twelve months (TTM) is reported at $43.21, with a quarterly revenue growth of 18.40% year-over-year. These figures highlight XPeng's robust growth trajectory, particularly in the context of the competitive EV market [XPeng Inc. (XPEV) Valuation Measures & Financial Statistics](https://finance.yahoo.com/quote/XPEV/key-statistics/). \n\nHowever, despite the revenue growth, XPeng's gross profit for the TTM is $5.85 billion, which translates to a gross margin of only 1.5%. This low margin is indicative of the challenges the company faces in achieving profitability, largely due to high production costs and the competitive pricing pressures within the EV market [Xpeng Inc Annual Gross Margin Trends, Business Profitability](https://csimarket.com/stocks/singleProfitabilityRatiosy.php?code=XPEV&gro).\n\n### Investment Recommendations and Analyst Insights\n\nInvestment recommendations for XPeng stock are mixed, reflecting the nuanced view of analysts on the company's future performance. The Zacks Investment Research report provides a detailed analysis of XPeng's vital statistics, including earnings and sales charts, which are crucial for understanding the company's financial health and growth prospects [XPEV : XPeng Key Company Metrics & Non-finance Metrics - Zacks](https://www.zacks.com/stock/research/XPEV/key-company-metrics).\n\nReuters' analysis of XPeng's key metrics further emphasizes the company's financial strength and management effectiveness, providing a comprehensive overview of its financial position [XPEV.N - | Stock Price & Latest News - Reuters](https://www.reuters.com/markets/companies/XPEV.N/key-metrics/management-effectiveness). 
These insights are essential for investors looking to evaluate XPeng's potential as an investment opportunity.\n\nIn summary, XPeng Inc.'s focus on smart EVs, coupled with its vertical integration strategy, positions it as a formidable player in the EV market. However, the company's financial performance, characterized by high revenue growth but low profit margins, presents challenges that investors must consider. The mixed analyst recommendations further underscore the need for a nuanced approach to investing in XPeng stock.\n\n## Financial Performance\n\n### Revenue Trends\n\nXPeng Inc. has demonstrated significant growth in its revenue over the past few years, aligning with its ambitious expansion plans and increasing demand for electric vehicles. In 2023, XPeng reported a total revenue of RMB30.68 billion, equivalent to approximately $4.32 billion [XPeng, Inc. ADR (XPEV) Financial Statements - Cash Flow - TipRanks](https://www.tipranks.com/stocks/xpev/financials). This figure represents a substantial increase from previous years, driven by a surge in vehicle deliveries and strategic market expansions.\n\nThe company's revenue growth is particularly notable in the first quarter of 2025, for which XPeng projected deliveries of up to 93,000 units, a year-over-year increase of over 300%. This impressive delivery growth aligns with the company's robust revenue growth guidance, further boosting investor confidence [XPeng Inc. (XPEV) Latest Stock News & Headlines - Yahoo Finance](https://finance.yahoo.com/quote/XPEV/news/). \n\nDespite these achievements, XPeng's revenue growth has been accompanied by a significant increase in operating expenses, which have outpaced revenue growth, leading to challenges in achieving profitability. The company's quarterly revenue growth rate of 18.40% year-over-year for the most recent period further underscores its strong performance in the market [XPeng Inc. 
(XPEV) Valuation Measures & Financial Statistics](https://finance.yahoo.com/quote/XPEV/key-statistics/).\n\n### Profit Margins\n\nXPeng's profit margins remain a critical area of concern for investors and analysts. The company's gross margin for 2023 stood at a meager 1.5%, reflecting the high costs associated with producing electric vehicles and the competitive pricing pressures in the market [Xpeng Inc Annual Gross Margin Trends, Business Profitability](https://csimarket.com/stocks/singleProfitabilityRatiosy.php?code=XPEV&gro). This low margin is indicative of the challenges XPeng faces in achieving profitability, as the costs of materials, labor, and overhead continue to rise.\n\nHistorical data on XPeng's net profit margin reveals a consistent trend of negative margins, with the latest figures showing a net margin of -15.54% as of September 30, 2024 [XPeng Net Profit Margin 2020-2024 | XPEV - Macrotrends](https://macrotrends.net/stocks/charts/XPEV/xpeng/net-profit-margin). This negative net margin is a reflection of the company's inability to translate its revenue growth into profits, primarily due to high operating costs and the competitive nature of the EV market.\n\n### Balance Sheet Analysis\n\nXPeng's balance sheet provides insights into its financial health and stability. As of the fourth quarter of 2024, the company's total assets stood at $11.33 billion, marking a 4.01% increase from the previous quarter. Conversely, total liabilities increased by 11.32% to $7.05 billion during the same period [XPeng Inc. Balance Sheet – NYSE:XPEV - TradingView](https://www.tradingview.com/symbols/NYSE-XPEV/financials-balance-sheet/). \n\nThe rise in liabilities, particularly in the context of a high cash burn rate, raises concerns about XPeng's long-term financial sustainability. 
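As a rough sanity check on balance-sheet figures like these, a debt-to-equity ratio can be derived from total assets and total liabilities alone. The sketch below is illustrative only: the function name is ours, and treating all liabilities as debt is a simplification, since published D/E ratios usually count interest-bearing debt only, so results will differ from reported figures.

```python
def debt_to_equity(total_liabilities: float, total_assets: float) -> float:
    """Debt-to-equity approximated as total liabilities / shareholders' equity.

    Simplification: treats all liabilities as debt; published ratios
    usually use interest-bearing debt only, so values will differ.
    """
    equity = total_assets - total_liabilities
    if equity <= 0:
        raise ValueError("non-positive shareholders' equity")
    return total_liabilities / equity

# XPeng's Q4 2024 balance-sheet figures quoted above (USD billions)
print(f"approx. D/E = {debt_to_equity(7.05, 11.33):.2f}")
```

On the quoted Q4 2024 figures this approximation comes out near 1.65, well above the 0.73 reported for 2023, which is expected given the broader definition of debt used here.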
The company's debt-to-equity ratio, which has fluctuated over time, stood at 2.01 in 2020 but decreased to 0.73 by 2023, indicating a more balanced approach to financing in recent years [Study of Xpeng Automotive's Development Under China's Carbon](https://www.atlantis-press.com/article/125985530.pdf). However, the increasing liabilities suggest that XPeng may need to manage its debt levels carefully to maintain financial stability.\n\n### Cash Flow Analysis\n\nXPeng's cash flow statement further illuminates its financial performance, highlighting the company's cash burn rate and its ability to fund operations. In 2023, XPeng reported an operating income of -¥10.89 billion, a significant decrease from the previous year's -¥8.71 billion [XPeng, Inc. ADR (XPEV) Financial Statements - Cash Flow - TipRanks](https://www.tipranks.com/stocks/xpev/financials). This negative operating income reflects the company's high operational costs and the challenges it faces in achieving positive cash flow.\n\nThe company's cash flow from operating activities has been consistently negative, with a reported cash outflow of -$0.83 billion for the trailing twelve months as of September 30, 2024 [XPeng Net Profit Margin 2020-2024 | XPEV - Macrotrends](https://macrotrends.net/stocks/charts/XPEV/xpeng/net-profit-margin). This negative cash flow from operations is a significant concern, as it indicates that XPeng is not generating enough cash to cover its operational expenses, relying instead on external financing to sustain its growth.\n\nIn summary, XPeng's financial performance is characterized by strong revenue growth but persistent challenges with profitability and cash flow management. 
The company's low profit margins and negative cash flow from operations highlight the need for strategic cost management and operational efficiency improvements to ensure long-term financial sustainability.\n\n## Market Sentiment\n\n### Analyst Ratings\n\nThe market sentiment surrounding XPeng Inc.'s stock is characterized by a mix of optimism and caution, as reflected in the analyst ratings and price targets. According to TipRanks, XPeng has received a range of ratings in the current month, with 10 Buy ratings, 6 Hold ratings, and 2 Sell ratings. The average analyst price target over the past three months is $23.74 [XPeng, Inc. ADR (XPEV) Stock Forecast & Price Target - TipRanks](https://www.tipranks.com/stocks/xpev/forecast). This diversity in ratings suggests a lack of consensus among analysts, reflecting the complex nature of XPeng's market position and future prospects.\n\nMarketBeat reports a similar consensus, assigning XPeng a 'Hold' rating with an average rating score of 2.46, based on 5 buy ratings, 6 hold ratings, and 1 sell rating [XPeng (XPEV) Stock Price, News & Analysis - MarketBeat](https://www.marketbeat.com/stocks/NYSE/XPEV/). The 'Hold' consensus indicates that while some analysts see potential for growth, others are more cautious, possibly due to concerns over the company's profitability and cash flow challenges.\n\n### Sentiment Indicators\n\nSentiment indicators provide additional insights into the market's perception of XPeng's stock. The stock has experienced significant volatility, with a 91% appreciation over the last quarter [XPeng (XPEV) Stock Price, News & Analysis - MarketBeat](https://www.marketbeat.com/stocks/NYSE/XPEV/). This volatility is indicative of the dynamic nature of the EV market and the impact of various factors, including corporate performance and macroeconomic trends, on investor sentiment.\n\nRecent news has played a crucial role in shaping market sentiment. 
For instance, XPeng's announcement of expected Q1 2025 vehicle deliveries up to 93,000 units, representing an over 300% year-over-year increase, has bolstered investor confidence [XPeng Inc. (XPEV) Latest Stock News & Headlines - Yahoo Finance](https://finance.yahoo.com/quote/XPEV/news/). Additionally, the company's expansion into European markets and the successful launch of the XNGP driving technology have been viewed positively by investors, contributing to the stock's recent appreciation.\n\nHowever, the absence of recent news impact data from X/Twitter limits the ability to fully capture the immediate sentiment shifts influenced by real-time news and social media discussions. This gap in data suggests that while positive corporate developments have driven stock price increases, the full extent of news-driven sentiment fluctuations remains uncertain.\n\n### News Impact\n\nThe impact of news on XPeng's stock performance is evident in the company's quarterly performance reports and strategic announcements. XPeng's strong Q4 and FY2024 financial results, with significant growth in deliveries and revenues, have been key drivers of positive sentiment [XPEV - Xpeng Inc Latest Stock News & Market Updates](https://www.stocktitan.net/news/XPEV/). These results demonstrate the company's ability to execute its growth strategy effectively, reinforcing investor confidence.\n\nConversely, the broader economic environment and regulatory changes can also influence market sentiment. For instance, the EV market in China experienced a meaningful decrease in sales during the first quarter of 2024, which may have contributed to some of the caution reflected in analyst ratings [XPeng - XPEV - Stock Price & News | The Motley Fool](https://www.fool.com/quote/nyse/xpev/). Such market dynamics highlight the importance of considering both company-specific news and broader industry trends when assessing sentiment.\n\nIn conclusion, the market sentiment for XPeng Inc. 
is characterized by a mix of optimism and caution, driven by the company's strong growth in vehicle deliveries and revenue, alongside concerns over profitability and cash flow. The lack of real-time news impact data from X/Twitter represents a significant gap in understanding the full extent of sentiment fluctuations, underscoring the need for comprehensive data sources to capture the dynamic nature of market sentiment.\n\n## Technical Analysis\n\n### Price Trends\n\nXPeng Inc.'s stock has exhibited a moderately bearish trend in recent analyses, despite experiencing buying pressure, which is generally a positive indicator [XPEV Stock Price Chart Technical Analysis - Financhill](https://financhill.com/stock-price-chart/xpev-technical-analysis). The stock's price appreciation of 91% over the last quarter underscores the significant volatility it has experienced, reflecting both the dynamic nature of the EV market and the impact of various corporate developments on investor sentiment [XPeng (XPEV) Stock Price, News & Analysis - MarketBeat](https://www.marketbeat.com/stocks/NYSE/XPEV/).\n\nThe current stock price of XPeng is $20.72, with a support level identified at $21.79 and a resistance level at $23.73 [XPENG INC - ADR (XPEV) Stock Price, Quote, News and Overview](https://www.chartmill.com/stock/quote/XPEV/profile). These support and resistance levels are critical for traders to monitor, as they can influence short-term price movements and trading decisions.\n\n### Technical Indicators\n\nTechnical analysis of XPeng's stock is limited by incomplete data on key indicators, which affects the depth and reliability of the analysis. The Moving Average Convergence Divergence (MACD) stands at 1.92, and the Relative Strength Index (RSI) is at 59.50, according to the latest available data [XPeng, Inc. ADR (XPEV) Technical Analysis - TipRanks.com](https://www.tipranks.com/stocks/xpev/technical-analysis). 
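As a side note on how an RSI reading like the 59.50 cited above is derived: RSI compares average gains to average losses over a lookback window, conventionally 14 periods. The sketch below is a minimal illustrative computation using the simple-average variant (Wilder smoothing omitted) and entirely made-up prices, not actual XPEV market data:

```python
# Simple-average RSI (Wilder smoothing omitted for brevity).
# The prices fed to this function are hypothetical, not XPEV data.
def rsi(prices, period=14):
    gains, losses = [], []
    for prev, curr in zip(prices, prices[1:]):
        change = curr - prev
        gains.append(max(change, 0.0))    # upward moves
        losses.append(max(-change, 0.0))  # downward moves, as positives
    avg_gain = sum(gains[-period:]) / period
    avg_loss = sum(losses[-period:]) / period
    if avg_loss == 0:
        return 100.0  # no losses in the window: maximally overbought
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)
```

A reading above 70 is conventionally treated as overbought and below 30 as oversold, which is why a mid-range value such as 59.50 is typically read as moderate upward momentum rather than an extreme.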
These indicators provide insights into the stock's momentum and potential overbought or oversold conditions, but the lack of complete data on other key indicators, such as Stochastic %K, Commodity Channel Index (CCI), and Average Directional Index (ADX), limits the ability to draw comprehensive conclusions [Technical Analysis of XPeng Inc. (NYSE:XPEV) - TradingView](https://www.tradingview.com/symbols/NYSE-XPEV/technicals/).\n\nThe absence of complete data on these technical indicators represents a significant gap in the analysis, as they are crucial for making informed trading decisions. For instance, the Stochastic RSI, Williams Percent Range, and Bull Bear Power indicators are not available, which could affect the accuracy of trend predictions and the identification of potential entry and exit points for traders [Technical Analysis of XPeng Inc. (NYSE:XPEV) - TradingView](https://www.tradingview.com/symbols/NYSE-XPEV/technicals/).\n\n### Support and Resistance Levels\n\nThe identified support and resistance levels for XPeng's stock are essential for understanding potential price movements. The stock has a short-term support level at $22.34 and a resistance level at $22.61, which are valid for intraday trading [Xpeng Inc ADR XPEV Support Resistance charts](https://munafasutra.com/nyse/ma/XPEV). Additionally, a more significant support level below the current price is at $18.04, with a resistance level above at $24.91 [XPEV $XPEV Stock Charts, Analysis, Trend, XPeng Inc ADR](https://www.stockconsultant.com/consultnow/basicplus.cgi?symbol=XPEV).\n\nThese levels are critical for traders to monitor, as they can influence trading strategies and decision-making. 
The support levels represent potential buying opportunities, while the resistance levels indicate points where selling pressure may increase, potentially leading to a price reversal.\n\nIn summary, XPeng's stock exhibits a moderately bearish trend with buying pressure, but the technical analysis is limited by incomplete data on key indicators. The identified support and resistance levels provide critical insights for traders, but the lack of comprehensive technical data underscores the need for more detailed analysis to make informed trading decisions.\n\n## Comparative Analysis\n\n### Market Share\n\nXPeng Inc. holds a modest 0.21% market share in the global electric vehicle (EV) market as of Q4 2023, significantly lower than its competitors such as Toyota, which commands a 14.20% market share, and General Motors with an 8.63% market share [Xpeng Inc Market share relative to its competitors, as of Q4 2023](https://csimarket.com/stocks/competitionSEG2.php?code=XPEV). This disparity highlights XPeng's relatively small footprint in the global market, despite its significant growth in vehicle deliveries and revenue within China.\n\nThe company's market share has shown a slight increase from 0.19% in Q3 2023 to 0.21% in Q4 2023, indicating progress in capturing a larger portion of the market. However, XPeng's market share remains dwarfed by established players, underscoring the challenges it faces in expanding its global presence [Xpeng Inc Market share relative to its competitors, as of Q4 2023](https://csimarket.com/stocks/competitionSEG2.php?code=XPEV).\n\n### Financial Metrics vs. Competitors\n\nWhen comparing XPeng's financial metrics to those of its key competitors, several notable differences emerge. XPeng's revenue for 2023 stood at RMB30.68 billion ($4.32 billion), which is significantly lower than the revenues of larger competitors like Toyota and General Motors [XPeng, Inc. 
ADR (XPEV) Financial Statements - Cash Flow - TipRanks](https://www.tipranks.com/stocks/xpev/financials). For instance, Toyota reported a revenue of $275.3 billion in the same year, highlighting the vast scale difference between XPeng and established automotive giants [Xpeng Inc Market share relative to its competitors, as of Q4 2023](https://csimarket.com/stocks/competitionSEG2.php?code=XPEV).\n\nXPeng's gross margin for 2023 was reported at 1.5%, which is considerably lower than that of its competitors. For example, Toyota's gross margin for the same period was around 18%, reflecting a more efficient cost structure and higher profitability [Xpeng Inc Annual Gross Margin Trends, Business Profitability](https://csimarket.com/stocks/singleProfitabilityRatiosy.php?code=XPEV&gro). This comparison underscores XPeng's challenges in achieving profitability, particularly in the context of high production costs and competitive pricing pressures within the EV market.\n\n### Competitive Positioning\n\nXPeng's competitive positioning within the EV market is characterized by its focus on mid- to high-end segments and its emphasis on integrating cutting-edge technology into its vehicles. The company's vertical integration strategy allows it to develop core vehicle systems in-house, enhancing its ability to differentiate its offerings and optimize the user experience [Xpeng SWOT Analysis: Free PPT Template and In-Depth Insights](https://www.strategypunk.com/xpeng-swot-analysis-free-ppt-template-and-in-depth-insights-free-file/).\n\nDespite its technological strengths, XPeng faces significant competition from both domestic and international players. In China, competitors like NIO and Li Auto are also vying for market share within the premium EV segment, while global giants like Tesla continue to expand their presence in the region. 
XPeng's limited product portfolio and dependence on the Chinese market are notable weaknesses that could hinder its ability to compete effectively on a global scale [Xpeng SWOT Analysis: Free PPT Template and In-Depth Insights](https://www.strategypunk.com/xpeng-swot-analysis-free-ppt-template-and-in-depth-insights-free-file/).\n\nIn summary, XPeng Inc. holds a small market share in the global EV market and faces significant challenges in achieving profitability compared to its larger competitors. The company's focus on technology and vertical integration provides a competitive edge, but its limited product portfolio and market dependence remain areas of concern. Understanding these comparative dynamics is crucial for investors evaluating XPeng's long-term growth potential and competitive positioning.\n\n## Value Investor Analysis\n\n### Intrinsic Value\n\nThe intrinsic value of XPeng Inc. (XPEV) is a critical metric for value investors, providing insight into the company's true worth based on its future cash flows and growth potential. According to Investor's Craft, the intrinsic value of XPeng is estimated at $41.93, suggesting a significant upside potential of 92% from its previous close of $21.80 [Intrinsic Value of XPeng Inc. (XPEV) is $41.93 - Investor's Craft](https://investorscraft.com/intrinsic-value/xpev). This valuation is based on fiscal year data as of 2023 and quarterly data as of December 31, 2023, using a discounted cash flow (DCF) model.\n\nAnother source, ValueInvesting.io, provides a higher intrinsic value estimate of $206.70 for XPeng, based on the Discounted Cash Flows (Growth Exit 5Y) model [XPEV Intrinsic Value | Is Xpeng Inc (XPEV) undervalued?](https://valueinvesting.io/XPEV/valuation/intrinsic-value). 
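Both estimates rest on discounted-cash-flow arithmetic: project free cash flows forward over an explicit horizon, attach a terminal value, and discount everything back at the required rate of return. The sketch below uses entirely hypothetical inputs (not XPeng's actual cash flows or any source's assumptions) to show how sensitive the result is to the growth and discount-rate choices:

```python
# Toy two-stage DCF: five explicit years of free cash flow plus a
# Gordon-growth terminal value, all discounted to the present.
# Every input here is hypothetical, not an actual XPeng projection.
def dcf_value(fcf0, growth, discount, terminal_growth, years=5):
    value = 0.0
    fcf = fcf0
    for year in range(1, years + 1):
        fcf *= 1 + growth                      # grow the cash flow
        value += fcf / (1 + discount) ** year  # discount to present
    # Terminal value at the end of the explicit horizon
    terminal = fcf * (1 + terminal_growth) / (discount - terminal_growth)
    return value + terminal / (1 + discount) ** years

# Two plausible-looking assumption sets, very different answers:
conservative = dcf_value(fcf0=1.0, growth=0.10, discount=0.12, terminal_growth=0.02)
aggressive = dcf_value(fcf0=1.0, growth=0.35, discount=0.10, terminal_growth=0.03)
```

Small changes in the growth or discount-rate inputs compound over the projection horizon and dominate the terminal value, which is why independently produced estimates can diverge so widely.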
This wide range in intrinsic value estimates reflects the variability and uncertainty inherent in projecting future cash flows, particularly for a company operating in the dynamic EV market.\n\nThe intrinsic value calculations are further supported by a DCF analysis conducted by Yahoo Finance, which projects XPeng's future cash flows up to 2034. The analysis suggests a terminal value of CN¥410 billion, discounted back to a present value of CN¥111 billion [Estimating The Intrinsic Value Of XPeng Inc. (NYSE:XPEV)](https://finance.yahoo.com/news/estimating-intrinsic-value-xpeng-inc-174757329.html). This comprehensive approach to valuation highlights the potential for significant growth, but also underscores the reliance on projections that may not be fully validated.\n\n### Growth Potential\n\nXPeng's growth potential is closely tied to its ability to expand its market share and increase its profitability. The company's projected revenue growth rate is expected to be robust, with forecasts indicating a compound annual growth rate (CAGR) that could drive significant increases in future cash flows [Intrinsic Value of XPeng Inc. (XPEV) is $41.93 - Investor's Craft](https://investorscraft.com/intrinsic-value/xpev). \n\nThe company's expansion into European markets and the successful launch of the XNGP driving technology are key drivers of its growth potential. These strategic initiatives are expected to contribute to XPeng's ability to capture a larger share of the global EV market, which is projected to grow significantly in the coming years [XPeng Inc. (XPEV) Latest Stock News & Headlines - Yahoo Finance](https://finance.yahoo.com/quote/XPEV/news/).\n\nHowever, XPeng's growth potential is tempered by the challenges it faces in achieving profitability and managing its cash flow. 
The company's negative profit margins and high cash burn rate suggest that while revenue growth is strong, the path to sustainable profitability remains uncertain [Xpeng Inc Annual Gross Margin Trends, Business Profitability](https://csimarket.com/stocks/singleProfitabilityRatiosy.php?code=XPEV&gro).\n\n### Risk Factors\n\nInvesting in XPeng Inc. comes with several risk factors that value investors must consider. The company's high cash burn rate is a significant concern, as it indicates a reliance on external financing to sustain operations. As of the latest data, XPeng's cash flow from operating activities has been consistently negative, with a reported cash outflow of -$0.83 billion for the trailing twelve months [XPeng Net Profit Margin 2020-2024 | XPEV - Macrotrends](https://macrotrends.net/stocks/charts/XPEV/xpeng/net-profit-margin).\n\nRevenue volatility is another risk factor, as XPeng's growth has been accompanied by fluctuations in revenue that could impact its ability to achieve stable cash flows. The company's dependence on the Chinese market further exposes it to geopolitical risks, particularly related to international trade tensions and regulatory changes [Breaking Down XPeng Inc. (XPEV): Key Insights for Investors – DCF Modeling](https://dcfmodeling.com/blogs/health/xpev-financial-health).\n\nAdditionally, XPeng's debt exposure is a concern, with total liabilities increasing by 11.32% in Q4 2024 to $7.05 billion [XPeng Inc. Balance Sheet – NYSE:XPEV - TradingView](https://www.tradingview.com/symbols/NYSE-XPEV/financials-balance-sheet/). This rise in liabilities, coupled with a high debt-to-equity ratio, suggests that XPeng may need to manage its debt levels carefully to maintain financial stability.\n\nIn summary, the intrinsic value calculations for XPeng Inc. suggest significant growth potential, with estimates ranging from $41.93 to $206.70. 
However, these calculations are based on projections that may not be fully validated, introducing uncertainty into value assessments. The company's growth potential is promising, driven by strategic expansions and technological innovations, but is tempered by challenges related to profitability and cash flow management. The identified risk factors, including high cash burn rate, revenue volatility, debt exposure, and geopolitical risks, highlight the need for careful consideration by value investors evaluating XPeng's investment potential.\n\n## Investment Thesis\n\n### SWOT Analysis\n\n#### Strengths\n\nXPeng Inc. boasts several strengths that position it as a competitive player in the electric vehicle (EV) market. One of its primary strengths lies in its cutting-edge technology, which includes advanced driver-assistance systems (ADAS), intelligent operating systems, and over-the-air updates. These features appeal to tech-savvy consumers seeking innovative and connected transportation solutions [Xpeng SWOT Analysis: Free PPT Template and In-Depth Insights](https://www.strategypunk.com/xpeng-swot-analysis-free-ppt-template-and-in-depth-insights-free-file/). \n\nAnother significant strength is XPeng's vertical integration strategy, which allows the company to develop core vehicle systems, including powertrain and the electrical/electronic architecture, in-house. This approach enables XPeng to optimize the user experience and differentiate its offerings from competitors, enhancing its market position [Xpeng SWOT Analysis: Free PPT Template and In-Depth Insights](https://www.strategypunk.com/xpeng-swot-analysis-free-ppt-template-and-in-depth-insights-free-file/).\n\n#### Weaknesses\n\nDespite its strengths, XPeng faces several weaknesses that could impact its growth and profitability. The company's limited product portfolio is a notable weakness, as it currently focuses on a narrow range of mid- to high-end vehicles. 
This limitation could hinder XPeng's ability to capture a broader market segment and compete effectively with companies offering more diverse product lines [Xpeng SWOT Analysis: Free PPT Template and In-Depth Insights](https://www.strategypunk.com/xpeng-swot-analysis-free-ppt-template-and-in-depth-insights-free-file/).\n\nAdditionally, XPeng's dependence on the Chinese market is a significant weakness. While the company has begun expanding into European markets, its reliance on domestic sales exposes it to regulatory and economic risks within China, which could impact its overall performance [Xpeng SWOT Analysis: Free PPT Template and In-Depth Insights](https://www.strategypunk.com/xpeng-swot-analysis-free-ppt-template-and-in-depth-insights-free-file/).\n\n#### Opportunities\n\nXPeng has several opportunities to leverage for future growth. The global EV market is projected to experience significant expansion in the coming years, driven by increasing consumer demand for environmentally friendly vehicles and supportive government policies. XPeng's strategic expansion into European markets and its focus on technological innovation position it well to capitalize on this growth [Xpeng SWOT Analysis: Free PPT Template and In-Depth Insights](https://www.strategypunk.com/xpeng-swot-analysis-free-ppt-template-and-in-depth-insights-free-file/).\n\nMoreover, the company's development of advanced driving technologies, such as the XNGP driving system, presents an opportunity to differentiate itself further in the market. By continuing to invest in research and development, XPeng can enhance its competitive edge and attract a larger customer base [XPeng Inc. (XPEV) Latest Stock News & Headlines - Yahoo Finance](https://finance.yahoo.com/quote/XPEV/news/).\n\n#### Threats\n\nXPeng faces several threats that could impact its long-term success. The intense competition within the EV market, both domestically and internationally, poses a significant threat. 
Competitors like NIO, Li Auto, and Tesla are also vying for market share, particularly in the premium segment, which could limit XPeng's growth potential [Xpeng SWOT Analysis: Free PPT Template and In-Depth Insights](https://www.strategypunk.com/xpeng-swot-analysis-free-ppt-template-and-in-depth-insights-free-file/).\n\nRegulatory risks are another major threat, as changes in government policies and subsidies related to EVs could impact XPeng's operations and profitability. Additionally, geopolitical tensions, particularly related to international trade, could disrupt XPeng's supply chain and market access, further complicating its expansion efforts [Breaking Down XPeng Inc. (XPEV): Key Insights for Investors – DCF Modeling](https://dcfmodeling.com/blogs/health/xpev-financial-health).\n\n### Recommendations for Different Investor Types\n\n#### Growth Investors\n\nFor growth investors, XPeng Inc. presents an attractive opportunity due to its significant growth in vehicle deliveries and revenue. The company's projected 300% year-over-year increase in Q1 2025 vehicle deliveries and its expansion into European markets indicate strong growth potential [XPeng Inc. (XPEV) Latest Stock News & Headlines - Yahoo Finance](https://finance.yahoo.com/quote/XPEV/news/). Growth investors should, however, be mindful of the company's challenges with profitability and cash flow, which could impact its long-term sustainability.\n\n#### Value Investors\n\nValue investors may find XPeng's intrinsic value calculations appealing, with estimates ranging from $41.93 to $206.70, suggesting significant upside potential [Intrinsic Value of XPeng Inc. (XPEV) is $41.93 - Investor's Craft](https://investorscraft.com/intrinsic-value/xpev). However, these calculations are based on projections that may not be fully validated, introducing uncertainty. 
Value investors should carefully consider the company's high cash burn rate and negative profit margins, as these factors could affect the realization of its intrinsic value.\n\n#### Risk-Averse Investors\n\nRisk-averse investors may find XPeng's stock less appealing due to its high cash burn rate, revenue volatility, and exposure to geopolitical risks. The company's negative cash flow from operating activities and increasing liabilities highlight the need for careful risk management [XPeng Net Profit Margin 2020-2024 | XPEV - Macrotrends](https://macrotrends.net/stocks/charts/XPEV/xpeng/net-profit-margin). Risk-averse investors should monitor XPeng's progress in achieving profitability and reducing its reliance on external financing before considering an investment.\n\n#### Long-Term Investors\n\nLong-term investors may find XPeng's focus on technological innovation and its strategic expansion into new markets appealing. The company's development of advanced driving technologies and its potential to capture a larger share of the growing global EV market align with a long-term investment strategy [Xpeng SWOT Analysis: Free PPT Template and In-Depth Insights](https://www.strategypunk.com/xpeng-swot-analysis-free-ppt-template-and-in-depth-insights-free-file/). Long-term investors should, however, remain vigilant about the company's financial health and the competitive landscape, as these factors could influence XPeng's long-term performance.\n\nIn summary, XPeng Inc.'s investment thesis is characterized by its strengths in technology and vertical integration, coupled with opportunities in the growing global EV market. However, the company's weaknesses, including a limited product portfolio and dependence on the Chinese market, along with threats from intense competition and regulatory risks, must be carefully considered. 
Different investor types can find XPeng appealing based on their investment strategies, but all should remain cognizant of the company's financial challenges and market dynamics.\n\n## Scope and Limitations\n\n### Identified Limitations\n\nThe research on XPeng Inc. (XPEV) stock has been conducted primarily using web sources such as Yahoo Finance, Zacks, Reuters, and TipRanks. While these sources are reputable financial data providers, the lack of academic sources and the absence of results from X/Twitter queries represent significant limitations in the depth and diversity of the information gathered. The reliance on web sources may limit the academic rigor and peer-reviewed validation of the data, potentially impacting the robustness of the analysis [Gap Analysis Summary].\n\nTo address this limitation, future research could incorporate academic papers to provide more in-depth analysis and validation of financial metrics and market sentiment. Additionally, utilizing X/Twitter could capture real-time market sentiment and the immediate impact of news on XPeng's stock performance, enhancing the comprehensiveness of the analysis [Gap Analysis Summary].\n\n### Missing Perspectives or Data\n\nThe research lacks data on the recent news impact on XPeng's stock, which is crucial for understanding short-term market movements. This gap in data is due to the absence of results from X/Twitter queries, which are essential for capturing real-time sentiment shifts influenced by news and social media discussions [Gap Analysis Summary]. \n\nMoreover, the technical analysis section has incomplete data on key indicators, such as Stochastic %K, Commodity Channel Index (CCI), and Average Directional Index (ADX). This incomplete data limits the depth and reliability of the technical analysis, potentially affecting the accuracy of trading decisions [Technical Analysis of XPeng Inc. 
(NYSE:XPEV) - TradingView].\n\nTo overcome these limitations, a targeted search for recent news articles and their impact on XPeng's stock should be conducted. Additionally, ensuring complete data sets for technical indicators would enhance the reliability of the technical analysis, providing a more comprehensive view of XPeng's stock trends [Gap Analysis Summary].\n\n### Areas Needing Deeper Investigation\n\nThe intrinsic value calculations and growth potential assessments for XPeng are based on projections and assumptions that may not be fully substantiated. The wide range of intrinsic value estimates, from $41.93 to $206.70, reflects the variability and uncertainty inherent in these projections [Intrinsic Value of XPeng Inc. (XPEV) is $41.93 - Investor's Craft], [XPEV Intrinsic Value | Is Xpeng Inc (XPEV) undervalued?]. \n\nSimilarly, the SWOT analysis, while comprehensive, could benefit from more specific data points to support each element. The qualitative assessments of XPeng's strengths, weaknesses, opportunities, and threats would be more robust with quantitative data to validate the claims [Xpeng SWOT Analysis: Free PPT Template and In-Depth Insights].\n\nTo address these areas, future research should validate intrinsic value calculations with multiple models and historical data, ensuring a more reliable assessment of XPeng's true worth. Enhancing the SWOT analysis with quantitative data points would also provide a more robust framework for understanding XPeng's strategic positioning [Gap Analysis Summary].\n\n### Potential Biases or Conflicts\n\nThe analyst ratings and sentiment indicators for XPeng may be influenced by the biases of the analysts or the firms they represent. These biases could skew the consensus ratings and price targets, potentially impacting investor decisions [XPeng, Inc. ADR (XPEV) Stock Forecast & Price Target - TipRanks]. 
Additionally, the financial data sources used in the research may have conflicts of interest if they are affiliated with investment firms that have stakes in XPeng, which could affect the objectivity of the data provided [Gap Analysis Summary].\n\nTo mitigate these potential biases and conflicts, cross-referencing analyst ratings with multiple sources could help identify any discrepancies and potential biases. Furthermore, disclosing any affiliations or conflicts of interest of the data providers would enhance the transparency and credibility of the research [Gap Analysis Summary].\n\nIn summary, the scope and limitations of this research on XPeng Inc. highlight the need for a more diverse set of data sources, including academic papers and real-time social media data, to enhance the depth and reliability of the analysis. Addressing the gaps in recent news impact and technical indicators, validating intrinsic value calculations, and mitigating potential biases are crucial steps to ensure a comprehensive and robust assessment of XPeng's stock.\n\n## Conclusion\n\nThis comprehensive analysis of XPeng Inc. (XPEV) stock has provided a detailed examination of the company's performance across various dimensions, including company overview, financial performance, market sentiment, technical analysis, competitive positioning, intrinsic value, and investment thesis. XPeng's focus on mid- to high-end smart electric vehicles in China's passenger vehicle market, coupled with its significant growth in vehicle deliveries and revenue, positions it as a notable player in the EV industry. However, the company's financial challenges, characterized by negative profit margins and a high cash burn rate, underscore the complexities and risks associated with investing in XPeng.\n\nThe market sentiment for XPeng's stock is mixed, with a consensus 'Hold' rating from analysts and significant volatility reflecting both optimism and caution. 
The technical analysis suggests a moderately bearish trend with buying pressure, though incomplete data on key indicators limits the depth of this analysis. Comparatively, XPeng holds a small market share in the global EV market and faces challenges in achieving profitability compared to larger competitors like Toyota and General Motors. \n\nIntrinsic value calculations indicate significant growth potential, with estimates ranging from $41.93 to $206.70. However, these projections introduce uncertainty due to their reliance on assumptions that may not be fully validated. The SWOT analysis highlights XPeng's strengths in technology and vertical integration, but also its weaknesses, such as a limited product portfolio and dependence on the Chinese market. Opportunities in the global EV market growth are promising, but threats from intense competition and regulatory risks remain significant.\n\nThe remaining uncertainties identified in this research include the impact of recent news on XPeng's stock performance, which is not fully captured due to the lack of X/Twitter data. Additionally, the incomplete technical analysis due to missing data on key indicators could affect trading decisions. The intrinsic value calculations and SWOT analysis also rely on projections and qualitative assessments that could benefit from further validation and quantitative support.\n\nIn conclusion, XPeng Inc. presents a complex investment opportunity with significant growth potential, but also notable risks and uncertainties. Investors should carefully consider the company's financial health, market sentiment, and competitive positioning when evaluating XPeng as an investment. Future research should aim to address the identified limitations by incorporating academic sources, real-time social media data, and more comprehensive technical indicators to enhance the robustness of the analysis."
  },
  {
    "path": "super_agents/deep_research/reason_graph/__init__.py",
    "content": ""
  },
  {
    "path": "super_agents/deep_research/reason_graph/graph.py",
    "content": "from typing import Literal, Optional, Dict, Any\nfrom langgraph.graph import StateGraph, END\nfrom super_agents.deep_research.reason_graph.state import ResearchState\nfrom super_agents.deep_research.reason_graph.nodes import (\n    plan_research,\n    prepare_steps,\n    execute_search,\n    perform_analysis,\n    analyze_gaps,\n    execute_gap_search,\n    synthesize_final_report,\n    finalize_basic_research,\n    generate_final_markdown_report \n    # These are the functions that will be used as nodes in the graph\n)\n# --- Conditional Edge Functions ---\n\ndef should_continue_search(state: ResearchState) -> Literal[\"execute_search\", \"perform_analysis\"]:\n    \"\"\"Decides whether to continue searching or move to analysis.\"\"\"\n    if state['current_search_step_index'] < len(state['search_steps_planned']):\n        return \"execute_search\"\n    else:\n        # Check if analysis steps exist before proceeding\n        if state['analysis_steps_planned']:\n             return \"perform_analysis\"\n        else:\n             # If no analysis steps, go directly to gap analysis\n             return \"analyze_gaps\"\n\n\ndef should_continue_analysis(state: ResearchState) -> Literal[\"perform_analysis\", \"analyze_gaps\"]:\n    \"\"\"Decides whether to continue analysis or move to gap analysis.\"\"\"\n    if state['current_analysis_step_index'] < len(state['analysis_steps_planned']):\n        return \"perform_analysis\"\n    else:\n        return \"analyze_gaps\"\n\ndef decide_gap_followup(state: ResearchState) -> Literal[\"execute_gap_search\", \"synthesize_final_report\", \"finalize_basic\"]:\n    \"\"\"Decides whether to perform gap searches, synthesize, or end.\"\"\"\n    depth = state['depth']\n    gap_analysis = state.get('gap_analysis')\n    additional_queries = state.get('additional_queries_planned', [])\n    current_gap_index = state.get('current_gap_search_index', 0)\n\n    if depth == 'advanced' and gap_analysis and 
additional_queries:\n        if current_gap_index < len(additional_queries):\n             return \"execute_gap_search\" \n        else:\n             # Finished gap searches, proceed to final synthesis\n             return \"synthesize_final_report\" \n    else:\n        # Basic depth, or advanced with no gaps/failed gap analysis/no queries from gaps\n        return \"finalize_basic_research\" # Use correct function name\n\n# --- Build Graph Function ---\n\ndef build_research_graph(for_web: bool = False) -> StateGraph:\n    \"\"\"Builds and returns a research workflow graph.\n    \n    Args:\n        for_web: If True, configures the graph for web streaming with additional settings.\n        \n    Returns:\n        A configured StateGraph instance ready to be compiled.\n    \"\"\"\n    workflow = StateGraph(ResearchState)\n    \n    # Add Nodes - same for both CLI and web versions\n    workflow.add_node(\"plan_research\", plan_research)\n    workflow.add_node(\"prepare_steps\", prepare_steps)\n    workflow.add_node(\"execute_search\", execute_search)\n    workflow.add_node(\"perform_analysis\", perform_analysis)\n    workflow.add_node(\"analyze_gaps\", analyze_gaps)\n    workflow.add_node(\"execute_gap_search\", execute_gap_search)\n    workflow.add_node(\"synthesize_final_report\", synthesize_final_report)\n    workflow.add_node(\"finalize_basic_research\", finalize_basic_research)\n    workflow.add_node(\"generate_final_markdown_report\", generate_final_markdown_report)\n    \n    # Define Edges - same for both CLI and web versions\n    workflow.set_entry_point(\"plan_research\")\n    workflow.add_edge(\"plan_research\", \"prepare_steps\")\n    workflow.add_edge(\"prepare_steps\", \"execute_search\") # Start search loop\n    \n    # Search Loop\n    workflow.add_conditional_edges(\n        \"execute_search\",\n        should_continue_search,\n        { \"execute_search\": \"execute_search\", \"perform_analysis\": \"perform_analysis\", \"analyze_gaps\": 
\"analyze_gaps\" }\n    )\n    \n    # Analysis Loop\n    workflow.add_conditional_edges(\n        \"perform_analysis\",\n        should_continue_analysis,\n        { \"perform_analysis\": \"perform_analysis\", \"analyze_gaps\": \"analyze_gaps\" }\n    )\n    \n    # Gap Analysis Follow-up Logic\n    workflow.add_conditional_edges(\n        \"analyze_gaps\",\n        decide_gap_followup,\n        { \"execute_gap_search\": \"execute_gap_search\", \"synthesize_final_report\": \"synthesize_final_report\", \"finalize_basic_research\": \"finalize_basic_research\" }\n    )\n    \n    # Gap Search Loop & Synthesis\n    workflow.add_conditional_edges(\n        \"execute_gap_search\",\n        decide_gap_followup, \n        { \"execute_gap_search\": \"execute_gap_search\", \"synthesize_final_report\": \"synthesize_final_report\", \"finalize_basic_research\": \"finalize_basic_research\" }\n    )\n    \n    # --- Adjust Final Edges ---\n    # If synthesis succeeds, go to report generation\n    workflow.add_edge(\"synthesize_final_report\", \"generate_final_markdown_report\") \n    # If report generation succeeds, END\n    workflow.add_edge(\"generate_final_markdown_report\", END) \n    # If flow goes to basic finalizer, END\n    workflow.add_edge(\"finalize_basic_research\", END)\n    \n    # Web-specific configuration\n    if for_web:\n        # For web, we might want to configure additional settings\n        # such as checkpoint frequency, stream mode, etc.\n        pass\n        \n    return workflow\n\n# --- Build the original workflow for main.py ---\nworkflow = build_research_graph(for_web=False)\n\n# --- Build the web workflow for web interface ---\nweb_workflow = build_research_graph(for_web=True)\n\n# Compile both graphs\napp = workflow.compile()\nweb_app = web_workflow.compile()\n\n# Function to get the appropriate app based on context\ndef get_app(for_web: bool = False) -> Any:\n    \"\"\"Returns the appropriate compiled graph based on the context.\n    \n    
Args:\n        for_web: If True, returns the web-optimized graph.\n        \n    Returns:\n        The compiled graph application.\n    \"\"\"\n    return web_app if for_web else app"
  },
  {
    "path": "super_agents/deep_research/reason_graph/nodes.py",
    "content": "import asyncio\nimport json\nimport time\nfrom datetime import datetime\nfrom typing import Dict, Any, List, Literal\nfrom langchain_core.messages import AIMessage, HumanMessage, ToolMessage\n# --- Internal Imports ---\nfrom super_agents.deep_research.reason_graph.state import ResearchState # Relative import\nfrom super_agents.deep_research.reason_graph.schemas import ( # Relative import\n    SearchQuery, \n    RequiredAnalysis, \n    AnalysisResult, \n    GapAnalysisResult, \n    FinalSynthesisResult, \n    SearchStepResult, \n    SearchResultItem,\n    StreamUpdate,\n    StepInfo,\n    ResearchPlan\n)\nfrom super_agents.deep_research.reason_graph.tools import ( # Relative import\n    llm, \n    llm_creative, \n    generate_structured_output, \n    perform_web_search, \n    perform_academic_search, \n    perform_x_search, \n    add_stream_update\n)\nfrom super_agents.deep_research.reason_graph.prompt import FINAL_REPORT_SYSTEM_PROMPT_TEMPLATE \n# --- Node Functions ---\n\nasync def plan_research(state: ResearchState) -> Dict[str, Any]:\n    \"\"\"Generates the initial research plan using an LLM.\"\"\"\n    topic = state['topic']\n    updates = add_stream_update(state, {\n        'id': 'research-plan-initial',\n        'type': 'plan',\n        'status': 'running',\n        'title': 'Research Plan',\n        'message': 'Creating research plan...',\n        'overwrite': True\n    })\n\n    prompt = f\"\"\"Create a focused research plan for the topic: \"{topic}\". 
\n\nToday's date and day of the week: {datetime.now().strftime('%A, %B %d, %Y')}\n\nKeep the plan concise but comprehensive, with:\n- 4-12 targeted search queries (each can use web, academic, x (Twitter), or all sources)\n- 2-8 key analyses to perform\n- Prioritize the most important aspects to investigate (priority 2-4 for searches, 1-5 for analyses)\n\nAvailable sources:\n- \"web\": General web search (Use Tavily)\n- \"academic\": Academic papers and research (Use Exa)\n- \"x\": X/Twitter posts and discussions (Use Exa with domain filter)\n- \"all\": Use all source types (web, academic, and X/Twitter)\n\nPriority rules for search_queries:\n- Use only whole numbers between 2 and 4. Lower number means higher priority (e.g., 2 is highest).\n\nImportance rules for required_analyses:\n- Use only whole numbers between 1 and 5. Higher number means higher importance.\n\nConsider different angles and potential controversies, but maintain focus on the core aspects.\nEnsure the total number of steps (searches + analyses) does not exceed 20.\"\"\"\n\n    research_plan = await asyncio.get_event_loop().run_in_executor(\n        None, generate_structured_output, llm, ResearchPlan, prompt\n    )\n    # research_plan = generate_structured_output(llm, ResearchPlan, prompt) # If generate_structured_output is async\n\n    return {\"research_plan\": research_plan, \"stream_updates\": updates}\n\n\ndef prepare_steps(state: ResearchState) -> Dict[str, Any]:\n    \"\"\"Processes the plan to create lists of search and analysis steps with IDs.\"\"\"\n    plan = state['research_plan']\n    if not plan:\n        raise ValueError(\"Research plan is missing.\")\n\n    search_steps_planned = []\n    analysis_steps_planned = []\n    search_counter = 0\n    analysis_counter = 0\n\n    # Generate search steps, expanding 'all'\n    for i, query in enumerate(plan.search_queries):\n        if query.source == 'all':\n            search_steps_planned.append(StepInfo(id=f\"search-web-{i}\", type='web', 
details=query.dict()))\n            search_steps_planned.append(StepInfo(id=f\"search-academic-{i}\", type='academic', details=query.dict()))\n            search_steps_planned.append(StepInfo(id=f\"search-x-{i}\", type='x', details=query.dict()))\n            search_counter += 3\n        elif query.source == 'x':\n            search_steps_planned.append(StepInfo(id=f\"search-x-{i}\", type='x', details=query.dict()))\n            search_counter += 1\n        elif query.source == 'academic':\n            search_steps_planned.append(StepInfo(id=f\"search-academic-{i}\", type='academic', details=query.dict()))\n            search_counter += 1\n        else: # 'web'\n            search_steps_planned.append(StepInfo(id=f\"search-web-{i}\", type='web', details=query.dict()))\n            search_counter += 1\n\n    # Generate analysis steps\n    for i, analysis in enumerate(plan.required_analyses):\n        analysis_steps_planned.append(StepInfo(id=f\"analysis-{i}\", type='analysis', details=analysis.dict()))\n        analysis_counter += 1\n        \n    total_steps = search_counter + analysis_counter\n    \n    # Send plan completed update\n    updates = add_stream_update(state, {\n        'id': 'research-plan',\n        'type': 'plan',\n        'status': 'completed',\n        'title': 'Research Plan',\n        'plan': plan, # Send the plan object itself\n        'totalSteps': total_steps,\n        'message': 'Research plan created',\n        'overwrite': True\n    })\n\n    return {\n        \"search_steps_planned\": search_steps_planned,\n        \"analysis_steps_planned\": analysis_steps_planned,\n        \"current_search_step_index\": 0,\n        \"current_analysis_step_index\": 0,\n        \"total_steps\": total_steps,\n        \"completed_steps_count\": 0, # Initialize completed steps\n        \"stream_updates\": updates\n    }\n\n\nasync def execute_search(state: ResearchState) -> Dict[str, Any]:\n    \"\"\"Executes a single search step based on the current 
index.\"\"\"\n    idx = state['current_search_step_index']\n    step = state['search_steps_planned'][idx]\n    query_obj = SearchQuery(**step.details)\n    depth = state['depth']\n    \n    step_type = step.type\n    step_id = step.id\n    query_str = query_obj.query\n    \n    # 计算当前完成的步骤数和总步骤数，用于进度显示\n    completed_steps = state.get('completed_steps_count', 0)\n    total_steps = state.get('total_steps', 0)\n    \n    # Send running update with progress information\n    running_updates = add_stream_update(state, {\n        'id': step_id,\n        'type': step_type,\n        'status': 'running',\n        'title': f\"Searching {step_type} for '{query_str}'\",\n        'query': query_str,\n        'message': f\"Searching {query_obj.source} sources...\",\n        'completedSteps': completed_steps,\n        'totalSteps': total_steps,\n    })\n\n    results = []\n    search_step_result = None\n\n    try:\n        if step_type == 'web':\n            results = await perform_web_search(query_str, depth, query_obj.priority)\n        elif step_type == 'academic':\n            results = await perform_academic_search(query_str, query_obj.priority)\n        elif step_type == 'x':\n            # Pass the full query object to x_search if it needs more context (like priority)\n            results = await perform_x_search(query_obj)\n            \n        search_step_result = SearchStepResult(type=step_type, query=query_obj, results=results)\n        \n        # Send completed update\n        completed_updates = add_stream_update(state, {\n            'id': step_id,\n            'type': step_type,\n            'status': 'completed',\n            'title': f\"Search complete for '{query_str}'\",\n            'query': query_str,\n            'results': results, # Send results in the update\n            'message': f\"Found {len(results)} results\",\n            'overwrite': True\n        })\n        \n        all_updates = running_updates + completed_updates\n        \n        return 
{\n            \"search_results\": [search_step_result] if search_step_result else [], # Use operator.add\n            \"current_search_step_index\": idx + 1,\n            \"completed_steps_count\": state.get('completed_steps_count', 0) + 1,\n            \"stream_updates\": all_updates\n        }\n    except Exception as e:\n        print(f\"Error executing search step {step_id}: {e}\")\n         # Send error update\n        error_updates = add_stream_update(state, {\n            'id': step_id,\n            'type': step_type,\n            'status': 'completed', # Mark as completed even on error to proceed\n            'title': f\"Search failed for '{query_str}'\",\n            'query': query_str,\n            'message': f\"Error during search: {str(e)}\",\n            'overwrite': True\n        })\n        all_updates = running_updates + error_updates\n        return {\n            \"search_results\": [], \n            \"current_search_step_index\": idx + 1, # Move to next step even on error\n            \"completed_steps_count\": state.get('completed_steps_count', 0) + 1, # Count error step as 'completed' for progress\n            \"stream_updates\": all_updates\n        }\n\nasync def perform_analysis(state: ResearchState) -> Dict[str, Any]:\n    \"\"\"Performs a single analysis step based on the current index.\"\"\"\n    idx = state['current_analysis_step_index']\n    step = state['analysis_steps_planned'][idx]\n    analysis_obj = RequiredAnalysis(**step.details)\n    all_search_results = state['search_results']\n\n    step_id = step.id\n    analysis_type = analysis_obj.type\n    analysis_desc = analysis_obj.description\n\n    # 计算当前完成的步骤数和总步骤数，用于进度显示\n    completed_steps = state.get('completed_steps_count', 0)\n    total_steps = state.get('total_steps', 0)\n    \n    # Send running update with progress information\n    running_updates = add_stream_update(state, {\n        'id': step_id,\n        'type': 'analysis',\n        'status': 'running',\n        
'title': f\"Analyzing {analysis_type}\",\n        'analysisType': analysis_type,\n        'message': f\"Analyzing {analysis_type}...\",\n        'completedSteps': completed_steps,\n        'totalSteps': total_steps,\n    })\n\n    prompt = f\"\"\"Perform a \"{analysis_type}\" analysis. Analysis description: \"{analysis_desc}\"\nConsider all sources and their reliability based on the provided search results.\n\nSearch results JSON: \n{json.dumps([r.dict() for r in all_search_results], indent=2)}\n\nGenerate findings (insight, evidence, confidence), implications, and limitations based *only* on the provided search results.\"\"\"\n    \n    try:\n        # Use the 'creative' LLM instance if needed\n        analysis_result = await asyncio.get_event_loop().run_in_executor(\n             None, generate_structured_output, llm_creative, AnalysisResult, prompt\n        )\n        # analysis_result = generate_structured_output(llm_creative, AnalysisResult, prompt) # If generate_structured_output is async\n\n        # 更新完成步骤数\n        completed_steps = state.get('completed_steps_count', 0) + 1\n        \n        # Send completed update with progress information\n        completed_updates = add_stream_update(state, {\n            'id': step_id,\n            'type': 'analysis',\n            'status': 'completed',\n            'title': f\"Analysis of {analysis_type} complete\",\n            'analysisType': analysis_type,\n            # Simplify findings for streaming if needed, or send full object\n            'findings': [f.dict() for f in analysis_result.findings], \n            'message': f\"Analysis complete\",\n            'overwrite': True,\n            'completedSteps': completed_steps,\n            'totalSteps': total_steps\n        })\n        \n        all_updates = running_updates + completed_updates\n\n        # NOTE: The original code streams the result but doesn't seem to store\n        # the *output* of individual analyses for later LLM steps, only the *plan*.\n   
     # We will follow that here. If aggregation is needed, modify the state.\n        return {\n            \"current_analysis_step_index\": idx + 1,\n            \"completed_steps_count\": state.get('completed_steps_count', 0) + 1,\n            \"stream_updates\": all_updates\n        }\n    except Exception as e:\n        print(f\"Error performing analysis step {step_id}: {e}\")\n        # Send error update\n        error_updates = add_stream_update(state, {\n            'id': step_id,\n            'type': 'analysis',\n            'status': 'completed',\n            'title': f\"Analysis failed for {analysis_type}\",\n            'analysisType': analysis_type,\n            'message': f\"Error during analysis: {str(e)}\",\n            'overwrite': True\n        })\n        all_updates = running_updates + error_updates\n        return {\n            \"current_analysis_step_index\": idx + 1, # Move to next step\n            \"completed_steps_count\": state.get('completed_steps_count', 0) + 1,\n            \"stream_updates\": all_updates\n        }\n\n\nasync def analyze_gaps(state: ResearchState) -> Dict[str, Any]:\n    \"\"\"Analyzes limitations and knowledge gaps based on all search results.\"\"\"\n    all_search_results = state['search_results']\n    analysis_steps_info = state['analysis_steps_planned'] # Get info about what analyses were done\n\n    # Compute completed and total step counts for progress display\n    completed_steps = state.get('completed_steps_count', 0)\n    total_steps = state.get('total_steps', 0)\n    \n    # Send running update with progress information\n    running_updates = add_stream_update(state, {\n        'id': 'gap-analysis',\n        'type': 'analysis',\n        'status': 'running',\n        'title': 'Research Gaps and Limitations',\n        'analysisType': 'gaps',\n        'message': 'Analyzing research gaps and limitations...',\n        'completedSteps': completed_steps,\n        'totalSteps': total_steps,\n    })\n\n    # Prepare info about analyses performed for the 
prompt\n    analyses_performed_summary = [\n        {\"type\": step.details.get('type'), \"description\": step.details.get('description')} \n        for step in analysis_steps_info\n    ]\n\n    prompt = f\"\"\"Analyze the research results and identify limitations, knowledge gaps, and recommended follow-up actions.\nConsider:\n- Quality and reliability of sources evident in the results.\n- Missing perspectives or data based on the topic: \"{state['topic']}\".\n- Areas needing deeper investigation.\n- Potential biases or conflicts observed in the content.\n- Severity for limitations should be between 2 and 10.\n- Priority for follow-up actions should be between 2 and 10.\n\nWhen suggesting additional_queries for knowledge gaps, keep in mind they might be used to search:\n- Web sources (Tavily)\n- Academic papers (Exa)\n- X/Twitter (Exa)\nDesign queries likely to yield useful results across these diverse source types.\n\nResearch results JSON:\n{json.dumps([r.dict() for r in all_search_results], indent=2)}\n\nAnalyses performed during research (types and descriptions):\n{json.dumps(analyses_performed_summary, indent=2)}\n\"\"\"\n    try:\n        gap_analysis_result = await asyncio.get_event_loop().run_in_executor(\n             None, generate_structured_output, llm, GapAnalysisResult, prompt\n        )\n        # gap_analysis_result = generate_structured_output(llm, GapAnalysisResult, prompt) # If async\n        \n        # Calculate total steps including potential advanced steps for the update\n        base_total_steps = state['total_steps']\n        final_total_steps = base_total_steps + (2 if state['depth'] == 'advanced' else 1) # +1 for gap analysis, +1 for synthesis if advanced\n        \n        # Send completed update\n        completed_updates = add_stream_update(state, {\n            'id': 'gap-analysis',\n            'type': 'analysis',\n            'status': 'completed',\n            'title': 'Research Gaps and Limitations',\n            'analysisType': 
'gaps',\n            # Simplify findings for streaming\n            'findings': [\n                {\"insight\": l.description, \"evidence\": l.potential_solutions, \"confidence\": (10 - l.severity) / 8.0} \n                for l in gap_analysis_result.limitations\n            ], \n            'gaps': gap_analysis_result.knowledge_gaps,\n            'recommendations': gap_analysis_result.recommended_followup,\n            'message': f\"Identified {len(gap_analysis_result.limitations)} limitations and {len(gap_analysis_result.knowledge_gaps)} knowledge gaps\",\n            'overwrite': True,\n            'completedSteps': state.get('completed_steps_count', 0) + 1,\n            'totalSteps': final_total_steps\n        })\n        \n        all_updates = running_updates + completed_updates\n        \n        # Prepare additional queries if needed\n        additional_queries_planned = []\n        if state['depth'] == 'advanced' and gap_analysis_result.knowledge_gaps:\n             for gap_idx, gap in enumerate(gap_analysis_result.knowledge_gaps):\n                 for query_idx, query_str in enumerate(gap.additional_queries):\n                     # Strategy: 'all' for first query per gap, rotate others\n                     source: Literal['web', 'academic', 'x', 'all']\n                     if query_idx == 0:\n                         source = 'all'\n                     else:\n                         source_types: List[Literal['web', 'academic', 'x']] = ['web', 'academic', 'x']\n                         source = source_types[query_idx % len(source_types)]\n                         \n                     additional_queries_planned.append(SearchQuery(\n                         query=query_str,\n                         rationale=gap.reason,\n                         source=source,\n                         priority=3 # Default priority for gap fills\n                     ))\n\n        return {\n            \"gap_analysis\": gap_analysis_result,\n            
\"completed_steps_count\": state.get('completed_steps_count', 0) + 1,\n            \"stream_updates\": all_updates,\n            \"additional_queries_planned\": additional_queries_planned,\n            \"current_gap_search_index\": 0, # Initialize gap search index\n            \"total_steps\": final_total_steps # Update total steps in state\n        }\n    except Exception as e:\n        print(f\"Error during gap analysis: {e}\")\n        # Send error update\n        error_updates = add_stream_update(state, {\n            'id': 'gap-analysis',\n            'type': 'analysis',\n            'status': 'completed',\n            'title': 'Gap Analysis Failed',\n            'analysisType': 'gaps',\n            'message': f\"Error during gap analysis: {str(e)}\",\n            'overwrite': True,\n            'completedSteps': state.get('completed_steps_count', 0) + 1,\n             # Use base total steps + 1 for gap analysis step itself\n            'totalSteps': state['total_steps'] + 1 \n        })\n        all_updates = running_updates + error_updates\n        return {\n            \"gap_analysis\": None, # Indicate failure\n            \"completed_steps_count\": state.get('completed_steps_count', 0) + 1,\n            \"stream_updates\": all_updates,\n            \"additional_queries_planned\": [], # No additional searches on error\n            \"current_gap_search_index\": 0,\n             # Ensure total_steps reflects only the completed gap analysis attempt\n            \"total_steps\": state['total_steps'] + 1 \n        }\n\n\nasync def execute_gap_search(state: ResearchState) -> Dict[str, Any]:\n    \"\"\"Executes searches based on identified gaps (for advanced depth).\"\"\"\n    idx = state['current_gap_search_index']\n    if not state.get('additional_queries_planned') or idx >= len(state['additional_queries_planned']):\n        return {} # Should not happen if logic is correct, but safe return\n\n    query_obj = state['additional_queries_planned'][idx]\n    depth 
= state['depth'] # Should be 'advanced' here\n    \n    # Compute completed and total step counts for progress display\n    completed_steps = state.get('completed_steps_count', 0)\n    total_steps = state.get('total_steps', 0)\n    \n    all_new_results: List[SearchStepResult] = []\n    all_updates: List[StreamUpdate] = []\n    \n    search_tasks = []\n    step_ids = []\n\n    # If source is 'all', create tasks for web, academic, and x\n    # Otherwise, create a task for the specific source\n    \n    base_id = f\"gap-search-{state['current_search_step_index'] + idx}\" # Create a unique enough ID base\n\n    sources_to_search: List[Literal['web', 'academic', 'x']] = []\n    if query_obj.source == 'all':\n        sources_to_search = ['web', 'academic', 'x']\n    else:\n        sources_to_search = [query_obj.source]\n\n    search_counter = 0 # To create unique IDs if 'all'\n    for source_type in sources_to_search:\n        step_id = f\"{base_id}-{source_type}\" if query_obj.source == 'all' else base_id\n        step_ids.append(step_id)\n        \n        # Send running update with progress information\n        running_update = add_stream_update(state, {\n            'id': step_id,\n            'type': source_type,\n            'status': 'running',\n            'title': f\"Additional {source_type} search for '{query_obj.query}'\",\n            'query': query_obj.query,\n            'message': f\"Searching {source_type} to fill gap: {query_obj.rationale}\",\n            'completedSteps': completed_steps,\n            'totalSteps': total_steps,\n        })\n        all_updates.extend(running_update)\n        \n        # Create async task\n        if source_type == 'web':\n            search_tasks.append(perform_web_search(query_obj.query, depth, query_obj.priority))\n        elif source_type == 'academic':\n            search_tasks.append(perform_academic_search(query_obj.query, query_obj.priority))\n        elif source_type == 'x':\n            search_tasks.append(perform_x_search(query_obj)) # Pass full 
object\n            \n        search_counter +=1\n\n    # Execute searches concurrently\n    try:\n        search_outputs: List[List[SearchResultItem]] = await asyncio.gather(*search_tasks)\n        \n        # Process results and send completed updates\n        for i, results in enumerate(search_outputs):\n            source_type = sources_to_search[i]\n            step_id = step_ids[i]\n            \n            # Create a query object specific to this source type for the result log\n            specific_query = SearchQuery(\n                query=query_obj.query, \n                rationale=query_obj.rationale, \n                source=source_type, \n                priority=query_obj.priority\n            )\n            step_result = SearchStepResult(type=source_type, query=specific_query, results=results)\n            all_new_results.append(step_result)\n            \n            completed_update = add_stream_update(state, {\n                'id': step_id,\n                'type': source_type,\n                'status': 'completed',\n                'title': f\"Additional {source_type} search complete for '{query_obj.query}'\",\n                'query': query_obj.query,\n                'results': results,\n                'message': f\"Found {len(results)} results\",\n                'overwrite': True # Overwrite the running status\n            })\n            all_updates.extend(completed_update)\n\n    except Exception as e:\n         print(f\"Error during gap search for query '{query_obj.query}': {e}\")\n         # Send error updates for all attempted steps in this batch\n         for i, source_type in enumerate(sources_to_search):\n             step_id = step_ids[i]\n             error_update = add_stream_update(state, {\n                 'id': step_id,\n                 'type': source_type,\n                 'status': 'completed',\n                 'title': f\"Additional {source_type} search failed for '{query_obj.query}'\",\n                 'query': 
query_obj.query,\n                 'message': f\"Error during gap search: {str(e)}\",\n                 'completedSteps': completed_steps,\n                 'totalSteps': total_steps,\n                 'overwrite': True\n             })\n             all_updates.extend(error_update)\n         # Do not add partial results if gather failed significantly\n         all_new_results = []\n\n\n    return {\n        \"search_results\": all_new_results, # Append results\n        \"current_gap_search_index\": idx + 1,\n        \"stream_updates\": all_updates\n        # Note: completed_steps_count is handled in the final synthesis update\n    }\n\n\nasync def synthesize_final_report(state: ResearchState) -> Dict[str, Any]:\n    \"\"\"Synthesizes all findings if advanced search was performed.\"\"\"\n    all_search_results = state['search_results']\n    gap_analysis = state.get('gap_analysis')\n    \n    # This node is only reached if depth=='advanced' and gaps were found/searched\n    \n    # Compute completed and total step counts for progress display\n    completed_steps = state.get('completed_steps_count', 0)\n    total_steps = state.get('total_steps', 0)\n    \n    # Send running update with progress information\n    running_updates = add_stream_update(state, {\n        'id': 'final-synthesis',\n        'type': 'analysis',\n        'status': 'running',\n        'title': 'Final Research Synthesis',\n        'analysisType': 'synthesis',\n        'message': 'Synthesizing all research findings...',\n        'completedSteps': completed_steps,\n        'totalSteps': total_steps,\n    })\n\n    # Prepare gap analysis summary for prompt (avoid sending full objects if too large)\n    gap_summary = \"No gap analysis performed or available.\"\n    if gap_analysis:\n         gap_summary = {\n             \"limitations_summary\": [l.description for l in gap_analysis.limitations],\n             \"knowledge_gaps_summary\": [f\"{g.topic}: {g.reason}\" for g in gap_analysis.knowledge_gaps],\n             \"followup_summary\": 
[f.action for f in gap_analysis.recommended_followup]\n         }\n         gap_summary = json.dumps(gap_summary, indent=2)\n\n\n    prompt = f\"\"\"Synthesize all research findings, including the initial searches, the gap analysis, and any follow-up research conducted to fill those gaps.\nHighlight key conclusions, assign a confidence score (0-1), list supporting evidence (e.g., citing source URLs or titles briefly), and identify remaining uncertainties.\n\nStick strictly to the requested output schema.\n\nTopic: \"{state['topic']}\"\n\nCombined Search Results (Initial + Gap Filling) JSON:\n{json.dumps([r.dict() for r in all_search_results], indent=2, default=str)} \n\nGap Analysis Summary:\n{gap_summary}\n\nGenerate the final synthesis.\"\"\"\n\n    try:\n        final_synthesis_result = await asyncio.get_event_loop().run_in_executor(\n            None, generate_structured_output, llm, FinalSynthesisResult, prompt\n        )\n        # final_synthesis_result = generate_structured_output(llm, FinalSynthesisResult, prompt) # If async\n        \n        final_total_steps = state['total_steps'] # Should already include the +2 for advanced\n        final_completed_steps = final_total_steps # Synthesis is the last step\n\n        # Send completed update\n        completed_updates = add_stream_update(state, {\n            'id': 'final-synthesis',\n            'type': 'analysis',\n            'status': 'completed',\n            'title': 'Final Research Synthesis',\n            'analysisType': 'synthesis',\n            'findings': [\n                 {\"insight\": f.finding, \"evidence\": f.supporting_evidence, \"confidence\": f.confidence} \n                 for f in final_synthesis_result.key_findings\n             ], # Simplified for stream\n            'uncertainties': final_synthesis_result.remaining_uncertainties,\n            'message': f\"Synthesized {len(final_synthesis_result.key_findings)} key findings\",\n            'overwrite': True,\n            
'completedSteps': final_completed_steps - 1, # Show as nearly complete before final progress update\n            'totalSteps': final_total_steps\n        })\n\n        all_updates = running_updates + completed_updates\n        \n        # Add final progress update\n        final_progress_update = add_stream_update(state, {\n            'id': 'research-progress',\n            'type': 'progress',\n            'status': 'completed',\n            'title': 'Research Progress',\n            'message': 'Research complete',\n            'completedSteps': final_completed_steps,\n            'totalSteps': final_total_steps,\n            'isComplete': True,\n            'overwrite': True, # Overwrite any previous progress\n            'timestamp': time.time()\n        })\n        all_updates.extend(final_progress_update)\n\n        return {\n            \"final_synthesis\": final_synthesis_result,\n            \"stream_updates\": all_updates,\n            \"completed_steps_count\": final_completed_steps # Mark final count\n        }\n    except Exception as e:\n        print(f\"Error during final synthesis: {e}\")\n        final_total_steps = state['total_steps']\n        # Send error update for synthesis\n        running_updates = add_stream_update(state, { # Re-create the running update so it is defined in this scope\n            'id': 'final-synthesis',\n            'type': 'analysis',\n            'status': 'running',\n            'title': 'Final Research Synthesis',\n            'analysisType': 'synthesis',\n            'message': 'Synthesizing all research findings...', 'timestamp': time.time() # Add timestamp if needed\n        }) # Ensures running_updates is available here; remove this if it should not be re-emitted\n        error_updates = add_stream_update(state, {\n            'id': 'final-synthesis',\n            'type': 'analysis',\n            'status': 'completed', # Mark step as ended\n            'title': 'Final Synthesis Failed',\n            'analysisType': 'synthesis',\n            'message': f\"Error during synthesis: 
{str(e)}\",\n            'overwrite': True,\n            'completedSteps': final_total_steps -1, \n            'totalSteps': final_total_steps\n        })\n         # Still send a final progress update, but indicate potential incompletion\n        final_progress_update = add_stream_update(state, {\n            'id': 'research-progress',\n            'type': 'progress',\n            'status': 'completed', # Research process finished, even if synthesis failed\n            'title': 'Research Progress', \n            'message': 'Research finished, but final synthesis failed.',\n            'completedSteps': final_total_steps - 1, # One step failed\n            'totalSteps': final_total_steps,\n            'isComplete': True, # The graph run is complete\n            'overwrite': True \n        })\n        all_updates = running_updates + error_updates + final_progress_update\n        return {\n            \"final_synthesis\": None, # Indicate failure\n            \"stream_updates\": all_updates,\n             \"completed_steps_count\": final_total_steps - 1\n        }\n\n\ndef finalize_basic_research(state: ResearchState) -> Dict[str, Any]:\n    \"\"\"Adds the final progress update for basic depth or advanced without gaps.\"\"\"\n    final_total_steps = state['total_steps']\n    final_completed_steps = state['completed_steps_count'] # Should be total_steps if no errors\n    \n    message = \"Research complete\"\n    # Check if gap analysis failed, adjust message\n    if state.get('gap_analysis') is None and state['current_analysis_step_index'] == len(state['analysis_steps_planned']):\n         message = \"Research finished, but gap analysis failed.\"\n         final_completed_steps = state['completed_steps_count'] # Keep completed count as is\n\n    final_progress_update = add_stream_update(state, {\n        'id': 'research-progress',\n        'type': 'progress',\n        'status': 'completed',\n        'title': 'Research Progress',\n        'message': message,\n        
'completedSteps': final_completed_steps,\n        'totalSteps': final_total_steps,\n        'isComplete': True,\n        'overwrite': True,\n        'timestamp': time.time()\n    })\n    return {\"stream_updates\": final_progress_update}\n\n# --- generate_final_markdown_report function ---\n\nasync def generate_final_markdown_report(state: ResearchState) -> Dict[str, Any]:\n    \"\"\"Generates the final, long-form Markdown report using all gathered data.\"\"\"\n\n    print(\"--- Entering Node: generate_final_markdown_report ---\")\n\n    # --- Fetch state data ---\n    topic = state['topic']\n    final_synthesis = state.get('final_synthesis')\n    search_results = state.get('search_results', [])\n    gap_analysis = state.get('gap_analysis')\n    \n    # Current completed/total step counts, used for progress display\n    completed_steps = state.get('completed_steps_count', 0)\n    total_steps = state.get('total_steps', 0)\n\n    # --- Check that synthesis data exists ---\n    if not final_synthesis:\n        print(\"Skipping final report generation: Final synthesis data is missing.\")\n        skipped_update = add_stream_update(state, {\n            'id': 'final-report-generation', 'type': 'report', 'status': 'completed',\n            'title': 'Final Report Generation Skipped',\n            'message': 'Skipped report generation because final synthesis data was missing.',\n            'completedSteps': completed_steps,\n            'totalSteps': total_steps,\n            'overwrite': True, 'timestamp': time.time()\n        })\n        base_total_steps = state.get('total_steps', 0)\n        final_total_steps = base_total_steps # No extra step added\n        final_completed_steps = state.get('completed_steps_count', 0)\n        final_progress_update = add_stream_update(state, {\n             'id': 'research-progress', 'type': 'progress', 'status': 'completed',\n             'title': 'Research Progress', 'message': 'Research finished, synthesis missing, report skipped.',\n             'completedSteps': final_completed_steps, 'totalSteps': 
final_total_steps,\n             'isComplete': True, 'overwrite': True, 'timestamp': time.time()\n        })\n        return {\"final_report_markdown\": None, \"stream_updates\": skipped_update + final_progress_update}\n\n    # --- Send 'running' update ---\n    running_updates = add_stream_update(state, {\n        'id': 'final-report-generation',\n        'type': 'report',\n        'status': 'running',\n        'title': 'Generating Final Report',\n        'message': 'Compiling research findings into the final report...',\n        'completedSteps': completed_steps,\n        'totalSteps': total_steps,\n        'timestamp': time.time() # Add timestamp\n    })\n    all_updates = list(running_updates) # Initialize the updates list\n\n    # --- Build detailed context (built only once) ---\n    print(\"--- Building Context for Final Report ---\")\n    context_parts = []\n    context_parts.append(f\"## Research Topic:\\n{topic}\\n\")\n\n    # 1. Add final synthesis results\n    context_parts.append(\"## I. Synthesized Key Findings & Uncertainties (Core Content)\\n\")\n    context_parts.append(\"### Key Findings (Elaborate on these using evidence below):\\n\")\n    if final_synthesis.key_findings:\n        for i, finding in enumerate(final_synthesis.key_findings):\n            context_parts.append(f\"**Finding {i+1}: {finding.finding}**\")\n            context_parts.append(f\"   - Confidence: {finding.confidence:.2f}\")\n            context_parts.append(f\"   - Evidence Hints: {', '.join(finding.supporting_evidence)}\")\n            context_parts.append(\"\")\n    else:\n        context_parts.append(\"- No key findings were synthesized.\\n\")\n\n    context_parts.append(\"### Remaining Uncertainties (Address in conclusion or limitations):\\n\")\n    if final_synthesis.remaining_uncertainties:\n        for uncertainty in final_synthesis.remaining_uncertainties:\n            context_parts.append(f\"- {uncertainty}\")\n    else:\n        context_parts.append(\"- No specific remaining uncertainties identified.\\n\")\n    
context_parts.append(\"\\n\")\n\n    # 2. Add gap analysis results\n    if gap_analysis:\n         context_parts.append(\"## II. Gap Analysis Summary (For 'Scope and Limitations' section):\\n\")\n         if gap_analysis.limitations:\n              context_parts.append(\"### Identified Limitations:\\n\")\n              for limit in gap_analysis.limitations:\n                   context_parts.append(f\"- **{limit.type}**: {limit.description} (Severity: {limit.severity})\")\n                   if limit.potential_solutions:\n                        context_parts.append(f\"  - Potential Solutions: {'; '.join(limit.potential_solutions)}\")\n         else:\n             context_parts.append(\"- No specific limitations identified.\\n\")\n\n         if gap_analysis.knowledge_gaps:\n              context_parts.append(\"### Identified Knowledge Gaps:\\n\")\n              for gap in gap_analysis.knowledge_gaps:\n                   context_parts.append(f\"- **{gap.topic}**: {gap.reason}\")\n         else:\n             context_parts.append(\"- No specific knowledge gaps identified.\\n\")\n         context_parts.append(\"\\n\")\n    else:\n        context_parts.append(\"## II. Gap Analysis Summary:\\n- Gap analysis was not performed or yielded no results.\\n\")\n\n    # 3. Add detailed search results context\n    context_parts.append(\"## III. 
Search Results Context (Evidence for Citations [Title](URL) and Details):\\n\")\n    search_results_list = state.get('search_results', [])\n    total_results_count = sum(len(group.results) for group in search_results_list)\n    context_parts.append(f\"(Reference Appendix: Contains snippets from {total_results_count} collected results)\\n\")\n\n    processed_urls: Set[str] = set()\n    max_results_per_query_in_context = 5\n    max_content_length = 600\n\n    if search_results_list:\n        for result_group in search_results_list:\n             query_text = result_group.query.query\n             source_type = result_group.type\n             context_parts.append(f\"### Context for Query: \\\"{query_text}\\\" ({source_type.upper()})\\n\")\n\n             results_shown_count = 0\n             if result_group.results:\n                  for item in result_group.results:\n                        if results_shown_count >= max_results_per_query_in_context:\n                             break\n                        if not item.url or item.url in processed_urls:\n                             continue\n\n                        title = item.title.replace('\"',\"'\").strip() if item.title else \"Source\"\n                        url = item.url\n\n                        content_full = item.content if item.content else \"\"\n                        content_snippet = content_full[:max_content_length]\n                        if len(content_full) > max_content_length:\n                             last_period = content_snippet.rfind('.')\n                             if last_period > max_content_length * 0.7:\n                                  content_snippet = content_snippet[:last_period+1]\n                             else:\n                                  content_snippet += \"...\"\n                        content_snippet = content_snippet.replace('\\n', ' ').strip()\n\n                        context_parts.append(f\"- **[{title}]({url})**\")\n                        if 
content_snippet:\n                            context_parts.append(f\"  - Snippet: {content_snippet}\")\n\n                        processed_urls.add(url)\n                        results_shown_count += 1\n             else:\n                  context_parts.append(\"- (No relevant results found or processed for this query)\")\n             context_parts.append(\"\")\n    else:\n        context_parts.append(\"- No search results were collected.\\n\")\n\n    user_prompt_context = \"\\n\".join(context_parts)\n    # --- End of context building ---\n\n    # --- Define prompts ---\n    try:\n        current_date_str = datetime.now().strftime(\"%a, %b %d, %Y\")\n        # Format the template imported from prompt.py\n        system_prompt = FINAL_REPORT_SYSTEM_PROMPT_TEMPLATE.format(current_date=current_date_str)\n    except Exception as e:\n        print(f\"Error formatting system prompt: {e}\")\n        system_prompt = \"Error: Could not format system prompt.\" # Fallback\n\n    # Closing instructions for the user prompt\n    user_prompt = f\"\"\"Based *only* on the provided context below (Sections I, II, III), please generate the comprehensive research report following ALL the guidelines and requirements detailed in the system prompt. Ensure deep analysis, structure with Introduction, thematic H2 sections with H3 subsections for each finding (3-5 paragraphs each), Scope/Limitations, and Conclusion. Every factual claim MUST have an inline citation [Title](URL) from Section III. Aim for a substantial word count by fully utilizing the context.\n\n{user_prompt_context}\n\nGenerate the final Markdown research report now:\"\"\"\n\n    # --- Call LLM and handle response ---\n    markdown_content = \"\"\n    try:\n        print(\"--- Calling LLM for Final Report Generation ---\")\n        response = await llm_creative.ainvoke([\n            SystemMessage(content=system_prompt),\n            HumanMessage(content=user_prompt)\n        ])\n        markdown_content = response.content\n        print(f\"--- LLM Call Successful. 
Report Length: {len(markdown_content)} chars ---\")\n\n        # Send 'final-report-generation' completed update\n        completed_updates = add_stream_update(state, {\n            'id': 'final-report-generation', 'type': 'report', 'status': 'completed',\n            'title': 'Final Report Generated',\n            'message': f'Successfully generated Markdown report ({len(markdown_content)} characters).',\n            'completedSteps': completed_steps,\n            'totalSteps': total_steps,\n            'overwrite': True, 'timestamp': time.time() # Add timestamp\n        })\n        all_updates.extend(completed_updates) # Append to the list\n\n        # Send the final 'research-progress' completed update\n        base_total_steps = state.get('total_steps', 0)\n        # Add 1 to base_total_steps if valid; otherwise add 1 to the completed-steps count\n        final_total_steps = base_total_steps + 1 if base_total_steps > 0 else state.get('completed_steps_count', 0) + 1\n        final_completed_steps = final_total_steps\n\n        final_progress_update = add_stream_update(state, {\n             'id': 'research-progress', 'type': 'progress', 'status': 'completed',\n             'title': 'Research Progress', # <-- ensure a title is present\n             'message': 'Research complete',\n             'completedSteps': final_completed_steps,\n             'totalSteps': final_total_steps,\n             'isComplete': True, 'overwrite': True, 'timestamp': time.time()\n        })\n        all_updates.extend(final_progress_update)\n\n        print(\"--- Exiting Node: generate_final_markdown_report (Success) ---\")\n        return {\n            \"final_report_markdown\": markdown_content,\n            \"stream_updates\": all_updates,\n            \"completed_steps_count\": final_completed_steps\n        }\n\n    except Exception as e:\n        print(f\"Error during final report generation: {e}\")\n        # Send 'final-report-generation' failure update\n        error_updates = add_stream_update(state, {\n            'id': 'final-report-generation', 'type': 'report', 
'status': 'completed',\n            'title': 'Final Report Generation Failed',\n            'message': f\"Error generating report: {str(e)}\",\n            'overwrite': True, 'timestamp': time.time()\n        })\n        all_updates.extend(error_updates)\n\n        # Send the final 'research-progress' completed update (report failed)\n        base_total_steps = state.get('total_steps', 0)\n        final_total_steps = base_total_steps + 1 if base_total_steps > 0 else state.get('completed_steps_count', 0) + 1\n        final_completed_steps = final_total_steps - 1 # This node failed\n\n        final_progress_update = add_stream_update(state, {\n             'id': 'research-progress', 'type': 'progress', 'status': 'completed',\n             'title': 'Research Progress', # <-- ensure a title is present\n             'message': 'Research finished, but final report generation failed.',\n             'completedSteps': final_completed_steps,\n             'totalSteps': final_total_steps,\n             'isComplete': True, 'overwrite': True, 'timestamp': time.time()\n        })\n        all_updates.extend(final_progress_update)\n\n        print(\"--- Exiting Node: generate_final_markdown_report (Error) ---\")\n        return {\n            \"final_report_markdown\": f\"# Report Generation Failed\\n\\nError: {str(e)}\",\n            \"stream_updates\": all_updates,\n            \"completed_steps_count\": state.get('completed_steps_count', 0) # Do not increment on failure\n        }"
  },
  {
    "path": "super_agents/deep_research/reason_graph/prompt.py",
    "content": "# reason_graph/prompt.py\nFINAL_REPORT_SYSTEM_PROMPT_TEMPLATE = \"\"\"You are an advanced research assistant tasked with writing a final, comprehensive research report based *only* on the provided context (synthesized findings, gap analysis, search results). Your focus is deep analysis, logical structure, and accurate citation based *only* on the provided evidence.\nThe current date is {current_date}.\n\n**Report Requirements:**\n\n1.  **Length & Depth:** Generate a highly detailed and comprehensive report. Aim for a substantial length (e.g., target **3000-5000 words** or more if the context supports it) by elaborating deeply on the findings using the provided search result details. Do NOT just summarize. Analyze, compare, contrast, and discuss implications.\n2.  **Structure:**\n    * Start with an \"Introduction\" section (~2-3 paragraphs) setting the stage for the research topic.\n    * Create thematic sections using H2 headings (##) based on the \"Synthesized Key Findings\" provided in the context.\n    * For *each* Key Finding, create a dedicated subsection using H3 headings (###).\n    * Within each H3 subsection, write **3-5 detailed paragraphs** elaborating on the finding. Use specific details, data points, or quotes found in the \"Search Results Context\" section to support your points. Critically analyze the evidence where possible.\n    * Include a dedicated \"Scope and Limitations\" section (H2) using insights from the \"Gap Analysis Summary\" context.\n    * End with a \"Conclusion\" section (H2, ~2-3 paragraphs) summarizing the main takeaways and discussing the \"Remaining Uncertainties\" provided in the context.\n3.  
**Citations:**\n    * You MUST cite every factual claim using evidence *only* from the \"Search Results Context\".\n    * Place citations *inline* immediately after the relevant sentence or statement.\n    * Use the format `[Title](URL)` where Title and URL are taken directly from the Search Results Context section.\n    * Do *not* list citations separately at the end. Do *not* hallucinate sources.\n4.  **Formatting:**\n    * Use Markdown format exclusively.\n    * Use H2 (##) and H3 (###) headings only. Do NOT use H1 (#).\n    * Write in well-structured paragraphs. Bullet points are acceptable within paragraphs or for specific lists, but the main body should be paragraphs.\n    * Use LaTeX for math ($inline$ or $$block$$) and \"USD\" for currency if relevant.\n5.  **Tone & Style:** Maintain a formal, objective, analytical tone appropriate for a research report. Be creative in synthesis but strictly evidence-based.\n\n**Context Sections Provided:**\n- Section I: Synthesized Key Findings & Uncertainties (Core content to elaborate)\n- Section II: Gap Analysis Summary (For limitations section)\n- Section III: Search Results Context (Evidence for details and citations)\n\nAdhere strictly to these requirements and use *only* the provided context.\n\"\"\"\n"
  },
  {
    "path": "super_agents/deep_research/reason_graph/schemas.py",
    "content": "# reason_graph/schemas.py\n\nfrom typing import List, Optional, Literal, Dict, Any\nfrom pydantic import BaseModel, Field\n\n# --- Pydantic Schemas Mirroring Zod Schemas from original JS ---\n\nclass SearchQuery(BaseModel):\n    \"\"\"Represents a single search query within the research plan.\"\"\"\n    query: str = Field(description=\"The specific search query string.\")\n    rationale: str = Field(description=\"The reasoning behind why this query is important.\")\n    source: Literal['web', 'academic', 'x', 'all'] = Field(description=\"The source type(s) to search.\")\n    priority: int = Field(description=\"Priority of the query (e.g., 2-4, lower means higher priority).\")\n\nclass RequiredAnalysis(BaseModel):\n    \"\"\"Represents a required analysis step in the research plan.\"\"\"\n    type: str = Field(description=\"The type of analysis to perform (e.g., 'SWOT', 'Comparative', 'Sentiment').\")\n    description: str = Field(description=\"A brief description of what the analysis should cover.\")\n    importance: int = Field(description=\"Importance score (e.g., 1-5, higher means more important).\")\n\nclass ResearchPlan(BaseModel):\n    \"\"\"The overall research plan generated by the LLM.\"\"\"\n    search_queries: List[SearchQuery] = Field(\n        description=\"List of targeted search queries.\",\n    )\n    required_analyses: List[RequiredAnalysis] = Field(\n        description=\"List of key analyses to perform on the search results.\",\n    )\n\nclass SearchResultItem(BaseModel):\n    \"\"\"Represents a single item returned from a search API.\"\"\"\n    source: Literal['web', 'academic', 'x'] = Field(description=\"The type of source the result came from.\")\n    title: str = Field(description=\"The title of the search result.\")\n    url: str = Field(description=\"The URL of the search result.\")\n    content: str = Field(description=\"The content snippet or summary of the result.\")\n    tweetId: Optional[str] = Field(default=None, 
description=\"The ID of the tweet, if the source is 'x'.\")\n\nclass SearchStepResult(BaseModel):\n    \"\"\"Holds the results obtained from executing a single search step.\"\"\"\n    type: Literal['web', 'academic', 'x'] = Field(description=\"The type of search performed for this step.\")\n    query: SearchQuery = Field(description=\"The original SearchQuery object that prompted this search.\")\n    results: List[SearchResultItem] = Field(description=\"The list of results found for this search step.\")\n\nclass AnalysisFinding(BaseModel):\n    \"\"\"Represents a single finding from an analysis step.\"\"\"\n    insight: str = Field(description=\"The core insight or finding discovered.\")\n    evidence: List[str] = Field(description=\"List of supporting evidence (e.g., brief quotes, source references).\")\n    confidence: float = Field(description=\"Confidence score in the finding (0.0 to 1.0).\")\n\nclass AnalysisResult(BaseModel):\n    \"\"\"The structured output of a single analysis performed by the LLM.\"\"\"\n    findings: List[AnalysisFinding] = Field(description=\"List of key findings from the analysis.\")\n    implications: List[str] = Field(description=\"Potential implications of the findings.\")\n    limitations: List[str] = Field(description=\"Limitations noted during this specific analysis.\")\n\nclass Limitation(BaseModel):\n    \"\"\"Describes a limitation identified during the gap analysis phase.\"\"\"\n    type: str = Field(description=\"The type of limitation (e.g., 'Source Bias', 'Data Scarcity').\")\n    description: str = Field(description=\"Detailed description of the limitation.\")\n    severity: int = Field(description=\"Severity score (e.g., 2-10, higher means more severe).\")\n    potential_solutions: List[str] = Field(description=\"Suggested ways to mitigate or address the limitation.\")\n\nclass KnowledgeGap(BaseModel):\n    \"\"\"Describes a knowledge gap identified during the gap analysis phase.\"\"\"\n    topic: str = 
Field(description=\"The specific topic or area where knowledge is lacking.\")\n    reason: str = Field(description=\"The reason why this gap exists or is significant.\")\n    additional_queries: List[str] = Field(description=\"Specific queries suggested to help fill this gap.\")\n\nclass RecommendedFollowup(BaseModel):\n    \"\"\"Describes a recommended follow-up action from the gap analysis.\"\"\"\n    action: str = Field(description=\"The suggested follow-up action.\")\n    rationale: str = Field(description=\"The reasoning behind recommending this action.\")\n    priority: int = Field(description=\"Priority score for the follow-up action (e.g., 2-10).\")\n\nclass GapAnalysisResult(BaseModel):\n    \"\"\"The structured output of the gap analysis phase.\"\"\"\n    limitations: List[Limitation] = Field(description=\"List of identified limitations in the research.\")\n    knowledge_gaps: List[KnowledgeGap] = Field(description=\"List of identified knowledge gaps.\")\n    recommended_followup: List[RecommendedFollowup] = Field(description=\"List of recommended follow-up actions.\")\n\nclass KeyFinding(BaseModel):\n    \"\"\"Represents a key finding in the final synthesis report.\"\"\"\n    finding: str = Field(description=\"The synthesized key finding or conclusion.\")\n    confidence: float = Field(description=\"Overall confidence in this finding (0.0 to 1.0).\")\n    supporting_evidence: List[str] = Field(description=\"List of key pieces of evidence supporting the finding (e.g., references to specific search results or analyses).\")\n\nclass FinalSynthesisResult(BaseModel):\n    \"\"\"The structured output of the final synthesis phase (only in 'advanced' depth).\"\"\"\n    key_findings: List[KeyFinding] = Field(description=\"List of synthesized key findings from all research.\")\n    remaining_uncertainties: List[str] = Field(description=\"List of questions or uncertainties that remain after the research.\")\n\n\n# --- Helper Schemas for Graph State and Streaming 
---\n\nclass StepInfo(BaseModel):\n    \"\"\"Helper schema to store planned step information in the state.\"\"\"\n    id: str = Field(description=\"Unique ID for the step.\")\n    type: str = Field(description=\"Type of step ('web', 'academic', 'x', 'analysis').\")\n    details: Dict[str, Any] = Field(description=\"Holds the original query or analysis object details.\")\n\n\nclass StreamUpdateData(BaseModel):\n    \"\"\"Data payload for a single streaming update message.\"\"\"\n    id: str = Field(description=\"Unique ID for the step or phase this update refers to.\")\n    type: str = Field(description=\"Type of the step or phase ('plan', 'web', 'academic', 'x', 'analysis', 'progress', 'error').\")\n    status: Literal['running', 'completed'] = Field(description=\"Current status of the step/phase.\")\n    title: str = Field(description=\"Display title for the update.\")\n    message: str = Field(description=\"Descriptive message about the current status or result.\")\n    timestamp: float = Field(description=\"Timestamp when the update was generated (epoch time).\")\n    overwrite: Optional[bool] = Field(default=False, description=\"Whether this update should replace a previous one with the same ID in the UI.\")\n    # Optional fields depending on the update type and status\n    plan: Optional[ResearchPlan] = Field(default=None, description=\"The research plan (used in 'plan completed' update).\")\n    totalSteps: Optional[int] = Field(default=None, description=\"Total number of steps planned for the research.\")\n    query: Optional[str] = Field(default=None, description=\"The query string for search steps.\")\n    results: Optional[List[SearchResultItem]] = Field(default=None, description=\"Search results (used in 'search completed' updates).\")\n    analysisType: Optional[str] = Field(default=None, description=\"The type of analysis being performed or completed.\")\n    # Use Dict for findings in stream update for simplicity, full Pydantic models are in state\n  
  findings: Optional[List[Dict]] = Field(default=None, description=\"Analysis findings (simplified for streaming).\")\n    gaps: Optional[List[KnowledgeGap]] = Field(default=None, description=\"Identified knowledge gaps.\")\n    recommendations: Optional[List[RecommendedFollowup]] = Field(default=None, description=\"Follow-up recommendations.\")\n    uncertainties: Optional[List[str]] = Field(default=None, description=\"Remaining uncertainties from final synthesis.\")\n    completedSteps: Optional[int] = Field(default=None, description=\"Number of steps completed so far.\")\n    isComplete: Optional[bool] = Field(default=None, description=\"Flag indicating if the entire research process is complete.\")\n\nclass StreamUpdate(BaseModel):\n    \"\"\"Wrapper for the streaming update message, matching the original JS structure.\"\"\"\n    type: Literal['research_update'] = Field(default='research_update')\n    data: StreamUpdateData = Field(description=\"The actual data payload for the update.\")"
  },
  {
    "path": "super_agents/deep_research/reason_graph/state.py",
    "content": "import operator\nfrom typing import TypedDict, List, Optional, Annotated, Dict, Any, Literal\n\n# Use relative import to access schemas defined within the same package\nfrom super_agents.deep_research.reason_graph.schemas import (\n    ResearchPlan,\n    SearchStepResult,\n    GapAnalysisResult,\n    FinalSynthesisResult,\n    StreamUpdate,\n    StepInfo,\n    SearchQuery,\n)\n\nclass ResearchState(TypedDict):\n    \"\"\"\n    Represents the state of the research graph execution.\n    It holds inputs, intermediate results, and final outputs.\n    \"\"\"\n    # --- Inputs ---\n    topic: str\n    depth: Literal['basic', 'advanced']\n\n    # --- Planning Phase ---\n    research_plan: Optional[ResearchPlan]\n    search_steps_planned: List[StepInfo] # Flat list of all searches to execute\n    analysis_steps_planned: List[StepInfo] # List of analyses to execute\n\n    # --- Execution Tracking ---\n    current_search_step_index: int # Index for iterating through search_steps_planned\n    current_analysis_step_index: int # Index for iterating through analysis_steps_planned\n    current_gap_search_index: int # Index for iterating through additional_queries_planned\n\n    # --- Accumulated Results ---\n    # Use Annotated and operator.add to append results instead of replacing the list\n    search_results: Annotated[List[SearchStepResult], operator.add]\n\n    # --- Analysis & Synthesis Results ---\n    gap_analysis: Optional[GapAnalysisResult]\n    # List of queries generated by gap analysis for advanced depth\n    additional_queries_planned: List[SearchQuery]\n    final_synthesis: Optional[FinalSynthesisResult] # Result of final synthesis if advanced depth\n\n    # --- Streaming & Progress ---\n    # Use Annotated and operator.add to append updates\n    stream_updates: Annotated[List[StreamUpdate], operator.add]\n    completed_steps_count: int # Counter for completed steps (searches, analyses, gap, synthesis)\n    total_steps: int # Total number of steps 
calculated after planning (may update after gap analysis)\n\n    # --- Final Output ---\n    final_report_markdown: Optional[str] # The final generated Markdown report"
  },
  {
    "path": "super_agents/deep_research/reason_graph/tools.py",
    "content": "# reason_graph/tools.py\n\nimport os\nimport json\nimport time\nimport re\nimport asyncio\nfrom datetime import datetime\nfrom typing import Optional, List, Literal, Dict, Any, Tuple, Set # <--- Tuple and Set are used below\n\n# --- Environment Variable Loading ---\nfrom dotenv import load_dotenv\nload_dotenv()\n\n# --- Pydantic & LangChain Core ---\nfrom pydantic import BaseModel # Use Pydantic V2\nfrom langchain_core.prompts import ChatPromptTemplate\nfrom langchain_core.messages import HumanMessage, SystemMessage\nfrom langchain_core.runnables.base import RunnableSerializable # Type hint for LLM\nfrom langchain_openai import ChatOpenAI # Default\n\n# --- External Service Clients ---\ntry:\n    from tavily import AsyncTavilyClient # Use Async Client\n    TAVILY_AVAILABLE = True\nexcept ImportError:\n    TAVILY_AVAILABLE = False\n    # print(\"Warning: tavily-python not installed. Web searches via Tavily will fail.\")\n\ntry:\n    from exa_py import Exa\n    EXA_AVAILABLE = True\nexcept ImportError:\n    EXA_AVAILABLE = False\n    # print(\"Warning: exa-py not installed. 
Academic/X searches via Exa will fail.\")\n\n\n# --- Internal Imports ---\n# Assuming these exist in sibling files\ntry:\n    from super_agents.deep_research.reason_graph.schemas import SearchResultItem, SearchQuery, StreamUpdate, StreamUpdateData\n    from super_agents.deep_research.reason_graph.state import ResearchState\nexcept ImportError as e:\n    print(f\"Error importing local schemas/state: {e}\")\n    # Define dummy classes if needed for script to load partially\n    class SearchResultItem(BaseModel): pass\n    class SearchQuery(BaseModel): pass\n    class StreamUpdate(BaseModel): pass\n    class StreamUpdateData(BaseModel): pass\n    class ResearchState(dict): pass\n\n\n# --- API Key Loading ---\n# Prefer specific LLM_API_KEY, fallback to provider-specific or general OPENAI key\nLLM_API_KEY_FROM_ENV = os.getenv(\"LLM_API_KEY\")\nOPENAI_API_KEY_FROM_ENV = os.getenv(\"OPENAI_API_KEY\")\nGROQ_API_KEY_FROM_ENV = os.getenv(\"GROQ_API_KEY\") # For Groq Cloud\n\nTAVILY_API_KEY = os.getenv(\"TAVILY_API_KEY\")\nEXA_API_KEY = os.getenv(\"EXA_API_KEY\")\n\n# --- Configurable LLM Initialization ---\ndef initialize_llms() -> Tuple[Optional[RunnableSerializable], Optional[RunnableSerializable]]:\n    \"\"\"\n    Initializes and returns the main and creative LLM instances based on environment variables.\n    Supports providers: \"openai\", \"groq\", \"xai\"/\"grok\" (via compatible endpoint), \"openai_compatible\".\n    Returns: (llm, llm_creative) or (None, None) on failure.\n    \"\"\"\n    provider = os.getenv(\"LLM_PROVIDER\", \"openai\").lower()\n    model_name = os.getenv(\"LLM_MODEL_NAME\", \"gpt-4.1-mini\") # Sensible default\n    api_key = LLM_API_KEY_FROM_ENV # Use generic key first\n    base_url = os.getenv(\"LLM_BASE_URL\") # For compatible APIs\n\n    try:\n        temperature = float(os.getenv(\"LLM_TEMPERATURE\", \"0.0\"))\n        creative_temperature = float(os.getenv(\"LLM_CREATIVE_TEMPERATURE\", \"0.5\"))\n    except ValueError:\n        
print(\"Warning: Invalid LLM temperature value in .env. Using defaults (0.0 / 0.5).\")\n        temperature = 0.0\n        creative_temperature = 0.5\n\n    print(f\"\\n--- Initializing LLM ---\")\n    print(f\"Provider: '{provider}'\")\n    print(f\"Model Name: '{model_name}'\")\n    print(f\"Base URL: {base_url if base_url else 'Default'}\")\n    print(f\"Temperatures: Main={temperature}, Creative={creative_temperature}\")\n    print(f\"------------------------\")\n\n    llm_instance = None\n    llm_creative_instance = None\n\n    try:\n        if provider == \"openai\":\n            key_to_use = api_key or OPENAI_API_KEY_FROM_ENV\n            if not key_to_use:\n                raise ValueError(\"OpenAI API key not found (checked LLM_API_KEY, OPENAI_API_KEY).\")\n            llm_instance = ChatOpenAI(model=model_name, temperature=temperature, api_key=key_to_use)\n            llm_creative_instance = ChatOpenAI(model=model_name, temperature=creative_temperature, api_key=key_to_use)\n\n        elif provider == \"xai\" or provider == \"grok\":\n            print(\"Info: Configuring provider 'xai'/'grok'. 
Assuming OpenAI-compatible API endpoint.\")\n            if not api_key:\n                raise ValueError(f\"LLM_API_KEY is required for provider '{provider}' (Your xAI API Key).\")\n            if not base_url:\n                raise ValueError(f\"LLM_BASE_URL is required for provider '{provider}' (The xAI Grok API endpoint URL).\")\n            if not model_name:\n                raise ValueError(f\"LLM_MODEL_NAME is required for provider '{provider}' (e.g., 'grok-1').\")\n\n            llm_instance = ChatOpenAI(\n                model=model_name, temperature=temperature,\n                openai_api_key=api_key, openai_api_base=base_url,\n            )\n            llm_creative_instance = ChatOpenAI(\n                model=model_name, temperature=creative_temperature,\n                openai_api_key=api_key, openai_api_base=base_url,\n            )\n            print(f\"Note: Ensure '{model_name}' is valid for the xAI API at {base_url}.\")\n\n        elif provider == \"groq\":\n            # Groq Cloud exposes an OpenAI-compatible endpoint; reuse ChatOpenAI against it.\n            key_to_use = api_key or GROQ_API_KEY_FROM_ENV\n            if not key_to_use:\n                raise ValueError(\"Groq API key not found (checked LLM_API_KEY, GROQ_API_KEY).\")\n            groq_base_url = base_url or \"https://api.groq.com/openai/v1\"\n            llm_instance = ChatOpenAI(\n                model=model_name, temperature=temperature,\n                openai_api_key=key_to_use, openai_api_base=groq_base_url,\n            )\n            llm_creative_instance = ChatOpenAI(\n                model=model_name, temperature=creative_temperature,\n                openai_api_key=key_to_use, openai_api_base=groq_base_url,\n            )\n\n        elif provider == \"openai_compatible\":\n            if not api_key:\n                raise ValueError(f\"LLM_API_KEY is required for provider '{provider}'.\")\n            if not base_url:\n                raise ValueError(f\"LLM_BASE_URL is required for provider '{provider}'.\")\n            if not model_name:\n                raise ValueError(f\"LLM_MODEL_NAME is required for provider '{provider}'.\")\n\n            llm_instance = ChatOpenAI(\n                model=model_name, temperature=temperature,\n                openai_api_key=api_key, openai_api_base=base_url,\n            )\n            llm_creative_instance = ChatOpenAI(\n                model=model_name, temperature=creative_temperature,\n                openai_api_key=api_key, openai_api_base=base_url,\n            )\n        else:\n            raise ValueError(f\"Unsupported LLM_PROVIDER: '{provider}'. Check .env file. 
Supported: 'openai', 'groq', 'xai'/'grok', 'openai_compatible'.\")\n\n        print(\"--- LLM Initialization Successful ---\")\n        return llm_instance, llm_creative_instance\n\n    except Exception as e:\n        print(f\"!!! ERROR during LLM Initialization: {e}\")\n        return None, None\n\n# --- Initialize LLM instances at module level ---\nllm, llm_creative = initialize_llms()\n\n# --- Initialize External Service Clients ---\nif not TAVILY_API_KEY:\n    print(\"Warning: TAVILY_API_KEY not found in environment variables. Web search will fail.\")\ntavily_client = AsyncTavilyClient(api_key=TAVILY_API_KEY) if TAVILY_API_KEY and TAVILY_AVAILABLE else None\n\nif not EXA_API_KEY:\n    print(\"Warning: EXA_API_KEY not found in environment variables. Academic/X search will fail.\")\nexa_client = Exa(api_key=EXA_API_KEY) if EXA_API_KEY and EXA_AVAILABLE else None\n\n\n# --- Tool Helper Functions ---\n\ndef generate_structured_output(model: Optional[RunnableSerializable], schema: BaseModel, prompt: str, system_message: str = \"\") -> Optional[BaseModel]:\n    \"\"\"\n    Uses langchain `.with_structured_output` for reliable JSON generation.\n    Returns the parsed Pydantic object or None on failure.\n    \"\"\"\n    if model is None:\n        print(\"Error: LLM instance not available for structured output generation.\")\n        return None # Return None if LLM failed to initialize\n\n    try:\n        # Let LangChain handle method selection, but be aware of compatibility warnings.\n        # If issues persist with specific models/providers, try method=\"function_calling\".\n        structured_llm = model.with_structured_output(schema)\n        # structured_llm = model.with_structured_output(schema, method=\"function_calling\") # Fallback\n\n        messages = []\n        if system_message:\n            messages.append(SystemMessage(content=system_message))\n        messages.append(HumanMessage(content=prompt))\n        \n        # .invoke is typically synchronous 
for ChatModels\n        response = structured_llm.invoke(messages)\n        return response\n    except Exception as e:\n        print(f\"Error during structured output generation: {e}\")\n        # Consider logging the full traceback here if needed for debugging\n        # import traceback\n        # traceback.print_exc()\n        return None # Indicate failure\n\ndef extract_tweet_id(url: str) -> Optional[str]:\n    \"\"\"Extracts tweet ID from twitter.com or x.com URLs.\"\"\"\n    if not url:\n        return None\n    match = re.search(r\"(?:twitter\\.com|x\\.com)\\/\\w+\\/status\\/(\\d+)\", url)\n    return match.group(1) if match else None\n\ndef add_stream_update(state: ResearchState, data_dict: Dict[str, Any]) -> List[StreamUpdate]:\n    \"\"\"Creates and returns a list containing a single StreamUpdate, handling potential errors.\"\"\"\n    # Ensure required fields are present for validation, add timestamp\n    data_dict.setdefault('timestamp', time.time())\n    # Set other defaults expected by StreamUpdateData if necessary,\n    # although Pydantic handles Optional fields.\n\n    try:\n        # Validate data against the Pydantic model\n        update_data = StreamUpdateData(**data_dict)\n        stream_update = StreamUpdate(type='research_update', data=update_data)\n        return [stream_update]\n    except Exception as e: # Catch Pydantic ValidationError or others\n        print(f\"Error creating stream update for ID {data_dict.get('id', 'N/A')}: {e}\")\n        print(f\"Data causing error: {json.dumps(data_dict, indent=2, default=str)}\") # Print problematic data\n\n        # Create a standardized error update object\n        try:\n            error_update_data = StreamUpdateData(\n                id=data_dict.get('id', 'error-id') + '-validation-error', # Make ID unique\n                type='error',\n                status='completed', # Treat validation error as 'completed' step for flow\n                title='Stream Update Creation Error', # 
Specific Title\n                message=f\"Pydantic validation failed: {str(e)}\", # Include Pydantic error\n                timestamp=time.time()\n                # Ensure all *required* fields of StreamUpdateData are present here\n            )\n            stream_update = StreamUpdate(type='research_update', data=error_update_data)\n            return [stream_update]\n        except Exception as inner_e:\n            # If creating even the error update fails, print and return empty\n            print(f\"CRITICAL: Failed to create error stream update: {inner_e}\")\n            return []\n\n\n# --- Tool Wrappers ---\n\nasync def perform_web_search(query: str, depth: Literal['basic', 'advanced'], priority: int) -> List[SearchResultItem]:\n    \"\"\"Performs web search using Tavily.\"\"\"\n    if not tavily_client:\n        print(f\"Tavily client not available. Skipping web search for: '{query}'\")\n        return []\n\n    max_results = min(max(1, 6 - priority), 10)\n    search_depth = depth if depth in ['basic', 'advanced'] else 'basic'\n\n    try:\n        print(f\"--- Calling Tavily API for: '{query}' ---\")\n        response = await tavily_client.search(\n            query=query,\n            search_depth=search_depth,\n            include_answer=False, # Set based on whether you need Tavily's answer\n            max_results=max_results\n        )\n        print(f\"--- Tavily API call successful for: '{query}' ---\")\n\n        results_list = response.get('results', []) if isinstance(response, dict) else []\n\n        formatted_results = [\n            SearchResultItem(\n                source='web',\n                title=r.get('title', 'N/A'),\n                url=r.get('url', '#'),\n                content=r.get('content', '')\n            ) for r in results_list if isinstance(r, dict) and r.get('url')\n        ]\n        return formatted_results\n    except Exception as e:\n        print(f\"Error during Tavily search for '{query}': {e}\")\n        # import 
traceback # Optional: Uncomment for detailed trace\n        # traceback.print_exc()\n        return []\n\n\nasync def perform_academic_search(query: str, priority: int) -> List[SearchResultItem]:\n    \"\"\"Performs academic search using Exa.\"\"\"\n    if not exa_client:\n        print(f\"Exa client not available. Skipping academic search for: '{query}'\")\n        return []\n\n    num_results = min(max(1, 6 - priority), 5)\n\n    try:\n        print(f\"--- Calling Exa API (Academic) for: '{query}' ---\")\n        # Wrap synchronous Exa call in run_in_executor for async context\n        loop = asyncio.get_running_loop()\n        response = await loop.run_in_executor(\n            None, # Use default executor (ThreadPoolExecutor)\n            lambda: exa_client.search_and_contents(\n                query,\n                type='auto',\n                num_results=num_results,\n                highlights=True, # Request highlights/summary\n                use_autoprompt=True # Let Exa optimize query\n            )\n        )\n        print(f\"--- Exa API call (Academic) successful for: '{query}' ---\")\n\n        formatted_results = [\n            SearchResultItem(\n                source='academic',\n                title=r.title or 'N/A',\n                url=r.url or '#',\n                # Use highlights as content proxy; fallback to text\n                content=(r.highlights[0] if r.highlights else r.text or ''),\n                tweetId=None # Explicitly None for academic\n            ) for r in response.results if r.url # Ensure URL exists\n        ]\n        return formatted_results\n    except Exception as e:\n        print(f\"Error during Exa academic search for '{query}': {e}\")\n        # import traceback\n        # traceback.print_exc()\n        return []\n\nasync def perform_x_search(query_obj: SearchQuery) -> List[SearchResultItem]:\n    \"\"\"Performs X/Twitter search using Exa.\"\"\"\n    if not exa_client:\n        print(f\"Exa client not 
available. Skipping X search for: '{query_obj.query}'\")\n        return []\n\n    # Priority might influence number of results differently for social\n    num_results = max(2, min(query_obj.priority * 2, 10)) # Example: Scale priority, cap at 10\n\n    try:\n        print(f\"--- Calling Exa API (X/Twitter) for: '{query_obj.query}' ---\")\n        loop = asyncio.get_running_loop()\n        response = await loop.run_in_executor(\n            None,\n            lambda: exa_client.search_and_contents(\n                query_obj.query,\n                type='neural', # Often better for social media queries\n                num_results=num_results,\n                include_domains=['twitter.com', 'x.com'],\n                text=True,\n                use_autoprompt=True\n            )\n        )\n        print(f\"--- Exa API call (X/Twitter) successful for: '{query_obj.query}' ---\")\n\n        processed_tweets = []\n        for r in response.results:\n            tweet_id = extract_tweet_id(r.url)\n            if tweet_id and r.url: # Ensure valid ID and URL\n                processed_tweets.append(\n                    SearchResultItem(\n                        source='x',\n                        title=r.title or r.author or 'Tweet',\n                        url=r.url,\n                        content=r.text or '',\n                        tweetId=tweet_id\n                    )\n                )\n        return processed_tweets\n    except Exception as e:\n        print(f\"Error during Exa X search for '{query_obj.query}': {e}\")\n        # import traceback\n        # traceback.print_exc()\n        return []"
  },
  {
    "path": "super_agents/deep_research/tests/__init__.py",
    "content": ""
  },
  {
    "path": "super_agents/deep_research/tests/test_graph.py",
    "content": ""
  },
  {
    "path": "web/.gitignore",
    "content": "# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.\n\n# dependencies\n/node_modules\n/.pnp\n.pnp.*\n.yarn/*\n!.yarn/patches\n!.yarn/plugins\n!.yarn/releases\n!.yarn/versions\n\n# testing\n/coverage\n\n# next.js\n/.next/\n/out/\n\n# production\n/build\n\n# misc\n.DS_Store\n*.pem\n\n# debug\nnpm-debug.log*\nyarn-debug.log*\nyarn-error.log*\n.pnpm-debug.log*\n\n# env files (can opt-in for committing if needed)\n.env.*\n\n# vercel\n.vercel\n\n# typescript\n*.tsbuildinfo\nnext-env.d.ts\n"
  },
  {
    "path": "web/README.md",
    "content": "# Mentis 的 LangGraph + NextJS 集成演示\n\nMentis 演示项目展示了如何使用LangGraph创建AI代理并将其集成到NextJS应用程序中。它具体演示了 **ReAct Agent** (用于通用任务) 和 **Deep Research Agent** (用于深度主题探索) 的实现。LangGraph是一个强大的框架，用于构建代理和多代理工作流。它提供了构建复杂逻辑的灵活性，并具有出色的调试工具(LangGraph Studio)和监控功能(LangSmith)。NextJS是一个流行的Web应用程序框架。\n\n## 技术选择\n\n### 为什么选择LangGraph\n\nLangGraph是一个用于构建基于LLM的状态化应用程序的库。它可以用于创建AI代理和多代理系统，或在LLM调用周围建立预定义的代码路径。该框架提供了对流程执行的低级控制，使用灵活。在LangGraph中，您定义图形，其中节点本质上是包含自定义代码的函数。然后，您在这些节点之间建立边缘连接。图形有一个状态，它只是一个键值对字典，在每个节点执行后更新。\n\n### 为什么选择Next.js\n\nNext.js是一个优秀的全栈框架，提供了构建现代Web应用程序所需的一切功能：\n\n-   **服务器端渲染(SSR)**：通过在将页面发送到客户端之前在服务器上渲染页面，提高SEO和性能。\n-   **静态站点生成(SSG)**：允许在构建时预渲染页面，从而缩短加载时间。\n-   **API路由**：通过允许在同一应用程序中创建API端点，简化后端集成。\n-   **自动代码分割**：只加载正在访问的页面所需的JavaScript，提高性能。\n-   **基于文件的路由**：通过使用文件系统简化路由，使创建和管理路由变得容易。\n-   **内置CSS和Sass支持**：无需额外配置即可支持全局和模块化CSS以及Sass。\n-   **图像优化**：自动优化图像以提高性能和用户体验。\n-   **丰富的生态系统**：与React良好集成，拥有庞大的社区和插件工具生态系统。\n\n## Agent集成机制\n\n### 图执行与检查点\n\n首先，让我们探讨图（AI代理）执行的含义。LangGraph在应用执行前、每个图节点执行前以及应用执行后创建检查点（checkpoints）。检查点本质上是图状态的快照，指示下一步将执行哪个节点或哪些节点。\n\n检查点机制是LangGraph的核心特性之一，它允许：\n-   保存执行历史，便于回溯和调试\n-   在任意点暂停和恢复执行\n-   从特定状态创建分支执行\n-   实现人机交互循环\n\n在客户端应用中，我们需要复制这种架构以获得完全控制并访问图执行数据。这种逻辑封装在`useLangGraphAgent`钩子中，它调用AI服务API并在客户端同步代理状态。\n\n### useLangGraphAgent钩子\n\n`useLangGraphAgent`钩子是前端与LangGraph后端集成的核心。它提供了以下功能：\n\n**属性：**\n-   `status`：指示代理的执行状态（idle、running、stopping、error）\n-   `appCheckpoints`：图检查点和节点及其状态的列表\n\n**方法：**\n-   `run`：使用提供的状态执行代理\n-   `resume`：人机交互后继续代理执行\n-   `restore`：检索特定代理线程的检查点历史\n-   `replay`：从检查点重新执行代理\n-   `fork`：使用自定义状态创建检查点的分支并运行代理\n-   `stop`：停止正在执行的代理\n\n### 客户端状态同步\n\n钩子通过以下机制与后端同步：\n\n1.  **SSE流处理**：使用Server-Sent Events接收来自LangGraph的实时更新\n2.  **事件类型处理**：\n    -   `checkpoint`：处理新的检查点和状态更新\n    -   `message_chunk`：处理LLM生成的消息片段\n    -   `interrupt`：处理需要人机交互的中断\n    -   `custom`：处理自定义状态更新\n    -   `error`：处理执行错误\n\n3.  
**状态差异计算**：计算状态变化以优化UI更新\n\n## 功能特性\n\n-   **多 Agent 示例:** 演示了不同的 Agent 架构，当前包括：\n    -   **ReAct Agent:** 一个通用的助手，使用 ReAct 框架进行规划和工具使用。\n    -   **Deep Research Agent:** 一个专门用于执行深度研究任务的助手。\n-   **流式响应**：代理将LLM生成的内容实时流式传输到客户端应用程序。\n-   **生成式UI**：基于代理状态渲染组件，例如天气小部件。\n-   **人机交互**：代理可以向用户请求澄清以继续任务，例如确认创建提醒。\n-   **状态持久化**：LangGraph具有内置的持久层。它可用于在会话之间持久保存代理状态。在演示应用中，状态保存在内存中。参见[LangGraph持久化](https://langchain-ai.github.io/langgraph/docs/how-tos/persistence/)了解如何使用PostgreSQL或MongoDB。\n-   **重放和分支**：可以从任何检查点重放或分支代理。\n-   **代理状态复制**：基于图检查点，代理状态在客户端完全复制。\n-   **错误处理**：应用程序显示全局代理错误（例如代理不可访问时）以及图节点级别发生的错误。\n-   **停止代理**：可以停止代理执行并稍后恢复。\n-   **无依赖**：集成不依赖第三方库。您可以根据需要进行调整。\n-   **简洁UI**：应用程序基于shadcn组件，支持深色和浅色主题。\n\n## 项目架构\n\n项目分为两个主要部分：\n\n### 1. API服务器 (FastAPI + LangGraph)\n\n位于`/api`目录，包含：\n-   `server.py`：FastAPI服务器，提供与LangGraph交互的端点。负责加载和路由到正确的 Agent 图。\n-   `agent/` 目录：包含 LangGraph Agent 图的定义。这可能包括为 **ReAct Agent** 和 **Deep Research Agent** 配置或定义的独立模块（例如 `react_graph.py`, `research_graph.py` 或一个可配置的 `graph_factory.py`）。\n-   `utils.py`：用于格式化事件和状态的工具函数。\n\n### 2. Web客户端 (NextJS)\n\n位于`/web`目录（或项目根目录，取决于您的结构），包含：\n-   `app/`：NextJS应用程序页面和路由。包括用于不同 Agent 类型（如 `app/default/[id]/page.tsx` 和 `app/deep_research/[id]/page.tsx`）的动态路由。\n-   `hooks/useLangGraphAgent/`：与LangGraph代理交互的React钩子。\n-   `components/`：UI组件，包括 Sidebar 和 Agent 交互界面。\n-   `stores/`：使用Zustand的状态管理 (`chat-store.ts`)，用于存储聊天会话列表及其关联的 Agent 类型。\n\n## 技术实现细节\n\n### LangGraph与Next.js的集成\n\nLangGraph是Python框架，而Next.js是JavaScript框架，这使得直接集成变得复杂。我们的解决方案包括：\n\n1.  **FastAPI中间层**：创建一个FastAPI服务器作为中间层，将LangGraph功能暴露为REST API。\n2.  **SSE（Server-Sent Events）**：使用SSE实现从服务器到客户端的实时数据流。\n3.  **状态同步机制**：在客户端复制和维护LangGraph的状态。\n\n### 关键API端点\n\n-   `/agent`：运行代理，支持多种操作模式（run、resume、fork、replay）。服务器端会根据请求路由到正确的 Agent 图。\n-   `/history`：获取完整的状态历史，用于恢复图执行。\n-   `/state`：获取当前图状态。\n-   `/agent/stop`：停止正在运行的代理。\n\n### 数据流程\n\n1.  **客户端请求**：通过 `useLangGraphAgent` 钩子或直接 API 调用发起请求，指定要交互的 Agent。\n2.  
**服务器处理**：FastAPI 服务器接收请求，加载相应的 LangGraph Agent 图并开始执行。\n3.  **流式响应**：服务器通过 SSE 流式传输执行结果（检查点、消息、中断等）。\n4.  **客户端处理**：客户端解析事件流并更新本地状态 (`useLangGraphAgent` 钩子内部)。\n5.  **UI渲染**：基于更新的状态渲染 UI 组件（聊天消息、状态指示器等）。\n\n### 状态管理\n\nLangGraph的状态是一个键值对字典，在每个节点执行后更新。在客户端，我们使用以下机制管理状态：\n\n1.  **`useChatStore` (Zustand)**：存储聊天会话列表，每个会话包含 `id`, `name`, `agentId`, `agentName` 等信息。\n2.  **`useLangGraphAgent` Hook State**：钩子内部维护当前活动 Agent 的检查点 (`appCheckpoints`) 和执行状态 (`status`)。\n3.  **事件处理**：钩子处理来自 SSE 的不同类型的事件（checkpoint、message_chunk、interrupt、custom、error）以更新其内部状态。\n\n## 限制\n\n目前有一些尚未实现的功能：\n\n-   并行节点中的图中断（人机交互）\n-   从同一并行节点发送自定义事件。例如，同时检查多个城市的天气时，无法在客户端区分它们。\n-   Deep Research Agent 的前端渲染机制可能需要根据具体输出进行优化。\n\n## 安装和运行\n\n### 安装依赖\n\n#### API服务器\n\n```bash\n# Navigate to your API directory if needed\n# cd api/\nuv sync # 或者 pip install -r requirements.txt\n```\n\n#### Web客户端\n\n```bash\n# Navigate to your web client directory if needed (e.g., cd web/)\nnpm install # 或者 pnpm install 或 yarn install\n```\n\n### 环境变量\n\n1.  在项目根目录或 API 服务器目录创建 `.env` 文件（参考 `.env.example`）。\n2.  添加必要的API密钥，例如：\n    ```\n    OPENAI_API_KEY=your_openai_api_key\n    # LANGCHAIN_TRACING_V2=true (可选, 用于 LangSmith)\n    # LANGCHAIN_API_KEY=your_langsmith_api_key (可选, 用于 LangSmith)\n    ```\n\n### 运行项目\n\n#### 启动API服务器\n\n```bash\n# Navigate to your API directory if needed\n# cd api/\nuv run python -m api.server # 或者 python -m api.server\n```\n\nAPI 服务器通常运行在 `http://localhost:8001` (或您配置的端口)。\n\n#### 启动Web客户端\n\n```bash\n# Navigate to your web client directory if needed (e.g., cd web/)\npnpm run dev # 或者 npm run dev 或 yarn dev\n```\n\nWeb 应用程序将在 `http://localhost:3000` 启动。\n\n## 开发指南\n\n### 调整AI代理逻辑\n\n1.  修改 `api/agent/` 目录下相关的 Agent 图定义文件（例如，调整现有 **ReAct Agent** 或 **Deep Research Agent** 的逻辑）。\n2.  或者创建一个全新的 Agent 图文件。\n3.  确保 `api/server.py` 中的加载和路由逻辑能够识别并调用你的新 Agent 或修改后的 Agent。\n\n### 调整代理状态类型\n\n1.  
如果 Agent 的状态结构发生变化，相应地在 `web/app/[agentId]/[id]/page.tsx` (或相关的类型定义文件，如 `agent-types.ts`) 中修改 TypeScript 类型定义。\n\n### 在客户端应用中调用代理\n\n在相关的页面组件 (例如 `app/default/[id]/page.tsx` 或 `app/deep_research/[id]/page.tsx`) 中使用 `useLangGraphAgent` 钩子：\n\n```tsx\nimport { useLangGraphAgent } from '@/hooks/useLangGraphAgent/useLangGraphAgent';\n// Import specific state types for the agent being used\nimport { AgentState, InterruptValue, ResumeValue } from './agent-types'; // Adjust path as needed\n\nexport default function AgentPage({ params }: { params: { id: string } }) {\n  const thread_id = params.id; // Get thread_id from route\n\n  const { status, appCheckpoints, run, resume, replay, restore } =\n    useLangGraphAgent<AgentState, InterruptValue, ResumeValue>(thread_id); // Pass thread_id\n\n  // 使用钩子方法与代理交互\n  // 例如，在组件加载时恢复历史记录:\n  // React.useEffect(() => {\n  //   restore();\n  // }, [restore]);\n\n  // ... rest of your component logic ...\n}\n```\n\n## 路线图\n\n### 短期目标\n\n1.  **改进错误处理**\n    -   实现更详细的错误消息\n    -   添加重试机制\n2.  **增强UI组件**\n    -   为 Deep Research Agent 的输出提供更丰富的渲染组件\n    -   改进移动端响应式设计\n3.  **添加认证 (可选)**\n    -   实现基本的用户认证\n    -   添加会话管理\n\n### 中期目标\n\n1.  **持久化存储**\n    -   为 LangGraph 检查点集成 PostgreSQL 或 MongoDB\n    -   为用户聊天列表添加持久化\n2.  **并行节点改进**\n    -   实现并行节点中的人机交互\n    -   支持从并行节点发送自定义事件\n3.  **工具集成**\n    -   为 ReAct Agent 添加更多实用的工具\n    -   为 Deep Research Agent 集成更多数据源\n\n### 长期目标\n\n1.  **多代理支持**\n    -   实现多个协作代理的示例\n    -   添加代理间通信的可视化\n2.  **高级UI功能**\n    -   探索可视化图构建/调试工具集成\n3.  **企业功能**\n    -   添加团队协作功能\n    -   实现角色和权限管理\n    -   添加审计和日志记录\n\n## 贡献\n\n欢迎贡献！请随时提交问题或拉取请求。\n\n## 许可\n\n[MIT](LICENSE)"
  },
  {
    "path": "web/app/api/agent/route.ts",
    "content": "import { NextRequest, NextResponse } from 'next/server';\n\n// This API route serves as a proxy to the agent endpoint of the ai service. \n// It is necessary to send requests from the Next.js backend rather than the client. \n// This approach prevents exposing the AI service as a public endpoint and eliminates the need to implement authentication logic.\n// The mode elegant way is to use server actions, but it is not possible with streaming response.\n\nconst AGENT_URL = process.env.NEXT_PUBLIC_AGENT_URL;\n\nexport async function POST(request: NextRequest) {\n  const body = await request.json();\n\n  try {\n    const response = await fetch(`${AGENT_URL}/agent`, {\n      method: 'POST',\n      headers: {\n        'Content-Type': 'application/json',\n        'Accept': 'text/event-stream',\n      },\n      body: JSON.stringify(body),\n    });\n\n    if (!response.ok) {\n      const error = await response.json();\n      throw new Error(error.detail || 'Failed to call agent');\n    }\n\n    const stream = new TransformStream();\n    const writer = stream.writable.getWriter();\n\n    (async () => {\n      try {\n        const reader = response.body?.getReader();\n        if (!reader) throw new Error('No reader available');\n\n        while (true) {\n          const { done, value } = await reader.read();\n          if (done) {\n            await writer.close();\n            break;\n          }\n\n          // Just forward the raw chunks\n          await writer.write(value);\n        }\n      } catch (error) {\n        console.error('Stream processing error:', error);\n\n        // Write an error message to the stream before closing\n        const errorData = JSON.stringify({ error: \"Error in agent\" });\n        await writer.write(new TextEncoder().encode(`event: error\\ndata: ${errorData}\\n\\n`));\n        await writer.close();\n      }\n    })();\n\n    return new Response(stream.readable, {\n      headers: {\n        'Content-Type': 
'text/event-stream',\n        'Cache-Control': 'no-cache',\n        'Connection': 'keep-alive',\n      },\n    });\n\n  } catch (error) {\n    console.error('Error in agent route', error);\n    return NextResponse.json(\n      { error: 'Failed to process /agent request' },\n      { status: 500 }\n    );\n  }\n} "
  },
  {
    "path": "web/app/chat/[id]/agent-types.ts",
    "content": "import { WithMessages } from \"@/hooks/useLangGraphAgent/types\";\n\n// The agent state which mirrors the LangGraph state. If your sate have messages, extend WithMessages interface.\nexport interface AgentState extends WithMessages {\n  weather_forecast: WeatherForecast[];\n  research_status: ResearchStatus[];\n  search_results: SearchResult[];\n  report_content?: string;\n  node_type?: string;\n}\n\nexport interface WeatherForecast {\n  location: string;\n  search_status: string;\n  result: \"Sunny\" | \"Cloudy\" | \"Rainy\" | \"Snowy\";\n}\n\nexport interface ResearchStatus {\n  topic: string;\n  status: string;\n  progress?: number;\n}\n\nexport interface SearchResult {\n  title: string;\n  url: string;\n  snippet: string;\n}\n\n// All possible interrupt types from the graph. We are using string for Reminder node\nexport type InterruptValue = string | number | { \"question\": string };\n\n// All possible resume types to send to the graph. We are using string for Reminder node\nexport type ResumeValue = string | number;\n"
  },
  {
    "path": "web/app/chat/[id]/components/chatbot-node.tsx",
    "content": "import { AgentState } from '../agent-types';\nimport { Bot, User } from 'lucide-react';\nimport { cn } from '@/lib/utils';\nimport ReactMarkdown from 'react-markdown';\nimport remarkGfm from 'remark-gfm';\nimport { Badge } from '@/components/ui/badge';\nimport { Message } from '@/hooks/useLangGraphAgent/types';\nimport { useEffect, useRef } from 'react';\n\ninterface ChatbotNodeProps {\n  nodeState: Partial<AgentState>;\n  fallbackMessages?: Message[]; // Add fallback messages from hook\n}\n\nexport function ChatbotNode({ nodeState, fallbackMessages }: ChatbotNodeProps) {\n  // 使用ref保存最后一次有效的消息列表，防止消息丢失\n  const lastValidMessagesRef = useRef<Message[]>([]);\n  \n  // 如果nodeState.messages存在且不为空，使用它；否则使用fallbackMessages；如果都没有，使用上次有效的消息\n  const currentMessages = nodeState?.messages?.length ? nodeState.messages : \n                         (fallbackMessages?.length ? fallbackMessages : lastValidMessagesRef.current);\n  \n  // 更新最后一次有效的消息引用\n  useEffect(() => {\n    if (currentMessages?.length > 0) {\n      lastValidMessagesRef.current = [...currentMessages];\n      console.log(\"[ChatbotNode] 更新最后有效消息缓存:\", currentMessages.length);\n    }\n  }, [currentMessages]);\n\n  // Debug log for message rendering\n  console.log(\"[ChatbotNode] Rendering with:\", { \n    nodeStateMessages: nodeState?.messages?.length || 0, \n    fallbackMessages: fallbackMessages?.length || 0,\n    lastValidMessages: lastValidMessagesRef.current.length,\n    displaying: currentMessages.length\n  });\n\n  // 添加更详细的消息内容调试\n  if (currentMessages.length > 0) {\n    console.log(\"[ChatbotNode] Messages content:\", \n      currentMessages.map(msg => ({\n        id: msg.id,\n        type: msg.type,\n        contentLength: msg.content?.length || 0,\n        contentPreview: msg.content?.substring(0, 50) + (msg.content?.length > 50 ? '...' 
: ''),\n        hasToolCalls: msg.tool_calls?.length > 0\n      }))\n    );\n  }\n\n  const getMessageIcon = (type: string) => {\n    const baseClasses = \"bg-gray-100 dark:bg-gray-800 text-gray-600 dark:text-gray-300 border-gray-200 dark:border-gray-700\";\n\n    switch (type) {\n      case 'ai':\n        return {\n          icon: <Bot className=\"h-5 w-5\" />,\n          className: baseClasses\n        };\n      case 'user':\n      case 'human':\n        return {\n          icon: <User className=\"h-5 w-5\" />,\n          className: baseClasses\n        };\n      default:\n        return {\n          icon: <Bot className=\"h-5 w-5\" />,\n          className: baseClasses\n        };\n    }\n  };\n\n  if (!currentMessages?.length) {\n    console.log(\"[ChatbotNode] No messages to display, returning null\");\n    return null; // Don't render anything if no messages\n  }\n\n  return (\n    <div className=\"space-y-4 font-mono\">\n      {currentMessages.map((msg, index) => (\n        // When restoring data from checkpoint history, user input messages do not have an id.\n        // Use index as key to avoid React warnings.\n        <div key={msg.id ?? index} className=\"flex items-start gap-3\" style={{border: '1px solid #f0f0f0', padding: '8px', margin: '8px 0'}}>\n          <div className={cn(\n            \"flex-shrink-0 flex items-center justify-center w-10 h-10 rounded-full border\",\n            getMessageIcon(msg.type).className\n          )}>\n            {getMessageIcon(msg.type).icon}\n          </div>\n          <div className=\"flex-1 p-2 min-w-0\">\n            <div className=\"text-foreground text-sm break-words\">\n              {msg.content ? 
(\n                <ReactMarkdown\n                  remarkPlugins={[remarkGfm]}\n                  className=\"prose prose-sm max-w-none overflow-hidden\"\n                  components={{\n                    p: ({ children }) => <p className=\"mb-2 break-words\">{children}</p>,\n                    code: ({ children, className }) => {\n                      const isInline = !className?.includes('language-');\n                      return (\n                        <code className={cn(\n                          \"bg-gray-100 px-1 py-0.5 rounded break-all\",\n                          !isInline && \"block p-2 my-2 overflow-x-auto\"\n                        )}>\n                          {children}\n                        </code>\n                      );\n                    },\n                    pre: ({ children }) => <pre className=\"bg-gray-100 p-2 rounded my-2 overflow-x-auto max-w-full\">{children}</pre>,\n                    ul: ({ children }) => <ul className=\"list-disc pl-6 mb-2\">{children}</ul>,\n                    ol: ({ children }) => <ol className=\"list-decimal pl-6 mb-2\">{children}</ol>,\n                  }}\n                >\n                  {msg.content}\n                </ReactMarkdown>\n              ) : (\n                <span className=\"text-gray-400 italic\">(空消息)</span>\n              )}\n            </div>\n            {msg.tool_calls && msg.tool_calls.length > 0 && (\n              <div className=\"flex items-center gap-2\">\n                <span className=\"text-sm font-mono\">Tool calls:</span>\n                {msg.tool_calls?.map((toolCall) => (\n                  <div key={toolCall.id}>\n                    <Badge variant=\"outline\">{toolCall.name}</Badge>\n                  </div>\n                ))}\n              </div>\n            )}\n          </div>\n        </div>\n      ))}\n    </div>\n  )\n}"
  },
  {
    "path": "web/app/chat/[id]/components/checkpoint-card.tsx",
    "content": "import { Button } from '@/components/ui/button';\nimport { AppCheckpoint, ReplayAgentInput } from '@/hooks/useLangGraphAgent/types';\nimport { AgentState, InterruptValue } from '../agent-types';\nimport { Check, Redo, AlertCircle } from 'lucide-react';\nimport {\n  Popover,\n  PopoverContent,\n  PopoverTrigger,\n} from \"@/components/ui/popover\"\nimport { JsonView, defaultStyles } from 'react-json-view-lite';\nimport { cn } from '@/lib/utils';\n\ninterface CheckpointCardProps {\n  thread_id: string;\n  appCheckpoint: AppCheckpoint<AgentState, InterruptValue>;\n  replayHandler: (agentInput: ReplayAgentInput) => void;\n}\n\nexport function CheckpointCard({ thread_id, appCheckpoint: node, replayHandler }: CheckpointCardProps) {\n  return (\n    <div className={cn(\n      \"flex items-center gap-2 p-2 rounded-md font-mono text-sm\",\n      node.error ? \"bg-red-100/50\" : \"bg-muted\"\n    )}>\n      {node.error ? (\n        <AlertCircle className=\"h-4 w-4 text-red-500\" />\n      ) : (\n        <Check className=\"h-4 w-4 text-muted-foreground\" />\n      )}\n      <div className=\"flex-1 flex flex-col gap-1\">\n        <span className=\"text-muted-foreground text-xs\">checkpoint id: {node.checkpointConfig.configurable.checkpoint_id}</span>\n        <div className=\"flex items-center justify-between\">\n          <span className=\"text-xs\">next nodes: {node.nodes.map(n => n.name).join(', ')}</span>\n          <div className=\"flex items-center gap-2\">\n            <Popover>\n              <PopoverTrigger asChild>\n                <Button variant=\"link\" size=\"sm\" className=\"text-xs\">\n                  View state\n                </Button>\n              </PopoverTrigger>\n              <PopoverContent className=\"w-[500px]\">\n                <div className=\"max-h-[400px] overflow-auto p-2 rounded bg-muted/50\">\n                  <JsonView\n                    data={node.state}\n                    style={{\n                      
...defaultStyles,\n                      container: \"font-mono text-xs\",\n                    }}\n                  />\n                </div>\n              </PopoverContent>\n            </Popover>\n            <Popover>\n              <PopoverTrigger asChild>\n                <Button variant=\"link\" size=\"sm\" className=\"text-xs\">\n                  View state diff\n                </Button>\n              </PopoverTrigger>\n              <PopoverContent className=\"w-[500px]\">\n                <div className=\"max-h-[400px] overflow-auto p-2 rounded bg-muted/50\">\n                  <JsonView\n                    data={node.stateDiff}\n                    style={{\n                      ...defaultStyles,\n                      container: \"font-mono text-xs\",\n                    }}\n                  />\n                </div>\n              </PopoverContent>\n            </Popover>\n            <Button\n              variant=\"link\"\n              size=\"sm\"\n              className=\"text-xs\"\n              onClick={() => replayHandler({ thread_id, config: node.checkpointConfig })}\n            >\n              <Redo className=\"h-3 w-3 mr-1\" />\n              Replay\n            </Button>\n          </div>\n        </div>\n      </div>\n    </div>\n  )\n}"
  },
  {
    "path": "web/app/chat/[id]/components/node-card.tsx",
    "content": "import { GraphNode } from \"@/hooks/useLangGraphAgent/types\";\nimport { AgentState } from \"../agent-types\";\nimport { Button } from '@/components/ui/button';\nimport {\n  Popover,\n  PopoverContent,\n  PopoverTrigger,\n} from \"@/components/ui/popover\"\nimport { JsonView, defaultStyles } from 'react-json-view-lite';\n\nexport function NodeCard({ node }: { node: GraphNode<AgentState> }) {\n  return (\n    <div className=\"flex items-center gap-2 p-2 rounded-md font-mono text-sm bg-muted/50\">\n      <span className=\"text-xs\">node: {node.name}</span>\n      <div className=\"flex items-center gap-2 ml-auto\">\n        <Popover>\n          <PopoverTrigger asChild>\n            <Button variant=\"link\" size=\"sm\" className=\"text-xs\">\n              View state\n            </Button>\n          </PopoverTrigger>\n          <PopoverContent className=\"w-[500px]\">\n            <div className=\"max-h-[400px] overflow-auto p-2 rounded bg-muted/50\">\n              <JsonView\n                data={node.state}\n                style={{\n                  ...defaultStyles,\n                  container: \"font-mono text-xs\",\n                }}\n              />\n            </div>\n          </PopoverContent>\n        </Popover>\n      </div>\n    </div>\n  );\n}"
  },
  {
    "path": "web/app/chat/[id]/components/reminder.tsx",
    "content": "import { Card, CardHeader, CardFooter, CardTitle } from \"@/components/ui/card\";\nimport { Button } from \"@/components/ui/button\";\nimport { useState } from \"react\";\nimport { Loader2 } from \"lucide-react\";\n\ninterface ReminderProps {\n  interruptValue: string;\n  onResume: (resumeValue: string) => void;\n}\n\nexport default function Reminder({ interruptValue, onResume }: ReminderProps) {\n  const [isLoading, setIsLoading] = useState(false);\n\n  // Do not show the confirmation after user action\n  if (!interruptValue) {\n    return null;\n  }\n\n  const handleAction = (action: \"approve\" | \"cancel\") => {\n    setIsLoading(true);\n    onResume(action);\n  };\n\n  return (\n    <div className=\"flex justify-end\">\n      <Card className=\"w-full max-w-sm\">\n        <CardHeader className=\"space-y-1 p-4\">\n          <CardTitle className=\"text-xl\">{interruptValue}</CardTitle>\n          <p className=\"text-sm text-muted-foreground\">Are u sure you want to create a reminder?</p>\n        </CardHeader>\n        <CardFooter className=\"flex items-center gap-2 p-4 pt-0\">\n          {isLoading && <Loader2 className=\"h-4 w-4 animate-spin\" />}\n          <div className=\"flex gap-2 ml-auto\">\n            <Button\n              variant=\"outline\"\n              onClick={() => handleAction(\"cancel\")}\n              disabled={isLoading}\n            >\n              Cancel\n            </Button>\n            <Button\n              onClick={() => handleAction(\"approve\")}\n              disabled={isLoading}\n            >\n              Approve\n            </Button>\n          </div>\n        </CardFooter>\n      </Card>\n    </div>\n  );\n}"
  },
  {
    "path": "web/app/chat/[id]/components/research/report-preview.tsx",
    "content": "import { AgentState } from \"../../agent-types\";\nimport { Card, CardContent, CardHeader, CardTitle } from \"@/components/ui/card\";\nimport { FileText } from \"lucide-react\";\n\ninterface ReportPreviewProps {\n  nodeState: Partial<AgentState>;\n}\n\nexport default function ReportPreview({ nodeState }: ReportPreviewProps) {\n  if (!nodeState?.report_content) {\n    return null;\n  }\n\n  return (\n    <Card className=\"overflow-hidden\">\n      <CardHeader className=\"p-3 pb-0\">\n        <CardTitle className=\"text-sm flex items-center gap-2\">\n          <FileText className=\"h-4 w-4\" />\n          Research Report\n        </CardTitle>\n      </CardHeader>\n      <CardContent className=\"p-3 pt-1\">\n        <div className=\"prose prose-sm max-w-none\">\n          <div dangerouslySetInnerHTML={{ __html: nodeState.report_content }} />\n        </div>\n      </CardContent>\n    </Card>\n  );\n}"
  },
  {
    "path": "web/app/chat/[id]/components/research/research-node.tsx",
    "content": "import { AgentState } from \"../../agent-types\";\nimport ResearchStatus from \"./research-status\";\nimport SearchResults from \"./search-results\";\nimport ReportPreview from \"./report-preview\";\n\ninterface ResearchNodeProps {\n  nodeState: Partial<AgentState>;\n}\n\nexport default function ResearchNode({ nodeState }: ResearchNodeProps) {\n  // 根据节点名称渲染不同的组件\n  if (nodeState?.node_type === \"research_status\") {\n    return <ResearchStatus nodeState={nodeState} />;\n  }\n  \n  if (nodeState?.node_type === \"search_results\") {\n    return <SearchResults nodeState={nodeState} />;\n  }\n  \n  if (nodeState?.node_type === \"report_preview\") {\n    return <ReportPreview nodeState={nodeState} />;\n  }\n  \n  return null;\n}"
  },
  {
    "path": "web/app/chat/[id]/components/research/research-status.tsx",
    "content": "import { AgentState } from \"../../agent-types\";\nimport { Loader2 } from \"lucide-react\";\nimport { Card, CardContent } from \"@/components/ui/card\";\nimport { Progress } from \"@/components/ui/progress\";\n\ninterface ResearchStatusProps {\n  nodeState: Partial<AgentState>;\n}\n\nexport default function ResearchStatus({ nodeState }: ResearchStatusProps) {\n  if (!nodeState?.research_status?.[0]) {\n    return null;\n  }\n\n  const { topic, status, progress } = nodeState.research_status[0];\n\n  return (\n    <div className=\"flex justify-end\">\n      <Card className=\"inline-block\">\n        <CardContent className=\"p-2\">\n          <div className=\"space-y-2\">\n            <div className=\"flex items-center gap-2\">\n              <Loader2 className=\"w-4 h-4 animate-spin\" />\n              <div className=\"text-sm font-medium\">{topic}</div>\n            </div>\n            <div className=\"text-xs text-muted-foreground\">{status}</div>\n            {progress !== undefined && (\n              <Progress value={progress} className=\"h-1\" />\n            )}\n          </div>\n        </CardContent>\n      </Card>\n    </div>\n  );\n}"
  },
  {
    "path": "web/app/chat/[id]/components/research/search-results.tsx",
    "content": "import { AgentState } from \"../../agent-types\";\nimport { Card, CardContent, CardHeader, CardTitle } from \"@/components/ui/card\";\nimport { ExternalLink } from \"lucide-react\";\n\ninterface SearchResultsProps {\n  nodeState: Partial<AgentState>;\n}\n\nexport default function SearchResults({ nodeState }: SearchResultsProps) {\n  if (!nodeState?.search_results?.length) {\n    return null;\n  }\n\n  return (\n    <div className=\"space-y-3\">\n      {nodeState.search_results.map((result, index) => (\n        <Card key={index} className=\"overflow-hidden\">\n          <CardHeader className=\"p-3 pb-0\">\n            <CardTitle className=\"text-sm flex items-center gap-2\">\n              <a \n                href={result.url} \n                target=\"_blank\" \n                rel=\"noopener noreferrer\"\n                className=\"text-blue-600 hover:underline flex items-center gap-1\"\n              >\n                {result.title}\n                <ExternalLink className=\"h-3 w-3\" />\n              </a>\n            </CardTitle>\n          </CardHeader>\n          <CardContent className=\"p-3 pt-1\">\n            <p className=\"text-xs text-muted-foreground\">{result.snippet}</p>\n          </CardContent>\n        </Card>\n      ))}\n    </div>\n  );\n}"
  },
  {
    "path": "web/app/chat/[id]/components/weather/cloudy.tsx",
    "content": "\"use client\"\n\nimport { Cloud, Droplets, Wind } from \"lucide-react\"\nimport { Card } from \"@/components/ui/card\"\n\nexport default function Cloudy() {\n  return (\n    <Card className=\"relative overflow-hidden group w-72 h-40 cursor-pointer transition-all hover:shadow-lg\">\n      {/* Gradient Background */}\n      <div className=\"absolute inset-0 bg-gradient-to-br from-gray-50 via-gray-100 to-gray-200 dark:from-gray-900/40 dark:via-gray-800/30 dark:to-gray-700/20 opacity-50\" />\n\n      {/* Content Container */}\n      <div className=\"relative h-full p-6 flex flex-col justify-between\">\n        {/* Top Section */}\n        <div className=\"flex items-center space-x-4\">\n          <div className=\"p-2 bg-gray-200 dark:bg-gray-800/60 rounded-full group-hover:bg-gray-300 dark:group-hover:bg-gray-700/80 transition-colors\">\n            <Cloud className=\"w-6 h-6 text-gray-600 dark:text-gray-200\" />\n          </div>\n          <div>\n            <h3 className=\"text-xl font-semibold text-gray-800 dark:text-gray-100\">Cloudy</h3>\n            <p className=\"text-sm text-gray-600 dark:text-gray-300\">Today&apos;s Forecast</p>\n          </div>\n        </div>\n\n        {/* Bottom Section */}\n        <div className=\"flex justify-between items-center mt-4\">\n          <div className=\"flex items-center space-x-2 text-gray-600 dark:text-gray-300\">\n            <Droplets className=\"w-4 h-4\" />\n            <span className=\"text-sm\">60%</span>\n          </div>\n          <div className=\"flex items-center space-x-2 text-gray-600 dark:text-gray-300\">\n            <Wind className=\"w-4 h-4\" />\n            <span className=\"text-sm\">15 km/h</span>\n          </div>\n          <div className=\"text-2xl font-bold text-gray-800 dark:text-gray-100\">22°C</div>\n        </div>\n      </div>\n    </Card>\n  )\n}\n\n"
  },
  {
    "path": "web/app/chat/[id]/components/weather/rainy.tsx",
    "content": "\"use client\"\n\nimport { Cloud, Droplets, Wind } from \"lucide-react\"\nimport { Card } from \"@/components/ui/card\"\nimport { motion } from \"framer-motion\"\n\nexport default function Rainy() {\n  return (\n    <Card className=\"relative overflow-hidden group w-72 h-40 cursor-pointer transition-all hover:shadow-lg\">\n      {/* Gradient Background */}\n      <div className=\"absolute inset-0 bg-gradient-to-br from-blue-50 via-blue-100 to-blue-200 dark:from-blue-950/40 dark:via-blue-900/30 dark:to-blue-800/20 opacity-50\" />\n\n      {/* Content Container */}\n      <div className=\"relative h-full p-6 flex flex-col justify-between\">\n        {/* Top Section */}\n        <div className=\"flex items-center space-x-4\">\n          <div className=\"p-2 bg-blue-100 dark:bg-blue-900/30 rounded-full group-hover:bg-blue-200 dark:group-hover:bg-blue-800/40 transition-colors\">\n            <Cloud className=\"w-6 h-6 text-blue-600 dark:text-blue-300\" />\n          </div>\n          <div>\n            <h3 className=\"text-xl font-semibold text-gray-800 dark:text-gray-100\">Rainy</h3>\n            <p className=\"text-sm text-gray-600 dark:text-gray-300\">Today&apos;s Forecast</p>\n          </div>\n        </div>\n\n        {/* Bottom Section */}\n        <div className=\"flex justify-between items-center mt-4\">\n          <div className=\"flex items-center space-x-2 text-gray-600 dark:text-gray-300\">\n            <Droplets className=\"w-4 h-4\" />\n            <span className=\"text-sm\">75%</span>\n          </div>\n          <div className=\"flex items-center space-x-2 text-gray-600 dark:text-gray-300\">\n            <Wind className=\"w-4 h-4\" />\n            <span className=\"text-sm\">12 km/h</span>\n          </div>\n          <div className=\"text-2xl font-bold text-gray-800 dark:text-gray-100\">18°C</div>\n        </div>\n\n        {/* Animated Rain Effect */}\n        <div className=\"absolute inset-0 overflow-hidden pointer-events-none\">\n  
        {[...Array(10)].map((_, i) => (\n            <motion.div\n              key={i}\n              style={{\n                left: `${i * 10}%`\n              }}\n              className=\"absolute w-[2px] h-[10px] bg-blue-400/50 dark:bg-blue-200/60 rounded-full\"\n              animate={{\n                y: [\"-10%\", \"110%\"],\n              }}\n              transition={{\n                duration: 1,\n                repeat: Number.POSITIVE_INFINITY,\n                delay: i * 0.1,\n                ease: \"linear\",\n              }}\n            />\n          ))}\n          {[...Array(10)].map((_, i) => (\n            <motion.div\n              key={`second-${i}`}\n              style={{\n                left: `${5 + (i * 10)}%`\n              }}\n              className=\"absolute w-[2px] h-[10px] bg-blue-400/50 dark:bg-blue-200/60 rounded-full\"\n              animate={{\n                y: [\"-10%\", \"110%\"],\n              }}\n              transition={{\n                duration: 1,\n                repeat: Number.POSITIVE_INFINITY,\n                delay: 0.5 + (i * 0.1),\n                ease: \"linear\",\n              }}\n            />\n          ))}\n        </div>\n      </div>\n    </Card>\n  )\n}\n\n"
  },
  {
    "path": "web/app/chat/[id]/components/weather/snowy.tsx",
    "content": "\"use client\"\n\nimport { Snowflake, Thermometer, Wind } from \"lucide-react\"\nimport { Card } from \"@/components/ui/card\"\nimport { motion } from \"framer-motion\"\n\nexport default function Snowy() {\n  return (\n    <Card className=\"relative overflow-hidden group w-72 h-40 cursor-pointer transition-all hover:shadow-lg\">\n      {/* Gradient Background */}\n      <div className=\"absolute inset-0 bg-gradient-to-br from-blue-50 via-indigo-50 to-purple-50 dark:from-blue-950/40 dark:via-indigo-900/30 dark:to-purple-900/20 opacity-50\" />\n\n      {/* Content Container */}\n      <div className=\"relative h-full p-6 flex flex-col justify-between\">\n        {/* Top Section */}\n        <div className=\"flex items-center space-x-4\">\n          <div className=\"p-2 bg-blue-100 dark:bg-blue-900/30 rounded-full group-hover:bg-blue-200 dark:group-hover:bg-blue-800/40 transition-colors\">\n            <Snowflake className=\"w-6 h-6 text-blue-500 dark:text-blue-300\" />\n          </div>\n          <div>\n            <h3 className=\"text-xl font-semibold text-gray-800 dark:text-gray-100\">Snowy</h3>\n            <p className=\"text-sm text-gray-600 dark:text-gray-300\">Today&apos;s Forecast</p>\n          </div>\n        </div>\n\n        {/* Bottom Section */}\n        <div className=\"flex justify-between items-center mt-4\">\n          <div className=\"flex items-center space-x-2 text-gray-600 dark:text-gray-300\">\n            <Thermometer className=\"w-4 h-4\" />\n            <span className=\"text-sm\">-2°C</span>\n          </div>\n          <div className=\"flex items-center space-x-2 text-gray-600 dark:text-gray-300\">\n            <Wind className=\"w-4 h-4\" />\n            <span className=\"text-sm\">10 km/h</span>\n          </div>\n          <div className=\"text-2xl font-bold text-gray-800 dark:text-gray-100\">5 cm</div>\n        </div>\n\n        {/* Animated Snowfall Effect */}\n        <div className=\"absolute inset-0 overflow-hidden 
pointer-events-none\">\n          {[...Array(15)].map((_, i) => (\n            <motion.div\n              key={i}\n              style={{\n                left: `${i * 7}%`\n              }}\n              className=\"absolute w-3 h-3 text-blue-300 dark:text-blue-200/90 opacity-90\"\n              animate={{\n                y: [\"-10%\", \"110%\"],\n                x: [`${Math.sin(i) * 10}px`, `${Math.sin(i + 1) * -10}px`],\n                rotate: [0, 180]\n              }}\n              transition={{\n                duration: 3,\n                repeat: Number.POSITIVE_INFINITY,\n                delay: i * 0.2,\n                ease: \"linear\",\n                rotate: {\n                  duration: 3,\n                  ease: \"linear\",\n                  repeat: Number.POSITIVE_INFINITY\n                }\n              }}\n            >\n              <svg viewBox=\"0 0 24 24\" fill=\"currentColor\">\n                <path d=\"M12,0 L12,24 M6,6 L18,18 M18,6 L6,18 M0,12 L24,12\" strokeWidth=\"2\" stroke=\"currentColor\" strokeLinecap=\"round\" />\n                <circle cx=\"12\" cy=\"12\" r=\"2\" fill=\"currentColor\" />\n              </svg>\n            </motion.div>\n          ))}\n          {[...Array(15)].map((_, i) => (\n            <motion.div\n              key={`second-${i}`}\n              style={{\n                left: `${3.5 + (i * 7)}%`\n              }}\n              className=\"absolute w-3 h-3 text-blue-300 dark:text-blue-200/90 opacity-90\"\n              animate={{\n                y: [\"-10%\", \"110%\"],\n                x: [`${Math.sin(i) * 10}px`, `${Math.sin(i + 1) * -10}px`],\n                rotate: [0, 180]\n              }}\n              transition={{\n                duration: 3,\n                repeat: Number.POSITIVE_INFINITY,\n                delay: 1.5 + (i * 0.2),\n                ease: \"linear\",\n                rotate: {\n                  duration: 3,\n                  ease: \"linear\",\n                  
repeat: Number.POSITIVE_INFINITY\n                }\n              }}\n            >\n              <svg viewBox=\"0 0 24 24\" fill=\"currentColor\">\n                <path d=\"M12,0 L12,24 M6,6 L18,18 M18,6 L6,18 M0,12 L24,12\" strokeWidth=\"2\" stroke=\"currentColor\" strokeLinecap=\"round\" />\n                <circle cx=\"12\" cy=\"12\" r=\"2\" fill=\"currentColor\" />\n              </svg>\n            </motion.div>\n          ))}\n        </div>\n      </div>\n    </Card>\n  )\n}\n\n"
  },
  {
    "path": "web/app/chat/[id]/components/weather/sunny.tsx",
    "content": "\"use client\"\n\nimport { Sun, Thermometer, Wind } from \"lucide-react\"\nimport { Card } from \"@/components/ui/card\"\nimport { motion } from \"framer-motion\"\n\nexport default function Sunny() {\n  return (\n    <Card className=\"relative overflow-hidden group w-72 h-40 cursor-pointer transition-all hover:shadow-lg\">\n      {/* Gradient Background */}\n      <div className=\"absolute inset-0 bg-gradient-to-br from-orange-50 via-yellow-50 to-orange-100 dark:from-amber-900/20 dark:via-yellow-900/20 dark:to-orange-800/30 opacity-50\" />\n\n      {/* Content Container */}\n      <div className=\"relative h-full p-6 flex flex-col justify-between\">\n        {/* Top Section */}\n        <div className=\"flex items-center space-x-4\">\n          <div className=\"relative p-2 bg-amber-100 dark:bg-amber-900/40 rounded-full group-hover:bg-amber-200 dark:group-hover:bg-amber-800/60 transition-colors\">\n            {/* Animated Sun Rays Effect */}\n            {[...Array(6)].map((_, i) => (\n              <motion.div\n                key={i}\n                className=\"absolute w-16 h-0.5 bg-amber-200 dark:bg-yellow-500/30\"\n                style={{\n                  left: \"50%\",\n                  top: \"50%\",\n                  rotate: i * 60,\n                  transformOrigin: \"0 50%\",\n                  transform: \"translateY(-50%)\",\n                }}\n                animate={{\n                  opacity: [0.2, 0.5, 0.2],\n                  scale: [1, 1.2, 1],\n                }}\n                transition={{\n                  duration: 2,\n                  repeat: Number.POSITIVE_INFINITY,\n                  delay: i * 0.2,\n                  ease: \"easeInOut\",\n                }}\n              />\n            ))}\n            <Sun className=\"w-6 h-6 text-amber-500 dark:text-yellow-400 relative z-10\" />\n          </div>\n          <div>\n            <h3 className=\"text-xl font-semibold text-gray-800 
dark:text-gray-100\">Sunny</h3>\n            <p className=\"text-sm text-gray-600 dark:text-gray-400\">Today&apos;s Forecast</p>\n          </div>\n        </div>\n\n        {/* Bottom Section */}\n        <div className=\"flex justify-between items-center mt-4\">\n          <div className=\"flex items-center space-x-2 text-gray-600 dark:text-gray-400\">\n            <Thermometer className=\"w-4 h-4\" />\n            <span className=\"text-sm\">UV 8</span>\n          </div>\n          <div className=\"flex items-center space-x-2 text-gray-600 dark:text-gray-400\">\n            <Wind className=\"w-4 h-4\" />\n            <span className=\"text-sm\">8 km/h</span>\n          </div>\n          <div className=\"text-2xl font-bold text-gray-800 dark:text-gray-100\">28°C</div>\n        </div>\n      </div>\n    </Card>\n  )\n}\n\n"
  },
  {
    "path": "web/app/chat/[id]/components/weather/weather-node.tsx",
    "content": "import { AgentState } from \"../../agent-types\";\nimport { Loader2 } from \"lucide-react\";\nimport { Card, CardContent } from \"@/components/ui/card\";\nimport Rainy from \"./rainy\";\nimport Sunny from \"./sunny\";\nimport Cloudy from \"./cloudy\";\nimport Snowy from \"./snowy\";\n\ninterface WeatherNodeProps {\n  nodeState: Partial<AgentState>;\n}\n\nexport default function WeatherNode({ nodeState }: WeatherNodeProps) {\n  if (nodeState?.weather_forecast?.[0]?.search_status) {\n    return (\n      <div className=\"flex justify-end\">\n        <Card className=\"inline-block\">\n          <CardContent className=\"p-2\">\n            <div className=\"flex items-center gap-2\">\n              <Loader2 className=\"w-6 h-6 animate-spin\" />\n              <div className=\"text-sm\">{nodeState?.weather_forecast?.[0]?.search_status}</div>\n            </div>\n          </CardContent>\n        </Card>\n      </div>\n    );\n  }\n\n  if (!nodeState?.weather_forecast?.[0]?.result) {\n    return null;\n  }\n\n  const WeatherComponents = {\n    Sunny,\n    Cloudy,\n    Rainy,\n    Snowy,\n  } as const;\n\n  const WeatherComponent = WeatherComponents[nodeState?.weather_forecast?.[0].result];\n\n  return (\n    <div className=\"flex justify-end\">\n      <WeatherComponent />\n    </div>\n  );\n}"
  },
  {
    "path": "web/app/chat/[id]/page.tsx",
    "content": "'use client';\n\nimport { useState, useEffect, useRef } from 'react';\nimport { useParams } from 'next/navigation';\nimport { Button } from '@/components/ui/button';\nimport { Textarea } from \"@/components/ui/textarea\";\nimport { ArrowUp, Square, ArrowDown, Ellipsis, AlertTriangle } from \"lucide-react\";\nimport { useLangGraphAgent } from '@/hooks/useLangGraphAgent/useLangGraphAgent';\nimport { AppCheckpoint, GraphNode } from '@/hooks/useLangGraphAgent/types';\nimport { AgentState, InterruptValue, ResumeValue } from './agent-types';\nimport { CheckpointCard } from './components/checkpoint-card';\nimport { ChatbotNode } from './components/chatbot-node';\nimport { Checkbox } from \"@/components/ui/checkbox\";\nimport WeatherNode from './components/weather/weather-node';\nimport Reminder from './components/reminder';\nimport { NodeCard } from './components/node-card';\nimport ResearchNode from './components/research/research-node';\n\nexport default function ChatPage() {\n  const params = useParams<{ id: string }>();\n  const messagesContainerRef = useRef<HTMLDivElement>(null);\n  const inputRef = useRef<HTMLTextAreaElement>(null);\n\n  const [threadId] = useState(params.id);\n  const [inputValue, setInputValue] = useState('');\n  const [showScrollButton, setShowScrollButton] = useState(false);\n  const [shouldAutoScroll, setShouldAutoScroll] = useState(true);\n  const [showNodesinfo, setShowNodesinfo] = useState(false);\n  const [restoreError, setRestoreError] = useState(false);\n\n  const exampleMessages = [\n    \"What's the weather in SF today?\",\n    \"Set a reminder to call John\",\n    \"Tell me a joke\",\n    \"What can you do?\"\n  ];\n\n  const onCheckpointStart = (checkpoint: AppCheckpoint<AgentState, InterruptValue>) => {\n    console.log('Checkpoint started:', checkpoint.nodes);\n  }\n\n  const onCheckpointEnd = (checkpoint: AppCheckpoint<AgentState, InterruptValue>) => {\n    console.log('Checkpoint ended:', checkpoint.nodes);\n\n 
   // Example of how to hook application logic into the agent flow, e.g. maintaining a reminders list.\n    if (checkpoint.nodes.some(n => n.name === 'reminder')) {\n      console.log('Reminder created');\n    }\n  }\n\n  const onCheckpointStateUpdate = (checkpoint: AppCheckpoint<AgentState, InterruptValue>) => {\n    console.log('Checkpoint intermediate state updated:', checkpoint.nodes, checkpoint.state);\n  }\n\n  const { status, appCheckpoints, run, resume, replay, restore, stop, restoring } = useLangGraphAgent<AgentState, InterruptValue, ResumeValue>({ onCheckpointStart, onCheckpointEnd, onCheckpointStateUpdate });\n\n  // Restore chat on page open\n  useEffect(() => {\n    if (threadId) {\n      restore(threadId).catch(() => {\n        setRestoreError(true);\n      });\n    }\n  }, [threadId]);\n\n  // Focus input on page load and after message is sent\n  useEffect(() => {\n    const isInputEnabled = status !== 'running' && !restoring;\n    if (inputRef.current && isInputEnabled) {\n      inputRef.current.focus();\n    }\n  }, [status, restoring]);\n\n  // Add scroll event listener\n  useEffect(() => {\n    const messagesContainer = messagesContainerRef.current;\n    if (messagesContainer) {\n      messagesContainer.addEventListener('scroll', handleScrollUpdate);\n      return () => messagesContainer.removeEventListener('scroll', handleScrollUpdate);\n    }\n  }, []);\n\n  // Auto-scroll when new nodes appear\n  useEffect(() => {\n    if (shouldAutoScroll) {\n      scrollToBottom();\n    }\n  }, [appCheckpoints, shouldAutoScroll]);\n\n  const handleScrollUpdate = () => {\n    if (messagesContainerRef.current) {\n      const { scrollTop, scrollHeight, clientHeight } = messagesContainerRef.current;\n      const isAtBottom = scrollHeight - scrollTop - clientHeight < 100; // 100px threshold\n      setShowScrollButton(!isAtBottom);\n\n      if (isAtBottom) {\n        setShouldAutoScroll(true);\n      } else {\n        setShouldAutoScroll(false);\n      }\n    }\n  };\n\n  
const scrollToBottom = () => {\n    if (messagesContainerRef.current) {\n      messagesContainerRef.current.scrollTo({\n        top: messagesContainerRef.current.scrollHeight,\n        behavior: 'smooth'\n      });\n    }\n  };\n\n  const handleExampleClick = (message: string) => {\n    if (status !== 'running' && !restoring) {\n      setRestoreError(false);\n      run({ thread_id: threadId, state: { \"messages\": [{ type: 'user', content: message }] } });\n    }\n  };\n\n  const handleResume = (resumeValue: ResumeValue) => {\n    resume({ thread_id: threadId, resume: resumeValue });\n  }\n\n  const renderCheckpointError = (checkpoint: AppCheckpoint<AgentState, InterruptValue>): React.ReactNode => {\n    return (\n      <div className=\"text-sm text-red-500 font-medium p-2 bg-red-50 rounded-md flex items-center gap-2\">\n        <AlertTriangle className=\"h-4 w-4\" />\n        Error in {checkpoint.checkpointConfig.configurable.checkpoint_id}\n      </div>\n    );\n  }\n\n  const renderNode = (checkpoint: AppCheckpoint<AgentState, InterruptValue>, node: GraphNode<AgentState>): React.ReactNode => {\n    switch (node.name) {\n      case '__start__':\n      case 'agent':\n        return <ChatbotNode nodeState={node.state} />;\n      case 'weather':\n        return <WeatherNode nodeState={node.state} />;\n      case 'reminder':\n        return <Reminder interruptValue={checkpoint.interruptValue as string} onResume={handleResume} />;\n      case 'research':\n      case 'search':\n      case 'report':\n        return <ResearchNode nodeState={node.state} />;\n      default:\n        return null;\n    }\n  }\n\n  return (\n    <div className=\"flex flex-col h-screen\">\n      <div className=\"flex justify-end flex-shrink-0 p-2\">\n        <div className=\"flex items-center space-x-2\">\n          <Checkbox\n            id=\"show-nodesinfo\"\n            checked={showNodesinfo}\n            onCheckedChange={(checked) => setShowNodesinfo(checked === true)}\n          />\n     
     <label\n            htmlFor=\"show-nodesinfo\"\n            className=\"text-sm font-medium leading-none peer-disabled:cursor-not-allowed peer-disabled:opacity-70\"\n          >\n            Show graph info\n          </label>\n        </div>\n      </div>\n\n      <div\n        ref={messagesContainerRef}\n        className=\"flex-1 overflow-y-auto px-4 relative\"\n      >\n        <div className=\"space-y-2 max-w-2xl mx-auto w-full\">\n          {appCheckpoints.map((checkpoint) => (\n            <div key={checkpoint.checkpointConfig.configurable.checkpoint_id} className=\"space-y-2\">\n              {showNodesinfo && (\n                <CheckpointCard\n                  thread_id={threadId}\n                  appCheckpoint={checkpoint}\n                  replayHandler={replay}\n                />\n              )}\n              {checkpoint.error ? renderCheckpointError(checkpoint) : checkpoint.nodes.map((node, nodeIndex) => (\n                <div key={nodeIndex} className=\"space-y-2\">\n                  {showNodesinfo && <NodeCard node={node} />}\n                  {renderNode(checkpoint, node)}\n                </div>\n              ))}\n            </div>\n          ))}\n          {(status === 'running' || restoring) && (\n            <div className=\"flex items-center justify-center p-4\">\n              <Ellipsis className=\"w-6 h-6 text-muted-foreground animate-pulse\" />\n            </div>\n          )}\n          {(status === 'error') && (\n            <div className=\"text-sm text-red-500 font-medium font-mono p-2 bg-red-50 rounded-md flex items-center gap-2\">\n              <AlertTriangle className=\"h-4 w-4\" />\n              Error running agent.\n            </div>\n          )}\n          {restoreError && (\n            <div className=\"text-sm text-red-500 font-medium font-mono p-2 bg-red-50 rounded-md flex items-center gap-2\">\n              <AlertTriangle className=\"h-4 w-4\" />\n              Error restoring agent. 
Check if agent server is running.\n            </div>\n          )}\n        </div>\n\n        {showScrollButton && (\n          <Button\n            className=\"fixed bottom-28 right-8 rounded-full shadow-md\"\n            size=\"icon\"\n            variant=\"outline\"\n            onClick={scrollToBottom}\n          >\n            <ArrowDown />\n          </Button>\n        )}\n      </div>\n\n      <div className=\"flex-shrink-0 p-2 pb-4\">\n        <div className=\"max-w-2xl mx-auto\">\n          <div className=\"mb-2 grid grid-cols-2 gap-2\">\n            {exampleMessages.map((message, index) => (\n              <Button\n                key={index}\n                variant=\"outline\"\n                size=\"sm\"\n                onClick={() => handleExampleClick(message)}\n                disabled={status === 'running' || restoring}\n                className=\"text-xs font-mono w-full\"\n              >\n                {message}\n              </Button>\n            ))}\n          </div>\n          <div className=\"relative\">\n            <Textarea\n              ref={inputRef}\n              className=\"pr-24 resize-none font-mono\"\n              placeholder=\"Enter your message...\"\n              value={inputValue}\n              disabled={status === 'running' || restoring}\n              onChange={(e) => setInputValue(e.target.value)}\n              onKeyDown={(e) => {\n                if (e.key === 'Enter' && !e.shiftKey) {\n                  e.preventDefault();\n                  if (inputValue.trim() && status !== 'running' && !restoring) {\n                    setRestoreError(false);\n                    run({ thread_id: threadId, state: { \"messages\": [{ type: 'user', content: inputValue }] } });\n                    setInputValue('');\n                  }\n                }\n              }}\n            />\n            {status === 'running' ? 
(\n              <Button\n                className=\"absolute right-3 top-[50%] translate-y-[-50%]\"\n                size=\"icon\"\n                variant=\"destructive\"\n                onClick={() => stop(threadId)}\n              >\n                <Square className=\"h-4 w-4\" />\n              </Button>\n            ) : (\n              <Button\n                className=\"absolute right-3 top-[50%] translate-y-[-50%]\"\n                size=\"icon\"\n                variant=\"outline\"\n                disabled={!inputValue.trim() || restoring}\n                onClick={() => {\n                  if (inputValue.trim() && !restoring) {\n                    run({ thread_id: threadId, state: { \"messages\": [{ type: 'user', content: inputValue }] } });\n                    setInputValue('');\n                  }\n                }}\n              >\n                <ArrowUp className=\"h-4 w-4\" />\n              </Button>\n            )}\n          </div>\n        </div>\n      </div>\n    </div>\n  );\n}"
  },
  {
    "path": "web/app/chat/page.tsx",
    "content": "export default function ChatsPage() {\n  return (\n    <div>\n    </div>\n  )\n}\n\n"
  },
  {
    "path": "web/app/deep-research/[id]/page.tsx",
    "content": "'use client'; \n\nimport { useState, useEffect, useRef, useCallback, useMemo } from 'react';\nimport { useParams } from 'next/navigation';\nimport { v4 as uuidv4 } from 'uuid';\nimport { Button } from '@/components/ui/button'; \nimport { Textarea } from \"@/components/ui/textarea\"; \nimport { ArrowUp, Square, Loader, AlertTriangle, Check } from \"lucide-react\"; \nimport { Card, CardContent, CardHeader, CardTitle } from \"@/components/ui/card\"; \nimport { Progress } from \"@/components/ui/progress\"; \n\n// Import hook and types\nimport { useLangGraphAgent } from '@/hooks/useLangGraphAgent/useLangGraphAgent';\nimport { \n  StreamUpdateData, \n  Message, \n  ToolCall, \n  WithMessages, \n  AppCheckpoint \n} from '@/hooks/useLangGraphAgent/types';\n\n// Markdown renderer\nimport ReactMarkdown from 'react-markdown';\nimport remarkGfm from 'remark-gfm';\n\n// Deep Research State interface\ninterface DeepResearchState extends WithMessages { \n  topic?: string; \n  depth?: string;\n  final_report_markdown?: string | null; \n}\n\n// Progress display component\nfunction DeepResearchProgressDisplay({ updates }: { updates: Record<string, StreamUpdateData> }) {\n  if (Object.keys(updates).length === 0) return null;\n\n  return (\n    <div className=\"space-y-3\">\n      {Object.entries(updates).map(([id, data]) => {\n        // Calculate progress percentage - use completedSteps and totalSteps (if available) or default progress value\n        const progressValue = data.completedSteps && data.totalSteps\n          ? (data.completedSteps / data.totalSteps) * 100\n          : data.progress ? data.progress * 100 : 0;\n        \n        return (\n          <div key={id} className=\"border rounded-md p-3 bg-muted/20\">\n            <div className=\"flex items-center justify-between mb-1\">\n              <div className=\"flex items-center space-x-2\">\n                {data.status === 'completed' ? 
(\n                  <Check className=\"h-4 w-4 text-green-500\" />\n                ) : (\n                  <Loader className=\"h-4 w-4 animate-spin text-blue-500\" />\n                )}\n                <span className=\"text-sm font-medium\">{data.title || 'Progress'}</span>\n              </div>\n              <span className=\"text-xs text-muted-foreground\">\n                {data.completedSteps && data.totalSteps \n                  ? `${data.completedSteps}/${data.totalSteps}` \n                  : `${Math.round(progressValue)}%`}\n              </span>\n            </div>\n            <Progress value={progressValue} className=\"h-1\" />\n            {data.message && (\n              <p className=\"text-xs text-muted-foreground mt-1\">{data.message}</p>\n            )}\n            {/* Display query result count (if available) */}\n            {data.results && data.results.length > 0 && (\n              <div className=\"mt-2 text-xs\">\n                <span className=\"font-medium\">Results: </span>\n                <span className=\"text-muted-foreground\">{data.results.length} items</span>\n              </div>\n            )}\n          </div>\n        );\n      })}\n    </div>\n  );\n}\n\n// Message history display component\nfunction MessageHistoryDisplay({ messages }: { messages: Message[] }) {\n  if (messages.length === 0) return null;\n  \n  return (\n    <div className=\"space-y-4\">\n      {messages.map((message) => (\n        <div \n          key={message.id} \n          className={`p-3 rounded-lg ${\n            message.type === 'user' \n              ? 'bg-primary/10 border border-primary/20' \n              : 'bg-card'\n          }`}\n        >\n          <div className=\"flex items-center gap-2 mb-1\">\n            <span className={`text-xs px-2 py-0.5 rounded-full ${\n              message.type === 'user' \n                ? 
'bg-primary/20 text-primary' \n                : 'bg-secondary/20 text-secondary'\n            }`}>\n              {message.name || message.type}\n            </span>\n          </div>\n          \n          <div className=\"prose prose-sm dark:prose-invert max-w-none\">\n            {message.content && (\n              <ReactMarkdown remarkPlugins={[remarkGfm]}>\n                {message.content}\n              </ReactMarkdown>\n            )}\n            \n            {message.tool_calls?.map((tool) => (\n              <div key={tool.id} className=\"bg-muted/30 p-2 rounded-md mt-2 text-xs font-mono\">\n                <div className=\"font-semibold\">{tool.name}</div>\n                <pre className=\"overflow-x-auto p-1 mt-1\">\n                  {typeof tool.args === 'string' \n                    ? tool.args \n                    : JSON.stringify(tool.args, null, 2)}\n                </pre>\n              </div>\n            ))}\n          </div>\n        </div>\n      ))}\n    </div>\n  );\n}\n\n// Final report display component\nfunction FinalReportDisplay({ report }: { report: string | null }) {\n  if (!report) {\n    return (\n      <div className=\"flex items-center justify-center h-full text-muted-foreground\">\n        <p>No report generated yet</p>\n      </div>\n    );\n  }\n\n  return (\n    <div className=\"prose prose-sm dark:prose-invert max-w-none\">\n      <ReactMarkdown remarkPlugins={[remarkGfm]}>\n        {report}\n      </ReactMarkdown>\n    </div>\n  );\n}\n\nexport default function DeepResearchPage() {\n  const params = useParams<{ id: string }>();\n  const threadId = params.id;\n\n  // State to prevent triggering run multiple times\n  const [initialRunAttempted, setInitialRunAttempted] = useState(false);\n  // Optional: State to indicate specific startup error\n  const [startupError, setStartupError] = useState<string | null>(null);\n\n\n  const {\n      status,\n      run, // We need the run function from the hook\n      restore,\n      
stop,\n      restoring,\n      restoreError,\n      messages,\n      progressUpdates,\n      appCheckpoints\n      // Add interrupt state/handlers if needed: isInterrupted, interruptData, resume\n  } = useLangGraphAgent<DeepResearchState, any, any>({ // Pass generics if needed\n       // Add callbacks if used, e.g., onCheckpointEnd\n  });\n\n  // ... (useMemo for finalReport, useMemo for researchTopic, useRef for messagesEndRef) ...\n   const researchTopic = useMemo(() => {\n       // Logic to get topic from messages remains useful for display\n       return messages?.[0]?.type === 'user' && typeof messages[0].content === 'string'\n           ? messages[0].content\n           : null;\n   }, [messages]);\n   const messagesEndRef = useRef<HTMLDivElement>(null);\n\n\n  // Restore history AND trigger initial run if necessary\n  useEffect(() => {\n    // Guard: Only proceed if we have a threadId and haven't tried the initial run/restore check yet.\n    if (!threadId || initialRunAttempted) {\n        return;\n    }\n\n    setStartupError(null); // Clear previous startup error on new attempt\n    console.log(\"Effect: Starting restore/initial run check for thread:\", threadId);\n    \n    // Logging: check whether sessionStorage already holds a topic for this thread\n    const initialTopicCheck = sessionStorage.getItem(`topic_for_${threadId}`);\n    console.log(`Initial sessionStorage check for topic_for_${threadId}:`, initialTopicCheck);\n\n    restore(threadId)\n        .then((restoredCheckpoints) => {\n            console.log(\"Effect: Restore promise resolved.\");\n            console.log(\"Restored checkpoints:\", restoredCheckpoints);\n            const hasMeaningfulHistory = restoredCheckpoints && restoredCheckpoints.length > 1;\n            console.log(\"Has meaningful history:\", hasMeaningfulHistory);\n\n            // --- Logic to potentially trigger run ---\n            if (!hasMeaningfulHistory && !restoreError) {\n                console.log(\"Effect: Thread appears new based on checkpoints. 
Checking for topic...\");\n                // Important fix: re-check sessionStorage here, since other code may have modified it during restore\n                const initialTopic = sessionStorage.getItem(`topic_for_${threadId}`);\n                console.log(`Second check for topic_for_${threadId}:`, initialTopic);\n                \n                // Only remove the sessionStorage entry once we are sure the run will start\n                if (initialTopic) {\n                    console.log(`Effect: Found initial topic: \"${initialTopic}\". Triggering run...`);\n                    const initialMessages: Message[] = [{ type: 'user', content: initialTopic, id: `user-${crypto.randomUUID()}` }];\n                    const initialState: DeepResearchState = { messages: initialMessages };\n\n                    run({ thread_id: threadId, state: initialState, agent: \"deep_research\" })\n                        .then(() => {\n                            console.log(\"Initial run command sent successfully.\");\n                            setInitialRunAttempted(true); // Set attempt complete on success\n                            // Remove the sessionStorage entry only after the run has started successfully\n                            sessionStorage.removeItem(`topic_for_${threadId}`);\n                        })\n                        .catch(runError => {\n                            console.error(\"Error detail from initial run call:\", runError);\n                            let detail = runError instanceof Error ? 
runError.message : 'Unknown error';\n                            setStartupError(`Failed to start research: ${detail}`);\n                            setInitialRunAttempted(true); // Also set attempt complete on failure\n                        });\n                } else {\n                    console.warn(\"Effect: Thread appears new, but no initial topic found.\");\n                    // Fallback: try to read the topic from a URL query parameter\n                    const urlParams = new URLSearchParams(window.location.search);\n                    const topicFromUrl = urlParams.get('topic');\n                    \n                    if (topicFromUrl) {\n                        console.log(`Found topic from URL: \"${topicFromUrl}\"`);\n                        const initialMessages: Message[] = [{ type: 'user', content: topicFromUrl, id: `user-${crypto.randomUUID()}` }];\n                        const initialState: DeepResearchState = { messages: initialMessages };\n                        \n                        run({ thread_id: threadId, state: initialState, agent: \"deep_research\" })\n                            .then(() => {\n                                console.log(\"Initial run from URL param sent successfully.\");\n                                setInitialRunAttempted(true);\n                            })\n                            .catch(runError => {\n                                console.error(\"Error starting from URL param:\", runError);\n                                setStartupError(`Failed to start research: ${runError instanceof Error ? 
runError.message : 'Unknown error'}`);\n                                setInitialRunAttempted(true);\n                            });\n                    } else {\n                        setStartupError(\"Cannot start research: Initial topic is missing.\");\n                        setInitialRunAttempted(true); // Set attempt complete as we can't proceed\n                    }\n                }\n            } else {\n                 // Existing thread or restore error occurred\n                 console.log(\"Effect: Existing thread or restore error.\");\n                 setInitialRunAttempted(true); // Mark attempt complete\n                 sessionStorage.removeItem(`topic_for_${threadId}`); // Clean up just in case\n            }\n            // --- End run trigger logic ---\n        })\n        .catch((err) => {\n            console.error(\"Effect: Unhandled error during restore promise chain:\", err);\n             setInitialRunAttempted(true); // Mark attempt complete on unexpected error\n             if (!restoreError) {\n                setStartupError(\"An unexpected error occurred during loading.\")\n             }\n        });\n\n// ***** ENSURE THIS DEPENDENCY ARRAY IS USED *****\n}, [threadId, restore, run, initialRunAttempted, restoreError]);\n// ***** DEPENDENCY ARRAY HAS 5 ITEMS AND IS STABLE *****\n\n\n  // Auto-scroll effect (remains the same)\n  useEffect(() => {\n       messagesEndRef.current?.scrollIntoView({ behavior: \"smooth\" });\n  }, [messages, progressUpdates]);\n\n\n  // Stop research handler (remains the same)\n  const handleStopResearch = useCallback(() => {\n       if (threadId && status === 'running') {\n         stop(threadId);\n       }\n  }, [threadId, status, stop]);\n\n// --- THIS DEFINITION MUST EXIST BEFORE THE RETURN STATEMENT ---\nconst finalReport = useMemo(() => {\n  // Defensive checks first\n  if (!appCheckpoints || !Array.isArray(appCheckpoints) || appCheckpoints.length === 0) {\n       return null;\n  }\n  const 
lastCheckpoint = appCheckpoints[appCheckpoints.length - 1];\n  if (!lastCheckpoint || !lastCheckpoint.state) { // Check if lastCheckpoint and its state exist\n      return null;\n  }\n\n  // Try to get the specific field from state\n  let report = (lastCheckpoint.state as DeepResearchState)?.final_report_markdown;\n\n  // Fallback to last AI message if report not found and it's the end of the graph\n  if (!report && lastCheckpoint.next?.length === 0 && messages && messages.length > 0) {\n    const lastMsg = messages[messages.length - 1];\n    if (lastMsg?.type === 'ai' && typeof lastMsg.content === 'string') {\n      console.log(\"Using last AI message as final report (fallback).\");\n      report = lastMsg.content;\n    }\n  }\n\n  return report ?? null; // Return the found report or null\n}, [appCheckpoints, messages]); // Dependencies are correct\n\n  // --- Render Logic ---\n  // (Includes checks for restoring, restoreError, startupError,\n  //  and then displays messages, progress, report, interrupt UI etc.)\n  return (\n      <div className=\"flex flex-col h-screen p-2 md:p-4 bg-background text-foreground\">\n          <h1 className=\"text-xl md:text-2xl font-semibold mb-2 md:mb-4 text-center flex-shrink-0\">\n              Deep Research Assistant\n              {researchTopic && !restoring && (\n                  <span className=\"block text-sm text-muted-foreground font-normal mt-1\">\n                    Topic: {researchTopic}\n                  </span>\n              )}\n          </h1>\n\n          <div className=\"flex-1 overflow-hidden flex flex-col\">\n              <div className=\"flex flex-col gap-4 flex-1 min-h-0\">\n                  <div className=\"h-full flex flex-col border rounded-lg shadow-sm bg-card\">\n                      {/* Header with Status Indicator */}\n                      <div className=\"p-2 border-b flex justify-between items-center flex-shrink-0\">\n                         {/* ... (Status display logic - same as before) ... 
*/}\n                           <h2 className=\"text-base md:text-lg font-semibold\">\n                               {researchTopic ? `Research: ${researchTopic.substring(0,30)}...` : \"Research Progress\"}\n                          </h2>\n                          <div className=\"text-xs text-muted-foreground flex items-center gap-1\">\n                             {/* ... (Status/Error/Stop Button display logic - same as before) ... */}\n                          </div>\n                      </div>\n\n                      {/* Content Area */}\n                      <div className=\"flex-1 overflow-y-auto p-2 space-y-4\" id=\"messages-container\">\n                          {restoring ? (\n                              <div className=\"flex justify-center items-center h-full text-muted-foreground\">\n                                  <Loader className=\"mr-2 h-4 w-4 animate-spin\" /> Loading History...\n                              </div>\n                          ) : restoreError ? (\n                              <div className=\"flex flex-col justify-center items-center h-full text-red-500 text-center p-4\">\n                                 <AlertTriangle className=\"mr-2 h-5 w-5 mb-2\" />\n                                 <p className=\"font-semibold\">Failed to Load Research History</p>\n                                 <p className=\"text-sm\">{restoreError.message || 'An unknown error occurred.'}</p>\n                             </div>\n                          // Display specific startup error if restore was ok but topic was missing\n                          ) : startupError ? 
(\n                               <div className=\"flex flex-col justify-center items-center h-full text-orange-500 text-center p-4\">\n                                 <AlertTriangle className=\"mr-2 h-5 w-5 mb-2\" />\n                                 <p className=\"font-semibold\">Cannot Start Research</p>\n                                 <p className=\"text-sm\">{startupError}</p>\n                               </div>\n                          ) : (\n                              <>\n                                  {/* Display content only when no critical errors */}\n                                  <DeepResearchProgressDisplay updates={progressUpdates} />\n                                  <MessageHistoryDisplay messages={messages} />\n                                  {finalReport && (\n                                      <div className=\"mt-6 border-t pt-4\">\n                                          <h2 className=\"text-base md:text-lg font-semibold mb-2\">Final Report</h2>\n                                          <FinalReportDisplay report={finalReport} />\n                                      </div>\n                                  )}\n                                  {/* Message if idle/complete but nothing substantial found */}\n                                   {status === 'idle' && messages.length === 0 && !finalReport && (!appCheckpoints || appCheckpoints.length === 0) && (\n                                       <div className=\"text-center text-muted-foreground py-6\">\n                                         Waiting for research to start or no history found.\n                                       </div>\n                                   )}\n                              </>\n                          )}\n                          {/* --- End Conditional Content --- */}\n                          <div ref={messagesEndRef} />\n                      </div>\n                       {/* --- Human-in-the-Loop UI (Render based on hook 
state) --- */}\n                      {/* {isInterrupted && interruptData && ( ... UI to call resume ... )} */}\n                  </div>\n              </div>\n          </div>\n      </div>\n  );\n}"
  },
  {
    "path": "web/app/deep-research/page.tsx",
"content": "// @filename: app/deep-research/page.tsx\n'use client';\n\nimport { useState } from 'react';\nimport { useRouter } from 'next/navigation';\nimport { Button } from '@/components/ui/button';\nimport { Textarea } from '@/components/ui/textarea';\nimport { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/card';\nimport { useChatStore } from '@/stores/chat-store';\nimport { Loader } from 'lucide-react'; // Keep loader for button state\n\nexport default function DeepResearchInitiationPage() {\n    const [topic, setTopic] = useState('');\n    const [isNavigating, setIsNavigating] = useState(false); // Simple loading state for navigation\n    const [error, setError] = useState<string | null>(null);\n    const router = useRouter();\n    const { addChat } = useChatStore();\n\n    const handleInitiateResearch = () => {\n        if (!topic.trim()) {\n            setError(\"Please enter a topic.\");\n            return;\n        }\n        setIsNavigating(true); // Indicate process started\n        setError(null);\n\n        try {\n            // 1. Create the chat entry in the store to get an ID\n            //    Use agentId, agentName, and pass topic as the optional initialName\n            const newChat = addChat('deep-research', 'Deep Research', topic);\n\n            // 2. Store the actual topic temporarily for the next page\n            //    Use a unique key based on the new chat ID\n            sessionStorage.setItem(`topic_for_${newChat.id}`, topic);\n\n            // 3. Navigate to the specific research page\n            //    The actual 'run' will be triggered on that page load\n            router.push(`/deep-research/${newChat.id}`);\n\n            // Note: No backend API call here. No 'run' triggered here.\n\n        } catch (err: any) {\n            console.error(\"Failed to initiate research process:\", err);\n            setError(err.message || \"Failed to start. 
Please try again.\");\n            setIsNavigating(false); // Stop loading on error\n        }\n        // If navigation starts, the component will unmount, no need to set isNavigating back to false\n    };\n\n    return (\n        <div className=\"flex flex-col items-center justify-center min-h-screen p-4\">\n            <Card className=\"w-full max-w-2xl shadow-lg\">\n                <CardHeader>\n                    <CardTitle>Start New Deep Research</CardTitle>\n                </CardHeader>\n                <CardContent>\n                    <p className=\"text-muted-foreground mb-4\">\n                        Enter the topic you want the agent to research in depth.\n                    </p>\n                    <Textarea\n                        value={topic}\n                        onChange={(e) => setTopic(e.target.value)}\n                        placeholder=\"Example: Impact of AI on renewable energy\"\n                        className=\"min-h-[100px] mb-4 text-sm\"\n                        disabled={isNavigating}\n                    />\n                    {error && (\n                         <p className=\"text-red-500 text-sm mb-4\">{error}</p>\n                    )}\n                    <Button\n                        onClick={handleInitiateResearch} // Use the correct handler name\n                        disabled={isNavigating || !topic.trim()}\n                        className=\"w-full\"\n                    >\n                        {isNavigating ? (\n                            <>\n                                <Loader className=\"mr-2 h-4 w-4 animate-spin\" />\n                                Proceeding...\n                            </>\n                        ) : (\n                            // Changed button text to be more accurate\n                            <>Prepare Research</>\n                        )}\n                    </Button>\n                </CardContent>\n            </Card>\n        </div>\n    );\n}"
  },
  {
    "path": "web/app/globals.css",
    "content": "@import 'react-json-view-lite/dist/index.css';\n\n@tailwind base;\n@tailwind components;\n@tailwind utilities;\n\nbody {\n  font-family: Arial, Helvetica, sans-serif;\n}\n\n@layer base {\n  :root {\n    --background: 0 0% 100%;\n    --foreground: 240 10% 3.9%;\n    --card: 0 0% 100%;\n    --card-foreground: 240 10% 3.9%;\n    --popover: 0 0% 100%;\n    --popover-foreground: 240 10% 3.9%;\n    --primary: 240 5.9% 10%;\n    --primary-foreground: 0 0% 98%;\n    --secondary: 240 4.8% 95.9%;\n    --secondary-foreground: 240 5.9% 10%;\n    --muted: 240 4.8% 95.9%;\n    --muted-foreground: 240 3.8% 46.1%;\n    --accent: 240 4.8% 95.9%;\n    --accent-foreground: 240 5.9% 10%;\n    --destructive: 0 84.2% 60.2%;\n    --destructive-foreground: 0 0% 98%;\n    --border: 240 5.9% 90%;\n    --input: 240 5.9% 90%;\n    --ring: 240 10% 3.9%;\n    --chart-1: 12 76% 61%;\n    --chart-2: 173 58% 39%;\n    --chart-3: 197 37% 24%;\n    --chart-4: 43 74% 66%;\n    --chart-5: 27 87% 67%;\n    --radius: 0.5rem;\n    --sidebar-background: 0 0% 98%;\n    --sidebar-foreground: 240 5.3% 26.1%;\n    --sidebar-primary: 240 5.9% 10%;\n    --sidebar-primary-foreground: 0 0% 98%;\n    --sidebar-accent: 240 4.8% 95.9%;\n    --sidebar-accent-foreground: 240 5.9% 10%;\n    --sidebar-border: 220 13% 91%;\n    --sidebar-ring: 217.2 91.2% 59.8%;\n  }\n\n  .dark {\n    --background: 240 10% 3.9%;\n    --foreground: 0 0% 98%;\n    --card: 240 10% 3.9%;\n    --card-foreground: 0 0% 98%;\n    --popover: 240 10% 3.9%;\n    --popover-foreground: 0 0% 98%;\n    --primary: 0 0% 98%;\n    --primary-foreground: 240 5.9% 10%;\n    --secondary: 240 3.7% 15.9%;\n    --secondary-foreground: 0 0% 98%;\n    --muted: 240 3.7% 15.9%;\n    --muted-foreground: 240 5% 64.9%;\n    --accent: 240 3.7% 15.9%;\n    --accent-foreground: 0 0% 98%;\n    --destructive: 0 62.8% 30.6%;\n    --destructive-foreground: 0 0% 98%;\n    --border: 240 3.7% 15.9%;\n    --input: 240 3.7% 15.9%;\n    --ring: 240 4.9% 83.9%;\n    
--chart-1: 220 70% 50%;\n    --chart-2: 160 60% 45%;\n    --chart-3: 30 80% 55%;\n    --chart-4: 280 65% 60%;\n    --chart-5: 340 75% 55%;\n    --sidebar-background: 240 5.9% 10%;\n    --sidebar-foreground: 240 4.8% 95.9%;\n    --sidebar-primary: 224.3 76.3% 48%;\n    --sidebar-primary-foreground: 0 0% 100%;\n    --sidebar-accent: 240 3.7% 15.9%;\n    --sidebar-accent-foreground: 240 4.8% 95.9%;\n    --sidebar-border: 240 3.7% 15.9%;\n    --sidebar-ring: 217.2 91.2% 59.8%;\n  }\n}\n\n@layer base {\n  * {\n    @apply border-border;\n  }\n\n  body {\n    @apply bg-background text-foreground;\n  }\n}"
  },
  {
    "path": "web/app/layout.tsx",
"content": "import type { Metadata } from \"next\";\nimport { Geist, Geist_Mono } from \"next/font/google\";\nimport \"./globals.css\";\nimport { ThemeProvider } from \"@/components/theme-provider\"\nimport { SidebarProvider, SidebarTrigger } from \"@/components/ui/sidebar\"\nimport { AppSidebar } from \"@/components/app-sidebar\"\n\nconst geistSans = Geist({\n  variable: \"--font-geist-sans\",\n  subsets: [\"latin\"],\n});\n\nconst geistMono = Geist_Mono({\n  variable: \"--font-geist-mono\",\n  subsets: [\"latin\"],\n});\n\nexport const metadata: Metadata = {\n  title: \"Mentis\",\n  description: \"Mentis - an extensible multi-agent development kit built on LangGraph\",\n};\n\nexport default function RootLayout({\n  children,\n}: Readonly<{\n  children: React.ReactNode;\n}>) {\n  return (\n    <html lang=\"en\">\n      <body\n        className={`${geistSans.variable} ${geistMono.variable} antialiased`}\n      >\n        <ThemeProvider\n          attribute=\"class\"\n          defaultTheme=\"system\"\n          enableSystem\n          disableTransitionOnChange\n        >\n          <SidebarProvider>\n            <AppSidebar />\n            <div className=\"relative min-h-screen w-full\">\n              <SidebarTrigger className=\"absolute left-1 top-1 z-50\" />\n              <main className=\"h-full\">\n                {children}\n              </main>\n            </div>\n          </SidebarProvider>\n        </ThemeProvider>\n      </body>\n    </html>\n  );\n}\n"
  },
  {
    "path": "web/app/page.tsx",
"content": "// @filename: app/page.tsx (or your home page file path)\n'use client';\n\nimport React, { useState } from 'react'; // import React and useState\nimport { useRouter } from \"next/navigation\";\nimport { Button } from \"@/components/ui/button\";\n// --- MODIFIED: Import Dialog components ---\nimport {\n  Dialog,\n  DialogContent,\n  DialogDescription,\n  DialogHeader,\n  DialogTitle,\n  DialogTrigger,\n  DialogFooter, // Optional: if you need a footer\n  DialogClose,  // Optional: for explicit close buttons\n} from \"@/components/ui/dialog\";\n// --- MODIFIED: Import useChatStore again ---\nimport { useChatStore } from \"@/stores/chat-store\";\n// --- Example Icons (Bot is the fallback icon used in the agent list below) ---\nimport { BrainCircuit, Users, Wrench, BotMessageSquare, GitBranch, MessageSquare, Bot } from \"lucide-react\";\n\n// --- Agent Configuration (Hardcoded for now) ---\n// 'id' should match the agent name expected by your backend API loader.\nconst availableAgents = [\n  { id: 'chat', name: 'ReAct Agent', description: 'A general purpose assistant for various tasks.', icon: MessageSquare },\n  { id: 'deep_research', name: 'Deep Research', description: 'Performs in-depth research on a topic.', icon: BrainCircuit },\n  // Add other agents here\n  // { id: 'another_agent', name: 'Another Agent', description: 'Description here', icon: Users },\n];\n// --- End Agent Configuration ---\n\n\n// --- Feature Block Component (Unchanged from previous version) ---\nfunction FeatureBlock({ title, description, icon: Icon }: { title: string; description: string; icon?: React.ElementType; }) {\n  return (\n    <div className=\"group p-4 md:p-6 rounded-lg bg-card dark:bg-gray-800/50 border border-border dark:border-gray-700/50 hover:shadow-md transition-shadow duration-300\">\n      <div className=\"flex items-center gap-3 mb-3\">\n         {Icon && <Icon className=\"w-6 h-6 text-primary flex-shrink-0\" />}\n         <h3 className=\"text-lg md:text-xl font-semibold text-foreground\">{title}</h3>\n      </div>\n      <p 
className=\"text-sm md:text-base text-muted-foreground\">{description}</p>\n    </div>\n  );\n}\n\n// --- Main Welcome Page Component ---\nexport default function WelcomePage() {\n  const router = useRouter();\n  // --- MODIFIED: Get addChat from the store ---\n  const { addChat } = useChatStore();\n  // State to control Dialog open/closed status, useful for closing programmatically\n  const [isAgentSelectorOpen, setIsAgentSelectorOpen] = useState(false);\n\n  \n  // --- CORRECTED: Handler to create chat AND navigate dynamically ---\n  const handleCreateChat = (agentId: string, agentName: string) => {\n    console.log(`Creating new chat for agent: ${agentName} (ID: ${agentId})`);\n\n    if (agentId === 'deep_research') {\n      // Deep Research: the initiation page creates its own chat entry,\n      // so calling addChat here would produce a duplicate entry.\n      const targetPath = '/deep-research/';\n      setIsAgentSelectorOpen(false);\n      router.push(targetPath); // Static path to the initiation page\n    } else {\n    // 1. Call addChat - This matches the store definition.\n    //    Pass agentId first, then agentName. The store generates the chat name.\n      const newChat = addChat(agentId, agentName);\n    // 2. 
Construct the dynamic navigation path using agentId\n      const targetPath = `/${agentId}/${newChat.id}`;\n      setIsAgentSelectorOpen(false);\n      router.push(targetPath); // Use the CORRECT dynamic path\n    }\n\n  };\n\n  return (\n    <div className=\"min-h-screen bg-gradient-to-b from-background via-background/80 to-blue-50 dark:to-blue-900/20 flex items-center justify-center\">\n      <div className=\"container px-4 py-12 md:py-16 mx-auto space-y-12 md:space-y-16\">\n\n        {/* Hero Section (Unchanged) */}\n        <div className=\"text-center space-y-4 max-w-4xl mx-auto\">\n          <h1 className=\"text-4xl md:text-6xl font-bold tracking-tight text-gray-900 dark:text-gray-100\">\n            Welcome to Mentis\n          </h1>\n          <p className=\"text-lg md:text-xl text-muted-foreground max-w-2xl mx-auto\">\n            An interactive learning framework for exploring Superagents and Multi-Agent Systems built with LangGraph.\n          </p>\n        </div>\n\n        {/* About/Purpose Section (Unchanged) */}\n        <div className=\"max-w-3xl mx-auto text-center space-y-4\">\n          <h2 className=\"text-2xl md:text-3xl font-semibold tracking-tight\">Learn by Doing</h2>\n          <p className=\"text-base md:text-lg text-muted-foreground leading-relaxed\">\n            Mentis provides hands-on examples and tools to help you understand the core concepts, architectures, and capabilities of modern AI agents. 
Dive into pre-built agents or explore the underlying graph structures.\n          </p>\n        </div>\n\n        {/* Capabilities / Concepts Section (Unchanged) */}\n        <div className=\"space-y-8\">\n          <h3 className=\"text-2xl md:text-3xl font-semibold text-center\">Explore Key Agent Concepts</h3>\n          <div className=\"grid md:grid-cols-2 lg:grid-cols-3 gap-4 md:gap-6 max-w-5xl mx-auto\">\n            <FeatureBlock title=\"Autonomous Agents (Superagents)\" description=\"Interact with agents for complex, multi-step tasks like research, utilizing planning and tool use.\" icon={BrainCircuit} />\n            <FeatureBlock title=\"Multi-Agent Collaboration\" description=\"Observe how multiple specialized agents can work together, delegate tasks, and achieve a common goal.\" icon={Users}/>\n            <FeatureBlock title=\"Tool Usage & Function Calling\" description=\"See how agents leverage external tools (web search, APIs) to enhance their abilities.\" icon={Wrench}/>\n            <FeatureBlock title=\"Streaming & Real-time Feedback\" description=\"Experience how intermediate steps and results are streamed back for transparency.\" icon={BotMessageSquare}/>\n            <FeatureBlock title=\"State Management & Persistence\" description=\"Understand how LangGraph manages conversation state for resuming and tracing execution.\"/>\n            <FeatureBlock title=\"Human-in-the-Loop\" description=\"Explore scenarios where agents pause to ask for human input or approval.\"/>\n          </div>\n        </div>\n\n        {/* CTA Section --- MODIFIED --- */}\n        <div className=\"text-center pt-4\">\n          <Dialog open={isAgentSelectorOpen} onOpenChange={setIsAgentSelectorOpen}>\n            <DialogTrigger asChild>\n              <Button size=\"lg\" className=\"px-8 py-3 text-lg\">\n                Explore Agents\n              </Button>\n            </DialogTrigger>\n            <DialogContent className=\"sm:max-w-[425px] md:max-w-lg\">\n          
    <DialogHeader>\n                <DialogTitle>Select an Agent</DialogTitle>\n                <DialogDescription>\n                  Choose an agent type to start interacting with.\n                </DialogDescription>\n              </DialogHeader>\n              {/* List of available agents */}\n              <div className=\"grid gap-4 py-4\">\n                {availableAgents.map((agent) => {\n                   const Icon = agent.icon || Bot; // Default icon\n                   return (\n                     <button\n                       key={agent.id}\n                       // Ensure onClick passes BOTH agent.id and agent.name to the handler\n                       onClick={() => handleCreateChat(agent.id, agent.name)}\n                       className=\"flex items-center p-4 rounded-lg border bg-card hover:bg-muted/50 dark:border-gray-700 dark:hover:bg-gray-800/60 transition-colors text-left w-full\"\n                     >\n                       <Icon className=\"w-6 h-6 mr-4 text-primary flex-shrink-0\" />\n                       <div>\n                         <p className=\"font-semibold text-foreground\">{agent.name}</p>\n                         <p className=\"text-sm text-muted-foreground\">{agent.description}</p>\n                       </div>\n                     </button>\n                   )\n                 })}\n              </div>\n            </DialogContent>\n          </Dialog>\n        </div>\n        {/* --- End CTA Section --- */}\n\n      </div>\n    </div>\n  );\n}"
  },
  {
    "path": "web/components/app-sidebar.tsx",
    "content": "// @filename: components/layout/app-sidebar.tsx (或者您的实际路径)\n'use client';\n\nimport Link from \"next/link\";\nimport { usePathname, useRouter } from \"next/navigation\";\nimport React from \"react\"; // 导入 React\nimport {\n  Sidebar,\n  SidebarContent,\n  SidebarHeader,\n  SidebarFooter,\n  SidebarGroup,\n  SidebarMenu,\n  SidebarMenuItem,\n  SidebarMenuButton,\n  SidebarGroupLabel,\n  // SidebarGroupAction // 不再需要，我们用普通按钮替代\n} from \"@/components/ui/sidebar\"; // 确认这是您自定义的 Sidebar 结构组件\nimport { Bot, Plus, MessageSquare } from \"lucide-react\";\nimport { Button } from \"@/components/ui/button\"; // 导入 Button\nimport { useChatStore } from \"@/stores/chat-store\"; // 确认路径\nimport ThemeSwitcher from \"./theme-switcher\"; // 确认路径\n\n// --- Agent 配置 (硬编码，未来可改为 API 获取) ---\n// 'id' 应与后端 load_agent 期望的名称匹配\nconst availableAgents = [\n  { id: 'chat', name: 'General Chatbot', description: '通用助理', icon: MessageSquare }, // 添加图标\n  { id: 'deep_research', name: 'Deep Research', description: '深度研究助理', icon: Bot }, // 添加图标\n  // 在这里添加更多 Agent\n];\n// --- Agent 配置结束 ---\n\nexport function AppSidebar() {\n  const pathname = usePathname();\n  const router = useRouter();\n  // 假设 useChatStore 包含更新后的 addChat 和带 agentName 的 ChatItem\n  const { chats, addChat } = useChatStore();\n\n  // 处理创建新聊天的函数 (保持不变)\n  const handleAddNewChat = (agentId: string, agentName: string) => {\n    if (agentId === 'deep_research') {\n      // For Deep Research, navigate to the dedicated initiation page\n      const targetPath = '/deep-research/';\n      console.log(`Navigating from Sidebar to Deep Research initiation: ${targetPath}`);\n      router.push(targetPath);\n      // NOTE: We DO NOT call addChat here for deep_research\n    } else {\n      // For all other agents, create chat item and navigate to its specific ID page\n      console.log(`Creating new chat entry for agent: ${agentName}`);\n      const newChat = addChat(agentId, agentName); // Create entry in store\n      const 
targetPath = `/${agentId}/${newChat.id}`; // e.g., /default/abc987\n      console.log(`Navigating from Sidebar to: ${targetPath}`);\n      router.push(targetPath);\n    }\n  };\n\n  return (\n    <Sidebar>\n      <SidebarHeader>\n        {/* 保持您的 Header */}\n        <SidebarMenu>\n          <SidebarMenuItem>\n            <SidebarMenuButton size=\"lg\" asChild>\n              <Link href=\"/\" className=\"flex items-center gap-2\">\n                <div className=\"flex aspect-square size-8 items-center justify-center rounded-lg bg-sidebar-primary text-sidebar-primary-foreground\">\n                  <Bot className=\"size-4\" />\n                </div>\n                <span className=\"font-semibold\">Mentis Web UI</span>\n              </Link>\n            </SidebarMenuButton>\n          </SidebarMenuItem>\n        </SidebarMenu>\n      </SidebarHeader>\n\n      <SidebarContent className=\"flex flex-col\"> {/* 允许内容增长 */}\n        {/* --- MODIFIED: 添加 Agent 创建按钮区域 --- */}\n        <SidebarGroup className=\"flex-shrink-0\"> {/* 防止此区域过度增长 */}\n          <SidebarGroupLabel>New Chat</SidebarGroupLabel>\n          <SidebarMenu className=\"mt-1 space-y-1\"> {/* 为按钮添加间距 */}\n            {availableAgents.map((agent) => {\n              const Icon = agent.icon || Plus; // 使用配置的图标或默认 Plus\n              return (\n                <SidebarMenuItem key={agent.id}>\n                  <Button\n                    variant=\"ghost\" // 使用 ghost 样式使其看起来像菜单项\n                    size=\"sm\"\n                    className=\"w-full justify-start px-2\" // 调整 padding 和对齐\n                    onClick={() => handleAddNewChat(agent.id, agent.name)}\n                    title={agent.description} // 添加工具提示\n                  >\n                    <Icon className=\"mr-2 size-4\" /> {/* 显示图标 */}\n                    {agent.name}\n                  </Button>\n                </SidebarMenuItem>\n              );\n            })}\n          </SidebarMenu>\n        </SidebarGroup>\n        {/* --- 
End Agent 创建按钮区域 --- */}\n\n        <div className=\"my-4 border-t dark:border-gray-700 flex-shrink-0\"></div> {/* 分隔线 */}\n\n        {/* --- 聊天历史记录区域 --- */}\n        <SidebarGroup className=\"flex-grow overflow-y-auto\"> {/* 让历史记录区域可滚动 */}\n          <SidebarGroupLabel>Recent Chats</SidebarGroupLabel>\n           {/* 确保这里的 map 不会出错 */}\n          <SidebarMenu className=\"mt-2\">\n            {/* 如果 chats 为空，可以显示提示 */}\n            {chats && chats.length === 0 && (\n              <p className=\"px-2 text-xs text-muted-foreground\">No recent chats.</p>\n            )}\n            {/* --- MODIFIED: Link href in chat history --- */}\n            {Array.isArray(chats) && chats.map((chat) => {\n              // >>> Important Assumption: Your 'chat' object in the store needs to know its agentId <<<\n              // >>> If not, you cannot correctly link back here. Let's assume chat has `agentId` <<<\n              // >>> If `chat.agentId` doesn't exist, you'll need to update your store logic <<<\n              const chatAgentId = chat.agentId || 'chat'; // Fallback to 'chat' if missing, adjust as needed\n\n              // Construct the correct link based on the chat's agent type\n              const chatHref = `/${chatAgentId}/${chat.id}`;\n              const isActive = pathname === chatHref;\n\n              // Find the agent icon (optional, improves UI)\n               const agentConfig = availableAgents.find(a => a.id === chatAgentId);\n               const Icon = agentConfig?.icon || MessageSquare; // Use agent icon or default\n\n              return (\n                <SidebarMenuItem key={chat.id}>\n                  <SidebarMenuButton\n                      asChild\n                      isActive={isActive} // Check against the full dynamic path\n                      className=\"truncate\"\n                      title={chat.name} // Use chat name for title\n                  >\n                    <Link href={chatHref}>\n                      <Icon 
className=\"size-4 flex-shrink-0 mr-2\" /> {/* Use dynamic icon */}\n                      <span className=\"truncate\">{chat.name}</span>\n                    </Link>\n                  </SidebarMenuButton>\n                </SidebarMenuItem>\n              );\n            })}\n          </SidebarMenu>\n        </SidebarGroup>\n      </SidebarContent>\n\n      <SidebarFooter>\n        {/* 保持您的 Footer */}\n        <div className=\"flex flex-col items-center text-sm gap-4\">\n          <ThemeSwitcher />\n          <span>Made by{\" \"}\n            <a\n              href=\"https://github.com/foreveryh/mentis\"\n              target=\"_blank\"\n              rel=\"noopener noreferrer\"\n              className=\"text-primary hover:text-primary/80 transition-colors inline-flex items-center gap-1 font-semibold underline underline-offset-4\"\n            >\n              Mentis\n            </a>\n          </span>\n        </div>\n      </SidebarFooter>\n    </Sidebar>\n  );\n}"
  },
  {
    "path": "web/components/theme-provider.tsx",
    "content": "\"use client\"\n\nimport * as React from \"react\"\nimport { useEffect, useState } from 'react'\nimport { ThemeProvider as NextThemesProvider } from \"next-themes\"\n\nexport const useMounted = () => {\n  const [mounted, setMounted] = useState(false)\n  useEffect(() => { setMounted(true) }, [])\n  return mounted\n}\n\nexport function ThemeProvider({\n  children,\n  ...props\n}: React.ComponentProps<typeof NextThemesProvider>) {\n  const mounted = useMounted()\n  return mounted && <NextThemesProvider {...props}>{children}</NextThemesProvider>\n}\n"
  },
  {
    "path": "web/components/theme-switcher.tsx",
    "content": "\"use client\"\n\nimport { useState } from \"react\"\nimport { useTheme } from \"next-themes\"\nimport { Moon, SunMedium, Monitor } from \"lucide-react\"\nimport { motion } from \"framer-motion\"\nimport { cn } from \"@/lib/utils\"\n\nconst themes = [\n  { name: \"system\", icon: Monitor },\n  { name: \"light\", icon: SunMedium },\n  { name: \"dark\", icon: Moon },\n]\n\nexport default function ThemeSwitcher() {\n  const { theme, setTheme } = useTheme()\n  const [selectedTheme, setSelectedTheme] = useState(theme)\n\n  console.log('current theme', theme)\n\n  const handleThemeChange = (themeToSwitch: string) => {\n    setSelectedTheme(themeToSwitch)\n    setTheme(themeToSwitch)\n  }\n\n  return (\n    <div className=\"inline-flex items-center bg-muted rounded-full relative border p-0.5\">\n      <div className=\"relative flex\">\n        {themes.map((theme) => {\n          const Icon = theme.icon\n          return (\n            <button\n              key={theme.name}\n              className={cn(\n                \"relative z-10 rounded-full transition-colors duration-200\",\n                \"w-7 h-7 flex items-center justify-center\",\n                selectedTheme === theme.name\n                  ? \"text-primary-foreground\"\n                  : \"text-muted-foreground hover:text-foreground\",\n              )}\n              onClick={() => handleThemeChange(theme.name)}\n              aria-label={`Switch to ${theme.name} theme`}\n            >\n              <Icon className=\"w-3.5 h-3.5\" />\n            </button>\n          )\n        })}\n        <motion.div\n          className=\"absolute inset-0 w-7 h-7 bg-primary rounded-full\"\n          initial={false}\n          animate={{\n            x: selectedTheme === \"system\" ? 0 : selectedTheme === \"light\" ? 
28 : 56,\n          }}\n          transition={{\n            type: \"spring\",\n            stiffness: 400,\n            damping: 30,\n          }}\n        />\n      </div>\n    </div>\n  )\n}"
  },
  {
    "path": "web/components/ui/badge.tsx",
    "content": "import * as React from \"react\"\nimport { cva, type VariantProps } from \"class-variance-authority\"\n\nimport { cn } from \"@/lib/utils\"\n\nconst badgeVariants = cva(\n  \"inline-flex items-center rounded-md border px-2.5 py-0.5 text-xs font-semibold transition-colors focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2\",\n  {\n    variants: {\n      variant: {\n        default:\n          \"border-transparent bg-primary text-primary-foreground shadow hover:bg-primary/80\",\n        secondary:\n          \"border-transparent bg-secondary text-secondary-foreground hover:bg-secondary/80\",\n        destructive:\n          \"border-transparent bg-destructive text-destructive-foreground shadow hover:bg-destructive/80\",\n        outline: \"text-foreground\",\n      },\n    },\n    defaultVariants: {\n      variant: \"default\",\n    },\n  }\n)\n\nexport interface BadgeProps\n  extends React.HTMLAttributes<HTMLDivElement>,\n    VariantProps<typeof badgeVariants> {}\n\nfunction Badge({ className, variant, ...props }: BadgeProps) {\n  return (\n    <div className={cn(badgeVariants({ variant }), className)} {...props} />\n  )\n}\n\nexport { Badge, badgeVariants }\n"
  },
  {
    "path": "web/components/ui/button.tsx",
    "content": "import * as React from \"react\"\nimport { Slot } from \"@radix-ui/react-slot\"\nimport { cva, type VariantProps } from \"class-variance-authority\"\n\nimport { cn } from \"@/lib/utils\"\n\nconst buttonVariants = cva(\n  \"inline-flex items-center justify-center gap-2 whitespace-nowrap rounded-md text-sm font-medium transition-colors focus-visible:outline-none focus-visible:ring-1 focus-visible:ring-ring disabled:pointer-events-none disabled:opacity-50 [&_svg]:pointer-events-none [&_svg]:size-4 [&_svg]:shrink-0\",\n  {\n    variants: {\n      variant: {\n        default:\n          \"bg-primary text-primary-foreground shadow hover:bg-primary/90\",\n        destructive:\n          \"bg-destructive text-destructive-foreground shadow-sm hover:bg-destructive/90\",\n        outline:\n          \"border border-input bg-background shadow-sm hover:bg-accent hover:text-accent-foreground\",\n        secondary:\n          \"bg-secondary text-secondary-foreground shadow-sm hover:bg-secondary/80\",\n        ghost: \"hover:bg-accent hover:text-accent-foreground\",\n        link: \"text-primary underline-offset-4 hover:underline\",\n      },\n      size: {\n        default: \"h-9 px-4 py-2\",\n        sm: \"h-8 rounded-md px-3 text-xs\",\n        lg: \"h-10 rounded-md px-8\",\n        icon: \"h-9 w-9\",\n      },\n    },\n    defaultVariants: {\n      variant: \"default\",\n      size: \"default\",\n    },\n  }\n)\n\nexport interface ButtonProps\n  extends React.ButtonHTMLAttributes<HTMLButtonElement>,\n    VariantProps<typeof buttonVariants> {\n  asChild?: boolean\n}\n\nconst Button = React.forwardRef<HTMLButtonElement, ButtonProps>(\n  ({ className, variant, size, asChild = false, ...props }, ref) => {\n    const Comp = asChild ? 
Slot : \"button\"\n    return (\n      <Comp\n        className={cn(buttonVariants({ variant, size, className }))}\n        ref={ref}\n        {...props}\n      />\n    )\n  }\n)\nButton.displayName = \"Button\"\n\nexport { Button, buttonVariants }\n"
  },
  {
    "path": "web/components/ui/card.tsx",
    "content": "import * as React from \"react\"\n\nimport { cn } from \"@/lib/utils\"\n\nconst Card = React.forwardRef<\n  HTMLDivElement,\n  React.HTMLAttributes<HTMLDivElement>\n>(({ className, ...props }, ref) => (\n  <div\n    ref={ref}\n    className={cn(\n      \"rounded-xl border bg-card text-card-foreground shadow\",\n      className\n    )}\n    {...props}\n  />\n))\nCard.displayName = \"Card\"\n\nconst CardHeader = React.forwardRef<\n  HTMLDivElement,\n  React.HTMLAttributes<HTMLDivElement>\n>(({ className, ...props }, ref) => (\n  <div\n    ref={ref}\n    className={cn(\"flex flex-col space-y-1.5 p-6\", className)}\n    {...props}\n  />\n))\nCardHeader.displayName = \"CardHeader\"\n\nconst CardTitle = React.forwardRef<\n  HTMLDivElement,\n  React.HTMLAttributes<HTMLDivElement>\n>(({ className, ...props }, ref) => (\n  <div\n    ref={ref}\n    className={cn(\"font-semibold leading-none tracking-tight\", className)}\n    {...props}\n  />\n))\nCardTitle.displayName = \"CardTitle\"\n\nconst CardDescription = React.forwardRef<\n  HTMLDivElement,\n  React.HTMLAttributes<HTMLDivElement>\n>(({ className, ...props }, ref) => (\n  <div\n    ref={ref}\n    className={cn(\"text-sm text-muted-foreground\", className)}\n    {...props}\n  />\n))\nCardDescription.displayName = \"CardDescription\"\n\nconst CardContent = React.forwardRef<\n  HTMLDivElement,\n  React.HTMLAttributes<HTMLDivElement>\n>(({ className, ...props }, ref) => (\n  <div ref={ref} className={cn(\"p-6 pt-0\", className)} {...props} />\n))\nCardContent.displayName = \"CardContent\"\n\nconst CardFooter = React.forwardRef<\n  HTMLDivElement,\n  React.HTMLAttributes<HTMLDivElement>\n>(({ className, ...props }, ref) => (\n  <div\n    ref={ref}\n    className={cn(\"flex items-center p-6 pt-0\", className)}\n    {...props}\n  />\n))\nCardFooter.displayName = \"CardFooter\"\n\nexport { Card, CardHeader, CardFooter, CardTitle, CardDescription, CardContent }\n"
  },
  {
    "path": "web/components/ui/checkbox.tsx",
    "content": "\"use client\"\n\nimport * as React from \"react\"\nimport * as CheckboxPrimitive from \"@radix-ui/react-checkbox\"\nimport { Check } from \"lucide-react\"\n\nimport { cn } from \"@/lib/utils\"\n\nconst Checkbox = React.forwardRef<\n  React.ElementRef<typeof CheckboxPrimitive.Root>,\n  React.ComponentPropsWithoutRef<typeof CheckboxPrimitive.Root>\n>(({ className, ...props }, ref) => (\n  <CheckboxPrimitive.Root\n    ref={ref}\n    className={cn(\n      \"peer h-4 w-4 shrink-0 rounded-sm border border-primary shadow focus-visible:outline-none focus-visible:ring-1 focus-visible:ring-ring disabled:cursor-not-allowed disabled:opacity-50 data-[state=checked]:bg-primary data-[state=checked]:text-primary-foreground\",\n      className\n    )}\n    {...props}\n  >\n    <CheckboxPrimitive.Indicator\n      className={cn(\"flex items-center justify-center text-current\")}\n    >\n      <Check className=\"h-4 w-4\" />\n    </CheckboxPrimitive.Indicator>\n  </CheckboxPrimitive.Root>\n))\nCheckbox.displayName = CheckboxPrimitive.Root.displayName\n\nexport { Checkbox }\n"
  },
  {
    "path": "web/components/ui/dialog.tsx",
    "content": "// @filename: components/ui/dialog.tsx\n\"use client\"\n\nimport * as React from \"react\"\nimport * as DialogPrimitive from \"@radix-ui/react-dialog\"\nimport { X } from \"lucide-react\"\n\nimport { cn } from \"@/lib/utils\" // Adjust path if necessary\n\nconst Dialog = DialogPrimitive.Root\n\nconst DialogTrigger = DialogPrimitive.Trigger\n\nconst DialogPortal = DialogPrimitive.Portal\n\nconst DialogClose = DialogPrimitive.Close\n\nconst DialogOverlay = React.forwardRef<\n  React.ElementRef<typeof DialogPrimitive.Overlay>,\n  React.ComponentPropsWithoutRef<typeof DialogPrimitive.Overlay>\n>(({ className, ...props }, ref) => (\n  <DialogPrimitive.Overlay\n    ref={ref}\n    className={cn(\n      \"fixed inset-0 z-50 bg-black/80\", // Changed from bg-background/80 for darker overlay\n      \" data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0\",\n      className\n    )}\n    {...props}\n  />\n))\nDialogOverlay.displayName = DialogPrimitive.Overlay.displayName\n\nconst DialogContent = React.forwardRef<\n  React.ElementRef<typeof DialogPrimitive.Content>,\n  React.ComponentPropsWithoutRef<typeof DialogPrimitive.Content>\n>(({ className, children, ...props }, ref) => (\n  <DialogPortal>\n    <DialogOverlay />\n    <DialogPrimitive.Content\n      ref={ref}\n      className={cn(\n        \"fixed left-[50%] top-[50%] z-50 grid w-full max-w-lg translate-x-[-50%] translate-y-[-50%] gap-4 border bg-background p-6 shadow-lg duration-200 data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0 data-[state=closed]:zoom-out-95 data-[state=open]:zoom-in-95 data-[state=closed]:slide-out-to-left-1/2 data-[state=closed]:slide-out-to-top-[48%] data-[state=open]:slide-in-from-left-1/2 data-[state=open]:slide-in-from-top-[48%] sm:rounded-lg\",\n        className\n      )}\n      {...props}\n    >\n      {children}\n      
<DialogPrimitive.Close className=\"absolute right-4 top-4 rounded-sm opacity-70 ring-offset-background transition-opacity hover:opacity-100 focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2 disabled:pointer-events-none data-[state=open]:bg-accent data-[state=open]:text-muted-foreground\">\n        <X className=\"h-4 w-4\" />\n        <span className=\"sr-only\">Close</span>\n      </DialogPrimitive.Close>\n    </DialogPrimitive.Content>\n  </DialogPortal>\n))\nDialogContent.displayName = DialogPrimitive.Content.displayName\n\nconst DialogHeader = ({\n  className,\n  ...props\n}: React.HTMLAttributes<HTMLDivElement>) => (\n  <div\n    className={cn(\n      \"flex flex-col space-y-1.5 text-center sm:text-left\",\n      className\n    )}\n    {...props}\n  />\n)\nDialogHeader.displayName = \"DialogHeader\"\n\nconst DialogFooter = ({\n  className,\n  ...props\n}: React.HTMLAttributes<HTMLDivElement>) => (\n  <div\n    className={cn(\n      \"flex flex-col-reverse sm:flex-row sm:justify-end sm:space-x-2\",\n      className\n    )}\n    {...props}\n  />\n)\nDialogFooter.displayName = \"DialogFooter\"\n\nconst DialogTitle = React.forwardRef<\n  React.ElementRef<typeof DialogPrimitive.Title>,\n  React.ComponentPropsWithoutRef<typeof DialogPrimitive.Title>\n>(({ className, ...props }, ref) => (\n  <DialogPrimitive.Title\n    ref={ref}\n    className={cn(\n      \"text-lg font-semibold leading-none tracking-tight\",\n      className\n    )}\n    {...props}\n  />\n))\nDialogTitle.displayName = DialogPrimitive.Title.displayName\n\nconst DialogDescription = React.forwardRef<\n  React.ElementRef<typeof DialogPrimitive.Description>,\n  React.ComponentPropsWithoutRef<typeof DialogPrimitive.Description>\n>(({ className, ...props }, ref) => (\n  <DialogPrimitive.Description\n    ref={ref}\n    className={cn(\"text-sm text-muted-foreground\", className)}\n    {...props}\n  />\n))\nDialogDescription.displayName = DialogPrimitive.Description.displayName\n\nexport {\n  
Dialog,\n  DialogTrigger,\n  DialogContent,\n  DialogHeader,\n  DialogFooter,\n  DialogTitle,\n  DialogDescription,\n  DialogClose, // Also export DialogClose if you use it directly\n}"
  },
  {
    "path": "web/components/ui/input.tsx",
    "content": "import * as React from \"react\"\n\nimport { cn } from \"@/lib/utils\"\n\nconst Input = React.forwardRef<HTMLInputElement, React.ComponentProps<\"input\">>(\n  ({ className, type, ...props }, ref) => {\n    return (\n      <input\n        type={type}\n        className={cn(\n          \"flex h-9 w-full rounded-md border border-input bg-transparent px-3 py-1 text-base shadow-sm transition-colors file:border-0 file:bg-transparent file:text-sm file:font-medium file:text-foreground placeholder:text-muted-foreground focus-visible:outline-none focus-visible:ring-1 focus-visible:ring-ring disabled:cursor-not-allowed disabled:opacity-50 md:text-sm\",\n          className\n        )}\n        ref={ref}\n        {...props}\n      />\n    )\n  }\n)\nInput.displayName = \"Input\"\n\nexport { Input }\n"
  },
  {
    "path": "web/components/ui/popover.tsx",
    "content": "\"use client\"\n\nimport * as React from \"react\"\nimport * as PopoverPrimitive from \"@radix-ui/react-popover\"\n\nimport { cn } from \"@/lib/utils\"\n\nconst Popover = PopoverPrimitive.Root\n\nconst PopoverTrigger = PopoverPrimitive.Trigger\n\nconst PopoverAnchor = PopoverPrimitive.Anchor\n\nconst PopoverContent = React.forwardRef<\n  React.ElementRef<typeof PopoverPrimitive.Content>,\n  React.ComponentPropsWithoutRef<typeof PopoverPrimitive.Content>\n>(({ className, align = \"center\", sideOffset = 4, ...props }, ref) => (\n  <PopoverPrimitive.Portal>\n    <PopoverPrimitive.Content\n      ref={ref}\n      align={align}\n      sideOffset={sideOffset}\n      className={cn(\n        \"z-50 w-72 rounded-md border bg-popover p-4 text-popover-foreground shadow-md outline-none data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0 data-[state=closed]:zoom-out-95 data-[state=open]:zoom-in-95 data-[side=bottom]:slide-in-from-top-2 data-[side=left]:slide-in-from-right-2 data-[side=right]:slide-in-from-left-2 data-[side=top]:slide-in-from-bottom-2\",\n        className\n      )}\n      {...props}\n    />\n  </PopoverPrimitive.Portal>\n))\nPopoverContent.displayName = PopoverPrimitive.Content.displayName\n\nexport { Popover, PopoverTrigger, PopoverContent, PopoverAnchor }\n"
  },
  {
    "path": "web/components/ui/progress.tsx",
    "content": "\"use client\"\n\nimport * as React from \"react\"\nimport * as ProgressPrimitive from \"@radix-ui/react-progress\"\n\nimport { cn } from \"@/lib/utils\"\n\nconst Progress = React.forwardRef<\n  React.ElementRef<typeof ProgressPrimitive.Root>,\n  React.ComponentPropsWithoutRef<typeof ProgressPrimitive.Root>\n>(({ className, value, ...props }, ref) => (\n  <ProgressPrimitive.Root\n    ref={ref}\n    className={cn(\n      \"relative h-4 w-full overflow-hidden rounded-full bg-secondary\",\n      className\n    )}\n    {...props}\n  >\n    <ProgressPrimitive.Indicator\n      className=\"h-full w-full flex-1 bg-primary transition-all\"\n      style={{ transform: `translateX(-${100 - (value || 0)}%)` }}\n    />\n  </ProgressPrimitive.Root>\n))\nProgress.displayName = ProgressPrimitive.Root.displayName\n\nexport { Progress }"
  },
  {
    "path": "web/components/ui/separator.tsx",
    "content": "\"use client\"\n\nimport * as React from \"react\"\nimport * as SeparatorPrimitive from \"@radix-ui/react-separator\"\n\nimport { cn } from \"@/lib/utils\"\n\nconst Separator = React.forwardRef<\n  React.ElementRef<typeof SeparatorPrimitive.Root>,\n  React.ComponentPropsWithoutRef<typeof SeparatorPrimitive.Root>\n>(\n  (\n    { className, orientation = \"horizontal\", decorative = true, ...props },\n    ref\n  ) => (\n    <SeparatorPrimitive.Root\n      ref={ref}\n      decorative={decorative}\n      orientation={orientation}\n      className={cn(\n        \"shrink-0 bg-border\",\n        orientation === \"horizontal\" ? \"h-[1px] w-full\" : \"h-full w-[1px]\",\n        className\n      )}\n      {...props}\n    />\n  )\n)\nSeparator.displayName = SeparatorPrimitive.Root.displayName\n\nexport { Separator }\n"
  },
  {
    "path": "web/components/ui/sheet.tsx",
    "content": "\"use client\"\n\nimport * as React from \"react\"\nimport * as SheetPrimitive from \"@radix-ui/react-dialog\"\nimport { cva, type VariantProps } from \"class-variance-authority\"\nimport { X } from \"lucide-react\"\n\nimport { cn } from \"@/lib/utils\"\n\nconst Sheet = SheetPrimitive.Root\n\nconst SheetTrigger = SheetPrimitive.Trigger\n\nconst SheetClose = SheetPrimitive.Close\n\nconst SheetPortal = SheetPrimitive.Portal\n\nconst SheetOverlay = React.forwardRef<\n  React.ElementRef<typeof SheetPrimitive.Overlay>,\n  React.ComponentPropsWithoutRef<typeof SheetPrimitive.Overlay>\n>(({ className, ...props }, ref) => (\n  <SheetPrimitive.Overlay\n    className={cn(\n      \"fixed inset-0 z-50 bg-black/80  data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0\",\n      className\n    )}\n    {...props}\n    ref={ref}\n  />\n))\nSheetOverlay.displayName = SheetPrimitive.Overlay.displayName\n\nconst sheetVariants = cva(\n  \"fixed z-50 gap-4 bg-background p-6 shadow-lg transition ease-in-out data-[state=closed]:duration-300 data-[state=open]:duration-500 data-[state=open]:animate-in data-[state=closed]:animate-out\",\n  {\n    variants: {\n      side: {\n        top: \"inset-x-0 top-0 border-b data-[state=closed]:slide-out-to-top data-[state=open]:slide-in-from-top\",\n        bottom:\n          \"inset-x-0 bottom-0 border-t data-[state=closed]:slide-out-to-bottom data-[state=open]:slide-in-from-bottom\",\n        left: \"inset-y-0 left-0 h-full w-3/4 border-r data-[state=closed]:slide-out-to-left data-[state=open]:slide-in-from-left sm:max-w-sm\",\n        right:\n          \"inset-y-0 right-0 h-full w-3/4 border-l data-[state=closed]:slide-out-to-right data-[state=open]:slide-in-from-right sm:max-w-sm\",\n      },\n    },\n    defaultVariants: {\n      side: \"right\",\n    },\n  }\n)\n\ninterface SheetContentProps\n  extends React.ComponentPropsWithoutRef<typeof 
SheetPrimitive.Content>,\n    VariantProps<typeof sheetVariants> {}\n\nconst SheetContent = React.forwardRef<\n  React.ElementRef<typeof SheetPrimitive.Content>,\n  SheetContentProps\n>(({ side = \"right\", className, children, ...props }, ref) => (\n  <SheetPortal>\n    <SheetOverlay />\n    <SheetPrimitive.Content\n      ref={ref}\n      className={cn(sheetVariants({ side }), className)}\n      {...props}\n    >\n      <SheetPrimitive.Close className=\"absolute right-4 top-4 rounded-sm opacity-70 ring-offset-background transition-opacity hover:opacity-100 focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2 disabled:pointer-events-none data-[state=open]:bg-secondary\">\n        <X className=\"h-4 w-4\" />\n        <span className=\"sr-only\">Close</span>\n      </SheetPrimitive.Close>\n      {children}\n    </SheetPrimitive.Content>\n  </SheetPortal>\n))\nSheetContent.displayName = SheetPrimitive.Content.displayName\n\nconst SheetHeader = ({\n  className,\n  ...props\n}: React.HTMLAttributes<HTMLDivElement>) => (\n  <div\n    className={cn(\n      \"flex flex-col space-y-2 text-center sm:text-left\",\n      className\n    )}\n    {...props}\n  />\n)\nSheetHeader.displayName = \"SheetHeader\"\n\nconst SheetFooter = ({\n  className,\n  ...props\n}: React.HTMLAttributes<HTMLDivElement>) => (\n  <div\n    className={cn(\n      \"flex flex-col-reverse sm:flex-row sm:justify-end sm:space-x-2\",\n      className\n    )}\n    {...props}\n  />\n)\nSheetFooter.displayName = \"SheetFooter\"\n\nconst SheetTitle = React.forwardRef<\n  React.ElementRef<typeof SheetPrimitive.Title>,\n  React.ComponentPropsWithoutRef<typeof SheetPrimitive.Title>\n>(({ className, ...props }, ref) => (\n  <SheetPrimitive.Title\n    ref={ref}\n    className={cn(\"text-lg font-semibold text-foreground\", className)}\n    {...props}\n  />\n))\nSheetTitle.displayName = SheetPrimitive.Title.displayName\n\nconst SheetDescription = React.forwardRef<\n  React.ElementRef<typeof 
SheetPrimitive.Description>,\n  React.ComponentPropsWithoutRef<typeof SheetPrimitive.Description>\n>(({ className, ...props }, ref) => (\n  <SheetPrimitive.Description\n    ref={ref}\n    className={cn(\"text-sm text-muted-foreground\", className)}\n    {...props}\n  />\n))\nSheetDescription.displayName = SheetPrimitive.Description.displayName\n\nexport {\n  Sheet,\n  SheetPortal,\n  SheetOverlay,\n  SheetTrigger,\n  SheetClose,\n  SheetContent,\n  SheetHeader,\n  SheetFooter,\n  SheetTitle,\n  SheetDescription,\n}\n"
  },
  {
    "path": "web/components/ui/sidebar.tsx",
    "content": "\"use client\"\n\nimport * as React from \"react\"\nimport { Slot } from \"@radix-ui/react-slot\"\nimport { VariantProps, cva } from \"class-variance-authority\"\nimport { PanelLeft } from \"lucide-react\"\n\nimport { useIsMobile } from \"@/hooks/use-mobile\"\nimport { cn } from \"@/lib/utils\"\nimport { Button } from \"@/components/ui/button\"\nimport { Input } from \"@/components/ui/input\"\nimport { Separator } from \"@/components/ui/separator\"\nimport { Sheet, SheetContent, SheetTitle } from \"@/components/ui/sheet\"\nimport { Skeleton } from \"@/components/ui/skeleton\"\nimport {\n  Tooltip,\n  TooltipContent,\n  TooltipProvider,\n  TooltipTrigger,\n} from \"@/components/ui/tooltip\"\n\nconst SIDEBAR_COOKIE_NAME = \"sidebar:state\"\nconst SIDEBAR_COOKIE_MAX_AGE = 60 * 60 * 24 * 7\nconst SIDEBAR_WIDTH = \"14rem\"\nconst SIDEBAR_WIDTH_MOBILE = \"16rem\"\nconst SIDEBAR_WIDTH_ICON = \"3rem\"\nconst SIDEBAR_KEYBOARD_SHORTCUT = \"b\"\n\ntype SidebarContext = {\n  state: \"expanded\" | \"collapsed\"\n  open: boolean\n  setOpen: (open: boolean) => void\n  openMobile: boolean\n  setOpenMobile: (open: boolean) => void\n  isMobile: boolean\n  toggleSidebar: () => void\n}\n\nconst SidebarContext = React.createContext<SidebarContext | null>(null)\n\nfunction useSidebar() {\n  const context = React.useContext(SidebarContext)\n  if (!context) {\n    throw new Error(\"useSidebar must be used within a SidebarProvider.\")\n  }\n\n  return context\n}\n\nconst SidebarProvider = React.forwardRef<\n  HTMLDivElement,\n  React.ComponentProps<\"div\"> & {\n    defaultOpen?: boolean\n    open?: boolean\n    onOpenChange?: (open: boolean) => void\n  }\n>(\n  (\n    {\n      defaultOpen = true,\n      open: openProp,\n      onOpenChange: setOpenProp,\n      className,\n      style,\n      children,\n      ...props\n    },\n    ref\n  ) => {\n    const isMobile = useIsMobile()\n    const [openMobile, setOpenMobile] = React.useState(false)\n\n    // This is the internal 
state of the sidebar.\n    // We use openProp and setOpenProp for control from outside the component.\n    const [_open, _setOpen] = React.useState(defaultOpen)\n    const open = openProp ?? _open\n    const setOpen = React.useCallback(\n      (value: boolean | ((value: boolean) => boolean)) => {\n        const openState = typeof value === \"function\" ? value(open) : value\n        if (setOpenProp) {\n          setOpenProp(openState)\n        } else {\n          _setOpen(openState)\n        }\n\n        // This sets the cookie to keep the sidebar state.\n        document.cookie = `${SIDEBAR_COOKIE_NAME}=${openState}; path=/; max-age=${SIDEBAR_COOKIE_MAX_AGE}`\n      },\n      [setOpenProp, open]\n    )\n\n    // Helper to toggle the sidebar.\n    const toggleSidebar = React.useCallback(() => {\n      return isMobile\n        ? setOpenMobile((open) => !open)\n        : setOpen((open) => !open)\n    }, [isMobile, setOpen, setOpenMobile])\n\n    // Adds a keyboard shortcut to toggle the sidebar.\n    React.useEffect(() => {\n      const handleKeyDown = (event: KeyboardEvent) => {\n        if (\n          event.key === SIDEBAR_KEYBOARD_SHORTCUT &&\n          (event.metaKey || event.ctrlKey)\n        ) {\n          event.preventDefault()\n          toggleSidebar()\n        }\n      }\n\n      window.addEventListener(\"keydown\", handleKeyDown)\n      return () => window.removeEventListener(\"keydown\", handleKeyDown)\n    }, [toggleSidebar])\n\n    // We add a state so that we can do data-state=\"expanded\" or \"collapsed\".\n    // This makes it easier to style the sidebar with Tailwind classes.\n    const state = open ? 
\"expanded\" : \"collapsed\"\n\n    const contextValue = React.useMemo<SidebarContext>(\n      () => ({\n        state,\n        open,\n        setOpen,\n        isMobile,\n        openMobile,\n        setOpenMobile,\n        toggleSidebar,\n      }),\n      [state, open, setOpen, isMobile, openMobile, setOpenMobile, toggleSidebar]\n    )\n\n    return (\n      <SidebarContext.Provider value={contextValue}>\n        <TooltipProvider delayDuration={0}>\n          <div\n            style={\n              {\n                \"--sidebar-width\": SIDEBAR_WIDTH,\n                \"--sidebar-width-icon\": SIDEBAR_WIDTH_ICON,\n                ...style,\n              } as React.CSSProperties\n            }\n            className={cn(\n              \"group/sidebar-wrapper flex min-h-svh w-full has-[[data-variant=inset]]:bg-sidebar\",\n              className\n            )}\n            ref={ref}\n            {...props}\n          >\n            {children}\n          </div>\n        </TooltipProvider>\n      </SidebarContext.Provider>\n    )\n  }\n)\nSidebarProvider.displayName = \"SidebarProvider\"\n\nconst Sidebar = React.forwardRef<\n  HTMLDivElement,\n  React.ComponentProps<\"div\"> & {\n    side?: \"left\" | \"right\"\n    variant?: \"sidebar\" | \"floating\" | \"inset\"\n    collapsible?: \"offcanvas\" | \"icon\" | \"none\"\n  }\n>(\n  (\n    {\n      side = \"left\",\n      variant = \"sidebar\",\n      collapsible = \"offcanvas\",\n      className,\n      children,\n      ...props\n    },\n    ref\n  ) => {\n    const { isMobile, state, openMobile, setOpenMobile } = useSidebar()\n\n    if (collapsible === \"none\") {\n      return (\n        <div\n          className={cn(\n            \"flex h-full w-[--sidebar-width] flex-col bg-sidebar text-sidebar-foreground\",\n            className\n          )}\n          ref={ref}\n          {...props}\n        >\n          {children}\n        </div>\n      )\n    }\n\n    if (isMobile) {\n      return (\n        <Sheet 
open={openMobile} onOpenChange={setOpenMobile} {...props}>\n          <SheetContent\n            data-sidebar=\"sidebar\"\n            data-mobile=\"true\"\n            className=\"w-[--sidebar-width] bg-sidebar p-0 text-sidebar-foreground [&>button]:hidden\"\n            style={\n              {\n                \"--sidebar-width\": SIDEBAR_WIDTH_MOBILE,\n              } as React.CSSProperties\n            }\n            side={side}\n          >\n            <SheetTitle className=\"sr-only\">Mobile Menu</SheetTitle>\n            <div className=\"flex h-full w-full flex-col\">{children}</div>\n          </SheetContent>\n        </Sheet>\n      )\n    }\n\n    return (\n      <div\n        ref={ref}\n        className=\"group peer hidden text-sidebar-foreground md:block\"\n        data-state={state}\n        data-collapsible={state === \"collapsed\" ? collapsible : \"\"}\n        data-variant={variant}\n        data-side={side}\n      >\n        {/* This is what handles the sidebar gap on desktop */}\n        <div\n          className={cn(\n            \"relative h-svh w-[--sidebar-width] bg-transparent transition-[width] duration-200 ease-linear\",\n            \"group-data-[collapsible=offcanvas]:w-0\",\n            \"group-data-[side=right]:rotate-180\",\n            variant === \"floating\" || variant === \"inset\"\n              ? \"group-data-[collapsible=icon]:w-[calc(var(--sidebar-width-icon)_+_theme(spacing.4))]\"\n              : \"group-data-[collapsible=icon]:w-[--sidebar-width-icon]\"\n          )}\n        />\n        <div\n          className={cn(\n            \"fixed inset-y-0 z-10 hidden h-svh w-[--sidebar-width] transition-[left,right,width] duration-200 ease-linear md:flex\",\n            side === \"left\"\n              ? 
\"left-0 group-data-[collapsible=offcanvas]:left-[calc(var(--sidebar-width)*-1)]\"\n              : \"right-0 group-data-[collapsible=offcanvas]:right-[calc(var(--sidebar-width)*-1)]\",\n            // Adjust the padding for floating and inset variants.\n            variant === \"floating\" || variant === \"inset\"\n              ? \"p-2 group-data-[collapsible=icon]:w-[calc(var(--sidebar-width-icon)_+_theme(spacing.4)_+2px)]\"\n              : \"group-data-[collapsible=icon]:w-[--sidebar-width-icon] group-data-[side=left]:border-r group-data-[side=right]:border-l\",\n            className\n          )}\n          {...props}\n        >\n          <div\n            data-sidebar=\"sidebar\"\n            className=\"flex h-full w-full flex-col bg-sidebar group-data-[variant=floating]:rounded-lg group-data-[variant=floating]:border group-data-[variant=floating]:border-sidebar-border group-data-[variant=floating]:shadow\"\n          >\n            {children}\n          </div>\n        </div>\n      </div>\n    )\n  }\n)\nSidebar.displayName = \"Sidebar\"\n\nconst SidebarTrigger = React.forwardRef<\n  React.ElementRef<typeof Button>,\n  React.ComponentProps<typeof Button>\n>(({ className, onClick, ...props }, ref) => {\n  const { toggleSidebar } = useSidebar()\n\n  return (\n    <Button\n      ref={ref}\n      data-sidebar=\"trigger\"\n      variant=\"ghost\"\n      size=\"icon\"\n      className={cn(\"h-7 w-7\", className)}\n      onClick={(event) => {\n        onClick?.(event)\n        toggleSidebar()\n      }}\n      {...props}\n    >\n      <PanelLeft />\n      <span className=\"sr-only\">Toggle Sidebar</span>\n    </Button>\n  )\n})\nSidebarTrigger.displayName = \"SidebarTrigger\"\n\nconst SidebarRail = React.forwardRef<\n  HTMLButtonElement,\n  React.ComponentProps<\"button\">\n>(({ className, ...props }, ref) => {\n  const { toggleSidebar } = useSidebar()\n\n  return (\n    <button\n      ref={ref}\n      data-sidebar=\"rail\"\n      aria-label=\"Toggle 
Sidebar\"\n      tabIndex={-1}\n      onClick={toggleSidebar}\n      title=\"Toggle Sidebar\"\n      className={cn(\n        \"absolute inset-y-0 z-20 hidden w-4 -translate-x-1/2 transition-all ease-linear after:absolute after:inset-y-0 after:left-1/2 after:w-[2px] hover:after:bg-sidebar-border group-data-[side=left]:-right-4 group-data-[side=right]:left-0 sm:flex\",\n        \"[[data-side=left]_&]:cursor-w-resize [[data-side=right]_&]:cursor-e-resize\",\n        \"[[data-side=left][data-state=collapsed]_&]:cursor-e-resize [[data-side=right][data-state=collapsed]_&]:cursor-w-resize\",\n        \"group-data-[collapsible=offcanvas]:translate-x-0 group-data-[collapsible=offcanvas]:after:left-full group-data-[collapsible=offcanvas]:hover:bg-sidebar\",\n        \"[[data-side=left][data-collapsible=offcanvas]_&]:-right-2\",\n        \"[[data-side=right][data-collapsible=offcanvas]_&]:-left-2\",\n        className\n      )}\n      {...props}\n    />\n  )\n})\nSidebarRail.displayName = \"SidebarRail\"\n\nconst SidebarInset = React.forwardRef<\n  HTMLDivElement,\n  React.ComponentProps<\"main\">\n>(({ className, ...props }, ref) => {\n  return (\n    <main\n      ref={ref}\n      className={cn(\n        \"relative flex min-h-svh flex-1 flex-col bg-background\",\n        \"peer-data-[variant=inset]:min-h-[calc(100svh-theme(spacing.4))] md:peer-data-[variant=inset]:m-2 md:peer-data-[state=collapsed]:peer-data-[variant=inset]:ml-2 md:peer-data-[variant=inset]:ml-0 md:peer-data-[variant=inset]:rounded-xl md:peer-data-[variant=inset]:shadow\",\n        className\n      )}\n      {...props}\n    />\n  )\n})\nSidebarInset.displayName = \"SidebarInset\"\n\nconst SidebarInput = React.forwardRef<\n  React.ElementRef<typeof Input>,\n  React.ComponentProps<typeof Input>\n>(({ className, ...props }, ref) => {\n  return (\n    <Input\n      ref={ref}\n      data-sidebar=\"input\"\n      className={cn(\n        \"h-8 w-full bg-background shadow-none focus-visible:ring-2 
focus-visible:ring-sidebar-ring\",\n        className\n      )}\n      {...props}\n    />\n  )\n})\nSidebarInput.displayName = \"SidebarInput\"\n\nconst SidebarHeader = React.forwardRef<\n  HTMLDivElement,\n  React.ComponentProps<\"div\">\n>(({ className, ...props }, ref) => {\n  return (\n    <div\n      ref={ref}\n      data-sidebar=\"header\"\n      className={cn(\"flex flex-col gap-2 p-2\", className)}\n      {...props}\n    />\n  )\n})\nSidebarHeader.displayName = \"SidebarHeader\"\n\nconst SidebarFooter = React.forwardRef<\n  HTMLDivElement,\n  React.ComponentProps<\"div\">\n>(({ className, ...props }, ref) => {\n  return (\n    <div\n      ref={ref}\n      data-sidebar=\"footer\"\n      className={cn(\"flex flex-col gap-2 p-2\", className)}\n      {...props}\n    />\n  )\n})\nSidebarFooter.displayName = \"SidebarFooter\"\n\nconst SidebarSeparator = React.forwardRef<\n  React.ElementRef<typeof Separator>,\n  React.ComponentProps<typeof Separator>\n>(({ className, ...props }, ref) => {\n  return (\n    <Separator\n      ref={ref}\n      data-sidebar=\"separator\"\n      className={cn(\"mx-2 w-auto bg-sidebar-border\", className)}\n      {...props}\n    />\n  )\n})\nSidebarSeparator.displayName = \"SidebarSeparator\"\n\nconst SidebarContent = React.forwardRef<\n  HTMLDivElement,\n  React.ComponentProps<\"div\">\n>(({ className, ...props }, ref) => {\n  return (\n    <div\n      ref={ref}\n      data-sidebar=\"content\"\n      className={cn(\n        \"flex min-h-0 flex-1 flex-col gap-2 overflow-auto group-data-[collapsible=icon]:overflow-hidden\",\n        className\n      )}\n      {...props}\n    />\n  )\n})\nSidebarContent.displayName = \"SidebarContent\"\n\nconst SidebarGroup = React.forwardRef<\n  HTMLDivElement,\n  React.ComponentProps<\"div\">\n>(({ className, ...props }, ref) => {\n  return (\n    <div\n      ref={ref}\n      data-sidebar=\"group\"\n      className={cn(\"relative flex w-full min-w-0 flex-col p-2\", className)}\n      {...props}\n    
/>\n  )\n})\nSidebarGroup.displayName = \"SidebarGroup\"\n\nconst SidebarGroupLabel = React.forwardRef<\n  HTMLDivElement,\n  React.ComponentProps<\"div\"> & { asChild?: boolean }\n>(({ className, asChild = false, ...props }, ref) => {\n  const Comp = asChild ? Slot : \"div\"\n\n  return (\n    <Comp\n      ref={ref}\n      data-sidebar=\"group-label\"\n      className={cn(\n        \"flex h-8 shrink-0 items-center rounded-md px-2 text-xs font-medium text-sidebar-foreground/70 outline-none ring-sidebar-ring transition-[margin,opa] duration-200 ease-linear focus-visible:ring-2 [&>svg]:size-4 [&>svg]:shrink-0\",\n        \"group-data-[collapsible=icon]:-mt-8 group-data-[collapsible=icon]:opacity-0\",\n        className\n      )}\n      {...props}\n    />\n  )\n})\nSidebarGroupLabel.displayName = \"SidebarGroupLabel\"\n\nconst SidebarGroupAction = React.forwardRef<\n  HTMLButtonElement,\n  React.ComponentProps<\"button\"> & { asChild?: boolean }\n>(({ className, asChild = false, ...props }, ref) => {\n  const Comp = asChild ? 
Slot : \"button\"\n\n  return (\n    <Comp\n      ref={ref}\n      data-sidebar=\"group-action\"\n      className={cn(\n        \"absolute right-3 top-3.5 flex aspect-square w-5 items-center justify-center rounded-md p-0 text-sidebar-foreground outline-none ring-sidebar-ring transition-transform hover:bg-sidebar-accent hover:text-sidebar-accent-foreground focus-visible:ring-2 [&>svg]:size-4 [&>svg]:shrink-0\",\n        // Increases the hit area of the button on mobile.\n        \"after:absolute after:-inset-2 after:md:hidden\",\n        \"group-data-[collapsible=icon]:hidden\",\n        className\n      )}\n      {...props}\n    />\n  )\n})\nSidebarGroupAction.displayName = \"SidebarGroupAction\"\n\nconst SidebarGroupContent = React.forwardRef<\n  HTMLDivElement,\n  React.ComponentProps<\"div\">\n>(({ className, ...props }, ref) => (\n  <div\n    ref={ref}\n    data-sidebar=\"group-content\"\n    className={cn(\"w-full text-sm\", className)}\n    {...props}\n  />\n))\nSidebarGroupContent.displayName = \"SidebarGroupContent\"\n\nconst SidebarMenu = React.forwardRef<\n  HTMLUListElement,\n  React.ComponentProps<\"ul\">\n>(({ className, ...props }, ref) => (\n  <ul\n    ref={ref}\n    data-sidebar=\"menu\"\n    className={cn(\"flex w-full min-w-0 flex-col gap-1\", className)}\n    {...props}\n  />\n))\nSidebarMenu.displayName = \"SidebarMenu\"\n\nconst SidebarMenuItem = React.forwardRef<\n  HTMLLIElement,\n  React.ComponentProps<\"li\">\n>(({ className, ...props }, ref) => (\n  <li\n    ref={ref}\n    data-sidebar=\"menu-item\"\n    className={cn(\"group/menu-item relative\", className)}\n    {...props}\n  />\n))\nSidebarMenuItem.displayName = \"SidebarMenuItem\"\n\nconst sidebarMenuButtonVariants = cva(\n  \"peer/menu-button flex w-full items-center gap-2 overflow-hidden rounded-md p-2 text-left text-sm outline-none ring-sidebar-ring transition-[width,height,padding] hover:bg-sidebar-accent hover:text-sidebar-accent-foreground focus-visible:ring-2 
active:bg-sidebar-accent active:text-sidebar-accent-foreground disabled:pointer-events-none disabled:opacity-50 group-has-[[data-sidebar=menu-action]]/menu-item:pr-8 aria-disabled:pointer-events-none aria-disabled:opacity-50 data-[active=true]:bg-sidebar-accent data-[active=true]:font-medium data-[active=true]:text-sidebar-accent-foreground data-[state=open]:hover:bg-sidebar-accent data-[state=open]:hover:text-sidebar-accent-foreground group-data-[collapsible=icon]:!size-8 group-data-[collapsible=icon]:!p-2 [&>span:last-child]:truncate [&>svg]:size-4 [&>svg]:shrink-0\",\n  {\n    variants: {\n      variant: {\n        default: \"hover:bg-sidebar-accent hover:text-sidebar-accent-foreground\",\n        outline:\n          \"bg-background shadow-[0_0_0_1px_hsl(var(--sidebar-border))] hover:bg-sidebar-accent hover:text-sidebar-accent-foreground hover:shadow-[0_0_0_1px_hsl(var(--sidebar-accent))]\",\n      },\n      size: {\n        default: \"h-8 text-sm\",\n        sm: \"h-7 text-xs\",\n        lg: \"h-12 text-sm group-data-[collapsible=icon]:!p-0\",\n      },\n    },\n    defaultVariants: {\n      variant: \"default\",\n      size: \"default\",\n    },\n  }\n)\n\nconst SidebarMenuButton = React.forwardRef<\n  HTMLButtonElement,\n  React.ComponentProps<\"button\"> & {\n    asChild?: boolean\n    isActive?: boolean\n    tooltip?: string | React.ComponentProps<typeof TooltipContent>\n  } & VariantProps<typeof sidebarMenuButtonVariants>\n>(\n  (\n    {\n      asChild = false,\n      isActive = false,\n      variant = \"default\",\n      size = \"default\",\n      tooltip,\n      className,\n      ...props\n    },\n    ref\n  ) => {\n    const Comp = asChild ? 
Slot : \"button\"\n    const { isMobile, state } = useSidebar()\n\n    const button = (\n      <Comp\n        ref={ref}\n        data-sidebar=\"menu-button\"\n        data-size={size}\n        data-active={isActive}\n        className={cn(sidebarMenuButtonVariants({ variant, size }), className)}\n        {...props}\n      />\n    )\n\n    if (!tooltip) {\n      return button\n    }\n\n    if (typeof tooltip === \"string\") {\n      tooltip = {\n        children: tooltip,\n      }\n    }\n\n    return (\n      <Tooltip>\n        <TooltipTrigger asChild>{button}</TooltipTrigger>\n        <TooltipContent\n          side=\"right\"\n          align=\"center\"\n          hidden={state !== \"collapsed\" || isMobile}\n          {...tooltip}\n        />\n      </Tooltip>\n    )\n  }\n)\nSidebarMenuButton.displayName = \"SidebarMenuButton\"\n\nconst SidebarMenuAction = React.forwardRef<\n  HTMLButtonElement,\n  React.ComponentProps<\"button\"> & {\n    asChild?: boolean\n    showOnHover?: boolean\n  }\n>(({ className, asChild = false, showOnHover = false, ...props }, ref) => {\n  const Comp = asChild ? 
Slot : \"button\"\n\n  return (\n    <Comp\n      ref={ref}\n      data-sidebar=\"menu-action\"\n      className={cn(\n        \"absolute right-1 top-1.5 flex aspect-square w-5 items-center justify-center rounded-md p-0 text-sidebar-foreground outline-none ring-sidebar-ring transition-transform hover:bg-sidebar-accent hover:text-sidebar-accent-foreground focus-visible:ring-2 peer-hover/menu-button:text-sidebar-accent-foreground [&>svg]:size-4 [&>svg]:shrink-0\",\n        // Increases the hit area of the button on mobile.\n        \"after:absolute after:-inset-2 after:md:hidden\",\n        \"peer-data-[size=sm]/menu-button:top-1\",\n        \"peer-data-[size=default]/menu-button:top-1.5\",\n        \"peer-data-[size=lg]/menu-button:top-2.5\",\n        \"group-data-[collapsible=icon]:hidden\",\n        showOnHover &&\n        \"group-focus-within/menu-item:opacity-100 group-hover/menu-item:opacity-100 data-[state=open]:opacity-100 peer-data-[active=true]/menu-button:text-sidebar-accent-foreground md:opacity-0\",\n        className\n      )}\n      {...props}\n    />\n  )\n})\nSidebarMenuAction.displayName = \"SidebarMenuAction\"\n\nconst SidebarMenuBadge = React.forwardRef<\n  HTMLDivElement,\n  React.ComponentProps<\"div\">\n>(({ className, ...props }, ref) => (\n  <div\n    ref={ref}\n    data-sidebar=\"menu-badge\"\n    className={cn(\n      \"pointer-events-none absolute right-1 flex h-5 min-w-5 select-none items-center justify-center rounded-md px-1 text-xs font-medium tabular-nums text-sidebar-foreground\",\n      \"peer-hover/menu-button:text-sidebar-accent-foreground peer-data-[active=true]/menu-button:text-sidebar-accent-foreground\",\n      \"peer-data-[size=sm]/menu-button:top-1\",\n      \"peer-data-[size=default]/menu-button:top-1.5\",\n      \"peer-data-[size=lg]/menu-button:top-2.5\",\n      \"group-data-[collapsible=icon]:hidden\",\n      className\n    )}\n    {...props}\n  />\n))\nSidebarMenuBadge.displayName = \"SidebarMenuBadge\"\n\nconst 
SidebarMenuSkeleton = React.forwardRef<\n  HTMLDivElement,\n  React.ComponentProps<\"div\"> & {\n    showIcon?: boolean\n  }\n>(({ className, showIcon = false, ...props }, ref) => {\n  // Random width between 50 to 90%.\n  const width = React.useMemo(() => {\n    return `${Math.floor(Math.random() * 40) + 50}%`\n  }, [])\n\n  return (\n    <div\n      ref={ref}\n      data-sidebar=\"menu-skeleton\"\n      className={cn(\"flex h-8 items-center gap-2 rounded-md px-2\", className)}\n      {...props}\n    >\n      {showIcon && (\n        <Skeleton\n          className=\"size-4 rounded-md\"\n          data-sidebar=\"menu-skeleton-icon\"\n        />\n      )}\n      <Skeleton\n        className=\"h-4 max-w-[--skeleton-width] flex-1\"\n        data-sidebar=\"menu-skeleton-text\"\n        style={\n          {\n            \"--skeleton-width\": width,\n          } as React.CSSProperties\n        }\n      />\n    </div>\n  )\n})\nSidebarMenuSkeleton.displayName = \"SidebarMenuSkeleton\"\n\nconst SidebarMenuSub = React.forwardRef<\n  HTMLUListElement,\n  React.ComponentProps<\"ul\">\n>(({ className, ...props }, ref) => (\n  <ul\n    ref={ref}\n    data-sidebar=\"menu-sub\"\n    className={cn(\n      \"mx-3.5 flex min-w-0 translate-x-px flex-col gap-1 border-l border-sidebar-border px-2.5 py-0.5\",\n      \"group-data-[collapsible=icon]:hidden\",\n      className\n    )}\n    {...props}\n  />\n))\nSidebarMenuSub.displayName = \"SidebarMenuSub\"\n\nconst SidebarMenuSubItem = React.forwardRef<\n  HTMLLIElement,\n  React.ComponentProps<\"li\">\n>(({ ...props }, ref) => <li ref={ref} {...props} />)\nSidebarMenuSubItem.displayName = \"SidebarMenuSubItem\"\n\nconst SidebarMenuSubButton = React.forwardRef<\n  HTMLAnchorElement,\n  React.ComponentProps<\"a\"> & {\n    asChild?: boolean\n    size?: \"sm\" | \"md\"\n    isActive?: boolean\n  }\n>(({ asChild = false, size = \"md\", isActive, className, ...props }, ref) => {\n  const Comp = asChild ? 
Slot : \"a\"\n\n  return (\n    <Comp\n      ref={ref}\n      data-sidebar=\"menu-sub-button\"\n      data-size={size}\n      data-active={isActive}\n      className={cn(\n        \"flex h-7 min-w-0 -translate-x-px items-center gap-2 overflow-hidden rounded-md px-2 text-sidebar-foreground outline-none ring-sidebar-ring hover:bg-sidebar-accent hover:text-sidebar-accent-foreground focus-visible:ring-2 active:bg-sidebar-accent active:text-sidebar-accent-foreground disabled:pointer-events-none disabled:opacity-50 aria-disabled:pointer-events-none aria-disabled:opacity-50 [&>span:last-child]:truncate [&>svg]:size-4 [&>svg]:shrink-0 [&>svg]:text-sidebar-accent-foreground\",\n        \"data-[active=true]:bg-sidebar-accent data-[active=true]:text-sidebar-accent-foreground\",\n        size === \"sm\" && \"text-xs\",\n        size === \"md\" && \"text-sm\",\n        \"group-data-[collapsible=icon]:hidden\",\n        className\n      )}\n      {...props}\n    />\n  )\n})\nSidebarMenuSubButton.displayName = \"SidebarMenuSubButton\"\n\nexport {\n  Sidebar,\n  SidebarContent,\n  SidebarFooter,\n  SidebarGroup,\n  SidebarGroupAction,\n  SidebarGroupContent,\n  SidebarGroupLabel,\n  SidebarHeader,\n  SidebarInput,\n  SidebarInset,\n  SidebarMenu,\n  SidebarMenuAction,\n  SidebarMenuBadge,\n  SidebarMenuButton,\n  SidebarMenuItem,\n  SidebarMenuSkeleton,\n  SidebarMenuSub,\n  SidebarMenuSubButton,\n  SidebarMenuSubItem,\n  SidebarProvider,\n  SidebarRail,\n  SidebarSeparator,\n  SidebarTrigger,\n  useSidebar,\n}\n"
  },
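The `SidebarProvider` above persists its open/collapsed state in a `sidebar:state` cookie (`SIDEBAR_COOKIE_NAME`) so the choice survives reloads. A minimal sketch of how a server component could read that cookie back to seed `defaultOpen`; the `parseSidebarCookie` helper is hypothetical, not part of the file above:

```typescript
// Hypothetical helper: extract the sidebar state from a raw Cookie header.
// The cookie name mirrors SIDEBAR_COOKIE_NAME ("sidebar:state") in sidebar.tsx.
function parseSidebarCookie(cookieHeader: string): boolean {
  const match = cookieHeader
    .split(";")
    .map((part) => part.trim())
    .find((part) => part.startsWith("sidebar:state="));
  // Default to open when the cookie is absent, matching defaultOpen = true.
  if (!match) return true;
  return match.slice("sidebar:state=".length) === "true";
}
```

In a Next.js app the result would typically be passed as `<SidebarProvider defaultOpen={parseSidebarCookie(...)}>` from a layout that has access to the request cookies.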
  {
    "path": "web/components/ui/skeleton.tsx",
    "content": "import { cn } from \"@/lib/utils\"\n\nfunction Skeleton({\n  className,\n  ...props\n}: React.HTMLAttributes<HTMLDivElement>) {\n  return (\n    <div\n      className={cn(\"animate-pulse rounded-md bg-primary/10\", className)}\n      {...props}\n    />\n  )\n}\n\nexport { Skeleton }\n"
  },
  {
    "path": "web/components/ui/textarea.tsx",
    "content": "import * as React from \"react\"\n\nimport { cn } from \"@/lib/utils\"\n\nconst Textarea = React.forwardRef<\n  HTMLTextAreaElement,\n  React.ComponentProps<\"textarea\">\n>(({ className, ...props }, ref) => {\n  return (\n    <textarea\n      className={cn(\n        \"flex min-h-[60px] w-full rounded-md border border-input bg-transparent px-3 py-2 text-base shadow-sm placeholder:text-muted-foreground focus-visible:outline-none focus-visible:ring-1 focus-visible:ring-ring disabled:cursor-not-allowed disabled:opacity-50 md:text-sm\",\n        className\n      )}\n      ref={ref}\n      {...props}\n    />\n  )\n})\nTextarea.displayName = \"Textarea\"\n\nexport { Textarea }\n"
  },
  {
    "path": "web/components/ui/tooltip.tsx",
    "content": "\"use client\"\n\nimport * as React from \"react\"\nimport * as TooltipPrimitive from \"@radix-ui/react-tooltip\"\n\nimport { cn } from \"@/lib/utils\"\n\nconst TooltipProvider = TooltipPrimitive.Provider\n\nconst Tooltip = TooltipPrimitive.Root\n\nconst TooltipTrigger = TooltipPrimitive.Trigger\n\nconst TooltipContent = React.forwardRef<\n  React.ElementRef<typeof TooltipPrimitive.Content>,\n  React.ComponentPropsWithoutRef<typeof TooltipPrimitive.Content>\n>(({ className, sideOffset = 4, ...props }, ref) => (\n  <TooltipPrimitive.Portal>\n    <TooltipPrimitive.Content\n      ref={ref}\n      sideOffset={sideOffset}\n      className={cn(\n        \"z-50 overflow-hidden rounded-md bg-primary px-3 py-1.5 text-xs text-primary-foreground animate-in fade-in-0 zoom-in-95 data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=closed]:zoom-out-95 data-[side=bottom]:slide-in-from-top-2 data-[side=left]:slide-in-from-right-2 data-[side=right]:slide-in-from-left-2 data-[side=top]:slide-in-from-bottom-2\",\n        className\n      )}\n      {...props}\n    />\n  </TooltipPrimitive.Portal>\n))\nTooltipContent.displayName = TooltipPrimitive.Content.displayName\n\nexport { Tooltip, TooltipTrigger, TooltipContent, TooltipProvider }\n"
  },
  {
    "path": "web/components.json",
    "content": "{\n  \"$schema\": \"https://ui.shadcn.com/schema.json\",\n  \"style\": \"new-york\",\n  \"rsc\": true,\n  \"tsx\": true,\n  \"tailwind\": {\n    \"config\": \"tailwind.config.ts\",\n    \"css\": \"app/globals.css\",\n    \"baseColor\": \"zinc\",\n    \"cssVariables\": true,\n    \"prefix\": \"\"\n  },\n  \"aliases\": {\n    \"components\": \"@/components\",\n    \"utils\": \"@/lib/utils\",\n    \"ui\": \"@/components/ui\",\n    \"lib\": \"@/lib\",\n    \"hooks\": \"@/hooks\"\n  },\n  \"iconLibrary\": \"lucide\"\n}"
  },
  {
    "path": "web/eslint.config.mjs",
    "content": "import { dirname } from \"path\";\nimport { fileURLToPath } from \"url\";\nimport { FlatCompat } from \"@eslint/eslintrc\";\n\nconst __filename = fileURLToPath(import.meta.url);\nconst __dirname = dirname(__filename);\n\nconst compat = new FlatCompat({\n  baseDirectory: __dirname,\n});\n\nconst eslintConfig = [\n  ...compat.extends(\"next/core-web-vitals\", \"next/typescript\"),\n];\n\nexport default eslintConfig;\n"
  },
  {
    "path": "web/hooks/use-mobile.tsx",
    "content": "import * as React from \"react\"\n\nconst MOBILE_BREAKPOINT = 768\n\nexport function useIsMobile() {\n  const [isMobile, setIsMobile] = React.useState<boolean | undefined>(undefined)\n\n  React.useEffect(() => {\n    const mql = window.matchMedia(`(max-width: ${MOBILE_BREAKPOINT - 1}px)`)\n    const onChange = () => {\n      setIsMobile(window.innerWidth < MOBILE_BREAKPOINT)\n    }\n    mql.addEventListener(\"change\", onChange)\n    setIsMobile(window.innerWidth < MOBILE_BREAKPOINT)\n    return () => mql.removeEventListener(\"change\", onChange)\n  }, [])\n\n  return !!isMobile\n}\n"
  },
  {
    "path": "web/hooks/useLangGraphAgent/actions.ts",
    "content": "'use server';\n\nimport { Checkpoint } from './types';\n\nconst AGENT_URL = process.env.NEXT_PUBLIC_AGENT_URL;\n\nexport async function getHistory<TAgentState, TInterruptValue>(threadId: string): Promise<Checkpoint<TAgentState, TInterruptValue>[]> {\n  const response = await fetch(`${AGENT_URL}/history?thread_id=${threadId}`);\n\n  if (!response.ok) {\n    let detail: string | undefined;\n    try {\n      // Try to parse a JSON error body.\n      const error = await response.json();\n      detail = error.detail;\n    } catch {\n      // Response body was not JSON; fall back to the status text below.\n    }\n    throw new Error(detail || `Failed to fetch agent history: ${response.statusText || response.status}`);\n  }\n\n  try {\n    const data = await response.json();\n    return data as Checkpoint<TAgentState, TInterruptValue>[];\n  } catch (error) {\n    console.error('Error parsing history response:', error);\n    throw new Error('Failed to parse agent history response');\n  }\n}\n\nexport async function stopAgent(threadId: string): Promise<void> {\n  const response = await fetch(`${AGENT_URL}/agent/stop`, {\n    method: 'POST',\n    headers: {\n      'Content-Type': 'application/json',\n    },\n    body: JSON.stringify({ thread_id: threadId }),\n  });\n\n  if (!response.ok) {\n    let detail: string | undefined;\n    try {\n      // Try to parse a JSON error body.\n      const error = await response.json();\n      detail = error.detail;\n    } catch {\n      // Response body was not JSON; fall back to the status text below.\n    }\n    throw new Error(detail || `Failed to stop agent: ${response.statusText || response.status}`);\n  }\n}"
  },
  {
    "path": "web/hooks/useLangGraphAgent/api.ts",
    "content": "import {\n  AgentEvent,\n  RunAgentInputInternal,\n  ResumeAgentInputInternal,\n  ReplayAgentInputInternal,\n  ForkAgentInputInternal\n} from './types';\n\nfunction parseSSEMessage<TAgentState, TInterruptValue>(chunk: string): AgentEvent<TAgentState, TInterruptValue>[] {\n  const messages: AgentEvent<TAgentState, TInterruptValue>[] = [];\n  const lines = chunk.split('\\n');\n  let currentMessage: Partial<AgentEvent<TAgentState, TInterruptValue>> = {};\n\n  for (const line of lines) {\n    if (!line.trim()) {\n      if (Object.keys(currentMessage).length) {\n        messages.push(currentMessage as AgentEvent<TAgentState, TInterruptValue>);\n        currentMessage = {};\n      }\n      continue;\n    }\n\n    const [field, ...valueArr] = line.split(':');\n    const value = valueArr.join(':').trim();\n\n    switch (field) {\n      case 'event':\n        currentMessage.event = value;\n        break;\n      case 'data':\n        currentMessage.data = JSON.parse(value);\n        break;\n    }\n  }\n\n  if (Object.keys(currentMessage).length) {\n    messages.push(currentMessage as AgentEvent<TAgentState, TInterruptValue>);\n  }\n\n  return messages;\n}\n\nexport async function* callAgentRoute<TAgentState, TInterruptValue, TResumeValue>(\n  body: RunAgentInputInternal<TAgentState> | ResumeAgentInputInternal<TResumeValue> | ForkAgentInputInternal<TAgentState> | ReplayAgentInputInternal):\n  AsyncGenerator<AgentEvent<TAgentState, TInterruptValue>, void, unknown> {\n  try {\n    const response = await fetch('/api/agent', {\n      method: 'POST',\n      headers: {\n        'Content-Type': 'application/json',\n      },\n      body: JSON.stringify(body),\n    });\n\n    if (!response.ok) {\n      const error = await response.json();\n      throw new Error(error.detail || 'Failed to call agent route');\n    }\n\n    const reader = response.body?.getReader();\n    if (!reader) throw new Error('No reader available');\n\n    const decoder = new TextDecoder();\n\n    
while (true) {\n      const { done, value } = await reader.read();\n      if (done) break;\n\n      // stream: true keeps multi-byte characters split across chunks intact.\n      // Note: this assumes each read() delivers whole SSE messages; partial\n      // messages at chunk boundaries are not buffered.\n      const chunk = decoder.decode(value, { stream: true });\n      const parsedMessages = parseSSEMessage<TAgentState, TInterruptValue>(chunk);\n\n      for (const msg of parsedMessages) {\n        yield msg;\n      }\n    }\n  } catch (error) {\n    console.error('Error in callAgentRoute.', error);\n    throw error;\n  }\n}"
  },
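`parseSSEMessage` in `api.ts` above expects the server-sent-event wire format: an `event:` field line, a `data:` line carrying JSON, and a blank line terminating each message. A standalone sketch of the same parsing logic (simplified and untyped against the agent's event schema; the sample payload is illustrative):

```typescript
interface SSEMessage {
  event?: string;
  data?: unknown;
}

// Parse a chunk of SSE text into messages, mirroring parseSSEMessage above.
function parseSSE(chunk: string): SSEMessage[] {
  const messages: SSEMessage[] = [];
  let current: SSEMessage = {};

  for (const line of chunk.split("\n")) {
    if (!line.trim()) {
      // A blank line terminates the current message.
      if (Object.keys(current).length) {
        messages.push(current);
        current = {};
      }
      continue;
    }
    // Split only on the first colon; the JSON payload may itself contain colons.
    const [field, ...rest] = line.split(":");
    const value = rest.join(":").trim();
    if (field === "event") current.event = value;
    if (field === "data") current.data = JSON.parse(value);
  }
  if (Object.keys(current).length) messages.push(current);
  return messages;
}
```

For example, `parseSSE('event: checkpoint\ndata: {"step": 1}\n\n')` yields one message with `event === "checkpoint"` and the parsed JSON object as `data`.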
  {
    "path": "web/hooks/useLangGraphAgent/ascii-tree.ts",
    "content": "import { Checkpoint } from \"./types\";\n\ninterface TreeNode {\n  id: string;\n  next?: string;\n  state?: any;\n  children: TreeNode[];\n}\n\n//Helper function for debugging purposes to build a tree from a list of checkpoints.\nfunction buildTree<TAgentState, TInterruptValue>(checkpoints: Checkpoint<TAgentState, TInterruptValue>[]): TreeNode[] {\n  const nodes = new Map<string, TreeNode>();\n  const roots: TreeNode[] = [];\n\n  // First create all nodes\n  checkpoints.forEach(checkpoint => {\n    const id = checkpoint.config.configurable.checkpoint_id;\n    if (!nodes.has(id)) {\n      nodes.set(id, {\n        id,\n        next: checkpoint.next?.[0],\n        state: checkpoint.values,\n        children: []\n      });\n    }\n  });\n\n  // Then build the tree structure\n  checkpoints.forEach(checkpoint => {\n    const nodeId = checkpoint.config.configurable.checkpoint_id;\n    const parentId = checkpoint.parent_config?.configurable.checkpoint_id;\n    const node = nodes.get(nodeId)!;\n\n    if (parentId && nodes.has(parentId)) {\n      const parent = nodes.get(parentId)!;\n      parent.children.push(node);\n    } else {\n      roots.push(node);\n    }\n  });\n\n  return roots;\n}\n\ninterface PrintOptions {\n  showState?: boolean;\n  renderState?: (state: any) => string;\n}\n\nfunction defaultRenderState(state: any): string {\n  const stateStr = JSON.stringify(state);\n  if (stateStr.length > 50) {\n    return `[${stateStr.substring(0, 47)}...]`;\n  }\n  return `[${stateStr}]`;\n}\n\nfunction printTreeNode(node: TreeNode, options: PrintOptions = {}, prefix: string = \"\", isLast: boolean = true): string {\n  const connector = isLast ? \"└── \" : \"├── \";\n  const childPrefix = isLast ? 
\"    \" : \"│   \";\n\n  let result = prefix + connector + node.id;\n  if (node.next) {\n    result += ` → ${node.next}`;\n  }\n  if (options.showState && node.state) {\n    const renderFn = options.renderState || defaultRenderState;\n    result += \" \" + renderFn(node.state);\n  }\n  result += \"\\n\";\n\n  for (let i = 0; i < node.children.length; i++) {\n    result += printTreeNode(\n      node.children[i],\n      options,\n      prefix + childPrefix,\n      i === node.children.length - 1\n    );\n  }\n\n  return result;\n}\n\nexport function printCheckpointTree<TAgentState, TInterruptValue>(checkpoints: Checkpoint<TAgentState, TInterruptValue>[], options: PrintOptions = {}): string {\n  const roots = buildTree(checkpoints);\n  let result = \"\";\n\n  roots.forEach((root, index) => {\n    result += printTreeNode(root, options, \"\", index === roots.length - 1);\n  });\n\n  return result;\n} "
  },
  {
    "path": "web/hooks/useLangGraphAgent/types.ts",
    "content": "/** \n * Represents the current status of an agent:\n * - idle: agent is not running, waiting for user input\n * - running: agent is running\n * - stopping: stop request has been sent, waiting for agent to stop\n * - error: an error occurred while calling the agent. It can occur when the agent is not accessible \n *          or when there is an error handling the request.\n * \n * Note: If there is an error in the graph node, the error property will be set to true in GraphNode.\n */\nexport type AgentStatus = 'idle' | 'running' | 'stopping' | 'error';\n\n/** Represents LangGraph checkpoint config */\nexport type CheckpointConfig = { configurable: { thread_id: string, checkpoint_id: string, checkpoint_ns: string } };\n\n/** Represents LangGraph checkpoint metadata */\nexport type CheckpointMetadata = {\n  source: string;\n  step: number;\n  writes: Record<string, object | object[]>;\n  parents: Record<string, string>;\n};\n\n/** Generic interface for an interruption (Human in the loop). Value can be anything. 
*/\nexport interface Interrupt<TInterruptValue> {\n  value: TInterruptValue;\n}\n\n/** Represents LangGraph checkpoint\n * This object is received from the agent server.\n */\nexport interface Checkpoint<TAgentState, TInterruptValue> {\n  next: string[];\n  values: TAgentState;\n  config: CheckpointConfig;\n  interrupts?: Interrupt<TInterruptValue>[];\n  parent_config?: CheckpointConfig;\n  metadata?: CheckpointMetadata;\n}\n\n/** Graph checkpoint in the application.\n * @param nodes - array of nodes in the graph\n * @param stateInitial - initial state when checkpoint is created\n * @param state - state of the checkpoint after the nodes have been executed, or an intermediate state\n * @param stateDiff - difference between the initial state and the state after the node has been executed\n * @param interruptValue - contains the value passed to the interrupt function in the node\n * @param checkpointConfig - checkpoint config of the node\n * @param error - true if there is an error in the nodes.\n */\nexport interface AppCheckpoint<TAgentState, TInterruptValue> {\n  nodes: GraphNode<TAgentState>[];\n  stateInitial: TAgentState;\n  state: TAgentState;\n  stateDiff: TAgentState;\n  interruptValue?: TInterruptValue;\n  checkpointConfig: CheckpointConfig;\n  error: boolean;\n}\n\n/** Node that is executed in the checkpoint.\n * @param name - name of the node\n * @param state - the state produced by the node\n */\nexport interface GraphNode<TAgentState> {\n  name: string;\n  state: Partial<TAgentState>;\n}\n\n/** Representation of the LangChain message. 
*/\nexport interface Message {\n  type: string;\n  content: string;\n  id?: string;\n  tool_calls?: ToolCall[];\n  name?: string; // Added name field\n}\n\nexport type ToolCall = { name: string, args: object, id: string };\n\n// StreamUpdateData interface for research progress updates\nexport interface StreamUpdateData {\n  id: string;\n  timestamp: number;\n  title: string;\n  status: 'running' | 'completed' | 'error';\n  message: string;\n  completedSteps?: number;\n  totalSteps?: number;\n  metadata?: Record<string, any>;\n}\n\n/** Data of the message chunk event. */\nexport interface NodeMessageChunk {\n  node_name: string;\n  message_chunk: MessageChunk;\n}\n\n/** Represents a LangChain message chunk. LLMs stream messages in chunks. */\nexport interface MessageChunk {\n  content: string;\n  id: string;\n  tool_call_chunks?: ToolCallChunk[];\n}\n\nexport type ToolCallChunk = { name?: string, args?: object, id?: string };\n\n/** Interface for states that have messages property.\n * Inherit this interface in your agent state interface if you use messages.\n */\nexport interface WithMessages {\n  messages: Message[];\n}\n\n/** Events that are emitted by the agent.\n * @param event - event type. Can be 'checkpoint', 'message_chunk', 'interrupt', 'custom', 'error', 'stream_update', 'end'.\n */\nexport interface AgentEvent<TAgentState, TInterruptValue> {\n  event: string; // 'checkpoint', 'message_chunk', 'interrupt', 'custom', 'error', 'stream_update', 'end'\n  data: Checkpoint<TAgentState, TInterruptValue> | NodeMessageChunk | Interrupt<TInterruptValue>[] | Partial<TAgentState> | StreamUpdateData | string | any;\n}\n\n/** Generic interface for an agent input. Thread id is required. 
*/\ninterface AgentInput {\n  thread_id: string;\n}\n\nexport interface RunAgentInput<TAgentState> extends AgentInput {\n  state: Partial<TAgentState>;\n}\n\nexport interface ResumeAgentInput<TResumeValue> extends AgentInput {\n  resume: TResumeValue;\n}\n\nexport interface ForkAgentInput<TAgentState> extends AgentInput {\n  config: CheckpointConfig;\n  state: Partial<TAgentState>;\n}\n\nexport interface ReplayAgentInput extends AgentInput {\n  config: CheckpointConfig;\n}\n\nexport interface RunAgentInputInternal<TAgentState> extends RunAgentInput<TAgentState> {\n  type: \"run\";\n}\n\nexport interface ResumeAgentInputInternal<TResumeValue> extends ResumeAgentInput<TResumeValue> {\n  type: \"resume\";\n}\n\nexport interface ForkAgentInputInternal<TAgentState> extends ForkAgentInput<TAgentState> {\n  type: \"fork\";\n}\n\nexport interface ReplayAgentInputInternal extends ReplayAgentInput {\n  type: \"replay\";\n}"
  },
  {
    "path": "web/hooks/useLangGraphAgent/useLangGraphAgent.tsx",
    "content": "import { useState, useCallback } from 'react';\nimport { v4 as uuidv4 } from 'uuid';\nimport {\n  Checkpoint,\n  Interrupt,\n  AppCheckpoint,\n  RunAgentInput,\n  ResumeAgentInput,\n  ForkAgentInput,\n  ReplayAgentInput,\n  RunAgentInputInternal,\n  ResumeAgentInputInternal,\n  ForkAgentInputInternal,\n  ReplayAgentInputInternal,\n  AgentStatus,\n  ToolCall,\n  WithMessages,\n  NodeMessageChunk,\n  StreamUpdateData,\n  Message,\n} from './types';\nimport { callAgentRoute } from './api';\nimport { getHistory, stopAgent } from './actions';\n\ninterface UseAgentStateCallbacks<TAgentState extends object | WithMessages, TInterruptValue> {\n  /** Callback for when a checkpoint starts.*/\n  onCheckpointStart?: (checkpoint: AppCheckpoint<TAgentState, TInterruptValue>) => void;\n  /** Callback for when a checkpoint ends. */\n  onCheckpointEnd?: (checkpoint: AppCheckpoint<TAgentState, TInterruptValue>) => void;\n  /** Callback for when a checkpoint intermediate state is updated. It can happen if a custom event is emitted in the node. */\n  onCheckpointStateUpdate?: (checkpoint: AppCheckpoint<TAgentState, TInterruptValue>) => void;\n}\n\n// Singleton cache that persists across page navigations\n// Enable if needed\nconst historyCache = new Map<string, Checkpoint<any, any>[]>();\nconst enableRestoreCache = false;\n\n/**\n * Hook to manage agent state and execution.\n * @template TAgentState - Type of agent state. 
Can be any object or an object implementing {@link WithMessages} interface.\n *                        If the state has 'messages' property, it will be used for message processing.\n * @template TInterruptValue - Type of value used when agent execution is interrupted (usually several types of interruptions are possible).\n * @template TResumeValue - Type of value used when resuming agent execution (usually several types of resumes are possible).\n * @param callbacks - Optional callbacks for checkpoint lifecycle events (see {@link UseAgentStateCallbacks}).\n */\nexport function useLangGraphAgent<TAgentState extends object | WithMessages, TInterruptValue, TResumeValue>(\n  callbacks?: UseAgentStateCallbacks<TAgentState, TInterruptValue>\n) {\n  const { onCheckpointStart, onCheckpointEnd, onCheckpointStateUpdate } = callbacks ?? {};\n\n  const [status, setStatus] = useState<AgentStatus>('idle');\n  const [restoring, setRestoring] = useState(false);\n  const [restoreError, setRestoreError] = useState(false);\n  const [appCheckpoints, setAppCheckpoints] = useState<AppCheckpoint<TAgentState, TInterruptValue>[]>([]);\n  // Add messages state to directly manage message history\n  const [messages, setMessages] = useState<Message[]>([]);\n  const [progressUpdates, setProgressUpdates] = useState<Record<string, StreamUpdateData>>({});\n\n  /**\n   * Run the agent.\n   * @param agentInput - Input configuration for running the agent (see {@link RunAgentInput}).\n   */\n  const run = useCallback(async (agentInput: RunAgentInput<TAgentState>) => {\n    // Extract user input messages from state and ensure they have IDs\n    const userInputMessages = (agentInput.state as WithMessages)?.messages ?? 
[];\n    userInputMessages.forEach((msg) => {\n      if (!msg.id) {\n        msg.id = `user-${uuidv4()}`;\n      }\n    });\n\n    // Update messages state immediately to show user input\n    setMessages(prev => [...prev, ...(userInputMessages as Message[])]);\n    \n    setProgressUpdates({}); // Reset progress updates\n    setAppCheckpoints([]); // Clear checkpoints for new run\n    \n    await callAgent({ type: \"run\", ...agentInput });\n  }, []);\n\n  /**\n   * Resume the agent. Action should be called after the agent has been interrupted.\n   * @param agentInput - Input configuration for resuming the agent (see {@link ResumeAgentInput}).\n   */\n  const resume = useCallback(async (agentInput: ResumeAgentInput<TResumeValue>) => {\n    await callAgent({ type: \"resume\", ...agentInput });\n  }, []);\n\n  /**\n   * Fork the checkpoint with the updated state.\n   * @param agentInput - Input configuration for forking the agent (see {@link ForkAgentInput}).\n   */\n  const fork = useCallback(async (agentInput: ForkAgentInput<TAgentState>) => {\n    removeAppCheckpointsAfterCheckpoint(agentInput.config.configurable.checkpoint_id);\n    await callAgent({ type: \"fork\", ...agentInput });\n  }, []);\n\n  /**\n   * Runs the agent from the checkpoint.\n   * @param agentInput - Input configuration for replaying the agent (see {@link ReplayAgentInput}).\n   */\n  const replay = useCallback(async (agentInput: ReplayAgentInput) => {\n    removeAppCheckpointsAfterCheckpoint(agentInput.config.configurable.checkpoint_id);\n    await callAgent({ type: \"replay\", ...agentInput });\n  }, []);\n\n  /**\n   * Stops the agent execution. Agent will not stop immediately. 
It will stop before emitting the last event (see {@link AgentEvent}).\n   * @param threadId - The ID of the thread to stop.\n   */\n  const stop = useCallback(async (threadId: string) => {\n    try {\n      setStatus('stopping');\n      await stopAgent(threadId);\n    } catch (error) {\n      console.error('Error stopping agent:', error);\n      setStatus('idle');\n    }\n  }, []);\n\n  function removeAppCheckpointsAfterCheckpoint(checkpointId: string) {\n    setAppCheckpoints(prevCheckpoints => {\n      const index = prevCheckpoints.findIndex(\n        node => node.checkpointConfig.configurable.checkpoint_id === checkpointId\n      );\n      if (index !== -1) {\n        return prevCheckpoints.slice(0, index + 1);\n      }\n      return prevCheckpoints;\n    });\n  }\n\n  const callAgent = useCallback(async (agentInput: RunAgentInputInternal<TAgentState> | ResumeAgentInputInternal<TResumeValue> | ForkAgentInputInternal<TAgentState> | ReplayAgentInputInternal) => {\n    if (!agentInput.type) {\n      throw new Error('Type is required');\n    }\n\n    if (!agentInput.thread_id) {\n      throw new Error('Thread id is required');\n    }\n\n    // Create local copies of state to modify during streaming\n    let currentMessagesCopy: Message[] = [];\n    setMessages(prev => { \n      currentMessagesCopy = [...prev]; \n      return prev;\n    });\n    \n    let currentAppCheckpoints: AppCheckpoint<TAgentState, TInterruptValue>[] = [];\n    setAppCheckpoints(prev => { \n      currentAppCheckpoints = [...prev]; \n      return prev;\n    });\n\n    try {\n      setStatus('running');\n      // Invalidate cache when agent is called\n      historyCache.delete(agentInput.thread_id);\n\n      const messageStream = callAgentRoute<TAgentState, TInterruptValue, TResumeValue>(agentInput);\n      for await (const msg of messageStream) {\n        if (msg.event === 'checkpoint') {\n          const checkpoint = msg.data as Checkpoint<TAgentState, TInterruptValue>;\n          
processCheckpoint(checkpoint, currentAppCheckpoints);\n          \n          // Update messages from checkpoint state if available\n          const stateValues = checkpoint.values as WithMessages;\n          if (stateValues?.messages) {\n            currentMessagesCopy = deepCopy(stateValues.messages);\n            setMessages([...currentMessagesCopy]);\n          }\n          \n          setAppCheckpoints([...currentAppCheckpoints]);\n        }\n\n        if (msg.event === 'message_chunk') {\n          processMessageChunk(msg.data as NodeMessageChunk, currentMessagesCopy);\n          setMessages([...currentMessagesCopy]);\n        }\n\n        if (msg.event === 'stream_update') {\n          try {\n            let updateData: StreamUpdateData;\n            if (typeof msg.data === 'string') {\n              updateData = JSON.parse(msg.data) as StreamUpdateData;\n            } else {\n              updateData = msg.data as StreamUpdateData;\n            }\n            \n            if (updateData?.id) {\n              setProgressUpdates(prev => ({ ...prev, [updateData.id]: updateData }));\n            } else {\n              console.warn(\"Invalid stream_update data\");\n            }\n          } catch (e) {\n            console.error(\"Error processing stream_update:\", e, msg.data);\n          }\n        }\n\n        if (msg.event === 'custom') {\n          processCustomEvent(msg.data as Partial<TAgentState>, currentAppCheckpoints);\n          setAppCheckpoints([...currentAppCheckpoints]);\n        }\n\n        if (msg.event === 'interrupt') {\n          processInterrupts(msg.data as Interrupt<TInterruptValue>[], currentAppCheckpoints);\n          setAppCheckpoints([...currentAppCheckpoints]);\n        }\n\n        if (msg.event === 'error') {\n          processError(currentAppCheckpoints);\n          setAppCheckpoints([...currentAppCheckpoints]);\n          setStatus('error');\n        }\n      }\n\n      setStatus('idle');\n    } catch (error) {\n      
console.error('Error in callAgent:', error);\n      // Keep current messages on error\n      setMessages(currentMessagesCopy);\n      setStatus('error');\n    }\n  }, [onCheckpointStart, onCheckpointEnd, onCheckpointStateUpdate]);\n\n  /**\n   * Restores the agent state from the checkpoints history.\n   * @param threadId - The ID of the thread to restore.\n   * @returns Promise that resolves to the restored checkpoints\n   */\n  const restore = useCallback(async (threadId: string): Promise<AppCheckpoint<TAgentState, TInterruptValue>[]> => {\n    if (!threadId) {\n      throw new Error('Thread id is required');\n    }\n\n    try {\n      setRestoring(true);\n      setRestoreError(false);\n\n      const restoredCheckpoints: AppCheckpoint<TAgentState, TInterruptValue>[] = [];\n      let finalMessagesCopy: Message[] = [];\n      \n      let history: Checkpoint<TAgentState, TInterruptValue>[];\n\n      // Try to get history from cache\n      const cachedHistory = historyCache.get(threadId);\n      if (cachedHistory && enableRestoreCache) {\n        console.log(\"Getting history from cache\");\n        history = cachedHistory;\n      } else {\n        history = await getHistory(threadId);\n        historyCache.set(threadId, history);\n      }\n\n      // History contains all forks of graph execution. 
We need to restore the last fork.\n      const newHistory: Checkpoint<TAgentState, TInterruptValue>[] = [];\n      let skipToCheckpointId: string | undefined = undefined;\n      for (let i = 0; i < history.length; i++) {\n        if (skipToCheckpointId && history[i].config.configurable.checkpoint_id !== skipToCheckpointId) {\n          continue;\n        }\n\n        newHistory.push(history[i]);\n        skipToCheckpointId = history[i].parent_config?.configurable.checkpoint_id;\n      }\n\n      for (const checkpoint of newHistory.reverse()) {\n        processHistoryCheckpoint(checkpoint, restoredCheckpoints);\n        \n        // Extract messages from checkpoint state\n        const stateValues = checkpoint.values as WithMessages;\n        if (stateValues?.messages) {\n          finalMessagesCopy = deepCopy(stateValues.messages);\n        }\n      }\n\n      setAppCheckpoints(restoredCheckpoints);\n      setMessages(finalMessagesCopy);\n      \n      return restoredCheckpoints;\n    } catch (error) {\n      console.error('Error restoring agent:', error);\n      setRestoreError(true);\n      throw new Error('Error restoring agent');\n    } finally {\n      setRestoring(false);\n    }\n  }, []);\n\n  function processHistoryCheckpoint(checkpoint: Checkpoint<TAgentState, TInterruptValue>, appCheckpoints: AppCheckpoint<TAgentState, TInterruptValue>[]) {\n    let interruptionInLastCheckpoint = false;\n\n    // Update the last checkpoint with the latest checkpoint values\n    if (appCheckpoints.length > 0) {\n      const lastCheckpoint = appCheckpoints[appCheckpoints.length - 1];\n      lastCheckpoint.state = deepCopy(checkpoint.values);\n      lastCheckpoint.stateDiff = getStateDiff(lastCheckpoint.stateInitial, checkpoint.values);\n      updateGraphNodeStateFromMetadata(lastCheckpoint, checkpoint);\n\n      // Delete interrupt if there are further checkpoints to restore.\n      // Preserve interrupt for the last checkpoint.\n      interruptionInLastCheckpoint = 
lastCheckpoint.interruptValue !== undefined;\n      if (interruptionInLastCheckpoint) {\n        lastCheckpoint.interruptValue = undefined;\n      }\n    }\n\n    // Create a new app checkpoint except for the last checkpoint.\n    if (checkpoint.next.length > 0) {\n      const newAppCheckpoint = createAppCheckpoint(checkpoint);\n\n      // When restoring checkpoints from graph history, the checkpoint stores interrupts as interrupts property.\n      if (checkpoint.interrupts) {\n        newAppCheckpoint.interruptValue = checkpoint.interrupts?.[0]?.value; // handle only single interrupt for now\n      }\n      appCheckpoints.push(newAppCheckpoint);\n    }\n  }\n\n  function processCheckpoint(checkpoint: Checkpoint<TAgentState, TInterruptValue>, appCheckpoints: AppCheckpoint<TAgentState, TInterruptValue>[]) {\n    let interruptionInLastCheckpoint = false;\n\n    // Update the last checkpoint with the latest checkpoint values\n    if (appCheckpoints.length > 0) {\n      const lastCheckpoint = appCheckpoints[appCheckpoints.length - 1];\n      lastCheckpoint.state = deepCopy(checkpoint.values);\n      lastCheckpoint.stateDiff = getStateDiff(lastCheckpoint.stateInitial, checkpoint.values);\n      updateGraphNodeStateFromMetadata(lastCheckpoint, checkpoint);\n\n      // Delete interrupt if there are further checkpoints. It means that the interruption was handled.\n      // Preserve interrupt for the last checkpoint.\n      interruptionInLastCheckpoint = lastCheckpoint.interruptValue !== undefined;\n      if (interruptionInLastCheckpoint) {\n        lastCheckpoint.interruptValue = undefined;\n        onCheckpointEnd?.(lastCheckpoint);\n      }\n    }\n\n    // Create a new checkpoint except for the last checkpoint. 
Do not create a new checkpoint if there was an interruption in the last checkpoint.\n    if (checkpoint.next.length > 0 && !interruptionInLastCheckpoint) {\n      const newCheckpoint = createAppCheckpoint(checkpoint);\n      appCheckpoints.push(newCheckpoint);\n      onCheckpointStart?.(newCheckpoint);\n    }\n  }\n\n  function createAppCheckpoint(checkpoint: Checkpoint<TAgentState, TInterruptValue>): AppCheckpoint<TAgentState, TInterruptValue> {\n    return {\n      nodes: checkpoint.next.map((x, index) => {\n        const matchingKey = Object.keys(checkpoint.metadata?.writes ?? {}).find(key => key === x);\n        const value = matchingKey ? checkpoint.metadata?.writes?.[matchingKey] : undefined;\n        return {\n          name: x,\n          state: matchingKey\n            ? Array.isArray(value)\n              ? deepCopy((value as Partial<TAgentState>[])[index] as TAgentState)\n              : deepCopy(value as TAgentState)\n            : {} as TAgentState\n        };\n      }),\n      stateInitial: deepCopy(checkpoint.values),\n      state: deepCopy(checkpoint.values),\n      stateDiff: {} as TAgentState,\n      checkpointConfig: checkpoint.config,\n      error: false\n    };\n  }\n\n  function updateGraphNodeStateFromMetadata(appCheckpoint: AppCheckpoint<TAgentState, TInterruptValue>, checkpoint: Checkpoint<TAgentState, TInterruptValue>) {\n    // Update nodes states with the writes from the checkpoint metadata\n    Object.entries(checkpoint.metadata?.writes ?? {}).forEach(([key, value]) => {\n      const matchingNodes = appCheckpoint.nodes.filter(node => node.name === key);\n      matchingNodes.forEach((node, index) => {\n        node.state = Array.isArray(value)\n          ? 
deepCopy((value as Partial<TAgentState>[])[index] as TAgentState)\n          : deepCopy(value as TAgentState);\n      });\n    });\n  }\n\n  function processMessageChunk(nodeMessageChunk: NodeMessageChunk, currentMessages: Message[]) {\n    if (!nodeMessageChunk?.message_chunk?.id) return;\n    \n    const chunkId = nodeMessageChunk.message_chunk.id;\n    const chunkContent = nodeMessageChunk.message_chunk.content || '';\n    const chunkToolCalls = nodeMessageChunk.message_chunk.tool_call_chunks;\n    \n    const existingMsgIndex = currentMessages.findIndex(m => m.id === chunkId);\n\n    if (existingMsgIndex !== -1) {\n      // Update existing message\n      currentMessages[existingMsgIndex].content += chunkContent;\n      \n      // Process tool call chunks if available\n      if (chunkToolCalls && chunkToolCalls.length > 0) {\n        // Update or add tool calls\n        if (!currentMessages[existingMsgIndex].tool_calls) {\n          currentMessages[existingMsgIndex].tool_calls = [];\n        }\n        \n        // For each tool call chunk, find matching tool call or create new one\n        chunkToolCalls.forEach(toolCallChunk => {\n          if (!toolCallChunk.id) return;\n          \n          // tool_calls is guaranteed to be initialized above\n          const existingToolCallIndex = currentMessages[existingMsgIndex].tool_calls!.findIndex(\n            tc => tc.id === toolCallChunk.id\n          );\n          \n          if (existingToolCallIndex !== -1 && currentMessages[existingMsgIndex].tool_calls) {\n            // Update existing tool call\n            const toolCall = currentMessages[existingMsgIndex].tool_calls![existingToolCallIndex];\n            if (toolCallChunk.name) toolCall.name = toolCallChunk.name;\n            \n            // Append arguments (typically JSON string)\n            if (toolCallChunk.args) {\n              if (!toolCall.args) toolCall.args = {};\n              try {\n                const argsObj = typeof toolCallChunk.args === 'string' \n                  ? 
JSON.parse(toolCallChunk.args)\n                  : toolCallChunk.args;\n                toolCall.args = { ...toolCall.args, ...argsObj };\n              } catch (e) {\n                console.error(\"Error parsing tool call args:\", e);\n                // If parsing fails, keep the raw chunk args\n                toolCall.args = toolCallChunk.args;\n              }\n            }\n          } else if (currentMessages[existingMsgIndex].tool_calls) {\n            // Create new tool call\n            currentMessages[existingMsgIndex].tool_calls.push({\n              id: toolCallChunk.id,\n              name: toolCallChunk.name || '',\n              args: toolCallChunk.args || {}\n            });\n          }\n        });\n      }\n      \n      // Node-specific messages are handled by the checkpoint update, not needed here\n    } else {\n      // Create new message\n      const toolCalls: ToolCall[] = [];\n      \n      // Initialize tool calls if present in the chunk\n      if (chunkToolCalls && chunkToolCalls.length > 0) {\n        chunkToolCalls.forEach(tc => {\n          if (tc.id && tc.name) {\n            toolCalls.push({\n              id: tc.id,\n              name: tc.name,\n              args: tc.args || {}\n            });\n          }\n        });\n      }\n      \n      const newMessage: Message = {\n        type: \"ai\",\n        content: chunkContent,\n        id: chunkId,\n        tool_calls: toolCalls.length > 0 ? toolCalls : undefined\n      };\n      \n      if (nodeMessageChunk.node_name) {\n        newMessage.name = nodeMessageChunk.node_name;\n      }\n      \n      currentMessages.push(newMessage);\n    }\n  }\n\n  function processCustomEvent(state: Partial<TAgentState>, appCheckpoints: AppCheckpoint<TAgentState, TInterruptValue>[]) {\n    if (appCheckpoints.length === 0) {\n      return;\n    }\n\n    // Update the last checkpoint state. 
Update only the properties that are in the custom event.\n    const lastCheckpoint = appCheckpoints[appCheckpoints.length - 1];\n    lastCheckpoint.state = deepCopy({ ...lastCheckpoint.state, ...state }) as TAgentState;\n\n    // Update all child nodes with the same partial state\n    lastCheckpoint.nodes.forEach(node => {\n      node.state = deepCopy({ ...node.state, ...state }) as TAgentState;\n    });\n\n    onCheckpointStateUpdate?.(lastCheckpoint);\n  }\n\n  function processInterrupts(interrupts: Interrupt<TInterruptValue>[], appCheckpoints: AppCheckpoint<TAgentState, TInterruptValue>[]) {\n    if (appCheckpoints.length === 0) {\n      return;\n    }\n\n    const lastCheckpoint = appCheckpoints[appCheckpoints.length - 1];\n    lastCheckpoint.interruptValue = interrupts[0].value; // handle only single interrupt for now\n  }\n\n  function processError(appCheckpoints: AppCheckpoint<TAgentState, TInterruptValue>[]) {\n    if (appCheckpoints.length === 0) {\n      return;\n    }\n\n    const lastCheckpoint = appCheckpoints[appCheckpoints.length - 1];\n    lastCheckpoint.error = true;\n  }\n\n  function getStateDiff(stateOld: TAgentState, stateNew: TAgentState): TAgentState {\n    const diff = {} as TAgentState;\n\n    // Get all keys from old state (structure should be the same in both states)\n    const keys = Object.keys(stateOld);\n\n    for (const key of keys) {\n      const oldValue = (stateOld as any)[key];\n      const newValue = (stateNew as any)[key];\n\n      // Handle arrays - only include new items\n      if (Array.isArray(oldValue)) {\n        const newItems = newValue.filter((newItem: any) =>\n          !oldValue.some((oldItem: any) =>\n            JSON.stringify(oldItem) === JSON.stringify(newItem)\n          )\n        );\n        (diff as any)[key] = newItems.length > 0 ? 
deepCopy(newItems) : [];\n        continue;\n      }\n\n      // For objects, recursively compute diff\n      if (typeof oldValue === 'object' && oldValue !== null) {\n        (diff as any)[key] = getStateDiff(oldValue, newValue);\n      }\n      // For primitive values, include both changed and unchanged\n      else {\n        (diff as any)[key] = newValue;\n      }\n    }\n\n    return diff;\n  }\n\n  function deepCopy<T>(obj: T): T {\n    if (obj === null || typeof obj !== 'object') { \n      return obj; \n    }\n    try { \n      return JSON.parse(JSON.stringify(obj)); \n    } catch (e) { \n      console.error(\"Deep copy failed:\", e); \n      return obj; \n    }\n  }\n\n  return { \n    status, \n    appCheckpoints, \n    run, \n    resume, \n    fork, \n    replay, \n    restore, \n    stop, \n    restoring,\n    restoreError,\n    messages,\n    progressUpdates\n  };\n}\n"
  },
  {
    "path": "web/next.config.ts",
    "content": "import type { NextConfig } from \"next\";\n\nconst nextConfig: NextConfig = {\n  /* config options here */\n};\n\nexport default nextConfig;\n"
  },
  {
    "path": "web/package.json",
    "content": "{\n  \"name\": \"langgraph-client\",\n  \"version\": \"0.1.0\",\n  \"private\": true,\n  \"scripts\": {\n    \"dev\": \"next dev --turbopack\",\n    \"build\": \"next build\",\n    \"start\": \"next start\",\n    \"lint\": \"next lint\"\n  },\n  \"dependencies\": {\n    \"@radix-ui/react-checkbox\": \"^1.1.3\",\n    \"@radix-ui/react-dialog\": \"^1.1.5\",\n    \"@radix-ui/react-popover\": \"^1.1.5\",\n    \"@radix-ui/react-progress\": \"^1.1.2\",\n    \"@radix-ui/react-separator\": \"^1.1.1\",\n    \"@radix-ui/react-slot\": \"^1.1.1\",\n    \"@radix-ui/react-tooltip\": \"^1.1.7\",\n    \"class-variance-authority\": \"^0.7.1\",\n    \"clsx\": \"^2.1.1\",\n    \"framer-motion\": \"^12.0.6\",\n    \"lucide-react\": \"^0.474.0\",\n    \"next\": \"15.1.0\",\n    \"next-themes\": \"^0.4.4\",\n    \"react\": \"^19.0.0\",\n    \"react-dom\": \"^19.0.0\",\n    \"react-json-view-lite\": \"^2.3.0\",\n    \"react-markdown\": \"^9.0.3\",\n    \"remark-gfm\": \"^4.0.0\",\n    \"tailwind-merge\": \"^2.6.0\",\n    \"tailwindcss-animate\": \"^1.0.7\",\n    \"uuid\": \"^11.1.0\",\n    \"zustand\": \"^5.0.3\"\n  },\n  \"devDependencies\": {\n    \"@eslint/eslintrc\": \"^3\",\n    \"@types/node\": \"^20\",\n    \"@types/react\": \"^19\",\n    \"@types/react-dom\": \"^19\",\n    \"eslint\": \"^9\",\n    \"eslint-config-next\": \"15.1.0\",\n    \"postcss\": \"^8\",\n    \"tailwindcss\": \"^3.4.1\",\n    \"typescript\": \"^5\"\n  }\n}\n"
  },
  {
    "path": "web/postcss.config.mjs",
    "content": "/** @type {import('postcss-load-config').Config} */\nconst config = {\n  plugins: {\n    tailwindcss: {},\n  },\n};\n\nexport default config;\n"
  },
  {
    "path": "web/stores/chat-store.tsx",
    "content": "import { create } from 'zustand'\n\nexport interface ChatItem {\n  id: string; // Corresponds to thread_id\n  name: string;\n  agentId: string; // e.g., 'chat', 'deep-research'\n  agentName: string; // e.g., 'default', 'deep_research', 'customer_service'\n  // Optional: Add creation timestamp, last updated timestamp, etc.\n  createdAt: number;\n}\n\ninterface ChatStore {\n  chats: ChatItem[]\n  addChat: (agentId:string, agentName:string, initialName?: string) => ChatItem\n}\n\nexport const useChatStore = create<ChatStore>((set, get) => ({\n  chats: [],\n  addChat: (agentId: string, agentName: string, initialName?: string) => {\n    const newChat: ChatItem = {\n      id: crypto.randomUUID(),\n      // Use provided initial name or generate default based on agent and count\n      name: initialName || `${agentName} Chat ${get().chats.filter(c => c.agentName === agentName).length + 1}`,\n      agentId: agentId, // Store the agent ID\n      agentName: agentName, // Store the agent name\n      createdAt: Date.now(),\n    };\n    set((state) => ({\n      chats: [newChat, ...state.chats] // Add to beginning for recency\n    }));\n    return newChat;\n  }\n}))"
  },
  {
    "path": "web/tailwind.config.ts",
    "content": "import type { Config } from \"tailwindcss\";\n\nexport default {\n    darkMode: [\"class\"],\n    content: [\n    \"./pages/**/*.{js,ts,jsx,tsx,mdx}\",\n    \"./components/**/*.{js,ts,jsx,tsx,mdx}\",\n    \"./app/**/*.{js,ts,jsx,tsx,mdx}\",\n  ],\n  theme: {\n  \textend: {\n  \t\tcolors: {\n  \t\t\tbackground: 'hsl(var(--background))',\n  \t\t\tforeground: 'hsl(var(--foreground))',\n  \t\t\tcard: {\n  \t\t\t\tDEFAULT: 'hsl(var(--card))',\n  \t\t\t\tforeground: 'hsl(var(--card-foreground))'\n  \t\t\t},\n  \t\t\tpopover: {\n  \t\t\t\tDEFAULT: 'hsl(var(--popover))',\n  \t\t\t\tforeground: 'hsl(var(--popover-foreground))'\n  \t\t\t},\n  \t\t\tprimary: {\n  \t\t\t\tDEFAULT: 'hsl(var(--primary))',\n  \t\t\t\tforeground: 'hsl(var(--primary-foreground))'\n  \t\t\t},\n  \t\t\tsecondary: {\n  \t\t\t\tDEFAULT: 'hsl(var(--secondary))',\n  \t\t\t\tforeground: 'hsl(var(--secondary-foreground))'\n  \t\t\t},\n  \t\t\tmuted: {\n  \t\t\t\tDEFAULT: 'hsl(var(--muted))',\n  \t\t\t\tforeground: 'hsl(var(--muted-foreground))'\n  \t\t\t},\n  \t\t\taccent: {\n  \t\t\t\tDEFAULT: 'hsl(var(--accent))',\n  \t\t\t\tforeground: 'hsl(var(--accent-foreground))'\n  \t\t\t},\n  \t\t\tdestructive: {\n  \t\t\t\tDEFAULT: 'hsl(var(--destructive))',\n  \t\t\t\tforeground: 'hsl(var(--destructive-foreground))'\n  \t\t\t},\n  \t\t\tborder: 'hsl(var(--border))',\n  \t\t\tinput: 'hsl(var(--input))',\n  \t\t\tring: 'hsl(var(--ring))',\n  \t\t\tchart: {\n  \t\t\t\t'1': 'hsl(var(--chart-1))',\n  \t\t\t\t'2': 'hsl(var(--chart-2))',\n  \t\t\t\t'3': 'hsl(var(--chart-3))',\n  \t\t\t\t'4': 'hsl(var(--chart-4))',\n  \t\t\t\t'5': 'hsl(var(--chart-5))'\n  \t\t\t},\n  \t\t\tsidebar: {\n  \t\t\t\tDEFAULT: 'hsl(var(--sidebar-background))',\n  \t\t\t\tforeground: 'hsl(var(--sidebar-foreground))',\n  \t\t\t\tprimary: 'hsl(var(--sidebar-primary))',\n  \t\t\t\t'primary-foreground': 'hsl(var(--sidebar-primary-foreground))',\n  \t\t\t\taccent: 'hsl(var(--sidebar-accent))',\n  \t\t\t\t'accent-foreground': 
'hsl(var(--sidebar-accent-foreground))',\n  \t\t\t\tborder: 'hsl(var(--sidebar-border))',\n  \t\t\t\tring: 'hsl(var(--sidebar-ring))'\n  \t\t\t}\n  \t\t},\n  \t\tborderRadius: {\n  \t\t\tlg: 'var(--radius)',\n  \t\t\tmd: 'calc(var(--radius) - 2px)',\n  \t\t\tsm: 'calc(var(--radius) - 4px)'\n  \t\t}\n  \t}\n  },\n  plugins: [require(\"tailwindcss-animate\")],\n} satisfies Config;\n"
  },
  {
    "path": "web/tsconfig.json",
    "content": "{\n  \"compilerOptions\": {\n    \"target\": \"ES2017\",\n    \"lib\": [\"dom\", \"dom.iterable\", \"esnext\"],\n    \"allowJs\": true,\n    \"skipLibCheck\": true,\n    \"strict\": true,\n    \"noEmit\": true,\n    \"esModuleInterop\": true,\n    \"module\": \"esnext\",\n    \"moduleResolution\": \"bundler\",\n    \"resolveJsonModule\": true,\n    \"isolatedModules\": true,\n    \"jsx\": \"preserve\",\n    \"incremental\": true,\n    \"plugins\": [\n      {\n        \"name\": \"next\"\n      }\n    ],\n    \"paths\": {\n      \"@/*\": [\"./*\"]\n    }\n  },\n  \"include\": [\"next-env.d.ts\", \"**/*.ts\", \"**/*.tsx\", \".next/types/**/*.ts\"],\n  \"exclude\": [\"node_modules\"]\n}\n"
  },
  {
    "path": "web_for_a2a/.gitignore",
    "content": "# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.\n\n# dependencies\n/node_modules\n/.pnp\n.pnp.*\n.yarn/*\n!.yarn/patches\n!.yarn/plugins\n!.yarn/releases\n!.yarn/versions\n\n# testing\n/coverage\n\n# next.js\n/.next/\n/out/\n\n# production\n/build\n\n# misc\n.DS_Store\n*.pem\n\n# debug\nnpm-debug.log*\nyarn-debug.log*\nyarn-error.log*\n.pnpm-debug.log*\n\n# env files (can opt-in for committing if needed)\n.env.*\n\n# vercel\n.vercel\n\n# typescript\n*.tsbuildinfo\nnext-env.d.ts"
  },
  {
    "path": "web_for_a2a/Instruction.md",
    "content": "## 前端对接 DeepResearch A2A 流式接口实现指南 (Next.js + React)\n\n### 1. 前提\n\n* **A2A 服务器运行中:** 确保 `super_agents/deep_research/a2a_adapter/run_server.py` 启动的服务器正在运行，并且监听地址已知（例如 `http://127.0.0.1:8000`）。\n* **技术栈:** 前端使用 Next.js, React, TypeScript, Tailwind CSS。\n* **核心目标:** 在 Web UI 中实时展示 DeepResearch Agent 的研究进度和最终报告。\n* **A2A 类型定义:** 理想情况下，前端可以共享或重新定义 `core/a2a/types.py` 中的关键 Pydantic 模型对应的 TypeScript 接口（如 `TaskStatusUpdateEvent`, `TaskArtifactUpdateEvent`, `Message`, `TextPart`, `DataPart` 等），以便在代码中获得类型检查和提示。\n\n```typescript\n// 示例 TypeScript 接口 (根据 types.py 定义)\ninterface TextPart {\n  type: \"text\";\n  text: string;\n  metadata?: Record<string, any>;\n}\n\ninterface DataPart {\n  type: \"data\";\n  data: Record<string, any>; // 结构化数据\n  metadata?: Record<string, any>;\n}\n\ntype Part = TextPart | DataPart; // 可以扩展 FilePart 等\n\ninterface Message {\n  role: \"user\" | \"agent\";\n  parts: Part[];\n  metadata?: Record<string, any>;\n}\n\ninterface TaskStatus {\n  state: string; // TaskState 枚举值\n  message?: Message;\n  timestamp: string; // ISO format string\n}\n\ninterface TaskStatusUpdateEvent {\n  id: string; // Task ID\n  status: TaskStatus;\n  final: boolean;\n  metadata?: Record<string, any>;\n}\n\ninterface Artifact {\n    name?: string;\n    description?: string;\n    parts: Part[];\n    metadata?: Record<string, any>;\n    index: number;\n    append?: boolean;\n    lastChunk?: boolean;\n}\n\ninterface TaskArtifactUpdateEvent {\n  id: string; // Task ID\n  artifact: Artifact;\n  final?: boolean; // Artifact 事件也可能有 final 标志\n  metadata?: Record<string, any>;\n}\n\n// 流式响应中 result 字段的可能类型\ntype StreamEventResult = TaskStatusUpdateEvent | TaskArtifactUpdateEvent;\n\ninterface SendTaskStreamingResponse {\n    jsonrpc: \"2.0\";\n    id: string | number | null; // 对应请求的 ID\n    result?: StreamEventResult;\n    error?: {\n        code: number;\n        message: string;\n        data?: any;\n    };\n}\n```\n\n### 2. 核心流程概述\n\n1.  
**用户输入:** 用户在 UI 中输入研究主题。\n2.  **发起请求:** 前端使用 `fetch` API 向 A2A 服务器的**主端点** (例如 `/`) 发送一个 HTTP `POST` 请求，请求体是符合 A2A 规范的 JSON-RPC 消息，`method` 为 `\"tasks/sendSubscribe\"`。\n3.  **服务器响应:**\n    * 如果请求有效且服务器成功启动后台任务并准备好 SSE 流，服务器**必须**返回一个 HTTP 200 OK 响应，且 `Content-Type` 头为 `text/event-stream`。\n    * 如果请求无效或在建立流之前出错，服务器会返回一个普通的 JSON 响应（`Content-Type: application/json`），通常包含一个错误状态码（如 400 或 500）和 JSON-RPC 错误对象。\n4.  **客户端处理流:**\n    * 如果收到 `text/event-stream` 响应，客户端开始读取响应体 (Response Body) 中的数据流。\n    * 流中的数据遵循 SSE 格式，主要是 `data: <JSON 字符串>\\n\\n`。\n    * 客户端需要**持续读取、解码、解析**这些 SSE 事件。每个事件的 `data` 部分是一个 JSON 字符串，代表一个 `SendTaskStreamingResponse` 对象。\n    * 客户端解析 `SendTaskStreamingResponse`，提取其中的 `result`（即 `TaskStatusUpdateEvent` 或 `TaskArtifactUpdateEvent`）。\n    * 根据事件内容更新 UI（显示进度、最终报告）。\n    * 直到收到带有 `final: true` 标志的事件或流被服务器关闭。\n\n### 3. 技术选型: 为什么用 `fetch` + ReadableStream 而不是 `EventSource`？\n\n* 浏览器内置的 `EventSource` API 是处理 SSE 的标准方式，非常简洁易用。\n* **但是，`EventSource` API 通常只能发起 `GET` 请求。** 而 A2A 协议规定 `tasks/sendSubscribe` 方法需要通过 `POST` 请求发送，因为需要传递包含 `message` 等信息的复杂 `params` 对象在请求体中。\n* 因此，为了在**不修改标准 A2A 服务器行为**（即保持 `tasks/sendSubscribe` 为 POST）的情况下处理 SSE，我们需要使用更底层的 `fetch` API。`fetch` 可以发送 POST 请求，并且其返回的 `Response` 对象的 `.body` 属性是一个 `ReadableStream`，我们可以手动读取和解析这个流来处理 SSE 事件。\n\n### 4. 实现步骤详解 (React/TypeScript 示例)\n\n假设你在一个 React 组件（或自定义 Hook）中实现这个逻辑。\n\n**步骤 1: 发起流式请求 (`tasks/sendSubscribe`)**\n\n```typescript\nimport { useState, useCallback, useRef } from 'react';\nimport { v4 as uuidv4 } from 'uuid'; // 用于生成 Task ID\n// ... 
import 其他类型 ...\n\n// 在你的组件或 Hook 中\nconst [isLoading, setIsLoading] = useState(false);\nconst [error, setError] = useState<string | null>(null);\nconst [progressUpdates, setProgressUpdates] = useState<any[]>([]); // 存储解析后的事件数据\nconst [finalReport, setFinalReport] = useState<string | null>(null);\nconst [taskStatus, setTaskStatus] = useState<string>('IDLE'); // 任务宏观状态（下文的 setTaskStatus 依赖此声明）\nconst abortControllerRef = useRef<AbortController | null>(null); // 用于中止 fetch 请求\n\nconst startStreamingResearch = useCallback(async (topic: string) => {\n  setIsLoading(true);\n  setError(null);\n  setProgressUpdates([]);\n  setFinalReport(null);\n\n  // 确保之前的请求被中止（如果需要）\n  if (abortControllerRef.current) {\n    abortControllerRef.current.abort();\n  }\n  abortControllerRef.current = new AbortController();\n  const signal = abortControllerRef.current.signal;\n\n  const taskId = \"deep_research_\" + uuidv4();\n  const message: Message = {\n    role: \"user\",\n    parts: [{ type: \"text\", text: topic }],\n  };\n  const payload = {\n    id: taskId,\n    sessionId: \"web_session_\" + uuidv4(), // 每次可以生成新的 Session\n    message: message, // 注意：实际发送时 message 可能需要 .model_dump()，但 fetch 的 body 会 JSON.stringify\n    acceptedOutputModes: [\"text\"],\n    metadata: { skill_name: \"deep_research\" }\n  };\n\n  const requestBody = {\n    jsonrpc: \"2.0\",\n    method: \"tasks/sendSubscribe\",\n    id: \"req-\" + taskId, // 请求本身的 ID\n    params: payload\n  };\n\n  try {\n    const response = await fetch(`http://127.0.0.1:8000`, { // 你的 A2A 服务器地址\n      method: 'POST',\n      headers: {\n        'Content-Type': 'application/json',\n        'Accept': 'text/event-stream', // 明确告诉服务器期望 SSE\n      },\n      body: JSON.stringify(requestBody),\n      signal: signal, // 允许中止请求\n    });\n\n    // 步骤 2: 处理 Fetch 响应 & 获取 ReadableStream\n    if (!response.ok) {\n      // 如果 HTTP 状态码不是 2xx\n      let errorMsg = `HTTP error! 
status: ${response.status}`;\n      try {\n        const errorJson = await response.json(); // 尝试解析错误 JSON 体\n        errorMsg = errorJson?.error?.message || errorJson.detail || JSON.stringify(errorJson);\n      } catch (e) {\n        // 解析 JSON 失败，使用状态文本\n        errorMsg = `${response.status} ${response.statusText}`;\n      }\n      throw new Error(errorMsg);\n    }\n\n    const contentType = response.headers.get('content-type');\n    if (!contentType || !contentType.includes('text/event-stream')) {\n      // 服务器没有返回 SSE 流！\n      let errorMsg = `Expected Content-Type 'text/event-stream', but got '${contentType}'`;\n       try {\n        const errorJson = await response.json(); // 可能是 JSONRPC 错误\n        errorMsg += ` - Body: ${errorJson?.error?.message || JSON.stringify(errorJson)}`;\n      } catch (e) {\n         // Try reading as text if not JSON\n          errorMsg += ` - Body: ${await response.text()}`;\n      }\n      throw new Error(errorMsg);\n    }\n\n    // 获取 ReadableStream 读取器\n    const reader = response.body?.getReader();\n    if (!reader) {\n      throw new Error('Failed to get response body reader');\n    }\n\n    // 步骤 3 & 4 & 5 & 6: 读取、解析 SSE 流并更新状态\n    await processStream(reader);\n\n  } catch (err: any) {\n    if (err.name === 'AbortError') {\n      console.log('Fetch aborted');\n      setError('请求已中止');\n    } else {\n      console.error(\"Error during streaming request:\", err);\n      setError(`请求失败: ${err.message}`);\n    }\n    setTaskStatus('IDLE'); // 或者 'FAILED'\n  } finally {\n    setIsLoading(false);\n     abortControllerRef.current = null; // 清理 AbortController\n  }\n}, []); // useCallback 依赖项根据实际情况添加\n\n// 独立的流处理函数\nconst processStream = async (reader: ReadableStreamDefaultReader<Uint8Array>) => {\n  const decoder = new TextDecoder();\n  let buffer = '';\n  let streamEnded = false;\n\n  try {\n    while (true) {\n      const { done, value } = await reader.read();\n      if (done) {\n        console.log(\"Stream finished.\");\n     
   break;\n      }\n      buffer += decoder.decode(value, { stream: true });\n\n      // 按 SSE 事件分隔符处理 buffer\n      const events = buffer.split('\\n\\n');\n      buffer = events.pop() || ''; // 保留最后不完整的部分\n\n      for (const eventString of events) {\n        if (!eventString.trim()) continue; // 跳过空事件\n\n        // 解析 SSE 消息 (data: <JSON>)\n        if (eventString.startsWith('data:')) {\n          const jsonData = eventString.substring(5).trim();\n          try {\n            const eventResponse = JSON.parse(jsonData) as SendTaskStreamingResponse;\n\n            if (eventResponse.error) {\n              const error = eventResponse.error;\n              console.error(\"Received SSE Error:\", error);\n              setError(`流式错误: Code=${error.code}, Msg=${error.message}`);\n              streamEnded = true; // 出现错误，通常流会中断\n              break; // 停止处理此流\n            }\n\n            const eventData = eventResponse.result; // TaskStatusUpdateEvent or TaskArtifactUpdateEvent\n            if (!eventData) continue;\n\n            // 更新状态 (示例：将整个事件数据存入列表)\n            setProgressUpdates(prev => [...prev, eventData]);\n\n            // 可以在这里根据 eventData 的类型做更精细的状态更新\n            if ('status' in eventData) { // TaskStatusUpdateEvent\n               setTaskStatus(eventData.status.state || 'WORKING'); // 更新宏观状态\n            }\n            if ('artifact' in eventData) { // TaskArtifactUpdateEvent\n                 // 假设最终报告在 TextPart\n                 const reportPart = eventData.artifact.parts?.find(p => p.type === 'text') as TextPart | undefined;\n                 if(reportPart) {\n                     setFinalReport(prev => (prev || '') + reportPart.text); // 可以累积或直接设置\n                 }\n            }\n\n            // 检查是否是最终事件\n            if (eventData.final === true) {\n              console.log(\"Final event flag received from server.\");\n              streamEnded = true;\n              // 最终状态应该由事件本身携带的状态决定\n              if ('status' in eventData) 
{\n                  setTaskStatus(eventData.status.state);\n              } else {\n                   setTaskStatus('COMPLETED'); // 假定 Artifact 事件也是完成\n              }\n               break; // 收到 final=true，我们可以停止读取这个流了\n            }\n\n          } catch (e) {\n            console.error(\"Failed to parse SSE event data:\", e, jsonData);\n            // 可以选择设置错误状态或继续处理下一个事件\n          }\n        } else {\n            // 处理其他 SSE 行 (如 event:, id:, retry:)，如果需要的话\n             console.log(\"Received non-data SSE line:\", eventString);\n        }\n      } // end for eventString in events\n       if (streamEnded) break; // 如果内部逻辑判断流应结束，则跳出外层循环\n    } // end while reader\n  } catch (err: any) {\n      console.error(\"Error reading stream:\", err);\n      setError(`读取流失败: ${err.message}`);\n      setTaskStatus('FAILED'); // 流读取出错，标记失败\n  } finally {\n     // 确保 reader 被释放（通常退出循环后即可，必要时可调用 reader.releaseLock()）\n     setIsLoading(false); // 确保加载状态结束\n     if (!streamEnded && taskStatus !== 'COMPLETED' && taskStatus !== 'FAILED') {\n         // 如果流意外中断，设置一个合适的最终状态\n         setError(\"流连接意外断开\");\n         setTaskStatus('FAILED'); // Or 'UNKNOWN'\n     }\n      console.log(\"Stream processing function finished.\");\n  }\n};\n\n// 在你的 React 组件的 JSX 中:\n// <TopicInputForm onSubmit={startStreamingResearch} disabled={isLoading} />\n// <StatusBar status={taskStatus} />\n// <ErrorMessage error={error} />\n// <ProgressDisplay updates={progressUpdates} />\n// <ReportDisplay markdownContent={finalReport} />\n\n```\n\n**5. 
处理 `DataPart` (在 `ProgressDisplay` 组件中):**\n\n```typescript\n// 假设 ProgressDisplay 组件接收 updates: any[]\nconst ProgressDisplay = ({ updates }: { updates: any[] }) => {\n  return (\n    <div className=\"progress-log mt-4 p-4 border rounded bg-gray-50 h-64 overflow-y-auto font-mono text-sm\">\n      {updates.map((eventData, index) => {\n        let content = null;\n        // 确定事件类型并提取 Parts\n        let parts: Part[] | undefined = undefined;\n        if (eventData && 'status' in eventData && eventData.status?.message?.parts) {\n           parts = eventData.status.message.parts;\n        } else if (eventData && 'artifact' in eventData && eventData.artifact?.parts) {\n           // 注意：通常最终报告才放在 artifact 里，但这里也检查一下\n           parts = eventData.artifact.parts;\n        }\n\n        if (parts) {\n          content = parts.map((part, partIndex) => {\n            if (part.type === 'text') {\n              // 渲染 TextPart\n              return <p key={`${index}-${partIndex}`} className=\"whitespace-pre-wrap\">{part.text}</p>;\n            } else if (part.type === 'data') {\n              // 渲染 DataPart (示例：格式化 JSON)\n              const data = part.data;\n              // 尝试更友好的展示\n              const step = data?.step || data?.step_name;\n              const status = data?.status;\n              const detail = data?.detail || data?.message;\n              const query = data?.query;\n              const source = data?.source;\n              const count = data?.results_count;\n\n              let friendlyText = `[${step || '步骤未知'}] ${status ? 
'(' + status + ')' : ''}`;\n              if(source) friendlyText += ` 来源:${source}`;\n              if(query) friendlyText += ` 查询:'${query}'`;\n              if(count !== undefined) friendlyText += ` (${count}条结果)`;\n              if(detail) friendlyText += ` - ${detail}`;\n\n              return (\n                <details key={`${index}-${partIndex}`} className=\"my-1 p-1 border-l-2 border-blue-300 bg-blue-50 text-xs\">\n                   <summary className=\"cursor-pointer text-blue-800\">{friendlyText || `收到结构化数据 (点击展开)`}</summary>\n                   <pre className=\"mt-1 text-gray-600 bg-white p-1 rounded overflow-x-auto\">\n                     {JSON.stringify(data, null, 2)}\n                   </pre>\n                </details>\n              );\n            }\n            // 可以添加对 FilePart 的处理\n            return null;\n          });\n        } else {\n           // 如果无法解析 parts，显示原始事件数据（用于调试）\n           content = <pre className=\"text-xs text-red-500\">未知事件结构: {JSON.stringify(eventData)}</pre>;\n        }\n\n        // 用一个容器包裹每次更新的内容\n        return <div key={index} className=\"update-event py-1 border-b border-gray-200\">{content}</div>;\n      })}\n    </div>\n  );\n};\n```\n\n**6. 注意事项和进一步优化:**\n\n* **错误处理:** 上述代码包含了基本的错误处理，但生产环境需要更细致的处理，例如区分网络错误、服务器错误、JSON 解析错误等。\n* **SSE 解析健壮性:** 手动解析 SSE 流需要仔细处理边界情况，例如事件跨多个 `read()` 调用到达、`retry:` 指令等。可以考虑使用成熟的前端 SSE 客户端库（如果它们支持通过 `fetch` 的 `ReadableStream` 或允许自定义请求方式）。\n* **状态更新频率:** 如果服务器发送更新过于频繁，可能会导致 React 状态更新过多影响性能。可以考虑进行节流 (throttling) 或批处理 (batching) 更新。\n* **`DataPart` 的约定:** 为了让前端能“理解”并友好地展示 `DataPart` 的内容，前后端需要约定好 `data` 字段中可能包含的键名和结构。\n* **中止请求:** 代码中加入了 `AbortController`，允许在用户发起新的请求或离开页面时中止正在进行的 `fetch` 请求和流式读取。\n* **类型安全:** 强烈建议在前端项目中也维护一套与 `core/a2a/types.py` 同步的 TypeScript 接口定义，以获得完整的类型检查好处。"
  },
  {
    "path": "web_for_a2a/README.md",
"content": "# DeepResearch A2A Web UI\n\n## 概述\n\n本项目是一个基于 **Next.js**, **React**, **TypeScript** 和 **Tailwind CSS** 构建的 Web 用户界面 (UI)，旨在与 **DeepResearch A2A (Agent-to-Agent) 服务器** 进行交互。用户可以通过此界面发起深度研究任务，并**实时查看**由服务器通过 Server-Sent Events (SSE) 推送的研究进度更新和最终生成的报告。\n\n这个项目的主要目的是演示如何在现代 Web 前端应用中，使用浏览器原生 API (`fetch`, `ReadableStream`) 来对接和处理符合 A2A 协议规范的流式响应。\n\n## 特性\n\n* **连接 A2A 服务:** 通过 HTTP 与指定的 DeepResearch A2A 服务器通信。\n* **发起研究任务:** 向服务器发送符合 A2A `tasks/sendSubscribe` 规范的请求以启动流式研究任务。\n* **实时流式更新:** 使用 `fetch` API 的 `ReadableStream` 接收并解析来自服务器的 Server-Sent Events (SSE)，实时展示任务进度。\n* **结构化数据显示:** 能够区分并展示 A2A 事件中的 `TextPart` 和 `DataPart`。\n* **最终报告展示:** 在任务完成后，提取并展示最终的研究报告。\n* **基础状态与错误显示:** 提供简单的 UI 反馈，显示任务的当前状态（空闲、进行中、完成、错误）和遇到的问题。\n\n## 技术栈\n\n* **框架:** Next.js (App Router)\n* **UI 库:** React\n* **语言:** TypeScript\n* **样式:** Tailwind CSS\n* **核心 API:** Browser `fetch` API, `ReadableStream`, `TextDecoder`\n* **辅助库:** `uuid` (用于生成客户端 Task ID 示例)\n\n## 目录结构 (相关部分)\n\n```\nmentis/\n└── web_for_a2a/            # Web UI 项目根目录\n    ├── app/                # Next.js App Router 目录\n    │   ├── api/\n    │   │   └── a2a/        # （可选）API Route 代理目录\n    │   │       └── [[...slug]]/\n    │   │           └── route.ts\n    │   ├── deepresearch/   # DeepResearch Agent 的 UI 页面\n    │   │   └── page.tsx    # ★ UI 界面的核心实现文件\n    │   ├── layout.tsx      # 根布局\n    │   └── page.tsx        # 根页面 (可能重定向或包含链接)\n    ├── public/             # 静态资源\n    ├── .env.local          # (可选) 本地环境变量配置文件\n    ├── next.config.js      # Next.js 配置文件 (可能包含代理设置)\n    ├── package.json\n    ├── tailwind.config.ts\n    └── tsconfig.json\n```\n*(★ 表示本文档重点关注的文件)*\n\n## 前提条件\n\n* Node.js (推荐 LTS 版本) 和 npm / yarn / pnpm / uv 等包管理器。\n* **DeepResearch A2A 后端服务器** 必须正在运行，并且其地址可访问（默认为 `http://127.0.0.1:8000`）。\n* 对 React, Next.js, TypeScript 和 `fetch` API 有基本了解。\n\n## 安装与设置\n\n1.  **导航到目录:**\n    ```bash\n    cd mentis/web_for_a2a\n    ```\n2.  
**安装依赖:** (根据你项目使用的包管理器选择)\n    ```bash\n    npm install\n    # yarn install\n    # pnpm install\n    # uv sync\n    ```\n3.  **(可选) 配置后端服务器地址:**\n    * 默认情况下，前端会尝试连接 `http://127.0.0.1:8000`。\n    * 如果你使用了 API Route 代理（如 `/api/a2a`），或者你的 A2A 服务器地址不同，可以在 `web_for_a2a` 目录下创建一个 `.env.local` 文件，并设置环境变量：\n        ```dotenv\n        # .env.local\n        NEXT_PUBLIC_A2A_SERVER_URL=/api/a2a # 指向代理\n        # 或者\n        # NEXT_PUBLIC_A2A_SERVER_URL=http://your-backend-address:port # 直接指向后端\n        ```\n    * **注意:** 环境变量名必须以 `NEXT_PUBLIC_` 开头，才能在浏览器端的代码中访问。`page.tsx` 中的代码 `process.env.NEXT_PUBLIC_A2A_SERVER_URL` 会读取这个值。\n\n## 运行\n\n1.  **确保后端 A2A 服务器已启动。**\n2.  **启动 Next.js 开发服务器:**\n    ```bash\n    npm run dev\n    # yarn dev\n    # pnpm dev\n    # uv run dev (如果配置了脚本)\n    ```\n3.  **访问页面:** 在浏览器中打开 Next.js 应用的地址（通常是 `http://localhost:3000`），并导航到 DeepResearch 页面（例如 `http://localhost:3000/deepresearch`）。\n\n## 使用说明\n\n当前示例 UI 非常简单：\n\n1.  页面加载后，你会看到一个标题和一个按钮。\n2.  点击 **\"开始流式研究 (特斯拉主题)\"** 按钮。\n3.  按钮会变为 \"研究进行中...\" 并禁用。\n4.  页面上的 **\"当前状态\"** 会变为 `streaming`。\n5.  **\"流式内容输出:\"** 区域会开始实时显示从服务器推送过来的进度更新。你会看到 `[状态更新]` 或 `[收到报告片段]` 的标记，后面跟着相应的文本或结构化数据 (JSON 格式)。\n6.  当研究完成或出错时，**\"当前状态\"** 会更新为 `completed` 或 `error`，按钮会重新启用。\n7.  如果成功，最终的**研究报告**会显示在页面底部。\n8.  如果过程中出现错误，错误信息会显示在状态下方。\n\n## 核心实现：处理 A2A 流 (Fetch API + ReadableStream)\n\n这是前端实现中最关键的部分，位于 `app/deepresearch/page.tsx` 的 `startStream` 和 `processStream` 函数中。\n\n**为什么不直接用 `EventSource` API?**\n\n标准的 `EventSource` 浏览器 API 非常适合接收 SSE，但它通常只能发起 `GET` 请求。而 A2A 协议规定启动流式任务 (`tasks/sendSubscribe`) 需要使用 `POST` 请求（因为要传递包含研究主题的 `message` 等参数）。为了在不修改标准 A2A 服务器行为的前提下实现此功能，我们选用了更底层的 `fetch` API。\n\n**`startStream` 函数主要流程:**\n\n1.  **重置状态:** 清空之前的输出、错误，设置状态为 `streaming`。\n2.  **创建 `AbortController`:** 用于在需要时（例如发起新请求或组件卸载）中止当前的 `fetch` 请求。\n3.  **构建请求体:** 创建符合 A2A `tasks/sendSubscribe` 方法要求的 JSON-RPC 请求对象，包含 `method`, `id`, 以及 `params` (内含客户端生成的 `taskId`, `sessionId`, `message` 等)。\n4.  
**发送 `fetch` 请求:**\n    * 使用 `POST` 方法。\n    * 设置 `Content-Type: application/json` 和 `Accept: text/event-stream` 请求头。\n    * 将 JSON-RPC 对象字符串化后作为 `body`。\n    * 传入 `AbortController` 的 `signal`。\n5.  **检查初始响应:**\n    * 确认 `response.ok` (HTTP 状态码 2xx)。\n    * **关键检查:** 确认 `response.headers.get('content-type')` 包含 `text/event-stream`。如果不是，说明服务器未能成功建立 SSE 连接（可能是服务器端错误或未正确返回流类型），此时应抛出错误。\n    * **(调试日志)** 添加了打印所有响应头和 CORS 头 (`access-control-allow-origin`) 的日志，用于诊断连接问题。\n6.  **获取 `ReadableStream`:** 从 `response.body` 获取流式读取器 `reader`。\n7.  **调用 `processStream`:** 将 `reader` 传递给专门处理流的异步函数。\n\n**`processStream` 函数主要流程 (SSE 解析核心):**\n\n1.  **初始化:** 创建 `TextDecoder` 用于将服务器发送的 `Uint8Array` 数据块解码为文本；创建一个 `buffer` 字符串用于处理跨数据块的、不完整的 SSE 消息。\n2.  **循环读取:** 使用 `while (true)` 和 `await reader.read()` 不断读取数据块。\n3.  **解码与缓冲:** 将读取到的 `value` (Uint8Array) 解码并追加到 `buffer`。\n4.  **分割 SSE 事件:** **关键步骤！** SSE 事件由两个连续的换行符 (`\\n\\n`, `\\r\\r`, 或 `\\r\\n\\r\\n`) 分隔。代码使用正则表达式 `/\\r\\n\\r\\n|\\n\\n|\\r\\r/` 来查找并分割出 buffer 中完整的事件字符串 (`eventString`)。未处理完的部分保留在 `buffer` 中供下次 `read()` 后拼接。\n5.  **解析单个 SSE 事件:**\n    * 对每个分割出的 `eventString` 进行处理。\n    * 按行 (`\\n`, `\\r`, `\\r\\n`) 分割事件内部。\n    * 遍历每一行，主要查找以 `data:` 开头的行，提取其后的 JSON 字符串 (`jsonData`)。SSE 事件可能包含多行 `data:`，代码会将其拼接起来。同时也处理 `event:`, `id:`, `retry:` 等标准 SSE 字段（虽然本示例主要关心 `data:`）。\n    * **关键解析:** 使用 `JSON.parse(jsonData)` 将提取到的字符串解析为 JavaScript 对象 (`eventResponse`，预期符合 `SendTaskStreamingResponse` 接口)。\n    * **添加了详细日志:** 在解析前后都打印了原始数据和解析结果，便于调试。\n    * **错误处理:** 如果 `JSON.parse` 失败，会捕获异常，调用 `setError` 更新 UI，并停止处理流。\n6.  **处理解析后的数据:**\n    * 检查 `eventResponse.error`，如果存在则报告错误并停止。\n    * 获取 `eventData = eventResponse.result` (即 `TaskStatusUpdateEvent` 或 `TaskArtifactUpdateEvent`)。\n    * **更新 React 状态:** 调用 `setStreamedContent(prev => [...prev, eventData])` 将新的事件数据添加到状态数组中，这将触发 UI 重新渲染。\n    * **检查结束标志:** 检查 `eventData.final === true`。如果为 `true`，则设置状态为 `completed` 并标记流结束。\n7.  
**循环与退出:** `while` 循环会持续进行，直到 `reader.read()` 返回 `done: true`，或者内部处理（如解析错误、收到 `final: true`）决定中断。\n\n**`useEffect` 处理最终报告:**\n\n* 当 `status` 变为 `'completed'` 时，此 Hook 会运行。\n* 它会反向遍历 `streamedContent` 数组，查找最后一个包含 `artifact` 的事件。\n* 如果找到，则从中提取 `TextPart` 的内容并设置到 `finalReport` 状态，用于在页面底部单独展示完整报告。\n\n## 状态管理\n\n主要使用 `useState` 管理以下关键状态：\n\n* `status`: `'idle' | 'streaming' | 'completed' | 'error' | 'aborted'` - UI 的宏观状态。\n* `streamedContent`: `StreamEventResult[]` - 存储从 SSE 流接收并解析出的所有事件 `result` 对象。\n* `error`: `string | null` - 存储发生的错误信息。\n* `finalReport`: `string | null` - 存储从最终 Artifact 中提取的报告文本。\n\n## 数据展示\n\n* **流式内容输出:** 通过 `.map()` 遍历 `streamedContent` 数组。\n    * 根据每个 `eventData` 是 `TaskStatusUpdateEvent` 还是 `TaskArtifactUpdateEvent` 来决定显示标记（\"[状态更新]\" 或 \"[收到报告片段]\"）。\n    * 再遍历事件中的 `parts` 数组。\n    * 对 `TextPart`，直接显示 `part.text`。\n    * 对 `DataPart`，使用 `<pre>{JSON.stringify(part.data, null, 2)}</pre>` 格式化显示其 `data` 对象。**（优化点：可以根据 `data` 内部约定的字段进行更友好的渲染）**\n* **最终报告:** 当 `finalReport` 有值时，在页面底部使用 `<pre>` 标签展示（可以替换为 Markdown 渲染器）。\n\n## 限制与未来工作\n\n* **UI 基础:** 当前 UI 非常简化，仅用于演示核心流式逻辑。需要构建更完善的组件、布局和样式。\n* **仅流式:** 未包含发送同步任务 (`tasks/send`) 和轮询 (`tasks/get`) 的逻辑。\n* **硬编码主题:** 研究主题是硬编码的，需要改为用户输入。\n* **DataPart 展示:** 当前对 `DataPart` 只是简单显示 JSON，可以根据与后端约定的数据结构进行更丰富的可视化展示。\n* **Markdown 渲染:** 最终报告目前使用 `<pre>` 显示，应替换为真正的 Markdown 渲染组件（如 `react-markdown`）。\n* **错误处理:** 可以进一步细化错误处理和用户提示。\n* **多轮对话/状态保持:** 当前实现不支持需要 Agent 保持状态的多轮对话。\n* **真实推送通知:** 前端未处理 A2A 的推送通知逻辑。\n\n## 后续步骤\n\n1.  **构建更丰富的 UI 组件:** 将输入、状态、进度、报告显示拆分成独立的、样式更美观的 React 组件。\n2.  **美化 `DataPart` 展示:** 根据你和后端约定好的 `DataPart` 结构，更有意义地展示结构化信息，而不是只显示 JSON。\n3.  **实现用户输入:** 将硬编码的研究主题替换为真正的用户输入。\n4.  **添加更完善的错误处理和用户反馈:** 例如，区分不同类型的错误，提供重试按钮等。\n5.  **管理 AbortController:** 确保在组件卸载或发起新请求时，之前的 `fetch` 请求能被正确中止。\n6.  **状态管理库 (可选):** 如果应用变得复杂，可以引入 Zustand, Jotai, Redux 等状态管理库。\n7.  **添加同步任务逻辑:** 如果需要，可以添加调用 `tasks/send` 和轮询 `tasks/get` 的逻辑。"
  },
  {
    "path": "web_for_a2a/app/api/a2a/route.ts",
    "content": "// 文件路径: app/api/a2a/[[...slug]]/route.ts (适用于 App Router)\n// 或 pages/api/a2a/[...slug].ts (适用于 Pages Router, 需 slight modification in handler signature)\n\nimport { type NextRequest, NextResponse } from 'next/server';\nimport { NextApiRequest, NextApiResponse } from 'next'; // For Pages Router\n\n// 后端 A2A 服务器的地址\nconst A2A_BACKEND_URL = process.env.A2A_BACKEND_URL || 'http://127.0.0.1:8000';\n\n// --- App Router Version ---\nexport async function POST(request: NextRequest) {\n  try {\n    // 1. 获取前端请求的 body\n    const body = await request.json();\n    console.log('[API Route] Forwarding POST request to:', A2A_BACKEND_URL);\n    console.log('[API Route] Request Body:', JSON.stringify(body, null, 2));\n\n    // 2. 构造转发到 A2A 后端的请求\n    // 注意： NextRequest.headers 是 Headers 对象, fetch 也接受 Headers 对象\n    // 我们需要筛选或传递合适的 Headers\n    const headersToForward = new Headers();\n    headersToForward.set('Content-Type', 'application/json');\n    // 如果后端需要 Accept 头来决定是否返回 SSE\n    if (body?.method === 'tasks/sendSubscribe') {\n        headersToForward.set('Accept', 'text/event-stream');\n    } else {\n         headersToForward.set('Accept', 'application/json');\n    }\n    // 你可能需要传递其他必要的头，例如 Authorization (如果需要的话)\n    // const authHeader = request.headers.get('Authorization');\n    // if (authHeader) headersToForward.set('Authorization', authHeader);\n\n\n    // 3. 使用 fetch 将请求转发到后端 A2A 服务器\n    const backendResponse = await fetch(A2A_BACKEND_URL, {\n      method: 'POST',\n      headers: headersToForward,\n      body: JSON.stringify(body),\n      // 重要：如果需要流式传输，Node fetch 需要 duplex:'half' (或者它默认支持流)\n      // 对于 Vercel Edge Runtime (默认在 App Router API Routes 中)， fetch 原生支持流\n      // cache: 'no-store', // 确保不缓存\n    });\n\n    console.log(`[API Route] Backend response status: ${backendResponse.status}`);\n    backendResponse.headers.forEach((value, key) => console.log(`[API Route] Backend header: ${key}: ${value}`));\n\n\n    // 4. 
处理后端响应\n    const contentType = backendResponse.headers.get('content-type');\n\n    if (contentType?.includes('text/event-stream') && backendResponse.body) {\n      // 4a. 如果是 SSE 流，将其转发给前端\n      console.log('[API Route] Forwarding SSE stream...');\n      // 创建一个新的 ReadableStream 将后端流转发给前端\n      const stream = new ReadableStream({\n        async start(controller) {\n          const reader = backendResponse.body!.getReader();\n          const decoder = new TextDecoder(); // 用于调试日志\n          try {\n            while (true) {\n              const { done, value } = await reader.read();\n              if (done) {\n                console.log('[API Route] Backend stream ended.');\n                controller.close();\n                break;\n              }\n              const decodedChunk = decoder.decode(value); // 调试用\n              console.log('[API Route] Forwarding stream chunk:', decodedChunk.replace(/\\n/g, '\\\\n'));\n              controller.enqueue(value); // 将原始 Uint8Array 块转发给前端\n            }\n          } catch (error) {\n            console.error('[API Route] Error reading from backend stream:', error);\n            controller.error(error);\n          } finally {\n             // 确保 reader 被释放 (尽管在 done=true 或 error 时通常会自动处理)\n            try {\n                reader.releaseLock();\n            } catch {}\n          }\n        }\n      });\n\n      // 返回带有正确 SSE 头信息的流式响应\n      return new Response(stream, {\n        status: backendResponse.status,\n        headers: {\n          'Content-Type': 'text/event-stream',\n          'Cache-Control': 'no-cache',\n          'Connection': 'keep-alive',\n          // 可以选择性地转发其他必要的后端头信息\n        }\n      });\n\n    } else {\n      // 4b. 
如果是普通 JSON 响应，解析并转发\n      console.log('[API Route] Forwarding JSON response...');\n      const jsonResponse = await backendResponse.json();\n      console.log('[API Route] Backend JSON:', jsonResponse);\n      return NextResponse.json(jsonResponse, { status: backendResponse.status });\n    }\n\n  } catch (error: any) {\n    console.error(\"[API Route] Error in proxy:\", error);\n    return NextResponse.json(\n        { error: 'Proxy error', detail: error.message },\n        { status: 500 }\n    );\n  }\n}\n\n// 可以选择性地添加 GET 处理 /.well-known/agent.json (如果前端也想通过代理获取)\nexport async function GET(request: NextRequest) {\n  const { pathname } = request.nextUrl;\n  if (pathname === '/api/a2a/.well-known/agent.json') {\n     try {\n         const backendResponse = await fetch(`${A2A_BACKEND_URL}/.well-known/agent.json`);\n         if (!backendResponse.ok) { throw new Error(`Backend error: ${backendResponse.status}`)};\n         const data = await backendResponse.json();\n         return NextResponse.json(data);\n     } catch (error: any) {\n         console.error(\"[API Route] Error fetching agent card:\", error);\n         return NextResponse.json({ error: 'Failed to fetch agent card'}, { status: 502 });\n     }\n  }\n   return NextResponse.json({ error: 'Not Found' }, { status: 404 });\n}\n\n// --- Pages Router Version (Alternative) ---\n/*\nimport type { NextApiRequest, NextApiResponse } from 'next';\nimport httpProxyMiddleware from 'next-http-proxy-middleware'; // 需要安装 next-http-proxy-middleware\n\nconst A2A_BACKEND_URL = process.env.A2A_BACKEND_URL || 'http://127.0.0.1:8000';\n\nexport const config = {\n  api: {\n    // 关闭 Next.js 的默认 body 解析，让代理处理\n    bodyParser: false,\n  },\n};\n\n// 使用 next-http-proxy-middleware 处理代理 (更简单，但流式支持可能需要验证)\nconst handler = (req: NextApiRequest, res: NextApiResponse) => {\n    console.log(`[API Route Pages] Forwarding request ${req.method} ${req.url} to ${A2A_BACKEND_URL}`);\n    return httpProxyMiddleware(req, res, {\n        target: 
A2A_BACKEND_URL,\n        // 重写路径，移除 /api/a2a 前缀\n        pathRewrite: [{\n            patternStr: '^/api/a2a',\n            replaceStr: '',\n        }],\n        // 可能需要配置 changeOrigin: true\n        changeOrigin: true,\n        // selfHandleResponse: true, // 可能需要手动处理流式响应头，如果库不支持\n        // onProxyRes: (proxyRes, req, res) => {\n        //    // 如果需要手动处理 SSE 头\n        //   if (proxyRes.headers['content-type']?.includes('text/event-stream')) {\n        //     res.setHeader('Content-Type', 'text/event-stream');\n        //     res.setHeader('Cache-Control', 'no-cache');\n        //     res.setHeader('Connection', 'keep-alive');\n        //     // 可能需要移除或修改其他头\n        //   }\n        // }\n    });\n};\n\nexport default handler;\n*/"
  },
  {
    "path": "web_for_a2a/app/deepresearch/page.tsx",
    "content": "// 文件路径: mentis/web_for_a2a/app/deepresearch/page.tsx\n'use client'; // 标记为客户端组件\n\nimport { useState, useCallback, useRef, useEffect } from 'react';\nimport { v4 as uuidv4 } from 'uuid';\n\n// --- A2A 类型定义 (简化版) ---\ninterface TextPart { type: \"text\"; text: string; }\ninterface DataPart { type: \"data\"; data: Record<string, any>; }\ntype Part = TextPart | DataPart;\ninterface Message { role: \"user\" | \"agent\"; parts: Part[]; }\n// 使用字符串类型来匹配 TaskState 枚举值\ntype TaskStateString = \"submitted\" | \"working\" | \"input-required\" | \"completed\" | \"canceled\" | \"failed\" | \"unknown\";\ninterface TaskStatus { state: TaskStateString | string; message?: Message; } // 允许 string 以防万一\ninterface Artifact { parts: Part[]; index?: number; /* 其他可选字段 */ }\ninterface TaskStatusUpdateEvent { id: string; status: TaskStatus; final: boolean; }\ninterface TaskArtifactUpdateEvent { id:string; artifact: Artifact; final?: boolean; }\ntype StreamEventResult = TaskStatusUpdateEvent | TaskArtifactUpdateEvent;\ninterface JSONRPCError { code: number; message: string; data?: any; }\ninterface SendTaskStreamingResponse {\n    jsonrpc?: \"2.0\";\n    id?: string | number | null;\n    result?: StreamEventResult;\n    error?: JSONRPCError;\n}\n// --- 类型定义结束 ---\n\nconst A2A_SERVER_URL = process.env.NEXT_PUBLIC_A2A_SERVER_URL || 'http://127.0.0.1:8000';\n\nexport default function DeepResearchPage() {\n  // --- 状态管理 ---\n  const [status, setStatus] = useState<'idle' | 'streaming' | 'completed' | 'error' | 'aborted'>('idle');\n  const [streamedContent, setStreamedContent] = useState<StreamEventResult[]>([]);\n  const [error, setError] = useState<string | null>(null);\n  const [finalReport, setFinalReport] = useState<string | null>(null);\n  const abortControllerRef = useRef<AbortController | null>(null);\n\n  // --- 清理函数 ---\n  useEffect(() => {\n    return () => {\n      console.log(\"组件卸载，中止进行中的 fetch 请求...\");\n      abortControllerRef.current?.abort();\n    };\n  }, 
[]);\n\n  // --- Core: kick off the streaming request and process it ---\n  const startStream = useCallback(async () => {\n    console.log(\"[startStream] Initiating stream...\");\n    setStatus('streaming'); setError(null); setStreamedContent([]); setFinalReport(null);\n    if (abortControllerRef.current) { abortControllerRef.current.abort(); }\n    abortControllerRef.current = new AbortController(); const signal = abortControllerRef.current.signal;\n    const taskId = \"webui_deep_research_\" + uuidv4();\n    const research_topic = \"特斯拉电动汽车的市场分析和未来发展趋势\";\n    const message: Message = { role: \"user\", parts: [{ type: \"text\", text: research_topic }] };\n    const payload = { id: taskId, sessionId: \"webui_session_\" + uuidv4(), message: message, acceptedOutputModes: [\"text\"], metadata: { skill_name: \"deep_research\" } };\n    const requestBody = { jsonrpc: \"2.0\", method: \"tasks/sendSubscribe\", id: \"req-\" + taskId, params: payload };\n\n    try {\n      console.log(\"[startStream] Sending request:\", JSON.stringify(requestBody, null, 2));\n      const response = await fetch(A2A_SERVER_URL, { method: 'POST', headers: { 'Content-Type': 'application/json', 'Accept': 'text/event-stream' }, body: JSON.stringify(requestBody), signal: signal });\n      console.log(`[startStream] Initial response status: ${response.status}`);\n      console.log(\"[startStream] Received Response Headers:\"); response.headers.forEach((value, key) => { console.log(`  ${key}: ${value}`); });\n      const corsHeader = response.headers.get(\"access-control-allow-origin\"); console.log(`[startStream] Access-Control-Allow-Origin Header value: ${corsHeader}`);\n      if (!response.ok) { let errorMsg = `HTTP error! 
status: ${response.status}`; try { const errJson = await response.json(); errorMsg = errJson?.error?.message || JSON.stringify(errJson); } catch { errorMsg = `${response.status} ${response.statusText}`; } throw new Error(errorMsg); }\n      const contentType = response.headers.get('content-type'); console.log(`[startStream] Initial response Content-Type: ${contentType}`);\n      if (!contentType || !contentType.includes('text/event-stream')) { let errorMsg = `Expected Content-Type 'text/event-stream', but got '${contentType}'`; try { const errBody = await response.text(); errorMsg += ` - Body: ${errBody}`; } catch {} throw new Error(errorMsg); }\n      const reader = response.body?.getReader(); if (!reader) throw new Error('Failed to get response body reader');\n      console.log(\"[startStream] Got reader, starting stream processing...\");\n      await processStream(reader); // parse the SSE stream\n      setStatus(prevStatus => { if (prevStatus === 'streaming') { console.log(\"[startStream] Stream processing finished without error/final flag, setting status to completed.\"); return 'completed'; } console.log(\"[startStream] Stream processing finished, keeping status:\", prevStatus); return prevStatus; });\n    } catch (err: any) {\n      if (err.name === 'AbortError') { console.log('[startStream] Stream fetch aborted by client.'); setStatus(prevStatus => { if (prevStatus === 'streaming') { setError('请求已中止'); return 'aborted'; } return prevStatus; }); }\n      else { console.error(\"[startStream] Error during request setup or connection:\", err); setError(`请求或连接失败: ${err.message}`); setStatus('error'); }\n    } finally { console.log(\"[startStream] Cleaning up AbortController.\"); abortControllerRef.current = null; }\n  // eslint-disable-next-line react-hooks/exhaustive-deps\n  }, []);\n\n  // --- processStream: read and parse the SSE byte stream ---\n  const processStream = async (reader: ReadableStreamDefaultReader<Uint8Array>) => {\n    const decoder = new TextDecoder();\n    let 
buffer = '';\n    let streamEndedInLoop = false;\n\n    console.log(\"[processStream] Starting stream processing loop.\");\n\n    while (!streamEndedInLoop) {\n      try {\n         console.log(\"[processStream] Waiting for reader.read()...\");\n         const { done, value } = await reader.read();\n         console.log(`[processStream] reader.read() returned: done=${done}, value size=${value?.length}`);\n\n         if (done) {\n             console.log(\"[processStream] Stream finished by reader (done=true).\");\n             streamEndedInLoop = true;\n             break; // explicitly exit the while loop\n         }\n\n         buffer += decoder.decode(value, { stream: true });\n         console.log(`[processStream] Decoded chunk, current buffer size: ${buffer.length}`); // log the buffer size\n\n         // --- Split SSE events with a regex; more robust ---\n         // SSE events are separated by a blank line (\\n\\n, \\r\\r, or \\r\\n\\r\\n)\n         const eventSeparatorRegex = /\\r\\n\\r\\n|\\n\\n|\\r\\r/;\n         let match;\n\n         // Process every complete event currently in the buffer\n         while ((match = eventSeparatorRegex.exec(buffer)) !== null) {\n             const boundaryIndex = match.index;\n             const eventString = buffer.substring(0, boundaryIndex); // extract the event portion\n             buffer = buffer.substring(boundaryIndex + match[0].length); // drop the consumed event and separator\n\n             if (!eventString.trim()) {\n                 console.log(\"[processStream] Skipping empty event string found by regex.\");\n                 continue; // skip empty events\n             }\n\n             console.log('[processStream] Processing raw SSE message:', eventString.replace(/\\n/g, '\\\\n'));\n\n             // An SSE event can span multiple lines (event:, id:, data:, retry:)\n             // We mainly care about the data: lines\n             const lines = eventString.split(/\\r\\n|\\n|\\r/); // split into lines\n             let eventType = 'message'; // default event type\n             let eventDataString = '';\n             let eventId = '';\n\n             for (const line of lines) {\n                 if (line.startsWith('event:')) {\n    
                  eventType = line.substring(6).trim();\n                 } else if (line.startsWith('data:')) {\n                     // Multi-line data fields need to be concatenated\n                     eventDataString += line.substring(5).trim() + \"\\n\"; // newline separates multi-line data\n                 } else if (line.startsWith('id:')) {\n                     eventId = line.substring(3).trim();\n                 } // retry: handling could be added here\n             }\n             eventDataString = eventDataString.trim(); // strip the trailing newline\n\n             // Only process events that actually carry data\n             if (eventDataString) {\n                 console.log(`[processStream] Extracted SSE fields: type=${eventType}, id=${eventId}, data=${eventDataString}`);\n\n                 try {\n                     const eventResponse = JSON.parse(eventDataString) as SendTaskStreamingResponse;\n                     console.log('[processStream] Successfully parsed JSON:', eventResponse);\n\n                     if (eventResponse.error) {\n                         const error = eventResponse.error; console.error(\"[processStream] Received SSE Error from server:\", error);\n                         setError(`流式错误 (来自服务器): Code=${error.code}, Msg=${error.message}`); setStatus('error');\n                         streamEndedInLoop = true; break; // Exit inner processing loop\n                     }\n                     const eventData = eventResponse.result;\n                     if (eventData) {\n                         console.log(\"[processStream] Preparing to call setStreamedContent with:\", eventData);\n                         setStreamedContent(prev => [...prev, eventData]); // Update state\n                         console.log(\"[processStream] Call to setStreamedContent completed.\");\n\n                         if (eventData.final === true) {\n                             console.log(\"[processStream] Final event flag received. 
Setting status to completed.\");\n                             streamEndedInLoop = true; setStatus('completed');\n                             // Let the inner loop finish processing this chunk, outer loop will break\n                         } else {\n                             setStatus(prevStatus => (prevStatus !== 'completed' && prevStatus !== 'error' && prevStatus !== 'aborted') ? 'streaming' : prevStatus);\n                         }\n                     } else { console.log(\"[processStream] Skipping event with no result data.\"); }\n                 } catch (e: any) {\n                     console.error(\"[processStream] Failed to parse SSE JSON data:\", e, \"\\nRaw JSON string was:\", eventDataString);\n                     setError(`解析服务器事件失败: ${e.message}. 收到的数据 (部分): ${eventDataString.substring(0, 150)}...`); setStatus('error');\n                     streamEndedInLoop = true; break; // Exit inner processing loop\n                 }\n             } else {\n                 console.log(\"[processStream] Skipping SSE message with no data field.\");\n             }\n              if (streamEndedInLoop) break; // Exit inner processing loop if needed\n         } // end while match = regex.exec(buffer)\n\n          if (streamEndedInLoop) break; // Exit outer while if needed\n\n      } catch (readError: any) {\n           console.error(\"[processStream] Error reading from stream:\", readError);\n           if (readError.name !== 'AbortError') { setError(`读取流错误: ${readError.message}`); setStatus('error'); }\n           else { console.log(\"[processStream] Stream reading aborted by client.\"); setStatus('aborted'); }\n           streamEndedInLoop = true; break; // Exit outer while\n      }\n    } // end while (!streamEndedInLoop)\n    console.log(\"[processStream] Exited stream processing loop.\");\n  }; // end processStream\n\n  // --- useEffect: extract the final report once the stream completes ---\n  useEffect(() => {\n    if (status === 'completed' && streamedContent.length > 0) {\n      
console.log(\"[useEffect] Status is completed, processing final report from streamedContent.\");\n      const finalArtifactEvent = [...streamedContent].reverse().find(ev => ev && 'artifact' in ev) as TaskArtifactUpdateEvent | undefined;\n      if (finalArtifactEvent?.artifact?.parts) {\n        const reportPart = finalArtifactEvent.artifact.parts.find(p => p.type === 'text') as TextPart | undefined;\n        if (reportPart) { console.log(\"[useEffect] Found final report text in artifact.\"); setFinalReport(reportPart.text); }\n        else { console.log(\"[useEffect] Completed, but no text part found in final artifact event.\"); }\n      } else {\n           console.log(\"[useEffect] Completed, but no artifact event found or artifact has no parts.\");\n           const lastStatusEvent = [...streamedContent].reverse().find(ev => ev && 'status' in ev) as TaskStatusUpdateEvent | undefined;\n           if (lastStatusEvent?.status?.message?.parts) {\n                const reportPart = lastStatusEvent.status.message.parts.find(p => p.type === 'text') as TextPart | undefined;\n                 if (reportPart) { console.warn(\"[useEffect] No artifact found, using text from last status update as final report (fallback).\"); setFinalReport(reportPart.text); }\n           }\n      }\n    }\n  }, [status, streamedContent]);\n\n  // --- UI 渲染 (保持不变) ---\n  return (\n    <div className=\"container mx-auto p-4 font-sans\">\n      {/* ... (JSX 代码同上一版本) ... */}\n      <h1 className=\"text-2xl font-bold mb-4\">DeepResearch A2A 流式客户端 (带调试日志 v2)</h1>\n      <button onClick={startStream} disabled={status === 'streaming'} className=\"px-4 py-2 bg-blue-500 text-white rounded hover:bg-blue-600 disabled:bg-gray-400\">\n        {status === 'streaming' ? '研究进行中...' : '开始流式研究 (特斯拉主题)'}\n      </button>\n      <div className=\"mt-4\">\n        <p><strong>当前状态:</strong> <span className={`font-semibold ${status === 'error' ? 'text-red-500' : status === 'completed' ? 
'text-green-600': status === 'aborted' ? 'text-yellow-700' : 'text-blue-600'}`}>{status}</span></p>\n        {error && <p className=\"text-red-500 mt-2\"><strong>错误:</strong> {error}</p>}\n      </div>\n      <h2 className=\"text-xl font-semibold mt-6 mb-2\">流式内容输出:</h2>\n      <div className=\"stream-output p-4 border rounded bg-gray-100 min-h-[200px] max-h-[500px] overflow-y-auto text-sm font-mono\">\n        {streamedContent.length === 0 && status !== 'streaming' && status !== 'error' && status !== 'aborted' && <p className=\"text-gray-500\">尚未接收到流式内容。</p>}\n        {streamedContent.map((eventData, index) => {\n          let displayContent: React.ReactNode = null; let parts: Part[] | undefined = undefined;\n          if (eventData && 'status' in eventData && eventData.status?.message?.parts) { parts = eventData.status.message.parts; displayContent = <span className=\"text-blue-700\">[状态更新]</span>; }\n          else if (eventData && 'artifact' in eventData && eventData.artifact?.parts) { parts = eventData.artifact.parts; displayContent = <span className=\"text-green-700\">[收到报告片段]</span>; }\n          if (parts) { displayContent = (<>{displayContent}{\" \"}{parts.map((part, pIdx) => { if (part.type === 'text') {return <span key={pIdx}>{part.text}</span>;} else if (part.type === 'data') {return <pre key={pIdx} className=\"text-xs bg-gray-200 p-1 my-1 rounded overflow-x-auto\">{JSON.stringify(part.data, null, 2)}</pre>;} return null; })}</>); }\n          else if (typeof eventData === 'object' && eventData !== null) { displayContent = <pre className=\"text-xs text-gray-500\">{JSON.stringify(eventData, null, 2)}</pre>; }\n          else { displayContent = <span className=\"text-xs text-red-500\">未知事件: {String(eventData)}</span>;}\n          return <div key={index} className=\"py-1 border-b border-gray-300\">{displayContent}</div>;\n        })}\n        {status === 'streaming' && <p className=\"text-gray-500 mt-2 animate-pulse\">等待服务器事件...</p>}\n        {status === 
'completed' && !finalReport && <p className=\"text-yellow-600 font-bold mt-2\">流处理完成，但未找到最终报告 Artifact。</p>}\n        {status === 'error' && <p className=\"text-red-700 font-bold mt-2\">流处理因错误终止。</p>}\n        {status === 'aborted' && <p className=\"text-yellow-700 font-bold mt-2\">流处理已中止。</p>}\n      </div>\n       {finalReport && (\n            <>\n                <h2 className=\"text-xl font-semibold mt-6 mb-2\">最终报告:</h2>\n                <div className=\"final-report p-4 border rounded bg-white prose max-w-none\"> <pre className=\"whitespace-pre-wrap text-sm\">{finalReport}</pre> </div>\n                {status === 'completed' && <p className=\"text-green-700 font-bold mt-2\">任务已成功完成。</p>}\n            </>\n       )}\n    </div>\n  );\n}"
  },
  {
    "path": "web_for_a2a/app/globals.css",
    "content": "@tailwind base;\n@tailwind components;\n@tailwind utilities;\n\n:root {\n  --foreground-rgb: 0, 0, 0;\n  --background-rgb: 255, 255, 255;\n}\n\nbody {\n  color: rgb(var(--foreground-rgb));\n  background: rgb(var(--background-rgb));\n}\n\n.prose {\n  max-width: 65ch;\n  color: inherit;\n}\n\n.prose pre {\n  background-color: #f3f4f6;\n  border-radius: 0.375rem;\n  padding: 0.75rem;\n  overflow-x: auto;\n}"
  },
  {
    "path": "web_for_a2a/app/layout.tsx",
    "content": "import './globals.css';\nimport type { Metadata } from 'next';\n\nexport const metadata: Metadata = {\n  title: 'DeepResearch A2A Web Client',\n  description: '基于Next.js的DeepResearch A2A流式客户端',\n};\n\nexport default function RootLayout({\n  children,\n}: {\n  children: React.ReactNode;\n}) {\n  return (\n    <html lang=\"zh\">\n      <body>\n        {children}\n      </body>\n    </html>\n  );\n}"
  },
  {
    "path": "web_for_a2a/app/page.tsx",
    "content": "'use client';\n\nimport Link from 'next/link';\n\nexport default function Home() {\n  return (\n    <div className=\"container mx-auto p-8\">\n      <h1 className=\"text-3xl font-bold mb-6\">DeepResearch A2A Web 客户端</h1>\n      \n      <div className=\"bg-white shadow-md rounded-lg p-6 mb-6\">\n        <h2 className=\"text-xl font-semibold mb-4\">功能介绍</h2>\n        <p className=\"mb-4\">\n          这是一个基于 Next.js 和 React 构建的 Web 客户端，用于连接 DeepResearch A2A 服务器并展示流式研究结果。\n          通过 Server-Sent Events (SSE) 技术，可以实时接收和显示研究进度和最终报告。\n        </p>\n        <p className=\"mb-4\">\n          本示例演示了如何从前端 Web 应用连接到 DeepResearch A2A 服务器 (<code>tasks/sendSubscribe</code> 端点)，\n          并接收、解析、显示 SSE 流。\n        </p>\n      </div>\n\n      <div className=\"bg-blue-50 border border-blue-200 rounded-lg p-6 mb-6\">\n        <h2 className=\"text-xl font-semibold mb-4\">使用前提</h2>\n        <ul className=\"list-disc pl-6 space-y-2\">\n          <li>\n            确保 <code>super_agents/deep_research/a2a_adapter/run_server.py</code> 启动的服务器正在运行在 \n            <code>http://127.0.0.1:8000</code> (或相应的地址)。\n          </li>\n          <li>\n            当前示例使用硬编码的研究主题 \"特斯拉电动汽车的市场分析和未来发展趋势\"。\n          </li>\n        </ul>\n      </div>\n\n      <Link \n        href=\"/deepresearch\" \n        className=\"inline-block px-6 py-3 bg-blue-600 text-white font-medium rounded-lg hover:bg-blue-700 transition-colors\"\n      >\n        进入 DeepResearch 示例页面\n      </Link>\n    </div>\n  );\n}"
  },
  {
    "path": "web_for_a2a/package.json",
    "content": "{\n  \"name\": \"web_for_a2a\",\n  \"version\": \"0.1.0\",\n  \"private\": true,\n  \"scripts\": {\n    \"dev\": \"next dev\",\n    \"build\": \"next build\",\n    \"start\": \"next start\",\n    \"lint\": \"next lint\"\n  },\n  \"dependencies\": {\n    \"next\": \"^14.0.0\",\n    \"react\": \"^18.2.0\",\n    \"react-dom\": \"^18.2.0\",\n    \"uuid\": \"^9.0.1\",\n    \"typescript\": \"^5.2.2\",\n    \"@types/node\": \"^20.8.9\",\n    \"@types/react\": \"^18.2.33\",\n    \"@types/react-dom\": \"^18.2.14\",\n    \"@types/uuid\": \"^9.0.6\",\n    \"autoprefixer\": \"^10.4.16\",\n    \"postcss\": \"^8.4.31\",\n    \"tailwindcss\": \"^3.3.5\"\n  }\n}"
  },
  {
    "path": "web_for_a2a/postcss.config.js",
    "content": "module.exports = {\n  plugins: {\n    tailwindcss: {},\n    autoprefixer: {},\n  },\n};"
  },
  {
    "path": "web_for_a2a/tailwind.config.js",
    "content": "/** @type {import('tailwindcss').Config} */\nmodule.exports = {\n  content: [\n    './pages/**/*.{js,ts,jsx,tsx,mdx}',\n    './components/**/*.{js,ts,jsx,tsx,mdx}',\n    './app/**/*.{js,ts,jsx,tsx,mdx}',\n  ],\n  theme: {\n    extend: {},\n  },\n  plugins: [],\n};"
  },
  {
    "path": "web_for_a2a/tsconfig.json",
    "content": "{\n  \"compilerOptions\": {\n    \"lib\": [\n      \"dom\",\n      \"dom.iterable\",\n      \"esnext\"\n    ],\n    \"allowJs\": true,\n    \"skipLibCheck\": true,\n    \"strict\": false,\n    \"noEmit\": true,\n    \"incremental\": true,\n    \"module\": \"esnext\",\n    \"esModuleInterop\": true,\n    \"moduleResolution\": \"node\",\n    \"resolveJsonModule\": true,\n    \"isolatedModules\": true,\n    \"jsx\": \"preserve\",\n    \"plugins\": [\n      {\n        \"name\": \"next\"\n      }\n    ]\n  },\n  \"include\": [\n    \"next-env.d.ts\",\n    \".next/types/**/*.ts\",\n    \"**/*.ts\",\n    \"**/*.tsx\"\n  ],\n  \"exclude\": [\n    \"node_modules\"\n  ]\n}\n"
  }
]