[
  {
    "path": ".github/workflows/update-upstream.yml",
    "content": "name: Update Upstream Dependencies\n\non:\n  schedule:\n    # 每天 UTC 02:00 检查更新 (北京时间上午10点)\n    - cron: '0 2 * * *'\n  workflow_dispatch: # 允许手动触发\n\njobs:\n  check-updates:\n    runs-on: ubuntu-latest\n    permissions:\n      contents: write\n      pull-requests: write\n    \n    steps:\n    - name: Checkout repository\n      uses: actions/checkout@v4\n      \n    - name: Set up Python\n      uses: actions/setup-python@v4\n      with:\n        python-version: '3.11'\n        \n    - name: Install uv\n      uses: astral-sh/setup-uv@v4\n      \n    - name: Get current gemini-webapi version\n      id: current_version\n      run: |\n        current_version=$(grep -o 'gemini-webapi>=[0-9]\\+\\.[0-9]\\+\\.[0-9]\\+' pyproject.toml | sed 's/gemini-webapi>=//')\n        echo \"current=$current_version\" >> $GITHUB_OUTPUT\n        echo \"Current version: $current_version\"\n        \n    - name: Check latest upstream release\n      id: latest_version\n      run: |\n        latest_version=$(curl -s \"https://api.github.com/repos/HanaokaYuzu/Gemini-API/releases/latest\" | python3 -c \"import sys, json; print(json.load(sys.stdin)['tag_name'].lstrip('v'))\")\n        echo \"latest=$latest_version\" >> $GITHUB_OUTPUT\n        echo \"Latest version: $latest_version\"\n        \n    - name: Compare versions\n      id: version_check\n      run: |\n        current=\"${{ steps.current_version.outputs.current }}\"\n        latest=\"${{ steps.latest_version.outputs.latest }}\"\n        \n        if [ \"$current\" != \"$latest\" ]; then\n          echo \"needs_update=true\" >> $GITHUB_OUTPUT\n          echo \"Version update needed: $current -> $latest\"\n        else\n          echo \"needs_update=false\" >> $GITHUB_OUTPUT\n          echo \"Already up to date: $current\"\n        fi\n        \n    - name: Update pyproject.toml\n      if: steps.version_check.outputs.needs_update == 'true'\n      run: |\n        current=\"${{ steps.current_version.outputs.current }}\"\n   
     latest=\"${{ steps.latest_version.outputs.latest }}\"\n        \n        # 更新 pyproject.toml 中的版本号\n        sed -i \"s/gemini-webapi>=$current/gemini-webapi>=$latest/g\" pyproject.toml\n        \n        echo \"Updated gemini-webapi version from $current to $latest\"\n        \n    - name: Update lock file\n      if: steps.version_check.outputs.needs_update == 'true'\n      run: |\n        uv lock --upgrade-package gemini-webapi\n        \n    - name: Test installation\n      if: steps.version_check.outputs.needs_update == 'true'\n      run: |\n        uv sync\n        uv run python -c \"import gemini_webapi; print('gemini-webapi imported successfully')\"\n        \n    - name: Run linting\n      if: steps.version_check.outputs.needs_update == 'true'\n      run: |\n        uv run ruff check .\n        uv run ruff format --check .\n        \n    - name: Create Pull Request\n      if: steps.version_check.outputs.needs_update == 'true'\n      uses: peter-evans/create-pull-request@v6\n      with:\n        token: ${{ secrets.GITHUB_TOKEN }}\n        commit-message: |\n          ✨ feat: 升级上游版本 gemini-webapi 至 v${{ steps.latest_version.outputs.latest }}\n          \n          - 自动更新 gemini-webapi 从 v${{ steps.current_version.outputs.current }} 到 v${{ steps.latest_version.outputs.latest }}\n          - 更新 uv.lock 文件\n          - 验证安装和代码格式\n        title: '⬆️ 自动更新上游依赖: gemini-webapi v${{ steps.latest_version.outputs.latest }}'\n        body: |\n          ## 🔄 自动上游版本更新\n          \n          此 PR 自动更新了上游依赖版本：\n          \n          - **gemini-webapi**: `${{ steps.current_version.outputs.current }}` → `${{ steps.latest_version.outputs.latest }}`\n          - **上游发布页面**: https://github.com/HanaokaYuzu/Gemini-API/releases/tag/v${{ steps.latest_version.outputs.latest }}\n          \n          ### ✅ 自动验证完成\n          \n          - [x] 依赖安装测试通过\n          - [x] 代码格式检查通过\n          - [x] uv.lock 文件已更新\n          \n          ### 📋 手动检查清单\n          \n          在合并此 PR 前，请确认：\n  
        \n          - [ ] 查看上游更改日志，确认无破坏性变更\n          - [ ] 本地测试 API 功能正常\n          - [ ] 确认新版本兼容现有功能\n          \n          ---\n          \n          🤖 此 PR 由 GitHub Actions 自动生成\n        branch: auto-update/gemini-webapi-v${{ steps.latest_version.outputs.latest }}\n        delete-branch: true\n        draft: false\n        labels: |\n          dependencies\n          enhancement\n          automated"
  },
  {
    "path": ".gitignore",
    "content": ".python-version\n.idea\n.venv\nuv.lock\n.env\n__pycache__\n.cursor\n.ruff_cache\n"
  },
  {
    "path": "CLAUDE.md",
    "content": "# CLAUDE.md\n\nThis file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.\n\n## Architecture Overview\n\nThis is a FastAPI-based server that provides OpenAI-compatible API endpoints for Google's Gemini AI model via the `gemini-webapi` library. The server acts as a bridge, translating OpenAI API requests to Gemini API calls.\n\n### Core Components\n\n- **main.py**: Single-file application containing all API endpoints, authentication, and request handling\n- **Authentication**: Uses Gemini cookies (`SECURE_1PSID`, `SECURE_1PSIDTS`) for Gemini API access and optional `API_KEY` for server authentication\n- **API Endpoints**:\n  - `GET /`: Health check endpoint\n  - `GET /v1/models`: Lists available Gemini models in OpenAI format\n  - `POST /v1/chat/completions`: Main chat completion endpoint (supports streaming)\n\n### Key Features\n\n- OpenAI-compatible chat completions API\n- Streaming response support\n- Image processing (base64 encoded images via temporary files)\n- Markdown link correction for Google search results\n- CORS enabled for web clients\n- Docker containerization with uv package manager\n\n## Coding Conventions\n\nWhen making changes to this codebase, please adhere to the following principles:\n\n- **Keep It Simple, Stupid (KISS):** Write code that is simple, straightforward, and easy to understand. Avoid introducing unnecessary complexity.\n- **Don't Repeat Yourself (DRY):** Instead of duplicating code for similar functionalities, create generic, reusable functions. A good example is the `initProviderFilter` function in `scripts/settings.js`, which handles filtering logic for multiple providers in a unified way.\n- **Centralize Configuration:** Group related configurations together to make the code easier to maintain and extend. 
For instance, the environment-variable configuration block near the top of `main.py` centralizes all runtime settings (cookies, API key, and feature flags), making it easy to add new ones in the future.\n\n## Development Commands\n\n### Environment Setup\n```bash\n# Install dependencies with uv (recommended)\nuv sync\n\n# Or with pip\npip install fastapi uvicorn gemini-webapi httpx h2 pillow numpy\n\n# Set up environment variables (copy from example)\ncp .env.example .env\n# Edit .env with your Gemini credentials\n```\n\n### Running the Server\n```bash\n# Development server with auto-reload\nuvicorn main:app --reload --host 127.0.0.1 --port 8000\n\n# Production server\nuvicorn main:app --host 0.0.0.0 --port 8000\n\n# Using uv\nuv run uvicorn main:app --reload --host 127.0.0.1 --port 8000\n```\n\n### Code Quality\n```bash\n# Lint and format with ruff\nruff check .\nruff format .\n```\n\n### Docker Commands\n```bash\n# Build and run with docker-compose\ndocker-compose up -d\n\n# View logs\ndocker-compose logs\n\n# Rebuild and restart\ndocker-compose up -d --build\n\n# Stop services\ndocker-compose down\n```\n\n## Configuration\n\n### Environment Variables\n- `SECURE_1PSID`: Gemini cookie for authentication (obtained from browser dev tools)\n- `SECURE_1PSIDTS`: Gemini cookie timestamp for authentication\n- `API_KEY`: Optional server authentication key\n- `ENABLE_THINKING`: Optional boolean to enable thinking content in responses (default: false)\n- `TEMPORARY_CHAT`: Optional boolean to use temporary chats, which disables some features such as thinking and image generation (default: false)\n- `AUTO_DELETE_CHAT`: Optional boolean to delete conversations from the web UI after generation (default: true; ignored when `TEMPORARY_CHAT` is true)\n- `PUBLIC_BASE_URL`: Optional external base URL used to build image proxy links (required behind a reverse proxy)\n\n### Code Style\n- Uses ruff for linting and formatting\n- Line length: 150 characters\n- Tab-based indentation\n- Double quotes for strings\n- Ignores E501 (line length warnings due to custom 150 char limit)\n\n## Model Mapping\n\nThe server maps OpenAI model names to Gemini models through the `map_model_name()` function. It supports fuzzy matching and falls back to sensible defaults based on keywords (pro, flash, vision, etc.).\n\n## Request Flow\n\n1. Client sends OpenAI-compatible request to `/v1/chat/completions`\n2. 
Server authenticates using optional API_KEY\n3. Messages are converted from OpenAI format to conversation string\n4. Images are decoded from base64 and saved to temporary files\n5. Request is sent to Gemini via `gemini-webapi`\n6. Response is processed, markdown corrected, and returned in OpenAI format\n7. Temporary files are cleaned up"
  },
  {
    "path": "Dockerfile",
    "content": "FROM ghcr.io/astral-sh/uv:python3.12-bookworm-slim\n\nWORKDIR /app\n\n# Install dependencies\nCOPY pyproject.toml .\nRUN uv sync\n\n# Copy application code\nCOPY main.py .\nCOPY assets/ assets/\n\n# Expose the port the app runs on\nEXPOSE 8000\n\n# Command to run the application\nCMD [\"uv\", \"run\", \"uvicorn\", \"main:app\", \"--host\", \"0.0.0.0\", \"--port\", \"8000\"]"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2025 RrOrange\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# Gemi2Api-Server\n[HanaokaYuzu / Gemini-API](https://github.com/HanaokaYuzu/Gemini-API) 的服务端简单实现\n\n[![pE79pPf.png](https://s21.ax1x.com/2025/04/28/pE79pPf.png)](https://imgse.com/i/pE79pPf)\n\n## 快捷部署\n\n### Render\n\n[![Deploy to Render](https://render.com/images/deploy-to-render-button.svg)](https://render.com/deploy?repo=https://github.com/zhiyu1998/Gemi2Api-Server)\n\n### HuggingFace（由佬友@qqrr部署）\n\n[![Deploy to HuggingFace](https://img.shields.io/badge/%E7%82%B9%E5%87%BB%E9%83%A8%E7%BD%B2-%F0%9F%A4%97-fff)](https://huggingface.co/spaces/ykl45/gmn2a)\n\n## 直接运行\n\n0. 填入 `SECURE_1PSID` 和 `SECURE_1PSIDTS`（登录 Gemini 在浏览器开发工具中查找 Cookie），有必要的话可以填写 `API_KEY`\n```properties\nSECURE_1PSID = \"COOKIE VALUE HERE\"\nSECURE_1PSIDTS = \"COOKIE VALUE HERE\"\nAPI_KEY= \"API_KEY VALUE HERE\"\nTEMPORARY_CHAT = \"false\" # 使用临时对话模式，此模式会禁用部分功能如思考、图片生成等，默认关闭。\nAUTO_DELETE_CHAT = \"true\" # 生成结束后自动从web端删除对话记录，默认开启。TEMPORARY_CHAT为true时，此项无效。\nPUBLIC_BASE_URL = \"https://your-domain.com\" # 外部URL，用于生成图片代理链接，不填则会使用内部地址。使用反向代理时必填，否则可能导致图片无法访问。\n```\n1. `uv` 安装一下依赖\n> uv init\n> \n> uv add fastapi uvicorn gemini-webapi httpx h2\n\n> [!NOTE]  \n> 如果存在`pyproject.toml` 那么就使用下面的命令：  \n> uv sync\n\n或者 `pip` 也可以\n\n> pip install fastapi uvicorn gemini-webapi httpx h2\n\n2. 激活一下环境\n> source venv/bin/activate\n\n3. 启动\n> uvicorn main:app --reload --host 127.0.0.1 --port 8000\n\n> [!WARNING] \n> tips: 如果不填写 API_KEY ，那么就直接使用\n\n## 使用Docker运行（推荐）\n\n### 快速开始\n\n1. 克隆本项目\n   ```bash\n   git clone https://github.com/zhiyu1998/Gemi2Api-Server.git\n   ```\n\n2. 创建 `.env` 文件并填入你的 Gemini Cookie 凭据:\n   ```bash\n   cp .env.example .env\n   # 用编辑器打开 .env 文件，填入你的 Cookie 值\n   ```\n\n3. 启动服务:\n   ```bash\n   docker-compose up -d\n   ```\n\n4. 
服务将在 http://0.0.0.0:8000 上运行\n\n### 其他 Docker 命令\n\n```bash\n# 查看日志\ndocker-compose logs\n\n# 重启服务\ndocker-compose restart\n\n# 停止服务\ndocker-compose down\n\n# 重新构建并启动\ndocker-compose up -d --build\n```\n\n## API端点\n\n- `GET /`: 服务状态检查\n- `GET /v1/models`: 获取可用模型列表\n- `POST /v1/chat/completions`: 与模型聊天 (类似OpenAI接口)\n- `GET /gemini-proxy/image`: 图片代理接口（有生成图片需求时，需要保证此端点可直接访问，如果使用反向代理则需要填写`PUBLIC_BASE_URL`环境变量）\n\n## 常见问题\n\n### 服务器报 500 问题解决方案\n\n500 的问题一般是 IP 不太行 或者 请求太频繁（后者等待一段时间或者重新新建一个隐身标签登录一下重新给 Secure_1PSID 和 Secure_1PSIDTS 即可），见 issue：\n- [__Secure-1PSIDTS · Issue #6 · HanaokaYuzu/Gemini-API](https://github.com/HanaokaYuzu/Gemini-API/issues/6)\n- [Failed to initialize client. SECURE_1PSIDTS could get expired frequently · Issue #72 · HanaokaYuzu/Gemini-API](https://github.com/HanaokaYuzu/Gemini-API/issues/72)\n\n解决步骤：\n1. 使用隐身标签访问 [Google Gemini](https://gemini.google.com/) 并登录\n2. 打开浏览器开发工具 (F12)\n3. 切换到 \"Application\" 或 \"应用程序\" 标签\n4. 在左侧找到 \"Cookies\" > \"gemini.google.com\"\n5. 复制 `__Secure-1PSID` 和 `__Secure-1PSIDTS` 的值\n6. 更新 `.env` 文件\n7. 重新构建并启动: `docker-compose up -d --build`\n\n## 致谢\n\n- 图片去水印算法基于 [journey-ad/gemini-watermark-remover](https://github.com/journey-ad/gemini-watermark-remover)以及[allenk/GeminiWatermarkTool](https://github.com/allenk/GeminiWatermarkTool)实现，并直接使用了其中的两张png图片。\n\n## 贡献\n\n同时感谢以下开发者对 `Gemi2Api-Server` 作出的贡献：\n\n<a href=\"https://github.com/zhiyu1998/Gemi2Api-Server/graphs/contributors\">\n  <img src=\"https://contrib.rocks/image?repo=zhiyu1998/Gemi2Api-Server&max=1000\" />\n</a>"
  },
  {
    "path": "docker-compose.yml",
    "content": "version: \"3\"\n\nservices:\n  gemini-api:\n    build: .\n    ports:\n      - \"8000:8000\"\n    volumes:\n      - ./main.py:/app/main.py\n      - ./pyproject.toml:/app/pyproject.toml\n      - ./secrets:/app/secrets\n    env_file:\n      - .env\n    restart: unless-stopped\n"
  },
  {
    "path": "main.py",
    "content": "import asyncio\nimport base64\nimport hashlib\nimport hmac\nimport importlib.metadata\nimport io\nimport json\nimport logging\nimport os\nimport re\nimport secrets\nimport tempfile\nimport time\nimport uuid\nfrom contextlib import asynccontextmanager\nfrom datetime import datetime, timezone\nfrom pathlib import Path\nfrom typing import Dict, List, Optional, Union\nfrom urllib.parse import quote, urlparse\n\nimport httpx\nimport numpy as np\nfrom fastapi import Depends, FastAPI, Header, HTTPException, Request, Response\nfrom fastapi.middleware.cors import CORSMiddleware\nfrom fastapi.responses import JSONResponse, StreamingResponse\nfrom gemini_webapi import GeminiClient, set_log_level\nfrom gemini_webapi.constants import Model\nfrom PIL import Image\nfrom pydantic import BaseModel\n\n# Configure logging\nlogging.basicConfig(level=logging.INFO)\nlogger = logging.getLogger(__name__)\nset_log_level(\"INFO\")\n\ngemini_client = None\ngemini_client_lock = asyncio.Lock()\n\n\n@asynccontextmanager\nasync def lifespan(app: FastAPI):\n\t\"\"\"Initialize the Gemini client during startup and close it on shutdown.\"\"\"\n\tawait get_gemini_client()\n\ttry:\n\t\tyield\n\tfinally:\n\t\tglobal gemini_client\n\t\tif gemini_client is not None:\n\t\t\ttry:\n\t\t\t\tawait gemini_client.close()\n\t\t\texcept Exception as e:\n\t\t\t\tlogger.warning(f\"Failed to close Gemini client during shutdown: {e}\")\n\t\t\tfinally:\n\t\t\t\tgemini_client = None\n\n\napp = FastAPI(title=\"Gemini API FastAPI Server\", lifespan=lifespan)\n\n\ndef get_gemini_webapi_version() -> str:\n\t\"\"\"Return the installed gemini-webapi package version for runtime diagnostics.\"\"\"\n\ttry:\n\t\treturn importlib.metadata.version(\"gemini-webapi\")\n\texcept importlib.metadata.PackageNotFoundError:\n\t\treturn \"unknown\"\n\n\ndef get_cached_1psidts_path(psid: str) -> str:\n\t\"\"\"Return the cache path for a rotated 1PSIDTS value.\"\"\"\n\tif not psid or not re.match(\"^[\\\\w\\\\-\\\\.]+$\", 
psid):\n\t\treturn \"\"\n\treturn os.path.join(GEMINI_COOKIE_PATH, f\".cached_1psidts_{psid}.txt\")\n\n\ndef load_cached_1psidts(psid: str) -> str:\n\t\"\"\"Load a cached rotated 1PSIDTS value for the given 1PSID.\"\"\"\n\tcached_file_path = get_cached_1psidts_path(psid)\n\tif not cached_file_path:\n\t\treturn \"\"\n\n\tif os.path.exists(cached_file_path):\n\t\ttry:\n\t\t\tcontent = Path(cached_file_path).read_text().strip()\n\t\t\tif content:\n\t\t\t\treturn content\n\t\texcept Exception as e:\n\t\t\tlogger.warning(f\"Error reading cache file {cached_file_path}: {e}\")\n\n\treturn \"\"\n\n\ndef get_cookie_value(cookies, name: str) -> str:\n\t\"\"\"Safely read a cookie value from an httpx cookie jar or mapping.\"\"\"\n\tif not cookies:\n\t\treturn \"\"\n\n\tfor domain in (\".google.com\", \".googleusercontent.com\", None):\n\t\ttry:\n\t\t\tvalue = cookies.get(name, domain=domain) if domain is not None else cookies.get(name)\n\t\texcept TypeError:\n\t\t\tvalue = cookies.get(name)\n\t\texcept Exception:\n\t\t\tvalue = \"\"\n\n\t\tif value:\n\t\t\treturn value\n\n\treturn \"\"\n\n\n# Add CORS middleware\napp.add_middleware(\n\tCORSMiddleware,\n\tallow_origins=[\"*\"],\n\tallow_credentials=True,\n\tallow_methods=[\"*\"],\n\tallow_headers=[\"*\"],\n)\n\n# Authentication credentials\nSECURE_1PSID = os.environ.get(\"SECURE_1PSID\", \"\")\nSECURE_1PSIDTS = os.environ.get(\"SECURE_1PSIDTS\", \"\")\nAPI_KEY = os.environ.get(\"API_KEY\", \"\")\nENABLE_THINKING = os.environ.get(\"ENABLE_THINKING\", \"false\").lower() == \"true\"\nTEMPORARY_CHAT = os.environ.get(\"TEMPORARY_CHAT\", \"false\").lower() == \"true\"\nAUTO_DELETE_CHAT = os.environ.get(\"AUTO_DELETE_CHAT\", \"true\").lower() == \"true\" and not TEMPORARY_CHAT\nPUBLIC_BASE_URL = os.environ.get(\"PUBLIC_BASE_URL\", \"\").rstrip(\"/\")\nSECRET_FILE_PATH = os.path.join(os.path.dirname(__file__), \"secrets\", \"proxy_secret\")\nGEMINI_COOKIE_PATH = os.path.join(os.path.dirname(__file__), 
\"secrets\")\nSESSION_VALIDATION_PROMPT = \"Reply with exactly OK.\"\nAUTH_FAILURE_TEXT_PATTERNS = (\n\t\"are you signed in\",\n\t\"sign in\",\n\t\"signed in\",\n\t\"log in\",\n\t\"logged in\",\n)\nDEFAULT_USER_AGENT = \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/144.0.0.0 Safari/537.36 Edg/144.0.0.0\"\n\nos.environ.setdefault(\"GEMINI_COOKIE_PATH\", GEMINI_COOKIE_PATH)\n\n\nasync def background_delete_chat(client: GeminiClient, cid: str):\n\t\"\"\"Deletes a chat conversation in the background to avoid blocking the main thread.\"\"\"\n\tif not cid:\n\t\treturn\n\ttry:\n\t\tawait client.delete_chat(cid)\n\texcept Exception as e:\n\t\tlogger.error(f\"Failed to auto-delete chat {cid}: {e}\")\n\n\ndef response_indicates_auth_failure(text: str) -> bool:\n\t\"\"\"Return True if the response text looks like a signed-out or degraded session.\"\"\"\n\tnormalized = (text or \"\").strip().lower()\n\tif not normalized:\n\t\treturn True\n\treturn any(pattern in normalized for pattern in AUTH_FAILURE_TEXT_PATTERNS)\n\n\nasync def fetch_readable_chat_response(client: GeminiClient, cid: str, retry_delays: List[int]) -> Optional[object]:\n\t\"\"\"Poll Gemini history until the chat becomes readable or retries are exhausted.\"\"\"\n\tfor attempt, delay in enumerate(retry_delays, start=1):\n\t\ttry:\n\t\t\tif delay:\n\t\t\t\tawait asyncio.sleep(delay)\n\n\t\t\trecovered = await client.fetch_latest_chat_response(cid)\n\t\t\tif recovered and getattr(recovered, \"text\", \"\"):\n\t\t\t\treturn recovered\n\t\texcept Exception as e:\n\t\t\tlogger.exception(\n\t\t\t\t\"Gemini history read failed for cid=%s on retry %s/%s after %ss delay: %s\",\n\t\t\t\tcid,\n\t\t\t\tattempt,\n\t\t\t\tlen(retry_delays),\n\t\t\t\tdelay,\n\t\t\t\te,\n\t\t\t)\n\t\t\tcontinue\n\n\treturn None\n\n\nasync def background_verify_chat_persistence(client: GeminiClient, cid: str, source: str):\n\t\"\"\"Best-effort verification that a returned cid is readable from Gemini 
history.\"\"\"\n\tif not cid:\n\t\treturn\n\n\tretry_delays = [1, 3, 8]\n\trecovered = await fetch_readable_chat_response(client, cid, retry_delays)\n\tif recovered:\n\t\tlogger.debug(\n\t\t\t\"Gemini history verification succeeded: source=%s cid=%s text_len=%s metadata=%s\",\n\t\t\tsource,\n\t\t\tcid,\n\t\t\tlen(recovered.text),\n\t\t\tgetattr(recovered, \"metadata\", None),\n\t\t)\n\t\treturn\n\n\tlogger.warning(\n\t\t\"Gemini history verification exhausted retries for cid=%s source=%s\",\n\t\tcid,\n\t\tsource,\n\t)\n\n\nasync def validate_gemini_client_session(client: GeminiClient, source: str):\n\t\"\"\"Verify that an initialized client can create and read back a normal persistent Gemini chat.\"\"\"\n\tvalidation_cid = None\n\ttry:\n\t\tresponse = await client.generate_content(SESSION_VALIDATION_PROMPT, temporary=False)\n\t\tresponse_text = getattr(response, \"text\", \"\") or \"\"\n\t\tmetadata = getattr(response, \"metadata\", None) or []\n\t\tvalidation_cid = metadata[0] if metadata else None\n\n\t\tif response_indicates_auth_failure(response_text):\n\t\t\traise ValueError(\"validation probe returned signed-out or empty content\")\n\n\t\tif not validation_cid:\n\t\t\traise ValueError(\"validation probe returned no persistent chat metadata\")\n\n\t\trecovered = await fetch_readable_chat_response(client, validation_cid, [1, 3, 8])\n\t\tif not recovered or response_indicates_auth_failure(getattr(recovered, \"text\", \"\") or \"\"):\n\t\t\traise ValueError(\"validation probe chat was not readable from Gemini history\")\n\n\t\tlogger.info(\"Gemini session validation succeeded using %s credentials\", source)\n\tfinally:\n\t\tif validation_cid:\n\t\t\ttry:\n\t\t\t\tawait client.delete_chat(validation_cid)\n\t\t\texcept Exception:\n\t\t\t\tlogger.debug(\"Failed to delete Gemini validation chat %s\", validation_cid)\n\n\ndef load_or_generate_secret() -> str:\n\t\"\"\"\n\tLoad the signature secret from file, or generate a new one if not found.\n\t\"\"\"\n\tif 
os.path.exists(SECRET_FILE_PATH):\n\t\ttry:\n\t\t\twith open(SECRET_FILE_PATH, \"r\") as f:\n\t\t\t\tsecret = f.read().strip()\n\t\t\t\tif secret:\n\t\t\t\t\tlogger.info(f\"Loaded proxy secret from {SECRET_FILE_PATH}\")\n\t\t\t\t\treturn secret\n\t\texcept Exception as e:\n\t\t\tlogger.warning(f\"Failed to read secret file, trying to generate a new one: {e}\")\n\n\t# Generate new secret if not found or error occurred\n\tnew_secret = secrets.token_hex(32)\n\ttry:\n\t\t# Ensure directory exists\n\t\tos.makedirs(os.path.dirname(SECRET_FILE_PATH), exist_ok=True)\n\t\twith open(SECRET_FILE_PATH, \"w\") as f:\n\t\t\tf.write(new_secret)\n\n\t\t# Set restrictive permissions (user-only readable/writable)\n\t\ttry:\n\t\t\tos.chmod(SECRET_FILE_PATH, 0o600)\n\t\texcept Exception as e:\n\t\t\tlogger.warning(f\"Failed to set restrictive permissions on {SECRET_FILE_PATH}: {e}\")\n\n\t\tlogger.info(f\"Generated new proxy secret and saved to {SECRET_FILE_PATH}\")\n\t\treturn new_secret\n\texcept Exception as e:\n\t\tlogger.error(f\"Error writing secret file: {e}\")\n\t\t# if unable to save, return an in-memory ephemeral secret instead of using API_KEY or SECURE_1PSID\n\t\tephemeral_secret = secrets.token_urlsafe(32)\n\t\tlogger.warning(\"Using an in-memory secret to proxy images for this session.\")\n\t\treturn ephemeral_secret\n\n\nSIGNATURE_SECRET = load_or_generate_secret()\n\n# Watermark removal constants\nASSETS_DIR = os.path.join(os.path.dirname(__file__), \"assets\")\nALPHA_MAP_CACHE = {}\n\n\ndef get_alpha_map(size: int) -> np.ndarray:\n\t\"\"\"Load and cache the alpha map from the background capture image.\"\"\"\n\tif size in ALPHA_MAP_CACHE:\n\t\treturn ALPHA_MAP_CACHE[size]\n\n\tbg_path = os.path.join(ASSETS_DIR, f\"bg_{size}.png\")\n\tif not os.path.exists(bg_path):\n\t\tlogger.warning(f\"Watermark asset not found: {bg_path}\")\n\t\treturn None\n\n\ttry:\n\t\twith Image.open(bg_path) as img:\n\t\t\timg_data = np.array(img.convert(\"RGB\"))\n\t\t\talpha_map = 
np.max(img_data, axis=2) / 255.0\n\t\t\tALPHA_MAP_CACHE[size] = alpha_map\n\t\t\treturn alpha_map\n\texcept Exception as e:\n\t\tlogger.error(f\"Error loading alpha map {size}: {e}\")\n\t\treturn None\n\n\ndef remove_gemini_watermark(image_bytes: bytes) -> bytes:\n\t\"\"\"Remove Gemini watermark using Reverse Alpha Blending.\"\"\"\n\ttry:\n\t\twith Image.open(io.BytesIO(image_bytes)) as img:\n\t\t\twidth, height = img.size\n\t\t\torig_format = img.format\n\n\t\t\tif width > 1024 and height > 1024:\n\t\t\t\tlogo_size, margin = 96, 64\n\t\t\telse:\n\t\t\t\tlogo_size, margin = 48, 32\n\n\t\t\talpha_map = get_alpha_map(logo_size)\n\t\t\tif alpha_map is None:\n\t\t\t\treturn image_bytes\n\n\t\t\tx = width - margin - logo_size\n\t\t\ty = height - margin - logo_size\n\t\t\tif x < 0 or y < 0:\n\t\t\t\tlogger.warning(f\"Image too small for watermark removal: {width}x{height}\")\n\t\t\t\treturn image_bytes\n\n\t\t\t# Reverse Alpha Blending: original = (watermarked - α × 255) / (1 - α)\n\t\t\timg_array = np.array(img.convert(\"RGB\")).astype(np.float64)\n\t\t\troi = img_array[y : y + logo_size, x : x + logo_size].copy()\n\n\t\t\talpha = np.clip(alpha_map, 0.002, 0.99)\n\t\t\talpha_expanded = np.expand_dims(alpha, axis=2)\n\t\t\tcleaned_roi = (roi - alpha_expanded * 255.0) / (1.0 - alpha_expanded)\n\t\t\tcleaned_roi = np.clip(np.round(cleaned_roi), 0, 255).astype(np.uint8)\n\n\t\t\timg_array_uint8 = np.array(img.convert(\"RGB\"))\n\t\t\timg_array_uint8[y : y + logo_size, x : x + logo_size] = cleaned_roi\n\n\t\t\tout_io = io.BytesIO()\n\t\t\tsave_format = orig_format or \"PNG\"\n\t\t\tif save_format.upper() == \"JPEG\":\n\t\t\t\tImage.fromarray(img_array_uint8).save(out_io, format=\"JPEG\", quality=95)\n\t\t\telse:\n\t\t\t\tImage.fromarray(img_array_uint8).save(out_io, format=save_format)\n\t\t\treturn out_io.getvalue()\n\n\texcept Exception as e:\n\t\tlogger.error(f\"Error removing watermark: {e}\")\n\t\treturn image_bytes\n\n\nif not SECURE_1PSID or not 
SECURE_1PSIDTS:\n\tlogger.warning(\"Gemini credentials are missing; set SECURE_1PSID and SECURE_1PSIDTS before serving requests.\")\nelse:\n\tlogger.info(\n\t\t\"Startup config: thinking=%s temporary_chat=%s auto_delete_chat=%s public_base_url=%s gemini_webapi=%s\",\n\t\tENABLE_THINKING,\n\t\tTEMPORARY_CHAT,\n\t\tAUTO_DELETE_CHAT,\n\t\tbool(PUBLIC_BASE_URL),\n\t\tget_gemini_webapi_version(),\n\t)\n\tif not re.match(\"^[\\\\w\\\\-\\\\.]+$\", SECURE_1PSID):\n\t\tlogger.warning(\n\t\t\t\"SECURE_1PSID contains characters outside the safe cache filename pattern. This may be valid for auth, but cached 1PSIDTS lookup will fall back to the env value.\"\n\t\t)\n\nif not API_KEY:\n\tlogger.info(\"API key authentication is disabled.\")\nelse:\n\tlogger.info(\"API key authentication is enabled.\")\n\n\ndef correct_markdown(md_text: str) -> str:\n\t\"\"\"\n\tCorrect Markdown text: strip the Google Search link wrapper and simplify the target URL based on the display text.\n\t\"\"\"\n\n\tdef simplify_link_target(text_content: str) -> str:\n\t\tmatch_colon_num = re.match(r\"([^:]+:\\d+)\", text_content)\n\t\tif match_colon_num:\n\t\t\treturn match_colon_num.group(1)\n\t\treturn text_content\n\n\tdef replacer(match: re.Match) -> str:\n\t\touter_open_paren = match.group(1)\n\t\tdisplay_text = match.group(2)\n\n\t\tnew_target_url = simplify_link_target(display_text)\n\t\tnew_link_segment = f\"[`{display_text}`]({new_target_url})\"\n\n\t\tif outer_open_paren:\n\t\t\treturn f\"{outer_open_paren}{new_link_segment})\"\n\t\telse:\n\t\t\treturn new_link_segment\n\n\tpattern = r\"(\\()?\\[`([^`]+?)`\\]\\((https://www.google.com/search\\?q=)(.*?)(?<!\\\\)\\)\\)*(\\))?\"\n\n\tfixed_google_links = re.sub(pattern, replacer, md_text)\n\t# fix wrapped markdown links\n\tpattern = r\"`(\\[[^\\]]+\\]\\([^\\)]+\\))`\"\n\treturn re.sub(pattern, r\"\\1\", fixed_google_links)\n\n\n# Pydantic models for API requests and responses\nclass ContentItem(BaseModel):\n\ttype: str\n\ttext: Optional[str] = None\n\timage_url: Optional[Dict[str, str]] = None\n\n\nclass Message(BaseModel):\n\trole: str\n\tcontent: Union[str, List[ContentItem]]\n\tname: Optional[str] = None\n\n\nclass ChatCompletionRequest(BaseModel):\n\tmodel: str\n\tmessages: List[Message]\n\ttemperature: Optional[float] = 0.7\n\ttop_p: Optional[float] = 1.0\n\tn: Optional[int] = 1\n\tstream: Optional[bool] = False\n\tmax_tokens: Optional[int] = None\n\tpresence_penalty: Optional[float] = 0\n\tfrequency_penalty: Optional[float] = 0\n\tuser: Optional[str] = None\n\n\nclass Choice(BaseModel):\n\tindex: int\n\tmessage: Message\n\tfinish_reason: str\n\n\nclass Usage(BaseModel):\n\tprompt_tokens: int\n\tcompletion_tokens: int\n\ttotal_tokens: int\n\n\nclass ChatCompletionResponse(BaseModel):\n\tid: str\n\tobject: str = \"chat.completion\"\n\tcreated: int\n\tmodel: str\n\tchoices: List[Choice]\n\tusage: Usage\n\n\nclass ModelData(BaseModel):\n\tid: str\n\tobject: str = \"model\"\n\tcreated: int\n\towned_by: str = \"google\"\n\n\nclass ModelList(BaseModel):\n\tobject: str = \"list\"\n\tdata: List[ModelData]\n\n\n# Authentication dependency\nasync def verify_api_key(authorization: str = Header(None)):\n\t\"\"\"\n\tVerify the API key extracted from the Authorization header.\n\n\tRaises:\n\t\tHTTPException: If the authorization header is missing, incorrectly formatted, or the token is invalid.\n\t\"\"\"\n\tif not API_KEY:\n\t\t# If API_KEY is not set in environment, skip validation (for development)\n\t\treturn\n\n\tif not authorization:\n\t\traise HTTPException(status_code=401, detail=\"Missing Authorization header\")\n\n\ttry:\n\t\tscheme, token = authorization.split()\n\t\tif scheme.lower() != \"bearer\":\n\t\t\traise HTTPException(\n\t\t\t\tstatus_code=401,\n\t\t\t\tdetail=\"Invalid authentication scheme. Use Bearer token\",\n\t\t\t)\n\n\t\t# Constant-time comparison to avoid leaking key length/content via timing\n\t\tif not hmac.compare_digest(token, API_KEY):\n\t\t\traise HTTPException(status_code=401, detail=\"Invalid API key\")\n\texcept ValueError:\n\t\traise HTTPException(\n\t\t\tstatus_code=401,\n\t\t\tdetail=\"Invalid authorization format. 
Use 'Bearer YOUR_API_KEY'\",\n\t\t)\n\n\treturn token\n\n\n# Simple error handler middleware\n@app.middleware(\"http\")\nasync def error_handling(request: Request, call_next):\n\t\"\"\"\n\tGlobal middleware to catch unhandled exceptions, log the error,\n\tand return a standardized HTTP 500 response.\n\t\"\"\"\n\ttry:\n\t\treturn await call_next(request)\n\texcept Exception:\n\t\tlogger.exception(\"Request failed\")\n\t\treturn JSONResponse(\n\t\t\tstatus_code=500,\n\t\t\tcontent={\n\t\t\t\t\"error\": {\n\t\t\t\t\t\"message\": \"Internal server error\",\n\t\t\t\t\t\"type\": \"internal_server_error\",\n\t\t\t\t}\n\t\t\t},\n\t\t)\n\n\n# Get list of available models\n@app.get(\"/v1/models\")\nasync def list_models():\n\t\"\"\"Return the list of models declared in gemini_webapi.\"\"\"\n\tnow = int(datetime.now(tz=timezone.utc).timestamp())\n\tdata = [\n\t\t{\n\t\t\t\"id\": m.model_name,  # e.g. \"gemini-2.0-flash\"\n\t\t\t\"object\": \"model\",\n\t\t\t\"created\": now,\n\t\t\t\"owned_by\": \"google-gemini-web\",\n\t\t}\n\t\tfor m in Model\n\t]\n\treturn {\"object\": \"list\", \"data\": data}\n\n\n# Helper to convert between Gemini and OpenAI model names\ndef map_model_name(openai_model_name: str) -> Model:\n\t\"\"\"Find the Model enum value matching a model name string.\"\"\"\n\tnormalized_openai_model_name = openai_model_name.lower()\n\n\t# First, try a direct substring match on the model name\n\tfor m in Model:\n\t\tmodel_name = m.model_name if hasattr(m, \"model_name\") else str(m)\n\t\tif normalized_openai_model_name in model_name.lower():\n\t\t\treturn m\n\n\t# If no direct match is found, fall back to the default keyword mapping\n\tmodel_keywords = {\n\t\t\"gemini-pro\": [\"pro\", \"2.0\"],\n\t\t\"gemini-pro-vision\": [\"vision\", \"pro\"],\n\t\t\"gemini-flash\": [\"flash\", \"2.0\"],\n\t\t\"gemini-1.5-pro\": [\"1.5\", \"pro\"],\n\t\t\"gemini-1.5-flash\": [\"1.5\", \"flash\"],\n\t}\n\n\t# Fuzzy-match by keywords\n\tkeywords = None\n\tfor key, candidate_keywords in model_keywords.items():\n\t\tnormalized_key = key.lower()\n\t\tmatches_key = normalized_key in normalized_openai_model_name\n\t\tmatches_any_kw = any(kw.lower() in normalized_openai_model_name for kw in candidate_keywords)\n\t\tif matches_key or matches_any_kw:\n\t\t\tkeywords = candidate_keywords\n\t\t\tbreak\n\n\tif keywords is None:\n\t\tif \"flash\" in normalized_openai_model_name:\n\t\t\tkeywords = [\"flash\"]\n\t\telif \"vision\" in normalized_openai_model_name:\n\t\t\tkeywords = [\"vision\"]\n\t\telse:\n\t\t\tkeywords = [\"pro\"]\n\n\tfor m in Model:\n\t\tmodel_name = m.model_name if hasattr(m, \"model_name\") else str(m)\n\t\tif all(kw.lower() in model_name.lower() for kw in keywords):\n\t\t\treturn m\n\n\t# If still nothing matches, return the first model\n\treturn next(iter(Model))\n\n\n# Prepare conversation history from OpenAI messages format\ndef prepare_conversation(messages: List[Message]) -> tuple:\n\t\"\"\"\n\tConvert a list of OpenAI-formatted message objects into a\n\tflat string conversation format suitable for the Gemini API.\n\tAlso extracts and saves base64 images to temporary files.\n\n\tReturns:\n\t\tA tuple containing the constructed conversation string and a list of paths to temporary image files.\n\t\"\"\"\n\tconversation = \"\"\n\ttemp_files = []\n\n\tfor msg in messages:\n\t\tif isinstance(msg.content, str):\n\t\t\t# String content handling\n\t\t\tif msg.role == \"system\":\n\t\t\t\tconversation += f\"System: {msg.content}\\n\\n\"\n\t\t\telif msg.role == \"user\":\n\t\t\t\tconversation += f\"Human: {msg.content}\\n\\n\"\n\t\t\telif msg.role == \"assistant\":\n\t\t\t\tconversation += f\"Assistant: {msg.content}\\n\\n\"\n\t\telse:\n\t\t\t# Mixed content handling\n\t\t\tif msg.role == \"user\":\n\t\t\t\tconversation += \"Human: \"\n\t\t\telif msg.role == \"system\":\n\t\t\t\tconversation += \"System: \"\n\t\t\telif msg.role == \"assistant\":\n\t\t\t\tconversation += \"Assistant: \"\n\n\t\t\tfor item in msg.content:\n\t\t\t\tif item.type == \"text\":\n\t\t\t\t\tconversation += item.text or \"\"\n\t\t\t\telif item.type == \"image_url\" and item.image_url:\n\t\t\t\t\t# Handle image\n\t\t\t\t\timage_url = item.image_url.get(\"url\", 
\"\")\n\t\t\t\t\tif image_url.startswith(\"data:image/\"):\n\t\t\t\t\t\t# Process base64 encoded image\n\t\t\t\t\t\ttry:\n\t\t\t\t\t\t\t# Extract the base64 part\n\t\t\t\t\t\t\tbase64_data = image_url.split(\",\")[1]\n\t\t\t\t\t\t\timage_data = base64.b64decode(base64_data)\n\n\t\t\t\t\t\t\t# Create temporary file to hold the image\n\t\t\t\t\t\t\twith tempfile.NamedTemporaryFile(delete=False, suffix=\".png\") as tmp:\n\t\t\t\t\t\t\t\ttmp.write(image_data)\n\t\t\t\t\t\t\t\ttemp_files.append(tmp.name)\n\t\t\t\t\t\texcept Exception as e:\n\t\t\t\t\t\t\tlogger.error(f\"Error processing base64 image: {str(e)}\")\n\n\t\t\tconversation += \"\\n\\n\"\n\n\t# Add a final prompt for the assistant to respond to\n\tconversation += \"Assistant: \"\n\n\treturn conversation, temp_files\n\n\n# Dependency to get the initialized Gemini client\nasync def get_gemini_client():\n\t\"\"\"\n\tGet or initialize the global GeminiClient instance.\n\n\tRaises:\n\t\tHTTPException: If initialization fails due to invalid parameters or connection issues.\n\t\"\"\"\n\tglobal gemini_client\n\tif gemini_client is not None:\n\t\treturn gemini_client\n\n\tasync with gemini_client_lock:\n\t\tif gemini_client is not None:\n\t\t\treturn gemini_client\n\n\t\ttry:\n\t\t\tpsid = SECURE_1PSID\n\t\t\tcached_psidts = load_cached_1psidts(psid)\n\t\t\tattempts = []\n\n\t\t\tif cached_psidts:\n\t\t\t\tattempts.append((\"cache\", cached_psidts))\n\t\t\tif SECURE_1PSIDTS:\n\t\t\t\tattempts.append((\"environment\", SECURE_1PSIDTS))\n\n\t\t\tseen_psidts = set()\n\t\t\tnew_attempts = []\n\t\t\tfor source, psidts in attempts:\n\t\t\t\tif not psidts or psidts in seen_psidts:\n\t\t\t\t\tcontinue\n\t\t\t\tseen_psidts.add(psidts)\n\t\t\t\tnew_attempts.append((source, psidts))\n\t\t\tattempts = new_attempts\n\n\t\t\tif not attempts:\n\t\t\t\traise HTTPException(\n\t\t\t\t\tstatus_code=500,\n\t\t\t\t\tdetail=\"Missing SECURE_1PSIDTS and no cached rotated 1PSIDTS is available\",\n\t\t\t\t)\n\n\t\t\tlast_error = None\n\t\t\tfor 
source, psidts in attempts:\n\t\t\t\ttmp_client = None\n\t\t\t\ttry:\n\t\t\t\t\tlogger.info(\"Initializing Gemini client using %s credentials\", source)\n\n\t\t\t\t\ttmp_client = GeminiClient(psid, psidts)\n\t\t\t\t\tawait tmp_client.init(timeout=300)\n\t\t\t\t\tawait validate_gemini_client_session(tmp_client, source)\n\n\t\t\t\t\tgemini_client = tmp_client\n\t\t\t\t\tbreak\n\t\t\t\texcept Exception as e:\n\t\t\t\t\tlast_error = e\n\t\t\t\t\tlogger.warning(f\"Gemini session setup failed using {source} 1PSIDTS: {e}\")\n\t\t\t\t\tif tmp_client is not None:\n\t\t\t\t\t\ttry:\n\t\t\t\t\t\t\tawait tmp_client.close()\n\t\t\t\t\t\texcept Exception:\n\t\t\t\t\t\t\tpass\n\n\t\t\tif gemini_client is None:\n\t\t\t\traise last_error\n\n\t\texcept Exception as e:\n\t\t\tlogger.error(f\"Failed to initialize Gemini client: {str(e)}\")\n\t\t\traise HTTPException(status_code=500, detail=f\"Failed to initialize Gemini client: {str(e)}\")\n\treturn gemini_client\n\n\ndef get_image_signature(url: str) -> str:\n\t\"\"\"\n\tGenerate a HMAC-SHA256 signature for the image URL using the persistent SIGNATURE_SECRET.\n\t\"\"\"\n\tsecret = SIGNATURE_SECRET.encode()\n\treturn hmac.new(secret, url.encode(), hashlib.sha256).hexdigest()\n\n\ndef postprocess_text(text: str) -> str:\n\t\"\"\"Apply text cleanup and markdown corrections to response text.\"\"\"\n\ttext = text.replace(\"&lt;\", \"<\").replace(\"\\\\<\", \"<\").replace(\"\\\\_\", \"_\").replace(\"\\\\>\", \">\")\n\treturn correct_markdown(text)\n\n\ndef extract_image_markdown(response, base_url: str) -> str:\n\t\"\"\"Extract images from a response and return markdown image links.\"\"\"\n\tresult = \"\"\n\tif hasattr(response, \"images\") and response.images:\n\t\tfor img in response.images:\n\t\t\timg_url = getattr(img, \"url\", None)\n\t\t\tif img_url:\n\t\t\t\tsig = get_image_signature(img_url)\n\t\t\t\tproxy_url = f\"{base_url}/gemini-proxy/image?url={quote(img_url)}&sig={sig}\"\n\t\t\t\tresult += f\"\\n\\n![🎨 Loading 
image...]({proxy_url})\"\n\treturn result\n\n\n@app.post(\"/v1/chat/completions\")\nasync def create_chat_completion(\n\trequest: ChatCompletionRequest,\n\traw_request: Request,\n\tapi_key: str = Depends(verify_api_key),\n):\n\t\"\"\"\n\tHandle chat completion requests, translating from OpenAI API format to Gemini API format.\n\tSupports both streaming and non-streaming responses, caching, thinking features,\n\tand background conversation cleanup based on configuration.\n\t\"\"\"\n\ttry:\n\t\t# Ensure the client is initialized\n\t\tglobal gemini_client\n\t\tgemini_client = await get_gemini_client()\n\n\t\t# Convert messages to the conversation format\n\t\tconversation, temp_files = prepare_conversation(request.messages)\n\t\tlogger.info(\n\t\t\t\"Chat completion request: stream=%s requested_model=%s messages=%s temp_files=%s\",\n\t\t\trequest.stream,\n\t\t\trequest.model,\n\t\t\tlen(request.messages),\n\t\t\tlen(temp_files),\n\t\t)\n\n\t\t# Pick the appropriate model\n\t\tmodel = map_model_name(request.model)\n\n\t\t# Create the response object\n\t\tcompletion_id = f\"chatcmpl-{uuid.uuid4()}\"\n\t\tcreated_time = int(time.time())\n\t\tbase_url = PUBLIC_BASE_URL or str(raw_request.base_url).rstrip(\"/\")\n\n\t\t# Prepare generate_content arguments\n\t\tgen_kwargs = {\"model\": model}\n\t\tif TEMPORARY_CHAT:\n\t\t\tgen_kwargs[\"temporary\"] = True\n\t\tif temp_files:\n\t\t\tgen_kwargs[\"files\"] = temp_files\n\n\t\tif request.stream:\n\t\t\t# Real streaming using upstream generate_content_stream\n\t\t\tasync def generate_stream():\n\t\t\t\ttry:\n\n\t\t\t\t\tdef make_chunk(delta: dict, finish_reason=None):\n\t\t\t\t\t\treturn (\n\t\t\t\t\t\t\t\"data: \"\n\t\t\t\t\t\t\t+ json.dumps(\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"id\": completion_id,\n\t\t\t\t\t\t\t\t\t\"object\": \"chat.completion.chunk\",\n\t\t\t\t\t\t\t\t\t\"created\": created_time,\n\t\t\t\t\t\t\t\t\t\"model\": request.model,\n\t\t\t\t\t\t\t\t\t\"choices\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"index\": 0,\n\t\t\t\t\t\t\t\t\t\t\t\"delta\": delta,\n\t\t\t\t\t\t\t\t\t\t\t\"finish_reason\": 
finish_reason,\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t)\n\t\t\t\t\t\t\t+ \"\\n\\n\"\n\t\t\t\t\t\t)\n\n\t\t\t\t\t# Send initial role chunk\n\t\t\t\t\tyield make_chunk({\"role\": \"assistant\"})\n\n\t\t\t\t\tthinking_started = False\n\t\t\t\t\tthinking_ended = False\n\t\t\t\t\tyielded_images = 0\n\t\t\t\t\ttext_buffer = \"\"\n\t\t\t\t\tcaptured_cid = None\n\t\t\t\t\tchunk_count = 0\n\t\t\t\t\tlast_metadata = None\n\n\t\t\t\t\tasync for chunk in gemini_client.generate_content_stream(conversation, **gen_kwargs):\n\t\t\t\t\t\tchunk_count += 1\n\t\t\t\t\t\tif hasattr(chunk, \"metadata\") and chunk.metadata:\n\t\t\t\t\t\t\tlast_metadata = chunk.metadata\n\t\t\t\t\t\t# Capture conversation ID for auto-deletion\n\t\t\t\t\t\tif AUTO_DELETE_CHAT and captured_cid is None and hasattr(chunk, \"metadata\") and chunk.metadata and len(chunk.metadata) > 0:\n\t\t\t\t\t\t\tcaptured_cid = chunk.metadata[0]\n\n\t\t\t\t\t\t# Handle thinking/thoughts delta\n\t\t\t\t\t\tif ENABLE_THINKING and hasattr(chunk, \"thoughts_delta\") and chunk.thoughts_delta:\n\t\t\t\t\t\t\tif not thinking_started:\n\t\t\t\t\t\t\t\tyield make_chunk({\"content\": \"<think>\\n\"})\n\t\t\t\t\t\t\t\tthinking_started = True\n\n\t\t\t\t\t\t\t# Also include reasoning_content for full Open WebUI native compatibility\n\t\t\t\t\t\t\tyield make_chunk(\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"content\": chunk.thoughts_delta,\n\t\t\t\t\t\t\t\t\t\"reasoning_content\": chunk.thoughts_delta,\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t)\n\n\t\t\t\t\t\t# Handle text delta\n\t\t\t\t\t\tif hasattr(chunk, \"text_delta\") and chunk.text_delta:\n\t\t\t\t\t\t\t# Close thinking tag before first text content\n\t\t\t\t\t\t\tif thinking_started and not thinking_ended:\n\t\t\t\t\t\t\t\tthinking_ended = True\n\t\t\t\t\t\t\t\tyield make_chunk({\"content\": \"\\n</think>\\n\\n\"})\n\n\t\t\t\t\t\t\ttext_buffer += chunk.text_delta\n\t\t\t\t\t\t\tsafe_to_yield = False\n\n\t\t\t\t\t\t\t# Yield if buffer ends with 
whitespace and looks like it's outside a markdown link\n\t\t\t\t\t\t\tif (\n\t\t\t\t\t\t\t\ttext_buffer[-1].isspace()\n\t\t\t\t\t\t\t\tand text_buffer.count(\"[\") == text_buffer.count(\"]\")\n\t\t\t\t\t\t\t\tand text_buffer.count(\"(\") == text_buffer.count(\")\")\n\t\t\t\t\t\t\t):\n\t\t\t\t\t\t\t\tsafe_to_yield = True\n\t\t\t\t\t\t\telif len(text_buffer) > 500:\n\t\t\t\t\t\t\t\tsafe_to_yield = True\n\n\t\t\t\t\t\t\tif safe_to_yield:\n\t\t\t\t\t\t\t\tyield make_chunk({\"content\": postprocess_text(text_buffer)})\n\t\t\t\t\t\t\t\ttext_buffer = \"\"\n\n\t\t\t\t\t\t# Handle inline images as they arrive\n\t\t\t\t\t\tif hasattr(chunk, \"images\") and chunk.images and len(chunk.images) > yielded_images:\n\t\t\t\t\t\t\t# Close thinking tag if an image arrives before any text\n\t\t\t\t\t\t\tif thinking_started and not thinking_ended:\n\t\t\t\t\t\t\t\tthinking_ended = True\n\t\t\t\t\t\t\t\tyield make_chunk({\"content\": \"\\n</think>\\n\\n\"})\n\n\t\t\t\t\t\t\tnew_images = chunk.images[yielded_images:]\n\t\t\t\t\t\t\tfor img in new_images:\n\t\t\t\t\t\t\t\timg_url = getattr(img, \"url\", None)\n\t\t\t\t\t\t\t\tif img_url:\n\t\t\t\t\t\t\t\t\tsig = get_image_signature(img_url)\n\t\t\t\t\t\t\t\t\tproxy_url = f\"{base_url}/gemini-proxy/image?url={quote(img_url)}&sig={sig}\"\n\t\t\t\t\t\t\t\t\timg_md = f\"\\n\\n![🎨 Loading image...]({proxy_url})\\n\\n\"\n\t\t\t\t\t\t\t\t\tyield make_chunk({\"content\": img_md})\n\t\t\t\t\t\t\tyielded_images = len(chunk.images)\n\n\t\t\t\t\t# Flush any remaining text\n\t\t\t\t\tif text_buffer:\n\t\t\t\t\t\tyield make_chunk({\"content\": postprocess_text(text_buffer)})\n\n\t\t\t\t\t# Close thinking tag if it was never closed\n\t\t\t\t\tif thinking_started and not thinking_ended:\n\t\t\t\t\t\tyield make_chunk({\"content\": \"\\n</think>\\n\\n\"})\n\n\t\t\t\t\t# Send finish chunk\n\t\t\t\t\tyield make_chunk({}, finish_reason=\"stop\")\n\t\t\t\t\tyield \"data: [DONE]\\n\\n\"\n\n\t\t\t\t\tlogger.info(\n\t\t\t\t\t\t\"Streaming response completed: 
chunks=%s images=%s\",\n\t\t\t\t\t\tchunk_count,\n\t\t\t\t\t\tyielded_images,\n\t\t\t\t\t)\n\t\t\t\t\tif last_metadata and len(last_metadata) > 0 and not AUTO_DELETE_CHAT:\n\t\t\t\t\t\tasyncio.create_task(background_verify_chat_persistence(gemini_client, last_metadata[0], \"stream\"))\n\t\t\t\texcept Exception as e:\n\t\t\t\t\tlogger.error(f\"Error during streaming: {str(e)}\", exc_info=True)\n\t\t\t\t\t# Send error as a content chunk so the client sees it\n\t\t\t\t\terror_msg = \"\\n\\n[An internal error occurred while streaming]\"\n\t\t\t\t\tyield make_chunk({\"content\": error_msg})\n\t\t\t\t\tyield make_chunk({}, finish_reason=\"stop\")\n\t\t\t\t\tyield \"data: [DONE]\\n\\n\"\n\t\t\t\tfinally:\n\t\t\t\t\t# Create background task to delete the chat if AUTO_DELETE_CHAT is enabled\n\t\t\t\t\tif AUTO_DELETE_CHAT and captured_cid:\n\t\t\t\t\t\tasyncio.create_task(background_delete_chat(gemini_client, captured_cid))\n\n\t\t\t\t\t# Clean up temporary files\n\t\t\t\t\tfor temp_file in temp_files:\n\t\t\t\t\t\ttry:\n\t\t\t\t\t\t\tos.unlink(temp_file)\n\t\t\t\t\t\texcept Exception as e:\n\t\t\t\t\t\t\tlogger.warning(f\"Failed to delete temp file {temp_file}: {str(e)}\")\n\n\t\t\treturn StreamingResponse(generate_stream(), media_type=\"text/event-stream\")\n\t\telse:\n\t\t\t# Non-streaming response\n\t\t\ttry:\n\t\t\t\tresponse = await gemini_client.generate_content(conversation, **gen_kwargs)\n\n\t\t\t\tif AUTO_DELETE_CHAT and hasattr(response, \"metadata\") and response.metadata and len(response.metadata) > 0:\n\t\t\t\t\tcid = response.metadata[0]\n\t\t\t\t\tasyncio.create_task(background_delete_chat(gemini_client, cid))\n\t\t\t\telif hasattr(response, \"metadata\") and response.metadata and len(response.metadata) > 0:\n\t\t\t\t\tasyncio.create_task(background_verify_chat_persistence(gemini_client, response.metadata[0], \"non-stream\"))\n\t\t\t\telif not getattr(response, \"metadata\", None):\n\t\t\t\t\tlogger.warning(\"Non-stream response returned no Gemini metadata. 
This request may not map to a persistent Gemini chat.\")\n\n\t\t\tfinally:\n\t\t\t\t# Clean up temporary files\n\t\t\t\tfor temp_file in temp_files:\n\t\t\t\t\ttry:\n\t\t\t\t\t\tos.unlink(temp_file)\n\t\t\t\t\texcept Exception as e:\n\t\t\t\t\t\tlogger.warning(f\"Failed to delete temp file {temp_file}: {str(e)}\")\n\n\t\t\t# Extract the text response\n\t\t\treply_text = \"\"\n\t\t\tif ENABLE_THINKING and hasattr(response, \"thoughts\") and response.thoughts:\n\t\t\t\treply_text += f\"<think>\\n{response.thoughts}\\n</think>\\n\\n\"\n\t\t\tif hasattr(response, \"text\"):\n\t\t\t\treply_text += response.text\n\t\t\telse:\n\t\t\t\treply_text += str(response)\n\n\t\t\t# Extract and append image responses\n\t\t\treply_text += extract_image_markdown(response, base_url)\n\t\t\treply_text = postprocess_text(reply_text)\n\n\t\t\tif not reply_text or reply_text.strip() == \"\":\n\t\t\t\tlogger.warning(\"Empty response received from Gemini\")\n\t\t\t\treply_text = \"Server returned an empty response. Please check that Gemini API credentials are valid.\"\n\n\t\t\tresult = {\n\t\t\t\t\"id\": completion_id,\n\t\t\t\t\"object\": \"chat.completion\",\n\t\t\t\t\"created\": created_time,\n\t\t\t\t\"model\": request.model,\n\t\t\t\t\"choices\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"index\": 0,\n\t\t\t\t\t\t\"message\": {\"role\": \"assistant\", \"content\": reply_text},\n\t\t\t\t\t\t\"finish_reason\": \"stop\",\n\t\t\t\t\t}\n\t\t\t\t],\n\t\t\t\t\"usage\": {\n\t\t\t\t\t\"prompt_tokens\": len(conversation.split()),\n\t\t\t\t\t\"completion_tokens\": len(reply_text.split()),\n\t\t\t\t\t\"total_tokens\": len(conversation.split()) + len(reply_text.split()),\n\t\t\t\t},\n\t\t\t}\n\n\t\t\tlogger.info(\"Non-streaming response completed\")\n\t\t\treturn result\n\n\texcept Exception as e:\n\t\tlogger.error(f\"Error generating completion: {str(e)}\", exc_info=True)\n\t\traise HTTPException(status_code=500, detail=f\"Error generating completion: {str(e)}\")\n\n\n@app.get(\"/gemini-proxy/image\")\nasync def proxy_image(url: str, sig: str):\n\t\"\"\"\n\tProxy images 
from Google domains to bypass browser security policies.\n\tRequires a valid HMAC signature.\n\t\"\"\"\n\t# Verify signature\n\texpected_sig = get_image_signature(url)\n\tif not hmac.compare_digest(sig, expected_sig):\n\t\tlogger.warning(f\"Invalid signature for proxy request: {url}\")\n\t\traise HTTPException(status_code=403, detail=\"Invalid signature\")\n\n\t# Prevent open proxying\n\tallowed_domains = [\"google.com\", \"googleusercontent.com\", \"gstatic.com\"]\n\n\ttry:\n\t\tparsed = urlparse(url)\n\t\tif parsed.scheme not in [\"http\", \"https\"]:\n\t\t\tlogger.warning(f\"Invalid scheme in proxy request: {parsed.scheme}\")\n\t\t\traise HTTPException(status_code=400, detail=\"Invalid URL scheme\")\n\n\t\thostname = parsed.hostname\n\t\tif not hostname:\n\t\t\tlogger.warning(f\"No hostname in proxy request: {url}\")\n\t\t\traise HTTPException(status_code=400, detail=\"Invalid URL\")\n\n\t\thostname = hostname.lower()\n\t\tis_allowed = any(hostname == d or hostname.endswith(\".\" + d) for d in allowed_domains)\n\n\t\tif not is_allowed:\n\t\t\tlogger.warning(f\"Blocked proxy request for domain: {hostname}\")\n\t\t\traise HTTPException(status_code=403, detail=\"Domain not allowed\")\n\texcept ValueError:\n\t\tlogger.warning(f\"Malformed URL in proxy request: {url}\")\n\t\traise HTTPException(status_code=400, detail=\"Invalid URL\")\n\n\t# Minimal browser-like headers\n\theaders = {\n\t\t\"User-Agent\": DEFAULT_USER_AGENT,\n\t\t\"Accept\": \"image/avif,image/webp,image/apng,image/svg+xml,image/*,*/*;q=0.8\",\n\t\t\"Accept-Language\": \"en-US,en;q=0.9\",\n\t\t\"Referer\": \"https://gemini.google.com/\",\n\t}\n\n\t# 10MB limit\n\tMAX_BYTES = 10 * 1024 * 1024\n\n\t# Use scoped cookies to prevent leakage during redirects\n\tjar = httpx.Cookies()\n\n\t# Use the freshest available 1PSIDTS without overriding env cookies up front.\n\tpsid = SECURE_1PSID\n\tpsidts = get_cookie_value(getattr(gemini_client, \"cookies\", None), \"__Secure-1PSIDTS\") or 
load_cached_1psidts(psid) or SECURE_1PSIDTS\n\n\tjar.set(\"__Secure-1PSID\", psid, domain=\".google.com\")\n\tjar.set(\"__Secure-1PSIDTS\", psidts, domain=\".google.com\")\n\tjar.set(\"__Secure-1PSID\", psid, domain=\".googleusercontent.com\")\n\tjar.set(\"__Secure-1PSIDTS\", psidts, domain=\".googleusercontent.com\")\n\n\tasync with httpx.AsyncClient(http2=True, cookies=jar, follow_redirects=True) as client:\n\t\ttry:\n\t\t\t# Fetch original resolution to keep watermark at expected size/position\n\t\t\tfetch_url = re.sub(r\"=s\\d+$\", \"=s0\", url) if re.search(r\"=s\\d+$\", url) else url + \"=s0\"\n\n\t\t\tasync with client.stream(\"GET\", fetch_url, timeout=15.0, headers=headers) as resp:\n\t\t\t\tif resp.status_code != 200:\n\t\t\t\t\tlogger.error(f\"Google returned {resp.status_code} for image: {url}\")\n\n\t\t\t\tresp.raise_for_status()\n\n\t\t\t\tcontent = bytearray()\n\t\t\t\tasync for chunk in resp.aiter_bytes():\n\t\t\t\t\tcontent.extend(chunk)\n\t\t\t\t\tif len(content) > MAX_BYTES:\n\t\t\t\t\t\tlogger.warning(f\"Image too large: {url} (exceeded {MAX_BYTES} bytes)\")\n\t\t\t\t\t\traise HTTPException(status_code=413, detail=\"Image too large\")\n\t\t\t\t# Validate Content-Type to prevent XSS/MIME sniffing\n\t\t\t\tupstream_content_type = resp.headers.get(\"content-type\", \"image/png\").lower()\n\t\t\t\tif not upstream_content_type.startswith(\"image/\"):\n\t\t\t\t\tlogger.warning(f\"Rejected non-image Content-Type: {upstream_content_type} for {url}\")\n\t\t\t\t\tmedia_type = \"image/png\"\n\t\t\t\telse:\n\t\t\t\t\tmedia_type = upstream_content_type\n\n\t\t\t\t# Process watermark removal\n\t\t\t\tif media_type in [\"image/png\", \"image/jpeg\", \"image/webp\"]:\n\t\t\t\t\tprocessed_content = remove_gemini_watermark(bytes(content))\n\t\t\t\telse:\n\t\t\t\t\tprocessed_content = bytes(content)\n\n\t\t\t\treturn Response(\n\t\t\t\t\tcontent=processed_content,\n\t\t\t\t\tmedia_type=media_type,\n\t\t\t\t\theaders={\n\t\t\t\t\t\t\"Cross-Origin-Resource-Policy\": 
\"cross-origin\",\n\t\t\t\t\t\t\"Access-Control-Allow-Origin\": \"*\",\n\t\t\t\t\t\t\"Cache-Control\": \"public, max-age=86400\",  # Cache for 24 hours\n\t\t\t\t\t\t\"X-Content-Type-Options\": \"nosniff\",\n\t\t\t\t\t},\n\t\t\t\t)\n\t\texcept httpx.HTTPStatusError as e:\n\t\t\tlogger.error(f\"Failed to fetch image: {e.response.status_code} for {url}\")\n\t\t\traise HTTPException(\n\t\t\t\tstatus_code=e.response.status_code,\n\t\t\t\tdetail=f\"Failed to fetch image: Google returned {e.response.status_code}\",\n\t\t\t)\n\t\texcept HTTPException:\n\t\t\traise\n\t\texcept Exception as e:\n\t\t\tlogger.error(f\"Proxy error: {str(e)}\")\n\t\t\traise HTTPException(status_code=500, detail=\"Internal proxy error\")\n\n\n@app.get(\"/\")\nasync def root():\n\t\"\"\"\n\tHealth check endpoint to verify the API server is currently running.\n\t\"\"\"\n\treturn {\"status\": \"online\", \"message\": \"Gemini API FastAPI Server is running\"}\n\n\nif __name__ == \"__main__\":\n\timport uvicorn\n\n\tuvicorn.run(\"main:app\", host=\"0.0.0.0\", port=8000, log_level=\"info\")\n"
  },
  {
    "path": "pyproject.toml",
    "content": "[project]\nname = \"gemi2api-server\"\nversion = \"0.1.3\"\nlicense = \"MIT\"\ndescription = \"OpenAI-compatible FastAPI server for the Gemini web API\"\nreadme = \"README.md\"\nrequires-python = \">=3.11\"\ndependencies = [\n    \"browser-cookie3>=0.20.1\",\n    \"fastapi>=0.115.12\",\n    \"gemini-webapi>=1.21.0\",\n    \"uvicorn[standard]>=0.34.1\",\n    \"httpx>=0.27.0\",\n    \"h2>=4.1.0\",\n    \"Pillow>=10.3.0\",\n    \"numpy>=1.26.0\",\n]\n\n# The Tsinghua (TUNA) mirror is used by default; if installation fails, uncomment the PyPI index\n#[[tool.uv.index]]\n#name = \"pypi\"\n#url = \"https://pypi.org/simple\"\n\n[[tool.uv.index]]\nname = \"tuna\"\nurl = \"https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple\"\n\n[dependency-groups]\ndev = [\n    \"ruff>=0.15.6\",\n]\n\n[tool.ruff]\nline-length = 150  # maximum line length\n\n[tool.ruff.lint]\nselect = [\"E\", \"F\", \"W\", \"I\"]  # enabled rules (E: pycodestyle, F: pyflakes, W: pycodestyle warnings, I: isort)\nignore = [\"E501\", \"W191\"]  # ignored rules (line-length and tab-indentation warnings)\n\n[tool.ruff.format]\nquote-style = \"double\"  # use double quotes\nindent-style = \"tab\"  # indent with tabs\n"
  },
  {
    "path": "render.yaml",
    "content": "services:\n  - type: web\n    name: gemi2api-server\n    env: docker\n    plan: free\n    region: oregon\n    dockerfilePath: ./Dockerfile\n    repo: https://github.com/zhiyu1998/Gemi2Api-Server\n    branch: main\n    envVars:\n      - key: SECURE_1PSID\n        sync: false\n      - key: SECURE_1PSIDTS\n        sync: false\n      - key: API_KEY\n        sync: false\n      - key: GEMINI_COOKIE_PATH\n        value: /var/data/gemini_webapi\n    disks:\n      - name: gemini-cookie-cache\n        mountPath: /var/data/gemini_webapi\n        sizeGB: 1\n    healthCheckPath: /\n    autoDeploy: true\n"
  }
]