Full Code of mirrorange/clove for AI

Showing preview only (280K chars total).
Repository: mirrorange/clove
Branch: main
Commit: 0bdbbb4867d0
Files: 75
Total size: 256.5 KB

Directory structure:
gitextract_wf3w9e88/

├── .dockerignore
├── .github/
│   └── workflows/
│       ├── build-and-publish.yml
│       └── docker-publish.yml
├── .gitignore
├── .gitmodules
├── .python-version
├── Dockerfile
├── Dockerfile.huggingface
├── Dockerfile.pypi
├── LICENSE
├── MANIFEST.in
├── Makefile
├── README.md
├── README_en.md
├── app/
│   ├── __init__.py
│   ├── api/
│   │   ├── __init__.py
│   │   ├── main.py
│   │   └── routes/
│   │       ├── accounts.py
│   │       ├── claude.py
│   │       ├── settings.py
│   │       └── statistics.py
│   ├── core/
│   │   ├── __init__.py
│   │   ├── account.py
│   │   ├── claude_session.py
│   │   ├── config.py
│   │   ├── error_handler.py
│   │   ├── exceptions.py
│   │   ├── external/
│   │   │   └── claude_client.py
│   │   ├── http_client.py
│   │   └── static.py
│   ├── dependencies/
│   │   ├── __init__.py
│   │   └── auth.py
│   ├── locales/
│   │   ├── en.json
│   │   └── zh.json
│   ├── main.py
│   ├── models/
│   │   ├── __init__.py
│   │   ├── claude.py
│   │   ├── internal.py
│   │   └── streaming.py
│   ├── processors/
│   │   ├── __init__.py
│   │   ├── base.py
│   │   ├── claude_ai/
│   │   │   ├── __init__.py
│   │   │   ├── claude_api_processor.py
│   │   │   ├── claude_web_processor.py
│   │   │   ├── context.py
│   │   │   ├── event_parser_processor.py
│   │   │   ├── message_collector_processor.py
│   │   │   ├── model_injector_processor.py
│   │   │   ├── non_streaming_response_processor.py
│   │   │   ├── pipeline.py
│   │   │   ├── stop_sequences_processor.py
│   │   │   ├── streaming_response_processor.py
│   │   │   ├── tavern_test_message_processor.py
│   │   │   ├── token_counter_processor.py
│   │   │   ├── tool_call_event_processor.py
│   │   │   └── tool_result_processor.py
│   │   └── pipeline.py
│   ├── services/
│   │   ├── __init__.py
│   │   ├── account.py
│   │   ├── cache.py
│   │   ├── event_processing/
│   │   │   ├── __init__.py
│   │   │   ├── event_parser.py
│   │   │   └── event_serializer.py
│   │   ├── i18n.py
│   │   ├── oauth.py
│   │   ├── session.py
│   │   └── tool_call.py
│   └── utils/
│       ├── __init__.py
│       ├── logger.py
│       ├── messages.py
│       └── retry.py
├── docker-compose.yml
├── pyproject.toml
├── scripts/
│   └── build_wheel.py
└── tests/
    └── test_claude_request_models.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .dockerignore
================================================
# Git
.git/
.gitignore
.gitattributes

# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
.pytest_cache/
.coverage
.coverage.*
.cache
htmlcov/
.tox/
.nox/
*.cover
*.py,cover
.hypothesis/
.mypy_cache/
.dmypy.json
dmypy.json
.pyre/
.pytype/
.ruff_cache/
# uv.lock must be kept (required by the uv build)

# Virtual environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
.envrc

# Node.js / Frontend
**/node_modules/
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
.pnpm-store/
lerna-debug.log*
.npm
.eslintcache
.stylelintcache
.node_repl_history
*.tsbuildinfo
.yarn/
.pnp.*

# Frontend build outputs (will be built in Docker)
front/dist/
front/build/
app/static/assets/
app/static/index.html
app/static/vite.svg

# IDEs and editors
.vscode/
.idea/
*.swp
*.swo
*~
.DS_Store
Thumbs.db
*.sublime-project
*.sublime-workspace
.project
.classpath
.c9/
*.launch
.settings/
*.iml
.cursorignore
.cursorindexingignore

# OS files
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Desktop.ini

# Documentation
docs/
*.md
!README.md
LICENSE

# Testing
coverage/
.nyc_output/
test/
tests/
*.test.js
*.spec.js
__tests__/

# Logs
*.log
logs/
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*

# Temporary files
*.tmp
*.temp
.tmp/
.temp/
tmp/
temp/

# Local data (contains sensitive information)
data/
/data/
*.json
!package.json
!tsconfig*.json
!components.json
!app/locales/*.json

# CI/CD
.github/
.gitlab-ci.yml
.travis.yml
.circleci/
Jenkinsfile

# Docker files
Dockerfile*
docker-compose*.yml
.dockerignore

# Makefile and scripts
Makefile
scripts/

# Python packaging files
MANIFEST.in
setup.py
setup.cfg

# Miscellaneous
*.bak
*.orig
*.rej
.cache/

================================================
FILE: .github/workflows/build-and-publish.yml
================================================
name: Build and Publish to PyPI

on:
  release:
    types: [published]
  workflow_dispatch:
    inputs:
      publish_to_pypi:
        description: "Publish to PyPI"
        required: true
        default: false
        type: boolean
      publish_to_test_pypi:
        description: "Publish to Test PyPI"
        required: true
        default: false
        type: boolean

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4
        with:
          submodules: recursive

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.11"

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "20"

      - name: Install pnpm
        uses: pnpm/action-setup@v4
        with:
          version: 9

      - name: Install Python dependencies
        run: |
          python -m pip install --upgrade pip
          pip install build wheel

      - name: Build frontend and wheel
        run: |
          python scripts/build_wheel.py

      - name: Store the distribution packages
        uses: actions/upload-artifact@v4
        with:
          name: python-package-distributions
          path: dist/

  publish-to-pypi:
    name: Publish to PyPI
    if: github.event_name == 'release' || (github.event_name == 'workflow_dispatch' && github.event.inputs.publish_to_pypi == 'true')
    needs:
      - build
    runs-on: ubuntu-latest

    environment:
      name: pypi
      url: https://pypi.org/p/clove-proxy

    permissions:
      id-token: write

    steps:
      - name: Download all the dists
        uses: actions/download-artifact@v4
        with:
          name: python-package-distributions
          path: dist/

      - name: Publish distribution to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          password: ${{ secrets.PYPI_API_TOKEN }}

  publish-to-testpypi:
    name: Publish to TestPyPI
    if: github.event_name == 'workflow_dispatch' && github.event.inputs.publish_to_test_pypi == 'true'
    needs:
      - build
    runs-on: ubuntu-latest

    environment:
      name: testpypi
      url: https://test.pypi.org/p/clove-proxy

    permissions:
      id-token: write

    steps:
      - name: Download all the dists
        uses: actions/download-artifact@v4
        with:
          name: python-package-distributions
          path: dist/

      - name: Publish distribution to TestPyPI
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          repository-url: https://test.pypi.org/legacy/
          password: ${{ secrets.TEST_PYPI_API_TOKEN }}


================================================
FILE: .github/workflows/docker-publish.yml
================================================
name: Docker Build and Push

on:
  push:
    branches:
      - main
    tags:
      - "v*"
  pull_request:
    branches:
      - main
  workflow_dispatch:

env:
  DOCKER_HUB_USERNAME: ${{ secrets.DOCKER_HUB_USERNAME }}
  DOCKER_HUB_TOKEN: ${{ secrets.DOCKER_HUB_TOKEN }}
  IMAGE_NAME: mirrorange/clove

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
      security-events: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          submodules: recursive

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to Docker Hub
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v3
        with:
          username: ${{ env.DOCKER_HUB_USERNAME }}
          password: ${{ env.DOCKER_HUB_TOKEN }}

      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.IMAGE_NAME }}
          tags: |
            type=ref,event=branch
            type=ref,event=pr
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=semver,pattern={{major}}
            type=raw,value=latest,enable={{is_default_branch}}

      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Run security scan
        if: github.event_name != 'pull_request'
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: ${{ env.IMAGE_NAME }}:${{ steps.meta.outputs.version }}
          format: "sarif"
          output: "trivy-results.sarif"

      - name: Upload Trivy scan results to GitHub Security tab
        if: github.event_name != 'pull_request'
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: "trivy-results.sarif"


================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[codz]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py.cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
#   For a library or package, you might want to ignore these files since the code is
#   intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
#   However, in case of collaboration, if having platform-specific dependencies or dependencies
#   having no cross-platform support, pipenv may install dependencies that don't work, or not
#   install all needed dependencies.
#Pipfile.lock

# UV
#   Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.
#   This is especially recommended for binary packages to ensure reproducibility, and is more
#   commonly ignored for libraries.
#uv.lock

# poetry
#   Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
#   This is especially recommended for binary packages to ensure reproducibility, and is more
#   commonly ignored for libraries.
#   https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
#poetry.toml

# pdm
#   Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#   pdm recommends including project-wide configuration in pdm.toml, but excluding .pdm-python.
#   https://pdm-project.org/en/latest/usage/project/#working-with-version-control
#pdm.lock
#pdm.toml
.pdm-python
.pdm-build/

# pixi
#   Similar to Pipfile.lock, it is generally recommended to include pixi.lock in version control.
#pixi.lock
#   Pixi creates a virtual environment in the .pixi directory, just like venv module creates one
#   in the .venv directory. It is recommended not to include this directory in version control.
.pixi

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.envrc
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
#  JetBrains specific template is maintained in a separate JetBrains.gitignore that can
#  be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
#  and can be added to the global gitignore or merged into this file.  For a more nuclear
#  option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/

# Abstra
# Abstra is an AI-powered process automation framework.
# Ignore directories containing user credentials, local state, and settings.
# Learn more at https://abstra.io/docs
.abstra/

# Visual Studio Code
#  Visual Studio Code specific template is maintained in a separate VisualStudioCode.gitignore 
#  that can be found at https://github.com/github/gitignore/blob/main/Global/VisualStudioCode.gitignore
#  and can be added to the global gitignore or merged into this file. However, if you prefer, 
#  you could uncomment the following to ignore the entire vscode folder
# .vscode/

# Ruff stuff:
.ruff_cache/

# PyPI configuration file
.pypirc

# Cursor
#  Cursor is an AI-powered code editor. `.cursorignore` specifies files/directories to
#  exclude from AI features like autocomplete and code analysis. Recommended for sensitive data
#  refer to https://docs.cursor.com/context/ignore-files
.cursorignore
.cursorindexingignore

# Marimo
marimo/_static/
marimo/_lsp/
__marimo__/

# Data
/data/

# Built frontend static files
app/static/

================================================
FILE: .gitmodules
================================================
[submodule "front"]
	path = front
	url = https://github.com/mirrorange/clove-front.git


================================================
FILE: .python-version
================================================
3.13


================================================
FILE: Dockerfile
================================================
# Multi-stage Dockerfile for Clove (uv version)

# =============================================================================
# Stage 1: Build frontend
# =============================================================================
FROM node:20-alpine AS frontend-builder

# Install pnpm
RUN corepack enable && corepack prepare pnpm@latest --activate

WORKDIR /app/front

# Copy frontend package files
COPY front/package.json front/pnpm-lock.yaml ./

# Install dependencies
RUN pnpm install --frozen-lockfile

# Copy frontend source
COPY front/ ./

# Build frontend
RUN pnpm run build

# =============================================================================
# Stage 2: Build Python application with uv
# =============================================================================
FROM ghcr.io/astral-sh/uv:python3.11-bookworm-slim AS app

# uv optimization environment variables
ENV UV_COMPILE_BYTECODE=1 \
    UV_LINK_MODE=copy \
    UV_PYTHON_DOWNLOADS=0

WORKDIR /app

# Step 1: Copy dependency files only (leverage Docker layer caching)
COPY pyproject.toml uv.lock ./

# Install dependencies (without installing the project itself)
# --locked: Use lockfile for consistency
# --no-install-project: Only install dependencies, not the project
# --no-dev: Skip dev dependencies
# --extra rnet --extra curl: Install optional dependency groups
RUN --mount=type=cache,target=/root/.cache/uv \
    uv sync --locked --no-install-project --no-dev --extra rnet --extra curl

# Step 2: Copy application code and README.md (required by pyproject.toml)
COPY app/ ./app/
COPY README.md ./

# Step 3: Copy frontend build artifacts (required by pyproject.toml force-include)
COPY --from=frontend-builder /app/front/dist ./app/static

# Step 4: Install the project itself
RUN --mount=type=cache,target=/root/.cache/uv \
    uv sync --locked --no-dev --extra rnet --extra curl

# Create data directory
RUN mkdir -p /data

# Activate virtual environment (add .venv/bin to PATH)
ENV PATH="/app/.venv/bin:$PATH"

# Environment variables
ENV DATA_FOLDER=/data \
    HOST=0.0.0.0 \
    PORT=5201

# Expose port
EXPOSE 5201

# Reset ENTRYPOINT (uv image default is uv)
ENTRYPOINT []

# Run the application
CMD ["python", "-m", "app.main"]


================================================
FILE: Dockerfile.huggingface
================================================
# Simplified Dockerfile for Clove - For Huggingface Spaces
FROM python:3.11-slim

WORKDIR /app

# Install clove-proxy from PyPI
RUN pip install --no-cache-dir "clove-proxy[rnet]"

# Environment variables
ENV NO_FILESYSTEM_MODE=true
ENV HOST=0.0.0.0
ENV PORT=${PORT:-7860}

# Expose port
EXPOSE ${PORT:-7860}

# Run the application using the installed script
CMD ["clove"]


================================================
FILE: Dockerfile.pypi
================================================
# Simplified Dockerfile for Clove - Install from PyPI
FROM python:3.11-slim

WORKDIR /app

# Install clove-proxy from PyPI
RUN pip install --no-cache-dir "clove-proxy[rnet]"

# Create data directory
RUN mkdir -p /data

# Environment variables
ENV DATA_FOLDER=/data
ENV HOST=0.0.0.0
ENV PORT=${PORT:-5201}

# Expose port
EXPOSE ${PORT:-5201}

# Run the application using the installed script
CMD ["clove"]


================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2025 orange

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: MANIFEST.in
================================================
# Include all static files
recursive-include app/static *

# Include locale files
recursive-include app/locales *.json

# Include documentation
include README.md
include LICENSE

# Exclude development files
recursive-exclude * __pycache__
recursive-exclude * *.py[co]
recursive-exclude * .DS_Store
global-exclude *.log
global-exclude *.tmp
global-exclude .git*

# Exclude test files
recursive-exclude tests *
recursive-exclude * test_*

# Exclude frontend source
recursive-exclude front *

# Exclude data directory
recursive-exclude data *


================================================
FILE: Makefile
================================================
.PHONY: help build build-frontend build-wheel install install-dev clean run test

# Default target
help:
	@echo "Available commands:"
	@echo "  make build          - Build frontend and create Python wheel"
	@echo "  make build-frontend - Build only the frontend"
	@echo "  make build-wheel    - Build only the Python wheel"
	@echo "  make install        - Build and install the package"
	@echo "  make install-dev    - Install in development mode"
	@echo "  make clean          - Clean build artifacts"
	@echo "  make run            - Run the application (development)"
	@echo "  make test           - Run tests"

# Build everything
build:
	@python scripts/build_wheel.py

# Build only frontend
build-frontend:
	@cd front && pnpm install && pnpm run build
	@rm -rf app/static
	@cp -r front/dist app/static
	@echo "✓ Frontend build complete"

# Build only wheel
build-wheel:
	@python scripts/build_wheel.py --skip-frontend

# Build and install
install: build
	@pip install dist/*.whl
	@echo "✓ Clove installed successfully"
	@echo "Run 'clove' to start the application"

# Install in development mode
install-dev:
	@pip install -e .
	@echo "✓ Clove installed in development mode"

# Clean build artifacts
clean:
	@rm -rf dist build *.egg-info
	@rm -rf app/__pycache__ app/**/__pycache__
	@rm -rf .pytest_cache .ruff_cache
	@find . -type f -name "*.pyc" -delete
	@find . -type f -name "*.pyo" -delete
	@echo "✓ Cleaned build artifacts"

# Run the application (development mode)
run:
	@python -m app.main

# Run tests (target advertised by .PHONY and help; pytest assumed as the test runner)
test:
	@pytest tests/

================================================
FILE: README.md
================================================
# Clove 🍀

<div align="center">

[![License](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE)
[![Python](https://img.shields.io/badge/python-3.13+-blue.svg)](https://www.python.org/downloads/)
[![FastAPI](https://img.shields.io/badge/FastAPI-0.115+-green.svg)](https://fastapi.tiangolo.com)

**The all-in-one Claude reverse proxy ✨**

[English](./README_en.md) | [简体中文](#)

</div>

## 🌟 What is this?

Clove is a reverse proxy that lets you access Claude.ai through the standard Claude API. In short, it lets all kinds of AI applications connect to Claude!

**Biggest highlight**: Clove is the first reverse proxy to support accessing Claude's official API via OAuth authentication (the same mechanism Claude Code uses)! That means you get the full Claude API feature set, including advanced capabilities such as native system messages and prefill.

## 🚀 Quick Start

Three steps and you're up and running:

### 1. Install Python

Make sure you have Python 3.13 or newer

### 2. Install Clove

```bash
pip install "clove-proxy[rnet]"
```

### 3. Launch!

```bash
clove
```

On startup, a randomly generated temporary admin key is printed to the console. After logging into the admin panel, don't forget to add a key of your own!

### 4. Configure Accounts

Open your browser and visit: http://localhost:5201

Log in with the admin key from the previous step, and you can add your Claude accounts~

## ✨ Core Features

### 🔐 Dual-Mode Operation

- **OAuth mode**: Preferred; gives access to the full Claude API feature set
- **Web proxy mode**: Automatic fallback when OAuth is unavailable, emulating the Claude.ai web interface

### 🎯 Outstanding Compatibility

Compared with other reverse proxies (such as Clewd), Clove's compatibility is excellent:

- ✅ Full SillyTavern support
- ✅ Works with the vast majority of applications that use the Claude API
- ✅ Even supports Claude Code itself!

### 🛠️ Enhanced Features

#### For OAuth mode

- Full access to every Claude API feature
- Native system message support
- Prefill support
- Better performance and stability

#### For Claude.ai web proxy mode

Clove smooths over the differences between the Claude.ai web interface and the API:

- Image upload support
- Extended thinking (chain of thought) support

Even through the web proxy, Clove enables features that are otherwise unsupported:

- Tool use (Function Calling)
- Stop sequences
- Token counting (estimated)
- Non-streaming responses

Clove makes the Claude.ai web proxy behave as close to the API as possible, aiming for a seamless experience across all applications.

### 🎨 Friendly Admin Interface

- Modern web management UI
- No config files to edit
- Every setting can be changed from the admin panel
- Automatic user quota and status management

### 🔄 Smart Features

- **Automatic OAuth authentication**: Completed automatically via cookies, no manual Claude Code login required
- **Intelligent switching**: Switches automatically between OAuth and the Claude.ai web proxy
- **Quota management**: Accounts are flagged automatically when quota is exceeded and restored when it resets

## ⚠️ Limitations

### 1. Note for Android Termux Users

Clove relies on `curl_cffi` to talk to claude.ai, and that dependency does not run on Termux.

**Workarounds**:

- Install the build without curl_cffi: `pip install clove-proxy`
  - ✅ Access the Claude API via OAuth (authentication must be completed manually in the admin panel)
  - ❌ No web proxy support
  - ❌ No automatic OAuth authentication
- Use a reverse proxy/mirror (such as fuclaude)
  - ✅ All features available
  - ❌ Requires an extra server (though if you already have a server for the mirror, why deploy on Termux? lol)

### 2. Tool Use Limitations

If you use web proxy mode, avoid connecting applications that run **many tool calls in parallel**.

- Clove has to hold a connection to Claude.ai open while waiting for each tool call's result
- Too many parallel calls exhaust the connections and cause failures
- OAuth mode is not subject to this limitation

### 3. Prompt Structure Limitations

When Clove uses the web proxy, Claude.ai adds extra system prompts and file-upload structures to your prompt. When using prompts with strict structural requirements (such as RP presets):

- You can predict which path a request will take. Under the default configuration:
  - With a Free account, all requests go through the Claude.ai web proxy
  - With a Pro/Max account, all requests go through the Claude API
  - With multiple accounts, Clove always prefers an account with API access to the requested model
- Choose prompts that are compatible with the request path

## 🔧 Advanced Configuration

### Environment Variables

Most settings can be changed in the admin interface, but you can also configure them via environment variables:

```bash
# Port configuration
PORT=5201

# Admin key (auto-generated if not set)
ADMIN_API_KEYS=your-secret-key

# Claude.ai Cookie
COOKIES=sessionKey=your-session-key
```

See the `.env.example` file for more options.

### API Usage

Once configured, you can use Clove just like the standard Claude API:

```python
import anthropic

client = anthropic.Anthropic(
    base_url="http://localhost:5201",
    api_key="your-api-key"  # Create this in the admin panel
)

response = client.messages.create(
    model="claude-opus-4-20250514",
    messages=[{"role": "user", "content": "Hello, Claude!"}],
    max_tokens=1024,
)
```

## 🤝 Contributing

Contributions are welcome! If you have a good idea or found a problem:

1. Fork the project
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request

## 📄 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## 🙏 Acknowledgments

- [Anthropic Claude](https://www.anthropic.com/claude) - ~~Adorable little Claude~~ Powerful AI assistant
- [Clewd](https://github.com/teralomaniac/clewd/) - The original Claude.ai reverse proxy
- [ClewdR](https://github.com/Xerxes-2/clewdr) - High-performance Claude.ai reverse proxy
- [FastAPI](https://fastapi.tiangolo.com/) - Modern, fast web framework
- [Tailwind CSS](https://tailwindcss.com/) - CSS framework
- [Shadcn UI](https://ui.shadcn.com/) - Modern UI component library
- [Vite](https://vitejs.dev/) - Modern frontend build tool
- [React](https://reactjs.org/) - JavaScript library

## ⚠️ Disclaimer

This project is for learning and research purposes only. Please comply with the terms of service of the relevant services when using it. The author is not responsible for any misuse or violation of those terms.

## 📮 Contact

Questions or suggestions are welcome via:

- Filing an [Issue](https://github.com/mirrorange/clove/issues)
- Sending a Pull Request
- Email: orange@freesia.ink

## 🌸 About Clove

The clove, a plant of the genus Syzygium in the family Myrtaceae, is a common spice and is also used in traditional Chinese medicine. The clove (Clove) and the lilac (Syringa) are two different plants! In this project, though, Clove is closer to a blend of "Claude" and "love"!

---

<div align="center">
Made with ❤️ by 🍊
</div>


================================================
FILE: README_en.md
================================================
# Clove 🍀

<div align="center">

[![License](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE)
[![Python](https://img.shields.io/badge/python-3.13+-blue.svg)](https://www.python.org/downloads/)
[![FastAPI](https://img.shields.io/badge/FastAPI-0.115+-green.svg)](https://fastapi.tiangolo.com)

**The all-in-one Claude reverse proxy ✨**

[English](#) | [简体中文](./README.md)

</div>

## 🌟 What is this?

Clove is a reverse proxy tool that lets you access Claude.ai through the standard Claude API. In simple terms, it allows various AI applications to connect to Claude!

**The biggest highlight**: Clove is the first reverse proxy to support accessing Claude's official API through OAuth authentication (the same one Claude Code uses)! This means you get the full Claude API experience, including advanced features like native system messages and prefilling.

## 🚀 Quick Start

Just three steps to get started:

### 1. Install Python

Make sure you have Python 3.13 or higher on your computer

### 2. Install Clove

```bash
pip install "clove-proxy[rnet]"
```

### 3. Launch!

```bash
clove
```

After starting, you'll see a randomly generated temporary admin key in the console. Don't forget to add your own key after logging into the admin panel!

### 4. Configure Your Account

Open your browser and go to: http://localhost:5201

Log in with the admin key from earlier, then you can add your Claude account~

## ✨ Core Features

### 🔐 Dual Mode Operation

- **OAuth Mode**: Preferred method, gives you access to all Claude API features
- **Web Proxy Mode**: Automatically switches when OAuth is unavailable, works by emulating the Claude.ai web interface

### 🎯 Outstanding Compatibility

Compared to other proxy tools (like Clewd), Clove offers exceptional compatibility:

- ✅ Full support for SillyTavern
- ✅ Works with most applications that use the Claude API
- ✅ Even supports Claude Code itself!

### 🛠️ Enhanced Features

#### For OAuth Mode

- Complete access to all Claude API features
- Native system message support
- Prefilling support
- Better performance and stability

#### For Claude.ai Web Proxy Mode

Clove handles all the differences between Claude.ai web version and the API:

- Image upload support
- Extended thinking (chain of thought) support

Even through web proxy, Clove enables features that weren't originally supported:

- Function Calling
- Stop Sequences
- Token counting (estimated)
- Non-streaming responses

Clove strives to make the Claude.ai web proxy as API-like as possible for a seamless experience across all applications.
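From the client's point of view, these emulated features are driven by the same request fields as the official Messages API. A minimal sketch of a request body exercising them (field names follow Anthropic's public Messages API; the `get_weather` tool is a made-up example, not something Clove ships):

```python
# Sketch of a Messages API request body using the features Clove emulates
# in web proxy mode. The "get_weather" tool is a hypothetical example.
request_body = {
    "model": "claude-opus-4-20250514",
    "max_tokens": 1024,
    "stream": False,                   # non-streaming: Clove collects the full reply
    "stop_sequences": ["\n\nHuman:"],  # enforced by Clove when proxying the web UI
    "tools": [
        {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "input_schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
}
```

An application that already speaks this schema should work unchanged whether Clove routes the request over OAuth or over the web proxy.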

### 🎨 Friendly Admin Interface

- Modern web management interface
- No need to edit config files
- All settings can be configured in the admin panel
- Automatic user quota and status management

### 🔄 Smart Features

- **Automatic OAuth Authentication**: Completed automatically through cookies, no manual Claude Code login needed
- **Intelligent Switching**: Automatically switches between OAuth and Claude.ai web proxy
- **Quota Management**: Automatically flags when quota is exceeded and restores when reset

## ⚠️ Limitations

### 1. Android Termux Users Note

Clove depends on `curl_cffi` to request claude.ai, but this dependency doesn't work on Termux.

**Solutions**:

- Use the version without curl_cffi: `pip install clove-proxy`
  - ✅ Access Claude API through OAuth (requires manual authentication in admin panel)
  - ❌ Cannot use web proxy features
  - ❌ Cannot auto-complete OAuth authentication
- Use a reverse proxy/mirror (like fuclaude)
  - ✅ Can use all features
  - ❌ Requires an additional server (but if you have a server for mirroring, why deploy on Termux? lol)

### 2. Tool Calling Limitations

If you're using web proxy mode, avoid connecting applications that perform **many parallel tool calls**.

- Clove needs to maintain connections with Claude.ai while waiting for tool call results
- Too many parallel calls will exhaust connections and cause failures
- OAuth mode is not affected by this limitation

### 3. Prompt Structure Limitations

When Clove uses web proxy, Claude.ai adds extra system prompts and file upload structures to your prompts. When using prompts with strict structural requirements (like RP presets):

- You can predict which method your request will use. With default settings:
  - Free accounts: All requests go through Claude.ai web proxy
  - Pro/Max accounts: All requests use Claude API
  - With multiple accounts, Clove always prioritizes accounts with API access for the requested model
- Choose prompts compatible with your request method

## 🔧 Advanced Configuration

### Environment Variables

While most settings can be configured in the admin interface, you can also use environment variables:

```bash
# Port configuration
PORT=5201

# Admin key (auto-generated if not set)
ADMIN_API_KEYS=your-secret-key

# Claude.ai Cookie
COOKIES=sessionKey=your-session-key
```

See `.env.example` for more configuration options.
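How a process might pick these up at startup can be sketched with only the standard library. The variable names come from the README; treating `ADMIN_API_KEYS` as comma-separated is an assumption from its plural name, and `secrets.token_hex` stands in for however Clove actually generates its temporary key:

```python
import secrets

def load_config(env: dict) -> dict:
    """Parse the README's documented variables from an environment mapping."""
    port = int(env.get("PORT", "5201"))  # default port from the README
    # Comma-separated list is an assumption based on the plural name.
    keys = [k for k in env.get("ADMIN_API_KEYS", "").split(",") if k]
    if not keys:
        # Clove prints a temporary admin key when none is configured;
        # token_hex is a stand-in for its actual generation scheme.
        keys = [secrets.token_hex(16)]
    return {"port": port, "admin_api_keys": keys, "cookies": env.get("COOKIES", "")}

config = load_config({
    "ADMIN_API_KEYS": "your-secret-key",
    "COOKIES": "sessionKey=your-session-key",
})
```

Passing the mapping explicitly (rather than reading `os.environ` directly) keeps the parsing testable; in a real process you would call `load_config(dict(os.environ))`.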

### API Usage

Once configured, you can use Clove just like the standard Claude API:

```python
import anthropic

client = anthropic.Anthropic(
    base_url="http://localhost:5201",
    api_key="your-api-key"  # Create this in the admin panel
)

response = client.messages.create(
    model="claude-opus-4-20250514",
    messages=[{"role": "user", "content": "Hello, Claude!"}],
    max_tokens=1024,
)
```

## 🤝 Contributing

Contributions are welcome! If you have great ideas or found issues:

1. Fork this project
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request

## 📄 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## 🙏 Acknowledgments

- [Anthropic Claude](https://www.anthropic.com/claude) - ~~Adorable little Claude~~ Powerful AI assistant
- [Clewd](https://github.com/teralomaniac/clewd/) - The original Claude.ai reverse proxy
- [ClewdR](https://github.com/Xerxes-2/clewdr) - High-performance Claude.ai reverse proxy
- [FastAPI](https://fastapi.tiangolo.com/) - Modern, fast web framework
- [Tailwind CSS](https://tailwindcss.com/) - CSS framework
- [Shadcn UI](https://ui.shadcn.com/) - Modern UI component library
- [Vite](https://vitejs.dev/) - Modern frontend build tool
- [React](https://reactjs.org/) - JavaScript library

## ⚠️ Disclaimer

This project is for learning and research purposes only. When using this project, please comply with the terms of service of the relevant services. The author is not responsible for any misuse or violations of service terms.

## 📮 Contact

If you have questions or suggestions, feel free to reach out:

- Submit an [Issue](https://github.com/mirrorange/clove/issues)
- Send a Pull Request
- Email: orange@freesia.ink

## 🌸 About Clove

Clove is a plant from the Myrtaceae family's Syzygium genus, commonly used as a spice and in traditional medicine. Clove (丁香, the spice) and lilac flowers (丁香花, Syringa) are two different plants! In this project, the name Clove is actually a blend of "Claude" and "love"!

---

<div align="center">
Made with ❤️ by 🍊
</div>


================================================
FILE: app/__init__.py
================================================
__version__ = "0.1.0"


================================================
FILE: app/api/__init__.py
================================================


================================================
FILE: app/api/main.py
================================================
from fastapi import APIRouter
from app.api.routes import claude, accounts, settings, statistics

api_router = APIRouter()

api_router.include_router(claude.router, prefix="/v1", tags=["Claude API"])
api_router.include_router(
    accounts.router, prefix="/api/admin/accounts", tags=["Account Management"]
)
api_router.include_router(
    settings.router, prefix="/api/admin/settings", tags=["Settings Management"]
)
api_router.include_router(
    statistics.router, prefix="/api/admin/statistics", tags=["Statistics"]
)


================================================
FILE: app/api/routes/accounts.py
================================================
from typing import List, Optional
from fastapi import APIRouter, HTTPException
from pydantic import BaseModel, Field
from uuid import UUID
import time

from app.core.exceptions import OAuthExchangeError
from app.dependencies.auth import AdminAuthDep
from app.services.account import account_manager
from app.core.account import AuthType, AccountStatus, OAuthToken
from app.services.oauth import oauth_authenticator


class OAuthTokenCreate(BaseModel):
    access_token: str
    refresh_token: str
    expires_at: float


class AccountCreate(BaseModel):
    cookie_value: Optional[str] = None
    oauth_token: Optional[OAuthTokenCreate] = None
    organization_uuid: Optional[UUID] = None
    capabilities: Optional[List[str]] = None


class AccountUpdate(BaseModel):
    cookie_value: Optional[str] = None
    oauth_token: Optional[OAuthTokenCreate] = None
    capabilities: Optional[List[str]] = None
    status: Optional[AccountStatus] = None


class OAuthCodeExchange(BaseModel):
    organization_uuid: UUID
    code: str
    pkce_verifier: str
    capabilities: Optional[List[str]] = None


class AccountResponse(BaseModel):
    organization_uuid: str
    capabilities: Optional[List[str]]
    cookie_value: Optional[str] = Field(None, description="Masked cookie value")
    status: AccountStatus
    auth_type: AuthType
    is_pro: bool
    is_max: bool
    has_oauth: bool
    last_used: str
    resets_at: Optional[str] = None


router = APIRouter()


@router.get("", response_model=List[AccountResponse])
async def list_accounts(_: AdminAuthDep):
    """List all accounts."""
    accounts = []

    for org_uuid, account in account_manager._accounts.items():
        accounts.append(
            AccountResponse(
                organization_uuid=org_uuid,
                capabilities=account.capabilities,
                cookie_value=account.cookie_value[:20] + "..."
                if account.cookie_value
                else None,
                status=account.status,
                auth_type=account.auth_type,
                is_pro=account.is_pro,
                is_max=account.is_max,
                has_oauth=account.oauth_token is not None,
                last_used=account.last_used.isoformat(),
                resets_at=account.resets_at.isoformat() if account.resets_at else None,
            )
        )

    return accounts


@router.get("/{organization_uuid}", response_model=AccountResponse)
async def get_account(organization_uuid: str, _: AdminAuthDep):
    """Get a specific account by organization UUID."""
    if organization_uuid not in account_manager._accounts:
        raise HTTPException(status_code=404, detail="Account not found")

    account = account_manager._accounts[organization_uuid]

    return AccountResponse(
        organization_uuid=organization_uuid,
        capabilities=account.capabilities,
        cookie_value=account.cookie_value[:20] + "..."
        if account.cookie_value
        else None,
        status=account.status,
        auth_type=account.auth_type,
        is_pro=account.is_pro,
        is_max=account.is_max,
        has_oauth=account.oauth_token is not None,
        last_used=account.last_used.isoformat(),
        resets_at=account.resets_at.isoformat() if account.resets_at else None,
    )


@router.post("", response_model=AccountResponse)
async def create_account(account_data: AccountCreate, _: AdminAuthDep):
    """Create a new account."""
    oauth_token = None
    if account_data.oauth_token:
        oauth_token = OAuthToken(
            access_token=account_data.oauth_token.access_token,
            refresh_token=account_data.oauth_token.refresh_token,
            expires_at=account_data.oauth_token.expires_at,
        )

    account = await account_manager.add_account(
        cookie_value=account_data.cookie_value,
        oauth_token=oauth_token,
        organization_uuid=str(account_data.organization_uuid),
        capabilities=account_data.capabilities,
    )

    return AccountResponse(
        organization_uuid=account.organization_uuid,
        capabilities=account.capabilities,
        cookie_value=account.cookie_value[:20] + "..."
        if account.cookie_value
        else None,
        status=account.status,
        auth_type=account.auth_type,
        is_pro=account.is_pro,
        is_max=account.is_max,
        has_oauth=account.oauth_token is not None,
        last_used=account.last_used.isoformat(),
        resets_at=account.resets_at.isoformat() if account.resets_at else None,
    )


@router.put("/{organization_uuid}", response_model=AccountResponse)
async def update_account(
    organization_uuid: str, account_data: AccountUpdate, _: AdminAuthDep
):
    """Update an existing account."""
    if organization_uuid not in account_manager._accounts:
        raise HTTPException(status_code=404, detail="Account not found")

    account = account_manager._accounts[organization_uuid]

    # Update fields if provided
    if account_data.cookie_value is not None:
        # Remove old cookie mapping if exists
        if (
            account.cookie_value
            and account.cookie_value in account_manager._cookie_to_uuid
        ):
            del account_manager._cookie_to_uuid[account.cookie_value]

        account.cookie_value = account_data.cookie_value
        account_manager._cookie_to_uuid[account_data.cookie_value] = organization_uuid

    if account_data.oauth_token is not None:
        account.oauth_token = OAuthToken(
            access_token=account_data.oauth_token.access_token,
            refresh_token=account_data.oauth_token.refresh_token,
            expires_at=account_data.oauth_token.expires_at,
        )
        # Update auth type based on what's available
        if account.cookie_value and account.oauth_token:
            account.auth_type = AuthType.BOTH
        elif account.oauth_token:
            account.auth_type = AuthType.OAUTH_ONLY
        else:
            account.auth_type = AuthType.COOKIE_ONLY

    if account_data.capabilities is not None:
        account.capabilities = account_data.capabilities

    if account_data.status is not None:
        account.status = account_data.status
        if account.status == AccountStatus.VALID:
            account.resets_at = None

    # Save changes
    account_manager.save_accounts()

    return AccountResponse(
        organization_uuid=organization_uuid,
        capabilities=account.capabilities,
        cookie_value=account.cookie_value[:20] + "..."
        if account.cookie_value
        else None,
        status=account.status,
        auth_type=account.auth_type,
        is_pro=account.is_pro,
        is_max=account.is_max,
        has_oauth=account.oauth_token is not None,
        last_used=account.last_used.isoformat(),
        resets_at=account.resets_at.isoformat() if account.resets_at else None,
    )


@router.delete("/{organization_uuid}")
async def delete_account(organization_uuid: str, _: AdminAuthDep):
    """Delete an account."""
    if organization_uuid not in account_manager._accounts:
        raise HTTPException(status_code=404, detail="Account not found")

    await account_manager.remove_account(organization_uuid)

    return {"message": "Account deleted successfully"}


@router.post("/oauth/exchange", response_model=AccountResponse)
async def exchange_oauth_code(exchange_data: OAuthCodeExchange, _: AdminAuthDep):
    """Exchange OAuth authorization code for tokens and create account."""
    # Exchange code for tokens
    token_data = await oauth_authenticator.exchange_token(
        exchange_data.code, exchange_data.pkce_verifier
    )

    if not token_data:
        raise OAuthExchangeError()

    # Create OAuth token object
    oauth_token = OAuthToken(
        access_token=token_data["access_token"],
        refresh_token=token_data["refresh_token"],
        expires_at=time.time() + token_data["expires_in"],
    )

    # Create account with OAuth token
    account = await account_manager.add_account(
        oauth_token=oauth_token,
        organization_uuid=str(exchange_data.organization_uuid),
        capabilities=exchange_data.capabilities,
    )

    return AccountResponse(
        organization_uuid=account.organization_uuid,
        capabilities=account.capabilities,
        cookie_value=None,
        status=account.status,
        auth_type=account.auth_type,
        is_pro=account.is_pro,
        is_max=account.is_max,
        has_oauth=True,
        last_used=account.last_used.isoformat(),
        resets_at=account.resets_at.isoformat() if account.resets_at else None,
    )


================================================
FILE: app/api/routes/claude.py
================================================
from fastapi import APIRouter, Request
from fastapi.responses import StreamingResponse, JSONResponse
from tenacity import (
    retry,
    retry_if_exception,
    stop_after_attempt,
    wait_fixed,
)

from app.core.config import settings
from app.core.exceptions import NoResponseError
from app.dependencies.auth import AuthDep
from app.models.claude import MessagesAPIRequest
from app.processors.claude_ai import ClaudeAIContext
from app.processors.claude_ai.pipeline import ClaudeAIPipeline
from app.utils.retry import is_retryable_error, log_before_sleep

router = APIRouter()


@router.post("/messages", response_model=None)
@retry(
    retry=retry_if_exception(is_retryable_error),
    stop=stop_after_attempt(settings.retry_attempts),
    wait=wait_fixed(settings.retry_interval),
    before_sleep=log_before_sleep,
    reraise=True,
)
async def create_message(
    request: Request, messages_request: MessagesAPIRequest, _: AuthDep
) -> StreamingResponse | JSONResponse:
    context = ClaudeAIContext(
        original_request=request,
        messages_api_request=messages_request,
    )

    context = await ClaudeAIPipeline().process(context)

    if not context.response:
        raise NoResponseError()

    return context.response


================================================
FILE: app/api/routes/settings.py
================================================
import os
import json
from typing import List
from fastapi import APIRouter, HTTPException
from pydantic import BaseModel, HttpUrl

from app.dependencies.auth import AdminAuthDep
from app.core.config import Settings, settings


class SettingsRead(BaseModel):
    """Model for returning settings."""

    api_keys: List[str]
    admin_api_keys: List[str]

    proxy_url: str | None

    claude_ai_url: HttpUrl
    claude_api_baseurl: HttpUrl

    custom_prompt: str | None
    use_real_roles: bool
    human_name: str
    assistant_name: str
    padtxt_length: int
    allow_external_images: bool

    preserve_chats: bool

    oauth_client_id: str
    oauth_authorize_url: str
    oauth_token_url: str
    oauth_redirect_uri: str


class SettingsUpdate(BaseModel):
    """Model for updating settings."""

    api_keys: List[str] | None = None
    admin_api_keys: List[str] | None = None

    proxy_url: str | None = None

    claude_ai_url: HttpUrl | None = None
    claude_api_baseurl: HttpUrl | None = None

    custom_prompt: str | None = None
    use_real_roles: bool | None = None
    human_name: str | None = None
    assistant_name: str | None = None
    padtxt_length: int | None = None
    allow_external_images: bool | None = None

    preserve_chats: bool | None = None

    oauth_client_id: str | None = None
    oauth_authorize_url: str | None = None
    oauth_token_url: str | None = None
    oauth_redirect_uri: str | None = None


router = APIRouter()


@router.get("", response_model=SettingsRead)
async def get_settings(_: AdminAuthDep) -> Settings:
    """Get current settings."""
    return settings


@router.put("", response_model=SettingsRead)
async def update_settings(_: AdminAuthDep, updates: SettingsUpdate) -> Settings:
    """Update settings and save to config.json."""
    update_dict = updates.model_dump(exclude_unset=True)

    if not settings.no_filesystem_mode:
        config_path = settings.data_folder / "config.json"
        settings.data_folder.mkdir(parents=True, exist_ok=True)

        if os.path.exists(config_path):
            try:
                with open(config_path, "r", encoding="utf-8") as f:
                    config_data = SettingsUpdate.model_validate_json(f.read())
            except (json.JSONDecodeError, IOError):
                config_data = SettingsUpdate()
        else:
            config_data = SettingsUpdate()

        config_data = config_data.model_copy(update=update_dict)

        try:
            with open(config_path, "w", encoding="utf-8") as f:
                f.write(config_data.model_dump_json(exclude_unset=True))
        except IOError as e:
            raise HTTPException(
                status_code=500, detail=f"Failed to save config: {str(e)}"
            )

    for key, value in update_dict.items():
        if hasattr(settings, key):
            setattr(settings, key, value)

    return settings


================================================
FILE: app/api/routes/statistics.py
================================================
from typing import Literal
from fastapi import APIRouter
from pydantic import BaseModel

from app.dependencies.auth import AdminAuthDep
from app.services.account import account_manager


class AccountStats(BaseModel):
    total_accounts: int
    valid_accounts: int
    rate_limited_accounts: int
    invalid_accounts: int
    active_sessions: int


class StatisticsResponse(BaseModel):
    status: Literal["healthy", "degraded"]
    accounts: AccountStats


router = APIRouter()


@router.get("", response_model=StatisticsResponse)
async def get_statistics(_: AdminAuthDep):
    """Get system statistics. Requires admin authentication."""
    stats = await account_manager.get_status()
    return {
        "status": "healthy" if stats["valid_accounts"] > 0 else "degraded",
        "accounts": stats,
    }


================================================
FILE: app/core/__init__.py
================================================


================================================
FILE: app/core/account.py
================================================
from typing import List, Optional
from enum import Enum
from datetime import datetime
from dataclasses import dataclass

from app.core.exceptions import (
    ClaudeAuthenticationError,
    ClaudeRateLimitedError,
    OAuthAuthenticationNotAllowedError,
    OrganizationDisabledError,
)


class AccountStatus(str, Enum):
    VALID = "valid"
    INVALID = "invalid"
    RATE_LIMITED = "rate_limited"


class AuthType(str, Enum):
    COOKIE_ONLY = "cookie_only"
    OAUTH_ONLY = "oauth_only"
    BOTH = "both"


@dataclass
class OAuthToken:
    """Encapsulates OAuth credentials for an account."""

    access_token: str
    refresh_token: str
    expires_at: float  # Unix timestamp

    def to_dict(self) -> dict:
        """Convert to dictionary for JSON serialization."""
        return {
            "access_token": self.access_token,
            "refresh_token": self.refresh_token,
            "expires_at": self.expires_at,
        }

    @classmethod
    def from_dict(cls, data: dict) -> "OAuthToken":
        """Create from dictionary."""
        return cls(
            access_token=data["access_token"],
            refresh_token=data["refresh_token"],
            expires_at=data["expires_at"],
        )


class Account:
    """Represents a Claude.ai account with cookie and/or OAuth authentication."""

    def __init__(
        self,
        organization_uuid: str,
        capabilities: Optional[List[str]] = None,
        cookie_value: Optional[str] = None,
        oauth_token: Optional[OAuthToken] = None,
        auth_type: AuthType = AuthType.COOKIE_ONLY,
    ):
        self.organization_uuid = organization_uuid
        self.capabilities = capabilities
        self.cookie_value = cookie_value
        self.status = AccountStatus.VALID
        self.auth_type = auth_type
        self.last_used = datetime.now()
        self.resets_at: Optional[datetime] = None
        self.oauth_token: Optional[OAuthToken] = oauth_token

    def __enter__(self) -> "Account":
        """Enter the context manager."""
        self.last_used = datetime.now()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        """Exit the context manager and handle CookieRateLimitedError."""
        if exc_type is ClaudeRateLimitedError and isinstance(
            exc_val, ClaudeRateLimitedError
        ):
            self.status = AccountStatus.RATE_LIMITED
            self.resets_at = exc_val.resets_at
            self.save()

        if exc_type is ClaudeAuthenticationError and isinstance(
            exc_val, ClaudeAuthenticationError
        ):
            self.status = AccountStatus.INVALID
            self.save()

        if exc_type is OrganizationDisabledError and isinstance(
            exc_val, OrganizationDisabledError
        ):
            self.status = AccountStatus.INVALID
            self.save()

        if exc_type is OAuthAuthenticationNotAllowedError and isinstance(
            exc_val, OAuthAuthenticationNotAllowedError
        ):
            if self.auth_type == AuthType.BOTH:
                self.auth_type = AuthType.COOKIE_ONLY
            else:
                self.status = AccountStatus.INVALID
            self.save()

        return False

    def save(self) -> None:
        from app.services.account import account_manager

        account_manager.save_accounts()

    def to_dict(self) -> dict:
        """Convert Account to dictionary for JSON serialization."""
        return {
            "organization_uuid": self.organization_uuid,
            "capabilities": self.capabilities,
            "cookie_value": self.cookie_value,
            "status": self.status.value,
            "auth_type": self.auth_type.value,
            "last_used": self.last_used.isoformat(),
            "resets_at": self.resets_at.isoformat() if self.resets_at else None,
            "oauth_token": self.oauth_token.to_dict() if self.oauth_token else None,
        }

    @classmethod
    def from_dict(cls, data: dict) -> "Account":
        """Create Account from dictionary."""
        account = cls(
            organization_uuid=data["organization_uuid"],
            capabilities=data.get("capabilities"),
            cookie_value=data.get("cookie_value"),
            auth_type=AuthType(data["auth_type"]),
        )
        account.status = AccountStatus(data["status"])
        account.last_used = datetime.fromisoformat(data["last_used"])
        account.resets_at = (
            datetime.fromisoformat(data["resets_at"]) if data["resets_at"] else None
        )

        if "oauth_token" in data and data["oauth_token"]:
            account.oauth_token = OAuthToken.from_dict(data["oauth_token"])

        return account

    @property
    def is_pro(self) -> bool:
        """Check if account has pro capabilities."""
        if not self.capabilities:
            return False

        pro_keywords = ["pro", "enterprise", "raven", "max"]
        return any(
            keyword in cap.lower()
            for cap in self.capabilities
            for keyword in pro_keywords
        )

    @property
    def is_max(self) -> bool:
        """Check if account has max capabilities."""
        if not self.capabilities:
            return False

        return any("max" in cap.lower() for cap in self.capabilities)

    def __repr__(self) -> str:
        """String representation of the Account."""
        return f"<Account organization_uuid={self.organization_uuid[:8]}... status={self.status.value} auth_type={self.auth_type.value}>"


================================================
FILE: app/core/claude_session.py
================================================
from typing import Dict, Any, AsyncIterator, Optional
from datetime import datetime
from app.core.http_client import Response
from loguru import logger

from app.core.config import settings
from app.core.external.claude_client import ClaudeWebClient
from app.services.account import account_manager


class ClaudeWebSession:
    def __init__(self, session_id: str):
        self.session_id = session_id
        self.last_activity = datetime.now()
        self.conv_uuid: Optional[str] = None
        self.paprika_mode: Optional[str] = None
        self.sse_stream: Optional[AsyncIterator[str]] = None

    async def initialize(self):
        """Initialize the session."""
        self.account = await account_manager.get_account_for_session(self.session_id)
        self.client = ClaudeWebClient(self.account)
        await self.client.initialize()

    async def stream(self, response: Response) -> AsyncIterator[str]:
        """Get the SSE stream."""
        buffer = b""
        async for chunk in response.aiter_bytes():
            self.update_activity()
            buffer += chunk
            lines = buffer.split(b"\n")
            buffer = lines[-1]
            for line in lines[:-1]:
                yield line.decode("utf-8") + "\n"

        if buffer:
            yield buffer.decode("utf-8")

        logger.debug(f"Stream completed for session {self.session_id}")

        from app.services.session import session_manager

        await session_manager.remove_session(self.session_id)

    async def cleanup(self):
        """Cleanup session resources."""
        logger.debug(f"Cleaning up session {self.session_id}")

        # Delete conversation if exists
        if self.conv_uuid and not settings.preserve_chats:
            await self.client.delete_conversation(self.conv_uuid)

        await account_manager.release_session(self.session_id)
        await self.client.cleanup()

    async def _ensure_conversation_initialized(self) -> None:
        """Ensure conversation is initialized. Create if not exists."""
        if not self.conv_uuid:
            conv_uuid, paprika_mode = await self.client.create_conversation()
            self.conv_uuid = conv_uuid
            self.paprika_mode = paprika_mode

    def update_activity(self):
        """Update last activity timestamp."""
        self.last_activity = datetime.now()

    async def send_message(self, payload: Dict[str, Any]) -> AsyncIterator[str]:
        """Process a completion request through the pipeline."""
        self.update_activity()

        await self._ensure_conversation_initialized()

        response = await self.client.send_message(
            payload,
            conv_uuid=self.conv_uuid,
        )
        self.sse_stream = self.stream(response)

        logger.debug(f"Sent message for session {self.session_id}")
        return self.sse_stream

    async def upload_file(
        self, file_data: bytes, filename: str, content_type: str
    ) -> str:
        """Upload a file and return file UUID."""
        return await self.client.upload_file(file_data, filename, content_type)

    async def send_tool_result(self, payload: Dict[str, Any]) -> None:
        """Send tool result to Claude.ai."""
        if not self.conv_uuid:
            raise ValueError(
                "Session must have an active conversation to send tool results"
            )

        await self.client.send_tool_result(payload, self.conv_uuid)

    async def set_paprika_mode(self, mode: Optional[str]) -> None:
        """Set the conversation mode."""
        await self._ensure_conversation_initialized()

        if self.paprika_mode == mode:
            return

        await self.client.set_paprika_mode(self.conv_uuid, mode)
        self.paprika_mode = mode
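

# The byte-buffering in ClaudeWebSession.stream() can be shown in
# isolation: chunks arrive at arbitrary boundaries and are re-emitted as
# complete newline-terminated strings. Standalone sketch, synchronous for
# clarity (the original is an async generator over response.aiter_bytes()):
def _iter_lines_sketch(chunks):
    buffer = b""
    for chunk in chunks:
        buffer += chunk
        lines = buffer.split(b"\n")
        buffer = lines[-1]  # keep any trailing partial line for the next chunk
        for line in lines[:-1]:
            yield line.decode("utf-8") + "\n"
    if buffer:
        yield buffer.decode("utf-8")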


================================================
FILE: app/core/config.py
================================================
import os
import json
from pathlib import Path
from typing import Optional, List, Dict, Any
from pydantic_settings import BaseSettings, SettingsConfigDict
from pydantic import Field, HttpUrl, field_validator
from dotenv import load_dotenv

class Settings(BaseSettings):
    """Application settings with environment variable and JSON config support."""

    model_config = SettingsConfigDict(
        env_file=".env",
        env_ignore_empty=True,
        extra="ignore",
    )

    @classmethod
    def settings_customise_sources(
        cls,
        settings_cls,
        init_settings,
        env_settings,
        dotenv_settings,
        file_secret_settings,
    ):
        """Customize settings sources to add JSON config support.

        Priority order (highest to lowest):
        1. JSON config file
        2. Environment variables
        3. .env file
        4. Default values
        """
        return (
            init_settings,
            cls._json_config_settings,
            env_settings,
            dotenv_settings,
            file_secret_settings,
        )

    @classmethod
    def _json_config_settings(cls) -> Dict[str, Any]:
        """Load settings from JSON config file in data_folder."""

        # Check if NO_FILESYSTEM_MODE is enabled
        if os.environ.get("NO_FILESYSTEM_MODE", "").lower() in ("true", "1", "yes"):
            return {}

        # Load .env file to ensure environment variables are available
        load_dotenv()

        # First get data_folder from env or default
        data_folder = os.environ.get(
            "DATA_FOLDER", str(Path.home() / ".clove" / "data")
        )

        config_path = os.path.join(data_folder, "config.json")

        if os.path.exists(config_path):
            try:
                with open(config_path, "r", encoding="utf-8") as f:
                    config_data = json.load(f)
                    return config_data
            except (json.JSONDecodeError, IOError):
                # If there's an error reading the JSON, just return empty dict
                return {}
        return {}

    # Server settings
    host: str = Field(default="0.0.0.0", env="HOST")
    port: int = Field(default=5201, env="PORT")

    # Application configuration
    data_folder: Path = Field(
        default=Path.home() / ".clove" / "data",
        env="DATA_FOLDER",
        description="Folder path for storing persistent data (accounts, etc.)",
    )
    locales_folder: Path = Field(
        default=Path(__file__).parent.parent / "locales",
        env="LOCALES_FOLDER",
        description="Folder path for storing translation files",
    )
    static_folder: Path = Field(
        default=Path(__file__).parent.parent / "static",
        env="STATIC_FOLDER",
        description="Folder path for storing static files",
    )
    default_language: str = Field(
        default="en",
        env="DEFAULT_LANGUAGE",
        description="Default language code for translations",
    )
    retry_attempts: int = Field(
        default=3,
        env="RETRY_ATTEMPTS",
        description="Number of retry attempts for failed requests",
    )
    retry_interval: int = Field(
        default=1,
        env="RETRY_INTERVAL",
        description="Interval between retry attempts in seconds",
    )
    no_filesystem_mode: bool = Field(
        default=False,
        env="NO_FILESYSTEM_MODE",
        description="When True, disables all filesystem operations (accounts/settings stored in memory only)",
    )

    # Proxy settings
    proxy_url: Optional[str] = Field(default=None, env="PROXY_URL")

    # API Keys
    api_keys: List[str] | str = Field(
        default_factory=list,
        env="API_KEYS",
        description="Comma-separated list of API keys",
    )
    admin_api_keys: List[str] | str = Field(
        default_factory=list,
        env="ADMIN_API_KEYS",
        description="Comma-separated list of admin API keys",
    )

    # Claude URLs
    claude_ai_url: HttpUrl = Field(default="https://claude.ai", env="CLAUDE_AI_URL")
    claude_api_baseurl: HttpUrl = Field(
        default="https://api.anthropic.com", env="CLAUDE_API_BASEURL"
    )

    # Cookies
    cookies: List[str] | str = Field(
        default_factory=list,
        env="COOKIES",
        description="Comma-separated list of Claude.ai cookies",
    )

    # Content processing
    custom_prompt: Optional[str] = Field(default=None, env="CUSTOM_PROMPT")
    use_real_roles: bool = Field(default=True, env="USE_REAL_ROLES")
    human_name: str = Field(default="Human", env="CUSTOM_HUMAN_NAME")
    assistant_name: str = Field(default="Assistant", env="CUSTOM_ASSISTANT_NAME")
    pad_tokens: List[str] | str = Field(default_factory=list, env="PAD_TOKENS")
    padtxt_length: int = Field(default=0, env="PADTXT_LENGTH")
    allow_external_images: bool = Field(
        default=False,
        env="ALLOW_EXTERNAL_IMAGES",
        description="Allow downloading images from external URLs",
    )

    # Request settings
    request_timeout: int = Field(default=60, env="REQUEST_TIMEOUT")
    request_retries: int = Field(default=3, env="REQUEST_RETRIES")
    request_retry_interval: int = Field(default=1, env="REQUEST_RETRY_INTERVAL")

    # Feature flags
    preserve_chats: bool = Field(default=False, env="PRESERVE_CHATS")

    # Logging
    log_level: str = Field(default="INFO", env="LOG_LEVEL")
    log_to_file: bool = Field(
        default=False, env="LOG_TO_FILE", description="Enable logging to file"
    )
    log_file_path: str = Field(
        default="logs/app.log", env="LOG_FILE_PATH", description="Log file path"
    )
    log_file_rotation: str = Field(
        default="10 MB",
        env="LOG_FILE_ROTATION",
        description="Log file rotation (e.g., '10 MB', '1 day', '1 week')",
    )
    log_file_retention: str = Field(
        default="7 days",
        env="LOG_FILE_RETENTION",
        description="Log file retention (e.g., '7 days', '1 month')",
    )
    log_file_compression: str = Field(
        default="zip",
        env="LOG_FILE_COMPRESSION",
        description="Log file compression format",
    )

    # Session management settings
    session_timeout: int = Field(
        default=300,
        env="SESSION_TIMEOUT",
        description="Session idle timeout in seconds",
    )
    session_cleanup_interval: int = Field(
        default=30,
        env="SESSION_CLEANUP_INTERVAL",
        description="Interval for cleaning up expired sessions in seconds",
    )
    max_sessions_per_cookie: int = Field(
        default=3,
        env="MAX_SESSIONS_PER_COOKIE",
        description="Maximum number of concurrent sessions per cookie",
    )

    # Account management settings
    account_task_interval: int = Field(
        default=60,
        env="ACCOUNT_TASK_INTERVAL",
        description="Interval for account management task in seconds",
    )

    # Tool call settings
    tool_call_timeout: int = Field(
        default=300,
        env="TOOL_CALL_TIMEOUT",
        description="Timeout for pending tool calls in seconds",
    )
    tool_call_cleanup_interval: int = Field(
        default=60,
        env="TOOL_CALL_CLEANUP_INTERVAL",
        description="Interval for cleaning up expired tool calls in seconds",
    )

    # Cache settings
    cache_timeout: int = Field(
        default=300,
        env="CACHE_TIMEOUT",
        description="Timeout for cache checkpoints in seconds (default: 5 minutes)",
    )
    cache_cleanup_interval: int = Field(
        default=60,
        env="CACHE_CLEANUP_INTERVAL",
        description="Interval for cleaning up expired cache checkpoints in seconds",
    )

    # Claude OAuth settings
    oauth_client_id: str = Field(
        default="9d1c250a-e61b-44d9-88ed-5944d1962f5e",
        env="OAUTH_CLIENT_ID",
        description="OAuth client ID for Claude authentication",
    )
    oauth_authorize_url: str = Field(
        default="https://claude.ai/v1/oauth/{organization_uuid}/authorize",
        env="OAUTH_AUTHORIZE_URL",
        description="OAuth authorization endpoint URL template",
    )
    oauth_token_url: str = Field(
        default="https://console.anthropic.com/v1/oauth/token",
        env="OAUTH_TOKEN_URL",
        description="OAuth token exchange endpoint URL",
    )
    oauth_redirect_uri: str = Field(
        default="https://console.anthropic.com/oauth/code/callback",
        env="OAUTH_REDIRECT_URI",
        description="OAuth redirect URI for authorization flow",
    )

    # Claude API Specific
    max_models: List[str] | str = Field(
        default_factory=list,
        env="MAX_MODELS",
        description="Comma-separated list of models that require max plan accounts",
    )

    @field_validator(
        "api_keys", "admin_api_keys", "cookies", "max_models", "pad_tokens"
    )
    def parse_comma_separated(cls, v: str | List[str]) -> List[str]:
        """Parse comma-separated string."""
        if isinstance(v, str):
            return [key.strip() for key in v.split(",") if key.strip()]
        return v


settings = Settings()
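The `parse_comma_separated` validator above backs every list-valued setting (`api_keys`, `cookies`, `max_models`, `pad_tokens`), so a single env var like `COOKIES="a, b,,c"` yields a clean list. A standalone sketch of the same splitting rule (no pydantic, for illustration only):

```python
from typing import List, Union


def parse_comma_separated(v: Union[str, List[str]]) -> List[str]:
    """Mirror of the Settings validator: split on commas, strip
    whitespace, and drop empty entries. Lists pass through unchanged."""
    if isinstance(v, str):
        return [item.strip() for item in v.split(",") if item.strip()]
    return v


# COOKIES="sk-a, sk-b,,sk-c" becomes three entries.
parse_comma_separated("sk-a, sk-b,,sk-c")  # -> ["sk-a", "sk-b", "sk-c"]
```

Because the validator accepts `str | List[str]`, values set programmatically as lists are left untouched while env-supplied strings are split.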


================================================
FILE: app/core/error_handler.py
================================================
from typing import Dict, Any
from fastapi import Request
from fastapi.responses import JSONResponse
from loguru import logger

from app.services.i18n import i18n_service
from app.core.exceptions import AppError


class ErrorHandler:
    """
    Centralized error handler for the application. Handles AppException.
    """

    @staticmethod
    def get_language_from_request(request: Request) -> str:
        """Extract language preference from request headers."""
        accept_language = request.headers.get("accept-language")
        return i18n_service.parse_accept_language(accept_language)

    @staticmethod
    def format_error_response(
        error_code: int, message: str, context: Dict[str, Any] | None = None
    ) -> Dict[str, Any]:
        """
        Format error response in standardized format.

        Args:
            error_code: 6-digit error code
            message: Localized error message
            context: Additional context information

        Returns:
            Formatted error response
        """
        response = {"detail": {"code": error_code, "message": message}}

        # Add context if provided and not empty
        if context:
            response["detail"]["context"] = context

        return response

    @staticmethod
    async def handle_app_exception(request: Request, exc: AppError) -> JSONResponse:
        """
        Handle AppError instances.

        Args:
            request: The FastAPI request object
            exc: The AppError instance

        Returns:
            JSONResponse with localized error message
        """
        language = ErrorHandler.get_language_from_request(request)

        # Get localized message
        message = i18n_service.get_message(
            message_key=exc.message_key, language=language, context=exc.context
        )

        # Format response
        response_data = ErrorHandler.format_error_response(
            error_code=exc.error_code,
            message=message,
            context=exc.context if exc.context else None,
        )

        # Log the error
        logger.warning(
            f"AppException: {exc.__class__.__name__} - "
            f"Code: {exc.error_code}, Message: {message}, "
            f"Context: {exc.context}"
        )

        return JSONResponse(status_code=exc.status_code, content=response_data)


# Exception handler functions for FastAPI
async def app_exception_handler(request: Request, exc: AppError) -> JSONResponse:
    """FastAPI exception handler for AppException."""
    return await ErrorHandler.handle_app_exception(request, exc)
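All handled errors share one JSON envelope: `{"detail": {"code": ..., "message": ..., "context": ...}}`, with `context` omitted when empty. A pure-Python sketch of `format_error_response` (no FastAPI imports; the sample code/message values are illustrative, not real responses) shows the shape clients should expect:

```python
from typing import Any, Dict, Optional


def format_error_response(
    error_code: int, message: str, context: Optional[Dict[str, Any]] = None
) -> Dict[str, Any]:
    """Standalone copy of ErrorHandler.format_error_response, for illustration."""
    response: Dict[str, Any] = {"detail": {"code": error_code, "message": message}}
    if context:  # empty dicts are omitted, matching the handler
        response["detail"]["context"] = context
    return response


format_error_response(401011, "Invalid API key")
# -> {"detail": {"code": 401011, "message": "Invalid API key"}}
```

Note that a falsy `context` (either `None` or `{}`) never appears in the payload, so clients can treat its presence as "extra detail available".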


================================================
FILE: app/core/exceptions.py
================================================
from datetime import datetime
from typing import Optional, Any, Dict


class AppError(Exception):
    """
    Base class for application-specific exceptions.
    """

    def __init__(
        self,
        error_code: int,
        message_key: str,
        status_code: int,
        context: Optional[Dict[str, Any]] = None,
        retryable: bool = False,
    ):
        self.error_code = error_code
        self.message_key = message_key
        self.status_code = status_code
        self.context = context if context is not None else {}
        self.retryable = retryable
        super().__init__(
            f"Error Code: {error_code}, Message Key: {message_key}, Context: {self.context}"
        )

    def __str__(self):
        return f"{self.__class__.__name__}(error_code={self.error_code}, message_key='{self.message_key}', status_code={self.status_code}, context={self.context})"


class InternalServerError(AppError):
    def __init__(self, context: Optional[Dict[str, Any]] = None):
        super().__init__(
            error_code=500000,
            message_key="global.internalServerError",
            status_code=500,
            context=context,
        )


class NoAPIKeyProvidedError(AppError):
    def __init__(self, context: Optional[Dict[str, Any]] = None):
        super().__init__(
            error_code=401010,
            message_key="global.noAPIKeyProvided",
            status_code=401,
            context=context,
        )


class InvalidAPIKeyError(AppError):
    def __init__(self, context: Optional[Dict[str, Any]] = None):
        super().__init__(
            error_code=401011,
            message_key="global.invalidAPIKey",
            status_code=401,
            context=context,
        )


class NoAccountsAvailableError(AppError):
    def __init__(self, context: Optional[Dict[str, Any]] = None):
        super().__init__(
            error_code=503100,
            message_key="accountManager.noAccountsAvailable",
            status_code=503,
            context=context,
            retryable=True,
        )


class ClaudeRateLimitedError(AppError):
    resets_at: datetime

    def __init__(self, resets_at: datetime, context: Optional[Dict[str, Any]] = None):
        self.resets_at = resets_at
        _context = context.copy() if context else {}
        _context["resets_at"] = resets_at.strftime("%Y-%m-%dT%H:%M:%SZ")
        super().__init__(
            error_code=429120,
            message_key="claudeClient.claudeRateLimited",
            status_code=429,
            context=_context,
            retryable=True,
        )


class CloudflareBlockedError(AppError):
    def __init__(self, context: Optional[Dict[str, Any]] = None):
        super().__init__(
            error_code=503121,
            message_key="claudeClient.cloudflareBlocked",
            status_code=503,
            context=context,
        )


class OrganizationDisabledError(AppError):
    def __init__(self, context: Optional[Dict[str, Any]] = None):
        super().__init__(
            error_code=400122,
            message_key="claudeClient.organizationDisabled",
            status_code=400,
            context=context,
            retryable=True,
        )


class InvalidModelNameError(AppError):
    def __init__(self, model_name: str, context: Optional[Dict[str, Any]] = None):
        _context = context.copy() if context else {}
        _context["model_name"] = model_name
        super().__init__(
            error_code=400123,
            message_key="claudeClient.invalidModelName",
            status_code=400,
            context=_context,
        )


class ClaudeAuthenticationError(AppError):
    def __init__(self, context: Optional[Dict[str, Any]] = None):
        super().__init__(
            error_code=400124,
            message_key="claudeClient.authenticationError",
            status_code=400,
            context=context,
        )


class ClaudeHttpError(AppError):
    def __init__(
        self,
        url,
        status_code: int,
        error_type: str,
        error_message: Any,
        context: Optional[Dict[str, Any]] = None,
    ):
        _context = context.copy() if context else {}
        _context.update({
            "url": url,
            "status_code": status_code,
            "error_type": error_type,
            "error_message": error_message,
        })
        super().__init__(
            error_code=503130,
            message_key="claudeClient.httpError",
            status_code=status_code,
            context=_context,
            retryable=True,
        )


class NoValidMessagesError(AppError):
    def __init__(self, context: Optional[Dict[str, Any]] = None):
        super().__init__(
            error_code=400140,
            message_key="messageProcessor.noValidMessages",
            status_code=400,
            context=context,
        )


class ExternalImageDownloadError(AppError):
    def __init__(self, url: str, context: Optional[Dict[str, Any]] = None):
        _context = context.copy() if context else {}
        _context.update({"url": url})
        super().__init__(
            error_code=503141,
            message_key="messageProcessor.externalImageDownloadError",
            status_code=503,
            context=_context,
        )


class ExternalImageNotAllowedError(AppError):
    def __init__(self, url: str, context: Optional[Dict[str, Any]] = None):
        _context = context.copy() if context else {}
        _context.update({"url": url})
        super().__init__(
            error_code=400142,
            message_key="messageProcessor.externalImageNotAllowed",
            status_code=400,
            context=_context,
        )


class NoResponseError(AppError):
    def __init__(self, context: Optional[Dict[str, Any]] = None):
        super().__init__(
            error_code=503160,
            message_key="pipeline.noResponse",
            status_code=503,
            context=context,
        )


class OAuthExchangeError(AppError):
    def __init__(self, reason: str, context: Optional[Dict[str, Any]] = None):
        _context = context.copy() if context else {}
        _context["reason"] = reason or "Unknown"
        super().__init__(
            error_code=400180,
            message_key="oauthService.oauthExchangeError",
            status_code=400,
            context=_context,
        )


class OrganizationInfoError(AppError):
    def __init__(self, reason: str, context: Optional[Dict[str, Any]] = None):
        _context = context.copy() if context else {}
        _context["reason"] = reason or "Unknown"
        super().__init__(
            error_code=503181,
            message_key="oauthService.organizationInfoError",
            status_code=503,
            context=_context,
        )


class CookieAuthorizationError(AppError):
    def __init__(self, reason: str, context: Optional[Dict[str, Any]] = None):
        _context = context.copy() if context else {}
        _context["reason"] = reason or "Unknown"
        super().__init__(
            error_code=400182,
            message_key="oauthService.cookieAuthorizationError",
            status_code=400,
            context=_context,
        )


class OAuthAuthenticationNotAllowedError(AppError):
    def __init__(self, context: Optional[Dict[str, Any]] = None):
        super().__init__(
            error_code=400183,
            message_key="oauthService.oauthAuthenticationNotAllowed",
            status_code=400,
            context=context,
        )


class ClaudeStreamingError(AppError):
    def __init__(
        self,
        error_type: str,
        error_message: str,
        context: Optional[Dict[str, Any]] = None,
    ):
        _context = context.copy() if context else {}
        _context.update({
            "error_type": error_type,
            "error_message": error_message,
        })
        super().__init__(
            error_code=503500,
            message_key="processors.nonStreamingResponseProcessor.streamingError",
            status_code=503,
            context=_context,
            retryable=True,
        )


class NoMessageError(AppError):
    def __init__(self, context: Optional[Dict[str, Any]] = None):
        super().__init__(
            error_code=503501,
            message_key="processors.nonStreamingResponseProcessor.noMessage",
            status_code=503,
            context=context,
            retryable=True,
        )
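Reading the subclasses above, the 6-digit error codes appear to embed the HTTP status in their first three digits (503100 → 503, 429120 → 429, 400183 → 400), with the remaining digits grouping errors by feature area. A minimal standalone copy of the base class — illustration only, not the app's importable code — makes the convention concrete:

```python
from typing import Any, Dict, Optional


class AppError(Exception):
    """Minimal standalone copy of the base class, for illustration."""

    def __init__(
        self,
        error_code: int,
        message_key: str,
        status_code: int,
        context: Optional[Dict[str, Any]] = None,
        retryable: bool = False,
    ):
        self.error_code = error_code
        self.message_key = message_key
        self.status_code = status_code
        self.context = context if context is not None else {}
        self.retryable = retryable
        super().__init__(f"Error Code: {error_code}, Message Key: {message_key}")


class NoAccountsAvailableError(AppError):
    # 503100: leading "503" mirrors the HTTP status; retryable so the
    # caller may try again once an account frees up.
    def __init__(self, context: Optional[Dict[str, Any]] = None):
        super().__init__(
            error_code=503100,
            message_key="accountManager.noAccountsAvailable",
            status_code=503,
            context=context,
            retryable=True,
        )


err = NoAccountsAvailableError()
# First three digits of the code mirror the HTTP status.
assert err.error_code // 1000 == err.status_code
```

Every subclass in this file follows that leading-digits pattern, which keeps codes self-describing even when the HTTP layer is out of view (e.g. in logs).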


================================================
FILE: app/core/external/claude_client.py
================================================
import json
from loguru import logger
from datetime import datetime, timezone
from typing import Optional, Dict, Any
from urllib.parse import urljoin
from uuid import uuid4

from app.core.http_client import (
    create_session,
    Response,
    AsyncSession,
)

from app.core.config import settings
from app.core.exceptions import (
    ClaudeAuthenticationError,
    ClaudeRateLimitedError,
    CloudflareBlockedError,
    OrganizationDisabledError,
    ClaudeHttpError,
)
from app.models.internal import UploadResponse
from app.core.account import Account


class ClaudeWebClient:
    """Client for interacting with Claude.ai."""

    def __init__(self, account: Account):
        self.account = account
        self.session: Optional[AsyncSession] = None
        self.endpoint = settings.claude_ai_url.encoded_string().rstrip("/")

    async def initialize(self):
        """Initialize the client session."""
        self.session = create_session(
            timeout=settings.request_timeout,
            impersonate="chrome",
            proxy=settings.proxy_url,
            follow_redirects=False,
        )

    async def cleanup(self):
        """Clean up resources."""
        if self.session:
            await self.session.close()

    def _build_headers(
        self, cookie: str, conv_uuid: Optional[str] = None
    ) -> Dict[str, str]:
        """Build request headers."""
        headers = {
            "Accept": "text/event-stream",
            "Accept-Language": "en-US,en;q=0.9",
            "Cache-Control": "no-cache",
            "Cookie": cookie,
            "Origin": self.endpoint,
            "Referer": f"{self.endpoint}/new",
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
        }

        if conv_uuid:
            headers["Referer"] = f"{self.endpoint}/chat/{conv_uuid}"

        return headers

    async def _request(
        self,
        method: str,
        url: str,
        conv_uuid: Optional[str] = None,
        stream: bool = False,
        **kwargs,
    ) -> Response:
        """Make HTTP request with error handling."""
        if not self.session:
            await self.initialize()

        with self.account as account:
            cookie_value = account.cookie_value
            headers = self._build_headers(cookie_value, conv_uuid)
            kwargs["headers"] = {**headers, **kwargs.get("headers", {})}
            response: Response = await self.session.request(
                method=method, url=url, stream=stream, **kwargs
            )

            if response.status_code < 300:
                return response

            if response.status_code == 302:
                raise CloudflareBlockedError()

            try:
                error_data = await response.json()
                error_body = error_data.get("error", {})
                error_message = error_body.get("message", "Unknown error")
                error_type = error_body.get("type", "unknown")
            except Exception:
                error_message = f"HTTP {response.status_code} error with empty response"
                error_type = "empty_response"

            if (
                response.status_code == 400
                and error_message == "This organization has been disabled."
            ):
                raise OrganizationDisabledError()

            if response.status_code == 403 and error_message == "Invalid authorization":
                raise ClaudeAuthenticationError()

            if response.status_code == 429:
                try:
                    error_message_data = json.loads(error_message)
                    resets_at = error_message_data.get("resetsAt")
                    if resets_at and isinstance(resets_at, int):
                        reset_time = datetime.fromtimestamp(resets_at, tz=timezone.utc)
                        logger.error(f"Rate limit exceeded, resets at: {reset_time}")
                        raise ClaudeRateLimitedError(resets_at=reset_time)
                except json.JSONDecodeError:
                    pass

            raise ClaudeHttpError(
                url=url,
                status_code=response.status_code,
                error_type=error_type,
                error_message=error_message,
            )

    async def create_conversation(self) -> str:
        """Create a new conversation."""
        url = urljoin(
            self.endpoint,
            f"/api/organizations/{self.account.organization_uuid}/chat_conversations",
        )

        uuid = uuid4()

        payload = {
            "name": "Hello World!",
            "uuid": str(uuid),
        }
        response = await self._request("POST", url, json=payload)

        data = await response.json()
        conv_uuid = data.get("uuid")
        paprika_mode = data.get("settings", {}).get("paprika_mode")
        logger.info(f"Created conversation: {conv_uuid}")

        return conv_uuid, paprika_mode

    async def set_paprika_mode(self, conv_uuid: str, mode: Optional[str]) -> None:
        """Set the conversation mode."""
        url = urljoin(
            self.endpoint,
            f"/api/organizations/{self.account.organization_uuid}/chat_conversations/{conv_uuid}",
        )
        payload = {"settings": {"paprika_mode": mode}}
        await self._request("PUT", url, json=payload)
        logger.debug(f"Set conversation {conv_uuid} mode: {mode}")

    async def upload_file(
        self, file_data: bytes, filename: str, content_type: str
    ) -> str:
        """Upload a file and return file UUID."""
        url = urljoin(self.endpoint, f"/api/{self.account.organization_uuid}/upload")
        files = {"file": (filename, file_data, content_type)}

        response = await self._request("POST", url, files=files)

        data = UploadResponse.model_validate(await response.json())
        return data.file_uuid

    async def send_message(self, payload: Dict[str, Any], conv_uuid: str) -> Response:
        """Send a message and return the response."""
        url = urljoin(
            self.endpoint,
            f"/api/organizations/{self.account.organization_uuid}/chat_conversations/{conv_uuid}/completion",
        )

        headers = {
            "Accept": "text/event-stream",
        }

        response = await self._request(
            "POST", url, conv_uuid=conv_uuid, json=payload, headers=headers, stream=True
        )

        return response

    async def send_tool_result(self, payload: Dict[str, Any], conv_uuid: str):
        """Send tool result to Claude.ai."""
        url = urljoin(
            self.endpoint,
            f"/api/organizations/{self.account.organization_uuid}/chat_conversations/{conv_uuid}/tool_result",
        )

        await self._request("POST", url, conv_uuid=conv_uuid, json=payload)

    async def delete_conversation(self, conv_uuid: str) -> None:
        """Delete a conversation."""
        if not conv_uuid:
            return

        url = urljoin(
            self.endpoint,
            f"/api/organizations/{self.account.organization_uuid}/chat_conversations/{conv_uuid}",
        )
        try:
            await self._request("DELETE", url, conv_uuid=conv_uuid)
            logger.info(f"Deleted conversation: {conv_uuid}")
        except Exception as e:
            logger.warning(f"Failed to delete conversation: {e}")


================================================
FILE: app/core/http_client.py
================================================
"""HTTP client abstraction layer that supports both curl_cffi and httpx."""

from abc import ABC, abstractmethod
from typing import Optional, Dict, Any, Tuple, AsyncIterator
from tenacity import (
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_fixed,
)
from loguru import logger
import json

from app.core.config import settings
from app.utils.retry import log_before_sleep

try:
    import rnet
    from rnet import Client as RnetClient, Method as RnetMethod
    from rnet.exceptions import RequestError as RnetRequestError

    RNET_AVAILABLE = True
except ImportError:
    RNET_AVAILABLE = False

try:
    from curl_cffi.requests import (
        AsyncSession as CurlAsyncSession,
        Response as CurlResponse,
    )
    from curl_cffi.requests.exceptions import RequestException as CurlRequestException
    import curl_cffi

    CURL_CFFI_AVAILABLE = True
except ImportError:
    CURL_CFFI_AVAILABLE = False

# Always try to import httpx as fallback
try:
    import httpx

    HTTPX_AVAILABLE = True
except ImportError:
    HTTPX_AVAILABLE = False

if not RNET_AVAILABLE and not CURL_CFFI_AVAILABLE and not HTTPX_AVAILABLE:
    raise ImportError(
        "Neither rnet, curl_cffi nor httpx is installed. Please install at least one of them."
    )


class Response(ABC):
    """Abstract response class."""

    @property
    @abstractmethod
    def status_code(self) -> int:
        """Get response status code."""
        pass

    @abstractmethod
    async def json(self) -> Any:
        """Parse response as JSON."""
        pass

    @property
    @abstractmethod
    def headers(self) -> Dict[str, str]:
        """Get response headers."""
        pass

    @abstractmethod
    def aiter_bytes(self, chunk_size: Optional[int] = None) -> AsyncIterator[bytes]:
        """Iterate over response bytes."""
        pass


class CurlResponseWrapper(Response):
    """curl_cffi response wrapper."""

    def __init__(self, response: "CurlResponse", stream: bool = False):
        self._response = response
        self._stream = stream

    @property
    def status_code(self) -> int:
        return self._response.status_code

    async def json(self) -> Any:
        if self._stream:
            # Accumulate raw bytes and decode once, so multi-byte UTF-8
            # sequences split across chunk boundaries are handled correctly.
            content = b""
            async for chunk in self._response.aiter_content():
                content += chunk
            return json.loads(content)
        else:
            return self._response.json()

    @property
    def headers(self) -> Dict[str, str]:
        return self._response.headers

    async def aiter_bytes(
        self, chunk_size: Optional[int] = None
    ) -> AsyncIterator[bytes]:
        async for chunk in self._response.aiter_content(chunk_size):
            yield chunk
        await self._response.aclose()


class HttpxResponse(Response):
    """httpx response wrapper."""

    def __init__(self, response: httpx.Response):
        self._response = response

    @property
    def status_code(self) -> int:
        return self._response.status_code

    async def json(self) -> Any:
        await self._response.aread()
        return self._response.json()

    @property
    def headers(self) -> Dict[str, str]:
        return dict(self._response.headers)

    async def aiter_bytes(
        self, chunk_size: Optional[int] = None
    ) -> AsyncIterator[bytes]:
        async for chunk in self._response.aiter_bytes(chunk_size):
            yield chunk
        await self._response.aclose()


if RNET_AVAILABLE:

    class RnetResponse(Response):
        """rnet response wrapper."""

        def __init__(self, response: "rnet.Response"):
            self._response = response

        @property
        def status_code(self) -> int:
            return self._response.status.as_int()

        async def json(self) -> Any:
            return await self._response.json()

        @property
        def headers(self) -> Dict[str, str]:
            headers_dict = {}
            for key, value in self._response.headers:
                key_str = key.decode("utf-8") if isinstance(key, bytes) else key
                value_str = value.decode("utf-8") if isinstance(value, bytes) else value
                headers_dict[key_str] = value_str
            return headers_dict

        async def aiter_bytes(
            self, chunk_size: Optional[int] = None
        ) -> AsyncIterator[bytes]:
            async with self._response.stream() as streamer:
                async for chunk in streamer:
                    yield chunk
            await self._response.close()


class AsyncSession(ABC):
    """Abstract async session class."""

    @abstractmethod
    async def request(
        self,
        method: str,
        url: str,
        headers: Optional[Dict[str, str]] = None,
        json: Optional[Any] = None,
        data: Optional[Any] = None,
        stream: bool = False,
        **kwargs,
    ) -> Response:
        """Make an HTTP request."""
        pass

    @abstractmethod
    async def close(self):
        """Close the session."""
        pass

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        await self.close()


if CURL_CFFI_AVAILABLE:

    class CurlAsyncSessionWrapper(AsyncSession):
        """curl_cffi async session wrapper."""

        def __init__(
            self,
            timeout: int = settings.request_timeout,
            impersonate: str = "chrome",
            proxy: Optional[str] = settings.proxy_url,
            follow_redirects: bool = True,
        ):
            self._session = CurlAsyncSession(
                timeout=timeout,
                impersonate=impersonate,
                proxy=proxy,
                allow_redirects=follow_redirects,
            )

        def process_files(self, files: dict) -> curl_cffi.CurlMime:
            # Create multipart form
            multipart = curl_cffi.CurlMime()

            # Handle different file formats
            if isinstance(files, dict):
                for field_name, file_info in files.items():
                    if isinstance(file_info, tuple):
                        # Format: {"field": (filename, data, content_type)}
                        if len(file_info) >= 3:
                            filename, file_data, content_type = file_info[:3]
                        elif len(file_info) == 2:
                            filename, file_data = file_info
                            content_type = "application/octet-stream"
                        else:
                            raise ValueError(
                                f"Invalid file tuple format for field {field_name}"
                            )

                        multipart.addpart(
                            name=field_name,
                            content_type=content_type,
                            filename=filename,
                            data=file_data,
                        )
                    else:
                        # Simple format: {"field": data}
                        multipart.addpart(
                            name=field_name,
                            data=file_info,
                        )

            return multipart

        @retry(
            stop=stop_after_attempt(settings.request_retries),
            wait=wait_fixed(settings.request_retry_interval),
            retry=retry_if_exception_type(CurlRequestException),
            before_sleep=log_before_sleep,
            reraise=True,
        )
        async def request(
            self,
            method: str,
            url: str,
            headers: Optional[Dict[str, str]] = None,
            json: Optional[Any] = None,
            data: Optional[Any] = None,
            stream: bool = False,
            **kwargs,
        ) -> Response:
            logger.debug(f"Making {method} request to {url}")

            # Handle file uploads - convert files parameter to multipart
            files = kwargs.pop("files", None)

            multipart = None

            if files:
                multipart = self.process_files(files)
                kwargs["multipart"] = multipart

            try:
                response = await self._session.request(
                    method=method,
                    url=url,
                    headers=headers,
                    json=json,
                    data=data,
                    stream=stream,
                    **kwargs,
                )
                return CurlResponseWrapper(response, stream=stream)
            finally:
                if multipart:
                    multipart.close()

        async def close(self):
            await self._session.close()


if RNET_AVAILABLE:

    class RnetAsyncSession(AsyncSession):
        """rnet async session wrapper."""

        def __init__(
            self,
            timeout: int = settings.request_timeout,
            impersonate: str = "chrome",
            proxy: Optional[str] = settings.proxy_url,
            follow_redirects: bool = True,
        ):
            # Map impersonate string to rnet Emulation enum
            emulation_map = {
                "chrome": rnet.Emulation.Chrome142,
                "firefox": rnet.Emulation.Firefox136,
                "safari": rnet.Emulation.Safari18,
                "edge": rnet.Emulation.Edge134,
            }

            # Use Chrome as default if not found in map
            rnet_emulation = emulation_map.get(
                impersonate.lower(), rnet.Emulation.Chrome142
            )

            # Create proxy list if proxy is provided
            proxies = None
            if proxy:
                proxies = [rnet.Proxy.all(proxy)]

            self._client = RnetClient(
                emulation=rnet_emulation,
                timeout=timeout,
                proxies=proxies,
                allow_redirects=follow_redirects,
            )

        @retry(
            stop=stop_after_attempt(settings.request_retries),
            wait=wait_fixed(settings.request_retry_interval),
            retry=retry_if_exception_type(RnetRequestError),
            before_sleep=log_before_sleep,
            reraise=True,
        )
        async def request(
            self,
            method: str,
            url: str,
            headers: Optional[Dict[str, str]] = None,
            json: Optional[Any] = None,
            data: Optional[Any] = None,
            stream: bool = False,
            **kwargs,
        ) -> Response:
            logger.debug(f"Making {method} request to {url}")

            # Map method string to rnet Method enum
            method_map = {
                "GET": RnetMethod.GET,
                "POST": RnetMethod.POST,
                "PUT": RnetMethod.PUT,
                "DELETE": RnetMethod.DELETE,
                "PATCH": RnetMethod.PATCH,
                "HEAD": RnetMethod.HEAD,
                "OPTIONS": RnetMethod.OPTIONS,
                "TRACE": RnetMethod.TRACE,
            }

            rnet_method = method_map.get(method.upper(), RnetMethod.GET)

            # Handle file uploads - convert files parameter to multipart
            files = kwargs.pop("files", None)
            multipart = None

            if files:
                # Convert files dict to rnet Multipart
                parts = []
                for field_name, file_info in files.items():
                    if isinstance(file_info, tuple):
                        # Format: {"field": (filename, data, content_type)}
                        if len(file_info) >= 3:
                            filename, file_data, content_type = file_info[:3]
                        elif len(file_info) == 2:
                            filename, file_data = file_info
                            content_type = "application/octet-stream"
                        else:
                            raise ValueError(
                                f"Invalid file tuple format for field {field_name}"
                            )

                        parts.append(
                            rnet.Part(
                                name=field_name,
                                value=file_data,
                                filename=filename,
                                mime=content_type,
                            )
                        )
                    else:
                        # Simple format: {"field": data}
                        parts.append(rnet.Part(name=field_name, value=file_info))

                multipart = rnet.Multipart(*parts)
                kwargs["multipart"] = multipart

            request_kwargs = {}
            if headers:
                request_kwargs["headers"] = headers
            if json is not None:
                request_kwargs["json"] = json
            elif data is not None:
                # rnet uses 'form' for form data, 'body' for raw data
                if isinstance(data, (dict, list)):
                    request_kwargs["form"] = (
                        list(data.items()) if isinstance(data, dict) else data
                    )
                else:
                    request_kwargs["body"] = data

            request_kwargs.update(kwargs)

            response = await self._client.request(
                method=rnet_method,
                url=url,
                **request_kwargs,
            )

            return RnetResponse(response)

        async def close(self):
            # rnet Client doesn't have an explicit close method
            # The connection pooling is handled internally
            pass
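The tuple handling in `RnetAsyncSession.request` accepts requests-style `files` entries in three shapes. A hypothetical standalone sketch of that normalization, with plain dicts standing in for `rnet.Part` arguments:

```python
# Standalone sketch of the `files` normalization performed in
# RnetAsyncSession.request; dicts stand in for rnet.Part keyword arguments.
from typing import Any, Dict


def normalize_file_part(field_name: str, file_info: Any) -> Dict[str, Any]:
    """Normalize one entry of a requests-style `files` dict."""
    if isinstance(file_info, tuple):
        if len(file_info) >= 3:
            # Format: (filename, data, content_type)
            filename, data, content_type = file_info[:3]
        elif len(file_info) == 2:
            # Format: (filename, data); MIME type defaults to octet-stream
            filename, data = file_info
            content_type = "application/octet-stream"
        else:
            raise ValueError(f"Invalid file tuple format for field {field_name}")
        return {
            "name": field_name,
            "value": data,
            "filename": filename,
            "mime": content_type,
        }
    # Bare value: a simple form field with no filename or MIME type
    return {"name": field_name, "value": file_info}
```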


if HTTPX_AVAILABLE:

    class HttpxAsyncSession(AsyncSession):
        """httpx async session wrapper."""

        def __init__(
            self,
            timeout: int = settings.request_timeout,
            impersonate: str = "chrome",
            proxy: Optional[str] = settings.proxy_url,
            follow_redirects: bool = True,
        ):
            self._client = httpx.AsyncClient(
                timeout=timeout,
                proxy=proxy,
                follow_redirects=follow_redirects,
            )

        async def stream(
            self,
            method: str,
            url: str,
            headers: Optional[Dict[str, str]] = None,
            json: Optional[Any] = None,
            data: Optional[Any] = None,
            **kwargs,
        ) -> Response:
            """
            Alternative to `httpx.request()` that streams the response body
            instead of loading it into memory at once.

            **Parameters**: See `httpx.request`.

            See also: [Streaming Responses][0]

            [0]: /quickstart#streaming-responses
            """
            request = self._client.build_request(
                method=method,
                url=url,
                data=data,
                json=json,
                headers=headers,
                **kwargs,
            )
            response = await self._client.send(
                request=request,
                stream=True,
            )

            return response

        @retry(
            stop=stop_after_attempt(settings.request_retries),
            wait=wait_fixed(settings.request_retry_interval),
            retry=retry_if_exception_type(httpx.RequestError),
            before_sleep=log_before_sleep,
            reraise=True,
        )
        async def request(
            self,
            method: str,
            url: str,
            headers: Optional[Dict[str, str]] = None,
            json: Optional[Any] = None,
            data: Optional[Any] = None,
            stream: bool = False,
            **kwargs,
        ) -> Response:
            logger.debug(f"Making {method} request to {url}")
            if stream:
                response = await self.stream(
                    method=method,
                    url=url,
                    headers=headers,
                    json=json,
                    data=data,
                    **kwargs,
                )
            else:
                response = await self._client.request(
                    method=method,
                    url=url,
                    headers=headers,
                    json=json,
                    data=data,
                    **kwargs,
                )

            return HttpxResponse(response)

        async def close(self):
            await self._client.aclose()


def create_session(
    timeout: int = settings.request_timeout,
    impersonate: str = "chrome",
    proxy: Optional[str] = settings.proxy_url,
    follow_redirects: bool = True,
) -> AsyncSession:
    """Create an async session using the available HTTP client.

    Prefers rnet if available, then curl_cffi, and finally falls back to httpx.
    """
    if RNET_AVAILABLE:
        logger.debug("Using rnet as HTTP client")
        return RnetAsyncSession(
            timeout=timeout,
            impersonate=impersonate,
            proxy=proxy,
            follow_redirects=follow_redirects,
        )
    elif CURL_CFFI_AVAILABLE:
        logger.debug("Using curl_cffi as HTTP client")
        return CurlAsyncSessionWrapper(
            timeout=timeout,
            impersonate=impersonate,
            proxy=proxy,
            follow_redirects=follow_redirects,
        )
    else:
        logger.debug("Using httpx as HTTP client (rnet and curl_cffi not available)")
        return HttpxAsyncSession(
            timeout=timeout,
            impersonate=impersonate,
            proxy=proxy,
            follow_redirects=follow_redirects,
        )


def create_plain_session(
    timeout: int = settings.request_timeout,
    proxy: Optional[str] = settings.proxy_url,
    follow_redirects: bool = True,
) -> AsyncSession:
    """Create a plain HTTP session WITHOUT browser fingerprinting/impersonation.

    Used for API endpoints (e.g. OAuth token exchange at console.anthropic.com)
    that reject requests containing browser-injected headers (User-Agent, Origin,
    TLS fingerprints) with 429 errors.

    Prefers httpx (zero header injection). Falls back to curl_cffi or rnet
    without impersonation if httpx is unavailable.
    """
    if HTTPX_AVAILABLE:
        logger.debug("Using httpx as plain HTTP client")
        return HttpxAsyncSession(
            timeout=timeout,
            proxy=proxy,
            follow_redirects=follow_redirects,
        )
    elif CURL_CFFI_AVAILABLE:
        logger.debug("Using curl_cffi (no impersonation) as plain HTTP client")
        return CurlAsyncSessionWrapper(
            timeout=timeout,
            impersonate=None,
            proxy=proxy,
            follow_redirects=follow_redirects,
        )
    else:
        logger.debug("Using rnet (no impersonation) as plain HTTP client")
        return RnetAsyncSession(
            timeout=timeout,
            impersonate=None,
            proxy=proxy,
            follow_redirects=follow_redirects,
        )


async def download_image(url: str, timeout: int = 30) -> Tuple[bytes, str]:
    """Download an image from a URL and return its content and content type.

    Uses the unified session interface, so it works with whichever HTTP
    client backend (rnet, curl_cffi, or httpx) is available.
    """
    async with create_session(timeout=timeout) as session:
        response = await session.request("GET", url)
        content_type = response.headers.get("content-type", "image/jpeg")

        # Collect chunks and join once to avoid quadratic bytes concatenation
        chunks = []
        async for chunk in response.aiter_bytes():
            chunks.append(chunk)

        return b"".join(chunks), content_type


# Export the appropriate exception class
if RNET_AVAILABLE:
    RequestException = RnetRequestError
elif CURL_CFFI_AVAILABLE:
    RequestException = CurlRequestException
else:
    RequestException = httpx.RequestError
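`create_session` and `create_plain_session` encode opposite backend preferences: fingerprinting sessions want rnet first, while plain sessions want httpx first. A minimal sketch of that selection logic, with booleans standing in for the module's `RNET_AVAILABLE` / `CURL_CFFI_AVAILABLE` / `HTTPX_AVAILABLE` flags (function name is illustrative, not part of the module):

```python
# Hypothetical sketch of the backend selection performed by create_session
# and create_plain_session above.
def pick_backend(rnet_ok: bool, curl_ok: bool, httpx_ok: bool, plain: bool = False) -> str:
    if plain:
        # Plain sessions avoid browser impersonation, so httpx is preferred.
        if httpx_ok:
            return "httpx"
        return "curl_cffi" if curl_ok else "rnet"
    # Fingerprinting sessions prefer rnet, then curl_cffi, then httpx.
    if rnet_ok:
        return "rnet"
    return "curl_cffi" if curl_ok else "httpx"
```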


================================================
FILE: app/core/static.py
================================================
from fastapi import FastAPI, HTTPException
from fastapi.responses import FileResponse
from fastapi.staticfiles import StaticFiles
from loguru import logger

from app.core.config import settings


def register_static_routes(app: FastAPI):
    """Register static file routes for the application."""

    if settings.static_folder.exists():
        app.mount(
            "/assets",
            StaticFiles(directory=str(settings.static_folder / "assets")),
            name="assets",
        )

        # Serve index.html for SPA routes
        @app.get("/{full_path:path}")
        async def serve_spa(full_path: str):
            """Serve index.html for all non-API routes (SPA support)."""
            index_path = settings.static_folder / "index.html"
            if index_path.exists():
                return FileResponse(str(index_path))
            raise HTTPException(status_code=404, detail="Frontend not built")
    else:
        logger.warning(
            "Static files directory not found. Run 'pnpm build' in the front directory to build the frontend."
        )


================================================
FILE: app/dependencies/__init__.py
================================================


================================================
FILE: app/dependencies/auth.py
================================================
from typing import Optional, Annotated
from loguru import logger
from fastapi import Depends, Header
import secrets

from app.core.config import settings
from app.core.exceptions import InvalidAPIKeyError

_temp_admin_api_key: Optional[str] = None

if not settings.admin_api_keys:
    _temp_admin_api_key = f"sk-admin-{secrets.token_urlsafe(32)}"
    logger.warning(
        f"No admin API keys configured. Generated temporary admin API key: {_temp_admin_api_key}"
    )
    logger.warning(
        "This is a temporary key and will not be saved. Please configure admin API keys in settings."
    )


async def get_api_key(
    x_api_key: Annotated[Optional[str], Header()] = None,
    authorization: Annotated[Optional[str], Header()] = None,
) -> str:
    # Check X-API-Key header
    api_key = x_api_key

    # Check Authorization header
    if not api_key and authorization:
        if authorization.startswith("Bearer "):
            api_key = authorization[7:]

    if not api_key:
        raise InvalidAPIKeyError()

    return api_key


APIKeyDep = Annotated[str, Depends(get_api_key)]


async def verify_api_key(
    api_key: APIKeyDep,
) -> str:
    # Verify against configured keys (plus the temporary admin key, if any)
    valid_keys = settings.api_keys + settings.admin_api_keys
    if _temp_admin_api_key:
        valid_keys = valid_keys + [_temp_admin_api_key]

    if not valid_keys:
        logger.error("No API keys configured. Please configure at least one API key.")
        raise InvalidAPIKeyError()

    if api_key not in valid_keys:
        raise InvalidAPIKeyError()

    return api_key


AuthDep = Annotated[str, Depends(verify_api_key)]


async def verify_admin_api_key(
    api_key: APIKeyDep,
) -> str:
    # Verify against configured admin keys (plus the temporary admin key, if any)
    valid_keys = list(settings.admin_api_keys)
    if _temp_admin_api_key:
        valid_keys.append(_temp_admin_api_key)

    if not valid_keys:
        logger.error(
            "No admin API keys configured. Please configure at least one admin API key."
        )
        raise InvalidAPIKeyError()

    if api_key not in valid_keys:
        raise InvalidAPIKeyError()

    return api_key


AdminAuthDep = Annotated[str, Depends(verify_admin_api_key)]
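`get_api_key` gives the `X-API-Key` header precedence over a `Bearer` token in `Authorization`. A standalone sketch of that precedence rule (plain function, no FastAPI wiring):

```python
# Sketch of the header precedence implemented by get_api_key above:
# X-API-Key wins; otherwise strip a "Bearer " prefix from Authorization.
from typing import Optional


def extract_api_key(
    x_api_key: Optional[str], authorization: Optional[str]
) -> Optional[str]:
    """Return the API key from the headers, or None if absent."""
    if x_api_key:
        return x_api_key
    if authorization and authorization.startswith("Bearer "):
        return authorization[len("Bearer "):]
    return None
```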


================================================
FILE: app/locales/en.json
================================================
{
  "global": {
    "internalServerError": "An internal server error occurred. Please try again later.",
    "noAPIKeyProvided": "No API key provided. Please include an API key in the request.",
    "invalidAPIKey": "Invalid API key. Please check your API key and try again."
  },
  "accountManager": {
    "noAccountsAvailable": "No accounts are currently available. Please try again later."
  },
  "oauthService": {
    "oauthExchangeError": "Failed to exchange authorization code for tokens.",
    "organizationInfoError": "Failed to get organization Info: {reason}",
    "cookieAuthorizationError": "Failed to authorize with cookie: {reason}",
    "oauthAuthenticationNotAllowed": "OAuth authentication is not allowed for this organization. Only Pro and Max accounts support OAuth authentication."
  },
  "claudeClient": {
    "claudeRateLimited": "Claude AI rate limit exceeded. Please try again after {resets_at}.",
    "cloudflareBlocked": "Request blocked by Cloudflare. Please check your IP address.",
    "organizationDisabled": "Your Claude AI account has been disabled.",
    "httpError": "HTTP error occurred when calling Claude AI: {error_type} - {error_message} (Status: {status_code})",
    "invalidModelName": "Invalid model name provided. Please ensure you have access to model {model_name}.",
    "authenticationError": "Authentication error. Please check your Claude Cookie or OAuth credentials, and ensure you have installed the curl dependency and are not in a Termux environment."
  },
  "messageProcessor": {
    "noValidMessages": "No valid messages found in the request.",
    "externalImageDownloadError": "Failed to download external image from: {url}",
    "externalImageNotAllowed": "External images are not allowed: {url}"
  },
  "pipeline": {
    "noResponse": "No response received from the service. Please try again."
  },
  "processors": {
    "nonStreamingResponseProcessor": {
      "streamingError": "Streaming error occurred: {error_type} - {error_message}",
      "noMessage": "No message received in the response."
    }
  }
}


================================================
FILE: app/locales/zh.json
================================================
{
  "global": {
    "internalServerError": "服务器内部错误。请稍后重试。",
    "noAPIKeyProvided": "未提供 API 密钥。请在请求中包含 API 密钥。",
    "invalidAPIKey": "无效的 API 密钥。请检查您的 API 密钥并重试。"
  },
  "accountManager": {
    "noAccountsAvailable": "当前没有可用的账户。请稍后重试。"
  },
  "oauthService": {
    "oauthExchangeError": "无法将授权代码兑换为令牌:{reason}",
    "organizationInfoError": "无法获取组织信息:{reason}",
    "cookieAuthorizationError": "无法使用 Cookie 进行授权:{reason}",
    "oauthAuthenticationNotAllowed": "此组织不允许 OAuth 认证。仅有 Pro 和 Max 账户支持 OAuth 认证。"
  },
  "claudeClient": {
    "claudeRateLimited": "Claude API 速率限制已超出。请在 {resets_at} 后重试。",
    "cloudflareBlocked": "请求被 Cloudflare 阻止。请检查您的连接。",
    "organizationDisabled": "您的 Claude AI 账户已被禁用。",
    "httpError": "请求 Claude AI 时发生 HTTP 错误:{error_type} - {error_message}(状态码:{status_code})",
    "invalidModelName": "提供的模型名称无效。请确保您有权访问 {model_name} 模型。",
    "authenticationError": "身份验证错误。请检查您的 Claude Cookie 或 OAuth 凭证;并确保安装了 curl 依赖且不在 Termux 环境下。"
  },
  "messageProcessor": {
    "noValidMessages": "请求中未找到有效消息。",
    "externalImageDownloadError": "无法从以下地址下载外部图片:{url}",
    "externalImageNotAllowed": "不允许使用外部图片:{url}"
  },
  "pipeline": {
    "noResponse": "未收到服务响应。请重试。"
  },
  "processors": {
    "nonStreamingResponseProcessor": {
      "streamingError": "流式传输中收到错误:{error_type} - {error_message}",
      "noMessage": "响应中未收到消息。"
    }
  }
}


================================================
FILE: app/main.py
================================================
from loguru import logger
from contextlib import asynccontextmanager
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

from app.api.main import api_router
from app.core.config import settings
from app.core.error_handler import app_exception_handler
from app.core.exceptions import AppError
from app.core.static import register_static_routes
from app.utils.logger import configure_logger
from app.services.account import account_manager
from app.services.session import session_manager
from app.services.tool_call import tool_call_manager
from app.services.cache import cache_service


@asynccontextmanager
async def lifespan(app: FastAPI):
    """Application lifespan manager."""
    logger.info("Starting Clove...")

    configure_logger()

    # Load accounts
    account_manager.load_accounts()

    for cookie in settings.cookies:
        await account_manager.add_account(cookie_value=cookie)

    # Start tasks
    await account_manager.start_task()
    await session_manager.start_cleanup_task()
    await tool_call_manager.start_cleanup_task()
    await cache_service.start_cleanup_task()

    yield

    logger.info("Shutting down Clove...")

    # Save accounts
    account_manager.save_accounts()

    # Stop tasks
    await account_manager.stop_task()
    await session_manager.cleanup_all()
    await tool_call_manager.cleanup_all()
    await cache_service.cleanup_all()


app = FastAPI(
    title="Clove",
    description="A Claude.ai reverse proxy",
    version="0.1.0",
    lifespan=lifespan,
)

app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Include routers
app.include_router(api_router)

# Static files
register_static_routes(app)

# Exception handlers
app.add_exception_handler(AppError, app_exception_handler)


# Health check
@app.get("/health")
async def health():
    """Health check endpoint."""
    stats = await account_manager.get_status()
    return {"status": "healthy" if stats["valid_accounts"] > 0 else "degraded"}


def main():
    """Main entry point for the application."""
    import uvicorn

    uvicorn.run(
        "app.main:app",
        host=settings.host,
        port=settings.port,
        reload=False,
    )


if __name__ == "__main__":
    main()
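The `lifespan` context manager above runs its pre-`yield` code at startup (loading accounts, starting background tasks) and its post-`yield` code at shutdown (saving accounts, stopping tasks). A stdlib-only toy sketch of that ordering, with a placeholder `app` argument rather than a real FastAPI instance:

```python
# Toy sketch of the lifespan pattern: code before `yield` runs at startup,
# code after it runs at shutdown, bracketing the serving phase.
import asyncio
from contextlib import asynccontextmanager

events = []


@asynccontextmanager
async def lifespan(app):
    events.append("startup")   # e.g. load accounts, start cleanup tasks
    yield
    events.append("shutdown")  # e.g. save accounts, stop cleanup tasks


async def serve():
    async with lifespan(app=None):
        events.append("serving")


asyncio.run(serve())
```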


================================================
FILE: app/models/__init__.py
================================================


================================================
FILE: app/models/claude.py
================================================
from typing import Optional, List, Union, Literal, Dict, Any
from pydantic import BaseModel, ConfigDict, Field, model_validator
from enum import Enum


class Role(str, Enum):
    USER = "user"
    ASSISTANT = "assistant"


class ImageType(str, Enum):
    JPEG = "image/jpeg"
    PNG = "image/png"
    GIF = "image/gif"
    WEBP = "image/webp"


# Image sources
class Base64ImageSource(BaseModel):
    type: Literal["base64"] = "base64"
    media_type: ImageType = Field(..., description="MIME type of the image")
    data: str = Field(..., description="Base64 encoded image data")


class URLImageSource(BaseModel):
    type: Literal["url"] = "url"
    url: str = Field(..., description="URL of the image")


class FileImageSource(BaseModel):
    type: Literal["file"] = "file"
    file_uuid: str = Field(..., description="UUID of the uploaded file")


# Web search result
class WebSearchResult(BaseModel):
    model_config = ConfigDict(extra="allow")
    type: Literal["web_search_result"]
    title: str
    url: str
    encrypted_content: str
    page_age: Optional[str] = None


# Cache control
class CacheControl(BaseModel):
    type: Literal["ephemeral"]


# Content types
class TextContent(BaseModel):
    model_config = ConfigDict(extra="allow")
    type: Literal["text"]
    text: str
    cache_control: Optional[CacheControl] = None


class ImageContent(BaseModel):
    model_config = ConfigDict(extra="allow")
    type: Literal["image"]
    source: Base64ImageSource | URLImageSource | FileImageSource
    cache_control: Optional[CacheControl] = None


class ThinkingContent(BaseModel):
    model_config = ConfigDict(extra="allow")
    type: Literal["thinking"]
    thinking: str


# redacted_thinking block: the API may return thinking content that has been redacted
class RedactedThinkingContent(BaseModel):
    model_config = ConfigDict(extra="allow")
    type: Literal["redacted_thinking"]
    data: str


class ToolUseContent(BaseModel):
    model_config = ConfigDict(extra="allow")
    type: Literal["tool_use"]
    id: str
    name: str
    input: Dict[str, Any]
    cache_control: Optional[CacheControl] = None


class ToolResultContent(BaseModel):
    model_config = ConfigDict(extra="allow")
    type: Literal["tool_result"]
    tool_use_id: str
    content: str | List[TextContent | ImageContent]
    is_error: Optional[bool] = False
    cache_control: Optional[CacheControl] = None


class ServerToolUseContent(BaseModel):
    model_config = ConfigDict(extra="allow")
    type: Literal["server_tool_use"]
    id: str
    name: str
    input: Dict[str, Any]
    cache_control: Optional[CacheControl] = None


class WebSearchToolResultContent(BaseModel):
    model_config = ConfigDict(extra="allow")
    type: Literal["web_search_tool_result"]
    tool_use_id: str
    content: List[WebSearchResult]
    cache_control: Optional[CacheControl] = None


ContentBlock = Union[
    TextContent,
    ImageContent,
    ThinkingContent,
    RedactedThinkingContent,
    ToolUseContent,
    ToolResultContent,
    ServerToolUseContent,
    WebSearchToolResultContent,
]


class InputMessage(BaseModel):
    model_config = ConfigDict(extra="allow")
    role: Role
    content: Union[str, List[ContentBlock]]


class ThinkingOptions(BaseModel):
    model_config = ConfigDict(extra="allow")
    type: Literal["enabled", "disabled", "adaptive"] = "disabled"
    budget_tokens: Optional[int] = None


class ToolChoice(BaseModel):
    model_config = ConfigDict(extra="allow")
    type: Literal["auto", "any", "tool", "none"] = "auto"
    name: Optional[str] = None
    disable_parallel_tool_use: Optional[bool] = None


class CustomToolSpec(BaseModel):
    model_config = ConfigDict(extra="allow")
    description: Optional[str] = None
    input_schema: Optional[Any] = None


class Tool(BaseModel):
    model_config = ConfigDict(extra="allow")
    type: Optional[str] = None
    name: Optional[str] = None
    input_schema: Optional[Any] = None
    description: Optional[str] = None
    custom: Optional[CustomToolSpec] = None


class OutputConfig(BaseModel):
    """Output configuration (effort, format, etc). effort and structured outputs are now GA."""

    model_config = ConfigDict(extra="allow")
    effort: Optional[Literal["low", "medium", "high", "max"]] = None


class OutputFormat(BaseModel):
    """Output format for structured outputs (deprecated, use output_config.format instead)."""

    model_config = ConfigDict(extra="allow", populate_by_name=True, serialize_by_alias=True)
    type: Literal["json_schema"]
    schema_: Optional[Dict[str, Any]] = Field(default=None, alias="schema")


class ServerToolUsage(BaseModel):
    model_config = ConfigDict(extra="allow")
    web_search_requests: Optional[int] = None


class Usage(BaseModel):
    model_config = ConfigDict(extra="allow")
    input_tokens: int
    output_tokens: int
    cache_creation_input_tokens: Optional[int] = 0
    cache_read_input_tokens: Optional[int] = 0
    server_tool_use: Optional[ServerToolUsage] = None


class MessagesAPIRequest(BaseModel):
    model_config = ConfigDict(extra="allow")
    model: str = Field(default="claude-opus-4-20250514")
    messages: List[InputMessage]
    max_tokens: int = Field(default=8192, ge=1)
    system: Optional[str | List[TextContent]] = None
    temperature: Optional[float] = Field(default=None, ge=0, le=1)
    top_p: Optional[float] = Field(default=None, ge=0, le=1)
    top_k: Optional[int] = Field(default=None, ge=0)
    stop_sequences: Optional[List[str]] = None
    stream: Optional[bool] = False
    metadata: Optional[Dict[str, Any]] = None
    thinking: Optional[ThinkingOptions] = None
    tool_choice: Optional[ToolChoice] = None
    tools: Optional[List[Tool]] = None
    output_config: Optional[OutputConfig] = None
    output_format: Optional[OutputFormat] = None

    @model_validator(mode="after")
    def validate_thinking_tokens(self) -> "MessagesAPIRequest":
        """Ensure max_tokens > thinking.budget_tokens when thinking is enabled."""
        if (
            self.thinking
            and self.thinking.type == "enabled"
            and self.thinking.budget_tokens is not None
            and self.max_tokens <= self.thinking.budget_tokens
        ):
            self.max_tokens = self.thinking.budget_tokens + 1
        return self


class Message(BaseModel):
    model_config = ConfigDict(extra="allow")
    id: str
    type: Literal["message"]
    role: Literal["assistant"]
    content: List[ContentBlock]
    model: str
    stop_reason: Optional[
        Literal[
            "end_turn",
            "max_tokens",
            "stop_sequence",
            "tool_use",
            "pause_turn",
            "refusal",
        ]
    ] = None
    stop_sequence: Optional[str] = None
    usage: Optional[Usage] = None
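The `validate_thinking_tokens` validator above bumps `max_tokens` to just above the thinking budget instead of rejecting the request. A plain-function sketch of that rule (no pydantic; function name is illustrative):

```python
# Sketch of the max_tokens adjustment performed by validate_thinking_tokens:
# when thinking is enabled, max_tokens must exceed the thinking budget.
from typing import Optional


def adjust_max_tokens(
    max_tokens: int, thinking_enabled: bool, budget_tokens: Optional[int]
) -> int:
    """Return max_tokens, raised to budget_tokens + 1 if it was too small."""
    if thinking_enabled and budget_tokens is not None and max_tokens <= budget_tokens:
        return budget_tokens + 1
    return max_tokens
```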


================================================
FILE: app/models/internal.py
================================================
from typing import List, Optional
from pydantic import BaseModel, Field
from .claude import Tool


class Attachment(BaseModel):
    extracted_content: str
    file_name: str
    file_type: str
    file_size: int

    @classmethod
    def from_text(cls, content: str) -> "Attachment":
        """Create text attachment."""
        return cls(
            extracted_content=content,
            file_name="paste.txt",
            file_type="txt",
            file_size=len(content),
        )


class ClaudeWebRequest(BaseModel):
    max_tokens_to_sample: int
    attachments: List[Attachment]
    files: List[str] = Field(default_factory=list)
    model: Optional[str] = None
    rendering_mode: str = "messages"
    prompt: str = ""
    timezone: str
    tools: List[Tool] = Field(default_factory=list)


class UploadResponse(BaseModel):
    file_uuid: str
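`Attachment.from_text` above wraps pasted text as a synthetic `paste.txt` file whose size is the character count of the string. A dict-based sketch of the same mapping (no pydantic):

```python
# Sketch of Attachment.from_text: pasted text becomes a "paste.txt"
# attachment; file_size is the character count, not the byte count.
def text_attachment(content: str) -> dict:
    return {
        "extracted_content": content,
        "file_name": "paste.txt",
        "file_type": "txt",
        "file_size": len(content),
    }
```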


================================================
FILE: app/models/streaming.py
================================================
from typing import Optional, Union, Dict, Any, Literal
from pydantic import BaseModel, RootModel, ConfigDict

from .claude import ContentBlock, Message, Usage


# Base event types
class BaseEvent(BaseModel):
    model_config = ConfigDict(extra="allow")
    type: str


# Delta types
class TextDelta(BaseModel):
    model_config = ConfigDict(extra="allow")
    type: Literal["text_delta"]
    text: str


class InputJsonDelta(BaseModel):
    model_config = ConfigDict(extra="allow")
    type: Literal["input_json_delta"]
    partial_json: str


class ThinkingDelta(BaseModel):
    model_config = ConfigDict(extra="allow")
    type: Literal["thinking_delta"]
    thinking: str


class SignatureDelta(BaseModel):
    model_config = ConfigDict(extra="allow")
    type: Literal["signature_delta"]
    signature: str


Delta = Union[TextDelta, InputJsonDelta, ThinkingDelta, SignatureDelta]


class MessageDeltaData(BaseModel):
    model_config = ConfigDict(extra="allow")
    stop_reason: Optional[
        Literal["end_turn", "max_tokens", "stop_sequence", "tool_use", "pause_turn", "refusal"]
    ] = None
    stop_sequence: Optional[str] = None


# Error model
class ErrorInfo(BaseModel):
    model_config = ConfigDict(extra="allow")
    type: str
    message: str


# Event models
class MessageStartEvent(BaseEvent):
    type: Literal["message_start"]
    message: Message


class ContentBlockStartEvent(BaseEvent):
    type: Literal["content_block_start"]
    index: int
    content_block: ContentBlock


class ContentBlockDeltaEvent(BaseEvent):
    type: Literal["content_block_delta"]
    index: int
    delta: Delta


class ContentBlockStopEvent(BaseEvent):
    type: Literal["content_block_stop"]
    index: int


class MessageDeltaEvent(BaseEvent):
    type: Literal["message_delta"]
    delta: MessageDeltaData
    usage: Optional[Usage] = None


class MessageStopEvent(BaseEvent):
    type: Literal["message_stop"]


class PingEvent(BaseEvent):
    type: Literal["ping"]


class ErrorEvent(BaseEvent):
    type: Literal["error"]
    error: ErrorInfo


class UnknownEvent(BaseEvent):
    type: str
    data: Dict[str, Any]


# Union of all streaming event types
class StreamingEvent(RootModel):
    root: Union[
        MessageStartEvent,
        ContentBlockStartEvent,
        ContentBlockDeltaEvent,
        ContentBlockStopEvent,
        MessageDeltaEvent,
        MessageStopEvent,
        PingEvent,
        ErrorEvent,
        UnknownEvent,
    ]
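`StreamingEvent` is a discriminated union: pydantic resolves each incoming event by its `type` field, and anything unrecognized lands in the `UnknownEvent` catch-all. A stdlib-only sketch of that dispatch (function and set names are illustrative):

```python
# Sketch of discriminated-union resolution: dispatch on the JSON "type"
# field, with unrecognized types falling through to the catch-all.
import json

KNOWN_EVENT_TYPES = {
    "message_start", "content_block_start", "content_block_delta",
    "content_block_stop", "message_delta", "message_stop", "ping", "error",
}


def classify_event(raw: str) -> str:
    """Return the event type, or 'unknown' for unrecognized payloads."""
    event_type = json.loads(raw).get("type", "")
    return event_type if event_type in KNOWN_EVENT_TYPES else "unknown"
```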


================================================
FILE: app/processors/__init__.py
================================================
from app.processors.base import BaseProcessor, BaseContext
from app.processors.claude_ai import (
    ClaudeAIContext,
    TestMessageProcessor,
    ClaudeWebProcessor,
    EventParsingProcessor,
    StreamingResponseProcessor,
    MessageCollectorProcessor,
    NonStreamingResponseProcessor,
    TokenCounterProcessor,
    ToolResultProcessor,
    ToolCallEventProcessor,
    StopSequencesProcessor,
)

__all__ = [
    # Base classes
    "BaseProcessor",
    "BaseContext",
    # Claude AI Pipeline
    "ClaudeAIContext",
    "TestMessageProcessor",
    "ClaudeWebProcessor",
    "EventParsingProcessor",
    "StreamingResponseProcessor",
    "MessageCollectorProcessor",
    "NonStreamingResponseProcessor",
    "TokenCounterProcessor",
    "ToolResultProcessor",
    "ToolCallEventProcessor",
    "StopSequencesProcessor",
]


================================================
FILE: app/processors/base.py
================================================
"""Base classes for request processing pipeline."""

from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Optional

from fastapi import Request
from fastapi.responses import StreamingResponse, JSONResponse


@dataclass
class BaseContext:
    """Base context passed between processors in the pipeline."""

    original_request: Request
    response: Optional[StreamingResponse | JSONResponse] = None
    metadata: dict = field(
        default_factory=dict
    )  # For storing custom data between processors


class BaseProcessor(ABC):
    """Base class for all request processors."""

    @abstractmethod
    async def process(self, context: BaseContext) -> BaseContext:
        """
        Process the request context.

        Args:
            context: The processing context

        Returns:
            Updated context.
        """
        pass

    @property
    def name(self) -> str:
        """Get the processor name."""
        return self.__class__.__name__


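The `BaseProcessor`/`BaseContext` contract above lends itself to a simple sequential pipeline. A minimal standalone sketch of that pattern (the classes here are simplified stand-ins without FastAPI types, and `UppercaseProcessor`/`StampProcessor` are hypothetical, not part of the codebase):

```python
import asyncio
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


@dataclass
class Context:
    """Simplified stand-in for BaseContext (no FastAPI types)."""
    payload: str
    metadata: dict = field(default_factory=dict)


class Processor(ABC):
    @abstractmethod
    async def process(self, context: Context) -> Context: ...


class UppercaseProcessor(Processor):
    """Hypothetical processor: transforms the payload in place."""
    async def process(self, context: Context) -> Context:
        context.payload = context.payload.upper()
        return context


class StampProcessor(Processor):
    """Hypothetical processor: records that it ran via metadata."""
    async def process(self, context: Context) -> Context:
        context.metadata["stamped"] = True
        return context


async def run_pipeline(processors: list[Processor], context: Context) -> Context:
    # Each processor receives the context produced by the previous one
    for processor in processors:
        context = await processor.process(context)
        if context.metadata.get("stop_pipeline"):
            break  # same early-exit convention ClaudeAPIProcessor uses
    return context


ctx = asyncio.run(run_pipeline([UppercaseProcessor(), StampProcessor()], Context("hello")))
print(ctx.payload, ctx.metadata)  # HELLO {'stamped': True}
```

Each processor mutates or enriches the shared context rather than returning a new value, which is what lets downstream processors skip work when an upstream one has already produced a response.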
================================================
FILE: app/processors/claude_ai/__init__.py
================================================
from app.processors.claude_ai.context import ClaudeAIContext
from app.processors.claude_ai.pipeline import ClaudeAIPipeline
from app.processors.claude_ai.tavern_test_message_processor import TestMessageProcessor
from app.processors.claude_ai.claude_web_processor import ClaudeWebProcessor
from app.processors.claude_ai.claude_api_processor import ClaudeAPIProcessor
from app.processors.claude_ai.event_parser_processor import EventParsingProcessor
from app.processors.claude_ai.streaming_response_processor import (
    StreamingResponseProcessor,
)
from app.processors.claude_ai.message_collector_processor import (
    MessageCollectorProcessor,
)
from app.processors.claude_ai.non_streaming_response_processor import (
    NonStreamingResponseProcessor,
)
from app.processors.claude_ai.token_counter_processor import TokenCounterProcessor
from app.processors.claude_ai.tool_result_processor import ToolResultProcessor
from app.processors.claude_ai.tool_call_event_processor import ToolCallEventProcessor
from app.processors.claude_ai.stop_sequences_processor import StopSequencesProcessor
from app.processors.claude_ai.model_injector_processor import ModelInjectorProcessor

__all__ = [
    "ClaudeAIContext",
    "ClaudeAIPipeline",
    "TestMessageProcessor",
    "ClaudeWebProcessor",
    "ClaudeAPIProcessor",
    "EventParsingProcessor",
    "StreamingResponseProcessor",
    "MessageCollectorProcessor",
    "NonStreamingResponseProcessor",
    "TokenCounterProcessor",
    "ToolResultProcessor",
    "ToolCallEventProcessor",
    "StopSequencesProcessor",
    "ModelInjectorProcessor",
]


================================================
FILE: app/processors/claude_ai/claude_api_processor.py
================================================
from app.core.http_client import (
    Response,
    AsyncSession,
    create_session,
)
from datetime import datetime, timedelta, UTC
from typing import Dict
from loguru import logger
from fastapi.responses import StreamingResponse

from app.models.claude import MessagesAPIRequest, TextContent
from app.processors.base import BaseProcessor
from app.processors.claude_ai import ClaudeAIContext
from app.services.account import account_manager
from app.services.cache import cache_service
from app.core.exceptions import (
    ClaudeHttpError,
    ClaudeRateLimitedError,
    InvalidModelNameError,
    NoAccountsAvailableError,
    OAuthAuthenticationNotAllowedError,
)
from app.core.config import settings


class ClaudeAPIProcessor(BaseProcessor):
    """Processor that calls Claude Messages API directly using OAuth authentication."""

    def __init__(self):
        self.messages_api_url = (
            settings.claude_api_baseurl.encoded_string().rstrip("/") + "/v1/messages"
        )

    async def _request_messages_api(
        self, session: AsyncSession, request_json: str, headers: Dict[str, str]
    ) -> Response:
        """Make HTTP request with retry mechanism for curl_cffi exceptions."""
        response: Response = await session.request(
            "POST",
            self.messages_api_url,
            data=request_json,
            headers=headers,
            stream=True,
        )
        return response

    async def process(self, context: ClaudeAIContext) -> ClaudeAIContext:
        """
        Process Claude API request using OAuth authentication.

        Requires:
            - messages_api_request in context

        Produces:
            - response in context (StreamingResponse)
        """
        if context.response:
            logger.debug("Skipping ClaudeAPIProcessor due to existing response")
            return context

        if not context.messages_api_request:
            logger.warning(
                "Skipping ClaudeAPIProcessor due to missing messages_api_request"
            )
            return context

        self._insert_system_message(context)

        try:
            # First try to get account from cache service
            cached_account_id, checkpoints = cache_service.process_messages(
                context.messages_api_request.model,
                context.messages_api_request.messages,
                context.messages_api_request.system,
            )

            account = None
            if cached_account_id:
                account = await account_manager.get_account_by_id(cached_account_id)
                if account:
                    logger.info(f"Using cached account: {cached_account_id[:8]}...")

            # If no cached account or account not available, get a new one
            if not account:
                account = await account_manager.get_account_for_oauth(
                    is_max=True
                    if (context.messages_api_request.model in settings.max_models)
                    else None
                )

            with account:
                request_json = context.messages_api_request.model_dump_json(
                    exclude_none=True
                )
                headers = self._prepare_headers(
                    account.oauth_token.access_token,
                    context.messages_api_request,
                    context.original_request,
                )

                session = create_session(
                    proxy=settings.proxy_url,
                    timeout=settings.request_timeout,
                    impersonate="chrome",
                    follow_redirects=False,
                )

                response = await self._request_messages_api(
                    session, request_json, headers
                )

                resets_at = response.headers.get("anthropic-ratelimit-unified-reset")
                if resets_at:
                    try:
                        resets_at = int(resets_at)
                        account.resets_at = datetime.fromtimestamp(resets_at, tz=UTC)
                    except ValueError:
                        logger.error(
                            f"Invalid resets_at format from Claude API: {resets_at}"
                        )
                        account.resets_at = None

                # Handle rate limiting
                if response.status_code == 429:
                    next_hour = datetime.now(UTC).replace(
                        minute=0, second=0, microsecond=0
                    ) + timedelta(hours=1)
                    raise ClaudeRateLimitedError(
                        resets_at=account.resets_at or next_hour
                    )

                if response.status_code >= 400:
                    error_data = await response.json()

                    if (
                        response.status_code == 400
                        and error_data.get("error", {}).get("message")
                        == "system: Invalid model name"
                    ):
                        raise InvalidModelNameError(context.messages_api_request.model)

                    if (
                        response.status_code == 401
                        and error_data.get("error", {}).get("message")
                        == "OAuth authentication is currently not allowed for this organization."
                    ):
                        raise OAuthAuthenticationNotAllowedError()

                    logger.error(
                        f"Claude API error: {response.status_code} - {error_data}"
                    )
                    raise ClaudeHttpError(
                        url=self.messages_api_url,
                        status_code=response.status_code,
                        error_type=error_data.get("error", {}).get("type", "unknown"),
                        error_message=error_data.get("error", {}).get(
                            "message", "Unknown error"
                        ),
                    )

                async def stream_response():
                    try:
                        async for chunk in response.aiter_bytes():
                            yield chunk
                    finally:
                        # Close the session even if the client disconnects mid-stream
                        await session.close()

                filtered_headers = {}
                for key, value in response.headers.items():
                    if key.lower() in ["content-encoding", "content-length"]:
                        logger.debug(f"Filtering out header: {key}: {value}")
                        continue
                    filtered_headers[key] = value

                context.response = StreamingResponse(
                    stream_response(),
                    status_code=response.status_code,
                    headers=filtered_headers,
                )

                # Stop pipeline on success
                context.metadata["stop_pipeline"] = True
                logger.info("Successfully processed request via Claude API")

                # Store checkpoints in cache service after successful request
                if checkpoints and account:
                    cache_service.add_checkpoints(
                        checkpoints, account.organization_uuid
                    )

        except (NoAccountsAvailableError, InvalidModelNameError):
            logger.debug("No accounts available for Claude API, continuing pipeline")

        return context

    def _insert_system_message(self, context: ClaudeAIContext) -> None:
        """Insert system message into the request."""

        request = context.messages_api_request

        # Handle system field
        system_message_text = (
            "You are Claude Code, Anthropic's official CLI for Claude."
        )
        system_message = TextContent(type="text", text=system_message_text)

        if isinstance(request.system, str) and request.system:
            request.system = [
                system_message,
                TextContent(type="text", text=request.system),
            ]
        elif isinstance(request.system, list) and request.system:
            if request.system[0].text == system_message_text:
                logger.debug("System message already exists, skipping injection.")
            else:
                request.system = [system_message] + request.system
        else:
            request.system = [system_message]

    def _prepare_headers(
        self,
        access_token: str,
        request: MessagesAPIRequest,
        original_request=None,
    ) -> Dict[str, str]:
        """Prepare headers for Claude API request.

        Beta headers: oauth 是 OAuth 认证必需的。
        effort 和 structured-outputs 已 GA,不再需要 beta header。
        客户端的 anthropic-beta header 会被透传(去重合并)。
        """
        # oauth beta 是 OAuth 认证必需的
        beta_features = ["oauth-2025-04-20"]

        # 透传客户端 anthropic-beta header,与内部 beta 去重合并
        if original_request:
            client_beta = original_request.headers.get("anthropic-beta", "")
            if client_beta:
                for beta in client_beta.split(","):
                    beta = beta.strip()
                    if beta and beta not in beta_features:
                        beta_features.append(beta)

        return {
            "Authorization": f"Bearer {access_token}",
            "anthropic-beta": ",".join(beta_features),
            "anthropic-version": "2023-06-01",
            "Content-Type": "application/json",
        }

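The header-merge logic in `_prepare_headers` reduces to an order-preserving, de-duplicating merge. A sketch as a pure function (`merge_beta_headers` is a hypothetical helper name, not part of the codebase):

```python
def merge_beta_headers(internal: list[str], client_header: str) -> str:
    """Merge client-supplied anthropic-beta values into the internal list,
    preserving order and dropping duplicates."""
    merged = list(internal)
    for beta in client_header.split(","):
        beta = beta.strip()
        if beta and beta not in merged:
            merged.append(beta)
    return ",".join(merged)


# The internal oauth beta always comes first; client extras are appended once each
header = merge_beta_headers(
    ["oauth-2025-04-20"],
    "interleaved-thinking-2025-05-14, oauth-2025-04-20",
)
print(header)  # oauth-2025-04-20,interleaved-thinking-2025-05-14
```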

================================================
FILE: app/processors/claude_ai/claude_web_processor.py
================================================
import time
import base64
import random
import string
from typing import List
from loguru import logger

from app.processors.base import BaseProcessor
from app.processors.claude_ai import ClaudeAIContext
from app.services.session import session_manager
from app.models.internal import ClaudeWebRequest, Attachment
from app.core.exceptions import NoValidMessagesError
from app.core.config import settings
from app.utils.messages import process_messages


class ClaudeWebProcessor(BaseProcessor):
    """Claude AI processor that handles session management, request building, and sending to Claude AI."""

    async def process(self, context: ClaudeAIContext) -> ClaudeAIContext:
        """
        Claude AI processor that:
        1. Gets or creates a Claude session
        2. Builds ClaudeWebRequest from messages_api_request
        3. Sends the request to Claude.ai

        Requires:
            - messages_api_request in context

        Produces:
            - claude_session in context
            - claude_web_request in context
            - original_stream in context
        """
        if context.original_stream:
            logger.debug("Skipping ClaudeWebProcessor due to existing original_stream")
            return context

        if not context.messages_api_request:
            logger.warning(
                "Skipping ClaudeWebProcessor due to missing messages_api_request"
            )
            return context

        # Step 1: Get or create Claude session
        if not context.claude_session:
            session_id = context.metadata.get("session_id")
            if not session_id:
                session_id = f"session_{int(time.time() * 1000)}"
                context.metadata["session_id"] = session_id

            logger.debug(f"Creating new session: {session_id}")
            context.claude_session = await session_manager.get_or_create_session(
                session_id
            )

        # Step 2: Build ClaudeWebRequest
        if not context.claude_web_request:
            request = context.messages_api_request

            if not request.messages:
                raise NoValidMessagesError()

            merged_text, images = await process_messages(
                request.messages, request.system
            )
            if not merged_text:
                raise NoValidMessagesError()

            if settings.padtxt_length > 0:
                pad_tokens = settings.pad_tokens or (
                    string.ascii_letters + string.digits
                )
                pad_text = "".join(random.choices(pad_tokens, k=settings.padtxt_length))
                merged_text = pad_text + merged_text
                logger.debug(
                    f"Added {settings.padtxt_length} padding tokens to the beginning of the message"
                )

            image_file_ids: List[str] = []
            if images:
                for i, image_source in enumerate(images):
                    try:
                        # Convert base64 to bytes
                        image_data = base64.b64decode(image_source.data)

                        # Upload to Claude
                        file_id = await context.claude_session.upload_file(
                            file_data=image_data,
                            filename=f"image_{i}.png",  # Default filename
                            content_type=image_source.media_type,
                        )
                        image_file_ids.append(file_id)
                        logger.debug(f"Uploaded image {i}: {file_id}")
                    except Exception as e:
                        logger.error(f"Failed to upload image {i}: {e}")

            await context.claude_session._ensure_conversation_initialized()

            paprika_mode = (
                "extended"
                if (
                    context.claude_session.account.is_pro
                    and request.thinking
                    and request.thinking.type in ("enabled", "adaptive")
                )
                else None
            )

            await context.claude_session.set_paprika_mode(paprika_mode)

            web_request = ClaudeWebRequest(
                max_tokens_to_sample=request.max_tokens,
                attachments=[Attachment.from_text(merged_text)],
                files=image_file_ids,
                model=request.model,
                rendering_mode="messages",
                prompt=settings.custom_prompt or "",
                timezone="UTC",
                tools=request.tools or [],
            )

            context.claude_web_request = web_request
            logger.debug(f"Built web request with {len(image_file_ids)} images")

        # Step 3: Send to Claude
        logger.debug(
            f"Sending request to Claude.ai for session {context.claude_session.session_id}"
        )

        request_dict = context.claude_web_request.model_dump(exclude_none=True)
        context.original_stream = await context.claude_session.send_message(
            request_dict
        )

        return context


================================================
FILE: app/processors/claude_ai/context.py
================================================
from dataclasses import dataclass
from typing import Optional, AsyncIterator

from app.core.claude_session import ClaudeWebSession
from app.models.claude import Message, MessagesAPIRequest
from app.models.internal import ClaudeWebRequest
from app.models.streaming import StreamingEvent
from app.processors.base import BaseContext


@dataclass
class ClaudeAIContext(BaseContext):
    messages_api_request: Optional[MessagesAPIRequest] = None
    claude_web_request: Optional[ClaudeWebRequest] = None
    claude_session: Optional[ClaudeWebSession] = None
    original_stream: Optional[AsyncIterator[str]] = None
    event_stream: Optional[AsyncIterator[StreamingEvent]] = None
    collected_message: Optional[Message] = None


================================================
FILE: app/processors/claude_ai/event_parser_processor.py
================================================
from loguru import logger

from app.processors.base import BaseProcessor
from app.processors.claude_ai import ClaudeAIContext
from app.services.event_processing.event_parser import EventParser


class EventParsingProcessor(BaseProcessor):
    """Processor that parses SSE streams into StreamingEvent objects."""

    def __init__(self):
        super().__init__()
        self.parser = EventParser()

    async def process(self, context: ClaudeAIContext) -> ClaudeAIContext:
        """
        Parse the original_stream into event_stream.

        Requires:
            - original_stream in context

        Produces:
            - event_stream in context
        """
        if context.event_stream:
            logger.debug("Skipping EventParsingProcessor due to existing event_stream")
            return context

        if not context.original_stream:
            logger.warning(
                "Skipping EventParsingProcessor due to missing original_stream"
            )
            return context

        logger.debug("Starting event parsing from SSE stream")
        context.event_stream = self.parser.parse_stream(context.original_stream)

        return context


================================================
FILE: app/processors/claude_ai/message_collector_processor.py
================================================
import json5
from typing import AsyncIterator
from loguru import logger

from app.processors.base import BaseProcessor
from app.processors.claude_ai import ClaudeAIContext
from app.models.streaming import (
    Delta,
    StreamingEvent,
    MessageStartEvent,
    ContentBlockStartEvent,
    ContentBlockDeltaEvent,
    ContentBlockStopEvent,
    MessageDeltaEvent,
    MessageStopEvent,
    ErrorEvent,
    ErrorInfo,
    TextDelta,
    InputJsonDelta,
    ThinkingDelta,
)
from app.models.claude import (
    ContentBlock,
    ServerToolUseContent,
    TextContent,
    ThinkingContent,
    ToolResultContent,
    ToolUseContent,
)


class MessageCollectorProcessor(BaseProcessor):
    """Processor that collects streaming events into a Message object without consuming the stream."""

    async def process(self, context: ClaudeAIContext) -> ClaudeAIContext:
        """
        Collect streaming events into a Message object and update it in real-time.
        This processor runs for both streaming and non-streaming requests.

        Requires:
            - event_stream in context

        Produces:
            - collected_message in context (updated in real-time)
            - event_stream in context (wrapped to collect messages without consuming)
        """
        if not context.event_stream:
            logger.warning(
                "Skipping MessageCollectorProcessor due to missing event_stream"
            )
            return context

        logger.debug("Setting up message collection from stream")

        original_stream = context.event_stream

        new_stream = self._collect_messages_generator(original_stream, context)
        context.event_stream = new_stream

        return context

    async def _collect_messages_generator(
        self, event_stream: AsyncIterator[StreamingEvent], context: ClaudeAIContext
    ) -> AsyncIterator[StreamingEvent]:
        """
        Generator that collects messages from the stream without consuming events.
        Updates context.collected_message in real-time.
        """
        context.collected_message = None

        async for event in event_stream:
            # Process the event to build/update the message
            if isinstance(event.root, MessageStartEvent):
                context.collected_message = event.root.message.model_copy(deep=True)
                logger.debug(f"Message started: {context.collected_message.id}")

            elif isinstance(event.root, ContentBlockStartEvent):
                if context.collected_message:
                    while len(context.collected_message.content) <= event.root.index:
                        context.collected_message.content.append(None)
                    context.collected_message.content[event.root.index] = (
                        event.root.content_block.model_copy(deep=True)
                    )
                    logger.debug(
                        f"Content block {event.root.index} started: {event.root.content_block.type}"
                    )

            elif isinstance(event.root, ContentBlockDeltaEvent):
                if context.collected_message and event.root.index < len(
                    context.collected_message.content
                ):
                    self._apply_delta(
                        context.collected_message.content[event.root.index],
                        event.root.delta,
                    )

            elif isinstance(event.root, ContentBlockStopEvent):
                # Boundary checking to prevent IndexError caused by refusal responses
                if (
                    context.collected_message
                    and event.root.index < len(context.collected_message.content)
                ):
                    block = context.collected_message.content[event.root.index]
                    if isinstance(block, (ToolUseContent, ServerToolUseContent)):
                        if hasattr(block, "input_json") and block.input_json:
                            block.input = json5.loads(block.input_json)
                            del block.input_json
                    if isinstance(block, ToolResultContent):
                        if hasattr(block, "content_json") and block.content_json:
                            block = ToolResultContent(
                                **block.model_dump(exclude={"content"}),
                                content=json5.loads(block.content_json),
                            )
                            del block.content_json
                            context.collected_message.content[event.root.index] = block
                    logger.debug(f"Content block {event.root.index} stopped")
                else:
                    logger.debug(
                        f"Content block {event.root.index} stop skipped (no corresponding start)"
                    )

            elif isinstance(event.root, MessageDeltaEvent):
                if context.collected_message and event.root.delta:
                    if event.root.delta.stop_reason:
                        context.collected_message.stop_reason = (
                            event.root.delta.stop_reason
                        )
                        # When refusal is detected and content is empty, yield ErrorEvent
                        if (
                            event.root.delta.stop_reason == "refusal"
                            and not context.collected_message.content
                        ):
                            logger.warning("Request refused by Claude's safety filter")
                            error_event = StreamingEvent(
                                root=ErrorEvent(
                                    type="error",
                                    error=ErrorInfo(
                                        type="refusal",
                                        message="Chat paused: Claude's safety filters flagged this message. This occasionally happens with normal, safe messages. Try rephrasing or using a different model."
                                    )
                                )
                            )
                            yield error_event
                    if event.root.delta.stop_sequence:
                        context.collected_message.stop_sequence = (
                            event.root.delta.stop_sequence
                        )
                if context.collected_message and event.root.usage:
                    context.collected_message.usage = event.root.usage

            elif isinstance(event.root, MessageStopEvent):
                if context.collected_message:
                    context.collected_message.content = [
                        block
                        for block in context.collected_message.content
                        if block is not None
                    ]
                    logger.debug(
                        f"Message stopped with {len(context.collected_message.content)} content blocks"
                    )

            elif isinstance(event.root, ErrorEvent):
                logger.warning(f"Error event received: {event.root.error.message}")

            # Yield the event without modification
            yield event

        if context.collected_message:
            logger.debug(
                f"Collected message:\n{context.collected_message.model_dump()}"
            )

    def _apply_delta(self, content_block: ContentBlock, delta: Delta) -> None:
        """Apply a delta to a content block."""
        if isinstance(delta, TextDelta):
            if isinstance(content_block, TextContent):
                content_block.text += delta.text
        elif isinstance(delta, ThinkingDelta):
            if isinstance(content_block, ThinkingContent):
                content_block.thinking += delta.thinking
        elif isinstance(delta, InputJsonDelta):
            if isinstance(content_block, (ToolUseContent, ServerToolUseContent)):
                if hasattr(content_block, "input_json"):
                    content_block.input_json += delta.partial_json
                else:
                    content_block.input_json = delta.partial_json
            if isinstance(content_block, ToolResultContent):
                if hasattr(content_block, "content_json"):
                    content_block.content_json += delta.partial_json
                else:
                    content_block.content_json = delta.partial_json

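The accumulation in `_apply_delta` amounts to per-block string concatenation of streamed fragments. A standalone sketch with simplified stand-in classes (plain dataclasses, not the app's pydantic models):

```python
from dataclasses import dataclass


@dataclass
class TextBlock:
    """Stand-in for TextContent: accumulates streamed text."""
    text: str = ""


@dataclass
class TextDelta:
    """Stand-in for a content_block_delta payload."""
    text: str


def apply_delta(block: TextBlock, delta: TextDelta) -> None:
    # Deltas arrive as fragments; the collector concatenates them in arrival order
    block.text += delta.text


block = TextBlock()
for fragment in ["Hel", "lo, ", "world"]:
    apply_delta(block, TextDelta(text=fragment))
print(block.text)  # Hello, world
```

Tool-use blocks follow the same shape, except the fragments are partial JSON that is only parsed once the block stops.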

================================================
FILE: app/processors/claude_ai/model_injector_processor.py
================================================
from typing import AsyncIterator
from loguru import logger

from app.processors.base import BaseProcessor
from app.processors.claude_ai import ClaudeAIContext
from app.models.streaming import (
    MessageStartEvent,
    StreamingEvent,
)


class ModelInjectorProcessor(BaseProcessor):
    """Processor that injects model information when it's missing from MessageStartEvent."""

    async def process(self, context: ClaudeAIContext) -> ClaudeAIContext:
        """
        Intercept MessageStartEvent and add model information if missing.

        Requires:
            - event_stream in context
            - messages_api_request in context (for model information)

        Produces:
            - event_stream with updated MessageStartEvent containing model
        """
        if not context.event_stream:
            logger.warning(
                "Skipping ModelInjectorProcessor due to missing event_stream"
            )
            return context

        if not context.messages_api_request:
            logger.warning(
                "Skipping ModelInjectorProcessor due to missing messages_api_request"
            )
            return context

        logger.debug("Setting up model injection for stream")

        original_stream = context.event_stream
        new_stream = self._inject_model_generator(original_stream, context)
        context.event_stream = new_stream

        return context

    async def _inject_model_generator(
        self,
        event_stream: AsyncIterator[StreamingEvent],
        context: ClaudeAIContext,
    ) -> AsyncIterator[StreamingEvent]:
        """
        Generator that adds model to MessageStartEvent if missing.
        """
        # Get model from request
        model = context.messages_api_request.model

        async for event in event_stream:
            if isinstance(event.root, MessageStartEvent):
                # Check if model is missing or empty
                if not event.root.message.model:
                    event.root.message.model = model
                    logger.debug(f"Injected model '{model}' into MessageStartEvent")
                else:
                    logger.debug(
                        f"MessageStartEvent already has model: '{event.root.message.model}'"
                    )

            yield event

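The wrap-and-patch pattern used here — re-yielding every event from an async iterator while mutating selected ones in flight — can be sketched independently (plain dicts stand in for the real `StreamingEvent` models; the model name is a placeholder):

```python
import asyncio
from typing import AsyncIterator


async def inject_model(
    events: AsyncIterator[dict], model: str
) -> AsyncIterator[dict]:
    """Yield every event unchanged, filling in a missing model on message_start."""
    async for event in events:
        if event.get("type") == "message_start" and not event.get("model"):
            event["model"] = model  # patch in place, then pass through
        yield event


async def demo() -> list[dict]:
    async def source():
        # Upstream stream whose message_start lacks a model
        yield {"type": "message_start", "model": ""}
        yield {"type": "message_stop"}

    return [e async for e in inject_model(source(), "claude-sonnet-4")]


events = asyncio.run(demo())
print(events[0]["model"])  # claude-sonnet-4
```

Because the wrapper is itself an async generator, stacking several such processors composes lazily: nothing is consumed until the final response generator iterates the stream.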

================================================
FILE: app/processors/claude_ai/non_streaming_response_processor.py
================================================
from loguru import logger
from fastapi.responses import JSONResponse

from app.core.exceptions import ClaudeStreamingError, NoMessageError
from app.models.streaming import ErrorEvent
from app.processors.base import BaseProcessor
from app.processors.claude_ai import ClaudeAIContext


class NonStreamingResponseProcessor(BaseProcessor):
    """Processor that builds a non-streaming JSON response from collected message."""

    async def process(self, context: ClaudeAIContext) -> ClaudeAIContext:
        """
        Build a non-streaming JSON response from the collected message.
        This processor only runs for non-streaming requests.

        Requires:
            - messages_api_request with stream=False
            - collected_message in context (must consume entire stream first)

        Produces:
            - response (JSONResponse) in context
        """
        if context.response:
            logger.debug(
                "Skipping NonStreamingResponseProcessor due to existing response"
            )
            return context

        if context.messages_api_request and context.messages_api_request.stream is True:
            logger.debug("Skipping NonStreamingResponseProcessor for streaming request")
            return context

        if not context.event_stream:
            logger.warning(
                "Skipping NonStreamingResponseProcessor due to missing event_stream"
            )
            return context

        logger.info("Building non-streaming response")

        # Consume the entire stream to ensure collected_message is complete
        async for event in context.event_stream:
            if isinstance(event.root, ErrorEvent):
                raise ClaudeStreamingError(
                    error_type=event.root.error.type,
                    error_message=event.root.error.message,
                )

        if not context.collected_message:
            logger.error("No message collected after consuming stream")
            raise NoMessageError()

        context.response = JSONResponse(
            content=context.collected_message.model_dump(exclude_none=True),
            headers={
                "Content-Type": "application/json",
                "Cache-Control": "no-cache",
            },
        )

        return context
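
The drain-then-respond pattern above (consume the whole event stream before emitting one JSON body, surfacing any error event as an exception) can be sketched in isolation. A minimal, hypothetical version using plain dict events in place of the pydantic models used here:

```python
import asyncio

async def fake_stream():
    # Stand-in for the parsed SSE event stream.
    yield {"type": "content_block_delta", "text": "Hel"}
    yield {"type": "content_block_delta", "text": "lo"}
    yield {"type": "message_stop"}

async def build_non_streaming_body(stream):
    """Drain the entire stream first, then assemble a single response body."""
    parts = []
    async for event in stream:
        if event["type"] == "error":
            # Mirrors ClaudeStreamingError: a mid-stream error aborts the response.
            raise RuntimeError(event.get("message", "stream error"))
        if event["type"] == "content_block_delta":
            parts.append(event["text"])
    return {"role": "assistant", "content": "".join(parts)}

body = asyncio.run(build_non_streaming_body(fake_stream()))
```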


================================================
FILE: app/processors/claude_ai/pipeline.py
================================================
from typing import List, Optional
from loguru import logger

from app.services.session import session_manager
from app.processors.pipeline import ProcessingPipeline
from app.processors.base import BaseProcessor
from app.processors.claude_ai import ClaudeAIContext
from app.processors.claude_ai.tavern_test_message_processor import TestMessageProcessor
from app.processors.claude_ai.claude_web_processor import ClaudeWebProcessor
from app.processors.claude_ai.claude_api_processor import ClaudeAPIProcessor
from app.processors.claude_ai.event_parser_processor import EventParsingProcessor
from app.processors.claude_ai.streaming_response_processor import (
    StreamingResponseProcessor,
)
from app.processors.claude_ai.message_collector_processor import (
    MessageCollectorProcessor,
)
from app.processors.claude_ai.non_streaming_response_processor import (
    NonStreamingResponseProcessor,
)
from app.processors.claude_ai.token_counter_processor import TokenCounterProcessor
from app.processors.claude_ai.tool_result_processor import ToolResultProcessor
from app.processors.claude_ai.tool_call_event_processor import ToolCallEventProcessor
from app.processors.claude_ai.stop_sequences_processor import StopSequencesProcessor
from app.processors.claude_ai.model_injector_processor import ModelInjectorProcessor


class ClaudeAIPipeline(ProcessingPipeline):
    def __init__(self, processors: Optional[List[BaseProcessor]] = None):
        """
        Initialize the pipeline with processors.

        Args:
            processors: List of processors to use. If None, default processors are used.
        """
        processors = (
            [
                TestMessageProcessor(),
                ToolResultProcessor(),
                ClaudeAPIProcessor(),
                ClaudeWebProcessor(),
                EventParsingProcessor(),
                ModelInjectorProcessor(),
                StopSequencesProcessor(),
                ToolCallEventProcessor(),
                MessageCollectorProcessor(),
                TokenCounterProcessor(),
                StreamingResponseProcessor(),
                NonStreamingResponseProcessor(),
            ]
            if processors is None
            else processors
        )

        super().__init__(processors)

    async def process(
        self,
        context: ClaudeAIContext,
    ) -> ClaudeAIContext:
        """
        Process a Claude API request through the pipeline.

        Args:
            context: The processing context

        Returns:
            Updated context.

        Raises:
            Exception: If any processor fails or no response is generated
        """
        try:
            return await super().process(context)
        except Exception as e:
            if context.claude_session:
                await session_manager.remove_session(context.claude_session.session_id)
            logger.error(f"Pipeline processing failed: {e}")
            raise


================================================
FILE: app/processors/claude_ai/stop_sequences_processor.py
================================================
from typing import AsyncIterator, List
from loguru import logger

from app.processors.base import BaseProcessor
from app.processors.claude_ai import ClaudeAIContext
from app.models.streaming import (
    StreamingEvent,
    ContentBlockDeltaEvent,
    ContentBlockStopEvent,
    MessageDeltaEvent,
    MessageStopEvent,
    MessageDeltaData,
    TextDelta,
)
from app.services.session import session_manager


class StopSequencesProcessor(BaseProcessor):
    """Processor that handles stop sequences in streaming responses."""

    async def process(self, context: ClaudeAIContext) -> ClaudeAIContext:
        """
        Process streaming events to detect and handle stop sequences.

        Requires:
            - event_stream in context
            - messages_api_request in context (for stop_sequences)

        Produces:
            - Modified event_stream that stops when a stop sequence is detected
            - Injects MessageDelta and MessageStop events when stop sequence found
        """
        if not context.event_stream:
            logger.warning(
                "Skipping StopSequencesProcessor due to missing event_stream"
            )
            return context

        if not context.messages_api_request:
            logger.warning(
                "Skipping StopSequencesProcessor due to missing messages_api_request"
            )
            return context

        stop_sequences = context.messages_api_request.stop_sequences
        if not stop_sequences:
            logger.debug("No stop sequences configured, skipping processor")
            return context

        logger.debug(f"Setting up stop sequences processing for: {stop_sequences}")

        original_stream = context.event_stream
        new_stream = self._process_stop_sequences(
            original_stream, stop_sequences, context
        )
        context.event_stream = new_stream

        return context

    async def _process_stop_sequences(
        self,
        event_stream: AsyncIterator[StreamingEvent],
        stop_sequences: List[str],
        context: ClaudeAIContext,
    ) -> AsyncIterator[StreamingEvent]:
        """
        Process events and stop when a stop sequence is detected.
        Uses incremental matching with buffering.
        """
        stop_sequences_set = set(stop_sequences)

        buffer = ""
        current_index = 0

        # Track potential matches: (start_position, current_matched_text)
        potential_matches = []

        async for event in event_stream:
            if isinstance(event.root, ContentBlockDeltaEvent) and isinstance(
                event.root.delta, TextDelta
            ):
                text = event.root.delta.text
                current_index = event.root.index

                for char in text:
                    buffer += char
                    current_pos = len(buffer) - 1

                    potential_matches.append((current_pos, ""))

                    new_matches = []
                    for start_pos, matched_text in potential_matches:
                        extended_match = matched_text + char

                        could_match = False
                        for stop_seq in stop_sequences:
                            if stop_seq.startswith(extended_match):
                                could_match = True
                                break

                        if could_match:
                            new_matches.append((start_pos, extended_match))

                            if extended_match in stop_sequences_set:
                                logger.debug(
                                    f"Stop sequence detected: '{extended_match}'"
                                )

                                safe_text = buffer[:start_pos]

                                if safe_text:
                                    yield StreamingEvent(
                                        root=ContentBlockDeltaEvent(
                                            type="content_block_delta",
                                            index=current_index,
                                            delta=TextDelta(
                                                type="text_delta", text=safe_text
                                            ),
                                        )
                                    )

                                yield StreamingEvent(
                                    root=ContentBlockStopEvent(
                                        type="content_block_stop", index=current_index
                                    )
                                )

                                yield StreamingEvent(
                                    root=MessageDeltaEvent(
                                        type="message_delta",
                                        delta=MessageDeltaData(
                                            stop_reason="stop_sequence",
                                            stop_sequence=extended_match,
                                        ),
                                        usage=None,
                                    )
                                )

                                yield StreamingEvent(
                                    root=MessageStopEvent(type="message_stop")
                                )

                                if context.claude_session:
                                    await session_manager.remove_session(
                                        context.claude_session.session_id
                                    )

                                return

                    potential_matches = new_matches

                    if potential_matches:
                        earliest_start = min(
                            start_pos for start_pos, _ in potential_matches
                        )
                        safe_length = earliest_start
                    else:
                        safe_length = len(buffer)

                    if safe_length > 0:
                        safe_text = buffer[:safe_length]
                        yield StreamingEvent(
                            root=ContentBlockDeltaEvent(
                                type="content_block_delta",
                                index=current_index,
                                delta=TextDelta(type="text_delta", text=safe_text),
                            )
                        )

                        buffer = buffer[safe_length:]
                        new_matches = []
                        for start_pos, matched_text in potential_matches:
                            new_start = start_pos - safe_length
                            if new_start >= 0:
                                new_matches.append((new_start, matched_text))
                        potential_matches = new_matches

            else:
                # Non-text event - flush buffer and reset
                if buffer:
                    yield StreamingEvent(
                        root=ContentBlockDeltaEvent(
                            type="content_block_delta",
                            index=current_index,
                            delta=TextDelta(type="text_delta", text=buffer),
                        )
                    )
                    buffer = ""
                    potential_matches = []

                yield event
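
The buffering strategy in `_process_stop_sequences` — emit text only once it can no longer be the prefix of any stop sequence, and cut the stream at the first full match — can be exercised synchronously. A sketch with an illustrative `scan_for_stop` helper (not part of the codebase):

```python
def scan_for_stop(chunks, stop_sequences):
    """Feed text chunks char by char; return (emitted_text, matched_stop or None)."""
    stop_set = set(stop_sequences)
    buffer = ""
    emitted = []
    potential = []  # (start position in buffer, matched text so far)
    for chunk in chunks:
        for char in chunk:
            buffer += char
            potential.append((len(buffer) - 1, ""))
            survivors = []
            for start, matched in potential:
                extended = matched + char
                if any(s.startswith(extended) for s in stop_sequences):
                    if extended in stop_set:
                        # Full match: emit only the text before the stop sequence.
                        emitted.append(buffer[:start])
                        return "".join(emitted), extended
                    survivors.append((start, extended))
            potential = survivors
            # Text before the earliest live partial match is safe to emit.
            safe = min((s for s, _ in potential), default=len(buffer))
            if safe > 0:
                emitted.append(buffer[:safe])
                buffer = buffer[safe:]
                potential = [(s - safe, m) for s, m in potential]
    emitted.append(buffer)  # No match: flush whatever is still buffered.
    return "".join(emitted), None
```

Note the false-start case: text like `"SText"` against stop sequence `"STOP"` is briefly buffered, then released intact once the partial match dies.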


================================================
FILE: app/processors/claude_ai/streaming_response_processor.py
================================================
from loguru import logger

from fastapi.responses import StreamingResponse

from app.processors.base import BaseProcessor
from app.processors.claude_ai import ClaudeAIContext
from app.services.event_processing.event_serializer import EventSerializer


class StreamingResponseProcessor(BaseProcessor):
    """Processor that serializes event streams and creates a StreamingResponse."""

    def __init__(self):
        super().__init__()
        self.serializer = EventSerializer()

    async def process(self, context: ClaudeAIContext) -> ClaudeAIContext:
        """
        Serialize the event_stream and create a StreamingResponse.

        Requires:
            - event_stream in context

        Produces:
            - response in context

        This processor typically produces the final response, ending the pipeline for streaming requests.
        """
        if context.response:
            logger.debug("Skipping StreamingResponseProcessor due to existing response")
            return context

        if not context.event_stream:
            logger.warning(
                "Skipping StreamingResponseProcessor due to missing event_stream"
            )
            return context

        if (
            not context.messages_api_request
            or context.messages_api_request.stream is not True
        ):
            logger.debug(
                "Skipping StreamingResponseProcessor due to non-streaming request"
            )
            return context

        logger.info("Creating streaming response from event stream")

        sse_stream = self.serializer.serialize_stream(context.event_stream)

        context.response = StreamingResponse(
            sse_stream,
            media_type="text/event-stream",
            headers={
                "Cache-Control": "no-cache",
                "Connection": "keep-alive",
                "X-Accel-Buffering": "no",  # Disable nginx buffering
            },
        )

        return context


================================================
FILE: app/processors/claude_ai/tavern_test_message_processor.py
================================================
from loguru import logger
import uuid

from fastapi.responses import JSONResponse

from app.processors.base import BaseProcessor
from app.processors.claude_ai import ClaudeAIContext
from app.models.claude import (
    Message,
    Role,
    TextContent,
    Usage,
)


class TestMessageProcessor(BaseProcessor):
    """Processor that handles test messages."""

    async def process(self, context: ClaudeAIContext) -> ClaudeAIContext:
        """
        Check if this is a test message and respond immediately if so.

        Test message criteria:
        - Only one message in messages array
        - Message role is "user"
        - Message content is "Hi"
        - stream is False

        If it's a test message, returns a canned JSONResponse and stops the pipeline.
        """
        if not context.messages_api_request:
            return context

        request = context.messages_api_request

        if (
            len(request.messages) == 1
            and request.messages[0].role == Role.USER
            and request.stream is False
            and (
                (
                    isinstance(request.messages[0].content, str)
                    and request.messages[0].content == "Hi"
                )
                or (
                    isinstance(request.messages[0].content, list)
                    and len(request.messages[0].content) == 1
                    and isinstance(request.messages[0].content[0], TextContent)
                    and request.messages[0].content[0].text == "Hi"
                )
            )
        ):
            logger.debug("Test message detected, returning canned response")

            response = Message(
                id=f"msg_{uuid.uuid4().hex[:10]}",
                type="message",
                role="assistant",
                content=[
                    TextContent(type="text", text="Hello! How can I assist you today?")
                ],
                model=request.model,
                stop_reason="end_turn",
                stop_sequence=None,
                usage=Usage(input_tokens=1, output_tokens=9),
            )

            context.response = JSONResponse(
                content=response.model_dump(), status_code=200
            )

            context.metadata["stop_pipeline"] = True
            return context

        return context


================================================
FILE: app/processors/claude_ai/token_counter_processor.py
================================================
from typing import AsyncIterator
from loguru import logger
import tiktoken

from app.processors.base import BaseProcessor
from app.processors.claude_ai import ClaudeAIContext
from app.models.streaming import (
    MessageStartEvent,
    StreamingEvent,
    MessageDeltaEvent,
)
from app.models.claude import Usage
from app.utils.messages import process_messages

encoder = tiktoken.get_encoding("cl100k_base")


class TokenCounterProcessor(BaseProcessor):
    """Processor that estimates token usage when it's not provided by the API."""

    async def process(self, context: ClaudeAIContext) -> ClaudeAIContext:
        """
        Intercept MessageDeltaEvent and add token usage estimation if missing.

        Requires:
            - event_stream in context
            - messages_api_request in context (for input token counting)
            - collected_message in context (for output token counting)

        Produces:
            - event_stream with updated MessageDeltaEvent containing usage
        """
        if not context.event_stream:
            logger.warning("Skipping TokenCounterProcessor due to missing event_stream")
            return context

        if not context.messages_api_request:
            logger.warning(
                "Skipping TokenCounterProcessor due to missing messages_api_request"
            )
            return context

        logger.debug("Setting up token counting for stream")

        original_stream = context.event_stream
        new_stream = self._count_tokens_generator(original_stream, context)
        context.event_stream = new_stream

        return context

    async def _count_tokens_generator(
        self,
        event_stream: AsyncIterator[StreamingEvent],
        context: ClaudeAIContext,
    ) -> AsyncIterator[StreamingEvent]:
        """
        Generator that adds token usage to MessageDeltaEvent if missing.
        """
        # Pre-calculate input tokens once
        input_tokens = await self._calculate_input_tokens(context)

        async for event in event_stream:
            if (
                isinstance(event.root, MessageStartEvent)
                and not event.root.message.usage
            ):
                usage = Usage(
                    input_tokens=input_tokens,
                    output_tokens=1,
                    cache_creation_input_tokens=0,
                    cache_read_input_tokens=0,
                )

                event.root.message.usage = usage
                if context.collected_message:
                    context.collected_message.usage = usage

                logger.debug(f"Added token usage estimation: input={input_tokens}")

            if isinstance(event.root, MessageDeltaEvent) and not event.root.usage:
                output_tokens = await self._calculate_output_tokens(context)

                usage = Usage(
                    input_tokens=input_tokens,
                    output_tokens=output_tokens,
                    cache_creation_input_tokens=0,
                    cache_read_input_tokens=0,
                )

                event.root.usage = usage
                if context.collected_message:
                    context.collected_message.usage = usage

                logger.debug(
                    f"Added token usage estimation: input={input_tokens}, output={output_tokens}"
                )

            yield event

    async def _calculate_input_tokens(self, context: ClaudeAIContext) -> int:
        """Calculate input tokens from the request messages."""
        if not context.messages_api_request:
            return 0

        merged_text, _ = await process_messages(
            context.messages_api_request.messages, context.messages_api_request.system
        )

        try:
            tokens = len(encoder.encode(merged_text, disallowed_special=()))
        except Exception:
            logger.warning("Tiktoken encoding failed for input, falling back to estimation")
            tokens = len(merged_text) // 4

        logger.debug(f"Calculated {tokens} input tokens")
        return tokens

    async def _calculate_output_tokens(self, context: ClaudeAIContext) -> int:
        """Calculate output tokens from the collected message."""
        if not context.collected_message:
            return 0

        merged_text, _ = await process_messages([context.collected_message])

        try:
            tokens = len(encoder.encode(merged_text, disallowed_special=()))
        except Exception:
            logger.warning("Tiktoken encoding failed for output, falling back to estimation")
            tokens = len(merged_text) // 4

        logger.debug(f"Calculated {tokens} output tokens")
        return tokens
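
The estimate-with-fallback pattern shared by both helpers can be pulled into a single function. A sketch — `estimate_tokens` is an illustrative name, tiktoken is treated as optional, and any failure falls back to the same rough 4-characters-per-token heuristic used above:

```python
def estimate_tokens(text: str) -> int:
    """Prefer tiktoken's cl100k_base encoding; fall back to a rough heuristic."""
    try:
        import tiktoken  # Optional here; any failure falls through to the heuristic.
        encoder = tiktoken.get_encoding("cl100k_base")
        return len(encoder.encode(text, disallowed_special=()))
    except Exception:
        return len(text) // 4  # ~4 characters per token, as in the fallback above.
```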


================================================
FILE: app/processors/claude_ai/tool_call_event_processor.py
================================================
from typing import AsyncIterator, Optional
from loguru import logger

from app.processors.base import BaseProcessor
from app.processors.claude_ai import ClaudeAIContext
from app.models.streaming import (
    StreamingEvent,
    ContentBlockStartEvent,
    ContentBlockStopEvent,
    MessageDeltaEvent,
    MessageStopEvent,
    MessageDeltaData,
)
from app.models.claude import ToolResultContent, ToolUseContent
from app.services.tool_call import tool_call_manager


class ToolCallEventProcessor(BaseProcessor):
    """Processor that handles tool use events in the streaming response."""

    async def process(self, context: ClaudeAIContext) -> ClaudeAIContext:
        """
        Intercept tool use content blocks and inject MessageDelta/MessageStop events.

        Requires:
            - event_stream in context
            - claude_session in context

        Produces:
            - Modified event_stream with injected events for tool calls
            - Pauses session when tool call is detected
        """
        if not context.event_stream:
            logger.warning(
                "Skipping ToolCallEventProcessor due to missing event_stream"
            )
            return context

        if not context.claude_session:
            logger.warning("Skipping ToolCallEventProcessor due to missing session")
            return context

        logger.debug("Setting up tool call event processing")

        original_stream = context.event_stream
        new_stream = self._process_tool_events(original_stream, context)
        context.event_stream = new_stream

        return context

    async def _process_tool_events(
        self,
        event_stream: AsyncIterator[StreamingEvent],
        context: ClaudeAIContext,
    ) -> AsyncIterator[StreamingEvent]:
        """
        Process events and inject MessageDelta/MessageStop when tool use is detected.
        """
        current_tool_use_id: Optional[str] = None
        tool_use_detected = False
        content_block_index: Optional[int] = None
        tool_result_detected = False

        async for event in event_stream:
            # Check for ContentBlockStartEvent with tool_use type
            if isinstance(event.root, ContentBlockStartEvent):
                if isinstance(event.root.content_block, ToolUseContent):
                    current_tool_use_id = event.root.content_block.id
                    content_block_index = event.root.index
                    tool_use_detected = True
                    logger.debug(
                        f"Detected tool use start: {current_tool_use_id} "
                        f"(name: {event.root.content_block.name})"
                    )
                elif isinstance(event.root.content_block, ToolResultContent):
                    logger.debug(
                        f"Detected tool result: {event.root.content_block.tool_use_id}"
                    )
                    tool_result_detected = True

            # Yield the original event
            if tool_result_detected:
                logger.debug("Skipping tool result content block")
            else:
                yield event

            # Check for ContentBlockStopEvent for a tool use block
            if isinstance(event.root, ContentBlockStopEvent):
                if tool_result_detected:
                    logger.debug("Tool result block ended")
                    tool_result_detected = False
                if (
                    tool_use_detected
                    and content_block_index is not None
                    and event.root.index == content_block_index
                ):
                    logger.debug(f"Tool use block ended: {current_tool_use_id}")

                    message_delta = MessageDeltaEvent(
                        type="message_delta",
                        delta=MessageDeltaData(stop_reason="tool_use"),
                        usage=None,
                    )
                    yield StreamingEvent(root=message_delta)

                    message_stop = MessageStopEvent(type="message_stop")
                    yield StreamingEvent(root=message_stop)

                    # Register the tool call
                    if current_tool_use_id and context.claude_session:
                        tool_call_manager.register_tool_call(
                            tool_use_id=current_tool_use_id,
                            session_id=context.claude_session.session_id,
                            message_id=context.collected_message.id
                            if context.collected_message
                            else None,
                        )

                        logger.info(
                            f"Registered tool call {current_tool_use_id} for session {context.claude_session.session_id}"
                        )

                    current_tool_use_id = None
                    tool_use_detected = False
                    content_block_index = None

                    break


================================================
FILE: app/processors/claude_ai/tool_result_processor.py
================================================
import uuid
from loguru import logger

from app.processors.base import BaseProcessor
from app.processors.claude_ai import ClaudeAIContext
from app.models.claude import TextContent, ToolResultContent
from app.models.streaming import MessageStartEvent, StreamingEvent
from app.services.tool_call import tool_call_manager
from app.services.session import session_manager
from app.services.event_processing import EventSerializer

event_serializer = EventSerializer()


class ToolResultProcessor(BaseProcessor):
    """Processor that handles tool result messages and resumes paused sessions."""

    async def process(self, context: ClaudeAIContext) -> ClaudeAIContext:
        """
        Check if the last message is a tool result and handle accordingly.

        Requires:
            - messages_api_request in context

        Produces:
            - Resumes paused session if tool result matches
            - Sets event_stream from resumed session
            - Skips normal request building/sending
        """
        if not context.messages_api_request:
            logger.warning(
                "Skipping ToolResultProcessor due to missing messages_api_request"
            )
            return context

        messages = context.messages_api_request.messages
        if not messages:
            return context

        last_message = messages[-1]

        if last_message.role != "user":
            return context

        if isinstance(last_message.content, str):
            return context

        # Find tool result content block
        last_content_block = last_message.content[-1]
        if not isinstance(last_content_block, ToolResultContent):
            return context

        tool_result = last_content_block

        logger.debug(f"Found tool result for tool_use_id: {tool_result.tool_use_id}")

        # Check if we have a pending tool call for this ID
        tool_call_state = tool_call_manager.get_tool_call(tool_result.tool_use_id)
        if not tool_call_state:
            logger.debug(
                f"No pending tool call found for tool_use_id: {tool_result.tool_use_id}"
            )
            return context

        # Get the session
        session = await session_manager.get_session(tool_call_state.session_id)
        if not session:
            logger.error(
                f"Session {tool_call_state.session_id} not found for tool call {tool_result.tool_use_id}"
            )
            tool_call_manager.complete_tool_call(tool_result.tool_use_id)
            return context

        if isinstance(tool_result.content, str):
            tool_result.content = [TextContent(type="text", text=tool_result.content)]
        tool_result_payload = tool_result.model_dump()

        await session.send_tool_result(tool_result_payload)
        logger.info(
            f"Sent tool result for {tool_result.tool_use_id} to session {session.session_id}"
        )

        if not session.sse_stream:
            logger.error(f"No stream available for session {session.session_id}")
            tool_call_manager.complete_tool_call(tool_result.tool_use_id)
            return context

        # Continue with the existing stream
        resumed_stream = session.sse_stream

        message_start_event = MessageStartEvent(
            type="message_start",
            message=context.collected_message
            if context.collected_message
            else {
                "id": tool_call_state.message_id or str(uuid.uuid4()),
                "type": "message",
                "role": "assistant",
                "content": [],
                "model": context.messages_api_request.model,
            },
        )

        # Create a generator that yields the message start event followed by the resumed stream
        async def resumed_event_stream():
            yield event_serializer.serialize_event(
                StreamingEvent(root=message_start_event)
            )
            async for event in resumed_stream:
                yield event

        context.original_stream = resumed_event_stream()
        context.claude_session = session

        tool_call_manager.complete_tool_call(tool_result.tool_use_id)

        # Skip the normal Claude AI processor
        context.metadata["skip_processors"] = [
            "ClaudeAPIProcessor",
            "ClaudeWebProcessor",
        ]

        return context


================================================
FILE: app/processors/pipeline.py
================================================
from typing import List, Optional
from loguru import logger

from app.processors.base import BaseContext, BaseProcessor


class ProcessingPipeline(BaseProcessor):
    """
    Main pipeline for processing Claude requests.
    """

    def __init__(self, processors: Optional[List[BaseProcessor]] = None):
        """
        Initialize the pipeline with processors.

        Args:
            processors: List of processors to use. Defaults to an empty list.
        """
        self.processors = processors or []

        logger.debug(f"Initialized pipeline with {len(self.processors)} processors")
        for processor in self.processors:
            logger.debug(f"  - {processor.name}")

    async def process(self, context: BaseContext) -> BaseContext:
        """
        Process a request through the pipeline.

        Args:
            context: The processing context

        Returns:
            Updated context.
        """

        logger.debug("Starting pipeline processing")

        # Process through each processor
        for i, processor in enumerate(self.processors):
            if processor.name in context.metadata.get("skip_processors", []):
                logger.debug(
                    f"Skipping processor {processor.name} due to being in skip_processors list"
                )
                continue

            logger.debug(
                f"Running processor {i + 1}/{len(self.processors)}: {processor.name}"
            )

            context = await processor.process(context)

            if context.metadata.get("stop_pipeline", False):
                logger.debug(f"Pipeline stopped by {processor.name}")
                break

        logger.debug("Pipeline processing completed successfully")
        return context
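
The two metadata keys the pipeline honors — `skip_processors` to bypass named stages and `stop_pipeline` to end early — can be demonstrated with a minimal stand-alone sketch (the `Context` and `Processor` classes here are simplified stand-ins, not the real base classes):

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class Context:
    metadata: dict = field(default_factory=dict)
    trace: list = field(default_factory=list)

class Processor:
    def __init__(self, name):
        self.name = name

    async def process(self, ctx):
        ctx.trace.append(self.name)
        return ctx

class Stopper(Processor):
    async def process(self, ctx):
        ctx = await super().process(ctx)
        ctx.metadata["stop_pipeline"] = True  # Halts the pipeline after this stage.
        return ctx

async def run_pipeline(processors, ctx):
    for p in processors:
        if p.name in ctx.metadata.get("skip_processors", []):
            continue  # Bypassed by name, as ToolResultProcessor arranges upstream.
        ctx = await p.process(ctx)
        if ctx.metadata.get("stop_pipeline", False):
            break
    return ctx

ctx = Context(metadata={"skip_processors": ["B"]})
ctx = asyncio.run(
    run_pipeline([Processor("A"), Processor("B"), Stopper("C"), Processor("D")], ctx)
)
# B is skipped by name; C stops the pipeline, so D never runs.
```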


================================================
FILE: app/services/__init__.py
================================================


================================================
FILE: app/services/account.py
================================================
import asyncio
from datetime import datetime, UTC
from typing import List, Optional, Dict, Set

from collections import defaultdict
from loguru import logger
import threading
import json
import uuid

from app.core.config import settings
from app.core.exceptions import NoAccountsAvailableError
from app.core.account import Account, AccountStatus, AuthType, OAuthToken
from app.services.oauth import oauth_authenticator


class AccountManager:
    """
    Singleton manager for Claude.ai accounts with load balancing and rate limit recovery.
    Supports both cookie and OAuth authentication.
    """

    _instance: Optional["AccountManager"] = None
    _lock = threading.Lock()

    def __new__(cls):
        """Implement singleton pattern."""
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    cls._instance = super().__new__(cls)
        return cls._instance

    def __init__(self):
        """Initialize the AccountManager."""
        self._accounts: Dict[str, Account] = {}  # organization_uuid -> Account
        self._cookie_to_uuid: Dict[str, str] = {}  # cookie_value -> organization_uuid
        self._session_accounts: Dict[str, str] = {}  # session_id -> organization_uuid
        self._account_sessions: Dict[str, Set[str]] = defaultdict(
            set
        )  # organization_uuid -> set of session_ids
        self._account_task: Optional[asyncio.Task] = None
        self._max_sessions_per_account = settings.max_sessions_per_cookie
        self._account_task_interval = settings.account_task_interval

        logger.info("AccountManager initialized")

    async def add_account(
        self,
        cookie_value: Optional[str] = None,
        oauth_token: Optional[OAuthToken] = None,
        organization_uuid: Optional[str] = None,
        capabilities: Optional[List[str]] = None,
    ) -> Account:
        """Add a new account to the manager.

        Args:
            cookie_value: The cookie value (optional)
            oauth_token: The OAuth token (optional)
            organization_uuid: The organization UUID (optional, will be fetched or generated if not provided)
            capabilities: The account capabilities (optional)

        Returns:
            The new or existing Account instance.

        Raises:
            ValueError: If neither cookie_value nor oauth_token is provided
        """
        if not cookie_value and not oauth_token:
            raise ValueError("Either cookie_value or oauth_token must be provided")

        if cookie_value and cookie_value in self._cookie_to_uuid:
            return self._accounts[self._cookie_to_uuid[cookie_value]]

        if cookie_value and (not organization_uuid or not capabilities):
            (
                fetched_uuid,
                capabilities,
            ) = await oauth_authenticator.get_organization_info(cookie_value)
            if fetched_uuid:
                organization_uuid = fetched_uuid

        if organization_uuid and organization_uuid in self._accounts:
            existing_account = self._accounts[organization_uuid]

            if cookie_value and existing_account.cookie_value != cookie_value:
                if existing_account.cookie_value:
                    del self._cookie_to_uuid[existing_account.cookie_value]
                existing_account.cookie_value = cookie_value
                self._cookie_to_uuid[cookie_value] = organization_uuid
            return existing_account

        if not organization_uuid:
            organization_uuid = str(uuid.uuid4())
            logger.info(f"Generated new organization UUID: {organization_uuid}")

        # Create new account
        if cookie_value and oauth_token:
            auth_type = AuthType.BOTH
        elif cookie_value:
            auth_type = AuthType.COOKIE_ONLY
        else:
            auth_type = AuthType.OAUTH_ONLY

        account = Account(
            organization_uuid=organization_uuid,
            capabilities=capabilities,
            cookie_value=cookie_value,
            oauth_token=oauth_token,
            auth_type=auth_type,
        )
        self._accounts[organization_uuid] = account
        self.save_accounts()

        if cookie_value:
            self._cookie_to_uuid[cookie_value] = organization_uuid

        logger.info(
            f"Added new account: {organization_uuid[:8]}... "
            f"(auth_type: {auth_type.value}, "
            f"cookie: {cookie_value[:20] + '...' if cookie_value else 'None'}, "
            f"oauth: {'Yes' if oauth_token else 'No'})"
        )

        if auth_type == AuthType.COOKIE_ONLY:
            asyncio.create_task(self._attempt_oauth_authentication(account))

        return account

    async def remove_account(self, organization_uuid: str) -> None:
        """Remove an account from the manager."""
        if organization_uuid in self._accounts:
            account = self._accounts[organization_uuid]
            sessions_to_remove = list(
                self._account_sessions.get(organization_uuid, set())
            )

            for session_id in sessions_to_remove:
                if session_id in self._session_accounts:
                    del self._session_accounts[session_id]

            if account.cookie_value and account.cookie_value in self._cookie_to_uuid:
                del self._cookie_to_uuid[account.cookie_value]

            del self._accounts[organization_uuid]

            if organization_uuid in self._account_sessions:
                del self._account_sessions[organization_uuid]

            logger.info(f"Removed account: {organization_uuid[:8]}...")
            self.save_accounts()

    async def get_account_for_session(
        self,
        session_id: str,
        is_pro: Optional[bool] = None,
        is_max: Optional[bool] = None,
    ) -> Account:
        """
        Get an available account for the session with load balancing.

        Args:
            session_id: Unique identifier for the session
            is_pro: Filter by pro capability. None means any.
            is_max: Filter by max capability. None means any.

        Returns:
            Account instance if available
        """
        # Reuse the account already assigned to this session, if it is still valid
        if session_id in self._session_accounts:
            organization_uuid = self._session_accounts[session_id]
            if organization_uuid in self._accounts:
                account = self._accounts[organization_uuid]
                if account.status == AccountStatus.VALID:
                    return account
                else:
                    del self._session_accounts[session_id]
                    self._account_sessions[organization_uuid].discard(session_id)

        best_account = None
        min_sessions = float("inf")
        earliest_last_used = None

        for organization_uuid, account in self._accounts.items():
            if account.status != AccountStatus.VALID:
                continue

            # Filter by auth type if specified
            if account.auth_type not in [AuthType.BOTH, AuthType.COOKIE_ONLY]:
                continue

            # Filter by capabilities if specified
            if is_pro is not None and account.is_pro != is_pro:
                continue
            if is_max is not None and account.is_max != is_max:
                continue

            session_count = len(self._account_sessions[organization_uuid])
            if session_count >= self._max_sessions_per_account:
                continue

            # Select account with least sessions
            # If multiple accounts have the same least sessions, select the one with earliest last_used
            if session_count < min_sessions or (
                session_count == min_sessions
                and (
                    earliest_last_used is not None
                    and account.last_used < earliest_last_used
                )
            ):
                min_sessions = session_count
                earliest_last_used = account.last_used
                best_account = account

        if best_account:
            self._session_accounts[session_id] = best_account.organization_uuid
            self._account_sessions[best_account.organization_uuid].add(session_id)

            logger.debug(
                f"Assigned account to session {session_id}, "
                f"account now has {len(self._account_sessions[best_account.organization_uuid])} sessions"
            )

            return best_account

        raise NoAccountsAvailableError()

    async def get_account_for_oauth(
        self,
        is_pro: Optional[bool] = None,
        is_max: Optional[bool] = None,
    ) -> Account:
        """
        Get an available account for OAuth authentication.

        Args:
            is_pro: Filter by pro capability. None means any.
            is_max: Filter by max capability. None means any.

        Returns:
            Account instance if available
        """
        earliest_account = None
        earliest_last_used = None

        for account in self._accounts.values():
            if account.status != AccountStatus.VALID:
                continue

            if account.auth_type not in [AuthType.OAUTH_ONLY, AuthType.BOTH]:
                continue

            # Filter by capabilities if specified
            if is_pro is not None and account.is_pro != is_pro:
                continue
            if is_max is not None and account.is_max != is_max:
                continue

            if earliest_last_used is None or account.last_used < earliest_last_used:
                earliest_last_used = account.last_used
                earliest_account = account

        if earliest_account:
            logger.debug(
                f"Selected OAuth account: {earliest_account.organization_uuid[:8]}... "
                f"(last used: {earliest_account.last_used.isoformat()})"
            )
            return earliest_account

        raise NoAccountsAvailableError()

    async def get_account_by_id(self, account_id: str) -> Optional[Account]:
        """
        Get an account by its organization UUID.

        Args:
            account_id: The organization UUID of the account

        Returns:
            Account instance if found and valid, None otherwise
        """
        account = self._accounts.get(account_id)
        
        if account and account.status == AccountStatus.VALID:
            logger.debug(f"Retrieved account by ID: {account_id[:8]}...")
            return account

        return None
SYMBOL INDEX (370 symbols across 46 files)

FILE: app/api/routes/accounts.py
  class OAuthTokenCreate (line 14) | class OAuthTokenCreate(BaseModel):
  class AccountCreate (line 20) | class AccountCreate(BaseModel):
  class AccountUpdate (line 27) | class AccountUpdate(BaseModel):
  class OAuthCodeExchange (line 34) | class OAuthCodeExchange(BaseModel):
  class AccountResponse (line 41) | class AccountResponse(BaseModel):
  function list_accounts (line 58) | async def list_accounts(_: AdminAuthDep):
  function get_account (line 84) | async def get_account(organization_uuid: str, _: AdminAuthDep):
  function create_account (line 108) | async def create_account(account_data: AccountCreate, _: AdminAuthDep):
  function update_account (line 142) | async def update_account(
  function delete_account (line 205) | async def delete_account(organization_uuid: str, _: AdminAuthDep):
  function exchange_oauth_code (line 216) | async def exchange_oauth_code(exchange_data: OAuthCodeExchange, _: Admin...

FILE: app/api/routes/claude.py
  function create_message (line 29) | async def create_message(

FILE: app/api/routes/settings.py
  class SettingsRead (line 11) | class SettingsRead(BaseModel):
  class SettingsUpdate (line 37) | class SettingsUpdate(BaseModel):
  function get_settings (line 67) | async def get_settings(_: AdminAuthDep) -> Settings:
  function update_settings (line 73) | async def update_settings(_: AdminAuthDep, updates: SettingsUpdate) -> S...

FILE: app/api/routes/statistics.py
  class AccountStats (line 9) | class AccountStats(BaseModel):
  class StatisticsResponse (line 17) | class StatisticsResponse(BaseModel):
  function get_statistics (line 26) | async def get_statistics(_: AdminAuthDep):

FILE: app/core/account.py
  class AccountStatus (line 14) | class AccountStatus(str, Enum):
  class AuthType (line 20) | class AuthType(str, Enum):
  class OAuthToken (line 27) | class OAuthToken:
    method to_dict (line 34) | def to_dict(self) -> dict:
    method from_dict (line 43) | def from_dict(cls, data: dict) -> "OAuthToken":
  class Account (line 52) | class Account:
    method __init__ (line 55) | def __init__(
    method __enter__ (line 72) | def __enter__(self) -> "Account":
    method __exit__ (line 77) | def __exit__(self, exc_type, exc_val, exc_tb):
    method save (line 109) | def save(self) -> None:
    method to_dict (line 114) | def to_dict(self) -> dict:
    method from_dict (line 128) | def from_dict(cls, data: dict) -> "Account":
    method is_pro (line 148) | def is_pro(self) -> bool:
    method is_max (line 161) | def is_max(self) -> bool:
    method __repr__ (line 168) | def __repr__(self) -> str:

FILE: app/core/claude_session.py
  class ClaudeWebSession (line 11) | class ClaudeWebSession:
    method __init__ (line 12) | def __init__(self, session_id: str):
    method initialize (line 19) | async def initialize(self):
    method stream (line 25) | async def stream(self, response: Response) -> AsyncIterator[str]:
    method cleanup (line 45) | async def cleanup(self):
    method _ensure_conversation_initialized (line 56) | async def _ensure_conversation_initialized(self) -> None:
    method update_activity (line 63) | def update_activity(self):
    method send_message (line 67) | async def send_message(self, payload: Dict[str, Any]) -> AsyncIterator...
    method upload_file (line 82) | async def upload_file(
    method send_tool_result (line 88) | async def send_tool_result(self, payload: Dict[str, Any]) -> None:
    method set_paprika_mode (line 97) | async def set_paprika_mode(self, mode: Optional[str]) -> None:

FILE: app/core/config.py
  class Settings (line 9) | class Settings(BaseSettings):
    method settings_customise_sources (line 19) | def settings_customise_sources(
    method _json_config_settings (line 44) | def _json_config_settings(cls) -> Dict[str, Any]:
    method parse_comma_separated (line 265) | def parse_comma_separated(cls, v: str | List[str]) -> List[str]:

FILE: app/core/error_handler.py
  class ErrorHandler (line 10) | class ErrorHandler:
    method get_language_from_request (line 16) | def get_language_from_request(request: Request) -> str:
    method format_error_response (line 22) | def format_error_response(
    method handle_app_exception (line 45) | async def handle_app_exception(request: Request, exc: AppError) -> JSO...
  function app_exception_handler (line 81) | async def app_exception_handler(request: Request, exc: AppError) -> JSON...

FILE: app/core/exceptions.py
  class AppError (line 5) | class AppError(Exception):
    method __init__ (line 10) | def __init__(
    method __str__ (line 27) | def __str__(self):
  class InternalServerError (line 31) | class InternalServerError(AppError):
    method __init__ (line 32) | def __init__(self, context: Optional[Dict[str, Any]] = None):
  class NoAPIKeyProvidedError (line 41) | class NoAPIKeyProvidedError(AppError):
    method __init__ (line 42) | def __init__(self, context: Optional[Dict[str, Any]] = None):
  class InvalidAPIKeyError (line 51) | class InvalidAPIKeyError(AppError):
    method __init__ (line 52) | def __init__(self, context: Optional[Dict[str, Any]] = None):
  class NoAccountsAvailableError (line 61) | class NoAccountsAvailableError(AppError):
    method __init__ (line 62) | def __init__(self, context: Optional[Dict[str, Any]] = None):
  class ClaudeRateLimitedError (line 72) | class ClaudeRateLimitedError(AppError):
    method __init__ (line 75) | def __init__(self, resets_at: datetime, context: Optional[Dict[str, An...
  class CloudflareBlockedError (line 88) | class CloudflareBlockedError(AppError):
    method __init__ (line 89) | def __init__(self, context: Optional[Dict[str, Any]] = None):
  class OrganizationDisabledError (line 98) | class OrganizationDisabledError(AppError):
    method __init__ (line 99) | def __init__(self, context: Optional[Dict[str, Any]] = None):
  class InvalidModelNameError (line 109) | class InvalidModelNameError(AppError):
    method __init__ (line 110) | def __init__(self, model_name: str, context: Optional[Dict[str, Any]] ...
  class ClaudeAuthenticationError (line 121) | class ClaudeAuthenticationError(AppError):
    method __init__ (line 122) | def __init__(self, context: Optional[Dict[str, Any]] = None):
  class ClaudeHttpError (line 131) | class ClaudeHttpError(AppError):
    method __init__ (line 132) | def __init__(
  class NoValidMessagesError (line 156) | class NoValidMessagesError(AppError):
    method __init__ (line 157) | def __init__(self, context: Optional[Dict[str, Any]] = None):
  class ExternalImageDownloadError (line 166) | class ExternalImageDownloadError(AppError):
    method __init__ (line 167) | def __init__(self, url: str, context: Optional[Dict[str, Any]] = None):
  class ExternalImageNotAllowedError (line 178) | class ExternalImageNotAllowedError(AppError):
    method __init__ (line 179) | def __init__(self, url: str, context: Optional[Dict[str, Any]] = None):
  class NoResponseError (line 190) | class NoResponseError(AppError):
    method __init__ (line 191) | def __init__(self, context: Optional[Dict[str, Any]] = None):
  class OAuthExchangeError (line 200) | class OAuthExchangeError(AppError):
    method __init__ (line 201) | def __init__(self, reason: str, context: Optional[Dict[str, Any]] = No...
  class OrganizationInfoError (line 212) | class OrganizationInfoError(AppError):
    method __init__ (line 213) | def __init__(self, reason: str, context: Optional[Dict[str, Any]] = No...
  class CookieAuthorizationError (line 224) | class CookieAuthorizationError(AppError):
    method __init__ (line 225) | def __init__(self, reason: str, context: Optional[Dict[str, Any]] = No...
  class OAuthAuthenticationNotAllowedError (line 236) | class OAuthAuthenticationNotAllowedError(AppError):
    method __init__ (line 237) | def __init__(self, context: Optional[Dict[str, Any]] = None):
  class ClaudeStreamingError (line 246) | class ClaudeStreamingError(AppError):
    method __init__ (line 247) | def __init__(
  class NoMessageError (line 267) | class NoMessageError(AppError):
    method __init__ (line 268) | def __init__(self, context: Optional[Dict[str, Any]] = None):

FILE: app/core/external/claude_client.py
  class ClaudeWebClient (line 26) | class ClaudeWebClient:
    method __init__ (line 29) | def __init__(self, account: Account):
    method initialize (line 34) | async def initialize(self):
    method cleanup (line 43) | async def cleanup(self):
    method _build_headers (line 48) | def _build_headers(
    method _request (line 67) | async def _request(
    method create_conversation (line 129) | async def create_conversation(self) -> str:
    method set_paprika_mode (line 151) | async def set_paprika_mode(self, conv_uuid: str, mode: Optional[str]) ...
    method upload_file (line 161) | async def upload_file(
    method send_message (line 173) | async def send_message(self, payload: Dict[str, Any], conv_uuid: str) ...
    method send_tool_result (line 190) | async def send_tool_result(self, payload: Dict[str, Any], conv_uuid: s...
    method delete_conversation (line 199) | async def delete_conversation(self, conv_uuid: str) -> None:

FILE: app/core/http_client.py
  class Response (line 52) | class Response(ABC):
    method status_code (line 57) | def status_code(self) -> int:
    method json (line 62) | async def json(self) -> Any:
    method headers (line 68) | def headers(self) -> Dict[str, str]:
    method aiter_bytes (line 73) | def aiter_bytes(self, chunk_size: Optional[int] = None) -> AsyncIterat...
  class CurlResponseWrapper (line 78) | class CurlResponseWrapper(Response):
    method __init__ (line 81) | def __init__(self, response: "CurlResponse", stream: bool = False):
    method status_code (line 86) | def status_code(self) -> int:
    method json (line 89) | async def json(self) -> Any:
    method headers (line 99) | def headers(self) -> Dict[str, str]:
    method aiter_bytes (line 102) | async def aiter_bytes(
  class HttpxResponse (line 110) | class HttpxResponse(Response):
    method __init__ (line 113) | def __init__(self, response: httpx.Response):
    method status_code (line 117) | def status_code(self) -> int:
    method json (line 120) | async def json(self) -> Any:
    method headers (line 125) | def headers(self) -> Dict[str, str]:
    method aiter_bytes (line 128) | async def aiter_bytes(
  class RnetResponse (line 138) | class RnetResponse(Response):
    method __init__ (line 141) | def __init__(self, response: "rnet.Response"):
    method status_code (line 145) | def status_code(self) -> int:
    method json (line 148) | async def json(self) -> Any:
    method headers (line 152) | def headers(self) -> Dict[str, str]:
    method aiter_bytes (line 160) | async def aiter_bytes(
  class AsyncSession (line 169) | class AsyncSession(ABC):
    method request (line 173) | async def request(
    method close (line 187) | async def close(self):
    method __aenter__ (line 191) | async def __aenter__(self):
    method __aexit__ (line 194) | async def __aexit__(self, exc_type, exc_val, exc_tb):
  class CurlAsyncSessionWrapper (line 200) | class CurlAsyncSessionWrapper(AsyncSession):
    method __init__ (line 203) | def __init__(
    method process_files (line 217) | def process_files(self, files: dict) -> curl_cffi.CurlMime:
    method request (line 258) | async def request(
    method close (line 294) | async def close(self):
  class RnetAsyncSession (line 300) | class RnetAsyncSession(AsyncSession):
    method __init__ (line 303) | def __init__(
    method request (line 342) | async def request(
    method close (line 429) | async def close(self):
  class HttpxAsyncSession (line 437) | class HttpxAsyncSession(AsyncSession):
    method __init__ (line 440) | def __init__(
    method stream (line 453) | async def stream(
    method request (line 494) | async def request(
    method close (line 526) | async def close(self):
  function create_session (line 530) | def create_session(
  function create_plain_session (line 566) | def create_plain_session(
  function download_image (line 605) | async def download_image(url: str, timeout: int = 30) -> Tuple[bytes, str]:

FILE: app/core/static.py
  function register_static_routes (line 9) | def register_static_routes(app: FastAPI):

FILE: app/dependencies/auth.py
  function get_api_key (line 21) | async def get_api_key(
  function verify_api_key (line 42) | async def verify_api_key(
  function verify_admin_api_key (line 61) | async def verify_admin_api_key(

FILE: app/main.py
  function lifespan (line 19) | async def lifespan(app: FastAPI):
  function health (line 78) | async def health():
  function main (line 84) | def main():

FILE: app/models/claude.py
  class Role (line 6) | class Role(str, Enum):
  class ImageType (line 11) | class ImageType(str, Enum):
  class Base64ImageSource (line 19) | class Base64ImageSource(BaseModel):
  class URLImageSource (line 25) | class URLImageSource(BaseModel):
  class FileImageSource (line 30) | class FileImageSource(BaseModel):
  class WebSearchResult (line 36) | class WebSearchResult(BaseModel):
  class CacheControl (line 46) | class CacheControl(BaseModel):
  class TextContent (line 51) | class TextContent(BaseModel):
  class ImageContent (line 58) | class ImageContent(BaseModel):
  class ThinkingContent (line 65) | class ThinkingContent(BaseModel):
  class RedactedThinkingContent (line 72) | class RedactedThinkingContent(BaseModel):
  class ToolUseContent (line 78) | class ToolUseContent(BaseModel):
  class ToolResultContent (line 87) | class ToolResultContent(BaseModel):
  class ServerToolUseContent (line 96) | class ServerToolUseContent(BaseModel):
  class WebSearchToolResultContent (line 105) | class WebSearchToolResultContent(BaseModel):
  class InputMessage (line 125) | class InputMessage(BaseModel):
  class ThinkingOptions (line 131) | class ThinkingOptions(BaseModel):
  class ToolChoice (line 137) | class ToolChoice(BaseModel):
  class CustomToolSpec (line 144) | class CustomToolSpec(BaseModel):
  class Tool (line 150) | class Tool(BaseModel):
  class OutputConfig (line 159) | class OutputConfig(BaseModel):
  class OutputFormat (line 166) | class OutputFormat(BaseModel):
  class ServerToolUsage (line 174) | class ServerToolUsage(BaseModel):
  class Usage (line 179) | class Usage(BaseModel):
  class MessagesAPIRequest (line 188) | class MessagesAPIRequest(BaseModel):
    method validate_thinking_tokens (line 207) | def validate_thinking_tokens(self) -> "MessagesAPIRequest":
  class Message (line 219) | class Message(BaseModel):

FILE: app/models/internal.py
  class Attachment (line 6) | class Attachment(BaseModel):
    method from_text (line 13) | def from_text(cls, content: str) -> "Attachment":
  class ClaudeWebRequest (line 23) | class ClaudeWebRequest(BaseModel):
  class UploadResponse (line 34) | class UploadResponse(BaseModel):

FILE: app/models/streaming.py
  class BaseEvent (line 8) | class BaseEvent(BaseModel):
  class TextDelta (line 14) | class TextDelta(BaseModel):
  class InputJsonDelta (line 20) | class InputJsonDelta(BaseModel):
  class ThinkingDelta (line 26) | class ThinkingDelta(BaseModel):
  class SignatureDelta (line 32) | class SignatureDelta(BaseModel):
  class MessageDeltaData (line 41) | class MessageDeltaData(BaseModel):
  class ErrorInfo (line 50) | class ErrorInfo(BaseModel):
  class MessageStartEvent (line 57) | class MessageStartEvent(BaseEvent):
  class ContentBlockStartEvent (line 62) | class ContentBlockStartEvent(BaseEvent):
  class ContentBlockDeltaEvent (line 68) | class ContentBlockDeltaEvent(BaseEvent):
  class ContentBlockStopEvent (line 74) | class ContentBlockStopEvent(BaseEvent):
  class MessageDeltaEvent (line 79) | class MessageDeltaEvent(BaseEvent):
  class MessageStopEvent (line 85) | class MessageStopEvent(BaseEvent):
  class PingEvent (line 89) | class PingEvent(BaseEvent):
  class ErrorEvent (line 93) | class ErrorEvent(BaseEvent):
  class UnknownEvent (line 98) | class UnknownEvent(BaseEvent):
  class StreamingEvent (line 104) | class StreamingEvent(RootModel):

FILE: app/processors/base.py
  class BaseContext (line 12) | class BaseContext:
  class BaseProcessor (line 22) | class BaseProcessor(ABC):
    method process (line 26) | async def process(self, context: BaseContext) -> BaseContext:
    method name (line 39) | def name(self) -> str:

FILE: app/processors/claude_ai/claude_api_processor.py
  class ClaudeAPIProcessor (line 26) | class ClaudeAPIProcessor(BaseProcessor):
    method __init__ (line 29) | def __init__(self):
    method _request_messages_api (line 34) | async def _request_messages_api(
    method process (line 47) | async def process(self, context: ClaudeAIContext) -> ClaudeAIContext:
    method _insert_system_message (line 195) | def _insert_system_message(self, context: ClaudeAIContext) -> None:
    method _prepare_headers (line 219) | def _prepare_headers(

FILE: app/processors/claude_ai/claude_web_processor.py
  class ClaudeWebProcessor (line 17) | class ClaudeWebProcessor(BaseProcessor):
    method process (line 20) | async def process(self, context: ClaudeAIContext) -> ClaudeAIContext:

FILE: app/processors/claude_ai/context.py
  class ClaudeAIContext (line 12) | class ClaudeAIContext(BaseContext):

FILE: app/processors/claude_ai/event_parser_processor.py
  class EventParsingProcessor (line 8) | class EventParsingProcessor(BaseProcessor):
    method __init__ (line 11) | def __init__(self):
    method process (line 15) | async def process(self, context: ClaudeAIContext) -> ClaudeAIContext:

FILE: app/processors/claude_ai/message_collector_processor.py
  class MessageCollectorProcessor (line 32) | class MessageCollectorProcessor(BaseProcessor):
    method process (line 35) | async def process(self, context: ClaudeAIContext) -> ClaudeAIContext:
    method _collect_messages_generator (line 62) | async def _collect_messages_generator(
    method _apply_delta (line 173) | def _apply_delta(self, content_block: ContentBlock, delta: Delta) -> N...

FILE: app/processors/claude_ai/model_injector_processor.py
  class ModelInjectorProcessor (line 12) | class ModelInjectorProcessor(BaseProcessor):
    method process (line 15) | async def process(self, context: ClaudeAIContext) -> ClaudeAIContext:
    method _inject_model_generator (line 46) | async def _inject_model_generator(

FILE: app/processors/claude_ai/non_streaming_response_processor.py
  class NonStreamingResponseProcessor (line 10) | class NonStreamingResponseProcessor(BaseProcessor):
    method process (line 13) | async def process(self, context: ClaudeAIContext) -> ClaudeAIContext:

FILE: app/processors/claude_ai/pipeline.py
  class ClaudeAIPipeline (line 28) | class ClaudeAIPipeline(ProcessingPipeline):
    method __init__ (line 29) | def __init__(self, processors: Optional[List[BaseProcessor]] = None):
    method process (line 57) | async def process(

FILE: app/processors/claude_ai/stop_sequences_processor.py
  class StopSequencesProcessor (line 18) | class StopSequencesProcessor(BaseProcessor):
    method process (line 21) | async def process(self, context: ClaudeAIContext) -> ClaudeAIContext:
    method _process_stop_sequences (line 60) | async def _process_stop_sequences(

FILE: app/processors/claude_ai/streaming_response_processor.py
  class StreamingResponseProcessor (line 10) | class StreamingResponseProcessor(BaseProcessor):
    method __init__ (line 13) | def __init__(self):
    method process (line 17) | async def process(self, context: ClaudeAIContext) -> ClaudeAIContext:

FILE: app/processors/claude_ai/tavern_test_message_processor.py
  class TestMessageProcessor (line 16) | class TestMessageProcessor(BaseProcessor):
    method process (line 19) | async def process(self, context: ClaudeAIContext) -> ClaudeAIContext:

FILE: app/processors/claude_ai/token_counter_processor.py
  class TokenCounterProcessor (line 18) | class TokenCounterProcessor(BaseProcessor):
    method process (line 21) | async def process(self, context: ClaudeAIContext) -> ClaudeAIContext:
    method _count_tokens_generator (line 51) | async def _count_tokens_generator(
    method _calculate_input_tokens (line 98) | async def _calculate_input_tokens(self, context: ClaudeAIContext) -> int:
    method _calculate_output_tokens (line 116) | async def _calculate_output_tokens(self, context: ClaudeAIContext) -> ...

FILE: app/processors/claude_ai/tool_call_event_processor.py
  class ToolCallEventProcessor (line 18) | class ToolCallEventProcessor(BaseProcessor):
    method process (line 21) | async def process(self, context: ClaudeAIContext) -> ClaudeAIContext:
    method _process_tool_events (line 51) | async def _process_tool_events(

FILE: app/processors/claude_ai/tool_result_processor.py
  class ToolResultProcessor (line 15) | class ToolResultProcessor(BaseProcessor):
    method process (line 18) | async def process(self, context: ClaudeAIContext) -> ClaudeAIContext:

FILE: app/processors/pipeline.py
  class ProcessingPipeline (line 7) | class ProcessingPipeline(BaseProcessor):
    method __init__ (line 12) | def __init__(self, processors: Optional[List[BaseProcessor]] = None):
    method process (line 25) | async def process(self, context: BaseContext) -> BaseContext:
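The signatures above describe a classic async processor pipeline: each `BaseProcessor.process` takes a context and returns it (possibly transformed), and `ProcessingPipeline` chains processors in order. A minimal sketch of that pattern — illustrative only, with a made-up `UppercaseProcessor`, not the repository's actual implementation:

```python
import asyncio
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class BaseContext:
    # Shared mutable state passed through every processor.
    data: dict = field(default_factory=dict)


class BaseProcessor(ABC):
    @abstractmethod
    async def process(self, context: BaseContext) -> BaseContext: ...


class ProcessingPipeline(BaseProcessor):
    """Runs processors in order, feeding each one the previous result."""

    def __init__(self, processors: Optional[List[BaseProcessor]] = None):
        self.processors = processors or []

    async def process(self, context: BaseContext) -> BaseContext:
        for processor in self.processors:
            context = await processor.process(context)
        return context


class UppercaseProcessor(BaseProcessor):
    """Hypothetical example processor: uppercases a text field."""

    async def process(self, context: BaseContext) -> BaseContext:
        context.data["text"] = context.data.get("text", "").upper()
        return context


async def run_demo() -> str:
    pipeline = ProcessingPipeline([UppercaseProcessor()])
    result = await pipeline.process(BaseContext(data={"text": "hello"}))
    return result.data["text"]


print(asyncio.run(run_demo()))  # HELLO
```

Because the pipeline itself subclasses `BaseProcessor`, pipelines can nest — which matches `ClaudeAIPipeline(ProcessingPipeline)` in the index above.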

FILE: app/services/account.py
  class AccountManager (line 17) | class AccountManager:
    method __new__ (line 26) | def __new__(cls):
    method __init__ (line 34) | def __init__(self):
    method add_account (line 48) | async def add_account(
    method remove_account (line 127) | async def remove_account(self, organization_uuid: str) -> None:
    method get_account_for_session (line 150) | async def get_account_for_session(
    method get_account_for_oauth (line 226) | async def get_account_for_oauth(
    method get_account_by_id (line 270) | async def get_account_by_id(self, account_id: str) -> Optional[Account]:
    method release_session (line 295) | async def release_session(self, session_id: str) -> None:
    method start_task (line 306) | async def start_task(self) -> None:
    method stop_task (line 311) | async def stop_task(self) -> None:
    method _task_loop (line 320) | async def _task_loop(self) -> None:
    method _check_and_recover_accounts (line 333) | async def _check_and_recover_accounts(self) -> None:
    method _check_and_refresh_accounts (line 350) | async def _check_and_refresh_accounts(self) -> None:
    method _refresh_account_token (line 364) | async def _refresh_account_token(self, account: Account) -> None:
    method _attempt_oauth_authentication (line 389) | async def _attempt_oauth_authentication(self, account: Account) -> None:
    method get_status (line 406) | async def get_status(self) -> Dict:
    method save_accounts (line 444) | def save_accounts(self) -> None:
    method load_accounts (line 468) | def load_accounts(self) -> None:
    method __repr__ (line 501) | def __repr__(self) -> str:
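`AccountManager` (like `CacheService`, `SessionManager`, and `ToolCallManager` below) overrides `__new__` alongside `__init__`, which suggests the standard Python singleton idiom. A hedged sketch of that idiom under that assumption — the repository's exact locking and state may differ:

```python
import threading


class Singleton:
    """Thread-safe singleton via __new__: every construction returns one instance."""

    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        if cls._instance is None:
            with cls._lock:
                # Double-checked locking: re-test under the lock to avoid
                # two threads both creating an instance.
                if cls._instance is None:
                    cls._instance = super().__new__(cls)
        return cls._instance

    def __init__(self):
        # __init__ runs on every construction, so guard against
        # re-initialisation wiping existing state.
        if getattr(self, "_initialized", False):
            return
        self._initialized = True
        self.accounts = {}


a, b = Singleton(), Singleton()
print(a is b)  # True
```

The `__init__` guard matters: Python calls `__init__` on the object returned by `__new__` every time, so without it each `Singleton()` call would reset `accounts`.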

FILE: app/services/cache.py
  class CacheCheckpoint (line 27) | class CacheCheckpoint:
    method __init__ (line 30) | def __init__(self, checkpoint: str, account_id: str):
  class CacheService (line 36) | class CacheService:
    method __new__ (line 45) | def __new__(cls):
    method __init__ (line 53) | def __init__(self):
    method process_messages (line 64) | def process_messages(
    method add_checkpoints (line 130) | def add_checkpoints(self, checkpoints: List[str], account_id: str) -> ...
    method _update_hasher (line 149) | def _update_hasher(self, hasher: "hashlib._Hash", data: Dict) -> None:
    method _content_block_to_dict (line 164) | def _content_block_to_dict(self, content_block: ContentBlock) -> Dict:
    method start_cleanup_task (line 194) | async def start_cleanup_task(self) -> None:
    method stop_cleanup_task (line 200) | async def stop_cleanup_task(self) -> None:
    method _cleanup_loop (line 210) | async def _cleanup_loop(self) -> None:
    method _cleanup_expired_checkpoints (line 222) | def _cleanup_expired_checkpoints(self) -> None:
    method cleanup_all (line 240) | async def cleanup_all(self) -> None:
    method __repr__ (line 246) | def __repr__(self) -> str:
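`CacheService._update_hasher(hasher, data: Dict)` implies checkpoints are derived by hashing message content deterministically. One common way to hash a dict stably — shown here as an assumption about the general technique, not the repository's code — is to serialise it to canonical JSON first:

```python
import hashlib
import json
from typing import Dict


def stable_digest(data: Dict) -> str:
    """Hash a dict deterministically via canonical JSON.

    sort_keys plus compact separators make the digest independent of key
    order and whitespace, so identical message content always maps to the
    same cache checkpoint string.
    """
    canonical = json.dumps(data, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()


a = stable_digest({"role": "user", "content": "hi"})
b = stable_digest({"content": "hi", "role": "user"})
print(a == b)  # True
```

Without canonicalisation, `json.dumps` could emit keys in insertion order and two equal dicts would hash differently, silently defeating the cache.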

FILE: app/services/event_processing/event_parser.py
  class SSEMessage (line 15) | class SSEMessage:
  class EventParser (line 20) | class EventParser:
    method __init__ (line 23) | def __init__(self, skip_unknown_events: bool = True):
    method parse_stream (line 27) | async def parse_stream(
    method _process_buffer (line 50) | async def _process_buffer(self) -> AsyncIterator[StreamingEvent]:
    method _parse_sse_message (line 64) | def _parse_sse_message(self, message_text: str) -> SSEMessage:
    method _create_streaming_event (line 90) | def _create_streaming_event(self, sse_msg: SSEMessage) -> Optional[Str...
    method flush (line 124) | async def flush(self) -> AsyncIterator[StreamingEvent]:
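`EventParser` buffers raw chunks (`parse_stream`, `_process_buffer`) and splits them into SSE messages (`_parse_sse_message`). The general SSE framing it has to handle — `event:`/`data:` lines, messages separated by a blank line, partial chunks carried over — can be sketched as follows (a simplified standalone parser, not the repository's implementation):

```python
from typing import List, Tuple


def parse_sse(buffer: str) -> Tuple[List[Tuple[str, str]], str]:
    """Split a text buffer into complete (event, data) SSE messages.

    Returns the parsed messages plus the unconsumed remainder, so a caller
    can keep appending partial network chunks and re-parsing.
    """
    messages = []
    while "\n\n" in buffer:
        raw, buffer = buffer.split("\n\n", 1)
        event, data_lines = "message", []
        for line in raw.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data_lines.append(line[len("data:"):].strip())
        messages.append((event, "\n".join(data_lines)))
    return messages, buffer


msgs, rest = parse_sse('event: ping\ndata: {"ok": true}\n\ndata: partial')
print(msgs)   # [('ping', '{"ok": true}')]
print(rest)   # data: partial
```

The trailing `flush` method in the index suggests the real parser also drains whatever remains in the buffer when the stream ends; here that remainder is simply returned to the caller.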

FILE: app/services/event_processing/event_serializer.py
  class EventSerializer (line 7) | class EventSerializer:
    method __init__ (line 10) | def __init__(self, skip_unknown_events: bool = True):
    method serialize_stream (line 13) | async def serialize_stream(
    method serialize_event (line 30) | def serialize_event(self, event: StreamingEvent) -> Optional[str]:
    method serialize_batch (line 62) | async def serialize_batch(self, events: list[StreamingEvent]) -> str:

FILE: app/services/i18n.py
  class I18nService (line 9) | class I18nService:
    method __init__ (line 15) | def __init__(self):
    method _load_translations (line 21) | def _load_translations(self) -> None:
    method _get_nested_value (line 36) | def _get_nested_value(self, data: Dict[str, Any], key: str) -> Optiona...
    method _interpolate_message (line 52) | def _interpolate_message(self, message: str, context: Dict[str, Any]) ...
    method get_message (line 67) | def get_message(
    method parse_accept_language (line 107) | def parse_accept_language(self, accept_language: Optional[str]) -> str:
    method get_supported_languages (line 143) | def get_supported_languages(self) -> list[str]:
    method reload_translations (line 147) | def reload_translations(self) -> None:

FILE: app/services/oauth.py
  class OAuthAuthenticator (line 24) | class OAuthAuthenticator:
    method _generate_pkce (line 27) | def _generate_pkce(self) -> Tuple[str, str]:
    method _build_headers (line 41) | def _build_headers(self, cookie: str) -> Dict[str, str]:
    method _request (line 55) | async def _request(self, method: str, url: str, **kwargs) -> Response:
    method _token_request (line 82) | async def _token_request(self, url: str, data: dict) -> Response:
    method get_organization_info (line 114) | async def get_organization_info(self, cookie: str) -> Tuple[str, List[...
    method authorize_with_cookie (line 159) | async def authorize_with_cookie(
    method exchange_token (line 227) | async def exchange_token(self, code: str, verifier: str) -> Dict:
    method refresh_access_token (line 266) | async def refresh_access_token(self, refresh_token: str) -> Optional[D...
    method authenticate_account (line 288) | async def authenticate_account(self, account: Account) -> bool:
    method refresh_account_token (line 329) | async def refresh_account_token(self, account: Account) -> bool:
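`OAuthAuthenticator._generate_pkce` returning a `Tuple[str, str]` matches the PKCE scheme from RFC 7636: a random code verifier plus a challenge that is the base64url-encoded SHA-256 of the verifier, with padding stripped. A standalone sketch of that standard construction (the repository's verifier length or encoding details may differ):

```python
import base64
import hashlib
import secrets
from typing import Tuple


def generate_pkce() -> Tuple[str, str]:
    """Return a PKCE (verifier, challenge) pair per RFC 7636, S256 method."""
    # 32 random bytes -> 43-char base64url verifier (within the 43..128 spec range).
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # challenge = BASE64URL(SHA256(ASCII(verifier))), no padding.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge


verifier, challenge = generate_pkce()
print(len(verifier), len(challenge))  # 43 43
```

The client sends the challenge in the authorization request and the verifier in the token exchange (here, presumably `exchange_token(code, verifier)`), so an intercepted authorization code alone cannot be redeemed.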

FILE: app/services/session.py
  class SessionManager (line 11) | class SessionManager:
    method __new__ (line 19) | def __new__(cls):
    method __init__ (line 27) | def __init__(self):
    method get_or_create_session (line 40) | async def get_or_create_session(self, session_id: str) -> ClaudeWebSes...
    method get_session (line 61) | async def get_session(self, session_id: str) -> Optional[ClaudeWebSess...
    method remove_session (line 83) | async def remove_session(self, session_id: str) -> None:
    method _is_session_expired (line 94) | async def _is_session_expired(self, session: ClaudeWebSession) -> bool:
    method _remove_session (line 111) | async def _remove_session(self, session_id: str) -> None:
    method start_cleanup_task (line 129) | async def start_cleanup_task(self) -> None:
    method stop_cleanup_task (line 135) | async def stop_cleanup_task(self) -> None:
    method _cleanup_loop (line 145) | async def _cleanup_loop(self) -> None:
    method _cleanup_expired_sessions (line 157) | async def _cleanup_expired_sessions(self) -> None:
    method cleanup_all (line 172) | async def cleanup_all(self) -> None:
    method __repr__ (line 184) | def __repr__(self) -> str:

FILE: app/services/tool_call.py
  class ToolCallState (line 10) | class ToolCallState:
    method __init__ (line 13) | def __init__(self, tool_use_id: str, session_id: str):
  class ToolCallManager (line 20) | class ToolCallManager:
    method __new__ (line 28) | def __new__(cls):
    method __init__ (line 36) | def __init__(self):
    method register_tool_call (line 48) | def register_tool_call(
    method get_tool_call (line 66) | def get_tool_call(self, tool_use_id: str) -> Optional[ToolCallState]:
    method complete_tool_call (line 78) | def complete_tool_call(self, tool_use_id: str) -> None:
    method start_cleanup_task (line 91) | async def start_cleanup_task(self) -> None:
    method stop_cleanup_task (line 97) | async def stop_cleanup_task(self) -> None:
    method _cleanup_loop (line 107) | async def _cleanup_loop(self) -> None:
    method _cleanup_expired_tool_calls (line 119) | def _cleanup_expired_tool_calls(self) -> None:
    method cleanup_all (line 136) | async def cleanup_all(self) -> None:
    method __repr__ (line 142) | def __repr__(self) -> str:

FILE: app/utils/logger.py
  function configure_logger (line 8) | def configure_logger():

FILE: app/utils/messages.py
  function process_messages (line 23) | async def process_messages(
  function extract_image_from_url (line 107) | async def extract_image_from_url(url: str) -> Optional[Base64ImageSource]:

FILE: app/utils/retry.py
  function is_retryable_error (line 7) | def is_retryable_error(exception):
  function log_before_sleep (line 12) | def log_before_sleep(retry_state: RetryCallState) -> None:

FILE: scripts/build_wheel.py
  function run_command (line 11) | def run_command(cmd, cwd=None, check=True):
  function clean_directories (line 21) | def clean_directories():
  function check_node_installed (line 31) | def check_node_installed():
  function check_pnpm_installed (line 45) | def check_pnpm_installed():
  function build_frontend (line 60) | def build_frontend():
  function build_wheel (line 94) | def build_wheel():
  function parse_args (line 117) | def parse_args():
  function main (line 135) | def main():

FILE: tests/test_claude_request_models.py
  class MessagesAPIRequestToolParsingTests (line 6) | class MessagesAPIRequestToolParsingTests(unittest.TestCase):
    method test_accepts_custom_tool_payload_without_top_level_input_schema (line 7) | def test_accepts_custom_tool_payload_without_top_level_input_schema(se...
    method test_accepts_server_web_search_tool_without_input_schema (line 34) | def test_accepts_server_web_search_tool_without_input_schema(self) -> ...
Condensed preview — 75 files, each showing its path, character count, and a content snippet (283K chars of structured content in total).
[
  {
    "path": ".dockerignore",
    "chars": 1793,
    "preview": "# Git\n.git/\n.gitignore\n.gitattributes\n\n# Python\n__pycache__/\n*.py[cod]\n*$py.class\n*.so\n.Python\nbuild/\ndevelop-eggs/\ndist"
  },
  {
    "path": ".github/workflows/build-and-publish.yml",
    "chars": 2649,
    "preview": "name: Build and Publish to PyPI\n\non:\n  release:\n    types: [published]\n  workflow_dispatch:\n    inputs:\n      publish_to"
  },
  {
    "path": ".github/workflows/docker-publish.yml",
    "chars": 2275,
    "preview": "name: Docker Build and Push\n\non:\n  push:\n    branches:\n      - main\n    tags:\n      - \"v*\"\n  pull_request:\n    branches:"
  },
  {
    "path": ".gitignore",
    "chars": 4745,
    "preview": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[codz]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packag"
  },
  {
    "path": ".gitmodules",
    "chars": 87,
    "preview": "[submodule \"front\"]\n\tpath = front\n\turl = https://github.com/mirrorange/clove-front.git\n"
  },
  {
    "path": ".python-version",
    "chars": 5,
    "preview": "3.13\n"
  },
  {
    "path": "Dockerfile",
    "chars": 2231,
    "preview": "# Multi-stage Dockerfile for Clove (uv version)\n\n# ====================================================================="
  },
  {
    "path": "Dockerfile.huggingface",
    "chars": 372,
    "preview": "# Simplified Dockerfile for Clove - For Huggingface Spaces\nFROM python:3.11-slim\n\nWORKDIR /app\n\n# Install clove-proxy fr"
  },
  {
    "path": "Dockerfile.pypi",
    "chars": 405,
    "preview": "# Simplified Dockerfile for Clove - Install from PyPI\nFROM python:3.11-slim\n\nWORKDIR /app\n\n# Install clove-proxy from Py"
  },
  {
    "path": "LICENSE",
    "chars": 1063,
    "preview": "MIT License\n\nCopyright (c) 2025 orange\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof "
  },
  {
    "path": "MANIFEST.in",
    "chars": 540,
    "preview": "# Include all static files\nrecursive-include app/static *\n\n# Include locale files\nrecursive-include app/locales *.json\n\n"
  },
  {
    "path": "Makefile",
    "chars": 1501,
    "preview": ".PHONY: help build build-frontend build-wheel install install-dev clean run test\n\n# Default target\nhelp:\n\t@echo \"Availab"
  },
  {
    "path": "README.md",
    "chars": 4063,
    "preview": "# Clove 🍀\n\n<div align=\"center\">\n\n[![License](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE)\n[![Python](htt"
  },
  {
    "path": "README_en.md",
    "chars": 7201,
    "preview": "# Clove 🍀\n\n<div align=\"center\">\n\n[![License](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE)\n[![Python](htt"
  },
  {
    "path": "app/__init__.py",
    "chars": 22,
    "preview": "__version__ = \"0.1.0\"\n"
  },
  {
    "path": "app/api/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "app/api/main.py",
    "chars": 520,
    "preview": "from fastapi import APIRouter\nfrom app.api.routes import claude, accounts, settings, statistics\n\napi_router = APIRouter("
  },
  {
    "path": "app/api/routes/accounts.py",
    "chars": 8586,
    "preview": "from typing import List, Optional\nfrom fastapi import APIRouter, HTTPException\nfrom pydantic import BaseModel, Field\nfro"
  },
  {
    "path": "app/api/routes/claude.py",
    "chars": 1245,
    "preview": "from fastapi import APIRouter, Request\nfrom fastapi.responses import StreamingResponse, JSONResponse\nfrom tenacity impor"
  },
  {
    "path": "app/api/routes/settings.py",
    "chars": 2893,
    "preview": "import os\nimport json\nfrom typing import List\nfrom fastapi import APIRouter, HTTPException\nfrom pydantic import BaseMode"
  },
  {
    "path": "app/api/routes/statistics.py",
    "chars": 809,
    "preview": "from typing import Literal\nfrom fastapi import APIRouter\nfrom pydantic import BaseModel\n\nfrom app.dependencies.auth impo"
  },
  {
    "path": "app/core/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "app/core/account.py",
    "chars": 5496,
    "preview": "from typing import List, Optional\nfrom enum import Enum\nfrom datetime import datetime\nfrom dataclasses import dataclass\n"
  },
  {
    "path": "app/core/claude_session.py",
    "chars": 3751,
    "preview": "from typing import Dict, Any, AsyncIterator, Optional\nfrom datetime import datetime\nfrom app.core.http_client import Res"
  },
  {
    "path": "app/core/config.py",
    "chars": 9044,
    "preview": "import os\nimport json\nfrom pathlib import Path\nfrom typing import Optional, List, Dict, Any\nfrom pydantic_settings impor"
  },
  {
    "path": "app/core/error_handler.py",
    "chars": 2590,
    "preview": "from typing import Dict, Any\nfrom fastapi import Request\nfrom fastapi.responses import JSONResponse\nfrom loguru import l"
  },
  {
    "path": "app/core/exceptions.py",
    "chars": 8421,
    "preview": "from datetime import datetime\nfrom typing import Optional, Any, Dict\n\n\nclass AppError(Exception):\n    \"\"\"\n    Base class"
  },
  {
    "path": "app/core/external/claude_client.py",
    "chars": 7336,
    "preview": "import json\nfrom loguru import logger\nfrom datetime import datetime, timezone\nfrom typing import Optional, Dict, Any\nfro"
  },
  {
    "path": "app/core/http_client.py",
    "chars": 19805,
    "preview": "\"\"\"HTTP client abstraction layer that supports both curl_cffi and httpx.\"\"\"\n\nfrom abc import ABC, abstractmethod\nfrom ty"
  },
  {
    "path": "app/core/static.py",
    "chars": 1076,
    "preview": "from fastapi import FastAPI, HTTPException\nfrom fastapi.responses import FileResponse\nfrom fastapi.staticfiles import St"
  },
  {
    "path": "app/dependencies/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "app/dependencies/auth.py",
    "chars": 2075,
    "preview": "from typing import Optional, Annotated\nfrom loguru import logger\nfrom fastapi import Depends, Header\nimport secrets\n\nfro"
  },
  {
    "path": "app/locales/en.json",
    "chars": 2068,
    "preview": "{\n  \"global\": {\n    \"internalServerError\": \"An internal server error occurred. Please try again later.\",\n    \"noAPIKeyPr"
  },
  {
    "path": "app/locales/zh.json",
    "chars": 1363,
    "preview": "{\n  \"global\": {\n    \"internalServerError\": \"服务器内部错误。请稍后重试。\",\n    \"noAPIKeyProvided\": \"未提供 API 密钥。请在请求中包含 API 密钥。\",\n    \""
  },
  {
    "path": "app/main.py",
    "chars": 2326,
    "preview": "from loguru import logger\nfrom contextlib import asynccontextmanager\nfrom fastapi import FastAPI\nfrom fastapi.middleware"
  },
  {
    "path": "app/models/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "app/models/claude.py",
    "chars": 6737,
    "preview": "from typing import Optional, List, Union, Literal, Dict, Any\nfrom pydantic import BaseModel, ConfigDict, Field, model_va"
  },
  {
    "path": "app/models/internal.py",
    "chars": 857,
    "preview": "from typing import List, Optional\nfrom pydantic import BaseModel, Field\nfrom .claude import Tool\n\n\nclass Attachment(Base"
  },
  {
    "path": "app/models/streaming.py",
    "chars": 2460,
    "preview": "from typing import Optional, Union, Dict, Any, Literal\nfrom pydantic import BaseModel, RootModel, ConfigDict\n\nfrom .clau"
  },
  {
    "path": "app/processors/__init__.py",
    "chars": 829,
    "preview": "from app.processors.base import BaseProcessor, BaseContext\nfrom app.processors.claude_ai import (\n    ClaudeAIContext,\n "
  },
  {
    "path": "app/processors/base.py",
    "chars": 1014,
    "preview": "\"\"\"Base classes for request processing pipeline.\"\"\"\n\nfrom abc import ABC, abstractmethod\nfrom dataclasses import datacla"
  },
  {
    "path": "app/processors/claude_ai/__init__.py",
    "chars": 1598,
    "preview": "from app.processors.claude_ai.context import ClaudeAIContext\nfrom app.processors.claude_ai.pipeline import ClaudeAIPipel"
  },
  {
    "path": "app/processors/claude_ai/claude_api_processor.py",
    "chars": 9388,
    "preview": "from app.core.http_client import (\n    Response,\n    AsyncSession,\n    create_session,\n)\nfrom datetime import datetime, "
  },
  {
    "path": "app/processors/claude_ai/claude_web_processor.py",
    "chars": 5060,
    "preview": "import time\nimport base64\nimport random\nimport string\nfrom typing import List\nfrom loguru import logger\n\nfrom app.proces"
  },
  {
    "path": "app/processors/claude_ai/context.py",
    "chars": 723,
    "preview": "from dataclasses import dataclass\nfrom typing import Optional, AsyncIterator\n\nfrom app.core.claude_session import Claude"
  },
  {
    "path": "app/processors/claude_ai/event_parser_processor.py",
    "chars": 1176,
    "preview": "from loguru import logger\n\nfrom app.processors.base import BaseProcessor\nfrom app.processors.claude_ai import ClaudeAICo"
  },
  {
    "path": "app/processors/claude_ai/message_collector_processor.py",
    "chars": 8446,
    "preview": "import json5\nfrom typing import AsyncIterator\nfrom loguru import logger\n\nfrom app.processors.base import BaseProcessor\nf"
  },
  {
    "path": "app/processors/claude_ai/model_injector_processor.py",
    "chars": 2303,
    "preview": "from typing import AsyncIterator\nfrom loguru import logger\n\nfrom app.processors.base import BaseProcessor\nfrom app.proce"
  },
  {
    "path": "app/processors/claude_ai/non_streaming_response_processor.py",
    "chars": 2294,
    "preview": "from loguru import logger\nfrom fastapi.responses import JSONResponse\n\nfrom app.core.exceptions import ClaudeStreamingErr"
  },
  {
    "path": "app/processors/claude_ai/pipeline.py",
    "chars": 2958,
    "preview": "from typing import List, Optional\nfrom loguru import logger\n\nfrom app.services.session import session_manager\nfrom app.p"
  },
  {
    "path": "app/processors/claude_ai/stop_sequences_processor.py",
    "chars": 7404,
    "preview": "from typing import AsyncIterator, List\nfrom loguru import logger\n\nfrom app.processors.base import BaseProcessor\nfrom app"
  },
  {
    "path": "app/processors/claude_ai/streaming_response_processor.py",
    "chars": 1955,
    "preview": "from loguru import logger\n\nfrom fastapi.responses import StreamingResponse\n\nfrom app.processors.base import BaseProcesso"
  },
  {
    "path": "app/processors/claude_ai/tavern_test_message_processor.py",
    "chars": 2350,
    "preview": "from loguru import logger\nimport uuid\n\nfrom fastapi.responses import JSONResponse\n\nfrom app.processors.base import BaseP"
  },
  {
    "path": "app/processors/claude_ai/token_counter_processor.py",
    "chars": 4594,
    "preview": "from typing import AsyncIterator\nfrom loguru import logger\nimport tiktoken\n\nfrom app.processors.base import BaseProcesso"
  },
  {
    "path": "app/processors/claude_ai/tool_call_event_processor.py",
    "chars": 4982,
    "preview": "from typing import AsyncIterator, Optional\nfrom loguru import logger\n\nfrom app.processors.base import BaseProcessor\nfrom"
  },
  {
    "path": "app/processors/claude_ai/tool_result_processor.py",
    "chars": 4371,
    "preview": "import uuid\nfrom loguru import logger\n\nfrom app.processors.base import BaseProcessor\nfrom app.processors.claude_ai impor"
  },
  {
    "path": "app/processors/pipeline.py",
    "chars": 1768,
    "preview": "from typing import List, Optional\nfrom loguru import logger\n\nfrom app.processors.base import BaseContext, BaseProcessor\n"
  },
  {
    "path": "app/services/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "app/services/account.py",
    "chars": 19113,
    "preview": "import asyncio\nfrom datetime import datetime, UTC\nfrom typing import List, Optional, Dict, Set\n\nfrom collections import "
  },
  {
    "path": "app/services/cache.py",
    "chars": 9081,
    "preview": "import asyncio\nimport hashlib\nimport json\nimport threading\nfrom datetime import datetime, timedelta\nfrom typing import D"
  },
  {
    "path": "app/services/event_processing/__init__.py",
    "chars": 141,
    "preview": "from .event_parser import EventParser\nfrom .event_serializer import EventSerializer\n\n__all__ = [\n    \"EventParser\",\n    "
  },
  {
    "path": "app/services/event_processing/event_parser.py",
    "chars": 4264,
    "preview": "import json\nfrom typing import AsyncIterator, Optional\nfrom dataclasses import dataclass\nfrom loguru import logger\n\nfrom"
  },
  {
    "path": "app/services/event_processing/event_serializer.py",
    "chars": 2326,
    "preview": "import json\nfrom typing import AsyncIterator, Optional\n\nfrom app.models.streaming import StreamingEvent, UnknownEvent\n\n\n"
  },
  {
    "path": "app/services/i18n.py",
    "chars": 5129,
    "preview": "import json\nimport re\nfrom typing import Dict, Any, Optional\nfrom loguru import logger\n\nfrom app.core.config import sett"
  },
  {
    "path": "app/services/oauth.py",
    "chars": 12155,
    "preview": "import base64\nimport hashlib\nimport secrets\nimport time\nfrom typing import Dict, List, Optional, Tuple\nfrom urllib.parse"
  },
  {
    "path": "app/services/session.py",
    "chars": 6426,
    "preview": "import asyncio\nfrom typing import Dict, Optional\nfrom datetime import datetime, timedelta\nimport threading\nfrom loguru i"
  },
  {
    "path": "app/services/tool_call.py",
    "chars": 5049,
    "preview": "import asyncio\nfrom typing import Dict, Optional\nfrom datetime import datetime, timedelta\nimport threading\nfrom loguru i"
  },
  {
    "path": "app/utils/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "app/utils/logger.py",
    "chars": 786,
    "preview": "import sys\nfrom pathlib import Path\nfrom loguru import logger\n\nfrom app.core.config import settings\n\n\ndef configure_logg"
  },
  {
    "path": "app/utils/messages.py",
    "chars": 5917,
    "preview": "import base64\nfrom typing import List, Optional, Tuple\nfrom loguru import logger\n\nfrom app.core.http_client import downl"
  },
  {
    "path": "app/utils/retry.py",
    "chars": 906,
    "preview": "from loguru import logger\nfrom tenacity import RetryCallState\n\nfrom app.core.exceptions import AppError\n\n\ndef is_retryab"
  },
  {
    "path": "docker-compose.yml",
    "chars": 1021,
    "preview": "version: \"3.8\"\n\nservices:\n  clove:\n    build:\n      context: .\n      dockerfile: Dockerfile\n    container_name: clove\n  "
  },
  {
    "path": "pyproject.toml",
    "chars": 1736,
    "preview": "[project]\nname = \"clove-proxy\"\nversion = \"0.3.1\"\ndescription = \"A Claude.ai reverse proxy\"\nreadme = \"README.md\"\nrequires"
  },
  {
    "path": "scripts/build_wheel.py",
    "chars": 5101,
    "preview": "#!/usr/bin/env python3\n\"\"\"Build script for Clove - builds frontend and creates Python wheel.\"\"\"\n\nimport argparse\nimport "
  },
  {
    "path": "tests/test_claude_request_models.py",
    "chars": 1905,
    "preview": "import unittest\n\nfrom app.models.claude import MessagesAPIRequest\n\n\nclass MessagesAPIRequestToolParsingTests(unittest.Te"
  }
]

About this extraction

This page contains the full source code of the mirrorange/clove GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction covers 75 files (256.5 KB, approximately 56.5k tokens) and includes a symbol index of 370 functions, classes, methods, constants, and types.

Extracted by GitExtract, a GitHub-repository-to-text converter built by Nikandr Surkov.