[
  {
    "path": ".claude/launch.json",
    "content": "{\n  \"version\": \"0.0.1\",\n  \"configurations\": [\n    {\n      \"name\": \"frontend\",\n      \"runtimeExecutable\": \"yarn\",\n      \"runtimeArgs\": [\"dev\"],\n      \"port\": 5173,\n      \"cwd\": \"frontend\"\n    }\n  ]\n}\n"
  },
  {
    "path": ".gitattributes",
    "content": "# Auto detect text files and perform LF normalization\n* text=auto\n"
  },
  {
    "path": ".github/FUNDING.yml",
    "content": "github: [abi]\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/bug_report.md",
    "content": "---\nname: Bug report\nabout: Create a report to help us improve\ntitle: ''\nlabels: ''\nassignees: ''\n\n---\n\n**Describe the bug**\nA clear and concise description of what the bug is.\n\n**To Reproduce**\nSteps to reproduce the behavior:\n1. Go to '...'\n2. Click on '....'\n3. Scroll down to '....'\n4. See error\n\n**Screenshots of backend AND frontend terminal logs**\nIf applicable, add screenshots to help explain your problem.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/custom.md",
    "content": "---\nname: Custom issue template\nabout: Describe this issue template's purpose here.\ntitle: ''\nlabels: ''\nassignees: ''\n\n---\n\n\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature_request.md",
    "content": "---\nname: Feature request\nabout: Suggest an idea for this project\ntitle: ''\nlabels: ''\nassignees: ''\n\n---\n\n**Is your feature request related to a problem? Please describe.**\nA clear and concise description of what the problem is. Ex. I'm always frustrated when [...]\n\n**Describe the solution you'd like**\nA clear and concise description of what you want to happen.\n\n**Describe alternatives you've considered**\nA clear and concise description of any alternative solutions or features you've considered.\n\n**Additional context**\nAdd any other context or screenshots about the feature request here.\n"
  },
  {
    "path": ".gitignore",
    "content": ".aider*\nnode_modules\n\n# Project-related files\n\n# Run logs\nbackend/run_logs/*\n\n# Weird Docker setup related files\nbackend/backend/*\n\n# Env vars\nfrontend/.env.local\n.env\n\n# Mac files\n.DS_Store\n\n#Rodney\n.rodney\n"
  },
  {
    "path": ".vscode/settings.json",
    "content": "{\n  \"python.analysis.typeCheckingMode\": \"strict\",\n  \"python.analysis.extraPaths\": [\"./backend\"],\n  \"python.autoComplete.extraPaths\": [\"./backend\"]\n}\n"
  },
  {
    "path": "AGENTS.md",
    "content": "# Project Agent Instructions\n\nPython environment:\n\n- Always use the backend Poetry virtualenv (`backend-py3.10`) for Python commands.\n- Preferred invocation: `cd backend && poetry run <command>`.\n- If you need to activate directly, use Poetry to discover it in the current environment:\n  - `cd backend && poetry env activate` (then run the `source .../bin/activate` command it prints)\n\nTesting policy:\n\n- Always run backend tests after every code change: `cd backend && poetry run pytest`.\n- Always run type checking after every code change: `cd backend && poetry run pyright`.\n- Type checking policy: no new warnings in changed files (`pyright`).\n\n## Frontend\n\n- Frontend: `cd frontend && yarn lint`\n\nIf changes touch both, run both sets.\n\n## Prompt formatting\n\n- Prefer triple-quoted strings (`\"\"\"...\"\"\"`) for multi-line prompt text.\n- For interpolated multi-line prompts, prefer a single triple-quoted f-string over concatenated string fragments.\n\n# Hosted\n\nThe hosted version is on the `hosted` branch. The `hosted` branch connects to a saas backend, which is a seperate codebase at ../screenshot-to-code-saas\n"
  },
  {
    "path": "CLAUDE.md",
    "content": "# Project Agent Instructions\n\nPython environment:\n\n- Always use the backend Poetry virtualenv (`backend-py3.10`) for Python commands.\n- Preferred invocation: `cd backend && poetry run <command>`.\n- If you need to activate directly, use Poetry to discover it in the current environment:\n  - `cd backend && poetry env activate` (then run the `source .../bin/activate` command it prints)\n\nTesting policy:\n\n- Always run backend tests after every code change: `cd backend && poetry run pytest`.\n- Always run type checking after every code change: `cd backend && poetry run pyright`.\n- Type checking policy: no new warnings in changed files (`pyright`).\n\n## Frontend\n\n- Frontend: `cd frontend && yarn lint`\n\nIf changes touch both, run both sets.\n\n## Prompt formatting\n\n- Prefer triple-quoted strings (`\"\"\"...\"\"\"`) for multi-line prompt text.\n- For interpolated multi-line prompts, prefer a single triple-quoted f-string over concatenated string fragments.\n\n# Hosted\n\nThe hosted version is on the `hosted` branch. The `hosted` branch connects to a saas backend, which is a seperate codebase at ../screenshot-to-code-saas\n"
  },
  {
    "path": "Evaluation.md",
    "content": "## Evaluating models and prompts\n\nEvaluation dataset consists of 16 screenshots. A Python script for running screenshot-to-code on the dataset and a UI for rating outputs is included. With this set up, we can compare and evaluate various models and prompts.\n\n### Running evals\n\n- Input screenshots should be located at `backend/evals_data/inputs` and the outputs will be `backend/evals_data/outputs`. If you want to modify this, modify `EVALS_DIR` in `backend/evals/config.py`. You can download the input screenshot dataset here: TODO.\n- Set a stack and model (`STACK` var, `MODEL` var) in `backend/run_evals.py`\n- Run `OPENAI_API_KEY=sk-... python run_evals.py` - this runs the screenshot-to-code on the input dataset in parallel but it will still take a few minutes to complete.\n- Once the script is done, you can find the outputs in `backend/evals_data/outputs`.\n\n### Rating evals\n\nIn order to view and rate the outputs, visit your front-end at `/evals`.\n\n- Rate each output on a scale of 1-4\n- You can also print the page as PDF to share your results with others.\n\nGenerally, I run three tests for each model/prompt + stack combo and take the average score out of those tests to evaluate.\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2023 Abi Raja\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# screenshot-to-code\n\nA simple tool to convert screenshots, mockups and Figma designs into clean, functional code using AI. Now supporting Gemini 3 and Claude Opus 4.5!\n\nhttps://github.com/user-attachments/assets/85b911c0-efea-4957-badb-daa97ec402ad\n\nSupported stacks:\n\n- HTML + Tailwind\n- HTML + CSS\n- React + Tailwind\n- Vue + Tailwind\n- Bootstrap\n- Ionic + Tailwind\n- SVG\n\nSupported AI models:\n\n- Gemini 3 Flash and Pro - Best models! (Google)\n- Claude Opus 4.5 - Best model! (Anthropic)\n- GPT-5.3, GPT-5.2, GPT-4.1 (OpenAI)\n- Other models are available as well but we recommend using the above models.\n- DALL-E 3 or Flux Schnell (using Replicate) for image generation\n\nSee the [Examples](#-examples) section below for more demos.\n\nWe have experimental support for taking a video/screen recording of a website in action and turning that into a functional prototype.\n\n![google in app quick 3](https://github.com/abi/screenshot-to-code/assets/23818/8758ffa4-9483-4b9b-bb66-abd6d1594c33)\n\n[Learn more about video here](https://github.com/abi/screenshot-to-code/wiki/Screen-Recording-to-Code).\n\n[Follow me on Twitter for updates](https://twitter.com/_abi_).\n\n## 🌍 Hosted Version\n\n[Try it live on the hosted version (paid)](https://screenshottocode.com).\n\n## 🛠 Getting Started\n\nThe app has a React/Vite frontend and a FastAPI backend.\n\nKeys needed:\n\n- [OpenAI API key](https://github.com/abi/screenshot-to-code/blob/main/Troubleshooting.md), Anthropic key, or Google Gemini key\n- Multiple keys are recommended so you can compare results from different models\n\nIf you'd like to run the app with Ollama open source models (not recommended due to poor quality results), [follow this comment](https://github.com/abi/screenshot-to-code/issues/354#issuecomment-2435479853).\n\nRun the backend (I use Poetry for package management - `pip install --upgrade poetry` if you don't have it):\n\n```bash\ncd backend\necho \"OPENAI_API_KEY=sk-your-key\" 
> .env\necho \"ANTHROPIC_API_KEY=your-key\" >> .env\necho \"GEMINI_API_KEY=your-key\" >> .env\npoetry install\npoetry env activate\n# run the printed command, e.g. source /path/to/venv/bin/activate\npoetry run uvicorn main:app --reload --port 7001\n```\n\nYou can also set up the keys using the settings dialog on the front-end (click the gear icon after loading the frontend).\n\nRun the frontend:\n\n```bash\ncd frontend\nyarn\nyarn dev\n```\n\nOpen http://localhost:5173 to use the app.\n\nIf you prefer to run the backend on a different port, update VITE_WS_BACKEND_URL in `frontend/.env.local`\n\n## Docker\n\nIf you have Docker installed on your system, in the root directory, run:\n\n```bash\necho \"OPENAI_API_KEY=sk-your-key\" > .env\ndocker-compose up -d --build\n```\n\nThe app will be up and running at http://localhost:5173. Note that you can't develop the application with this setup as the file changes won't trigger a rebuild.\n\n## 🙋‍♂️ FAQs\n\n- **I'm running into an error when setting up the backend. How can I fix it?** [Try this](https://github.com/abi/screenshot-to-code/issues/3#issuecomment-1814777959). If that still doesn't work, open an issue.\n- **How do I get an OpenAI API key?** See https://github.com/abi/screenshot-to-code/blob/main/Troubleshooting.md\n- **How can I configure an OpenAI proxy?** - If you're not able to access the OpenAI API directly (due to e.g. country restrictions), you can try a VPN or you can configure the OpenAI base URL to use a proxy: Set OPENAI_BASE_URL in the `backend/.env` or directly in the UI in the settings dialog. 
Make sure the URL has \"v1\" in the path so it should look like this: `https://xxx.xxxxx.xxx/v1`\n- **How can I update the backend host that my front-end connects to?** - Configure VITE_HTTP_BACKEND_URL and VITE_WS_BACKEND_URL in front/.env.local For example, set VITE_HTTP_BACKEND_URL=http://124.10.20.1:7001\n- **Seeing UTF-8 errors when running the backend?** - On windows, open the .env file with notepad++, then go to Encoding and select UTF-8.\n- **How can I provide feedback?** For feedback, feature requests and bug reports, open an issue or ping me on [Twitter](https://twitter.com/_abi_).\n\n## 📚 Examples\n\n**NYTimes**\n\n| Original                                                                                                                                                        | Replica                                                                                                                                                         |\n| --------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| <img width=\"1238\" alt=\"Screenshot 2023-11-20 at 12 54 03 PM\" src=\"https://github.com/user-attachments/assets/6b0ae86c-1b0f-4598-a578-c7b62205b3e2\"> | <img width=\"1414\" alt=\"Screenshot 2023-11-20 at 12 59 56 PM\" src=\"https://github.com/user-attachments/assets/981c490e-9be6-407e-8e46-2642f0ca613e\"> |\n\n\n**Instagram**\n\nhttps://github.com/user-attachments/assets/a335a105-f9cc-40e6-ac6b-64e5390bfc21\n\n**Hacker News**\n\n\nhttps://github.com/user-attachments/assets/205cb5c7-9c3c-438d-acd4-26dfe6e077e5\n"
  },
  {
    "path": "TESTING.md",
    "content": "# Testing Guide\n\nThis guide explains how to run tests for the Screenshot to Code project.\n\n## Backend Tests\n\nThe backend uses pytest for testing. All tests are located in the `backend/tests` directory.\n\n### Prerequisites\n\nMake sure you have Poetry installed and have installed all dependencies:\n\n```bash\ncd backend\npoetry install\n```\n\n### Running Tests\n\n#### Run all tests\n```bash\ncd backend\npoetry run pytest\n```\n\n#### Run tests with verbose output\n```bash\npoetry run pytest -vv\n```\n\n#### Run a specific test file\n```bash\npoetry run pytest tests/test_screenshot.py\n```\n\n#### Run a specific test class\n```bash\npoetry run pytest tests/test_screenshot.py::TestNormalizeUrl\n```\n\n#### Run a specific test method\n```bash\npoetry run pytest tests/test_screenshot.py::TestNormalizeUrl::test_url_without_protocol\n```\n\n#### Run tests with coverage report\n```bash\npoetry run pytest --cov=routes\n```\n\n#### Run tests in parallel (requires pytest-xdist)\n```bash\npoetry install --with dev pytest-xdist  # Install if not already installed\npoetry run pytest -n auto\n```\n\n### Test Configuration\n\nThe pytest configuration is defined in `backend/pytest.ini`:\n- Tests are discovered in the `tests` directory\n- Test files must match the pattern `test_*.py`\n- Test classes must start with `Test`\n- Test functions must start with `test_`\n- Verbose output and short traceback format are enabled by default\n\n### Writing New Tests\n\n1. Create a new test file in `backend/tests/` following the naming convention `test_<module>.py`\n2. Import the functions/classes you want to test\n3. Write test functions or classes following pytest conventions\n\nExample:\n```python\nimport pytest\nfrom routes.screenshot import normalize_url\n\ndef test_url_normalization():\n    assert normalize_url(\"example.com\") == \"https://example.com\"\n```\n"
  },
  {
    "path": "Troubleshooting.md",
    "content": "### Getting an OpenAI API key with GPT-4 model access\n\nYou don't need a ChatGPT Pro account. Screenshot to code uses API keys from your OpenAI developer account. In order to get access to the GPT4 Vision model, log into your OpenAI account and then, follow these instructions:\n\n1. Open [OpenAI Dashboard](https://platform.openai.com/)\n1. Go to Settings > Billing\n1. Click at the Add payment details\n<img width=\"900\" alt=\"285636868-c80deb92-ab47-45cd-988f-deee67fbd44d\" src=\"https://github.com/abi/screenshot-to-code/assets/23818/4e0f4b77-9578-4f9a-803c-c12b1502f3d7\">\n\n4. You have to buy some credits. The minimum is $5.\n5. Go to Settings > Limits and check at the bottom of the page, your current tier has to be \"Tier 1\" to have GPT4 access\n<img width=\"900\" alt=\"285636973-da38bd4d-8a78-4904-8027-ca67d729b933\" src=\"https://github.com/abi/screenshot-to-code/assets/23818/8d07cd84-0cf9-4f88-bc00-80eba492eadf\">\n\n6. Navigate to OpenAI [api keys](https://platform.openai.com/api-keys) page and create and copy a new secret key.\n7. Go to Screenshot to code and paste it in the Settings dialog under OpenAI key (gear icon). Your key is only stored in your browser. Never stored on our servers.\n\n## Still not working?\n\n- Some users have also reported that it can take upto 30 minutes after your credit purchase for the GPT4 vision model to be activated.\n- You need to add credits to your account AND set it to renew when credits run out in order to be upgraded to Tier 1. Make sure your \"Settings > Limits\" page shows that you are at Tier 1.\n\nIf you've followed these steps, and it still doesn't work, feel free to open a Github issue. We only provide support for the open source version since we don't have debugging logs on the hosted version. If you're looking to use the hosted version, we recommend getting a paid subscription on screenshottocode.com\n"
  },
  {
    "path": "backend/.gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\nshare/python-wheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.nox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n*.py,cover\n.hypothesis/\n.pytest_cache/\ncover/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\ndb.sqlite3-journal\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\n.pybuilder/\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# IPython\nprofile_default/\nipython_config.py\n\n# pyenv\n#   For a library or package, you might want to ignore these files since the code is\n#   intended to run in multiple environments; otherwise, check them in:\n# .python-version\n\n# pipenv\n#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.\n#   However, in case of collaboration, if having platform-specific dependencies or dependencies\n#   having no cross-platform support, pipenv may install dependencies that don't work, or not\n#   install all needed dependencies.\n#Pipfile.lock\n\n# poetry\n#   Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.\n#   This is especially recommended for binary packages to ensure reproducibility, and is more\n#   commonly ignored for libraries.\n#   
https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control\n#poetry.lock\n\n# PEP 582; used by e.g. github.com/David-OConnor/pyflow\n__pypackages__/\n\n# Celery stuff\ncelerybeat-schedule\ncelerybeat.pid\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n.dmypy.json\ndmypy.json\n\n# Pyre type checker\n.pyre/\n\n# pytype static type analyzer\n.pytype/\n\n# Cython debug symbols\ncython_debug/\n\n# PyCharm\n#  JetBrains specific template is maintainted in a separate JetBrains.gitignore that can\n#  be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore\n#  and can be added to the global gitignore or merged into this file.  For a more nuclear\n#  option (not recommended) you can uncomment the following to ignore the entire idea folder.\n#.idea/\n\n\n# Temporary eval output\nevals_data\n\n\n# Temporary video evals (Remove before merge)\nvideo_evals\n"
  },
  {
    "path": "backend/.pre-commit-config.yaml",
    "content": "# See https://pre-commit.com for more information\n# See https://pre-commit.com/hooks.html for more hooks\nrepos:\n  - repo: https://github.com/pre-commit/pre-commit-hooks\n    rev: v3.2.0\n    hooks:\n      # - id: end-of-file-fixer\n      - id: check-yaml\n      - id: check-added-large-files\n  # - repo: local\n  #   hooks:\n  #     - id: poetry-pytest\n  #       name: Run pytest with Poetry\n  #       entry: poetry run --directory backend pytest\n  #       language: system\n  #       pass_filenames: false\n  #       always_run: true\n  #       files: ^backend/\n  #     # - id: poetry-pyright\n  #     #   name: Run pyright with Poetry\n  #     #   entry: poetry run --directory backend pyright\n  #     #   language: system\n  #     #   pass_filenames: false\n  #     #   always_run: true\n  #     #   files: ^backend/\n"
  },
  {
    "path": "backend/Dockerfile",
    "content": "FROM python:3.12.3-slim-bullseye\n\nENV POETRY_VERSION 1.8.0\n\n# Install system dependencies\nRUN pip install \"poetry==$POETRY_VERSION\"\n\n# Set work directory\nWORKDIR /app\n\n# Copy only requirements to cache them in docker layer\nCOPY poetry.lock pyproject.toml /app/\n\n# Disable the creation of virtual environments\nRUN poetry config virtualenvs.create false\n\n# Install dependencies\nRUN poetry install\n\n# Copy the current directory contents into the container at /app\nCOPY ./ /app/\n"
  },
  {
    "path": "backend/README.md",
    "content": "# Run the type checker\n\npoetry run pyright\n\n# Run tests\n\npoetry run pytest\n\n## Prompt Summary\n\nUse `print_prompt_summary` from `utils.py` to quickly visualize prompts:\n\n```python\nfrom utils import print_prompt_summary\nprint_prompt_summary(prompt_messages)\n```\n"
  },
  {
    "path": "backend/agent/engine.py",
    "content": "import asyncio\nimport uuid\nfrom typing import Any, Awaitable, Callable, Dict, List, Optional\n\nfrom openai.types.chat import ChatCompletionMessageParam\n\nfrom codegen.utils import extract_html_content\nfrom llm import Llm\n\nfrom agent.providers.base import ExecutedToolCall, ProviderSession, StreamEvent\nfrom agent.providers.factory import create_provider_session\nfrom agent.state import AgentFileState, seed_file_state_from_messages\nfrom agent.tools import (\n    AgentToolRuntime,\n    extract_content_from_args,\n    extract_path_from_args,\n    summarize_text,\n    summarize_tool_input,\n)\n\n\nclass AgentEngine:\n    def __init__(\n        self,\n        send_message: Callable[\n            [str, Optional[str], int, Optional[Dict[str, Any]], Optional[str]],\n            Awaitable[None],\n        ],\n        variant_index: int,\n        openai_api_key: Optional[str],\n        openai_base_url: Optional[str],\n        anthropic_api_key: Optional[str],\n        gemini_api_key: Optional[str],\n        should_generate_images: bool,\n        initial_file_state: Optional[Dict[str, str]] = None,\n        option_codes: Optional[List[str]] = None,\n    ):\n        self.send_message = send_message\n        self.variant_index = variant_index\n        self.openai_api_key = openai_api_key\n        self.openai_base_url = openai_base_url\n        self.anthropic_api_key = anthropic_api_key\n        self.gemini_api_key = gemini_api_key\n        self.should_generate_images = should_generate_images\n\n        self.file_state = AgentFileState()\n        if initial_file_state and initial_file_state.get(\"content\"):\n            self.file_state.path = initial_file_state.get(\"path\") or \"index.html\"\n            self.file_state.content = initial_file_state[\"content\"]\n\n        self.tool_runtime = AgentToolRuntime(\n            file_state=self.file_state,\n            should_generate_images=should_generate_images,\n            openai_api_key=openai_api_key,\n   
         openai_base_url=openai_base_url,\n            option_codes=option_codes,\n        )\n        self._tool_preview_lengths: Dict[str, int] = {}\n\n    def _next_event_id(self, prefix: str) -> str:\n        return f\"{prefix}-{self.variant_index}-{uuid.uuid4().hex[:8]}\"\n\n    async def _send(\n        self,\n        msg_type: str,\n        value: Optional[str] = None,\n        data: Optional[Dict[str, Any]] = None,\n        event_id: Optional[str] = None,\n    ) -> None:\n        await self.send_message(msg_type, value, self.variant_index, data, event_id)\n\n    def _mark_preview_length(self, tool_event_id: Optional[str], length: int) -> None:\n        if not tool_event_id:\n            return\n        current = self._tool_preview_lengths.get(tool_event_id, 0)\n        if length > current:\n            self._tool_preview_lengths[tool_event_id] = length\n\n    async def _stream_code_preview(self, tool_event_id: Optional[str], content: str) -> None:\n        if not tool_event_id or not content:\n            return\n\n        already_sent = self._tool_preview_lengths.get(tool_event_id, 0)\n        total_len = len(content)\n        if already_sent >= total_len:\n            return\n\n        max_chunks = 18\n        min_step = 200\n        step = max(min_step, total_len // max_chunks)\n        start = already_sent if already_sent > 0 else 0\n\n        for end in range(start + step, total_len, step):\n            await self._send(\"setCode\", content[:end])\n            self._mark_preview_length(tool_event_id, end)\n            await asyncio.sleep(0.01)\n\n        await self._send(\"setCode\", content)\n        self._mark_preview_length(tool_event_id, total_len)\n\n    async def _handle_streamed_tool_delta(\n        self,\n        event: StreamEvent,\n        started_tool_ids: set[str],\n        streamed_lengths: Dict[str, int],\n    ) -> None:\n        if event.type != \"tool_call_delta\":\n            return\n        if event.tool_name != \"create_file\":\n     
       return\n        if not event.tool_call_id:\n            return\n\n        content = extract_content_from_args(event.tool_arguments)\n        if content is None:\n            return\n\n        tool_event_id = event.tool_call_id\n        if tool_event_id not in started_tool_ids:\n            path = (\n                extract_path_from_args(event.tool_arguments)\n                or self.file_state.path\n                or \"index.html\"\n            )\n            await self._send(\n                \"toolStart\",\n                data={\n                    \"name\": \"create_file\",\n                    \"input\": {\n                        \"path\": path,\n                        \"contentLength\": len(content),\n                        \"preview\": summarize_text(content, 200),\n                    },\n                },\n                event_id=tool_event_id,\n            )\n            started_tool_ids.add(tool_event_id)\n\n        last_len = streamed_lengths.get(tool_event_id, 0)\n        if last_len == 0 and content:\n            streamed_lengths[tool_event_id] = len(content)\n            await self._send(\"setCode\", content)\n            self._mark_preview_length(tool_event_id, len(content))\n        elif len(content) - last_len >= 40:\n            streamed_lengths[tool_event_id] = len(content)\n            await self._send(\"setCode\", content)\n            self._mark_preview_length(tool_event_id, len(content))\n\n    async def _run_with_session(self, session: ProviderSession) -> str:\n        max_steps = 20\n\n        for _ in range(max_steps):\n            assistant_event_id = self._next_event_id(\"assistant\")\n            thinking_event_id = self._next_event_id(\"thinking\")\n            started_tool_ids: set[str] = set()\n            streamed_lengths: Dict[str, int] = {}\n\n            async def on_event(event: StreamEvent) -> None:\n                if event.type == \"assistant_delta\":\n                    if event.text:\n                       
 await self._send(\n                            \"assistant\",\n                            event.text,\n                            event_id=assistant_event_id,\n                        )\n                    return\n\n                if event.type == \"thinking_delta\":\n                    if event.text:\n                        await self._send(\n                            \"thinking\",\n                            event.text,\n                            event_id=thinking_event_id,\n                        )\n                    return\n\n                if event.type == \"tool_call_delta\":\n                    await self._handle_streamed_tool_delta(\n                        event,\n                        started_tool_ids,\n                        streamed_lengths,\n                    )\n\n            turn = await session.stream_turn(on_event)\n\n            if not turn.tool_calls:\n                return await self._finalize_response(turn.assistant_text)\n\n            executed_tool_calls: List[ExecutedToolCall] = []\n            for tool_call in turn.tool_calls:\n                tool_event_id = tool_call.id or self._next_event_id(\"tool\")\n                if tool_event_id not in started_tool_ids:\n                    await self._send(\n                        \"toolStart\",\n                        data={\n                            \"name\": tool_call.name,\n                            \"input\": summarize_tool_input(tool_call, self.file_state),\n                        },\n                        event_id=tool_event_id,\n                    )\n\n                if tool_call.name == \"create_file\":\n                    content = extract_content_from_args(tool_call.arguments)\n                    if content:\n                        await self._stream_code_preview(tool_event_id, content)\n\n                tool_result = await self.tool_runtime.execute(tool_call)\n                if tool_result.updated_content:\n                    await 
self._send(\"setCode\", tool_result.updated_content)\n\n                await self._send(\n                    \"toolResult\",\n                    data={\n                        \"name\": tool_call.name,\n                        \"output\": tool_result.summary,\n                        \"ok\": tool_result.ok,\n                    },\n                    event_id=tool_event_id,\n                )\n                executed_tool_calls.append(\n                    ExecutedToolCall(tool_call=tool_call, result=tool_result)\n                )\n\n            session.append_tool_results(turn, executed_tool_calls)\n\n        raise Exception(\"Agent exceeded max tool turns\")\n\n    async def run(self, model: Llm, prompt_messages: List[ChatCompletionMessageParam]) -> str:\n        seed_file_state_from_messages(self.file_state, prompt_messages)\n\n        session = create_provider_session(\n            model=model,\n            prompt_messages=prompt_messages,\n            should_generate_images=self.should_generate_images,\n            openai_api_key=self.openai_api_key,\n            openai_base_url=self.openai_base_url,\n            anthropic_api_key=self.anthropic_api_key,\n            gemini_api_key=self.gemini_api_key,\n        )\n        try:\n            return await self._run_with_session(session)\n        finally:\n            await session.close()\n\n    async def _finalize_response(self, assistant_text: str) -> str:\n        if self.file_state.content:\n            return self.file_state.content\n\n        html = extract_html_content(assistant_text)\n        if html:\n            self.file_state.content = html\n            await self._send(\"setCode\", html)\n\n        return self.file_state.content\n"
  },
  {
    "path": "backend/agent/providers/__init__.py",
    "content": "from agent.providers.anthropic import AnthropicProviderSession, serialize_anthropic_tools\nfrom agent.providers.base import (\n    EventSink,\n    ExecutedToolCall,\n    ProviderSession,\n    ProviderTurn,\n    StreamEvent,\n)\nfrom agent.providers.factory import create_provider_session\nfrom agent.providers.gemini import GeminiProviderSession, serialize_gemini_tools\nfrom agent.providers.openai import OpenAIProviderSession, parse_event, serialize_openai_tools\n\n__all__ = [\n    \"AnthropicProviderSession\",\n    \"EventSink\",\n    \"ExecutedToolCall\",\n    \"GeminiProviderSession\",\n    \"OpenAIProviderSession\",\n    \"ProviderSession\",\n    \"ProviderTurn\",\n    \"StreamEvent\",\n    \"create_provider_session\",\n    \"parse_event\",\n    \"serialize_anthropic_tools\",\n    \"serialize_gemini_tools\",\n    \"serialize_openai_tools\",\n]\n"
  },
  {
    "path": "backend/agent/providers/anthropic/__init__.py",
    "content": "from agent.providers.anthropic.provider import (\n    AnthropicProviderSession,\n    serialize_anthropic_tools,\n    _extract_anthropic_usage,\n)\n\n__all__ = [\n    \"AnthropicProviderSession\",\n    \"serialize_anthropic_tools\",\n    \"_extract_anthropic_usage\",\n]\n"
  },
  {
    "path": "backend/agent/providers/anthropic/image.py",
    "content": "# pyright: reportUnknownVariableType=false\n\"\"\"Claude-specific image processing.\n\nHandles resizing and compressing images to comply with Claude's vision API\nlimits before sending them as base64-encoded payloads.\n\nComparison with official Anthropic docs\n(https://docs.anthropic.com/en/docs/build-with-claude/vision):\n\n  Aligned:\n    - 5 MB per-image size limit matches the documented API maximum.\n    - Output uses the correct base64 source format (type, media_type, data).\n\n  Divergences:\n    - Max dimension is set to 7990 px as a safety margin; the API rejects at\n      8000 px.  This is intentionally conservative.\n    - The docs note that when >20 images are sent in a single request the\n      per-image limit drops to 2000x2000 px.  We do not enforce that stricter\n      limit here (the app typically sends far fewer images).\n    - JPEG conversion drops alpha channels, which is acceptable for website\n      screenshots but would degrade transparent PNGs.\n\n  Recommendation:\n    The docs recommend resizing to 1568 px on the long edge (~1.15 megapixels)\n    for optimal time-to-first-token.  Images above that threshold are resized\n    server-side anyway, so sending larger images only adds latency and\n    bandwidth cost with no quality benefit.  Consider lowering\n    CLAUDE_MAX_IMAGE_DIMENSION to 1568.\n\"\"\"\n\nimport base64\nimport io\nimport time\n\nfrom PIL import Image\n\n# Hard API limit: 5 MB per image (base64-encoded).\nCLAUDE_IMAGE_MAX_SIZE = 5 * 1024 * 1024\n\n# API rejects images wider or taller than 8000 px.  We use 7990 as a safety\n# margin.  
Note: the docs recommend 1568 px for best latency (see module\n# docstring).\nCLAUDE_MAX_IMAGE_DIMENSION = 7990\n\n\ndef process_image(image_data_url: str) -> tuple[str, str]:\n    \"\"\"Resize / compress a data-URL image to fit Claude's vision limits.\n\n    Returns (media_type, base64_data) suitable for an ``image`` content block.\n    \"\"\"\n    media_type = image_data_url.split(\";\")[0].split(\":\")[1]\n    base64_data = image_data_url.split(\",\")[1]\n    image_bytes = base64.b64decode(base64_data)\n\n    img = Image.open(io.BytesIO(image_bytes))\n\n    is_under_dimension_limit = (\n        img.width < CLAUDE_MAX_IMAGE_DIMENSION\n        and img.height < CLAUDE_MAX_IMAGE_DIMENSION\n    )\n    is_under_size_limit = len(base64_data) <= CLAUDE_IMAGE_MAX_SIZE\n\n    if is_under_dimension_limit and is_under_size_limit:\n        return (media_type, base64_data)\n\n    start_time = time.time()\n\n    if not is_under_dimension_limit:\n        if img.width > img.height:\n            new_width = CLAUDE_MAX_IMAGE_DIMENSION\n            new_height = int((CLAUDE_MAX_IMAGE_DIMENSION / img.width) * img.height)\n        else:\n            new_height = CLAUDE_MAX_IMAGE_DIMENSION\n            new_width = int((CLAUDE_MAX_IMAGE_DIMENSION / img.height) * img.width)\n\n        # LANCZOS is a resampling filter; DEFAULT_STRATEGY is a PNG zlib\n        # constant and is not valid here.\n        img = img.resize((new_width, new_height), Image.LANCZOS)\n\n    quality = 95\n    output = io.BytesIO()\n    img = img.convert(\"RGB\")\n    img.save(output, format=\"JPEG\", quality=quality)\n\n    while (\n        len(base64.b64encode(output.getvalue())) > CLAUDE_IMAGE_MAX_SIZE\n        and quality > 10\n    ):\n        # Lower quality before re-encoding; otherwise the first retry\n        # re-saves at the same quality and cannot shrink the image.\n        quality -= 5\n        output = io.BytesIO()\n        img.save(output, format=\"JPEG\", quality=quality)\n\n    end_time = time.time()\n    processing_time = end_time - start_time\n    print(f\"[CLAUDE IMAGE PROCESSING] processing time: {processing_time:.2f} seconds\")\n\n    return (\"image/jpeg\", base64.b64encode(output.getvalue()).decode(\"utf-8\"))\n"
  },
  {
    "path": "backend/agent/providers/anthropic/provider.py",
    "content": "# pyright: reportUnknownVariableType=false\nimport copy\nimport json\nimport uuid\nfrom dataclasses import dataclass, field\nfrom typing import Any, Dict, List, cast\n\nfrom anthropic import AsyncAnthropic\nfrom openai.types.chat import ChatCompletionMessageParam\n\nfrom agent.providers.base import (\n    EventSink,\n    ExecutedToolCall,\n    ProviderSession,\n    ProviderTurn,\n    StreamEvent,\n)\nfrom agent.providers.anthropic.image import process_image\nfrom agent.providers.pricing import MODEL_PRICING\nfrom agent.providers.token_usage import TokenUsage\nfrom agent.tools import CanonicalToolDefinition, ToolCall, parse_json_arguments\nfrom llm import Llm\n\nTHINKING_MODELS = {\n    Llm.CLAUDE_4_5_SONNET_2025_09_29.value,\n    Llm.CLAUDE_4_5_OPUS_2025_11_01.value,\n}\nADAPTIVE_THINKING_MODELS = {\n    Llm.CLAUDE_OPUS_4_6.value,\n    Llm.CLAUDE_SONNET_4_6.value,\n}\n\n\ndef _convert_openai_messages_to_claude(\n    messages: List[ChatCompletionMessageParam],\n) -> tuple[str, List[Dict[str, Any]]]:\n    cloned_messages = copy.deepcopy(messages)\n\n    system_prompt = cast(str, cloned_messages[0].get(\"content\"))\n    claude_messages = [dict(message) for message in cloned_messages[1:]]\n\n    for message in claude_messages:\n        if not isinstance(message[\"content\"], list):\n            continue\n\n        for content in message[\"content\"]:  # type: ignore\n            if content[\"type\"] != \"image_url\":\n                continue\n\n            content[\"type\"] = \"image\"\n            image_data_url = cast(str, content[\"image_url\"][\"url\"])\n            media_type, base64_data = process_image(image_data_url)\n            del content[\"image_url\"]\n            content[\"source\"] = {\n                \"type\": \"base64\",\n                \"media_type\": media_type,\n                \"data\": base64_data,\n            }\n\n    return system_prompt, claude_messages\n\n\ndef serialize_anthropic_tools(\n    tools: 
List[CanonicalToolDefinition],\n) -> List[Dict[str, Any]]:\n    return [\n        {\n            \"name\": tool.name,\n            \"description\": tool.description,\n            \"eager_input_streaming\": True,\n            \"input_schema\": copy.deepcopy(tool.parameters),\n        }\n        for tool in tools\n    ]\n\n\n@dataclass\nclass AnthropicParseState:\n    assistant_text: str = \"\"\n    tool_blocks: Dict[int, Dict[str, Any]] = field(default_factory=dict)\n    tool_json_buffers: Dict[int, str] = field(default_factory=dict)\n\n\nasync def _parse_stream_event(\n    event: Any,\n    state: AnthropicParseState,\n    on_event: EventSink,\n) -> None:\n    if event.type == \"content_block_start\":\n        block = event.content_block\n        if getattr(block, \"type\", None) != \"tool_use\":\n            return\n\n        tool_id = getattr(block, \"id\", None) or f\"tool-{uuid.uuid4().hex[:6]}\"\n        tool_name = getattr(block, \"name\", None) or \"unknown_tool\"\n        args = getattr(block, \"input\", None)\n        state.tool_blocks[event.index] = {\n            \"id\": tool_id,\n            \"name\": tool_name,\n        }\n        state.tool_json_buffers[event.index] = \"\"\n        if args:\n            await on_event(\n                StreamEvent(\n                    type=\"tool_call_delta\",\n                    tool_call_id=tool_id,\n                    tool_name=tool_name,\n                    tool_arguments=args,\n                )\n            )\n        return\n\n    if event.type != \"content_block_delta\":\n        return\n\n    if event.delta.type == \"thinking_delta\":\n        await on_event(StreamEvent(type=\"thinking_delta\", text=event.delta.thinking))\n        return\n\n    if event.delta.type == \"text_delta\":\n        state.assistant_text += event.delta.text\n        await on_event(StreamEvent(type=\"assistant_delta\", text=event.delta.text))\n        return\n\n    if event.delta.type != \"input_json_delta\":\n        return\n\n    
partial_json = getattr(event.delta, \"partial_json\", None) or \"\"\n    if not partial_json:\n        return\n\n    buffer = state.tool_json_buffers.get(event.index, \"\") + partial_json\n    state.tool_json_buffers[event.index] = buffer\n    meta = state.tool_blocks.get(event.index)\n    if not meta:\n        return\n\n    await on_event(\n        StreamEvent(\n            type=\"tool_call_delta\",\n            tool_call_id=meta.get(\"id\"),\n            tool_name=meta.get(\"name\"),\n            tool_arguments=buffer,\n        )\n    )\n\n\ndef _extract_tool_calls(final_message: Any) -> List[ToolCall]:\n    tool_calls: List[ToolCall] = []\n    if final_message and final_message.content:\n        for block in final_message.content:\n            if block.type != \"tool_use\":\n                continue\n            raw_input = getattr(block, \"input\", {})\n            args: Dict[str, Any]\n            if isinstance(raw_input, dict):\n                args = cast(Dict[str, Any], raw_input)\n            else:\n                parsed, error = parse_json_arguments(raw_input)\n                if error:\n                    args = {\"INVALID_JSON\": str(raw_input)}\n                else:\n                    args = parsed\n            tool_calls.append(\n                ToolCall(\n                    id=block.id,\n                    name=block.name,\n                    arguments=args,\n                )\n            )\n    return tool_calls\n\n\ndef _extract_anthropic_usage(final_message: Any) -> TokenUsage:\n    \"\"\"Extract unified token usage from an Anthropic final message.\n\n    Anthropic includes thinking tokens in ``output_tokens`` so no extra\n    addition is needed.  
``total`` is computed since the API doesn't provide it.\n    \"\"\"\n    usage = getattr(final_message, \"usage\", None)\n    if usage is None:\n        return TokenUsage()\n    input_tokens = getattr(usage, \"input_tokens\", 0) or 0\n    output_tokens = getattr(usage, \"output_tokens\", 0) or 0\n    cache_read = getattr(usage, \"cache_read_input_tokens\", 0) or 0\n    cache_write = getattr(usage, \"cache_creation_input_tokens\", 0) or 0\n    return TokenUsage(\n        input=input_tokens,\n        output=output_tokens,\n        cache_read=cache_read,\n        cache_write=cache_write,\n        total=input_tokens + output_tokens + cache_read + cache_write,\n    )\n\n\nclass AnthropicProviderSession(ProviderSession):\n    def __init__(\n        self,\n        client: AsyncAnthropic,\n        model: Llm,\n        prompt_messages: List[ChatCompletionMessageParam],\n        tools: List[Dict[str, Any]],\n    ):\n        self._client = client\n        self._model = model\n        self._tools = tools\n        self._total_usage = TokenUsage()\n        system_prompt, claude_messages = _convert_openai_messages_to_claude(prompt_messages)\n        self._system_prompt = system_prompt\n        self._messages = claude_messages\n\n    async def stream_turn(self, on_event: EventSink) -> ProviderTurn:\n        stream_kwargs: Dict[str, Any] = {\n            \"model\": self._model.value,\n            \"max_tokens\": 50000,\n            \"system\": self._system_prompt,\n            \"messages\": self._messages,\n            \"tools\": self._tools,\n            \"cache_control\": {\"type\": \"ephemeral\"},\n        }\n\n        if self._model.value in ADAPTIVE_THINKING_MODELS:\n            stream_kwargs[\"thinking\"] = {\n                \"type\": \"adaptive\",\n            }\n            effort = (\n                \"high\"\n                if self._model.value == Llm.CLAUDE_SONNET_4_6.value\n                else \"max\"\n            )\n            stream_kwargs[\"output_config\"] = 
{\"effort\": effort}\n        elif self._model.value in THINKING_MODELS:\n            stream_kwargs[\"thinking\"] = {\n                \"type\": \"enabled\",\n                \"budget_tokens\": 10000,\n            }\n        else:\n            stream_kwargs[\"temperature\"] = 0.0\n\n        state = AnthropicParseState()\n        async with self._client.messages.stream(**stream_kwargs) as stream:\n            async for event in stream:\n                await _parse_stream_event(event, state, on_event)\n            final_message = await stream.get_final_message()\n\n        self._total_usage.accumulate(_extract_anthropic_usage(final_message))\n\n        tool_calls = _extract_tool_calls(final_message)\n        return ProviderTurn(\n            assistant_text=state.assistant_text,\n            tool_calls=tool_calls,\n            assistant_turn=final_message,\n        )\n\n    def append_tool_results(\n        self,\n        turn: ProviderTurn,\n        executed_tool_calls: list[ExecutedToolCall],\n    ) -> None:\n        assistant_blocks: List[Dict[str, Any]] = []\n        if turn.assistant_text:\n            assistant_blocks.append({\"type\": \"text\", \"text\": turn.assistant_text})\n\n        for call in turn.tool_calls:\n            assistant_blocks.append(\n                {\n                    \"type\": \"tool_use\",\n                    \"id\": call.id,\n                    \"name\": call.name,\n                    \"input\": call.arguments,\n                }\n            )\n\n        self._messages.append({\"role\": \"assistant\", \"content\": assistant_blocks})\n\n        tool_result_blocks: List[Dict[str, Any]] = []\n        for executed in executed_tool_calls:\n            tool_result_blocks.append(\n                {\n                    \"type\": \"tool_result\",\n                    \"tool_use_id\": executed.tool_call.id,\n                    \"content\": json.dumps(executed.result.result),\n                    \"is_error\": not executed.result.ok,\n    
            }\n            )\n\n        self._messages.append({\"role\": \"user\", \"content\": tool_result_blocks})\n\n    async def close(self) -> None:\n        u = self._total_usage\n        model_name = self._model.value\n        pricing = MODEL_PRICING.get(model_name)\n        cost_str = f\" cost=${u.cost(pricing):.4f}\" if pricing else \"\"\n        cache_hit_rate_str = f\" cache_hit_rate={u.cache_hit_rate_percent():.2f}%\"\n        print(\n            f\"[TOKEN USAGE] provider=anthropic model={model_name} | \"\n            f\"input={u.input} output={u.output} \"\n            f\"cache_read={u.cache_read} cache_write={u.cache_write} \"\n            f\"total={u.total}{cache_hit_rate_str}{cost_str}\"\n        )\n        await self._client.close()\n"
  },
  {
    "path": "backend/agent/providers/base.py",
    "content": "from dataclasses import dataclass\nfrom typing import Any, Awaitable, Callable, Literal, Optional, Protocol\n\nfrom agent.tools import ToolCall, ToolExecutionResult\n\n\nStreamEventType = Literal[\n    \"assistant_delta\",\n    \"thinking_delta\",\n    \"tool_call_delta\",\n]\n\n\n@dataclass\nclass StreamEvent:\n    type: StreamEventType\n    text: str = \"\"\n    tool_call_id: Optional[str] = None\n    tool_name: Optional[str] = None\n    tool_arguments: Any = None\n\n\n@dataclass\nclass ProviderTurn:\n    assistant_text: str\n    tool_calls: list[ToolCall]\n    # Provider-native assistant turn object required to continue the conversation.\n    assistant_turn: Any = None\n\n\n@dataclass\nclass ExecutedToolCall:\n    tool_call: ToolCall\n    result: ToolExecutionResult\n\n\nEventSink = Callable[[StreamEvent], Awaitable[None]]\n\n\nclass ProviderSession(Protocol):\n    async def stream_turn(self, on_event: EventSink) -> ProviderTurn:\n        ...\n\n    def append_tool_results(\n        self,\n        turn: ProviderTurn,\n        executed_tool_calls: list[ExecutedToolCall],\n    ) -> None:\n        ...\n\n    async def close(self) -> None:\n        ...\n"
  },
  {
    "path": "backend/agent/providers/factory.py",
    "content": "from typing import Optional\n\nfrom anthropic import AsyncAnthropic\nfrom google import genai\nfrom openai import AsyncOpenAI\nfrom openai.types.chat import ChatCompletionMessageParam\n\nfrom agent.providers.anthropic import AnthropicProviderSession, serialize_anthropic_tools\nfrom agent.providers.base import ProviderSession\nfrom agent.providers.gemini import GeminiProviderSession, serialize_gemini_tools\nfrom agent.providers.openai import OpenAIProviderSession, serialize_openai_tools\nfrom agent.tools import canonical_tool_definitions\nfrom llm import ANTHROPIC_MODELS, GEMINI_MODELS, OPENAI_MODELS, Llm\n\n\ndef create_provider_session(\n    model: Llm,\n    prompt_messages: list[ChatCompletionMessageParam],\n    should_generate_images: bool,\n    openai_api_key: Optional[str],\n    openai_base_url: Optional[str],\n    anthropic_api_key: Optional[str],\n    gemini_api_key: Optional[str],\n) -> ProviderSession:\n    canonical_tools = canonical_tool_definitions(\n        image_generation_enabled=should_generate_images\n    )\n\n    if model in OPENAI_MODELS:\n        if not openai_api_key:\n            raise Exception(\"OpenAI API key is missing.\")\n\n        client = AsyncOpenAI(api_key=openai_api_key, base_url=openai_base_url)\n        return OpenAIProviderSession(\n            client=client,\n            model=model,\n            prompt_messages=prompt_messages,\n            tools=serialize_openai_tools(canonical_tools),\n        )\n\n    if model in ANTHROPIC_MODELS:\n        if not anthropic_api_key:\n            raise Exception(\"Anthropic API key is missing.\")\n\n        client = AsyncAnthropic(api_key=anthropic_api_key)\n        return AnthropicProviderSession(\n            client=client,\n            model=model,\n            prompt_messages=prompt_messages,\n            tools=serialize_anthropic_tools(canonical_tools),\n        )\n\n    if model in GEMINI_MODELS:\n        if not gemini_api_key:\n            raise Exception(\"Gemini API 
key is missing.\")\n\n        client = genai.Client(api_key=gemini_api_key)\n        return GeminiProviderSession(\n            client=client,\n            model=model,\n            prompt_messages=prompt_messages,\n            tools=serialize_gemini_tools(canonical_tools),\n        )\n\n    raise ValueError(f\"Unsupported model: {model.value}\")\n"
  },
  {
    "path": "backend/agent/providers/gemini.py",
    "content": "# pyright: reportUnknownVariableType=false\nimport base64\nimport copy\nimport uuid\nfrom dataclasses import dataclass, field\nfrom typing import Any, Dict, List, cast\n\nfrom google import genai\nfrom google.genai import types\nfrom openai.types.chat import ChatCompletionMessageParam\n\nfrom agent.providers.base import (\n    EventSink,\n    ExecutedToolCall,\n    ProviderSession,\n    ProviderTurn,\n    StreamEvent,\n)\nfrom agent.providers.pricing import MODEL_PRICING\nfrom agent.providers.token_usage import TokenUsage\nfrom agent.tools import CanonicalToolDefinition, ToolCall\nfrom llm import Llm\n\n\nDEFAULT_VIDEO_FPS = 10\n\n\ndef serialize_gemini_tools(tools: List[CanonicalToolDefinition]) -> List[types.Tool]:\n    declarations = [\n        types.FunctionDeclaration(\n            name=tool.name,\n            description=tool.description,\n            parameters_json_schema=copy.deepcopy(tool.parameters),\n        )\n        for tool in tools\n    ]\n    return [types.Tool(function_declarations=declarations)]\n\n\ndef _get_gemini_api_model_name(model: Llm) -> str:\n    if model in [Llm.GEMINI_3_FLASH_PREVIEW_HIGH, Llm.GEMINI_3_FLASH_PREVIEW_MINIMAL]:\n        return \"gemini-3-flash-preview\"\n    if model in [\n        Llm.GEMINI_3_1_PRO_PREVIEW_HIGH,\n        Llm.GEMINI_3_1_PRO_PREVIEW_MEDIUM,\n        Llm.GEMINI_3_1_PRO_PREVIEW_LOW,\n    ]:\n        return \"gemini-3.1-pro-preview\"\n    return model.value\n\n\ndef _get_thinking_level_for_model(model: Llm) -> str:\n    if model in [\n        Llm.GEMINI_3_FLASH_PREVIEW_HIGH,\n        Llm.GEMINI_3_1_PRO_PREVIEW_HIGH,\n    ]:\n        return \"high\"\n    if model == Llm.GEMINI_3_1_PRO_PREVIEW_LOW:\n        return \"low\"\n    if model == Llm.GEMINI_3_1_PRO_PREVIEW_MEDIUM:\n        return \"medium\"\n    if model == Llm.GEMINI_3_FLASH_PREVIEW_MINIMAL:\n        return \"minimal\"\n    return \"high\"\n\n\ndef _extract_text_from_content(content: str | List[Dict[str, Any]]) -> str:\n    if 
isinstance(content, str):\n        return content\n\n    for content_part in content:\n        if content_part.get(\"type\") == \"text\":\n            return content_part.get(\"text\", \"\")\n\n    return \"\"\n\n\ndef _detect_mime_type_from_base64(base64_data: str) -> str | None:\n    try:\n        decoded = base64.b64decode(base64_data[:32])\n\n        if decoded[:8] == b\"\\x89PNG\\r\\n\\x1a\\n\":\n            return \"image/png\"\n        if decoded[:2] == b\"\\xff\\xd8\":\n            return \"image/jpeg\"\n        if decoded[:6] in (b\"GIF87a\", b\"GIF89a\"):\n            return \"image/gif\"\n        if decoded[:4] == b\"RIFF\" and decoded[8:12] == b\"WEBP\":\n            return \"image/webp\"\n\n        if decoded[4:8] == b\"ftyp\":\n            return \"video/mp4\"\n        if decoded[:4] == b\"\\x1aE\\xdf\\xa3\":\n            return \"video/webm\"\n    except Exception:\n        pass\n\n    return None\n\n\ndef _extract_images_from_content(content: str | List[Dict[str, Any]]) -> List[Dict[str, str]]:\n    if isinstance(content, str):\n        return []\n\n    images: List[Dict[str, str]] = []\n    for content_part in content:\n        if content_part.get(\"type\") != \"image_url\":\n            continue\n\n        image_url = content_part[\"image_url\"][\"url\"]\n        if image_url.startswith(\"data:\"):\n            mime_type = image_url.split(\";\")[0].split(\":\")[1]\n            base64_data = image_url.split(\",\")[1]\n\n            if mime_type == \"application/octet-stream\":\n                detected_mime = _detect_mime_type_from_base64(base64_data)\n                if detected_mime:\n                    mime_type = detected_mime\n                else:\n                    print(\"Warning: Could not detect MIME type for data URL, skipping\")\n                    continue\n\n            images.append({\"mime_type\": mime_type, \"data\": base64_data})\n            continue\n\n        images.append({\"uri\": image_url})\n\n    return images\n\n\ndef 
_convert_message_to_gemini_content(\n    message: ChatCompletionMessageParam,\n) -> types.Content:\n    role = message.get(\"role\", \"user\")\n    content = message.get(\"content\", \"\")\n    gemini_role = \"model\" if role == \"assistant\" else \"user\"\n\n    parts: List[types.Part | Dict[str, str]] = []\n\n    text = _extract_text_from_content(content)  # type: ignore\n    image_data_list = _extract_images_from_content(content)  # type: ignore\n\n    if text:\n        parts.append({\"text\": text})\n\n    for image_data in image_data_list:\n        if \"data\" in image_data:\n            mime_type = image_data[\"mime_type\"]\n            media_bytes = base64.b64decode(image_data[\"data\"])\n            if mime_type.startswith(\"video/\"):\n                parts.append(\n                    types.Part(\n                        inline_data=types.Blob(data=media_bytes, mime_type=mime_type),\n                        video_metadata=types.VideoMetadata(fps=DEFAULT_VIDEO_FPS),\n                        media_resolution=types.PartMediaResolutionLevel.MEDIA_RESOLUTION_HIGH,\n                    )\n                )\n                continue\n\n            parts.append(\n                types.Part.from_bytes(\n                    data=media_bytes,\n                    mime_type=mime_type,\n                    media_resolution=types.PartMediaResolutionLevel.MEDIA_RESOLUTION_ULTRA_HIGH,\n                )\n            )\n            continue\n\n        if \"uri\" in image_data:\n            parts.append({\"file_uri\": image_data[\"uri\"]})\n\n    return types.Content(role=gemini_role, parts=parts)  # type: ignore\n\n\n@dataclass\nclass GeminiParseState:\n    assistant_text: str = \"\"\n    tool_calls: List[ToolCall] = field(default_factory=list)\n    model_parts: List[types.Part] = field(default_factory=list)\n    model_role: str = \"model\"\n\n\ndef _extract_usage(chunk: types.GenerateContentResponse) -> TokenUsage | None:\n    \"\"\"Extract unified token usage from a 
Gemini streaming chunk.\n\n    Gemini reports thinking tokens separately; they are folded into ``output``\n    to match the unified schema used by the other providers.\n\n    ``prompt_token_count`` *includes* ``cached_content_token_count``, so we\n    subtract cached tokens to get the non-cached input count (same approach\n    as the OpenAI provider).\n    \"\"\"\n    meta = chunk.usage_metadata\n    if meta is None:\n        return None\n    candidates = meta.candidates_token_count or 0\n    thoughts = meta.thoughts_token_count or 0\n    prompt_tokens = meta.prompt_token_count or 0\n    cached_tokens = meta.cached_content_token_count or 0\n    return TokenUsage(\n        input=prompt_tokens - cached_tokens,\n        output=candidates + thoughts,\n        cache_read=cached_tokens,\n        cache_write=0,\n        total=meta.total_token_count or 0,\n    )\n\n\nasync def _parse_chunk(\n    chunk: types.GenerateContentResponse,\n    state: GeminiParseState,\n    on_event: EventSink,\n) -> None:\n    if not chunk.candidates:\n        return\n\n    candidate_content = chunk.candidates[0].content\n    if not candidate_content or not candidate_content.parts:\n        return\n\n    if candidate_content.role:\n        state.model_role = candidate_content.role\n\n    for part in candidate_content.parts:\n        # Preserve each model part as streamed so thought signatures remain attached.\n        state.model_parts.append(part)\n\n        if getattr(part, \"thought\", False) and part.text:\n            await on_event(StreamEvent(type=\"thinking_delta\", text=part.text))\n            continue\n\n        if part.function_call:\n            args = part.function_call.args or {}\n            tool_id = part.function_call.id or f\"tool-{uuid.uuid4().hex[:6]}\"\n            tool_name = part.function_call.name or \"unknown_tool\"\n\n            await on_event(\n                StreamEvent(\n                    type=\"tool_call_delta\",\n                    tool_call_id=tool_id,\n     
               tool_name=tool_name,\n                    tool_arguments=args,\n                )\n            )\n\n            state.tool_calls.append(\n                ToolCall(\n                    id=tool_id,\n                    name=tool_name,\n                    arguments=args,\n                )\n            )\n            continue\n\n        if part.text:\n            state.assistant_text += part.text\n            await on_event(StreamEvent(type=\"assistant_delta\", text=part.text))\n\n\nclass GeminiProviderSession(ProviderSession):\n    def __init__(\n        self,\n        client: genai.Client,\n        model: Llm,\n        prompt_messages: List[ChatCompletionMessageParam],\n        tools: List[types.Tool],\n    ):\n        self._client = client\n        self._model = model\n        self._tools = tools\n        self._total_usage = TokenUsage()\n\n        self._system_prompt = str(prompt_messages[0].get(\"content\", \"\"))\n        self._contents: List[types.Content] = [\n            _convert_message_to_gemini_content(msg) for msg in prompt_messages[1:]\n        ]\n\n    async def stream_turn(self, on_event: EventSink) -> ProviderTurn:\n        thinking_level = _get_thinking_level_for_model(self._model)\n        config = types.GenerateContentConfig(\n            temperature=1.0,\n            max_output_tokens=50000,\n            system_instruction=self._system_prompt,\n            thinking_config=types.ThinkingConfig(\n                thinking_level=cast(Any, thinking_level),\n                include_thoughts=True,\n            ),\n            tools=self._tools,\n        )\n\n        stream = await self._client.aio.models.generate_content_stream(\n            model=_get_gemini_api_model_name(self._model),\n            contents=cast(Any, self._contents),\n            config=config,\n        )\n\n        state = GeminiParseState()\n        turn_usage: TokenUsage | None = None\n        async for chunk in stream:\n            await _parse_chunk(chunk, state, 
on_event)\n            chunk_usage = _extract_usage(chunk)\n            if chunk_usage is not None:\n                turn_usage = chunk_usage\n\n        if turn_usage is not None:\n            self._total_usage.accumulate(turn_usage)\n\n        assistant_turn = (\n            types.Content(role=state.model_role, parts=state.model_parts)\n            if state.model_parts\n            else None\n        )\n\n        return ProviderTurn(\n            assistant_text=state.assistant_text,\n            tool_calls=state.tool_calls,\n            assistant_turn=assistant_turn,\n        )\n\n    def append_tool_results(\n        self,\n        turn: ProviderTurn,\n        executed_tool_calls: list[ExecutedToolCall],\n    ) -> None:\n        model_content = turn.assistant_turn\n        if not isinstance(model_content, types.Content) or not model_content.parts:\n            raise ValueError(\n                \"Gemini step is missing model content. Cannot append tool results without the original model turn.\"\n            )\n\n        self._contents.append(model_content)\n\n        tool_result_parts: List[types.Part] = []\n        for executed in executed_tool_calls:\n            tool_result_parts.append(\n                types.Part.from_function_response(\n                    name=executed.tool_call.name,\n                    response=executed.result.result,\n                )\n            )\n\n        self._contents.append(types.Content(role=\"tool\", parts=tool_result_parts))\n\n    async def close(self) -> None:\n        u = self._total_usage\n        model_name = _get_gemini_api_model_name(self._model)\n        pricing = MODEL_PRICING.get(model_name)\n        cost_str = f\" cost=${u.cost(pricing):.4f}\" if pricing else \"\"\n        cache_hit_rate_str = f\" cache_hit_rate={u.cache_hit_rate_percent():.2f}%\"\n        print(\n            f\"[TOKEN USAGE] provider=gemini model={model_name} | \"\n            f\"input={u.input} output={u.output} \"\n            
f\"cache_read={u.cache_read} cache_write={u.cache_write} \"\n            f\"total={u.total}{cache_hit_rate_str}{cost_str}\"\n        )\n"
  },
  {
    "path": "backend/agent/providers/openai.py",
    "content": "# pyright: reportUnknownVariableType=false\nimport copy\nimport json\nimport uuid\nfrom dataclasses import dataclass, field\nfrom typing import Any, Dict, List\n\nfrom openai import AsyncOpenAI\nfrom openai.types.chat import ChatCompletionMessageParam\n\nfrom agent.providers.base import (\n    EventSink,\n    ExecutedToolCall,\n    ProviderSession,\n    ProviderTurn,\n    StreamEvent,\n)\nfrom agent.providers.pricing import MODEL_PRICING\nfrom agent.providers.token_usage import TokenUsage\nfrom agent.state import ensure_str\nfrom agent.tools import CanonicalToolDefinition, ToolCall, parse_json_arguments\nfrom config import IS_DEBUG_ENABLED\nfrom fs_logging.openai_turn_inputs import OpenAITurnInputLogger\nfrom llm import Llm, get_openai_api_name, get_openai_reasoning_effort\n\n\ndef _convert_message_to_responses_input(\n    message: ChatCompletionMessageParam,\n) -> Dict[str, Any]:\n    role = message.get(\"role\", \"user\")\n    content = message.get(\"content\", \"\")\n\n    if isinstance(content, str):\n        return {\"role\": role, \"content\": content}\n\n    parts: List[Dict[str, Any]] = []\n    if isinstance(content, list):\n        for part in content:\n            if not isinstance(part, dict):\n                continue\n            if part.get(\"type\") == \"text\":\n                parts.append({\"type\": \"input_text\", \"text\": part.get(\"text\", \"\")})\n            elif part.get(\"type\") == \"image_url\":\n                image_url = part.get(\"image_url\", {})\n                parts.append(\n                    {\n                        \"type\": \"input_image\",\n                        \"image_url\": image_url.get(\"url\", \"\"),\n                        \"detail\": image_url.get(\"detail\", \"high\"),\n                    }\n                )\n\n    return {\"role\": role, \"content\": parts}\n\n\ndef _get_event_attr(event: Any, key: str, default: Any = None) -> Any:\n    if hasattr(event, key):\n        return getattr(event, 
key)\n    if isinstance(event, dict):\n        return event.get(key, default)\n    return default\n\n\ndef _copy_schema(schema: Dict[str, Any]) -> Dict[str, Any]:\n    return copy.deepcopy(schema)\n\n\ndef _nullable_type(type_value: Any) -> Any:\n    if isinstance(type_value, list):\n        if \"null\" not in type_value:\n            return [*type_value, \"null\"]\n        return type_value\n    if isinstance(type_value, str):\n        return [type_value, \"null\"]\n    return type_value\n\n\ndef _make_responses_schema_strict(schema: Dict[str, Any]) -> Dict[str, Any]:\n    schema_copy: Dict[str, Any] = _copy_schema(schema)\n\n    def transform(node: Dict[str, Any], in_object_property: bool = False) -> None:\n        node_type = node.get(\"type\")\n\n        if node_type == \"object\":\n            node[\"additionalProperties\"] = False\n            properties = node.get(\"properties\") or {}\n            if isinstance(properties, dict):\n                node[\"required\"] = list(properties.keys())\n                for prop in properties.values():\n                    if isinstance(prop, dict):\n                        transform(prop, in_object_property=True)\n            return\n\n        if node_type == \"array\":\n            if in_object_property:\n                node[\"type\"] = _nullable_type(node_type)\n            items = node.get(\"items\")\n            if isinstance(items, dict):\n                transform(items, in_object_property=False)\n            return\n\n        if in_object_property and node_type is not None:\n            node[\"type\"] = _nullable_type(node_type)\n\n    transform(schema_copy, in_object_property=False)\n    return schema_copy\n\n\ndef serialize_openai_tools(\n    tools: List[CanonicalToolDefinition],\n) -> List[Dict[str, Any]]:\n    serialized: List[Dict[str, Any]] = []\n    for tool in tools:\n        schema = _make_responses_schema_strict(tool.parameters)\n        serialized.append(\n            {\n                \"type\": 
\"function\",\n                \"name\": tool.name,\n                \"description\": tool.description,\n                \"parameters\": schema,\n                \"strict\": True,\n            }\n        )\n    return serialized\n@dataclass\nclass OpenAIResponsesParseState:\n    assistant_text: str = \"\"\n    tool_calls: Dict[str, Dict[str, Any]] = field(default_factory=dict)\n    item_to_call_id: Dict[str, str] = field(default_factory=dict)\n    output_items_by_index: Dict[int, Dict[str, Any]] = field(default_factory=dict)\n    saw_reasoning_summary_text_delta: bool = False\n    last_emitted_reasoning_summary_part: str = \"\"\n    turn_usage: TokenUsage | None = None\n\n\ndef _extract_openai_usage(response: Any) -> TokenUsage:\n    \"\"\"Extract unified token usage from an OpenAI Responses ``response.completed`` event.\n\n    OpenAI includes cached tokens inside ``input_tokens``, so they are subtracted\n    to get the non-cached input count.\n    \"\"\"\n    usage = _get_event_attr(response, \"usage\")\n    if usage is None:\n        return TokenUsage()\n    input_tokens = _get_event_attr(usage, \"input_tokens\", 0) or 0\n    output_tokens = _get_event_attr(usage, \"output_tokens\", 0) or 0\n    total_tokens = _get_event_attr(usage, \"total_tokens\", 0) or 0\n\n    details = _get_event_attr(usage, \"input_tokens_details\") or {}\n    cached_tokens = _get_event_attr(details, \"cached_tokens\", 0) or 0\n\n    return TokenUsage(\n        input=input_tokens - cached_tokens,\n        output=output_tokens,\n        cache_read=cached_tokens,\n        cache_write=0,\n        total=total_tokens,\n    )\n\n\nasync def parse_event(\n    event: Any,\n    state: OpenAIResponsesParseState,\n    on_event: EventSink,\n) -> None:\n    event_type = _get_event_attr(event, \"type\")\n    if event_type in (\n        \"response.created\",\n        \"response.completed\",\n        \"response.done\",\n        \"response.output_item.done\",\n    ):\n        if event_type == 
\"response.completed\":\n            response = _get_event_attr(event, \"response\")\n            if response:\n                state.turn_usage = _extract_openai_usage(response)\n        if event_type == \"response.output_item.done\":\n            output_index = _get_event_attr(event, \"output_index\")\n            item = _get_event_attr(event, \"item\")\n            if isinstance(output_index, int) and item:\n                state.output_items_by_index[output_index] = item\n        return\n\n    if event_type == \"response.output_text.delta\":\n        delta = _get_event_attr(event, \"delta\", \"\")\n        if delta:\n            state.assistant_text += delta\n            await on_event(StreamEvent(type=\"assistant_delta\", text=delta))\n        return\n\n    if event_type in (\n        \"response.reasoning_text.delta\",\n        \"response.reasoning_summary_text.delta\",\n    ):\n        delta = _get_event_attr(event, \"delta\", \"\")\n        if delta:\n            if event_type == \"response.reasoning_summary_text.delta\":\n                state.saw_reasoning_summary_text_delta = True\n            await on_event(StreamEvent(type=\"thinking_delta\", text=delta))\n        return\n\n    if event_type in (\n        \"response.reasoning_summary_part.added\",\n        \"response.reasoning_summary_part.done\",\n    ):\n        if state.saw_reasoning_summary_text_delta:\n            return\n        part = _get_event_attr(event, \"part\") or {}\n        text = _get_event_attr(part, \"text\", \"\")\n        if text and text != state.last_emitted_reasoning_summary_part:\n            state.last_emitted_reasoning_summary_part = text\n            await on_event(StreamEvent(type=\"thinking_delta\", text=text))\n        return\n\n    if event_type == \"response.output_item.added\":\n        item = _get_event_attr(event, \"item\")\n        item_type = _get_event_attr(item, \"type\") if item else None\n        output_index = _get_event_attr(event, \"output_index\")\n        if 
isinstance(output_index, int) and item:\n            state.output_items_by_index.setdefault(output_index, item)\n\n        if item and item_type in (\"function_call\", \"custom_tool_call\"):\n            item_id = _get_event_attr(item, \"id\")\n            call_id = _get_event_attr(item, \"call_id\") or item_id\n            if item_id and call_id:\n                state.item_to_call_id[item_id] = call_id\n            if call_id:\n                if item_id and item_id in state.tool_calls and item_id != call_id:\n                    existing = state.tool_calls.pop(item_id)\n                    state.tool_calls[call_id] = {\n                        **existing,\n                        \"id\": call_id,\n                    }\n                args_value = _get_event_attr(item, \"arguments\")\n                if args_value is None and item_type == \"custom_tool_call\":\n                    args_value = _get_event_attr(item, \"input\")\n                state.tool_calls.setdefault(\n                    call_id,\n                    {\n                        \"id\": call_id,\n                        \"name\": _get_event_attr(item, \"name\"),\n                        \"arguments\": args_value or \"\",\n                    },\n                )\n                if args_value:\n                    await on_event(\n                        StreamEvent(\n                            type=\"tool_call_delta\",\n                            tool_call_id=call_id,\n                            tool_name=_get_event_attr(item, \"name\"),\n                            tool_arguments=args_value,\n                        )\n                    )\n        return\n\n    if event_type in (\n        \"response.function_call_arguments.delta\",\n        \"response.mcp_call_arguments.delta\",\n        \"response.custom_tool_call_input.delta\",\n    ):\n        item_id = _get_event_attr(event, \"item_id\")\n        call_id = _get_event_attr(event, \"call_id\")\n        if call_id and item_id:\n      
      state.item_to_call_id[item_id] = call_id\n        if not call_id:\n            call_id = state.item_to_call_id.get(item_id) if item_id else None\n        if not call_id and item_id:\n            call_id = item_id\n        if not call_id:\n            return\n\n        entry = state.tool_calls.setdefault(\n            call_id,\n            {\n                \"id\": call_id,\n                \"name\": _get_event_attr(event, \"name\"),\n                \"arguments\": \"\",\n            },\n        )\n        delta_value = _get_event_attr(event, \"delta\")\n        if delta_value is None:\n            delta_value = _get_event_attr(event, \"input\")\n        entry[\"arguments\"] += ensure_str(delta_value)\n\n        await on_event(\n            StreamEvent(\n                type=\"tool_call_delta\",\n                tool_call_id=call_id,\n                tool_name=entry.get(\"name\"),\n                tool_arguments=entry.get(\"arguments\"),\n            )\n        )\n        return\n\n    if event_type not in (\n        \"response.function_call_arguments.done\",\n        \"response.mcp_call_arguments.done\",\n        \"response.custom_tool_call_input.done\",\n    ):\n        return\n\n    item_id = _get_event_attr(event, \"item_id\")\n    call_id = _get_event_attr(event, \"call_id\")\n    if call_id and item_id:\n        state.item_to_call_id[item_id] = call_id\n    if not call_id:\n        call_id = state.item_to_call_id.get(item_id) if item_id else None\n    if not call_id and item_id:\n        call_id = item_id\n    if not call_id:\n        return\n\n    entry = state.tool_calls.setdefault(\n        call_id,\n        {\n            \"id\": call_id,\n            \"name\": _get_event_attr(event, \"name\"),\n            \"arguments\": \"\",\n        },\n    )\n    final_value = _get_event_attr(event, \"arguments\")\n    if final_value is None:\n        final_value = _get_event_attr(event, \"input\")\n    if final_value is None:\n        final_value = 
entry[\"arguments\"]\n    entry[\"arguments\"] = final_value\n    if _get_event_attr(event, \"name\"):\n        entry[\"name\"] = _get_event_attr(event, \"name\")\n\n    await on_event(\n        StreamEvent(\n            type=\"tool_call_delta\",\n            tool_call_id=call_id,\n            tool_name=entry.get(\"name\"),\n            tool_arguments=entry.get(\"arguments\"),\n        )\n    )\n\n    output_index = _get_event_attr(event, \"output_index\")\n    if (\n        item_id\n        and isinstance(output_index, int)\n        and isinstance(state.output_items_by_index.get(output_index), dict)\n    ):\n        state.output_items_by_index[output_index] = {\n            **state.output_items_by_index[output_index],\n            \"arguments\": entry[\"arguments\"],\n            \"call_id\": call_id,\n            \"name\": entry.get(\"name\"),\n        }\n\n\ndef _build_provider_turn(state: OpenAIResponsesParseState) -> ProviderTurn:\n    output_items = [\n        state.output_items_by_index[idx]\n        for idx in sorted(state.output_items_by_index.keys())\n        if state.output_items_by_index.get(idx)\n    ]\n\n    tool_items = [\n        item\n        for item in output_items\n        if isinstance(item, dict)\n        and item.get(\"type\") in (\"function_call\", \"custom_tool_call\")\n    ]\n\n    tool_calls: List[ToolCall] = []\n    if tool_items:\n        for item in tool_items:\n            raw_args = item.get(\"arguments\")\n            if raw_args is None and item.get(\"type\") == \"custom_tool_call\":\n                raw_args = item.get(\"input\")\n            args, error = parse_json_arguments(raw_args)\n            if error:\n                args = {\"INVALID_JSON\": ensure_str(raw_args)}\n            call_id = item.get(\"call_id\") or item.get(\"id\")\n            tool_calls.append(\n                ToolCall(\n                    id=call_id or f\"call-{uuid.uuid4().hex[:6]}\",\n                    name=item.get(\"name\") or \"unknown_tool\",\n   
                 arguments=args,\n                )\n            )\n    else:\n        for entry in state.tool_calls.values():\n            args, error = parse_json_arguments(entry.get(\"arguments\"))\n            if error:\n                args = {\"INVALID_JSON\": ensure_str(entry.get(\"arguments\"))}\n            call_id = entry.get(\"id\") or entry.get(\"call_id\")\n            tool_calls.append(\n                ToolCall(\n                    id=call_id or f\"call-{uuid.uuid4().hex[:6]}\",\n                    name=entry.get(\"name\") or \"unknown_tool\",\n                    arguments=args,\n                )\n            )\n\n    assistant_turn: List[Dict[str, Any]] = output_items if tool_calls else []\n\n    return ProviderTurn(\n        assistant_text=state.assistant_text,\n        tool_calls=tool_calls,\n        assistant_turn=assistant_turn,\n    )\n\n\nclass OpenAIProviderSession(ProviderSession):\n    def __init__(\n        self,\n        client: AsyncOpenAI,\n        model: Llm,\n        prompt_messages: List[ChatCompletionMessageParam],\n        tools: List[Dict[str, Any]],\n    ):\n        self._client = client\n        self._model = model\n        self._tools = tools\n        self._total_usage = TokenUsage()\n        self._turn_input_logger = OpenAITurnInputLogger(\n            model,\n            enabled=IS_DEBUG_ENABLED,\n        )\n        self._input_items: List[Dict[str, Any]] = [\n            _convert_message_to_responses_input(message) for message in prompt_messages\n        ]\n\n    async def stream_turn(self, on_event: EventSink) -> ProviderTurn:\n        model_name = get_openai_api_name(self._model)\n        params: Dict[str, Any] = {\n            \"model\": model_name,\n            \"input\": self._input_items,\n            \"tools\": self._tools,\n            \"tool_choice\": \"auto\",\n            \"stream\": True,\n            \"max_output_tokens\": 50000,\n        }\n        if model_name == \"gpt-5.4-2026-03-05\":\n            
params[\"prompt_cache_retention\"] = \"24h\"\n        reasoning_effort = get_openai_reasoning_effort(self._model)\n        if reasoning_effort:\n            params[\"reasoning\"] = {\"effort\": reasoning_effort, \"summary\": \"auto\"}\n\n        self._turn_input_logger.record_turn_input(\n            self._input_items,\n            request_payload=params,\n        )\n\n        state = OpenAIResponsesParseState()\n        stream = await self._client.responses.create(**params)  # type: ignore\n        async for event in stream:  # type: ignore\n            await parse_event(event, state, on_event)\n\n        if state.turn_usage is not None:\n            self._turn_input_logger.record_turn_usage(state.turn_usage)\n            self._total_usage.accumulate(state.turn_usage)\n\n        return _build_provider_turn(state)\n\n    def append_tool_results(\n        self,\n        turn: ProviderTurn,\n        executed_tool_calls: list[ExecutedToolCall],\n    ) -> None:\n        assistant_output_items = turn.assistant_turn or []\n        if assistant_output_items:\n            self._input_items.extend(assistant_output_items)\n\n        tool_output_items: List[Dict[str, Any]] = []\n        for executed in executed_tool_calls:\n            tool_output_items.append(\n                {\n                    \"type\": \"function_call_output\",\n                    \"call_id\": executed.tool_call.id,\n                    \"output\": json.dumps(executed.result.result),\n                }\n            )\n        self._input_items.extend(tool_output_items)\n\n    async def close(self) -> None:\n        u = self._total_usage\n        model_name = get_openai_api_name(self._model)\n        pricing = MODEL_PRICING.get(model_name)\n        cost_str = f\" cost=${u.cost(pricing):.4f}\" if pricing else \"\"\n        cache_hit_rate_str = f\" cache_hit_rate={u.cache_hit_rate_percent():.2f}%\"\n        print(\n            f\"[TOKEN USAGE] provider=openai model={model_name} | \"\n            
f\"input={u.input} output={u.output} \"\n            f\"cache_read={u.cache_read} cache_write={u.cache_write} \"\n            f\"total={u.total}{cache_hit_rate_str}{cost_str}\"\n        )\n        report_path = self._turn_input_logger.write_html_report()\n        if report_path:\n            print(f\"[OPENAI TURN INPUT] HTML report: {report_path}\")\n        await self._client.close()\n"
  },
  {
    "path": "backend/agent/providers/pricing.py",
    "content": "from dataclasses import dataclass\nfrom typing import Dict\n\n\n@dataclass\nclass ModelPricing:\n    \"\"\"Per-million-token pricing in USD.\"\"\"\n\n    input: float = 0.0\n    output: float = 0.0\n    cache_read: float = 0.0\n    cache_write: float = 0.0\n\n\n# Pricing keyed by the API model name string sent to the provider.\nMODEL_PRICING: Dict[str, ModelPricing] = {\n    # --- OpenAI ---\n    \"gpt-4.1-2025-04-14\": ModelPricing(\n        input=2.00, output=8.00, cache_read=0.50\n    ),\n    \"gpt-5.2-codex\": ModelPricing(\n        input=1.75, output=14.00, cache_read=0.4375\n    ),\n    \"gpt-5.3-codex\": ModelPricing(\n        input=1.75, output=14.00, cache_read=0.4375\n    ),\n    \"gpt-5.4-2026-03-05\": ModelPricing(\n        input=2.50, output=15.00, cache_read=0.25\n    ),\n    # --- Anthropic ---\n    \"claude-sonnet-4-6\": ModelPricing(\n        input=3.00, output=15.00, cache_read=0.30, cache_write=3.75\n    ),\n    \"claude-sonnet-4-5-20250929\": ModelPricing(\n        input=3.00, output=15.00, cache_read=0.30, cache_write=3.75\n    ),\n    \"claude-opus-4-5-20251101\": ModelPricing(\n        input=5.00, output=25.00, cache_read=0.50, cache_write=6.25\n    ),\n    \"claude-opus-4-6\": ModelPricing(\n        input=5.00, output=25.00, cache_read=0.50, cache_write=6.25\n    ),\n    # --- Gemini ---\n    \"gemini-3-flash-preview\": ModelPricing(\n        input=0.50, output=3.00, cache_read=0.05\n    ),\n    \"gemini-3-pro-preview\": ModelPricing(\n        input=2.00, output=12.00, cache_read=0.20\n    ),\n    \"gemini-3.1-pro-preview\": ModelPricing(\n        input=2.00, output=12.00, cache_read=0.20\n    ),\n}\n"
  },
  {
    "path": "backend/agent/providers/token_usage.py",
    "content": "from __future__ import annotations\n\nfrom dataclasses import dataclass\n\nfrom agent.providers.pricing import ModelPricing\n\n\n@dataclass\nclass TokenUsage:\n    \"\"\"Unified token usage across all providers.\n\n    Log line example:\n        [TOKEN USAGE] provider=gemini model=... | input=1000 output=500\n            cache_read=200 cache_write=0 total=1700 cost=$0.0020\n\n    Fields:\n        input:       Non-cached input tokens (billed at full input rate).\n                     For providers whose API includes cached tokens in the\n                     prompt count (OpenAI, Gemini), cached tokens are\n                     subtracted so this is always *exclusive* of cache_read.\n        output:      Output tokens including thinking/reasoning (billed at\n                     output rate).\n        cache_read:  Input tokens served from cache (billed at reduced rate).\n        cache_write: Input tokens written to cache (Anthropic only).\n        total:       All tokens as reported by the provider API. 
Equals\n                     input + cache_read + output (+ thinking for Gemini).\n\n    Total input sent to the model = input + cache_read + cache_write.\n    Cost = (input * input_rate + output * output_rate\n            + cache_read * cache_read_rate + cache_write * cache_write_rate)\n           / 1_000_000\n    \"\"\"\n\n    input: int = 0\n    output: int = 0\n    cache_read: int = 0\n    cache_write: int = 0\n    total: int = 0\n\n    def accumulate(self, other: TokenUsage) -> None:\n        self.input += other.input\n        self.output += other.output\n        self.cache_read += other.cache_read\n        self.cache_write += other.cache_write\n        self.total += other.total\n\n    def cost(self, pricing: ModelPricing) -> float:\n        \"\"\"Compute cost in USD using per-million-token rates.\"\"\"\n        return (\n            self.input * pricing.input\n            + self.output * pricing.output\n            + self.cache_read * pricing.cache_read\n            + self.cache_write * pricing.cache_write\n        ) / 1_000_000\n\n    def total_input_tokens(self) -> int:\n        \"\"\"All input tokens, including non-cached, cache-read, and cache-write.\"\"\"\n        return self.input + self.cache_read + self.cache_write\n\n    def cache_hit_rate_percent(self) -> float:\n        \"\"\"Percent of total input tokens served from cache.\"\"\"\n        total_input = self.total_input_tokens()\n        if total_input == 0:\n            return 0.0\n        return (self.cache_read / total_input) * 100.0\n"
  },
  {
    "path": "backend/agent/providers/types.py",
    "content": "from agent.providers.base import (\n    EventSink,\n    ExecutedToolCall,\n    ProviderTurn,\n    StreamEvent,\n)\n\n# Backwards-compatible alias for older imports.\nStepResult = ProviderTurn\n\n__all__ = [\n    \"EventSink\",\n    \"ExecutedToolCall\",\n    \"ProviderTurn\",\n    \"StepResult\",\n    \"StreamEvent\",\n]\n"
  },
  {
    "path": "backend/agent/runner.py",
    "content": "from agent.engine import AgentEngine\n\n\nclass Agent(AgentEngine):\n    pass\n"
  },
  {
    "path": "backend/agent/state.py",
    "content": "from dataclasses import dataclass\nfrom typing import Any, List\n\nfrom openai.types.chat import ChatCompletionMessageParam\n\nfrom codegen.utils import extract_html_content\n\n\n@dataclass\nclass AgentFileState:\n    path: str = \"index.html\"\n    content: str = \"\"\n\n\ndef ensure_str(value: Any) -> str:\n    if value is None:\n        return \"\"\n    return str(value)\n\n\ndef extract_text_content(message: ChatCompletionMessageParam) -> str:\n    content = message.get(\"content\", \"\")\n    if isinstance(content, str):\n        return content\n    if isinstance(content, list):\n        for part in content:\n            if isinstance(part, dict) and part.get(\"type\") == \"text\":\n                return ensure_str(part.get(\"text\"))\n    return \"\"\n\n\ndef seed_file_state_from_messages(\n    file_state: AgentFileState,\n    prompt_messages: List[ChatCompletionMessageParam],\n) -> None:\n    if file_state.content:\n        return\n\n    for message in reversed(prompt_messages):\n        if message.get(\"role\") != \"assistant\":\n            continue\n        raw_text = extract_text_content(message)\n        if not raw_text:\n            continue\n        extracted = extract_html_content(raw_text)\n        file_state.content = extracted or raw_text\n        if not file_state.path:\n            file_state.path = \"index.html\"\n        return\n\n    if not prompt_messages:\n        return\n\n    system_message = prompt_messages[0]\n    if system_message.get(\"role\") != \"system\":\n        return\n\n    system_text = extract_text_content(system_message)\n    markers = [\n        \"Here is the code of the app:\",\n    ]\n    for marker in markers:\n        if marker not in system_text:\n            continue\n        raw_text = system_text.split(marker, 1)[1].strip()\n        extracted = extract_html_content(raw_text)\n        file_state.content = extracted or raw_text\n        if not file_state.path:\n            file_state.path = 
\"index.html\"\n        return\n"
  },
  {
    "path": "backend/agent/tools/__init__.py",
    "content": "from agent.tools.definitions import canonical_tool_definitions\nfrom agent.tools.parsing import (\n    extract_content_from_args,\n    extract_path_from_args,\n    parse_json_arguments,\n)\nfrom agent.tools.runtime import AgentToolRuntime, AgentToolbox\nfrom agent.tools.summaries import summarize_text, summarize_tool_input\nfrom agent.tools.types import (\n    CanonicalToolDefinition,\n    ToolCall,\n    ToolExecutionResult,\n)\n\n__all__ = [\n    \"AgentToolRuntime\",\n    \"AgentToolbox\",\n    \"CanonicalToolDefinition\",\n    \"ToolCall\",\n    \"ToolExecutionResult\",\n    \"canonical_tool_definitions\",\n    \"extract_content_from_args\",\n    \"extract_path_from_args\",\n    \"parse_json_arguments\",\n    \"summarize_text\",\n    \"summarize_tool_input\",\n]\n"
  },
  {
    "path": "backend/agent/tools/definitions.py",
    "content": "from typing import Any, Dict, List\n\nfrom agent.tools.types import CanonicalToolDefinition\n\n\ndef _create_schema() -> Dict[str, Any]:\n    return {\n        \"type\": \"object\",\n        \"properties\": {\n            \"path\": {\n                \"type\": \"string\",\n                \"description\": \"Path for the main HTML file. Use index.html if unsure.\",\n            },\n            \"content\": {\n                \"type\": \"string\",\n                \"description\": \"Full HTML for the single-file app.\",\n            },\n        },\n        \"required\": [\"content\"],\n    }\n\n\ndef _edit_schema() -> Dict[str, Any]:\n    return {\n        \"type\": \"object\",\n        \"properties\": {\n            \"path\": {\n                \"type\": \"string\",\n                \"description\": \"Path for the main HTML file.\",\n            },\n            \"old_text\": {\n                \"type\": \"string\",\n                \"description\": \"Exact text to replace. Must match the file contents.\",\n            },\n            \"new_text\": {\n                \"type\": \"string\",\n                \"description\": \"Replacement text.\",\n            },\n            \"count\": {\n                \"type\": \"integer\",\n                \"description\": \"How many occurrences to replace. 
Use -1 for all.\",\n            },\n            \"edits\": {\n                \"type\": \"array\",\n                \"items\": {\n                    \"type\": \"object\",\n                    \"properties\": {\n                        \"old_text\": {\"type\": \"string\"},\n                        \"new_text\": {\"type\": \"string\"},\n                        \"count\": {\"type\": \"integer\"},\n                    },\n                    \"required\": [\"old_text\", \"new_text\"],\n                },\n            },\n        },\n    }\n\n\ndef _image_schema() -> Dict[str, Any]:\n    return {\n        \"type\": \"object\",\n        \"properties\": {\n            \"prompts\": {\n                \"type\": \"array\",\n                \"items\": {\n                    \"type\": \"string\",\n                    \"description\": \"Prompt describing a single image to generate.\",\n                },\n            }\n        },\n        \"required\": [\"prompts\"],\n    }\n\n\ndef _remove_background_schema() -> Dict[str, Any]:\n    return {\n        \"type\": \"object\",\n        \"properties\": {\n            \"image_urls\": {\n                \"type\": \"array\",\n                \"items\": {\n                    \"type\": \"string\",\n                    \"description\": \"URL of an image to remove the background from.\",\n                },\n            },\n        },\n        \"required\": [\"image_urls\"],\n    }\n\n\ndef _retrieve_option_schema() -> Dict[str, Any]:\n    return {\n        \"type\": \"object\",\n        \"properties\": {\n            \"option_number\": {\n                \"type\": \"integer\",\n                \"description\": \"1-based option number to retrieve (Option 1, Option 2, etc.).\",\n            }\n        },\n        \"required\": [\"option_number\"],\n    }\n\n\ndef canonical_tool_definitions(\n    image_generation_enabled: bool = True,\n) -> List[CanonicalToolDefinition]:\n    tools: List[CanonicalToolDefinition] = [\n        
CanonicalToolDefinition(\n            name=\"create_file\",\n            description=(\n                \"Create the main HTML file for the app. Use exactly once to write the \"\n                \"full HTML. Returns a success message and file metadata.\"\n            ),\n            parameters=_create_schema(),\n        ),\n        CanonicalToolDefinition(\n            name=\"edit_file\",\n            description=(\n                \"Edit the main HTML file using exact string replacements. Do not \"\n                \"regenerate the entire file. Returns a success message plus edit \"\n                \"details, including a unified diff and first changed line.\"\n            ),\n            parameters=_edit_schema(),\n        ),\n    ]\n    if image_generation_enabled:\n        tools.append(\n            CanonicalToolDefinition(\n                name=\"generate_images\",\n                description=(\n                    \"Generate image URLs from prompts. Use to replace placeholder images. \"\n                    \"You can pass multiple prompts at once.\"\n                ),\n                parameters=_image_schema(),\n            )\n        )\n    tools.extend(\n        [\n            CanonicalToolDefinition(\n                name=\"remove_background\",\n                description=(\n                    \"Remove the background from one or more images. You can pass multiple \"\n                    \"image URLs at once. 
Returns URLs to the processed images with \"\n                    \"transparent backgrounds.\"\n                ),\n                parameters=_remove_background_schema(),\n            ),\n            CanonicalToolDefinition(\n                name=\"retrieve_option\",\n                description=(\n                    \"Retrieve the full HTML for a specific option (variant) so you can \"\n                    \"reference it.\"\n                ),\n                parameters=_retrieve_option_schema(),\n            ),\n        ]\n    )\n    return tools\n"
  },
  {
    "path": "backend/agent/tools/parsing.py",
    "content": "# pyright: reportUnknownVariableType=false\nimport json\nfrom typing import Any, Dict, Optional, Tuple\n\nfrom agent.state import ensure_str\n\n\ndef parse_json_arguments(raw_args: Any) -> Tuple[Dict[str, Any], Optional[str]]:\n    if isinstance(raw_args, dict):\n        return raw_args, None\n    if raw_args is None:\n        return {}, None\n    raw_text = ensure_str(raw_args).strip()\n    if not raw_text:\n        return {}, None\n    try:\n        return json.loads(raw_text), None\n    except json.JSONDecodeError as exc:\n        return {}, f\"Invalid JSON arguments: {exc}\"\n\n\ndef _strip_incomplete_escape(value: str) -> str:\n    if not value:\n        return value\n    trailing = 0\n    for ch in reversed(value):\n        if ch == \"\\\\\":\n            trailing += 1\n        else:\n            break\n    if trailing % 2 == 1:\n        return value[:-1]\n    return value\n\n\ndef _extract_partial_json_string(raw_text: str, key: str) -> Optional[str]:\n    if not raw_text:\n        return None\n    token = f'\"{key}\"'\n    idx = raw_text.find(token)\n    if idx == -1:\n        return None\n    colon = raw_text.find(\":\", idx + len(token))\n    if colon == -1:\n        return None\n    cursor = colon + 1\n    while cursor < len(raw_text) and raw_text[cursor].isspace():\n        cursor += 1\n    if cursor >= len(raw_text) or raw_text[cursor] != '\"':\n        return None\n\n    start = cursor + 1\n    last_quote: Optional[int] = None\n    cursor = start\n    while cursor < len(raw_text):\n        if raw_text[cursor] == '\"':\n            backslashes = 0\n            back = cursor - 1\n            while back >= start and raw_text[back] == \"\\\\\":\n                backslashes += 1\n                back -= 1\n            if backslashes % 2 == 0:\n                last_quote = cursor\n        cursor += 1\n\n    partial = raw_text[start:] if last_quote is None else raw_text[start:last_quote]\n    partial = _strip_incomplete_escape(partial)\n    
if not partial:\n        return \"\"\n\n    try:\n        return json.loads(f'\"{partial}\"')\n    except Exception:\n        return (\n            partial.replace(\"\\\\n\", \"\\n\")\n            .replace(\"\\\\t\", \"\\t\")\n            .replace(\"\\\\r\", \"\\r\")\n            .replace('\\\\\"', '\"')\n            .replace(\"\\\\\\\\\", \"\\\\\")\n        )\n\n\ndef extract_content_from_args(raw_args: Any) -> Optional[str]:\n    if isinstance(raw_args, dict):\n        content = raw_args.get(\"content\")\n        if content is None:\n            return None\n        return ensure_str(content)\n    raw_text = ensure_str(raw_args)\n    return _extract_partial_json_string(raw_text, \"content\")\n\n\ndef extract_path_from_args(raw_args: Any) -> Optional[str]:\n    if isinstance(raw_args, dict):\n        path = raw_args.get(\"path\")\n        return ensure_str(path) if path is not None else None\n    raw_text = ensure_str(raw_args)\n    return _extract_partial_json_string(raw_text, \"path\")\n"
  },
  {
    "path": "backend/agent/tools/runtime.py",
    "content": "# pyright: reportUnknownVariableType=false\nimport asyncio\nimport difflib\nfrom typing import Any, Dict, List, Optional, Tuple, Union\n\nfrom codegen.utils import extract_html_content\nfrom config import REPLICATE_API_KEY\nfrom image_generation.generation import process_tasks\nfrom image_generation.replicate import remove_background\n\nfrom agent.state import AgentFileState, ensure_str\nfrom agent.tools.types import ToolCall, ToolExecutionResult\nfrom agent.tools.summaries import summarize_text\n\n\nclass AgentToolRuntime:\n    def __init__(\n        self,\n        file_state: AgentFileState,\n        should_generate_images: bool,\n        openai_api_key: Optional[str],\n        openai_base_url: Optional[str],\n        option_codes: Optional[List[str]] = None,\n    ):\n        self.file_state = file_state\n        self.should_generate_images = should_generate_images\n        self.openai_api_key = openai_api_key\n        self.openai_base_url = openai_base_url\n        self.option_codes = option_codes or []\n\n    async def execute(self, tool_call: ToolCall) -> ToolExecutionResult:\n        if \"INVALID_JSON\" in tool_call.arguments:\n            invalid_json = ensure_str(tool_call.arguments.get(\"INVALID_JSON\"))\n            return ToolExecutionResult(\n                ok=False,\n                result={\n                    \"error\": \"Tool arguments were invalid JSON.\",\n                    \"INVALID_JSON\": invalid_json,\n                },\n                summary={\"error\": \"Invalid JSON tool arguments\"},\n            )\n\n        if tool_call.name == \"create_file\":\n            return self._create_file(tool_call.arguments)\n        if tool_call.name == \"edit_file\":\n            return self._edit_file(tool_call.arguments)\n        if tool_call.name == \"generate_images\":\n            return await self._generate_images(tool_call.arguments)\n        if tool_call.name == \"remove_background\":\n            return await 
self._remove_background(tool_call.arguments)\n        if tool_call.name == \"retrieve_option\":\n            return self._retrieve_option(tool_call.arguments)\n        return ToolExecutionResult(\n            ok=False,\n            result={\"error\": f\"Unknown tool: {tool_call.name}\"},\n            summary={\"error\": f\"Unknown tool: {tool_call.name}\"},\n        )\n\n    def _create_file(self, args: Dict[str, Any]) -> ToolExecutionResult:\n        path = ensure_str(args.get(\"path\") or self.file_state.path or \"index.html\")\n        content = ensure_str(args.get(\"content\"))\n        if not content:\n            return ToolExecutionResult(\n                ok=False,\n                result={\"error\": \"create_file requires non-empty content\"},\n                summary={\"error\": \"Missing content\"},\n            )\n\n        extracted = extract_html_content(content)\n        self.file_state.path = path\n        self.file_state.content = extracted or content\n\n        summary = {\n            \"path\": self.file_state.path,\n            \"contentLength\": len(self.file_state.content),\n            \"preview\": summarize_text(self.file_state.content, 320),\n        }\n        result = {\n            \"content\": f\"Successfully created file at {self.file_state.path}.\",\n            \"details\": {\n                \"path\": self.file_state.path,\n                \"contentLength\": len(self.file_state.content),\n            },\n        }\n        return ToolExecutionResult(\n            ok=True,\n            result=result,\n            summary=summary,\n            updated_content=self.file_state.content,\n        )\n\n    @staticmethod\n    def _generate_diff(old_content: str, new_content: str, path: str) -> Dict[str, Any]:\n        \"\"\"Generate a unified diff between old and new content.\"\"\"\n        old_lines = old_content.splitlines(keepends=True)\n        new_lines = new_content.splitlines(keepends=True)\n        diff_lines = list(\n            
difflib.unified_diff(old_lines, new_lines, fromfile=path, tofile=path)\n        )\n        diff_str = \"\".join(diff_lines)\n\n        first_changed_line: Optional[int] = None\n        for line in diff_lines:\n            if not line.startswith(\"@@\"):\n                continue\n            try:\n                plus_part = line.split(\"+\")[1].split(\"@@\")[0].strip()\n                first_changed_line = int(plus_part.split(\",\")[0])\n            except (IndexError, ValueError):\n                pass\n            break\n\n        return {\n            \"diff\": diff_str,\n            \"firstChangedLine\": first_changed_line,\n        }\n\n    def _apply_single_edit(\n        self,\n        content: str,\n        old_text: str,\n        new_text: str,\n        count: Optional[int],\n    ) -> Tuple[str, int]:\n        if old_text not in content:\n            return content, 0\n\n        if count is None:\n            replace_count = 1\n        elif count < 0:\n            replace_count = content.count(old_text)\n        else:\n            replace_count = count\n\n        updated = content.replace(old_text, new_text, replace_count)\n        return updated, min(replace_count, content.count(old_text))\n\n    def _edit_file(self, args: Dict[str, Any]) -> ToolExecutionResult:\n        if not self.file_state.content:\n            return ToolExecutionResult(\n                ok=False,\n                result={\"error\": \"No file exists yet. 
Call create_file first.\"},\n                summary={\"error\": \"No file to edit\"},\n            )\n\n        edits = args.get(\"edits\")\n        if not edits:\n            old_text = ensure_str(args.get(\"old_text\"))\n            new_text = ensure_str(args.get(\"new_text\"))\n            count = args.get(\"count\")\n            edits = [{\"old_text\": old_text, \"new_text\": new_text, \"count\": count}]\n\n        if not isinstance(edits, list):\n            return ToolExecutionResult(\n                ok=False,\n                result={\"error\": \"edits must be a list\"},\n                summary={\"error\": \"Invalid edits payload\"},\n            )\n\n        content = self.file_state.content\n        original_content = content\n        summary_edits: List[Dict[str, Any]] = []\n        for edit in edits:\n            old_text = ensure_str(edit.get(\"old_text\"))\n            new_text = ensure_str(edit.get(\"new_text\"))\n            count = edit.get(\"count\")\n            if not old_text:\n                return ToolExecutionResult(\n                    ok=False,\n                    result={\"error\": \"edit_file requires old_text\"},\n                    summary={\"error\": \"Missing old_text\"},\n                )\n\n            content, replaced = self._apply_single_edit(content, old_text, new_text, count)\n            if replaced == 0:\n                return ToolExecutionResult(\n                    ok=False,\n                    result={\"error\": \"old_text not found\", \"old_text\": old_text},\n                    summary={\n                        \"error\": \"old_text not found\",\n                        \"old_text\": summarize_text(old_text, 160),\n                    },\n                )\n\n            summary_edits.append(\n                {\n                    \"old_text\": summarize_text(old_text, 140),\n                    \"new_text\": summarize_text(new_text, 140),\n                    \"replaced\": replaced,\n                }\n    
        )\n\n        self.file_state.content = content\n        path = self.file_state.path or \"index.html\"\n        diff_info = self._generate_diff(original_content, content, path)\n        summary = {\n            \"path\": path,\n            \"edits\": summary_edits,\n            \"contentLength\": len(self.file_state.content),\n            \"diff\": diff_info[\"diff\"],\n            \"firstChangedLine\": diff_info[\"firstChangedLine\"],\n        }\n        result = {\n            \"content\": f\"Successfully edited file at {path}.\",\n            \"details\": {\n                \"diff\": diff_info[\"diff\"],\n                \"firstChangedLine\": diff_info[\"firstChangedLine\"],\n            },\n        }\n        return ToolExecutionResult(\n            ok=True,\n            result=result,\n            summary=summary,\n            updated_content=self.file_state.content,\n        )\n\n    async def _generate_images(self, args: Dict[str, Any]) -> ToolExecutionResult:\n        if not self.should_generate_images:\n            return ToolExecutionResult(\n                ok=False,\n                result={\"error\": \"Image generation is disabled.\"},\n                summary={\"error\": \"Image generation disabled\"},\n            )\n\n        prompts = args.get(\"prompts\") or []\n        if not isinstance(prompts, list) or not prompts:\n            return ToolExecutionResult(\n                ok=False,\n                result={\"error\": \"generate_images requires a non-empty prompts list\"},\n                summary={\"error\": \"Missing prompts\"},\n            )\n\n        cleaned = [prompt.strip() for prompt in prompts if isinstance(prompt, str)]\n        unique_prompts = list(dict.fromkeys([p for p in cleaned if p]))\n        if not unique_prompts:\n            return ToolExecutionResult(\n                ok=False,\n                result={\"error\": \"No valid prompts provided\"},\n                summary={\"error\": \"No valid prompts\"},\n            
)\n        if REPLICATE_API_KEY:\n            model = \"flux\"\n            api_key = REPLICATE_API_KEY\n            base_url = None\n        else:\n            if not self.openai_api_key:\n                return ToolExecutionResult(\n                    ok=False,\n                    result={\"error\": \"No API key available for image generation.\"},\n                    summary={\"error\": \"Missing image generation API key\"},\n                )\n            model = \"dalle3\"\n            api_key = self.openai_api_key\n            base_url = self.openai_base_url\n\n        generated = await process_tasks(unique_prompts, api_key, base_url, model)  # type: ignore\n        merged_results = {\n            prompt: url for prompt, url in zip(unique_prompts, generated)\n        }\n        summary_items = [\n            {\n                \"prompt\": prompt,\n                \"url\": url,\n                \"status\": \"ok\" if url else \"error\",\n            }\n            for prompt, url in merged_results.items()\n        ]\n        result = {\"images\": merged_results}\n        summary = {\"images\": summary_items}\n        return ToolExecutionResult(ok=True, result=result, summary=summary)\n\n    async def _remove_background(self, args: Dict[str, Any]) -> ToolExecutionResult:\n        if not REPLICATE_API_KEY:\n            return ToolExecutionResult(\n                ok=False,\n                result={\"error\": \"Background removal requires REPLICATE_API_KEY.\"},\n                summary={\"error\": \"Missing Replicate API key\"},\n            )\n\n        image_urls = args.get(\"image_urls\") or []\n        if not isinstance(image_urls, list) or not image_urls:\n            return ToolExecutionResult(\n                ok=False,\n                result={\n                    \"error\": \"remove_background requires a non-empty image_urls list\"\n                },\n                summary={\"error\": \"Missing image_urls\"},\n            )\n\n        cleaned = 
[url.strip() for url in image_urls if isinstance(url, str)]\n        unique_urls = list(dict.fromkeys([u for u in cleaned if u]))\n        if not unique_urls:\n            return ToolExecutionResult(\n                ok=False,\n                result={\"error\": \"No valid image URLs provided\"},\n                summary={\"error\": \"No valid image_urls\"},\n            )\n\n        batch_size = 20\n        raw_results: list[str | BaseException] = []\n        for i in range(0, len(unique_urls), batch_size):\n            batch = unique_urls[i : i + batch_size]\n            tasks = [remove_background(url, REPLICATE_API_KEY) for url in batch]\n            raw_results.extend(await asyncio.gather(*tasks, return_exceptions=True))\n\n        results: List[Dict[str, Any]] = []\n        for url, raw in zip(unique_urls, raw_results):\n            if isinstance(raw, BaseException):\n                print(f\"Background removal failed for {url}: {raw}\")\n                results.append(\n                    {\"image_url\": url, \"result_url\": None, \"status\": \"error\"}\n                )\n            else:\n                results.append(\n                    {\"image_url\": url, \"result_url\": raw, \"status\": \"ok\"}\n                )\n\n        summary_items = [\n            {\n                \"image_url\": summarize_text(r[\"image_url\"], 100),\n                \"result_url\": r[\"result_url\"],\n                \"status\": r[\"status\"],\n            }\n            for r in results\n        ]\n        return ToolExecutionResult(\n            ok=True,\n            result={\"images\": results},\n            summary={\"images\": summary_items},\n        )\n\n    def _retrieve_option(self, args: Dict[str, Any]) -> ToolExecutionResult:\n        raw_option_number = args.get(\"option_number\")\n        raw_index = args.get(\"index\")\n\n        def coerce_int(value: Any) -> Optional[int]:\n            if value is None:\n                return None\n            try:\n       
         return int(value)\n            except (TypeError, ValueError):\n                return None\n\n        option_number = coerce_int(raw_option_number)\n        index = coerce_int(raw_index)\n\n        if option_number is None and index is None:\n            return ToolExecutionResult(\n                ok=False,\n                result={\"error\": \"retrieve_option requires option_number\"},\n                summary={\"error\": \"Missing option_number\"},\n            )\n\n        # option_number is 1-based; index is 0-based. At least one is non-None here.\n        resolved_index = index if option_number is None else option_number - 1\n\n        if resolved_index < 0 or resolved_index >= len(self.option_codes):\n            return ToolExecutionResult(\n                ok=False,\n                result={\n                    \"error\": \"Option index out of range\",\n                    \"option_number\": resolved_index + 1,\n                    \"available\": len(self.option_codes),\n                },\n                summary={\n                    \"error\": \"Option index out of range\",\n                    \"available\": len(self.option_codes),\n                },\n            )\n\n        code = ensure_str(self.option_codes[resolved_index])\n        if not code.strip():\n            return ToolExecutionResult(\n                ok=False,\n                result={\n                    \"error\": \"Option code is empty or unavailable\",\n                    \"option_number\": resolved_index + 1,\n                },\n                summary={\"error\": \"Option code unavailable\"},\n            )\n\n        summary = {\n            \"option_number\": resolved_index + 1,\n            \"contentLength\": len(code),\n            \"preview\": summarize_text(code, 200),\n        }\n        result = {\"option_number\": resolved_index + 1, \"code\": code}\n        return ToolExecutionResult(ok=True, result=result, summary=summary)\n\n\n# Backwards-compatible alias for older imports.\nAgentToolbox = AgentToolRuntime\n"
  },
  {
    "path": "backend/agent/tools/summaries.py",
    "content": "# pyright: reportUnknownVariableType=false\nfrom typing import Any, Dict\n\nfrom agent.state import AgentFileState, ensure_str\nfrom agent.tools.types import ToolCall\n\n\ndef summarize_text(value: str, limit: int = 240) -> str:\n    if len(value) <= limit:\n        return value\n    return value[:limit] + \"...\"\n\n\ndef summarize_tool_input(tool_call: ToolCall, file_state: AgentFileState) -> Dict[str, Any]:\n    args = tool_call.arguments or {}\n\n    if tool_call.name == \"create_file\":\n        content = ensure_str(args.get(\"content\"))\n        return {\n            \"path\": args.get(\"path\") or file_state.path,\n            \"contentLength\": len(content),\n            \"preview\": summarize_text(content, 200),\n        }\n\n    if tool_call.name == \"edit_file\":\n        edits = args.get(\"edits\")\n        if not edits:\n            edits = [\n                {\n                    \"old_text\": args.get(\"old_text\"),\n                    \"new_text\": args.get(\"new_text\"),\n                    \"count\": args.get(\"count\"),\n                }\n            ]\n        summary_edits = []\n        for edit in edits if isinstance(edits, list) else []:\n            summary_edits.append(\n                {\n                    \"old_text\": summarize_text(ensure_str(edit.get(\"old_text\")), 160),\n                    \"new_text\": summarize_text(ensure_str(edit.get(\"new_text\")), 160),\n                    \"count\": edit.get(\"count\"),\n                }\n            )\n        return {\n            \"path\": args.get(\"path\") or file_state.path,\n            \"edits\": summary_edits,\n        }\n\n    if tool_call.name == \"generate_images\":\n        prompts = args.get(\"prompts\") or []\n        if isinstance(prompts, list):\n            return {\n                \"count\": len(prompts),\n                \"prompts\": [ensure_str(p) for p in prompts],\n            }\n\n    if tool_call.name == \"remove_background\":\n        
image_urls = args.get(\"image_urls\") or []\n        if isinstance(image_urls, list):\n            return {\n                \"count\": len(image_urls),\n                \"image_urls\": [ensure_str(u) for u in image_urls],\n            }\n        return {\"image_urls\": []}\n\n    if tool_call.name == \"retrieve_option\":\n        return {\n            \"option_number\": args.get(\"option_number\"),\n            \"index\": args.get(\"index\"),\n        }\n\n    return args\n"
  },
  {
    "path": "backend/agent/tools/types.py",
    "content": "from dataclasses import dataclass\nfrom typing import Any, Dict, Optional\n\n\n@dataclass(frozen=True)\nclass ToolCall:\n    id: str\n    name: str\n    arguments: Dict[str, Any]\n\n\n@dataclass\nclass ToolExecutionResult:\n    ok: bool\n    result: Dict[str, Any]\n    summary: Dict[str, Any]\n    updated_content: Optional[str] = None\n\n\n@dataclass(frozen=True)\nclass CanonicalToolDefinition:\n    name: str\n    description: str\n    parameters: Dict[str, Any]\n"
  },
  {
    "path": "backend/codegen/__init__.py",
    "content": ""
  },
  {
    "path": "backend/codegen/test_utils.py",
    "content": "import unittest\nfrom codegen.utils import extract_html_content\n\n\nclass TestUtils(unittest.TestCase):\n\n    def test_extract_html_content_with_html_tags(self):\n        text = \"<html><body><p>Hello, World!</p></body></html>\"\n        expected = \"<html><body><p>Hello, World!</p></body></html>\"\n        result = extract_html_content(text)\n        self.assertEqual(result, expected)\n\n    def test_extract_html_content_without_html_tags(self):\n        text = \"No HTML content here.\"\n        expected = \"No HTML content here.\"\n        result = extract_html_content(text)\n        self.assertEqual(result, expected)\n\n    def test_extract_html_content_with_partial_html_tags(self):\n        text = \"<html><body><p>Hello, World!</p></body>\"\n        expected = \"<html><body><p>Hello, World!</p></body>\"\n        result = extract_html_content(text)\n        self.assertEqual(result, expected)\n\n    def test_extract_html_content_with_multiple_html_tags(self):\n        text = \"<html><body><p>First</p></body></html> Some text <html><body><p>Second</p></body></html>\"\n        expected = \"<html><body><p>First</p></body></html>\"\n        result = extract_html_content(text)\n        self.assertEqual(result, expected)\n\n    ## The following are tests based on actual LLM outputs\n\n    def test_extract_html_content_some_explanation_before(self):\n        text = \"\"\"Got it! You want the song list to be displayed horizontally. 
I'll update the code to ensure that the song list is displayed in a horizontal layout.\n\n        Here's the updated code:\n\n        <html lang=\"en\"><head></head><body class=\"bg-black text-white\"></body></html>\"\"\"\n        expected = '<html lang=\"en\"><head></head><body class=\"bg-black text-white\"></body></html>'\n        result = extract_html_content(text)\n        self.assertEqual(result, expected)\n\n    def test_markdown_tags(self):\n        # Code fences are stripped; with no <html> tags, the remainder is returned.\n        text = \"```html<head></head>```\"\n        expected = \"<head></head>\"\n        result = extract_html_content(text)\n        self.assertEqual(result, expected)\n\n    def test_doctype_text(self):\n        # A DOCTYPE preceding the <html> tags is preserved in the extraction.\n        text = '<!DOCTYPE html><html lang=\"en\"><head></head><body></body></html>'\n        expected = '<!DOCTYPE html><html lang=\"en\"><head></head><body></body></html>'\n        result = extract_html_content(text)\n        self.assertEqual(result, expected)\n\n\nif __name__ == \"__main__\":\n    unittest.main()\n"
  },
  {
    "path": "backend/codegen/utils.py",
    "content": "import re\n\n\ndef extract_html_content(text: str) -> str:\n    file_match = re.search(\n        r\"<file\\s+path=\\\"[^\\\"]+\\\">\\s*(.*?)\\s*</file>\",\n        text,\n        re.DOTALL | re.IGNORECASE,\n    )\n    if file_match:\n        return extract_html_content(file_match.group(1).strip())\n\n    # First, strip markdown code fences (```html or bare ```) if present\n    text = re.sub(r'^```(?:html)?\\s*\\n?', '', text, flags=re.MULTILINE)\n    text = re.sub(r'\\n?```\\s*$', '', text, flags=re.MULTILINE)\n\n    # Try to find DOCTYPE + html tags together\n    match_with_doctype = re.search(\n        r\"(<!DOCTYPE\\s+html[^>]*>.*?<html.*?>.*?</html>)\", text, re.DOTALL | re.IGNORECASE\n    )\n    if match_with_doctype:\n        return match_with_doctype.group(1)\n\n    # Fall back to just <html> tags\n    match = re.search(r\"(<html.*?>.*?</html>)\", text, re.DOTALL | re.IGNORECASE)\n    if match:\n        return match.group(1)\n    else:\n        # Otherwise, return the text unchanged\n        print(\n            \"[HTML Extraction] No <html> tags found in the generated content\"\n        )\n        return text\n"
  },
  {
    "path": "backend/config.py",
    "content": "import os\n\nNUM_VARIANTS = 4\nNUM_VARIANTS_VIDEO = 2\n\n# LLM-related\nOPENAI_API_KEY = os.environ.get(\"OPENAI_API_KEY\", None)\nANTHROPIC_API_KEY = os.environ.get(\"ANTHROPIC_API_KEY\", None)\nGEMINI_API_KEY = os.environ.get(\"GEMINI_API_KEY\", None)\nOPENAI_BASE_URL = os.environ.get(\"OPENAI_BASE_URL\", None)\n\n# Image generation (optional)\nREPLICATE_API_KEY = os.environ.get(\"REPLICATE_API_KEY\", None)\n\n# Debugging-related\nIS_DEBUG_ENABLED = bool(os.environ.get(\"IS_DEBUG_ENABLED\", False))\nDEBUG_DIR = os.environ.get(\"DEBUG_DIR\", \"\")\n\n# Set to True when running in production (on the hosted version)\n# Used as a feature flag to enable or disable certain features\nIS_PROD = os.environ.get(\"IS_PROD\", False)\n"
  },
  {
    "path": "backend/custom_types.py",
    "content": "from typing import Literal\n\n\nInputMode = Literal[\n    \"image\",\n    \"video\",\n    \"text\",\n]\n"
  },
  {
    "path": "backend/debug/DebugFileWriter.py",
    "content": "import os\nimport logging\nimport uuid\n\nfrom config import DEBUG_DIR, IS_DEBUG_ENABLED\n\n\nclass DebugFileWriter:\n    def __init__(self):\n        self.debug_artifacts_path = \"\"\n        if not IS_DEBUG_ENABLED:\n            return\n\n        try:\n            self.debug_artifacts_path = os.path.expanduser(\n                f\"{DEBUG_DIR}/{str(uuid.uuid4())}\"\n            )\n            os.makedirs(self.debug_artifacts_path, exist_ok=True)\n            print(f\"Debugging artifacts will be stored in: {self.debug_artifacts_path}\")\n        except Exception as e:\n            logging.error(f\"Failed to create debug directory: {e}\")\n\n    def write_to_file(self, filename: str, content: str) -> None:\n        # No-op when debugging is disabled or the debug directory was not created.\n        if not self.debug_artifacts_path:\n            return\n        try:\n            with open(os.path.join(self.debug_artifacts_path, filename), \"w\") as file:\n                file.write(content)\n        except Exception as e:\n            logging.error(f\"Failed to write to file: {e}\")\n\n    def extract_html_content(self, text: str) -> str:\n        return str(text.split(\"<html>\")[-1].rsplit(\"</html>\", 1)[0] + \"</html>\")\n"
  },
  {
    "path": "backend/debug/__init__.py",
    "content": ""
  },
  {
    "path": "backend/evals/__init__.py",
    "content": ""
  },
  {
    "path": "backend/evals/config.py",
    "content": "EVALS_DIR = \"./evals_data\"\n"
  },
  {
    "path": "backend/evals/core.py",
    "content": "from config import (\n    ANTHROPIC_API_KEY,\n    GEMINI_API_KEY,\n    OPENAI_API_KEY,\n    OPENAI_BASE_URL,\n)\nfrom llm import Llm, OPENAI_MODELS, ANTHROPIC_MODELS, GEMINI_MODELS\nfrom agent.runner import Agent\nfrom prompts.create.image import build_image_prompt_messages\nfrom prompts.prompt_types import Stack\nfrom openai.types.chat import ChatCompletionMessageParam\nfrom typing import Any\n\n\nasync def generate_code_for_image(image_url: str, stack: Stack, model: Llm) -> str:\n    prompt_messages = build_image_prompt_messages(\n        image_data_urls=[image_url],\n        stack=stack,\n        text_prompt=\"\",\n        image_generation_enabled=True,\n    )\n    async def send_message(\n        _: str,\n        __: str | None,\n        ___: int,\n        ____: dict[str, Any] | None = None,\n        _____: str | None = None,\n    ) -> None:\n        # Evals do not stream tool/assistant messages to a frontend.\n        return None\n\n    if model in ANTHROPIC_MODELS and not ANTHROPIC_API_KEY:\n        raise Exception(\"Anthropic API key not found\")\n    if model in GEMINI_MODELS and not GEMINI_API_KEY:\n        raise Exception(\"Gemini API key not found\")\n    if model in OPENAI_MODELS and not OPENAI_API_KEY:\n        raise Exception(\"OpenAI API key not found\")\n\n    print(f\"[EVALS] Using agent runner for model: {model.value}\")\n\n    runner = Agent(\n        send_message=send_message,\n        variant_index=0,\n        openai_api_key=OPENAI_API_KEY,\n        openai_base_url=OPENAI_BASE_URL,\n        anthropic_api_key=ANTHROPIC_API_KEY,\n        gemini_api_key=GEMINI_API_KEY,\n        should_generate_images=True,\n        initial_file_state=None,\n        option_codes=None,\n    )\n    return await runner.run(model, prompt_messages)\n"
  },
  {
    "path": "backend/evals/runner.py",
    "content": "from typing import Any, Awaitable, Callable, Coroutine, List, Optional, Tuple\nimport asyncio\nimport os\nfrom datetime import datetime\nimport time\nimport inspect\nfrom llm import Llm\nfrom prompts.prompt_types import Stack\nfrom .core import generate_code_for_image\nfrom .utils import image_to_data_url\nfrom .config import EVALS_DIR\n\nMAX_EVAL_RETRIES = 2\n\n\ndef _resolve_eval_filenames(input_files: Optional[List[str]]) -> List[str]:\n    input_dir = EVALS_DIR + \"/inputs\"\n    if input_files and len(input_files) > 0:\n        return [os.path.basename(f) for f in input_files if f.endswith(\".png\")]\n    return [f for f in os.listdir(input_dir) if f.endswith(\".png\")]\n\n\ndef _output_html_filename(original_filename: str, attempt_idx: int) -> str:\n    return f\"{os.path.splitext(original_filename)[0]}_{attempt_idx}.html\"\n\n\ndef get_eval_output_subfolder(stack: Stack, model: str) -> str:\n    today = datetime.now().strftime(\"%b_%d_%Y\")\n    output_dir = EVALS_DIR + \"/outputs\"\n    return os.path.join(output_dir, f\"{today}_{model}_{stack}\")\n\n\ndef count_pending_eval_tasks(\n    stack: Stack,\n    model: str,\n    input_files: Optional[List[str]] = None,\n    n: int = 1,\n    diff_mode: bool = False,\n) -> Tuple[int, int]:\n    evals = _resolve_eval_filenames(input_files)\n    if not diff_mode:\n        return len(evals) * n, 0\n\n    output_subfolder = get_eval_output_subfolder(stack=stack, model=model)\n    pending_tasks = 0\n    skipped_existing_tasks = 0\n    for original_filename in evals:\n        for n_idx in range(n):\n            output_filename = _output_html_filename(original_filename, n_idx)\n            output_path = os.path.join(output_subfolder, output_filename)\n            if os.path.exists(output_path):\n                skipped_existing_tasks += 1\n            else:\n                pending_tasks += 1\n    return pending_tasks, skipped_existing_tasks\n\n\nasync def generate_code_and_time(\n    image_url: str,\n    
stack: Stack,\n    model: Llm,\n    original_input_filename: str,\n    attempt_idx: int,\n) -> Tuple[str, int, Optional[str], Optional[float], Optional[Exception], int]:\n    \"\"\"\n    Generates code for an image, measures the time taken, and returns identifiers\n    along with success/failure status.\n    Returns a tuple:\n    (original_input_filename, attempt_idx, content, duration, error_object, retries_used)\n    content and duration are None if an error occurs during generation.\n    \"\"\"\n    retries_used = 0\n    while True:\n        start_time = time.perf_counter()\n        try:\n            content = await generate_code_for_image(\n                image_url=image_url, stack=stack, model=model\n            )\n            end_time = time.perf_counter()\n            duration = end_time - start_time\n            return (\n                original_input_filename,\n                attempt_idx,\n                content,\n                duration,\n                None,\n                retries_used,\n            )\n        except Exception as e:\n            if retries_used >= MAX_EVAL_RETRIES:\n                print(\n                    f\"Error during code generation for {original_input_filename} \"\n                    f\"(attempt {attempt_idx}, retries exhausted): {e}\"\n                )\n                return (\n                    original_input_filename,\n                    attempt_idx,\n                    None,\n                    None,\n                    e,\n                    retries_used,\n                )\n            retries_used += 1\n            print(\n                f\"Retrying {original_input_filename} (attempt {attempt_idx}) \"\n                f\"{retries_used}/{MAX_EVAL_RETRIES} after error: {e}\"\n            )\n\n\nasync def run_image_evals(\n    stack: Optional[Stack] = None,\n    model: Optional[str] = None,\n    n: int = 1,\n    input_files: Optional[List[str]] = None,\n    diff_mode: bool = False,\n    progress_callback: 
Optional[Callable[[dict[str, Any]], Any | Awaitable[Any]]] = None,\n) -> List[str]:\n    INPUT_DIR = EVALS_DIR + \"/inputs\"\n    evals = _resolve_eval_filenames(input_files)\n\n    if not stack:\n        raise ValueError(\"No stack was provided\")\n    if not model:\n        raise ValueError(\"No model was provided\")\n\n    print(\"User selected stack:\", stack)\n    print(\"User selected model:\", model)\n    selected_model = Llm(model)\n    print(f\"Running evals for {selected_model.value} model\")\n    \n    if input_files and len(input_files) > 0:\n        print(f\"Running on {len(evals)} selected files\")\n    else:\n        print(f\"Running on all {len(evals)} files in {INPUT_DIR}\")\n\n    output_subfolder = get_eval_output_subfolder(\n        stack=stack,\n        model=selected_model.value,\n    )\n    os.makedirs(output_subfolder, exist_ok=True)\n\n    task_coroutines: List[\n        Coroutine[\n            Any,\n            Any,\n            Tuple[str, int, Optional[str], Optional[float], Optional[Exception], int],\n        ]\n    ] = []\n    skipped_existing_tasks = 0\n    for original_filename in evals:\n        # Handle both full paths and relative filenames\n        if os.path.isabs(original_filename):\n            filepath = original_filename\n            original_filename = os.path.basename(original_filename)\n        else:\n            filepath = os.path.join(INPUT_DIR, original_filename)\n\n        data_url: Optional[str] = None\n        for n_idx in range(n):\n            output_filename = _output_html_filename(original_filename, n_idx)\n            output_path = os.path.join(output_subfolder, output_filename)\n            if diff_mode and os.path.exists(output_path):\n                skipped_existing_tasks += 1\n                continue\n\n            if data_url is None:\n                data_url = await image_to_data_url(filepath)\n            current_model_for_task = (\n                selected_model if n_idx == 0 else 
Llm.GPT_4_1_2025_04_14\n            )\n            coro = generate_code_and_time(\n                image_url=data_url,\n                stack=stack,\n                model=current_model_for_task,\n                original_input_filename=original_filename,\n                attempt_idx=n_idx,\n            )\n            task_coroutines.append(coro)\n\n    if diff_mode and skipped_existing_tasks > 0:\n        print(\n            f\"Diff mode: skipping {skipped_existing_tasks} existing outputs for \"\n            f\"{selected_model.value}\"\n        )\n\n    print(f\"Processing {len(task_coroutines)} tasks...\")\n    total_tasks = len(task_coroutines)\n    completed_tasks = 0\n\n    output_files: List[str] = []\n    timing_data: List[str] = []\n    failed_tasks_log: List[str] = []\n\n    async def emit_progress(event: dict[str, Any]) -> None:\n        if progress_callback is None:\n            return\n        maybe_awaitable = progress_callback(event)\n        if inspect.isawaitable(maybe_awaitable):\n            await maybe_awaitable\n\n    for future in asyncio.as_completed(task_coroutines):\n        try:\n            (\n                task_orig_fn,\n                task_attempt_idx,\n                generated_content,\n                time_taken,\n                error_obj,\n                retries_used,\n            ) = await future\n            completed_tasks += 1\n\n            output_html_filename_base = os.path.splitext(task_orig_fn)[0]\n            final_output_html_filename = (\n                f\"{output_html_filename_base}_{task_attempt_idx}.html\"\n            )\n            output_html_filepath = os.path.join(\n                output_subfolder, final_output_html_filename\n            )\n\n            if error_obj is not None:\n                failed_tasks_log.append(\n                    f\"Input: {task_orig_fn}, Attempt: {task_attempt_idx}, OutputFile: \"\n                    f\"{final_output_html_filename}, Retries: {retries_used}, \"\n                
    f\"Error: Generation failed - {str(error_obj)}\"\n                )\n                await emit_progress(\n                    {\n                        \"type\": \"task_complete\",\n                        \"completed_tasks\": completed_tasks,\n                        \"total_tasks\": total_tasks,\n                        \"input_file\": task_orig_fn,\n                        \"attempt_idx\": task_attempt_idx,\n                        \"success\": False,\n                        \"error\": str(error_obj),\n                        \"retries_used\": retries_used,\n                    }\n                )\n            elif generated_content is not None and time_taken is not None:\n                try:\n                    with open(output_html_filepath, \"w\") as file:\n                        file.write(generated_content)\n                    timing_data.append(\n                        f\"{final_output_html_filename}: {time_taken:.2f} seconds\"\n                    )\n                    output_files.append(final_output_html_filename)\n                    print(\n                        f\"Successfully processed and wrote {final_output_html_filename}\"\n                    )\n                    await emit_progress(\n                        {\n                            \"type\": \"task_complete\",\n                            \"completed_tasks\": completed_tasks,\n                            \"total_tasks\": total_tasks,\n                            \"input_file\": task_orig_fn,\n                            \"attempt_idx\": task_attempt_idx,\n                            \"success\": True,\n                            \"output_file\": final_output_html_filename,\n                            \"duration_seconds\": time_taken,\n                            \"retries_used\": retries_used,\n                        }\n                    )\n                except Exception as e_write:\n                    failed_tasks_log.append(\n                        f\"Input: 
{task_orig_fn}, Attempt: {task_attempt_idx}, OutputFile: {final_output_html_filename}, Error: Writing to file failed - {str(e_write)}\"\n                    )\n                    await emit_progress(\n                        {\n                            \"type\": \"task_complete\",\n                            \"completed_tasks\": completed_tasks,\n                            \"total_tasks\": total_tasks,\n                            \"input_file\": task_orig_fn,\n                            \"attempt_idx\": task_attempt_idx,\n                            \"success\": False,\n                            \"error\": str(e_write),\n                        }\n                    )\n            else:\n                failed_tasks_log.append(\n                    f\"Input: {task_orig_fn}, Attempt: {task_attempt_idx}, OutputFile: {final_output_html_filename}, Error: Unknown issue - content or time_taken is None without explicit error.\"\n                )\n                await emit_progress(\n                    {\n                        \"type\": \"task_complete\",\n                        \"completed_tasks\": completed_tasks,\n                        \"total_tasks\": total_tasks,\n                        \"input_file\": task_orig_fn,\n                        \"attempt_idx\": task_attempt_idx,\n                        \"success\": False,\n                        \"error\": \"Unknown issue during task processing.\",\n                    }\n                )\n\n        except Exception as e_as_completed:\n            print(f\"A task in as_completed failed unexpectedly: {e_as_completed}\")\n            failed_tasks_log.append(\n                f\"Critical Error: A task processing failed - {str(e_as_completed)}\"\n            )\n            completed_tasks += 1\n            await emit_progress(\n                {\n                    \"type\": \"task_complete\",\n                    \"completed_tasks\": completed_tasks,\n                    \"total_tasks\": 
total_tasks,\n                    \"input_file\": \"unknown\",\n                    \"attempt_idx\": -1,\n                    \"success\": False,\n                    \"error\": str(e_as_completed),\n                }\n            )\n\n    # Write timing data for successful tasks\n    if timing_data:\n        timing_file_path = os.path.join(output_subfolder, \"generation_times.txt\")\n        try:\n            is_new_or_empty_file = (\n                not os.path.exists(timing_file_path)\n                or os.path.getsize(timing_file_path) == 0\n            )\n\n            with open(timing_file_path, \"a\") as file:\n                if is_new_or_empty_file:\n                    file.write(f\"Model: {selected_model.value}\\n\")\n                else:\n                    file.write(\"\\n\")\n\n                file.write(\"\\n\".join(timing_data))\n            print(f\"Timing data saved to {timing_file_path}\")\n        except Exception as e:\n            print(f\"Error writing timing file {timing_file_path}: {e}\")\n\n    # Write log for failed tasks\n    if failed_tasks_log:\n        failed_log_path = os.path.join(output_subfolder, \"failed_tasks.txt\")\n        try:\n            with open(failed_log_path, \"w\") as file:\n                file.write(\"\\n\".join(failed_tasks_log))\n            print(f\"Failed tasks log saved to {failed_log_path}\")\n        except Exception as e:\n            print(f\"Error writing failed tasks log {failed_log_path}: {e}\")\n\n    return output_files\n"
  },
  {
    "path": "backend/evals/utils.py",
    "content": "import base64\n\n\nasync def image_to_data_url(filepath: str):\n    with open(filepath, \"rb\") as image_file:\n        encoded_string = base64.b64encode(image_file.read()).decode()\n    return f\"data:image/png;base64,{encoded_string}\"\n"
  },
  {
    "path": "backend/fs_logging/__init__.py",
    "content": ""
  },
  {
    "path": "backend/fs_logging/openai_input_compare.py",
    "content": "import json\nfrom dataclasses import dataclass\nfrom typing import Any, TypeAlias, cast\n\nfrom fs_logging.openai_input_formatting import (\n    summarize_responses_input_item,\n    to_serializable,\n)\n\nJSONScalar: TypeAlias = None | bool | int | float | str\nJSONValue: TypeAlias = JSONScalar | list[\"JSONValue\"] | dict[str, \"JSONValue\"]\n\n\n@dataclass(frozen=True)\nclass OpenAIInputDifference:\n    item_index: int\n    path: str\n    left_summary: str\n    right_summary: str\n    left_value: Any\n    right_value: Any\n\n\n@dataclass(frozen=True)\nclass OpenAIInputComparison:\n    common_prefix_items: int\n    left_item_count: int\n    right_item_count: int\n    difference: OpenAIInputDifference | None\n\n\ndef _extract_input_items(payload: Any) -> list[JSONValue]:\n    serialized = cast(JSONValue, to_serializable(payload))\n    if isinstance(serialized, list):\n        return serialized\n    if isinstance(serialized, dict):\n        serialized_dict = cast(dict[str, JSONValue], serialized)\n        input_items = serialized_dict.get(\"input\")\n        if isinstance(input_items, list):\n            return cast(list[JSONValue], input_items)\n    raise ValueError(\"Expected a raw input array or a request payload with an 'input' list\")\n\n\ndef _as_json_dict(value: JSONValue) -> dict[str, JSONValue]:\n    return cast(dict[str, JSONValue], value)\n\n\ndef _as_json_list(value: JSONValue) -> list[JSONValue]:\n    return cast(list[JSONValue], value)\n\n\ndef _append_dict_path(path: str, key: str) -> str:\n    if not path:\n        return key\n    return f\"{path}.{key}\"\n\n\ndef _append_list_path(path: str, index: int) -> str:\n    return f\"{path}[{index}]\"\n\n\ndef _find_first_value_difference(\n    left: JSONValue,\n    right: JSONValue,\n    path: str = \"\",\n) -> tuple[str, JSONValue, JSONValue] | None:\n    if type(left) is not type(right):\n        return path, left, right\n\n    if isinstance(left, dict):\n        left_dict = 
_as_json_dict(left)\n        right_dict = _as_json_dict(right)\n        left_keys = list(left_dict.keys())\n        right_keys = list(right_dict.keys())\n        for index in range(min(len(left_keys), len(right_keys))):\n            left_key = left_keys[index]\n            right_key = right_keys[index]\n            if left_key != right_key:\n                key_path = _append_dict_path(path, left_key)\n                return key_path, left, right\n\n        if len(left_keys) != len(right_keys):\n            extra_key = (\n                left_keys[len(right_keys)]\n                if len(left_keys) > len(right_keys)\n                else right_keys[len(left_keys)]\n            )\n            key_path = _append_dict_path(path, extra_key)\n            left_value = left_dict.get(extra_key)\n            right_value = right_dict.get(extra_key)\n            return key_path, cast(JSONValue, left_value), cast(JSONValue, right_value)\n\n        for key in left_keys:\n            nested = _find_first_value_difference(\n                left_dict[key],\n                right_dict[key],\n                _append_dict_path(path, key),\n            )\n            if nested is not None:\n                return nested\n        return None\n\n    if isinstance(left, list):\n        left_list = _as_json_list(left)\n        right_list = _as_json_list(right)\n        for index in range(min(len(left_list), len(right_list))):\n            nested = _find_first_value_difference(\n                left_list[index],\n                right_list[index],\n                _append_list_path(path, index),\n            )\n            if nested is not None:\n                return nested\n\n        if len(left_list) != len(right_list):\n            index = min(len(left_list), len(right_list))\n            item_path = _append_list_path(path, index)\n            left_value = left_list[index] if index < len(left_list) else None\n            right_value = right_list[index] if index < len(right_list) else 
None\n            return item_path, left_value, right_value\n        return None\n\n    if left != right:\n        return path, left, right\n\n    return None\n\n\ndef compare_openai_inputs(\n    left_payload: Any,\n    right_payload: Any,\n) -> OpenAIInputComparison:\n    left_items = _extract_input_items(left_payload)\n    right_items = _extract_input_items(right_payload)\n\n    common_prefix_items = 0\n    for index in range(min(len(left_items), len(right_items))):\n        left_item = left_items[index]\n        right_item = right_items[index]\n        if left_item == right_item:\n            common_prefix_items += 1\n            continue\n\n        nested_difference = _find_first_value_difference(left_item, right_item)\n        nested_path = \"\" if nested_difference is None else nested_difference[0]\n        path = f\"input[{index}]\"\n        if nested_path:\n            if nested_path.startswith(\"[\"):\n                path = f\"{path}{nested_path}\"\n            else:\n                path = f\"{path}.{nested_path}\"\n\n        left_value = left_item if nested_difference is None else nested_difference[1]\n        right_value = right_item if nested_difference is None else nested_difference[2]\n\n        return OpenAIInputComparison(\n            common_prefix_items=common_prefix_items,\n            left_item_count=len(left_items),\n            right_item_count=len(right_items),\n            difference=OpenAIInputDifference(\n                item_index=index,\n                path=path,\n                left_summary=summarize_responses_input_item(index, left_item),\n                right_summary=summarize_responses_input_item(index, right_item),\n                left_value=left_value,\n                right_value=right_value,\n            ),\n        )\n\n    if len(left_items) != len(right_items):\n        index = min(len(left_items), len(right_items))\n        left_item = left_items[index] if index < len(left_items) else None\n        right_item = 
right_items[index] if index < len(right_items) else None\n        return OpenAIInputComparison(\n            common_prefix_items=common_prefix_items,\n            left_item_count=len(left_items),\n            right_item_count=len(right_items),\n            difference=OpenAIInputDifference(\n                item_index=index,\n                path=f\"input[{index}]\",\n                left_summary=(\n                    summarize_responses_input_item(index, left_item)\n                    if left_item is not None\n                    else f\"{index:02d} <missing>\"\n                ),\n                right_summary=(\n                    summarize_responses_input_item(index, right_item)\n                    if right_item is not None\n                    else f\"{index:02d} <missing>\"\n                ),\n                left_value=left_item,\n                right_value=right_item,\n            ),\n        )\n\n    return OpenAIInputComparison(\n        common_prefix_items=common_prefix_items,\n        left_item_count=len(left_items),\n        right_item_count=len(right_items),\n        difference=None,\n    )\n\n\ndef format_openai_input_comparison(comparison: OpenAIInputComparison) -> str:\n    lines = [\n        \"OpenAI input comparison\",\n        f\"common_prefix_items={comparison.common_prefix_items}\",\n        f\"left_item_count={comparison.left_item_count}\",\n        f\"right_item_count={comparison.right_item_count}\",\n    ]\n\n    difference = comparison.difference\n    if difference is None:\n        lines.append(\"difference=none\")\n        return \"\\n\".join(lines)\n\n    lines.extend(\n        [\n            f\"first_different_item_index={difference.item_index}\",\n            f\"first_different_path={difference.path}\",\n            f\"left_summary={difference.left_summary}\",\n            f\"right_summary={difference.right_summary}\",\n            \"left_value=\" + json.dumps(difference.left_value, indent=2, ensure_ascii=False),\n            
\"right_value=\" + json.dumps(\n                difference.right_value,\n                indent=2,\n                ensure_ascii=False,\n            ),\n        ]\n    )\n    return \"\\n\".join(lines)\n\n\ndef compare_openai_input_json_strings(\n    left_json: str,\n    right_json: str,\n) -> OpenAIInputComparison:\n    left_payload = json.loads(left_json)\n    right_payload = json.loads(right_json)\n    return compare_openai_inputs(left_payload, right_payload)\n"
  },
  {
    "path": "backend/fs_logging/openai_input_formatting.py",
    "content": "# pyright: reportUnknownVariableType=false\nimport json\nfrom typing import Any\n\nfrom agent.state import ensure_str\n\n\ndef truncate_for_log(value: Any, max_len: int = 120) -> str:\n    text = ensure_str(value).replace(\"\\n\", \"\\\\n\")\n    if len(text) <= max_len:\n        return text\n    return f\"{text[:max_len]}...\"\n\n\ndef as_dict(value: Any) -> dict[str, Any] | None:\n    if isinstance(value, dict):\n        return value\n\n    model_dump = getattr(value, \"model_dump\", None)\n    if callable(model_dump):\n        dumped = model_dump()\n        if isinstance(dumped, dict):\n            return dumped\n\n    to_dict = getattr(value, \"to_dict\", None)\n    if callable(to_dict):\n        dumped = to_dict()\n        if isinstance(dumped, dict):\n            return dumped\n\n    dict_method = getattr(value, \"dict\", None)\n    if callable(dict_method):\n        dumped = dict_method()\n        if isinstance(dumped, dict):\n            return dumped\n\n    raw_dict = getattr(value, \"__dict__\", None)\n    if isinstance(raw_dict, dict):\n        normalized = {k: v for k, v in raw_dict.items() if not k.startswith(\"_\")}\n        if normalized:\n            return normalized\n\n    return None\n\n\ndef to_serializable(value: Any) -> Any:\n    if value is None or isinstance(value, (bool, int, float, str)):\n        return value\n\n    if isinstance(value, dict):\n        return {ensure_str(k): to_serializable(v) for k, v in value.items()}\n\n    if isinstance(value, (list, tuple)):\n        return [to_serializable(v) for v in value]\n\n    value_as_dict = as_dict(value)\n    if value_as_dict is not None:\n        return to_serializable(value_as_dict)\n\n    return ensure_str(value)\n\n\ndef summarize_content_part(part: Any) -> str:\n    part_dict = as_dict(part)\n    if part_dict is None:\n        return f\"{type(part).__name__}\"\n\n    part_type = part_dict.get(\"type\", \"unknown\")\n\n    if part_type in (\"input_text\", \"text\", 
\"output_text\", \"summary_text\"):\n        text = ensure_str(part_dict.get(\"text\", \"\"))\n        return (\n            f\"{part_type}(chars={len(text)} \"\n            f\"preview='{truncate_for_log(text, max_len=80)}')\"\n        )\n\n    if part_type in (\"input_image\", \"image_url\"):\n        image_url_value: Any = part_dict.get(\"image_url\", \"\")\n        detail: str | None = None\n        if isinstance(image_url_value, dict):\n            detail = ensure_str(image_url_value.get(\"detail\", \"\"))\n            image_url_value = image_url_value.get(\"url\", \"\")\n        else:\n            detail = ensure_str(part_dict.get(\"detail\", \"\"))\n\n        url_text = ensure_str(image_url_value)\n        detail_text = detail or \"-\"\n        return (\n            f\"{part_type}(detail={detail_text} \"\n            f\"url='{truncate_for_log(url_text, max_len=80)}')\"\n        )\n\n    return f\"{part_type}(keys={sorted(part_dict.keys())})\"\n\n\ndef summarize_function_call_output_payload(output_text: str) -> str:\n    try:\n        parsed = json.loads(output_text)\n    except json.JSONDecodeError:\n        return (\n            f\"output_chars={len(output_text)} \"\n            f\"preview='{truncate_for_log(output_text)}'\"\n        )\n\n    if not isinstance(parsed, dict):\n        return (\n            f\"output_type={type(parsed).__name__} \"\n            f\"preview='{truncate_for_log(parsed)}'\"\n        )\n\n    if \"error\" in parsed:\n        error_text = ensure_str(parsed.get(\"error\"))\n        return f\"error='{truncate_for_log(error_text)}'\"\n\n    summary_parts: list[str] = []\n\n    content_text = ensure_str(parsed.get(\"content\"))\n    if content_text:\n        summary_parts.append(f\"content='{truncate_for_log(content_text, max_len=80)}'\")\n\n    details = parsed.get(\"details\")\n    if isinstance(details, dict):\n        path = ensure_str(details.get(\"path\"))\n\n        diff_text = details.get(\"diff\")\n        if (not path) and 
isinstance(diff_text, str) and diff_text:\n            for line in diff_text.splitlines():\n                if line.startswith(\"--- \"):\n                    path = line.removeprefix(\"--- \").strip()\n                    break\n\n        if path:\n            summary_parts.append(f\"path={path}\")\n\n        edits = details.get(\"edits\")\n        if isinstance(edits, list):\n            summary_parts.append(f\"edits={len(edits)}\")\n\n        content_length = details.get(\"contentLength\")\n        if isinstance(content_length, int):\n            summary_parts.append(f\"content_length={content_length}\")\n\n        first_changed_line = details.get(\"firstChangedLine\")\n        if isinstance(first_changed_line, int):\n            summary_parts.append(f\"first_changed_line={first_changed_line}\")\n\n        if isinstance(diff_text, str) and diff_text:\n            diff_lines = diff_text.count(\"\\n\")\n            summary_parts.append(f\"diff_chars={len(diff_text)}\")\n            summary_parts.append(f\"diff_lines={diff_lines}\")\n\n    if not summary_parts:\n        summary_parts.append(f\"keys={sorted(parsed.keys())}\")\n\n    return \" \".join(summary_parts)\n\n\ndef summarize_responses_input_item(index: int, item: Any) -> str:\n    item_dict = as_dict(item)\n    if item_dict is None:\n        return f\"{index:02d} item_type={type(item).__name__}\"\n\n    if \"role\" in item_dict:\n        role = ensure_str(item_dict.get(\"role\", \"unknown\"))\n        content = item_dict.get(\"content\", \"\")\n        if isinstance(content, str):\n            return (\n                f\"{index:02d} role={role} content=str chars={len(content)} \"\n                f\"preview='{truncate_for_log(content)}'\"\n            )\n        if isinstance(content, list):\n            part_summaries = [summarize_content_part(part) for part in content]\n            return (\n                f\"{index:02d} role={role} content_parts={len(content)} \"\n                f\"[{'; 
'.join(part_summaries)}]\"\n            )\n        return f\"{index:02d} role={role} content_type={type(content).__name__}\"\n\n    item_type = ensure_str(item_dict.get(\"type\", \"unknown\"))\n\n    if item_type in (\"function_call\", \"custom_tool_call\"):\n        raw_args = (\n            item_dict.get(\"input\")\n            if item_type == \"custom_tool_call\"\n            else item_dict.get(\"arguments\")\n        )\n        args_text = ensure_str(raw_args or \"\")\n        call_id = item_dict.get(\"call_id\") or item_dict.get(\"id\")\n        return (\n            f\"{index:02d} type={item_type} name={item_dict.get('name')} \"\n            f\"call_id={call_id} args_chars={len(args_text)} \"\n            f\"preview='{truncate_for_log(args_text)}'\"\n        )\n\n    if item_type == \"function_call_output\":\n        output_text = ensure_str(item_dict.get(\"output\", \"\"))\n        return (\n            f\"{index:02d} type=function_call_output call_id={item_dict.get('call_id')} \"\n            f\"{summarize_function_call_output_payload(output_text)}\"\n        )\n\n    if item_type == \"message\":\n        role = ensure_str(item_dict.get(\"role\", \"unknown\"))\n        content = item_dict.get(\"content\", [])\n        if isinstance(content, list):\n            part_summaries = [summarize_content_part(part) for part in content]\n            return (\n                f\"{index:02d} type=message role={role} parts={len(content)} \"\n                f\"[{'; '.join(part_summaries)}]\"\n            )\n        return (\n            f\"{index:02d} type=message role={role} \"\n            f\"content_type={type(content).__name__}\"\n        )\n\n    if item_type == \"reasoning\":\n        summary = item_dict.get(\"summary\")\n        if isinstance(summary, list):\n            summary_parts = [summarize_content_part(part) for part in summary]\n            return (\n                f\"{index:02d} type=reasoning summary_parts={len(summary)} \"\n                f\"[{'; 
'.join(summary_parts)}]\"\n            )\n        return f\"{index:02d} type=reasoning summary_type={type(summary).__name__}\"\n\n    return f\"{index:02d} type={item_type} keys={sorted(item_dict.keys())}\"\n"
  },
  {
    "path": "backend/fs_logging/openai_turn_inputs.py",
    "content": "# pyright: reportUnknownVariableType=false\nimport json\nimport os\nimport uuid\nfrom dataclasses import dataclass, field\nfrom datetime import datetime\nfrom html import escape\nfrom typing import Any, Sequence\n\nfrom agent.providers.pricing import MODEL_PRICING\nfrom agent.providers.token_usage import TokenUsage\nfrom agent.state import ensure_str\nfrom fs_logging.openai_input_formatting import (\n    summarize_responses_input_item,\n    to_serializable,\n)\nfrom llm import Llm, get_openai_api_name\n\n\ndef _render_json_scalar(value: Any) -> str:\n    if value is None:\n        return \"<span class='json-null'>null</span>\"\n    if isinstance(value, bool):\n        return f\"<span class='json-bool'>{str(value).lower()}</span>\"\n    if isinstance(value, (int, float)):\n        return f\"<span class='json-number'>{escape(ensure_str(value))}</span>\"\n    text = ensure_str(value)\n    if \"\\n\" not in text and len(text) <= 160:\n        return f\"<code class='json-string'>{escape(text)}</code>\"\n    return (\n        \"<details class='json-string-block'>\"\n        f\"<summary>string ({len(text)} chars)</summary>\"\n        f\"<pre>{escape(text)}</pre>\"\n        \"</details>\"\n    )\n\n\ndef _render_json_node(value: Any, label: str | None = None) -> str:\n    label_html = \"\"\n    if label is not None:\n        label_html = f\"<span class='json-key'>{escape(label)}</span>: \"\n\n    if isinstance(value, dict):\n        parts = [\n            \"<details class='json-node'>\",\n            (\n                f\"<summary>{label_html}\"\n                f\"<span class='json-type'>object ({len(value)} keys)</span></summary>\"\n            ),\n            \"<div class='json-children'>\",\n        ]\n        for child_key, child_value in value.items():\n            parts.append(_render_json_node(child_value, ensure_str(child_key)))\n        parts.append(\"</div>\")\n        parts.append(\"</details>\")\n        return \"\".join(parts)\n\n    if 
isinstance(value, list):\n        parts = [\n            \"<details class='json-node'>\",\n            (\n                f\"<summary>{label_html}\"\n                f\"<span class='json-type'>array ({len(value)} items)</span></summary>\"\n            ),\n            \"<div class='json-children'>\",\n        ]\n        for index, child_value in enumerate(value):\n            parts.append(_render_json_node(child_value, f\"[{index}]\"))\n        parts.append(\"</div>\")\n        parts.append(\"</details>\")\n        return \"\".join(parts)\n\n    return (\n        \"<div class='json-leaf'>\"\n        f\"{label_html}{_render_json_scalar(value)}\"\n        \"</div>\"\n    )\n\n\ndef _render_copy_controls(copy_target_id: str, button_label: str) -> str:\n    return (\n        \"<div class='copy-controls'>\"\n        f\"<button type='button' class='copy-button' data-copy-target='{escape(copy_target_id)}'>\"\n        f\"{escape(button_label)}\"\n        \"</button>\"\n        \"<span class='copy-status' aria-live='polite'></span>\"\n        \"</div>\"\n    )\n\ndef _log_openai_turn_input(model: Llm, turn_index: int, input_items: Sequence[Any]) -> None:\n    model_name = get_openai_api_name(model)\n    print(\n        f\"[OPENAI TURN INPUT] model={model_name} \"\n        f\"turn={turn_index} items={len(input_items)}\"\n    )\n    for index, item in enumerate(input_items):\n        print(\n            f\"[OPENAI TURN INPUT] \"\n            f\"{summarize_responses_input_item(index, item)}\"\n        )\n\n\ndef _is_openai_turn_input_console_enabled() -> bool:\n    value = os.environ.get(\"OPENAI_TURN_INPUT_CONSOLE\", \"\")\n    return value.strip().lower() in {\"1\", \"true\", \"yes\", \"on\"}\n\n\n@dataclass\nclass OpenAITurnInputItem:\n    index: int\n    summary: str\n    payload: Any\n\n\n@dataclass\nclass OpenAITurnUsageSummary:\n    input_tokens: int\n    output_tokens: int\n    cache_read: int\n    cache_write: int\n    total_tokens: int\n    cache_hit_rate_percent: 
float\n    cost_usd: float | None\n\n\n@dataclass\nclass OpenAITurnInputReport:\n    turn_index: int\n    items: list[OpenAITurnInputItem]\n    request_payload: Any | None = None\n    usage: OpenAITurnUsageSummary | None = None\n\n\n@dataclass\nclass OpenAITurnInputLogger:\n    model: Llm\n    enabled: bool = False\n    report_id: str = field(default_factory=lambda: uuid.uuid4().hex)\n    _turn_index: int = 0\n    _turns: list[OpenAITurnInputReport] = field(default_factory=list)\n\n    def record_turn_input(\n        self,\n        input_items: Sequence[Any],\n        request_payload: Any | None = None,\n    ) -> None:\n        if not self.enabled:\n            return\n\n        self._turn_index += 1\n        if _is_openai_turn_input_console_enabled():\n            _log_openai_turn_input(self.model, self._turn_index, input_items)\n\n        turn_items = [\n            OpenAITurnInputItem(\n                index=index,\n                summary=summarize_responses_input_item(index, item),\n                payload=to_serializable(item),\n            )\n            for index, item in enumerate(input_items)\n        ]\n        self._turns.append(\n            OpenAITurnInputReport(\n                turn_index=self._turn_index,\n                items=turn_items,\n                request_payload=to_serializable(request_payload),\n            )\n        )\n\n    def record_turn_usage(self, usage: TokenUsage) -> None:\n        if not self.enabled or not self._turns:\n            return\n\n        pricing = MODEL_PRICING.get(get_openai_api_name(self.model))\n        cost_usd = usage.cost(pricing) if pricing else None\n        self._turns[-1].usage = OpenAITurnUsageSummary(\n            input_tokens=usage.input,\n            output_tokens=usage.output,\n            cache_read=usage.cache_read,\n            cache_write=usage.cache_write,\n            total_tokens=usage.total,\n            cache_hit_rate_percent=usage.cache_hit_rate_percent(),\n            cost_usd=cost_usd,\n  
      )\n\n    def write_html_report(self) -> str | None:\n        if not self.enabled:\n            return None\n\n        try:\n            logs_path = os.environ.get(\"LOGS_PATH\", os.getcwd())\n            logs_directory = os.path.join(logs_path, \"run_logs\")\n            os.makedirs(logs_directory, exist_ok=True)\n\n            timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n            model_name = get_openai_api_name(self.model).replace(\"/\", \"_\")\n            filename = (\n                f\"openai_turn_inputs_{model_name}_{timestamp}_{self.report_id[:8]}.html\"\n            )\n            filepath = os.path.join(logs_directory, filename)\n\n            with open(filepath, \"w\", encoding=\"utf-8\") as f:\n                f.write(self._render_html_report())\n            return filepath\n        except Exception as e:\n            print(f\"[OPENAI TURN INPUT] Failed to write HTML report: {e}\")\n            return None\n\n    def _render_html_report(self) -> str:\n        model_name = get_openai_api_name(self.model)\n        html_parts = [\n            \"<!DOCTYPE html>\",\n            \"<html lang='en'>\",\n            \"<head>\",\n            \"  <meta charset='UTF-8' />\",\n            \"  <meta name='viewport' content='width=device-width, initial-scale=1.0' />\",\n            \"  <title>OpenAI Turn Input Report</title>\",\n            \"  <style>\",\n            \"    body { font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', sans-serif; margin: 24px; color: #111827; }\",\n            \"    h1, h2, h3 { margin: 0 0 12px; }\",\n            \"    .meta { margin-bottom: 18px; color: #4b5563; }\",\n            \"    .turn { border: 1px solid #d1d5db; border-radius: 8px; padding: 14px; margin-bottom: 16px; }\",\n            \"    table { width: 100%; border-collapse: collapse; margin-top: 8px; }\",\n            \"    th, td { border: 1px solid #e5e7eb; padding: 8px; vertical-align: top; text-align: left; }\",\n            \"    th { 
background: #f9fafb; }\",\n            \"    code, pre { font-family: ui-monospace, SFMono-Regular, Menlo, monospace; font-size: 12px; }\",\n            \"    details { margin-top: 10px; }\",\n            \"    pre { background: #0b1020; color: #d1d5db; padding: 12px; border-radius: 8px; overflow: auto; max-height: 420px; }\",\n            \"    .usage-table { width: auto; min-width: 540px; margin-top: 10px; }\",\n            \"    .usage-table th { width: 180px; }\",\n            \"    .usage-none { margin-top: 10px; color: #6b7280; font-style: italic; }\",\n            \"    .payload-wrap { margin-top: 10px; }\",\n            \"    .json-view { margin-top: 8px; padding: 10px; border: 1px solid #e5e7eb; border-radius: 8px; background: #fcfcfd; }\",\n            \"    .json-node, .json-string-block { margin: 6px 0; }\",\n            \"    .json-node > summary, .json-string-block > summary { cursor: pointer; color: #1f2937; }\",\n            \"    .json-children { margin-left: 16px; border-left: 1px solid #e5e7eb; padding-left: 10px; }\",\n            \"    .json-leaf { margin: 6px 0; }\",\n            \"    .json-key { color: #7c3aed; font-family: ui-monospace, SFMono-Regular, Menlo, monospace; font-size: 12px; }\",\n            \"    .json-type { color: #2563eb; font-family: ui-monospace, SFMono-Regular, Menlo, monospace; font-size: 12px; }\",\n            \"    .json-number, .json-bool, .json-null { font-family: ui-monospace, SFMono-Regular, Menlo, monospace; font-size: 12px; color: #0f766e; }\",\n            \"    .json-string { white-space: pre-wrap; overflow-wrap: anywhere; }\",\n            \"    .copy-controls { display: flex; align-items: center; gap: 8px; margin-top: 10px; }\",\n            \"    .copy-button { border: 1px solid #cbd5e1; background: #fff; color: #111827; border-radius: 6px; padding: 6px 10px; cursor: pointer; font-size: 12px; }\",\n            \"    .copy-button:hover { background: #f8fafc; }\",\n            \"    .copy-status { color: 
#2563eb; font-size: 12px; min-height: 16px; }\",\n            \"    .copy-source { display: none; }\",\n            \"  </style>\",\n            \"</head>\",\n            \"<body>\",\n            \"  <h1>OpenAI Turn Input Report</h1>\",\n            (\n                \"  <div class='meta'>\"\n                f\"report_id={escape(self.report_id)} | \"\n                f\"model={escape(model_name)} | turns={len(self._turns)}\"\n                \"</div>\"\n            ),\n        ]\n\n        for turn in self._turns:\n            html_parts.append(\"  <section class='turn'>\")\n            html_parts.append(\n                f\"    <h2>Turn {turn.turn_index} (items={len(turn.items)})</h2>\"\n            )\n            if turn.request_payload is not None:\n                request_payload_json = json.dumps(\n                    turn.request_payload,\n                    indent=2,\n                    ensure_ascii=False,\n                )\n                request_input_json: str | None = None\n                if isinstance(turn.request_payload, dict) and \"input\" in turn.request_payload:\n                    request_input_json = json.dumps(\n                        turn.request_payload[\"input\"],\n                        indent=2,\n                        ensure_ascii=False,\n                    )\n                html_parts.append(\"    <details class='payload-wrap' open>\")\n                html_parts.append(\"      <summary>Request payload</summary>\")\n                if request_input_json is not None:\n                    request_input_id = f\"request-input-turn-{turn.turn_index}\"\n                    html_parts.append(\n                        \"      \"\n                        + _render_copy_controls(request_input_id, \"Copy input JSON\")\n                    )\n                    html_parts.append(\n                        f\"      <pre id='{escape(request_input_id)}' class='copy-source'>\"\n                        f\"{escape(request_input_json)}</pre>\"\n 
                   )\n                html_parts.append(\"      <div class='json-view'>\")\n                html_parts.append(_render_json_node(turn.request_payload, \"root\"))\n                html_parts.append(\"      </div>\")\n                html_parts.append(\"      <details>\")\n                html_parts.append(\"        <summary>Raw JSON payload</summary>\")\n                html_parts.append(\n                    f\"        <pre>{escape(request_payload_json)}</pre>\"\n                )\n                html_parts.append(\"      </details>\")\n                html_parts.append(\"    </details>\")\n            if turn.usage is not None:\n                cost_text = \"n/a\"\n                if isinstance(turn.usage.cost_usd, (float, int)):\n                    cost_text = f\"${turn.usage.cost_usd:.4f}\"\n\n                html_parts.append(\"    <table class='usage-table'>\")\n                html_parts.append(\n                    \"      <thead><tr><th>Metric</th><th>Value</th></tr></thead>\"\n                )\n                html_parts.append(\"      <tbody>\")\n                html_parts.append(\n                    \"        \"\n                    f\"<tr><td>Input tokens</td><td>{turn.usage.input_tokens}</td></tr>\"\n                )\n                html_parts.append(\n                    \"        \"\n                    f\"<tr><td>Output tokens</td><td>{turn.usage.output_tokens}</td></tr>\"\n                )\n                html_parts.append(\n                    \"        \"\n                    f\"<tr><td>Cache read</td><td>{turn.usage.cache_read}</td></tr>\"\n                )\n                html_parts.append(\n                    \"        \"\n                    f\"<tr><td>Cache write</td><td>{turn.usage.cache_write}</td></tr>\"\n                )\n                html_parts.append(\n                    \"        \"\n                    f\"<tr><td>Total tokens</td><td>{turn.usage.total_tokens}</td></tr>\"\n                )\n             
   html_parts.append(\n                    \"        <tr><td>Cache hit rate</td>\"\n                    f\"<td>{turn.usage.cache_hit_rate_percent:.2f}%</td></tr>\"\n                )\n                html_parts.append(\n                    f\"        <tr><td>Cost</td><td>{escape(cost_text)}</td></tr>\"\n                )\n                html_parts.append(\"      </tbody>\")\n                html_parts.append(\"    </table>\")\n            else:\n                html_parts.append(\n                    \"    <div class='usage-none'>Usage unavailable for this turn.</div>\"\n                )\n\n            html_parts.append(\"    <table>\")\n            html_parts.append(\n                \"      <thead><tr><th style='width:70px'>Index</th><th>Summary</th></tr></thead>\"\n            )\n            html_parts.append(\"      <tbody>\")\n            for item in turn.items:\n                html_parts.append(\n                    \"        \"\n                    f\"<tr><td>{item.index:02d}</td><td><code>{escape(item.summary)}</code></td></tr>\"\n                )\n            html_parts.append(\"      </tbody>\")\n            html_parts.append(\"    </table>\")\n\n            for item in turn.items:\n                payload_json = json.dumps(\n                    item.payload,\n                    indent=2,\n                    ensure_ascii=False,\n                )\n                html_parts.append(\"    <details class='payload-wrap'>\")\n                html_parts.append(\n                    f\"      <summary>Item {item.index:02d} payload</summary>\"\n                )\n                html_parts.append(\"      <div class='json-view'>\")\n                html_parts.append(_render_json_node(item.payload, \"root\"))\n                html_parts.append(\"      </div>\")\n                html_parts.append(\"      <details>\")\n                html_parts.append(\"        <summary>Raw JSON payload</summary>\")\n                html_parts.append(f\"        
<pre>{escape(payload_json)}</pre>\")\n                html_parts.append(\"      </details>\")\n                html_parts.append(\"    </details>\")\n            html_parts.append(\"  </section>\")\n\n        html_parts.extend(\n            [\n                \"  <script>\",\n                \"    document.addEventListener('click', async (event) => {\",\n                \"      const target = event.target;\",\n                \"      if (!(target instanceof HTMLButtonElement)) {\",\n                \"        return;\",\n                \"      }\",\n                \"      const copyTargetId = target.dataset.copyTarget;\",\n                \"      if (!copyTargetId) {\",\n                \"        return;\",\n                \"      }\",\n                \"      const source = document.getElementById(copyTargetId);\",\n                \"      const status = target.parentElement?.querySelector('.copy-status');\",\n                \"      if (!source) {\",\n                \"        if (status) { status.textContent = 'Missing source'; }\",\n                \"        return;\",\n                \"      }\",\n                \"      try {\",\n                \"        await navigator.clipboard.writeText(source.textContent || '');\",\n                \"        if (status) { status.textContent = 'Copied'; }\",\n                \"      } catch (_error) {\",\n                \"        if (status) { status.textContent = 'Copy failed'; }\",\n                \"      }\",\n                \"      window.setTimeout(() => {\",\n                \"        if (status) { status.textContent = ''; }\",\n                \"      }, 1600);\",\n                \"    });\",\n                \"  </script>\",\n                \"</body>\",\n                \"</html>\",\n            ]\n        )\n        return \"\\n\".join(html_parts)\n"
  },
  {
    "path": "backend/image_generation/__init__.py",
    "content": ""
  },
  {
    "path": "backend/image_generation/core.py",
    "content": "from image_generation.generation import (\n    generate_image_dalle,\n    generate_image_replicate,\n    process_tasks,\n)\n\n\n__all__ = [\n    \"process_tasks\",\n    \"generate_image_dalle\",\n    \"generate_image_replicate\",\n]\n"
  },
  {
    "path": "backend/image_generation/generation.py",
    "content": "import asyncio\nimport time\nfrom typing import List, Literal, Union\n\nfrom openai import AsyncOpenAI\n\nfrom image_generation.replicate import call_replicate\n\n\nREPLICATE_BATCH_SIZE = 20\n\n\nasync def process_tasks(\n    prompts: List[str],\n    api_key: str,\n    base_url: str | None,\n    model: Literal[\"dalle3\", \"flux\"],\n) -> List[Union[str, None]]:\n    start_time = time.time()\n    results: list[str | BaseException | None]\n    if model == \"dalle3\":\n        tasks = [generate_image_dalle(prompt, api_key, base_url) for prompt in prompts]\n        results = await asyncio.gather(*tasks, return_exceptions=True)\n    else:\n        results = []\n        for i in range(0, len(prompts), REPLICATE_BATCH_SIZE):\n            batch = prompts[i : i + REPLICATE_BATCH_SIZE]\n            tasks = [generate_image_replicate(p, api_key) for p in batch]\n            results.extend(await asyncio.gather(*tasks, return_exceptions=True))\n    end_time = time.time()\n    generation_time = end_time - start_time\n    print(f\"Image generation time: {generation_time:.2f} seconds\")\n\n    processed_results: List[Union[str, None]] = []\n    for result in results:\n        if isinstance(result, BaseException):\n            print(f\"An exception occurred: {result}\")\n            processed_results.append(None)\n        else:\n            processed_results.append(result)\n\n    return processed_results\n\n\nasync def generate_image_dalle(\n    prompt: str, api_key: str, base_url: str | None\n) -> Union[str, None]:\n    client = AsyncOpenAI(api_key=api_key, base_url=base_url)\n    res = await client.images.generate(\n        model=\"dall-e-3\",\n        quality=\"standard\",\n        style=\"natural\",\n        n=1,\n        size=\"1024x1024\",\n        prompt=prompt,\n    )\n    await client.close()\n    if not res.data:\n        return None\n    return res.data[0].url\n\n\nasync def generate_image_replicate(prompt: str, api_key: str) -> str:\n    # We use Flux 2 
Klein\n    return await call_replicate(\n        {\n            \"prompt\": prompt,\n            \"aspect_ratio\": \"1:1\",\n            \"output_format\": \"png\",\n        },\n        api_key,\n    )\n"
  },
  {
    "path": "backend/image_generation/replicate.py",
    "content": "import asyncio\nimport httpx\nfrom typing import Any, Mapping, cast\n\n\nREPLICATE_API_BASE_URL = \"https://api.replicate.com/v1\"\nFLUX_MODEL_PATH = \"black-forest-labs/flux-2-klein-4b\"\nREMOVE_BACKGROUND_VERSION = (\n    \"a029dff38972b5fda4ec5d75d7d1cd25aeff621d2cf4946a41055d7db66b80bc\"\n)\nPOLL_INTERVAL_SECONDS = 0.1\nMAX_POLLS = 100\n\n\ndef _build_headers(api_token: str) -> dict[str, str]:\n    return {\n        \"Authorization\": f\"Bearer {api_token}\",\n        \"Content-Type\": \"application/json\",\n    }\n\n\ndef _extract_prediction_id(response_json: Mapping[str, Any]) -> str:\n    prediction_id = response_json.get(\"id\")\n    if not isinstance(prediction_id, str) or not prediction_id:\n        raise ValueError(\"Prediction ID not found in initial response.\")\n    return prediction_id\n\n\nasync def _poll_prediction(\n    client: httpx.AsyncClient, prediction_id: str, headers: dict[str, str]\n) -> dict[str, Any]:\n    status_check_url = f\"{REPLICATE_API_BASE_URL}/predictions/{prediction_id}\"\n\n    for _ in range(MAX_POLLS):\n        await asyncio.sleep(POLL_INTERVAL_SECONDS)\n        status_response = await client.get(status_check_url, headers=headers)\n        status_response.raise_for_status()\n        status_response_raw: Any = status_response.json()\n        if not isinstance(status_response_raw, dict):\n            raise ValueError(\"Invalid prediction status response.\")\n        status_response_json = cast(dict[str, Any], status_response_raw)\n\n        status = status_response_json.get(\"status\")\n        if status == \"succeeded\":\n            return cast(dict[str, Any], status_response_json)\n        if status == \"error\":\n            error_message = str(status_response_json.get(\"error\", \"Unknown error\"))\n            raise ValueError(f\"Inference errored out: {error_message}\")\n        if status == \"failed\":\n            raise ValueError(\"Inference failed\")\n\n    raise TimeoutError(\"Inference timed 
out\")\n\n\nasync def _run_prediction(\n    endpoint_url: str, payload: dict[str, Any], api_token: str\n) -> Any:\n    headers = _build_headers(api_token)\n\n    async with httpx.AsyncClient() as client:\n        try:\n            response = await client.post(endpoint_url, headers=headers, json=payload)\n            response.raise_for_status()\n            response_json = response.json()\n            if not isinstance(response_json, dict):\n                raise ValueError(\"Invalid prediction creation response.\")\n\n            prediction_id = _extract_prediction_id(response_json)\n            final_response = await _poll_prediction(client, prediction_id, headers)\n            return final_response.get(\"output\")\n        except httpx.HTTPStatusError as exc:\n            raise ValueError(f\"HTTP error occurred: {exc}\") from exc\n        except httpx.RequestError as exc:\n            raise ValueError(f\"An error occurred while requesting: {exc}\") from exc\n        except asyncio.TimeoutError as exc:\n            raise TimeoutError(\"Request timed out\") from exc\n        except (TimeoutError, ValueError):\n            raise\n        except Exception as exc:\n            raise ValueError(f\"An unexpected error occurred: {exc}\") from exc\n\n\ndef _extract_output_url(result: Any, context: str) -> str:\n    if isinstance(result, str):\n        return result\n\n    if isinstance(result, dict):\n        url = cast(Any, result.get(\"url\"))\n        if isinstance(url, str) and url:\n            return url\n\n    if isinstance(result, list) and len(result) > 0:\n        first: Any = result[0]\n        if isinstance(first, str) and first:\n            return first\n        if isinstance(first, Mapping):\n            url = cast(Any, first.get(\"url\"))\n            if isinstance(url, str) and url:\n                return url\n\n    raise ValueError(f\"Unexpected response from {context}: {result}\")\n\n\nasync def call_replicate_model(\n    model_path: str, input: 
dict[str, Any], api_token: str\n) -> Any:\n    return await _run_prediction(\n        f\"{REPLICATE_API_BASE_URL}/models/{model_path}/predictions\",\n        {\"input\": input},\n        api_token,\n    )\n\n\nasync def call_replicate_version(\n    version: str, input: dict[str, Any], api_token: str\n) -> Any:\n    return await _run_prediction(\n        f\"{REPLICATE_API_BASE_URL}/predictions\",\n        {\"version\": version, \"input\": input},\n        api_token,\n    )\n\n\nasync def remove_background(image_url: str, api_token: str) -> str:\n    result = await call_replicate_version(\n        REMOVE_BACKGROUND_VERSION,\n        {\n            \"image\": image_url,\n            \"format\": \"png\",\n            \"reverse\": False,\n            \"threshold\": 0,\n            \"background_type\": \"rgba\",\n        },\n        api_token,\n    )\n    return _extract_output_url(result, \"background remover\")\n\n\nasync def call_replicate(input: dict[str, str | int], api_token: str) -> str:\n    result = await call_replicate_model(FLUX_MODEL_PATH, input, api_token)\n    return _extract_output_url(result, \"Flux prediction\")\n"
  },
  {
    "path": "backend/llm.py",
    "content": "from enum import Enum\nfrom typing import TypedDict\n\n\n# Actual model versions that are passed to the LLMs and stored in our logs\nclass Llm(Enum):\n    # GPT\n    GPT_4_1_2025_04_14 = \"gpt-4.1-2025-04-14\"\n    GPT_5_2_CODEX_LOW = \"gpt-5.2-codex (low thinking)\"\n    GPT_5_2_CODEX_MEDIUM = \"gpt-5.2-codex (medium thinking)\"\n    GPT_5_2_CODEX_HIGH = \"gpt-5.2-codex (high thinking)\"\n    GPT_5_2_CODEX_XHIGH = \"gpt-5.2-codex (xhigh thinking)\"\n    GPT_5_3_CODEX_LOW = \"gpt-5.3-codex (low thinking)\"\n    GPT_5_3_CODEX_MEDIUM = \"gpt-5.3-codex (medium thinking)\"\n    GPT_5_3_CODEX_HIGH = \"gpt-5.3-codex (high thinking)\"\n    GPT_5_3_CODEX_XHIGH = \"gpt-5.3-codex (xhigh thinking)\"\n    GPT_5_4_2026_03_05_NONE = \"gpt-5.4-2026-03-05 (no thinking)\"\n    GPT_5_4_2026_03_05_LOW = \"gpt-5.4-2026-03-05 (low thinking)\"\n    GPT_5_4_2026_03_05_MEDIUM = \"gpt-5.4-2026-03-05 (medium thinking)\"\n    GPT_5_4_2026_03_05_HIGH = \"gpt-5.4-2026-03-05 (high thinking)\"\n    GPT_5_4_2026_03_05_XHIGH = \"gpt-5.4-2026-03-05 (xhigh thinking)\"\n    # Claude\n    CLAUDE_SONNET_4_6 = \"claude-sonnet-4-6\"\n    CLAUDE_4_5_SONNET_2025_09_29 = \"claude-sonnet-4-5-20250929\"\n    CLAUDE_4_5_OPUS_2025_11_01 = \"claude-opus-4-5-20251101\"\n    CLAUDE_OPUS_4_6 = \"claude-opus-4-6\"\n    # Gemini\n    GEMINI_3_FLASH_PREVIEW_HIGH = \"gemini-3-flash-preview (high thinking)\"\n    GEMINI_3_FLASH_PREVIEW_MINIMAL = \"gemini-3-flash-preview (minimal thinking)\"\n    GEMINI_3_1_PRO_PREVIEW_HIGH = \"gemini-3.1-pro-preview (high thinking)\"\n    GEMINI_3_1_PRO_PREVIEW_MEDIUM = \"gemini-3.1-pro-preview (medium thinking)\"\n    GEMINI_3_1_PRO_PREVIEW_LOW = \"gemini-3.1-pro-preview (low thinking)\"\n\n\nclass Completion(TypedDict):\n    duration: float\n    code: str\n\n\n# Explicitly map each model to the provider backing it.  
This keeps provider\n# groupings authoritative and avoids relying on name conventions when checking\n# models elsewhere in the codebase.\nMODEL_PROVIDER: dict[Llm, str] = {\n    # OpenAI models\n    Llm.GPT_4_1_2025_04_14: \"openai\",\n    Llm.GPT_5_2_CODEX_LOW: \"openai\",\n    Llm.GPT_5_2_CODEX_MEDIUM: \"openai\",\n    Llm.GPT_5_2_CODEX_HIGH: \"openai\",\n    Llm.GPT_5_2_CODEX_XHIGH: \"openai\",\n    Llm.GPT_5_3_CODEX_LOW: \"openai\",\n    Llm.GPT_5_3_CODEX_MEDIUM: \"openai\",\n    Llm.GPT_5_3_CODEX_HIGH: \"openai\",\n    Llm.GPT_5_3_CODEX_XHIGH: \"openai\",\n    Llm.GPT_5_4_2026_03_05_NONE: \"openai\",\n    Llm.GPT_5_4_2026_03_05_LOW: \"openai\",\n    Llm.GPT_5_4_2026_03_05_MEDIUM: \"openai\",\n    Llm.GPT_5_4_2026_03_05_HIGH: \"openai\",\n    Llm.GPT_5_4_2026_03_05_XHIGH: \"openai\",\n    # Anthropic models\n    Llm.CLAUDE_SONNET_4_6: \"anthropic\",\n    Llm.CLAUDE_4_5_SONNET_2025_09_29: \"anthropic\",\n    Llm.CLAUDE_4_5_OPUS_2025_11_01: \"anthropic\",\n    Llm.CLAUDE_OPUS_4_6: \"anthropic\",\n    # Gemini models\n    Llm.GEMINI_3_FLASH_PREVIEW_HIGH: \"gemini\",\n    Llm.GEMINI_3_FLASH_PREVIEW_MINIMAL: \"gemini\",\n    Llm.GEMINI_3_1_PRO_PREVIEW_HIGH: \"gemini\",\n    Llm.GEMINI_3_1_PRO_PREVIEW_MEDIUM: \"gemini\",\n    Llm.GEMINI_3_1_PRO_PREVIEW_LOW: \"gemini\",\n}\n\n# Convenience sets for membership checks\nOPENAI_MODELS = {m for m, p in MODEL_PROVIDER.items() if p == \"openai\"}\nANTHROPIC_MODELS = {m for m, p in MODEL_PROVIDER.items() if p == \"anthropic\"}\nGEMINI_MODELS = {m for m, p in MODEL_PROVIDER.items() if p == \"gemini\"}\n\nOPENAI_MODEL_CONFIG: dict[Llm, dict[str, str]] = {\n    Llm.GPT_4_1_2025_04_14: {\"api_name\": \"gpt-4.1-2025-04-14\"},\n    Llm.GPT_5_2_CODEX_LOW: {\"api_name\": \"gpt-5.2-codex\", \"reasoning_effort\": \"low\"},\n    Llm.GPT_5_2_CODEX_MEDIUM: {\"api_name\": \"gpt-5.2-codex\", \"reasoning_effort\": \"medium\"},\n    Llm.GPT_5_2_CODEX_HIGH: {\"api_name\": \"gpt-5.2-codex\", \"reasoning_effort\": \"high\"},\n    
Llm.GPT_5_2_CODEX_XHIGH: {\"api_name\": \"gpt-5.2-codex\", \"reasoning_effort\": \"xhigh\"},\n    Llm.GPT_5_3_CODEX_LOW: {\"api_name\": \"gpt-5.3-codex\", \"reasoning_effort\": \"low\"},\n    Llm.GPT_5_3_CODEX_MEDIUM: {\"api_name\": \"gpt-5.3-codex\", \"reasoning_effort\": \"medium\"},\n    Llm.GPT_5_3_CODEX_HIGH: {\"api_name\": \"gpt-5.3-codex\", \"reasoning_effort\": \"high\"},\n    Llm.GPT_5_3_CODEX_XHIGH: {\"api_name\": \"gpt-5.3-codex\", \"reasoning_effort\": \"xhigh\"},\n    Llm.GPT_5_4_2026_03_05_NONE: {\n        \"api_name\": \"gpt-5.4-2026-03-05\",\n        \"reasoning_effort\": \"none\",\n    },\n    Llm.GPT_5_4_2026_03_05_LOW: {\n        \"api_name\": \"gpt-5.4-2026-03-05\",\n        \"reasoning_effort\": \"low\",\n    },\n    Llm.GPT_5_4_2026_03_05_MEDIUM: {\n        \"api_name\": \"gpt-5.4-2026-03-05\",\n        \"reasoning_effort\": \"medium\",\n    },\n    Llm.GPT_5_4_2026_03_05_HIGH: {\n        \"api_name\": \"gpt-5.4-2026-03-05\",\n        \"reasoning_effort\": \"high\",\n    },\n    Llm.GPT_5_4_2026_03_05_XHIGH: {\n        \"api_name\": \"gpt-5.4-2026-03-05\",\n        \"reasoning_effort\": \"xhigh\",\n    },\n}\n\n\ndef get_openai_api_name(model: Llm) -> str:\n    return OPENAI_MODEL_CONFIG[model][\"api_name\"]\n\n\ndef get_openai_reasoning_effort(model: Llm) -> str | None:\n    return OPENAI_MODEL_CONFIG.get(model, {}).get(\"reasoning_effort\")\n"
  },
  {
    "path": "backend/main.py",
    "content": "# Load environment variables first\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\n\nfrom fastapi import FastAPI\nfrom fastapi.middleware.cors import CORSMiddleware\nfrom config import IS_DEBUG_ENABLED\nfrom routes import screenshot, generate_code, home, evals\n\napp = FastAPI(openapi_url=None, docs_url=None, redoc_url=None)\n\n\n@app.on_event(\"startup\")\nasync def log_debug_mode() -> None:\n    debug_status = \"ENABLED\" if IS_DEBUG_ENABLED else \"DISABLED\"\n    print(f\"Backend startup complete. Debug mode is {debug_status}.\")\n\n# Configure CORS settings\napp.add_middleware(\n    CORSMiddleware,\n    allow_origins=[\"*\"],\n    allow_credentials=True,\n    allow_methods=[\"*\"],\n    allow_headers=[\"*\"],\n)\n\n# Add routes\napp.include_router(generate_code.router)\napp.include_router(screenshot.router)\napp.include_router(home.router)\napp.include_router(evals.router)\n"
  },
  {
    "path": "backend/prompts/__init__.py",
    "content": "from prompts.system_prompt import SYSTEM_PROMPT\n\n__all__ = [\n    \"SYSTEM_PROMPT\",\n]\n"
  },
  {
    "path": "backend/prompts/create/__init__.py",
    "content": "from custom_types import InputMode\nfrom prompts.create.image import build_image_prompt_messages\nfrom prompts.create.text import build_text_prompt_messages\nfrom prompts.create.video import build_video_prompt_messages\nfrom prompts.prompt_types import Stack, UserTurnInput\nfrom prompts.message_builder import Prompt\n\n\ndef build_create_prompt_from_input(\n    input_mode: InputMode,\n    stack: Stack,\n    prompt: UserTurnInput,\n    image_generation_enabled: bool,\n) -> Prompt:\n    if input_mode == \"image\":\n        image_urls = prompt.get(\"images\", [])\n        text_prompt = prompt.get(\"text\", \"\")\n        return build_image_prompt_messages(\n            image_data_urls=image_urls,\n            stack=stack,\n            text_prompt=text_prompt,\n            image_generation_enabled=image_generation_enabled,\n        )\n    if input_mode == \"text\":\n        return build_text_prompt_messages(\n            text_prompt=prompt[\"text\"],\n            stack=stack,\n            image_generation_enabled=image_generation_enabled,\n        )\n    if input_mode == \"video\":\n        video_urls = prompt.get(\"videos\", [])\n        if not video_urls:\n            raise ValueError(\"Video mode requires a video to be provided\")\n        video_url = video_urls[0]\n        return build_video_prompt_messages(\n            video_data_url=video_url,\n            stack=stack,\n            text_prompt=prompt.get(\"text\", \"\"),\n            image_generation_enabled=image_generation_enabled,\n        )\n    raise ValueError(f\"Unsupported input mode: {input_mode}\")\n\n\n__all__ = [\"build_create_prompt_from_input\"]\n"
  },
  {
    "path": "backend/prompts/create/image.py",
    "content": "from openai.types.chat import ChatCompletionContentPartParam, ChatCompletionMessageParam\n\nfrom prompts.prompt_types import Stack\nfrom prompts import system_prompt\nfrom prompts.policies import build_selected_stack_policy, build_user_image_policy\n\ndef build_image_prompt_messages(\n    image_data_urls: list[str],\n    stack: Stack,\n    text_prompt: str,\n    image_generation_enabled: bool,\n) -> list[ChatCompletionMessageParam]:\n    image_policy = build_user_image_policy(image_generation_enabled)\n    selected_stack = build_selected_stack_policy(stack)\n    user_prompt = f\"\"\"\nGenerate code for a web page that looks exactly like the provided screenshot(s).\n\n{selected_stack}\n\n## Replication instructions\n\n- Make sure the app looks exactly like the screenshot.\n- Use the exact text from the screenshot.\n- {image_policy}\n\n## Multiple screenshots\n\nIf multiple screenshots are provided, organize them meaningfully:\n\n- If they appear to be different pages in a website, make them distinct pages and link them.\n- If they look like different tabs or views in an app, connect them with appropriate navigation.\n- If they appear unrelated, create a scaffold that separates them into \"Screenshot 1\", \"Screenshot 2\", \"Screenshot 3\", etc. 
so it is easy to navigate.\n- For mobile screenshots, do not include the device frame or browser chrome; focus only on the actual UI mockups.\n\"\"\"\n\n    # Add additional instructions provided by the user\n    if text_prompt.strip():\n        user_prompt = f\"{user_prompt}\\n\\nAdditional instructions: {text_prompt}\"\n\n    user_content: list[ChatCompletionContentPartParam] = []\n    for image_data_url in image_data_urls:\n        user_content.append(\n            {\n                \"type\": \"image_url\",\n                \"image_url\": {\"url\": image_data_url, \"detail\": \"high\"},\n            }\n        )\n    user_content.append(\n        {\n            \"type\": \"text\",\n            \"text\": user_prompt,\n        }\n    )\n    return [\n        {\n            \"role\": \"system\",\n            \"content\": system_prompt.SYSTEM_PROMPT,\n        },\n        {\n            \"role\": \"user\",\n            \"content\": user_content,\n        },\n    ]\n"
  },
  {
    "path": "backend/prompts/create/text.py",
    "content": "from openai.types.chat import ChatCompletionMessageParam\n\nfrom prompts.prompt_types import Stack\nfrom prompts import system_prompt\nfrom prompts.policies import build_selected_stack_policy, build_user_image_policy\n\n\ndef build_text_prompt_messages(\n    text_prompt: str,\n    stack: Stack,\n    image_generation_enabled: bool,\n) -> list[ChatCompletionMessageParam]:\n    image_policy = build_user_image_policy(image_generation_enabled)\n    selected_stack = build_selected_stack_policy(stack)\n\n    USER_PROMPT = f\"\"\"\nGenerate UI for {text_prompt}.\n{selected_stack}\n\n# Instructions\n\n- Make sure to make it look modern and sleek.\n- Use modern, professional fonts and colors.\n- Follow UX best practices.\n- {image_policy}\"\"\"\n\n    return [\n        {\n            \"role\": \"system\",\n            \"content\": system_prompt.SYSTEM_PROMPT,\n        },\n        {\n            \"role\": \"user\",\n            \"content\": USER_PROMPT,\n        },\n    ]\n"
  },
  {
    "path": "backend/prompts/create/video.py",
    "content": "from openai.types.chat import ChatCompletionContentPartParam, ChatCompletionMessageParam\nfrom prompts.prompt_types import Stack\nfrom prompts import system_prompt\nfrom prompts.policies import build_selected_stack_policy, build_user_image_policy\n\n\ndef build_video_prompt_messages(\n    video_data_url: str,\n    stack: Stack,\n    text_prompt: str,\n    image_generation_enabled: bool,\n) -> list[ChatCompletionMessageParam]:\n    image_policy = build_user_image_policy(image_generation_enabled)\n    selected_stack = build_selected_stack_policy(stack)\n    user_text = f\"\"\"\n    You have been given a video of a user interacting with a web app. You need to re-create the same app exactly such that the same user interactions will produce the same results in the app you build.\n\n    - Watch the entire video carefully and understand all the user interactions and UI state changes.\n    - Make sure the app looks exactly like what you see in the video.\n    - Pay close attention to background color, text color, font size, font family,\n    padding, margin, border, etc. Match the colors and sizes exactly.\n    - {image_policy}\n    - If some functionality requires a backend call, just mock the data instead.\n    - MAKE THE APP FUNCTIONAL using JavaScript. 
Allow the user to interact with the app and get the same behavior as shown in the video.\n    - Use SVGs and interactive 3D elements if needed to match the functionality shown in the video.\n\n    Analyze this video and generate the code.\n    \n    {selected_stack}\n    \"\"\"\n    if text_prompt.strip():\n        user_text = user_text + \"\\n\\nAdditional instructions: \" + text_prompt\n\n    user_content: list[ChatCompletionContentPartParam] = [\n        {\n            \"type\": \"image_url\",\n            \"image_url\": {\"url\": video_data_url, \"detail\": \"high\"},\n        },\n        {\n            \"type\": \"text\",\n            \"text\": user_text,\n        },\n    ]\n\n    return [\n        {\n            \"role\": \"system\",\n            \"content\": system_prompt.SYSTEM_PROMPT,\n        },\n        {\n            \"role\": \"user\",\n            \"content\": user_content,\n        },\n    ]\n"
  },
  {
    "path": "backend/prompts/message_builder.py",
    "content": "from typing import cast\n\nfrom openai.types.chat import ChatCompletionContentPartParam, ChatCompletionMessageParam\n\nfrom prompts.prompt_types import PromptHistoryMessage\n\nPrompt = list[ChatCompletionMessageParam]\n\n\ndef _wrap_assistant_file_content(content: str, path: str = \"index.html\") -> str:\n    stripped = content.strip()\n    if stripped.startswith(\"<file \") and stripped.endswith(\"</file>\"):\n        return stripped\n    return f'<file path=\"{path}\">\\n{stripped}\\n</file>'\n\n\ndef build_history_message(item: PromptHistoryMessage) -> ChatCompletionMessageParam:\n    role = item[\"role\"]\n    image_urls = item.get(\"images\", [])\n    video_urls = item.get(\"videos\", [])\n    media_urls = [*image_urls, *video_urls]\n\n    if role == \"user\" and len(media_urls) > 0:\n        user_content: list[ChatCompletionContentPartParam] = []\n\n        for media_url in media_urls:\n            user_content.append(\n                {\n                    \"type\": \"image_url\",\n                    \"image_url\": {\"url\": media_url, \"detail\": \"high\"},\n                }\n            )\n\n        user_content.append(\n            {\n                \"type\": \"text\",\n                \"text\": item.get(\"text\", \"\"),\n            }\n        )\n\n        return cast(\n            ChatCompletionMessageParam,\n            {\n                \"role\": role,\n                \"content\": user_content,\n            },\n        )\n\n    return cast(\n        ChatCompletionMessageParam,\n        {\n            \"role\": role,\n            \"content\": (\n                _wrap_assistant_file_content(item.get(\"text\", \"\"))\n                if role == \"assistant\"\n                else item.get(\"text\", \"\")\n            ),\n        },\n    )\n"
  },
  {
    "path": "backend/prompts/pipeline.py",
    "content": "from custom_types import InputMode\nfrom prompts.create import build_create_prompt_from_input\nfrom prompts.plan import derive_prompt_construction_plan\nfrom prompts.prompt_types import PromptHistoryMessage, Stack, UserTurnInput\nfrom prompts.message_builder import Prompt\nfrom prompts.update import (\n    build_update_prompt_from_file_snapshot,\n    build_update_prompt_from_history,\n)\n\n\nasync def build_prompt_messages(\n    stack: Stack,\n    input_mode: InputMode,\n    generation_type: str,\n    prompt: UserTurnInput,\n    history: list[PromptHistoryMessage],\n    file_state: dict[str, str] | None = None,\n    image_generation_enabled: bool = True,\n) -> Prompt:\n    plan = derive_prompt_construction_plan(\n        stack=stack,\n        input_mode=input_mode,\n        generation_type=generation_type,\n        history=history,\n        file_state=file_state,\n    )\n\n    strategy = plan[\"construction_strategy\"]\n    if strategy == \"update_from_history\":\n        return build_update_prompt_from_history(\n            stack=stack,\n            history=history,\n            image_generation_enabled=image_generation_enabled,\n        )\n    if strategy == \"update_from_file_snapshot\":\n        assert file_state is not None\n        return build_update_prompt_from_file_snapshot(\n            stack=stack,\n            prompt=prompt,\n            file_state=file_state,\n            image_generation_enabled=image_generation_enabled,\n        )\n    return build_create_prompt_from_input(\n        input_mode,\n        stack,\n        prompt,\n        image_generation_enabled,\n    )\n"
  },
  {
    "path": "backend/prompts/plan.py",
    "content": "from custom_types import InputMode\nfrom prompts.prompt_types import (\n    PromptConstructionPlan,\n    PromptHistoryMessage,\n    Stack,\n)\n\n\ndef derive_prompt_construction_plan(\n    stack: Stack,\n    input_mode: InputMode,\n    generation_type: str,\n    history: list[PromptHistoryMessage],\n    file_state: dict[str, str] | None,\n) -> PromptConstructionPlan:\n    if generation_type == \"update\":\n        if len(history) > 0:\n            strategy = \"update_from_history\"\n        elif file_state and file_state.get(\"content\", \"\").strip():\n            strategy = \"update_from_file_snapshot\"\n        else:\n            raise ValueError(\"Update requests require history or fileState.content\")\n        return {\n            \"generation_type\": \"update\",\n            \"input_mode\": input_mode,\n            \"stack\": stack,\n            \"construction_strategy\": strategy,\n        }\n\n    return {\n        \"generation_type\": \"create\",\n        \"input_mode\": input_mode,\n        \"stack\": stack,\n        \"construction_strategy\": \"create_from_input\",\n    }\n"
  },
  {
    "path": "backend/prompts/policies.py",
    "content": "from prompts.prompt_types import Stack\n\n\ndef build_selected_stack_policy(stack: Stack) -> str:\n    return f\"Selected stack: {stack}.\"\n\n\ndef build_user_image_policy(image_generation_enabled: bool) -> str:\n    if image_generation_enabled:\n        return (\n            \"Image generation is enabled for this request. Use generate_images for \"\n            \"missing assets when needed.\"\n        )\n\n    return (\n        \"Image generation is disabled for this request. Do not call generate_images. \"\n        \"Use provided media, CSS effects, or placeholder URLs (https://placehold.co).\"\n    )\n"
  },
  {
    "path": "backend/prompts/prompt_types.py",
    "content": "from typing import List, Literal, TypedDict\n\n\nclass UserTurnInput(TypedDict):\n    \"\"\"Normalized current user turn payload from the request.\"\"\"\n\n    text: str\n    images: List[str]\n    videos: List[str]\n\n\nclass PromptHistoryMessage(TypedDict):\n    \"\"\"Explicit role-based message structure for edit history.\"\"\"\n\n    role: Literal[\"user\", \"assistant\"]\n    text: str\n    images: List[str]\n    videos: List[str]\n\n\nPromptConstructionStrategy = Literal[\n    \"create_from_input\",\n    \"update_from_history\",\n    \"update_from_file_snapshot\",\n]\n\n\nStack = Literal[\n    \"html_css\",\n    \"html_tailwind\",\n    \"react_tailwind\",\n    \"bootstrap\",\n    \"ionic_tailwind\",\n    \"vue_tailwind\",\n]\n\n\nclass PromptConstructionPlan(TypedDict):\n    \"\"\"Derived plan used by prompt builders to choose a single construction path.\"\"\"\n\n    generation_type: Literal[\"create\", \"update\"]\n    input_mode: Literal[\"image\", \"video\", \"text\"]\n    stack: Stack\n    construction_strategy: PromptConstructionStrategy\n"
  },
  {
    "path": "backend/prompts/request_parsing.py",
    "content": "from typing import List, cast\n\nfrom prompts.prompt_types import PromptHistoryMessage, UserTurnInput\n\n\ndef _to_string_list(value: object) -> List[str]:\n    if not isinstance(value, list):\n        return []\n    raw_list = cast(List[object], value)\n    return [item for item in raw_list if isinstance(item, str)]\n\n\ndef parse_prompt_content(raw_prompt: object) -> UserTurnInput:\n    if not isinstance(raw_prompt, dict):\n        return {\"text\": \"\", \"images\": [], \"videos\": []}\n\n    prompt_dict = cast(dict[str, object], raw_prompt)\n    text = prompt_dict.get(\"text\")\n    return {\n        \"text\": text if isinstance(text, str) else \"\",\n        \"images\": _to_string_list(prompt_dict.get(\"images\")),\n        \"videos\": _to_string_list(prompt_dict.get(\"videos\")),\n    }\n\n\ndef parse_prompt_history(raw_history: object) -> List[PromptHistoryMessage]:\n    if not isinstance(raw_history, list):\n        return []\n\n    history: List[PromptHistoryMessage] = []\n    raw_items = cast(List[object], raw_history)\n    for item in raw_items:\n        if not isinstance(item, dict):\n            continue\n\n        item_dict = cast(dict[str, object], item)\n        role_value = item_dict.get(\"role\")\n        if not isinstance(role_value, str) or role_value not in (\"user\", \"assistant\"):\n            continue\n\n        text = item_dict.get(\"text\")\n        history.append(\n            {\n                \"role\": role_value,\n                \"text\": text if isinstance(text, str) else \"\",\n                \"images\": _to_string_list(item_dict.get(\"images\")),\n                \"videos\": _to_string_list(item_dict.get(\"videos\")),\n            }\n        )\n\n    return history\n"
  },
  {
    "path": "backend/prompts/system_prompt.py",
    "content": "SYSTEM_PROMPT = \"\"\"\nYou are a coding agent that's an expert at building front-ends.\n\n# Tone and style\n\n- Be extremely concise in your chat responses.\n- Do not include code snippets in your messages. Use the file creation and editing tools for all code.\n- At the end of the task, respond with a one or two sentence summary of what was built.\n- Always respond to the user in the language that they used. Our system prompts and tooling instructions are in English, but the user may choose to speak in another language and you should respond in that language. But if you're unsure, always pick English.\n\n# Tooling instructions\n\n- You have access to tools for file creation, file editing, image handling, and option retrieval.\n- The main file is a single HTML file. Use path \"index.html\" unless told otherwise.\n- For a brand new app, call create_file exactly once with the full HTML.\n- For updates, call edit_file using exact string replacements. Do NOT regenerate the entire file.\n- Do not output raw HTML in chat. Any code changes must go through tools.\n- When available, use generate_images to create image URLs from prompts (you may pass multiple prompts). 
The image generation AI is not capable of generating images with a transparent background.\n- Use remove_background to remove backgrounds from provided image URLs when needed (you may pass multiple image URLs).\n- Use retrieve_option to fetch the full HTML for a specific option (1-based option_number) when a user references another option.\n\n\n# Stack-specific instructions\n\n## Tailwind\n\n- Use this script to include Tailwind: <script src=\"https://cdn.tailwindcss.com\"></script>\n\n## html_css\n\n- Only use HTML, CSS and JS.\n- Do not use Tailwind.\n\n## Bootstrap\n\n- Use this script to include Bootstrap: <link href=\"https://cdn.jsdelivr.net/npm/bootstrap@5.3.2/dist/css/bootstrap.min.css\" rel=\"stylesheet\" integrity=\"sha384-T3c6CoIi6uLrA9TneNEoa7RxnatzjcDSCmG1MXxSR1GAsXEV/Dwwykc2MPK8M2HN\" crossorigin=\"anonymous\">\n\n## React\n\n- Use these scripts to include React so that it can run on a standalone page:\n    <script src=\"https://cdn.jsdelivr.net/npm/react@18.0.0/umd/react.development.js\"></script>\n    <script src=\"https://cdn.jsdelivr.net/npm/react-dom@18.0.0/umd/react-dom.development.js\"></script>\n    <script src=\"https://unpkg.com/@babel/standalone/babel.min.js\"></script>\n- For Babel, make sure to use https://unpkg.com/@babel/standalone/babel.min.js. 
DO NOT USE https://cdn.babeljs.io/babel.min.js as it is not the correct version and will cause errors.\n- Use this script to include Tailwind: <script src=\"https://cdn.tailwindcss.com\"></script>\n\n## Ionic\n\n- Use these scripts to include Ionic so that it can run on a standalone page:\n    <script type=\"module\" src=\"https://cdn.jsdelivr.net/npm/@ionic/core/dist/ionic/ionic.esm.js\"></script>\n    <script nomodule src=\"https://cdn.jsdelivr.net/npm/@ionic/core/dist/ionic/ionic.js\"></script>\n    <link rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/@ionic/core/css/ionic.bundle.css\" />\n- Use this script to include Tailwind: <script src=\"https://cdn.tailwindcss.com\"></script>\n- For ionicons icons, add the following <script> tags near the end of the page, right before the closing </body> tag:\n    <script type=\"module\">\n        import ionicons from 'https://cdn.jsdelivr.net/npm/ionicons/+esm'\n    </script>\n    <script nomodule src=\"https://cdn.jsdelivr.net/npm/ionicons/dist/esm/ionicons.min.js\"></script>\n    <link href=\"https://cdn.jsdelivr.net/npm/ionicons/dist/collection/components/icon/icon.min.css\" rel=\"stylesheet\">\n\n## Vue\n\n- Use this script to include Vue so that it can run on a standalone page:\n  <script src=\"https://registry.npmmirror.com/vue/3.3.11/files/dist/vue.global.js\"></script>\n- Use this script to include Tailwind: <script src=\"https://cdn.tailwindcss.com\"></script>\n- Use the Vue global build like so:\n\n<div id=\"app\">{{ message }}</div>\n<script>\n  const { createApp, ref } = Vue\n  createApp({\n    setup() {\n      const message = ref('Hello vue!')\n      return {\n        message\n      }\n    }\n  }).mount('#app')\n</script>\n\n## General instructions for all stacks\n\n- You can use Google Fonts or other publicly accessible fonts.\n- Except for Ionic, use Font Awesome for icons: <link rel=\"stylesheet\" 
href=\"https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.3/css/all.min.css\"></link>\n\n\"\"\"\n"
  },
  {
    "path": "backend/prompts/update/__init__.py",
    "content": "from prompts.update.from_file_snapshot import build_update_prompt_from_file_snapshot\nfrom prompts.update.from_history import build_update_prompt_from_history\n\n__all__ = [\n    \"build_update_prompt_from_file_snapshot\",\n    \"build_update_prompt_from_history\",\n]\n"
  },
  {
    "path": "backend/prompts/update/from_file_snapshot.py",
    "content": "from typing import cast\n\nfrom openai.types.chat import ChatCompletionMessageParam\n\nfrom prompts import system_prompt\nfrom prompts.policies import build_selected_stack_policy, build_user_image_policy\nfrom prompts.prompt_types import Stack, UserTurnInput\nfrom prompts.message_builder import Prompt, build_history_message\n\n\ndef build_update_prompt_from_file_snapshot(\n    stack: Stack,\n    prompt: UserTurnInput,\n    file_state: dict[str, str],\n    image_generation_enabled: bool,\n) -> Prompt:\n    path = file_state.get(\"path\", \"index.html\")\n    request_text = prompt.get(\"text\", \"\").strip() or \"Apply the requested update.\"\n    selected_stack = build_selected_stack_policy(stack)\n    image_policy = build_user_image_policy(image_generation_enabled)\n    bootstrap_text = f\"\"\"{selected_stack}\n\n{image_policy}\n\nYou are editing an existing file.\n\n<current_file path=\"{path}\">\n{file_state[\"content\"]}\n</current_file>\n\n<change_request>\n{request_text}\n</change_request>\"\"\"\n    return [\n        cast(\n            ChatCompletionMessageParam,\n            {\n                \"role\": \"system\",\n                \"content\": system_prompt.SYSTEM_PROMPT,\n            },\n        ),\n        build_history_message(\n            {\n                \"role\": \"user\",\n                \"text\": bootstrap_text,\n                \"images\": prompt.get(\"images\", []),\n                \"videos\": prompt.get(\"videos\", []),\n            }\n        ),\n    ]\n"
  },
  {
    "path": "backend/prompts/update/from_history.py",
    "content": "from typing import cast\n\nfrom openai.types.chat import ChatCompletionMessageParam\n\nfrom prompts import system_prompt\nfrom prompts.policies import build_selected_stack_policy, build_user_image_policy\nfrom prompts.prompt_types import PromptHistoryMessage, Stack\nfrom prompts.message_builder import Prompt, build_history_message\n\n\ndef build_update_prompt_from_history(\n    stack: Stack,\n    history: list[PromptHistoryMessage],\n    image_generation_enabled: bool,\n) -> Prompt:\n    first_user_index = next(\n        (index for index, item in enumerate(history) if item[\"role\"] == \"user\"),\n        -1,\n    )\n    if first_user_index == -1:\n        raise ValueError(\"Update history must include at least one user message\")\n\n    prompt_messages: Prompt = [\n        cast(\n            ChatCompletionMessageParam,\n            {\n                \"role\": \"system\",\n                \"content\": system_prompt.SYSTEM_PROMPT,\n            },\n        )\n    ]\n    selected_stack = build_selected_stack_policy(stack)\n    image_policy = build_user_image_policy(image_generation_enabled)\n    for index, item in enumerate(history):\n        if index == first_user_index:\n            stack_prefix = f\"\"\"{selected_stack}\n\n{image_policy}\"\"\"\n            user_text = item.get(\"text\", \"\")\n            prefixed_text = (\n                f\"{stack_prefix}\\n\\n{user_text}\" if user_text.strip() else stack_prefix\n            )\n            prompt_messages.append(\n                build_history_message(\n                    {\n                        \"role\": \"user\",\n                        \"text\": prefixed_text,\n                        \"images\": item.get(\"images\", []),\n                        \"videos\": item.get(\"videos\", []),\n                    }\n                )\n            )\n            continue\n\n        prompt_messages.append(build_history_message(item))\n\n    return prompt_messages\n"
  },
  {
    "path": "backend/pyproject.toml",
    "content": "[tool.poetry]\nname = \"backend\"\nversion = \"0.1.0\"\ndescription = \"\"\nauthors = [\"Abi Raja <abimanyuraja@gmail.com>\"]\nlicense = \"MIT\"\npackage-mode = false\n\n[tool.poetry.dependencies]\npython = \"^3.10\"\nfastapi = \"^0.115.6\"\nuvicorn = \"^0.25.0\"\nwebsockets = \"^14.1\"\nopenai = \"2.16.0\"\npython-dotenv = \"^1.0.0\"\nbeautifulsoup4 = \"^4.12.2\"\nhttpx = \"^0.28.1\"\npre-commit = \"^3.6.2\"\nanthropic = \"^0.84.0\"\nmoviepy = \"^1.0.3\"\npillow = \"^10.3.0\"\ntypes-pillow = \"^10.2.0.20240520\"\naiohttp = \"^3.9.5\"\npydantic = \"^2.10\"\ngoogle-genai = \"^1.16.1\"\nlangfuse = \"^3.0.2\"\n\n[tool.poetry.group.dev.dependencies]\npytest = \"^7.4.3\"\npyright = \"^1.1.352\"\npytest-asyncio = \"^0.21\"\n\n[build-system]\nrequires = [\"poetry-core\"]\nbuild-backend = \"poetry.core.masonry.api\"\n"
  },
  {
    "path": "backend/pyrightconfig.json",
    "content": "{\n  \"exclude\": [\"image_generation.py\"],\n  \"typeCheckingMode\": \"basic\",\n  \"reportMissingTypeStubs\": \"none\",\n  \"reportUnknownVariableType\": \"warning\"\n}\n"
  },
  {
    "path": "backend/pytest.ini",
    "content": "[pytest]\ntestpaths = tests\npython_files = test_*.py\npython_classes = Test*\npython_functions = test_*\naddopts = -v --tb=short\nasyncio_mode = auto\n"
  },
  {
    "path": "backend/routes/evals.py",
    "content": "import os\nimport asyncio\nimport json\nfrom fastapi import APIRouter, Query, Request, HTTPException\nfrom fastapi.responses import StreamingResponse\nfrom pydantic import BaseModel\nfrom evals.utils import image_to_data_url\nfrom evals.config import EVALS_DIR\nfrom typing import Set\nfrom evals.runner import run_image_evals, count_pending_eval_tasks\nfrom typing import List, Dict\nfrom llm import Llm\nfrom prompts.prompt_types import Stack\nfrom pathlib import Path\nfrom fs_logging.openai_input_compare import (\n    compare_openai_inputs,\n    format_openai_input_comparison,\n)\n\nrouter = APIRouter()\n\n# Update this if the number of outputs generated per input changes\nN = 1\n\n\nclass Eval(BaseModel):\n    input: str\n    outputs: list[str]\n\n\nclass InputFile(BaseModel):\n    name: str\n    path: str\n\n\n@router.get(\"/eval_input_files\", response_model=List[InputFile])\nasync def get_eval_input_files():\n    \"\"\"Get a list of all input files available for evaluations\"\"\"\n    input_dir = os.path.join(EVALS_DIR, \"inputs\")\n    try:\n        files: list[InputFile] = []\n        for filename in os.listdir(input_dir):\n            if filename.endswith(\".png\"):\n                file_path = os.path.join(input_dir, filename)\n                files.append(InputFile(name=filename, path=file_path))\n        return sorted(files, key=lambda x: x.name)\n    except Exception as e:\n        raise HTTPException(\n            status_code=500, detail=f\"Error reading input files: {str(e)}\"\n        )\n\n\n@router.get(\"/evals\", response_model=list[Eval])\nasync def get_evals(folder: str):\n    if not folder:\n        raise HTTPException(status_code=400, detail=\"Folder path is required\")\n\n    folder_path = Path(folder)\n    if not folder_path.exists():\n        raise HTTPException(status_code=404, detail=f\"Folder not found: {folder}\")\n\n    try:\n        evals: list[Eval] = []\n        # Get all HTML files from folder\n        files = {\n      
       f: os.path.join(folder, f)\n            for f in os.listdir(folder)\n            if f.endswith(\".html\")\n        }\n\n        # Extract base names\n        base_names: Set[str] = set()\n        for filename in files.keys():\n            base_name = (\n                filename.rsplit(\"_\", 1)[0]\n                if \"_\" in filename\n                else filename.replace(\".html\", \"\")\n            )\n            base_names.add(base_name)\n\n        for base_name in base_names:\n            input_path = os.path.join(EVALS_DIR, \"inputs\", f\"{base_name}.png\")\n            if not os.path.exists(input_path):\n                continue\n\n            # Find matching output file\n            output_file = None\n            for filename, filepath in files.items():\n                if filename.startswith(base_name):\n                    output_file = filepath\n                    break\n\n            if output_file:\n                input_data = await image_to_data_url(input_path)\n                with open(output_file, \"r\", encoding=\"utf-8\") as f:\n                    output_html = f.read()\n                evals.append(Eval(input=input_data, outputs=[output_html]))\n\n        return evals\n\n    except Exception as e:\n        raise HTTPException(status_code=500, detail=f\"Error processing evals: {str(e)}\")\n\n\nclass PairwiseEvalResponse(BaseModel):\n    evals: list[Eval]\n    folder1_name: str\n    folder2_name: str\n\n\n@router.get(\"/pairwise-evals\", response_model=PairwiseEvalResponse)\nasync def get_pairwise_evals(\n    folder1: str = Query(\n        ...,\n        description=\"Absolute path to first folder\",\n    ),\n    folder2: str = Query(\n        ...,\n        description=\"Absolute path to second folder\",\n    ),\n):\n    if not os.path.exists(folder1) or not os.path.exists(folder2):\n        return {\"error\": \"One or both folders do not exist\"}\n\n    evals: list[Eval] = []\n\n    # Get all HTML files from first folder\n    
files1 = {\n        f: os.path.join(folder1, f) for f in os.listdir(folder1) if f.endswith(\".html\")\n    }\n    files2 = {\n        f: os.path.join(folder2, f) for f in os.listdir(folder2) if f.endswith(\".html\")\n    }\n\n    # Find common base names (ignoring any suffixes)\n    common_names: Set[str] = set()\n    for f1 in files1.keys():\n        base_name: str = f1.rsplit(\"_\", 1)[0] if \"_\" in f1 else f1.replace(\".html\", \"\")\n        for f2 in files2.keys():\n            if f2.startswith(base_name):\n                common_names.add(base_name)\n\n    # For each matching pair, create an eval\n    for base_name in common_names:\n        # Find the corresponding input image\n        input_image = None\n        input_path = os.path.join(EVALS_DIR, \"inputs\", f\"{base_name}.png\")\n        if os.path.exists(input_path):\n            input_image = await image_to_data_url(input_path)\n        else:\n            input_image = \"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII=\"  # 1x1 transparent PNG\n\n        # Get the HTML contents\n        output1 = None\n        output2 = None\n\n        # Find matching files in folder1\n        for f1 in files1.keys():\n            if f1.startswith(base_name):\n                with open(files1[f1], \"r\") as f:\n                    output1 = f.read()\n                break\n\n        # Find matching files in folder2\n        for f2 in files2.keys():\n            if f2.startswith(base_name):\n                with open(files2[f2], \"r\") as f:\n                    output2 = f.read()\n                break\n\n        if output1 and output2:\n            evals.append(Eval(input=input_image, outputs=[output1, output2]))\n\n    # Extract folder names for the UI\n    folder1_name = os.path.basename(folder1)\n    folder2_name = os.path.basename(folder2)\n\n    return PairwiseEvalResponse(\n        evals=evals, folder1_name=folder1_name, 
folder2_name=folder2_name\n    )\n\n\nclass RunEvalsRequest(BaseModel):\n    models: List[str]\n    stack: Stack\n    files: List[str] = []  # Optional list of specific file paths to run evals on\n    diff_mode: bool = False\n\n\nclass OpenAIInputCompareRequest(BaseModel):\n    left_json: str\n    right_json: str\n\n\nclass OpenAIInputCompareDifferenceResponse(BaseModel):\n    item_index: int\n    path: str\n    left_summary: str\n    right_summary: str\n    left_value: object | None\n    right_value: object | None\n\n\nclass OpenAIInputCompareResponse(BaseModel):\n    common_prefix_items: int\n    left_item_count: int\n    right_item_count: int\n    difference: OpenAIInputCompareDifferenceResponse | None\n    formatted: str\n\n\ndef _load_openai_input_compare_payload(raw_json: str, side: str) -> object:\n    try:\n        payload = json.loads(raw_json)\n    except json.JSONDecodeError as error:\n        raise HTTPException(\n            status_code=400,\n            detail=(\n                f\"Invalid {side} JSON: {error.msg} \"\n                f\"(line {error.lineno}, column {error.colno})\"\n            ),\n        )\n\n    try:\n        compare_openai_inputs(payload, payload)\n    except ValueError as error:\n        raise HTTPException(status_code=400, detail=f\"Invalid {side} payload: {error}\")\n\n    return payload\n\n\n@router.post(\"/openai-input-compare\", response_model=OpenAIInputCompareResponse)\nasync def compare_openai_inputs_for_evals(\n    request: OpenAIInputCompareRequest,\n) -> OpenAIInputCompareResponse:\n    left_payload = _load_openai_input_compare_payload(request.left_json, \"left\")\n    right_payload = _load_openai_input_compare_payload(request.right_json, \"right\")\n    comparison = compare_openai_inputs(left_payload, right_payload)\n\n    difference = None\n    if comparison.difference is not None:\n        difference = OpenAIInputCompareDifferenceResponse(\n            item_index=comparison.difference.item_index,\n            
path=comparison.difference.path,\n            left_summary=comparison.difference.left_summary,\n            right_summary=comparison.difference.right_summary,\n            left_value=comparison.difference.left_value,\n            right_value=comparison.difference.right_value,\n        )\n\n    return OpenAIInputCompareResponse(\n        common_prefix_items=comparison.common_prefix_items,\n        left_item_count=comparison.left_item_count,\n        right_item_count=comparison.right_item_count,\n        difference=difference,\n        formatted=format_openai_input_comparison(comparison),\n    )\n\n\n@router.post(\"/run_evals\", response_model=List[str])\nasync def run_evals(request: RunEvalsRequest) -> List[str]:\n    \"\"\"Run evaluations on selected images in the inputs directory for multiple models\"\"\"\n    all_output_files: List[str] = []\n\n    for model in request.models:\n        output_files = await run_image_evals(\n            model=model,\n            stack=request.stack,\n            input_files=request.files,\n            diff_mode=request.diff_mode,\n        )\n        all_output_files.extend(output_files)\n\n    return all_output_files\n\n\ndef _count_eval_files(selected_files: List[str]) -> int:\n    if selected_files:\n        return len([f for f in selected_files if f.endswith(\".png\")])\n\n    input_dir = os.path.join(EVALS_DIR, \"inputs\")\n    return len([f for f in os.listdir(input_dir) if f.endswith(\".png\")])\n\n\n@router.post(\"/run_evals_stream\")\nasync def run_evals_stream(request: RunEvalsRequest):\n    \"\"\"Run evaluations and stream progress events as newline-delimited JSON.\"\"\"\n    if not request.models:\n        raise HTTPException(status_code=400, detail=\"At least one model is required\")\n\n    per_model_task_counts: Dict[str, int] = {}\n    per_model_skipped_existing: Dict[str, int] = {}\n    if request.diff_mode:\n        for model in request.models:\n            pending_tasks, skipped_tasks = count_pending_eval_tasks(\n 
               stack=request.stack,\n                model=model,\n                input_files=request.files,\n                n=N,\n                diff_mode=True,\n            )\n            per_model_task_counts[model] = pending_tasks\n            per_model_skipped_existing[model] = skipped_tasks\n    else:\n        per_model_task_count = _count_eval_files(request.files)\n        for model in request.models:\n            per_model_task_counts[model] = per_model_task_count\n            per_model_skipped_existing[model] = 0\n\n    total_tasks = sum(per_model_task_counts.values())\n    total_skipped_existing = sum(per_model_skipped_existing.values())\n\n    async def event_generator():\n        queue: asyncio.Queue[dict] = asyncio.Queue()\n\n        async def emit(event: dict) -> None:\n            await queue.put(event)\n\n        async def run_all_models() -> None:\n            all_output_files: List[str] = []\n            completed_offset = 0\n\n            try:\n                await emit(\n                    {\n                        \"type\": \"start\",\n                        \"total_models\": len(request.models),\n                        \"tasks_per_model\": per_model_task_counts,\n                        \"total_tasks\": total_tasks,\n                        \"completed_tasks\": 0,\n                        \"diff_mode\": request.diff_mode,\n                        \"total_skipped_existing\": total_skipped_existing,\n                    }\n                )\n\n                for model_index, model in enumerate(request.models, start=1):\n                    model_task_count = per_model_task_counts.get(model, 0)\n                    model_skipped_existing = per_model_skipped_existing.get(model, 0)\n                    await emit(\n                        {\n                            \"type\": \"model_start\",\n                            \"model\": model,\n                            \"model_index\": model_index,\n                            
\"total_models\": len(request.models),\n                            \"model_tasks\": model_task_count,\n                            \"model_skipped_existing\": model_skipped_existing,\n                        }\n                    )\n\n                    async def on_progress(event: dict) -> None:\n                        await emit(\n                            {\n                                **event,\n                                \"model\": model,\n                                \"model_index\": model_index,\n                                \"total_models\": len(request.models),\n                                \"global_completed_tasks\": completed_offset\n                                + event.get(\"completed_tasks\", 0),\n                                \"global_total_tasks\": total_tasks,\n                            }\n                        )\n\n                    output_files = await run_image_evals(\n                        model=model,\n                        stack=request.stack,\n                        input_files=request.files,\n                        diff_mode=request.diff_mode,\n                        progress_callback=on_progress,\n                    )\n                    all_output_files.extend(output_files)\n                    completed_offset += model_task_count\n\n                await emit(\n                    {\n                        \"type\": \"complete\",\n                        \"completed_tasks\": total_tasks,\n                        \"total_tasks\": total_tasks,\n                        \"output_files\": all_output_files,\n                    }\n                )\n            except Exception as e:\n                await emit({\"type\": \"error\", \"message\": str(e)})\n            finally:\n                await emit({\"type\": \"done\"})\n\n        producer = asyncio.create_task(run_all_models())\n        while True:\n            event = await queue.get()\n            if event.get(\"type\") == \"done\":\n          
      break\n            yield json.dumps(event) + \"\\n\"\n        await producer\n\n    return StreamingResponse(event_generator(), media_type=\"application/x-ndjson\")\n\n\n@router.get(\"/models\", response_model=Dict[str, List[str]])\nasync def get_models():\n    current_models = [model.value for model in Llm]\n\n    # Import Stack type from prompts.prompt_types and get all literal values\n    available_stacks = list(Stack.__args__)\n\n    return {\"models\": current_models, \"stacks\": available_stacks}\n\n\nclass BestOfNEvalsResponse(BaseModel):\n    evals: list[Eval]\n    folder_names: list[str]\n\n\n@router.get(\"/best-of-n-evals\", response_model=BestOfNEvalsResponse)\nasync def get_best_of_n_evals(request: Request):\n    # Get all query parameters\n    query_params = dict(request.query_params)\n\n    # Extract all folder paths (folder1, folder2, folder3, etc.)\n    folders: list[str] = []\n    i = 1\n    while f\"folder{i}\" in query_params:\n        folders.append(query_params[f\"folder{i}\"])\n        i += 1\n\n    if not folders:\n        return {\"error\": \"No folders provided\"}\n\n    # Validate folders exist\n    for folder in folders:\n        if not os.path.exists(folder):\n            return {\"error\": f\"Folder does not exist: {folder}\"}\n\n    evals: list[Eval] = []\n    folder_names = [os.path.basename(folder) for folder in folders]\n\n    # Get HTML files from all folders\n    files_by_folder = []\n    for folder in folders:\n        files = {\n            f: os.path.join(folder, f)\n            for f in os.listdir(folder)\n            if f.endswith(\".html\")\n        }\n        files_by_folder.append(files)\n\n    # Find common base names across all folders\n    common_names: Set[str] = set()\n    base_names_first_folder = {\n        f.rsplit(\"_\", 1)[0] if \"_\" in f else f.replace(\".html\", \"\")\n        for f in files_by_folder[0].keys()\n    }\n\n    for base_name in base_names_first_folder:\n        found_in_all = True\n        
for folder_files in files_by_folder[1:]:\n            if not any(f.startswith(base_name) for f in folder_files.keys()):\n                found_in_all = False\n                break\n        if found_in_all:\n            common_names.add(base_name)\n\n    # For each matching set, create an eval\n    for base_name in common_names:\n        # Find the corresponding input image\n        input_image = None\n        input_path = os.path.join(EVALS_DIR, \"inputs\", f\"{base_name}.png\")\n        if os.path.exists(input_path):\n            input_image = await image_to_data_url(input_path)\n        else:\n            input_image = \"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII=\"\n\n        # Get HTML contents from all folders\n        outputs: list[str] = []\n        for folder_files in files_by_folder:\n            output_content: str | None = None\n            for filename in folder_files.keys():\n                if filename.startswith(base_name):\n                    with open(folder_files[filename], \"r\") as f:\n                        output_content = f.read()\n                    break\n            if output_content:\n                outputs.append(output_content)\n            else:\n                outputs.append(\"<html><body>Output not found</body></html>\")\n\n        if len(outputs) == len(folders):  # Only add if we have outputs from all folders\n            evals.append(Eval(input=input_image, outputs=outputs))\n\n    return BestOfNEvalsResponse(evals=evals, folder_names=folder_names)\n\n\nclass OutputFolder(BaseModel):\n    name: str\n    path: str\n    modified_time: float\n\n\n@router.get(\"/output_folders\", response_model=List[OutputFolder])\nasync def get_output_folders():\n    \"\"\"Get a list of all output folders available for evaluations, sorted by recently modified\"\"\"\n    output_dir = os.path.join(EVALS_DIR, \"results\")\n    try:\n        folders: list[OutputFolder] = []\n    
    for folder_name in os.listdir(output_dir):\n            folder_path = os.path.join(output_dir, folder_name)\n            if os.path.isdir(folder_path) and not folder_name.startswith(\".\"):\n                # Get modification time\n                modified_time = os.path.getmtime(folder_path)\n                folders.append(\n                    OutputFolder(\n                        name=folder_name, path=folder_path, modified_time=modified_time\n                    )\n                )\n\n        # Sort by modified time, most recent first\n        return sorted(folders, key=lambda x: x.modified_time, reverse=True)\n    except Exception as e:\n        raise HTTPException(\n            status_code=500, detail=f\"Error reading output folders: {str(e)}\"\n        )\n"
  },
  {
    "path": "backend/routes/generate_code.py",
    "content": "import asyncio\nfrom dataclasses import dataclass, field\nfrom abc import ABC, abstractmethod\nimport traceback\nfrom typing import Callable, Awaitable\nfrom fastapi import APIRouter, WebSocket\nimport openai\nfrom websockets.exceptions import ConnectionClosedOK, ConnectionClosedError\nfrom config import (\n    ANTHROPIC_API_KEY,\n    GEMINI_API_KEY,\n    IS_DEBUG_ENABLED,\n    IS_PROD,\n    NUM_VARIANTS,\n    NUM_VARIANTS_VIDEO,\n    OPENAI_API_KEY,\n    OPENAI_BASE_URL,\n    REPLICATE_API_KEY,\n)\nfrom custom_types import InputMode\nfrom llm import (\n    Llm,\n)\nfrom typing import (\n    Any,\n    Callable,\n    Coroutine,\n    Dict,\n    List,\n    Literal,\n    cast,\n    get_args,\n)\nfrom openai.types.chat import ChatCompletionMessageParam\n\nfrom utils import print_prompt_preview\n\n# WebSocket message types\nMessageType = Literal[\n    \"chunk\",\n    \"status\",\n    \"setCode\",\n    \"error\",\n    \"variantComplete\",\n    \"variantError\",\n    \"variantCount\",\n    \"variantModels\",\n    \"thinking\",\n    \"assistant\",\n    \"toolStart\",\n    \"toolResult\",\n]\nfrom prompts.pipeline import build_prompt_messages\nfrom prompts.request_parsing import parse_prompt_content, parse_prompt_history\nfrom prompts.prompt_types import PromptHistoryMessage, Stack, UserTurnInput\nfrom agent.runner import Agent\nfrom routes.model_choice_sets import (\n    ALL_KEYS_MODELS_DEFAULT,\n    ALL_KEYS_MODELS_TEXT_CREATE,\n    ALL_KEYS_MODELS_UPDATE,\n    ANTHROPIC_ONLY_MODELS,\n    GEMINI_ANTHROPIC_MODELS,\n    GEMINI_OPENAI_MODELS,\n    GEMINI_ONLY_MODELS,\n    OPENAI_ANTHROPIC_MODELS,\n    OPENAI_ONLY_MODELS,\n    VIDEO_VARIANT_MODELS,\n)\n\n# from utils import pprint_prompt\nfrom ws.constants import APP_ERROR_WEB_SOCKET_CODE  # type: ignore\n\n\nrouter = APIRouter()\n\n\n@dataclass\nclass PipelineContext:\n    \"\"\"Context object that carries state through the pipeline\"\"\"\n\n    websocket: WebSocket\n    ws_comm: \"WebSocketCommunicator | 
None\" = None\n    params: Dict[str, Any] = field(default_factory=dict)\n    extracted_params: \"ExtractedParams | None\" = None\n    prompt_messages: List[ChatCompletionMessageParam] = field(default_factory=list)\n    variant_models: List[Llm] = field(default_factory=list)\n    completions: List[str] = field(default_factory=list)\n    variant_completions: Dict[int, str] = field(default_factory=dict)\n    metadata: Dict[str, Any] = field(default_factory=dict)\n\n    @property\n    def send_message(self):\n        assert self.ws_comm is not None\n        return self.ws_comm.send_message\n\n    @property\n    def throw_error(self):\n        assert self.ws_comm is not None\n        return self.ws_comm.throw_error\n\n\nclass Middleware(ABC):\n    \"\"\"Base class for all pipeline middleware\"\"\"\n\n    @abstractmethod\n    async def process(\n        self, context: PipelineContext, next_func: Callable[[], Awaitable[None]]\n    ) -> None:\n        \"\"\"Process the context and call the next middleware\"\"\"\n        pass\n\n\nclass Pipeline:\n    \"\"\"Pipeline for processing WebSocket code generation requests\"\"\"\n\n    def __init__(self):\n        self.middlewares: List[Middleware] = []\n\n    def use(self, middleware: Middleware) -> \"Pipeline\":\n        \"\"\"Add a middleware to the pipeline\"\"\"\n        self.middlewares.append(middleware)\n        return self\n\n    async def execute(self, websocket: WebSocket) -> None:\n        \"\"\"Execute the pipeline with the given WebSocket\"\"\"\n        context = PipelineContext(websocket=websocket)\n\n        # Build the middleware chain\n        async def start(ctx: PipelineContext):\n            pass  # End of pipeline\n\n        chain = start\n        for middleware in reversed(self.middlewares):\n            chain = self._wrap_middleware(middleware, chain)\n\n        await chain(context)\n\n    def _wrap_middleware(\n        self,\n        middleware: Middleware,\n        next_func: Callable[[PipelineContext], 
Awaitable[None]],\n    ) -> Callable[[PipelineContext], Awaitable[None]]:\n        \"\"\"Wrap a middleware with its next function\"\"\"\n\n        async def wrapped(context: PipelineContext) -> None:\n            await middleware.process(context, lambda: next_func(context))\n\n        return wrapped\n\n\nclass WebSocketCommunicator:\n    \"\"\"Handles WebSocket communication with consistent error handling\"\"\"\n\n    def __init__(self, websocket: WebSocket):\n        self.websocket = websocket\n        self.is_closed = False\n\n    async def accept(self) -> None:\n        \"\"\"Accept the WebSocket connection\"\"\"\n        await self.websocket.accept()\n        print(\"Incoming websocket connection...\")\n\n    async def send_message(\n        self,\n        type: MessageType,\n        value: str | None,\n        variantIndex: int,\n        data: Dict[str, Any] | None = None,\n        eventId: str | None = None,\n    ) -> None:\n        \"\"\"Send a message to the client with debug logging\"\"\"\n        if self.is_closed:\n            return\n\n        # Print for debugging on the backend\n        if type == \"error\":\n            print(f\"Error (variant {variantIndex + 1}): {value}\")\n        elif type == \"status\":\n            print(f\"Status (variant {variantIndex + 1}): {value}\")\n        elif type == \"variantComplete\":\n            print(f\"Variant {variantIndex + 1} complete\")\n        elif type == \"variantError\":\n            print(f\"Variant {variantIndex + 1} error: {value}\")\n\n        try:\n            payload: Dict[str, Any] = {\"type\": type, \"variantIndex\": variantIndex}\n            if value is not None:\n                payload[\"value\"] = value\n            if data is not None:\n                payload[\"data\"] = data\n            if eventId is not None:\n                payload[\"eventId\"] = eventId\n            await self.websocket.send_json(payload)\n        except (ConnectionClosedOK, ConnectionClosedError):\n            
print(f\"WebSocket closed by client, skipping message: {type}\")\n            self.is_closed = True\n\n    async def throw_error(self, message: str) -> None:\n        \"\"\"Send an error message and close the connection\"\"\"\n        print(message)\n        if not self.is_closed:\n            try:\n                await self.websocket.send_json({\"type\": \"error\", \"value\": message})\n                await self.websocket.close(APP_ERROR_WEB_SOCKET_CODE)\n            except (ConnectionClosedOK, ConnectionClosedError):\n                print(\"WebSocket already closed by client\")\n            self.is_closed = True\n\n    async def receive_params(self) -> Dict[str, Any]:\n        \"\"\"Receive parameters from the client\"\"\"\n        params: Dict[str, Any] = await self.websocket.receive_json()\n        print(\"Received params\")\n        return params\n\n    async def close(self) -> None:\n        \"\"\"Close the WebSocket connection\"\"\"\n        if not self.is_closed:\n            try:\n                await self.websocket.close()\n            except (ConnectionClosedOK, ConnectionClosedError):\n                pass  # Already closed by client\n            self.is_closed = True\n\n\n@dataclass\nclass ExtractedParams:\n    stack: Stack\n    input_mode: InputMode\n    should_generate_images: bool\n    openai_api_key: str | None\n    anthropic_api_key: str | None\n    gemini_api_key: str | None\n    openai_base_url: str | None\n    generation_type: Literal[\"create\", \"update\"]\n    prompt: UserTurnInput\n    history: List[PromptHistoryMessage]\n    file_state: Dict[str, str] | None\n    option_codes: List[str]\n\n\nclass ParameterExtractionStage:\n    \"\"\"Handles parameter extraction and validation from WebSocket requests\"\"\"\n\n    def __init__(self, throw_error: Callable[[str], Coroutine[Any, Any, None]]):\n        self.throw_error = throw_error\n\n    async def extract_and_validate(self, params: Dict[str, Any]) -> ExtractedParams:\n        
\"\"\"Extract and validate all parameters from the request\"\"\"\n        # Read the code config settings (stack) from the request.\n        generated_code_config = params.get(\"generatedCodeConfig\", \"\")\n        if generated_code_config not in get_args(Stack):\n            await self.throw_error(\n                f\"Invalid generated code config: {generated_code_config}\"\n            )\n            raise ValueError(f\"Invalid generated code config: {generated_code_config}\")\n        validated_stack = cast(Stack, generated_code_config)\n\n        # Validate the input mode\n        input_mode = params.get(\"inputMode\")\n        if input_mode not in get_args(InputMode):\n            await self.throw_error(f\"Invalid input mode: {input_mode}\")\n            raise ValueError(f\"Invalid input mode: {input_mode}\")\n        validated_input_mode = cast(InputMode, input_mode)\n\n        openai_api_key = self._get_from_settings_dialog_or_env(\n            params, \"openAiApiKey\", OPENAI_API_KEY\n        )\n\n        # If neither is provided, we throw an error later only if Claude is used.\n        anthropic_api_key = self._get_from_settings_dialog_or_env(\n            params, \"anthropicApiKey\", ANTHROPIC_API_KEY\n        )\n        gemini_api_key = self._get_from_settings_dialog_or_env(\n            params, \"geminiApiKey\", GEMINI_API_KEY\n        )\n\n        # Base URL for OpenAI API\n        openai_base_url: str | None = None\n        # Disable user-specified OpenAI Base URL in prod\n        if not IS_PROD:\n            openai_base_url = self._get_from_settings_dialog_or_env(\n                params, \"openAiBaseURL\", OPENAI_BASE_URL\n            )\n        if not openai_base_url:\n            print(\"Using official OpenAI URL\")\n\n        # Get the image generation flag from the request. 
Fall back to True if not provided.\n        should_generate_images = bool(params.get(\"isImageGenerationEnabled\", True))\n\n        # Extract and validate generation type\n        generation_type = params.get(\"generationType\", \"create\")\n        if generation_type not in [\"create\", \"update\"]:\n            await self.throw_error(f\"Invalid generation type: {generation_type}\")\n            raise ValueError(f\"Invalid generation type: {generation_type}\")\n        generation_type = cast(Literal[\"create\", \"update\"], generation_type)\n\n        # Extract prompt content\n        prompt: UserTurnInput = parse_prompt_content(params.get(\"prompt\"))\n\n        # Extract history (default to empty list)\n        history: List[PromptHistoryMessage] = parse_prompt_history(\n            params.get(\"history\")\n        )\n\n        # Extract file state for agent edits\n        raw_file_state = params.get(\"fileState\")\n        file_state: Dict[str, str] | None = None\n        if isinstance(raw_file_state, dict):\n            content = raw_file_state.get(\"content\")\n            if isinstance(content, str) and content.strip():\n                path = raw_file_state.get(\"path\") or \"index.html\"\n                file_state = {\"path\": path, \"content\": content}\n\n        raw_option_codes = params.get(\"optionCodes\")\n        option_codes: List[str] = []\n        if isinstance(raw_option_codes, list):\n            for entry in raw_option_codes:\n                if isinstance(entry, str):\n                    option_codes.append(entry)\n                elif entry is None:\n                    option_codes.append(\"\")\n                else:\n                    option_codes.append(str(entry))\n\n        return ExtractedParams(\n            stack=validated_stack,\n            input_mode=validated_input_mode,\n            should_generate_images=should_generate_images,\n            openai_api_key=openai_api_key,\n            anthropic_api_key=anthropic_api_key,\n  
          gemini_api_key=gemini_api_key,\n            openai_base_url=openai_base_url,\n            generation_type=generation_type,\n            prompt=prompt,\n            history=history,\n            file_state=file_state,\n            option_codes=option_codes,\n        )\n\n    def _get_from_settings_dialog_or_env(\n        self, params: dict[str, Any], key: str, env_var: str | None\n    ) -> str | None:\n        \"\"\"Get value from client settings or environment variable\"\"\"\n        value = params.get(key)\n        if value:\n            print(f\"Using {key} from client-side settings dialog\")\n            return value\n\n        if env_var:\n            print(f\"Using {key} from environment variable\")\n            return env_var\n\n        return None\n\n\nclass ModelSelectionStage:\n    \"\"\"Handles selection of variant models based on available API keys and generation type\"\"\"\n\n    def __init__(self, throw_error: Callable[[str], Coroutine[Any, Any, None]]):\n        self.throw_error = throw_error\n\n    async def select_models(\n        self,\n        generation_type: Literal[\"create\", \"update\"],\n        input_mode: InputMode,\n        openai_api_key: str | None,\n        anthropic_api_key: str | None,\n        gemini_api_key: str | None = None,\n    ) -> List[Llm]:\n        \"\"\"Select appropriate models based on available API keys\"\"\"\n        try:\n            num_variants = 2 if generation_type == \"update\" else NUM_VARIANTS\n            variant_models = self._get_variant_models(\n                generation_type,\n                input_mode,\n                num_variants,\n                openai_api_key,\n                anthropic_api_key,\n                gemini_api_key,\n            )\n\n            # Print the variant models (one per line)\n            print(\"Variant models:\")\n            for index, model in enumerate(variant_models):\n                print(f\"Variant {index + 1}: {model.value}\")\n\n            return 
variant_models\n        except Exception:\n            await self.throw_error(\n                \"No OpenAI, Anthropic, or Gemini API key found. Please add the environment variable \"\n                \"OPENAI_API_KEY, ANTHROPIC_API_KEY, or GEMINI_API_KEY to backend/.env or in the settings dialog. \"\n                \"If you add it to .env, make sure to restart the backend server.\"\n            )\n            raise Exception(\"No API key\")\n\n    def _get_variant_models(\n        self,\n        generation_type: Literal[\"create\", \"update\"],\n        input_mode: InputMode,\n        num_variants: int,\n        openai_api_key: str | None,\n        anthropic_api_key: str | None,\n        gemini_api_key: str | None,\n    ) -> List[Llm]:\n        \"\"\"Simple model cycling that scales with num_variants\"\"\"\n\n        # Video mode requires Gemini - 2 variants for comparison\n        if input_mode == \"video\":\n            if not gemini_api_key:\n                raise Exception(\n                    \"Video mode requires a Gemini API key. 
\"\n                    \"Please add GEMINI_API_KEY to backend/.env or in the settings dialog\"\n                )\n            return list(VIDEO_VARIANT_MODELS)\n\n        # Define models based on available API keys\n        if gemini_api_key and anthropic_api_key and openai_api_key:\n            if input_mode == \"text\" and generation_type == \"create\":\n                models = list(ALL_KEYS_MODELS_TEXT_CREATE)\n            elif generation_type == \"update\":\n                models = list(ALL_KEYS_MODELS_UPDATE)\n            else:\n                models = list(ALL_KEYS_MODELS_DEFAULT)\n        elif gemini_api_key and anthropic_api_key:\n            models = list(GEMINI_ANTHROPIC_MODELS)\n        elif gemini_api_key and openai_api_key:\n            models = list(GEMINI_OPENAI_MODELS)\n        elif openai_api_key and anthropic_api_key:\n            models = list(OPENAI_ANTHROPIC_MODELS)\n        elif gemini_api_key:\n            models = list(GEMINI_ONLY_MODELS)\n        elif anthropic_api_key:\n            models = list(ANTHROPIC_ONLY_MODELS)\n        elif openai_api_key:\n            models = list(OPENAI_ONLY_MODELS)\n        else:\n            raise Exception(\"No OpenAI, Anthropic, or Gemini API key\")\n\n        # Cycle through models: [A, B] with num=5 becomes [A, B, A, B, A]\n        selected_models: List[Llm] = []\n        for i in range(num_variants):\n            selected_models.append(models[i % len(models)])\n\n        return selected_models\n\n\nclass PromptCreationStage:\n    \"\"\"Handles prompt assembly for code generation\"\"\"\n\n    def __init__(self, throw_error: Callable[[str], Coroutine[Any, Any, None]]):\n        self.throw_error = throw_error\n\n    async def build_prompt_messages(\n        self,\n        extracted_params: ExtractedParams,\n    ) -> List[ChatCompletionMessageParam]:\n        \"\"\"Create prompt messages\"\"\"\n        try:\n            prompt_messages = await build_prompt_messages(\n                stack=extracted_params.stack,\n    
            input_mode=extracted_params.input_mode,\n                generation_type=extracted_params.generation_type,\n                prompt=extracted_params.prompt,\n                history=extracted_params.history,\n                file_state=extracted_params.file_state,\n                image_generation_enabled=extracted_params.should_generate_images,\n            )\n            print_prompt_preview(prompt_messages)\n\n            return prompt_messages\n        except Exception:\n            await self.throw_error(\n                \"Error assembling prompt. Contact support at support@picoapps.xyz\"\n            )\n            raise\n\n\nclass PostProcessingStage:\n    \"\"\"Handles post-processing after code generation completes\"\"\"\n\n    def __init__(self):\n        pass\n\n    async def process_completions(\n        self,\n        completions: List[str],\n        websocket: WebSocket,\n    ) -> None:\n        \"\"\"Process completions and perform cleanup.\"\"\"\n        return None\n\n\nclass AgenticGenerationStage:\n    \"\"\"Handles agent tool-calling generation for each variant.\"\"\"\n\n    def __init__(\n        self,\n        send_message: Callable[[MessageType, str | None, int, Dict[str, Any] | None, str | None], Coroutine[Any, Any, None]],\n        openai_api_key: str | None,\n        openai_base_url: str | None,\n        anthropic_api_key: str | None,\n        gemini_api_key: str | None,\n        should_generate_images: bool,\n        file_state: Dict[str, str] | None,\n        option_codes: List[str] | None,\n    ):\n        self.send_message = send_message\n        self.openai_api_key = openai_api_key\n        self.openai_base_url = openai_base_url\n        self.anthropic_api_key = anthropic_api_key\n        self.gemini_api_key = gemini_api_key\n        self.should_generate_images = should_generate_images\n        self.file_state = file_state\n        self.option_codes = option_codes or []\n\n    async def process_variants(\n        self,\n   
     variant_models: List[Llm],\n        prompt_messages: List[ChatCompletionMessageParam],\n    ) -> Dict[int, str]:\n        tasks: List[asyncio.Task[str]] = []\n        for index, model in enumerate(variant_models):\n            tasks.append(\n                asyncio.create_task(\n                    self._run_variant(index, model, prompt_messages)\n                )\n            )\n\n        results = await asyncio.gather(*tasks, return_exceptions=True)\n        variant_completions: Dict[int, str] = {}\n        for index, result in enumerate(results):\n            if isinstance(result, BaseException):\n                print(f\"Variant {index + 1} failed: {result}\")\n                continue\n            if result:\n                variant_completions[index] = result\n\n        return variant_completions\n\n    async def _run_variant(\n        self,\n        index: int,\n        model: Llm,\n        prompt_messages: List[ChatCompletionMessageParam],\n    ) -> str:\n        try:\n            async def send_runner_message(\n                type: str,\n                value: str | None,\n                variant_index: int,\n                data: Dict[str, Any] | None,\n                event_id: str | None,\n            ) -> None:\n                await self.send_message(\n                    cast(MessageType, type),\n                    value,\n                    variant_index,\n                    data,\n                    event_id,\n                )\n\n            runner = Agent(\n                send_message=send_runner_message,\n                variant_index=index,\n                openai_api_key=self.openai_api_key,\n                openai_base_url=self.openai_base_url,\n                anthropic_api_key=self.anthropic_api_key,\n                gemini_api_key=self.gemini_api_key,\n                should_generate_images=self.should_generate_images,\n                initial_file_state=self.file_state,\n                option_codes=self.option_codes,\n        
    )\n            completion = await runner.run(model, prompt_messages)\n            if completion:\n                await self.send_message(\"setCode\", completion, index, None, None)\n            await self.send_message(\n                \"variantComplete\",\n                \"Variant generation complete\",\n                index,\n                None,\n                None,\n            )\n            return completion\n        except openai.AuthenticationError as e:\n            print(f\"[VARIANT {index + 1}] OpenAI Authentication failed\", e)\n            error_message = (\n                \"Incorrect OpenAI key. Please make sure your OpenAI API key is correct, \"\n                \"or create a new OpenAI API key on your OpenAI dashboard.\"\n                + (\n                    \" Alternatively, you can purchase code generation credits directly on this website.\"\n                    if IS_PROD\n                    else \"\"\n                )\n            )\n            await self.send_message(\"variantError\", error_message, index, None, None)\n            return \"\"\n        except openai.NotFoundError as e:\n            print(f\"[VARIANT {index + 1}] OpenAI Model not found\", e)\n            error_message = (\n                e.message\n                + \". 
Please make sure you have followed the instructions correctly to obtain \"\n                \"an OpenAI key with GPT vision access: \"\n                \"https://github.com/abi/screenshot-to-code/blob/main/Troubleshooting.md\"\n                + (\n                    \" Alternatively, you can purchase code generation credits directly on this website.\"\n                    if IS_PROD\n                    else \"\"\n                )\n            )\n            await self.send_message(\"variantError\", error_message, index, None, None)\n            return \"\"\n        except openai.RateLimitError as e:\n            print(f\"[VARIANT {index + 1}] OpenAI Rate limit exceeded\", e)\n            error_message = (\n                \"OpenAI error - 'You exceeded your current quota, please check your plan and billing details.'\"\n                + (\n                    \" Alternatively, you can purchase code generation credits directly on this website.\"\n                    if IS_PROD\n                    else \"\"\n                )\n            )\n            await self.send_message(\"variantError\", error_message, index, None, None)\n            return \"\"\n        except Exception as e:\n            print(f\"Error in variant {index + 1}: {e}\")\n            traceback.print_exception(type(e), e, e.__traceback__)\n            await self.send_message(\"variantError\", str(e), index, None, None)\n            return \"\"\n\n\n# Pipeline Middleware Implementations\n\n\nclass WebSocketSetupMiddleware(Middleware):\n    \"\"\"Handles WebSocket setup and teardown\"\"\"\n\n    async def process(\n        self, context: PipelineContext, next_func: Callable[[], Awaitable[None]]\n    ) -> None:\n        # Create and setup WebSocket communicator\n        context.ws_comm = WebSocketCommunicator(context.websocket)\n        await context.ws_comm.accept()\n\n        try:\n            await next_func()\n        finally:\n            # Always close the WebSocket\n            await 
context.ws_comm.close()\n\n\nclass ParameterExtractionMiddleware(Middleware):\n    \"\"\"Handles parameter extraction and validation\"\"\"\n\n    async def process(\n        self, context: PipelineContext, next_func: Callable[[], Awaitable[None]]\n    ) -> None:\n        # Receive parameters\n        assert context.ws_comm is not None\n        context.params = await context.ws_comm.receive_params()\n\n        # Extract and validate\n        param_extractor = ParameterExtractionStage(context.throw_error)\n        context.extracted_params = await param_extractor.extract_and_validate(\n            context.params\n        )\n\n        # Log what we're generating\n        print(\n            f\"Generating {context.extracted_params.stack} code in {context.extracted_params.input_mode} mode\"\n        )\n\n        await next_func()\n\n\nclass StatusBroadcastMiddleware(Middleware):\n    \"\"\"Sends initial status messages to all variants\"\"\"\n\n    async def process(\n        self, context: PipelineContext, next_func: Callable[[], Awaitable[None]]\n    ) -> None:\n        # Determine variant count based on input mode and generation type.\n        # Edit/update flows use two variants to keep latency and cost down.\n        assert context.extracted_params is not None\n        is_video_mode = context.extracted_params.input_mode == \"video\"\n        is_update = context.extracted_params.generation_type == \"update\"\n        num_variants = (\n            NUM_VARIANTS_VIDEO if is_video_mode else 2 if is_update else NUM_VARIANTS\n        )\n\n        # Tell frontend how many variants we're using\n        await context.send_message(\"variantCount\", str(num_variants), 0)\n\n        for i in range(num_variants):\n            await context.send_message(\"status\", \"Generating code...\", i)\n\n        await next_func()\n\n\nclass PromptCreationMiddleware(Middleware):\n    \"\"\"Handles prompt creation\"\"\"\n\n    async def process(\n        self, context: PipelineContext, 
next_func: Callable[[], Awaitable[None]]\n    ) -> None:\n        prompt_creator = PromptCreationStage(context.throw_error)\n        assert context.extracted_params is not None\n        context.prompt_messages = await prompt_creator.build_prompt_messages(\n            context.extracted_params,\n        )\n        await next_func()\n\n\nclass CodeGenerationMiddleware(Middleware):\n    \"\"\"Handles the main code generation logic\"\"\"\n\n    async def process(\n        self, context: PipelineContext, next_func: Callable[[], Awaitable[None]]\n    ) -> None:\n        try:\n            assert context.extracted_params is not None\n\n            # Select models (handles video mode internally)\n            model_selector = ModelSelectionStage(context.throw_error)\n            context.variant_models = await model_selector.select_models(\n                generation_type=context.extracted_params.generation_type,\n                input_mode=context.extracted_params.input_mode,\n                openai_api_key=context.extracted_params.openai_api_key,\n                anthropic_api_key=context.extracted_params.anthropic_api_key,\n                gemini_api_key=context.extracted_params.gemini_api_key,\n            )\n            if IS_DEBUG_ENABLED:\n                await context.send_message(\n                    \"variantModels\",\n                    None,\n                    0,\n                    {\"models\": [model.value for model in context.variant_models]},\n                    None,\n                )\n\n            generation_stage = AgenticGenerationStage(\n                send_message=context.send_message,\n                openai_api_key=context.extracted_params.openai_api_key,\n                openai_base_url=context.extracted_params.openai_base_url,\n                anthropic_api_key=context.extracted_params.anthropic_api_key,\n                gemini_api_key=context.extracted_params.gemini_api_key,\n                
should_generate_images=context.extracted_params.should_generate_images,\n                file_state=context.extracted_params.file_state,\n                option_codes=context.extracted_params.option_codes,\n            )\n\n            context.variant_completions = await generation_stage.process_variants(\n                variant_models=context.variant_models,\n                prompt_messages=context.prompt_messages,\n            )\n\n            # Check if all variants failed\n            if len(context.variant_completions) == 0:\n                await context.throw_error(\n                    \"Error generating code. Please contact support.\"\n                )\n                return  # Don't continue the pipeline\n\n            # Convert to list format\n            context.completions = []\n            for i in range(len(context.variant_models)):\n                if i in context.variant_completions:\n                    context.completions.append(context.variant_completions[i])\n                else:\n                    context.completions.append(\"\")\n\n        except Exception as e:\n            print(f\"[GENERATE_CODE] Unexpected error: {e}\")\n            await context.throw_error(f\"An unexpected error occurred: {str(e)}\")\n            return  # Don't continue the pipeline\n\n        await next_func()\n\n\nclass PostProcessingMiddleware(Middleware):\n    \"\"\"Handles post-processing and logging\"\"\"\n\n    async def process(\n        self, context: PipelineContext, next_func: Callable[[], Awaitable[None]]\n    ) -> None:\n        post_processor = PostProcessingStage()\n        await post_processor.process_completions(\n            context.completions, context.websocket\n        )\n\n        await next_func()\n\n\n@router.websocket(\"/generate-code\")\nasync def stream_code(websocket: WebSocket):\n    \"\"\"Handle WebSocket code generation requests using a pipeline pattern\"\"\"\n    pipeline = Pipeline()\n\n    # Configure the pipeline\n    
pipeline.use(WebSocketSetupMiddleware())\n    pipeline.use(ParameterExtractionMiddleware())\n    pipeline.use(StatusBroadcastMiddleware())\n    pipeline.use(PromptCreationMiddleware())\n    pipeline.use(CodeGenerationMiddleware())\n    pipeline.use(PostProcessingMiddleware())\n\n    # Execute the pipeline\n    await pipeline.execute(websocket)\n"
  },
  {
    "path": "backend/routes/home.py",
    "content": "from fastapi import APIRouter\nfrom fastapi.responses import HTMLResponse\n\n\nrouter = APIRouter()\n\n\n@router.get(\"/\")\nasync def get_status():\n    return HTMLResponse(\n        content=\"<h3>Your backend is running correctly. Please open the front-end URL (default is http://localhost:5173) to use screenshot-to-code.</h3>\"\n    )\n"
  },
  {
    "path": "backend/routes/model_choice_sets.py",
    "content": "from llm import Llm\n\n# Video variants always use Gemini.\nVIDEO_VARIANT_MODELS = (\n    Llm.GEMINI_3_FLASH_PREVIEW_MINIMAL,\n    Llm.GEMINI_3_1_PRO_PREVIEW_HIGH,\n)\n\n# All API keys available.\n\n# Image (Create)\n\nALL_KEYS_MODELS_DEFAULT = (\n    Llm.GEMINI_3_FLASH_PREVIEW_MINIMAL,\n    Llm.GPT_5_2_CODEX_HIGH,\n    Llm.GEMINI_3_FLASH_PREVIEW_HIGH,\n    Llm.GEMINI_3_1_PRO_PREVIEW_HIGH,\n)\n\n# Text (Create)\n\nALL_KEYS_MODELS_TEXT_CREATE = (\n    Llm.GEMINI_3_FLASH_PREVIEW_MINIMAL,\n    Llm.GPT_5_2_CODEX_HIGH,\n    Llm.CLAUDE_OPUS_4_6,\n    Llm.GEMINI_3_1_PRO_PREVIEW_LOW,\n)\n\n# Image + Text (Update)\n\nALL_KEYS_MODELS_UPDATE = (\n    Llm.GEMINI_3_FLASH_PREVIEW_MINIMAL,\n    Llm.GPT_5_4_2026_03_05_LOW,\n)\n\n# Key subset fallbacks.\nGEMINI_ANTHROPIC_MODELS = (\n    Llm.GEMINI_3_FLASH_PREVIEW_MINIMAL,\n    Llm.GEMINI_3_1_PRO_PREVIEW_LOW,\n    Llm.CLAUDE_OPUS_4_6,\n    Llm.GEMINI_3_FLASH_PREVIEW_HIGH,\n    Llm.GEMINI_3_1_PRO_PREVIEW_HIGH,\n)\nGEMINI_OPENAI_MODELS = (\n    Llm.GEMINI_3_FLASH_PREVIEW_MINIMAL,\n    Llm.GEMINI_3_1_PRO_PREVIEW_LOW,\n    Llm.GPT_5_2_CODEX_HIGH,\n    Llm.GPT_5_2_CODEX_MEDIUM,\n)\nOPENAI_ANTHROPIC_MODELS = (\n    Llm.CLAUDE_OPUS_4_6,\n    Llm.GPT_5_2_CODEX_HIGH,\n    Llm.GPT_5_2_CODEX_MEDIUM,\n)\nGEMINI_ONLY_MODELS = (\n    Llm.GEMINI_3_FLASH_PREVIEW_MINIMAL,\n    Llm.GEMINI_3_1_PRO_PREVIEW_LOW,\n    Llm.GEMINI_3_FLASH_PREVIEW_HIGH,\n    Llm.GEMINI_3_1_PRO_PREVIEW_HIGH,\n)\nANTHROPIC_ONLY_MODELS = (\n    Llm.CLAUDE_OPUS_4_6,\n    Llm.CLAUDE_SONNET_4_6,\n)\nOPENAI_ONLY_MODELS = (\n    Llm.GPT_5_2_CODEX_HIGH,\n    Llm.GPT_5_2_CODEX_MEDIUM,\n)\n"
  },
  {
    "path": "backend/routes/screenshot.py",
    "content": "import base64\nfrom fastapi import APIRouter, HTTPException\nfrom pydantic import BaseModel\nimport httpx\nfrom urllib.parse import urlparse\n\nrouter = APIRouter()\n\n\ndef normalize_url(url: str) -> str:\n    \"\"\"\n    Normalize URL to ensure it has a proper protocol.\n    If no protocol is specified, default to https://\n    \"\"\"\n    url = url.strip()\n    \n    # Parse the URL\n    parsed = urlparse(url)\n    \n    # Check if we have a scheme\n    if not parsed.scheme:\n        # No scheme, add https://\n        url = f\"https://{url}\"\n    elif parsed.scheme in ['http', 'https']:\n        # Valid scheme, keep as is\n        pass\n    else:\n        # Check if this might be a domain with port (like example.com:8080)\n        # urlparse treats this as scheme:netloc, but we want to handle it as domain:port\n        if ':' in url and not url.startswith(('http://', 'https://', 'ftp://', 'file://')):\n            # Likely a domain:port without protocol\n            url = f\"https://{url}\"\n        else:\n            # Invalid protocol\n            raise ValueError(f\"Unsupported protocol: {parsed.scheme}\")\n    \n    return url\n\n\ndef bytes_to_data_url(image_bytes: bytes, mime_type: str) -> str:\n    base64_image = base64.b64encode(image_bytes).decode(\"utf-8\")\n    return f\"data:{mime_type};base64,{base64_image}\"\n\n\nasync def capture_screenshot(\n    target_url: str, api_key: str, device: str = \"desktop\"\n) -> bytes:\n    api_base_url = \"https://api.screenshotone.com/take\"\n\n    params = {\n        \"access_key\": api_key,\n        \"url\": target_url,\n        \"full_page\": \"true\",\n        \"device_scale_factor\": \"1\",\n        \"format\": \"png\",\n        \"block_ads\": \"true\",\n        \"block_cookie_banners\": \"true\",\n        \"block_trackers\": \"true\",\n        \"cache\": \"false\",\n        \"viewport_width\": \"342\",\n        \"viewport_height\": \"684\",\n    }\n\n    if device == \"desktop\":\n        
params[\"viewport_width\"] = \"1280\"\n        params[\"viewport_height\"] = \"832\"\n\n    async with httpx.AsyncClient(timeout=60) as client:\n        response = await client.get(api_base_url, params=params)\n        if response.status_code == 200 and response.content:\n            return response.content\n        else:\n            raise Exception(\"Error taking screenshot\")\n\n\nclass ScreenshotRequest(BaseModel):\n    url: str\n    apiKey: str\n\n\nclass ScreenshotResponse(BaseModel):\n    url: str\n\n\n@router.post(\"/api/screenshot\")\nasync def app_screenshot(request: ScreenshotRequest):\n    # Extract the URL from the request body\n    url = request.url\n    api_key = request.apiKey\n\n    try:\n        # Normalize the URL\n        normalized_url = normalize_url(url)\n        \n        # Capture screenshot with normalized URL\n        image_bytes = await capture_screenshot(normalized_url, api_key=api_key)\n\n        # Convert the image bytes to a data url\n        data_url = bytes_to_data_url(image_bytes, \"image/png\")\n\n        return ScreenshotResponse(url=data_url)\n    except ValueError as e:\n        # URL normalization failed: the client sent an invalid URL, so return 400\n        raise HTTPException(status_code=400, detail=str(e))\n    except Exception as e:\n        # Handle other errors\n        raise HTTPException(status_code=500, detail=f\"Error capturing screenshot: {str(e)}\")\n"
  },
  {
    "path": "backend/run_evals.py",
    "content": "# Load environment variables first\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\nimport asyncio\nfrom evals.runner import run_image_evals\n\n\nasync def main():\n    await run_image_evals()\n\n\n# async def text_main():\n#     OUTPUT_DIR = EVALS_DIR + \"/outputs\"\n\n#     GENERAL_TEXT_V1 = [\n#         \"Login form\",\n#         \"Simple notification\",\n#         \"button\",\n#         \"saas dashboard\",\n#         \"landing page for barber shop\",\n#     ]\n\n#     tasks: list[Coroutine[Any, Any, str]] = []\n#     for prompt in GENERAL_TEXT_V1:\n#         for n in range(N):  # Generate N tasks for each input\n#             if n == 0:\n#                 task = generate_code_for_text(\n#                     text=prompt,\n#                     stack=STACK,\n#                     model=Llm.CLAUDE_4_5_SONNET_2025_09_29,\n#                 )\n#             else:\n#                 task = generate_code_for_text(\n#                     text=prompt, stack=STACK, model=Llm.GPT_4_1_2025_04_14\n#                 )\n#             tasks.append(task)\n\n#     print(f\"Generating {len(tasks)} codes\")\n\n#     results = await asyncio.gather(*tasks)\n\n#     os.makedirs(OUTPUT_DIR, exist_ok=True)\n\n#     for i, content in enumerate(results):\n#         # Calculate index for filename and output number\n#         eval_index = i // N\n#         output_number = i % N\n#         filename = GENERAL_TEXT_V1[eval_index]\n#         # File name is derived from the original filename in evals with an added output number\n#         output_filename = f\"{os.path.splitext(filename)[0]}_{output_number}.html\"\n#         output_filepath = os.path.join(OUTPUT_DIR, output_filename)\n#         with open(output_filepath, \"w\") as file:\n#             file.write(content)\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n"
  },
  {
    "path": "backend/run_image_generation_evals.py",
    "content": "import asyncio\nimport os\nfrom typing import List, Optional, Literal\nfrom dotenv import load_dotenv\nimport aiohttp\nfrom image_generation.generation import process_tasks\n\nEVALS = [\n    \"Romantic Background\",\n    \"Company logo: A stylized green sprout emerging from a circle\",\n    \"Placeholder image of a PDF cover with abstract design\",\n    \"A complex bubble diagram showing various interconnected features and aspects of FestivalPro, with a large central bubble surrounded by smaller bubbles of different colors representing different categories and functionalities\",\n    \"A vibrant, abstract visualization of the RhythmRise experience ecosystem, featuring interconnected neon elements representing music, technology, and human connection\",\n    \"Banner with text 'LiblibAI学院 课程入口'\",\n    \"Profile picture of Pierre-Louis Labonne\",\n    \"Two hands holding iPhone 14 models with colorful displays\",\n    \"Portrait of a woman with long dark hair smiling at the camera\",\n    \"Threadless logo on a gradient background from light pink to coral\",\n    \"Jordan Schlansky Shows Conan His Favorite Nose Hair Trimmer\",\n    \"Team Coco\",\n    \"Intro to Large Language Models\",\n    \"Andrej Karpathy\",\n    \"He built a $200 million toy company\",\n    \"CNBC International\",\n    \"What will happen in year three of the war?\",\n    \"Channel\",\n    \"This is it\",\n    \"How ASML Dominates Chip Machines\",\n]\n\n# Load environment variables\nload_dotenv()\n\n# Get API keys from environment variables\nOPENAI_API_KEY: Optional[str] = os.getenv(\"OPENAI_API_KEY\")\nREPLICATE_API_TOKEN: Optional[str] = os.getenv(\"REPLICATE_API_TOKEN\")\n\n# Directory to save generated images\nOUTPUT_DIR: str = \"generated_images\"\n\n\nasync def generate_and_save_images(\n    prompts: List[str],\n    model: Literal[\"dalle3\", \"flux\"],\n    api_key: Optional[str],\n) -> None:\n    # Ensure the output directory exists\n    os.makedirs(OUTPUT_DIR, 
exist_ok=True)\n\n    if api_key is None:\n        raise ValueError(f\"API key for {model} is not set in the environment variables\")\n\n    # Generate images\n    results: List[Optional[str]] = await process_tasks(\n        prompts, api_key, None, model=model\n    )\n\n    # Save images to disk\n    async with aiohttp.ClientSession() as session:\n        for i, image_url in enumerate(results):\n            if image_url:\n                # Get the image data\n                async with session.get(image_url) as response:\n                    image_data: bytes = await response.read()\n\n                # Save the image with a filename based on the input eval\n                prefix = \"replicate_\" if model == \"flux\" else \"dalle3_\"\n                filename: str = (\n                    f\"{prefix}{prompts[i][:50].replace(' ', '_').replace(':', '')}.png\"\n                )\n                filepath: str = os.path.join(OUTPUT_DIR, filename)\n                with open(filepath, \"wb\") as f:\n                    f.write(image_data)\n                print(f\"Saved {model} image: {filepath}\")\n            else:\n                print(f\"Failed to generate {model} image for prompt: {prompts[i]}\")\n\n\nasync def main() -> None:\n    # await generate_and_save_images(EVALS, \"dalle3\", OPENAI_API_KEY)\n    await generate_and_save_images(EVALS, \"flux\", REPLICATE_API_TOKEN)\n\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n"
  },
  {
    "path": "backend/start.py",
    "content": "import argparse\n\nimport uvicorn\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"--port\", type=int, default=7001)\n    args = parser.parse_args()\n    uvicorn.run(\"main:app\", port=args.port, reload=True)\n"
  },
  {
    "path": "backend/tests/__init__.py",
    "content": ""
  },
  {
    "path": "backend/tests/test_agent_tool_runtime.py",
    "content": "import pytest\n\nfrom agent.state import AgentFileState\nfrom agent.tools.runtime import AgentToolRuntime\nfrom agent.tools.types import ToolCall\n\n\ndef test_edit_file_returns_structured_result_with_diff() -> None:\n    runtime = AgentToolRuntime(\n        file_state=AgentFileState(\n            path=\"index.html\",\n            content=\"<div>before</div>\\n<p>keep</p>\\n\",\n        ),\n        should_generate_images=False,\n        openai_api_key=None,\n        openai_base_url=None,\n    )\n\n    result = runtime._edit_file(\n        {\n            \"old_text\": \"<div>before</div>\",\n            \"new_text\": \"<div>after</div>\",\n        }\n    )\n\n    assert result.ok is True\n    assert result.updated_content == \"<div>after</div>\\n<p>keep</p>\\n\"\n    assert result.result[\"content\"] == \"Successfully edited file at index.html.\"\n    assert set(result.result[\"details\"].keys()) == {\"diff\", \"firstChangedLine\"}\n    assert result.result[\"details\"][\"firstChangedLine\"] == 1\n    assert \"--- index.html\" in result.result[\"details\"][\"diff\"]\n    assert \"+++ index.html\" in result.result[\"details\"][\"diff\"]\n    assert \"-<div>before</div>\" in result.result[\"details\"][\"diff\"]\n    assert \"+<div>after</div>\" in result.result[\"details\"][\"diff\"]\n    assert result.summary[\"firstChangedLine\"] == 1\n    assert result.summary[\"diff\"] == result.result[\"details\"][\"diff\"]\n\n\n@pytest.mark.asyncio\nasync def test_execute_edit_file_uses_updated_result_shape() -> None:\n    runtime = AgentToolRuntime(\n        file_state=AgentFileState(path=\"index.html\", content=\"<main>old</main>\"),\n        should_generate_images=False,\n        openai_api_key=None,\n        openai_base_url=None,\n    )\n\n    result = await runtime.execute(\n        ToolCall(\n            id=\"call-1\",\n            name=\"edit_file\",\n            arguments={\"old_text\": \"old\", \"new_text\": \"new\"},\n        )\n    )\n\n    # execute() 
is sync for edit_file and should preserve the structured payload.\n    assert result.ok is True\n    assert result.result[\"content\"] == \"Successfully edited file at index.html.\"\n    assert set(result.result[\"details\"].keys()) == {\"diff\", \"firstChangedLine\"}\n    assert \"--- index.html\" in result.result[\"details\"][\"diff\"]\n"
  },
  {
    "path": "backend/tests/test_agent_tools.py",
    "content": "from agent.tools import canonical_tool_definitions\n\n\ndef test_canonical_tool_definitions_include_generate_images_when_enabled() -> None:\n    tool_names = [tool.name for tool in canonical_tool_definitions(True)]\n    assert \"generate_images\" in tool_names\n\n\ndef test_canonical_tool_definitions_exclude_generate_images_when_disabled() -> None:\n    tool_names = [tool.name for tool in canonical_tool_definitions(False)]\n    assert \"generate_images\" not in tool_names\n\n\ndef test_edit_file_tool_description_matches_runtime_output_shape() -> None:\n    edit_tool = next(\n        tool for tool in canonical_tool_definitions(True) if tool.name == \"edit_file\"\n    )\n\n    assert \"success message\" in edit_tool.description\n    assert \"unified diff\" in edit_tool.description\n"
  },
  {
    "path": "backend/tests/test_batching.py",
    "content": "import asyncio\n\nimport pytest\n\nfrom image_generation import generation\nfrom agent.tools.runtime import AgentToolRuntime\nfrom agent.tools.types import ToolCall\nfrom agent.state import AgentFileState\n\n\n@pytest.mark.asyncio\nasync def test_process_tasks_batches_replicate_calls(\n    monkeypatch: pytest.MonkeyPatch,\n) -> None:\n    monkeypatch.setattr(generation, \"REPLICATE_BATCH_SIZE\", 3)\n\n    concurrent = 0\n    max_concurrent = 0\n\n    async def tracking_generate(prompt: str, api_key: str) -> str:\n        nonlocal concurrent, max_concurrent\n        concurrent += 1\n        max_concurrent = max(max_concurrent, concurrent)\n        await asyncio.sleep(0.01)\n        concurrent -= 1\n        return f\"url-for-{prompt}\"\n\n    monkeypatch.setattr(generation, \"generate_image_replicate\", tracking_generate)\n\n    prompts = [f\"prompt-{i}\" for i in range(7)]\n    results = await generation.process_tasks(prompts, \"key\", None, \"flux\")\n\n    assert len(results) == 7\n    assert results == [f\"url-for-prompt-{i}\" for i in range(7)]\n    assert max_concurrent <= 3\n\n\n@pytest.mark.asyncio\nasync def test_remove_background_batches_calls(\n    monkeypatch: pytest.MonkeyPatch,\n) -> None:\n    monkeypatch.setattr(\"agent.tools.runtime.REPLICATE_API_KEY\", \"fake-key\")\n\n    concurrent = 0\n    max_concurrent = 0\n\n    async def tracking_remove_bg(image_url: str, api_token: str) -> str:\n        nonlocal concurrent, max_concurrent\n        concurrent += 1\n        max_concurrent = max(max_concurrent, concurrent)\n        await asyncio.sleep(0.01)\n        concurrent -= 1\n        return f\"nobg-{image_url}\"\n\n    monkeypatch.setattr(\"agent.tools.runtime.remove_background\", tracking_remove_bg)\n\n    runtime = AgentToolRuntime(\n        file_state=AgentFileState(),\n        should_generate_images=True,\n        openai_api_key=None,\n        openai_base_url=None,\n    )\n\n    urls = [f\"https://example.com/img-{i}.png\" for i in 
range(25)]\n\n    result = await runtime.execute(\n        ToolCall(id=\"test\", name=\"remove_background\", arguments={\"image_urls\": urls})\n    )\n\n    assert result.ok\n    assert len(result.result[\"images\"]) == 25\n    assert all(r[\"status\"] == \"ok\" for r in result.result[\"images\"])\n    assert max_concurrent <= 20\n"
  },
  {
    "path": "backend/tests/test_codegen_utils.py",
    "content": "from codegen.utils import extract_html_content\n\n\ndef test_extract_html_content_from_wrapped_file_tag() -> None:\n    text = '<file path=\"index.html\">\\n<html><body><p>Hello</p></body></html>\\n</file>'\n\n    result = extract_html_content(text)\n\n    assert result == \"<html><body><p>Hello</p></body></html>\"\n"
  },
  {
    "path": "backend/tests/test_evals_openai_input_compare.py",
    "content": "import pytest\nfrom fastapi import HTTPException\n\nfrom routes.evals import OpenAIInputCompareRequest, compare_openai_inputs_for_evals\n\n\n@pytest.mark.asyncio\nasync def test_compare_openai_inputs_for_evals_returns_first_difference() -> None:\n    response = await compare_openai_inputs_for_evals(\n        OpenAIInputCompareRequest(\n            left_json=(\n                '{\"input\":[{\"role\":\"system\",\"content\":\"A\"},'\n                '{\"role\":\"user\",\"content\":\"Build dashboard\"}]}'\n            ),\n            right_json=(\n                '{\"input\":[{\"role\":\"system\",\"content\":\"A\"},'\n                '{\"role\":\"user\",\"content\":\"Build landing page\"}]}'\n            ),\n        )\n    )\n\n    assert response.common_prefix_items == 1\n    assert response.left_item_count == 2\n    assert response.right_item_count == 2\n    assert response.difference is not None\n    assert response.difference.path == \"input[1].content\"\n    assert response.difference.left_value == \"Build dashboard\"\n    assert response.difference.right_value == \"Build landing page\"\n\n\n@pytest.mark.asyncio\nasync def test_compare_openai_inputs_for_evals_rejects_invalid_json() -> None:\n    with pytest.raises(HTTPException) as error_info:\n        await compare_openai_inputs_for_evals(\n            OpenAIInputCompareRequest(\n                left_json='{\"input\": [',\n                right_json='{\"input\": []}',\n            )\n        )\n\n    assert error_info.value.status_code == 400\n    assert \"Invalid left JSON\" in error_info.value.detail\n"
  },
  {
    "path": "backend/tests/test_image_generation_replicate.py",
    "content": "import pytest\n\nfrom image_generation import replicate\n\n\ndef test_extract_output_url_from_string() -> None:\n    assert (\n        replicate._extract_output_url(\"https://example.com/image.png\", \"test\")\n        == \"https://example.com/image.png\"\n    )\n\n\ndef test_extract_output_url_from_dict() -> None:\n    assert (\n        replicate._extract_output_url({\"url\": \"https://example.com/image.png\"}, \"test\")\n        == \"https://example.com/image.png\"\n    )\n\n\ndef test_extract_output_url_from_list() -> None:\n    assert (\n        replicate._extract_output_url([\"https://example.com/image.png\"], \"test\")\n        == \"https://example.com/image.png\"\n    )\n\n\ndef test_extract_output_url_from_list_item_dict() -> None:\n    assert (\n        replicate._extract_output_url(\n            [{\"url\": \"https://example.com/image.png\"}], \"test\"\n        )\n        == \"https://example.com/image.png\"\n    )\n\n\ndef test_extract_output_url_invalid_raises() -> None:\n    with pytest.raises(ValueError):\n        replicate._extract_output_url([], \"test\")\n\n\n@pytest.mark.asyncio\nasync def test_call_replicate_uses_flux_model(monkeypatch: pytest.MonkeyPatch) -> None:\n    captured: dict[str, object] = {}\n\n    async def fake_call_replicate_model(\n        model_path: str, input: dict[str, object], api_token: str\n    ) -> list[str]:\n        captured[\"model_path\"] = model_path\n        captured[\"input\"] = input\n        captured[\"api_token\"] = api_token\n        return [\"https://example.com/flux.png\"]\n\n    monkeypatch.setattr(replicate, \"call_replicate_model\", fake_call_replicate_model)\n\n    result = await replicate.call_replicate({\"prompt\": \"test\", \"seed\": 1}, \"token-123\")\n\n    assert result == \"https://example.com/flux.png\"\n    assert captured[\"model_path\"] == replicate.FLUX_MODEL_PATH\n    assert captured[\"api_token\"] == \"token-123\"\n\n\n@pytest.mark.asyncio\nasync def 
test_remove_background_uses_version_and_normalizes_output(\n    monkeypatch: pytest.MonkeyPatch,\n) -> None:\n    captured: dict[str, object] = {}\n\n    async def fake_call_replicate_version(\n        version: str, input: dict[str, object], api_token: str\n    ) -> dict[str, str]:\n        captured[\"version\"] = version\n        captured[\"input\"] = input\n        captured[\"api_token\"] = api_token\n        return {\"url\": \"https://example.com/no-bg.png\"}\n\n    monkeypatch.setattr(replicate, \"call_replicate_version\", fake_call_replicate_version)\n\n    result = await replicate.remove_background(\"https://example.com/input.png\", \"token\")\n\n    assert result == \"https://example.com/no-bg.png\"\n    assert captured[\"version\"] == replicate.REMOVE_BACKGROUND_VERSION\n    assert captured[\"api_token\"] == \"token\"\n"
  },
  {
    "path": "backend/tests/test_model_selection.py",
    "content": "import pytest\nfrom unittest.mock import AsyncMock\nfrom routes.generate_code import ModelSelectionStage\nfrom llm import Llm\n\n\nclass TestModelSelectionAllKeys:\n    \"\"\"Test model selection when Gemini, Anthropic, and OpenAI API keys are present.\"\"\"\n\n    def setup_method(self):\n        \"\"\"Set up test fixtures.\"\"\"\n        mock_throw_error = AsyncMock()\n        self.model_selector = ModelSelectionStage(mock_throw_error)\n\n    @pytest.mark.asyncio\n    async def test_gemini_anthropic_create(self):\n        \"\"\"All keys: fixed order for four variants.\"\"\"\n        models = await self.model_selector.select_models(\n            generation_type=\"create\",\n            input_mode=\"text\",\n            openai_api_key=\"key\",\n            anthropic_api_key=\"key\",\n            gemini_api_key=\"key\",\n        )\n\n        expected = [\n            Llm.GEMINI_3_FLASH_PREVIEW_MINIMAL,\n            Llm.GPT_5_2_CODEX_HIGH,\n            Llm.CLAUDE_OPUS_4_6,\n            Llm.GEMINI_3_1_PRO_PREVIEW_LOW,\n        ]\n        assert models == expected\n\n    @pytest.mark.asyncio\n    async def test_gemini_anthropic_update_text(self):\n        \"\"\"All keys text update: uses two fast edit variants.\"\"\"\n        models = await self.model_selector.select_models(\n            generation_type=\"update\",\n            input_mode=\"text\",\n            openai_api_key=\"key\",\n            anthropic_api_key=\"key\",\n            gemini_api_key=\"key\",\n        )\n\n        expected = [\n            Llm.GEMINI_3_FLASH_PREVIEW_MINIMAL,\n            Llm.GPT_5_4_2026_03_05_LOW,\n        ]\n        assert models == expected\n\n    @pytest.mark.asyncio\n    async def test_gemini_anthropic_update(self):\n        \"\"\"All keys image update: uses two fast edit variants.\"\"\"\n        models = await self.model_selector.select_models(\n            generation_type=\"update\",\n            input_mode=\"image\",\n            openai_api_key=\"key\",\n       
     anthropic_api_key=\"key\",\n            gemini_api_key=\"key\",\n        )\n\n        expected = [\n            Llm.GEMINI_3_FLASH_PREVIEW_MINIMAL,\n            Llm.GPT_5_4_2026_03_05_LOW,\n        ]\n        assert models == expected\n\n    @pytest.mark.asyncio\n    async def test_video_create_prefers_gemini_minimal_then_3_1_high(self):\n        \"\"\"Video create always uses two Gemini variants in fixed order.\"\"\"\n        models = await self.model_selector.select_models(\n            generation_type=\"create\",\n            input_mode=\"video\",\n            openai_api_key=\"key\",\n            anthropic_api_key=\"key\",\n            gemini_api_key=\"key\",\n        )\n\n        expected = [\n            Llm.GEMINI_3_FLASH_PREVIEW_MINIMAL,\n            Llm.GEMINI_3_1_PRO_PREVIEW_HIGH,\n        ]\n        assert models == expected\n\n    @pytest.mark.asyncio\n    async def test_video_update_prefers_gemini_minimal_then_3_1_high(self):\n        \"\"\"Video update always uses the same two Gemini variants as video create.\"\"\"\n        models = await self.model_selector.select_models(\n            generation_type=\"update\",\n            input_mode=\"video\",\n            openai_api_key=\"key\",\n            anthropic_api_key=\"key\",\n            gemini_api_key=\"key\",\n        )\n\n        expected = [\n            Llm.GEMINI_3_FLASH_PREVIEW_MINIMAL,\n            Llm.GEMINI_3_1_PRO_PREVIEW_HIGH,\n        ]\n        assert models == expected\n\n\nclass TestModelSelectionOpenAIAnthropic:\n    \"\"\"Test model selection when only OpenAI and Anthropic keys are present.\"\"\"\n\n    def setup_method(self):\n        \"\"\"Set up test fixtures.\"\"\"\n        mock_throw_error = AsyncMock()\n        self.model_selector = ModelSelectionStage(mock_throw_error)\n\n    @pytest.mark.asyncio\n    async def test_openai_anthropic(self):\n        \"\"\"OpenAI + Anthropic: Claude Opus 4.6, GPT 5.2 Codex (high/medium), cycling\"\"\"\n        models = await 
self.model_selector.select_models(\n            generation_type=\"create\",\n            input_mode=\"text\",\n            openai_api_key=\"key\",\n            anthropic_api_key=\"key\",\n            gemini_api_key=None,\n        )\n\n        expected = [\n            Llm.CLAUDE_OPUS_4_6,\n            Llm.GPT_5_2_CODEX_HIGH,\n            Llm.GPT_5_2_CODEX_MEDIUM,\n            Llm.CLAUDE_OPUS_4_6,\n        ]\n        assert models == expected\n\n\nclass TestModelSelectionAnthropicOnly:\n    \"\"\"Test model selection when only Anthropic key is present.\"\"\"\n\n    def setup_method(self):\n        \"\"\"Set up test fixtures.\"\"\"\n        mock_throw_error = AsyncMock()\n        self.model_selector = ModelSelectionStage(mock_throw_error)\n\n    @pytest.mark.asyncio\n    async def test_anthropic_only(self):\n        \"\"\"Anthropic only: Claude Opus 4.6 and Claude Sonnet 4.6 cycling\"\"\"\n        models = await self.model_selector.select_models(\n            generation_type=\"create\",\n            input_mode=\"text\",\n            openai_api_key=None,\n            anthropic_api_key=\"key\",\n            gemini_api_key=None,\n        )\n\n        expected = [\n            Llm.CLAUDE_OPUS_4_6,\n            Llm.CLAUDE_SONNET_4_6,\n            Llm.CLAUDE_OPUS_4_6,\n            Llm.CLAUDE_SONNET_4_6,\n        ]\n        assert models == expected\n\n\nclass TestModelSelectionOpenAIOnly:\n    \"\"\"Test model selection when only OpenAI key is present.\"\"\"\n\n    def setup_method(self):\n        \"\"\"Set up test fixtures.\"\"\"\n        mock_throw_error = AsyncMock()\n        self.model_selector = ModelSelectionStage(mock_throw_error)\n\n    @pytest.mark.asyncio\n    async def test_openai_only(self):\n        \"\"\"OpenAI only: GPT 5.2 Codex (high/medium) only\"\"\"\n        models = await self.model_selector.select_models(\n            generation_type=\"create\",\n            input_mode=\"text\",\n            openai_api_key=\"key\",\n            
anthropic_api_key=None,\n            gemini_api_key=None,\n        )\n\n        expected = [\n            Llm.GPT_5_2_CODEX_HIGH,\n            Llm.GPT_5_2_CODEX_MEDIUM,\n            Llm.GPT_5_2_CODEX_HIGH,\n            Llm.GPT_5_2_CODEX_MEDIUM,\n        ]\n        assert models == expected\n\n\nclass TestModelSelectionNoKeys:\n    \"\"\"Test model selection when no API keys are present.\"\"\"\n\n    def setup_method(self):\n        \"\"\"Set up test fixtures.\"\"\"\n        mock_throw_error = AsyncMock()\n        self.model_selector = ModelSelectionStage(mock_throw_error)\n\n    @pytest.mark.asyncio\n    async def test_no_keys_raises_error(self):\n        \"\"\"No keys: Should raise an exception\"\"\"\n        with pytest.raises(Exception, match=\"No API key\"):\n            await self.model_selector.select_models(\n                generation_type=\"create\",\n                input_mode=\"text\",\n                openai_api_key=None,\n                anthropic_api_key=None,\n                gemini_api_key=None,\n            )\n"
  },
  {
    "path": "backend/tests/test_openai_input_compare.py",
    "content": "from typing import Any\n\nfrom fs_logging.openai_input_compare import (\n    compare_openai_inputs,\n    format_openai_input_comparison,\n)\n\n\ndef test_compare_openai_inputs_returns_none_for_identical_inputs() -> None:\n    payload: dict[str, Any] = {\n        \"input\": [\n            {\"role\": \"system\", \"content\": \"You are a coding agent.\"},\n            {\"role\": \"user\", \"content\": \"Build a dashboard.\"},\n        ]\n    }\n\n    comparison = compare_openai_inputs(payload, payload)\n\n    assert comparison.common_prefix_items == 2\n    assert comparison.left_item_count == 2\n    assert comparison.right_item_count == 2\n    assert comparison.difference is None\n    assert \"difference=none\" in format_openai_input_comparison(comparison)\n\n\ndef test_compare_openai_inputs_finds_first_different_block_and_field() -> None:\n    left_payload: dict[str, Any] = {\n        \"input\": [\n            {\"role\": \"system\", \"content\": \"You are a coding agent.\"},\n            {\n                \"role\": \"user\",\n                \"content\": [\n                    {\"type\": \"input_text\", \"text\": \"Build a dashboard.\"},\n                    {\n                        \"type\": \"input_image\",\n                        \"image_url\": \"data:image/png;base64,left\",\n                        \"detail\": \"original\",\n                    },\n                ],\n            },\n        ]\n    }\n    right_payload: dict[str, Any] = {\n        \"input\": [\n            {\"role\": \"system\", \"content\": \"You are a coding agent.\"},\n            {\n                \"role\": \"user\",\n                \"content\": [\n                    {\"type\": \"input_text\", \"text\": \"Build a dashboard.\"},\n                    {\n                        \"type\": \"input_image\",\n                        \"image_url\": \"data:image/png;base64,right\",\n                        \"detail\": \"original\",\n                    },\n                ],\n  
          },\n        ]\n    }\n\n    comparison = compare_openai_inputs(left_payload, right_payload)\n\n    assert comparison.common_prefix_items == 1\n    assert comparison.difference is not None\n    assert comparison.difference.item_index == 1\n    assert comparison.difference.path == \"input[1].content[1].image_url\"\n    assert comparison.difference.left_value == \"data:image/png;base64,left\"\n    assert comparison.difference.right_value == \"data:image/png;base64,right\"\n\n\ndef test_compare_openai_inputs_accepts_raw_input_arrays() -> None:\n    left_input: list[dict[str, Any]] = [\n        {\"role\": \"system\", \"content\": \"You are a coding agent.\"},\n        {\"role\": \"user\", \"content\": \"Build a dashboard.\"},\n    ]\n    right_input: list[dict[str, Any]] = [\n        {\"role\": \"system\", \"content\": \"You are a coding agent.\"},\n        {\"role\": \"user\", \"content\": \"Build a landing page.\"},\n    ]\n\n    comparison = compare_openai_inputs(left_input, right_input)\n    formatted = format_openai_input_comparison(comparison)\n\n    assert comparison.difference is not None\n    assert comparison.difference.path == \"input[1].content\"\n    assert \"first_different_item_index=1\" in formatted\n    assert \"first_different_path=input[1].content\" in formatted\n"
  },
  {
    "path": "backend/tests/test_openai_provider_session.py",
    "content": "import copy\nfrom typing import Any\n\nimport pytest\n\nfrom agent.providers.base import ExecutedToolCall, ProviderTurn\nfrom agent.providers.openai import OpenAIProviderSession\nfrom agent.tools import ToolCall, ToolExecutionResult\nfrom llm import Llm\n\n\nclass _EmptyAsyncStream:\n    def __aiter__(self) -> \"_EmptyAsyncStream\":\n        return self\n\n    async def __anext__(self) -> object:\n        raise StopAsyncIteration\n\n\nclass _FakeResponses:\n    def __init__(self) -> None:\n        self.calls: list[dict[str, Any]] = []\n\n    async def create(self, **kwargs: Any) -> _EmptyAsyncStream:\n        self.calls.append(copy.deepcopy(kwargs))\n        return _EmptyAsyncStream()\n\n\nclass _FakeOpenAIClient:\n    def __init__(self) -> None:\n        self.responses = _FakeResponses()\n\n    async def close(self) -> None:\n        return None\n\n\nasync def _noop_event_sink(_: Any) -> None:\n    return None\n\n\ndef _test_tools() -> list[dict[str, Any]]:\n    return [\n        {\n            \"type\": \"function\",\n            \"name\": \"edit_file\",\n            \"description\": \"Apply an edit.\",\n            \"parameters\": {\n                \"type\": \"object\",\n                \"properties\": {\n                    \"path\": {\"type\": \"string\"},\n                },\n                \"required\": [\"path\"],\n            },\n            \"strict\": True,\n        }\n    ]\n\n\n@pytest.mark.asyncio\nasync def test_openai_provider_session_omits_prompt_cache_key_across_turns() -> None:\n    client = _FakeOpenAIClient()\n    session = OpenAIProviderSession(\n        client=client,  # type: ignore[arg-type]\n        model=Llm.GPT_5_2_CODEX_HIGH,\n        prompt_messages=[{\"role\": \"user\", \"content\": \"Build a landing page.\"}],\n        tools=_test_tools(),\n    )\n\n    first_turn = await session.stream_turn(_noop_event_sink)\n    session.append_tool_results(\n        ProviderTurn(\n            
assistant_text=first_turn.assistant_text,\n            tool_calls=[],\n            assistant_turn=[\n                {\n                    \"type\": \"function_call\",\n                    \"call_id\": \"call-1\",\n                    \"name\": \"edit_file\",\n                    \"arguments\": '{\"path\":\"index.html\"}',\n                }\n            ],\n        ),\n        [\n            ExecutedToolCall(\n                tool_call=ToolCall(\n                    id=\"call-1\",\n                    name=\"edit_file\",\n                    arguments={\"path\": \"index.html\"},\n                ),\n                result=ToolExecutionResult(\n                    ok=True,\n                    result={\n                        \"content\": \"Successfully edited file at index.html.\",\n                        \"details\": {\n                            \"diff\": \"--- index.html\\n+++ index.html\\n@@ -1 +1 @@\\n-a\\n+b\\n\",\n                            \"firstChangedLine\": 1,\n                        },\n                    },\n                    summary={\"content\": \"Successfully edited file at index.html.\"},\n                ),\n            )\n        ],\n    )\n    await session.stream_turn(_noop_event_sink)\n\n    first_call = client.responses.calls[0]\n    second_call = client.responses.calls[1]\n    first_input = first_call[\"input\"]\n    second_input = second_call[\"input\"]\n\n    assert \"prompt_cache_key\" not in first_call\n    assert \"prompt_cache_key\" not in second_call\n    assert \"prompt_cache_retention\" not in first_call\n    assert \"prompt_cache_retention\" not in second_call\n    assert isinstance(first_input, list)\n    assert isinstance(second_input, list)\n    assert len(second_input) > len(first_input)\n\n\n@pytest.mark.asyncio\nasync def test_openai_provider_session_omits_prompt_cache_key_for_all_prompts() -> None:\n    first_client = _FakeOpenAIClient()\n    second_client = _FakeOpenAIClient()\n    different_prompt_client = 
_FakeOpenAIClient()\n\n    first_session = OpenAIProviderSession(\n        client=first_client,  # type: ignore[arg-type]\n        model=Llm.GPT_5_2_CODEX_HIGH,\n        prompt_messages=[{\"role\": \"user\", \"content\": \"Build a landing page.\"}],\n        tools=_test_tools(),\n    )\n    second_session = OpenAIProviderSession(\n        client=second_client,  # type: ignore[arg-type]\n        model=Llm.GPT_5_2_CODEX_HIGH,\n        prompt_messages=[{\"role\": \"user\", \"content\": \"Build a landing page.\"}],\n        tools=_test_tools(),\n    )\n    different_prompt_session = OpenAIProviderSession(\n        client=different_prompt_client,  # type: ignore[arg-type]\n        model=Llm.GPT_5_2_CODEX_HIGH,\n        prompt_messages=[{\"role\": \"user\", \"content\": \"Build a dashboard.\"}],\n        tools=_test_tools(),\n    )\n\n    await first_session.stream_turn(_noop_event_sink)\n    await second_session.stream_turn(_noop_event_sink)\n    await different_prompt_session.stream_turn(_noop_event_sink)\n\n    assert \"prompt_cache_key\" not in first_client.responses.calls[0]\n    assert \"prompt_cache_key\" not in second_client.responses.calls[0]\n    assert \"prompt_cache_key\" not in different_prompt_client.responses.calls[0]\n\n\n@pytest.mark.asyncio\nasync def test_openai_provider_session_uses_gpt_5_4_none_reasoning_effort() -> None:\n    client = _FakeOpenAIClient()\n    session = OpenAIProviderSession(\n        client=client,  # type: ignore[arg-type]\n        model=Llm.GPT_5_4_2026_03_05_NONE,\n        prompt_messages=[{\"role\": \"user\", \"content\": \"Build a dashboard.\"}],\n        tools=_test_tools(),\n    )\n\n    await session.stream_turn(_noop_event_sink)\n\n    first_call = client.responses.calls[0]\n\n    assert first_call[\"model\"] == \"gpt-5.4-2026-03-05\"\n    assert first_call[\"prompt_cache_retention\"] == \"24h\"\n    assert first_call[\"reasoning\"] == {\"effort\": \"none\", \"summary\": \"auto\"}\n\n\n@pytest.mark.asyncio\nasync def 
test_openai_provider_session_uses_gpt_5_4_high_reasoning_effort() -> None:\n    client = _FakeOpenAIClient()\n    session = OpenAIProviderSession(\n        client=client,  # type: ignore[arg-type]\n        model=Llm.GPT_5_4_2026_03_05_HIGH,\n        prompt_messages=[{\"role\": \"user\", \"content\": \"Build a dashboard.\"}],\n        tools=_test_tools(),\n    )\n\n    await session.stream_turn(_noop_event_sink)\n\n    first_call = client.responses.calls[0]\n\n    assert first_call[\"model\"] == \"gpt-5.4-2026-03-05\"\n    assert first_call[\"prompt_cache_retention\"] == \"24h\"\n    assert first_call[\"reasoning\"] == {\"effort\": \"high\", \"summary\": \"auto\"}\n"
  },
  {
    "path": "backend/tests/test_openai_reasoning_parser.py",
    "content": "import pytest\n\nfrom agent.providers.openai import (\n    OpenAIResponsesParseState,\n    _convert_message_to_responses_input,\n    parse_event,\n)\nfrom agent.providers.types import StreamEvent\n\n\n@pytest.mark.asyncio\nasync def test_reasoning_summary_part_skipped_after_summary_delta() -> None:\n    state = OpenAIResponsesParseState()\n    events: list[StreamEvent] = []\n\n    async def on_event(event: StreamEvent) -> None:\n        events.append(event)\n\n    await parse_event(\n        {\"type\": \"response.reasoning_summary_text.delta\", \"delta\": \"Planning step.\"},\n        state,\n        on_event,\n    )\n    await parse_event(\n        {\n            \"type\": \"response.reasoning_summary_part.done\",\n            \"part\": {\"text\": \"Planning step.\"},\n        },\n        state,\n        on_event,\n    )\n\n    thinking_text = [event.text for event in events if event.type == \"thinking_delta\"]\n    assert thinking_text == [\"Planning step.\"]\n\n\n@pytest.mark.asyncio\nasync def test_reasoning_summary_part_added_and_done_emits_once() -> None:\n    state = OpenAIResponsesParseState()\n    events: list[StreamEvent] = []\n\n    async def on_event(event: StreamEvent) -> None:\n        events.append(event)\n\n    await parse_event(\n        {\n            \"type\": \"response.reasoning_summary_part.added\",\n            \"part\": {\"text\": \"Refining layout and assets.\"},\n        },\n        state,\n        on_event,\n    )\n    await parse_event(\n        {\n            \"type\": \"response.reasoning_summary_part.done\",\n            \"part\": {\"text\": \"Refining layout and assets.\"},\n        },\n        state,\n        on_event,\n    )\n\n    thinking_text = [event.text for event in events if event.type == \"thinking_delta\"]\n    assert thinking_text == [\"Refining layout and assets.\"]\n\n\ndef test_convert_image_url_defaults_to_high_detail() -> None:\n    message = {\n        \"role\": \"user\",\n        \"content\": [\n    
        {\"type\": \"image_url\", \"image_url\": {\"url\": \"data:image/png;base64,abc\"}},\n        ],\n    }\n    result = _convert_message_to_responses_input(message)  # type: ignore\n    image_part = result[\"content\"][0]\n    assert image_part[\"detail\"] == \"high\"\n\n\ndef test_convert_image_url_preserves_explicit_detail() -> None:\n    message = {\n        \"role\": \"user\",\n        \"content\": [\n            {\n                \"type\": \"image_url\",\n                \"image_url\": {\"url\": \"data:image/png;base64,abc\", \"detail\": \"low\"},\n            },\n        ],\n    }\n    result = _convert_message_to_responses_input(message)  # type: ignore\n    image_part = result[\"content\"][0]\n    assert image_part[\"detail\"] == \"low\"\n"
  },
  {
    "path": "backend/tests/test_openai_turn_input_logging.py",
    "content": "from pathlib import Path\n\nfrom agent.providers.token_usage import TokenUsage\nfrom fs_logging.openai_turn_inputs import OpenAITurnInputLogger\nfrom llm import Llm\n\n\ndef test_openai_turn_input_logger_writes_html_report(tmp_path, monkeypatch) -> None:\n    monkeypatch.setenv(\"LOGS_PATH\", str(tmp_path))\n\n    logger = OpenAITurnInputLogger(model=Llm.GPT_5_2_CODEX_LOW, enabled=True)\n    logger.record_turn_input(\n        [\n            {\n                \"role\": \"user\",\n                \"content\": \"Build this page\",\n            },\n            {\n                \"type\": \"function_call\",\n                \"name\": \"read_file\",\n                \"call_id\": \"call-1\",\n                \"arguments\": '{\"path\":\"/tmp/example.txt\"}',\n            },\n        ]\n    )\n    logger.record_turn_usage(\n        TokenUsage(\n            input=1200,\n            output=300,\n            cache_read=600,\n            total=2100,\n        )\n    )\n\n    report_path = logger.write_html_report()\n\n    assert report_path is not None\n    report = Path(report_path)\n    assert report.exists()\n    assert report.parent == tmp_path / \"run_logs\"\n\n    html = report.read_text(encoding=\"utf-8\")\n    assert \"OpenAI Turn Input Report\" in html\n    assert \"Turn 1 (items=2)\" in html\n    assert \"Build this page\" in html\n    assert \"read_file\" in html\n    assert \"Input tokens\" in html\n    assert \"1200\" in html\n    assert \"Cache hit rate\" in html\n    assert \"33.33%\" in html\n    assert \"Cost\" in html\n    assert \"$\" in html\n\n\ndef test_openai_turn_input_logger_preserves_full_large_payloads(\n    tmp_path, monkeypatch\n) -> None:\n    monkeypatch.setenv(\"LOGS_PATH\", str(tmp_path))\n\n    logger = OpenAITurnInputLogger(model=Llm.GPT_5_3_CODEX_LOW, enabled=True)\n    logger.record_turn_input(\n        [\n            {\n                \"role\": \"user\",\n                \"content\": [{\"type\": \"input_text\", \"text\": 
\"BEGIN-\" + (\"x\" * 450) + \"-END\"}],\n            }\n        ]\n    )\n\n    report_path = logger.write_html_report()\n\n    assert report_path is not None\n    html = Path(report_path).read_text(encoding=\"utf-8\")\n    assert \"Usage unavailable for this turn.\" in html\n    assert \"Raw JSON payload\" in html\n    assert \"string (460 chars)\" in html\n    assert \"BEGIN-\" in html\n    assert \"-END\" in html\n    assert \"truncated 50 chars\" not in html\n\n\ndef test_openai_turn_input_logger_includes_request_payload(\n    tmp_path, monkeypatch\n) -> None:\n    monkeypatch.setenv(\"LOGS_PATH\", str(tmp_path))\n\n    logger = OpenAITurnInputLogger(model=Llm.GPT_5_2_CODEX_HIGH, enabled=True)\n    logger.record_turn_input(\n        [\n            {\n                \"role\": \"user\",\n                \"content\": \"Build this page\",\n            }\n        ],\n        request_payload={\n            \"model\": \"gpt-5.2-codex\",\n            \"input\": [{\"role\": \"user\", \"content\": \"Build this page\"}],\n        },\n    )\n\n    report_path = logger.write_html_report()\n\n    assert report_path is not None\n    html = Path(report_path).read_text(encoding=\"utf-8\")\n    assert \"Request payload\" in html\n    assert \"Copy input JSON\" in html\n    assert \"request-input-turn-1\" in html\n\n\ndef test_openai_turn_input_logger_disabled_writes_nothing(tmp_path, monkeypatch) -> None:\n    monkeypatch.setenv(\"LOGS_PATH\", str(tmp_path))\n\n    logger = OpenAITurnInputLogger(model=Llm.GPT_5_2_CODEX_LOW)\n    logger.record_turn_input([{\"role\": \"user\", \"content\": \"Build this page\"}])\n    logger.record_turn_usage(TokenUsage(input=100, output=50, total=150))\n\n    report_path = logger.write_html_report()\n\n    assert report_path is None\n    assert not (tmp_path / \"run_logs\").exists()\n\n\ndef test_openai_turn_input_logger_summarizes_function_call_output(\n    tmp_path, monkeypatch\n) -> None:\n    monkeypatch.setenv(\"LOGS_PATH\", 
str(tmp_path))\n\n    logger = OpenAITurnInputLogger(model=Llm.GPT_5_2_CODEX_LOW, enabled=True)\n    logger.record_turn_input(\n        [\n            {\n                \"type\": \"function_call_output\",\n                \"call_id\": \"call-1\",\n                \"output\": (\n                    '{\"content\":\"Successfully edited file at index.html.\",'\n                    '\"details\":{\"diff\":\"--- index.html\\\\n+++ index.html\\\\n@@ -1 +1 @@\\\\n-a\\\\n+b\\\\n\",'\n                    '\"firstChangedLine\":1}}'\n                ),\n            }\n        ]\n    )\n\n    report_path = logger.write_html_report()\n\n    assert report_path is not None\n    html = Path(report_path).read_text(encoding=\"utf-8\")\n    assert \"type=function_call_output call_id=call-1\" in html\n    assert \"path=index.html\" in html\n    assert \"first_changed_line=1\" in html\n    assert \"diff_chars=\" in html\n    assert 'preview=&#x27;{&quot;content&quot;:' not in html\n"
  },
  {
    "path": "backend/tests/test_parameter_extraction_stage.py",
    "content": "from unittest.mock import AsyncMock\n\nimport pytest\n\nfrom routes.generate_code import ParameterExtractionStage\n\n\n@pytest.mark.asyncio\nasync def test_extracts_gemini_api_key_from_settings_dialog() -> None:\n    stage = ParameterExtractionStage(AsyncMock())\n\n    extracted = await stage.extract_and_validate(\n        {\n            \"generatedCodeConfig\": \"html_tailwind\",\n            \"inputMode\": \"text\",\n            \"openAiApiKey\": \"\",\n            \"anthropicApiKey\": \"\",\n            \"geminiApiKey\": \"gemini-from-ui\",\n            \"prompt\": {\"text\": \"hello\"},\n        }\n    )\n\n    assert extracted.gemini_api_key == \"gemini-from-ui\"\n\n\n@pytest.mark.asyncio\nasync def test_extracts_gemini_api_key_from_env_when_not_in_request(monkeypatch: pytest.MonkeyPatch) -> None:\n    monkeypatch.setattr(\"routes.generate_code.GEMINI_API_KEY\", \"gemini-from-env\")\n    stage = ParameterExtractionStage(AsyncMock())\n\n    extracted = await stage.extract_and_validate(\n        {\n            \"generatedCodeConfig\": \"html_tailwind\",\n            \"inputMode\": \"text\",\n            \"prompt\": {\"text\": \"hello\"},\n        }\n    )\n\n    assert extracted.gemini_api_key == \"gemini-from-env\"\n"
  },
  {
    "path": "backend/tests/test_prompt_summary.py",
    "content": "import io\nimport sys\nfrom typing import cast\n\nfrom openai.types.chat import ChatCompletionMessageParam\n\nfrom utils import (\n    format_prompt_preview,\n    format_prompt_summary,\n    print_prompt_preview,\n    print_prompt_summary,\n)\n\n\ndef test_format_prompt_summary():\n    messages = [\n        {\"role\": \"system\", \"content\": \"lorem ipsum dolor sit amet\"},\n        {\n            \"role\": \"user\",\n            \"content\": [\n                {\"type\": \"text\", \"text\": \"hello world\"},\n                {\n                    \"type\": \"image_url\",\n                    \"image_url\": {\"url\": \"data:image/png;base64,AAA\"},\n                },\n                {\n                    \"type\": \"image_url\",\n                    \"image_url\": {\"url\": \"data:image/png;base64,BBB\"},\n                },\n            ],\n        },\n    ]\n\n    summary = format_prompt_summary(messages)\n    assert \"SYSTEM: lorem ipsum\" in summary\n    assert \"[2 images]\" in summary\n\n\ndef test_print_prompt_summary():\n    messages = [\n        {\"role\": \"system\", \"content\": \"short message\"},\n        {\"role\": \"user\", \"content\": \"hello\"},\n    ]\n\n    # Capture stdout\n    captured_output = io.StringIO()\n    sys.stdout = captured_output\n    \n    print_prompt_summary(cast(list[ChatCompletionMessageParam], messages))\n    \n    # Reset stdout\n    sys.stdout = sys.__stdout__\n    \n    output = captured_output.getvalue()\n    \n    # Check that output contains box characters and content\n    assert \"┌─\" in output\n    assert \"└─\" in output\n    assert \"PROMPT SUMMARY\" in output\n    assert \"SYSTEM: short message\" in output\n    assert \"USER: hello\" in output\n\n\ndef test_print_prompt_summary_long_content():\n    messages = [\n        {\"role\": \"system\", \"content\": \"This is a very long system message that should be wrapped properly within the box boundaries\"},\n        {\"role\": \"user\", 
\"content\": \"short\"},\n    ]\n\n    # Capture stdout\n    captured_output = io.StringIO()\n    sys.stdout = captured_output\n    \n    print_prompt_summary(cast(list[ChatCompletionMessageParam], messages))\n    \n    # Reset stdout\n    sys.stdout = sys.__stdout__\n    \n    output = captured_output.getvalue()\n    lines = output.strip().split('\\n')\n    \n    # Check that all box-bordered lines match the width of the top border\n    if lines[0].startswith('┌'):\n        for line in lines:\n            if line.startswith('│') and line.endswith('│'):\n                assert len(line) == len(lines[0])\n    \n    # Check content is present\n    assert \"PROMPT SUMMARY\" in output\n    assert \"SYSTEM:\" in output\n    assert \"USER: short\" in output\n\n\ndef test_format_prompt_summary_no_truncate():\n    messages = [\n        {\"role\": \"system\", \"content\": \"This is a very long message that would normally be truncated at 40 characters but should be shown in full\"},\n    ]\n\n    # Test with truncation (default)\n    summary_truncated = format_prompt_summary(\n        cast(list[ChatCompletionMessageParam], messages)\n    )\n    assert \"...\" in summary_truncated\n    assert len(summary_truncated.split(\": \", 1)[1]) <= 50  # Role + truncated content\n\n    # Test without truncation\n    summary_full = format_prompt_summary(\n        cast(list[ChatCompletionMessageParam], messages), truncate=False\n    )\n    assert \"...\" not in summary_full\n    assert \"shown in full\" in summary_full\n\n\ndef test_print_prompt_summary_no_truncate():\n    messages = [\n        {\"role\": \"system\", \"content\": \"This is a very long message that would normally be truncated but should be shown in full when truncate=False\"},\n    ]\n\n    # Capture stdout\n    captured_output = io.StringIO()\n    sys.stdout = captured_output\n    \n    print_prompt_summary(\n        cast(list[ChatCompletionMessageParam], messages), truncate=False\n    )\n 
   \n    # Reset stdout\n    sys.stdout = sys.__stdout__\n    \n    output = captured_output.getvalue()\n    \n    # Check that full content is shown\n    assert \"shown in full when truncate=False\" in output\n    assert \"...\" not in output\n\n\ndef test_format_prompt_preview_collapses_long_content():\n    long_code = \"<html>\\n\" + (\"x\" * 800) + \"\\n</html>\"\n    messages = [\n        {\"role\": \"system\", \"content\": \"short\"},\n        {\"role\": \"assistant\", \"content\": long_code},\n    ]\n\n    preview = format_prompt_preview(\n        cast(list[ChatCompletionMessageParam], messages), max_chars_per_message=120\n    )\n\n    assert \"1. SYSTEM\" in preview\n    assert \"2. ASSISTANT\" in preview\n    assert \"[collapsed \" in preview\n\n\ndef test_print_prompt_preview():\n    messages = [\n        {\"role\": \"system\", \"content\": \"System message\"},\n        {\n            \"role\": \"user\",\n            \"content\": [\n                {\"type\": \"text\", \"text\": \"User request\"},\n                {\"type\": \"image_url\", \"image_url\": {\"url\": \"data:image/png;base64,AAA\"}},\n            ],\n        },\n    ]\n\n    captured_output = io.StringIO()\n    sys.stdout = captured_output\n\n    print_prompt_preview(cast(list[ChatCompletionMessageParam], messages))\n\n    sys.stdout = sys.__stdout__\n\n    output = captured_output.getvalue()\n    assert \"PROMPT PREVIEW\" in output\n    assert \"1. SYSTEM\" in output\n    assert \"2. USER [1 media]\" in output\n"
  },
  {
    "path": "backend/tests/test_prompts.py",
    "content": "import pytest\nfrom unittest.mock import patch, MagicMock\nimport sys\nfrom typing import Any, Dict, List, TypedDict, cast\nfrom openai.types.chat import ChatCompletionMessageParam\n\n# Mock moviepy before importing prompts\nsys.modules[\"moviepy\"] = MagicMock()\nsys.modules[\"moviepy.editor\"] = MagicMock()\n\nfrom prompts.pipeline import build_prompt_messages\nfrom prompts.plan import derive_prompt_construction_plan\nfrom prompts.prompt_types import Stack\n\n# Type definitions for test structures\nclass ExpectedResult(TypedDict):\n    messages: List[ChatCompletionMessageParam]\n\n\ndef assert_structure_match(actual: object, expected: object, path: str = \"\") -> None:\n    \"\"\"\n    Compare actual and expected structures with special markers:\n    - <ANY>: Matches any value\n    - <CONTAINS:text>: Checks if the actual value contains 'text'\n\n    Args:\n        actual: The actual value to check\n        expected: The expected value or pattern\n        path: Current path in the structure (for error messages)\n    \"\"\"\n    if (\n        isinstance(expected, str)\n        and expected.startswith(\"<\")\n        and expected.endswith(\">\")\n    ):\n        # Handle special markers\n        if expected == \"<ANY>\":\n            # Match any value\n            return\n        elif expected.startswith(\"<CONTAINS:\") and expected.endswith(\">\"):\n            # Extract the text to search for\n            search_text = expected[10:-1]  # Remove \"<CONTAINS:\" and \">\"\n            assert isinstance(\n                actual, str\n            ), f\"At {path}: expected string, got {type(actual).__name__}\"\n            assert (\n                search_text in actual\n            ), f\"At {path}: '{search_text}' not found in '{actual}'\"\n            return\n\n    # Handle different types\n    if isinstance(expected, dict):\n        assert isinstance(\n            actual, dict\n        ), f\"At {path}: expected dict, got {type(actual).__name__}\"\n    
    expected_dict: Dict[str, object] = expected\n        actual_dict: Dict[str, object] = actual\n        for key, value in expected_dict.items():\n            assert key in actual_dict, f\"At {path}: key '{key}' not found in actual\"\n            assert_structure_match(actual_dict[key], value, f\"{path}.{key}\" if path else key)\n    elif isinstance(expected, list):\n        assert isinstance(\n            actual, list\n        ), f\"At {path}: expected list, got {type(actual).__name__}\"\n        expected_list: List[object] = expected\n        actual_list: List[object] = actual\n        assert len(actual_list) == len(\n            expected_list\n        ), f\"At {path}: list length mismatch (expected {len(expected_list)}, got {len(actual_list)})\"\n        for i, (a, e) in enumerate(zip(actual_list, expected_list)):\n            assert_structure_match(a, e, f\"{path}[{i}]\")\n    else:\n        # Direct comparison for other types\n        assert actual == expected, f\"At {path}: expected {expected}, got {actual}\"\n\n\nclass TestCreatePrompt:\n    \"\"\"Test cases for create_prompt function.\"\"\"\n\n    # Test data constants\n    TEST_IMAGE_URL: str = \"data:image/png;base64,test_image_data\"\n    RESULT_IMAGE_URL: str = \"data:image/png;base64,result_image_data\"\n    MOCK_SYSTEM_PROMPT: str = \"Mock HTML Tailwind system prompt\"\n    TEST_STACK: Stack = \"html_tailwind\"\n    ENABLED_IMAGE_POLICY: str = (\n        \"Image generation is enabled for this request. 
Use generate_images for \"\n        \"missing assets when needed.\"\n    )\n\n    @staticmethod\n    def wrapped_file(content: str) -> str:\n        return f'<file path=\"index.html\">\\n{content}\\n</file>'\n\n    def test_plan_create_uses_create_from_input(self) -> None:\n        plan = derive_prompt_construction_plan(\n            stack=self.TEST_STACK,\n            input_mode=\"image\",\n            generation_type=\"create\",\n            history=[],\n            file_state=None,\n        )\n        assert plan[\"construction_strategy\"] == \"create_from_input\"\n\n    def test_plan_update_with_history_uses_history_strategy(self) -> None:\n        plan = derive_prompt_construction_plan(\n            stack=self.TEST_STACK,\n            input_mode=\"image\",\n            generation_type=\"update\",\n            history=[{\"role\": \"user\", \"text\": \"change\", \"images\": [], \"videos\": []}],\n            file_state=None,\n        )\n        assert plan[\"construction_strategy\"] == \"update_from_history\"\n\n    def test_plan_update_without_history_uses_file_snapshot_strategy(self) -> None:\n        plan = derive_prompt_construction_plan(\n            stack=self.TEST_STACK,\n            input_mode=\"image\",\n            generation_type=\"update\",\n            history=[],\n            file_state={\"path\": \"index.html\", \"content\": \"<html></html>\"},\n        )\n        assert plan[\"construction_strategy\"] == \"update_from_file_snapshot\"\n\n    @pytest.mark.asyncio\n    async def test_image_mode_create_single_image(self) -> None:\n        \"\"\"Test create generation with single image in image mode.\"\"\"\n        # Setup test data\n        params: Dict[str, Any] = {\n            \"prompt\": {\"text\": \"\", \"images\": [self.TEST_IMAGE_URL]},\n            \"generationType\": \"create\",\n        }\n\n        with patch(\n            \"prompts.system_prompt.SYSTEM_PROMPT\",\n            new=self.MOCK_SYSTEM_PROMPT,\n        ):\n            # Call the 
function\n            messages = await build_prompt_messages(\n                stack=self.TEST_STACK,\n                input_mode=\"image\",\n                generation_type=params[\"generationType\"],\n                prompt=params[\"prompt\"],\n                history=params.get(\"history\", []),\n            )\n\n            # Define expected structure\n            expected: ExpectedResult = {\n                \"messages\": [\n                    {\"role\": \"system\", \"content\": self.MOCK_SYSTEM_PROMPT},\n                    {\n                        \"role\": \"user\",\n                        \"content\": [\n                            {\n                                \"type\": \"image_url\",\n                                \"image_url\": {\n                                    \"url\": self.TEST_IMAGE_URL,\n                                    \"detail\": \"high\",\n                                },\n                            },\n                            {\n                                \"type\": \"text\",\n                                \"text\": \"<CONTAINS:Generate code for a web page that looks exactly like the provided screenshot(s).>\",\n                            },\n                        ],\n                    },\n                ],\n            }\n\n            # Assert the structure matches\n            actual: ExpectedResult = {\"messages\": messages}\n            assert_structure_match(actual, expected)\n\n    @pytest.mark.asyncio\n    async def test_image_mode_create_with_image_generation_disabled(self) -> None:\n        params: Dict[str, Any] = {\n            \"prompt\": {\"text\": \"\", \"images\": [self.TEST_IMAGE_URL]},\n            \"generationType\": \"create\",\n        }\n\n        with patch(\"prompts.system_prompt.SYSTEM_PROMPT\", new=self.MOCK_SYSTEM_PROMPT):\n            messages = await build_prompt_messages(\n                stack=self.TEST_STACK,\n                input_mode=\"image\",\n                
generation_type=params[\"generationType\"],\n                prompt=params[\"prompt\"],\n                history=[],\n                image_generation_enabled=False,\n            )\n\n        system_content = messages[0].get(\"content\")\n        assert isinstance(system_content, str)\n        assert system_content == self.MOCK_SYSTEM_PROMPT\n\n        user_content = messages[1].get(\"content\")\n        assert isinstance(user_content, list)\n        text_part = next(\n            (\n                part\n                for part in user_content\n                if isinstance(part, dict) and part.get(\"type\") == \"text\"\n            ),\n            None,\n        )\n        assert isinstance(text_part, dict)\n        user_text = text_part.get(\"text\")\n        assert isinstance(user_text, str)\n        assert \"Image generation is disabled for this request. Do not call generate_images.\" in user_text\n\n\n    @pytest.mark.asyncio\n    async def test_image_mode_update_with_history(self) -> None:\n        \"\"\"Test update generation with conversation history in image mode.\"\"\"\n        # Setup test data\n        params: Dict[str, Any] = {\n            \"prompt\": {\"text\": \"\", \"images\": [self.TEST_IMAGE_URL]},\n            \"generationType\": \"update\",\n            \"history\": [\n                {\"role\": \"assistant\", \"text\": \"<html>Initial code</html>\", \"images\": [], \"videos\": []},\n                {\"role\": \"user\", \"text\": \"Make the background blue\", \"images\": [], \"videos\": []},\n                {\"role\": \"assistant\", \"text\": \"<html>Updated code</html>\", \"images\": [], \"videos\": []},\n                {\"role\": \"user\", \"text\": \"Add a header\", \"images\": [], \"videos\": []},\n            ],\n        }\n\n        with patch(\n            \"prompts.system_prompt.SYSTEM_PROMPT\",\n            new=self.MOCK_SYSTEM_PROMPT,\n        ):\n            # Call the function\n            messages = await 
build_prompt_messages(\n                stack=self.TEST_STACK,\n                input_mode=\"image\",\n                generation_type=params[\"generationType\"],\n                prompt=params[\"prompt\"],\n                history=params.get(\"history\", []),\n            )\n\n            # Define expected structure\n            expected: ExpectedResult = {\n                \"messages\": [\n                    {\n                        \"role\": \"system\",\n                        \"content\": self.MOCK_SYSTEM_PROMPT,\n                    },\n                    {\n                        \"role\": \"assistant\",\n                        \"content\": self.wrapped_file(\"<html>Initial code</html>\"),\n                    },\n                    {\n                        \"role\": \"user\",\n                        \"content\": (\n                            f\"Selected stack: {self.TEST_STACK}.\\n\\n\"\n                            f\"{self.ENABLED_IMAGE_POLICY}\\n\\n\"\n                            \"Make the background blue\"\n                        ),\n                    },\n                    {\n                        \"role\": \"assistant\",\n                        \"content\": self.wrapped_file(\"<html>Updated code</html>\"),\n                    },\n                    {\"role\": \"user\", \"content\": \"Add a header\"},\n                ],\n            }\n\n            # Assert the structure matches\n            actual: ExpectedResult = {\"messages\": messages}\n            assert_structure_match(actual, expected)\n\n    @pytest.mark.asyncio\n    async def test_update_history_with_image_generation_disabled(self) -> None:\n        with patch(\"prompts.system_prompt.SYSTEM_PROMPT\", new=self.MOCK_SYSTEM_PROMPT):\n            messages = await build_prompt_messages(\n                stack=self.TEST_STACK,\n                input_mode=\"image\",\n                generation_type=\"update\",\n                prompt={\"text\": \"\", \"images\": 
[self.TEST_IMAGE_URL], \"videos\": []},\n                history=[\n                    {\"role\": \"assistant\", \"text\": \"<html>Initial code</html>\", \"images\": [], \"videos\": []},\n                    {\"role\": \"user\", \"text\": \"Make the background blue\", \"images\": [], \"videos\": []},\n                    {\"role\": \"assistant\", \"text\": \"<html>Updated code</html>\", \"images\": [], \"videos\": []},\n                ],\n                image_generation_enabled=False,\n            )\n\n        system_content = messages[0].get(\"content\")\n        assert isinstance(system_content, str)\n        assert system_content == self.MOCK_SYSTEM_PROMPT\n\n        first_user_content = messages[2].get(\"content\")\n        assert isinstance(first_user_content, str)\n        assert \"Selected stack: html_tailwind.\" in first_user_content\n        assert \"Image generation is disabled for this request. Do not call generate_images.\" in first_user_content\n        assert \"Make the background blue\" in first_user_content\n\n    @pytest.mark.asyncio\n    async def test_text_mode_create_generation(self) -> None:\n        \"\"\"Test create generation from text description in text mode.\"\"\"\n        # Setup test data\n        text_description: str = \"a modern landing page with hero section\"\n        params: Dict[str, Any] = {\n            \"prompt\": {\n                \"text\": text_description,\n                \"images\": []\n            },\n            \"generationType\": \"create\"\n        }\n        with patch(\n            \"prompts.system_prompt.SYSTEM_PROMPT\",\n            new=self.MOCK_SYSTEM_PROMPT,\n        ):\n            # Call the function\n            messages = await build_prompt_messages(\n                stack=self.TEST_STACK,\n                input_mode=\"text\",\n                generation_type=params[\"generationType\"],\n                prompt=params[\"prompt\"],\n                history=params.get(\"history\", []),\n            )\n    
        \n            # Define expected structure\n            expected: ExpectedResult = {\n                \"messages\": [\n                    {\n                        \"role\": \"system\",\n                        \"content\": self.MOCK_SYSTEM_PROMPT\n                    },\n                    {\n                        \"role\": \"user\",\n                        \"content\": f\"<CONTAINS:Generate UI for {text_description}>\"\n                    }\n                ],\n            }\n            \n            # Assert the structure matches\n            actual: ExpectedResult = {\"messages\": messages}\n            assert_structure_match(actual, expected)\n\n    @pytest.mark.asyncio\n    async def test_text_mode_update_with_history(self) -> None:\n        \"\"\"Test update generation with conversation history in text mode.\"\"\"\n        # Setup test data\n        text_description: str = \"a dashboard with charts\"\n        params: Dict[str, Any] = {\n            \"prompt\": {\n                \"text\": text_description,\n                \"images\": []\n            },\n            \"generationType\": \"update\",\n            \"history\": [\n                {\"role\": \"assistant\", \"text\": \"<html>Initial dashboard</html>\", \"images\": [], \"videos\": []},\n                {\"role\": \"user\", \"text\": \"Add a sidebar\", \"images\": [], \"videos\": []},\n                {\"role\": \"assistant\", \"text\": \"<html>Dashboard with sidebar</html>\", \"images\": [], \"videos\": []},\n                {\"role\": \"user\", \"text\": \"Now add a navigation menu\", \"images\": [], \"videos\": []},\n            ]\n        }\n        with patch(\n            \"prompts.system_prompt.SYSTEM_PROMPT\",\n            new=self.MOCK_SYSTEM_PROMPT,\n        ):\n            # Call the function\n            messages = await build_prompt_messages(\n                stack=self.TEST_STACK,\n                input_mode=\"text\",\n                
generation_type=params[\"generationType\"],\n                prompt=params[\"prompt\"],\n                history=params.get(\"history\", []),\n            )\n            \n            # Define expected structure\n            expected: ExpectedResult = {\n                \"messages\": [\n                    {\n                        \"role\": \"system\",\n                        \"content\": self.MOCK_SYSTEM_PROMPT,\n                    },\n                    {\n                        \"role\": \"assistant\",\n                        \"content\": self.wrapped_file(\"<html>Initial dashboard</html>\")\n                    },\n                    {\n                        \"role\": \"user\",\n                        \"content\": (\n                            f\"Selected stack: {self.TEST_STACK}.\\n\\n\"\n                            f\"{self.ENABLED_IMAGE_POLICY}\\n\\n\"\n                            \"Add a sidebar\"\n                        ),\n                    },\n                    {\n                        \"role\": \"assistant\",\n                        \"content\": self.wrapped_file(\n                            \"<html>Dashboard with sidebar</html>\"\n                        )\n                    },\n                    {\n                        \"role\": \"user\",\n                        \"content\": \"Now add a navigation menu\"\n                    }\n                ],\n            }\n            \n            # Assert the structure matches\n            actual: ExpectedResult = {\"messages\": messages}\n            assert_structure_match(actual, expected)\n\n    @pytest.mark.asyncio\n    async def test_video_mode_basic_prompt_creation(self) -> None:\n        \"\"\"Test basic video prompt creation in video mode.\n\n        For video mode with generation_type=\"create\", we now assemble\n        a regular system+user prompt so video generation can run through\n        the agent runner path.\n        \"\"\"\n        # Setup test data\n        
video_data_url: str = \"data:video/mp4;base64,test_video_data\"\n        params: Dict[str, Any] = {\n            \"prompt\": {\n                \"text\": \"\",\n                \"images\": [],\n                \"videos\": [video_data_url],\n            },\n            \"generationType\": \"create\"\n        }\n\n        # Call the function\n        messages = await build_prompt_messages(\n            stack=self.TEST_STACK,\n            input_mode=\"video\",\n            generation_type=params[\"generationType\"],\n            prompt=params[\"prompt\"],\n            history=params.get(\"history\", []),\n        )\n\n        expected: ExpectedResult = {\n            \"messages\": [\n                {\n                    \"role\": \"system\",\n                    \"content\": \"<CONTAINS:You are a coding agent that's an expert at building front-ends.>\",\n                },\n                {\n                    \"role\": \"user\",\n                    \"content\": [\n                        {\n                            \"type\": \"image_url\",\n                            \"image_url\": {\"url\": video_data_url, \"detail\": \"high\"},\n                        },\n                        {\n                            \"type\": \"text\",\n                            \"text\": \"<CONTAINS:Analyze this video and generate the code.>\",\n                        },\n                    ],\n                },\n            ],\n        }\n\n        # Assert the structure matches\n        actual: ExpectedResult = {\"messages\": messages}\n        assert_structure_match(actual, expected)\n\n    @pytest.mark.asyncio\n    async def test_create_raises_on_unsupported_input_mode(self) -> None:\n        params: Dict[str, Any] = {\n            \"prompt\": {\"text\": \"\", \"images\": [self.TEST_IMAGE_URL], \"videos\": []},\n            \"generationType\": \"create\",\n        }\n\n        with pytest.raises(ValueError, match=\"Unsupported input mode: audio\"):\n            await 
build_prompt_messages(\n                stack=self.TEST_STACK,\n                input_mode=cast(Any, \"audio\"),\n                generation_type=params[\"generationType\"],\n                prompt=params[\"prompt\"],\n                history=[],\n            )\n\n\n    @pytest.mark.asyncio\n    async def test_image_mode_update_with_single_image_in_history(self) -> None:\n        \"\"\"Test update with user message containing a single image.\"\"\"\n        # Setup test data\n        reference_image_url: str = \"data:image/png;base64,reference_image\"\n        params: Dict[str, Any] = {\n            \"prompt\": {\"text\": \"\", \"images\": [self.TEST_IMAGE_URL]},\n            \"generationType\": \"update\",\n            \"history\": [\n                {\"role\": \"assistant\", \"text\": \"<html>Initial code</html>\", \"images\": [], \"videos\": []},\n                {\"role\": \"user\", \"text\": \"Add a button\", \"images\": [reference_image_url], \"videos\": []},\n                {\"role\": \"assistant\", \"text\": \"<html>Code with button</html>\", \"images\": [], \"videos\": []},\n            ]\n        }\n\n        with patch(\n            \"prompts.system_prompt.SYSTEM_PROMPT\",\n            new=self.MOCK_SYSTEM_PROMPT,\n        ):\n            # Call the function\n            messages = await build_prompt_messages(\n                stack=self.TEST_STACK,\n                input_mode=\"image\",\n                generation_type=params[\"generationType\"],\n                prompt=params[\"prompt\"],\n                history=params.get(\"history\", []),\n            )\n\n            # Define expected structure\n            expected: ExpectedResult = {\n                \"messages\": [\n                    {\n                        \"role\": \"system\",\n                        \"content\": self.MOCK_SYSTEM_PROMPT,\n                    },\n                    {\n                        \"role\": \"assistant\",\n                        \"content\": 
self.wrapped_file(\"<html>Initial code</html>\"),\n                    },\n                    {\n                        \"role\": \"user\",\n                        \"content\": [\n                            {\n                                \"type\": \"image_url\",\n                                \"image_url\": {\n                                    \"url\": reference_image_url,\n                                    \"detail\": \"high\",\n                                },\n                            },\n                            {\n                                \"type\": \"text\",\n                                \"text\": (\n                                    f\"Selected stack: {self.TEST_STACK}.\\n\\n\"\n                                    f\"{self.ENABLED_IMAGE_POLICY}\\n\\n\"\n                                    \"Add a button\"\n                                ),\n                            },\n                        ],\n                    },\n                    {\n                        \"role\": \"assistant\",\n                        \"content\": self.wrapped_file(\"<html>Code with button</html>\"),\n                    },\n                ],\n            }\n\n            # Assert the structure matches\n            actual: ExpectedResult = {\"messages\": messages}\n            assert_structure_match(actual, expected)\n\n    @pytest.mark.asyncio\n    async def test_image_mode_update_with_multiple_images_in_history(self) -> None:\n        \"\"\"Test update with user message containing multiple images.\"\"\"\n        # Setup test data\n        example1_url: str = \"data:image/png;base64,example1\"\n        example2_url: str = \"data:image/png;base64,example2\"\n        params: Dict[str, Any] = {\n            \"prompt\": {\"text\": \"\", \"images\": [self.TEST_IMAGE_URL]},\n            \"generationType\": \"update\",\n            \"history\": [\n                {\"role\": \"assistant\", \"text\": \"<html>Initial code</html>\", \"images\": [], 
\"videos\": []},\n                {\"role\": \"user\", \"text\": \"Style like these examples\", \"images\": [example1_url, example2_url], \"videos\": []},\n                {\"role\": \"assistant\", \"text\": \"<html>Styled code</html>\", \"images\": [], \"videos\": []},\n            ]\n        }\n\n        with patch(\n            \"prompts.system_prompt.SYSTEM_PROMPT\",\n            new=self.MOCK_SYSTEM_PROMPT,\n        ):\n            # Call the function\n            messages = await build_prompt_messages(\n                stack=self.TEST_STACK,\n                input_mode=\"image\",\n                generation_type=params[\"generationType\"],\n                prompt=params[\"prompt\"],\n                history=params.get(\"history\", []),\n            )\n\n            # Define expected structure\n            expected: ExpectedResult = {\n                \"messages\": [\n                    {\n                        \"role\": \"system\",\n                        \"content\": self.MOCK_SYSTEM_PROMPT,\n                    },\n                    {\n                        \"role\": \"assistant\",\n                        \"content\": self.wrapped_file(\"<html>Initial code</html>\"),\n                    },\n                    {\n                        \"role\": \"user\",\n                        \"content\": [\n                            {\n                                \"type\": \"image_url\",\n                                \"image_url\": {\n                                    \"url\": example1_url,\n                                    \"detail\": \"high\",\n                                },\n                            },\n                            {\n                                \"type\": \"image_url\",\n                                \"image_url\": {\n                                    \"url\": example2_url,\n                                    \"detail\": \"high\",\n                                },\n                            },\n            
                {\n                                \"type\": \"text\",\n                                \"text\": (\n                                    f\"Selected stack: {self.TEST_STACK}.\\n\\n\"\n                                    f\"{self.ENABLED_IMAGE_POLICY}\\n\\n\"\n                                    \"Style like these examples\"\n                                ),\n                            },\n                        ],\n                    },\n                    {\n                        \"role\": \"assistant\",\n                        \"content\": self.wrapped_file(\"<html>Styled code</html>\"),\n                    },\n                ],\n            }\n\n            # Assert the structure matches\n            actual: ExpectedResult = {\"messages\": messages}\n            assert_structure_match(actual, expected)\n\n    @pytest.mark.asyncio\n    async def test_update_with_empty_images_arrays(self) -> None:\n        \"\"\"Test that empty images arrays don't break existing functionality.\"\"\"\n        # Setup test data with explicit empty images arrays\n        params: Dict[str, Any] = {\n            \"prompt\": {\"text\": \"\", \"images\": [self.TEST_IMAGE_URL]},\n            \"generationType\": \"update\",\n            \"history\": [\n                {\"role\": \"assistant\", \"text\": \"<html>Initial code</html>\", \"images\": [], \"videos\": []},\n                {\"role\": \"user\", \"text\": \"Make it blue\", \"images\": [], \"videos\": []},\n                {\"role\": \"assistant\", \"text\": \"<html>Blue code</html>\", \"images\": [], \"videos\": []},\n            ]\n        }\n\n        with patch(\n            \"prompts.system_prompt.SYSTEM_PROMPT\",\n            new=self.MOCK_SYSTEM_PROMPT,\n        ):\n            # Call the function\n            messages = await build_prompt_messages(\n                stack=self.TEST_STACK,\n                input_mode=\"image\",\n                generation_type=params[\"generationType\"],\n            
    prompt=params[\"prompt\"],\n                history=params.get(\"history\", []),\n            )\n\n            # Define expected structure - should be text-only messages\n            expected: ExpectedResult = {\n                \"messages\": [\n                    {\n                        \"role\": \"system\",\n                        \"content\": self.MOCK_SYSTEM_PROMPT,\n                    },\n                    {\n                        \"role\": \"assistant\",\n                        \"content\": self.wrapped_file(\"<html>Initial code</html>\"),\n                    },\n                    {\n                        \"role\": \"user\",\n                        \"content\": (\n                            f\"Selected stack: {self.TEST_STACK}.\\n\\n\"\n                            f\"{self.ENABLED_IMAGE_POLICY}\\n\\n\"\n                            \"Make it blue\"\n                        ),\n                    },  # Text-only message\n                    {\n                        \"role\": \"assistant\",\n                        \"content\": self.wrapped_file(\"<html>Blue code</html>\"),\n                    },\n                ],\n            }\n\n            # Assert the structure matches\n            actual: ExpectedResult = {\"messages\": messages}\n            assert_structure_match(actual, expected)\n\n    @pytest.mark.asyncio\n    async def test_update_bootstraps_from_file_state_when_history_is_empty(self) -> None:\n        \"\"\"Update should synthesize a user message from fileState + prompt when history is empty.\"\"\"\n        ref_image_url: str = \"data:image/png;base64,ref_image\"\n        params: Dict[str, Any] = {\n            \"generationType\": \"update\",\n            \"prompt\": {\"text\": \"Make the header blue\", \"images\": [ref_image_url], \"videos\": []},\n            \"history\": [],\n            \"fileState\": {\n                \"path\": \"index.html\",\n                \"content\": \"<html>Original imported code</html>\",\n  
          },\n        }\n\n        with patch(\n            \"prompts.system_prompt.SYSTEM_PROMPT\",\n            new=self.MOCK_SYSTEM_PROMPT,\n        ):\n            messages = await build_prompt_messages(\n                stack=self.TEST_STACK,\n                input_mode=\"image\",\n                generation_type=params[\"generationType\"],\n                prompt=params[\"prompt\"],\n                history=params[\"history\"],\n                file_state=params[\"fileState\"],\n            )\n\n            expected: ExpectedResult = {\n                \"messages\": [\n                    {\n                        \"role\": \"system\",\n                        \"content\": self.MOCK_SYSTEM_PROMPT,\n                    },\n                    {\n                        \"role\": \"user\",\n                        \"content\": [\n                            {\n                                \"type\": \"image_url\",\n                                \"image_url\": {\n                                    \"url\": ref_image_url,\n                                    \"detail\": \"high\",\n                                },\n                            },\n                            {\n                                \"type\": \"text\",\n                                \"text\": \"<CONTAINS:<current_file path=\\\"index.html\\\">>\",\n                            },\n                        ],\n                    },\n                ],\n            }\n\n            actual: ExpectedResult = {\"messages\": messages}\n            assert_structure_match(actual, expected)\n            user_content = messages[1].get(\"content\")\n            assert isinstance(user_content, list)\n            text_part = next(\n                (part for part in user_content if isinstance(part, dict) and part.get(\"type\") == \"text\"),\n                None,\n            )\n            assert isinstance(text_part, dict)\n            synthesized_text = text_part.get(\"text\", \"\")\n        
    assert isinstance(synthesized_text, str)\n            assert f\"Selected stack: {self.TEST_STACK}.\" in synthesized_text\n            assert \"<html>Original imported code</html>\" in synthesized_text\n            assert \"<change_request>\" in synthesized_text\n            assert \"Make the header blue\" in synthesized_text\n\n    @pytest.mark.asyncio\n    async def test_update_requires_history_or_file_state(self) -> None:\n        with pytest.raises(ValueError):\n            await build_prompt_messages(\n                stack=self.TEST_STACK,\n                input_mode=\"image\",\n                generation_type=\"update\",\n                prompt={\"text\": \"Change title\", \"images\": [], \"videos\": []},\n                history=[],\n            )\n\n    @pytest.mark.asyncio\n    async def test_update_history_requires_user_message(self) -> None:\n        with pytest.raises(\n            ValueError, match=\"Update history must include at least one user message\"\n        ):\n            await build_prompt_messages(\n                stack=self.TEST_STACK,\n                input_mode=\"image\",\n                generation_type=\"update\",\n                prompt={\"text\": \"Change title\", \"images\": [], \"videos\": []},\n                history=[\n                    {\n                        \"role\": \"assistant\",\n                        \"text\": \"<html>Code only</html>\",\n                        \"images\": [],\n                        \"videos\": [],\n                    }\n                ],\n            )\n"
  },
  {
    "path": "backend/tests/test_request_parsing.py",
    "content": "from prompts.request_parsing import parse_prompt_content, parse_prompt_history\n\n\ndef test_parse_prompt_content_with_valid_data() -> None:\n    result = parse_prompt_content(\n        {\n            \"text\": \"Build this page\",\n            \"images\": [\"img1\", \"img2\"],\n            \"videos\": [\"vid1\"],\n        }\n    )\n\n    assert result == {\n        \"text\": \"Build this page\",\n        \"images\": [\"img1\", \"img2\"],\n        \"videos\": [\"vid1\"],\n    }\n\n\ndef test_parse_prompt_content_filters_invalid_media_types() -> None:\n    result = parse_prompt_content(\n        {\n            \"text\": \"Prompt\",\n            \"images\": [\"img1\", 123, None, \"img2\"],\n            \"videos\": [\"vid1\", {\"x\": 1}, \"vid2\"],\n        }\n    )\n\n    assert result == {\n        \"text\": \"Prompt\",\n        \"images\": [\"img1\", \"img2\"],\n        \"videos\": [\"vid1\", \"vid2\"],\n    }\n\n\ndef test_parse_prompt_content_defaults_for_invalid_payload() -> None:\n    assert parse_prompt_content(None) == {\"text\": \"\", \"images\": [], \"videos\": []}\n    assert parse_prompt_content(\"bad\") == {\"text\": \"\", \"images\": [], \"videos\": []}\n    assert parse_prompt_content({\"text\": 1}) == {\"text\": \"\", \"images\": [], \"videos\": []}\n\n\ndef test_parse_prompt_history_with_valid_entries() -> None:\n    result = parse_prompt_history(\n        [\n            {\n                \"role\": \"assistant\",\n                \"text\": \"<html/>\",\n                \"images\": [],\n                \"videos\": [],\n            },\n            {\n                \"role\": \"user\",\n                \"text\": \"Please update\",\n                \"images\": [\"img1\"],\n                \"videos\": [\"vid1\"],\n            },\n        ]\n    )\n\n    assert result == [\n        {\n            \"role\": \"assistant\",\n            \"text\": \"<html/>\",\n            \"images\": [],\n            \"videos\": [],\n        },\n        {\n  
          \"role\": \"user\",\n            \"text\": \"Please update\",\n            \"images\": [\"img1\"],\n            \"videos\": [\"vid1\"],\n        },\n    ]\n\n\ndef test_parse_prompt_history_filters_invalid_items() -> None:\n    result = parse_prompt_history(\n        [\n            \"bad\",\n            {\"role\": \"tool\", \"text\": \"skip me\"},\n            {\"role\": \"user\", \"text\": \"keep me\", \"images\": [\"img1\", 3], \"videos\": [None, \"vid1\"]},\n            {\"role\": \"assistant\", \"text\": 123, \"images\": \"bad\", \"videos\": \"bad\"},\n        ]\n    )\n\n    assert result == [\n        {\n            \"role\": \"user\",\n            \"text\": \"keep me\",\n            \"images\": [\"img1\"],\n            \"videos\": [\"vid1\"],\n        },\n        {\n            \"role\": \"assistant\",\n            \"text\": \"\",\n            \"images\": [],\n            \"videos\": [],\n        },\n    ]\n\n\ndef test_parse_prompt_history_defaults_for_invalid_payload() -> None:\n    assert parse_prompt_history(None) == []\n    assert parse_prompt_history({\"role\": \"user\"}) == []\n"
  },
  {
    "path": "backend/tests/test_screenshot.py",
    "content": "import pytest\nfrom routes.screenshot import normalize_url\n\n\nclass TestNormalizeUrl:\n    \"\"\"Test cases for URL normalization functionality.\"\"\"\n    \n    def test_url_without_protocol(self):\n        \"\"\"Test that URLs without protocol get https:// added.\"\"\"\n        assert normalize_url(\"example.com\") == \"https://example.com\"\n        assert normalize_url(\"www.example.com\") == \"https://www.example.com\"\n        assert normalize_url(\"subdomain.example.com\") == \"https://subdomain.example.com\"\n    \n    def test_url_with_http_protocol(self):\n        \"\"\"Test that existing http protocol is preserved.\"\"\"\n        assert normalize_url(\"http://example.com\") == \"http://example.com\"\n        assert normalize_url(\"http://www.example.com\") == \"http://www.example.com\"\n    \n    def test_url_with_https_protocol(self):\n        \"\"\"Test that existing https protocol is preserved.\"\"\"\n        assert normalize_url(\"https://example.com\") == \"https://example.com\"\n        assert normalize_url(\"https://www.example.com\") == \"https://www.example.com\"\n    \n    def test_url_with_path_and_params(self):\n        \"\"\"Test URLs with paths and query parameters.\"\"\"\n        assert normalize_url(\"example.com/path\") == \"https://example.com/path\"\n        assert normalize_url(\"example.com/path?param=value\") == \"https://example.com/path?param=value\"\n        assert normalize_url(\"example.com:8080/path\") == \"https://example.com:8080/path\"\n    \n    def test_url_with_whitespace(self):\n        \"\"\"Test that whitespace is stripped.\"\"\"\n        assert normalize_url(\"  example.com  \") == \"https://example.com\"\n        assert normalize_url(\"\\texample.com\\n\") == \"https://example.com\"\n    \n    def test_invalid_protocols(self):\n        \"\"\"Test that unsupported protocols raise ValueError.\"\"\"\n        with pytest.raises(ValueError, match=\"Unsupported protocol: ftp\"):\n            
normalize_url(\"ftp://example.com\")\n        \n        with pytest.raises(ValueError, match=\"Unsupported protocol: file\"):\n            normalize_url(\"file:///path/to/file\")\n    \n    def test_localhost_urls(self):\n        \"\"\"Test localhost URLs.\"\"\"\n        assert normalize_url(\"localhost\") == \"https://localhost\"\n        assert normalize_url(\"localhost:3000\") == \"https://localhost:3000\"\n        assert normalize_url(\"http://localhost:8080\") == \"http://localhost:8080\"\n    \n    def test_ip_address_urls(self):\n        \"\"\"Test IP address URLs.\"\"\"\n        assert normalize_url(\"192.168.1.1\") == \"https://192.168.1.1\"\n        assert normalize_url(\"192.168.1.1:8080\") == \"https://192.168.1.1:8080\"\n        assert normalize_url(\"http://192.168.1.1\") == \"http://192.168.1.1\"\n    \n    def test_complex_urls(self):\n        \"\"\"Test more complex URL scenarios.\"\"\"\n        assert normalize_url(\"example.com/path/to/page.html#section\") == \"https://example.com/path/to/page.html#section\"\n        assert normalize_url(\"user:pass@example.com\") == \"https://user:pass@example.com\"\n        assert normalize_url(\"example.com?q=search&lang=en\") == \"https://example.com?q=search&lang=en\"\n"
  },
  {
    "path": "backend/tests/test_status_broadcast.py",
    "content": "from types import SimpleNamespace\nfrom unittest.mock import AsyncMock, MagicMock\nfrom typing import Any, cast\n\nimport pytest\n\nfrom routes.generate_code import (\n    ExtractedParams,\n    PipelineContext,\n    StatusBroadcastMiddleware,\n)\n\n\n@pytest.mark.asyncio\nasync def test_video_update_broadcasts_two_variants() -> None:\n    sent_messages: list[tuple[str, str | None, int]] = []\n\n    async def send_message(\n        msg_type: str,\n        value: str | None,\n        variant_index: int,\n        data=None,\n        eventId=None,\n    ) -> None:\n        sent_messages.append((msg_type, value, variant_index))\n\n    context = PipelineContext(websocket=MagicMock())\n    context.ws_comm = cast(\n        Any,\n        SimpleNamespace(\n            send_message=send_message,\n            throw_error=AsyncMock(),\n        ),\n    )\n    context.extracted_params = ExtractedParams(\n        stack=\"html_tailwind\",\n        input_mode=\"video\",\n        should_generate_images=True,\n        openai_api_key=None,\n        anthropic_api_key=None,\n        gemini_api_key=\"key\",\n        openai_base_url=None,\n        generation_type=\"update\",\n        prompt={\"text\": \"Edit this video output\", \"images\": [], \"videos\": []},\n        history=[],\n        file_state=None,\n        option_codes=[],\n    )\n\n    middleware = StatusBroadcastMiddleware()\n    next_called = False\n\n    async def next_func() -> None:\n        nonlocal next_called\n        next_called = True\n\n    await middleware.process(context, next_func)\n\n    assert sent_messages[0] == (\"variantCount\", \"2\", 0)\n    status_messages = [m for m in sent_messages if m[0] == \"status\"]\n    assert len(status_messages) == 2\n    assert [m[2] for m in status_messages] == [0, 1]\n    assert next_called is True\n\n\n@pytest.mark.asyncio\nasync def test_image_update_broadcasts_two_variants() -> None:\n    sent_messages: list[tuple[str, str | None, int]] = []\n\n    async def 
send_message(\n        msg_type: str,\n        value: str | None,\n        variant_index: int,\n        data=None,\n        eventId=None,\n    ) -> None:\n        sent_messages.append((msg_type, value, variant_index))\n\n    context = PipelineContext(websocket=MagicMock())\n    context.ws_comm = cast(\n        Any,\n        SimpleNamespace(\n            send_message=send_message,\n            throw_error=AsyncMock(),\n        ),\n    )\n    context.extracted_params = ExtractedParams(\n        stack=\"html_tailwind\",\n        input_mode=\"image\",\n        should_generate_images=True,\n        openai_api_key=\"key\",\n        anthropic_api_key=\"key\",\n        gemini_api_key=None,\n        openai_base_url=None,\n        generation_type=\"update\",\n        prompt={\"text\": \"Edit this screenshot\", \"images\": [\"data:image/png;base64,abc\"], \"videos\": []},\n        history=[],\n        file_state={\"path\": \"index.html\", \"content\": \"<html></html>\"},\n        option_codes=[],\n    )\n\n    middleware = StatusBroadcastMiddleware()\n    next_called = False\n\n    async def next_func() -> None:\n        nonlocal next_called\n        next_called = True\n\n    await middleware.process(context, next_func)\n\n    assert sent_messages[0] == (\"variantCount\", \"2\", 0)\n    status_messages = [m for m in sent_messages if m[0] == \"status\"]\n    assert len(status_messages) == 2\n    assert [m[2] for m in status_messages] == [0, 1]\n    assert next_called is True\n"
  },
  {
    "path": "backend/tests/test_token_usage.py",
    "content": "\"\"\"Tests for unified token usage tracking and cost computation.\"\"\"\n\nfrom types import SimpleNamespace\n\nfrom agent.providers.pricing import MODEL_PRICING, ModelPricing\nfrom agent.providers.token_usage import TokenUsage\nfrom agent.providers.anthropic import _extract_anthropic_usage\nfrom agent.providers.gemini import _extract_usage as _extract_gemini_usage\nfrom agent.providers.openai import _extract_openai_usage\n\n\n# ---------------------------------------------------------------------------\n# TokenUsage.accumulate\n# ---------------------------------------------------------------------------\n\n\nclass TestAccumulate:\n    def test_sums_all_fields(self) -> None:\n        a = TokenUsage(input=100, output=50, cache_read=20, cache_write=10, total=180)\n        b = TokenUsage(input=200, output=80, cache_read=30, cache_write=5, total=315)\n        a.accumulate(b)\n        assert a == TokenUsage(\n            input=300, output=130, cache_read=50, cache_write=15, total=495\n        )\n\n    def test_accumulate_zero_is_noop(self) -> None:\n        a = TokenUsage(input=100, output=50, cache_read=20, total=170)\n        a.accumulate(TokenUsage())\n        assert a == TokenUsage(input=100, output=50, cache_read=20, total=170)\n\n    def test_multiple_accumulations(self) -> None:\n        total = TokenUsage()\n        for i in range(1, 4):\n            total.accumulate(TokenUsage(input=i * 10, output=i * 5, total=i * 15))\n        # input: 10+20+30=60, output: 5+10+15=30, total: 15+30+45=90\n        assert total.input == 60\n        assert total.output == 30\n        assert total.total == 90\n\n\n# ---------------------------------------------------------------------------\n# TokenUsage.cost\n# ---------------------------------------------------------------------------\n\n\nclass TestCost:\n    def test_basic_cost(self) -> None:\n        usage = TokenUsage(input=1_000_000, output=1_000_000, total=2_000_000)\n        pricing = 
ModelPricing(input=2.00, output=8.00)\n        # 1M * $2 + 1M * $8 = $10\n        assert usage.cost(pricing) == 10.0\n\n    def test_zero_tokens_zero_cost(self) -> None:\n        usage = TokenUsage()\n        pricing = ModelPricing(input=5.00, output=25.00, cache_read=0.50)\n        assert usage.cost(pricing) == 0.0\n\n    def test_cache_heavy_scenario(self) -> None:\n        # 100k non-cached input, 900k cached, 500k output\n        usage = TokenUsage(\n            input=100_000, output=500_000, cache_read=900_000, total=1_500_000\n        )\n        pricing = ModelPricing(input=2.00, output=8.00, cache_read=0.50)\n        # 100k * $2/M + 500k * $8/M + 900k * $0.50/M\n        # = $0.20 + $4.00 + $0.45 = $4.65\n        expected = (100_000 * 2.00 + 500_000 * 8.00 + 900_000 * 0.50) / 1_000_000\n        assert abs(usage.cost(pricing) - expected) < 1e-9\n\n    def test_anthropic_with_cache_write(self) -> None:\n        usage = TokenUsage(\n            input=500_000,\n            output=200_000,\n            cache_read=300_000,\n            cache_write=100_000,\n            total=1_100_000,\n        )\n        pricing = ModelPricing(\n            input=3.00, output=15.00, cache_read=0.30, cache_write=3.75\n        )\n        expected = (\n            500_000 * 3.00\n            + 200_000 * 15.00\n            + 300_000 * 0.30\n            + 100_000 * 3.75\n        ) / 1_000_000\n        assert abs(usage.cost(pricing) - expected) < 1e-9\n\n\nclass TestCacheHitRate:\n    def test_zero_total_input_is_zero_percent(self) -> None:\n        usage = TokenUsage()\n        assert usage.total_input_tokens() == 0\n        assert usage.cache_hit_rate_percent() == 0.0\n\n    def test_cache_hit_rate_without_cache_write(self) -> None:\n        usage = TokenUsage(input=300, cache_read=100)\n        assert usage.total_input_tokens() == 400\n        assert abs(usage.cache_hit_rate_percent() - 25.0) < 1e-9\n\n    def test_cache_hit_rate_includes_cache_write_in_denominator(self) -> None:\n   
     usage = TokenUsage(input=300, cache_read=100, cache_write=100)\n        assert usage.total_input_tokens() == 500\n        assert abs(usage.cache_hit_rate_percent() - 20.0) < 1e-9\n\n\n# ---------------------------------------------------------------------------\n# Gemini: _extract_usage\n# ---------------------------------------------------------------------------\n\n\ndef _gemini_chunk(\n    prompt: int = 0,\n    candidates: int = 0,\n    thoughts: int = 0,\n    cached: int = 0,\n    total: int = 0,\n) -> SimpleNamespace:\n    \"\"\"Build a fake Gemini GenerateContentResponse with usage_metadata.\"\"\"\n    return SimpleNamespace(\n        usage_metadata=SimpleNamespace(\n            prompt_token_count=prompt,\n            candidates_token_count=candidates,\n            thoughts_token_count=thoughts,\n            cached_content_token_count=cached,\n            total_token_count=total,\n        )\n    )\n\n\nclass TestGeminiExtract:\n    def test_normal_response(self) -> None:\n        chunk = _gemini_chunk(\n            prompt=1000, candidates=400, thoughts=100, cached=200, total=1500\n        )\n        usage = _extract_gemini_usage(chunk)  # type: ignore[arg-type]\n        assert usage is not None\n        assert usage.input == 800  # 1000 - 200\n        assert usage.output == 500  # 400 + 100\n        assert usage.cache_read == 200\n        assert usage.cache_write == 0\n        assert usage.total == 1500\n\n    def test_no_cache(self) -> None:\n        chunk = _gemini_chunk(prompt=500, candidates=200, thoughts=50, total=750)\n        usage = _extract_gemini_usage(chunk)  # type: ignore[arg-type]\n        assert usage is not None\n        assert usage.input == 500\n        assert usage.cache_read == 0\n\n    def test_no_usage_metadata_returns_none(self) -> None:\n        chunk = SimpleNamespace(usage_metadata=None)\n        assert _extract_gemini_usage(chunk) is None  # type: ignore[arg-type]\n\n    def test_none_subfields_default_to_zero(self) -> None:\n  
      chunk = SimpleNamespace(\n            usage_metadata=SimpleNamespace(\n                prompt_token_count=None,\n                candidates_token_count=None,\n                thoughts_token_count=None,\n                cached_content_token_count=None,\n                total_token_count=None,\n            )\n        )\n        usage = _extract_gemini_usage(chunk)  # type: ignore[arg-type]\n        assert usage == TokenUsage()\n\n\n# ---------------------------------------------------------------------------\n# OpenAI: _extract_openai_usage\n# ---------------------------------------------------------------------------\n\n\ndef _openai_response(\n    input_tokens: int = 0,\n    output_tokens: int = 0,\n    total_tokens: int = 0,\n    cached_tokens: int = 0,\n) -> SimpleNamespace:\n    \"\"\"Build a fake OpenAI response.completed event payload.\"\"\"\n    return SimpleNamespace(\n        usage=SimpleNamespace(\n            input_tokens=input_tokens,\n            output_tokens=output_tokens,\n            total_tokens=total_tokens,\n            input_tokens_details=SimpleNamespace(cached_tokens=cached_tokens),\n        )\n    )\n\n\nclass TestOpenAIExtract:\n    def test_normal_response(self) -> None:\n        resp = _openai_response(\n            input_tokens=1000, output_tokens=500, total_tokens=1500, cached_tokens=300\n        )\n        usage = _extract_openai_usage(resp)\n        assert usage.input == 700  # 1000 - 300\n        assert usage.output == 500\n        assert usage.cache_read == 300\n        assert usage.cache_write == 0\n        assert usage.total == 1500\n\n    def test_no_cache(self) -> None:\n        resp = _openai_response(\n            input_tokens=800, output_tokens=200, total_tokens=1000\n        )\n        usage = _extract_openai_usage(resp)\n        assert usage.input == 800\n        assert usage.cache_read == 0\n\n    def test_no_usage_returns_empty(self) -> None:\n        resp = SimpleNamespace()  # no .usage attribute\n        usage = 
_extract_openai_usage(resp)\n        assert usage == TokenUsage()\n\n    def test_no_input_tokens_details(self) -> None:\n        resp = SimpleNamespace(\n            usage=SimpleNamespace(\n                input_tokens=500,\n                output_tokens=200,\n                total_tokens=700,\n            )\n        )\n        usage = _extract_openai_usage(resp)\n        assert usage.input == 500\n        assert usage.cache_read == 0\n\n\n# ---------------------------------------------------------------------------\n# Anthropic: _extract_anthropic_usage\n# ---------------------------------------------------------------------------\n\n\ndef _anthropic_message(\n    input_tokens: int = 0,\n    output_tokens: int = 0,\n    cache_read: int = 0,\n    cache_write: int = 0,\n) -> SimpleNamespace:\n    \"\"\"Build a fake Anthropic final message with usage.\"\"\"\n    return SimpleNamespace(\n        usage=SimpleNamespace(\n            input_tokens=input_tokens,\n            output_tokens=output_tokens,\n            cache_read_input_tokens=cache_read,\n            cache_creation_input_tokens=cache_write,\n        )\n    )\n\n\nclass TestAnthropicExtract:\n    def test_normal_response(self) -> None:\n        msg = _anthropic_message(\n            input_tokens=1000, output_tokens=500, cache_read=200, cache_write=50\n        )\n        usage = _extract_anthropic_usage(msg)\n        assert usage.input == 1000\n        assert usage.output == 500\n        assert usage.cache_read == 200\n        assert usage.cache_write == 50\n        assert usage.total == 1750  # sum of all fields\n\n    def test_no_cache(self) -> None:\n        msg = _anthropic_message(input_tokens=600, output_tokens=300)\n        usage = _extract_anthropic_usage(msg)\n        assert usage.input == 600\n        assert usage.cache_read == 0\n        assert usage.cache_write == 0\n        assert usage.total == 900\n\n    def test_no_usage_returns_empty(self) -> None:\n        msg = SimpleNamespace()  # no .usage 
attribute\n        usage = _extract_anthropic_usage(msg)\n        assert usage == TokenUsage()\n\n\n# ---------------------------------------------------------------------------\n# MODEL_PRICING lookup\n# ---------------------------------------------------------------------------\n\n\nclass TestModelPricing:\n    def test_known_models_have_pricing(self) -> None:\n        for name in [\n            \"gpt-4.1-2025-04-14\",\n            \"gpt-5.2-codex\",\n            \"claude-opus-4-6\",\n            \"claude-sonnet-4-6\",\n            \"gemini-3-flash-preview\",\n            \"gemini-3-pro-preview\",\n            \"gpt-5.4-2026-03-05\",\n        ]:\n            assert name in MODEL_PRICING, f\"missing pricing for {name}\"\n\n    def test_unknown_model_returns_none(self) -> None:\n        assert MODEL_PRICING.get(\"nonexistent-model\") is None\n\n    def test_anthropic_has_cache_write_rate(self) -> None:\n        for name in [\"claude-opus-4-6\", \"claude-sonnet-4-6\"]:\n            assert MODEL_PRICING[name].cache_write > 0\n\n    def test_openai_gemini_no_cache_write(self) -> None:\n        for name in [\"gpt-4.1-2025-04-14\", \"gemini-3-flash-preview\"]:\n            assert MODEL_PRICING[name].cache_write == 0.0\n"
  },
  {
    "path": "backend/utils.py",
    "content": "import copy\nimport json\nimport textwrap\nfrom typing import List\nfrom openai.types.chat import ChatCompletionMessageParam\n\n\ndef pprint_prompt(prompt_messages: List[ChatCompletionMessageParam]):\n    print(json.dumps(truncate_data_strings(prompt_messages), indent=4))\n\n\ndef format_prompt_summary(prompt_messages: List[ChatCompletionMessageParam], truncate: bool = True) -> str:\n    parts: list[str] = []\n    for message in prompt_messages:\n        role = message[\"role\"]\n        content = message.get(\"content\")\n        text = \"\"\n        image_count = 0\n\n        if isinstance(content, list):\n            for item in content:\n                if item[\"type\"] == \"text\":\n                    text += item[\"text\"] + \" \"\n                elif item[\"type\"] == \"image_url\":\n                    image_count += 1\n        else:\n            text = str(content)\n\n        text = text.strip()\n        if truncate and len(text) > 40:\n            text = text[:40] + \"...\"\n\n        img_part = f\" + [{image_count} images]\" if image_count else \"\"\n        parts.append(f\"  {role.upper()}: {text}{img_part}\")\n\n    return \"\\n\".join(parts)\n\n\ndef print_prompt_summary(prompt_messages: List[ChatCompletionMessageParam], truncate: bool = True):\n    summary = format_prompt_summary(prompt_messages, truncate)\n    lines = summary.split('\\n')\n    \n    # Find the maximum line length, with a minimum of 20\n    # If truncating, max is 80, otherwise allow up to 120 for full content\n    max_allowed = 80 if truncate else 120\n    max_length = max(len(line) for line in lines) if lines else 20\n    max_length = max(20, min(max_allowed, max_length))\n    \n    # Ensure title fits\n    title = \"PROMPT SUMMARY\"\n    max_length = max(max_length, len(title) + 4)\n    \n    print(\"┌─\" + \"─\" * max_length + \"─┐\")\n    title_padding = (max_length - len(title)) // 2\n    print(f\"│ {' ' * title_padding}{title}{' ' * (max_length - len(title) 
- title_padding)} │\")\n    print(\"├─\" + \"─\" * max_length + \"─┤\")\n    \n    for line in lines:\n        if len(line) <= max_length:\n            print(f\"│ {line:<{max_length}} │\")\n        else:\n            # Wrap long lines\n            words = line.split()\n            current_line = \"\"\n            for word in words:\n                if len(current_line + \" \" + word) <= max_length:\n                    current_line += (\" \" + word) if current_line else word\n                else:\n                    if current_line:\n                        print(f\"│ {current_line:<{max_length}} │\")\n                    current_line = word\n            if current_line:\n                print(f\"│ {current_line:<{max_length}} │\")\n    \n    print(\"└─\" + \"─\" * max_length + \"─┘\")\n    print()\n\n\ndef _collapse_preview_text(text: str, max_chars: int = 280) -> str:\n    trimmed = text.strip()\n    if not trimmed:\n        return \"(no text)\"\n\n    looks_like_code = any(\n        marker in trimmed for marker in (\"```\", \"<html\", \"<!DOCTYPE\", \"function \", \"class \")\n    )\n    normalized = trimmed if looks_like_code else \" \".join(trimmed.split())\n\n    if len(normalized) <= max_chars:\n        return normalized\n\n    head_len = max_chars // 2\n    tail_len = max_chars // 4\n    head = normalized[:head_len].rstrip()\n    tail = normalized[-tail_len:].lstrip()\n    omitted = max(0, len(normalized) - len(head) - len(tail))\n    return f\"{head} ... [collapsed {omitted} chars] ... 
{tail}\"\n\n\ndef format_prompt_preview(\n    prompt_messages: List[ChatCompletionMessageParam],\n    max_chars_per_message: int = 280,\n) -> str:\n    parts: list[str] = []\n    for idx, message in enumerate(prompt_messages):\n        role = str(message.get(\"role\", \"unknown\")).upper()\n        content = message.get(\"content\")\n        text_chunks: list[str] = []\n        media_count = 0\n\n        if isinstance(content, list):\n            for item in content:\n                if not isinstance(item, dict):\n                    continue\n                item_type = item.get(\"type\")\n                if item_type == \"image_url\":\n                    media_count += 1\n                elif item_type == \"text\":\n                    item_text = item.get(\"text\")\n                    if isinstance(item_text, str):\n                        text_chunks.append(item_text)\n        else:\n            text_chunks.append(\"\" if content is None else str(content))\n\n        preview_text = _collapse_preview_text(\n            \"\\n\".join(text_chunks), max_chars=max_chars_per_message\n        )\n        media_suffix = f\" [{media_count} media]\" if media_count else \"\"\n        parts.append(f\"{idx + 1}. 
{role}{media_suffix}\")\n        wrapped_preview = textwrap.wrap(\n            preview_text, width=100, break_long_words=False, break_on_hyphens=False\n        )\n        if wrapped_preview:\n            for line in wrapped_preview:\n                parts.append(f\"   {line}\")\n        else:\n            parts.append(\"   (no text)\")\n\n    return \"\\n\".join(parts)\n\n\ndef print_prompt_preview(prompt_messages: List[ChatCompletionMessageParam]) -> None:\n    preview = format_prompt_preview(prompt_messages)\n    lines = preview.split(\"\\n\")\n    max_length = max(len(line) for line in lines) if lines else 20\n    max_length = max(20, min(120, max_length))\n\n    title = \"PROMPT PREVIEW\"\n    max_length = max(max_length, len(title) + 4)\n\n    print(\"┌─\" + \"─\" * max_length + \"─┐\")\n    title_padding = (max_length - len(title)) // 2\n    print(\n        f\"│ {' ' * title_padding}{title}{' ' * (max_length - len(title) - title_padding)} │\"\n    )\n    print(\"├─\" + \"─\" * max_length + \"─┤\")\n\n    for line in lines:\n        if len(line) <= max_length:\n            print(f\"│ {line:<{max_length}} │\")\n        else:\n            wrapped = textwrap.wrap(\n                line, width=max_length, break_long_words=False, break_on_hyphens=False\n            )\n            for wrapped_line in wrapped:\n                print(f\"│ {wrapped_line:<{max_length}} │\")\n\n    print(\"└─\" + \"─\" * max_length + \"─┘\")\n    print()\n\n\ndef truncate_data_strings(data: List[ChatCompletionMessageParam]):  # type: ignore\n    # Deep clone the data to avoid modifying the original object\n    cloned_data = copy.deepcopy(data)\n\n    if isinstance(cloned_data, dict):\n        for key, value in cloned_data.items():  # type: ignore\n            # Recursively call the function if the value is a dictionary or a list\n            if isinstance(value, (dict, list)):\n                cloned_data[key] = truncate_data_strings(value)  # type: ignore\n            # Truncate the 
string if it's long, appending an ellipsis and the original length\n            elif isinstance(value, str):\n                cloned_data[key] = value[:40]  # type: ignore\n                if len(value) > 40:\n                    cloned_data[key] += \"...\" + f\" ({len(value)} chars)\"  # type: ignore\n\n    elif isinstance(cloned_data, list):  # type: ignore\n        # Process each item in the list\n        cloned_data = [truncate_data_strings(item) for item in cloned_data]  # type: ignore\n\n    return cloned_data  # type: ignore\n"
  },
  {
    "path": "backend/video/__init__.py",
    "content": "from video.cost_estimation import (\n    CostEstimate,\n    MediaResolution,\n    TokenEstimate,\n    calculate_cost,\n    estimate_video_generation_cost,\n    estimate_video_input_tokens,\n    format_cost_estimate,\n    get_video_duration_from_bytes,\n)\nfrom video.utils import (\n    extract_tag_content,\n    get_video_bytes_and_mime_type,\n)\n\n__all__ = [\n    # Cost estimation\n    \"CostEstimate\",\n    \"MediaResolution\",\n    \"TokenEstimate\",\n    \"calculate_cost\",\n    \"estimate_video_generation_cost\",\n    \"estimate_video_input_tokens\",\n    \"format_cost_estimate\",\n    \"get_video_duration_from_bytes\",\n    # Video utilities\n    \"extract_tag_content\",\n    \"get_video_bytes_and_mime_type\",\n]\n"
  },
  {
    "path": "backend/video/cost_estimation.py",
    "content": "from dataclasses import dataclass\nfrom enum import Enum\nfrom typing import Tuple\nfrom llm import Llm\n\n\nclass MediaResolution(Enum):\n    LOW = \"low\"\n    MEDIUM = \"medium\"\n    HIGH = \"high\"\n\n\n@dataclass\nclass TokenEstimate:\n    input_tokens: int\n    estimated_output_tokens: int\n    total_tokens: int\n\n\n@dataclass\nclass CostEstimate:\n    input_cost: float\n    output_cost: float\n    total_cost: float\n    input_tokens: int\n    output_tokens: int\n\n\n# Gemini 3 video token rates per frame (from documentation)\n# https://ai.google.dev/gemini-api/docs/media-resolution#token-counts\nVIDEO_TOKENS_PER_FRAME = {\n    MediaResolution.LOW: 70,\n    MediaResolution.MEDIUM: 70,\n    MediaResolution.HIGH: 280,\n}\n\n# Prompt overhead (system prompt + user text)\nPROMPT_TOKENS_ESTIMATE = 1200\n\n# Pricing per million tokens (USD)\n# https://ai.google.dev/gemini-api/docs/pricing\nGEMINI_PRICING = {\n    # Gemini 3 Flash Preview\n    \"gemini-3-flash-preview\": {\n        \"input_per_million\": 0.50,  # text/image/video\n        \"output_per_million\": 3.00,  # including thinking tokens\n    },\n    # Gemini 3 Pro Preview\n    \"gemini-3.1-pro-preview\": {\n        \"input_per_million\": 2.00,\n        \"output_per_million\": 12.00,\n    },\n}\n\n\ndef get_model_api_name(model: Llm) -> str:\n    if model in [Llm.GEMINI_3_FLASH_PREVIEW_HIGH, Llm.GEMINI_3_FLASH_PREVIEW_MINIMAL]:\n        return \"gemini-3-flash-preview\"\n    elif model in [\n        Llm.GEMINI_3_1_PRO_PREVIEW_HIGH,\n        Llm.GEMINI_3_1_PRO_PREVIEW_MEDIUM,\n        Llm.GEMINI_3_1_PRO_PREVIEW_LOW,\n    ]:\n        return \"gemini-3.1-pro-preview\"\n    return model.value\n\n\ndef estimate_video_input_tokens(\n    video_duration_seconds: float,\n    fps: float = 1.0,\n    media_resolution: MediaResolution = MediaResolution.HIGH,\n) -> int:\n    # Calculate frames based on fps\n    total_frames = video_duration_seconds * fps\n    tokens_per_frame = 
VIDEO_TOKENS_PER_FRAME[media_resolution]\n    frame_tokens = int(total_frames * tokens_per_frame)\n\n    # Add prompt overhead (system prompt + user text)\n    total_tokens = frame_tokens + PROMPT_TOKENS_ESTIMATE\n\n    return total_tokens\n\n\ndef estimate_output_tokens(\n    max_output_tokens: int = 50000,\n    thinking_level: str = \"high\",\n) -> int:\n    # Rough estimation: thinking takes up a significant portion of output,\n    # so higher thinking levels tend to use more of the output budget.\n    # High thinking: ~60-70% of output may be thinking tokens\n    # Low thinking: ~30-40% of output may be thinking tokens\n    # Minimal thinking: ~10-20% of output may be thinking tokens\n    thinking_multipliers = {\n        \"high\": 0.7,\n        \"low\": 0.5,\n        \"minimal\": 0.3,\n    }\n\n    multiplier = thinking_multipliers.get(thinking_level, 0.5)\n\n    # Assume average usage of the output budget scales with the thinking level\n    estimated_usage = int(max_output_tokens * multiplier)\n\n    return estimated_usage\n\n\ndef calculate_cost(\n    input_tokens: int,\n    output_tokens: int,\n    model: Llm,\n) -> CostEstimate:\n    model_name = get_model_api_name(model)\n    pricing = GEMINI_PRICING.get(model_name, GEMINI_PRICING[\"gemini-3-flash-preview\"])\n\n    input_cost = (input_tokens / 1_000_000) * pricing[\"input_per_million\"]\n    output_cost = (output_tokens / 1_000_000) * pricing[\"output_per_million\"]\n    total_cost = input_cost + output_cost\n\n    return CostEstimate(\n        input_cost=input_cost,\n        output_cost=output_cost,\n        total_cost=total_cost,\n        input_tokens=input_tokens,\n        output_tokens=output_tokens,\n    )\n\n\ndef estimate_video_generation_cost(\n    video_duration_seconds: float,\n    model: Llm,\n    fps: float = 1.0,\n    media_resolution: MediaResolution = MediaResolution.HIGH,\n    max_output_tokens: int = 50000,\n    thinking_level: str = \"high\",\n) -> CostEstimate:\n    input_tokens = estimate_video_input_tokens(\n        video_duration_seconds=video_duration_seconds,\n        
fps=fps,\n        media_resolution=media_resolution,\n    )\n\n    output_tokens = estimate_output_tokens(\n        max_output_tokens=max_output_tokens,\n        thinking_level=thinking_level,\n    )\n\n    return calculate_cost(\n        input_tokens=input_tokens,\n        output_tokens=output_tokens,\n        model=model,\n    )\n\n\ndef format_cost_estimate(cost: CostEstimate) -> str:\n    return (\n        f\"Estimated Cost:\\n\"\n        f\"  Input tokens: {cost.input_tokens:,} (${cost.input_cost:.4f})\\n\"\n        f\"  Output tokens: {cost.output_tokens:,} (${cost.output_cost:.4f})\\n\"\n        f\"  Total estimated cost: ${cost.total_cost:.4f}\"\n    )\n\n\ndef format_detailed_input_estimate(\n    video_duration_seconds: float,\n    fps: float,\n    media_resolution: MediaResolution,\n    model: Llm,\n) -> str:\n    total_frames = video_duration_seconds * fps\n    tokens_per_frame = VIDEO_TOKENS_PER_FRAME[media_resolution]\n    frame_tokens = int(total_frames * tokens_per_frame)\n    total_input_tokens = frame_tokens + PROMPT_TOKENS_ESTIMATE\n\n    model_name = get_model_api_name(model)\n    pricing = GEMINI_PRICING.get(model_name, GEMINI_PRICING[\"gemini-3-flash-preview\"])\n    input_cost = (total_input_tokens / 1_000_000) * pricing[\"input_per_million\"]\n\n    return (\n        f\"Input Token Calculation:\\n\"\n        f\"  Frames: {video_duration_seconds:.2f}s × {fps} fps = {total_frames:.1f} frames\\n\"\n        f\"  Frame tokens: {total_frames:.1f} × {tokens_per_frame} tokens/frame = {frame_tokens:,} tokens\\n\"\n        f\"  Prompt overhead: {PROMPT_TOKENS_ESTIMATE:,} tokens\\n\"\n        f\"  Total input: {total_input_tokens:,} tokens\\n\"\n        f\"  Cost: {total_input_tokens:,} ÷ 1M × ${pricing['input_per_million']:.2f} = ${input_cost:.4f}\"\n    )\n\n\ndef get_video_duration_from_bytes(video_bytes: bytes) -> float | None:\n    try:\n        import tempfile\n        import os\n        from moviepy.editor import VideoFileClip\n\n        # Write 
bytes to a temporary file\n        with tempfile.NamedTemporaryFile(suffix=\".mp4\", delete=False) as tmp_file:\n            tmp_file.write(video_bytes)\n            tmp_path = tmp_file.name\n\n        try:\n            clip = VideoFileClip(tmp_path)\n            duration = clip.duration\n            clip.close()\n            return duration\n        finally:\n            os.unlink(tmp_path)\n\n    except Exception as e:\n        print(f\"Error getting video duration: {e}\")\n        return None\n"
  },
  {
    "path": "backend/video/utils.py",
    "content": "import base64\n\n\ndef extract_tag_content(tag: str, text: str) -> str:\n    \"\"\"\n    Extracts content for a given tag from the provided text.\n\n    :param tag: The tag to search for.\n    :param text: The text to search within.\n    :return: The content found within the tag, if any.\n    \"\"\"\n    tag_start = f\"<{tag}>\"\n    tag_end = f\"</{tag}>\"\n    start_idx = text.find(tag_start)\n    end_idx = text.find(tag_end, start_idx)\n    if start_idx != -1 and end_idx != -1:\n        return text[start_idx : end_idx + len(tag_end)]\n    return \"\"\n\n\ndef get_video_bytes_and_mime_type(video_data_url: str) -> tuple[bytes, str]:\n    video_encoded_data = video_data_url.split(\",\")[1]\n    video_bytes = base64.b64decode(video_encoded_data)\n    mime_type = video_data_url.split(\";\")[0].split(\":\")[1]\n    return video_bytes, mime_type\n"
  },
  {
    "path": "backend/ws/__init__.py",
    "content": ""
  },
  {
    "path": "backend/ws/constants.py",
    "content": "# WebSocket protocol (RFC 6455) allows for the use of custom close codes in the range 4000-4999\nAPP_ERROR_WEB_SOCKET_CODE = 4332\n"
  },
  {
    "path": "blog/evaluating-claude.md",
    "content": "# Claude 3 for converting screenshots to code\n\nClaude 3 dropped yesterday, claiming to rival GPT-4 on a wide variety of tasks. I maintain a very popular open source project called “screenshot-to-code” (this one!) that uses GPT-4 vision to convert screenshots/designs into clean code. Naturally, I was excited to see how good Claude 3 was at this task.\n\n**TLDR:** Claude 3 is on par with GPT-4 vision for screenshot to code, better in some ways but worse in others.\n\n## Evaluation Setup\n\nI don’t know of a public benchmark for “screenshot to code” so I created simple evaluation setup for the purposes of testing:\n\n- **Evaluation Dataset**: 16 screenshots with a mix of UI elements, landing pages, dashboards and popular websites.\n<img width=\"784\" alt=\"Screenshot 2024-03-05 at 3 05 52 PM\" src=\"https://github.com/abi/screenshot-to-code/assets/23818/c32af2db-eb5a-44c1-9a19-2f0c3dd11ab4\">\n\n- **Evaluation Metric**: Replication accuracy, as in “How close does the generated code look to the screenshot?” While there are other metrics that are important like code quality, speed and so on, this is by far the #1 thing most users of the repo care about.\n- **Evaluation Mechanism**: Each output is subjectively rated by a human on a rating scale from 0 to 4. 4 = very close to an exact replica while 0 = nothing like the screenshot. With 16 screenshots, the maximum any model can score is 64.\n\n\nTo make the evaluation process easy, I created [a Python script](https://github.com/abi/screenshot-to-code/blob/main/backend/run_evals.py) that runs code for all the inputs in parallel. 
I also made a simple UI to do a side-by-side comparison of the input and output.\n\n![Google Chrome](https://github.com/abi/screenshot-to-code/assets/23818/38126f8f-205d-4ed1-b8cf-039e81dcc3d0)\n\n\n## Results\n\nQuick note about what kind of code we’ll be generating: currently, screenshot-to-code supports generating code in HTML + Tailwind, React, Vue, and several other frameworks. Stacks can impact the replication accuracy quite a bit. For example, because Bootstrap uses a relatively restrictive set of user elements, generations using Bootstrap tend to have a distinct \"Bootstrap\" style.\n\nI only ran the evals on HTML/Tailwind here which is the stack where GPT-4 vision tends to perform the best.\n\nHere are the results (average of 3 runs for each model):\n\n- GPT-4 Vision obtains a score of **65.10%** - this is what we’re trying to beat\n- Claude 3 Sonnet receives a score of **70.31%**, which is a bit better.\n- Surprisingly, Claude 3 Opus which is supposed to be the smarter and slower model scores worse than both GPT-4 vision and Claude 3 Sonnet, comes in at **61.46%**.\n\nOverall, a very strong showing for Claude 3. Obviously, there's a lot of subjectivity involved in this evaluation but Claude 3 is definitely on par with GPT-4 Vision, if not better.\n\nYou can see the [side-by-side comparison for a run of Claude 3 Sonnet here](https://github.com/abi/screenshot-to-code-files/blob/main/sonnet%20results.png). And for [a run of GPT-4 Vision here](https://github.com/abi/screenshot-to-code-files/blob/main/gpt%204%20vision%20results.png).\n\nOther notes:\n\n- The prompts used are optimized for GPT-4 vision. Adjusting the prompts a bit for Claude did yield a small improvement. But nothing game-changing and potentially not worth the trade-off of maintaining two sets of prompts.\n- All the models excel at code quality - the quality is usually comparable to a human or better.\n- Claude 3 is much less lazy than GPT-4 Vision. 
When asked to recreate Hacker News, GPT-4 Vision will only create two items in the list and leave comments in this code like `<!-- Repeat for each news item -->` and `<!-- ... other news items ... -->`.\n<img width=\"699\" alt=\"Screenshot 2024-03-05 at 9 25 04 PM\" src=\"https://github.com/abi/screenshot-to-code/assets/23818/04b03155-45e0-40b0-8de0-b1f0b4382bee\">\n\nWhile Claude 3 Sonnet can sometimes be lazy too, most of the time, it does what you ask it to do.\n\n<img width=\"904\" alt=\"Screenshot 2024-03-05 at 9 30 23 PM\" src=\"https://github.com/abi/screenshot-to-code/assets/23818/b7c7d1ba-47c1-414d-928f-6989e81cf41d\">\n\n- For some reason, all the models struggle with side-by-side \"flex\" layouts\n<img width=\"1090\" alt=\"Screenshot 2024-03-05 at 9 20 58 PM\" src=\"https://github.com/abi/screenshot-to-code/assets/23818/8957bb3a-da66-467d-997d-1c7cc24e6d9a\">\n\n- Claude 3 Sonnet is a lot faster\n- Claude 3 gets background and text colors wrong quite often! (like in the Hacker News image above)\n- My suspicion is that Claude 3 Opus results can be improved to be on par with the other models through better prompting\n  \nOverall, I'm very impressed with Claude 3 Sonnet for this use case. I've added it as an alternative to GPT-4 Vision in the open source repo (hosted version update coming soon).\n\nIf you’d like to contribute to this effort, I have some documentation on [running these evals yourself here](https://github.com/abi/screenshot-to-code/blob/main/Evaluation.md). I'm also working on a better evaluation mechanism with Elo ratings and would love some help on that.\n"
  },
  {
    "path": "design-docs/agent-tool-calling-flow.md",
    "content": "# Agent Tool-Calling Flow (Backend)\n\nThis document explains exactly what happens after prompt messages are built and a variant starts running in the agent.\n\n## Entry Point\n\nPer variant, `Agent(...).run(model, prompt_messages)` is called from:\n\n- `backend/routes/generate_code.py` (`AgenticGenerationStage._run_variant`)\n\n`Agent` is a thin wrapper over `AgentEngine`:\n\n- `backend/agent/runner.py`\n- `backend/agent/engine.py`\n\n## Core Tool-Calling Loop\n\nThe main loop lives in:\n\n- `backend/agent/engine.py` -> `AgentEngine._run_with_session(...)`\n\nLoop behavior:\n\n1. Start turn-local streaming state.\n- Create event IDs for assistant/thinking streams.\n- Initialize:\n  - `started_tool_ids`\n  - `streamed_lengths`\n\n2. Stream one provider turn.\n- Call `turn = await session.stream_turn(on_event)`\n- `on_event` handles streamed deltas:\n  - `assistant_delta` -> websocket `assistant`\n  - `thinking_delta` -> websocket `thinking`\n  - `tool_call_delta` -> `_handle_streamed_tool_delta(...)`\n\n3. Branch by tool calls.\n- If `turn.tool_calls` is empty: finalize and return.\n- Otherwise execute each tool call, emit tool lifecycle messages, and collect results.\n\n4. Continue conversation with tool results.\n- Call `session.append_tool_results(turn, executed_tool_calls)`\n- Next loop iteration sends another model turn with updated history.\n\n5. Guardrail.\n- Maximum 20 tool turns; raises if exceeded.\n\n## Tool Execution\n\nTool runtime:\n\n- `backend/agent/tools/runtime.py` -> `AgentToolRuntime.execute(...)`\n\nTool definitions:\n\n- `backend/agent/tools/definitions.py` -> `canonical_tool_definitions(...)`\n\nSupported tools:\n\n- `create_file`\n- `edit_file`\n- `generate_images`\n- `remove_background`\n- `retrieve_option`\n\nExecution lifecycle per tool call:\n\n1. Emit `toolStart` (unless already emitted from streamed args).\n2. If `create_file`, stream preview code chunks while args are still arriving.\n3. Execute tool in runtime.\n4. 
If tool returns `updated_content`, emit `setCode`.\n5. Emit `toolResult` with `{ name, output, ok }`.\n\n### Live streamed `create_file` preview\n\nThe engine parses partial tool arguments from provider deltas using:\n\n- `backend/agent/tools/parsing.py`:\n  - `extract_content_from_args(...)`\n  - `extract_path_from_args(...)`\n\nThen `_handle_streamed_tool_delta(...)` in `engine.py`:\n\n- Emits early `toolStart` for `create_file`\n- Emits incremental `setCode` updates as `content` grows\n\nThis allows frontend preview before actual tool execution completes.\n\n## Provider-Specific Continuation\n\nProvider contract:\n\n- `backend/agent/providers/base.py`\n  - `ProviderSession`\n  - `ProviderTurn`\n\nEach provider returns a `ProviderTurn` with:\n\n- `assistant_text`\n- `tool_calls`\n- `assistant_turn` (provider-native turn object needed for continuation)\n\nAfter tool execution, each provider appends tool results differently.\n\n### OpenAI continuation\n\n- `backend/agent/providers/openai.py` -> `OpenAIProviderSession.append_tool_results(...)`\n\nBehavior:\n\n1. Append prior assistant output items (`turn.assistant_turn`) to request history.\n2. Append one `function_call_output` per tool result:\n- `{\"type\":\"function_call_output\",\"call_id\":...,\"output\": json_string}`\n\nNext `responses.create(...)` turn uses this updated item list.\n\n### Anthropic continuation\n\n- `backend/agent/providers/anthropic.py` -> `AnthropicProviderSession.append_tool_results(...)`\n\nBehavior:\n\n1. Append assistant message blocks:\n- optional text block\n- tool_use blocks (`id`, `name`, `input`)\n2. Append user message with tool_result blocks:\n- `tool_use_id`, serialized result content, `is_error`\n\nNext `messages.stream(...)` turn continues from these blocks.\n\n### Gemini continuation\n\n- `backend/agent/providers/gemini.py` -> `GeminiProviderSession.append_tool_results(...)`\n\nBehavior:\n\n1. Append exact original model content (`turn.assistant_turn`).\n2. 
Append `role=\"tool\"` content with `Part.from_function_response(...)` per tool.\n\nThis preserves the model part structure required for reliable continuation (including thought-signature-sensitive flows).\n\n## Response Streaming to Frontend\n\nFrontend websocket message types emitted during generation:\n\n- `assistant`\n- `thinking`\n- `toolStart`\n- `toolResult`\n- `setCode`\n\nWhere they come from:\n\n1. Provider parser emits `StreamEvent` deltas during `stream_turn(...)`.\n2. Engine forwards deltas immediately via `send_message(...)`.\n3. Tool execution adds explicit lifecycle events and code updates.\n\nTypical per-turn stream sequence:\n\n1. thinking/assistant deltas\n2. tool call deltas (optional)\n3. `toolStart`\n4. `setCode` previews (for `create_file`, optional)\n5. `toolResult`\n6. next model turn starts, repeat\n\nFinalization:\n\n- If no more tool calls, engine returns final code from in-memory file state.\n- If file state is empty, engine tries HTML extraction from final assistant text.\n\n## Module Map\n\n- Engine orchestration: `backend/agent/engine.py`\n- Agent entrypoint: `backend/agent/runner.py`\n- Provider factory: `backend/agent/providers/factory.py`\n- Provider contract: `backend/agent/providers/base.py`\n- Provider implementations:\n  - `backend/agent/providers/openai.py`\n  - `backend/agent/providers/anthropic.py`\n  - `backend/agent/providers/gemini.py`\n- Tool system:\n  - `backend/agent/tools/definitions.py`\n  - `backend/agent/tools/runtime.py`\n  - `backend/agent/tools/parsing.py`\n  - `backend/agent/tools/summaries.py`\n"
  },
  {
    "path": "design-docs/agentic-runner-refactor.md",
    "content": "# Agentic Runner Refactor Spec\n\n## Goals\n- Reduce duplicated streaming logic across OpenAI/Anthropic/Gemini runners.\n- Centralize tool schemas and telemetry formatting.\n- Make the agent pipeline easier to test, extend, and reason about.\n\n## Decision: Unified Stream Loop + Provider Adapters\n- Introduce a provider-agnostic stream loop that consumes normalized events\n  (assistant_delta, thinking_delta, tool_call_delta, tool_call_complete, done).\n- Add per-provider adapters that translate native streams into normalized events.\n- Centralize tool execution, `toolStart`/`toolResult` emission, and `setCode` preview\n  streaming inside the unified loop.\n- Keep per-provider adapters small and focused on parsing provider-specific payloads.\n\n## Decision: Canonical Tool Definitions + Serializer Layer\n- Define tool schemas once in a canonical representation.\n- Add serializer helpers to produce OpenAI Responses, Anthropic, and Gemini tool\n  schemas from the canonical form.\n- Centralize tool input/output summaries to keep UI telemetry consistent across\n  providers and reduce duplication.\n\n## Planned Removals\n\n### Image Cache\n- Remove prompt-to-URL image caching in the agent tool layer.\n- Rationale: simplify state, reduce hidden cross-variant coupling.\n- Follow-up: ensure image generation remains deterministic per request when needed\n  (e.g., pass explicit seeds or expose caching at a higher layer if required).\n\n### OpenAI ChatCompletion Path\n- Remove the legacy ChatCompletion streaming path.\n- Route all OpenAI models through the Responses API implementation.\n- Update model lists and runtime checks to eliminate the ChatCompletion branch.\n\n### Non-Agentic Generation Paths (e.g., Video)\nKeep video generation, but route it through the agent runner:\n- Replace video-specific streaming helpers with agent runner support for video inputs.\n- Remove conditional branches that bypass the agent path for video create/update.\n- Preserve 
video-specific prompt and media handling, but integrate it into the\n  agent tool/stream pipeline.\n- Update tests and docs to reflect a single agent generation path that\n  supports video inputs.\n\n## File/Module Split\n- `agent/runner.py`: orchestration + shared stream loop.\n- `agent/providers/`: provider adapters (openai, responses, anthropic, gemini).\n- `agent/tools.py`: tool definitions, serialization, and execution.\n- `agent/state.py`: file state + seeding utilities.\n\n## Non-Goals\n- No functional UX changes beyond the removal items above.\n- No redesign of the frontend agent activity UI; it should continue to consume\n  the same tool/assistant/thinking events.\n"
  },
  {
    "path": "design-docs/commits-and-variants.md",
    "content": "# Commits and Non-Blocking Variants\n\nThis document explains how the commit system and non-blocking variant generation work in screenshot-to-code.\n\n## Commit System\n\n### What are Commits?\n\nCommits represent discrete versions in the application's history. Each commit contains:\n\n- **Hash**: Unique identifier (generated using `nanoid()`)\n- **Parent Hash**: Links to previous commit for history tracking\n- **Variants**: Multiple code generation options (typically 2)\n- **Selected Variant**: Which variant the user is currently viewing\n- **Status**: Whether the commit is still being edited (`isCommitted: false`) or finalized (`isCommitted: true`)\n\n### Commit Types\n\n```typescript\ntype CommitType = \"ai_create\" | \"ai_edit\" | \"code_create\";\n```\n\n- **ai_create**: Initial generation from screenshot/video\n- **ai_edit**: Updates based on user instructions\n- **code_create**: Import from existing code\n\n### Data Structure\n\n```typescript\ntype Commit = {\n  hash: CommitHash;\n  parentHash: CommitHash | null;\n  dateCreated: Date;\n  isCommitted: boolean;\n  variants: Variant[];\n  selectedVariantIndex: number;\n  type: CommitType;\n  inputs: any; // Type-specific inputs\n}\n\ntype Variant = {\n  code: string;\n  status: VariantStatus;\n}\n\ntype VariantStatus = \"generating\" | \"complete\" | \"cancelled\";\n```\n\n### Storage and Management\n\nCommits are stored in the project store as a flat record:\n\n```typescript\ncommits: Record<CommitHash, Commit>\nhead: CommitHash | null  // Current active commit\n```\n\nThe `head` pointer tracks which commit is currently active. History is reconstructed by following `parentHash` links.\n\n## Non-Blocking Variants\n\n### Traditional Variant Generation (Before)\n\n```\nStart Generation → Wait for ALL variants → Show results\nUser Experience: [Loading...........................] 
→ Ready\n```\n\nProblems:\n- Users wait for the slowest variant\n- No interaction until everything completes\n- Poor perceived performance\n\n### Non-Blocking Variant Generation (After)\n\n```\nStart Generation → Show results as each variant completes\nUser Experience: [Loading.....] → Ready (Option 1)\n                 [Loading..........] → Ready (Option 2)\n```\n\nBenefits:\n- Immediate interaction when first variant completes\n- Can switch between completed variants while others generate\n- Significantly improved perceived performance\n\n### Implementation Overview\n\n#### Frontend Changes\n\n**App.tsx**: Enhanced event handling\n```typescript\n// New WebSocket events\nonVariantComplete: (variantIndex) => {\n  updateVariantStatus(commit.hash, variantIndex, 'complete');\n}\nonVariantError: (variantIndex, error) => {\n  updateVariantStatus(commit.hash, variantIndex, 'cancelled');\n}\n```\n\n**Sidebar.tsx**: Dual-condition UI\n```typescript\n// Show update UI when either condition is true\n{(appState === AppState.CODE_READY || isSelectedVariantComplete) && (\n  <UpdateInterface />\n)}\n```\n\n**Variants.tsx**: Real-time status indicators\n- Green dot: Complete variants\n- Red dot: Cancelled variants  \n- Spinner: Currently generating variants\n\n#### Backend Changes\n\n**generate_code.py**: Independent variant processing\n```python\n# Process each variant independently\nasync def process_variant_completion(index: int, task: asyncio.Task):\n    completion = await task  # Wait for THIS variant only\n    \n    # Process images immediately\n    processed_html = await perform_image_generation(...)\n    \n    # Send to frontend immediately\n    await send_message(\"setCode\", processed_html, index)\n    await send_message(\"variantComplete\", \"Variant generation complete\", index)\n```\n\n### State Management\n\n#### App State vs Variant Status\n\nThe system uses a **hybrid state approach**:\n\n- **AppState**: Global generation status (`INITIAL` → `CODING` → 
`CODE_READY`)\n- **Variant Status**: Individual variant status (`generating` → `complete`/`cancelled`)\n\n#### UI Logic\n\n```typescript\n// UI shows update interface when either:\nconst canUpdate = \n  appState === AppState.CODE_READY ||           // All variants done\n  isSelectedVariantComplete;                    // Selected variant done\n\n// User can interact immediately when their selected variant completes\n```\n\n### WebSocket Protocol\n\n#### Events from Backend\n\n```typescript\ntype WebSocketResponse = {\n  type: \"chunk\" | \"status\" | \"setCode\" | \"variantComplete\" | \"variantError\";\n  value: string;\n  variantIndex: number;\n}\n```\n\n- **chunk**: Streaming code content during generation\n- **status**: Status updates (e.g., \"Generating images...\")\n- **setCode**: Final code for a variant\n- **variantComplete**: Variant finished successfully\n- **variantError**: Variant failed with error\n\n#### Event Flow\n\n```\nBackend: Generate Variant 1 → \"setCode\" → \"variantComplete\"\nFrontend: Update UI → Allow interaction\n\nBackend: Generate Variant 2 → \"setCode\" → \"variantComplete\"  \nFrontend: Update UI → User can switch to this variant\n\nBackend: Generate Variant 3 → \"variantError\"\nFrontend: Show error → Mark as cancelled\n```\n\n### User Experience Flow\n\n1. **User starts generation**\n   - All variants marked as `status: \"generating\"`\n   - UI shows loading state with spinners\n\n2. **First variant completes**\n   - Receives `variantComplete` event\n   - Status updated to `\"complete\"`\n   - If this is the selected variant → UI immediately allows updates\n   - User can start editing while other variants generate\n\n3. **User switches variants**\n   - Can switch to any completed variant immediately\n   - Can switch to generating variants (will show loading until complete)\n\n4. **User starts update**\n   - Automatically cancels all other generating variants\n   - Prevents wasted computation\n\n### Benefits\n\n1. 
**Perceived Performance**: Users see results 2-3x faster\n2. **Parallel Processing**: Multiple models generate simultaneously\n3. **Flexible Interaction**: Switch between ready options while others work\n4. **Resource Efficiency**: Cancel unused variants when user makes changes\n5. **Graceful Degradation**: System works even if some variants fail\n\n### Technical Considerations\n\n#### Variant Cancellation\n\nWhen users start updates, other generating variants are cancelled:\n\n```typescript\n// Cancel generating variants when user updates\ncurrentCommit.variants.forEach((variant, index) => {\n  if (index !== selectedVariantIndex && variant.status === 'generating') {\n    wsRef.current.send(JSON.stringify({\n      type: \"cancel_variant\",\n      variantIndex: index\n    }));\n  }\n});\n```\n\n#### Error Handling\n\nEach variant handles errors independently:\n- Failed variants don't block successful ones\n- Users see specific error messages per variant\n- System remains functional if some variants fail\n\n#### WebSocket Lifecycle\n\n- New generations replace previous WebSocket connections\n- Previous connections are closed to prevent resource leaks\n- Backend handles connection state checking before sending messages\n\nThis architecture enables a responsive, non-blocking user experience while maintaining system reliability and resource efficiency.\n"
  },
  {
    "path": "design-docs/general.md",
    "content": "## Input mode\n\n- Input mode is used for model selection (but it shouldn’t be really? I don’t know)\n  - Model selection\n  - Prompt selection\n\n## Models\n\nCurrent Gemini client only uses messages[0] - doesn't support conversation history like OpenAI/Claude clients\n"
  },
  {
    "path": "design-docs/images-in-update-history.md",
    "content": "# Images in Update History\n\n## Status: ✅ IMPLEMENTED\n\nMultiple images in update history are fully supported in the backend.\n\n## Implementation\n\n### Core Function\n- `create_message_from_history_item()` in `prompts/__init__.py` handles image processing\n- User messages with `images` array create multipart content (images + text)\n- Assistant messages remain text-only (code)\n- Empty `images` arrays gracefully fallback to text-only\n\n### Supported Flows\n- ✅ Regular updates with images\n- ✅ Imported code updates with images  \n- ✅ Multiple images per message\n- ✅ Backward compatibility (no images)\n\n### Tests\n- ✅ Single image in history (`test_prompts.py`)\n- ✅ Multiple images in history (`test_prompts.py`, `test_prompts_additional.py`)\n- ✅ Imported code with images (`test_prompts.py`, `test_prompts_additional.py`)\n- ✅ Empty images arrays (`test_prompts_additional.py`)\n\n## Usage\nFrontend can send update history items with:\n```typescript\n{\n  text: \"Update instructions\",\n  images: [\"data:image/png;base64,img1\", \"data:image/png;base64,img2\"]\n}\n```\n\nBackend automatically creates proper multipart messages for AI models.\n"
  },
  {
    "path": "design-docs/prompt-history-refactor.md",
    "content": "# Prompt History Refactor (Frontend -> Backend)\n\n## Goal\nSimplify edit prompt history so we no longer reconstruct conversation state from commit ancestry and index parity.\n\nThe new model is:\n- Frontend stores explicit conversation history per variant.\n- Frontend sends explicit role-based history to backend.\n- Backend trusts structured history and only assembles messages.\n\n## Key decisions (final)\n- No `sourceVersionNumber` field is required.\n  - The selected variant is the source of truth for \"what history to extend next\".\n- No backend persistence/migration work is needed for this refactor.\n  - Prompt history state is frontend-side session/project state.\n- No legacy reconstruction parity path.\n  - Old reconstruction/index-parity behavior is removed rather than preserved.\n\n## Previous behavior (removed)\nBefore this refactor:\n- Frontend rebuilt history via `extractHistory(...)` by walking commit parent links.\n- Backend inferred message role from array index parity (`assistant` for even, `user` for odd).\n- History state was implicit and brittle, especially with branching and variant selection.\n\n## New frontend storage model\n\n### 1) Per-variant explicit history\nEach variant now carries its own history:\n- `Variant.history: VariantHistoryMessage[]`\n- `VariantHistoryMessage`:\n  - `role: \"user\" | \"assistant\"`\n  - `text: string`\n  - `imageAssetIds: string[]`\n  - `videoAssetIds: string[]`\n\nThis history is authoritative for that variant.\n\n### 2) Shared media asset store\nFrontend keeps media in one shared map:\n- `assetsById: Record<string, PromptAsset>`\n- `PromptAsset`:\n  - `id`\n  - `type: \"image\" | \"video\"`\n  - `dataUrl`\n\nVariant history references media by ID, not by embedding large base64 strings directly.\n\n### 3) Utilities extracted from `App.tsx`\nPrompt-history/media helper logic moved to:\n- `frontend/src/lib/prompt-history.ts`\n\nKey helpers:\n- `cloneVariantHistory(...)`\n- 
`registerAssetIds(...)`\n- `toRequestHistory(...)`\n- `buildUserHistoryMessage(...)`\n- `buildAssistantHistoryMessage(...)`\n\n## How requests are built now\n\n### Create flow\n- Create seeds `variantHistory` with a single `user` message.\n- Images/videos are registered in `assetsById` and referenced by IDs in the variant history.\n- Request payload includes:\n  - `prompt` (create input)\n  - `variantHistory` (for local commit state)\n\n### Update flow\n- Uses the currently selected variant as source of truth.\n- Base history = selected variant history (or fallback assistant snapshot of current code if empty).\n- Appends new `user` update message (+ optional media IDs).\n- Converts variant history into request history (`role`, `text`, `images`, `videos`) via `toRequestHistory(...)`.\n\n### Completion behavior\nOn variant completion:\n- Final generated code is appended as an `assistant` history message for that specific variant.\n\n## Branching behavior\nWe still keep flat version labels (v1, v2, ...), but edits can branch from any selected version/variant.\n\nImportant detail:\n- The active selected variant's explicit history is what gets extended for the next edit.\n- This naturally supports branching without reconstructing from global commit ancestry.\n\n## Request payload shape (simplified)\nWhen sending an edit, frontend sends explicit role history like:\n- `history[i].role`: `\"user\"` or `\"assistant\"`\n- `history[i].text`: textual instruction or generated code\n- `history[i].images`: data URLs for image inputs for that message\n- `history[i].videos`: data URLs for video inputs for that message\n\nBackend does not infer roles by index anymore; it uses the provided role directly.\n\n## Backend parsing and prompt assembly\n\n### Request parsing extracted\nRaw request normalization moved into:\n- `backend/prompts/request_parsing.py`\n\nFunctions:\n- `parse_prompt_content(raw_prompt)`\n- `parse_prompt_history(raw_history)`\n\n`generate_code.py` now calls these 
helpers, so the route file is smaller and parsing is centralized.\n\n### Prompt assembly changes\n`backend/prompts/builders.py` now:\n- Consumes explicit `PromptHistoryMessage` entries.\n- Uses provided `role` directly (no index parity inference).\n- For update generation, builds:\n  - `system` message\n  - followed by all provided explicit history messages\n\n### Imported-code path\nImported-code update logic now works with explicit role history and chooses the latest relevant `user` instruction cleanly.\n\n## Logging/observability updates\n- Removed runtime `PROMPT SUMMARY` logging from generation path.\n- Added compact `PROMPT PREVIEW` logging before model execution.\n- Large text/code is collapsed for readability.\n\n## Tests updated/added\n\n### Backend\n- Updated prompt assembly expectations to explicit role history:\n  - `backend/tests/test_prompts.py`\n- Merged and removed duplicate file:\n  - deleted `backend/tests/test_prompts_additional.py`\n- Added request parser tests:\n  - `backend/tests/test_request_parsing.py`\n- Added prompt preview tests:\n  - `backend/tests/test_prompt_summary.py`\n\n### Frontend\n- Removed legacy `extractHistory` tests and updated fixtures:\n  - `frontend/src/components/history/utils.test.ts`\n- Added helper tests for extracted prompt-history utilities:\n  - `frontend/src/lib/prompt-history.test.ts`\n\n## Net result\nThe prompt-history pipeline is now explicit, variant-local, and much easier to reason about:\n- No implicit role inference.\n- No tree reconstruction for edit prompts.\n- Cleaner branch handling via selected variant history.\n- Smaller route parsing surface via dedicated parser module.\n"
  },
  {
    "path": "design-docs/variant-system.md",
    "content": "# Variant System\n\n## Overview\n\nThe variant system generates multiple code options in parallel, allowing users to compare different AI-generated implementations. The system defaults to 3 variants and scales automatically by changing `NUM_VARIANTS` in config.\n\n## Configuration\n\n**Key Setting:** `NUM_VARIANTS = 3` in `backend/config.py`\n\nChanging this value automatically scales the entire system to support any number of variants.\n\n## Model Selection\n\nModels cycle based on available API keys:\n\n```python\n# Both API keys present\nmodels = [claude_model, Llm.GPT_4_1_NANO_2025_04_14]\n\n# Claude only  \nmodels = [claude_model, Llm.CLAUDE_4_5_SONNET_2025_09_29]\n\n# OpenAI only\nmodels = [Llm.GPT_4O_2024_11_20]\n```\n\n**Cycling:** If models = [A, B] and NUM_VARIANTS = 5, result is [A, B, A, B, A]\n\n**Generation Type:**\n- **Create**: Primary model is Claude 3.7 Sonnet\n- **Update**: Primary model is Claude Sonnet 4.5\n\n## Frontend\n\n### Grid Layouts\n- **2 variants**: 2-column \n- **3 variants**: 2-column (third wraps below - prevents squishing)\n- **4 variants**: 2x2 grid  \n- **5-6 variants**: 3-column \n- **7+ variants**: 4-column\n\n### Keyboard Shortcuts\n- **Option/Alt + 1, 2, 3...**: Switch variants\n- Works globally, even in text fields\n- Uses `event.code` for cross-platform compatibility\n- Visual indicators show ⌥1, ⌥2, ⌥3\n\n## Architecture\n\n### Backend\n- `StatusBroadcastMiddleware` sends `variantCount` to frontend\n- `ModelSelectionStage` cycles through available models\n- Pipeline generates variants in parallel via WebSocket\n\n### Frontend  \n- Learns variant count from backend dynamically\n- `resizeVariants()` adapts UI to backend count\n- Error handling per variant with status display\n\n## WebSocket Messages\n\n```typescript\n\"variantCount\" | \"chunk\" | \"status\" | \"setCode\" | \"variantComplete\" | \"variantError\"\n```\n\n## Implementation Notes\n\n✅ **Scalable**: Change `NUM_VARIANTS` and everything adapts  
\n✅ **Cross-platform**: Keyboard shortcuts work Mac/Windows  \n✅ **Responsive**: Grid layouts adapt to count  \n✅ **Simple**: Model cycling handles any variant count  \n\n## Key Files\n\n- `backend/config.py` - NUM_VARIANTS setting\n- `backend/routes/generate_code.py` - Model selection pipeline  \n- `frontend/src/components/variants/Variants.tsx` - UI and shortcuts\n- `frontend/src/store/project-store.ts` - State management\n"
  },
  {
    "path": "docker-compose.yml",
    "content": "version: '3.9'\n\nservices:\n  backend:\n    build: \n      context: ./backend\n      dockerfile: Dockerfile\n    \n    env_file:\n      - .env\n    \n    # or \n    # environment:\n      #- BACKEND_PORT=7001   # if you change the port, make sure to also change the VITE_WS_BACKEND_URL at frontend/.env.local\n      # - OPENAI_API_KEY=your_openai_api_key\n    \n    ports:\n      - \"${BACKEND_PORT:-7001}:${BACKEND_PORT:-7001}\"\n\n    command: poetry run uvicorn main:app --host 0.0.0.0 --port ${BACKEND_PORT:-7001}\n  \n  frontend:\n    build:\n      context: ./frontend\n      dockerfile: Dockerfile\n    ports:\n      - \"5173:5173\"\n"
  },
  {
    "path": "frontend/.eslintrc.cjs",
    "content": "module.exports = {\n  root: true,\n  env: { browser: true, es2020: true },\n  extends: [\n    'eslint:recommended',\n    'plugin:@typescript-eslint/recommended',\n    'plugin:react-hooks/recommended',\n  ],\n  ignorePatterns: ['dist', '.eslintrc.cjs'],\n  parser: '@typescript-eslint/parser',\n  plugins: ['react-refresh'],\n  rules: {\n    'react-refresh/only-export-components': [\n      'warn',\n      { allowConstantExport: true },\n    ],\n  },\n}\n"
  },
  {
    "path": "frontend/.gitignore",
    "content": "# Logs\nlogs\n*.log\nnpm-debug.log*\nyarn-debug.log*\nyarn-error.log*\npnpm-debug.log*\nlerna-debug.log*\n\nnode_modules\ndist\ndist-ssr\n*.local\n\n# Editor directories and files\n.vscode/*\n!.vscode/extensions.json\n.idea\n.DS_Store\n*.suo\n*.ntvs*\n*.njsproj\n*.sln\n*.sw?\n\n# Env files\n.env*\n\n# Test files\nsrc/tests/results/\n"
  },
  {
    "path": "frontend/Dockerfile",
    "content": "FROM node:22-bullseye-slim\n\n# Set the working directory in the container\nWORKDIR /app\n\n# Copy package.json and yarn.lock\nCOPY package.json yarn.lock /app/\n\n# Set the environment variable to skip Puppeteer download\nENV PUPPETEER_SKIP_DOWNLOAD=true\n\n# Install dependencies\nRUN yarn install\n\n# Copy the current directory contents into the container at /app\nCOPY ./ /app/\n\n# Expose port 5173 to access the server\nEXPOSE 5173\n\n# Command to run the application\nCMD [\"yarn\", \"dev\", \"--host\", \"0.0.0.0\"]\n"
  },
  {
    "path": "frontend/components.json",
    "content": "{\n  \"$schema\": \"https://ui.shadcn.com/schema.json\",\n  \"style\": \"new-york\",\n  \"rsc\": false,\n  \"tsx\": true,\n  \"tailwind\": {\n    \"config\": \"tailwind.config.js\",\n    \"css\": \"src/index.css\",\n    \"baseColor\": \"slate\",\n    \"cssVariables\": true\n  },\n  \"aliases\": {\n    \"components\": \"@/components\",\n    \"utils\": \"@/lib/utils\"\n  }\n}\n"
  },
  {
    "path": "frontend/index.html",
    "content": "<!DOCTYPE html>\n<html lang=\"en\">\n  <head>\n    <meta charset=\"UTF-8\" />\n    <link rel=\"icon\" type=\"image/png\" href=\"/favicon/main.png\" />\n    <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\" />\n\n    <!-- Google Fonts -->\n    <link rel=\"preconnect\" href=\"https://fonts.googleapis.com\" />\n    <link rel=\"preconnect\" href=\"https://fonts.gstatic.com\" crossorigin />\n    <link\n      href=\"https://fonts.googleapis.com/css2?family=Poppins:wght@300;400;700&display=swap\"\n      rel=\"stylesheet\"\n    />\n\n    <!-- Injected code for hosted version -->\n    <%- injectHead %>\n\n    <title>Screenshot to Code</title>\n\n    <!-- Open Graph Meta Tags -->\n    <meta property=\"og:title\" content=\"Screenshot to Code\" />\n    <meta\n      property=\"og:description\"\n      content=\"Convert any screenshot or design to clean code\"\n    />\n    <meta\n      property=\"og:image\"\n      content=\"https://screenshottocode.com/brand/twitter-summary-card.png\"\n    />\n    <meta property=\"og:image:width\" content=\"1200\" />\n    <meta property=\"og:image:height\" content=\"628\" />\n    <meta property=\"og:url\" content=\"https://screenshottocode.com\" />\n    <meta property=\"og:type\" content=\"website\" />\n    <!-- Twitter Card tags -->\n    <meta name=\"twitter:card\" content=\"summary_large_image\" />\n    <meta name=\"twitter:site\" content=\"@picoapps\" />\n    <!-- Keep in sync with og:title, og:description and og:image -->\n    <meta name=\"twitter:title\" content=\"Screenshot to Code\" />\n    <meta\n      name=\"twitter:description\"\n      content=\"Convert any screenshot or design to clean code\"\n    />\n    <meta\n      name=\"twitter:image\"\n      content=\"https://screenshottocode.com/brand/twitter-summary-card.png\"\n    />\n  </head>\n  <body>\n    <div id=\"root\"></div>\n    <script type=\"module\" src=\"/src/main.tsx\"></script>\n  </body>\n</html>\n"
  },
  {
    "path": "frontend/jest.config.js",
    "content": "export default {\n  preset: \"ts-jest\",\n  testEnvironment: \"node\",\n  setupFiles: [\"<rootDir>/src/setupTests.ts\"],\n  transform: {\n    \"^.+\\\\.tsx?$\": \"ts-jest\",\n  },\n  testTimeout: 30000,\n};\n"
  },
  {
    "path": "frontend/package.json",
    "content": "{\n  \"name\": \"screenshot-to-code\",\n  \"private\": true,\n  \"version\": \"0.0.0\",\n  \"type\": \"module\",\n  \"scripts\": {\n    \"dev\": \"vite\",\n    \"dev-hosted\": \"vite --mode prod\",\n    \"build\": \"tsc && vite build\",\n    \"build-hosted\": \"tsc && vite build --mode prod\",\n    \"lint\": \"eslint . --ext ts,tsx --report-unused-disable-directives --max-warnings 0\",\n    \"preview\": \"vite preview\",\n    \"test\": \"jest\",\n    \"test:qa\": \"RUN_E2E=true TEST_ROOT_PATH=src/tests yarn test src/tests/qa.test.ts\"\n  },\n  \"dependencies\": {\n    \"@codemirror/lang-html\": \"^6.4.6\",\n    \"@radix-ui/react-accordion\": \"^1.1.2\",\n    \"@radix-ui/react-alert-dialog\": \"^1.0.5\",\n    \"@radix-ui/react-checkbox\": \"^1.0.4\",\n    \"@radix-ui/react-collapsible\": \"^1.0.3\",\n    \"@radix-ui/react-dialog\": \"^1.0.5\",\n    \"@radix-ui/react-hover-card\": \"^1.0.7\",\n    \"@radix-ui/react-icons\": \"^1.3.0\",\n    \"@radix-ui/react-label\": \"^2.0.2\",\n    \"@radix-ui/react-popover\": \"^1.0.7\",\n    \"@radix-ui/react-progress\": \"^1.0.3\",\n    \"@radix-ui/react-scroll-area\": \"^1.0.5\",\n    \"@radix-ui/react-select\": \"^2.0.0\",\n    \"@radix-ui/react-separator\": \"^1.0.3\",\n    \"@radix-ui/react-slot\": \"^1.0.2\",\n    \"@radix-ui/react-switch\": \"^1.0.3\",\n    \"@radix-ui/react-tabs\": \"^1.0.4\",\n    \"@types/react-syntax-highlighter\": \"^15.5.13\",\n    \"class-variance-authority\": \"^0.7.0\",\n    \"classnames\": \"^2.3.2\",\n    \"clsx\": \"^2.0.0\",\n    \"codemirror\": \"^6.0.1\",\n    \"copy-to-clipboard\": \"^3.3.3\",\n    \"html2canvas\": \"^1.4.1\",\n    \"nanoid\": \"^5.0.7\",\n    \"react\": \"^18.2.0\",\n    \"react-dom\": \"^18.2.0\",\n    \"react-dropzone\": \"^14.2.3\",\n    \"react-hot-toast\": \"^2.4.1\",\n    \"react-icons\": \"^4.12.0\",\n    \"react-markdown\": \"^10.1.0\",\n    \"react-router-dom\": \"^6.20.1\",\n    \"react-syntax-highlighter\": \"^16.1.0\",\n    \"tailwind-merge\": 
\"^2.0.0\",\n    \"tailwindcss-animate\": \"^1.0.7\",\n    \"thememirror\": \"^2.0.1\",\n    \"vite-plugin-checker\": \"^0.9.3\",\n    \"webm-duration-fix\": \"^1.0.4\",\n    \"zustand\": \"^4.5.2\"\n  },\n  \"devDependencies\": {\n    \"@types/jest\": \"^29.5.12\",\n    \"@types/node\": \"^20.9.0\",\n    \"@types/puppeteer\": \"^7.0.4\",\n    \"@types/react\": \"^18.2.15\",\n    \"@types/react-dom\": \"^18.2.7\",\n    \"@typescript-eslint/eslint-plugin\": \"^6.0.0\",\n    \"@typescript-eslint/parser\": \"^6.0.0\",\n    \"@vitejs/plugin-react\": \"^4.0.3\",\n    \"autoprefixer\": \"^10.4.16\",\n    \"dotenv\": \"^16.4.5\",\n    \"eslint\": \"^8.45.0\",\n    \"eslint-plugin-react-hooks\": \"^4.6.0\",\n    \"eslint-plugin-react-refresh\": \"^0.4.3\",\n    \"jest\": \"^29.7.0\",\n    \"postcss\": \"^8.4.31\",\n    \"puppeteer\": \"^22.6.4\",\n    \"tailwindcss\": \"^3.3.5\",\n    \"ts-jest\": \"^29.1.2\",\n    \"typescript\": \"^5.0.2\",\n    \"vite\": \"^4.4.5\",\n    \"vite-plugin-html\": \"^3.2.0\",\n    \"vitest\": \"^1.0.1\"\n  },\n  \"engines\": {\n    \"node\": \">=14.18.0\"\n  }\n}\n"
  },
  {
    "path": "frontend/postcss.config.js",
    "content": "export default {\n  plugins: {\n    tailwindcss: {},\n    autoprefixer: {},\n  },\n}\n"
  },
  {
    "path": "frontend/src/App.tsx",
    "content": "import { useEffect, useRef, useState } from \"react\";\nimport { generateCode } from \"./generateCode\";\nimport { AppState, AppTheme, EditorTheme, Settings } from \"./types\";\nimport { IS_RUNNING_ON_CLOUD } from \"./config\";\nimport { PicoBadge } from \"./components/messages/PicoBadge\";\nimport { OnboardingNote } from \"./components/messages/OnboardingNote\";\nimport { usePersistedState } from \"./hooks/usePersistedState\";\nimport TermsOfServiceDialog from \"./components/TermsOfServiceDialog\";\nimport { USER_CLOSE_WEB_SOCKET_CODE } from \"./constants\";\nimport toast from \"react-hot-toast\";\nimport { nanoid } from \"nanoid\";\nimport { Stack } from \"./lib/stacks\";\nimport { CodeGenerationModel } from \"./lib/models\";\nimport useBrowserTabIndicator from \"./hooks/useBrowserTabIndicator\";\nimport { LuChevronLeft } from \"react-icons/lu\";\nimport {\n  buildAssistantHistoryMessage,\n  buildUserHistoryMessage,\n  cloneVariantHistory,\n  GenerationRequest,\n  registerAssetIds,\n  toRequestHistory,\n} from \"./lib/prompt-history\";\n// import TipLink from \"./components/messages/TipLink\";\nimport { useAppStore } from \"./store/app-store\";\nimport { useProjectStore } from \"./store/project-store\";\nimport { removeHighlight } from \"./components/select-and-edit/utils\";\nimport Sidebar from \"./components/sidebar/Sidebar\";\nimport IconStrip from \"./components/sidebar/IconStrip\";\nimport HistoryDisplay from \"./components/history/HistoryDisplay\";\nimport PreviewPane from \"./components/preview/PreviewPane\";\nimport StartPane from \"./components/start-pane/StartPane\";\nimport SettingsTab from \"./components/settings/SettingsTab\";\nimport { Commit } from \"./components/commits/types\";\nimport { createCommit } from \"./components/commits/utils\";\n\nfunction App() {\n  const {\n    // Inputs\n    inputMode,\n    setInputMode,\n    referenceImages,\n    setReferenceImages,\n    initialPrompt,\n    setInitialPrompt,\n    
upsertPromptAssets,\n    resetPromptAssets,\n\n    head,\n    commits,\n    addCommit,\n    removeCommit,\n    setHead,\n    appendCommitCode,\n    setCommitCode,\n    resetCommits,\n    resetHead,\n    updateVariantStatus,\n    resizeVariants,\n    setVariantModels,\n    appendVariantHistoryMessage,\n    startAgentEvent,\n    appendAgentEventContent,\n    finishAgentEvent,\n\n    // Outputs\n    appendExecutionConsole,\n    resetExecutionConsoles,\n  } = useProjectStore();\n\n  const {\n    disableInSelectAndEditMode,\n    setUpdateInstruction,\n    updateImages,\n    setUpdateImages,\n    appState,\n    setAppState,\n    selectedElement,\n    setSelectedElement,\n  } = useAppStore();\n\n  // Settings\n  const [settings, setSettings] = usePersistedState<Settings>(\n    {\n      openAiApiKey: null,\n      openAiBaseURL: null,\n      anthropicApiKey: null,\n      geminiApiKey: null,\n      screenshotOneApiKey: null,\n      isImageGenerationEnabled: true,\n      editorTheme: EditorTheme.COBALT,\n      generatedCodeConfig: Stack.HTML_TAILWIND,\n      codeGenerationModel: CodeGenerationModel.CLAUDE_4_5_OPUS_2025_11_01,\n      // Only relevant for hosted version\n      isTermOfServiceAccepted: false,\n    },\n    \"setting\"\n  );\n  const [appTheme, setAppTheme] = usePersistedState<AppTheme>(\n    AppTheme.SYSTEM,\n    \"app-theme\"\n  );\n\n  const wsRef = useRef<WebSocket>(null);\n  const lastThinkingEventIdRef = useRef<Record<number, string>>({});\n  const lastAssistantEventIdRef = useRef<Record<number, string>>({});\n  const lastToolEventIdRef = useRef<Record<number, string>>({});\n\n  const [isHistoryOpen, setIsHistoryOpen] = useState(false);\n  const [isSettingsOpen, setIsSettingsOpen] = useState(false);\n  const [mobilePane, setMobilePane] = useState<\"preview\" | \"chat\">(\"preview\");\n  const showSelectAndEditFeature =\n    settings.generatedCodeConfig === Stack.HTML_TAILWIND ||\n    settings.generatedCodeConfig === Stack.HTML_CSS;\n\n  // Indicate coding 
state using the browser tab's favicon and title\n  useBrowserTabIndicator(appState === AppState.CODING);\n\n  // When settings already exist in local storage, newly added keys are not\n  // merged in automatically, so if generatedCodeConfig is falsy we populate it\n  // with the default value\n  useEffect(() => {\n    if (!settings.generatedCodeConfig) {\n      setSettings((prev) => ({\n        ...prev,\n        generatedCodeConfig: Stack.HTML_TAILWIND,\n      }));\n    }\n  }, [settings.generatedCodeConfig, setSettings]);\n\n  useEffect(() => {\n    const mediaQuery = window.matchMedia(\"(prefers-color-scheme: dark)\");\n    const applyTheme = () => {\n      const isDark =\n        appTheme === AppTheme.DARK ||\n        (appTheme === AppTheme.SYSTEM && mediaQuery.matches);\n      document.documentElement.classList.toggle(\"dark\", isDark);\n      document.body.classList.toggle(\"dark\", isDark);\n    };\n\n    applyTheme();\n\n    if (appTheme !== AppTheme.SYSTEM) {\n      return;\n    }\n\n    const onChange = () => applyTheme();\n    mediaQuery.addEventListener(\"change\", onChange);\n\n    return () => {\n      mediaQuery.removeEventListener(\"change\", onChange);\n    };\n  }, [appTheme]);\n\n  const getAssetsById = () => useProjectStore.getState().assetsById;\n\n  // Functions\n  const reset = () => {\n    setAppState(AppState.INITIAL);\n    setUpdateInstruction(\"\");\n    setUpdateImages([]);\n    disableInSelectAndEditMode();\n    resetExecutionConsoles();\n\n    resetCommits();\n    resetHead();\n    resetPromptAssets();\n\n    // Inputs\n    setInputMode(\"image\");\n    setReferenceImages([]);\n  };\n\n  const regenerate = () => {\n    if (head === null) {\n      toast.error(\n        \"No current version set. 
Please contact support via chat or GitHub.\"\n      );\n      throw new Error(\"Regenerate called with no head\");\n    }\n\n    // Retrieve the current commit\n    const currentCommit = commits[head];\n    if (currentCommit.type !== \"ai_create\") {\n      toast.error(\"Only the first version can be regenerated.\");\n      return;\n    }\n\n    // Re-run the create\n    if (inputMode === \"image\" || inputMode === \"video\") {\n      doCreate(referenceImages, inputMode);\n    } else {\n      // TODO: Fix this\n      doCreateFromText(initialPrompt);\n    }\n  };\n\n  // Used when the user cancels the code generation\n  const cancelCodeGeneration = () => {\n    wsRef.current?.close?.(USER_CLOSE_WEB_SOCKET_CODE);\n  };\n\n  // Used for user-initiated cancellation and failed edit rollbacks\n  const cancelCodeGenerationAndReset = (commit: Commit) => {\n    // When the current commit is the first version, reset the entire app state\n    if (commit.type === \"ai_create\") {\n      reset();\n    } else {\n      // Otherwise, remove current commit from commits\n      removeCommit(commit.hash);\n\n      // Revert to parent commit\n      const parentCommitHash = commit.parentHash;\n      if (parentCommitHash) {\n        setHead(parentCommitHash);\n      } else {\n        throw new Error(\"Parent commit not found\");\n      }\n\n      setAppState(AppState.CODE_READY);\n    }\n  };\n\n  function doGenerateCode(params: GenerationRequest) {\n    // Reset the execution console\n    resetExecutionConsoles();\n\n    // Set the app state to coding during generation\n    setAppState(AppState.CODING);\n\n    const { variantHistory, ...requestParams } = params;\n\n    // Merge settings with params\n    const updatedParams = { ...requestParams, ...settings };\n\n    // Use 4 variants for create, 2 for edits to match backend counts\n    // and avoid a flash when the backend sends the actual variant count\n    const initialVariantCount =\n      requestParams.generationType === 
\"create\" ? 4 : 2;\n    const baseCommitObject = {\n      variants: Array(initialVariantCount)\n        .fill(null)\n        .map(() => ({\n          code: \"\",\n          history: cloneVariantHistory(variantHistory),\n        })),\n    };\n\n    const commitInputObject =\n      requestParams.generationType === \"create\"\n        ? {\n            ...baseCommitObject,\n            type: \"ai_create\" as const,\n            parentHash: null,\n            inputs: requestParams.prompt,\n          }\n        : {\n            ...baseCommitObject,\n            type: \"ai_edit\" as const,\n            parentHash: head,\n            inputs: requestParams.prompt,\n          };\n\n    // Create a new commit and set it as the head\n    const commit = createCommit(commitInputObject);\n    addCommit(commit);\n    setHead(commit.hash);\n\n    lastThinkingEventIdRef.current = {};\n    lastAssistantEventIdRef.current = {};\n    lastToolEventIdRef.current = {};\n\n    const finishThinkingEvent = (variantIndex: number, status: \"complete\" | \"error\") => {\n      const eventId = lastThinkingEventIdRef.current[variantIndex];\n      if (!eventId) return;\n      finishAgentEvent(commit.hash, variantIndex, eventId, {\n        status,\n        endedAt: Date.now(),\n      });\n      delete lastThinkingEventIdRef.current[variantIndex];\n    };\n\n    const finishAssistantEvent = (variantIndex: number, status: \"complete\" | \"error\") => {\n      const eventId = lastAssistantEventIdRef.current[variantIndex];\n      if (!eventId) return;\n      finishAgentEvent(commit.hash, variantIndex, eventId, {\n        status,\n        endedAt: Date.now(),\n      });\n      delete lastAssistantEventIdRef.current[variantIndex];\n    };\n\n    const finishToolEvent = (variantIndex: number, status: \"complete\" | \"error\") => {\n      const eventId = lastToolEventIdRef.current[variantIndex];\n      if (!eventId) return;\n      finishAgentEvent(commit.hash, variantIndex, eventId, {\n        status,\n   
     endedAt: Date.now(),\n      });\n      delete lastToolEventIdRef.current[variantIndex];\n    };\n\n    const finishInFlightEvents = (status: \"complete\" | \"error\") => {\n      Object.keys(lastThinkingEventIdRef.current).forEach((key) => {\n        finishThinkingEvent(Number(key), status);\n      });\n      Object.keys(lastAssistantEventIdRef.current).forEach((key) => {\n        finishAssistantEvent(Number(key), status);\n      });\n      Object.keys(lastToolEventIdRef.current).forEach((key) => {\n        finishToolEvent(Number(key), status);\n      });\n    };\n\n    generateCode(wsRef, updatedParams, {\n      onChange: (token, variantIndex) => {\n        appendCommitCode(commit.hash, variantIndex, token);\n      },\n      onSetCode: (code, variantIndex) => {\n        setCommitCode(commit.hash, variantIndex, code);\n      },\n      onStatusUpdate: (line, variantIndex) =>\n        appendExecutionConsole(variantIndex, line),\n      onVariantComplete: (variantIndex) => {\n        console.log(`Variant ${variantIndex} complete event received`);\n        updateVariantStatus(commit.hash, variantIndex, \"complete\");\n        const currentCode =\n          useProjectStore.getState().commits[commit.hash]?.variants[variantIndex]\n            ?.code || \"\";\n        if (currentCode.trim().length > 0) {\n          appendVariantHistoryMessage(\n            commit.hash,\n            variantIndex,\n            buildAssistantHistoryMessage(currentCode)\n          );\n        }\n        finishThinkingEvent(variantIndex, \"complete\");\n        finishAssistantEvent(variantIndex, \"complete\");\n        finishToolEvent(variantIndex, \"complete\");\n        if (commit.type === \"ai_edit\") {\n          const {\n            updateInstruction: currentInstruction,\n            updateImages: currentImages,\n          } = useAppStore.getState();\n          const instructionUnchanged =\n            currentInstruction === commit.inputs.text;\n          const imagesUnchanged =\n      
      currentImages.length === commit.inputs.images.length &&\n            currentImages.every(\n              (image, index) => image === commit.inputs.images[index]\n            );\n\n          // This conditional clear handles three UX scenarios:\n          // 1) All variants fail: no completion event, so keep prompt/images for retry.\n          // 2) A variant completes and user has typed/changed images: do not clear.\n          // 3) A variant completes and user has not changed draft: clear for next edit.\n          if (instructionUnchanged && imagesUnchanged) {\n            setUpdateInstruction(\"\");\n            setUpdateImages([]);\n          }\n        }\n      },\n      onVariantError: (variantIndex, error) => {\n        console.error(`Error in variant ${variantIndex}:`, error);\n        updateVariantStatus(commit.hash, variantIndex, \"error\", error);\n        finishThinkingEvent(variantIndex, \"error\");\n        finishAssistantEvent(variantIndex, \"error\");\n        finishToolEvent(variantIndex, \"error\");\n      },\n      onVariantCount: (count) => {\n        console.log(`Backend is using ${count} variants`);\n        resizeVariants(commit.hash, count);\n      },\n      onVariantModels: (models) => {\n        setVariantModels(commit.hash, models);\n      },\n      onThinking: (content, variantIndex, eventId) => {\n        if (!eventId) return;\n        lastThinkingEventIdRef.current[variantIndex] = eventId;\n        startAgentEvent(commit.hash, variantIndex, {\n          id: eventId,\n          type: \"thinking\",\n          status: \"running\",\n          startedAt: Date.now(),\n        });\n        appendAgentEventContent(commit.hash, variantIndex, eventId, content);\n      },\n      onAssistant: (content, variantIndex, eventId) => {\n        if (!eventId) return;\n        lastAssistantEventIdRef.current[variantIndex] = eventId;\n        startAgentEvent(commit.hash, variantIndex, {\n          id: eventId,\n          type: \"assistant\",\n         
 status: \"running\",\n          startedAt: Date.now(),\n        });\n        appendAgentEventContent(commit.hash, variantIndex, eventId, content);\n      },\n      onToolStart: (data, variantIndex, eventId) => {\n        if (!eventId) return;\n        const lastThinking = lastThinkingEventIdRef.current[variantIndex];\n        if (lastThinking && lastThinking !== eventId) {\n          finishThinkingEvent(variantIndex, \"complete\");\n        }\n        const lastAssistant = lastAssistantEventIdRef.current[variantIndex];\n        if (lastAssistant && lastAssistant !== eventId) {\n          finishAssistantEvent(variantIndex, \"complete\");\n        }\n        startAgentEvent(commit.hash, variantIndex, {\n          id: eventId,\n          type: \"tool\",\n          status: \"running\",\n          toolName: data?.name,\n          input: data?.input,\n          startedAt: Date.now(),\n        });\n        lastToolEventIdRef.current[variantIndex] = eventId;\n      },\n      onToolResult: (data, variantIndex, eventId) => {\n        if (!eventId) return;\n        finishAgentEvent(commit.hash, variantIndex, eventId, {\n          status: data?.ok === false ? \"error\" : \"complete\",\n          output: data?.output,\n          endedAt: Date.now(),\n        });\n        if (lastToolEventIdRef.current[variantIndex] === eventId) {\n          delete lastToolEventIdRef.current[variantIndex];\n        }\n      },\n      onCancel: (reason, errorMessage) => {\n        // Close any running agent events when the socket ends without per-event\n        // terminal messages, otherwise they remain stuck in \"running\" state.\n        finishInFlightEvents(reason === \"request_failed\" ? 
\"error\" : \"complete\");\n\n        if (reason === \"request_failed\" && commit.type === \"ai_create\") {\n          const latestCreateCommit = useProjectStore.getState().commits[commit.hash];\n          latestCreateCommit?.variants.forEach((variant, variantIndex) => {\n            if (variant.status === \"generating\") {\n              updateVariantStatus(\n                commit.hash,\n                variantIndex,\n                \"error\",\n                errorMessage || \"Generation failed. Please retry.\"\n              );\n            }\n          });\n          setAppState(AppState.CODE_READY);\n          return;\n        }\n\n        cancelCodeGenerationAndReset(commit);\n      },\n      onComplete: () => {\n        finishInFlightEvents(\"complete\");\n        setAppState(AppState.CODE_READY);\n      },\n    });\n  }\n\n  // Initial version creation\n  function doCreate(\n    referenceImages: string[],\n    inputMode: \"image\" | \"video\",\n    textPrompt: string = \"\"\n  ) {\n    // Reset any existing state\n    reset();\n\n    // Set the input states\n    setReferenceImages(referenceImages);\n    setInputMode(inputMode);\n\n    // Kick off the code generation\n    if (referenceImages.length > 0) {\n      const media =\n        inputMode === \"video\" ? [referenceImages[0]] : referenceImages;\n      const imageAssetIds =\n        inputMode === \"image\"\n          ? registerAssetIds(\n              \"image\",\n              media,\n              getAssetsById,\n              upsertPromptAssets,\n              nanoid\n            )\n          : [];\n      const videoAssetIds =\n        inputMode === \"video\"\n          ? 
registerAssetIds(\n              \"video\",\n              media,\n              getAssetsById,\n              upsertPromptAssets,\n              nanoid\n            )\n          : [];\n      const variantHistory = [\n        buildUserHistoryMessage(textPrompt, imageAssetIds, videoAssetIds),\n      ];\n      doGenerateCode({\n        generationType: \"create\",\n        inputMode,\n        prompt: {\n          text: textPrompt,\n          images: inputMode === \"image\" ? media : [],\n          videos: inputMode === \"video\" ? media : [],\n        },\n        variantHistory,\n      });\n    }\n  }\n\n  function doCreateFromText(text: string) {\n    // Reset any existing state\n    reset();\n\n    setInputMode(\"text\");\n    setInitialPrompt(text);\n    doGenerateCode({\n      generationType: \"create\",\n      inputMode: \"text\",\n      prompt: { text, images: [], videos: [] },\n      variantHistory: [buildUserHistoryMessage(text)],\n    });\n  }\n\n  // Subsequent updates\n  async function doUpdate(updateInstruction: string) {\n    if (updateInstruction.trim() === \"\") {\n      toast.error(\"Please include some instructions for AI on what to update.\");\n      return;\n    }\n\n    if (head === null) {\n      toast.error(\n        \"No current version set. 
Contact support or open a GitHub issue.\"\n      );\n      throw new Error(\"Update called with no head\");\n    }\n\n    const currentCommit = commits[head];\n    const currentCode =\n      currentCommit?.variants[currentCommit.selectedVariantIndex]?.code || \"\";\n    const optionCodes = currentCommit?.variants.map(\n      (variant) => variant.code || \"\"\n    );\n\n    let modifiedUpdateInstruction = updateInstruction;\n    let selectedElementHtml: string | undefined;\n\n    // Send in a reference to the selected element if it exists\n    if (selectedElement) {\n      const elementHtml = removeHighlight(selectedElement).outerHTML;\n      selectedElementHtml = elementHtml;\n      modifiedUpdateInstruction =\n        updateInstruction +\n        \" referring to this element specifically: \" +\n        elementHtml;\n      setSelectedElement(null);\n    }\n\n    const selectedVariant = currentCommit.variants[currentCommit.selectedVariantIndex];\n    const baseVariantHistory = selectedVariant.history;\n    const updateImageAssetIds = registerAssetIds(\n      \"image\",\n      updateImages,\n      getAssetsById,\n      upsertPromptAssets,\n      nanoid\n    );\n    const updatedVariantHistory = [\n      ...cloneVariantHistory(baseVariantHistory),\n      buildUserHistoryMessage(modifiedUpdateInstruction, updateImageAssetIds),\n    ];\n    const shouldBootstrapFromFileState =\n      baseVariantHistory.length === 0 && currentCode.trim().length > 0;\n    const updatedHistory = shouldBootstrapFromFileState\n      ? []\n      : toRequestHistory(updatedVariantHistory, getAssetsById);\n\n    doGenerateCode({\n      generationType: \"update\",\n      inputMode,\n      prompt: {\n        text: updateInstruction,\n        images: updateImages,\n        videos: [],\n        selectedElementHtml,\n      },\n      history: updatedHistory,\n      optionCodes,\n      variantHistory: updatedVariantHistory,\n      fileState: currentCode\n        ? 
{\n            path: \"index.html\",\n            content: currentCode,\n          }\n        : undefined,\n    });\n  }\n\n  const handleTermDialogOpenChange = (open: boolean) => {\n    setSettings((s) => ({\n      ...s,\n      isTermOfServiceAccepted: !open,\n    }));\n  };\n\n  function setStack(stack: Stack) {\n    setSettings((prev) => ({\n      ...prev,\n      generatedCodeConfig: stack,\n    }));\n  }\n\n  function importFromCode(code: string, stack: Stack) {\n    // Reset any existing state\n    reset();\n\n    // Set up this project\n    setStack(stack);\n\n    // Create a new commit and set it as the head\n    const commit = createCommit({\n      type: \"code_create\",\n      parentHash: null,\n      variants: [{ code, history: [] }],\n      inputs: null,\n    });\n    addCommit(commit);\n    setHead(commit.hash);\n\n    // Set the app state\n    setAppState(AppState.CODE_READY);\n  }\n\n  const showContentPanel =\n    appState === AppState.CODING ||\n    appState === AppState.CODE_READY ||\n    isHistoryOpen;\n  const isCodingOrReady =\n    appState === AppState.CODING || appState === AppState.CODE_READY;\n  const showMobileChatPane = showContentPanel && mobilePane === \"chat\";\n\n  return (\n    <div\n      className={`dark:bg-black dark:text-white ${\n        appState === AppState.CODING || appState === AppState.CODE_READY\n          ? 
\"flex h-dvh flex-col overflow-hidden lg:block lg:h-screen\"\n          : \"min-h-screen\"\n      }`}\n    >\n      {IS_RUNNING_ON_CLOUD && <PicoBadge />}\n      {IS_RUNNING_ON_CLOUD && (\n        <TermsOfServiceDialog\n          open={!settings.isTermOfServiceAccepted}\n          onOpenChange={handleTermDialogOpenChange}\n        />\n      )}\n\n      {/* Icon strip - always visible */}\n      <div\n        className=\"sticky top-0 z-50 lg:fixed lg:inset-y-0 lg:z-50 lg:flex lg:w-16 lg:flex-col\"\n      >\n        <IconStrip\n          isHistoryOpen={isHistoryOpen}\n          isEditorOpen={!isHistoryOpen && !isSettingsOpen}\n          isSettingsOpen={isSettingsOpen}\n          showHistory={isCodingOrReady}\n          showEditor={isCodingOrReady}\n          onToggleHistory={() => {\n            setIsHistoryOpen((prev) => !prev);\n            setIsSettingsOpen(false);\n            setMobilePane(\"chat\");\n          }}\n          onToggleEditor={() => {\n            setIsHistoryOpen(false);\n            setIsSettingsOpen(false);\n            setMobilePane(\"preview\");\n          }}\n          onLogoClick={() => {\n            setIsHistoryOpen(false);\n            setIsSettingsOpen(false);\n            setMobilePane(\"preview\");\n          }}\n          onNewProject={() => {\n            reset();\n            setIsHistoryOpen(false);\n            setIsSettingsOpen(false);\n            setMobilePane(\"preview\");\n          }}\n          onOpenSettings={() => {\n            setIsSettingsOpen(true);\n            setIsHistoryOpen(false);\n          }}\n        />\n      </div>\n\n      {isCodingOrReady && !isSettingsOpen && (\n        <div className=\"border-b border-gray-200 bg-white px-4 py-2 dark:border-zinc-800 dark:bg-zinc-950 lg:hidden\">\n          <div className=\"grid grid-cols-2 rounded-xl bg-gray-100 p-1 dark:bg-zinc-800\">\n            <button\n              onClick={() => {\n                setIsHistoryOpen(false);\n                
setMobilePane(\"preview\");\n              }}\n              className={`rounded-lg px-3 py-2 text-sm font-medium transition-colors ${\n                mobilePane === \"preview\"\n                  ? \"bg-white text-gray-900 shadow-sm dark:bg-zinc-700 dark:text-white\"\n                  : \"text-gray-500 dark:text-zinc-400\"\n              }`}\n            >\n              Preview\n            </button>\n            <button\n              onClick={() => setMobilePane(\"chat\")}\n              className={`rounded-lg px-3 py-2 text-sm font-medium transition-colors ${\n                mobilePane === \"chat\"\n                  ? \"bg-white text-gray-900 shadow-sm dark:bg-zinc-700 dark:text-white\"\n                  : \"text-gray-500 dark:text-zinc-400\"\n              }`}\n            >\n              Chat\n            </button>\n          </div>\n        </div>\n      )}\n\n      {/* Content panel - shows sidebar, history, or editor */}\n      {showContentPanel && !isSettingsOpen && (\n        <div\n          className={`border-b border-gray-200 dark:border-zinc-800 bg-white dark:bg-zinc-950 dark:text-white lg:fixed lg:inset-y-0 lg:left-16 lg:z-40 lg:flex lg:w-[calc(28rem-4rem)] lg:flex-col lg:border-b-0 lg:border-r ${\n            showMobileChatPane ? \"block\" : \"hidden lg:flex\"\n          }`}\n        >\n            {isHistoryOpen ? 
(\n              <div className=\"flex-1 overflow-y-auto sidebar-scrollbar-stable px-4\">\n                <div className=\"mt-3\">\n                  <div className=\"flex items-center justify-between mb-3 px-1\">\n                    <h2 className=\"text-xs font-medium uppercase tracking-wider text-gray-400 dark:text-gray-500\">Versions</h2>\n                    <button\n                      onClick={() => setIsHistoryOpen(false)}\n                      className=\"flex items-center gap-1 text-xs text-gray-500 dark:text-gray-400 hover:text-gray-700 dark:hover:text-gray-200 transition-colors\"\n                    >\n                      <LuChevronLeft className=\"w-3.5 h-3.5\" />\n                      Back to editor\n                    </button>\n                  </div>\n                  <HistoryDisplay />\n                </div>\n              </div>\n            ) : (\n              <>\n                {IS_RUNNING_ON_CLOUD && !settings.openAiApiKey && (\n                  <div className=\"px-6 mt-4\">\n                    <OnboardingNote />\n                  </div>\n                )}\n\n                {(appState === AppState.CODING ||\n                  appState === AppState.CODE_READY) && (\n                  <Sidebar\n                    showSelectAndEditFeature={showSelectAndEditFeature}\n                    doUpdate={doUpdate}\n                    regenerate={regenerate}\n                    cancelCodeGeneration={cancelCodeGeneration}\n                    onOpenVersions={() => {\n                      setIsHistoryOpen(true);\n                      setMobilePane(\"chat\");\n                    }}\n                  />\n                )}\n              </>\n            )}\n        </div>\n      )}\n\n      <main\n        className={`${\n          isSettingsOpen\n            ? \"flex flex-1 min-h-0 flex-col lg:h-full lg:pl-16\"\n            : showContentPanel\n              ? 
\"flex flex-1 min-h-0 flex-col lg:h-full lg:pl-[28rem]\"\n              : \"lg:pl-16\"\n        } ${isCodingOrReady && !isSettingsOpen && mobilePane === \"chat\" ? \"hidden lg:flex\" : \"\"}`}\n      >\n        {isSettingsOpen ? (\n          <SettingsTab\n            settings={settings}\n            setSettings={setSettings}\n            appTheme={appTheme}\n            setAppTheme={setAppTheme}\n          />\n        ) : (\n          <>\n            {appState === AppState.INITIAL && (\n              <StartPane\n                doCreate={doCreate}\n                doCreateFromText={doCreateFromText}\n                importFromCode={importFromCode}\n                settings={settings}\n                setSettings={setSettings}\n              />\n            )}\n\n            {isCodingOrReady && (\n              <PreviewPane\n                settings={settings}\n                onOpenVersions={() => {\n                  setIsHistoryOpen(true);\n                  setMobilePane(\"chat\");\n                }}\n              />\n            )}\n          </>\n        )}\n      </main>\n    </div>\n  );\n}\n\nexport default App;\n"
  },
  {
    "path": "frontend/src/components/ImageLightbox.tsx",
    "content": "import { useEffect, useRef, useState, useCallback } from \"react\";\nimport { LuMinus, LuPlus, LuX } from \"react-icons/lu\";\nimport { Dialog, DialogPortal, DialogOverlay } from \"./ui/dialog\";\n\nconst MIN_ZOOM = 0.5;\nconst MAX_ZOOM = 10;\nconst DEFAULT_DISPLAY_WIDTH = 1000;\n\ninterface ImageLightboxProps {\n  image: string | null;\n  onClose: () => void;\n}\n\nfunction ImageLightbox({ image, onClose }: ImageLightboxProps) {\n  const viewportRef = useRef<HTMLDivElement>(null);\n  const [zoom, setZoom] = useState(1);\n  const [naturalSize, setNaturalSize] = useState<{\n    width: number;\n    height: number;\n  } | null>(null);\n  const [fitScale, setFitScale] = useState(1);\n  const initialZoomSet = useRef(false);\n\n  const dragRef = useRef({\n    isDragging: false,\n    startX: 0,\n    startY: 0,\n    scrollLeft: 0,\n    scrollTop: 0,\n    didDrag: false,\n  });\n\n  // Reset state when image changes\n  useEffect(() => {\n    setZoom(1);\n    setNaturalSize(null);\n    setFitScale(1);\n    initialZoomSet.current = false;\n  }, [image]);\n\n  const recomputeFitScale = useCallback(() => {\n    if (!viewportRef.current || !naturalSize) return;\n\n    // Subtract p-8 padding (32px each side)\n    const viewportWidth = viewportRef.current.clientWidth - 64;\n    const viewportHeight = viewportRef.current.clientHeight - 64;\n    if (viewportWidth <= 0 || viewportHeight <= 0) return;\n\n    const scale = Math.min(\n      viewportWidth / naturalSize.width,\n      viewportHeight / naturalSize.height,\n      1\n    );\n    setFitScale(scale);\n\n    // Set initial zoom to target DEFAULT_DISPLAY_WIDTH (only clamp to viewport width)\n    if (!initialZoomSet.current) {\n      initialZoomSet.current = true;\n      const targetScale = DEFAULT_DISPLAY_WIDTH / naturalSize.width;\n      const maxWidthScale = viewportWidth / naturalSize.width;\n      const clampedScale = Math.min(targetScale, maxWidthScale);\n      setZoom(Math.max(MIN_ZOOM, Math.min(MAX_ZOOM, 
clampedScale / scale)));\n    }\n  }, [naturalSize]);\n\n  useEffect(() => {\n    if (!image) return;\n    recomputeFitScale();\n\n    const handleResize = () => recomputeFitScale();\n    window.addEventListener(\"resize\", handleResize);\n    return () => window.removeEventListener(\"resize\", handleResize);\n  }, [image, recomputeFitScale]);\n\n  useEffect(() => {\n    recomputeFitScale();\n  }, [recomputeFitScale]);\n\n  const zoomIn = () => {\n    setZoom((z) => Math.min(MAX_ZOOM, Math.round((z + 0.5) * 100) / 100));\n  };\n\n  const zoomOut = () => {\n    setZoom((z) => Math.max(MIN_ZOOM, Math.round((z - 0.5) * 100) / 100));\n  };\n\n  const zoomToFit = () => setZoom(1);\n\n  const zoomToDefault = () => {\n    if (!naturalSize || fitScale <= 0 || !viewportRef.current) return;\n    const viewportWidth = viewportRef.current.clientWidth - 64;\n    const targetScale = DEFAULT_DISPLAY_WIDTH / naturalSize.width;\n    const maxWidthScale = viewportWidth / naturalSize.width;\n    const clampedScale = Math.min(targetScale, maxWidthScale);\n    setZoom(Math.max(MIN_ZOOM, Math.min(MAX_ZOOM, clampedScale / fitScale)));\n  };\n\n  const handleMouseDown = useCallback((e: React.MouseEvent) => {\n    if (!viewportRef.current || e.button !== 0) return;\n    dragRef.current = {\n      isDragging: true,\n      startX: e.clientX,\n      startY: e.clientY,\n      scrollLeft: viewportRef.current.scrollLeft,\n      scrollTop: viewportRef.current.scrollTop,\n      didDrag: false,\n    };\n    viewportRef.current.style.cursor = \"grabbing\";\n    e.preventDefault();\n  }, []);\n\n  const handleMouseMove = useCallback((e: React.MouseEvent) => {\n    const drag = dragRef.current;\n    if (!drag.isDragging || !viewportRef.current) return;\n\n    const dx = e.clientX - drag.startX;\n    const dy = e.clientY - drag.startY;\n\n    if (Math.abs(dx) > 3 || Math.abs(dy) > 3) {\n      drag.didDrag = true;\n    }\n\n    viewportRef.current.scrollLeft = drag.scrollLeft - dx;\n    
viewportRef.current.scrollTop = drag.scrollTop - dy;\n  }, []);\n\n  const handleMouseUp = useCallback(() => {\n    dragRef.current.isDragging = false;\n    if (viewportRef.current) {\n      viewportRef.current.style.cursor = \"\";\n    }\n  }, []);\n\n  const handleWheel = useCallback((e: React.WheelEvent) => {\n    if (!viewportRef.current) return;\n    viewportRef.current.scrollTop += e.deltaY;\n    viewportRef.current.scrollLeft += e.deltaX;\n  }, []);\n\n  const handleViewportClick = useCallback(() => {\n    if (dragRef.current.didDrag) {\n      dragRef.current.didDrag = false;\n      return;\n    }\n    onClose();\n  }, [onClose]);\n\n  const effectiveScale = fitScale * zoom;\n  const displayWidth = naturalSize\n    ? Math.max(1, Math.round(naturalSize.width * effectiveScale))\n    : undefined;\n  const displayHeight = naturalSize\n    ? Math.max(1, Math.round(naturalSize.height * effectiveScale))\n    : undefined;\n\n  return (\n    <Dialog open={!!image} onOpenChange={(open) => !open && onClose()}>\n      <DialogPortal>\n        <DialogOverlay className=\"bg-black/90 backdrop-blur-md\" />\n        <div className=\"fixed inset-0 z-50\">\n          {/* Scrollable viewport - drag to scroll, click to close */}\n          <div\n            ref={viewportRef}\n            className=\"h-full w-full overflow-auto cursor-grab\"\n            onMouseDown={handleMouseDown}\n            onMouseMove={handleMouseMove}\n            onMouseUp={handleMouseUp}\n            onMouseLeave={handleMouseUp}\n            onWheel={handleWheel}\n            onClick={handleViewportClick}\n          >\n            <div className=\"flex min-h-full min-w-full p-8\">\n              {image && (\n                <img\n                  src={image}\n                  alt=\"Reference image\"\n                  className=\"rounded-lg shadow-2xl select-none shrink-0 m-auto\"\n                  draggable={false}\n                  onClick={(e) => e.stopPropagation()}\n                  style={\n   
                 displayWidth && displayHeight\n                      ? {\n                          width: `${displayWidth}px`,\n                          height: `${displayHeight}px`,\n                          maxWidth: \"none\",\n                          maxHeight: \"none\",\n                        }\n                      : { visibility: \"hidden\" as const }\n                  }\n                  onLoad={(event) => {\n                    setNaturalSize({\n                      width: event.currentTarget.naturalWidth,\n                      height: event.currentTarget.naturalHeight,\n                    });\n                  }}\n                />\n              )}\n            </div>\n          </div>\n\n          {/* Zoom controls - bottom center pill */}\n          <div\n            className=\"absolute bottom-6 left-1/2 z-10 flex -translate-x-1/2 items-center gap-1 rounded-full bg-black/60 px-3 py-2 shadow-lg backdrop-blur-md\"\n            onClick={(e) => e.stopPropagation()}\n            onMouseDown={(e) => e.stopPropagation()}\n          >\n            <button\n              onClick={zoomOut}\n              className=\"rounded-full p-1.5 text-white/80 transition-colors hover:bg-white/10 hover:text-white disabled:opacity-30\"\n              disabled={zoom <= MIN_ZOOM}\n              title=\"Zoom out\"\n            >\n              <LuMinus className=\"h-4 w-4\" />\n            </button>\n            <button\n              onClick={zoomToDefault}\n              className=\"min-w-[3.5rem] rounded-full px-3 py-1 text-center text-xs font-medium text-white/80 transition-colors hover:bg-white/10 hover:text-white\"\n              title=\"Reset zoom\"\n            >\n              {Math.round(zoom * 100)}%\n            </button>\n            <button\n              onClick={zoomIn}\n              className=\"rounded-full p-1.5 text-white/80 transition-colors hover:bg-white/10 hover:text-white disabled:opacity-30\"\n              disabled={zoom >= MAX_ZOOM}\n  
            title=\"Zoom in\"\n            >\n              <LuPlus className=\"h-4 w-4\" />\n            </button>\n            <div className=\"mx-1 h-4 w-px bg-white/20\" />\n            <button\n              onClick={zoomToFit}\n              className=\"rounded-full px-2.5 py-1 text-xs font-medium text-white/80 transition-colors hover:bg-white/10 hover:text-white\"\n              title=\"Fit to screen\"\n            >\n              Fit\n            </button>\n            <div className=\"mx-1 h-4 w-px bg-white/20\" />\n            <button\n              onClick={onClose}\n              className=\"rounded-full p-1.5 text-white/80 transition-colors hover:bg-white/10 hover:text-white\"\n              title=\"Close\"\n            >\n              <LuX className=\"h-4 w-4\" />\n            </button>\n          </div>\n        </div>\n      </DialogPortal>\n    </Dialog>\n  );\n}\n\nexport default ImageLightbox;\n"
  },
  {
    "path": "frontend/src/components/ImageUpload.tsx",
    "content": "import { useState, useEffect, useMemo, useRef, useCallback } from \"react\";\nimport { useDropzone } from \"react-dropzone\";\nimport { toast } from \"react-hot-toast\";\nimport ScreenRecorder from \"./recording/ScreenRecorder\";\nimport { ScreenRecorderState } from \"../types\";\nimport { Stack } from \"../lib/stacks\";\n\nconst baseStyle = {\n  flex: 1,\n  width: \"80%\",\n  margin: \"0 auto\",\n  minHeight: \"400px\",\n  display: \"flex\",\n  flexDirection: \"column\",\n  alignItems: \"center\",\n  justifyContent: \"center\",\n  padding: \"20px\",\n  borderWidth: 2,\n  borderRadius: 2,\n  borderColor: \"#eeeeee\",\n  borderStyle: \"dashed\",\n  backgroundColor: \"#fafafa\",\n  color: \"#bdbdbd\",\n  outline: \"none\",\n  transition: \"border .24s ease-in-out\",\n};\n\nconst focusedStyle = {\n  borderColor: \"#2196f3\",\n};\n\nconst acceptStyle = {\n  borderColor: \"#00e676\",\n};\n\nconst rejectStyle = {\n  borderColor: \"#ff1744\",\n};\n\n// TODO: Move to a separate file\nfunction fileToDataURL(file: File): Promise<string> {\n  return new Promise((resolve, reject) => {\n    const reader = new FileReader();\n    reader.onload = () => {\n      const result = reader.result as string;\n      // Check if the MIME type is correctly set in the data URL\n      // Some browsers return application/octet-stream for video files\n      if (result.startsWith(\"data:application/octet-stream\") && file.type) {\n        // Replace with the correct MIME type from the file\n        const correctedResult = result.replace(\n          \"data:application/octet-stream\",\n          `data:${file.type}`\n        );\n        resolve(correctedResult);\n      } else {\n        resolve(result);\n      }\n    };\n    reader.onerror = (error) => reject(error);\n    reader.readAsDataURL(file);\n  });\n}\n\ntype FileWithPreview = {\n  preview: string;\n} & File;\n\ninterface Props {\n  setReferenceImages: (\n    referenceImages: string[],\n    inputMode: \"image\" | \"video\",\n 
   textPrompt?: string\n  ) => void;\n  onUploadStateChange?: (hasUpload: boolean) => void;\n  stack: Stack;\n  setStack: (stack: Stack) => void;\n}\n\nfunction ImageUpload({ setReferenceImages, onUploadStateChange, stack, setStack }: Props) {\n  const [files, setFiles] = useState<FileWithPreview[]>([]);\n  const [uploadedDataUrls, setUploadedDataUrls] = useState<string[]>([]);\n  const [uploadedInputMode, setUploadedInputMode] = useState<\n    \"image\" | \"video\"\n  >(\"image\");\n  const [textPrompt, setTextPrompt] = useState(\"\");\n  const [showTextPrompt, setShowTextPrompt] = useState(false);\n  const textInputRef = useRef<HTMLTextAreaElement>(null);\n\n  // TODO: Switch to Zustand\n  const [screenRecorderState, setScreenRecorderState] =\n    useState<ScreenRecorderState>(ScreenRecorderState.INITIAL);\n\n  const hasUploadedFile = uploadedDataUrls.length > 0;\n\n  // Notify parent of upload state changes\n  useEffect(() => {\n    onUploadStateChange?.(hasUploadedFile);\n  }, [hasUploadedFile, onUploadStateChange]);\n\n  const handleGenerate = useCallback(() => {\n    if (uploadedDataUrls.length > 0) {\n      setReferenceImages(uploadedDataUrls, uploadedInputMode, textPrompt);\n    }\n  }, [uploadedDataUrls, uploadedInputMode, textPrompt, setReferenceImages]);\n\n  // Global Enter key listener for generating when image is uploaded\n  useEffect(() => {\n    if (!hasUploadedFile) return;\n\n    const handleGlobalKeyDown = (e: KeyboardEvent) => {\n      if (e.key === \"Enter\" && !e.shiftKey) {\n        // Don't fire if textarea is focused (it has its own handler)\n        if (document.activeElement === textInputRef.current) return;\n        e.preventDefault();\n        handleGenerate();\n      }\n    };\n\n    document.addEventListener(\"keydown\", handleGlobalKeyDown);\n    return () => document.removeEventListener(\"keydown\", handleGlobalKeyDown);\n  }, [hasUploadedFile, handleGenerate]);\n\n  const handleClear = () => {\n    setUploadedDataUrls([]);\n    
setFiles([]);\n    setTextPrompt(\"\");\n    setShowTextPrompt(false);\n  };\n\n  const handleKeyDown = (e: React.KeyboardEvent<HTMLTextAreaElement>) => {\n    if (e.key === \"Enter\" && !e.shiftKey) {\n      e.preventDefault();\n      handleGenerate();\n    }\n  };\n\n  const { getRootProps, getInputProps, isFocused, isDragAccept, isDragReject } =\n    useDropzone({\n      maxFiles: 1,\n      maxSize: 1024 * 1024 * 20, // 20 MB\n      accept: {\n        // Image formats\n        \"image/png\": [\".png\"],\n        \"image/jpeg\": [\".jpeg\"],\n        \"image/jpg\": [\".jpg\"],\n        // Video formats\n        \"video/quicktime\": [\".mov\"],\n        \"video/mp4\": [\".mp4\"],\n        \"video/webm\": [\".webm\"],\n      },\n      onDrop: (acceptedFiles) => {\n        // Set up the preview thumbnail images\n        setFiles(\n          acceptedFiles.map((file: File) =>\n            Object.assign(file, {\n              preview: URL.createObjectURL(file),\n            })\n          ) as FileWithPreview[]\n        );\n\n        // Determine input mode from file type (more reliable than checking data URL)\n        const firstFile = acceptedFiles[0];\n        const isVideo = firstFile?.type?.startsWith(\"video/\") ||\n          [\".mp4\", \".mov\", \".webm\"].some(ext => firstFile?.name?.toLowerCase().endsWith(ext));\n\n        // Convert images to data URLs and store them (don't trigger generation yet)\n        Promise.all(acceptedFiles.map((file) => fileToDataURL(file)))\n          .then((dataUrls) => {\n            if (dataUrls.length > 0) {\n              // Use file type detection as primary, fall back to data URL check\n              const inputMode = isVideo || (dataUrls[0] as string).startsWith(\"data:video\")\n                ? 
\"video\"\n                : \"image\";\n              setUploadedDataUrls(dataUrls as string[]);\n              setUploadedInputMode(inputMode);\n              // Focus the text input after upload\n              setTimeout(() => textInputRef.current?.focus(), 100);\n            }\n          })\n          .catch((error) => {\n            toast.error(\"Error reading files: \" + error);\n            console.error(\"Error reading files:\", error);\n          });\n      },\n      onDropRejected: (rejectedFiles) => {\n        toast.error(rejectedFiles[0].errors[0].message);\n      },\n    });\n\n  useEffect(() => {\n    return () => files.forEach((file) => URL.revokeObjectURL(file.preview));\n  }, [files]);\n\n  const style = useMemo(\n    () => ({\n      ...baseStyle,\n      ...(isFocused ? focusedStyle : {}),\n      ...(isDragAccept ? acceptStyle : {}),\n      ...(isDragReject ? rejectStyle : {}),\n    }),\n    [isFocused, isDragAccept, isDragReject]\n  );\n\n  // Screen recorder callback - wrap to include empty text prompt\n  const handleScreenRecorderGenerate = (\n    images: string[],\n    inputMode: \"image\" | \"video\"\n  ) => {\n    setReferenceImages(images, inputMode, \"\");\n  };\n\n  return (\n    <section className=\"container\">\n      {screenRecorderState === ScreenRecorderState.INITIAL && !hasUploadedFile && (\n        /* eslint-disable-next-line @typescript-eslint/no-explicit-any */\n        <div {...getRootProps({ style: style as any })}>\n          <input {...getInputProps()} className=\"file-input\" />\n          <p className=\"text-slate-700 text-lg\">\n            Drag & drop a screenshot here, <br />\n            or click to upload\n          </p>\n        </div>\n      )}\n\n      {hasUploadedFile && (\n        <div className=\"flex flex-col items-center gap-4 w-4/5 mx-auto\">\n          {/* Image/Video Preview */}\n          <div className=\"relative w-full max-w-2xl\">\n            {uploadedInputMode === \"video\" ? 
(\n              <video\n                src={files[0]?.preview}\n                className=\"w-full h-auto max-h-[500px] object-contain rounded-lg\"\n                controls\n              />\n            ) : (\n              <img\n                src={files[0]?.preview}\n                alt=\"Uploaded screenshot\"\n                className=\"w-full h-auto max-h-[500px] object-contain rounded-lg\"\n              />\n            )}\n            <button\n              onClick={handleClear}\n              className=\"absolute top-2 right-2 bg-white rounded-full p-1 shadow-md hover:bg-gray-100\"\n              aria-label=\"Remove image\"\n            >\n              <svg\n                xmlns=\"http://www.w3.org/2000/svg\"\n                className=\"h-5 w-5 text-gray-600\"\n                viewBox=\"0 0 20 20\"\n                fill=\"currentColor\"\n              >\n                <path\n                  fillRule=\"evenodd\"\n                  d=\"M4.293 4.293a1 1 0 011.414 0L10 8.586l4.293-4.293a1 1 0 111.414 1.414L11.414 10l4.293 4.293a1 1 0 01-1.414 1.414L10 11.414l-4.293 4.293a1 1 0 01-1.414-1.414L8.586 10 4.293 5.707a1 1 0 010-1.414z\"\n                  clipRule=\"evenodd\"\n                />\n              </svg>\n            </button>\n          </div>\n\n          {/* Text Prompt Toggle/Input */}\n          {!showTextPrompt ? 
(\n            <button\n              onClick={() => {\n                setShowTextPrompt(true);\n                setTimeout(() => textInputRef.current?.focus(), 50);\n              }}\n              className=\"text-sm text-gray-500 hover:text-gray-700 underline\"\n            >\n              (optional) add text prompt\n            </button>\n          ) : (\n            <div className=\"w-full max-w-lg\">\n              <textarea\n                ref={textInputRef}\n                value={textPrompt}\n                onChange={(e) => setTextPrompt(e.target.value)}\n                onKeyDown={handleKeyDown}\n                placeholder=\"Describe any specific requirements or changes...\"\n                className=\"w-full p-3 text-sm border border-gray-300 rounded-md resize-none focus:outline-none focus:ring-2 focus:ring-gray-400 focus:border-transparent\"\n                rows={3}\n              />\n            </div>\n          )}\n\n          {/* Generate Button */}\n          <div className=\"flex flex-col items-center gap-1 w-full max-w-md\">\n            <button\n              onClick={handleGenerate}\n              className=\"w-full py-3 px-6 bg-black text-white font-medium rounded-md hover:bg-gray-800 transition-colors focus:outline-none focus:ring-2 focus:ring-gray-500 focus:ring-offset-2\"\n            >\n              Generate\n            </button>\n            <p className=\"text-xs text-gray-400\">\n              Press Enter to generate\n            </p>\n          </div>\n        </div>\n      )}\n\n      {screenRecorderState === ScreenRecorderState.INITIAL && !hasUploadedFile && (\n        <div className=\"text-center text-sm text-slate-800 mt-4\">\n          Upload a screen recording (.mp4, .mov) or record your screen to clone\n          a whole app.{\" \"}\n          <span className=\"inline-flex items-center rounded-full bg-blue-100 px-2 py-0.5 text-xs font-medium text-blue-700\">\n            Beta\n          </span>\n        </div>\n      
)}\n      {!hasUploadedFile && (\n        <ScreenRecorder\n          screenRecorderState={screenRecorderState}\n          setScreenRecorderState={setScreenRecorderState}\n          generateCode={handleScreenRecorderGenerate}\n          stack={stack}\n          setStack={setStack}\n        />\n      )}\n    </section>\n  );\n}\n\nexport default ImageUpload;\n"
  },
  {
    "path": "frontend/src/components/ImportCodeSection.tsx",
    "content": "import { useState } from \"react\";\nimport { Button } from \"./ui/button\";\nimport {\n  Dialog,\n  DialogContent,\n  DialogDescription,\n  DialogFooter,\n  DialogHeader,\n  DialogTitle,\n  DialogTrigger,\n} from \"./ui/dialog\";\nimport { Textarea } from \"./ui/textarea\";\nimport OutputSettingsSection from \"./settings/OutputSettingsSection\";\nimport toast from \"react-hot-toast\";\nimport { Stack } from \"../lib/stacks\";\n\ninterface Props {\n  importFromCode: (code: string, stack: Stack) => void;\n}\n\nfunction ImportCodeSection({ importFromCode }: Props) {\n  const [code, setCode] = useState(\"\");\n  const [stack, setStack] = useState<Stack | undefined>(undefined);\n\n  const doImport = () => {\n    if (code === \"\") {\n      toast.error(\"Please paste in some code\");\n      return;\n    }\n\n    if (stack === undefined) {\n      toast.error(\"Please select your stack\");\n      return;\n    }\n\n    importFromCode(code, stack);\n  };\n  return (\n    <Dialog>\n      <DialogTrigger asChild>\n        <Button className=\"import-from-code-btn\" variant=\"secondary\">\n          Import from Code\n        </Button>\n      </DialogTrigger>\n      <DialogContent className=\"sm:max-w-[425px]\">\n        <DialogHeader>\n          <DialogTitle>Paste in your HTML code</DialogTitle>\n          <DialogDescription>\n            Make sure that the code you're importing is valid HTML.\n          </DialogDescription>\n        </DialogHeader>\n\n        <Textarea\n          value={code}\n          onChange={(e) => setCode(e.target.value)}\n          className=\"w-full h-64\"\n        />\n\n        <OutputSettingsSection\n          stack={stack}\n          setStack={(config: Stack) => setStack(config)}\n          label=\"Stack:\"\n          shouldDisableUpdates={false}\n        />\n\n        <DialogFooter>\n          <Button className=\"import-btn\" type=\"submit\" onClick={doImport}>\n            Import\n          </Button>\n        </DialogFooter>\n      
</DialogContent>\n    </Dialog>\n  );\n}\n\nexport default ImportCodeSection;\n"
  },
  {
    "path": "frontend/src/components/TermsOfServiceDialog.tsx",
    "content": "import React from \"react\";\nimport {\n  AlertDialog,\n  AlertDialogAction,\n  AlertDialogContent,\n  AlertDialogFooter,\n  AlertDialogHeader,\n  AlertDialogTitle,\n} from \"./ui/alert-dialog\";\nimport { Input } from \"./ui/input\";\nimport toast from \"react-hot-toast\";\nimport { PICO_BACKEND_FORM_SECRET } from \"../config\";\n\nconst LOGOS = [\"microsoft\", \"amazon\", \"mit\", \"stanford\", \"bytedance\", \"baidu\"];\n\nconst TermsOfServiceDialog: React.FC<{\n  open: boolean;\n  onOpenChange: (open: boolean) => void;\n}> = ({ open, onOpenChange }) => {\n  const [email, setEmail] = React.useState(\"\");\n\n  const onSubscribe = async () => {\n    await fetch(\"https://backend.buildpicoapps.com/form\", {\n      method: \"POST\",\n      headers: {\n        \"Content-Type\": \"application/json\",\n      },\n      body: JSON.stringify({ email, secret: PICO_BACKEND_FORM_SECRET }),\n    });\n  };\n\n  return (\n    <AlertDialog open={open} onOpenChange={onOpenChange}>\n      <AlertDialogContent>\n        <AlertDialogHeader>\n          <AlertDialogTitle className=\"mb-2 text-xl\">\n            Enter your email to get started\n          </AlertDialogTitle>\n        </AlertDialogHeader>\n\n        <div className=\"mb-2\">\n          <Input\n            placeholder=\"Email\"\n            value={email}\n            onChange={(e) => {\n              setEmail(e.target.value);\n            }}\n          />\n        </div>\n        <div className=\"flex flex-col space-y-3 text-sm\">\n          <p>\n            By providing your email, you consent to receiving occasional product\n            updates, and you accept the{\" \"}\n            <a\n              href=\"https://a.picoapps.xyz/camera-write\"\n              target=\"_blank\"\n              className=\"underline\"\n            >\n              terms of service\n            </a>\n            .{\" \"}\n          </p>\n\n          <p>\n            {\" \"}\n            Prefer to run it yourself locally? 
This project is open source.{\" \"}\n            <a\n              href=\"https://github.com/abi/screenshot-to-code\"\n              target=\"_blank\"\n              className=\"underline\"\n            >\n              Download the code and get started on Github.\n            </a>\n          </p>\n        </div>\n\n        <AlertDialogFooter>\n          <AlertDialogAction\n            onClick={(e) => {\n              if (!email.trim() || !email.trim().includes(\"@\")) {\n                e.preventDefault();\n                toast.error(\"Please enter your email\");\n              } else {\n                onSubscribe();\n              }\n            }}\n          >\n            Agree & Continue\n          </AlertDialogAction>\n        </AlertDialogFooter>\n\n        {/* Logos */}\n        <div>\n          <div\n            className=\"mx-auto grid max-w-lg items-center gap-x-2 \n          gap-y-10 sm:max-w-xl grid-cols-6 lg:mx-0 lg:max-w-none mt-10\"\n          >\n            {LOGOS.map((companyName) => (\n              <img\n                key={companyName}\n                className=\"col-span-1 max-h-12 w-full object-contain grayscale opacity-50 hover:opacity-100\"\n                src={`https://picoapps.xyz/logos/${companyName}.png`}\n                alt={companyName}\n                width={120}\n                height={48}\n              />\n            ))}\n          </div>\n          <div className=\"text-gray-500 text-xs mt-4 text-center\">\n            Designers and engineers from these organizations use Screenshot to\n            Code to build interfaces faster.\n          </div>\n        </div>\n      </AlertDialogContent>\n    </AlertDialog>\n  );\n};\n\nexport default TermsOfServiceDialog;\n"
  },
  {
    "path": "frontend/src/components/UpdateImageUpload.tsx",
    "content": "import { useRef } from \"react\";\nimport { toast } from \"react-hot-toast\";\nimport { Cross2Icon } from \"@radix-ui/react-icons\";\nimport { LuPlus } from \"react-icons/lu\";\n\nconst MAX_UPDATE_IMAGES = 5;\n\n// Helper function to convert file to data URL\nfunction fileToDataURL(file: File): Promise<string> {\n  return new Promise((resolve, reject) => {\n    const reader = new FileReader();\n    reader.onload = () => resolve(reader.result as string);\n    reader.onerror = (error) => reject(error);\n    reader.readAsDataURL(file);\n  });\n}\n\ninterface Props {\n  updateImages: string[];\n  setUpdateImages: (images: string[]) => void;\n}\n\nexport function UpdateImagePreview({ updateImages, setUpdateImages }: Props) {\n  const removeImage = (index: number) => {\n    const newImages = updateImages.filter((_, i) => i !== index);\n    setUpdateImages(newImages);\n  };\n\n  if (updateImages.length === 0) return null;\n\n  return (\n    <div className=\"px-3 pt-3\">\n      <div className=\"flex flex-wrap gap-2 py-1\">\n        {updateImages.map((image, index) => (\n          <div key={index} className=\"relative flex-shrink-0 group overflow-visible\">\n            <div className=\"flex h-14 w-14 items-center justify-center rounded-lg border border-gray-200 bg-white p-1 shadow-sm dark:border-zinc-700 dark:bg-zinc-900\">\n              <img\n                src={image}\n                alt={`Reference ${index + 1}`}\n                className=\"max-h-full max-w-full object-contain\"\n              />\n            </div>\n            <button\n              onClick={() => removeImage(index)}\n              className=\"absolute -right-1 -top-1 z-10 flex h-5 w-5 items-center justify-center rounded-full border border-white bg-gray-900 text-white opacity-0 shadow transition-opacity group-hover:opacity-100 hover:bg-red-600 dark:border-zinc-900\"\n            >\n              <Cross2Icon className=\"h-2.5 w-2.5\" />\n            </button>\n          </div>\n     
   ))}\n      </div>\n    </div>\n  );\n}\n\nfunction UpdateImageUpload({ updateImages, setUpdateImages }: Props) {\n  const fileInputRef = useRef<HTMLInputElement>(null);\n  const remaining = Math.max(0, MAX_UPDATE_IMAGES - updateImages.length);\n  const isAtLimit = remaining === 0;\n\n  const handleButtonClick = () => {\n    if (isAtLimit) {\n      toast.error(\n        `You’ve reached the limit of ${MAX_UPDATE_IMAGES} reference images. Remove one to add another.`\n      );\n      return;\n    }\n    fileInputRef.current?.click();\n  };\n\n  const handleFileInputChange = async (e: React.ChangeEvent<HTMLInputElement>) => {\n    const files = e.target.files;\n    if (files) {\n      try {\n        if (updateImages.length >= MAX_UPDATE_IMAGES) {\n          toast.error(\n            `You’ve reached the limit of ${MAX_UPDATE_IMAGES} reference images. Remove one to add another.`\n          );\n          return;\n        }\n\n        const remainingSlots = MAX_UPDATE_IMAGES - updateImages.length;\n        let filesToAdd = Array.from(files);\n        if (filesToAdd.length > remainingSlots) {\n          toast.error(\n            `Only ${remainingSlots} more image${\n              remainingSlots === 1 ? 
\"\" : \"s\"\n            } will be added to stay within the ${MAX_UPDATE_IMAGES}-image limit.`\n          );\n          filesToAdd = filesToAdd.slice(0, remainingSlots);\n        }\n\n        const newImagePromises = filesToAdd.map((file) => fileToDataURL(file));\n        const newImages = await Promise.all(newImagePromises);\n        setUpdateImages([...updateImages, ...newImages]);\n        e.target.value = \"\";\n      } catch (error) {\n        toast.error(\"Error reading image files\");\n        console.error(\"Error reading files:\", error);\n      }\n    }\n  };\n\n  return (\n    <div className=\"relative inline-block\">\n      <input\n        ref={fileInputRef}\n        type=\"file\"\n        multiple\n        accept=\"image/png,image/jpeg\"\n        onChange={handleFileInputChange}\n        className=\"hidden\"\n      />\n      <button\n        type=\"button\"\n        onClick={handleButtonClick}\n        disabled={isAtLimit}\n        className={`p-2 rounded-lg transition-colors ${\n          isAtLimit\n            ? \"text-gray-300 dark:text-zinc-600 cursor-not-allowed\"\n            : \"text-gray-500 dark:text-zinc-400 hover:text-gray-700 dark:hover:text-zinc-200 hover:bg-gray-100 dark:hover:bg-zinc-800\"\n        }`}\n        title={\n          isAtLimit\n            ? `Limit reached (${MAX_UPDATE_IMAGES})`\n            : \"Add images\"\n        }\n      >\n        <LuPlus className=\"w-[18px] h-[18px]\" />\n      </button>\n    </div>\n  );\n}\n\nexport default UpdateImageUpload;\n"
  },
  {
    "path": "frontend/src/components/agent/AgentActivity.tsx",
    "content": "import { useEffect, useRef, useState } from \"react\";\nimport { useProjectStore } from \"../../store/project-store\";\nimport { useAppStore } from \"../../store/app-store\";\nimport { AppState } from \"../../types\";\nimport {\n  AgentEvent,\n  AgentEventType,\n} from \"../commits/types\";\nimport {\n  BsChatDots,\n  BsChevronDown,\n  BsChevronRight,\n  BsLightbulb,\n  BsFileEarmarkPlus,\n  BsPencilSquare,\n  BsImage,\n  BsScissors,\n  BsFiles,\n} from \"react-icons/bs\";\nimport ReactMarkdown from \"react-markdown\";\nimport { Light as SyntaxHighlighterBase } from \"react-syntax-highlighter\";\nimport html from \"react-syntax-highlighter/dist/esm/languages/hljs/xml\";\nimport { vs2015 } from \"react-syntax-highlighter/dist/esm/styles/hljs\";\nimport WorkingPulse from \"../core/WorkingPulse\";\n\nSyntaxHighlighterBase.registerLanguage(\"html\", html);\nconst SyntaxHighlighter = SyntaxHighlighterBase as any;\n\nfunction CodePreviewBlock({ code, isGenerating }: { code: string; isGenerating: boolean }) {\n  const containerRef = useRef<HTMLDivElement>(null);\n\n  useEffect(() => {\n    if (isGenerating && containerRef.current) {\n      containerRef.current.scrollTop = containerRef.current.scrollHeight;\n    }\n  }, [code, isGenerating]);\n\n  return (\n    <div ref={containerRef} className=\"max-h-60 overflow-auto rounded-md\">\n      <SyntaxHighlighter\n        language=\"html\"\n        style={vs2015}\n        customStyle={{ margin: 0, padding: \"0.5rem\", fontSize: \"0.75rem\", borderRadius: \"0.375rem\" }}\n        wrapLongLines\n      >\n        {code}\n      </SyntaxHighlighter>\n    </div>\n  );\n}\n\nfunction isFiniteNumber(value: unknown): value is number {\n  return typeof value === \"number\" && Number.isFinite(value);\n}\n\nfunction formatDurationMs(milliseconds: number): string {\n  const seconds = Math.max(1, Math.round(milliseconds / 1000));\n  if (seconds < 60) return `${seconds}s`;\n  const minutes = Math.floor(seconds / 60);\n  const 
remainingSeconds = seconds % 60;\n  if (minutes < 60) return `${minutes}m ${remainingSeconds}s`;\n  const hours = Math.floor(minutes / 60);\n  const remainingMinutes = minutes % 60;\n  return `${hours}h ${remainingMinutes}m`;\n}\n\nfunction formatDuration(startedAt?: number, endedAt?: number): string {\n  if (!isFiniteNumber(startedAt) || !isFiniteNumber(endedAt)) return \"\";\n  return formatDurationMs(endedAt - startedAt);\n}\n\nfunction formatElapsedSince(timestampMs: number | undefined, nowMs: number): string {\n  if (!isFiniteNumber(timestampMs)) return \"\";\n  return formatDurationMs(Math.max(0, nowMs - timestampMs));\n}\n\nfunction formatVariantWallClockDuration(\n  requestStartedAt: number | undefined,\n  completedAt: number | undefined,\n  nowMs: number\n): string {\n  if (!isFiniteNumber(requestStartedAt)) return \"\";\n  const end = isFiniteNumber(completedAt) ? completedAt : nowMs;\n  return formatDurationMs(Math.max(0, end - requestStartedAt));\n}\n\n\nfunction getEventIcon(type: AgentEventType, toolName?: string) {\n  if (type === \"thinking\") {\n    return <BsLightbulb className=\"text-yellow-500\" />;\n  }\n  if (type === \"assistant\") {\n    return <BsChatDots className=\"text-blue-500\" />;\n  }\n  if (toolName === \"create_file\") {\n    return <BsFileEarmarkPlus className=\"text-indigo-500\" />;\n  }\n  if (toolName === \"edit_file\") {\n    return <BsPencilSquare className=\"text-purple-500\" />;\n  }\n  if (toolName === \"generate_images\") {\n    return <BsImage className=\"text-pink-500\" />;\n  }\n  if (toolName === \"remove_background\") {\n    return <BsScissors className=\"text-teal-500\" />;\n  }\n  if (toolName === \"retrieve_option\") {\n    return <BsFiles className=\"text-slate-500\" />;\n  }\n  return <BsFileEarmarkPlus className=\"text-gray-500\" />;\n}\n\nfunction getEventTitle(event: AgentEvent): string {\n  if (event.type === \"thinking\") {\n    if (event.status === \"running\") return \"Thinking\";\n    const duration = 
formatDuration(event.startedAt, event.endedAt);\n    return duration ? `Thought for ${duration}` : \"Thought\";\n  }\n  if (event.type === \"assistant\") {\n    return \"Assistant response\";\n  }\n  if (event.type === \"tool\") {\n    if (event.toolName === \"create_file\") {\n      return event.status === \"running\" ? \"Creating file\" : \"Created file\";\n    }\n    if (event.toolName === \"edit_file\") {\n      return event.status === \"running\" ? \"Editing file\" : \"Edited file\";\n    }\n    if (event.toolName === \"generate_images\") {\n      const input = event.input as any;\n      const output = event.output as any;\n      const count = output?.images?.length || input?.count || 0;\n      if (event.status === \"running\") {\n        return count ? `Generating ${count} image${count !== 1 ? \"s\" : \"\"}` : \"Generating images\";\n      }\n      return count ? `Generated ${count} image${count !== 1 ? \"s\" : \"\"}` : \"Generated images\";\n    }\n    if (event.toolName === \"remove_background\") {\n      const rbInput = event.input as any;\n      const rbOutput = event.output as any;\n      const rbCount = rbOutput?.images?.length || rbInput?.image_urls?.length || 0;\n      if (event.status === \"running\") {\n        return rbCount > 1 ? `Removing ${rbCount} backgrounds` : \"Removing background\";\n      }\n      return rbCount > 1 ? `Removed ${rbCount} backgrounds` : \"Background removed\";\n    }\n    if (event.toolName === \"retrieve_option\") {\n      return event.status === \"running\"\n        ? \"Retrieving option\"\n        : \"Retrieved option\";\n    }\n    return event.status === \"running\" ? 
\"Running tool\" : \"Tool completed\";\n  }\n  return \"Activity\";\n}\n\n\nfunction renderToolDetails(event: AgentEvent, variantCode?: string) {\n  if (!event.input && !event.output) return null;\n\n  const renderJson = (data: unknown) => {\n    if (!data) return null;\n    let json = \"\";\n    try {\n      json = JSON.stringify(data, null, 2);\n    } catch {\n      json = String(data);\n    }\n    if (json.length > 900) {\n      json = json.slice(0, 900) + \"...\";\n    }\n    return (\n      <pre className=\"mt-2 rounded-md bg-gray-50 dark:bg-gray-800 p-2 text-xs text-gray-700 dark:text-gray-200 overflow-x-auto\">\n        {json}\n      </pre>\n    );\n  };\n\n  const output = event.output as any;\n  const input = event.input as any;\n  const hasError = Boolean(output?.error);\n  const images =\n    output && Array.isArray(output.images) ? (output.images as Array<any>) : null;\n  const edits =\n    output && Array.isArray(output.edits) ? (output.edits as Array<any>) : null;\n\n  return (\n    <div className=\"text-sm text-gray-700 dark:text-gray-200\">\n      {hasError && (\n        <div className=\"rounded-md border border-red-200 dark:border-red-700 bg-red-50 dark:bg-red-900/30 p-3\">\n          <div className=\"text-xs uppercase tracking-wide text-red-500\">Error</div>\n          <div className=\"mt-1 text-sm text-red-700 dark:text-red-200\">\n            {output?.error}\n          </div>\n          {event.input && (\n            <div className=\"mt-2\">\n              <div className=\"text-xs uppercase tracking-wide text-red-400\">\n                Input\n              </div>\n              {renderJson(event.input)}\n            </div>\n          )}\n        </div>\n      )}\n      {event.toolName === \"create_file\" && !hasError && variantCode && (\n        <CodePreviewBlock code={variantCode} isGenerating={event.status === \"running\"} />\n      )}\n\n      {event.toolName === \"edit_file\" && edits && !hasError && (\n        <div 
className=\"space-y-2\">\n          {edits.map((edit, index) => (\n            <div\n              key={`${edit.old_text}-${index}`}\n              className=\"rounded-md border border-gray-200 dark:border-gray-700 bg-white dark:bg-gray-900/60 p-3\"\n            >\n              <div className=\"text-xs uppercase tracking-wide text-gray-400\">\n                Edit {index + 1}\n              </div>\n              <div className=\"mt-2 grid gap-2\">\n                <div>\n                  <div className=\"text-xs text-gray-500\">Old</div>\n                  <div className=\"mt-1 rounded bg-red-50 dark:bg-red-900/30 p-2 text-xs font-mono text-red-700 dark:text-red-200 break-all\">\n                    {edit.old_text}\n                  </div>\n                </div>\n                <div>\n                  <div className=\"text-xs text-gray-500\">New</div>\n                  <div className=\"mt-1 rounded bg-emerald-50 dark:bg-emerald-900/30 p-2 text-xs font-mono text-emerald-700 dark:text-emerald-200 break-all\">\n                    {edit.new_text}\n                  </div>\n                </div>\n              </div>\n              {edit.replaced !== undefined && (\n                <div className=\"mt-2 text-xs text-gray-500\">\n                  Replaced {edit.replaced} time{edit.replaced === 1 ? 
\"\" : \"s\"}\n                </div>\n              )}\n            </div>\n          ))}\n        </div>\n      )}\n\n      {event.toolName === \"generate_images\" && !hasError && (\n        <div>\n          {/* While running: show prompts with dividers */}\n          {event.status === \"running\" && input?.prompts && Array.isArray(input.prompts) && (\n            <div className=\"divide-y divide-gray-100 dark:divide-gray-800\">\n              {input.prompts.map((prompt: string, index: number) => (\n                <div key={index} className=\"text-xs text-gray-600 dark:text-gray-400 py-1.5\">\n                  {prompt}\n                </div>\n              ))}\n            </div>\n          )}\n          {/* After complete: 50/50 image left, prompt right */}\n          {event.status !== \"running\" && images && (\n            <div className=\"divide-y divide-gray-100 dark:divide-gray-800\">\n              {images.map((item, index) => (\n                <div key={`${item.prompt}-${index}`} className=\"flex gap-3 py-2\">\n                  <div className=\"w-1/2 shrink-0\">\n                    {item.url ? 
(\n                      <img\n                        src={item.url}\n                        alt={item.prompt || `Generated image ${index + 1}`}\n                        className=\"w-full rounded object-cover\"\n                        loading=\"lazy\"\n                      />\n                    ) : (\n                      <div className=\"aspect-square rounded bg-gray-100 dark:bg-gray-800 flex items-center justify-center text-xs text-gray-400\">\n                        Failed\n                      </div>\n                    )}\n                  </div>\n                  <div className=\"w-1/2 text-xs text-gray-600 dark:text-gray-400 self-center\">\n                    {item.prompt}\n                  </div>\n                </div>\n              ))}\n            </div>\n          )}\n        </div>\n      )}\n\n      {event.toolName === \"remove_background\" && !hasError && (\n        <div>\n          {/* While running: show the source images */}\n          {event.status === \"running\" && input?.image_urls && Array.isArray(input.image_urls) && (\n            <div className=\"divide-y divide-gray-100 dark:divide-gray-800\">\n              {input.image_urls.map((url: string, index: number) => (\n                <div key={index} className=\"py-2\">\n                  <img\n                    src={url}\n                    alt={`Original image ${index + 1}`}\n                    className=\"w-full rounded object-cover\"\n                    loading=\"lazy\"\n                  />\n                </div>\n              ))}\n            </div>\n          )}\n          {/* After complete: before/after side by side for each image */}\n          {event.status !== \"running\" && output?.images && Array.isArray(output.images) && (\n            <div className=\"divide-y divide-gray-100 dark:divide-gray-800\">\n              {output.images.map((item: any, index: number) => (\n                <div key={`${item.image_url}-${index}`} className=\"flex gap-2 py-2\">\n   
               <div className=\"w-1/2\">\n                    <div className=\"text-xs text-gray-500 dark:text-gray-400 mb-1\">Before</div>\n                    <img\n                      src={item.image_url}\n                      alt={`Original image ${index + 1}`}\n                      className=\"w-full rounded object-cover\"\n                      loading=\"lazy\"\n                    />\n                  </div>\n                  <div className=\"w-1/2\">\n                    <div className=\"text-xs text-gray-500 dark:text-gray-400 mb-1\">After</div>\n                    {item.result_url ? (\n                      <div className=\"relative\">\n                        <div\n                          className=\"absolute inset-0 rounded\"\n                          style={{\n                            backgroundImage:\n                              \"linear-gradient(45deg, #e5e7eb 25%, transparent 25%), linear-gradient(-45deg, #e5e7eb 25%, transparent 25%), linear-gradient(45deg, transparent 75%, #e5e7eb 75%), linear-gradient(-45deg, transparent 75%, #e5e7eb 75%)\",\n                            backgroundSize: \"10px 10px\",\n                            backgroundPosition: \"0 0, 0 5px, 5px -5px, -5px 0px\",\n                          }}\n                        />\n                        <img\n                          src={item.result_url}\n                          alt=\"Background removed\"\n                          className=\"relative w-full rounded\"\n                          loading=\"lazy\"\n                        />\n                      </div>\n                    ) : (\n                      <div className=\"aspect-square rounded bg-gray-100 dark:bg-gray-800 flex items-center justify-center text-xs text-gray-400\">\n                        Failed\n                      </div>\n                    )}\n                  </div>\n                </div>\n              ))}\n            </div>\n          )}\n        </div>\n      )}\n\n      
{!event.toolName && !hasError && (\n        <>\n          {event.input && (\n            <div>\n              <div className=\"text-xs uppercase tracking-wide text-gray-400\">\n                Input\n              </div>\n              {renderJson(event.input)}\n            </div>\n          )}\n          {event.output && (\n            <div className=\"mt-3\">\n              <div className=\"text-xs uppercase tracking-wide text-gray-400\">\n                Output\n              </div>\n              {renderJson(event.output)}\n            </div>\n          )}\n        </>\n      )}\n    </div>\n  );\n}\n\nfunction AgentEventCard({\n  event,\n  autoExpand,\n  variantCode,\n}: {\n  event: AgentEvent;\n  autoExpand?: boolean;\n  variantCode?: string;\n}) {\n  const [expanded, setExpanded] = useState(Boolean(autoExpand));\n\n  useEffect(() => {\n    if (autoExpand) {\n      setExpanded(true);\n    }\n  }, [autoExpand]);\n\n  const isExpanded =\n    (event.type !== \"thinking\" && event.status === \"running\") || expanded;\n\n  if (event.type === \"assistant\") {\n    if (!event.content) return null;\n    return (\n      <div className=\"py-1 text-sm text-gray-700 dark:text-gray-300 prose prose-sm dark:prose-invert max-w-none\">\n        <ReactMarkdown\n          components={{\n            img: ({ ...props }) => (\n              <div className=\"my-2 flex justify-start max-w-full\">\n                <img\n                  {...props}\n                  className=\"max-h-60 max-w-full object-contain rounded-lg border border-gray-200 dark:border-gray-700\"\n                  loading=\"lazy\"\n                />\n              </div>\n            ),\n          }}\n        >\n          {event.content}\n        </ReactMarkdown>\n      </div>\n    );\n  }\n\n  return (\n    <div>\n      <button\n        onClick={() => setExpanded((prev) => !prev)}\n        className=\"w-full flex items-center gap-2 py-1.5 text-left text-gray-500 dark:text-gray-400 hover:text-gray-700 
dark:hover:text-gray-300\"\n      >\n        {getEventIcon(event.type, event.toolName)}\n        <span className={`text-sm flex-1 ${event.status === \"running\" ? \"active-step-shimmer\" : \"\"}`}>\n          {getEventTitle(event)}\n        </span>\n        {isExpanded ? (\n          <BsChevronDown className=\"text-xs shrink-0\" />\n        ) : (\n          <BsChevronRight className=\"text-xs shrink-0\" />\n        )}\n      </button>\n      {isExpanded && (\n        <div className=\"pb-2\">\n          {event.type === \"thinking\" && event.content && (\n            <div className=\"prose prose-sm dark:prose-invert max-w-none\">\n              <ReactMarkdown\n                components={{\n                  img: ({ ...props }) => (\n                    <div className=\"my-2 flex justify-start max-w-full\">\n                      <img\n                        {...props}\n                        className=\"max-h-60 max-w-full object-contain rounded-lg border border-gray-200 dark:border-gray-700\"\n                        loading=\"lazy\"\n                      />\n                    </div>\n                  ),\n                }}\n              >\n                {event.content}\n              </ReactMarkdown>\n            </div>\n          )}\n          {event.type === \"tool\" && renderToolDetails(event, variantCode)}\n        </div>\n      )}\n    </div>\n  );\n}\n\nfunction AgentActivity() {\n  const { head, commits, latestCommitHash } = useProjectStore();\n  const [stepsExpandedByVariant, setStepsExpandedByVariant] = useState<\n    Record<string, boolean>\n  >({});\n  const [nowMs, setNowMs] = useState(() => Date.now());\n  const appState = useAppStore((s) => s.appState);\n\n  useEffect(() => {\n    if (appState !== AppState.CODING) return;\n    const intervalId = window.setInterval(() => setNowMs(Date.now()), 1000);\n    return () => window.clearInterval(intervalId);\n  }, [appState]);\n\n  const currentCommit = head ? 
commits[head] : null;\n  const selectedVariant = currentCommit\n    ? currentCommit.variants[currentCommit.selectedVariantIndex]\n    : null;\n  const selectedVariantStatus = selectedVariant?.status;\n  const variantUiKey =\n    currentCommit ? `${currentCommit.hash}:${currentCommit.selectedVariantIndex}` : \"\";\n\n  const variantCode = selectedVariant?.code || \"\";\n  const events = selectedVariant?.agentEvents || [];\n  const lastAssistantId = [...events]\n    .reverse()\n    .find((event) => event.type === \"assistant\")?.id;\n  const requestStartMs =\n    selectedVariant?.requestStartedAt ??\n    (currentCommit?.dateCreated\n      ? new Date(currentCommit.dateCreated).getTime()\n      : undefined);\n\n  const isLatestCommit = head === latestCommitHash;\n  if (!isLatestCommit || events.length === 0) {\n    return null;\n  }\n\n  const isDone =\n    selectedVariantStatus === \"complete\" ||\n    selectedVariantStatus === \"error\" ||\n    selectedVariantStatus === \"cancelled\";\n  const runningDuration = formatElapsedSince(requestStartMs, nowMs);\n  const variantDuration = formatVariantWallClockDuration(\n    requestStartMs,\n    selectedVariant?.completedAt,\n    nowMs\n  );\n  const stepsExpanded = variantUiKey\n    ? Boolean(stepsExpandedByVariant[variantUiKey])\n    : false;\n  const stepEvents = events.filter((e) => e.type === \"tool\" || e.type === \"thinking\");\n  const assistantEvents = events.filter((e) => e.type === \"assistant\");\n\n  return (\n    <div className=\"space-y-1 mb-3\">\n      {isDone ? 
(\n        <>\n          {/* Collapsed steps summary */}\n          <button\n            onClick={() =>\n              setStepsExpandedByVariant((prev) => ({\n                ...prev,\n                [variantUiKey]: !prev[variantUiKey],\n              }))\n            }\n            className=\"w-full flex items-center gap-2 rounded-xl border border-gray-200 dark:border-gray-700 bg-white dark:bg-gray-900/60 px-3 py-2 text-left\"\n          >\n            {stepsExpanded ? (\n              <BsChevronDown className=\"text-gray-400 text-xs\" />\n            ) : (\n              <BsChevronRight className=\"text-gray-400 text-xs\" />\n            )}\n            <span className=\"text-xs text-gray-500 dark:text-gray-400\">\n              Worked through {stepEvents.length} step{stepEvents.length !== 1 ? \"s\" : \"\"}{variantDuration ? ` in ${variantDuration}` : \"\"}\n            </span>\n          </button>\n          {stepsExpanded && (\n            <div className=\"space-y-1\">\n              {stepEvents.map((event) => (\n                <AgentEventCard key={event.id} event={event} variantCode={event.toolName === \"create_file\" ? 
variantCode : undefined} />\n              ))}\n            </div>\n          )}\n          {/* Assistant responses always visible */}\n          {assistantEvents.map((event) => (\n            <AgentEventCard\n              key={event.id}\n              event={event}\n              autoExpand={event.id === lastAssistantId}\n            />\n          ))}\n        </>\n      ) : (\n        <>\n          <div className=\"flex items-center justify-between rounded-xl border border-violet-200 dark:border-violet-800 bg-gradient-to-r from-violet-50 to-white dark:from-violet-900/20 dark:to-zinc-900 px-3 py-2 shadow-[0_0_15px_-3px_rgba(139,92,246,0.3)] dark:shadow-[0_0_15px_-3px_rgba(139,92,246,0.4)] transition-all duration-500\">\n            <div className=\"flex items-center gap-2 text-sm text-gray-600 dark:text-gray-300\">\n              <WorkingPulse />\n              <span>Working...</span>\n            </div>\n            <div className=\"text-xs font-semibold text-gray-700 dark:text-gray-200\">\n              Time so far {runningDuration || \"--\"}\n            </div>\n          </div>\n          {events.map((event) => (\n            <AgentEventCard\n              key={event.id}\n              event={event}\n              autoExpand={event.type === \"assistant\" && event.id === lastAssistantId}\n              variantCode={event.toolName === \"create_file\" ? variantCode : undefined}\n            />\n          ))}\n        </>\n      )}\n    </div>\n  );\n}\n\nexport default AgentActivity;\n"
  },
  {
    "path": "frontend/src/components/commits/types.ts",
    "content": "import { PromptContent, PromptMessageRole } from \"../../types\";\n\nexport type CommitHash = string;\n\nexport type VariantStatus = \"generating\" | \"complete\" | \"cancelled\" | \"error\";\n\nexport type AgentEventStatus = \"running\" | \"complete\" | \"error\";\nexport type AgentEventType = \"thinking\" | \"assistant\" | \"tool\";\n\nexport type AgentEvent = {\n  id: string;\n  type: AgentEventType;\n  status: AgentEventStatus;\n  content?: string;\n  toolName?: string;\n  input?: any;\n  output?: any;\n  startedAt: number;\n  endedAt?: number;\n};\n\nexport type VariantHistoryMessage = {\n  role: PromptMessageRole;\n  text: string;\n  imageAssetIds: string[];\n  videoAssetIds: string[];\n};\n\nexport type Variant = {\n  code: string;\n  history: VariantHistoryMessage[];\n  requestStartedAt?: number;\n  completedAt?: number;\n  status?: VariantStatus;\n  errorMessage?: string;\n  thinking?: string;\n  thinkingStartTime?: number;\n  thinkingDuration?: number;\n  agentEvents?: AgentEvent[];\n  model?: string;\n};\n\nexport type BaseCommit = {\n  hash: CommitHash;\n  parentHash: CommitHash | null;\n  dateCreated: Date;\n  isCommitted: boolean;\n  variants: Variant[];\n  selectedVariantIndex: number;\n};\n\nexport type CommitType = \"ai_create\" | \"ai_edit\" | \"code_create\";\n\nexport type AiCreateCommit = BaseCommit & {\n  type: \"ai_create\";\n  inputs: PromptContent;\n};\n\nexport type AiEditCommit = BaseCommit & {\n  type: \"ai_edit\";\n  inputs: PromptContent;\n};\n\nexport type CodeCreateCommit = BaseCommit & {\n  type: \"code_create\";\n  inputs: null;\n};\n\nexport type Commit = AiCreateCommit | AiEditCommit | CodeCreateCommit;\n"
  },
  {
    "path": "frontend/src/components/commits/utils.ts",
    "content": "import { nanoid } from \"nanoid\";\nimport {\n  AiCreateCommit,\n  AiEditCommit,\n  CodeCreateCommit,\n  Commit,\n} from \"./types\";\n\nexport function createCommit(\n  commit:\n    | Omit<\n        AiCreateCommit,\n        \"hash\" | \"dateCreated\" | \"selectedVariantIndex\" | \"isCommitted\"\n      >\n    | Omit<\n        AiEditCommit,\n        \"hash\" | \"dateCreated\" | \"selectedVariantIndex\" | \"isCommitted\"\n      >\n    | Omit<\n        CodeCreateCommit,\n        \"hash\" | \"dateCreated\" | \"selectedVariantIndex\" | \"isCommitted\"\n      >\n): Commit {\n  const hash = nanoid();\n  return {\n    ...commit,\n    hash,\n    isCommitted: false,\n    dateCreated: new Date(),\n    selectedVariantIndex: 0,\n  };\n}\n"
  },
  {
    "path": "frontend/src/components/core/KeyboardShortcutBadge.tsx",
    "content": "import React from \"react\";\nimport { BsArrowReturnLeft } from \"react-icons/bs\";\n\ninterface KeyboardShortcutBadgeProps {\n  letter: string;\n}\n\nconst KeyboardShortcutBadge: React.FC<KeyboardShortcutBadgeProps> = ({\n  letter,\n}) => {\n  const icon =\n    letter.toLowerCase() === \"enter\" || letter.toLowerCase() === \"return\" ? (\n      <BsArrowReturnLeft />\n    ) : (\n      letter.toUpperCase()\n    );\n\n  return (\n    <span className=\"font-mono text-xs ml-2 rounded bg-gray-700 dark:bg-gray-900 text-white py-[2px] px-2\">\n      {icon}\n    </span>\n  );\n};\n\nexport default KeyboardShortcutBadge;\n"
  },
  {
    "path": "frontend/src/components/core/Spinner.tsx",
    "content": "function Spinner() {\n  return (\n    <div role=\"status\">\n      <svg\n        aria-hidden=\"true\"\n        className=\"w-6 h-6 text-gray-200 animate-spin dark:text-gray-600 fill-gray-800\"\n        viewBox=\"0 0 100 101\"\n        fill=\"none\"\n        xmlns=\"http://www.w3.org/2000/svg\"\n      >\n        <path\n          d=\"M100 50.5908C100 78.2051 77.6142 100.591 50 100.591C22.3858 100.591 0 78.2051 0 50.5908C0 22.9766 22.3858 0.59082 50 0.59082C77.6142 0.59082 100 22.9766 100 50.5908ZM9.08144 50.5908C9.08144 73.1895 27.4013 91.5094 50 91.5094C72.5987 91.5094 90.9186 73.1895 90.9186 50.5908C90.9186 27.9921 72.5987 9.67226 50 9.67226C27.4013 9.67226 9.08144 27.9921 9.08144 50.5908Z\"\n          fill=\"currentColor\"\n        />\n        <path\n          d=\"M93.9676 39.0409C96.393 38.4038 97.8624 35.9116 97.0079 33.5539C95.2932 28.8227 92.871 24.3692 89.8167 20.348C85.8452 15.1192 80.8826 10.7238 75.2124 7.41289C69.5422 4.10194 63.2754 1.94025 56.7698 1.05124C51.7666 0.367541 46.6976 0.446843 41.7345 1.27873C39.2613 1.69328 37.813 4.19778 38.4501 6.62326C39.0873 9.04874 41.5694 10.4717 44.0505 10.1071C47.8511 9.54855 51.7191 9.52689 55.5402 10.0491C60.8642 10.7766 65.9928 12.5457 70.6331 15.2552C75.2735 17.9648 79.3347 21.5619 82.5849 25.841C84.9175 28.9121 86.7997 32.2913 88.1811 35.8758C89.083 38.2158 91.5421 39.6781 93.9676 39.0409Z\"\n          fill=\"currentFill\"\n        />\n      </svg>\n      <span className=\"sr-only\">Loading...</span>\n    </div>\n  );\n}\n\nexport default Spinner;\n"
  },
  {
    "path": "frontend/src/components/core/StackLabel.tsx",
    "content": "import React from \"react\";\nimport { Stack, STACK_DESCRIPTIONS } from \"../../lib/stacks\";\n\ninterface StackLabelProps {\n  stack: Stack;\n}\n\nconst StackLabel: React.FC<StackLabelProps> = ({ stack }) => {\n  const stackComponents = STACK_DESCRIPTIONS[stack].components;\n\n  return (\n    <div className=\"notranslate\" translate=\"no\">\n      {stackComponents.map((component, index) => (\n        <React.Fragment key={index}>\n          <span className=\"font-semibold\">{component}</span>\n          {index < stackComponents.length - 1 && \" + \"}\n        </React.Fragment>\n      ))}\n    </div>\n  );\n};\n\nexport default StackLabel;\n"
  },
  {
    "path": "frontend/src/components/core/WorkingPulse.tsx",
    "content": "function WorkingPulse() {\n  return (\n    <span className=\"inline-flex items-end gap-0.5\" aria-hidden=\"true\">\n      <span className=\"h-1.5 w-1.5 rounded-full bg-violet-500/90 dark:bg-violet-400/90 animate-bounce [animation-duration:900ms]\" />\n      <span className=\"h-1.5 w-1.5 rounded-full bg-violet-500/80 dark:bg-violet-400/80 animate-bounce [animation-duration:900ms] [animation-delay:150ms]\" />\n      <span className=\"h-1.5 w-1.5 rounded-full bg-violet-500/70 dark:bg-violet-400/70 animate-bounce [animation-duration:900ms] [animation-delay:300ms]\" />\n    </span>\n  );\n}\n\nexport default WorkingPulse;\n"
  },
  {
    "path": "frontend/src/components/evals/AllEvalsPage.tsx",
    "content": "import { Link } from \"react-router-dom\";\n\nfunction AllEvalsPage() {\n  return (\n    <div className=\"flex flex-col items-center justify-center min-h-screen bg-gray-50\">\n      <div className=\"max-w-md w-full space-y-8 p-8\">\n        <div className=\"flex justify-between items-center mb-4\">\n          <h1 className=\"text-3xl font-bold text-gray-900\">\n            Evals Dashboard\n          </h1>\n          <Link \n            to=\"/\"\n            className=\"text-sm text-gray-500 hover:text-gray-700 flex items-center\"\n          >\n            ← Back to app\n          </Link>\n        </div>\n        <div className=\"space-y-4\">\n          <Link\n            to=\"/evals/run\"\n            className=\"block w-full p-4 bg-white rounded-lg shadow hover:shadow-md transition-shadow border border-gray-200\"\n          >\n            <h2 className=\"text-xl font-semibold text-gray-800\">Run Evals</h2>\n            <p className=\"text-gray-600\">\n              Generate evaluations for multiple models\n            </p>\n          </Link>\n\n          <Link\n            to=\"/evals/pairwise\"\n            className=\"block w-full p-4 bg-white rounded-lg shadow hover:shadow-md transition-shadow border border-gray-200\"\n          >\n            <h2 className=\"text-xl font-semibold text-gray-800\">\n              Pairwise Comparison\n            </h2>\n            <p className=\"text-gray-600\">\n              Compare outputs from two different models\n            </p>\n          </Link>\n\n          <Link\n            to=\"/evals/best-of-n\"\n            className=\"block w-full p-4 bg-white rounded-lg shadow hover:shadow-md transition-shadow border border-gray-200\"\n          >\n            <h2 className=\"text-xl font-semibold text-gray-800\">Best of N</h2>\n            <p className=\"text-gray-600\">\n              Compare multiple model outputs side by side\n            </p>\n          </Link>\n\n          <Link\n            
to=\"/evals/single\"\n            className=\"block w-full p-4 bg-white rounded-lg shadow hover:shadow-md transition-shadow border border-gray-200\"\n          >\n            <h2 className=\"text-xl font-semibold text-gray-800\">\n              Single Model Eval\n            </h2>\n            <p className=\"text-gray-600\">Score outputs from a single model</p>\n          </Link>\n\n          <Link\n            to=\"/evals/openai-input-compare\"\n            className=\"block w-full p-4 bg-white rounded-lg shadow hover:shadow-md transition-shadow border border-gray-200\"\n          >\n            <h2 className=\"text-xl font-semibold text-gray-800\">\n              OpenAI Input Compare\n            </h2>\n            <p className=\"text-gray-600\">\n              Find the first diverging input block between two requests\n            </p>\n          </Link>\n        </div>\n      </div>\n    </div>\n  );\n}\n\nexport default AllEvalsPage;\n"
  },
  {
    "path": "frontend/src/components/evals/BestOfNEvalsPage.tsx",
    "content": "import React, { useState, useEffect, useRef } from \"react\";\nimport { HTTP_BACKEND_URL } from \"../../config\";\nimport { Dialog, DialogContent, DialogTrigger } from \"../ui/dialog\";\nimport EvalNavigation from \"./EvalNavigation\";\n\ninterface Eval {\n  input: string;\n  outputs: string[];\n}\n\n// Update type to support any folder as winner\ntype Outcome = number | \"tie\" | null;\n\ninterface BestOfNEvalsResponse {\n  evals: Eval[];\n  folder_names: string[];\n}\n\ninterface OutputFolder {\n  name: string;\n  path: string;\n  modified_time: number;\n}\n\nfunction BestOfNEvalsPage() {\n  const [evals, setEvals] = React.useState<Eval[]>([]);\n  const [outcomes, setOutcomes] = React.useState<Outcome[]>([]);\n  const [folderNames, setFolderNames] = useState<string[]>([]);\n  // Track multiple folder paths\n  const [folderPaths, setFolderPaths] = useState<string[]>([\"\"]);\n  const [isLoading, setIsLoading] = useState(false);\n  const [selectedHtml, setSelectedHtml] = useState<string>(\"\");\n\n  // Available folders from backend\n  const [availableFolders, setAvailableFolders] = useState<OutputFolder[]>([]);\n\n  // Navigation state\n  const [currentComparisonIndex, setCurrentComparisonIndex] = useState(0);\n  const [currentModelIndex, setCurrentModelIndex] = useState(0);\n\n  // UI state\n  const [showResults, setShowResults] = useState(false);\n  const [winnerFilter, setWinnerFilter] = useState<number | \"tie\" | \"all\">(\n    \"all\"\n  );\n\n  // Refs for synchronized scrolling\n  const iframeRefs = useRef<(HTMLIFrameElement | null)[]>([]);\n\n  // Fetch available folders on mount\n  useEffect(() => {\n    const fetchFolders = async () => {\n      try {\n        const response = await fetch(`${HTTP_BACKEND_URL}/output_folders`);\n        const folders: OutputFolder[] = await response.json();\n        setAvailableFolders(folders);\n      } catch (error) {\n        console.error(\"Error fetching folders:\", error);\n      }\n    };\n    
fetchFolders();\n  }, []);\n\n  // Synchronized scrolling effect\n  useEffect(() => {\n    const setupSyncScrolling = () => {\n      const iframes = iframeRefs.current.filter(Boolean);\n      if (iframes.length < 2) return;\n\n      const syncScroll = (sourceIframe: HTMLIFrameElement) => {\n        try {\n          const sourceDocument =\n            sourceIframe.contentDocument ||\n            sourceIframe.contentWindow?.document;\n          if (!sourceDocument) return;\n\n          const syncHandler = () => {\n            const scrollTop =\n              sourceDocument.documentElement.scrollTop ||\n              sourceDocument.body.scrollTop;\n            const scrollLeft =\n              sourceDocument.documentElement.scrollLeft ||\n              sourceDocument.body.scrollLeft;\n\n            iframes.forEach((iframe) => {\n              if (!iframe || iframe === sourceIframe) return;\n              try {\n                const targetDocument =\n                  iframe.contentDocument || iframe.contentWindow?.document;\n                if (targetDocument) {\n                  targetDocument.documentElement.scrollTop = scrollTop;\n                  targetDocument.body.scrollTop = scrollTop;\n                  targetDocument.documentElement.scrollLeft = scrollLeft;\n                  targetDocument.body.scrollLeft = scrollLeft;\n                }\n              } catch (e) {\n                // Ignore cross-origin errors\n              }\n            });\n          };\n\n          sourceDocument.addEventListener(\"scroll\", syncHandler);\n          return () =>\n            sourceDocument.removeEventListener(\"scroll\", syncHandler);\n        } catch (e) {\n          // Ignore cross-origin errors\n        }\n      };\n\n      const cleanupFunctions = iframes\n        .map((iframe) => (iframe ? 
syncScroll(iframe) : null))\n        .filter(Boolean);\n      return () => cleanupFunctions.forEach((cleanup) => cleanup?.());\n    };\n\n    // Wait for iframes to load\n    const timer = setTimeout(setupSyncScrolling, 1000);\n    return () => clearTimeout(timer);\n  }, [currentComparisonIndex, evals]);\n\n  // Get filtered comparisons indices\n  const getFilteredIndices = () => {\n    if (winnerFilter === \"all\") {\n      return evals.map((_, index) => index);\n    }\n    return evals\n      .map((_, index) => index)\n      .filter((index) => {\n        const outcome = outcomes[index];\n        if (winnerFilter === \"tie\") {\n          return outcome === \"tie\";\n        }\n        return outcome === winnerFilter;\n      });\n  };\n\n  const filteredIndices = getFilteredIndices();\n\n  // Navigation functions\n  const goToPrevious = () => {\n    if (winnerFilter === \"all\") {\n      setCurrentComparisonIndex((prev) => Math.max(0, prev - 1));\n    } else {\n      const currentFilteredIndex = filteredIndices.indexOf(\n        currentComparisonIndex\n      );\n      if (currentFilteredIndex > 0) {\n        setCurrentComparisonIndex(filteredIndices[currentFilteredIndex - 1]);\n      }\n    }\n  };\n\n  const goToNext = () => {\n    if (winnerFilter === \"all\") {\n      setCurrentComparisonIndex((prev) => Math.min(evals.length - 1, prev + 1));\n    } else {\n      const currentFilteredIndex = filteredIndices.indexOf(\n        currentComparisonIndex\n      );\n      if (currentFilteredIndex < filteredIndices.length - 1) {\n        setCurrentComparisonIndex(filteredIndices[currentFilteredIndex + 1]);\n      }\n    }\n  };\n\n  const goToComparison = (index: number) => {\n    setCurrentComparisonIndex(Math.max(0, Math.min(evals.length - 1, index)));\n  };\n\n  // Update current index when filter changes\n  useEffect(() => {\n    if (\n      winnerFilter !== \"all\" &&\n      filteredIndices.length > 0 &&\n      !filteredIndices.includes(currentComparisonIndex)\n    
) {\n      setCurrentComparisonIndex(filteredIndices[0]);\n    }\n  }, [winnerFilter, filteredIndices, currentComparisonIndex]);\n\n  // Keyboard shortcuts\n  useEffect(() => {\n    const handleKeyPress = (e: KeyboardEvent) => {\n      if (evals.length === 0) return;\n\n      switch (e.key) {\n        case \"ArrowLeft\":\n          e.preventDefault();\n          setCurrentModelIndex((prev) =>\n            prev > 0 ? prev - 1 : folderNames.length - 1\n          );\n          break;\n        case \"ArrowRight\":\n          e.preventDefault();\n          setCurrentModelIndex((prev) => (prev + 1) % folderNames.length);\n          break;\n        case \"ArrowUp\":\n          e.preventDefault();\n          goToPrevious();\n          break;\n        case \"ArrowDown\":\n          e.preventDefault();\n          goToNext();\n          break;\n        case \"1\":\n        case \"2\":\n        case \"3\":\n        case \"4\":\n        case \"5\":\n        case \"6\":\n        case \"7\":\n        case \"8\":\n        case \"9\": {\n          // Block scope keeps the const local to this case (no-case-declarations)\n          e.preventDefault();\n          const modelIndex = parseInt(e.key, 10) - 1;\n          if (modelIndex < folderNames.length) {\n            if (e.shiftKey) {\n              // Shift + number = switch to model tab\n              setCurrentModelIndex(modelIndex);\n            } else {\n              // Number = vote for model\n              handleVote(currentComparisonIndex, modelIndex);\n            }\n          }\n          break;\n        }\n        case \"t\":\n          e.preventDefault();\n          handleVote(currentComparisonIndex, \"tie\");\n          break;\n        case \"Tab\":\n          e.preventDefault();\n          setCurrentModelIndex((prev) => (prev + 1) % folderNames.length);\n          break;\n      }\n    };\n\n    window.addEventListener(\"keydown\", handleKeyPress);\n    return () => window.removeEventListener(\"keydown\", handleKeyPress);\n    // Include outcomes and winnerFilter so goToPrevious/goToNext/handleVote\n    // don't capture stale state when only the filter or votes change\n  }, [currentComparisonIndex, evals.length, folderNames.length, outcomes, winnerFilter]);\n\n  // Add/remove folder input 
fields\n  const addFolderInput = () => {\n    setFolderPaths([...folderPaths, \"\"]);\n  };\n\n  const removeFolderInput = (index: number) => {\n    setFolderPaths(folderPaths.filter((_, i) => i !== index));\n  };\n\n  const updateFolderPath = (index: number, value: string) => {\n    const newPaths = [...folderPaths];\n    newPaths[index] = value;\n    setFolderPaths(newPaths);\n  };\n\n  // Calculate statistics for N models\n  const calculateStats = () => {\n    const totalVotes = outcomes.filter((o) => o !== null).length;\n    const stats = folderNames.map((name, index) => {\n      const wins = outcomes.filter((o) => o === index).length;\n      const percentage = totalVotes\n        ? ((wins / totalVotes) * 100).toFixed(2)\n        : \"0.00\";\n      return { name, wins, percentage };\n    });\n    const ties = outcomes.filter((o) => o === \"tie\").length;\n    const tiePercentage = totalVotes\n      ? ((ties / totalVotes) * 100).toFixed(2)\n      : \"0.00\";\n\n    return { stats, ties, tiePercentage, totalVotes };\n  };\n\n  const loadEvals = async () => {\n    if (folderPaths.some((path) => !path)) {\n      alert(\"Please select all folder paths\");\n      return;\n    }\n\n    setIsLoading(true);\n    try {\n      const queryParams = new URLSearchParams();\n      folderPaths.forEach((path, index) => {\n        queryParams.append(`folder${index + 1}`, path);\n      });\n\n      const response = await fetch(\n        `${HTTP_BACKEND_URL}/best-of-n-evals?${queryParams}`\n      );\n      const data: BestOfNEvalsResponse = await response.json();\n\n      console.log(data.evals);\n\n      setEvals(data.evals);\n      setOutcomes(new Array(data.evals.length).fill(null));\n      setFolderNames(data.folder_names);\n      setCurrentComparisonIndex(0);\n      setCurrentModelIndex(0);\n      // Reset iframe refs\n      iframeRefs.current = [];\n    } catch (error) {\n      console.error(\"Error loading evals:\", error);\n      alert(\n        \"Error loading evals. 
Please check the folder paths and try again.\"\n      );\n    } finally {\n      setIsLoading(false);\n    }\n  };\n\n  const handleVote = (index: number, outcome: Outcome) => {\n    // Functional update so votes fired from the window keydown listener\n    // never overwrite state with a stale copy of outcomes\n    setOutcomes((prev) => {\n      const newOutcomes = [...prev];\n      newOutcomes[index] = outcome;\n      return newOutcomes;\n    });\n  };\n\n  const stats = calculateStats();\n  const currentEval = evals[currentComparisonIndex];\n\n  // Copy results to the clipboard as tab-separated rows (pastes cleanly into spreadsheets)\n  const copyResultsAsCSV = async () => {\n    const rows: string[] = [];\n\n    // Add summary statistics only\n    stats.stats.forEach((stat) => {\n      rows.push(`${stat.name}\\t${stat.wins}\\t${stat.percentage}%`);\n    });\n    if (stats.ties > 0) {\n      rows.push(`Ties\\t${stats.ties}\\t${stats.tiePercentage}%`);\n    }\n\n    const csvContent = rows.join(\"\\n\");\n\n    try {\n      await navigator.clipboard.writeText(csvContent);\n    } catch {\n      // Fallback for older browsers without the async Clipboard API\n      const textArea = document.createElement(\"textarea\");\n      textArea.value = csvContent;\n      document.body.appendChild(textArea);\n      textArea.select();\n      document.execCommand(\"copy\");\n      document.body.removeChild(textArea);\n    }\n  };\n\n  return (\n    <div className=\"mx-auto\">\n      <EvalNavigation />\n      <div className=\"w-full py-2 bg-gradient-to-b from-gray-900 to-gray-800 text-white border-b border-gray-700\">\n        {evals.length === 0 ? 
(\n          /* Setup Section */\n          <div className=\"flex flex-col gap-4 max-w-5xl mx-auto px-6\">\n            <div className=\"flex items-center justify-between mb-2\">\n              <h2 className=\"text-xl font-semibold text-gray-200\">\n                Configure Model Comparison\n              </h2>\n              <button\n                onClick={async () => {\n                  try {\n                    const response = await fetch(\n                      `${HTTP_BACKEND_URL}/output_folders`\n                    );\n                    const folders: OutputFolder[] = await response.json();\n                    setAvailableFolders(folders);\n                  } catch (error) {\n                    console.error(\"Error fetching folders:\", error);\n                  }\n                }}\n                className=\"bg-gray-600 hover:bg-gray-700 text-white px-3 py-1.5 rounded-lg text-sm transition-colors flex items-center gap-2\"\n                title=\"Refresh folder list\"\n              >\n                <svg\n                  className=\"w-4 h-4\"\n                  fill=\"none\"\n                  stroke=\"currentColor\"\n                  viewBox=\"0 0 24 24\"\n                >\n                  <path\n                    strokeLinecap=\"round\"\n                    strokeLinejoin=\"round\"\n                    strokeWidth={2}\n                    d=\"M4 4v5h.582m15.356 2A8.001 8.001 0 004.582 9m0 0H9m11 11v-5h-.581m0 0a8.003 8.003 0 01-15.357-2m15.357 2H15\"\n                  />\n                </svg>\n                Refresh\n              </button>\n            </div>\n            <div className=\"space-y-3\">\n              {folderPaths.map((path, index) => (\n                <div key={index} className=\"relative\">\n                  <div className=\"flex items-center gap-2\">\n                    <span className=\"text-sm text-gray-400 font-medium w-16\">\n                      Model {index + 1}\n                    </span>\n       
             <div className=\"flex-1 relative\">\n                      <select\n                        value={path}\n                        onChange={(e) =>\n                          updateFolderPath(index, e.target.value)\n                        }\n                        className=\"w-full px-4 py-3 pr-10 rounded-lg bg-gray-700 text-white border border-gray-600 focus:border-blue-500 focus:outline-none transition-colors text-sm appearance-none cursor-pointer\"\n                      >\n                        <option value=\"\">Select a folder...</option>\n                        {availableFolders.map((folder) => (\n                          <option\n                            key={folder.path}\n                            value={folder.path}\n                            title={folder.name}\n                          >\n                            {folder.name}\n                          </option>\n                        ))}\n                      </select>\n                      <div className=\"absolute inset-y-0 right-0 flex items-center pr-3 pointer-events-none\">\n                        <svg\n                          className=\"w-5 h-5 text-gray-400\"\n                          fill=\"none\"\n                          stroke=\"currentColor\"\n                          viewBox=\"0 0 24 24\"\n                        >\n                          <path\n                            strokeLinecap=\"round\"\n                            strokeLinejoin=\"round\"\n                            strokeWidth={2}\n                            d=\"M19 9l-7 7-7-7\"\n                          />\n                        </svg>\n                      </div>\n                    </div>\n                    {index > 0 && (\n                      <button\n                        onClick={() => removeFolderInput(index)}\n                        className=\"bg-red-500 hover:bg-red-600 px-3 py-3 rounded-lg transition-colors\"\n                        title=\"Remove model\"\n  
                    >\n                        <svg\n                          className=\"w-4 h-4\"\n                          fill=\"none\"\n                          stroke=\"currentColor\"\n                          viewBox=\"0 0 24 24\"\n                        >\n                          <path\n                            strokeLinecap=\"round\"\n                            strokeLinejoin=\"round\"\n                            strokeWidth={2}\n                            d=\"M6 18L18 6M6 6l12 12\"\n                          />\n                        </svg>\n                      </button>\n                    )}\n                  </div>\n                </div>\n              ))}\n            </div>\n            <div className=\"flex gap-3 justify-center mt-6\">\n              <button\n                onClick={addFolderInput}\n                className=\"bg-gray-600 hover:bg-gray-700 text-white px-5 py-2.5 rounded-lg text-sm font-medium transition-colors flex items-center gap-2\"\n              >\n                <svg\n                  className=\"w-4 h-4\"\n                  fill=\"none\"\n                  stroke=\"currentColor\"\n                  viewBox=\"0 0 24 24\"\n                >\n                  <path\n                    strokeLinecap=\"round\"\n                    strokeLinejoin=\"round\"\n                    strokeWidth={2}\n                    d=\"M12 4v16m8-8H4\"\n                  />\n                </svg>\n                Add Model\n              </button>\n              <button\n                onClick={loadEvals}\n                disabled={isLoading || folderPaths.some((p) => !p)}\n                className=\"bg-blue-600 hover:bg-blue-700 disabled:bg-gray-600 disabled:cursor-not-allowed text-white px-8 py-2.5 rounded-lg text-sm font-medium transition-colors flex items-center gap-2\"\n              >\n                {isLoading ? 
(\n                  <>\n                    <svg className=\"animate-spin h-4 w-4\" viewBox=\"0 0 24 24\">\n                      <circle\n                        className=\"opacity-25\"\n                        cx=\"12\"\n                        cy=\"12\"\n                        r=\"10\"\n                        stroke=\"currentColor\"\n                        strokeWidth=\"4\"\n                        fill=\"none\"\n                      />\n                      <path\n                        className=\"opacity-75\"\n                        fill=\"currentColor\"\n                        d=\"M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z\"\n                      />\n                    </svg>\n                    Loading...\n                  </>\n                ) : (\n                  <>\n                    <svg\n                      className=\"w-4 h-4\"\n                      fill=\"none\"\n                      stroke=\"currentColor\"\n                      viewBox=\"0 0 24 24\"\n                    >\n                      <path\n                        strokeLinecap=\"round\"\n                        strokeLinejoin=\"round\"\n                        strokeWidth={2}\n                        d=\"M14.752 11.168l-3.197-2.132A1 1 0 0010 9.87v4.263a1 1 0 001.555.832l3.197-2.132a1 1 0 000-1.664z\"\n                      />\n                      <path\n                        strokeLinecap=\"round\"\n                        strokeLinejoin=\"round\"\n                        strokeWidth={2}\n                        d=\"M21 12a9 9 0 11-18 0 9 9 0 0118 0z\"\n                      />\n                    </svg>\n                    Start Comparison\n                  </>\n                )}\n              </button>\n            </div>\n          </div>\n        ) : (\n          /* Comparison Header */\n          <div className=\"max-w-7xl mx-auto px-4\">\n            <div className=\"flex 
flex-wrap items-center justify-between gap-3\">\n              {/* Left: Navigation */}\n              <div className=\"flex items-center gap-2\">\n                <button\n                  onClick={() => {\n                    setEvals([]);\n                    setOutcomes([]);\n                    setFolderNames([]);\n                    setCurrentComparisonIndex(0);\n                    setCurrentModelIndex(0);\n                  }}\n                  className=\"bg-gray-700 hover:bg-gray-600 text-white px-2.5 py-1 rounded-lg text-sm transition-colors\"\n                  title=\"Back to setup\"\n                >\n                  <svg\n                    className=\"w-4 h-4\"\n                    fill=\"none\"\n                    stroke=\"currentColor\"\n                    viewBox=\"0 0 24 24\"\n                  >\n                    <path\n                      strokeLinecap=\"round\"\n                      strokeLinejoin=\"round\"\n                      strokeWidth={2}\n                      d=\"M10 19l-7-7m0 0l7-7m-7 7h18\"\n                    />\n                  </svg>\n                </button>\n\n                <div className=\"flex items-center bg-gray-700 rounded-lg\">\n                  <button\n                    onClick={goToPrevious}\n                    disabled={currentComparisonIndex === 0}\n                    className=\"px-2.5 py-1 rounded-l-lg hover:bg-gray-600 disabled:opacity-50 disabled:cursor-not-allowed transition-colors\"\n                    title=\"Previous comparison (↑)\"\n                  >\n                    <svg\n                      className=\"w-4 h-4\"\n                      fill=\"none\"\n                      stroke=\"currentColor\"\n                      viewBox=\"0 0 24 24\"\n                    >\n                      <path\n                        strokeLinecap=\"round\"\n                        strokeLinejoin=\"round\"\n                        strokeWidth={2}\n                        d=\"M15 19l-7-7 
7-7\"\n                      />\n                    </svg>\n                  </button>\n\n                  <select\n                    value={currentComparisonIndex}\n                    onChange={(e) => goToComparison(parseInt(e.target.value))}\n                    className=\"bg-transparent text-white px-3 py-1 text-sm font-medium focus:outline-none appearance-none cursor-pointer\"\n                  >\n                    {winnerFilter === \"all\"\n                      ? evals.map((_, index) => (\n                          <option\n                            key={index}\n                            value={index}\n                            className=\"bg-gray-800\"\n                          >\n                            Comparison {index + 1}{\" \"}\n                            {outcomes[index] !== null ? \"✓\" : \"\"}\n                          </option>\n                        ))\n                      : filteredIndices.map((index) => (\n                          <option\n                            key={index}\n                            value={index}\n                            className=\"bg-gray-800\"\n                          >\n                            Comparison {index + 1}{\" \"}\n                            {outcomes[index] !== null ? 
\"✓\" : \"\"}\n                          </option>\n                        ))}\n                  </select>\n\n                  <button\n                    onClick={goToNext}\n                    disabled={currentComparisonIndex === evals.length - 1}\n                    className=\"px-2.5 py-1 rounded-r-lg hover:bg-gray-600 disabled:opacity-50 disabled:cursor-not-allowed transition-colors\"\n                    title=\"Next comparison (↓)\"\n                  >\n                    <svg\n                      className=\"w-4 h-4\"\n                      fill=\"none\"\n                      stroke=\"currentColor\"\n                      viewBox=\"0 0 24 24\"\n                    >\n                      <path\n                        strokeLinecap=\"round\"\n                        strokeLinejoin=\"round\"\n                        strokeWidth={2}\n                        d=\"M9 5l7 7-7 7\"\n                      />\n                    </svg>\n                  </button>\n                </div>\n\n                <span className=\"text-sm text-gray-400 font-medium\">\n                  {winnerFilter === \"all\"\n                    ? 
`${currentComparisonIndex + 1} of ${evals.length}`\n                    : `${\n                        filteredIndices.indexOf(currentComparisonIndex) + 1\n                      } of ${filteredIndices.length} (filtered)`}\n                </span>\n              </div>\n\n              {/* Center: Progress and Results */}\n              <div className=\"flex items-center gap-3\">\n                <div className=\"flex items-center gap-2 bg-gray-700/50 px-3 py-1 rounded-lg\">\n                  <span className=\"text-xs text-gray-400 font-medium\">\n                    Progress\n                  </span>\n                  <div className=\"w-24 h-1.5 bg-gray-600 rounded-full overflow-hidden\">\n                    <div\n                      className=\"h-full bg-gradient-to-r from-blue-500 to-blue-400 transition-all duration-500 ease-out\"\n                      style={{\n                        width: `${Math.round(\n                          (stats.totalVotes / evals.length) * 100\n                        )}%`,\n                      }}\n                    ></div>\n                  </div>\n                  <span className=\"text-sm font-semibold text-gray-200\">\n                    {Math.round((stats.totalVotes / evals.length) * 100)}%\n                  </span>\n                </div>\n\n                <button\n                  onClick={() => setShowResults(!showResults)}\n                  className=\"flex items-center gap-1.5 px-2.5 py-1 bg-gray-700 hover:bg-gray-600 rounded-lg text-sm text-gray-200 transition-colors\"\n                >\n                  <svg\n                    className=\"w-4 h-4\"\n                    fill=\"none\"\n                    stroke=\"currentColor\"\n                    viewBox=\"0 0 24 24\"\n                  >\n                    <path\n                      strokeLinecap=\"round\"\n                      strokeLinejoin=\"round\"\n                      strokeWidth={2}\n                      d=\"M9 19v-6a2 2 0 00-2-2H5a2 
2 0 00-2 2v6a2 2 0 002 2h2a2 2 0 002-2zm0 0V9a2 2 0 012-2h2a2 2 0 012 2v10m-6 0a2 2 0 002 2h2a2 2 0 002-2m0 0V5a2 2 0 012-2h2a2 2 0 012 2v14a2 2 0 01-2 2h-2a2 2 0 01-2-2z\"\n                    />\n                  </svg>\n                  Results\n                  <svg\n                    className={`w-3 h-3 transition-transform duration-200 ${\n                      showResults ? \"rotate-180\" : \"\"\n                    }`}\n                    fill=\"none\"\n                    stroke=\"currentColor\"\n                    viewBox=\"0 0 24 24\"\n                  >\n                    <path\n                      strokeLinecap=\"round\"\n                      strokeLinejoin=\"round\"\n                      strokeWidth={2}\n                      d=\"M19 9l-7 7-7-7\"\n                    />\n                  </svg>\n                </button>\n\n                {/* Filter Dropdown */}\n                <select\n                  value={winnerFilter}\n                  onChange={(e) => {\n                    const value = e.target.value;\n                    if (value === \"all\" || value === \"tie\") {\n                      setWinnerFilter(value);\n                    } else {\n                      setWinnerFilter(parseInt(value));\n                    }\n                  }}\n                  className=\"px-2 py-1 bg-gray-700 text-white text-xs rounded-lg border border-gray-600 focus:border-blue-500 focus:outline-none\"\n                >\n                  <option value=\"all\">All Comparisons</option>\n                  {folderNames.map((name, index) => (\n                    <option key={index} value={index}>\n                      {name} Wins\n                    </option>\n                  ))}\n                  <option value=\"tie\">Ties</option>\n                </select>\n\n                {showResults && (\n                  <div className=\"bg-gray-800 rounded overflow-hidden\">\n                    <div className=\"flex items-center 
justify-between px-2 py-1 bg-gray-700\">\n                      <span className=\"text-xs text-gray-300 font-semibold\">\n                        Results\n                      </span>\n                      <button\n                        onClick={copyResultsAsCSV}\n                        className=\"text-xs px-2 py-0.5 bg-blue-600 hover:bg-blue-700 text-white rounded\"\n                      >\n                        Copy CSV\n                      </button>\n                    </div>\n                    <table className=\"text-xs w-full\">\n                      <thead>\n                        <tr className=\"bg-gray-700\">\n                          <th className=\"px-2 py-1 text-gray-300\">Model</th>\n                          <th className=\"px-2 py-1 text-gray-300\">Wins</th>\n                          <th className=\"px-2 py-1 text-gray-300\">%</th>\n                        </tr>\n                      </thead>\n                      <tbody>\n                        {stats.stats.map((stat, i) => (\n                          <tr key={i} className=\"border-t border-gray-600\">\n                            <td className=\"px-2 py-1 text-white\">\n                              {stat.name}\n                            </td>\n                            <td className=\"px-2 py-1 text-green-400 text-center\">\n                              {stat.wins}\n                            </td>\n                            <td className=\"px-2 py-1 text-green-400 text-center\">\n                              {stat.percentage}%\n                            </td>\n                          </tr>\n                        ))}\n                        {stats.ties > 0 && (\n                          <tr className=\"border-t border-gray-600\">\n                            <td className=\"px-2 py-1 text-white\">Ties</td>\n                            <td className=\"px-2 py-1 text-yellow-400 text-center\">\n                              {stats.ties}\n                          
  </td>\n                            <td className=\"px-2 py-1 text-yellow-400 text-center\">\n                              {stats.tiePercentage}%\n                            </td>\n                          </tr>\n                        )}\n                      </tbody>\n                    </table>\n                  </div>\n                )}\n              </div>\n\n              {/* Right: Quick help */}\n              <div className=\"text-xs text-gray-400\">\n                ↑↓ nav | ←→ switch | 1-{folderNames.length} vote | T tie\n              </div>\n            </div>\n          </div>\n        )}\n      </div>\n\n      {/* Single Comparison View */}\n      {currentEval && (\n        <div className=\"bg-gray-50 min-h-screen\">\n          <div className=\"flex gap-4 p-3 max-w-full\">\n            {/* Fixed Reference Image */}\n            <div className=\"flex-shrink-0 w-[380px]\">\n              <div className=\"bg-white rounded-lg shadow-sm overflow-hidden\">\n                <div className=\"bg-gray-100 text-gray-700 px-3 py-1.5 border-b border-gray-200\">\n                  <h3 className=\"font-medium text-xs flex items-center gap-1\">\n                    <svg\n                      className=\"w-3 h-3\"\n                      fill=\"none\"\n                      stroke=\"currentColor\"\n                      viewBox=\"0 0 24 24\"\n                    >\n                      <path\n                        strokeLinecap=\"round\"\n                        strokeLinejoin=\"round\"\n                        strokeWidth={2}\n                        d=\"M4 16l4.586-4.586a2 2 0 012.828 0L16 16m-2-2l1.586-1.586a2 2 0 012.828 0L20 14m-6-6h.01M6 20h12a2 2 0 002-2V6a2 2 0 00-2-2H6a2 2 0 00-2 2v12a2 2 0 002 2z\"\n                      />\n                    </svg>\n                    Reference Image\n                  </h3>\n                </div>\n                <div className=\"w-full h-[calc(100vh-200px)] flex items-center justify-center bg-gray-50 
p-2\">\n                  <img\n                    src={currentEval.input}\n                    alt={`Input for comparison ${currentComparisonIndex + 1}`}\n                    className=\"max-w-full max-h-full object-contain rounded shadow-sm\"\n                  />\n                </div>\n              </div>\n            </div>\n\n            {/* Tabbed Model Display */}\n            <div className=\"flex-1\">\n              {/* Compact Model Tabs with Inline Voting */}\n              <div className=\"bg-white rounded-t-lg shadow-sm border-b border-gray-200\">\n                <div className=\"flex items-center\">\n                  {folderNames.map((name, index) => (\n                    <div key={index} className=\"flex-1 flex items-center\">\n                      <button\n                        onClick={() => setCurrentModelIndex(index)}\n                        className={`flex-1 px-3 py-2 text-xs font-medium transition-all border-r border-gray-200 ${\n                          currentModelIndex === index\n                            ? \"bg-blue-50 text-blue-700 border-b-2 border-b-blue-500\"\n                            : \"text-gray-600 hover:bg-gray-50\"\n                        }`}\n                      >\n                        {name}{\" \"}\n                        <span className=\"text-xs opacity-60\">\n                          ({index + 1})\n                        </span>\n                      </button>\n                      <button\n                        onClick={() =>\n                          handleVote(currentComparisonIndex, index)\n                        }\n                        className={`px-3 py-2 text-xs font-medium transition-all border-r border-gray-200 ${\n                          outcomes[currentComparisonIndex] === index\n                            ? 
\"bg-green-100 text-green-700\"\n                            : \"text-gray-500 hover:bg-gray-50\"\n                        }`}\n                      >\n                        {outcomes[currentComparisonIndex] === index\n                          ? \"✓\"\n                          : \"Vote\"}\n                      </button>\n                    </div>\n                  ))}\n                  <button\n                    onClick={() => handleVote(currentComparisonIndex, \"tie\")}\n                    className={`px-4 py-2 text-xs font-medium transition-all ${\n                      outcomes[currentComparisonIndex] === \"tie\"\n                        ? \"bg-yellow-100 text-yellow-700\"\n                        : \"text-gray-500 hover:bg-gray-50\"\n                    }`}\n                  >\n                    {outcomes[currentComparisonIndex] === \"tie\"\n                      ? \"Tie ✓\"\n                      : \"Tie (T)\"}\n                  </button>\n                </div>\n              </div>\n\n              {/* Current Model Output */}\n              <div className=\"bg-white shadow-lg overflow-hidden\">\n                <div className=\"bg-gray-50 px-3 py-1.5 border-b border-gray-200 flex items-center justify-between\">\n                  <span className=\"text-xs text-gray-600 font-medium\">\n                    {folderNames[currentModelIndex]} Output\n                    {outcomes[currentComparisonIndex] === currentModelIndex && (\n                      <span className=\"ml-2 text-green-600\">✓ Winner</span>\n                    )}\n                  </span>\n                  <Dialog>\n                    <DialogTrigger asChild>\n                      <button\n                        className=\"flex items-center gap-1 bg-gray-700 hover:bg-gray-800 text-white px-2 py-0.5 rounded text-xs transition-colors\"\n                        onClick={() =>\n                          setSelectedHtml(\n                            
currentEval.outputs[currentModelIndex]\n                          )\n                        }\n                      >\n                        <svg\n                          className=\"w-3 h-3\"\n                          fill=\"none\"\n                          stroke=\"currentColor\"\n                          viewBox=\"0 0 24 24\"\n                        >\n                          <path\n                            strokeLinecap=\"round\"\n                            strokeLinejoin=\"round\"\n                            strokeWidth={2}\n                            d=\"M4 8V4m0 0h4M4 4l5 5m11-1V4m0 0h-4m4 0l-5 5M4 16v4m0 0h4m-4 0l5-5m11 5l-5-5m5 5v-4m0 4h-4\"\n                          />\n                        </svg>\n                        Full Screen\n                      </button>\n                    </DialogTrigger>\n                    <DialogContent className=\"w-[95vw] max-w-[95vw] h-[95vh] max-h-[95vh] bg-gray-900\">\n                      <div className=\"absolute top-4 left-4 bg-black/80 backdrop-blur text-white px-3 py-2 rounded-lg z-10\">\n                        <span className=\"font-semibold\">\n                          {folderNames[currentModelIndex]}\n                        </span>\n                      </div>\n                      <iframe\n                        srcDoc={selectedHtml}\n                        className=\"w-full h-full rounded-lg\"\n                      ></iframe>\n                    </DialogContent>\n                  </Dialog>\n                </div>\n                <div className=\"relative bg-gray-50\">\n                  <iframe\n                    ref={(el) => {\n                      iframeRefs.current[currentModelIndex] = el;\n                    }}\n                    srcDoc={currentEval.outputs[currentModelIndex]}\n                    className=\"w-full h-[calc(100vh-200px)]\"\n                    style={{ colorScheme: \"light\" }}\n                  ></iframe>\n                </div>\n             
 </div>\n            </div>\n          </div>\n        </div>\n      )}\n    </div>\n  );\n}\n\nexport default BestOfNEvalsPage;\n"
  },
  {
    "path": "frontend/src/components/evals/EvalNavigation.tsx",
    "content": "import { Link } from \"react-router-dom\";\n\nfunction EvalNavigation() {\n  return (\n    <div className=\"flex justify-between items-center w-full py-3 px-4 bg-zinc-900 text-white\">\n      <div className=\"flex items-center space-x-4\">\n        <Link\n          to=\"/evals\"\n          className=\"font-medium hover:text-blue-300 transition-colors\"\n        >\n          Evals Home\n        </Link>\n        \n        <div className=\"text-gray-500\">|</div>\n        \n        <Link\n          to=\"/evals/run\"\n          className=\"hover:text-blue-300 transition-colors\"\n        >\n          Run\n        </Link>\n        \n        <Link\n          to=\"/evals/pairwise\"\n          className=\"hover:text-blue-300 transition-colors\"\n        >\n          Pairwise\n        </Link>\n        \n        <Link\n          to=\"/evals/best-of-n\"\n          className=\"hover:text-blue-300 transition-colors\"\n        >\n          Best of N\n        </Link>\n        \n        <Link\n          to=\"/evals/single\"\n          className=\"hover:text-blue-300 transition-colors\"\n        >\n          Single\n        </Link>\n\n        <Link\n          to=\"/evals/openai-input-compare\"\n          className=\"hover:text-blue-300 transition-colors\"\n        >\n          Input Compare\n        </Link>\n      </div>\n      \n      <Link\n        to=\"/\"\n        className=\"text-sm text-gray-300 hover:text-white transition-colors\"\n      >\n        ← Back to app\n      </Link>\n    </div>\n  );\n}\n\nexport default EvalNavigation;\n"
  },
  {
    "path": "frontend/src/components/evals/EvalsPage.tsx",
    "content": "import React, { useState } from \"react\";\nimport { HTTP_BACKEND_URL } from \"../../config\";\nimport RatingPicker from \"./RatingPicker\";\nimport EvalNavigation from \"./EvalNavigation\";\n\ninterface Eval {\n  input: string;\n  outputs: string[];\n}\n\ninterface RatingCriteria {\n  stackAdherence: number;\n  accuracy: number;\n  codeQuality: number;\n  mobileResponsiveness: number;\n  imageCaptionQuality: number;\n}\n\ninterface OutputDisplay {\n  showSource: boolean;\n}\n\nfunction EvalsPage() {\n  const [evals, setEvals] = React.useState<Eval[]>([]);\n  const [ratings, setRatings] = React.useState<RatingCriteria[]>([]);\n  const [folderPath, setFolderPath] = useState(\"\");\n  const [isLoading, setIsLoading] = useState(false);\n  const [outputDisplays, setOutputDisplays] = useState<OutputDisplay[]>([]);\n\n  const calculateScores = () => {\n    if (ratings.length === 0) {\n      return {\n        stackAdherence: { total: 0, max: 0, percentage: \"0.00\" },\n        accuracy: { total: 0, max: 0, percentage: \"0.00\" },\n        codeQuality: { total: 0, max: 0, percentage: \"0.00\" },\n        mobileResponsiveness: { total: 0, max: 0, percentage: \"0.00\" },\n        imageCaptionQuality: { total: 0, max: 0, percentage: \"0.00\" },\n      };\n    }\n\n    const maxPerCriterion = ratings.length * 5; // max score of 5 * number of evals\n\n    const totals = ratings.reduce(\n      (acc, rating) => ({\n        stackAdherence: acc.stackAdherence + rating.stackAdherence,\n        accuracy: acc.accuracy + rating.accuracy,\n        codeQuality: acc.codeQuality + rating.codeQuality,\n        mobileResponsiveness:\n          acc.mobileResponsiveness + rating.mobileResponsiveness,\n        imageCaptionQuality:\n          acc.imageCaptionQuality + rating.imageCaptionQuality,\n      }),\n      {\n        stackAdherence: 0,\n        accuracy: 0,\n        codeQuality: 0,\n        mobileResponsiveness: 0,\n        imageCaptionQuality: 0,\n      }\n    );\n\n    
return Object.entries(totals).reduce(\n      (acc, [key, total]) => ({\n        ...acc,\n        [key]: {\n          total,\n          max: maxPerCriterion,\n          percentage: ((total / maxPerCriterion) * 100).toFixed(2),\n        },\n      }),\n      {} as Record<\n        keyof RatingCriteria,\n        { total: number; max: number; percentage: string }\n      >\n    );\n  };\n\n  const loadEvals = async () => {\n    if (!folderPath) {\n      alert(\"Please enter a folder path\");\n      return;\n    }\n\n    setIsLoading(true);\n    try {\n      const queryParams = new URLSearchParams({\n        folder: `/Users/abi/Downloads/${folderPath}`,\n      });\n\n      const response = await fetch(`${HTTP_BACKEND_URL}/evals?${queryParams}`);\n      const data = await response.json();\n\n      console.log(data);\n\n      setEvals(data);\n      setRatings(\n        data.map(() => ({\n          stackAdherence: 0,\n          accuracy: 0,\n          codeQuality: 0,\n          mobileResponsiveness: 0,\n          imageCaptionQuality: 0,\n        }))\n      );\n    } catch (error) {\n      console.error(\"Error loading evals:\", error);\n      alert(\"Error loading evals. 
Please check the folder path and try again.\");\n    } finally {\n      setIsLoading(false);\n    }\n  };\n\n  const updateRating = (\n    index: number,\n    criterion: keyof RatingCriteria,\n    value: number\n  ) => {\n    const newRatings = [...ratings];\n    newRatings[index] = {\n      ...newRatings[index],\n      [criterion]: value,\n    };\n    setRatings(newRatings);\n  };\n\n  const toggleSourceView = (evalIndex: number) => {\n    const newDisplays = [...outputDisplays];\n    if (!newDisplays[evalIndex]) {\n      newDisplays[evalIndex] = { showSource: false };\n    }\n    newDisplays[evalIndex] = { showSource: !newDisplays[evalIndex].showSource };\n    setOutputDisplays(newDisplays);\n  };\n\n  return (\n    <div className=\"mx-auto\">\n      <EvalNavigation />\n      <div className=\"flex flex-col items-center justify-center w-full py-4 bg-zinc-950 text-white\">\n        <div className=\"flex flex-col gap-4 mb-4 w-full max-w-2xl px-4\">\n          <input\n            type=\"text\"\n            value={folderPath}\n            onChange={(e) => setFolderPath(e.target.value)}\n            placeholder=\"Enter folder name in Downloads\"\n            className=\"w-full px-4 py-2 rounded text-black\"\n          />\n          <button\n            onClick={loadEvals}\n            disabled={isLoading}\n            className=\"bg-blue-500 hover:bg-blue-600 text-white px-4 py-2 rounded disabled:bg-blue-300\"\n          >\n            {isLoading ? 
\"Loading...\" : \"Load Evals\"}\n          </button>\n        </div>\n\n        {evals.length > 0 && (\n          <div className=\"flex flex-col items-center gap-2 text-lg\">\n            <h2 className=\"text-2xl font-semibold mb-2\">Scores by Category</h2>\n            {Object.entries(calculateScores()).map(([criterion, score]) => (\n              <div key={criterion} className=\"flex gap-x-4 items-center\">\n                <span className=\"min-w-[200px] text-right capitalize\">\n                  {criterion.replace(/([A-Z])/g, \" $1\").trim()}:\n                </span>\n                <span>\n                  {score.total} / {score.max} ({score.percentage}%)\n                </span>\n              </div>\n            ))}\n          </div>\n        )}\n      </div>\n\n      <div className=\"flex flex-col gap-y-8 mt-4 mx-auto justify-center\">\n        {evals.map((e, index) => (\n          <div className=\"flex flex-col justify-center\" key={index}>\n            <h2 className=\"font-bold text-lg ml-4\">Evaluation {index + 1}</h2>\n            <div className=\"flex gap-x-2 justify-center ml-4\">\n              <div className=\"w-1/2 p-1 border\">\n                <img src={e.input} alt={`Input for eval ${index}`} />\n              </div>\n              {e.outputs.map((output, outputIndex) => (\n                <div className=\"w-1/2 p-1 border\" key={outputIndex}>\n                  <div className=\"mb-2\">\n                    <button\n                      onClick={() => toggleSourceView(index)}\n                      className=\"bg-gray-500 hover:bg-gray-600 text-white px-2 py-1 rounded text-sm\"\n                    >\n                      {outputDisplays[index]?.showSource ? \"Show Preview\" : \"Show Source\"}\n                    </button>\n                  </div>\n                  {outputDisplays[index]?.showSource ? 
(\n                    <pre className=\"whitespace-pre-wrap text-sm p-2 bg-gray-100 max-h-[480px] overflow-auto\">\n                      {output}\n                    </pre>\n                  ) : (\n                    <iframe\n                      srcDoc={output}\n                      className=\"w-[1200px] h-[800px] transform scale-[0.60]\"\n                      style={{ transformOrigin: \"top left\" }}\n                    ></iframe>\n                  )}\n                </div>\n              ))}\n            </div>\n            <div className=\"ml-8 mt-4 space-y-2\">\n              <div className=\"grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 gap-4\">\n                <div className=\"flex items-center gap-x-4\">\n                  <span className=\"min-w-[160px]\">Stack Adherence:</span>\n                  <RatingPicker\n                    onSelect={(rating) =>\n                      updateRating(index, \"stackAdherence\", rating)\n                    }\n                    maxRating={5}\n                    value={ratings[index].stackAdherence}\n                  />\n                </div>\n                <div className=\"flex items-center gap-x-4\">\n                  <span className=\"min-w-[160px]\">Accuracy:</span>\n                  <RatingPicker\n                    onSelect={(rating) =>\n                      updateRating(index, \"accuracy\", rating)\n                    }\n                    maxRating={5}\n                    value={ratings[index].accuracy}\n                  />\n                </div>\n                <div className=\"flex items-center gap-x-4\">\n                  <span className=\"min-w-[160px]\">Code Quality:</span>\n                  <RatingPicker\n                    onSelect={(rating) =>\n                      updateRating(index, \"codeQuality\", rating)\n                    }\n                    maxRating={5}\n                    value={ratings[index].codeQuality}\n                  />\n                </div>\n     
           <div className=\"flex items-center gap-x-4\">\n                  <span className=\"min-w-[160px]\">Mobile Responsiveness:</span>\n                  <RatingPicker\n                    onSelect={(rating) =>\n                      updateRating(index, \"mobileResponsiveness\", rating)\n                    }\n                    maxRating={5}\n                    value={ratings[index].mobileResponsiveness}\n                  />\n                </div>\n                <div className=\"flex items-center gap-x-4\">\n                  <span className=\"min-w-[160px]\">Image Caption Quality:</span>\n                  <RatingPicker\n                    onSelect={(rating) =>\n                      updateRating(index, \"imageCaptionQuality\", rating)\n                    }\n                    maxRating={5}\n                    value={ratings[index].imageCaptionQuality}\n                  />\n                </div>\n              </div>\n            </div>\n          </div>\n        ))}\n      </div>\n    </div>\n  );\n}\n\nexport default EvalsPage;\n"
  },
  {
    "path": "frontend/src/components/evals/InputFileSelector.tsx",
    "content": "import { useState, useEffect } from \"react\";\nimport { HTTP_BACKEND_URL } from \"../../config\";\nimport { BsCheckLg, BsChevronDown, BsChevronRight } from \"react-icons/bs\";\nimport { Button } from \"../ui/button\";\n\ninterface InputFile {\n  name: string;\n  path: string;\n}\n\ninterface InputFileSelectorProps {\n  onFilesSelected: (files: string[]) => void;\n}\n\nfunction InputFileSelector({ onFilesSelected }: InputFileSelectorProps) {\n  const [inputFiles, setInputFiles] = useState<InputFile[]>([]);\n  const [selectedFiles, setSelectedFiles] = useState<string[]>([]);\n  const [isLoading, setIsLoading] = useState(false);\n  const [isExpanded, setIsExpanded] = useState(false);\n  \n  useEffect(() => {\n    fetchInputFiles();\n  }, []);\n\n  useEffect(() => {\n    onFilesSelected(selectedFiles);\n  }, [selectedFiles, onFilesSelected]);\n\n  const fetchInputFiles = async () => {\n    setIsLoading(true);\n    try {\n      const response = await fetch(`${HTTP_BACKEND_URL}/eval_input_files`);\n      if (!response.ok) {\n        throw new Error(\"Failed to fetch input files\");\n      }\n      \n      const data = await response.json();\n      setInputFiles(data);\n      \n      // By default, select all files\n      const allFilePaths = data.map((file: InputFile) => file.path);\n      setSelectedFiles(allFilePaths);\n    } catch (error) {\n      console.error(\"Error fetching input files:\", error);\n    } finally {\n      setIsLoading(false);\n    }\n  };\n\n  const handleFileToggle = (filePath: string) => {\n    setSelectedFiles((prev) => {\n      return prev.includes(filePath)\n        ? 
prev.filter((path) => path !== filePath)\n        : [...prev, filePath];\n    });\n  };\n\n  const handleSelectAll = () => {\n    const allFilePaths = inputFiles.map((file) => file.path);\n    setSelectedFiles(allFilePaths);\n  };\n\n  const handleClearAll = () => {\n    setSelectedFiles([]);\n  };\n\n  const toggleExpanded = () => {\n    setIsExpanded(!isExpanded);\n  };\n\n  if (isLoading) {\n    return <div className=\"text-center text-sm text-gray-500\">Loading input files...</div>;\n  }\n\n  const fileCount = inputFiles.length;\n  const selectedCount = selectedFiles.length;\n\n  return (\n    <div className=\"w-full\">\n      <div \n        className=\"flex items-center justify-between cursor-pointer hover:bg-gray-50 rounded-md p-2 mb-2 border\" \n        onClick={toggleExpanded}\n      >\n        <div className=\"flex items-center gap-2\">\n          {isExpanded ? <BsChevronDown className=\"text-gray-500\" /> : <BsChevronRight className=\"text-gray-500\" />}\n          <div>\n            <span className=\"text-sm font-medium\">Input Files</span>\n            <span className=\"ml-2 text-xs text-gray-500\">\n              {selectedCount} of {fileCount} selected\n            </span>\n          </div>\n        </div>\n        \n        <div className=\"flex space-x-1\" onClick={(e) => e.stopPropagation()}>\n          <Button\n            variant=\"ghost\"\n            size=\"sm\"\n            onClick={handleSelectAll}\n            className=\"text-xs h-6 px-2 text-gray-500 hover:text-gray-700\"\n            disabled={selectedCount === fileCount}\n          >\n            All\n          </Button>\n          <Button\n            variant=\"ghost\"\n            size=\"sm\"\n            onClick={handleClearAll}\n            className=\"text-xs h-6 px-2 text-gray-500 hover:text-gray-700\"\n            disabled={selectedCount === 0}\n          >\n            None\n          </Button>\n        </div>\n      </div>\n      \n      {isExpanded && (\n        <div 
className=\"border rounded-md overflow-hidden\">\n          <div className=\"max-h-48 overflow-y-auto\">\n            <div className=\"grid grid-cols-1 divide-y divide-gray-100\">\n              {inputFiles.map((file) => (\n                <div\n                  key={file.path}\n                  className={`flex items-center justify-between px-3 py-1.5 cursor-pointer hover:bg-gray-50 ${\n                    selectedFiles.includes(file.path)\n                      ? \"bg-blue-50\"\n                      : \"\"\n                  }`}\n                  onClick={() => handleFileToggle(file.path)}\n                >\n                  <span className=\"text-xs truncate pr-2\" title={file.name}>\n                    {file.name}\n                  </span>\n                  {selectedFiles.includes(file.path) ? (\n                    <BsCheckLg className=\"h-3 w-3 text-blue-500 flex-shrink-0\" />\n                  ) : (\n                    <div className=\"h-3 w-3 border rounded-sm flex-shrink-0\" />\n                  )}\n                </div>\n              ))}\n            </div>\n          </div>\n        </div>\n      )}\n    </div>\n  );\n}\n\nexport default InputFileSelector;\n"
  },
  {
    "path": "frontend/src/components/evals/OpenAIInputComparePage.tsx",
    "content": "import { useState } from \"react\";\n\nimport { HTTP_BACKEND_URL } from \"../../config\";\nimport EvalNavigation from \"./EvalNavigation\";\n\ninterface OpenAIInputDifference {\n  item_index: number;\n  path: string;\n  left_summary: string;\n  right_summary: string;\n  left_value: unknown;\n  right_value: unknown;\n}\n\ninterface OpenAIInputCompareResponse {\n  common_prefix_items: number;\n  left_item_count: number;\n  right_item_count: number;\n  difference: OpenAIInputDifference | null;\n  formatted: string;\n}\n\nfunction formatJson(value: unknown): string {\n  const formatted = JSON.stringify(value, null, 2);\n  return formatted ?? String(value);\n}\n\nfunction OpenAIInputComparePage() {\n  const [leftJson, setLeftJson] = useState(\"\");\n  const [rightJson, setRightJson] = useState(\"\");\n  const [result, setResult] = useState<OpenAIInputCompareResponse | null>(null);\n  const [error, setError] = useState(\"\");\n  const [isLoading, setIsLoading] = useState(false);\n\n  const handleCompare = async () => {\n    if (!leftJson.trim() || !rightJson.trim()) {\n      setError(\"Paste both JSON payloads before comparing.\");\n      return;\n    }\n\n    setIsLoading(true);\n    setError(\"\");\n\n    try {\n      const response = await fetch(`${HTTP_BACKEND_URL}/openai-input-compare`, {\n        method: \"POST\",\n        headers: {\n          \"Content-Type\": \"application/json\",\n        },\n        body: JSON.stringify({\n          left_json: leftJson,\n          right_json: rightJson,\n        }),\n      });\n\n      const data = await response.json();\n      if (!response.ok) {\n        setResult(null);\n        setError(data.detail || \"Compare request failed.\");\n        return;\n      }\n\n      setResult(data);\n    } catch (requestError) {\n      console.error(\"Error comparing OpenAI inputs\", requestError);\n      setResult(null);\n      setError(\"Compare request failed.\");\n    } finally {\n      setIsLoading(false);\n    }\n  
};\n\n  return (\n    <div className=\"min-h-screen bg-zinc-950 text-white\">\n      <EvalNavigation />\n      <div className=\"mx-auto flex w-full max-w-7xl flex-col gap-6 px-4 py-6\">\n        <div className=\"rounded-2xl border border-zinc-800 bg-zinc-900/80 p-6 shadow-2xl shadow-black/20\">\n          <h1 className=\"text-3xl font-semibold tracking-tight\">\n            OpenAI Input Compare\n          </h1>\n          <p className=\"mt-2 max-w-3xl text-sm leading-6 text-zinc-300\">\n            Paste either a full OpenAI request payload or just the raw{\" \"}\n            <code className=\"rounded bg-zinc-800 px-1.5 py-0.5 text-zinc-100\">\n              input\n            </code>{\" \"}\n            array on each side. The compare view finds the first input block\n            that diverges and the first nested field path where that happens.\n          </p>\n          <p className=\"mt-2 text-sm text-zinc-400\">\n            The OpenAI Turn Input Report now has a{\" \"}\n            <span className=\"rounded bg-zinc-800 px-1.5 py-0.5 text-zinc-100\">\n              Copy input JSON\n            </span>{\" \"}\n            button you can paste here directly.\n          </p>\n        </div>\n\n        <div className=\"grid gap-6 xl:grid-cols-2\">\n          <section className=\"rounded-2xl border border-emerald-900/60 bg-emerald-950/30 p-4\">\n            <div className=\"mb-3 flex items-center justify-between\">\n              <h2 className=\"text-lg font-semibold text-emerald-100\">Left JSON</h2>\n              <span className=\"text-xs uppercase tracking-[0.18em] text-emerald-300/80\">\n                Request A\n              </span>\n            </div>\n            <textarea\n              value={leftJson}\n              onChange={(event) => setLeftJson(event.target.value)}\n              placeholder='{\"input\":[{\"role\":\"system\",\"content\":\"...\"}]}'\n              className=\"min-h-[360px] w-full rounded-xl border border-emerald-900/60 bg-zinc-950 
px-4 py-3 font-mono text-sm text-zinc-100 outline-none transition focus:border-emerald-500\"\n              spellCheck={false}\n            />\n          </section>\n\n          <section className=\"rounded-2xl border border-sky-900/60 bg-sky-950/30 p-4\">\n            <div className=\"mb-3 flex items-center justify-between\">\n              <h2 className=\"text-lg font-semibold text-sky-100\">Right JSON</h2>\n              <span className=\"text-xs uppercase tracking-[0.18em] text-sky-300/80\">\n                Request B\n              </span>\n            </div>\n            <textarea\n              value={rightJson}\n              onChange={(event) => setRightJson(event.target.value)}\n              placeholder='{\"input\":[{\"role\":\"system\",\"content\":\"...\"}]}'\n              className=\"min-h-[360px] w-full rounded-xl border border-sky-900/60 bg-zinc-950 px-4 py-3 font-mono text-sm text-zinc-100 outline-none transition focus:border-sky-500\"\n              spellCheck={false}\n            />\n          </section>\n        </div>\n\n        <div className=\"flex items-center gap-4\">\n          <button\n            type=\"button\"\n            onClick={handleCompare}\n            disabled={isLoading}\n            className=\"rounded-full bg-white px-5 py-2 text-sm font-semibold text-zinc-950 transition hover:bg-zinc-200 disabled:cursor-not-allowed disabled:bg-zinc-500\"\n          >\n            {isLoading ? \"Comparing...\" : \"Compare Inputs\"}\n          </button>\n          {error ? <p className=\"text-sm text-rose-300\">{error}</p> : null}\n        </div>\n\n        {result ? 
(\n          <div className=\"grid gap-6\">\n            <section className=\"grid gap-4 md:grid-cols-3\">\n              <div className=\"rounded-2xl border border-zinc-800 bg-zinc-900 p-4\">\n                <div className=\"text-xs uppercase tracking-[0.18em] text-zinc-400\">\n                  Shared Prefix\n                </div>\n                <div className=\"mt-2 text-3xl font-semibold\">\n                  {result.common_prefix_items}\n                </div>\n                <p className=\"mt-1 text-sm text-zinc-400\">\n                  Top-level input items that match exactly before divergence.\n                </p>\n              </div>\n              <div className=\"rounded-2xl border border-zinc-800 bg-zinc-900 p-4\">\n                <div className=\"text-xs uppercase tracking-[0.18em] text-zinc-400\">\n                  Left Items\n                </div>\n                <div className=\"mt-2 text-3xl font-semibold\">\n                  {result.left_item_count}\n                </div>\n              </div>\n              <div className=\"rounded-2xl border border-zinc-800 bg-zinc-900 p-4\">\n                <div className=\"text-xs uppercase tracking-[0.18em] text-zinc-400\">\n                  Right Items\n                </div>\n                <div className=\"mt-2 text-3xl font-semibold\">\n                  {result.right_item_count}\n                </div>\n              </div>\n            </section>\n\n            <section className=\"rounded-2xl border border-zinc-800 bg-zinc-900 p-5\">\n              <h2 className=\"text-xl font-semibold\">First Difference</h2>\n              {result.difference ? 
(\n                <div className=\"mt-4 grid gap-4\">\n                  <div className=\"rounded-xl border border-amber-900/60 bg-amber-950/30 p-4\">\n                    <div className=\"text-xs uppercase tracking-[0.18em] text-amber-300/80\">\n                      Path\n                    </div>\n                    <div className=\"mt-2 font-mono text-sm text-amber-50\">\n                      {result.difference.path}\n                    </div>\n                  </div>\n\n                  <div className=\"grid gap-4 xl:grid-cols-2\">\n                    <div className=\"rounded-xl border border-emerald-900/60 bg-emerald-950/30 p-4\">\n                      <div className=\"text-xs uppercase tracking-[0.18em] text-emerald-300/80\">\n                        Left Summary\n                      </div>\n                      <pre className=\"mt-2 whitespace-pre-wrap break-words rounded-lg bg-zinc-950 p-3 font-mono text-xs text-zinc-100\">\n                        {result.difference.left_summary}\n                      </pre>\n                      <div className=\"mt-3 text-xs uppercase tracking-[0.18em] text-emerald-300/80\">\n                        Left Value\n                      </div>\n                      <pre className=\"mt-2 max-h-[320px] overflow-auto whitespace-pre-wrap break-words rounded-lg bg-zinc-950 p-3 font-mono text-xs text-zinc-100\">\n                        {formatJson(result.difference.left_value)}\n                      </pre>\n                    </div>\n\n                    <div className=\"rounded-xl border border-sky-900/60 bg-sky-950/30 p-4\">\n                      <div className=\"text-xs uppercase tracking-[0.18em] text-sky-300/80\">\n                        Right Summary\n                      </div>\n                      <pre className=\"mt-2 whitespace-pre-wrap break-words rounded-lg bg-zinc-950 p-3 font-mono text-xs text-zinc-100\">\n                        {result.difference.right_summary}\n                      
</pre>\n                      <div className=\"mt-3 text-xs uppercase tracking-[0.18em] text-sky-300/80\">\n                        Right Value\n                      </div>\n                      <pre className=\"mt-2 max-h-[320px] overflow-auto whitespace-pre-wrap break-words rounded-lg bg-zinc-950 p-3 font-mono text-xs text-zinc-100\">\n                        {formatJson(result.difference.right_value)}\n                      </pre>\n                    </div>\n                  </div>\n                </div>\n              ) : (\n                <p className=\"mt-4 text-zinc-300\">\n                  No difference found. Both inputs match exactly.\n                </p>\n              )}\n            </section>\n\n            <section className=\"rounded-2xl border border-zinc-800 bg-zinc-900 p-5\">\n              <h2 className=\"text-xl font-semibold\">Formatted Summary</h2>\n              <pre className=\"mt-4 max-h-[420px] overflow-auto whitespace-pre-wrap break-words rounded-xl bg-zinc-950 p-4 font-mono text-xs text-zinc-100\">\n                {result.formatted}\n              </pre>\n            </section>\n          </div>\n        ) : null}\n      </div>\n    </div>\n  );\n}\n\nexport default OpenAIInputComparePage;\n"
  },
  {
    "path": "frontend/src/components/evals/PairwiseEvalsPage.tsx",
    "content": "import React, { useState } from \"react\";\nimport { HTTP_BACKEND_URL } from \"../../config\";\nimport { Dialog, DialogContent, DialogTrigger } from \"../ui/dialog\";\nimport EvalNavigation from \"./EvalNavigation\";\n\ninterface Eval {\n  input: string;\n  outputs: string[];\n}\n\ntype Outcome = \"left\" | \"right\" | \"tie\" | null;\n\ninterface PairwiseEvalsResponse {\n  evals: Eval[];\n  folder1_name: string;\n  folder2_name: string;\n}\n\nfunction PairwiseEvalsPage() {\n  const [evals, setEvals] = React.useState<Eval[]>([]);\n  const [outcomes, setOutcomes] = React.useState<Outcome[]>([]);\n  const [folderNames, setFolderNames] = useState<{\n    left: string;\n    right: string;\n  }>({\n    left: \"\",\n    right: \"\",\n  });\n  const [folder1Path, setFolder1Path] = useState(\"\");\n  const [folder2Path, setFolder2Path] = useState(\"\");\n  const [isLoading, setIsLoading] = useState(false);\n  const [selectedHtml, setSelectedHtml] = useState<string>(\"\");\n\n  // Calculate statistics\n  const totalVotes = outcomes.filter((o) => o !== null).length;\n  const leftWins = outcomes.filter((o) => o === \"left\").length;\n  const rightWins = outcomes.filter((o) => o === \"right\").length;\n  const ties = outcomes.filter((o) => o === \"tie\").length;\n  // Calculate percentages\n  const leftPercentage = totalVotes\n    ? ((leftWins / totalVotes) * 100).toFixed(2)\n    : \"0.00\";\n  const rightPercentage = totalVotes\n    ? ((rightWins / totalVotes) * 100).toFixed(2)\n    : \"0.00\";\n  const tiePercentage = totalVotes\n    ? 
((ties / totalVotes) * 100).toFixed(2)\n    : \"0.00\";\n\n  const loadEvals = async () => {\n    if (!folder1Path || !folder2Path) {\n      alert(\"Please enter both folder paths\");\n      return;\n    }\n\n    setIsLoading(true);\n    try {\n      const queryParams = new URLSearchParams({\n        folder1: `/Users/abi/Downloads/${folder1Path}`,\n        folder2: `/Users/abi/Downloads/${folder2Path}`,\n      });\n\n      const response = await fetch(\n        `${HTTP_BACKEND_URL}/pairwise-evals?${queryParams}`\n      );\n      const data: PairwiseEvalsResponse = await response.json();\n\n      setEvals(data.evals);\n      setOutcomes(new Array(data.evals.length).fill(null));\n      setFolderNames({\n        left: data.folder1_name,\n        right: data.folder2_name,\n      });\n    } catch (error) {\n      console.error(\"Error loading evals:\", error);\n      alert(\n        \"Error loading evals. Please check the folder paths and try again.\"\n      );\n    } finally {\n      setIsLoading(false);\n    }\n  };\n\n  const handleVote = (index: number, outcome: Outcome) => {\n    const newOutcomes = [...outcomes];\n    newOutcomes[index] = outcome;\n    setOutcomes(newOutcomes);\n  };\n\n  return (\n    <div className=\"mx-auto\">\n      <EvalNavigation />\n      <div className=\"flex flex-col items-center justify-center w-full py-4 bg-zinc-950 text-white\">\n        <div className=\"flex flex-col gap-4 mb-4 w-full max-w-2xl px-4\">\n          <input\n            type=\"text\"\n            value={folder1Path}\n            onChange={(e) => setFolder1Path(e.target.value)}\n            placeholder=\"Enter folder name in Downloads\"\n            className=\"w-full px-4 py-2 rounded text-black\"\n          />\n          <input\n            type=\"text\"\n            value={folder2Path}\n            onChange={(e) => setFolder2Path(e.target.value)}\n            placeholder=\"Enter folder name in Downloads\"\n            className=\"w-full px-4 py-2 rounded text-black\"\n  
        />\n          <button\n            onClick={loadEvals}\n            disabled={isLoading}\n            className=\"bg-blue-500 hover:bg-blue-600 text-white px-4 py-2 rounded disabled:bg-blue-300\"\n          >\n            {isLoading ? \"Loading...\" : \"Start Comparison\"}\n          </button>\n        </div>\n\n        {evals.length > 0 && (\n          <>\n            <span className=\"text-2xl font-semibold\">\n              Total votes: {totalVotes}\n            </span>\n            <div className=\"text-lg mt-2\">\n              <span>\n                {folderNames.left}: {leftWins} ({leftPercentage}%) |{\" \"}\n              </span>\n              <span>\n                {folderNames.right}: {rightWins} ({rightPercentage}%) |{\" \"}\n              </span>\n              <span>\n                Ties: {ties} ({tiePercentage}%)\n              </span>\n            </div>\n          </>\n        )}\n      </div>\n\n      <div className=\"flex flex-col gap-y-8 mt-4 mx-auto justify-center\">\n        {evals.map((e, index) => (\n          <div className=\"flex flex-col justify-center\" key={index}>\n            <h2 className=\"font-bold text-lg ml-4 mb-2\">\n              Comparison {index + 1}\n            </h2>\n\n            <div className=\"w-full flex justify-center mb-4\">\n              <div className=\"w-1/2 p-1 border\">\n                <img src={e.input} alt={`Input for comparison ${index}`} />\n              </div>\n            </div>\n\n            <div className=\"flex gap-x-4 justify-center\">\n              {e.outputs.slice(0, 2).map((output, outputIndex) => (\n                <div\n                  className={`w-1/2 p-1 border ${\n                    outcomes[index] === (outputIndex === 0 ? \"left\" : \"right\")\n                      ? 
\"border-green-500 border-4\"\n                      : \"\"\n                  }`}\n                  key={outputIndex}\n                >\n                  <div className=\"relative\">\n                    <iframe\n                      srcDoc={output}\n                      className=\"w-[1200px] h-[800px] transform scale-[0.55]\"\n                      style={{ transformOrigin: \"top left\" }}\n                    ></iframe>\n                    <Dialog>\n                      <DialogTrigger asChild>\n                        <button\n                          className=\"absolute top-2 right-2 bg-blue-500 text-white px-2 py-1 rounded text-sm\"\n                          onClick={() => setSelectedHtml(output)}\n                        >\n                          Full Screen\n                        </button>\n                      </DialogTrigger>\n                      <DialogContent className=\"w-[95vw] max-w-[95vw] h-[95vh] max-h-[95vh]\">\n                        <iframe\n                          srcDoc={selectedHtml}\n                          className=\"w-[1400px] h-[800px] transform scale-[0.90]\"\n                        ></iframe>\n                      </DialogContent>\n                    </Dialog>\n                  </div>\n                </div>\n              ))}\n            </div>\n\n            <div className=\"flex justify-center gap-x-4 mt-4\">\n              <button\n                className={`px-4 py-2 rounded ${\n                  outcomes[index] === \"left\"\n                    ? \"bg-green-500 text-white\"\n                    : \"bg-gray-200 hover:bg-gray-300\"\n                }`}\n                onClick={() => handleVote(index, \"left\")}\n              >\n                Left Wins\n              </button>\n              <button\n                className={`px-4 py-2 rounded ${\n                  outcomes[index] === \"tie\"\n                    ? 
\"bg-green-500 text-white\"\n                    : \"bg-gray-200 hover:bg-gray-300\"\n                }`}\n                onClick={() => handleVote(index, \"tie\")}\n              >\n                Tie\n              </button>\n              <button\n                className={`px-4 py-2 rounded ${\n                  outcomes[index] === \"right\"\n                    ? \"bg-green-500 text-white\"\n                    : \"bg-gray-200 hover:bg-gray-300\"\n                }`}\n                onClick={() => handleVote(index, \"right\")}\n              >\n                Right Wins\n              </button>\n            </div>\n          </div>\n        ))}\n      </div>\n    </div>\n  );\n}\n\nexport default PairwiseEvalsPage;\n"
  },
  {
    "path": "frontend/src/components/evals/RatingPicker.tsx",
    "content": "interface RatingPickerProps {\n  onSelect: (rating: number) => void;\n  maxRating?: number;\n  value?: number;\n}\n\nfunction RatingPicker({\n  onSelect,\n  maxRating = 5,\n  value = 0,\n}: RatingPickerProps) {\n  return (\n    <div className=\"flex gap-x-2\">\n      {Array.from({ length: maxRating }, (_, i) => i + 1).map((rating) => (\n        <button\n          key={rating}\n          onClick={() => onSelect(rating)}\n          className={`w-8 h-8 rounded-full border border-gray-300 \n            ${\n              value === rating\n                ? \"bg-blue-500 text-white\"\n                : \"hover:bg-blue-500 hover:text-white\"\n            }`}\n        >\n          {rating}\n        </button>\n      ))}\n    </div>\n  );\n}\n\nexport default RatingPicker;\n"
  },
  {
    "path": "frontend/src/components/evals/RunEvalsPage.tsx",
    "content": "import { useState, useEffect, useCallback, useRef } from \"react\";\nimport { Button } from \"../ui/button\";\nimport { Progress } from \"../ui/progress\";\nimport { HTTP_BACKEND_URL } from \"../../config\";\nimport { BsCheckLg, BsChevronDown, BsChevronRight } from \"react-icons/bs\";\nimport InputFileSelector from \"./InputFileSelector\";\nimport EvalNavigation from \"./EvalNavigation\";\n\ninterface ModelResponse {\n  models: string[];\n  stacks: string[];\n}\n\ninterface EvalProgressEvent {\n  type: \"start\" | \"model_start\" | \"task_complete\" | \"complete\" | \"error\";\n  message?: string;\n  model?: string;\n  model_index?: number;\n  total_models?: number;\n  completed_tasks?: number;\n  total_tasks?: number;\n  global_completed_tasks?: number;\n  global_total_tasks?: number;\n  input_file?: string;\n  success?: boolean;\n  error?: string;\n  output_files?: string[];\n  diff_mode?: boolean;\n  total_skipped_existing?: number;\n  model_tasks?: number;\n  model_skipped_existing?: number;\n}\n\ninterface FailedTask {\n  model: string;\n  inputFile: string;\n  error: string;\n}\n\nfunction RunEvalsPage() {\n  const faviconFlashIntervalRef = useRef<number | null>(null);\n  const [isRunning, setIsRunning] = useState(false);\n  const [models, setModels] = useState<string[]>([]);\n  const [stacks, setStacks] = useState<string[]>([]);\n  const [selectedModels, setSelectedModels] = useState<string[]>([]);\n  const [selectedStack, setSelectedStack] = useState<string>(\"html_tailwind\");\n  const [diffMode, setDiffMode] = useState(true);\n  const [selectedFiles, setSelectedFiles] = useState<string[]>([]);\n  const [showPaths, setShowPaths] = useState<boolean>(false);\n  const [completedTasks, setCompletedTasks] = useState(0);\n  const [totalTasks, setTotalTasks] = useState(0);\n  const [currentModel, setCurrentModel] = useState<string>(\"\");\n  const [statusMessage, setStatusMessage] = useState<string>(\"Idle\");\n  const [lastProcessedFile, 
setLastProcessedFile] = useState<string>(\"\");\n  const [failedTasks, setFailedTasks] = useState(0);\n  const [failedTaskDetails, setFailedTaskDetails] = useState<FailedTask[]>([]);\n  const [skippedExistingTasks, setSkippedExistingTasks] = useState(0);\n\n  useEffect(() => {\n    const fetchModels = async () => {\n      try {\n        const response = await fetch(`${HTTP_BACKEND_URL}/models`);\n        const data: ModelResponse = await response.json();\n        setModels(data.models);\n        setStacks(data.stacks);\n      } catch (error) {\n        console.error(\"Failed to fetch models:\", error);\n      }\n    };\n    fetchModels();\n  }, []);\n\n  useEffect(() => {\n    return () => {\n      document.title = \"Screenshot to Code\";\n      if (faviconFlashIntervalRef.current !== null) {\n        window.clearInterval(faviconFlashIntervalRef.current);\n      }\n    };\n  }, []);\n\n  const setFavicon = (href: string) => {\n    const faviconEl = document.querySelector(\n      \"link[rel='icon']\"\n    ) as HTMLLinkElement | null;\n    if (faviconEl) {\n      faviconEl.href = href;\n    }\n  };\n\n  const stopFaviconFlash = () => {\n    if (faviconFlashIntervalRef.current !== null) {\n      window.clearInterval(faviconFlashIntervalRef.current);\n      faviconFlashIntervalRef.current = null;\n    }\n    setFavicon(\"/favicon/main.png\");\n    window.removeEventListener(\"visibilitychange\", stopWhenTabIsVisible);\n    window.removeEventListener(\"focus\", stopWhenTabIsVisible);\n  };\n\n  const stopWhenTabIsVisible = () => {\n    if (document.visibilityState === \"visible\" && document.hasFocus()) {\n      stopFaviconFlash();\n    }\n  };\n\n  const flashFaviconOnComplete = () => {\n    stopFaviconFlash();\n    let useAlertIcon = false;\n    faviconFlashIntervalRef.current = window.setInterval(() => {\n      setFavicon(useAlertIcon ? 
\"/favicon/coding.png\" : \"/favicon/main.png\");\n      useAlertIcon = !useAlertIcon;\n    }, 450);\n    window.addEventListener(\"visibilitychange\", stopWhenTabIsVisible);\n    window.addEventListener(\"focus\", stopWhenTabIsVisible);\n  };\n\n  const runEvals = async (filesToRun?: string[]) => {\n    const updateRunningTitle = (completed: number, total: number) => {\n      if (total <= 0) {\n        document.title = \"Running Evals...\";\n        return;\n      }\n      const percent = Math.round((completed / total) * 100);\n      document.title = `(${percent}%) Running Evals...`;\n    };\n\n    try {\n      setIsRunning(true);\n      document.title = \"Running Evals...\";\n      setCompletedTasks(0);\n      setTotalTasks(0);\n      setCurrentModel(\"\");\n      setStatusMessage(\"Preparing evaluation run...\");\n      setLastProcessedFile(\"\");\n      setFailedTasks(0);\n      setFailedTaskDetails([]);\n      setSkippedExistingTasks(0);\n      stopFaviconFlash();\n\n      const runFiles = filesToRun ?? 
selectedFiles;\n\n      const response = await fetch(`${HTTP_BACKEND_URL}/run_evals_stream`, {\n        method: \"POST\",\n        headers: {\n          \"Content-Type\": \"application/json\",\n        },\n        body: JSON.stringify({\n          models: selectedModels,\n          stack: selectedStack,\n          files: runFiles,\n          diff_mode: diffMode,\n        }),\n      });\n\n      if (!response.ok) {\n        throw new Error(\"Failed to run evals\");\n      }\n\n      if (!response.body) {\n        throw new Error(\"No progress stream available\");\n      }\n\n      const reader = response.body.getReader();\n      const decoder = new TextDecoder();\n      let bufferedText = \"\";\n\n      let streamDone = false;\n      while (!streamDone) {\n        const { value, done } = await reader.read();\n        if (done) {\n          streamDone = true;\n          continue;\n        }\n\n        bufferedText += decoder.decode(value, { stream: true });\n        const lines = bufferedText.split(\"\\n\");\n        bufferedText = lines.pop() ?? \"\";\n\n        for (const line of lines) {\n          if (!line.trim()) continue;\n\n          const event = JSON.parse(line) as EvalProgressEvent;\n\n          if (event.type === \"start\") {\n            const eventTotalTasks = event.total_tasks ?? 0;\n            setTotalTasks(eventTotalTasks);\n            setSkippedExistingTasks(event.total_skipped_existing ?? 0);\n            setStatusMessage(\n              event.diff_mode\n                ? `Starting diff run (${event.total_skipped_existing ?? 0} existing outputs skipped)...`\n                : \"Starting evaluation run...\"\n            );\n            updateRunningTitle(0, eventTotalTasks);\n          } else if (event.type === \"model_start\") {\n            if (event.model) setCurrentModel(event.model);\n            setStatusMessage(\n              `Running model ${event.model_index ?? 1}/${event.total_models ?? selectedModels.length}: ${event.model ?? 
\"Unknown\"}${\n                diffMode && (event.model_skipped_existing ?? 0) > 0\n                  ? ` (${event.model_skipped_existing} skipped)`\n                  : \"\"\n              }`\n            );\n          } else if (event.type === \"task_complete\") {\n            const globalCompleted = event.global_completed_tasks ?? 0;\n            const globalTotal = event.global_total_tasks ?? totalTasks;\n            setCompletedTasks(globalCompleted);\n            if (event.input_file) setLastProcessedFile(event.input_file);\n            if (event.success === false) {\n              setFailedTasks((prev) => prev + 1);\n              setFailedTaskDetails((prev) => [\n                ...prev,\n                {\n                  model: event.model ?? \"unknown\",\n                  inputFile: event.input_file ?? \"unknown\",\n                  error: event.error ?? \"Unknown error\",\n                },\n              ]);\n            }\n            setStatusMessage(\n              event.success === false\n                ? `Failed: ${event.input_file ?? \"unknown file\"}`\n                : `Processed: ${event.input_file ?? \"unknown file\"}`\n            );\n            updateRunningTitle(globalCompleted, globalTotal);\n          } else if (event.type === \"complete\") {\n            const finalCompleted = event.completed_tasks ?? completedTasks;\n            const finalTotal = event.total_tasks ?? totalTasks;\n            setCompletedTasks(finalCompleted);\n            setTotalTasks(finalTotal);\n            setStatusMessage(\"Evaluation run complete\");\n            console.log(\"Generated files:\", event.output_files ?? []);\n            updateRunningTitle(finalCompleted, finalTotal);\n          } else if (event.type === \"error\") {\n            throw new Error(event.message ?? 
\"Eval run failed\");\n          }\n        }\n      }\n\n      document.title = \"✓ Evals Complete\";\n      flashFaviconOnComplete();\n    } catch (error) {\n      console.error(\"Error running evals:\", error);\n      document.title = \"❌ Eval Error\";\n      setStatusMessage(\"Evaluation run failed\");\n      flashFaviconOnComplete();\n      setTimeout(() => {\n        document.title = \"Screenshot to Code\";\n      }, 5000);\n    } finally {\n      setIsRunning(false);\n    }\n  };\n\n  const handleModelToggle = (model: string) => {\n    setSelectedModels((prev) => {\n      if (prev.includes(model)) {\n        return prev.filter((m) => m !== model);\n      }\n      return [...prev, model];\n    });\n  };\n\n  const handleSelectAll = () => {\n    setSelectedModels(models);\n  };\n\n  const handleFilesSelected = useCallback((files: string[]) => {\n    setSelectedFiles(files);\n  }, []);\n\n  // Format model list for display in the summary\n  const formatModelList = () => {\n    if (selectedModels.length === 0) return \"None\";\n    if (selectedModels.length === 1) return selectedModels[0];\n    if (selectedModels.length <= 2) return selectedModels.join(\", \");\n    return `${selectedModels.slice(0, 2).join(\", \")} +${selectedModels.length - 2} more`;\n  };\n\n  const canRunEvals = selectedModels.length > 0 && selectedFiles.length > 0;\n  const failedFilesForRerun = Array.from(\n    new Set(failedTaskDetails.map((task) => task.inputFile).filter(Boolean))\n  );\n  const canRerunFailures =\n    !isRunning &&\n    selectedModels.length > 0 &&\n    failedFilesForRerun.length > 0;\n  const progressPercent = totalTasks > 0 ? 
(completedTasks / totalTasks) * 100 : 0;\n\n  return (\n    <div>\n      <EvalNavigation />\n      \n      <div className=\"container mx-auto px-4 py-6\">\n        {/* Unified Header with Configuration Summary */}\n        <div className=\"mb-6 bg-white rounded-lg border border-gray-200 shadow-sm p-4 max-w-5xl mx-auto\">\n          <div className=\"flex flex-col gap-4\">\n            <div className=\"flex flex-wrap justify-between items-center\">\n              <h1 className=\"text-2xl font-bold\">Run Evaluations</h1>\n              \n              <Button\n                onClick={() => void runEvals()}\n                disabled={isRunning || !canRunEvals}\n                className={`min-w-[120px] ${isRunning ? \"bg-blue-400\" : \"bg-blue-600 hover:bg-blue-700\"}`}\n              >\n                {isRunning ? \"Running...\" : \"Run Evals\"}\n              </Button>\n            </div>\n\n            {diffMode && (\n              <div className=\"rounded-md border border-emerald-200 bg-emerald-50 px-3 py-2 text-sm text-emerald-800\">\n                Diff mode enabled: only input files missing outputs in today's model folders will run. 
Existing outputs are skipped and never overwritten.\n              </div>\n            )}\n            \n            <div className=\"grid grid-cols-1 md:grid-cols-3 gap-3 border-t border-gray-100 pt-3\">\n              <div className=\"flex flex-col\">\n                <span className=\"text-sm font-medium text-gray-700\">Models</span>\n                <span className=\"text-sm text-gray-600 font-mono\">{formatModelList()}</span>\n              </div>\n              \n              <div className=\"flex flex-col\">\n                <span className=\"text-sm font-medium text-gray-700\">Stack</span>\n                <span className=\"text-sm text-gray-600 font-mono\">{selectedStack}</span>\n              </div>\n              \n              <div className=\"flex flex-col\">\n                <span className=\"text-sm font-medium text-gray-700\">Input Files</span>\n                <span className=\"text-sm text-gray-600\">{selectedFiles.length} selected</span>\n              </div>\n            </div>\n            \n            <div \n              className=\"inline-flex items-center gap-1 text-xs text-gray-600 cursor-pointer mt-1 hover:bg-gray-50 rounded px-2 py-1\"\n              onClick={() => setShowPaths(!showPaths)}\n            >\n              {showPaths ? 
<BsChevronDown size={12} /> : <BsChevronRight size={12} />}\n              <span className=\"font-medium\">Paths</span>\n            </div>\n            \n            {showPaths && (\n              <div className=\"grid grid-cols-1 md:grid-cols-2 gap-3 text-xs text-gray-600 mt-2 bg-gray-50 p-2 rounded-md\">\n                <div>\n                  <span className=\"font-medium\">Input path:</span>\n                  <code className=\"ml-2 bg-gray-100 px-2 py-0.5 rounded\">\n                    backend/evals_data/inputs\n                  </code>\n                </div>\n                <div>\n                  <span className=\"font-medium\">Output path:</span>\n                  <code className=\"ml-2 bg-gray-100 px-2 py-0.5 rounded\">\n                    backend/evals_data/outputs\n                  </code>\n                </div>\n              </div>\n            )}\n\n            {(isRunning || totalTasks > 0) && (\n              <div className=\"border-t border-gray-100 pt-3\">\n                <div className=\"flex items-center justify-between text-sm mb-2\">\n                  <span className=\"font-medium text-gray-700\">{statusMessage}</span>\n                  <span className=\"text-gray-600\">\n                    {completedTasks} / {totalTasks || \"?\"} tasks\n                  </span>\n                </div>\n                <Progress value={progressPercent} className=\"h-2 mb-2\" />\n                <div className=\"grid grid-cols-1 md:grid-cols-3 gap-2 text-xs text-gray-600\">\n                  <span>\n                    Current model:{\" \"}\n                    <span className=\"font-mono\">{currentModel || \"-\"}</span>\n                  </span>\n                  <span>\n                    Failures: <span className=\"font-medium\">{failedTasks}</span>\n                  </span>\n                  <span className=\"truncate\" title={lastProcessedFile}>\n                    Last file:{\" \"}\n                    <span 
className=\"font-mono\">{lastProcessedFile || \"-\"}</span>\n                  </span>\n                </div>\n                {diffMode && (\n                  <div className=\"mt-2 text-xs text-emerald-700\">\n                    Existing outputs skipped:{\" \"}\n                    <span className=\"font-medium\">{skippedExistingTasks}</span>\n                  </div>\n                )}\n                {failedTaskDetails.length > 0 && (\n                  <div className=\"mt-3 border-t border-gray-100 pt-3\">\n                    <div className=\"flex items-center justify-between mb-2\">\n                      <span className=\"text-sm font-medium text-red-700\">\n                        Failures ({failedTaskDetails.length})\n                      </span>\n                      <Button\n                        variant=\"outline\"\n                        size=\"sm\"\n                        disabled={!canRerunFailures}\n                        onClick={() => void runEvals(failedFilesForRerun)}\n                        className=\"h-7 px-2 text-xs\"\n                      >\n                        Re-run failures\n                      </Button>\n                    </div>\n                    <div className=\"max-h-40 overflow-y-auto rounded-md border border-red-100 bg-red-50/40\">\n                      <ul className=\"divide-y divide-red-100\">\n                        {failedTaskDetails.map((task, index) => (\n                          <li key={`${task.model}-${task.inputFile}-${index}`} className=\"px-3 py-2 text-xs\">\n                            <div className=\"font-medium text-red-800\">\n                              {task.inputFile}\n                            </div>\n                            <div className=\"text-red-700\">\n                              Model: <span className=\"font-mono\">{task.model}</span>\n                            </div>\n                            <div className=\"text-red-700 break-words\">\n                         
     Error: {task.error}\n                            </div>\n                          </li>\n                        ))}\n                      </ul>\n                    </div>\n                  </div>\n                )}\n              </div>\n            )}\n          </div>\n        </div>\n\n        {/* Selection Controls */}\n        <div className=\"grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 gap-6 max-w-5xl mx-auto\">\n          {/* Model Selection Section */}\n          <div className=\"bg-white rounded-lg border border-gray-200 shadow-sm\">\n            <div className=\"border-b border-gray-200 px-4 py-3 bg-gray-50 rounded-t-lg\">\n              <h2 className=\"font-medium\">Select Models</h2>\n            </div>\n            <div className=\"p-3\">\n              <div className=\"border rounded-md max-h-[300px] overflow-y-auto\">\n                <div className=\"grid grid-cols-1 divide-y divide-gray-100\">\n                  {models.map((model) => (\n                    <div\n                      key={model}\n                      className={`flex items-center justify-between px-3 py-2 cursor-pointer hover:bg-gray-50 ${\n                        selectedModels.includes(model)\n                          ? \"bg-blue-50\"\n                          : \"\"\n                      }`}\n                      onClick={() => handleModelToggle(model)}\n                    >\n                      <span className=\"text-sm truncate\" title={model}>{model}</span>\n                      {selectedModels.includes(model) ? 
(\n                        <BsCheckLg className=\"h-3.5 w-3.5 text-blue-500 flex-shrink-0\" />\n                      ) : (\n                        <div className=\"h-3.5 w-3.5 border rounded-sm flex-shrink-0\" />\n                      )}\n                    </div>\n                  ))}\n                </div>\n              </div>\n              <div className=\"flex justify-between mt-2 text-xs\">\n                <p className=\"text-gray-500\">\n                  Selected: {selectedModels.length} / {models.length}\n                </p>\n                <div className=\"space-x-2\">\n                  {selectedModels.length < models.length && (\n                    <Button\n                      variant=\"ghost\"\n                      size=\"sm\"\n                      onClick={handleSelectAll}\n                      className=\"text-xs h-6 px-2 text-gray-500 hover:text-gray-700\"\n                    >\n                      Select all\n                    </Button>\n                  )}\n                  {selectedModels.length > 0 && (\n                    <Button\n                      variant=\"ghost\"\n                      size=\"sm\"\n                      onClick={() => setSelectedModels([])}\n                      className=\"text-xs h-6 px-2 text-gray-500 hover:text-gray-700\"\n                    >\n                      Clear\n                    </Button>\n                  )}\n                </div>\n              </div>\n            </div>\n          </div>\n\n          {/* Stack Selection Section */}\n          <div className=\"bg-white rounded-lg border border-gray-200 shadow-sm\">\n            <div className=\"border-b border-gray-200 px-4 py-3 bg-gray-50 rounded-t-lg\">\n              <h2 className=\"font-medium\">Select Stack</h2>\n            </div>\n            <div className=\"p-3\">\n              <select\n                value={selectedStack}\n                onChange={(e) => setSelectedStack(e.target.value)}\n                
className=\"w-full py-2 px-3 border border-gray-300 rounded-md shadow-sm focus:outline-none focus:ring-blue-500 focus:border-blue-500\"\n              >\n                {stacks.map((stack) => (\n                  <option key={stack} value={stack}>\n                    {stack}\n                  </option>\n                ))}\n              </select>\n              <label className=\"mt-3 flex items-center gap-2 text-sm text-gray-700 cursor-pointer\">\n                <input\n                  type=\"checkbox\"\n                  checked={diffMode}\n                  onChange={(e) => setDiffMode(e.target.checked)}\n                  className=\"h-4 w-4 rounded border-gray-300 text-blue-600 focus:ring-blue-500\"\n                />\n                <span>Diff mode (only run missing outputs, no overwrite)</span>\n              </label>\n            </div>\n          </div>\n\n          {/* Input Files Section */}\n          <div className=\"bg-white rounded-lg border border-gray-200 shadow-sm lg:col-span-1 md:col-span-2\">\n            <div className=\"border-b border-gray-200 px-4 py-3 bg-gray-50 rounded-t-lg\">\n              <h2 className=\"font-medium\">Select Input Files</h2>\n            </div>\n            <div className=\"p-3\">\n              <InputFileSelector onFilesSelected={handleFilesSelected} />\n            </div>\n          </div>\n        </div>\n      </div>\n    </div>\n  );\n}\n\nexport default RunEvalsPage;\n"
  },
  {
    "path": "frontend/src/components/generate-from-text/GenerateFromText.tsx",
    "content": "import { useState, useRef, useEffect } from \"react\";\nimport { Button } from \"../ui/button\";\nimport { Textarea } from \"../ui/textarea\";\nimport toast from \"react-hot-toast\";\n\ninterface GenerateFromTextProps {\n  doCreateFromText: (text: string) => void;\n}\n\nfunction GenerateFromText({ doCreateFromText }: GenerateFromTextProps) {\n  const [isOpen, setIsOpen] = useState(false);\n  const [text, setText] = useState(\"\");\n  const textareaRef = useRef<HTMLTextAreaElement>(null);\n\n  useEffect(() => {\n    if (isOpen && textareaRef.current) {\n      textareaRef.current.focus();\n    }\n  }, [isOpen]);\n\n  const handleGenerate = () => {\n    if (text.trim() === \"\") {\n      toast.error(\"Please enter a prompt to generate from\");\n      return;\n    }\n    doCreateFromText(text);\n  };\n\n  const handleKeyDown = (e: React.KeyboardEvent) => {\n    if (e.key === \"Enter\" && (e.metaKey || e.ctrlKey)) {\n      e.preventDefault();\n      handleGenerate();\n    }\n  };\n\n  return (\n    <div className=\"mt-4\">\n      {!isOpen ? 
(\n        <div className=\"flex justify-center\">\n          <Button variant=\"secondary\" onClick={() => setIsOpen(true)}>\n            Generate from text prompt [BETA]\n          </Button>\n        </div>\n      ) : (\n        <>\n          <Textarea\n            ref={textareaRef}\n            rows={2}\n            placeholder=\"A SaaS admin dashboard with charts and user management\"\n            className=\"w-full mb-4\"\n            value={text}\n            onChange={(e) => setText(e.target.value)}\n            onKeyDown={handleKeyDown}\n          />\n          <div className=\"flex justify-between items-center\">\n            <span className=\"text-sm text-gray-500\">\n              Press Cmd/Ctrl + Enter to generate\n            </span>\n            <div className=\"flex gap-2\">\n              <Button variant=\"outline\" onClick={() => setIsOpen(false)}>\n                Cancel\n              </Button>\n              <Button onClick={handleGenerate}>Generate</Button>\n            </div>\n          </div>\n        </>\n      )}\n    </div>\n  );\n}\n\nexport default GenerateFromText;\n"
  },
  {
    "path": "frontend/src/components/history/HistoryDisplay.tsx",
    "content": "import { renderHistory, RenderedHistoryItem } from \"./utils\";\nimport { useProjectStore } from \"../../store/project-store\";\nimport { BsChevronDown, BsChevronRight } from \"react-icons/bs\";\nimport { useState, useRef, useEffect, useCallback } from \"react\";\n\nfunction MediaThumbnail({\n  item,\n  onPlayClick,\n}: {\n  item: RenderedHistoryItem;\n  onPlayClick?: () => void;\n}) {\n  const firstImage = item.images[0];\n  const firstVideo = item.videos[0];\n\n  if (!firstImage && !firstVideo) return null;\n\n  return (\n    <div className=\"shrink-0 w-12 h-12 rounded-md overflow-hidden border border-gray-200 dark:border-zinc-700 bg-gray-100 dark:bg-zinc-800\">\n      {firstImage ? (\n        <img\n          src={firstImage}\n          alt=\"Input screenshot\"\n          className=\"w-full h-full object-cover\"\n          draggable={false}\n        />\n      ) : firstVideo ? (\n        <button\n          type=\"button\"\n          className=\"relative w-full h-full\"\n          onClick={(e) => {\n            e.stopPropagation();\n            onPlayClick?.();\n          }}\n        >\n          <video\n            src={firstVideo}\n            className=\"w-full h-full object-cover\"\n            muted\n            playsInline\n          />\n          <div className=\"absolute inset-0 flex items-center justify-center bg-black/30\">\n            <svg\n              className=\"w-4 h-4 text-white\"\n              fill=\"currentColor\"\n              viewBox=\"0 0 20 20\"\n            >\n              <path d=\"M6.3 2.841A1.5 1.5 0 004 4.11v11.78a1.5 1.5 0 002.3 1.269l9.344-5.89a1.5 1.5 0 000-2.538L6.3 2.84z\" />\n            </svg>\n          </div>\n        </button>\n      ) : null}\n    </div>\n  );\n}\n\nfunction ExpandedMedia({\n  item,\n  autoPlayVideo,\n}: {\n  item: RenderedHistoryItem;\n  autoPlayVideo?: boolean;\n}) {\n  const hasImages = item.images.length > 0;\n  const hasVideos = item.videos.length > 0;\n  const videoRef = 
useRef<HTMLVideoElement>(null);\n\n  useEffect(() => {\n    if (autoPlayVideo && videoRef.current) {\n      videoRef.current.play().catch(() => {});\n    }\n  }, [autoPlayVideo]);\n\n  if (!hasImages && !hasVideos) return null;\n\n  return (\n    <div className=\"flex flex-col gap-2 mt-2\">\n      {item.images.map((img, i) => (\n        <div\n          key={`img-${i}`}\n          className=\"rounded-md overflow-hidden border border-gray-200 dark:border-zinc-700\"\n        >\n          <img\n            src={img}\n            alt={`Input ${i + 1}`}\n            className=\"w-full h-auto object-contain max-h-48\"\n            draggable={false}\n          />\n        </div>\n      ))}\n      {item.videos.map((vid, i) => (\n        <div\n          key={`vid-${i}`}\n          className=\"rounded-md overflow-hidden border border-gray-200 dark:border-zinc-700\"\n        >\n          <video\n            ref={i === 0 ? videoRef : undefined}\n            src={vid}\n            className=\"w-full h-auto max-h-48\"\n            controls\n            muted\n            playsInline\n          />\n        </div>\n      ))}\n    </div>\n  );\n}\n\nexport default function HistoryDisplay() {\n  const { commits, head, setHead } = useProjectStore();\n  const [expandedHash, setExpandedHash] = useState<string | null>(null);\n  const [autoPlayHash, setAutoPlayHash] = useState<string | null>(null);\n\n  // Clear auto-play flag when the expanded item changes\n  useEffect(() => {\n    if (expandedHash !== autoPlayHash) {\n      setAutoPlayHash(null);\n    }\n  }, [expandedHash, autoPlayHash]);\n\n  const handleVideoPlayClick = useCallback((hash: string) => {\n    setExpandedHash(hash);\n    setAutoPlayHash(hash);\n  }, []);\n\n  // Put all commits into an array and sort by created date (oldest first)\n  const flatHistory = Object.values(commits).sort(\n    (a, b) =>\n      new Date(a.dateCreated).getTime() - new Date(b.dateCreated).getTime()\n  );\n\n  // Annotate history items with a 
summary, parent version, etc.\n  const renderedHistory = renderHistory(flatHistory);\n\n  if (renderedHistory.length === 0) return null;\n\n  return (\n    <div className=\"flex flex-col gap-2\">\n      {[...renderedHistory].reverse().map((item, reverseIndex) => {\n        const versionNumber = renderedHistory.length - reverseIndex;\n        const isActive = item.hash === head;\n        const isExpanded = expandedHash === item.hash;\n        const hasMedia = item.images.length > 0 || item.videos.length > 0;\n\n        return (\n          <div\n            key={item.hash}\n            className={`rounded-xl border transition-all ${\n              isActive\n                ? \"bg-white dark:bg-zinc-800 border-blue-200 dark:border-blue-800 shadow-sm\"\n                : \"bg-white dark:bg-zinc-900 border-gray-100 dark:border-zinc-800 hover:border-gray-200 dark:hover:border-zinc-700\"\n            }`}\n          >\n            <div\n              className=\"flex items-center gap-3 px-3 py-2.5 cursor-pointer\"\n              onClick={() => setHead(item.hash)}\n            >\n              {/* Version number badge */}\n              <span\n                className={`shrink-0 flex items-center justify-center w-6 h-6 rounded-full text-[10px] font-semibold ${\n                  isActive\n                    ? 
\"bg-blue-100 dark:bg-blue-900/50 text-blue-700 dark:text-blue-300\"\n                    : \"bg-gray-100 dark:bg-zinc-800 text-gray-400 dark:text-gray-500\"\n                }`}\n              >\n                {versionNumber}\n              </span>\n\n              {/* Thumbnail */}\n              <MediaThumbnail\n                item={item}\n                onPlayClick={() => handleVideoPlayClick(item.hash)}\n              />\n\n              {/* Summary */}\n              <div className=\"flex-1 min-w-0\">\n                <div className=\"flex items-center gap-1.5\">\n                  <span\n                    className={`text-[10px] uppercase tracking-wider font-medium px-1.5 py-0.5 rounded ${\n                      item.type === \"Create\"\n                        ? \"bg-emerald-50 dark:bg-emerald-900/30 text-emerald-600 dark:text-emerald-400\"\n                        : item.type === \"Edit\"\n                          ? \"bg-amber-50 dark:bg-amber-900/30 text-amber-600 dark:text-amber-400\"\n                          : \"bg-gray-50 dark:bg-gray-800 text-gray-500 dark:text-gray-400\"\n                    }`}\n                  >\n                    {item.type}\n                  </span>\n                  {item.parentVersion !== null && (\n                    <span className=\"text-[10px] text-gray-400 dark:text-gray-500\">\n                      from v{item.parentVersion}\n                    </span>\n                  )}\n                </div>\n                <p\n                  className={`text-sm mt-0.5 line-clamp-2 ${\n                    isActive\n                      ? 
\"font-medium text-gray-900 dark:text-white\"\n                      : \"text-gray-600 dark:text-gray-400\"\n                  }`}\n                >\n                  {item.summary}\n                  {item.selectedElementTag && (\n                    <>\n                      {\" \"}\n                      <span className=\"text-gray-300 dark:text-gray-600\">&middot;</span>\n                      {\" \"}\n                      <code className=\"text-xs font-mono text-violet-600 dark:text-violet-400\">&lt;{item.selectedElementTag}&gt;</code>\n                    </>\n                  )}\n                </p>\n              </div>\n\n              {/* Expand button */}\n              {(hasMedia || item.summary.length > 30) && (\n                <button\n                  onClick={(e) => {\n                    e.stopPropagation();\n                    setExpandedHash(isExpanded ? null : item.hash);\n                  }}\n                  className=\"shrink-0 text-gray-400 dark:text-gray-500 hover:text-gray-600 dark:hover:text-gray-300 p-1 rounded-md hover:bg-gray-100 dark:hover:bg-zinc-700 transition-colors\"\n                >\n                  {isExpanded ? 
(\n                    <BsChevronDown className=\"w-3 h-3\" />\n                  ) : (\n                    <BsChevronRight className=\"w-3 h-3\" />\n                  )}\n                </button>\n              )}\n            </div>\n\n            {/* Expanded details */}\n            {isExpanded && (\n              <div className=\"px-3 pb-3 pl-12\">\n                {item.summary.length > 30 && (\n                  <p className=\"text-xs text-gray-500 dark:text-gray-400 break-words\">\n                    {item.summary}\n                  </p>\n                )}\n                {item.selectedElementTag && (\n                  <p className=\"text-xs text-violet-500 dark:text-violet-400 mt-1\">\n                    Target: <code className=\"font-mono text-[10px] bg-violet-100 dark:bg-violet-900/30 px-1 py-0.5 rounded\">&lt;{item.selectedElementTag}&gt;</code>\n                  </p>\n                )}\n                <ExpandedMedia\n                  item={item}\n                  autoPlayVideo={autoPlayHash === item.hash}\n                />\n              </div>\n            )}\n          </div>\n        );\n      })}\n    </div>\n  );\n}\n"
  },
  {
    "path": "frontend/src/components/history/utils.test.ts",
    "content": "import { renderHistory } from \"./utils\";\nimport { Commit, CommitHash } from \"../commits/types\";\n\nconst basicLinearHistory: Record<CommitHash, Commit> = {\n  \"0\": {\n    hash: \"0\",\n    dateCreated: new Date(),\n    isCommitted: false,\n    type: \"ai_create\",\n    parentHash: null,\n    variants: [{ code: \"<html>1. create</html>\", history: [] }],\n    selectedVariantIndex: 0,\n    inputs: { text: \"\", images: [\"\"] },\n  },\n  \"1\": {\n    hash: \"1\",\n    dateCreated: new Date(),\n    isCommitted: false,\n    type: \"ai_edit\",\n    parentHash: \"0\",\n    variants: [{ code: \"<html>2. edit with better icons</html>\", history: [] }],\n    selectedVariantIndex: 0,\n    inputs: { text: \"use better icons\", images: [] },\n  },\n  \"2\": {\n    hash: \"2\",\n    dateCreated: new Date(),\n    isCommitted: false,\n    type: \"ai_edit\",\n    parentHash: \"1\",\n    variants: [{ code: \"<html>3. edit with better icons and red text</html>\", history: [] }],\n    selectedVariantIndex: 0,\n    inputs: { text: \"make text red\", images: [] },\n  },\n};\n\nconst basicLinearHistoryWithCode: Record<CommitHash, Commit> = {\n  \"0\": {\n    hash: \"0\",\n    dateCreated: new Date(),\n    isCommitted: false,\n    type: \"code_create\",\n    parentHash: null,\n    variants: [{ code: \"<html>1. create</html>\", history: [] }],\n    selectedVariantIndex: 0,\n    inputs: null,\n  },\n  ...Object.fromEntries(Object.entries(basicLinearHistory).slice(1)),\n};\n\nconst basicBranchingHistory: Record<CommitHash, Commit> = {\n  ...basicLinearHistory,\n  \"3\": {\n    hash: \"3\",\n    dateCreated: new Date(),\n    isCommitted: false,\n    type: \"ai_edit\",\n    parentHash: \"1\",\n    variants: [\n      { code: \"<html>4. 
edit with better icons and green text</html>\", history: [] },\n    ],\n    selectedVariantIndex: 0,\n    inputs: { text: \"make text green\", images: [] },\n  },\n};\n\ndescribe(\"History Utils\", () => {\n  test(\"should correctly render the history tree\", () => {\n    expect(renderHistory(Object.values(basicLinearHistory))).toEqual([\n      {\n        ...basicLinearHistory[\"0\"],\n        type: \"Create\",\n        summary: \"Create\",\n        selectedElementTag: null,\n        parentVersion: null,\n        images: [\"\"],\n        videos: [],\n      },\n      {\n        ...basicLinearHistory[\"1\"],\n        type: \"Edit\",\n        summary: \"use better icons\",\n        selectedElementTag: null,\n        parentVersion: null,\n        images: [],\n        videos: [],\n      },\n      {\n        ...basicLinearHistory[\"2\"],\n        type: \"Edit\",\n        summary: \"make text red\",\n        selectedElementTag: null,\n        parentVersion: null,\n        images: [],\n        videos: [],\n      },\n    ]);\n\n    // Render a history with code\n    expect(renderHistory(Object.values(basicLinearHistoryWithCode))).toEqual([\n      {\n        ...basicLinearHistoryWithCode[\"0\"],\n        type: \"Imported from code\",\n        summary: \"Imported from code\",\n        selectedElementTag: null,\n        parentVersion: null,\n        images: [],\n        videos: [],\n      },\n      {\n        ...basicLinearHistoryWithCode[\"1\"],\n        type: \"Edit\",\n        summary: \"use better icons\",\n        selectedElementTag: null,\n        parentVersion: null,\n        images: [],\n        videos: [],\n      },\n      {\n        ...basicLinearHistoryWithCode[\"2\"],\n        type: \"Edit\",\n        summary: \"make text red\",\n        selectedElementTag: null,\n        parentVersion: null,\n        images: [],\n        videos: [],\n      },\n    ]);\n\n    // Render a non-linear history\n    expect(renderHistory(Object.values(basicBranchingHistory))).toEqual([\n 
     {\n        ...basicBranchingHistory[\"0\"],\n        type: \"Create\",\n        summary: \"Create\",\n        selectedElementTag: null,\n        parentVersion: null,\n        images: [\"\"],\n        videos: [],\n      },\n      {\n        ...basicBranchingHistory[\"1\"],\n        type: \"Edit\",\n        summary: \"use better icons\",\n        selectedElementTag: null,\n        parentVersion: null,\n        images: [],\n        videos: [],\n      },\n      {\n        ...basicBranchingHistory[\"2\"],\n        type: \"Edit\",\n        summary: \"make text red\",\n        selectedElementTag: null,\n        parentVersion: null,\n        images: [],\n        videos: [],\n      },\n      {\n        ...basicBranchingHistory[\"3\"],\n        type: \"Edit\",\n        summary: \"make text green\",\n        selectedElementTag: null,\n        parentVersion: 2,\n        images: [],\n        videos: [],\n      },\n    ]);\n  });\n});\n"
  },
  {
    "path": "frontend/src/components/history/utils.ts",
    "content": "import { Commit, CommitType } from \"../commits/types\";\n\nfunction displayHistoryItemType(itemType: CommitType) {\n  switch (itemType) {\n    case \"ai_create\":\n      return \"Create\";\n    case \"ai_edit\":\n      return \"Edit\";\n    case \"code_create\":\n      return \"Imported from code\";\n    default: {\n      const exhaustiveCheck: never = itemType;\n      throw new Error(`Unhandled case: ${exhaustiveCheck}`);\n    }\n  }\n}\n\nconst setParentVersion = (commit: Commit, history: Commit[]) => {\n  // If the commit has no parent, return null\n  if (!commit.parentHash) return null;\n\n  const parentIndex = history.findIndex(\n    (item) => item.hash === commit.parentHash\n  );\n  const currentIndex = history.findIndex((item) => item.hash === commit.hash);\n\n  // Only set parent version if the parent is not the previous commit\n  // and parent exists\n  return parentIndex !== -1 && parentIndex != currentIndex - 1\n    ? parentIndex + 1\n    : null;\n};\n\nfunction extractTagName(html: string): string {\n  const match = html.match(/^<(\\w+)/);\n  return match ? 
match[1].toLowerCase() : \"element\";\n}\n\nfunction getCommitMedia(commit: Commit): { images: string[]; videos: string[] } {\n  if (commit.type === \"code_create\") {\n    return { images: [], videos: [] };\n  }\n  return {\n    images: commit.inputs.images || [],\n    videos: commit.inputs.videos || [],\n  };\n}\n\nexport function summarizeHistoryItem(commit: Commit): string {\n  const commitType = commit.type;\n  switch (commitType) {\n    case \"ai_create\":\n      return \"Create\";\n    case \"ai_edit\":\n      return commit.inputs.text || \"Edit\";\n    case \"code_create\":\n      return \"Imported from code\";\n    default: {\n      const exhaustiveCheck: never = commitType;\n      throw new Error(`Unhandled case: ${exhaustiveCheck}`);\n    }\n  }\n}\n\nexport function getSelectedElementTag(commit: Commit): string | null {\n  if (commit.type === \"code_create\") return null;\n  const html = commit.inputs.selectedElementHtml;\n  if (!html) return null;\n  return extractTagName(html);\n}\n\nexport type RenderedHistoryItem = Omit<Commit, \"type\"> & {\n  type: string;\n  summary: string;\n  selectedElementTag: string | null;\n  parentVersion: number | null;\n  images: string[];\n  videos: string[];\n};\n\nexport const renderHistory = (history: Commit[]): RenderedHistoryItem[] => {\n  const renderedHistory: RenderedHistoryItem[] = [];\n\n  for (let i = 0; i < history.length; i++) {\n    const commit = history[i];\n    const media = getCommitMedia(commit);\n    renderedHistory.push({\n      ...commit,\n      type: displayHistoryItemType(commit.type),\n      summary: summarizeHistoryItem(commit),\n      selectedElementTag: getSelectedElementTag(commit),\n      parentVersion: setParentVersion(commit, history),\n      images: media.images,\n      videos: media.videos,\n    });\n  }\n\n  return renderedHistory;\n};\n"
  },
  {
    "path": "frontend/src/components/messages/OnboardingNote.tsx",
    "content": "export function OnboardingNote() {\n  return (\n    <div className=\"flex flex-col space-y-4 bg-green-700 p-2 rounded text-stone-200 text-sm\">\n      <span>\n        To use Screenshot to Code,{\" \"}\n        <a\n          className=\"inline underline hover:opacity-70\"\n          href=\"https://buy.stripe.com/8wM6sre70gBW1nqaEE\"\n          target=\"_blank\"\n          rel=\"noopener\"\n        >\n          buy some credits (100 generations for $36)\n        </a>{\" \"}\n        or use your own OpenAI API key with GPT-4 Vision access.{\" \"}\n        <a\n          href=\"https://github.com/abi/screenshot-to-code/blob/main/Troubleshooting.md\"\n          className=\"inline underline hover:opacity-70\"\n          target=\"_blank\"\n          rel=\"noopener\"\n        >\n          Follow these instructions to get a key\n        </a>{\" \"}\n        and paste it into the Settings dialog (gear icon above). Your key is\n        stored only in your browser, never on our servers.\n      </span>\n    </div>\n  );\n}\n"
  },
  {
    "path": "frontend/src/components/messages/PicoBadge.tsx",
    "content": "export function PicoBadge() {\n  return (\n    <>\n      <a\n        href=\"https://screenshot-to-code.canny.io/feature-requests\"\n        target=\"_blank\"\n        rel=\"noopener\"\n      >\n        <div\n          className=\"fixed z-50 bottom-16 right-5 rounded-md shadow bg-black\n         text-white px-4 text-xs py-3 cursor-pointer\"\n        >\n          feature requests?\n        </div>\n      </a>\n      <a\n        href=\"https://picoapps.xyz?ref=screenshot-to-code\"\n        target=\"_blank\"\n        rel=\"noopener\"\n      >\n        <div\n          className=\"fixed z-50 bottom-5 right-5 rounded-md shadow text-black\n         bg-white px-4 text-xs py-3 cursor-pointer\"\n        >\n          an open source project by Pico\n        </div>\n      </a>\n    </>\n  );\n}\n"
  },
  {
    "path": "frontend/src/components/messages/TipLink.tsx",
    "content": "import { URLS } from \"../../urls\";\n\nfunction TipLink() {\n  return (\n    <a\n      className=\"text-xs underline text-gray-500 text-right\"\n      href={URLS.tips}\n      target=\"_blank\"\n      rel=\"noopener\"\n    >\n      Tips for better results\n    </a>\n  );\n}\n\nexport default TipLink;\n"
  },
  {
    "path": "frontend/src/components/preview/CodeMirror.tsx",
    "content": "import { useRef, useEffect, useMemo } from \"react\";\nimport { EditorState } from \"@codemirror/state\";\nimport { EditorView, keymap, lineNumbers, ViewUpdate } from \"@codemirror/view\";\nimport { espresso, cobalt } from \"thememirror\";\nimport {\n  defaultKeymap,\n  history,\n  indentWithTab,\n  redo,\n  undo,\n} from \"@codemirror/commands\";\nimport { bracketMatching } from \"@codemirror/language\";\nimport { html } from \"@codemirror/lang-html\";\nimport { EditorTheme } from \"@/types\";\n\ninterface Props {\n  code: string;\n  editorTheme: EditorTheme;\n  onCodeChange: (code: string) => void;\n}\n\nfunction CodeMirror({ code, editorTheme, onCodeChange }: Props) {\n  const ref = useRef<HTMLDivElement>(null);\n  const view = useRef<EditorView | null>(null);\n  const editorState = useMemo(\n    () =>\n      EditorState.create({\n        extensions: [\n          history(),\n          keymap.of([\n            ...defaultKeymap,\n            indentWithTab,\n            { key: \"Mod-z\", run: undo, preventDefault: true },\n            { key: \"Mod-Shift-z\", run: redo, preventDefault: true },\n          ]),\n          lineNumbers(),\n          bracketMatching(),\n          html(),\n          editorTheme === EditorTheme.ESPRESSO ? 
espresso : cobalt,\n          EditorView.lineWrapping,\n          EditorView.updateListener.of((update: ViewUpdate) => {\n            if (update.docChanged) {\n              const updatedCode = update.state.doc.toString();\n              onCodeChange(updatedCode);\n            }\n          }),\n        ],\n      }),\n    [editorTheme]\n  );\n  useEffect(() => {\n    view.current = new EditorView({\n      state: editorState,\n      parent: ref.current as Element,\n    });\n\n    return () => {\n      if (view.current) {\n        view.current.destroy();\n        view.current = null;\n      }\n    };\n  }, []);\n\n  useEffect(() => {\n    if (view.current && view.current.state.doc.toString() !== code) {\n      view.current.dispatch({\n        changes: { from: 0, to: view.current.state.doc.length, insert: code },\n      });\n    }\n  }, [code]);\n\n  return (\n    <div\n      className=\"overflow-x-scroll overflow-y-scroll mx-2 border-[4px] border-black rounded-[20px]\"\n      ref={ref}\n    />\n  );\n}\n\nexport default CodeMirror;\n"
  },
  {
    "path": "frontend/src/components/preview/CodePreview.tsx",
    "content": "import { useRef, useEffect } from \"react\";\n\ninterface Props {\n  code: string;\n}\n\nfunction CodePreview({ code }: Props) {\n  const scrollRef = useRef<HTMLDivElement>(null);\n\n  useEffect(() => {\n    if (scrollRef.current) {\n      scrollRef.current.scrollLeft = scrollRef.current.scrollWidth;\n    }\n  }, [code]);\n\n  return (\n    <div\n      ref={scrollRef}\n      className=\"w-full px-2 bg-black text-green-400 whitespace-nowrap flex \n      overflow-x-auto font-mono text-[10px] my-4\"\n    >\n      {code}\n    </div>\n  );\n}\n\nexport default CodePreview;\n"
  },
  {
    "path": "frontend/src/components/preview/CodeTab.tsx",
    "content": "import { FaCopy } from \"react-icons/fa\";\nimport CodeMirror from \"./CodeMirror\";\nimport { Button } from \"../ui/button\";\nimport { Settings } from \"../../types\";\nimport copy from \"copy-to-clipboard\";\nimport { useCallback } from \"react\";\nimport toast from \"react-hot-toast\";\n\ninterface Props {\n  code: string;\n  setCode: React.Dispatch<React.SetStateAction<string>>;\n  settings: Settings;\n}\n\nfunction CodeTab({ code, setCode, settings }: Props) {\n  const copyCode = useCallback(() => {\n    copy(code);\n    toast.success(\"Copied to clipboard\");\n  }, [code]);\n\n  const doOpenInCodepenio = useCallback(async () => {\n    // TODO: Update CSS and JS external links depending on the framework being used\n    const data = {\n      html: code,\n      editors: \"100\", // 1: Open HTML, 0: Close CSS, 0: Close JS\n      layout: \"left\",\n      css_external:\n        \"https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.3/css/all.min.css\" +\n        (code.includes(\"<ion-\")\n          ? \",https://cdn.jsdelivr.net/npm/@ionic/core/css/ionic.bundle.css\"\n          : \"\"),\n      js_external:\n        \"https://cdn.tailwindcss.com \" +\n        (code.includes(\"<ion-\")\n          ? 
\",https://cdn.jsdelivr.net/npm/@ionic/core/dist/ionic/ionic.esm.js,https://cdn.jsdelivr.net/npm/@ionic/core/dist/ionic/ionic.js\"\n          : \"\"),\n    };\n\n    // Create a hidden form and submit it to open the code in CodePen\n    // Can't use fetch API directly because we want to open the URL in a new tab\n    const input = document.createElement(\"input\");\n    input.setAttribute(\"type\", \"hidden\");\n    input.setAttribute(\"name\", \"data\");\n    input.setAttribute(\"value\", JSON.stringify(data));\n\n    const form = document.createElement(\"form\");\n    form.setAttribute(\"method\", \"POST\");\n    form.setAttribute(\"action\", \"https://codepen.io/pen/define\");\n    form.setAttribute(\"target\", \"_blank\");\n    form.appendChild(input);\n\n    document.body.appendChild(form);\n    form.submit();\n  }, [code]);\n\n  return (\n    <div className=\"relative\">\n      <div className=\"flex justify-start items-center px-4 mb-2\">\n        <span\n          title=\"Copy Code\"\n          className=\"bg-black text-white flex items-center justify-center hover:text-black hover:bg-gray-100 cursor-pointer rounded-lg text-sm p-2.5\"\n          onClick={copyCode}\n          data-testid=\"copy-code\"\n        >\n          Copy Code <FaCopy className=\"ml-2\" />\n        </span>\n        <Button\n          onClick={doOpenInCodepenio}\n          className=\"bg-gray-100 text-black ml-2 py-2 px-4 border border-black rounded-md hover:bg-gray-400 focus:outline-none\"\n          data-testid=\"open-codepen\"\n        >\n          Open in{\" \"}\n          <img\n            src=\"https://assets.codepen.io/t-1/codepen-logo.svg\"\n            alt=\"codepen.io\"\n            className=\"h-4 ml-1\"\n          />\n        </Button>\n      </div>\n      <CodeMirror\n        code={code}\n        editorTheme={settings.editorTheme}\n        onCodeChange={setCode}\n      />\n    </div>\n  );\n}\n\nexport default CodeTab;\n"
  },
  {
    "path": "frontend/src/components/preview/PreviewComponent.tsx",
    "content": "import { useCallback, useEffect, useRef, useState } from \"react\";\nimport classNames from \"classnames\";\nimport useThrottle from \"../../hooks/useThrottle\";\nimport { useAppStore } from \"../../store/app-store\";\nimport { addHighlight, removeHighlight } from \"../select-and-edit/utils\";\n\ninterface Props {\n  code: string;\n  device: \"mobile\" | \"desktop\";\n  onScaleChange?: (scale: number) => void;\n  viewMode?: \"fit\" | \"actual\";\n}\n\nconst MOBILE_VIEWPORT_WIDTH = 375;\nexport const DESKTOP_VIEWPORT_WIDTH = 1366;\n\nfunction PreviewComponent({\n  code,\n  device,\n  onScaleChange,\n  viewMode,\n}: Props) {\n  const iframeRef = useRef<HTMLIFrameElement | null>(null);\n  const wrapperRef = useRef<HTMLDivElement | null>(null);\n\n  // Don't update code more often than every 200ms.\n  const throttledCode = useThrottle(code, 200);\n\n  // Select and edit functionality\n  const [clickEvent, setClickEvent] = useState<MouseEvent | null>(null);\n  const activeMode = viewMode ?? 
\"fit\";\n  const handleIframeClick = useCallback((event: MouseEvent) => {\n    setClickEvent(event);\n  }, []);\n\n  const handleIframeLinkClick = useCallback((event: MouseEvent) => {\n    const target = (event.target as HTMLElement).closest(\"a\");\n    if (!target) return;\n    const href = target.getAttribute(\"href\");\n    if (href && href.startsWith(\"#\")) {\n      event.preventDefault();\n    }\n  }, []);\n\n  const {\n    inSelectAndEditMode,\n    selectedElement,\n    setSelectedElement,\n  } = useAppStore();\n\n  const inSelectAndEditModeRef = useRef(inSelectAndEditMode);\n  useEffect(() => {\n    inSelectAndEditModeRef.current = inSelectAndEditMode;\n  }, [inSelectAndEditMode]);\n\n  // Handle click events to select elements\n  useEffect(() => {\n    if (!inSelectAndEditModeRef.current || !clickEvent) {\n      return;\n    }\n\n    clickEvent.preventDefault();\n\n    const targetElement = clickEvent.target as HTMLElement;\n    if (!targetElement) return;\n\n    // Remove highlight from previous element\n    if (selectedElement) {\n      removeHighlight(selectedElement);\n    }\n\n    // Highlight and store the new selected element\n    addHighlight(targetElement);\n    setSelectedElement(targetElement);\n  }, [clickEvent, setSelectedElement]); // eslint-disable-line react-hooks/exhaustive-deps\n\n  // Clean up highlight when exiting select-and-edit mode\n  useEffect(() => {\n    if (!inSelectAndEditMode && selectedElement) {\n      removeHighlight(selectedElement);\n      setSelectedElement(null);\n    }\n  }, [inSelectAndEditMode]); // eslint-disable-line react-hooks/exhaustive-deps\n\n  // Apply a fixed viewport per device and scale to fit the available pane.\n  useEffect(() => {\n    const updateScale = () => {\n      const wrapper = wrapperRef.current;\n      const iframe = iframeRef.current;\n      if (!wrapper || !iframe) return;\n\n      const viewportWidth = wrapper.clientWidth;\n      const viewportHeight = wrapper.clientHeight;\n\n      if 
(device === \"desktop\") {\n        const scaleValue =\n          activeMode === \"fit\"\n            ? Math.min(1, viewportWidth / DESKTOP_VIEWPORT_WIDTH)\n            : 1;\n        const iframeHeight = scaleValue > 0 ? viewportHeight / scaleValue : viewportHeight;\n\n        onScaleChange?.(scaleValue);\n        iframe.style.width = `${DESKTOP_VIEWPORT_WIDTH}px`;\n        iframe.style.height = `${iframeHeight}px`;\n        iframe.style.transform = `scale(${scaleValue})`;\n        iframe.style.transformOrigin = \"top left\";\n        return;\n      }\n\n      onScaleChange?.(1);\n      iframe.style.width = `${MOBILE_VIEWPORT_WIDTH}px`;\n      iframe.style.height = `${viewportHeight}px`;\n      iframe.style.transform = \"scale(1)\";\n      iframe.style.transformOrigin = \"top left\";\n    };\n\n    updateScale();\n\n    window.addEventListener(\"resize\", updateScale);\n    const resizeObserver = new ResizeObserver(updateScale);\n    if (wrapperRef.current) {\n      resizeObserver.observe(wrapperRef.current);\n    }\n    return () => {\n      window.removeEventListener(\"resize\", updateScale);\n      resizeObserver.disconnect();\n    };\n  }, [activeMode, device, onScaleChange]);\n\n  useEffect(() => {\n    const iframe = iframeRef.current;\n    if (!iframe) return;\n\n    const handleLoad = () => {\n      const body = iframe.contentWindow?.document.body;\n      if (!body) return;\n      body.addEventListener(\"click\", handleIframeClick);\n      body.addEventListener(\"click\", handleIframeLinkClick);\n    };\n\n    iframe.addEventListener(\"load\", handleLoad);\n\n    return () => {\n      iframe.removeEventListener(\"load\", handleLoad);\n      const body = iframe.contentWindow?.document.body;\n      if (body) {\n        body.removeEventListener(\"click\", handleIframeClick);\n        body.removeEventListener(\"click\", handleIframeLinkClick);\n      }\n    };\n  }, [handleIframeClick, handleIframeLinkClick]);\n\n  useEffect(() => {\n    const iframe = 
iframeRef.current;\n    if (!iframe) return;\n    if (iframe.srcdoc !== throttledCode) {\n      iframe.srcdoc = throttledCode;\n    }\n  }, [throttledCode]);\n\n  return (\n    <div\n      className={`flex-1 min-h-0 relative ${\n        device === \"mobile\"\n          ? \"flex justify-center overflow-hidden bg-gray-100 dark:bg-zinc-900\"\n          : activeMode === \"fit\"\n            ? \"flex justify-center overflow-hidden\"\n            : \"overflow-auto\"\n      }`}\n    >\n      <div\n        ref={wrapperRef}\n        className={`w-full h-full ${device === \"mobile\" ? \"flex justify-center\" : \"\"}`}\n      >\n        <iframe\n          id={`preview-${device}`}\n          ref={iframeRef}\n          title=\"Preview\"\n          className={classNames(\n            {\n              \"border-0\": true,\n            }\n          )}\n        ></iframe>\n      </div>\n    </div>\n  );\n}\n\nexport default PreviewComponent;\n"
  },
  {
    "path": "frontend/src/components/preview/PreviewPane.tsx",
    "content": "import { Tabs, TabsList, TabsTrigger, TabsContent } from \"../ui/tabs\";\nimport {\n  FaDesktop,\n  FaMobile,\n  FaCode,\n} from \"react-icons/fa\";\nimport {\n  LuChevronLeft,\n  LuChevronRight,\n  LuExternalLink,\n  LuRefreshCw,\n  LuDownload,\n} from \"react-icons/lu\";\nimport { useMemo, useState } from \"react\";\nimport { AppState, Settings } from \"../../types\";\nimport CodeTab from \"./CodeTab\";\nimport { Button } from \"../ui/button\";\nimport { useAppStore } from \"../../store/app-store\";\nimport { useProjectStore } from \"../../store/project-store\";\nimport { extractHtml } from \"./extractHtml\";\nimport PreviewComponent from \"./PreviewComponent\";\nimport { downloadCode } from \"./download\";\n\nfunction openInNewTab(code: string) {\n  const newWindow = window.open(\"\", \"_blank\");\n  if (newWindow) {\n    newWindow.document.open();\n    newWindow.document.write(code);\n    newWindow.document.close();\n  }\n}\n\ninterface Props {\n  settings: Settings;\n  onOpenVersions: () => void;\n}\n\nfunction PreviewPane({ settings, onOpenVersions }: Props) {\n  const { appState } = useAppStore();\n  const { inputMode, head, commits, setHead } = useProjectStore();\n  const [activeTab, setActiveTab] = useState(\"desktop\");\n  const [desktopScale, setDesktopScale] = useState(1);\n  const [desktopViewMode, setDesktopViewMode] = useState<\"fit\" | \"actual\">(\"fit\");\n\n  // Sorted commit list for version navigation\n  const sortedCommits = useMemo(() =>\n    Object.values(commits).sort(\n      (a, b) => new Date(a.dateCreated).getTime() - new Date(b.dateCreated).getTime()\n    ), [commits]);\n\n  const currentVersionIndex = sortedCommits.findIndex(c => c.hash === head);\n  const totalVersions = sortedCommits.length;\n  const canGoPrev = currentVersionIndex > 0;\n  const canGoNext = currentVersionIndex < totalVersions - 1;\n\n  const currentCommit = head && commits[head] ? commits[head] : \"\";\n  const currentCode = currentCommit\n    ? 
currentCommit.variants[currentCommit.selectedVariantIndex].code\n    : \"\";\n\n  const isSelectedVariantComplete =\n    head &&\n    commits[head] &&\n    commits[head].variants[commits[head].selectedVariantIndex].status ===\n      \"complete\";\n\n  const previewCode =\n    inputMode === \"video\" && appState === AppState.CODING\n      ? extractHtml(currentCode)\n      : currentCode;\n\n  return (\n    <div className=\"flex-1 flex flex-col min-h-0\">\n      <Tabs\n        value={activeTab}\n        onValueChange={setActiveTab}\n        className=\"flex-1 flex flex-col min-h-0\"\n      >\n        <div className=\"relative flex items-center justify-between px-4 py-2 shrink-0 border-b border-gray-200 dark:border-zinc-800 bg-white dark:bg-zinc-950\">\n          <div className=\"flex items-center gap-2\">\n            <TabsList>\n              <TabsTrigger value=\"desktop\" title=\"Desktop\" data-testid=\"tab-desktop\">\n                <FaDesktop />\n              </TabsTrigger>\n              <TabsTrigger value=\"mobile\" title=\"Mobile\" data-testid=\"tab-mobile\">\n                <FaMobile />\n              </TabsTrigger>\n              <TabsTrigger value=\"code\" title=\"Code\" data-testid=\"tab-code\" className=\"gap-2\">\n                <FaCode />\n                Code\n              </TabsTrigger>\n            </TabsList>\n            {(activeTab === \"desktop\" || activeTab === \"mobile\") && (\n              <div className=\"hidden sm:inline-flex items-center gap-2\">\n                {activeTab === \"desktop\" && (\n                  <div className=\"inline-flex items-center rounded-lg bg-gray-100 p-1 dark:bg-zinc-800\">\n                    <button\n                      type=\"button\"\n                      onClick={() => setDesktopViewMode(\"fit\")}\n                      title=\"Scale down to fit the screen\"\n                      className={`rounded-md px-3 py-1.5 text-xs font-medium transition-all ${\n                        desktopViewMode === 
\"fit\"\n                          ? \"bg-white text-gray-900 shadow-sm dark:bg-zinc-600 dark:text-zinc-100\"\n                          : \"text-gray-500 hover:text-gray-900 dark:text-zinc-400 dark:hover:text-zinc-200\"\n                      }`}\n                    >\n                      Scale\n                      {desktopScale < 1 && (\n                        <span className=\"ml-1 text-violet-600 dark:text-violet-300 font-bold\">\n                          ({Math.round(desktopScale * 100)}%)\n                        </span>\n                      )}\n                    </button>\n                    <button\n                      type=\"button\"\n                      onClick={() => setDesktopViewMode(\"actual\")}\n                      title=\"View at original size (100%)\"\n                      className={`rounded-md px-3 py-1.5 text-xs font-medium transition-all ${\n                        desktopViewMode === \"actual\"\n                          ? \"bg-white text-gray-900 shadow-sm dark:bg-zinc-600 dark:text-zinc-100\"\n                          : \"text-gray-500 hover:text-gray-900 dark:text-zinc-400 dark:hover:text-zinc-200\"\n                      }`}\n                    >\n                      100%\n                    </button>\n                  </div>\n                )}\n                <Button\n                  onClick={() => openInNewTab(previewCode)}\n                  variant=\"ghost\"\n                  size=\"icon\"\n                  title=\"Open in New Tab\"\n                  className=\"h-8 w-8\"\n                >\n                  <LuExternalLink />\n                </Button>\n              </div>\n            )}\n          </div>\n\n          {/* Version navigation */}\n          {totalVersions > 0 && (\n            <div className=\"absolute left-1/2 top-1/2 -translate-x-1/2 -translate-y-1/2 hidden md:flex items-center justify-center gap-1 bg-gray-100/50 dark:bg-zinc-800/50 rounded-full p-1 border border-gray-200/50 
dark:border-zinc-700/50 backdrop-blur-sm\">\n              <Button\n                onClick={() => canGoPrev && setHead(sortedCommits[currentVersionIndex - 1].hash)}\n                variant=\"ghost\"\n                size=\"icon\"\n                title=\"Previous version\"\n                className={`h-6 w-6 rounded-full hover:bg-white dark:hover:bg-zinc-700 ${!canGoPrev ? \"opacity-30 cursor-not-allowed\" : \"\"}`}\n                disabled={!canGoPrev}\n              >\n                <LuChevronLeft className=\"w-3.5 h-3.5\" />\n              </Button>\n              <div\n                onClick={onOpenVersions}\n                className=\"flex items-center justify-center gap-2 px-1 cursor-pointer hover:opacity-70 transition-opacity w-32\"\n                title=\"View all versions\"\n              >\n                <span className=\"text-xs font-semibold text-gray-700 dark:text-gray-200 leading-none\">\n                  Version {currentVersionIndex + 1}\n                </span>\n                {currentVersionIndex === totalVersions - 1 && (\n                  <span className=\"rounded-full bg-gray-100 px-2 py-0.5 text-[10px] font-medium text-gray-700 dark:bg-gray-800 dark:text-gray-300 leading-none flex items-center h-4\">\n                    Latest\n                  </span>\n                )}\n              </div>\n              <Button\n                onClick={() => canGoNext && setHead(sortedCommits[currentVersionIndex + 1].hash)}\n                variant=\"ghost\"\n                size=\"icon\"\n                title=\"Next version\"\n                className={`h-6 w-6 rounded-full hover:bg-white dark:hover:bg-zinc-700 ${!canGoNext ? 
\"opacity-30 cursor-not-allowed\" : \"\"}`}\n                disabled={!canGoNext}\n              >\n                <LuChevronRight className=\"w-3.5 h-3.5\" />\n              </Button>\n            </div>\n          )}\n\n          <div className=\"flex items-center gap-1\">\n            {(appState === AppState.CODE_READY || isSelectedVariantComplete) && (\n              <Button\n                onClick={() => downloadCode(previewCode)}\n                variant=\"ghost\"\n                size=\"icon\"\n                title=\"Download Code\"\n                className=\"h-9 w-9\"\n                data-testid=\"download-code\"\n              >\n                <LuDownload />\n              </Button>\n            )}\n            <Button\n              onClick={() => {\n                const iframes = document.querySelectorAll(\"iframe\");\n                iframes.forEach((iframe) => {\n                  if (iframe.srcdoc) {\n                    const content = iframe.srcdoc;\n                    iframe.srcdoc = \"\";\n                    iframe.srcdoc = content;\n                  }\n                });\n              }}\n              variant=\"ghost\"\n              size=\"icon\"\n              title=\"Refresh Preview\"\n              className=\"h-9 w-9\"\n            >\n              <LuRefreshCw />\n            </Button>\n          </div>\n        </div>\n        <TabsContent value=\"desktop\" className=\"flex-1 min-h-0 mt-0 data-[state=active]:flex data-[state=active]:flex-col\">\n          <PreviewComponent\n            code={previewCode}\n            device=\"desktop\"\n            onScaleChange={setDesktopScale}\n            viewMode={desktopViewMode}\n          />\n        </TabsContent>\n        <TabsContent value=\"mobile\" className=\"flex-1 min-h-0 mt-0 data-[state=active]:flex data-[state=active]:flex-col\">\n          <PreviewComponent\n            code={previewCode}\n            device=\"mobile\"\n            viewMode=\"actual\"\n          />\n     
   </TabsContent>\n        <TabsContent value=\"code\" className=\"flex-1 min-h-0 mt-0 overflow-auto\">\n          <CodeTab\n            code={previewCode}\n            setCode={() => {}}\n            settings={settings}\n          />\n        </TabsContent>\n      </Tabs>\n    </div>\n  );\n}\n\nexport default PreviewPane;\n"
  },
  {
    "path": "frontend/src/components/preview/download.ts",
    "content": "export const downloadCode = (code: string) => {\n  // Create a blob from the generated code\n  const blob = new Blob([code], { type: \"text/html\" });\n  const url = URL.createObjectURL(blob);\n\n  // Create an anchor element and set properties for download\n  const a = document.createElement(\"a\");\n  a.href = url;\n  a.download = \"index.html\"; // Set the file name for download\n  document.body.appendChild(a); // Append to the document\n  a.click(); // Programmatically click the anchor to trigger download\n\n  // Clean up by removing the anchor and revoking the Blob URL\n  document.body.removeChild(a);\n  URL.revokeObjectURL(url);\n};\n"
  },
  {
    "path": "frontend/src/components/preview/extractHtml.ts",
    "content": "// Extract HTML content, supporting <html> tags with attributes like <html lang=\"en\">\nexport function extractHtml(code: string): string {\n  // Use regex to find an <html> tag with optional attributes\n  const htmlStartMatch = code.match(/<html[^>]*>/i);\n  if (!htmlStartMatch) {\n    return \"\";\n  }\n\n  // Start from the last occurrence of the matched opening tag\n  const htmlStartIndex = code.lastIndexOf(htmlStartMatch[0]);\n  const htmlEndIndex = code.indexOf(\"</html>\", htmlStartIndex);\n\n  // If \"</html>\" is found, include it in the returned slice\n  if (htmlEndIndex !== -1) {\n    return code.slice(htmlStartIndex, htmlEndIndex + \"</html>\".length);\n  }\n\n  // If \"</html>\" is not found, return the rest of the string from the last \"<html>\"\n  return code.slice(htmlStartIndex);\n}\n"
  },
  {
    "path": "frontend/src/components/preview/simpleHash.ts",
    "content": "\nexport function simpleHash(str: string, seed = 0) {\n  let hash = seed;\n  for (let i = 0; i < str.length; i++) {\n    const char = str.charCodeAt(i);\n    hash = (hash << 5) - hash + char;\n    hash |= 0; // Convert to 32bit integer\n  }\n  return hash;\n}\n"
  },
  {
    "path": "frontend/src/components/recording/ScreenRecorder.tsx",
    "content": "import { useState } from \"react\";\nimport { Button } from \"../ui/button\";\nimport { ScreenRecorderState } from \"../../types\";\nimport { blobToBase64DataUrl } from \"./utils\";\nimport fixWebmDuration from \"webm-duration-fix\";\nimport toast from \"react-hot-toast\";\nimport OutputSettingsSection from \"../settings/OutputSettingsSection\";\nimport { Stack } from \"../../lib/stacks\";\n\ninterface Props {\n  screenRecorderState: ScreenRecorderState;\n  setScreenRecorderState: (state: ScreenRecorderState) => void;\n  generateCode: (\n    referenceImages: string[],\n    inputMode: \"image\" | \"video\"\n  ) => void;\n  stack: Stack;\n  setStack: (stack: Stack) => void;\n}\n\nfunction ScreenRecorder({\n  screenRecorderState,\n  setScreenRecorderState,\n  generateCode,\n  stack,\n  setStack,\n}: Props) {\n  const [mediaStream, setMediaStream] = useState<MediaStream | null>(null);\n  const [mediaRecorder, setMediaRecorder] = useState<MediaRecorder | null>(\n    null\n  );\n  const [screenRecordingDataUrl, setScreenRecordingDataUrl] = useState<\n    string | null\n  >(null);\n\n  const startScreenRecording = async () => {\n    try {\n      // Get the screen recording stream\n      const stream = await navigator.mediaDevices.getDisplayMedia({\n        video: true,\n        audio: { echoCancellation: true },\n      });\n      setMediaStream(stream);\n\n      // TODO: Test across different browsers\n      // Create the media recorder\n      const options = { mimeType: \"video/webm\" };\n      const mediaRecorder = new MediaRecorder(stream, options);\n      setMediaRecorder(mediaRecorder);\n\n      const chunks: BlobPart[] = [];\n\n      // Accumalate chunks as data is available\n      mediaRecorder.ondataavailable = (e: BlobEvent) => chunks.push(e.data);\n\n      // When media recorder is stopped, create a data URL\n      mediaRecorder.onstop = async () => {\n        // TODO: Do I need to fix duration if it's not a webm?\n        const completeBlob = 
await fixWebmDuration(\n          new Blob(chunks, {\n            type: options.mimeType,\n          })\n        );\n\n        const dataUrl = await blobToBase64DataUrl(completeBlob);\n\n        setScreenRecordingDataUrl(dataUrl);\n        setScreenRecorderState(ScreenRecorderState.FINISHED);\n      };\n\n      // Start recording\n      mediaRecorder.start();\n      setScreenRecorderState(ScreenRecorderState.RECORDING);\n    } catch (error) {\n      toast.error(\"Could not start screen recording\");\n      throw error;\n    }\n  };\n\n  const stopScreenRecording = () => {\n    // Stop the recorder\n    if (mediaRecorder) {\n      mediaRecorder.stop();\n      setMediaRecorder(null);\n    }\n\n    // Stop the screen sharing stream\n    if (mediaStream) {\n      mediaStream.getTracks().forEach((track) => {\n        track.stop();\n      });\n    }\n  };\n\n  const kickoffGeneration = () => {\n    if (screenRecordingDataUrl) {\n      generateCode([screenRecordingDataUrl], \"video\");\n    } else {\n      toast.error(\"Screen recording does not exist. 
Please try again.\");\n      throw new Error(\"No screen recording data url\");\n    }\n  };\n\n  return (\n    <div className=\"flex items-center justify-center my-3\">\n      {screenRecorderState === ScreenRecorderState.INITIAL && (\n        <Button onClick={startScreenRecording}>Record Screen</Button>\n      )}\n\n      {screenRecorderState === ScreenRecorderState.RECORDING && (\n        <div className=\"flex items-center flex-col gap-y-4\">\n          <div className=\"flex items-center mr-2 text-xl gap-x-1\">\n            <span className=\"block h-10 w-10 bg-red-600 rounded-full mr-1 animate-pulse\"></span>\n            <span>Recording...</span>\n          </div>\n          <Button onClick={stopScreenRecording}>Finish Recording</Button>\n        </div>\n      )}\n\n      {screenRecorderState === ScreenRecorderState.FINISHED && (\n        <div className=\"flex items-center flex-col gap-y-4 w-full max-w-md\">\n          <div className=\"flex items-center mr-2 text-xl gap-x-1\">\n            <span>Screen Recording Captured.</span>\n          </div>\n          {screenRecordingDataUrl && (\n            <video\n              muted\n              autoPlay\n              loop\n              className=\"w-full border border-gray-200 rounded-md\"\n              src={screenRecordingDataUrl}\n            />\n          )}\n          <div className=\"w-full\">\n            <OutputSettingsSection\n              stack={stack}\n              setStack={setStack}\n            />\n          </div>\n          <div className=\"flex gap-x-2 w-full\">\n            <Button\n              variant=\"secondary\"\n              className=\"flex-1\"\n              onClick={() =>\n                setScreenRecorderState(ScreenRecorderState.INITIAL)\n              }\n            >\n              Re-record\n            </Button>\n            <Button className=\"flex-1\" onClick={kickoffGeneration}>Generate</Button>\n          </div>\n        </div>\n      )}\n    </div>\n  );\n}\n\nexport 
default ScreenRecorder;\n"
  },
  {
    "path": "frontend/src/components/recording/utils.ts",
    "content": "export function downloadBlob(blob: Blob) {\n  // Create a URL for the blob object\n  const videoURL = URL.createObjectURL(blob);\n\n  // Create a temporary anchor element and trigger the download\n  const a = document.createElement(\"a\");\n  a.href = videoURL;\n  a.download = \"recording.webm\";\n  document.body.appendChild(a);\n  a.click();\n  document.body.removeChild(a);\n\n  // Clear object URL\n  URL.revokeObjectURL(videoURL);\n}\n\nexport function blobToBase64DataUrl(blob: Blob): Promise<string> {\n  return new Promise((resolve, reject) => {\n    const reader = new FileReader();\n    reader.onloadend = () => {\n      if (reader.result) {\n        resolve(reader.result as string);\n      } else {\n        reject(new Error(\"FileReader did not return a result.\"));\n      }\n    };\n    reader.onerror = () =>\n      reject(new Error(\"FileReader encountered an error.\"));\n    reader.readAsDataURL(blob);\n  });\n}\n"
  },
  {
    "path": "frontend/src/components/select-and-edit/utils.ts",
    "content": "export function removeHighlight(element: HTMLElement) {\n  element.style.outline = \"\";\n  element.style.backgroundColor = \"\";\n  return element;\n}\n\nexport function addHighlight(element: HTMLElement) {\n  element.style.outline = \"2px dashed #1846db\";\n  element.style.backgroundColor = \"#bfcbf5\";\n  return element;\n}\n"
  },
  {
    "path": "frontend/src/components/settings/GenerationSettings.tsx",
    "content": "import React from \"react\";\nimport { useAppStore } from \"../../store/app-store\";\nimport { useProjectStore } from \"../../store/project-store\";\nimport { AppState, Settings } from \"../../types\";\nimport OutputSettingsSection from \"./OutputSettingsSection\";\nimport { Stack } from \"../../lib/stacks\";\n\ninterface GenerationSettingsProps {\n  settings: Settings;\n  setSettings: React.Dispatch<React.SetStateAction<Settings>>;\n}\n\nexport const GenerationSettings: React.FC<GenerationSettingsProps> = ({\n  settings,\n  setSettings,\n}) => {\n  const { appState } = useAppStore();\n  const { inputMode } = useProjectStore();\n\n  function setStack(stack: Stack) {\n    setSettings((prev: Settings) => ({\n      ...prev,\n      generatedCodeConfig: stack,\n    }));\n  }\n\n  const shouldDisableUpdates =\n    appState === AppState.CODING || appState === AppState.CODE_READY;\n\n  // Hide stack selector for video mode (only HTML + Tailwind is supported)\n  if (inputMode === \"video\") {\n    return null;\n  }\n\n  return (\n    <div className=\"flex flex-col gap-y-2\">\n      <OutputSettingsSection\n        stack={settings.generatedCodeConfig}\n        setStack={setStack}\n        shouldDisableUpdates={shouldDisableUpdates}\n      />\n    </div>\n  );\n};\n"
  },
  {
    "path": "frontend/src/components/settings/OutputSettingsSection.tsx",
    "content": "import {\n  Select,\n  SelectContent,\n  SelectGroup,\n  SelectItem,\n  SelectTrigger,\n  SelectValue,\n} from \"../ui/select\";\nimport { Stack } from \"../../lib/stacks\";\nimport StackLabel from \"../core/StackLabel\";\n\ninterface Props {\n  stack: Stack | undefined;\n  setStack: (config: Stack) => void;\n  label?: string;\n  shouldDisableUpdates?: boolean;\n}\n\nfunction OutputSettingsSection({\n  stack,\n  setStack,\n  label = \"Stack:\",\n  shouldDisableUpdates = false,\n}: Props) {\n  return (\n    <div className=\"flex flex-col gap-y-2 justify-between text-sm\">\n      <div className=\"grid grid-cols-3 items-center gap-4\">\n        <span>{label}</span>\n        <Select\n          value={stack ?? \"\"}\n          onValueChange={(value: string) => setStack(value as Stack)}\n          disabled={shouldDisableUpdates}\n        >\n          <SelectTrigger\n            className=\"col-span-2\"\n            id=\"output-settings-js\"\n            data-testid=\"stack-select\"\n          >\n            <SelectValue placeholder=\"Select a stack\" />\n          </SelectTrigger>\n          <SelectContent>\n            <SelectGroup>\n              {Object.values(Stack).map((stack) => (\n                <SelectItem key={stack} value={stack}>\n                  <div className=\"flex items-center\">\n                    <StackLabel stack={stack} />\n                  </div>\n                </SelectItem>\n              ))}\n            </SelectGroup>\n          </SelectContent>\n        </Select>\n      </div>\n    </div>\n  );\n}\n\nexport default OutputSettingsSection;\n"
  },
  {
    "path": "frontend/src/components/settings/SettingsTab.tsx",
    "content": "import React from \"react\";\nimport { AppTheme, EditorTheme, Settings } from \"../../types\";\nimport { capitalize } from \"../../lib/utils\";\nimport {\n  Select,\n  SelectContent,\n  SelectItem,\n  SelectTrigger,\n} from \"../ui/select\";\nimport { Input } from \"../ui/input\";\nimport { Switch } from \"../ui/switch\";\nimport { IS_RUNNING_ON_CLOUD } from \"../../config\";\n\ninterface Props {\n  settings: Settings;\n  setSettings: React.Dispatch<React.SetStateAction<Settings>>;\n  appTheme: AppTheme;\n  setAppTheme: React.Dispatch<React.SetStateAction<AppTheme>>;\n}\n\nfunction SettingsTab({ settings, setSettings, appTheme, setAppTheme }: Props) {\n  const handleThemeChange = (theme: EditorTheme) => {\n    setSettings((s) => ({\n      ...s,\n      editorTheme: theme,\n    }));\n  };\n\n  return (\n    <div className=\"flex-1 overflow-y-auto\">\n      <div className=\"px-4 py-4 lg:px-6 lg:py-6\">\n        {/* Header */}\n        <div className=\"mb-6\">\n          <h1 className=\"text-lg font-semibold text-gray-900 dark:text-white\">\n            Settings\n          </h1>\n        </div>\n\n        <div className=\"mx-auto max-w-lg space-y-6\">\n          {/* Theme */}\n          <div className=\"rounded-lg border border-gray-200 bg-white dark:border-zinc-700 dark:bg-zinc-800/60\">\n            <div className=\"border-b border-gray-100 px-4 py-3 dark:border-zinc-700\">\n              <h2 className=\"text-sm font-medium text-gray-900 dark:text-white\">\n                Theme\n              </h2>\n            </div>\n            <div className=\"divide-y divide-gray-100 dark:divide-zinc-700\">\n              <div className=\"flex items-center justify-between px-4 py-3\">\n                <div>\n                  <span className=\"text-sm text-gray-700 dark:text-zinc-300\">\n                    App Theme\n                  </span>\n                  <p className=\"mt-0.5 text-xs text-gray-500 dark:text-zinc-400\">\n                    System 
default, with optional light/dark override\n                  </p>\n                </div>\n                <Select\n                  name=\"app-theme\"\n                  value={appTheme}\n                  onValueChange={(value) => setAppTheme(value as AppTheme)}\n                >\n                  <SelectTrigger className=\"w-[140px]\">\n                    {capitalize(appTheme)}\n                  </SelectTrigger>\n                  <SelectContent>\n                    <SelectItem value={AppTheme.SYSTEM}>System</SelectItem>\n                    <SelectItem value={AppTheme.LIGHT}>Light</SelectItem>\n                    <SelectItem value={AppTheme.DARK}>Dark</SelectItem>\n                  </SelectContent>\n                </Select>\n              </div>\n              <div className=\"flex items-center justify-between px-4 py-3\">\n                <div>\n                  <span className=\"text-sm text-gray-700 dark:text-zinc-300\">\n                    Code Editor Theme\n                  </span>\n                  <p className=\"mt-0.5 text-xs text-gray-500 dark:text-zinc-400\">\n                    Requires page refresh to update\n                  </p>\n                </div>\n                <Select\n                  name=\"editor-theme\"\n                  value={settings.editorTheme}\n                  onValueChange={(value) =>\n                    handleThemeChange(value as EditorTheme)\n                  }\n                >\n                  <SelectTrigger className=\"w-[140px]\">\n                    <span className=\"notranslate\" translate=\"no\">\n                      {capitalize(settings.editorTheme)}\n                    </span>\n                  </SelectTrigger>\n                  <SelectContent>\n                    <SelectItem value=\"cobalt\">\n                      <span className=\"notranslate\" translate=\"no\">Cobalt</span>\n                    </SelectItem>\n                    <SelectItem value=\"espresso\">\n                     
 <span className=\"notranslate\" translate=\"no\">Espresso</span>\n                    </SelectItem>\n                  </SelectContent>\n                </Select>\n              </div>\n            </div>\n          </div>\n\n          {/* API Keys */}\n          <div className=\"rounded-lg border border-gray-200 bg-white dark:border-zinc-700 dark:bg-zinc-800/60\">\n            <div className=\"border-b border-gray-100 px-4 py-3 dark:border-zinc-700\">\n              <h2 className=\"text-sm font-medium text-gray-900 dark:text-white\">\n                API Keys\n              </h2>\n            </div>\n            <div className=\"space-y-4 p-4\">\n              <div>\n                <p className=\"text-sm font-medium text-gray-700 dark:text-zinc-300\">\n                  OpenAI API key\n                </p>\n                <p className=\"mt-1 text-xs text-gray-500 dark:text-zinc-400\">\n                  Only stored in your browser. Never stored on servers. Overrides\n                  your .env config.\n                </p>\n                <Input\n                  id=\"openai-api-key\"\n                  className=\"mt-2\"\n                  placeholder=\"OpenAI API key\"\n                  value={settings.openAiApiKey || \"\"}\n                  onChange={(e) =>\n                    setSettings((s) => ({\n                      ...s,\n                      openAiApiKey: e.target.value,\n                    }))\n                  }\n                />\n              </div>\n\n              {!IS_RUNNING_ON_CLOUD && (\n                <div>\n                  <p className=\"text-sm font-medium text-gray-700 dark:text-zinc-300\">\n                    OpenAI Base URL (optional)\n                  </p>\n                  <p className=\"mt-1 text-xs text-gray-500 dark:text-zinc-400\">\n                    Replace with a proxy URL if you don't want to use the\n                    default.\n                  </p>\n                  <Input\n                    
id=\"openai-base-url\"\n                    className=\"mt-2\"\n                    placeholder=\"OpenAI Base URL\"\n                    value={settings.openAiBaseURL || \"\"}\n                    onChange={(e) =>\n                      setSettings((s) => ({\n                        ...s,\n                        openAiBaseURL: e.target.value,\n                      }))\n                    }\n                  />\n                </div>\n              )}\n\n              <div>\n                <p className=\"text-sm font-medium text-gray-700 dark:text-zinc-300\">\n                  Anthropic API key\n                </p>\n                <p className=\"mt-1 text-xs text-gray-500 dark:text-zinc-400\">\n                  Only stored in your browser. Never stored on servers. Overrides\n                  your .env config.\n                </p>\n                <Input\n                  id=\"anthropic-api-key\"\n                  className=\"mt-2\"\n                  placeholder=\"Anthropic API key\"\n                  value={settings.anthropicApiKey || \"\"}\n                  onChange={(e) =>\n                    setSettings((s) => ({\n                      ...s,\n                      anthropicApiKey: e.target.value,\n                    }))\n                  }\n                />\n              </div>\n\n              <div>\n                <p className=\"text-sm font-medium text-gray-700 dark:text-zinc-300\">\n                  Gemini API key\n                </p>\n                <p className=\"mt-1 text-xs text-gray-500 dark:text-zinc-400\">\n                  Only stored in your browser. Never stored on servers. 
Overrides\n                  your .env config.\n                </p>\n                <Input\n                  id=\"gemini-api-key\"\n                  className=\"mt-2\"\n                  placeholder=\"Gemini API key\"\n                  value={settings.geminiApiKey || \"\"}\n                  onChange={(e) =>\n                    setSettings((s) => ({\n                      ...s,\n                      geminiApiKey: e.target.value,\n                    }))\n                  }\n                />\n              </div>\n            </div>\n          </div>\n\n          {/* Image Generation */}\n          <div className=\"rounded-lg border border-gray-200 bg-white dark:border-zinc-700 dark:bg-zinc-800/60\">\n            <div className=\"border-b border-gray-100 px-4 py-3 dark:border-zinc-700\">\n              <h2 className=\"text-sm font-medium text-gray-900 dark:text-white\">\n                Image Generation\n              </h2>\n            </div>\n            <div className=\"p-4\">\n              <div className=\"flex items-center justify-between\">\n                <div>\n                  <p className=\"text-sm text-gray-700 dark:text-zinc-300\">\n                    Placeholder Images\n                  </p>\n                  <p className=\"mt-1 text-xs text-gray-500 dark:text-zinc-400\">\n                    More fun with it but if you want to save money, turn it off.\n                  </p>\n                </div>\n                <Switch\n                  id=\"image-generation\"\n                  checked={settings.isImageGenerationEnabled}\n                  onCheckedChange={() =>\n                    setSettings((s) => ({\n                      ...s,\n                      isImageGenerationEnabled: !s.isImageGenerationEnabled,\n                    }))\n                  }\n                />\n              </div>\n            </div>\n          </div>\n\n          {/* Screenshot by URL */}\n          <div className=\"rounded-lg border 
border-gray-200 bg-white dark:border-zinc-700 dark:bg-zinc-800/60\">\n            <div className=\"border-b border-gray-100 px-4 py-3 dark:border-zinc-700\">\n              <h2 className=\"text-sm font-medium text-gray-900 dark:text-white\">\n                Screenshot by URL\n              </h2>\n            </div>\n            <div className=\"p-4\">\n              <p className=\"text-xs text-gray-500 dark:text-zinc-400\">\n                If you want to use URLs directly instead of taking the screenshot\n                yourself, add a ScreenshotOne API key.{\" \"}\n                <a\n                  href=\"https://screenshotone.com?via=screenshot-to-code\"\n                  className=\"text-violet-600 hover:text-violet-700 dark:text-violet-400 dark:hover:text-violet-300\"\n                  target=\"_blank\"\n                >\n                  Get 100 screenshots/mo for free.\n                </a>\n              </p>\n              <Input\n                id=\"screenshot-one-api-key\"\n                className=\"mt-3\"\n                placeholder=\"ScreenshotOne API key\"\n                value={settings.screenshotOneApiKey || \"\"}\n                onChange={(e) =>\n                  setSettings((s) => ({\n                    ...s,\n                    screenshotOneApiKey: e.target.value,\n                  }))\n                }\n              />\n            </div>\n          </div>\n        </div>\n      </div>\n    </div>\n  );\n}\n\nexport default SettingsTab;\n"
  },
  {
    "path": "frontend/src/components/sidebar/IconStrip.tsx",
    "content": "import { LuClock, LuCode, LuSettings, LuPlus } from \"react-icons/lu\";\n\ninterface IconStripProps {\n  isHistoryOpen: boolean;\n  isEditorOpen: boolean;\n  isSettingsOpen: boolean;\n  showHistory: boolean;\n  showEditor: boolean;\n  onToggleHistory: () => void;\n  onToggleEditor: () => void;\n  onLogoClick: () => void;\n  onNewProject: () => void;\n  onOpenSettings: () => void;\n}\n\nfunction IconStrip({\n  isHistoryOpen,\n  isEditorOpen,\n  isSettingsOpen,\n  showHistory,\n  showEditor,\n  onToggleHistory,\n  onToggleEditor,\n  onLogoClick,\n  onNewProject,\n  onOpenSettings,\n}: IconStripProps) {\n  return (\n    <div className=\"flex w-full items-center justify-between border-b border-gray-200 bg-gray-50 px-2 py-2 dark:border-zinc-800 dark:bg-zinc-900 lg:h-full lg:w-16 lg:flex-col lg:items-center lg:gap-y-3 lg:border-b-0 lg:border-r lg:px-0 lg:py-4\">\n      {/* Logo */}\n      <button\n        onClick={onLogoClick}\n        className=\"rounded-lg p-2 transition-colors hover:bg-gray-200/70 dark:hover:bg-zinc-800 lg:mb-2 lg:p-1\"\n      >\n        <img\n          src=\"/favicon/main.png\"\n          alt=\"Logo\"\n          className=\"w-5 h-5 dark:invert\"\n        />\n      </button>\n\n      <div className=\"flex items-center gap-1 lg:flex-col lg:gap-0 lg:contents\">\n        {/* Editor */}\n        {showEditor && (\n          <button\n            onClick={onToggleEditor}\n            className={`flex items-center justify-center rounded-lg p-2 transition-colors lg:flex-col lg:gap-1 lg:px-2 lg:py-1.5 ${\n              isEditorOpen\n                ? 
\"text-gray-900 dark:text-white\"\n                : \"text-gray-400 dark:text-gray-500 hover:text-gray-600 dark:hover:text-gray-300\"\n            }`}\n            title=\"Editor\"\n          >\n            <LuCode className=\"w-[18px] h-[18px]\" />\n            <span className=\"hidden text-[10px] leading-none lg:block\">Editor</span>\n          </button>\n        )}\n\n        {/* Versions */}\n        {showHistory && (\n          <button\n            onClick={onToggleHistory}\n            className={`flex items-center justify-center rounded-lg p-2 transition-colors lg:flex-col lg:gap-1 lg:px-2 lg:py-1.5 ${\n              isHistoryOpen\n                ? \"text-gray-900 dark:text-white\"\n                : \"text-gray-400 dark:text-gray-500 hover:text-gray-600 dark:hover:text-gray-300\"\n            }`}\n            title=\"Versions\"\n          >\n            <LuClock className=\"w-[18px] h-[18px]\" />\n            <span className=\"hidden text-[10px] leading-none lg:block\">Versions</span>\n          </button>\n        )}\n\n        <button\n          onClick={onNewProject}\n          className=\"flex items-center justify-center rounded-lg p-2 transition-colors bg-violet-100 text-violet-700 hover:bg-violet-200 lg:flex-col lg:gap-1 lg:px-2 lg:py-1.5 dark:bg-violet-900/40 dark:text-violet-200 dark:hover:bg-violet-900/60\"\n          title=\"Start a new project\"\n        >\n          <LuPlus className=\"w-[18px] h-[18px]\" />\n          <span className=\"hidden text-[10px] leading-none lg:block font-medium\">New</span>\n        </button>\n      </div>\n\n      {/* Spacer pushes settings to bottom */}\n      <div className=\"hidden flex-1 lg:block\" />\n\n      {/* Settings */}\n      <button\n        onClick={onOpenSettings}\n        className={`flex items-center justify-center rounded-lg p-2 transition-colors lg:flex-col lg:gap-1 lg:px-2 lg:py-1.5 ${\n          isSettingsOpen\n            ? 
\"text-gray-900 dark:text-white\"\n            : \"text-gray-400 hover:text-gray-600 dark:text-gray-500 dark:hover:text-gray-300\"\n        }`}\n        title=\"Settings\"\n      >\n        <LuSettings className=\"w-[18px] h-[18px]\" />\n        <span className=\"hidden text-[10px] leading-none lg:block\">Settings</span>\n      </button>\n    </div>\n  );\n}\n\nexport default IconStrip;\n"
  },
  {
    "path": "frontend/src/components/sidebar/Sidebar.tsx",
    "content": "import { useAppStore } from \"../../store/app-store\";\nimport { useProjectStore } from \"../../store/project-store\";\nimport { AppState } from \"../../types\";\nimport { Button } from \"../ui/button\";\nimport { useEffect, useRef, useState, useCallback } from \"react\";\nimport { LuMousePointerClick, LuRefreshCw, LuArrowUp, LuX } from \"react-icons/lu\";\nimport { toast } from \"react-hot-toast\";\n\nimport Variants from \"../variants/Variants\";\nimport UpdateImageUpload, { UpdateImagePreview } from \"../UpdateImageUpload\";\nimport AgentActivity from \"../agent/AgentActivity\";\nimport WorkingPulse from \"../core/WorkingPulse\";\nimport ImageLightbox from \"../ImageLightbox\";\nimport { Commit } from \"../commits/types\";\nimport { removeHighlight } from \"../select-and-edit/utils\";\nimport { CodeGenerationModel } from \"../../lib/models\";\n\ninterface SidebarProps {\n  showSelectAndEditFeature: boolean;\n  doUpdate: (instruction: string) => void;\n  regenerate: () => void;\n  cancelCodeGeneration: () => void;\n  onOpenVersions: () => void;\n}\n\nconst MAX_UPDATE_IMAGES = 5;\n\nfunction extractTagName(html: string): string {\n  const match = html.match(/^<(\\w+)/);\n  return match ? 
match[1].toLowerCase() : \"element\";\n}\n\nfunction summarizeLatestChange(commit: Commit | null): string | null {\n  if (!commit) return null;\n  if (commit.type === \"code_create\") return \"Imported existing code.\";\n\n  const text = commit.inputs.text.trim();\n  if (text.length > 0) return text;\n\n  if (commit.type === \"ai_create\") {\n    return \"Create\";\n  }\n\n  if (commit.inputs.images.length > 1) {\n    return `Updated with ${commit.inputs.images.length} reference images.`;\n  }\n  if (commit.inputs.images.length === 1) {\n    return \"Updated with one reference image.\";\n  }\n  return \"Updated code.\";\n}\n\nfunction getSelectedElementTag(commit: Commit | null): string | null {\n  if (!commit || commit.type === \"code_create\") return null;\n  const html = commit.inputs.selectedElementHtml;\n  if (!html) return null;\n  return extractTagName(html);\n}\n\nfunction isSlowGeminiModel(model?: string): boolean {\n  return (\n    model === CodeGenerationModel.GEMINI_3_1_PRO_PREVIEW_HIGH ||\n    model === CodeGenerationModel.GEMINI_3_1_PRO_PREVIEW_MEDIUM\n  );\n}\n\nfunction Sidebar({\n  showSelectAndEditFeature,\n  doUpdate,\n  regenerate,\n  cancelCodeGeneration,\n  onOpenVersions,\n}: SidebarProps) {\n  const textareaRef = useRef<HTMLTextAreaElement>(null);\n  const middlePaneRef = useRef<HTMLDivElement>(null);\n  const [isErrorExpanded, setIsErrorExpanded] = useState(false);\n  const [isPromptExpanded, setIsPromptExpanded] = useState(false);\n  const [isPromptClamped, setIsPromptClamped] = useState(false);\n  const promptTextRef = useRef<HTMLParagraphElement>(null);\n  const [isDragging, setIsDragging] = useState(false);\n  const [nowMs, setNowMs] = useState(() => Date.now());\n  const [lightboxImage, setLightboxImage] = useState<string | null>(null);\n\n  const {\n    appState,\n    updateInstruction,\n    setUpdateInstruction,\n    updateImages,\n    setUpdateImages,\n    inSelectAndEditMode,\n    toggleInSelectAndEditMode,\n    selectedElement,\n  
  setSelectedElement,\n  } = useAppStore();\n\n  // Helper function to convert file to data URL\n  const fileToDataURL = (file: File): Promise<string> => {\n    return new Promise((resolve, reject) => {\n      const reader = new FileReader();\n      reader.onload = () => resolve(reader.result as string);\n      reader.onerror = (error) => reject(error);\n      reader.readAsDataURL(file);\n    });\n  };\n\n  const handleDrop = useCallback(\n    async (e: React.DragEvent) => {\n      e.preventDefault();\n      setIsDragging(false);\n\n      const files = Array.from(e.dataTransfer.files).filter(\n        (file) => file.type === \"image/png\" || file.type === \"image/jpeg\"\n      );\n\n      if (files.length === 0) return;\n\n      try {\n        if (updateImages.length >= MAX_UPDATE_IMAGES) {\n          toast.error(\n            `You’ve reached the limit of ${MAX_UPDATE_IMAGES} reference images. Remove one to add another.`\n          );\n          return;\n        }\n\n        const remainingSlots = MAX_UPDATE_IMAGES - updateImages.length;\n        let filesToAdd = files;\n        if (filesToAdd.length > remainingSlots) {\n          toast.error(\n            `Only ${remainingSlots} more image${\n              remainingSlots === 1 ? \"\" : \"s\"\n            } will be added to stay within the ${MAX_UPDATE_IMAGES}-image limit.`\n          );\n          filesToAdd = filesToAdd.slice(0, remainingSlots);\n        }\n\n        const newImagePromises = filesToAdd.map((file) => fileToDataURL(file));\n        const newImages = await Promise.all(newImagePromises);\n        setUpdateImages([...updateImages, ...newImages]);\n      } catch (error) {\n        console.error(\"Error reading files:\", error);\n      }\n    },\n    [updateImages, setUpdateImages]\n  );\n\n  const { head, commits, latestCommitHash, setHead } = useProjectStore();\n\n  const currentCommit = head ? 
commits[head] : null;\n  const latestChangeSummary = summarizeLatestChange(currentCommit);\n  const selectedElementTag = getSelectedElementTag(currentCommit);\n  const latestChangeImages =\n    currentCommit && currentCommit.type !== \"code_create\"\n      ? currentCommit.inputs.images\n      : [];\n  const latestChangeVideos =\n    currentCommit && currentCommit.type !== \"code_create\"\n      ? currentCommit.inputs.videos ?? []\n      : [];\n  const selectedVariantIndex = currentCommit?.selectedVariantIndex ?? 0;\n  const selectedVariant = currentCommit?.variants[selectedVariantIndex];\n  const selectedVariantEvents = selectedVariant?.agentEvents ?? [];\n  const showWorkingIndicator =\n    appState === AppState.CODING &&\n    selectedVariantEvents.length === 0 &&\n    head === latestCommitHash;\n  const requestStartMs =\n    selectedVariant?.requestStartedAt ??\n    (currentCommit?.dateCreated\n      ? new Date(currentCommit.dateCreated).getTime()\n      : undefined);\n  const elapsedSeconds = requestStartMs\n    ? Math.max(1, Math.round((nowMs - requestStartMs) / 1000))\n    : undefined;\n\n  const isFirstGeneration = currentCommit?.type === \"ai_create\";\n  const isViewingOlderVersion = head !== null && head !== latestCommitHash;\n\n  // Compute version number for the current head\n  const currentVersionNumber = (() => {\n    if (!head) return null;\n    const sorted = Object.values(commits).sort(\n      (a, b) => new Date(a.dateCreated).getTime() - new Date(b.dateCreated).getTime()\n    );\n    const index = sorted.findIndex((c) => c.hash === head);\n    return index !== -1 ? 
index + 1 : null;\n  })();\n\n  // Check if the currently selected variant is complete\n  const isSelectedVariantComplete =\n    head &&\n    commits[head] &&\n    commits[head].variants[commits[head].selectedVariantIndex].status ===\n      \"complete\";\n\n  // Check if the currently selected variant has an error\n  const isSelectedVariantError =\n    head &&\n    commits[head] &&\n    commits[head].variants[commits[head].selectedVariantIndex].status ===\n      \"error\";\n\n  // Get the error message from the selected variant\n  const selectedVariantErrorMessage =\n    head &&\n    commits[head] &&\n    commits[head].variants[commits[head].selectedVariantIndex].errorMessage;\n\n  // Auto-resize textarea to fit content\n  const autoResize = useCallback(() => {\n    const textarea = textareaRef.current;\n    if (textarea) {\n      textarea.style.height = \"auto\";\n      textarea.style.height = textarea.scrollHeight + \"px\";\n    }\n  }, []);\n\n  // Focus on the update instruction textarea when a variant is complete\n  useEffect(() => {\n    if (\n      (appState === AppState.CODE_READY || isSelectedVariantComplete) &&\n      textareaRef.current\n    ) {\n      const el = textareaRef.current;\n      el.focus();\n      el.setSelectionRange(el.value.length, el.value.length);\n    }\n  }, [appState, isSelectedVariantComplete]);\n\n  // Focus the textarea when an element is selected in the preview\n  useEffect(() => {\n    if (selectedElement && textareaRef.current) {\n      textareaRef.current.focus();\n    }\n  }, [selectedElement]);\n\n  // Reset textarea height when instruction changes externally (e.g., cleared after submit)\n  useEffect(() => {\n    autoResize();\n  }, [updateInstruction, autoResize]);\n\n  // Reset error expanded state when variant changes\n  useEffect(() => {\n    setIsErrorExpanded(false);\n  }, [head, commits[head || \"\"]?.selectedVariantIndex]);\n\n  // Reset prompt expanded state when commit changes and detect clamping\n  useEffect(() => 
{\n    setIsPromptExpanded(false);\n  }, [head]);\n\n  useEffect(() => {\n    const el = promptTextRef.current;\n    if (el) {\n      setIsPromptClamped(el.scrollHeight > el.clientHeight);\n    } else {\n      setIsPromptClamped(false);\n    }\n  }, [latestChangeSummary, isPromptExpanded]);\n\n  useEffect(() => {\n    if (!middlePaneRef.current) return;\n    requestAnimationFrame(() => {\n      if (!middlePaneRef.current) return;\n      middlePaneRef.current.scrollTop = middlePaneRef.current.scrollHeight;\n    });\n  }, [head, selectedVariantIndex]);\n\n  useEffect(() => {\n    if (appState !== AppState.CODING) return;\n    const intervalId = window.setInterval(() => setNowMs(Date.now()), 1000);\n    return () => window.clearInterval(intervalId);\n  }, [appState]);\n\n\n  return (\n    <div className=\"flex flex-col h-full\">\n      <div className=\"shrink-0 border-b border-gray-200 dark:border-zinc-800 bg-white dark:bg-zinc-950 px-4 py-2\">\n        <Variants />\n      </div>\n\n      {/* Scrollable content */}\n      <div\n        ref={middlePaneRef}\n        className=\"flex-1 min-h-0 overflow-y-auto sidebar-scrollbar-stable px-6 pt-4\"\n      >\n        {latestChangeSummary && (\n          <div className=\"mb-4 flex flex-col items-end\">\n            <div className=\"inline-block max-w-[85%] rounded-2xl rounded-br-md bg-violet-100 px-4 py-2.5 dark:bg-violet-900/30\">\n              <p\n                ref={promptTextRef}\n                className={`text-[13px] text-violet-950 dark:text-violet-100 break-words whitespace-pre-wrap ${\n                  !isPromptExpanded ? 
\"line-clamp-[10]\" : \"\"\n                }`}\n              >\n                {latestChangeSummary}\n              </p>\n              {selectedElementTag && (\n                <div className=\"mt-1.5 flex items-center gap-1.5\">\n                  <LuMousePointerClick className=\"w-3 h-3 text-violet-500 dark:text-violet-400\" />\n                  <span className=\"text-[11px] text-violet-600 dark:text-violet-300\">\n                    Selected: <code className=\"font-mono text-[10px] bg-violet-200/60 dark:bg-violet-800/50 px-1 py-0.5 rounded\">&lt;{selectedElementTag}&gt;</code>\n                  </span>\n                </div>\n              )}\n              {(isPromptClamped || isPromptExpanded) && (\n                <div className=\"flex justify-end mt-1.5\">\n                  <button\n                    onClick={() => setIsPromptExpanded(!isPromptExpanded)}\n                    className=\"text-[11px] font-medium text-gray-600 bg-white/70 hover:bg-white dark:text-gray-300 dark:bg-zinc-800/70 dark:hover:bg-zinc-800 px-2 py-0.5 rounded-full transition-colors shadow-sm\"\n                  >\n                    {isPromptExpanded ? 
\"less\" : \"more\"}\n                  </button>\n                </div>\n              )}\n            </div>\n              {latestChangeImages.length > 0 && (\n                <div className=\"mt-2 flex gap-2 flex-wrap justify-end\">\n                  {latestChangeImages.map((image, index) => (\n                    <button\n                      key={`${image.slice(0, 40)}-${index}`}\n                      onClick={() => setLightboxImage(image)}\n                      className=\"shrink-0 cursor-zoom-in rounded-lg border border-gray-200 bg-white p-1 dark:border-zinc-700 dark:bg-zinc-900 hover:border-violet-300 dark:hover:border-violet-500 transition-colors\"\n                    >\n                      <img\n                        src={image}\n                        alt={`Reference ${index + 1}`}\n                        className=\"h-24 w-24 object-contain\"\n                        loading=\"lazy\"\n                      />\n                    </button>\n                  ))}\n                </div>\n              )}\n              {latestChangeVideos.length > 0 && (\n                <div className=\"mt-2 space-y-2\">\n                  {latestChangeVideos.map((video, index) => (\n                    <video\n                      key={`${video.slice(0, 40)}-${index}`}\n                      src={video}\n                      className=\"w-full rounded-lg border border-gray-200 dark:border-zinc-700\"\n                      controls\n                      preload=\"metadata\"\n                    />\n                  ))}\n                </div>\n              )}\n          </div>\n        )}\n\n        {showWorkingIndicator && (\n          <div className=\"working-indicator-bg mb-3 rounded-xl border border-violet-200 dark:border-violet-800 px-3 py-2 transition-all duration-500\">\n            <div className=\"flex items-center justify-between\">\n              <div className=\"flex items-center gap-2 text-sm text-gray-600 dark:text-gray-300\">\n           
     <WorkingPulse />\n                <span>Working...</span>\n              </div>\n              <div className=\"text-xs font-semibold text-gray-700 dark:text-gray-200\">\n                Time so far {elapsedSeconds ? `${elapsedSeconds}s` : \"--\"}\n              </div>\n            </div>\n          </div>\n        )}\n\n        {currentCommit?.type === \"ai_create\" &&\n          appState === AppState.CODING &&\n          head === latestCommitHash &&\n          !isSelectedVariantComplete &&\n          !isSelectedVariantError &&\n          isSlowGeminiModel(selectedVariant?.model) && (\n          <div className=\"mb-3 rounded-md border border-amber-200 bg-amber-50 px-3 py-2 text-xs text-amber-900 dark:border-amber-800 dark:bg-amber-900/20 dark:text-amber-200\">\n            Slow, high quality model. May take 5-10 mins on some images/videos.\n          </div>\n        )}\n\n        {isViewingOlderVersion && currentVersionNumber !== null ? (\n          <div className=\"mb-4 flex flex-col items-center py-6\">\n            <p className=\"text-2xl font-semibold text-gray-900 dark:text-zinc-100\">\n              Version {currentVersionNumber}\n            </p>\n            <p className=\"mt-1 text-sm text-gray-400 dark:text-gray-500\">\n              You are viewing an older version\n            </p>\n            <div className=\"mt-4 flex gap-2\">\n              <button\n                onClick={onOpenVersions}\n                className=\"rounded-lg border border-gray-300 dark:border-zinc-600 px-4 py-2 text-sm font-medium text-gray-600 dark:text-gray-300 hover:bg-gray-100 dark:hover:bg-zinc-700 transition-colors\"\n              >\n                All versions\n              </button>\n              <button\n                onClick={() => latestCommitHash && setHead(latestCommitHash)}\n                className=\"rounded-lg bg-gray-900 dark:bg-white px-4 py-2 text-sm font-medium text-white dark:text-black hover:bg-black dark:hover:bg-gray-200 transition-colors\"\n 
             >\n                View latest\n              </button>\n            </div>\n          </div>\n        ) : (\n          <AgentActivity />\n        )}\n\n        {/* Regenerate button for first generation.\n            Scenarios:\n            1) `appState === CODE_READY`: request fully ended and user can retry.\n            2) `isSelectedVariantComplete`: selected option completed even if app state\n               has not yet fully transitioned.\n            3) `isSelectedVariantError`: selected option failed; keep retry visible so\n               users can rerun create without losing uploaded inputs. */}\n        {isFirstGeneration &&\n          head === latestCommitHash &&\n          (appState === AppState.CODE_READY ||\n            isSelectedVariantComplete ||\n            isSelectedVariantError) && (\n          <div className=\"flex justify-end mb-3\">\n            <button\n              onClick={regenerate}\n              className=\"flex items-center gap-1.5 px-3 py-1.5 text-xs font-medium text-gray-600 dark:text-gray-300 border border-gray-300 dark:border-gray-600 rounded-lg hover:bg-gray-100 dark:hover:bg-zinc-800 transition-colors\"\n            >\n              <LuRefreshCw className=\"w-3.5 h-3.5\" />\n              Retry\n            </button>\n          </div>\n        )}\n\n        {/* Show cancel button when coding */}\n        {appState === AppState.CODING && !isSelectedVariantComplete && (\n          <div className=\"flex w-full\">\n            <Button\n              onClick={cancelCodeGeneration}\n              className=\"w-full dark:text-white dark:bg-gray-700\"\n            >\n              Cancel All Generations\n            </Button>\n          </div>\n        )}\n\n        {/* Show error message when selected option has an error */}\n        {isSelectedVariantError && (\n          <div className=\"bg-red-50 dark:bg-red-950/30 border border-red-200 dark:border-red-800 rounded-md p-3 mb-2\">\n            <div 
className=\"text-red-800 dark:text-red-200 text-sm\">\n              <div className=\"font-medium mb-1\">\n                This option failed to generate because\n              </div>\n              {selectedVariantErrorMessage && (\n                <div className=\"mb-2\">\n                  <div className=\"text-red-700 dark:text-red-300 bg-red-100 dark:bg-red-900/40 border border-red-300 dark:border-red-700 rounded px-2 py-1 text-xs font-mono break-words\">\n                    {selectedVariantErrorMessage.length > 200 && !isErrorExpanded\n                      ? `${selectedVariantErrorMessage.slice(0, 200)}...`\n                      : selectedVariantErrorMessage}\n                  </div>\n                  {selectedVariantErrorMessage.length > 200 && (\n                    <button\n                      onClick={() => setIsErrorExpanded(!isErrorExpanded)}\n                      className=\"text-red-600 dark:text-red-400 text-xs underline mt-1 hover:text-red-800 dark:hover:text-red-300\"\n                    >\n                      {isErrorExpanded ? \"Show less\" : \"Show more\"}\n                    </button>\n                  )}\n                </div>\n              )}\n              <div>\n                {isFirstGeneration\n                  ? 
\"Click Retry to run the create request again.\"\n                  : \"Switch to another option above to make updates.\"}\n              </div>\n            </div>\n          </div>\n        )}\n      </div>\n\n      {/* Pinned bottom: prompt box + option selector */}\n      {(appState === AppState.CODE_READY || isSelectedVariantComplete) &&\n        !isSelectedVariantError && (\n          <div\n            className=\"shrink-0 border-t border-gray-200 bg-gray-50 dark:border-zinc-800 dark:bg-zinc-900 px-4 py-4\"\n            onDragEnter={() => setIsDragging(true)}\n            onDragLeave={(e) => {\n              if (!e.currentTarget.contains(e.relatedTarget as Node)) {\n                setIsDragging(false);\n              }\n            }}\n            onDragOver={(e) => e.preventDefault()}\n            onDrop={handleDrop}\n          >\n            {/* Select and edit indicator */}\n            {inSelectAndEditMode && (\n              <div className=\"mb-2\">\n                {selectedElement ? 
(\n                  <div className=\"flex items-center justify-between rounded-xl border border-violet-300 dark:border-violet-600 bg-violet-50 dark:bg-violet-900/20 px-3 py-2\">\n                    <div className=\"flex items-center gap-2 min-w-0\">\n                      <LuMousePointerClick className=\"w-3.5 h-3.5 text-violet-600 dark:text-violet-400 shrink-0\" />\n                      <span className=\"text-sm text-violet-700 dark:text-violet-300 truncate\">\n                        Selected: <code className=\"font-mono text-xs bg-violet-100 dark:bg-violet-800/50 px-1.5 py-0.5 rounded\">&lt;{selectedElement.tagName.toLowerCase()}&gt;</code>\n                      </span>\n                    </div>\n                    <button\n                      onClick={() => {\n                        removeHighlight(selectedElement);\n                        setSelectedElement(null);\n                      }}\n                      className=\"shrink-0 ml-3 p-0.5 text-violet-400 hover:text-violet-700 dark:hover:text-violet-200 transition-colors\"\n                      title=\"Clear selection\"\n                    >\n                      <LuX className=\"w-3.5 h-3.5\" />\n                    </button>\n                  </div>\n                ) : (\n                  <div className=\"flex items-center justify-between rounded-xl border border-violet-200 dark:border-violet-700 bg-violet-50 dark:bg-violet-900/20 px-3 py-2\">\n                    <div className=\"flex items-center gap-2\">\n                      <LuMousePointerClick className=\"w-3.5 h-3.5 text-violet-500 dark:text-violet-400 shrink-0\" />\n                      <span className=\"text-sm font-medium text-violet-700 dark:text-violet-300\">Click an element to edit it</span>\n                    </div>\n                    <button\n                      onClick={toggleInSelectAndEditMode}\n                      className=\"shrink-0 ml-3 text-sm text-violet-500 dark:text-violet-400 hover:text-violet-800 
dark:hover:text-violet-200 transition-colors\"\n                    >\n                      Exit\n                    </button>\n                  </div>\n                )}\n              </div>\n            )}\n            <div className=\"relative w-full overflow-hidden rounded-2xl border-2 border-violet-300 bg-white transition-all focus-within:border-violet-500 dark:border-violet-500/50 dark:bg-zinc-900 dark:focus-within:border-violet-400\">\n              <UpdateImagePreview\n                updateImages={updateImages}\n                setUpdateImages={setUpdateImages}\n              />\n              <textarea\n                ref={textareaRef}\n                placeholder={\n                  inSelectAndEditMode && selectedElement\n                    ? `Describe changes for the selected <${selectedElement.tagName.toLowerCase()}> element...`\n                    : \"Tell the AI what to change...\"\n                }\n                onChange={(e) => {\n                  setUpdateInstruction(e.target.value);\n                  autoResize();\n                }}\n                onKeyDown={(e) => {\n                  if (e.key === \"Enter\" && !e.shiftKey) {\n                    e.preventDefault();\n                    doUpdate(updateInstruction);\n                  }\n                }}\n                value={updateInstruction}\n                data-testid=\"update-input\"\n                rows={1}\n                className=\"max-h-40 w-full resize-none border-0 bg-transparent px-4 pt-4 pb-6 text-[15px] leading-6 text-gray-800 placeholder:text-gray-400 focus:outline-none dark:text-zinc-100 dark:placeholder:text-zinc-500\"\n              />\n              <div className=\"flex items-center justify-between px-3 pb-3\">\n                <div className=\"flex items-center gap-1\">\n                  <UpdateImageUpload\n                    updateImages={updateImages}\n                    setUpdateImages={setUpdateImages}\n                  />\n                  
{showSelectAndEditFeature && (\n                    <button\n                      onClick={toggleInSelectAndEditMode}\n                      className={`rounded-lg p-2 transition-colors ${\n                        inSelectAndEditMode\n                          ? \"bg-violet-100 text-violet-600 dark:bg-violet-900/30 dark:text-violet-400\"\n                          : \"text-gray-400 hover:bg-gray-100 hover:text-gray-600 dark:text-zinc-500 dark:hover:bg-zinc-800 dark:hover:text-zinc-300\"\n                      }`}\n                      title={inSelectAndEditMode ? \"Exit selection mode\" : \"Select an element in the preview to target your edit\"}\n                    >\n                      <LuMousePointerClick className=\"w-[18px] h-[18px]\" />\n                    </button>\n                  )}\n                </div>\n                <button\n                  onClick={() => doUpdate(updateInstruction)}\n                  disabled={!updateInstruction.trim()}\n                  className={`rounded-xl p-2 transition-colors update-btn ${\n                    updateInstruction.trim()\n                      ? 
\"bg-violet-600 text-white hover:bg-violet-700 dark:bg-violet-500 dark:hover:bg-violet-400\"\n                      : \"cursor-not-allowed bg-gray-200 text-gray-400 dark:bg-zinc-700 dark:text-zinc-500\"\n                  }`}\n                  title=\"Send\"\n                >\n                  <LuArrowUp className=\"w-[18px] h-[18px]\" strokeWidth={2.5} />\n                </button>\n              </div>\n\n              {isDragging && (\n                <div className=\"absolute inset-0 bg-blue-50/90 dark:bg-gray-800/90 border-2 border-dashed border-blue-400 dark:border-blue-600 rounded-xl flex items-center justify-center pointer-events-none z-10\">\n                  <p className=\"text-blue-600 dark:text-blue-400 font-medium\">Drop images here</p>\n                </div>\n              )}\n            </div>\n          </div>\n        )}\n\n      <ImageLightbox\n        image={lightboxImage}\n        onClose={() => setLightboxImage(null)}\n      />\n    </div>\n  );\n}\n\nexport default Sidebar;\n"
  },
  {
    "path": "frontend/src/components/start-pane/StartPane.tsx",
    "content": "import React from \"react\";\nimport { Settings } from \"../../types\";\nimport { Stack } from \"../../lib/stacks\";\nimport UnifiedInputPane from \"../unified-input/UnifiedInputPane\";\n\ninterface Props {\n  doCreate: (\n    images: string[],\n    inputMode: \"image\" | \"video\",\n    textPrompt?: string\n  ) => void;\n  doCreateFromText: (text: string) => void;\n  importFromCode: (code: string, stack: Stack) => void;\n  settings: Settings;\n  setSettings: React.Dispatch<React.SetStateAction<Settings>>;\n}\n\nconst StartPane: React.FC<Props> = ({\n  doCreate,\n  doCreateFromText,\n  importFromCode,\n  settings,\n  setSettings,\n}) => {\n  return (\n    <div className=\"flex flex-col justify-center items-center py-8\">\n      <UnifiedInputPane\n        doCreate={doCreate}\n        doCreateFromText={doCreateFromText}\n        importFromCode={importFromCode}\n        settings={settings}\n        setSettings={setSettings}\n      />\n    </div>\n  );\n};\n\nexport default StartPane;\n"
  },
  {
    "path": "frontend/src/components/thinking/ThinkingIndicator.tsx",
    "content": "import { useState } from \"react\";\nimport { useProjectStore } from \"../../store/project-store\";\nimport { BsChevronDown, BsChevronRight } from \"react-icons/bs\";\nimport ReactMarkdown from \"react-markdown\";\n\nfunction getLastSentence(text: string): string {\n  const sentences = text.split(/(?<=[.!?])\\s+/);\n  for (let i = sentences.length - 1; i >= 0; i--) {\n    const sentence = sentences[i].trim();\n    if (sentence.length > 0) {\n      if (sentence.length > 150) {\n        return \"...\" + sentence.slice(-150);\n      }\n      return sentence;\n    }\n  }\n  return text.length > 100 ? \"...\" + text.slice(-100) : text;\n}\n\nfunction ThinkingIndicator() {\n  const [isExpanded, setIsExpanded] = useState(false);\n\n  const { head, commits, latestCommitHash } = useProjectStore();\n\n  const currentCommit = head ? commits[head] : null;\n  const selectedVariant = currentCommit\n    ? currentCommit.variants[currentCommit.selectedVariantIndex]\n    : null;\n  const thinking = selectedVariant?.thinking || \"\";\n  const code = selectedVariant?.code || \"\";\n  const thinkingDuration = selectedVariant?.thinkingDuration;\n  const isGenerating = selectedVariant?.status === \"generating\";\n\n  // UI States:\n  // - Waiting: isGenerating && !code && !thinking -> \"AI is thinking...\"\n  // - Thinking: thinking && !code -> \"AI Thinking\" with content + pulsing\n  // - Complete: thinking && code -> \"AI thought for Xs\" with content\n  // - Hidden: !isGenerating && !thinking -> not rendered\n\n  const isWaiting = isGenerating && !code && !thinking;\n  const isThinkingInProgress = thinking.length > 0 && code.length === 0;\n  const isThinkingComplete = thinking.length > 0 && code.length > 0;\n\n  // Only show thinking for the latest commit, not historical ones\n  const isLatestCommit = head === latestCommitHash;\n\n  // Don't render if there's no thinking content and we're not in the waiting state\n  if (!thinking && !isWaiting) {\n    return null;\n  
}\n\n  // Don't show thinking for historical commits\n  if (!isLatestCommit) {\n    return null;\n  }\n\n  // Determine header text\n  let headerText = \"AI Thinking\";\n  if (isWaiting) {\n    headerText = \"AI is thinking...\";\n  } else if (isThinkingComplete && thinkingDuration !== undefined) {\n    headerText = `AI thought for ${thinkingDuration}s`;\n  }\n\n  const previewText = thinking ? getLastSentence(thinking) : \"\";\n\n  const isActive = isWaiting || isThinkingInProgress;\n\n  return (\n    <div\n      className={`rounded-md mb-2 ${\n        isActive\n          ? \"border-2 border-green-400 dark:border-green-500\"\n          : \"bg-gray-50 dark:bg-gray-900/50 border border-gray-200 dark:border-gray-700\"\n      }`}\n      style={\n        isActive\n          ? {\n              animation: \"flash 1s ease-in-out infinite\",\n            }\n          : undefined\n      }\n    >\n      <style>\n        {`\n          @keyframes flash {\n            0%, 100% {\n              background-color: rgb(240 253 244);\n              border-color: rgb(74 222 128);\n            }\n            50% {\n              background-color: rgb(187 247 208);\n              border-color: rgb(34 197 94);\n            }\n          }\n          @media (prefers-color-scheme: dark) {\n            @keyframes flash {\n              0%, 100% {\n                background-color: rgb(20 83 45 / 0.3);\n                border-color: rgb(34 197 94);\n              }\n              50% {\n                background-color: rgb(20 83 45 / 0.6);\n                border-color: rgb(74 222 128);\n              }\n            }\n          }\n        `}\n      </style>\n      <button\n        onClick={() => setIsExpanded(!isExpanded)}\n        className=\"w-full flex items-center justify-between px-3 py-2 text-left hover:bg-gray-100 dark:hover:bg-gray-800/50 transition-colors rounded-t-md\"\n      >\n        <div className=\"flex items-center gap-2\">\n          {isExpanded ? 
(\n            <BsChevronDown className=\"w-3 h-3 text-gray-500 dark:text-gray-400\" />\n          ) : (\n            <BsChevronRight className=\"w-3 h-3 text-gray-500 dark:text-gray-400\" />\n          )}\n          <span className=\"text-sm font-medium text-gray-700 dark:text-gray-300\">\n            {headerText}\n          </span>\n        </div>\n        <div className=\"flex items-center gap-2\">\n          {(isWaiting || isThinkingInProgress) && (\n            <span className=\"flex items-center gap-1\">\n              <span className=\"w-2 h-2 bg-green-500 rounded-full animate-pulse\" />\n              <span className=\"text-xs text-green-600 dark:text-green-400\">\n                {isWaiting ? \"starting\" : \"reasoning\"}\n              </span>\n            </span>\n          )}\n        </div>\n      </button>\n\n      {thinking && (\n        <>\n          {isExpanded ? (\n            <div className=\"px-3 pb-3 max-h-60 overflow-y-auto\">\n              <div className=\"text-sm text-gray-800 dark:text-gray-200 leading-relaxed prose prose-sm dark:prose-invert max-w-none\">\n                <ReactMarkdown>{thinking}</ReactMarkdown>\n              </div>\n            </div>\n          ) : (\n            // Only show preview when thinking is in progress, not when complete\n            isThinkingInProgress && (\n              <div className=\"px-3 pb-2\">\n                <p className=\"text-sm text-gray-600 dark:text-gray-400 truncate\">\n                  {previewText}\n                </p>\n              </div>\n            )\n          )}\n        </>\n      )}\n    </div>\n  );\n}\n\nexport default ThinkingIndicator;\n"
  },
  {
    "path": "frontend/src/components/ui/accordion.tsx",
    "content": "import * as React from \"react\"\nimport * as AccordionPrimitive from \"@radix-ui/react-accordion\"\nimport { ChevronDownIcon } from \"@radix-ui/react-icons\"\n\nimport { cn } from \"@/lib/utils\"\n\nconst Accordion = AccordionPrimitive.Root\n\nconst AccordionItem = React.forwardRef<\n  React.ElementRef<typeof AccordionPrimitive.Item>,\n  React.ComponentPropsWithoutRef<typeof AccordionPrimitive.Item>\n>(({ className, ...props }, ref) => (\n  <AccordionPrimitive.Item\n    ref={ref}\n    className={cn(\"border-b\", className)}\n    {...props}\n  />\n))\nAccordionItem.displayName = \"AccordionItem\"\n\nconst AccordionTrigger = React.forwardRef<\n  React.ElementRef<typeof AccordionPrimitive.Trigger>,\n  React.ComponentPropsWithoutRef<typeof AccordionPrimitive.Trigger>\n>(({ className, children, ...props }, ref) => (\n  <AccordionPrimitive.Header className=\"flex\">\n    <AccordionPrimitive.Trigger\n      ref={ref}\n      className={cn(\n        \"flex flex-1 items-center justify-between py-4 text-sm font-medium transition-all hover:underline [&[data-state=open]>svg]:rotate-180\",\n        className\n      )}\n      {...props}\n    >\n      {children}\n      <ChevronDownIcon className=\"h-4 w-4 shrink-0 text-muted-foreground transition-transform duration-200\" />\n    </AccordionPrimitive.Trigger>\n  </AccordionPrimitive.Header>\n))\nAccordionTrigger.displayName = AccordionPrimitive.Trigger.displayName\n\nconst AccordionContent = React.forwardRef<\n  React.ElementRef<typeof AccordionPrimitive.Content>,\n  React.ComponentPropsWithoutRef<typeof AccordionPrimitive.Content>\n>(({ className, children, ...props }, ref) => (\n  <AccordionPrimitive.Content\n    ref={ref}\n    className=\"overflow-hidden text-sm data-[state=closed]:animate-accordion-up data-[state=open]:animate-accordion-down\"\n    {...props}\n  >\n    <div className={cn(\"pb-4 pt-0\", className)}>{children}</div>\n  </AccordionPrimitive.Content>\n))\nAccordionContent.displayName = 
AccordionPrimitive.Content.displayName\n\nexport { Accordion, AccordionItem, AccordionTrigger, AccordionContent }\n"
  },
  {
    "path": "frontend/src/components/ui/alert-dialog.tsx",
    "content": "import * as React from \"react\"\nimport * as AlertDialogPrimitive from \"@radix-ui/react-alert-dialog\"\n\nimport { cn } from \"@/lib/utils\"\nimport { buttonVariants } from \"@/components/ui/button\"\n\nconst AlertDialog = AlertDialogPrimitive.Root\n\nconst AlertDialogTrigger = AlertDialogPrimitive.Trigger\n\nconst AlertDialogPortal = AlertDialogPrimitive.Portal\n\nconst AlertDialogOverlay = React.forwardRef<\n  React.ElementRef<typeof AlertDialogPrimitive.Overlay>,\n  React.ComponentPropsWithoutRef<typeof AlertDialogPrimitive.Overlay>\n>(({ className, ...props }, ref) => (\n  <AlertDialogPrimitive.Overlay\n    className={cn(\n      \"fixed inset-0 z-50 bg-background/80 backdrop-blur-sm data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0\",\n      className\n    )}\n    {...props}\n    ref={ref}\n  />\n))\nAlertDialogOverlay.displayName = AlertDialogPrimitive.Overlay.displayName\n\nconst AlertDialogContent = React.forwardRef<\n  React.ElementRef<typeof AlertDialogPrimitive.Content>,\n  React.ComponentPropsWithoutRef<typeof AlertDialogPrimitive.Content>\n>(({ className, ...props }, ref) => (\n  <AlertDialogPortal>\n    <AlertDialogOverlay />\n    <AlertDialogPrimitive.Content\n      ref={ref}\n      className={cn(\n        \"fixed left-[50%] top-[50%] z-50 grid w-full max-w-lg translate-x-[-50%] translate-y-[-50%] gap-4 border bg-background p-6 shadow-lg duration-200 data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0 data-[state=closed]:zoom-out-95 data-[state=open]:zoom-in-95 data-[state=closed]:slide-out-to-left-1/2 data-[state=closed]:slide-out-to-top-[48%] data-[state=open]:slide-in-from-left-1/2 data-[state=open]:slide-in-from-top-[48%] sm:rounded-lg\",\n        className\n      )}\n      {...props}\n    />\n  </AlertDialogPortal>\n))\nAlertDialogContent.displayName = 
AlertDialogPrimitive.Content.displayName\n\nconst AlertDialogHeader = ({\n  className,\n  ...props\n}: React.HTMLAttributes<HTMLDivElement>) => (\n  <div\n    className={cn(\n      \"flex flex-col space-y-2 text-center sm:text-left\",\n      className\n    )}\n    {...props}\n  />\n)\nAlertDialogHeader.displayName = \"AlertDialogHeader\"\n\nconst AlertDialogFooter = ({\n  className,\n  ...props\n}: React.HTMLAttributes<HTMLDivElement>) => (\n  <div\n    className={cn(\n      \"flex flex-col-reverse sm:flex-row sm:justify-end sm:space-x-2\",\n      className\n    )}\n    {...props}\n  />\n)\nAlertDialogFooter.displayName = \"AlertDialogFooter\"\n\nconst AlertDialogTitle = React.forwardRef<\n  React.ElementRef<typeof AlertDialogPrimitive.Title>,\n  React.ComponentPropsWithoutRef<typeof AlertDialogPrimitive.Title>\n>(({ className, ...props }, ref) => (\n  <AlertDialogPrimitive.Title\n    ref={ref}\n    className={cn(\"text-lg font-semibold\", className)}\n    {...props}\n  />\n))\nAlertDialogTitle.displayName = AlertDialogPrimitive.Title.displayName\n\nconst AlertDialogDescription = React.forwardRef<\n  React.ElementRef<typeof AlertDialogPrimitive.Description>,\n  React.ComponentPropsWithoutRef<typeof AlertDialogPrimitive.Description>\n>(({ className, ...props }, ref) => (\n  <AlertDialogPrimitive.Description\n    ref={ref}\n    className={cn(\"text-sm text-muted-foreground\", className)}\n    {...props}\n  />\n))\nAlertDialogDescription.displayName =\n  AlertDialogPrimitive.Description.displayName\n\nconst AlertDialogAction = React.forwardRef<\n  React.ElementRef<typeof AlertDialogPrimitive.Action>,\n  React.ComponentPropsWithoutRef<typeof AlertDialogPrimitive.Action>\n>(({ className, ...props }, ref) => (\n  <AlertDialogPrimitive.Action\n    ref={ref}\n    className={cn(buttonVariants(), className)}\n    {...props}\n  />\n))\nAlertDialogAction.displayName = AlertDialogPrimitive.Action.displayName\n\nconst AlertDialogCancel = React.forwardRef<\n  
React.ElementRef<typeof AlertDialogPrimitive.Cancel>,\n  React.ComponentPropsWithoutRef<typeof AlertDialogPrimitive.Cancel>\n>(({ className, ...props }, ref) => (\n  <AlertDialogPrimitive.Cancel\n    ref={ref}\n    className={cn(\n      buttonVariants({ variant: \"outline\" }),\n      \"mt-2 sm:mt-0\",\n      className\n    )}\n    {...props}\n  />\n))\nAlertDialogCancel.displayName = AlertDialogPrimitive.Cancel.displayName\n\nexport {\n  AlertDialog,\n  AlertDialogPortal,\n  AlertDialogOverlay,\n  AlertDialogTrigger,\n  AlertDialogContent,\n  AlertDialogHeader,\n  AlertDialogFooter,\n  AlertDialogTitle,\n  AlertDialogDescription,\n  AlertDialogAction,\n  AlertDialogCancel,\n}\n"
  },
  {
    "path": "frontend/src/components/ui/badge.tsx",
    "content": "import * as React from \"react\"\nimport { cva, type VariantProps } from \"class-variance-authority\"\n\nimport { cn } from \"@/lib/utils\"\n\nconst badgeVariants = cva(\n  \"inline-flex items-center rounded-md border px-2.5 py-0.5 text-xs font-semibold transition-colors focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2\",\n  {\n    variants: {\n      variant: {\n        default:\n          \"border-transparent bg-primary text-primary-foreground shadow hover:bg-primary/80\",\n        secondary:\n          \"border-transparent bg-secondary text-secondary-foreground hover:bg-secondary/80\",\n        destructive:\n          \"border-transparent bg-destructive text-destructive-foreground shadow hover:bg-destructive/80\",\n        outline: \"text-foreground\",\n      },\n    },\n    defaultVariants: {\n      variant: \"default\",\n    },\n  }\n)\n\nexport interface BadgeProps\n  extends React.HTMLAttributes<HTMLDivElement>,\n    VariantProps<typeof badgeVariants> {}\n\nfunction Badge({ className, variant, ...props }: BadgeProps) {\n  return (\n    <div className={cn(badgeVariants({ variant }), className)} {...props} />\n  )\n}\n\nexport { Badge, badgeVariants }\n"
  },
  {
    "path": "frontend/src/components/ui/button.tsx",
    "content": "import * as React from \"react\";\nimport { Slot } from \"@radix-ui/react-slot\";\nimport { cva, type VariantProps } from \"class-variance-authority\";\n\nimport { cn } from \"@/lib/utils\";\n\nconst buttonVariants = cva(\n  \"inline-flex items-center justify-center whitespace-nowrap rounded-md text-sm font-medium transition-colors focus-visible:outline-none focus-visible:ring-1 focus-visible:ring-ring disabled:pointer-events-none disabled:opacity-50\",\n  {\n    variants: {\n      variant: {\n        default:\n          \"bg-primary text-primary-foreground shadow hover:bg-primary/90 dark:bg-zinc-700 dark:text-zinc-100 dark:hover:bg-zinc-600 dark:shadow-none\",\n        destructive:\n          \"bg-destructive text-destructive-foreground shadow-sm hover:bg-destructive/90\",\n        outline:\n          \"border border-input bg-transparent shadow-sm hover:bg-accent hover:text-accent-foreground\",\n        secondary:\n          \"bg-secondary text-secondary-foreground shadow-sm hover:bg-secondary/80\",\n        ghost: \"hover:bg-accent hover:text-accent-foreground\",\n        link: \"text-primary underline-offset-4 hover:underline\",\n      },\n      size: {\n        default: \"h-9 px-4 py-2\",\n        sm: \"h-8 rounded-md px-3 text-xs\",\n        lg: \"h-10 rounded-md px-8\",\n        icon: \"h-9 w-9\",\n      },\n    },\n    defaultVariants: {\n      variant: \"default\",\n      size: \"default\",\n    },\n  }\n);\n\nexport interface ButtonProps\n  extends React.ButtonHTMLAttributes<HTMLButtonElement>,\n    VariantProps<typeof buttonVariants> {\n  asChild?: boolean;\n}\n\nconst Button = React.forwardRef<HTMLButtonElement, ButtonProps>(\n  ({ className, variant, size, asChild = false, ...props }, ref) => {\n    const Comp = asChild ? 
Slot : \"button\";\n    return (\n      <Comp\n        className={cn(buttonVariants({ variant, size, className }))}\n        ref={ref}\n        {...props}\n      />\n    );\n  }\n);\nButton.displayName = \"Button\";\n\nexport { Button, buttonVariants };\n"
  },
  {
    "path": "frontend/src/components/ui/checkbox.tsx",
    "content": "import * as React from \"react\"\nimport * as CheckboxPrimitive from \"@radix-ui/react-checkbox\"\nimport { CheckIcon } from \"@radix-ui/react-icons\"\n\nimport { cn } from \"@/lib/utils\"\n\nconst Checkbox = React.forwardRef<\n  React.ElementRef<typeof CheckboxPrimitive.Root>,\n  React.ComponentPropsWithoutRef<typeof CheckboxPrimitive.Root>\n>(({ className, ...props }, ref) => (\n  <CheckboxPrimitive.Root\n    ref={ref}\n    className={cn(\n      \"peer h-4 w-4 shrink-0 rounded-sm border border-primary shadow focus-visible:outline-none focus-visible:ring-1 focus-visible:ring-ring disabled:cursor-not-allowed disabled:opacity-50 data-[state=checked]:bg-primary data-[state=checked]:text-primary-foreground\",\n      className\n    )}\n    {...props}\n  >\n    <CheckboxPrimitive.Indicator\n      className={cn(\"flex items-center justify-center text-current\")}\n    >\n      <CheckIcon className=\"h-4 w-4\" />\n    </CheckboxPrimitive.Indicator>\n  </CheckboxPrimitive.Root>\n))\nCheckbox.displayName = CheckboxPrimitive.Root.displayName\n\nexport { Checkbox }\n"
  },
  {
    "path": "frontend/src/components/ui/collapsible.tsx",
    "content": "import * as CollapsiblePrimitive from \"@radix-ui/react-collapsible\"\n\nconst Collapsible = CollapsiblePrimitive.Root\n\nconst CollapsibleTrigger = CollapsiblePrimitive.CollapsibleTrigger\n\nconst CollapsibleContent = CollapsiblePrimitive.CollapsibleContent\n\nexport { Collapsible, CollapsibleTrigger, CollapsibleContent }\n"
  },
  {
    "path": "frontend/src/components/ui/dialog.tsx",
    "content": "import * as React from \"react\"\nimport * as DialogPrimitive from \"@radix-ui/react-dialog\"\nimport { Cross2Icon } from \"@radix-ui/react-icons\"\n\nimport { cn } from \"@/lib/utils\"\n\nconst Dialog = DialogPrimitive.Root\n\nconst DialogTrigger = DialogPrimitive.Trigger\n\nconst DialogPortal = DialogPrimitive.Portal\n\nconst DialogClose = DialogPrimitive.Close\n\nconst DialogOverlay = React.forwardRef<\n  React.ElementRef<typeof DialogPrimitive.Overlay>,\n  React.ComponentPropsWithoutRef<typeof DialogPrimitive.Overlay>\n>(({ className, ...props }, ref) => (\n  <DialogPrimitive.Overlay\n    ref={ref}\n    className={cn(\n      \"fixed inset-0 z-50 bg-background/80 backdrop-blur-sm data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0\",\n      className\n    )}\n    {...props}\n  />\n))\nDialogOverlay.displayName = DialogPrimitive.Overlay.displayName\n\nconst DialogContent = React.forwardRef<\n  React.ElementRef<typeof DialogPrimitive.Content>,\n  React.ComponentPropsWithoutRef<typeof DialogPrimitive.Content>\n>(({ className, children, ...props }, ref) => (\n  <DialogPortal>\n    <DialogOverlay />\n    <DialogPrimitive.Content\n      ref={ref}\n      className={cn(\n        \"fixed left-[50%] top-[50%] z-50 grid w-full max-w-lg translate-x-[-50%] translate-y-[-50%] gap-4 border bg-background p-6 shadow-lg duration-200 data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0 data-[state=closed]:zoom-out-95 data-[state=open]:zoom-in-95 data-[state=closed]:slide-out-to-left-1/2 data-[state=closed]:slide-out-to-top-[48%] data-[state=open]:slide-in-from-left-1/2 data-[state=open]:slide-in-from-top-[48%] sm:rounded-lg\",\n        className\n      )}\n      {...props}\n    >\n      {children}\n      <DialogPrimitive.Close className=\"absolute right-4 top-4 rounded-sm opacity-70 ring-offset-background transition-opacity 
hover:opacity-100 focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2 disabled:pointer-events-none data-[state=open]:bg-accent data-[state=open]:text-muted-foreground\">\n        <Cross2Icon className=\"h-4 w-4\" />\n        <span className=\"sr-only\">Close</span>\n      </DialogPrimitive.Close>\n    </DialogPrimitive.Content>\n  </DialogPortal>\n))\nDialogContent.displayName = DialogPrimitive.Content.displayName\n\nconst DialogHeader = ({\n  className,\n  ...props\n}: React.HTMLAttributes<HTMLDivElement>) => (\n  <div\n    className={cn(\n      \"flex flex-col space-y-1.5 text-center sm:text-left\",\n      className\n    )}\n    {...props}\n  />\n)\nDialogHeader.displayName = \"DialogHeader\"\n\nconst DialogFooter = ({\n  className,\n  ...props\n}: React.HTMLAttributes<HTMLDivElement>) => (\n  <div\n    className={cn(\n      \"flex flex-col-reverse sm:flex-row sm:justify-end sm:space-x-2\",\n      className\n    )}\n    {...props}\n  />\n)\nDialogFooter.displayName = \"DialogFooter\"\n\nconst DialogTitle = React.forwardRef<\n  React.ElementRef<typeof DialogPrimitive.Title>,\n  React.ComponentPropsWithoutRef<typeof DialogPrimitive.Title>\n>(({ className, ...props }, ref) => (\n  <DialogPrimitive.Title\n    ref={ref}\n    className={cn(\n      \"text-lg font-semibold leading-none tracking-tight\",\n      className\n    )}\n    {...props}\n  />\n))\nDialogTitle.displayName = DialogPrimitive.Title.displayName\n\nconst DialogDescription = React.forwardRef<\n  React.ElementRef<typeof DialogPrimitive.Description>,\n  React.ComponentPropsWithoutRef<typeof DialogPrimitive.Description>\n>(({ className, ...props }, ref) => (\n  <DialogPrimitive.Description\n    ref={ref}\n    className={cn(\"text-sm text-muted-foreground\", className)}\n    {...props}\n  />\n))\nDialogDescription.displayName = DialogPrimitive.Description.displayName\n\nexport {\n  Dialog,\n  DialogPortal,\n  DialogOverlay,\n  DialogTrigger,\n  DialogClose,\n  DialogContent,\n  
DialogHeader,\n  DialogFooter,\n  DialogTitle,\n  DialogDescription,\n}\n"
  },
  {
    "path": "frontend/src/components/ui/hover-card.tsx",
    "content": "import * as React from \"react\"\nimport * as HoverCardPrimitive from \"@radix-ui/react-hover-card\"\n\nimport { cn } from \"@/lib/utils\"\n\nconst HoverCard = HoverCardPrimitive.Root\n\nconst HoverCardTrigger = HoverCardPrimitive.Trigger\n\nconst HoverCardContent = React.forwardRef<\n  React.ElementRef<typeof HoverCardPrimitive.Content>,\n  React.ComponentPropsWithoutRef<typeof HoverCardPrimitive.Content>\n>(({ className, align = \"center\", sideOffset = 4, ...props }, ref) => (\n  <HoverCardPrimitive.Content\n    ref={ref}\n    align={align}\n    sideOffset={sideOffset}\n    className={cn(\n      \"z-50 w-64 rounded-md border bg-popover p-4 text-popover-foreground shadow-md outline-none data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0 data-[state=closed]:zoom-out-95 data-[state=open]:zoom-in-95 data-[side=bottom]:slide-in-from-top-2 data-[side=left]:slide-in-from-right-2 data-[side=right]:slide-in-from-left-2 data-[side=top]:slide-in-from-bottom-2\",\n      className\n    )}\n    {...props}\n  />\n))\nHoverCardContent.displayName = HoverCardPrimitive.Content.displayName\n\nexport { HoverCard, HoverCardTrigger, HoverCardContent }\n"
  },
  {
    "path": "frontend/src/components/ui/input.tsx",
    "content": "import * as React from \"react\"\n\nimport { cn } from \"@/lib/utils\"\n\nexport interface InputProps\n  extends React.InputHTMLAttributes<HTMLInputElement> {}\n\nconst Input = React.forwardRef<HTMLInputElement, InputProps>(\n  ({ className, type, ...props }, ref) => {\n    return (\n      <input\n        type={type}\n        className={cn(\n          \"flex h-9 w-full rounded-md border border-input bg-transparent px-3 py-1 text-sm shadow-sm transition-colors file:border-0 file:bg-transparent file:text-sm file:font-medium placeholder:text-muted-foreground focus-visible:outline-none focus-visible:ring-1 focus-visible:ring-ring disabled:cursor-not-allowed disabled:opacity-50\",\n          className\n        )}\n        ref={ref}\n        {...props}\n      />\n    )\n  }\n)\nInput.displayName = \"Input\"\n\nexport { Input }\n"
  },
  {
    "path": "frontend/src/components/ui/label.tsx",
    "content": "import * as React from \"react\"\nimport * as LabelPrimitive from \"@radix-ui/react-label\"\nimport { cva, type VariantProps } from \"class-variance-authority\"\n\nimport { cn } from \"@/lib/utils\"\n\nconst labelVariants = cva(\n  \"text-sm font-medium leading-none peer-disabled:cursor-not-allowed peer-disabled:opacity-70\"\n)\n\nconst Label = React.forwardRef<\n  React.ElementRef<typeof LabelPrimitive.Root>,\n  React.ComponentPropsWithoutRef<typeof LabelPrimitive.Root> &\n    VariantProps<typeof labelVariants>\n>(({ className, ...props }, ref) => (\n  <LabelPrimitive.Root\n    ref={ref}\n    className={cn(labelVariants(), className)}\n    {...props}\n  />\n))\nLabel.displayName = LabelPrimitive.Root.displayName\n\nexport { Label }\n"
  },
  {
    "path": "frontend/src/components/ui/popover.tsx",
    "content": "import * as React from \"react\"\nimport * as PopoverPrimitive from \"@radix-ui/react-popover\"\n\nimport { cn } from \"@/lib/utils\"\n\nconst Popover = PopoverPrimitive.Root\n\nconst PopoverTrigger = PopoverPrimitive.Trigger\n\nconst PopoverContent = React.forwardRef<\n  React.ElementRef<typeof PopoverPrimitive.Content>,\n  React.ComponentPropsWithoutRef<typeof PopoverPrimitive.Content>\n>(({ className, align = \"center\", sideOffset = 4, ...props }, ref) => (\n  <PopoverPrimitive.Portal>\n    <PopoverPrimitive.Content\n      ref={ref}\n      align={align}\n      sideOffset={sideOffset}\n      className={cn(\n        \"z-50 w-72 rounded-md border bg-popover p-4 text-popover-foreground shadow-md outline-none data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0 data-[state=closed]:zoom-out-95 data-[state=open]:zoom-in-95 data-[side=bottom]:slide-in-from-top-2 data-[side=left]:slide-in-from-right-2 data-[side=right]:slide-in-from-left-2 data-[side=top]:slide-in-from-bottom-2\",\n        className\n      )}\n      {...props}\n    />\n  </PopoverPrimitive.Portal>\n))\nPopoverContent.displayName = PopoverPrimitive.Content.displayName\n\nexport { Popover, PopoverTrigger, PopoverContent }\n"
  },
  {
    "path": "frontend/src/components/ui/progress.tsx",
    "content": "import * as React from \"react\"\nimport * as ProgressPrimitive from \"@radix-ui/react-progress\"\n\nimport { cn } from \"@/lib/utils\"\n\nconst Progress = React.forwardRef<\n  React.ElementRef<typeof ProgressPrimitive.Root>,\n  React.ComponentPropsWithoutRef<typeof ProgressPrimitive.Root>\n>(({ className, value, ...props }, ref) => (\n  <ProgressPrimitive.Root\n    ref={ref}\n    className={cn(\n      \"relative h-2 w-full overflow-hidden rounded-full bg-primary/20\",\n      className\n    )}\n    {...props}\n  >\n    <ProgressPrimitive.Indicator\n      className=\"h-full w-full flex-1 bg-primary transition-all\"\n      style={{ transform: `translateX(-${100 - (value || 0)}%)` }}\n    />\n  </ProgressPrimitive.Root>\n))\nProgress.displayName = ProgressPrimitive.Root.displayName\n\nexport { Progress }\n"
  },
  {
    "path": "frontend/src/components/ui/scroll-area.tsx",
    "content": "import * as React from \"react\"\nimport * as ScrollAreaPrimitive from \"@radix-ui/react-scroll-area\"\n\nimport { cn } from \"@/lib/utils\"\n\nconst ScrollArea = React.forwardRef<\n  React.ElementRef<typeof ScrollAreaPrimitive.Root>,\n  React.ComponentPropsWithoutRef<typeof ScrollAreaPrimitive.Root>\n>(({ className, children, ...props }, ref) => (\n  <ScrollAreaPrimitive.Root\n    ref={ref}\n    className={cn(\"relative overflow-hidden\", className)}\n    {...props}\n  >\n    <ScrollAreaPrimitive.Viewport className=\"h-full w-full rounded-[inherit]\">\n      {children}\n    </ScrollAreaPrimitive.Viewport>\n    <ScrollBar />\n    <ScrollAreaPrimitive.Corner />\n  </ScrollAreaPrimitive.Root>\n))\nScrollArea.displayName = ScrollAreaPrimitive.Root.displayName\n\nconst ScrollBar = React.forwardRef<\n  React.ElementRef<typeof ScrollAreaPrimitive.ScrollAreaScrollbar>,\n  React.ComponentPropsWithoutRef<typeof ScrollAreaPrimitive.ScrollAreaScrollbar>\n>(({ className, orientation = \"vertical\", ...props }, ref) => (\n  <ScrollAreaPrimitive.ScrollAreaScrollbar\n    ref={ref}\n    orientation={orientation}\n    className={cn(\n      \"flex touch-none select-none transition-colors\",\n      orientation === \"vertical\" &&\n        \"h-full w-2.5 border-l border-l-transparent p-[1px]\",\n      orientation === \"horizontal\" &&\n        \"h-2.5 flex-col border-t border-t-transparent p-[1px]\",\n      className\n    )}\n    {...props}\n  >\n    <ScrollAreaPrimitive.ScrollAreaThumb className=\"relative flex-1 rounded-full bg-border\" />\n  </ScrollAreaPrimitive.ScrollAreaScrollbar>\n))\nScrollBar.displayName = ScrollAreaPrimitive.ScrollAreaScrollbar.displayName\n\nexport { ScrollArea, ScrollBar }\n"
  },
  {
    "path": "frontend/src/components/ui/select.tsx",
    "content": "import * as React from \"react\"\nimport {\n  CaretSortIcon,\n  CheckIcon,\n  ChevronDownIcon,\n  ChevronUpIcon,\n} from \"@radix-ui/react-icons\"\nimport * as SelectPrimitive from \"@radix-ui/react-select\"\n\nimport { cn } from \"@/lib/utils\"\n\nconst Select = SelectPrimitive.Root\n\nconst SelectGroup = SelectPrimitive.Group\n\n// Avoid raw text nodes so browser translation extensions don't desync Radix's DOM bookkeeping.\nconst SelectValue = React.forwardRef<\n  React.ElementRef<typeof SelectPrimitive.Value>,\n  React.ComponentPropsWithoutRef<typeof SelectPrimitive.Value>\n>(({ placeholder, ...props }, ref) => (\n  <SelectPrimitive.Value\n    ref={ref}\n    placeholder={\n      typeof placeholder === \"string\" ? <span>{placeholder}</span> : placeholder\n    }\n    {...props}\n  />\n))\nSelectValue.displayName = SelectPrimitive.Value.displayName\n\nconst SelectTrigger = React.forwardRef<\n  React.ElementRef<typeof SelectPrimitive.Trigger>,\n  React.ComponentPropsWithoutRef<typeof SelectPrimitive.Trigger>\n>(({ className, children, ...props }, ref) => (\n  <SelectPrimitive.Trigger\n    ref={ref}\n    className={cn(\n      \"flex h-9 w-full items-center justify-between whitespace-nowrap rounded-md border border-input bg-transparent px-3 py-2 text-sm shadow-sm ring-offset-background placeholder:text-muted-foreground focus:outline-none focus:ring-1 focus:ring-ring disabled:cursor-not-allowed disabled:opacity-50 [&>span]:line-clamp-1\",\n      className\n    )}\n    {...props}\n  >\n    {children}\n    <SelectPrimitive.Icon asChild>\n      <CaretSortIcon className=\"h-4 w-4 opacity-50\" />\n    </SelectPrimitive.Icon>\n  </SelectPrimitive.Trigger>\n))\nSelectTrigger.displayName = SelectPrimitive.Trigger.displayName\n\nconst SelectScrollUpButton = React.forwardRef<\n  React.ElementRef<typeof SelectPrimitive.ScrollUpButton>,\n  React.ComponentPropsWithoutRef<typeof SelectPrimitive.ScrollUpButton>\n>(({ className, ...props }, ref) => (\n  
<SelectPrimitive.ScrollUpButton\n    ref={ref}\n    className={cn(\n      \"flex cursor-default items-center justify-center py-1\",\n      className\n    )}\n    {...props}\n  >\n    <ChevronUpIcon />\n  </SelectPrimitive.ScrollUpButton>\n))\nSelectScrollUpButton.displayName = SelectPrimitive.ScrollUpButton.displayName\n\nconst SelectScrollDownButton = React.forwardRef<\n  React.ElementRef<typeof SelectPrimitive.ScrollDownButton>,\n  React.ComponentPropsWithoutRef<typeof SelectPrimitive.ScrollDownButton>\n>(({ className, ...props }, ref) => (\n  <SelectPrimitive.ScrollDownButton\n    ref={ref}\n    className={cn(\n      \"flex cursor-default items-center justify-center py-1\",\n      className\n    )}\n    {...props}\n  >\n    <ChevronDownIcon />\n  </SelectPrimitive.ScrollDownButton>\n))\nSelectScrollDownButton.displayName =\n  SelectPrimitive.ScrollDownButton.displayName\n\nconst SelectContent = React.forwardRef<\n  React.ElementRef<typeof SelectPrimitive.Content>,\n  React.ComponentPropsWithoutRef<typeof SelectPrimitive.Content>\n>(({ className, children, position = \"popper\", ...props }, ref) => (\n  <SelectPrimitive.Portal>\n    <SelectPrimitive.Content\n      ref={ref}\n      className={cn(\n        \"relative z-50 max-h-96 min-w-[8rem] overflow-hidden rounded-md border bg-popover text-popover-foreground shadow-md data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0 data-[state=closed]:zoom-out-95 data-[state=open]:zoom-in-95 data-[side=bottom]:slide-in-from-top-2 data-[side=left]:slide-in-from-right-2 data-[side=right]:slide-in-from-left-2 data-[side=top]:slide-in-from-bottom-2\",\n        position === \"popper\" &&\n          \"data-[side=bottom]:translate-y-1 data-[side=left]:-translate-x-1 data-[side=right]:translate-x-1 data-[side=top]:-translate-y-1\",\n        className\n      )}\n      position={position}\n      {...props}\n    >\n      <SelectScrollUpButton />\n      
<SelectPrimitive.Viewport\n        className={cn(\n          \"p-1\",\n          position === \"popper\" &&\n            \"h-[var(--radix-select-trigger-height)] w-full min-w-[var(--radix-select-trigger-width)]\"\n        )}\n      >\n        {children}\n      </SelectPrimitive.Viewport>\n      <SelectScrollDownButton />\n    </SelectPrimitive.Content>\n  </SelectPrimitive.Portal>\n))\nSelectContent.displayName = SelectPrimitive.Content.displayName\n\nconst SelectLabel = React.forwardRef<\n  React.ElementRef<typeof SelectPrimitive.Label>,\n  React.ComponentPropsWithoutRef<typeof SelectPrimitive.Label>\n>(({ className, ...props }, ref) => (\n  <SelectPrimitive.Label\n    ref={ref}\n    className={cn(\"px-2 py-1.5 text-sm font-semibold\", className)}\n    {...props}\n  />\n))\nSelectLabel.displayName = SelectPrimitive.Label.displayName\n\nconst SelectItem = React.forwardRef<\n  React.ElementRef<typeof SelectPrimitive.Item>,\n  React.ComponentPropsWithoutRef<typeof SelectPrimitive.Item>\n>(({ className, children, ...props }, ref) => (\n  <SelectPrimitive.Item\n    ref={ref}\n    className={cn(\n      \"relative flex w-full cursor-default select-none items-center rounded-sm py-1.5 pl-2 pr-8 text-sm outline-none focus:bg-accent focus:text-accent-foreground data-[disabled]:pointer-events-none data-[disabled]:opacity-50\",\n      className\n    )}\n    {...props}\n  >\n    <span className=\"absolute right-2 flex h-3.5 w-3.5 items-center justify-center\">\n      <SelectPrimitive.ItemIndicator>\n        <CheckIcon className=\"h-4 w-4\" />\n      </SelectPrimitive.ItemIndicator>\n    </span>\n    <SelectPrimitive.ItemText>\n      {typeof children === \"string\" ? 
<span>{children}</span> : children}\n    </SelectPrimitive.ItemText>\n  </SelectPrimitive.Item>\n))\nSelectItem.displayName = SelectPrimitive.Item.displayName\n\nconst SelectSeparator = React.forwardRef<\n  React.ElementRef<typeof SelectPrimitive.Separator>,\n  React.ComponentPropsWithoutRef<typeof SelectPrimitive.Separator>\n>(({ className, ...props }, ref) => (\n  <SelectPrimitive.Separator\n    ref={ref}\n    className={cn(\"-mx-1 my-1 h-px bg-muted\", className)}\n    {...props}\n  />\n))\nSelectSeparator.displayName = SelectPrimitive.Separator.displayName\n\nexport {\n  Select,\n  SelectGroup,\n  SelectValue,\n  SelectTrigger,\n  SelectContent,\n  SelectLabel,\n  SelectItem,\n  SelectSeparator,\n  SelectScrollUpButton,\n  SelectScrollDownButton,\n}\n"
  },
  {
    "path": "frontend/src/components/ui/separator.tsx",
    "content": "import * as React from \"react\"\nimport * as SeparatorPrimitive from \"@radix-ui/react-separator\"\n\nimport { cn } from \"@/lib/utils\"\n\nconst Separator = React.forwardRef<\n  React.ElementRef<typeof SeparatorPrimitive.Root>,\n  React.ComponentPropsWithoutRef<typeof SeparatorPrimitive.Root>\n>(\n  (\n    { className, orientation = \"horizontal\", decorative = true, ...props },\n    ref\n  ) => (\n    <SeparatorPrimitive.Root\n      ref={ref}\n      decorative={decorative}\n      orientation={orientation}\n      className={cn(\n        \"shrink-0 bg-border\",\n        orientation === \"horizontal\" ? \"h-[1px] w-full\" : \"h-full w-[1px]\",\n        className\n      )}\n      {...props}\n    />\n  )\n)\nSeparator.displayName = SeparatorPrimitive.Root.displayName\n\nexport { Separator }\n"
  },
  {
    "path": "frontend/src/components/ui/switch.tsx",
    "content": "import * as React from \"react\";\nimport * as SwitchPrimitives from \"@radix-ui/react-switch\";\n\nimport { cn } from \"@/lib/utils\";\n\nconst Switch = React.forwardRef<\n  React.ElementRef<typeof SwitchPrimitives.Root>,\n  React.ComponentPropsWithoutRef<typeof SwitchPrimitives.Root>\n>(({ className, ...props }, ref) => (\n  <SwitchPrimitives.Root\n    className={cn(\n      \"peer inline-flex h-5 w-9 shrink-0 cursor-pointer items-center rounded-full border-2 border-transparent shadow-sm transition-colors focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 focus-visible:ring-offset-background disabled:cursor-not-allowed disabled:opacity-50 data-[state=checked]:bg-primary data-[state=unchecked]:bg-input\",\n      className\n    )}\n    {...props}\n    ref={ref}\n  >\n    <SwitchPrimitives.Thumb\n      className={cn(\n        \"pointer-events-none block h-4 w-4 rounded-full bg-background shadow-lg ring-0 transition-transform data-[state=checked]:translate-x-4 data-[state=unchecked]:translate-x-0\"\n      )}\n    />\n  </SwitchPrimitives.Root>\n));\nSwitch.displayName = SwitchPrimitives.Root.displayName;\n\nexport { Switch };\n"
  },
  {
    "path": "frontend/src/components/ui/tabs.tsx",
    "content": "import * as React from \"react\"\nimport * as TabsPrimitive from \"@radix-ui/react-tabs\"\n\nimport { cn } from \"@/lib/utils\"\n\nconst Tabs = TabsPrimitive.Root\n\nconst TabsList = React.forwardRef<\n  React.ElementRef<typeof TabsPrimitive.List>,\n  React.ComponentPropsWithoutRef<typeof TabsPrimitive.List>\n>(({ className, ...props }, ref) => (\n  <TabsPrimitive.List\n    ref={ref}\n    className={cn(\n      \"inline-flex h-9 items-center justify-center rounded-lg bg-muted p-1 text-muted-foreground\",\n      className\n    )}\n    {...props}\n  />\n))\nTabsList.displayName = TabsPrimitive.List.displayName\n\nconst TabsTrigger = React.forwardRef<\n  React.ElementRef<typeof TabsPrimitive.Trigger>,\n  React.ComponentPropsWithoutRef<typeof TabsPrimitive.Trigger>\n>(({ className, ...props }, ref) => (\n  <TabsPrimitive.Trigger\n    ref={ref}\n    className={cn(\n      \"inline-flex items-center justify-center whitespace-nowrap rounded-md px-3 py-1 text-sm font-medium ring-offset-background transition-all focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 disabled:pointer-events-none disabled:opacity-50 data-[state=active]:bg-background data-[state=active]:text-foreground data-[state=active]:shadow\",\n      className\n    )}\n    {...props}\n  />\n))\nTabsTrigger.displayName = TabsPrimitive.Trigger.displayName\n\nconst TabsContent = React.forwardRef<\n  React.ElementRef<typeof TabsPrimitive.Content>,\n  React.ComponentPropsWithoutRef<typeof TabsPrimitive.Content>\n>(({ className, ...props }, ref) => (\n  <TabsPrimitive.Content\n    ref={ref}\n    className={cn(\n      \"mt-2 ring-offset-background focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2\",\n      className\n    )}\n    {...props}\n  />\n))\nTabsContent.displayName = TabsPrimitive.Content.displayName\n\nexport { Tabs, TabsList, TabsTrigger, TabsContent }\n"
  },
  {
    "path": "frontend/src/components/ui/textarea.tsx",
    "content": "import * as React from \"react\"\n\nimport { cn } from \"@/lib/utils\"\n\nexport interface TextareaProps\n  extends React.TextareaHTMLAttributes<HTMLTextAreaElement> {}\n\nconst Textarea = React.forwardRef<HTMLTextAreaElement, TextareaProps>(\n  ({ className, ...props }, ref) => {\n    return (\n      <textarea\n        className={cn(\n          \"flex min-h-[60px] w-full rounded-md border border-input bg-transparent px-3 py-2 text-sm shadow-sm placeholder:text-muted-foreground focus-visible:outline-none focus-visible:ring-1 focus-visible:ring-ring disabled:cursor-not-allowed disabled:opacity-50\",\n          className\n        )}\n        ref={ref}\n        {...props}\n      />\n    )\n  }\n)\nTextarea.displayName = \"Textarea\"\n\nexport { Textarea }\n"
  },
  {
    "path": "frontend/src/components/unified-input/UnifiedInputPane.tsx",
    "content": "import React, { useState } from \"react\";\nimport { Tabs, TabsContent, TabsList, TabsTrigger } from \"../ui/tabs\";\nimport { Stack } from \"../../lib/stacks\";\nimport { Settings } from \"../../types\";\nimport UploadTab from \"./tabs/UploadTab\";\nimport UrlTab from \"./tabs/UrlTab\";\nimport TextTab from \"./tabs/TextTab\";\nimport ImportTab from \"./tabs/ImportTab\";\n\ninterface Props {\n  doCreate: (\n    images: string[],\n    inputMode: \"image\" | \"video\",\n    textPrompt?: string\n  ) => void;\n  doCreateFromText: (text: string) => void;\n  importFromCode: (code: string, stack: Stack) => void;\n  settings: Settings;\n  setSettings: React.Dispatch<React.SetStateAction<Settings>>;\n}\n\ntype InputTab = \"upload\" | \"url\" | \"text\" | \"import\";\n\nfunction UnifiedInputPane({\n  doCreate,\n  doCreateFromText,\n  importFromCode,\n  settings,\n  setSettings,\n}: Props) {\n  const [activeTab, setActiveTab] = useState<InputTab>(\"upload\");\n\n  function setStack(stack: Stack) {\n    setSettings((prev: Settings) => ({\n      ...prev,\n      generatedCodeConfig: stack,\n    }));\n  }\n\n  return (\n    <div className=\"w-full max-w-4xl mx-auto px-4\">\n      <Tabs\n        value={activeTab}\n        onValueChange={(value) => setActiveTab(value as InputTab)}\n        className=\"w-full\"\n      >\n        <TabsList className=\"grid w-full grid-cols-4 mb-6\">\n          <TabsTrigger\n            value=\"upload\"\n            className=\"flex items-center gap-2\"\n            data-testid=\"tab-upload\"\n          >\n            <UploadIcon />\n            <span className=\"hidden sm:inline\">Upload</span>\n          </TabsTrigger>\n          <TabsTrigger\n            value=\"url\"\n            className=\"flex items-center gap-2\"\n            data-testid=\"tab-url\"\n          >\n            <UrlIcon />\n            <span className=\"hidden sm:inline\">URL</span>\n          </TabsTrigger>\n          <TabsTrigger\n            value=\"text\"\n   
         className=\"flex items-center gap-2\"\n            data-testid=\"tab-text\"\n          >\n            <TextIcon />\n            <span className=\"hidden sm:inline\">Text</span>\n          </TabsTrigger>\n          <TabsTrigger\n            value=\"import\"\n            className=\"flex items-center gap-2\"\n            data-testid=\"tab-import\"\n          >\n            <ImportIcon />\n            <span className=\"hidden sm:inline\">Import</span>\n          </TabsTrigger>\n        </TabsList>\n\n        <TabsContent value=\"upload\" className=\"mt-0\">\n          <UploadTab\n            doCreate={doCreate}\n            stack={settings.generatedCodeConfig}\n            setStack={setStack}\n          />\n        </TabsContent>\n\n        <TabsContent value=\"url\" className=\"mt-0\">\n          <UrlTab\n            doCreate={doCreate}\n            screenshotOneApiKey={settings.screenshotOneApiKey}\n            stack={settings.generatedCodeConfig}\n            setStack={setStack}\n          />\n        </TabsContent>\n\n        <TabsContent value=\"text\" className=\"mt-0\">\n          <TextTab\n            doCreateFromText={doCreateFromText}\n            stack={settings.generatedCodeConfig}\n            setStack={setStack}\n          />\n        </TabsContent>\n\n        <TabsContent value=\"import\" className=\"mt-0\">\n          <ImportTab importFromCode={importFromCode} />\n        </TabsContent>\n      </Tabs>\n    </div>\n  );\n}\n\nfunction UploadIcon() {\n  return (\n    <svg\n      xmlns=\"http://www.w3.org/2000/svg\"\n      width=\"16\"\n      height=\"16\"\n      viewBox=\"0 0 24 24\"\n      fill=\"none\"\n      stroke=\"currentColor\"\n      strokeWidth=\"2\"\n      strokeLinecap=\"round\"\n      strokeLinejoin=\"round\"\n    >\n      <path d=\"M21 15v4a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2v-4\" />\n      <polyline points=\"17 8 12 3 7 8\" />\n      <line x1=\"12\" y1=\"3\" x2=\"12\" y2=\"15\" />\n    </svg>\n  );\n}\n\nfunction UrlIcon() {\n  return 
(\n    <svg\n      xmlns=\"http://www.w3.org/2000/svg\"\n      width=\"16\"\n      height=\"16\"\n      viewBox=\"0 0 24 24\"\n      fill=\"none\"\n      stroke=\"currentColor\"\n      strokeWidth=\"2\"\n      strokeLinecap=\"round\"\n      strokeLinejoin=\"round\"\n    >\n      <path d=\"M10 13a5 5 0 0 0 7.54.54l3-3a5 5 0 0 0-7.07-7.07l-1.72 1.71\" />\n      <path d=\"M14 11a5 5 0 0 0-7.54-.54l-3 3a5 5 0 0 0 7.07 7.07l1.71-1.71\" />\n    </svg>\n  );\n}\n\nfunction TextIcon() {\n  return (\n    <svg\n      xmlns=\"http://www.w3.org/2000/svg\"\n      width=\"16\"\n      height=\"16\"\n      viewBox=\"0 0 24 24\"\n      fill=\"none\"\n      stroke=\"currentColor\"\n      strokeWidth=\"2\"\n      strokeLinecap=\"round\"\n      strokeLinejoin=\"round\"\n    >\n      <path d=\"M17 6.1H3\" />\n      <path d=\"M21 12.1H3\" />\n      <path d=\"M15.1 18H3\" />\n    </svg>\n  );\n}\n\nfunction ImportIcon() {\n  return (\n    <svg\n      xmlns=\"http://www.w3.org/2000/svg\"\n      width=\"16\"\n      height=\"16\"\n      viewBox=\"0 0 24 24\"\n      fill=\"none\"\n      stroke=\"currentColor\"\n      strokeWidth=\"2\"\n      strokeLinecap=\"round\"\n      strokeLinejoin=\"round\"\n    >\n      <polyline points=\"16 18 22 12 16 6\" />\n      <polyline points=\"8 6 2 12 8 18\" />\n    </svg>\n  );\n}\n\nexport default UnifiedInputPane;\n"
  },
  {
    "path": "frontend/src/components/unified-input/tabs/ImportTab.tsx",
    "content": "import { useState, useRef, useEffect } from \"react\";\nimport { useDropzone } from \"react-dropzone\";\nimport { Button } from \"../../ui/button\";\nimport { Textarea } from \"../../ui/textarea\";\nimport OutputSettingsSection from \"../../settings/OutputSettingsSection\";\nimport toast from \"react-hot-toast\";\nimport { Stack } from \"../../../lib/stacks\";\n\ninterface Props {\n  importFromCode: (code: string, stack: Stack) => void;\n}\n\nfunction ImportTab({ importFromCode }: Props) {\n  const [code, setCode] = useState(\"\");\n  const [stack, setStack] = useState<Stack | undefined>(undefined);\n  const textareaRef = useRef<HTMLTextAreaElement>(null);\n  const [isDraggingFile, setIsDraggingFile] = useState(false);\n\n  useEffect(() => {\n    textareaRef.current?.focus();\n  }, []);\n\n  const doImport = () => {\n    if (code === \"\") {\n      toast.error(\"Please paste in some code\");\n      return;\n    }\n\n    if (stack === undefined) {\n      toast.error(\"Please select your stack\");\n      return;\n    }\n\n    importFromCode(code, stack);\n  };\n\n  const handleKeyDown = (e: React.KeyboardEvent) => {\n    if (e.key === \"Enter\" && (e.metaKey || e.ctrlKey)) {\n      e.preventDefault();\n      doImport();\n    }\n  };\n\n  const { getRootProps, getInputProps } = useDropzone({\n    accept: {\n      \"text/html\": [\".html\", \".htm\"],\n    },\n    maxFiles: 1,\n    noClick: true,\n    noKeyboard: true,\n    onDragEnter: () => setIsDraggingFile(true),\n    onDragLeave: () => setIsDraggingFile(false),\n    onDrop: async (acceptedFiles) => {\n      setIsDraggingFile(false);\n      const file = acceptedFiles[0];\n      if (!file) return;\n      const contents = await file.text();\n      setCode(contents);\n      setTimeout(() => textareaRef.current?.focus(), 50);\n    },\n  });\n\n  return (\n    <div className=\"flex flex-col items-center gap-6\">\n      <div className=\"w-full max-w-lg\">\n        <div className=\"flex flex-col gap-6 p-8 
border border-gray-200 dark:border-zinc-700 rounded-xl bg-gray-50/50 dark:bg-zinc-900/50\">\n          <div className=\"flex flex-col items-center gap-3\">\n            <div className=\"w-16 h-16 rounded-full bg-gray-100 dark:bg-zinc-800 flex items-center justify-center\">\n              <svg\n                xmlns=\"http://www.w3.org/2000/svg\"\n                width=\"28\"\n                height=\"28\"\n                viewBox=\"0 0 24 24\"\n                fill=\"none\"\n                stroke=\"currentColor\"\n                strokeWidth=\"2\"\n                strokeLinecap=\"round\"\n                strokeLinejoin=\"round\"\n                className=\"text-gray-400 dark:text-zinc-500\"\n              >\n                <polyline points=\"16 18 22 12 16 6\" />\n                <polyline points=\"8 6 2 12 8 18\" />\n              </svg>\n            </div>\n\n            <div className=\"text-center\">\n              <h3 className=\"text-gray-700 dark:text-zinc-200 font-medium\">Import Existing Code</h3>\n            </div>\n          </div>\n\n          <div className=\"space-y-4\">\n            <div\n              {...getRootProps({\n                className: `rounded-lg ${\n                  isDraggingFile ? 
\"ring-2 ring-blue-300 dark:ring-blue-700 ring-offset-2 dark:ring-offset-zinc-900\" : \"\"\n                }`,\n              })}\n            >\n              <input {...getInputProps()} />\n              <Textarea\n                ref={textareaRef}\n                value={code}\n                onChange={(e) => setCode(e.target.value)}\n                onKeyDown={handleKeyDown}\n                className=\"w-full h-48 font-mono text-sm resize-none\"\n                placeholder=\"Paste your HTML code here or drag/drop a .html file...\"\n                data-testid=\"import-input\"\n              />\n            </div>\n\n            <OutputSettingsSection\n              stack={stack}\n              setStack={(config: Stack) => setStack(config)}\n              label=\"Stack:\"\n              shouldDisableUpdates={false}\n            />\n\n            <Button\n              onClick={doImport}\n              className=\"w-full\"\n              size=\"lg\"\n              data-testid=\"import-submit\"\n            >\n              Import Code\n            </Button>\n\n            <p className=\"text-xs text-gray-400 dark:text-zinc-500 text-center\">\n              Press Cmd/Ctrl + Enter to import\n            </p>\n          </div>\n        </div>\n      </div>\n    </div>\n  );\n}\n\nexport default ImportTab;\n"
  },
  {
    "path": "frontend/src/components/unified-input/tabs/TextTab.tsx",
    "content": "import { useState, useRef, useEffect } from \"react\";\nimport { Button } from \"../../ui/button\";\nimport { Textarea } from \"../../ui/textarea\";\nimport toast from \"react-hot-toast\";\nimport OutputSettingsSection from \"../../settings/OutputSettingsSection\";\nimport { Stack } from \"../../../lib/stacks\";\n\ninterface Props {\n  doCreateFromText: (text: string) => void;\n  stack: Stack;\n  setStack: (stack: Stack) => void;\n}\n\nconst EXAMPLE_PROMPTS = [\n  \"An ecommerce homepage for eco-friendly skincare with product grid, reviews, and newsletter signup\",\n  \"A portfolio site for a product designer with case studies, process steps, and contact\",\n  \"A mobile fitness app dashboard with workout plan, progress ring, and quick-start buttons\",\n  \"A music streaming app with now-playing, recommended playlists, and recent listens\",\n];\n\nfunction TextTab({ doCreateFromText, stack, setStack }: Props) {\n  const [text, setText] = useState(\"\");\n  const textareaRef = useRef<HTMLTextAreaElement>(null);\n\n  useEffect(() => {\n    textareaRef.current?.focus();\n  }, []);\n\n  const handleGenerate = () => {\n    if (text.trim() === \"\") {\n      toast.error(\"Please enter a description\");\n      return;\n    }\n    doCreateFromText(text);\n  };\n\n  const handleKeyDown = (e: React.KeyboardEvent) => {\n    if (e.key === \"Enter\" && (e.metaKey || e.ctrlKey)) {\n      e.preventDefault();\n      handleGenerate();\n    }\n  };\n\n  const handleExampleClick = (example: string) => {\n    setText(example);\n    textareaRef.current?.focus();\n  };\n\n  return (\n    <div className=\"flex flex-col items-center gap-6\">\n      <div className=\"w-full max-w-lg\">\n        <div className=\"flex flex-col gap-6 p-8 border border-gray-200 dark:border-zinc-700 rounded-xl bg-gray-50/50 dark:bg-zinc-900/50\">\n          <div className=\"flex flex-col items-center gap-3\">\n            <div className=\"w-16 h-16 rounded-full bg-gray-100 dark:bg-zinc-800 flex 
items-center justify-center\">\n              <svg\n                xmlns=\"http://www.w3.org/2000/svg\"\n                width=\"28\"\n                height=\"28\"\n                viewBox=\"0 0 24 24\"\n                fill=\"none\"\n                stroke=\"currentColor\"\n                strokeWidth=\"2\"\n                strokeLinecap=\"round\"\n                strokeLinejoin=\"round\"\n                className=\"text-gray-400 dark:text-zinc-500\"\n              >\n                <path d=\"M17 6.1H3\" />\n                <path d=\"M21 12.1H3\" />\n                <path d=\"M15.1 18H3\" />\n              </svg>\n            </div>\n\n            <div className=\"text-center\">\n              <h3 className=\"text-gray-700 dark:text-zinc-200 font-medium\">Generate from Text</h3>\n            </div>\n          </div>\n\n          <div className=\"space-y-4\">\n            <Textarea\n              ref={textareaRef}\n              rows={4}\n              placeholder=\"Describe the UI you want to create...\"\n              className=\"w-full resize-none\"\n              value={text}\n              onChange={(e) => setText(e.target.value)}\n              onKeyDown={handleKeyDown}\n              data-testid=\"text-input\"\n            />\n\n            <div className=\"flex flex-col gap-2\">\n              <p className=\"text-xs text-gray-500 dark:text-zinc-400\">Try an example:</p>\n              <div className=\"flex flex-wrap gap-2\">\n                {EXAMPLE_PROMPTS.map((example, index) => (\n                  <button\n                    key={index}\n                    onClick={() => handleExampleClick(example)}\n                    className=\"text-xs px-2.5 py-1.5 rounded-full bg-gray-100 dark:bg-zinc-800 text-gray-600 dark:text-zinc-300 hover:bg-gray-200 dark:hover:bg-zinc-700 transition-colors truncate max-w-[200px]\"\n                    title={example}\n                  >\n                    {example.length > 30 ? 
example.slice(0, 30) + \"...\" : example}\n                  </button>\n                ))}\n              </div>\n            </div>\n\n            <OutputSettingsSection\n              stack={stack}\n              setStack={setStack}\n            />\n\n            <Button\n              onClick={handleGenerate}\n              className=\"w-full\"\n              size=\"lg\"\n              data-testid=\"text-generate\"\n            >\n              Generate\n            </Button>\n\n            <p className=\"text-xs text-gray-400 dark:text-zinc-500 text-center\">\n              Press Cmd/Ctrl + Enter to generate\n            </p>\n          </div>\n        </div>\n      </div>\n    </div>\n  );\n}\n\nexport default TextTab;\n"
  },
  {
    "path": "frontend/src/components/unified-input/tabs/UploadTab.tsx",
    "content": "import { useState, useEffect, useMemo, useRef, useCallback } from \"react\";\nimport { useDropzone } from \"react-dropzone\";\nimport { toast } from \"react-hot-toast\";\nimport { Cross2Icon, ImageIcon } from \"@radix-ui/react-icons\";\nimport { Button } from \"../../ui/button\";\nimport { ScreenRecorderState } from \"../../../types\";\nimport ScreenRecorder from \"../../recording/ScreenRecorder\";\nimport OutputSettingsSection from \"../../settings/OutputSettingsSection\";\nimport { Stack } from \"../../../lib/stacks\";\n\nfunction fileToDataURL(file: File): Promise<string> {\n  return new Promise((resolve, reject) => {\n    const reader = new FileReader();\n    reader.onload = () => {\n      const result = reader.result as string;\n      if (result.startsWith(\"data:application/octet-stream\") && file.type) {\n        const correctedResult = result.replace(\n          \"data:application/octet-stream\",\n          `data:${file.type}`\n        );\n        resolve(correctedResult);\n      } else {\n        resolve(result);\n      }\n    };\n    reader.onerror = (error) => reject(error);\n    reader.readAsDataURL(file);\n  });\n}\n\ntype FileWithPreview = {\n  preview: string;\n} & File;\n\nconst MAX_FILES = 5;\n\nconst isVideoFile = (file: File) =>\n  file.type.startsWith(\"video/\") ||\n  [\".mp4\", \".mov\", \".webm\"].some((ext) =>\n    file.name.toLowerCase().endsWith(ext)\n  );\n\ninterface Props {\n  doCreate: (\n    referenceImages: string[],\n    inputMode: \"image\" | \"video\",\n    textPrompt?: string\n  ) => void;\n  stack: Stack;\n  setStack: (stack: Stack) => void;\n}\n\nfunction UploadTab({ doCreate, stack, setStack }: Props) {\n  const [files, setFiles] = useState<FileWithPreview[]>([]);\n  const [uploadedDataUrls, setUploadedDataUrls] = useState<string[]>([]);\n  const [uploadedInputMode, setUploadedInputMode] = useState<\n    \"image\" | \"video\"\n  >(\"image\");\n  const [textPrompt, setTextPrompt] = useState(\"\");\n  const 
[showTextPrompt, setShowTextPrompt] = useState(false);\n  const [selectedIndex, setSelectedIndex] = useState(0);\n  const textInputRef = useRef<HTMLTextAreaElement>(null);\n  const filesRef = useRef<FileWithPreview[]>([]);\n  const [screenRecorderState, setScreenRecorderState] =\n    useState<ScreenRecorderState>(ScreenRecorderState.INITIAL);\n\n  const hasUploadedFile = uploadedDataUrls.length > 0;\n  const remainingSlots = Math.max(0, MAX_FILES - files.length);\n  const isAtLimit = remainingSlots === 0;\n\n  const handleGenerate = useCallback(() => {\n    if (uploadedDataUrls.length > 0) {\n      doCreate(uploadedDataUrls, uploadedInputMode, textPrompt);\n    }\n  }, [uploadedDataUrls, uploadedInputMode, textPrompt, doCreate]);\n\n  useEffect(() => {\n    if (!hasUploadedFile) return;\n\n    const handleGlobalKeyDown = (e: KeyboardEvent) => {\n      if (e.key === \"Enter\" && !e.shiftKey) {\n        if (document.activeElement === textInputRef.current) return;\n        e.preventDefault();\n        handleGenerate();\n      }\n    };\n\n    document.addEventListener(\"keydown\", handleGlobalKeyDown);\n    return () => document.removeEventListener(\"keydown\", handleGlobalKeyDown);\n  }, [hasUploadedFile, handleGenerate]);\n\n  const handleClear = () => {\n    files.forEach((file) => URL.revokeObjectURL(file.preview));\n    setUploadedDataUrls([]);\n    setFiles([]);\n    setTextPrompt(\"\");\n    setShowTextPrompt(false);\n    setUploadedInputMode(\"image\");\n    setSelectedIndex(0);\n  };\n\n  const handleKeyDown = (e: React.KeyboardEvent<HTMLTextAreaElement>) => {\n    if (e.key === \"Enter\" && !e.shiftKey) {\n      e.preventDefault();\n      handleGenerate();\n    }\n  };\n\n  const handleAddFiles = useCallback(\n    async (acceptedFiles: File[]) => {\n      if (acceptedFiles.length === 0) return;\n\n      const incomingHasVideo = acceptedFiles.some(isVideoFile);\n      const hasExistingImages = files.length > 0 && uploadedInputMode === \"image\";\n\n      if 
(incomingHasVideo && (acceptedFiles.length > 1 || hasExistingImages)) {\n        toast.error(\n          `Upload either one video or up to ${MAX_FILES} screenshots (not both).`\n        );\n        return;\n      }\n\n      if (uploadedInputMode === \"video\" && files.length > 0) {\n        toast.error(\"Remove the video to add images.\");\n        return;\n      }\n\n      if (!incomingHasVideo && files.length >= MAX_FILES) {\n        toast.error(\n          `You’ve reached the limit of ${MAX_FILES} screenshots. Remove one to add another.`\n        );\n        return;\n      }\n\n      let filesToAdd = acceptedFiles;\n      if (!incomingHasVideo && files.length + acceptedFiles.length > MAX_FILES) {\n        const remainingSlots = MAX_FILES - files.length;\n        toast.error(\n          `Only ${remainingSlots} more screenshot${\n            remainingSlots === 1 ? \"\" : \"s\"\n          } will be added to stay within the ${MAX_FILES}-screenshot limit.`\n        );\n        filesToAdd = acceptedFiles.slice(0, remainingSlots);\n      }\n\n      const newFiles = filesToAdd.map((file: File) =>\n        Object.assign(file, {\n          preview: URL.createObjectURL(file),\n        })\n      ) as FileWithPreview[];\n\n      try {\n        const dataUrls = await Promise.all(filesToAdd.map((file) => fileToDataURL(file)));\n        if (dataUrls.length === 0) return;\n\n        if (incomingHasVideo) {\n          files.forEach((file) => URL.revokeObjectURL(file.preview));\n          setFiles(newFiles);\n          setUploadedDataUrls(dataUrls as string[]);\n          setUploadedInputMode(\"video\");\n          setSelectedIndex(0);\n        } else {\n          setFiles((prev) => [...prev, ...newFiles]);\n          setUploadedDataUrls((prev) => [...prev, ...(dataUrls as string[])]);\n          setUploadedInputMode(\"image\");\n          if (files.length === 0) {\n            setSelectedIndex(0);\n          }\n        }\n\n        setTimeout(() => textInputRef.current?.focus(), 
100);\n      } catch (error) {\n        newFiles.forEach((file) => URL.revokeObjectURL(file.preview));\n        toast.error(\"Error reading files.\");\n        console.error(\"Error reading files:\", error);\n      }\n    },\n    [files, uploadedInputMode]\n  );\n\n  const {\n    getRootProps,\n    getInputProps,\n    isFocused,\n    isDragAccept,\n    isDragReject,\n    isDragActive,\n    open,\n  } = useDropzone({\n    maxFiles: MAX_FILES,\n    maxSize: 1024 * 1024 * 20,\n    noClick: true,\n    accept: {\n      \"image/png\": [\".png\"],\n      \"image/jpeg\": [\".jpeg\"],\n      \"image/jpg\": [\".jpg\"],\n      \"video/quicktime\": [\".mov\"],\n      \"video/mp4\": [\".mp4\"],\n      \"video/webm\": [\".webm\"],\n    },\n    onDrop: handleAddFiles,\n    onDropRejected: (rejectedFiles) => {\n      const firstError = rejectedFiles[0]?.errors?.[0];\n      if (!firstError) {\n        toast.error(\"Some files were rejected.\");\n        return;\n      }\n\n      if (firstError.code === \"file-too-large\") {\n        toast.error(\"One or more files exceed the 20MB limit.\");\n        return;\n      }\n\n      if (firstError.code === \"file-invalid-type\") {\n        toast.error(\"Unsupported file type. 
Use PNG, JPG, MP4, MOV, or WebM.\");\n        return;\n      }\n\n      if (firstError.code === \"too-many-files\") {\n        toast.error(`You can upload up to ${MAX_FILES} screenshots.`);\n        return;\n      }\n\n      toast.error(firstError.message);\n    },\n  });\n\n  useEffect(() => {\n    filesRef.current = files;\n  }, [files]);\n\n  useEffect(() => {\n    return () => filesRef.current.forEach((file) => URL.revokeObjectURL(file.preview));\n  }, []);\n\n  const dropzoneClassName = useMemo(() => {\n    const base =\n      \"flex flex-1 w-full min-h-[320px] flex-col items-center justify-center p-5 border-2 border-dashed rounded-xl bg-gray-50 dark:bg-zinc-900 text-gray-500 dark:text-zinc-400 outline-none transition-all cursor-pointer\";\n    if (isFocused) {\n      return `${base} border-blue-500 bg-blue-50 dark:bg-blue-950/30`;\n    }\n    if (isDragAccept) {\n      return `${base} border-green-500 bg-green-50 dark:bg-green-950/30`;\n    }\n    if (isDragReject) {\n      return `${base} border-red-500 bg-red-50 dark:bg-red-950/30`;\n    }\n    return `${base} border-gray-200 dark:border-zinc-700`;\n  }, [isFocused, isDragAccept, isDragReject]);\n\n  const handleScreenRecorderGenerate = (\n    images: string[],\n    inputMode: \"image\" | \"video\"\n  ) => {\n    doCreate(images, inputMode, \"\");\n  };\n\n  const handleRemoveImage = (index: number) => {\n    if (uploadedInputMode === \"video\" || files.length === 1) {\n      handleClear();\n      return;\n    }\n\n    const removed = files[index];\n    if (removed) {\n      URL.revokeObjectURL(removed.preview);\n    }\n\n    setFiles((prev) => prev.filter((_, i) => i !== index));\n    setUploadedDataUrls((prev) => prev.filter((_, i) => i !== index));\n    setSelectedIndex((prev) => {\n      if (prev === index) {\n        return Math.max(0, index - 1);\n      }\n      if (prev > index) {\n        return prev - 1;\n      }\n      return prev;\n    });\n  };\n\n  return (\n    <div className=\"flex flex-col 
items-center gap-6\">\n      {screenRecorderState === ScreenRecorderState.INITIAL && !hasUploadedFile && (\n        <div {...getRootProps({ className: dropzoneClassName })} data-testid=\"upload-dropzone\">\n          <input data-testid=\"upload-input\" {...getInputProps()} />\n          <div className=\"flex flex-col items-center gap-3\">\n            <div className=\"w-16 h-16 rounded-full bg-gray-100 dark:bg-zinc-800 flex items-center justify-center\">\n              <svg\n                xmlns=\"http://www.w3.org/2000/svg\"\n                width=\"28\"\n                height=\"28\"\n                viewBox=\"0 0 24 24\"\n                fill=\"none\"\n                stroke=\"currentColor\"\n                strokeWidth=\"2\"\n                strokeLinecap=\"round\"\n                strokeLinejoin=\"round\"\n                className=\"text-gray-400 dark:text-zinc-500\"\n              >\n                <path d=\"M21 15v4a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2v-4\" />\n                <polyline points=\"17 8 12 3 7 8\" />\n                <line x1=\"12\" y1=\"3\" x2=\"12\" y2=\"15\" />\n              </svg>\n            </div>\n            <div className=\"text-center\">\n              <p className=\"text-gray-700 dark:text-zinc-200 font-medium\">\n                Drop up to {MAX_FILES} screenshots or a single video\n              </p>\n            </div>\n            <p className=\"text-xs text-gray-400 dark:text-zinc-500 mt-2\">\n              Supports PNG, JPG, MP4, MOV, WebM (max 20MB each, 30s video)\n            </p>\n            <button\n              type=\"button\"\n              onClick={open}\n              className=\"text-sm text-gray-600 dark:text-zinc-400 hover:text-gray-800 dark:hover:text-zinc-200 underline\"\n            >\n              Browse files\n            </button>\n          </div>\n        </div>\n      )}\n\n      {hasUploadedFile && (\n        <div className=\"flex flex-col items-center gap-4 w-full\">\n          <div className=\"relative 
w-full max-w-3xl\">\n            {uploadedInputMode === \"video\" ? (\n              <div className=\"relative rounded-lg border border-gray-200 dark:border-zinc-700 bg-white dark:bg-zinc-900 p-3\">\n                <video\n                  src={files[0]?.preview}\n                  className=\"w-full h-auto max-h-[400px] object-contain rounded-md border border-gray-100 dark:border-zinc-700\"\n                  controls\n                />\n                <button\n                  onClick={handleClear}\n                  className=\"absolute top-2 right-2 bg-white dark:bg-zinc-800 rounded-full p-1.5 shadow-md hover:bg-gray-100 dark:hover:bg-zinc-700 transition-colors\"\n                  aria-label=\"Remove video\"\n                >\n                  <Cross2Icon className=\"h-4 w-4 text-gray-600 dark:text-zinc-300\" />\n                </button>\n              </div>\n            ) : (\n              <div\n                {...getRootProps({\n                  className: `relative rounded-lg border border-gray-200 dark:border-zinc-700 bg-white dark:bg-zinc-900 p-4 ${\n                    isDragActive ? \"ring-2 ring-blue-200 dark:ring-blue-800\" : \"\"\n                  }`,\n                })}\n              >\n                <input {...getInputProps()} />\n                <div className=\"flex items-center justify-between text-xs uppercase tracking-wide text-gray-400 dark:text-zinc-500\">\n                  <span>{`Uploaded Screenshots (${files.length}/${MAX_FILES})`}</span>\n                  <button\n                    type=\"button\"\n                    onClick={handleClear}\n                    className=\"text-xs text-gray-500 dark:text-zinc-400 hover:text-gray-700 dark:hover:text-zinc-200\"\n                  >\n                    Clear all\n                  </button>\n                </div>\n                <div className=\"mt-1 text-[11px] text-gray-400 dark:text-zinc-500\">\n                  {isAtLimit\n                    ? 
\"Limit reached\"\n                    : `${remainingSlots} remaining`}\n                </div>\n                <div className=\"mt-3 rounded-md border border-gray-100 dark:border-zinc-700 bg-gray-50 dark:bg-zinc-800 p-2 overflow-hidden\">\n                  <div className=\"flex h-[280px] w-full items-center justify-center overflow-hidden rounded bg-white dark:bg-zinc-900\">\n                    {files[selectedIndex] && (\n                      <img\n                        src={files[selectedIndex].preview}\n                        alt={`Uploaded screenshot ${selectedIndex + 1}`}\n                        className=\"h-auto w-auto max-h-full max-w-full object-contain\"\n                      />\n                    )}\n                  </div>\n                </div>\n                <div className=\"mt-3 flex items-center gap-2 overflow-x-auto pb-1\">\n                  {files.map((file, index) => (\n                    <div key={`${file.name}-${index}`} className=\"relative group flex-shrink-0\">\n                      <button\n                        type=\"button\"\n                        onClick={() => setSelectedIndex(index)}\n                        className={`h-14 w-14 rounded-md border overflow-hidden ${\n                          selectedIndex === index\n                            ? 
\"border-blue-500 ring-2 ring-blue-200 dark:ring-blue-800\"\n                            : \"border-gray-200 dark:border-zinc-700\"\n                        }`}\n                        aria-label={`Preview screenshot ${index + 1}`}\n                      >\n                        <img\n                          src={file.preview}\n                          alt={`Thumbnail ${index + 1}`}\n                          className=\"h-full w-full object-cover\"\n                        />\n                      </button>\n                      <button\n                        type=\"button\"\n                        onClick={() => handleRemoveImage(index)}\n                        className=\"absolute -top-1 -right-1 h-4 w-4 bg-gray-800 hover:bg-red-600 text-white rounded-full flex items-center justify-center opacity-0 group-hover:opacity-100 transition-opacity\"\n                        aria-label={`Remove screenshot ${index + 1}`}\n                      >\n                        <Cross2Icon className=\"h-2 w-2\" />\n                      </button>\n                    </div>\n                  ))}\n                  <button\n                    type=\"button\"\n                    onClick={() => {\n                      if (isAtLimit) {\n                        toast.error(\n                          `You’ve reached the limit of ${MAX_FILES} screenshots. Remove one to add another.`\n                        );\n                        return;\n                      }\n                      open();\n                    }}\n                    disabled={isAtLimit}\n                    className={`h-14 w-14 rounded-md border border-dashed flex items-center justify-center flex-shrink-0 ${\n                      isAtLimit\n                        ? 
\"border-gray-200 dark:border-zinc-700 text-gray-300 dark:text-zinc-600 cursor-not-allowed\"\n                        : \"border-gray-300 dark:border-zinc-600 text-gray-500 dark:text-zinc-400 hover:text-gray-700 dark:hover:text-zinc-200 hover:border-gray-400 dark:hover:border-zinc-500\"\n                    }`}\n                    aria-label=\"Add more screenshots\"\n                  >\n                    <ImageIcon className=\"h-5 w-5\" />\n                  </button>\n                </div>\n                <div className=\"mt-2 text-xs text-gray-400 dark:text-zinc-500\">\n                  Drag and drop to add more screenshots\n                </div>\n                {isDragActive && (\n                  <div className=\"absolute inset-0 bg-blue-50/80 dark:bg-blue-950/80 border-2 border-dashed border-blue-300 dark:border-blue-700 rounded-lg flex items-center justify-center pointer-events-none\">\n                    <p className=\"text-blue-600 dark:text-blue-400 font-medium\">Drop to add</p>\n                  </div>\n                )}\n              </div>\n            )}\n          </div>\n\n          {!showTextPrompt ? 
(\n            <button\n              onClick={() => {\n                setShowTextPrompt(true);\n                setTimeout(() => textInputRef.current?.focus(), 50);\n              }}\n              className=\"text-sm text-gray-500 dark:text-zinc-400 hover:text-gray-700 dark:hover:text-zinc-200 underline\"\n            >\n              Add instructions (optional)\n            </button>\n          ) : (\n            <div className=\"w-full max-w-lg\">\n              <textarea\n                ref={textInputRef}\n                value={textPrompt}\n                onChange={(e) => setTextPrompt(e.target.value)}\n                onKeyDown={handleKeyDown}\n                placeholder=\"Describe any specific requirements...\"\n                className=\"w-full p-3 text-sm border border-gray-200 dark:border-zinc-700 bg-white dark:bg-zinc-900 text-gray-900 dark:text-zinc-100 rounded-lg resize-none focus:outline-none focus:ring-2 focus:ring-gray-300 dark:focus:ring-zinc-600 focus:border-transparent placeholder:text-gray-400 dark:placeholder:text-zinc-500\"\n                rows={2}\n              />\n            </div>\n          )}\n\n          <div className=\"w-full max-w-md\">\n            <OutputSettingsSection\n              stack={stack}\n              setStack={setStack}\n            />\n          </div>\n\n          <div className=\"flex flex-col items-center gap-1 w-full max-w-md\">\n            <Button\n              onClick={handleGenerate}\n              className=\"w-full\"\n              size=\"lg\"\n              data-testid=\"upload-generate\"\n            >\n              Generate Code\n            </Button>\n            <p className=\"text-xs text-gray-400 dark:text-zinc-500\">Press Enter to generate</p>\n          </div>\n        </div>\n      )}\n\n      {!hasUploadedFile && (\n        <div className=\"flex flex-col items-center gap-3\">\n          {screenRecorderState === ScreenRecorderState.INITIAL && (\n            <div className=\"flex 
items-center gap-2 text-sm text-gray-500 dark:text-zinc-400\">\n              <div className=\"h-px w-12 bg-gray-300 dark:bg-zinc-600\" />\n              <span>or</span>\n              <div className=\"h-px w-12 bg-gray-300 dark:bg-zinc-600\" />\n            </div>\n          )}\n          <ScreenRecorder\n            screenRecorderState={screenRecorderState}\n            setScreenRecorderState={setScreenRecorderState}\n            generateCode={handleScreenRecorderGenerate}\n            stack={stack}\n            setStack={setStack}\n          />\n        </div>\n      )}\n    </div>\n  );\n}\n\nexport default UploadTab;\n"
  },
  {
    "path": "frontend/src/components/unified-input/tabs/UrlTab.tsx",
    "content": "import { useState } from \"react\";\nimport { HTTP_BACKEND_URL } from \"../../../config\";\nimport { Button } from \"../../ui/button\";\nimport { Input } from \"../../ui/input\";\nimport { toast } from \"react-hot-toast\";\nimport OutputSettingsSection from \"../../settings/OutputSettingsSection\";\nimport { Stack } from \"../../../lib/stacks\";\n\ninterface Props {\n  screenshotOneApiKey: string | null;\n  doCreate: (\n    urls: string[],\n    inputMode: \"image\" | \"video\",\n    textPrompt?: string,\n  ) => void;\n  stack: Stack;\n  setStack: (stack: Stack) => void;\n}\n\nfunction isFigmaUrl(url: string): boolean {\n  return /^https?:\\/\\/([\\w.-]*\\.)?figma\\.com\\//i.test(url.trim());\n}\n\nfunction UrlTab({ doCreate, screenshotOneApiKey, stack, setStack }: Props) {\n  const [isLoading, setIsLoading] = useState(false);\n  const [referenceUrl, setReferenceUrl] = useState(\"\");\n\n  async function takeScreenshot() {\n    const trimmedReferenceUrl = referenceUrl.trim();\n\n    if (!screenshotOneApiKey) {\n      toast.error(\n        \"Please add a ScreenshotOne API key in Settings. You can also upload screenshots directly in the Upload tab.\",\n        { duration: 6000 },\n      );\n      return;\n    }\n\n    if (!trimmedReferenceUrl) {\n      toast.error(\"Please enter a URL\");\n      return;\n    }\n\n    if (trimmedReferenceUrl.toLowerCase().startsWith(\"file://\")) {\n      toast.error(\n        \"file:// URLs can't be screenshot. If you're trying to import a local file, please use the Import tab.\",\n      );\n      return;\n    }\n\n    if (isFigmaUrl(trimmedReferenceUrl)) {\n      toast.error(\n        \"Direct Figma import is not supported. 
Take a screenshot of your design or export the artboards as images, then use the Upload tab.\",\n        { duration: 6000 },\n      );\n      return;\n    }\n\n    try {\n      setIsLoading(true);\n      const response = await fetch(`${HTTP_BACKEND_URL}/api/screenshot`, {\n        method: \"POST\",\n        body: JSON.stringify({\n          url: trimmedReferenceUrl,\n          apiKey: screenshotOneApiKey,\n        }),\n        headers: {\n          \"Content-Type\": \"application/json\",\n        },\n      });\n\n      if (!response.ok) {\n        throw new Error(\"Failed to capture screenshot\");\n      }\n\n      const res = await response.json();\n      doCreate([res.url], \"image\");\n    } catch (error) {\n      console.error(error);\n      toast.error(\"Failed to capture screenshot. Check console for details.\");\n    } finally {\n      setIsLoading(false);\n    }\n  }\n\n  return (\n    <div className=\"flex flex-col items-center gap-6\">\n      <div className=\"w-full max-w-lg\">\n        <div className=\"flex flex-col items-center gap-6 p-8 border border-gray-200 dark:border-zinc-700 rounded-xl bg-gray-50/50 dark:bg-zinc-900/50\">\n          <div className=\"w-16 h-16 rounded-full bg-gray-100 dark:bg-zinc-800 flex items-center justify-center\">\n            <svg\n              xmlns=\"http://www.w3.org/2000/svg\"\n              width=\"28\"\n              height=\"28\"\n              viewBox=\"0 0 24 24\"\n              fill=\"none\"\n              stroke=\"currentColor\"\n              strokeWidth=\"2\"\n              strokeLinecap=\"round\"\n              strokeLinejoin=\"round\"\n              className=\"text-gray-400 dark:text-zinc-500\"\n            >\n              <path d=\"M10 13a5 5 0 0 0 7.54.54l3-3a5 5 0 0 0-7.07-7.07l-1.72 1.71\" />\n              <path d=\"M14 11a5 5 0 0 0-7.54-.54l-3 3a5 5 0 0 0 7.07 7.07l1.71-1.71\" />\n            </svg>\n          </div>\n\n          <div className=\"text-center\">\n            <h3 
className=\"text-gray-700 dark:text-zinc-200 font-medium\">Screenshot from URL</h3>\n          </div>\n\n          <div className=\"w-full space-y-3\">\n            <Input\n              placeholder=\"https://\"\n              onChange={(e) => setReferenceUrl(e.target.value)}\n              value={referenceUrl}\n              onKeyDown={(e) => {\n                if (e.key === \"Enter\" && !isLoading) {\n                  takeScreenshot();\n                }\n              }}\n              className=\"w-full\"\n              data-testid=\"url-input\"\n            />\n            {isFigmaUrl(referenceUrl) && (\n              <p className=\"text-xs text-amber-600 dark:text-amber-400\">\n                Direct Figma import is not supported. Take a screenshot of your\n                design or export the artboards as images, then use the Upload\n                tab.\n              </p>\n            )}\n            <OutputSettingsSection stack={stack} setStack={setStack} />\n\n            <Button\n              onClick={takeScreenshot}\n              disabled={isLoading}\n              className=\"w-full\"\n              size=\"lg\"\n              data-testid=\"url-capture\"\n            >\n              {isLoading ? 
(\n                <span className=\"flex items-center gap-2\">\n                  <svg\n                    className=\"animate-spin h-4 w-4\"\n                    xmlns=\"http://www.w3.org/2000/svg\"\n                    fill=\"none\"\n                    viewBox=\"0 0 24 24\"\n                  >\n                    <circle\n                      className=\"opacity-25\"\n                      cx=\"12\"\n                      cy=\"12\"\n                      r=\"10\"\n                      stroke=\"currentColor\"\n                      strokeWidth=\"4\"\n                    />\n                    <path\n                      className=\"opacity-75\"\n                      fill=\"currentColor\"\n                      d=\"M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z\"\n                    />\n                  </svg>\n                  Capturing...\n                </span>\n              ) : (\n                \"Capture & Generate\"\n              )}\n            </Button>\n          </div>\n\n          <p className=\"text-xs text-gray-400 dark:text-zinc-500 text-center\">\n            Requires ScreenshotOne API key.\n          </p>\n        </div>\n      </div>\n    </div>\n  );\n}\n\nexport default UrlTab;\n"
  },
  {
    "path": "frontend/src/components/variants/Variants.tsx",
    "content": "import { useProjectStore } from \"../../store/project-store\";\nimport { useEffect, useRef, useState } from \"react\";\nimport { useThrottle } from \"../../hooks/useThrottle\";\nimport {\n  CODE_GENERATION_MODEL_DESCRIPTIONS,\n  CodeGenerationModel,\n} from \"../../lib/models\";\nimport WorkingPulse from \"../core/WorkingPulse\";\n\nconst IFRAME_WIDTH = 1280;\nconst IFRAME_HEIGHT = 550;\n\ninterface VariantThumbnailProps {\n  code: string;\n  isSelected: boolean;\n}\n\nfunction VariantThumbnail({ code, isSelected }: VariantThumbnailProps) {\n  const containerRef = useRef<HTMLDivElement>(null);\n  const iframeRef = useRef<HTMLIFrameElement>(null);\n  const [scale, setScale] = useState(0.1);\n\n  const throttledCode = useThrottle(code, isSelected ? 300 : 2000);\n\n  useEffect(() => {\n    const container = containerRef.current;\n    if (!container) return;\n\n    const updateScale = () => {\n      const containerWidth = container.offsetWidth;\n      setScale(containerWidth / IFRAME_WIDTH);\n    };\n\n    updateScale();\n    const resizeObserver = new ResizeObserver(updateScale);\n    resizeObserver.observe(container);\n\n    return () => resizeObserver.disconnect();\n  }, []);\n\n  useEffect(() => {\n    const iframe = iframeRef.current;\n    if (iframe) {\n      iframe.srcdoc = throttledCode;\n    }\n  }, [throttledCode]);\n\n  const scaledHeight = IFRAME_HEIGHT * scale;\n\n  return (\n    <div\n      ref={containerRef}\n      className=\"w-full overflow-hidden rounded border border-gray-200 dark:border-gray-600 bg-white dark:bg-gray-900\"\n      style={{ height: `${scaledHeight}px` }}\n    >\n      <iframe\n        ref={iframeRef}\n        title=\"variant-preview\"\n        className=\"pointer-events-none origin-top-left\"\n        style={{\n          width: `${IFRAME_WIDTH}px`,\n          height: `${IFRAME_HEIGHT}px`,\n          transform: `scale(${scale})`,\n        }}\n        sandbox=\"allow-scripts allow-same-origin\"\n      />\n    </div>\n  
);\n}\n\nfunction Variants() {\n  const { head, commits, updateSelectedVariantIndex } = useProjectStore();\n\n  const commit = head ? commits[head] : null;\n  const variants = commit?.variants || [];\n  const selectedVariantIndex = commit?.selectedVariantIndex || 0;\n\n  const handleVariantClick = (index: number) => {\n    if (index === selectedVariantIndex || !head) return;\n    updateSelectedVariantIndex(head, index);\n  };\n\n  useEffect(() => {\n    const handleKeyDown = (event: KeyboardEvent) => {\n      if (event.altKey && !event.ctrlKey && !event.shiftKey && !event.metaKey) {\n        const code = event.code;\n        if (code >= \"Digit1\" && code <= \"Digit9\") {\n          const variantIndex = parseInt(code.replace(\"Digit\", \"\")) - 1;\n          if (\n            commit &&\n            variantIndex < variants.length &&\n            variants.length > 1 &&\n            !commit.isCommitted\n          ) {\n            event.preventDefault();\n            handleVariantClick(variantIndex);\n          }\n        }\n      }\n    };\n\n    document.addEventListener(\"keydown\", handleKeyDown);\n    return () => document.removeEventListener(\"keydown\", handleKeyDown);\n  }, [variants.length, commit?.isCommitted, selectedVariantIndex, head]);\n\n  if (head === null || !commit) {\n    return null;\n  }\n\n  if (variants.length <= 1 || commit.isCommitted) {\n    return <div className=\"mt-2\"></div>;\n  }\n\n  return (\n    <div className=\"pt-2 pb-1\">\n      <div className=\"grid grid-cols-2 gap-2\">\n        {variants.map((variant, index) => {\n          let statusColor = \"bg-gray-300 dark:bg-gray-600\";\n          if (variant.status === \"complete\") statusColor = \"bg-green-500\";\n          else if (variant.status === \"error\" || variant.status === \"cancelled\") statusColor = \"bg-red-500\";\n\n          return (\n            <div\n              key={index}\n              className={`w-full rounded cursor-pointer overflow-hidden ${\n                index 
=== selectedVariantIndex\n                  ? \"ring-2 ring-blue-400 dark:ring-blue-500\"\n                  : \"ring-1 ring-gray-200 dark:ring-gray-700 hover:ring-gray-300 dark:hover:ring-gray-600\"\n              }`}\n              title={variant.model ? (CODE_GENERATION_MODEL_DESCRIPTIONS[variant.model as CodeGenerationModel]?.name || variant.model) : undefined}\n              onClick={() => handleVariantClick(index)}\n            >\n              <VariantThumbnail\n                code={variant.code}\n                isSelected={index === selectedVariantIndex}\n              />\n              <div className=\"flex items-center px-2 py-1 bg-white dark:bg-zinc-900\">\n                <span className=\"inline-flex min-w-0 items-center text-xs text-gray-500 dark:text-gray-400 whitespace-nowrap\">\n                  <span className={`w-2 h-2 rounded-full mr-1.5 ${statusColor}`} />\n                  Option {index + 1}\n                  {index < 9 && (\n                    <span className=\"text-xs text-gray-400 dark:text-gray-500 font-mono ml-1\">\n                      (⌥{index + 1})\n                    </span>\n                  )}\n                </span>\n                {variant.status === \"generating\" && (\n                  <div\n                    className=\"ml-auto shrink-0 inline-flex items-center\"\n                    role=\"status\"\n                    aria-live=\"polite\"\n                    aria-label=\"Working\"\n                  >\n                    <WorkingPulse />\n                  </div>\n                )}\n              </div>\n            </div>\n          );\n        })}\n      </div>\n    </div>\n  );\n}\n\nexport default Variants;\n"
  },
  {
    "path": "frontend/src/config.ts",
    "content": "// Default to false if set to anything other than \"true\" or unset\nexport const IS_RUNNING_ON_CLOUD =\n  import.meta.env.VITE_IS_DEPLOYED === \"true\" || false;\n\nexport const WS_BACKEND_URL =\n  import.meta.env.VITE_WS_BACKEND_URL || \"ws://127.0.0.1:7001\";\n\nexport const HTTP_BACKEND_URL =\n  import.meta.env.VITE_HTTP_BACKEND_URL || \"http://127.0.0.1:7001\";\n\nexport const PICO_BACKEND_FORM_SECRET =\n  import.meta.env.VITE_PICO_BACKEND_FORM_SECRET || null;\n"
  },
  {
    "path": "frontend/src/constants.ts",
    "content": "//  WebSocket protocol (RFC 6455) allows for the use of custom close codes in the range 4000-4999\nexport const APP_ERROR_WEB_SOCKET_CODE = 4332;\nexport const USER_CLOSE_WEB_SOCKET_CODE = 4333;\n"
  },
  {
    "path": "frontend/src/generateCode.ts",
    "content": "import toast from \"react-hot-toast\";\nimport { WS_BACKEND_URL } from \"./config\";\nimport {\n  APP_ERROR_WEB_SOCKET_CODE,\n  USER_CLOSE_WEB_SOCKET_CODE,\n} from \"./constants\";\nimport { FullGenerationSettings } from \"./types\";\n\nconst ERROR_MESSAGE =\n  \"Error generating code. Check the Developer Console AND the backend logs for details. Feel free to open a Github issue.\";\n\nconst CANCEL_MESSAGE = \"Code generation cancelled\";\n\ntype WebSocketResponse = {\n  type:\n    | \"chunk\"\n    | \"status\"\n    | \"setCode\"\n    | \"error\"\n    | \"variantComplete\"\n    | \"variantError\"\n    | \"variantCount\"\n    | \"variantModels\"\n    | \"thinking\"\n    | \"assistant\"\n    | \"toolStart\"\n    | \"toolResult\";\n  value?: string;\n  data?: any;\n  eventId?: string;\n  variantIndex: number;\n};\n\ninterface CodeGenerationCallbacks {\n  onChange: (chunk: string, variantIndex: number) => void;\n  onSetCode: (code: string, variantIndex: number) => void;\n  onStatusUpdate: (status: string, variantIndex: number) => void;\n  onVariantComplete: (variantIndex: number) => void;\n  onVariantError: (variantIndex: number, error: string) => void;\n  onVariantCount: (count: number) => void;\n  onVariantModels: (models: string[]) => void;\n  onThinking: (content: string, variantIndex: number, eventId?: string) => void;\n  onAssistant: (content: string, variantIndex: number, eventId?: string) => void;\n  onToolStart: (data: any, variantIndex: number, eventId?: string) => void;\n  onToolResult: (data: any, variantIndex: number, eventId?: string) => void;\n  onCancel: (\n    reason: \"user_cancelled\" | \"request_failed\" | \"connection_error\",\n    errorMessage?: string\n  ) => void;\n  onComplete: () => void;\n}\n\nexport function generateCode(\n  wsRef: React.MutableRefObject<WebSocket | null>,\n  params: FullGenerationSettings,\n  callbacks: CodeGenerationCallbacks\n) {\n  const wsUrl = `${WS_BACKEND_URL}/generate-code`;\n  
console.log(\"Connecting to backend @ \", wsUrl);\n\n  const ws = new WebSocket(wsUrl);\n  wsRef.current = ws;\n\n  ws.addEventListener(\"open\", () => {\n    ws.send(JSON.stringify(params));\n  });\n\n  ws.addEventListener(\"message\", async (event: MessageEvent) => {\n    const response = JSON.parse(event.data) as WebSocketResponse;\n    if (response.type === \"chunk\") {\n      callbacks.onChange(response.value || \"\", response.variantIndex);\n    } else if (response.type === \"status\") {\n      callbacks.onStatusUpdate(response.value || \"\", response.variantIndex);\n    } else if (response.type === \"setCode\") {\n      callbacks.onSetCode(response.value || \"\", response.variantIndex);\n    } else if (response.type === \"variantComplete\") {\n      callbacks.onVariantComplete(response.variantIndex);\n    } else if (response.type === \"variantError\") {\n      callbacks.onVariantError(response.variantIndex, response.value || \"\");\n    } else if (response.type === \"variantCount\") {\n      callbacks.onVariantCount(parseInt(response.value || \"1\"));\n    } else if (response.type === \"variantModels\") {\n      callbacks.onVariantModels(response.data?.models || []);\n    } else if (response.type === \"thinking\") {\n      callbacks.onThinking(response.value || \"\", response.variantIndex, response.eventId);\n    } else if (response.type === \"assistant\") {\n      callbacks.onAssistant(response.value || \"\", response.variantIndex, response.eventId);\n    } else if (response.type === \"toolStart\") {\n      callbacks.onToolStart(response.data, response.variantIndex, response.eventId);\n    } else if (response.type === \"toolResult\") {\n      callbacks.onToolResult(response.data, response.variantIndex, response.eventId);\n    } else if (response.type === \"error\") {\n      console.error(\"Error generating code\", response.value);\n      toast.error(response.value || ERROR_MESSAGE);\n    }\n  });\n\n  ws.addEventListener(\"close\", (event) => {\n    
console.log(\"Connection closed\", event.code, event.reason);\n    if (event.code === USER_CLOSE_WEB_SOCKET_CODE) {\n      toast.success(CANCEL_MESSAGE);\n      callbacks.onCancel(\"user_cancelled\");\n    } else if (event.code === APP_ERROR_WEB_SOCKET_CODE) {\n      console.error(\"Known server error\", event);\n      callbacks.onCancel(\"request_failed\", event.reason || ERROR_MESSAGE);\n    } else if (event.code !== 1000) {\n      console.error(\"Unknown server or connection error\", event);\n      toast.error(ERROR_MESSAGE);\n      callbacks.onCancel(\"connection_error\", event.reason || ERROR_MESSAGE);\n    } else {\n      callbacks.onComplete();\n    }\n  });\n\n  ws.addEventListener(\"error\", (error) => {\n    console.error(\"WebSocket error\", error);\n    toast.error(ERROR_MESSAGE);\n  });\n}\n"
  },
  {
    "path": "frontend/src/hooks/useBrowserTabIndicator.ts",
    "content": "import { useEffect } from \"react\";\n\nconst CODING_SETTINGS = {\n  title: \"Coding...\",\n  favicon: \"/favicon/coding.png\",\n};\nconst DEFAULT_SETTINGS = {\n  title: \"Screenshot to Code\",\n  favicon: \"/favicon/main.png\",\n};\n\nconst DEV_FAVICON_COLORS = {\n  default: \"#22c55e\",\n  coding: \"#ef4444\",\n};\n\nconst getAugmentedFaviconDataUrl = (\n  baseUrl: string,\n  dotColor: string\n): Promise<string> =>\n  new Promise((resolve, reject) => {\n    const img = new Image();\n    img.crossOrigin = \"anonymous\";\n    img.onload = () => {\n      const size = Math.max(img.width, img.height, 64);\n      const canvas = document.createElement(\"canvas\");\n      canvas.width = size;\n      canvas.height = size;\n      const ctx = canvas.getContext(\"2d\");\n      if (!ctx) {\n        reject(new Error(\"Canvas not supported\"));\n        return;\n      }\n\n      // Center the favicon in case it's not square\n      const x = (size - img.width) / 2;\n      const y = (size - img.height) / 2;\n      ctx.drawImage(img, x, y);\n\n      // Add a noticeable dev dot in the top-left corner\n      const dotRadius = Math.max(10, Math.round(size * 0.3));\n      const dotCx = size - dotRadius - Math.round(size * 0.04);\n      const dotCy = size - dotRadius - Math.round(size * 0.04);\n      ctx.beginPath();\n      ctx.arc(dotCx, dotCy, dotRadius, 0, Math.PI * 2);\n      ctx.fillStyle = dotColor;\n      ctx.fill();\n      ctx.lineWidth = Math.max(3, Math.round(size * 0.08));\n      ctx.strokeStyle = \"#ffffff\";\n      ctx.stroke();\n\n      resolve(canvas.toDataURL(\"image/png\"));\n    };\n    img.onerror = () => reject(new Error(\"Failed to load favicon\"));\n    img.src = baseUrl;\n  });\n\nconst isDevHost = () => {\n  const host = window.location.hostname;\n  return host === \"localhost\" || host === \"127.0.0.1\" || host === \"0.0.0.0\";\n};\n\nconst useBrowserTabIndicator = (isCoding: boolean) => {\n  useEffect(() => {\n    const settings = isCoding ? 
CODING_SETTINGS : DEFAULT_SETTINGS;\n\n    // Set title first: the dev-favicon branch below returns its cleanup\n    // function early, which would otherwise skip this line\n    document.title = settings.title;\n\n    // Set favicon\n    const faviconEl = document.querySelector(\n      \"link[rel='icon']\"\n    ) as HTMLLinkElement | null;\n    if (faviconEl) {\n      if (isDevHost()) {\n        let cancelled = false;\n        const dotColor = isCoding\n          ? DEV_FAVICON_COLORS.coding\n          : DEV_FAVICON_COLORS.default;\n        getAugmentedFaviconDataUrl(settings.favicon, dotColor)\n          .then((dataUrl) => {\n            if (!cancelled && faviconEl) {\n              faviconEl.href = dataUrl;\n            }\n          })\n          .catch(() => {\n            if (!cancelled && faviconEl) {\n              faviconEl.href = settings.favicon;\n            }\n          });\n        return () => {\n          cancelled = true;\n        };\n      } else {\n        faviconEl.href = settings.favicon;\n      }\n    }\n  }, [isCoding]);\n};\n\nexport default useBrowserTabIndicator;\n"
  },
  {
    "path": "frontend/src/hooks/usePersistedState.ts",
    "content": "import { Dispatch, SetStateAction, useEffect, useState } from 'react';\n\ntype PersistedState<T> = [T, Dispatch<SetStateAction<T>>];\n\nfunction usePersistedState<T>(defaultValue: T, key: string): PersistedState<T> {\n  const [value, setValue] = useState<T>(() => {\n    const value = window.localStorage.getItem(key);\n\n    return value ? (JSON.parse(value) as T) : defaultValue;\n  });\n\n  useEffect(() => {\n    window.localStorage.setItem(key, JSON.stringify(value));\n  }, [key, value]);\n\n  return [value, setValue];\n}\n\nexport { usePersistedState };\n"
  },
  {
    "path": "frontend/src/hooks/useThrottle.ts",
    "content": "import React from \"react\";\n\n// Updates take effect immediately if the last update was more than {interval} ago.\n// Otherwise, updates are throttled to {interval}. The latest value is always sent.\n// The last update always gets executed, with potentially a {interval} delay.\nexport function useThrottle(value: string, interval = 500) {\n  const [throttledValue, setThrottledValue] = React.useState(value);\n  const lastUpdated = React.useRef<number | null>(null);\n\n  React.useEffect(() => {\n    const now = performance.now();\n\n    if (!lastUpdated.current || now >= lastUpdated.current + interval) {\n      lastUpdated.current = now;\n      setThrottledValue(value);\n    } else {\n      const id = window.setTimeout(() => {\n        lastUpdated.current = now;\n        setThrottledValue(value);\n      }, interval);\n\n      return () => window.clearTimeout(id);\n    }\n  }, [value, interval]);\n\n  return throttledValue;\n}\nexport default useThrottle;\n"
  },
  {
    "path": "frontend/src/index.css",
    "content": "@tailwind base;\n@tailwind components;\n@tailwind utilities;\n\n/* Image Scanning animation */\n.scanning::after {\n  content: \"\";\n  position: absolute;\n  top: 0px;\n  left: 0px;\n  width: 5px;\n  height: 100%;\n  background-image: linear-gradient(\n    to right,\n    rgba(19, 161, 14, 0.2),\n    /* Darker matrix green with full transparency */ rgba(19, 161, 14, 0.8)\n      /* The same green with 80% opacity */\n  );\n  animation: scanning 3s ease-in-out infinite;\n}\n\n@keyframes scanning {\n  0%,\n  100% {\n    transform: translateX(0px);\n  }\n  50% {\n    transform: translateX(340px);\n  }\n}\n\n@layer base {\n  :root {\n    --background: 0 0% 100%;\n    --foreground: 222.2 84% 4.9%;\n\n    --card: 0 0% 100%;\n    --card-foreground: 222.2 84% 4.9%;\n\n    --popover: 0 0% 100%;\n    --popover-foreground: 222.2 84% 4.9%;\n\n    --primary: 222.2 47.4% 11.2%;\n    --primary-foreground: 210 40% 98%;\n\n    --secondary: 210 40% 96.1%;\n    --secondary-foreground: 222.2 47.4% 11.2%;\n\n    --muted: 210 40% 96.1%;\n    --muted-foreground: 215.4 16.3% 46.9%;\n\n    --accent: 210 40% 96.1%;\n    --accent-foreground: 222.2 47.4% 11.2%;\n\n    --destructive: 0 84.2% 60.2%;\n    --destructive-foreground: 210 40% 98%;\n\n    --border: 214.3 31.8% 91.4%;\n    --input: 214.3 31.8% 91.4%;\n    --ring: 222.2 84% 4.9%;\n\n    --radius: 0.5rem;\n  }\n  \n  .dark body,\n  body.dark {\n    background-color: black;\n  }\n\n  .dark div[role=\"presentation\"],\n  div[role=\"presentation\"].dark {\n    background-color: #09090b !important;\n  }\n\n  iframe {\n    background-color: white !important; \n  }\n\n  .dark {\n    color-scheme: dark;\n\n    --background: 222.2 0% 0%;\n    --foreground: 210 40% 98%;\n\n    --card: 222.2 84% 4.9%;\n    --card-foreground: 210 40% 98%;\n\n    --popover: 222.2 84% 4.9%;\n    --popover-foreground: 210 40% 98%;\n\n    --primary: 210 40% 98%;\n    --primary-foreground: 222.2 47.4% 11.2%;\n\n    --secondary: 217.2 32.6% 17.5%;\n    
--secondary-foreground: 210 40% 98%;\n\n    --muted: 217.2 32.6% 17.5%;\n    --muted-foreground: 215 20.2% 65.1%;\n\n    --accent: 217.2 32.6% 17.5%;\n    --accent-foreground: 210 40% 98%;\n\n    --destructive: 0 62.8% 30.6%;\n    --destructive-foreground: 210 40% 98%;\n\n    --border: 217.2 32.6% 17.5%;\n    --input: 217.2 32.6% 17.5%;\n    --ring: 212.7 26.8% 83.9%;\n  }\n}\n\n.sidebar-scrollbar-stable {\n  scrollbar-gutter: stable;\n}\n\n.active-step-shimmer {\n  background: linear-gradient(\n    90deg,\n    #374151 0%,\n    #374151 27%,\n    #6b7280 29%,\n    #f3f4f6 30%,\n    #6b7280 31%,\n    #374151 33%,\n    #374151 67%,\n    #6b7280 69%,\n    #f3f4f6 70%,\n    #6b7280 71%,\n    #374151 73%,\n    #374151 100%\n  );\n  background-size: 240% 100%;\n  -webkit-background-clip: text;\n  background-clip: text;\n  -webkit-text-fill-color: transparent;\n  animation: searchlight 5.5s linear infinite;\n}\n\n:is(.dark) .active-step-shimmer {\n  background: linear-gradient(\n    90deg,\n    #94a3b8 0%,\n    #94a3b8 27%,\n    #cbd5e1 29%,\n    #ffffff 30%,\n    #cbd5e1 31%,\n    #94a3b8 33%,\n    #94a3b8 67%,\n    #cbd5e1 69%,\n    #ffffff 70%,\n    #cbd5e1 71%,\n    #94a3b8 73%,\n    #94a3b8 100%\n  );\n  background-size: 240% 100%;\n  -webkit-background-clip: text;\n  background-clip: text;\n  -webkit-text-fill-color: transparent;\n  animation: searchlight 5.5s linear infinite;\n}\n\n/* Working indicator background motion */\n.working-indicator-bg {\n  background: linear-gradient(\n    90deg,\n    #f5f3ff 0%,\n    #ffffff 45%,\n    #ede9fe 55%,\n    #ffffff 100%\n  );\n  background-size: 220% 100%;\n  animation: workingGradientSlide 4s ease-in-out infinite;\n}\n\n:is(.dark) .working-indicator-bg {\n  background: linear-gradient(\n    90deg,\n    rgba(76, 29, 149, 0.2) 0%,\n    rgba(24, 24, 27, 1) 45%,\n    rgba(76, 29, 149, 0.28) 55%,\n    rgba(24, 24, 27, 1) 100%\n  );\n  background-size: 220% 100%;\n}\n\n@keyframes workingGradientSlide {\n  0% {\n    
background-position: 0% 50%;\n  }\n  50% {\n    background-position: 100% 50%;\n  }\n  100% {\n    background-position: 0% 50%;\n  }\n}\n\n@keyframes searchlight {\n  0% {\n    background-position: 140% center;\n  }\n  100% {\n    background-position: -40% center;\n  }\n}\n\n@layer base {\n  * {\n    @apply border-border;\n  }\n  html, body, #root {\n    @apply h-full;\n  }\n  body {\n    @apply bg-background text-foreground m-0;\n  }\n}\n"
  },
  {
    "path": "frontend/src/lib/models.ts",
    "content": "// Keep in sync with backend (llm.py)\n// Order here matches dropdown order\nexport enum CodeGenerationModel {\n  CLAUDE_OPUS_4_6 = \"claude-opus-4-6\",\n  CLAUDE_SONNET_4_6 = \"claude-sonnet-4-6\",\n  CLAUDE_4_5_OPUS_2025_11_01 = \"claude-opus-4-5-20251101\",\n  CLAUDE_4_5_SONNET_2025_09_29 = \"claude-sonnet-4-5-20250929\",\n  GPT_5_2_CODEX_LOW = \"gpt-5.2-codex (low thinking)\",\n  GPT_5_2_CODEX_MEDIUM = \"gpt-5.2-codex (medium thinking)\",\n  GPT_5_2_CODEX_HIGH = \"gpt-5.2-codex (high thinking)\",\n  GPT_5_2_CODEX_XHIGH = \"gpt-5.2-codex (xhigh thinking)\",\n  GPT_5_3_CODEX_LOW = \"gpt-5.3-codex (low thinking)\",\n  GPT_5_3_CODEX_MEDIUM = \"gpt-5.3-codex (medium thinking)\",\n  GPT_5_3_CODEX_HIGH = \"gpt-5.3-codex (high thinking)\",\n  GPT_5_3_CODEX_XHIGH = \"gpt-5.3-codex (xhigh thinking)\",\n  GEMINI_3_FLASH_PREVIEW_HIGH = \"gemini-3-flash-preview (high thinking)\",\n  GEMINI_3_FLASH_PREVIEW_MINIMAL = \"gemini-3-flash-preview (minimal thinking)\",\n  GEMINI_3_1_PRO_PREVIEW_HIGH = \"gemini-3.1-pro-preview (high thinking)\",\n  GEMINI_3_1_PRO_PREVIEW_MEDIUM = \"gemini-3.1-pro-preview (medium thinking)\",\n  GEMINI_3_1_PRO_PREVIEW_LOW = \"gemini-3.1-pro-preview (low thinking)\",\n}\n\n// Will generate a static error if a model in the enum above is not in the descriptions\nexport const CODE_GENERATION_MODEL_DESCRIPTIONS: {\n  [key in CodeGenerationModel]: { name: string; inBeta: boolean };\n} = {\n  \"gpt-5.2-codex (low thinking)\": {\n    name: \"GPT 5.2 Codex (low)\",\n    inBeta: true,\n  },\n  \"gpt-5.2-codex (medium thinking)\": {\n    name: \"GPT 5.2 Codex (medium)\",\n    inBeta: true,\n  },\n  \"gpt-5.2-codex (high thinking)\": {\n    name: \"GPT 5.2 Codex (high)\",\n    inBeta: true,\n  },\n  \"gpt-5.2-codex (xhigh thinking)\": {\n    name: \"GPT 5.2 Codex (xhigh)\",\n    inBeta: true,\n  },\n  \"gpt-5.3-codex (low thinking)\": {\n    name: \"GPT 5.3 Codex (low)\",\n    inBeta: true,\n  },\n  \"gpt-5.3-codex (medium thinking)\": {\n    
name: \"GPT 5.3 Codex (medium)\",\n    inBeta: true,\n  },\n  \"gpt-5.3-codex (high thinking)\": {\n    name: \"GPT 5.3 Codex (high)\",\n    inBeta: true,\n  },\n  \"gpt-5.3-codex (xhigh thinking)\": {\n    name: \"GPT 5.3 Codex (xhigh)\",\n    inBeta: true,\n  },\n  \"claude-opus-4-5-20251101\": { name: \"Claude Opus 4.5\", inBeta: false },\n  \"claude-opus-4-6\": { name: \"Claude Opus 4.6\", inBeta: false },\n  \"claude-sonnet-4-6\": { name: \"Claude Sonnet 4.6\", inBeta: false },\n  \"claude-sonnet-4-5-20250929\": { name: \"Claude Sonnet 4.5\", inBeta: false },\n  \"gemini-3-flash-preview (high thinking)\": {\n    name: \"Gemini 3 Flash (high)\",\n    inBeta: true,\n  },\n  \"gemini-3-flash-preview (minimal thinking)\": {\n    name: \"Gemini 3 Flash (minimal)\",\n    inBeta: true,\n  },\n  \"gemini-3.1-pro-preview (high thinking)\": {\n    name: \"Gemini 3.1 Pro (high)\",\n    inBeta: true,\n  },\n  \"gemini-3.1-pro-preview (medium thinking)\": {\n    name: \"Gemini 3.1 Pro (medium)\",\n    inBeta: true,\n  },\n  \"gemini-3.1-pro-preview (low thinking)\": {\n    name: \"Gemini 3.1 Pro (low)\",\n    inBeta: true,\n  },\n};\n"
  },
  {
    "path": "frontend/src/lib/prompt-history.test.ts",
    "content": "import {\n  buildAssistantHistoryMessage,\n  buildUserHistoryMessage,\n  cloneVariantHistory,\n  registerAssetIds,\n  toRequestHistory,\n} from \"./prompt-history\";\nimport { PromptAsset } from \"../types\";\n\ndescribe(\"prompt-history helpers\", () => {\n  test(\"cloneVariantHistory deep-copies asset id arrays\", () => {\n    const source = [\n      {\n        role: \"user\" as const,\n        text: \"Update this\",\n        imageAssetIds: [\"img-1\"],\n        videoAssetIds: [\"vid-1\"],\n      },\n    ];\n\n    const cloned = cloneVariantHistory(source);\n    cloned[0].imageAssetIds.push(\"img-2\");\n    cloned[0].videoAssetIds.push(\"vid-2\");\n\n    expect(source[0].imageAssetIds).toEqual([\"img-1\"]);\n    expect(source[0].videoAssetIds).toEqual([\"vid-1\"]);\n    expect(cloned[0].imageAssetIds).toEqual([\"img-1\", \"img-2\"]);\n    expect(cloned[0].videoAssetIds).toEqual([\"vid-1\", \"vid-2\"]);\n  });\n\n  test(\"registerAssetIds reuses existing ids and only upserts new assets\", () => {\n    const assetsById: Record<string, PromptAsset> = {\n      existing: { id: \"existing\", type: \"image\", dataUrl: \"data:image/one\" },\n    };\n    const upsertCalls: PromptAsset[][] = [];\n\n    let idCounter = 0;\n    const ids = registerAssetIds(\n      \"image\",\n      [\"data:image/one\", \"data:image/two\", \"data:image/two\"],\n      () => assetsById,\n      (assets) => {\n        upsertCalls.push(assets);\n        for (const asset of assets) assetsById[asset.id] = asset;\n      },\n      () => `generated-${++idCounter}`\n    );\n\n    expect(ids[0]).toBe(\"existing\");\n    expect(ids[1]).toBe(ids[2]);\n    expect(upsertCalls).toHaveLength(1);\n    expect(upsertCalls[0]).toHaveLength(1);\n    expect(upsertCalls[0][0].type).toBe(\"image\");\n    expect(upsertCalls[0][0].dataUrl).toBe(\"data:image/two\");\n  });\n\n  test(\"registerAssetIds ignores blank media strings\", () => {\n    const upsertCalls: PromptAsset[][] = [];\n    const ids = 
registerAssetIds(\n      \"video\",\n      [\"\", \"   \"],\n      () => ({}),\n      (assets) => upsertCalls.push(assets),\n      () => \"unused\"\n    );\n\n    expect(ids).toEqual([]);\n    expect(upsertCalls).toEqual([]);\n  });\n\n  test(\"toRequestHistory resolves ids and drops missing assets\", () => {\n    const assetsById: Record<string, PromptAsset> = {\n      img1: { id: \"img1\", type: \"image\", dataUrl: \"data:image/one\" },\n      vid1: { id: \"vid1\", type: \"video\", dataUrl: \"data:video/one\" },\n    };\n\n    const history = [\n      {\n        role: \"user\" as const,\n        text: \"Do this\",\n        imageAssetIds: [\"img1\", \"img-missing\"],\n        videoAssetIds: [\"vid1\", \"vid-missing\"],\n      },\n      {\n        role: \"assistant\" as const,\n        text: \"<html/>\",\n        imageAssetIds: [],\n        videoAssetIds: [],\n      },\n    ];\n\n    expect(toRequestHistory(history, () => assetsById)).toEqual([\n      {\n        role: \"user\",\n        text: \"Do this\",\n        images: [\"data:image/one\"],\n        videos: [\"data:video/one\"],\n      },\n      {\n        role: \"assistant\",\n        text: \"<html/>\",\n        images: [],\n        videos: [],\n      },\n    ]);\n  });\n\n  test(\"message builders produce expected shapes\", () => {\n    expect(buildUserHistoryMessage(\"change\", [\"img\"], [\"vid\"])).toEqual({\n      role: \"user\",\n      text: \"change\",\n      imageAssetIds: [\"img\"],\n      videoAssetIds: [\"vid\"],\n    });\n\n    expect(buildAssistantHistoryMessage(\"<html/>\")).toEqual({\n      role: \"assistant\",\n      text: \"<html/>\",\n      imageAssetIds: [],\n      videoAssetIds: [],\n    });\n  });\n});\n"
  },
  {
    "path": "frontend/src/lib/prompt-history.ts",
    "content": "import { VariantHistoryMessage } from \"../components/commits/types\";\nimport {\n  CodeGenerationParams,\n  PromptAsset,\n  PromptAssetType,\n  PromptHistoryMessage,\n} from \"../types\";\n\nexport type GenerationRequest = CodeGenerationParams & {\n  variantHistory: VariantHistoryMessage[];\n};\n\ntype AssetsById = Record<string, PromptAsset>;\ntype GetAssetsById = () => AssetsById;\ntype UpsertPromptAssets = (assets: PromptAsset[]) => void;\ntype CreateId = () => string;\n\nexport function cloneVariantHistory(\n  history: VariantHistoryMessage[]\n): VariantHistoryMessage[] {\n  return history.map((message) => ({\n    ...message,\n    imageAssetIds: [...message.imageAssetIds],\n    videoAssetIds: [...message.videoAssetIds],\n  }));\n}\n\nexport function registerAssetIds(\n  type: PromptAssetType,\n  dataUrls: string[],\n  getAssetsById: GetAssetsById,\n  upsertPromptAssets: UpsertPromptAssets,\n  createId: CreateId\n): string[] {\n  const existingByUrl = new Map<string, string>();\n  Object.values(getAssetsById()).forEach((asset) => {\n    if (asset.type === type) {\n      existingByUrl.set(asset.dataUrl, asset.id);\n    }\n  });\n\n  const newAssets: PromptAsset[] = [];\n  const ids: string[] = [];\n\n  for (const rawDataUrl of dataUrls) {\n    const dataUrl = rawDataUrl.trim();\n    if (!dataUrl) continue;\n\n    let id = existingByUrl.get(dataUrl);\n    if (!id) {\n      id = createId();\n      existingByUrl.set(dataUrl, id);\n      newAssets.push({ id, type, dataUrl });\n    }\n    ids.push(id);\n  }\n\n  if (newAssets.length > 0) {\n    upsertPromptAssets(newAssets);\n  }\n\n  return ids;\n}\n\nexport function resolveAssetIdsToDataUrls(\n  assetIds: string[],\n  getAssetsById: GetAssetsById\n): string[] {\n  const assetsById = getAssetsById();\n  return assetIds\n    .map((assetId) => assetsById[assetId]?.dataUrl)\n    .filter((value): value is string => Boolean(value));\n}\n\nexport function toRequestHistory(\n  history: 
VariantHistoryMessage[],\n  getAssetsById: GetAssetsById\n): PromptHistoryMessage[] {\n  return history.map((message) => ({\n    role: message.role,\n    text: message.text,\n    images: resolveAssetIdsToDataUrls(message.imageAssetIds, getAssetsById),\n    videos: resolveAssetIdsToDataUrls(message.videoAssetIds, getAssetsById),\n  }));\n}\n\nexport function buildUserHistoryMessage(\n  text: string,\n  imageAssetIds: string[] = [],\n  videoAssetIds: string[] = []\n): VariantHistoryMessage {\n  return {\n    role: \"user\",\n    text,\n    imageAssetIds,\n    videoAssetIds,\n  };\n}\n\nexport function buildAssistantHistoryMessage(\n  code: string\n): VariantHistoryMessage {\n  return {\n    role: \"assistant\",\n    text: code,\n    imageAssetIds: [],\n    videoAssetIds: [],\n  };\n}\n"
  },
  {
    "path": "frontend/src/lib/stacks.ts",
    "content": "// Keep in sync with backend (prompts/types.py)\n// Order here determines order in dropdown\nexport enum Stack {\n  HTML_TAILWIND = \"html_tailwind\",\n  HTML_CSS = \"html_css\",\n  REACT_TAILWIND = \"react_tailwind\",\n  BOOTSTRAP = \"bootstrap\",\n  VUE_TAILWIND = \"vue_tailwind\",\n  IONIC_TAILWIND = \"ionic_tailwind\",\n}\n\nexport const STACK_DESCRIPTIONS: {\n  [key in Stack]: { components: string[]; inBeta: boolean };\n} = {\n  html_css: { components: [\"HTML\", \"CSS\"], inBeta: false },\n  html_tailwind: { components: [\"HTML\", \"Tailwind\"], inBeta: false },\n  react_tailwind: { components: [\"React\", \"Tailwind\"], inBeta: false },\n  bootstrap: { components: [\"Bootstrap\"], inBeta: false },\n  vue_tailwind: { components: [\"Vue\", \"Tailwind\"], inBeta: true },\n  ionic_tailwind: { components: [\"Ionic\", \"Tailwind\"], inBeta: true },\n};\n"
  },
  {
    "path": "frontend/src/lib/takeScreenshot.ts",
    "content": "import html2canvas from \"html2canvas\";\n\nexport const takeScreenshot = async (): Promise<string> => {\n  const iframeElement = document.querySelector(\n    \"#preview-desktop\"\n  ) as HTMLIFrameElement;\n  if (!iframeElement?.contentWindow?.document.body) {\n    return \"\";\n  }\n\n  const canvas = await html2canvas(iframeElement.contentWindow.document.body);\n  const png = canvas.toDataURL(\"image/png\");\n  return png;\n};\n"
  },
  {
    "path": "frontend/src/lib/utils.ts",
    "content": "import { type ClassValue, clsx } from \"clsx\";\nimport { twMerge } from \"tailwind-merge\";\n\nexport function cn(...inputs: ClassValue[]) {\n  return twMerge(clsx(inputs));\n}\n\nexport function capitalize(str: string) {\n  return str.charAt(0).toUpperCase() + str.slice(1);\n}\n"
  },
  {
    "path": "frontend/src/main.tsx",
    "content": "import React from \"react\";\nimport ReactDOM from \"react-dom/client\";\nimport App from \"./App.tsx\";\nimport \"./index.css\";\nimport { Toaster } from \"react-hot-toast\";\nimport EvalsPage from \"./components/evals/EvalsPage.tsx\";\nimport { BrowserRouter as Router, Routes, Route } from \"react-router-dom\";\nimport PairwiseEvalsPage from \"./components/evals/PairwiseEvalsPage\";\nimport RunEvalsPage from \"./components/evals/RunEvalsPage.tsx\";\nimport BestOfNEvalsPage from \"./components/evals/BestOfNEvalsPage.tsx\";\nimport AllEvalsPage from \"./components/evals/AllEvalsPage.tsx\";\nimport OpenAIInputComparePage from \"./components/evals/OpenAIInputComparePage.tsx\";\n\nReactDOM.createRoot(document.getElementById(\"root\")!).render(\n  <React.StrictMode>\n    <Router>\n      <Routes>\n        <Route path=\"/\" element={<App />} />\n        <Route path=\"/evals\" element={<AllEvalsPage />} />\n        <Route path=\"/evals/single\" element={<EvalsPage />} />\n        <Route path=\"/evals/pairwise\" element={<PairwiseEvalsPage />} />\n        <Route path=\"/evals/best-of-n\" element={<BestOfNEvalsPage />} />\n        <Route path=\"/evals/run\" element={<RunEvalsPage />} />\n        <Route\n          path=\"/evals/openai-input-compare\"\n          element={<OpenAIInputComparePage />}\n        />\n      </Routes>\n    </Router>\n    <Toaster toastOptions={{ className: \"dark:bg-zinc-950 dark:text-white\" }} />\n  </React.StrictMode>\n);\n"
  },
  {
    "path": "frontend/src/setupTests.ts",
    "content": "// So jest test runner can read env vars from .env file\nimport { config } from \"dotenv\";\nconfig({ path: \".env.jest\" });\n"
  },
  {
    "path": "frontend/src/store/app-store.ts",
    "content": "import { create } from \"zustand\";\nimport { AppState } from \"../types\";\n\n// Store for app-wide state\ninterface AppStore {\n  appState: AppState;\n  setAppState: (state: AppState) => void;\n\n  // UI state\n  updateInstruction: string;\n  setUpdateInstruction: (instruction: string) => void;\n\n  // Update images support (multiple images)\n  updateImages: string[];\n  setUpdateImages: (images: string[]) => void;\n\n  inSelectAndEditMode: boolean;\n  toggleInSelectAndEditMode: () => void;\n  disableInSelectAndEditMode: () => void;\n\n  selectedElement: HTMLElement | null;\n  setSelectedElement: (element: HTMLElement | null) => void;\n  clearSelectedElement: () => void;\n}\n\nexport const useAppStore = create<AppStore>((set) => ({\n  appState: AppState.INITIAL,\n  setAppState: (state: AppState) => set({ appState: state }),\n\n  // UI state\n  updateInstruction: \"\",\n  setUpdateInstruction: (instruction: string) =>\n    set({ updateInstruction: instruction }),\n\n  // Update images support\n  updateImages: [],\n  setUpdateImages: (images: string[]) => set({ updateImages: images }),\n\n  inSelectAndEditMode: false,\n  toggleInSelectAndEditMode: () =>\n    set((state) => ({ inSelectAndEditMode: !state.inSelectAndEditMode })),\n  disableInSelectAndEditMode: () => set({ inSelectAndEditMode: false }),\n\n  selectedElement: null,\n  setSelectedElement: (element: HTMLElement | null) =>\n    set({ selectedElement: element }),\n  clearSelectedElement: () => set({ selectedElement: null }),\n}));\n"
  },
  {
    "path": "frontend/src/store/project-store.ts",
    "content": "import { create } from \"zustand\";\nimport {\n  AgentEvent,\n  Commit,\n  CommitHash,\n  VariantHistoryMessage,\n  VariantStatus,\n} from \"../components/commits/types\";\nimport { PromptAsset } from \"../types\";\n\n// Store for app-wide state\ninterface ProjectStore {\n  // Inputs\n  inputMode: \"image\" | \"video\" | \"text\";\n  setInputMode: (mode: \"image\" | \"video\" | \"text\") => void;\n  referenceImages: string[];\n  setReferenceImages: (images: string[]) => void;\n  initialPrompt: string;\n  setInitialPrompt: (prompt: string) => void;\n  assetsById: Record<string, PromptAsset>;\n  upsertPromptAssets: (assets: PromptAsset[]) => void;\n  resetPromptAssets: () => void;\n\n  // Outputs\n  commits: Record<string, Commit>;\n  head: CommitHash | null;\n  latestCommitHash: CommitHash | null;\n\n  addCommit: (commit: Commit) => void;\n  removeCommit: (hash: CommitHash) => void;\n  resetCommits: () => void;\n\n  appendCommitCode: (\n    hash: CommitHash,\n    numVariant: number,\n    code: string\n  ) => void;\n  appendVariantThinking: (\n    hash: CommitHash,\n    numVariant: number,\n    thinking: string\n  ) => void;\n  setCommitCode: (hash: CommitHash, numVariant: number, code: string) => void;\n  appendVariantHistoryMessage: (\n    hash: CommitHash,\n    numVariant: number,\n    message: VariantHistoryMessage\n  ) => void;\n  updateSelectedVariantIndex: (hash: CommitHash, index: number) => void;\n  updateVariantStatus: (\n    hash: CommitHash,\n    numVariant: number,\n    status: VariantStatus,\n    errorMessage?: string\n  ) => void;\n  resizeVariants: (hash: CommitHash, count: number) => void;\n  setVariantModels: (hash: CommitHash, models: string[]) => void;\n\n  startAgentEvent: (\n    hash: CommitHash,\n    numVariant: number,\n    event: AgentEvent\n  ) => void;\n  appendAgentEventContent: (\n    hash: CommitHash,\n    numVariant: number,\n    eventId: string,\n    content: string\n  ) => void;\n  finishAgentEvent: (\n    hash: 
CommitHash,\n    numVariant: number,\n    eventId: string,\n    updates: Partial<AgentEvent>\n  ) => void;\n\n  setHead: (hash: CommitHash) => void;\n  resetHead: () => void;\n\n  executionConsoles: { [key: number]: string[] };\n  appendExecutionConsole: (variantIndex: number, line: string) => void;\n  resetExecutionConsoles: () => void;\n}\n\nexport const useProjectStore = create<ProjectStore>((set) => ({\n  // Inputs and their setters\n  inputMode: \"image\",\n  setInputMode: (mode) => set({ inputMode: mode }),\n  referenceImages: [],\n  setReferenceImages: (images) => set({ referenceImages: images }),\n  initialPrompt: \"\",\n  setInitialPrompt: (prompt) => set({ initialPrompt: prompt }),\n  assetsById: {},\n  upsertPromptAssets: (assets) =>\n    set((state) => {\n      if (assets.length === 0) return state;\n      const merged = { ...state.assetsById };\n      for (const asset of assets) {\n        merged[asset.id] = asset;\n      }\n      return { assetsById: merged };\n    }),\n  resetPromptAssets: () => set({ assetsById: {} }),\n\n  // Outputs\n  commits: {},\n  head: null,\n  latestCommitHash: null,\n\n  addCommit: (commit: Commit) => {\n    const requestStartedAt = new Date(commit.dateCreated).getTime();\n    // Initialize variant statuses as 'generating' and start thinking timer\n    const commitsWithStatus = {\n      ...commit,\n      variants: commit.variants.map((variant) => ({\n        ...variant,\n        history: variant.history || [],\n        requestStartedAt:\n          variant.requestStartedAt ?? 
requestStartedAt,\n        status: variant.status || (\"generating\" as VariantStatus),\n        thinkingStartTime: Date.now(),\n        agentEvents: [],\n      })),\n    };\n\n    // When adding a new commit, make sure all existing commits are marked as committed\n    set((state) => ({\n      commits: {\n        ...Object.fromEntries(\n          Object.entries(state.commits).map(([hash, existingCommit]) => [\n            hash,\n            { ...existingCommit, isCommitted: true },\n          ])\n        ),\n        [commitsWithStatus.hash]: commitsWithStatus,\n      },\n      latestCommitHash: commitsWithStatus.hash,\n    }));\n  },\n  removeCommit: (hash: CommitHash) => {\n    set((state) => {\n      const removedCommit = state.commits[hash];\n      const newCommits = { ...state.commits };\n      delete newCommits[hash];\n\n      // If removing the latest commit, fall back to its parent\n      const newLatestCommitHash =\n        state.latestCommitHash === hash\n          ? (removedCommit?.parentHash ?? null)\n          : state.latestCommitHash;\n\n      return { commits: newCommits, latestCommitHash: newLatestCommitHash };\n    });\n  },\n  resetCommits: () => set({ commits: {}, latestCommitHash: null }),\n\n  appendCommitCode: (hash: CommitHash, numVariant: number, code: string) =>\n    set((state) => {\n      const commit = state.commits[hash];\n      if (!commit) {\n        return state;\n      }\n      // Don't update if the commit is already committed\n      if (commit.isCommitted) {\n        return state;\n      }\n      const variant = commit.variants[numVariant];\n      const isFirstCode = !variant.code && variant.thinkingStartTime;\n      const duration = isFirstCode\n        ? Math.round((Date.now() - variant.thinkingStartTime!) 
/ 1000)\n        : variant.thinkingDuration;\n      return {\n        commits: {\n          ...state.commits,\n          [hash]: {\n            ...commit,\n            variants: commit.variants.map((v, index) =>\n              index === numVariant\n                ? { ...v, code: v.code + code, thinkingDuration: duration }\n                : v\n            ),\n          },\n        },\n      };\n    }),\n  appendVariantThinking: (hash: CommitHash, numVariant: number, thinking: string) =>\n    set((state) => {\n      const commit = state.commits[hash];\n      if (!commit) {\n        return state;\n      }\n      // Don't update if the commit is already committed\n      if (commit.isCommitted) {\n        throw new Error(\"Attempted to append thinking to a committed commit\");\n      }\n      return {\n        commits: {\n          ...state.commits,\n          [hash]: {\n            ...commit,\n            variants: commit.variants.map((v, index) =>\n              index === numVariant\n                ? {\n                    ...v,\n                    thinking: (v.thinking || \"\") + thinking,\n                  }\n                : v\n            ),\n          },\n        },\n      };\n    }),\n  setCommitCode: (hash: CommitHash, numVariant: number, code: string) =>\n    set((state) => {\n      const commit = state.commits[hash];\n      if (!commit) {\n        return state;\n      }\n      // Don't update if the commit is already committed\n      if (commit.isCommitted) {\n        return state;\n      }\n      return {\n        commits: {\n          ...state.commits,\n          [hash]: {\n            ...commit,\n            variants: commit.variants.map((variant, index) =>\n              index === numVariant ? 
{ ...variant, code } : variant\n            ),\n          },\n        },\n      };\n    }),\n  appendVariantHistoryMessage: (hash, numVariant, message) =>\n    set((state) => {\n      const commit = state.commits[hash];\n      if (!commit || commit.isCommitted) return state;\n      const variants = commit.variants.map((variant, index) => {\n        if (index !== numVariant) return variant;\n        const history = variant.history || [];\n        const last = history[history.length - 1];\n        const isDuplicate =\n          last &&\n          last.role === message.role &&\n          last.text === message.text &&\n          last.imageAssetIds.join(\"|\") === message.imageAssetIds.join(\"|\") &&\n          last.videoAssetIds.join(\"|\") === message.videoAssetIds.join(\"|\");\n        if (isDuplicate) return variant;\n        return { ...variant, history: [...history, message] };\n      });\n      return {\n        commits: {\n          ...state.commits,\n          [hash]: { ...commit, variants },\n        },\n      };\n    }),\n  updateSelectedVariantIndex: (hash: CommitHash, index: number) =>\n    set((state) => {\n      const commit = state.commits[hash];\n      if (!commit) return state;\n      // Don't update if the commit is already committed\n      if (commit.isCommitted) {\n        throw new Error(\n          \"Attempted to update selected variant index of a committed commit\"\n        );\n      }\n\n      // Just update the selected variant index without canceling other variants\n      // This allows users to switch between variants even while they're still generating\n      return {\n        commits: {\n          ...state.commits,\n          [hash]: {\n            ...commit,\n            selectedVariantIndex: index,\n          },\n        },\n      };\n    }),\n  updateVariantStatus: (\n    hash: CommitHash,\n    numVariant: number,\n    status: VariantStatus,\n    errorMessage?: string\n  ) =>\n    set((state) => {\n      const commit = state.commits[hash];\n      if (!commit) return 
state; // No change if commit doesn't exist\n\n      return {\n        commits: {\n          ...state.commits,\n          [hash]: {\n            ...commit,\n            variants: commit.variants.map((variant, index) =>\n              index === numVariant \n                ? {\n                    ...variant,\n                    status,\n                    completedAt:\n                      status === \"generating\"\n                        ? undefined\n                        : variant.completedAt ?? Date.now(),\n                    errorMessage: status === \"error\" ? errorMessage : undefined,\n                  }\n                : variant\n            ),\n          },\n        },\n      };\n    }),\n  resizeVariants: (hash: CommitHash, count: number) =>\n    set((state) => {\n      const commit = state.commits[hash];\n      if (!commit) return state; // No change if commit doesn't exist\n\n      // Resize variants array to match backend count\n      const currentVariants = commit.variants;\n      const requestStartedAt = new Date(commit.dateCreated).getTime();\n      const seedHistory = currentVariants[0]?.history || [];\n      const newVariants = Array(count).fill(null).map((_, index) => \n        currentVariants[index] || {\n          code: \"\",\n          history: seedHistory.map((message) => ({\n            ...message,\n            imageAssetIds: [...message.imageAssetIds],\n            videoAssetIds: [...message.videoAssetIds],\n          })),\n          requestStartedAt,\n          status: \"generating\" as VariantStatus,\n          agentEvents: [],\n        }\n      );\n\n      return {\n        commits: {\n          ...state.commits,\n          [hash]: {\n            ...commit,\n            variants: newVariants,\n            selectedVariantIndex: Math.min(commit.selectedVariantIndex, count - 1),\n          },\n        },\n      };\n    }),\n  setVariantModels: (hash: CommitHash, models: string[]) =>\n    set((state) => {\n      const commit = 
state.commits[hash];\n      if (!commit || commit.isCommitted) return state;\n      const variants = commit.variants.map((variant, index) => ({\n        ...variant,\n        model: models[index] ?? variant.model,\n      }));\n      return {\n        commits: {\n          ...state.commits,\n          [hash]: { ...commit, variants },\n        },\n      };\n    }),\n\n  startAgentEvent: (hash, numVariant, event) =>\n    set((state) => {\n      const commit = state.commits[hash];\n      if (!commit || commit.isCommitted) return state;\n      const variants = commit.variants.map((variant, index) => {\n        if (index !== numVariant) return variant;\n        const events = variant.agentEvents || [];\n        const existingIndex = events.findIndex((e) => e.id === event.id);\n        if (existingIndex === -1) {\n          return { ...variant, agentEvents: [...events, event] };\n        }\n        const updatedEvents = events.map((e) =>\n          e.id === event.id\n            ? {\n                ...e,\n                ...event,\n                content: event.content ? event.content : e.content,\n                startedAt: e.startedAt || event.startedAt,\n              }\n            : e\n        );\n        return { ...variant, agentEvents: updatedEvents };\n      });\n      return {\n        commits: {\n          ...state.commits,\n          [hash]: { ...commit, variants },\n        },\n      };\n    }),\n\n  appendAgentEventContent: (hash, numVariant, eventId, content) =>\n    set((state) => {\n      const commit = state.commits[hash];\n      if (!commit || commit.isCommitted) return state;\n      const variants = commit.variants.map((variant, index) => {\n        if (index !== numVariant) return variant;\n        const events = variant.agentEvents || [];\n        const updatedEvents = events.map((event) =>\n          event.id === eventId\n            ? 
{ ...event, content: (event.content || \"\") + content }\n            : event\n        );\n        return { ...variant, agentEvents: updatedEvents };\n      });\n      return {\n        commits: {\n          ...state.commits,\n          [hash]: { ...commit, variants },\n        },\n      };\n    }),\n\n  finishAgentEvent: (hash, numVariant, eventId, updates) =>\n    set((state) => {\n      const commit = state.commits[hash];\n      if (!commit || commit.isCommitted) return state;\n      const variants = commit.variants.map((variant, index) => {\n        if (index !== numVariant) return variant;\n        const events = variant.agentEvents || [];\n        const updatedEvents = events.map((event) =>\n          event.id === eventId\n            ? {\n                ...event,\n                ...updates,\n                // Preserve the original terminal timestamp/status once set.\n                endedAt:\n                  event.endedAt !== undefined ? event.endedAt : updates.endedAt,\n                status:\n                  event.status !== \"running\" ? event.status : updates.status ?? event.status,\n              }\n            : event\n        );\n        return { ...variant, agentEvents: updatedEvents };\n      });\n      return {\n        commits: {\n          ...state.commits,\n          [hash]: { ...commit, variants },\n        },\n      };\n    }),\n\n  setHead: (hash: CommitHash) => set({ head: hash }),\n  resetHead: () => set({ head: null }),\n\n  executionConsoles: {},\n  appendExecutionConsole: (variantIndex: number, line: string) =>\n    set((state) => ({\n      executionConsoles: {\n        ...state.executionConsoles,\n        [variantIndex]: [\n          ...(state.executionConsoles[variantIndex] || []),\n          line,\n        ],\n      },\n    })),\n  resetExecutionConsoles: () => set({ executionConsoles: {} }),\n}));\n"
  },
  {
    "path": "frontend/src/tests/fixtures/simple_page.html",
    "content": "<!doctype html>\n<html lang=\"en\">\n  <head>\n    <meta charset=\"utf-8\" />\n    <title>Simple QA Page</title>\n    <style>\n      body {\n        font-family: Arial, sans-serif;\n        padding: 24px;\n      }\n      .button {\n        background: #2563eb;\n        border: none;\n        border-radius: 8px;\n        color: #fff;\n        padding: 12px 20px;\n        font-size: 16px;\n      }\n    </style>\n  </head>\n  <body>\n    <button class=\"button\">Howdy</button>\n  </body>\n</html>\n"
  },
  {
    "path": "frontend/src/tests/qa.test.ts",
    "content": "import * as fs from \"fs\";\nimport * as path from \"path\";\nimport puppeteer, { Browser, ElementHandle, Page } from \"puppeteer\";\nimport { Stack } from \"../lib/stacks\";\nimport { CodeGenerationModel } from \"../lib/models\";\n\ndeclare global {\n  interface Window {\n    __qaDownloads: Array<string | null>;\n    __qaFormSubmits: Array<string | null>;\n    __qaClipboardCalls: string[];\n  }\n}\n\nconst RUN_E2E = process.env.RUN_E2E === \"true\";\nconst describeE2E = RUN_E2E ? describe : describe.skip;\n\nconst TESTS_ROOT_PATH =\n  process.env.TEST_ROOT_PATH ?? path.resolve(process.cwd(), \"src/tests\");\n\n// Fixtures\nconst FIXTURES_PATH = path.join(TESTS_ROOT_PATH, \"fixtures\");\nconst SIMPLE_SCREENSHOT = path.join(FIXTURES_PATH, \"simple_button.png\");\nconst SCREENSHOT_WITH_IMAGES = path.join(\n  FIXTURES_PATH,\n  \"simple_ui_with_image.png\"\n);\nconst SIMPLE_HTML = path.join(FIXTURES_PATH, \"simple_page.html\");\n\n// Results\nconst RESULTS_DIR = path.join(TESTS_ROOT_PATH, \"results\");\n\nconst defaultStacks = [Stack.HTML_TAILWIND];\nconst stacks = process.env.QA_STACKS\n  ? process.env.QA_STACKS.split(\",\").map((stack) => stack.trim() as Stack)\n  : defaultStacks;\n\nconst defaultModels = [CodeGenerationModel.CLAUDE_4_5_OPUS_2025_11_01];\nconst models = process.env.QA_MODELS\n  ? 
process.env.QA_MODELS.split(\",\").map((model) => model.trim())\n  : defaultModels;\n\ndescribeE2E(\"qa e2e flows\", () => {\n  let browser: Browser;\n  let page: Page;\n\n  const IS_HEADLESS = process.env.HEADLESS !== \"false\";\n\n  beforeAll(async () => {\n    fs.mkdirSync(RESULTS_DIR, { recursive: true });\n    browser = await puppeteer.launch({ headless: IS_HEADLESS });\n    page = await browser.newPage();\n\n    await installMockWebSocket(page);\n    await installDomTestHooks(page);\n    await setupRequestInterception(page, SCREENSHOT_WITH_IMAGES);\n\n    await page.goto(\"http://localhost:5173/\", { waitUntil: \"networkidle0\" });\n\n    // Set screen size\n    await page.setViewport({ width: 1280, height: 1024 });\n  });\n\n  afterAll(async () => {\n    await browser.close();\n  });\n\n  // Create tests (every stack)\n  models.forEach((model) => {\n    stacks.forEach((stack) => {\n      it(\n        `create from screenshot: ${model} & ${stack}`,\n        async () => {\n          const app = new App(\n            page,\n            stack,\n            model,\n            `create_screenshot_${model}_${stack}`\n          );\n          await app.init();\n          await app.uploadImage(SCREENSHOT_WITH_IMAGES);\n          await app.resetTestHooks();\n        },\n        60 * 1000\n      );\n\n      it(\n        `create from URL: ${model} & ${stack}`,\n        async () => {\n          const app = new App(\n            page,\n            stack,\n            model,\n            `create_url_${model}_${stack}`\n          );\n          await app.init();\n          await app.generateFromUrl(\"https://example.com\");\n          await app.resetTestHooks();\n        },\n        60 * 1000\n      );\n    });\n  });\n\n  // Update tests - for every model (HTML Tailwind only)\n  models.forEach((model) => {\n    it(\n      `update flow: ${model}`,\n      async () => {\n        const app = new App(\n          page,\n          Stack.HTML_TAILWIND,\n          model,\n          
`update_${model}`\n        );\n        await app.init();\n\n        await app.uploadImage(SIMPLE_SCREENSHOT);\n        await app.regenerate();\n        await app.edit(\"make the text underline\", \"v2\");\n        await app.edit(\"make the text italic\", \"v3\");\n        await app.clickVersion(\"v2\");\n        await app.edit(\"make the text yellow\", \"v4\");\n        await app.resetTestHooks();\n      },\n      90 * 1000\n    );\n  });\n\n  // Start from code tests - for every model\n  models.forEach((model) => {\n    it(\n      `start from code: ${model}`,\n      async () => {\n        const app = new App(\n          page,\n          Stack.HTML_TAILWIND,\n          model,\n          `start_from_code_${model}`\n        );\n        await app.init();\n        await app.importFromCode(fs.readFileSync(SIMPLE_HTML, \"utf8\"));\n        await app.edit(\"make the text underline\", \"v2\");\n        await app.edit(\"make the text italic\", \"v3\");\n        await app.clickVersion(\"v2\");\n        await app.edit(\"make the text yellow\", \"v4\");\n        await app.resetTestHooks();\n      },\n      90 * 1000\n    );\n  });\n\n  // Start from text tests - for every model\n  models.forEach((model) => {\n    it(\n      `start from text: ${model}`,\n      async () => {\n        const app = new App(\n          page,\n          Stack.HTML_TAILWIND,\n          model,\n          `start_from_text_${model}`\n        );\n        await app.init();\n        await app.generateFromText(\"a simple button that says howdy\");\n        await app.edit(\"make the text underline\", \"v2\");\n        await app.edit(\"make the text italic\", \"v3\");\n        await app.clickVersion(\"v2\");\n        await app.edit(\"make the text yellow\", \"v4\");\n        await app.resetTestHooks();\n      },\n      90 * 1000\n    );\n  });\n\n  // Buttons (Download Code, Copy Code, Open in Codepen)\n  models.forEach((model) => {\n    it(\n      `code action buttons: ${model}`,\n      async () => {\n        
const app = new App(\n          page,\n          Stack.HTML_TAILWIND,\n          model,\n          `code_buttons_${model}`\n        );\n        await app.init();\n        await app.generateFromText(\"a simple button that says howdy\");\n        await app.assertCodeActions();\n        await app.resetTestHooks();\n      },\n      60 * 1000\n    );\n  });\n});\n\nclass App {\n  private screenshotPathPrefix: string;\n  private page: Page;\n  private stack: Stack;\n  private model: string;\n\n  constructor(page: Page, stack: Stack, model: string, testId: string) {\n    this.page = page;\n    this.stack = stack;\n    this.model = model;\n    this.screenshotPathPrefix = `${RESULTS_DIR}/${testId}`;\n  }\n\n  async init() {\n    await this.setupLocalStorage();\n    await this.resetTestHooks();\n  }\n\n  async setupLocalStorage() {\n    const setting = {\n      openAiApiKey: \"test-openai-key\",\n      openAiBaseURL: null,\n      anthropicApiKey: \"test-anthropic-key\",\n      screenshotOneApiKey: \"test-screenshotone-key\",\n      isImageGenerationEnabled: true,\n      editorTheme: \"cobalt\",\n      generatedCodeConfig: this.stack,\n      codeGenerationModel: this.model,\n      isTermOfServiceAccepted: true,\n      accessCode: null,\n    };\n\n    await this.page.evaluate((nextSetting) => {\n      localStorage.setItem(\"setting\", JSON.stringify(nextSetting));\n    }, setting);\n\n    await this.page.reload({ waitUntil: \"networkidle0\" });\n  }\n\n  async resetTestHooks() {\n    await this.page.evaluate(() => {\n      window.__qaDownloads = [];\n      window.__qaFormSubmits = [];\n      window.__qaClipboardCalls = [];\n    });\n  }\n\n  async takeScreenshot(step: string) {\n    await this.page.screenshot({\n      path: `${this.screenshotPathPrefix}_${step}.png`,\n    });\n  }\n\n  async waitUntilVersionIsReady(version: string) {\n    await this.page.waitForFunction(\n      (versionLabel) => document.body.innerText.includes(versionLabel),\n      {\n        timeout: 
30000,\n      },\n      version\n    );\n    await this.page.waitForSelector('[data-testid=\"update-input\"]', {\n      timeout: 30000,\n    });\n  }\n\n  async switchToTab(testId: string) {\n    await this.page.click(`[data-testid=\"${testId}\"]`);\n  }\n\n  async generateFromUrl(url: string) {\n    await this.switchToTab(\"tab-url\");\n    await this.page.type('[data-testid=\"url-input\"]', url);\n    await this.takeScreenshot(\"typed_url\");\n    await this.page.click('[data-testid=\"url-capture\"]');\n    await this.waitUntilVersionIsReady(\"v1\");\n    await this.takeScreenshot(\"url_result\");\n  }\n\n  async generateFromText(prompt: string) {\n    await this.switchToTab(\"tab-text\");\n    await this.page.type('[data-testid=\"text-input\"]', prompt);\n    await this.takeScreenshot(\"typed_text\");\n    await this.page.click('[data-testid=\"text-generate\"]');\n    await this.waitUntilVersionIsReady(\"v1\");\n    await this.takeScreenshot(\"text_result\");\n  }\n\n  async uploadImage(screenshotPath: string) {\n    await this.switchToTab(\"tab-upload\");\n    const fileInput = (await this.page.$(\n      '[data-testid=\"upload-input\"]'\n    )) as ElementHandle<HTMLInputElement>;\n    if (!fileInput) {\n      throw new Error(\"Upload input element not found\");\n    }\n    await fileInput.uploadFile(screenshotPath);\n    await this.page.waitForSelector('[data-testid=\"upload-generate\"]');\n    await this.takeScreenshot(\"image_uploaded\");\n\n    await this.page.click('[data-testid=\"upload-generate\"]');\n    await this.waitUntilVersionIsReady(\"v1\");\n    await this.takeScreenshot(\"image_results\");\n  }\n\n  async importFromCode(code: string) {\n    await this.switchToTab(\"tab-import\");\n    await this.page.type('[data-testid=\"import-input\"]', code);\n    await this.page.click('[data-testid=\"stack-select\"]');\n    await this.page.waitForSelector('[role=\"option\"]', { timeout: 5000 });\n    await this.page.evaluate(() => {\n      const options = 
Array.from(\n        document.querySelectorAll('[role=\"option\"]')\n      ) as HTMLElement[];\n      const target = options.find((option) =>\n        option.textContent?.includes(\"HTML + Tailwind\")\n      );\n      target?.click();\n    });\n    await this.takeScreenshot(\"typed_code\");\n    await this.page.click('[data-testid=\"import-submit\"]');\n    await this.waitUntilVersionIsReady(\"v1\");\n  }\n\n  async edit(edit: string, version: string) {\n    await this.page.type('[data-testid=\"update-input\"]', edit);\n    await this.takeScreenshot(`typed_${version}`);\n    await this.page.click(\".update-btn\");\n    await this.waitUntilVersionIsReady(version);\n    await this.takeScreenshot(`done_${version}`);\n  }\n\n  async clickVersion(version: string) {\n    await this.page.evaluate((versionLabel) => {\n      document.querySelectorAll(\"div\").forEach((div) => {\n        if (div.innerText.includes(versionLabel)) {\n          div.click();\n        }\n      });\n    }, version);\n  }\n\n  async regenerate() {\n    await this.page.click(\".regenerate-btn\");\n    await this.waitUntilVersionIsReady(\"v1\");\n    await this.takeScreenshot(\"regenerate_results\");\n  }\n\n  async assertCodeActions() {\n    await this.page.click('[data-testid=\"tab-code\"]');\n    await this.page.waitForSelector('[data-testid=\"copy-code\"]', {\n      timeout: 10000,\n    });\n    await this.page.click('[data-testid=\"copy-code\"]');\n    await this.page.click('[data-testid=\"open-codepen\"]');\n    await this.page.click('[data-testid=\"download-code\"]');\n\n    const results = await this.page.evaluate(() => ({\n      downloads: window.__qaDownloads,\n      submits: window.__qaFormSubmits,\n      clipboard: window.__qaClipboardCalls,\n    }));\n\n    expect(results.downloads).toContain(\"index.html\");\n    expect(results.submits).toContain(\"https://codepen.io/pen/define\");\n    expect(results.clipboard).toContain(\"copy\");\n  }\n}\n\nasync function setupRequestInterception(\n  
page: Page,\n  screenshotFixturePath: string\n) {\n  const screenshotBuffer = fs.readFileSync(screenshotFixturePath);\n  const screenshotDataUrl = `data:image/png;base64,${screenshotBuffer.toString(\n    \"base64\"\n  )}`;\n\n  await page.setRequestInterception(true);\n  page.on(\"request\", (request) => {\n    const url = request.url();\n    if (url.endsWith(\"/api/screenshot\")) {\n      request.respond({\n        status: 200,\n        contentType: \"application/json\",\n        headers: {\n          \"Access-Control-Allow-Origin\": \"*\",\n          \"Access-Control-Allow-Headers\": \"*\",\n        },\n        body: JSON.stringify({ url: screenshotDataUrl }),\n      });\n      return;\n    }\n    request.continue();\n  });\n}\n\nasync function installDomTestHooks(page: Page) {\n  await page.evaluateOnNewDocument(() => {\n    window.__qaDownloads = [];\n    window.__qaFormSubmits = [];\n    window.__qaClipboardCalls = [];\n\n    const originalAnchorClick = HTMLAnchorElement.prototype.click;\n    HTMLAnchorElement.prototype.click = function (...args) {\n      window.__qaDownloads.push(this.getAttribute(\"download\"));\n      return originalAnchorClick.apply(this, args);\n    };\n\n    const originalFormSubmit = HTMLFormElement.prototype.submit;\n    HTMLFormElement.prototype.submit = function (...args) {\n      window.__qaFormSubmits.push(this.getAttribute(\"action\"));\n      return originalFormSubmit.apply(this, args);\n    };\n\n    const originalExecCommand = document.execCommand?.bind(document);\n    document.execCommand = (command) => {\n      window.__qaClipboardCalls.push(command);\n      if (!originalExecCommand) {\n        return true;\n      }\n      return originalExecCommand(command);\n    };\n  });\n}\n\nasync function installMockWebSocket(page: Page) {\n  await page.evaluateOnNewDocument(() => {\n    class MockWebSocket {\n      static CONNECTING = 0;\n      static OPEN = 1;\n      static CLOSING = 2;\n      static CLOSED = 3;\n\n      readyState = 
MockWebSocket.CONNECTING;\n      url: string;\n      listeners: Record<string, Array<(event: any) => void>> = {};\n\n      constructor(url: string) {\n        this.url = url;\n        window.setTimeout(() => {\n          this.readyState = MockWebSocket.OPEN;\n          this.emit(\"open\", {});\n        }, 10);\n      }\n\n      addEventListener(type: string, listener: (event: any) => void) {\n        if (!this.listeners[type]) {\n          this.listeners[type] = [];\n        }\n        this.listeners[type].push(listener);\n      }\n\n      removeEventListener(type: string, listener: (event: any) => void) {\n        if (!this.listeners[type]) return;\n        this.listeners[type] = this.listeners[type].filter(\n          (existing) => existing !== listener\n        );\n      }\n\n      send(data: string) {\n        const params = JSON.parse(data);\n        const code = [\n          \"<!doctype html>\",\n          \"<html>\",\n          \"<head><title>QA</title></head>\",\n          \"<body>\",\n          `<button>${params.generationType ?? 
\"create\"}</button>`,\n          \"</body>\",\n          \"</html>\",\n        ].join(\"\");\n\n        const events = [\n          { type: \"variantCount\", value: \"1\", variantIndex: 0 },\n          { type: \"status\", value: \"Generating\", variantIndex: 0 },\n          { type: \"setCode\", value: code, variantIndex: 0 },\n          { type: \"variantComplete\", value: \"\", variantIndex: 0 },\n        ];\n\n        events.forEach((payload, index) => {\n          window.setTimeout(() => {\n            this.emit(\"message\", { data: JSON.stringify(payload) });\n          }, 20 * (index + 1));\n        });\n\n        window.setTimeout(() => {\n          this.close(1000, \"OK\");\n        }, 120);\n      }\n\n      close(code = 1000, reason = \"\") {\n        this.readyState = MockWebSocket.CLOSED;\n        this.emit(\"close\", { code, reason });\n      }\n\n      emit(type: string, event: any) {\n        (this.listeners[type] || []).forEach((listener) => {\n          listener(event);\n        });\n      }\n    }\n\n    window.WebSocket = MockWebSocket as unknown as typeof WebSocket;\n  });\n}\n"
  },
  {
    "path": "frontend/src/types.ts",
    "content": "import { Stack } from \"./lib/stacks\";\nimport { CodeGenerationModel } from \"./lib/models\";\n\nexport enum EditorTheme {\n  ESPRESSO = \"espresso\",\n  COBALT = \"cobalt\",\n}\n\nexport enum AppTheme {\n  SYSTEM = \"system\",\n  LIGHT = \"light\",\n  DARK = \"dark\",\n}\n\nexport interface Settings {\n  openAiApiKey: string | null;\n  openAiBaseURL: string | null;\n  screenshotOneApiKey: string | null;\n  isImageGenerationEnabled: boolean;\n  editorTheme: EditorTheme;\n  generatedCodeConfig: Stack;\n  codeGenerationModel: CodeGenerationModel;\n  // Only relevant for hosted version\n  isTermOfServiceAccepted: boolean;\n  anthropicApiKey: string | null;\n  geminiApiKey: string | null;\n}\n\nexport enum AppState {\n  INITIAL = \"INITIAL\",\n  CODING = \"CODING\",\n  CODE_READY = \"CODE_READY\",\n}\n\nexport enum ScreenRecorderState {\n  INITIAL = \"initial\",\n  RECORDING = \"recording\",\n  FINISHED = \"finished\",\n}\n\nexport type PromptMessageRole = \"user\" | \"assistant\";\nexport type PromptAssetType = \"image\" | \"video\";\n\nexport interface PromptAsset {\n  id: string;\n  type: PromptAssetType;\n  dataUrl: string;\n}\n\nexport interface PromptContent {\n  text: string;\n  images: string[]; // Array of data URLs\n  videos?: string[]; // Array of data URLs\n  selectedElementHtml?: string; // Raw HTML of selected element (for display only)\n}\n\nexport interface PromptHistoryMessage {\n  role: PromptMessageRole;\n  text: string;\n  images: string[];\n  videos: string[];\n}\n\nexport interface CodeGenerationParams {\n  generationType: \"create\" | \"update\";\n  inputMode: \"image\" | \"video\" | \"text\";\n  prompt: PromptContent;\n  history?: PromptHistoryMessage[];\n  fileState?: {\n    path: string;\n    content: string;\n  };\n  optionCodes?: string[];\n}\n\nexport type FullGenerationSettings = CodeGenerationParams & Settings;\n"
  },
  {
    "path": "frontend/src/urls.ts",
    "content": "export const URLS = {\n  \"intro-to-video\":\n    \"https://github.com/abi/screenshot-to-code/wiki/Screen-Recording-to-Code\",\n  tips: \"https://git.new/s5ywP0e\",\n};\n"
  },
  {
    "path": "frontend/src/vite-env.d.ts",
    "content": "/// <reference types=\"vite/client\" />\n"
  },
  {
    "path": "frontend/tailwind.config.js",
    "content": "/** @type {import('tailwindcss').Config} */\nmodule.exports = {\n  darkMode: [\"class\"],\n  content: [\n    \"./pages/**/*.{ts,tsx}\",\n    \"./components/**/*.{ts,tsx}\",\n    \"./app/**/*.{ts,tsx}\",\n    \"./src/**/*.{ts,tsx}\",\n  ],\n  theme: {\n    container: {\n      center: true,\n      padding: \"2rem\",\n      screens: {\n        \"2xl\": \"1400px\",\n      },\n    },\n    extend: {\n      colors: {\n        button: \"#ffd803\",\n        highlight: \"#ffd803\",\n        border: \"hsl(var(--border))\",\n        input: \"hsl(var(--input))\",\n        ring: \"hsl(var(--ring))\",\n        background: \"hsl(var(--background))\",\n        foreground: \"hsl(var(--foreground))\",\n        primary: {\n          DEFAULT: \"hsl(var(--primary))\",\n          foreground: \"hsl(var(--primary-foreground))\",\n        },\n        secondary: {\n          DEFAULT: \"hsl(var(--secondary))\",\n          foreground: \"hsl(var(--secondary-foreground))\",\n        },\n        destructive: {\n          DEFAULT: \"hsl(var(--destructive))\",\n          foreground: \"hsl(var(--destructive-foreground))\",\n        },\n        muted: {\n          DEFAULT: \"hsl(var(--muted))\",\n          foreground: \"hsl(var(--muted-foreground))\",\n        },\n        accent: {\n          DEFAULT: \"hsl(var(--accent))\",\n          foreground: \"hsl(var(--accent-foreground))\",\n        },\n        popover: {\n          DEFAULT: \"hsl(var(--popover))\",\n          foreground: \"hsl(var(--popover-foreground))\",\n        },\n        card: {\n          DEFAULT: \"hsl(var(--card))\",\n          foreground: \"hsl(var(--card-foreground))\",\n        },\n      },\n      borderRadius: {\n        lg: \"var(--radius)\",\n        md: \"calc(var(--radius) - 2px)\",\n        sm: \"calc(var(--radius) - 4px)\",\n      },\n      keyframes: {\n        \"accordion-down\": {\n          from: { height: 0 },\n          to: { height: \"var(--radix-accordion-content-height)\" },\n        },\n        
\"accordion-up\": {\n          from: { height: \"var(--radix-accordion-content-height)\" },\n          to: { height: 0 },\n        },\n      },\n      animation: {\n        \"accordion-down\": \"accordion-down 0.2s ease-out\",\n        \"accordion-up\": \"accordion-up 0.2s ease-out\",\n      },\n    },\n  },\n  plugins: [require(\"tailwindcss-animate\")],\n};\n"
  },
  {
    "path": "frontend/tsconfig.json",
    "content": "{\n  \"compilerOptions\": {\n    \"target\": \"ES2020\",\n    \"useDefineForClassFields\": true,\n    \"lib\": [\"ES2020\", \"DOM\", \"DOM.Iterable\"],\n    \"module\": \"ESNext\",\n    \"skipLibCheck\": true,\n\n    /* Bundler mode */\n    \"moduleResolution\": \"bundler\",\n    \"allowImportingTsExtensions\": true,\n    \"resolveJsonModule\": true,\n    \"isolatedModules\": true,\n    \"noEmit\": true,\n    \"jsx\": \"react-jsx\",\n\n    /* Linting */\n    \"strict\": true,\n    \"noUnusedLocals\": true,\n    \"noUnusedParameters\": true,\n    \"noFallthroughCasesInSwitch\": true,\n\n    \"baseUrl\": \".\",\n    \"paths\": {\n      \"@/*\": [\"./src/*\"]\n    }\n  },\n  \"include\": [\"src\"],\n  \"references\": [{ \"path\": \"./tsconfig.node.json\" }]\n}\n"
  },
  {
    "path": "frontend/tsconfig.node.json",
    "content": "{\n  \"compilerOptions\": {\n    \"composite\": true,\n    \"skipLibCheck\": true,\n    \"module\": \"ESNext\",\n    \"moduleResolution\": \"bundler\",\n    \"allowSyntheticDefaultImports\": true\n  },\n  \"include\": [\"vite.config.ts\"]\n}\n"
  },
  {
    "path": "frontend/vite.config.ts",
    "content": "import path from \"path\";\nimport { defineConfig, loadEnv } from \"vite\";\nimport checker from \"vite-plugin-checker\";\nimport react from \"@vitejs/plugin-react\";\nimport { createHtmlPlugin } from \"vite-plugin-html\";\n\n// https://vitejs.dev/config/\nexport default ({ mode }) => {\n  process.env = { ...process.env, ...loadEnv(mode, process.cwd()) };\n  return defineConfig({\n    base: \"\",\n    plugins: [\n      react(),\n      checker({ \n        typescript: true\n      }),\n      createHtmlPlugin({\n        inject: {\n          data: {\n            injectHead: process.env.VITE_IS_DEPLOYED\n              ? '<script defer=\"\" data-domain=\"screenshottocode.com\" src=\"https://plausible.io/js/script.js\"></script>'\n              : \"\",\n          },\n        },\n      }),\n    ],\n    resolve: {\n      alias: {\n        \"@\": path.resolve(__dirname, \"./src\"),\n      },\n    },\n  });\n};\n"
  },
  {
    "path": "package.json",
    "content": "{\n  \"name\": \"screenshot-to-code\",\n  \"private\": true,\n  \"version\": \"0.0.0\",\n  \"type\": \"module\",\n  \"scripts\": {\n    \"test\": \"npm run test:frontend && npm run test:backend\",\n    \"test:frontend\": \"cd frontend && yarn test\",\n    \"test:backend\": \"cd backend && poetry run pytest\"\n  },\n  \"workspaces\": [\n    \"frontend\",\n    \"backend\"\n  ]\n}"
  },
  {
    "path": "plan.md",
    "content": "# Variant System Transformation Plan\n\n## Current System Analysis\n\n### How Variants Currently Work\n\n1. **Blocking Generation**: When a user creates or updates code, the system generates NUM_VARIANTS (2) variants in parallel using different AI models\n2. **All-or-Nothing**: The WebSocket connection remains open until ALL variants complete generation\n3. **No Interactivity During Generation**: Users must wait for all variants to finish before they can:\n   - Select a variant\n   - Make updates\n   - Start a new generation\n\n### Key Components\n\n#### Backend (`backend/routes/generate_code.py`)\n- Lines 286-348: Creates parallel tasks for all variants\n- Line 350: Uses `asyncio.gather()` to wait for ALL tasks to complete\n- Line 447: Only closes WebSocket after all variants are done\n- Sends messages with `variantIndex` to route to correct variant\n\n#### Frontend\n- `frontend/src/generateCode.ts`: WebSocket client that only triggers `onComplete` when connection closes\n- `frontend/src/App.tsx`: Sets `AppState.CODING` during generation, blocking UI\n- `frontend/src/components/variants/Variants.tsx`: Only shows variants when generation is complete\n\n### Current Flow\n```\nUser Request → Generate All Variants → Wait for All → Close WS → Show Results → Allow Interaction\n```\n\n## Proposed Non-Blocking System\n\n### New Flow\n```\nUser Request → Start Generation → Show First Complete → User Can Interact → Cancel Others if Needed\n```\n\n### Key Changes Needed\n\n#### 1. 
Backend Changes\n\n**WebSocket Protocol Enhancement**\n- Add new message type: `\"variantComplete\"` to signal individual variant completion\n- Keep WebSocket open after first variant completes\n- Add ability to cancel specific variants mid-generation\n- Track variant states: `pending`, `generating`, `complete`, `failed`, `cancelled`\n\n**Generation Logic**\n```python\n# Instead of:\ncompletions = await asyncio.gather(*tasks, return_exceptions=True)\n\n# Use: wrap each task so its completion is reported the moment it\n# finishes, rather than in submission order\nasync def process_variant(index: int, task) -> None:\n    try:\n        await task\n        await send_message(\"variantComplete\", index)\n        # Allow frontend to interact immediately\n    except Exception:\n        await send_message(\"variantFailed\", index)\n\nawait asyncio.gather(*(process_variant(i, t) for i, t in enumerate(tasks)))\n```\n\n#### 2. Frontend Changes\n\n**State Management**\n- Add variant-level state tracking in `project-store.ts`:\n  ```typescript\n  interface VariantState {\n    code: string;\n    status: 'pending' | 'generating' | 'complete' | 'failed' | 'cancelled';\n    generationTime?: number;\n  }\n  ```\n\n**UI Updates**\n- Show variant options as soon as first one completes\n- Display loading states for incomplete variants\n- Enable \"Update\" button when at least one variant is ready\n- Add visual indicators for variant states\n\n**WebSocket Client**\n- Handle new `variantComplete` message type\n- Don't wait for WebSocket close to enable interactions\n- Track which variants are still generating\n\n#### 3. 
User Experience Improvements\n\n**Progressive Loading**\n- Show first variant immediately when ready\n- Display skeleton/loading state for pending variants\n- Allow switching between completed variants while others generate\n\n**Update Flow**\n- When user starts update with incomplete variants:\n  - Cancel remaining variant generations\n  - Start new generation with 2 variants\n  - Clear previous incomplete variants\n\n**Visual Feedback**\n- Loading spinner on generating variants\n- Success checkmark on completed variants\n- Subtle animation when variant completes\n- Time elapsed indicator\n\n### Implementation Steps\n\n1. **Phase 1: Backend Protocol** (2-3 days)\n   - Modify WebSocket message protocol\n   - Implement per-variant completion tracking\n   - Add cancellation mechanism\n\n2. **Phase 2: Frontend State** (2-3 days)\n   - Update Zustand store for variant states\n   - Modify WebSocket client handling\n   - Update commit structure\n\n3. **Phase 3: UI Components** (2-3 days)\n   - Update Variants.tsx for progressive display\n   - Add loading states and animations\n   - Update Sidebar.tsx for immediate interactions\n\n4. **Phase 4: Testing & Polish** (2 days)\n   - Handle edge cases (all variants fail, etc.)\n   - Performance optimization\n   - User testing\n\n### Benefits\n\n1. **Faster Time to First Interaction**: Users can work with the first variant immediately\n2. **Better User Experience**: No more waiting for slow models when fast ones are ready\n3. **Increased Efficiency**: Users can evaluate and iterate faster\n4. **Flexibility**: Users can cancel unwanted generations mid-flight\n\n### Risks & Mitigation\n\n1. **Complexity**: More state to manage\n   - Mitigation: Careful state design, comprehensive testing\n\n2. **Race Conditions**: User updates while variants generating\n   - Mitigation: Clear cancellation logic, queue management\n\n3. 
**UI Confusion**: Users might not understand partial results\n   - Mitigation: Clear visual indicators, tooltips\n\n### Success Metrics\n\n- Time to first interaction reduced by ~50%\n- User satisfaction with generation speed\n- Reduced abandonment during generation\n- Increased number of iterations per session\n"
  }
]