[
  {
    "path": ".flake8",
    "content": "[flake8]\nmax-line-length=100\nignore=E203,E501,W503\n"
  },
  {
    "path": ".github/workflows/release.yml",
    "content": "name: Release to PyPI\n\non:\n  push:\n    branches:\n      - main\n\njobs:\n  deploy:\n    runs-on: ubuntu-latest\n    environment: pypi-publish\n    steps:\n    - uses: actions/checkout@v3\n      with:\n        fetch-depth: 0\n    \n    - name: Set up Python\n      uses: actions/setup-python@v4\n      with:\n        python-version: '3.9'\n    \n    - name: Install dependencies\n      run: |\n        python -m pip install --upgrade pip\n        pip install hatch build twine\n    \n    - name: Build package\n      run: python -m build\n    \n    - name: Publish package\n      uses: pypa/gh-action-pypi-publish@release/v1\n      with:\n        password: ${{ secrets.PYPI_API_TOKEN }}\n"
  },
  {
    "path": ".gitignore",
    "content": "# Byte-compiled / optimized files\n__pycache__/\n*.py[cod]\n*$py.class\n*.so\n\n# Distribution / packaging\ndist/\nbuild/\n*.egg-info/\n*.egg\n\n# Unit test / coverage reports\n.pytest_cache/\n.coverage\nhtmlcov/\n.tox/\ncoverage.xml\n*.cover\n\n# Virtual environments\nenv/\nvenv/\nENV/\n.env\n\n# IDE files\n.idea/\n.vscode/\n*.swp\n*.swo\n\n# OS specific files\n.DS_Store\nThumbs.db\n\n# Project files\nSIDEKICK.md\n.python-version\nREFACTOR*\n.claude/\nTASKS*.md\nerror*.log\n*.log\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2025 Gavin Vickery\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "Makefile",
    "content": ".PHONY: install clean lint format build test\n\ninstall:\n\tpip install -e \".[dev]\"\n\nrun:\n\tenv/bin/sidekick\n\ndebug:\n\tenv/bin/sidekick --debug\n\nclean:\n\trm -rf build/\n\trm -rf dist/\n\trm -rf *.egg-info\n\tfind . -type d -name __pycache__ -exec rm -rf {} +\n\tfind . -type f -name \"*.pyc\" -delete\n\nlint:\n\tisort src/ tests/\n\tblack src/ tests/\n\tflake8 src/ tests/\n\ntest:\n\tpytest\n\nbuild:\n\tpython -m build\n"
  },
  {
    "path": "README.md",
    "content": "# Sidekick (Beta)\n\n[![PyPI version](https://badge.fury.io/py/sidekick-cli.svg)](https://badge.fury.io/py/sidekick-cli)\n[![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)\n\n![Sidekick Demo](screenshot.gif)\n\nYour agentic CLI developer.\n\n## Overview\n\nSidekick is an agentic CLI-based AI tool inspired by Claude Code, Copilot, Windsurf and Cursor. It's meant\nto be an open source alternative to these tools, providing a similar experience but with the flexibility of\nusing different LLM providers (Anthropic, OpenAI, Google Gemini) while keeping the agentic workflow.\n\n*Sidekick is currently in beta and under active development. Please [report issues](https://github.com/geekforbrains/sidekick-cli/issues) or share feedback!*\n\n## Features\n\n- No vendor lock-in. Use whichever LLM provider you prefer.\n- MCP support\n- Easily switch between models in the same session.\n- JIT-style system prompt injection ensures Sidekick doesn't lose the plot.\n- Per-project guide. Adjust Sidekick's behavior to suit your needs.\n- CLI-first design. Ditch the clunky IDE.\n- Cost and token tracking.\n- Per command or per session confirmation skipping.\n\n## Roadmap\n\n- Tests 😅\n- More LLM providers, including Ollama\n\n## Quick Start\n\nInstall Sidekick.\n\n```\npip install sidekick-cli\n```\n\nOn first run, you'll be asked to configure your LLM providers.\n\n```\nsidekick\n```\n\n## Configuration\n\nAfter initial setup, Sidekick saves a config file to `~/.config/sidekick.json`. You can open and \nedit this file as needed. Future updates will make editing easier directly from within Sidekick.\n\n### MCP Support\n\nSidekick supports Model Context Protocol (MCP) servers. You can configure MCP servers in your `~/.config/sidekick.json` file:\n\n```json\n{\n  \"mcpServers\": {\n    \"fetch\": {\n      \"command\": \"uvx\",\n      \"args\": [\"mcp-server-fetch\"]\n    },\n    \"github\": {\n      \"command\": \"npx\",\n      \"args\": [\"-y\", \"@modelcontextprotocol/server-github\"],\n      \"env\": {\n        \"GITHUB_PERSONAL_ACCESS_TOKEN\": \"<YOUR_TOKEN>\"\n      }\n    }\n  }\n}\n```\n\nMCP servers extend the capabilities of your AI assistant, allowing it to interact with additional tools and data sources. Learn more about MCP at [modelcontextprotocol.io](https://modelcontextprotocol.io/).\n\n### Available Commands\n\n- `/help` - Show available commands\n- `/yolo` - Toggle \"yolo\" mode (skip tool confirmations)\n- `/clear` - Clear message history\n- `/model` - List available models\n- `/model <num>` - Switch to a specific model (by index)\n- `/usage` - Show session usage statistics\n- `exit` - Exit the application\n\n## Customization\n\nSidekick supports the use of a \"guide\". This is a `SIDEKICK.md` file in the project root that contains\ninstructions for Sidekick. Helpful for specifying tech stack, project structure, development\npreferences etc.\n\n## Telemetry\n\nSidekick uses [Sentry](https://sentry.io/) for error tracking and usage analytics. You can disable this by\nstarting with the `--no-telemetry` flag.\n\n```\nsidekick --no-telemetry\n```\n\n## Requirements\n\n- Python 3.10 or higher\n- Git (for undo functionality)\n\n## Installation\n\n### Using pip\n\n```bash\npip install sidekick-cli\n```\n\n### From Source\n\n1. Clone the repository\n2. Install dependencies: `pip install .` (or `pip install -e .` for development)\n\n## Development\n\n```bash\n# Install development dependencies\nmake install\n\n# Run linting\nmake lint\n\n# Run tests\nmake test\n```\n\n## Release Process\n\nWhen preparing a new release:\n\n1. Update version numbers in:\n   - `pyproject.toml`\n   - `src/sidekick/constants.py` (APP_VERSION)\n\n2. Commit the version changes:\n   ```bash\n   git add pyproject.toml src/sidekick/constants.py\n   git commit -m \"chore: bump version to X.Y.Z\"\n   ```\n\n3. Create and push a tag:\n   ```bash\n   git tag vX.Y.Z\n   git push origin vX.Y.Z\n   ```\n\n4. Create a GitHub release:\n   ```bash\n   gh release create vX.Y.Z --title \"vX.Y.Z\" --notes \"Release notes here\"\n   ```\n\n5. Merge to main branch and push to trigger PyPI release (automated)\n\n### Commit Convention\n\nThis project follows the [Conventional Commits](https://www.conventionalcommits.org/) specification for commit messages:\n\n- `feat:` - New features\n- `fix:` - Bug fixes\n- `docs:` - Documentation changes\n- `style:` - Code style changes (formatting, etc.)\n- `refactor:` - Code refactoring\n- `perf:` - Performance improvements\n- `test:` - Test additions or modifications\n- `chore:` - Maintenance tasks (version bumps, etc.)\n- `build:` - Build system changes\n- `ci:` - CI configuration changes\n\n## Links\n\n- [PyPI Package](https://pypi.org/project/sidekick-cli/)\n- [GitHub Issues](https://github.com/geekforbrains/sidekick-cli/issues)\n- [GitHub Repository](https://github.com/geekforbrains/sidekick-cli)\n\n## License\n\nMIT\n"
  },
  {
    "path": "pyproject.toml",
    "content": "[build-system]\nrequires = [\"setuptools>=61.0.0\", \"wheel\"]\nbuild-backend = \"setuptools.build_meta\"\n\n[project]\nname = \"sidekick-cli\"\nversion = \"0.5.1\"\ndescription = \"Your agentic CLI developer.\"\nkeywords = [\"cli\", \"agent\", \"development\", \"automation\"]\nreadme = \"README.md\"\nrequires-python = \">=3.10\"\nlicense = {text = \"MIT\"}\nauthors = [\n    { name = \"Gavin Vickery\", email = \"gavin@geekforbrains.com\" },\n]\nclassifiers = [\n    \"Development Status :: 4 - Beta\",\n    \"Intended Audience :: Developers\",\n    \"Programming Language :: Python :: 3\",\n    \"Programming Language :: Python :: 3.10\",\n    \"Programming Language :: Python :: 3.11\",\n    \"Programming Language :: Python :: 3.12\",\n    \"Programming Language :: Python :: 3.13\",\n    \"Topic :: Software Development\",\n    \"Topic :: Utilities\",\n]\ndependencies = [\n    \"pydantic-ai\",\n    \"rich\",\n    \"typer\",\n    \"prompt_toolkit\",\n]\n\n[project.scripts]\nsidekick = \"sidekick:app\"\n\n[project.optional-dependencies]\ndev = [\n    \"black\",\n    \"flake8\",\n    \"isort\",\n    \"pytest\",\n    \"pytest-asyncio\",\n    \"pytest-mock\",\n]\n\n[project.urls]\nHomepage = \"https://github.com/geekforbrains/sidekick-cli\"\nRepository = \"https://github.com/geekforbrains/sidekick-cli\"\n\n[tool.black]\nline-length = 100\n\n[tool.isort]\nprofile = \"black\"\nline_length = 100\n"
  },
  {
    "path": "pytest.ini",
    "content": "[pytest]\ntestpaths = tests\npython_files = test_*.py\npython_classes = Test*\npython_functions = test_*\naddopts = -v --tb=short\nasyncio_mode = auto"
  },
  {
    "path": "src/sidekick/__init__.py",
    "content": "from sidekick.main import app\n\n__all__ = [\"app\"]\n"
  },
  {
    "path": "src/sidekick/agent.py",
    "content": "import asyncio\nimport logging\nfrom pathlib import Path\nfrom typing import Any, Optional\n\nfrom pydantic_ai import Agent, CallToolsNode\nfrom pydantic_ai.messages import (\n    TextPart,\n    ToolCallPart,\n)\n\nfrom sidekick import ui\nfrom sidekick.deps import ToolDeps\nfrom sidekick.mcp import MCPAgent, load_mcp_servers\nfrom sidekick.session import session\nfrom sidekick.tools import TOOLS\nfrom sidekick.usage import usage_tracker\nfrom sidekick.utils.error import ErrorContext\n\nlog = logging.getLogger(__name__)\n\n\ndef _get_prompt(name: str) -> str:\n    try:\n        prompt_path = Path(__file__).parent / \"prompts\" / f\"{name}.txt\"\n        return prompt_path.read_text(encoding=\"utf-8\").strip()\n    except FileNotFoundError:\n        return f\"Error: Prompt file '{name}.txt' not found\"\n\n\nasync def _process_node(node, message_history):\n    if isinstance(node, CallToolsNode):\n        for part in node.model_response.parts:\n            if isinstance(part, ToolCallPart):\n                log.debug(f\"Calling tool: {part.tool_name}\")\n\n            # I cant' find a definitive way to check if a text part is a \"thinking\" response\n            # or not, but majority of the time they are accompanied by other tool calls.\n            # Using that as a basis for showing \"thinking\" messages.\n            if isinstance(part, TextPart) and len(node.model_response.parts) > 1:\n                ui.stop_spinner()\n                ui.thinking_panel(part.content)\n                ui.start_spinner()\n\n    if hasattr(node, \"request\"):\n        message_history.add_request(node.request)\n\n        for part in node.request.parts:\n            if part.part_kind == \"retry-prompt\":\n                ui.stop_spinner()\n                error_msg = (\n                    part.content\n                    if hasattr(part, \"content\") and isinstance(part.content, str)\n                    else \"Trying a different approach\"\n                )\n                ui.muted(f\"{error_msg}\")\n                ui.start_spinner()\n\n    if hasattr(node, \"model_response\"):\n        message_history.add_response(node.model_response)\n\n\ndef create_agent():\n    \"\"\"Create a fresh agent instance with MCP server support.\"\"\"\n    base_agent = Agent(\n        model=session.current_model,\n        system_prompt=_get_prompt(\"system\"),\n        tools=TOOLS,\n        mcp_servers=load_mcp_servers(),\n        deps_type=ToolDeps,\n    )\n    return MCPAgent(base_agent)\n\n\ndef _create_confirmation_callback():\n    async def confirm(title: str, preview: Any, footer: Optional[str] = None) -> bool:\n        tool_name = title.split(\":\")[0].strip() if \":\" in title else title\n\n        if not session.confirmation_enabled or tool_name in session.disabled_confirmations:\n            return True\n\n        ui.stop_spinner()\n        ui.tool(preview, title, footer)\n\n        # Display confirmation options without using a panel, but still\n        # indented by two spaces so they line up with other panel content.\n        options = (\n            (\"y\", \"Yes, execute this tool\"),\n            (\"a\", \"Always allow this tool\"),\n            (\"n\", \"No, cancel this execution\"),\n        )\n\n        for key, description in options:\n            ui.muted(f\"{key}: {description}\", indent=2)\n\n        while True:\n            choice = ui.console.input(\"  Continue? (y): \").lower().strip()\n\n            if choice == \"\" or choice in [\"y\", \"yes\"]:\n                ui.start_spinner()\n                return True\n            elif choice in [\"a\", \"always\"]:\n                session.disabled_confirmations.add(tool_name)\n                ui.start_spinner()\n                return True\n            elif choice in [\"n\", \"no\"]:\n                return False\n\n    return confirm\n\n\ndef _create_display_tool_status_callback():\n    async def display(title: str, *args: Any, **kwargs: Any) -> None:\n        \"\"\"\n        Display the current tool status.\n\n        Args:\n            title: str\n            *args: Any\n            **kwargs: Any\n                Keyword arguments passed to the tool. These will be rendered in the\n                form ``key=value`` in the output.\n        \"\"\"\n        ui.stop_spinner()\n\n        parts = []\n        if args:\n            parts.extend(str(arg) for arg in args)\n        if kwargs:\n            parts.extend(f\"{k}={v}\" for k, v in kwargs.items())\n\n        arg_str = \", \".join(parts)\n        ui.info(f\"{title}({arg_str})\")\n        ui.start_spinner()\n\n    return display\n\n\nasync def process_request(message: str, message_history):\n    log.debug(f\"Processing request: {message.replace('\\n', ' ')[:100]}...\")\n\n    async with create_agent() as mcp_agent:\n        agent = mcp_agent.agent\n\n        mh = message_history.get_messages_for_agent()\n        log.debug(f\"Message history size: {len(mh)}\")\n\n        deps = ToolDeps(\n            confirm_action=_create_confirmation_callback(),\n            display_tool_status=_create_display_tool_status_callback(),\n        )\n\n        ctx = ErrorContext(\"agent\", ui)\n        ctx.add_cleanup(lambda e: message_history.patch_on_error(str(e)))\n\n        try:\n            async with agent.iter(message, deps=deps, message_history=mh) as agent_run:\n                async for node in agent_run:\n                    await _process_node(node, message_history)\n\n                usage = agent_run.usage()\n                if usage:\n                    usage_tracker.record_usage(session.current_model, usage)\n\n                result = agent_run.result.output\n                log.debug(f\"Agent response: {result.replace('\\n', ' ')[:100]}...\")\n                return result\n        except asyncio.CancelledError:\n            raise\n        except Exception as e:\n            if type(e).__name__ == \"ClosedResourceError\" and e.__class__.__module__ == \"anyio\":\n                raise asyncio.CancelledError() from e\n            if type(e).__name__ == \"McpError\" and str(e) == \"Connection closed\":\n                log.debug(\"MCP connection closed, cancelling request\")\n                raise asyncio.CancelledError() from e\n            return await ctx.handle(e)\n"
  },
  {
    "path": "src/sidekick/commands/__init__.py",
    "content": "\"\"\"Command handlers for Sidekick CLI.\"\"\"\n\nfrom sidekick import ui\nfrom sidekick.commands.clear import handle_clear\nfrom sidekick.commands.dump import handle_dump\nfrom sidekick.commands.help import handle_help\nfrom sidekick.commands.model import handle_model\nfrom sidekick.commands.usage import handle_usage\nfrom sidekick.commands.yolo import handle_yolo\n\n__all__ = [\n    \"handle_clear\",\n    \"handle_dump\",\n    \"handle_help\",\n    \"handle_model\",\n    \"handle_usage\",\n    \"handle_yolo\",\n    \"handle_command\",\n]\n\n\nasync def handle_command(user_input: str, message_history=None) -> bool:\n    \"\"\"Handle slash commands. Returns True if command was handled.\"\"\"\n    if not user_input.startswith(\"/\"):\n        return False\n\n    parts = user_input.split()\n    command = parts[0]\n    args = parts[1:] if len(parts) > 1 else []\n\n    handlers = {\n        \"/dump\": lambda: handle_dump(message_history),\n        \"/yolo\": handle_yolo,\n        \"/model\": lambda: handle_model(args),\n        \"/usage\": handle_usage,\n        \"/clear\": lambda: handle_clear(message_history),\n        \"/help\": handle_help,\n    }\n\n    handler = handlers.get(command)\n    if handler:\n        await handler()\n        return True\n\n    ui.line()\n    ui.error(f\"Unknown command: {command}\")\n    ui.muted(\"Use /help to see available commands\")\n\n    return True\n"
  },
  {
    "path": "src/sidekick/commands/clear.py",
    "content": "\"\"\"Handle /clear command.\"\"\"\n\nfrom sidekick import ui\n\n\nasync def handle_clear(message_history):\n    \"\"\"Handle /clear command - clear conversation history and screen.\"\"\"\n    if message_history:\n        message_history.clear()\n        ui.banner()\n        ui.success(\"Conversation history cleared\")\n    else:\n        ui.error(\"Message history not available\")\n"
  },
  {
    "path": "src/sidekick/commands/dump.py",
    "content": "\"\"\"Handle /dump command.\"\"\"\n\nfrom sidekick import ui\n\nDUMP_FILE_PATH = \"dump.log\"\n\n\ndef recursive_expand(obj, indent=0):\n    \"\"\"Recursively expand objects to show their attributes.\"\"\"\n    indent_str = \"  \" * indent\n    lines = []\n\n    if isinstance(obj, (str, int, float, bool, type(None))):\n        return repr(obj)\n\n    if hasattr(obj, \"isoformat\"):\n        return repr(obj)\n\n    if isinstance(obj, (list, tuple)):\n        if not obj:\n            return \"[]\" if isinstance(obj, list) else \"()\"\n\n        bracket_open = \"[\" if isinstance(obj, list) else \"(\"\n        bracket_close = \"]\" if isinstance(obj, list) else \")\"\n\n        if len(obj) == 1 and isinstance(obj[0], (str, int, float, bool)):\n            return f\"{bracket_open}{repr(obj[0])}{bracket_close}\"\n\n        lines.append(bracket_open)\n        for item in obj:\n            expanded = recursive_expand(item, indent + 1)\n            lines.append(f\"{indent_str}  {expanded},\")\n        lines.append(f\"{indent_str}{bracket_close}\")\n        return \"\\n\".join(lines)\n\n    if isinstance(obj, dict):\n        if not obj:\n            return \"{}\"\n\n        lines.append(\"{\")\n        for key, value in obj.items():\n            expanded_value = recursive_expand(value, indent + 1)\n            lines.append(f\"{indent_str}  {repr(key)}: {expanded_value},\")\n        lines.append(f\"{indent_str}}}\")\n        return \"\\n\".join(lines)\n\n    if hasattr(obj, \"__dict__\"):\n        class_name = type(obj).__name__\n        attrs = vars(obj)\n\n        if not attrs:\n            return f\"{class_name}()\"\n\n        lines.append(f\"{class_name}(\")\n        for key, value in attrs.items():\n            expanded_value = recursive_expand(value, indent + 1)\n            lines.append(f\"{indent_str}  {key}={expanded_value},\")\n        lines.append(f\"{indent_str})\")\n        return \"\\n\".join(lines)\n\n    if hasattr(obj, \"__class__\"):\n        class_name = type(obj).__name__\n        attrs = {\n            attr: getattr(obj, attr)\n            for attr in dir(obj)\n            if not attr.startswith(\"_\") and not callable(getattr(obj, attr))\n        }\n\n        if not attrs:\n            return repr(obj)\n\n        lines.append(f\"{class_name}(\")\n        for key, value in attrs.items():\n            expanded_value = recursive_expand(value, indent + 1)\n            lines.append(f\"{indent_str}  {key}={expanded_value},\")\n        lines.append(f\"{indent_str})\")\n        return \"\\n\".join(lines)\n\n    return repr(obj)\n\n\nasync def handle_dump(message_history):\n    \"\"\"Handle /dump command - write message history to dump.log, overwriting the file each time.\"\"\"\n    if not message_history:\n        ui.error(\"Message history not available\")\n        return\n\n    try:\n        with open(DUMP_FILE_PATH, \"w\") as f:\n            for i, message in enumerate(message_history):\n                f.write(f\"{'=' * 80}\\n\")\n                f.write(f\"Message #{i} - Type: {type(message).__name__}\\n\")\n                f.write(f\"{'=' * 80}\\n\\n\")\n\n                expanded = recursive_expand(message)\n                f.write(expanded)\n                f.write(\"\\n\\n\")\n\n        ui.success(f\"Message history dumped to {DUMP_FILE_PATH}\")\n    except Exception as e:\n        ui.error(f\"Failed to dump message history: {e}\")\n"
  },
  {
    "path": "src/sidekick/commands/help.py",
    "content": "\"\"\"Handle /help command.\"\"\"\n\nfrom sidekick import ui\n\n\nasync def handle_help():\n    \"\"\"Handle /help command - show available commands.\"\"\"\n    ui.line()\n    ui.help()\n"
  },
  {
    "path": "src/sidekick/commands/model.py",
    "content": "\"\"\"Handle /model command.\"\"\"\n\nimport logging\n\nfrom rich.table import Table\n\nfrom sidekick import ui\nfrom sidekick.config import update_config_file\nfrom sidekick.constants import MODELS\nfrom sidekick.session import session\nfrom sidekick.ui.colors import colors\n\nlog = logging.getLogger(__name__)\n\n\nasync def handle_model(args: list[str]):\n    \"\"\"Handle /model command - list, switch, or set default model.\"\"\"\n    ui.line()\n\n    if len(args) == 0:\n        table = Table(show_header=False, box=None, padding=(0, 2, 0, 0))\n        table.add_column(\"#\", justify=\"right\", style=colors.primary)\n        table.add_column(\"Model\", style=\"white\")\n\n        for i, model_name in enumerate(MODELS.keys(), 1):\n            label = model_name\n            if model_name == session.current_model:\n                label += \" [dim](current)[/dim]\"\n            table.add_row(str(i), label)\n\n        ui.info_panel(table, \"Available Models\")\n\n    elif len(args) >= 1:\n        try:\n            model_num = int(args[0])\n            model_list = list(MODELS.keys())\n            if 1 <= model_num <= len(model_list):\n                selected_model = model_list[model_num - 1]\n\n                if len(args) >= 2 and args[1] == \"default\":\n                    try:\n                        update_config_file({\"default_model\": selected_model})\n                        ui.success(f\"Set {selected_model} as default model\")\n                    except Exception as e:\n                        ui.error(f\"Failed to update config: {e}\")\n                else:\n                    old_model = session.current_model\n                    session.current_model = selected_model\n                    log.debug(f\"Model switched from {old_model} to {selected_model}\")\n                    ui.info(f\"Switched to model: {selected_model}\")\n            else:\n                ui.error(f\"Invalid model number. Choose between 1 and {len(model_list)}\")\n        except ValueError:\n            ui.error(\"Invalid model number\")\n"
  },
  {
    "path": "src/sidekick/commands/usage.py",
    "content": "\"\"\"Handle /usage command.\"\"\"\n\nfrom rich.text import Text\n\nfrom sidekick import ui\nfrom sidekick.usage import usage_tracker\n\n\nasync def handle_usage():\n    \"\"\"Handle /usage command - show session usage statistics.\"\"\"\n    content = Text()\n\n    if usage_tracker.total_tokens > 0:\n        content.append(\"Total Statistics\\n\", style=f\"bold {ui.colors.primary}\")\n        content.append(f\"  • Total tokens: {usage_tracker.total_tokens:,}\\n\", style=\"white\")\n        content.append(f\"  • Total cost: ${usage_tracker.total_cost:.5f}\\n\", style=\"white\")\n        content.append(f\"  • Total requests: {usage_tracker.total_requests:,}\\n\", style=\"white\")\n\n    if usage_tracker.last_request:\n        if usage_tracker.total_tokens > 0:\n            content.append(\"\\n\")\n        content.append(\"Last Request\\n\", style=f\"bold {ui.colors.primary}\")\n        content.append(f\"  • Model: {usage_tracker.last_request['model']}\\n\", style=\"white\")\n        content.append(\n            f\"  • Input tokens: {usage_tracker.last_request['input_tokens']:,}\\n\", style=\"white\"\n        )\n        content.append(\n            f\"  • Cached tokens: {usage_tracker.last_request['cached_tokens']:,}\\n\", style=\"white\"\n        )\n        content.append(\n            f\"  • Output tokens: {usage_tracker.last_request['output_tokens']:,}\\n\", style=\"white\"\n        )\n        content.append(\n            f\"  • Request cost: ${usage_tracker.last_request['request_cost']:.5f}\\n\", style=\"white\"\n        )\n\n    if not usage_tracker.total_tokens:\n        content.append(\"No usage data yet in this session\", style=ui.colors.muted)\n\n    if content.plain.endswith(\"\\n\"):\n        content = Text(content.plain.rstrip(\"\\n\"))\n\n    panel = ui.create_panel(content, \"Session Usage Statistics\", ui.colors.muted)\n    ui.display_panel(panel)\n"
  },
  {
    "path": "src/sidekick/commands/yolo.py",
    "content": "\"\"\"Handle /yolo command.\"\"\"\n\nfrom sidekick import ui\nfrom sidekick.session import session\n\n\nasync def handle_yolo():\n    \"\"\"Handle /yolo command - toggle confirmation mode.\"\"\"\n    session.confirmation_enabled = not session.confirmation_enabled\n\n    if session.confirmation_enabled:\n        session.disabled_confirmations.clear()\n\n    status = \"disabled (YOLO mode)\" if not session.confirmation_enabled else \"enabled\"\n    ui.info(f\"Tool confirmations {status}\")\n"
  },
  {
    "path": "src/sidekick/config.py",
    "content": "\"\"\"Configuration management for Sidekick CLI.\"\"\"\n\nimport json\nimport os\nfrom pathlib import Path\nfrom typing import Any, Dict\n\nfrom .constants import DEFAULT_USER_CONFIG\n\n\nclass ConfigError(Exception):\n    \"\"\"Base exception for configuration errors.\"\"\"\n\n    pass\n\n\nclass ConfigValidationError(ConfigError):\n    \"\"\"Raised when config structure is invalid.\"\"\"\n\n    pass\n\n\ndef get_config_path() -> Path:\n    \"\"\"Get the path to the config file.\"\"\"\n    return Path.home() / \".config\" / \"sidekick.json\"\n\n\ndef config_exists() -> bool:\n    \"\"\"Check if the config file exists.\"\"\"\n    return get_config_path().exists()\n\n\ndef read_config_file() -> Dict[str, Any]:\n    \"\"\"Read and parse the config file.\n\n    Returns:\n        dict: Parsed configuration\n\n    Raises:\n        ConfigError: If config file doesn't exist or can't be accessed\n        ConfigValidationError: If config file contains invalid JSON\n    \"\"\"\n    config_path = get_config_path()\n\n    if not config_path.exists():\n        raise ConfigError(f\"Config file not found at {config_path}\")\n\n    try:\n        with open(config_path, \"r\") as f:\n            return json.load(f)\n    except PermissionError as e:\n        raise ConfigError(f\"Cannot access config file at {config_path}\") from e\n    except json.JSONDecodeError as e:\n        raise ConfigValidationError(f\"Invalid JSON in config file at {config_path}\") from e\n\n\ndef validate_config_structure(config: Dict[str, Any]) -> None:\n    \"\"\"Validate the configuration structure.\n\n    Args:\n        config: Configuration dictionary to validate\n\n    Raises:\n        ConfigValidationError: If required fields are missing or invalid\n    \"\"\"\n    if not isinstance(config, dict):\n        raise ConfigValidationError(\"Config must be a JSON object\")\n\n    if \"default_model\" not in config:\n        raise ConfigValidationError(\"Config missing required field 'default_model'\")\n\n    if not isinstance(config[\"default_model\"], str):\n        raise ConfigValidationError(\"'default_model' must be a string\")\n\n    if \"env\" not in config:\n        raise ConfigValidationError(\"Config missing required field 'env'\")\n\n    if not isinstance(config[\"env\"], dict):\n        raise ConfigValidationError(\"'env' field must be an object\")\n\n\ndef parse_mcp_servers(config: Dict[str, Any]) -> Dict[str, Any]:\n    \"\"\"Extract and validate MCP server configuration.\n\n    Args:\n        config: Full configuration dictionary\n\n    Returns:\n        dict: MCP servers configuration (may be empty)\n\n    Raises:\n        ConfigValidationError: If mcpServers field is present but invalid\n    \"\"\"\n    if \"mcpServers\" not in config:\n        return {}\n\n    mcp_servers = config[\"mcpServers\"]\n\n    if not isinstance(mcp_servers, dict):\n        raise ConfigValidationError(\"'mcpServers' field must be an object\")\n\n    for key, server_config in mcp_servers.items():\n        if not isinstance(server_config, dict):\n            raise ConfigValidationError(f\"MCP server '{key}' configuration must be an object\")\n\n        if \"command\" not in server_config:\n            raise ConfigValidationError(f\"MCP server '{key}' missing required field 'command'\")\n\n        if not isinstance(server_config[\"command\"], str):\n            raise ConfigValidationError(f\"MCP server '{key}' field 'command' must be a string\")\n\n        if \"args\" not in server_config:\n            raise ConfigValidationError(f\"MCP server '{key}' missing required field 'args'\")\n\n        if not isinstance(server_config[\"args\"], list):\n            raise ConfigValidationError(f\"MCP server '{key}' field 'args' must be an array\")\n\n        if len(server_config[\"args\"]) < 1:\n            raise ConfigValidationError(\n                f\"MCP server '{key}' field 'args' must contain at least one argument\"\n            )\n\n        if \"env\" in server_config and not isinstance(server_config[\"env\"], dict):\n            raise ConfigValidationError(f\"MCP server '{key}' field 'env' must be an object\")\n\n    return mcp_servers\n\n\ndef set_env_vars(env_dict: Dict[str, str]) -> None:\n    \"\"\"Set environment variables from config.\n\n    Args:\n        env_dict: Dictionary of environment variables to set\n    \"\"\"\n    for key, value in env_dict.items():\n        if value and isinstance(value, str):\n            os.environ[key] = value\n\n\ndef update_config_file(updates: Dict[str, Any]) -> None:\n    \"\"\"Update the config file with new values.\n\n    Args:\n        updates: Dictionary of updates to apply to the config\n\n    Raises:\n        ConfigError: If config file cannot be read or written\n    \"\"\"\n    try:\n        config = read_config_file()\n    except FileNotFoundError:\n        raise ConfigError(\"Config file not found. Please run initial setup first.\")\n\n    # Merge updates into existing config\n    for key, value in updates.items():\n        if isinstance(value, dict) and key in config and isinstance(config[key], dict):\n            # For nested dicts, merge instead of replace\n            config[key].update(value)\n        else:\n            config[key] = value\n\n    # Write updated config back to file\n    config_path = get_config_path()\n\n    # Ensure the config directory exists\n    config_path.parent.mkdir(parents=True, exist_ok=True)\n\n    try:\n        with open(config_path, \"w\") as f:\n            json.dump(config, f, indent=2)\n    except (PermissionError, IOError) as e:\n        raise ConfigError(f\"Failed to write config file: {e}\")\n\n\ndef deep_merge_dicts(base: Dict[str, Any], update: Dict[str, Any]) -> Dict[str, Any]:\n    \"\"\"Deep merge two dictionaries, preserving existing values in update.\n\n    Args:\n        base: Base dictionary with default values\n        update: Dictionary with user values to preserve\n\n    Returns:\n        Merged dictionary with all keys from base and values from update where they exist\n    \"\"\"\n    result = base.copy()\n\n    for key, value in update.items():\n        if key in result and isinstance(result[key], dict) and isinstance(value, dict):\n            result[key] = deep_merge_dicts(result[key], value)\n        else:\n            result[key] = value\n\n    return result\n\n\ndef ensure_config_structure() -> Dict[str, Any]:\n    \"\"\"Ensure the config file has all expected keys with defaults for missing ones.\n\n    This function reads the existing config, merges it with the default structure,\n    and writes back the updated config if any keys were missing.\n\n    Returns:\n        The updated configuration dictionary\n\n    Raises:\n        ConfigError: If config file cannot be read or written\n    \"\"\"\n    try:\n        config = read_config_file()\n    except ConfigError:\n        raise\n\n    original_config = json.dumps(config, sort_keys=True)\n    merged_config = deep_merge_dicts(DEFAULT_USER_CONFIG, config)\n    updated_config = json.dumps(merged_config, sort_keys=True)\n\n    if original_config != updated_config:\n        try:\n            config_path = get_config_path()\n            with open(config_path, \"w\") as f:\n                json.dump(merged_config, f, indent=2)\n        except (PermissionError, IOError) as e:\n            raise ConfigError(f\"Failed to update config file with missing keys: {e}\")\n\n    return merged_config\n"
  },
  {
    "path": "src/sidekick/constants.py",
    "content": "APP_NAME = \"Sidekick\"\nAPP_VERSION = \"0.5.1\"\n\nMODELS = {\n    \"anthropic:claude-opus-4-0\": {\n        \"pricing\": {\n            \"input\": 3.00,\n            \"cached_input\": 1.50,\n            \"output\": 15.00,\n        },\n        \"context_window\": 200_000,\n    },\n    \"anthropic:claude-sonnet-4-0\": {\n        \"pricing\": {\n            \"input\": 3.00,\n            \"cached_input\": 1.50,\n            \"output\": 15.00,\n        },\n        \"context_window\": 200_000,\n    },\n    \"anthropic:claude-3-7-sonnet-latest\": {\n        \"pricing\": {\n            \"input\": 3.00,\n            \"cached_input\": 1.50,\n            \"output\": 15.00,\n        },\n        \"context_window\": 200_000,\n    },\n    \"google-gla:gemini-2.5-pro\": {\n        # Gemini pro has pricing tiers <= 200k / >200k\n        # For now, using the lower pricing as unlikely to exceed 200k tokens\n        # During a session\n        #\n        # TODO: Should make usage tracking dynamic to handle this\n        \"pricing\": {\n            \"input\": 1.25,\n            \"cached_input\": 1.25,\n            \"output\": 10.00,\n        },\n        \"context_window\": 2_000_000,\n    },\n    \"google-gla:gemini-2.5-flash\": {\n        \"pricing\": {\n            \"input\": 0.30,\n            \"cached_input\": 0.035,\n            \"output\": 2.50,\n        },\n        \"context_window\": 2_000_000,\n    },\n    \"openai:o4-mini\": {\n        \"pricing\": {\n            \"input\": 1.10,\n            \"cached_input\": 0.275,\n            \"output\": 4.40,\n        },\n        \"context_window\": 200_000,\n    },\n    \"openai:o3-pro\": {\n        \"pricing\": {\n            \"input\": 20.00,\n            \"cached_input\": 20.00,\n            \"output\": 80.00,\n        },\n        \"context_window\": 200_000,\n    },\n    \"openai:o3\": {\n        \"pricing\": {\n            \"input\": 10.00,\n            \"cached_input\": 2.50,\n            \"output\": 40.00,\n        },\n        \"context_window\": 200_000,\n    },\n    \"openai:o3-mini\": {\n        \"pricing\": {\n            \"input\": 1.10,\n            \"cached_input\": 0.55,\n            \"output\": 4.40,\n        },\n        \"context_window\": 200_000,\n    },\n    \"openai:gpt-4.1\": {\n        \"pricing\": {\n            \"input\": 2.00,\n            \"cached_input\": 0.50,\n            \"output\": 8.00,\n        },\n        \"context_window\": 1_047_576,\n    },\n    \"openai:gpt-4.1-mini\": {\n        \"pricing\": {\n            \"input\": 0.40,\n            \"cached_input\": 0.10,\n            \"output\": 1.60,\n        },\n        \"context_window\": 1_047_576,\n    },\n    \"openai:gpt-4.1-nano\": {\n        \"pricing\": {\n            \"input\": 0.10,\n            \"cached_input\": 0.025,\n            \"output\": 0.40,\n        },\n        \"context_window\": 1_047_576,\n    },\n}\n\n# Non-destructive tools that should always be allowed without confirmation\nALLOWED_TOOLS = [\n    \"read_file\",\n    \"find\",\n    \"list_directory\",\n]\n\nDEFAULT_USER_CONFIG = {\n    \"default_model\": \"\",\n    \"env\": {\n        \"ANTHROPIC_API_KEY\": \"your-anthropic-api-key\",\n        \"OPENAI_API_KEY\": \"your-openai-api-key\",\n        \"GEMINI_API_KEY\": \"your-gemini-api-key\",\n    },\n    \"mcpServers\": {},\n    \"settings\": {\n        \"allowed_tools\": [],\n        \"allowed_commands\": [\n            \"ls\",\n            \"cat\",\n            \"rg\",\n            \"find\",\n            \"pwd\",\n            \"echo\",\n            \"which\",\n            \"head\",\n            \"tail\",\n            \"wc\",\n            \"sort\",\n            \"uniq\",\n            \"diff\",\n            \"tree\",\n            \"file\",\n            \"stat\",\n            \"du\",\n            \"df\",\n            \"ps\",\n            \"top\",\n            \"env\",\n            \"date\",\n            \"whoami\",\n            \"hostname\",\n            \"uname\",\n            \"id\",\n            \"groups\",\n            \"history\",\n        ],\n    },\n}\n"
  },
  {
    "path": "src/sidekick/deps.py",
    "content": "from dataclasses import dataclass\nfrom typing import Any, Awaitable, Callable, Optional\n\n\n@dataclass\nclass ToolDeps:\n    \"\"\"Dependencies passed to tools via RunContext.\"\"\"\n\n    confirm_action: Optional[Callable[[str, str, Optional[str]], Awaitable[bool]]] = None\n    display_tool_status: Optional[Callable[[str, Any], Awaitable[None]]] = None\n"
  },
  {
    "path": "src/sidekick/main.py",
    "content": "import asyncio\nimport logging\nimport sys\n\nimport typer\nfrom rich.console import Console\n\nfrom sidekick import ui\nfrom sidekick.config import (\n    ConfigError,\n    ConfigValidationError,\n    config_exists,\n    ensure_config_structure,\n    set_env_vars,\n    validate_config_structure,\n)\nfrom sidekick.constants import APP_NAME, APP_VERSION\nfrom sidekick.repl import Repl\nfrom sidekick.session import session\nfrom sidekick.setup import run_setup\nfrom sidekick.utils.guide import load_guide\nfrom sidekick.utils.logger import setup_logging\n\napp = typer.Typer(help=f\"{APP_NAME} - Your agentic CLI developer\")\nconsole = Console()\nlog = logging.getLogger(__name__)\n\n\ndef _setup_and_run_event_loop(coro):\n    \"\"\"\n    Create, run, and properly clean up the asyncio event loop.\n\n    This manual setup is used instead of the simpler `asyncio.run()` to gain\n    direct access to the loop object. This is necessary because OS signal\n    handlers (like for SIGINT/Ctrl+C) execute outside of the asyncio loop's\n    context. To gracefully cancel a task from the handler, we must use\n    `loop.call_soon_threadsafe()` to safely schedule the cancellation\n    within the running loop.\n    \"\"\"\n    loop = asyncio.new_event_loop()\n    asyncio.set_event_loop(loop)\n    try:\n        loop.run_until_complete(coro)\n    finally:\n        loop.close()\n\n\ndef _initialize_config():\n    \"\"\"Checks for, loads, and validates the application configuration.\"\"\"\n    ui.banner()\n\n    if not config_exists():\n        console.print()\n        config = run_setup()\n    else:\n        try:\n            config = ensure_config_structure()\n            validate_config_structure(config)\n        except ConfigError as e:\n            ui.error(\"Configuration error\", str(e))\n            sys.exit(1)\n        except ConfigValidationError as e:\n            ui.error(\"Invalid configuration\", str(e))\n            sys.exit(1)\n        except Exception as e:\n            ui.error(\"Failed to load configuration\", str(e))\n            sys.exit(1)\n\n    set_env_vars(config.get(\"env\", {}))\n    return config\n\n\n@app.command()\ndef main(\n    version: bool = typer.Option(False, \"--version\", \"-v\", help=\"Show version and exit.\"),\n    debug: bool = typer.Option(False, \"--debug\", help=\"Enable debug logging to file.\"),\n):\n    \"\"\"Sidekick CLI main entry point.\"\"\"\n    if version:\n        console.print(f\"{APP_NAME} version {APP_VERSION}\")\n        return\n\n    if debug:\n        session.debug_enabled = True\n\n    setup_logging(debug_enabled=debug)\n\n    config = _initialize_config()\n    session.init(config, config[\"default_model\"])\n\n    project_guide = load_guide()\n    if project_guide:\n        ui.info(\"Loaded SIDEKICK.md guide\")\n\n    log.debug(f\"Session initialized with model: {session.current_model}\")\n\n    repl = Repl(project_guide=project_guide)\n    _setup_and_run_event_loop(repl.run())\n\n\nif __name__ == \"__main__\":\n    app()\n"
  },
  {
    "path": "src/sidekick/mcp/__init__.py",
    "content": "\"\"\"MCP (Model Context Protocol) module for managing servers and agents.\"\"\"\n\nfrom .agent import MCPAgent\nfrom .servers import SilentMCPServerStdio, load_mcp_servers\n\n__all__ = [\"MCPAgent\", \"load_mcp_servers\", \"SilentMCPServerStdio\"]\n"
  },
  {
    "path": "src/sidekick/mcp/agent.py",
    "content": "\"\"\"Agent wrapper that manages MCP server lifecycle.\n\nThis module provides a wrapper around pydantic_ai.Agent that ensures MCP (Model Context Protocol)\nservers are properly started and stopped when using the agent.\n\"\"\"\n\nfrom pydantic_ai import Agent\n\n\nclass MCPAgent:\n    \"\"\"Manages MCP server lifecycle for an agent.\n\n    This is a context manager wrapper that ensures MCP servers are running when needed\n    and properly cleaned up afterwards. It wraps a pydantic_ai.Agent instance and manages\n    the lifecycle of its MCP servers without modifying the agent's behavior.\n\n    The wrapper is reusable - it can be entered and exited multiple times, starting and\n    stopping MCP servers as needed. This is useful for long-running applications where\n    the agent is created once but used for multiple requests (ie. REPL).\n\n    Key design points:\n    - Does NOT inherit from Agent - uses composition instead of inheritance\n    - Provides transparent access to the wrapped agent via the .agent property\n    - Tracks state to prevent double-starting or stopping of MCP servers\n    - Handles async context manager protocol for proper resource management\n    \"\"\"\n\n    def __init__(self, agent: Agent):\n        \"\"\"Initialize the MCP agent wrapper.\n\n        Args:\n            agent: A pydantic_ai.Agent instance that may have MCP servers configured.\n                   The agent should already have its model, tools, and MCP servers set up.\n        \"\"\"\n        self._agent = agent\n        self._mcp_context = None  # Stores the context manager from agent.run_mcp_servers()\n        self._mcp_entered = False  # Tracks whether we've entered the MCP context\n\n    @property\n    def agent(self) -> Agent:\n        \"\"\"Access the wrapped pydantic_ai.Agent instance.\n\n        This property allows direct access to the underlying agent for running conversations,\n        accessing tools, or any other agent operations. The wrapper does not intercept or\n        modify any agent methods - it only manages the MCP server lifecycle.\n\n        Returns:\n            The wrapped pydantic_ai.Agent instance\n        \"\"\"\n        return self._agent\n\n    async def __aenter__(self):\n        \"\"\"Enter the async context and start MCP servers.\n\n        This method starts all MCP servers configured on the agent by calling\n        agent.run_mcp_servers(). The method is idempotent - if called multiple times\n        without exiting, it will only start the servers once.\n\n        The actual server startup is delegated to pydantic_ai's implementation which:\n        1. Iterates through all MCPServerStdio instances in agent._mcp_servers\n        2. Starts each server process and establishes stdio communication\n        3. Returns a context manager that handles shutdown\n\n        Returns:\n            self: Returns this MCPAgent instance for use in async with statements\n        \"\"\"\n        if not self._mcp_entered:\n            # Get the context manager from pydantic_ai that manages MCP servers\n            self._mcp_context = self._agent.run_mcp_servers()\n            # Enter the context to actually start the servers\n            await self._mcp_context.__aenter__()\n            self._mcp_entered = True\n        return self\n\n    async def __aexit__(self, exc_type, exc_val, exc_tb):\n        \"\"\"Exit the async context and stop MCP servers.\n\n        This method ensures all MCP servers are properly shut down by delegating to\n        the pydantic_ai context manager. It resets the internal state so the wrapper\n        can be reused.\n\n        The cleanup process includes:\n        1. Sending shutdown signals to all MCP server processes\n        2. Waiting for processes to terminate gracefully\n        3. Cleaning up any stdio connections\n\n        Args:\n            exc_type: Exception type if an exception occurred\n            exc_val: Exception value if an exception occurred\n            exc_tb: Exception traceback if an exception occurred\n        \"\"\"\n        if self._mcp_context and self._mcp_entered:\n            # Delegate cleanup to pydantic_ai's context manager\n            await self._mcp_context.__aexit__(exc_type, exc_val, exc_tb)\n            self._mcp_entered = False\n            self._mcp_context = None\n"
  },
  {
    "path": "src/sidekick/mcp/servers.py",
    "content": "\"\"\"MCP server utilities and configurations.\"\"\"\n\nimport asyncio\nimport logging\nimport os\nfrom contextlib import asynccontextmanager\nfrom typing import Any, Dict, List\n\nfrom mcp.client.stdio import StdioServerParameters, stdio_client\nfrom pydantic_ai.mcp import MCPServerStdio\nfrom pydantic_ai.tools import RunContext\n\nfrom sidekick import ui\nfrom sidekick.config import (\n    ConfigError,\n    parse_mcp_servers,\n    read_config_file,\n    validate_config_structure,\n)\nfrom sidekick.ui import format_server_name\n\nlogger = logging.getLogger(__name__)\n\n\nasync def mcp_tool_confirmation_callback(\n    ctx: RunContext[Any],\n    original_call_tool,\n    tool_name: str,\n    arguments: Dict[str, Any],\n) -> Any:\n    \"\"\"Process tool callback that shows confirmation for ALL MCP tool calls.\n\n    This callback is invoked for every MCP tool call and ensures that\n    confirmations are shown regardless of yolo mode or other settings.\n    \"\"\"\n    # Check if we have the confirmation callback available\n    if hasattr(ctx.deps, \"confirm_action\") and ctx.deps.confirm_action:\n        ui.stop_spinner()\n\n        # Format the arguments for display\n        from rich.pretty import Pretty\n\n        args_display = Pretty(arguments, expand_all=True)\n\n        # Always show confirmation for MCP tools\n        confirmed = await ctx.deps.confirm_action(f\"MCP({tool_name})\", args_display, None)\n\n        if not confirmed:\n            raise asyncio.CancelledError(\"MCP tool execution cancelled by user\")\n\n        ui.start_spinner()\n\n    # Call the original tool\n    return await original_call_tool(tool_name, arguments)\n\n\nclass SilentMCPServerStdio(MCPServerStdio):\n    \"\"\"MCPServerStdio that suppresses stderr output.\n\n    Extends pydantic_ai's MCPServerStdio to redirect stderr to /dev/null,\n    preventing MCP server error messages from cluttering the CLI output.\n    \"\"\"\n\n    def __init__(self, *args, display_name: str = None, **kwargs):\n        super().__init__(*args, **kwargs)\n        # Add display_name for better server identification in logs/UI\n        self.display_name = display_name or self.command\n\n    @asynccontextmanager\n    async def client_streams(self):\n        \"\"\"Override parent's client_streams to suppress stderr.\n\n        The parent implementation logs errors to stderr by default.\n        This override redirects stderr to /dev/null to keep the CLI clean.\n        \"\"\"\n        server = StdioServerParameters(\n            command=self.command, args=list(self.args), env=self.env, cwd=self.cwd\n        )\n        with open(os.devnull, \"w\") as null_stream:\n            async with stdio_client(server=server, errlog=null_stream) as (\n                read_stream,\n                write_stream,\n            ):\n                yield read_stream, write_stream\n\n\ndef create_mcp_server(key: str, config: Dict[str, Any]) -> SilentMCPServerStdio:\n    \"\"\"Create a single MCP server instance.\n\n    Args:\n        key: Server identifier\n        config: Server configuration dictionary\n\n    Returns:\n        SilentMCPServerStdio: Configured server instance\n    \"\"\"\n    # Use 'name' field if present, otherwise format the key\n    display_name = config.get(\"name\", format_server_name(key))\n\n    return SilentMCPServerStdio(\n        command=config[\"command\"],\n        args=config[\"args\"],\n        env=config.get(\"env\", {}),\n        display_name=display_name,\n        process_tool_call=mcp_tool_confirmation_callback,\n    )\n\n\ndef load_mcp_servers() -> List[SilentMCPServerStdio]:\n    \"\"\"Load MCP servers from configuration.\n\n    Returns:\n        List of configured MCP server instances\n\n    Note:\n        - Returns empty list if no servers configured\n        - Shows warnings for invalid server configs but continues with valid ones\n    \"\"\"\n    try:\n        config = read_config_file()\n        validate_config_structure(config)\n        mcp_servers_config = parse_mcp_servers(config)\n    except ConfigError as e:\n        logger.error(f\"Failed to load config: {e}\", exc_info=True)\n        ui.error(f\"Failed to load MCP configuration: {e}\")\n        return []\n    except Exception as e:\n        logger.error(f\"Unexpected error loading config: {e}\", exc_info=True)\n        ui.error(f\"Unexpected error loading MCP configuration: {e}\")\n        return []\n\n    servers = []\n    failed_servers = []\n\n    for key, server_config in mcp_servers_config.items():\n        try:\n            server = create_mcp_server(key, server_config)\n            servers.append(server)\n        except Exception as e:\n            logger.warning(f\"Failed to create server '{key}': {e}\", exc_info=True)\n            display_name = server_config.get(\"name\", format_server_name(key))\n            failed_servers.append((display_name, str(e)))\n\n    # Show errors for failed servers\n    if failed_servers:\n        ui.warning(\"Some MCP servers failed to load:\")\n        for server_name, error in failed_servers:\n            ui.bullet(f\"{server_name}: {error}\")\n\n    # Show summary if all servers failed\n    if mcp_servers_config and not servers:\n        ui.error(\"No MCP servers could be loaded successfully\")\n    elif servers and failed_servers:\n        ui.info(f\"Loaded {len(servers)} of {len(mcp_servers_config)} MCP servers\")\n\n    return servers\n"
  },
  {
    "path": "src/sidekick/messages.py",
    "content": "\"\"\"Message history management for Sidekick sessions.\"\"\"\n\nimport logging\nfrom dataclasses import dataclass, field\nfrom typing import List, Optional\n\nfrom pydantic_ai import messages\n\nlog = logging.getLogger(__name__)\n\n\n@dataclass\nclass MessageHistory:\n    \"\"\"Manages conversation message history with support for future pruning.\"\"\"\n\n    _messages: List[messages.ModelMessage] = field(default_factory=list)\n    _project_guide: Optional[str] = None\n\n    def add_request(self, request: messages.ModelRequest) -> None:\n        \"\"\"Add a request message to the history.\"\"\"\n        self._messages.append(request)\n        log.debug(\"Added request to message history\")\n\n    def add_response(self, response: messages.ModelResponse) -> None:\n        \"\"\"Add a model response to the history.\n\n        This is where future pruning logic could be implemented to remove\n        thinking parts and verbose tool outputs after the response is complete.\n        \"\"\"\n        self._messages.append(response)\n        log.debug(\"Added model response to message history\")\n\n    def add_cancellation_note(self) -> None:\n        \"\"\"Add a user prompt indicating the request was cancelled.\n\n        This provides clear context to the LLM that the user interrupted the request.\n        \"\"\"\n        cancellation_request = messages.ModelRequest(\n            parts=[messages.UserPromptPart(content=\"Previous request cancelled by user\")]\n        )\n        self.add_request(cancellation_request)\n        log.debug(\"Added cancellation note to message history\")\n\n    def patch_on_error(self, error_message: str) -> None:\n        \"\"\"Patch the message history with a ToolReturnPart on error.\n\n        This is critical for maintaining valid message history when a tool call fails,\n        the user interrupts execution, or other errors occur. LLM models expect to see\n        both a tool call and its corresponding response in the history. Without this\n        patch, the next request would fail because the model would see an unanswered\n        tool call in the history.\n\n        This method finds the last tool call in the most recent response and creates\n        a synthetic tool return with the error message, ensuring the conversation\n        history remains valid for future interactions.\n        \"\"\"\n        if not self._messages:\n            return\n\n        last_message = self._messages[-1]\n\n        if not (\n            hasattr(last_message, \"kind\")\n            and last_message.kind == \"response\"\n            and hasattr(last_message, \"parts\")\n        ):\n            return\n\n        last_tool_call = None\n        for part in reversed(last_message.parts):\n            if hasattr(part, \"part_kind\") and part.part_kind == \"tool-call\":\n                last_tool_call = part\n                break\n\n        if last_tool_call:\n            tool_return = messages.ToolReturnPart(\n                tool_name=last_tool_call.tool_name,\n                tool_call_id=last_tool_call.tool_call_id,\n                content=error_message,\n            )\n            self.add_request(messages.ModelRequest(parts=[tool_return]))\n\n    def clear(self) -> None:\n        \"\"\"Clear all messages from the history.\"\"\"\n        self._messages.clear()\n        log.debug(\"Cleared message history\")\n\n    def get_messages(self) -> List[messages.ModelMessage]:\n        \"\"\"Get a copy of all messages for agent use.\"\"\"\n        return self._messages.copy()\n\n    def get_messages_for_agent(self) -> List[messages.ModelMessage]:\n        \"\"\"Get messages for agent use, with project guide prepended if available.\"\"\"\n        messages_copy = self._messages.copy()\n\n        if self._project_guide:\n            guide_message = messages.ModelRequest(\n                parts=[messages.UserPromptPart(content=self._project_guide)]\n            )\n            messages_copy.insert(0, guide_message)\n            log.debug(\"Prepended project guide to message history\")\n\n        return messages_copy\n\n    def set_project_guide(self, guide: Optional[str]) -> None:\n        \"\"\"Set the project guide content.\"\"\"\n        self._project_guide = guide\n\n    def __len__(self) -> int:\n        \"\"\"Return the number of messages in history.\"\"\"\n        return len(self._messages)\n\n    def __iter__(self):\n        \"\"\"Allow iteration over messages.\"\"\"\n        return iter(self._messages)\n\n    def __getitem__(self, index):\n        \"\"\"Allow indexed access to messages.\"\"\"\n        return self._messages[index]\n"
  },
  {
    "path": "src/sidekick/prompts/system.txt",
    "content": "You are **Sidekick**, a CLI assistant running in the user's terminal.\n\n### Understanding User Intent\n- **Action requests** (most common): \"create a function\", \"fix this bug\", \"add tests\" → Take immediate action\n- **Information requests**: \"what does this code do?\", \"explain the architecture\" → Analyze and respond\n- **Hybrid requests**: \"find all TODOs and fix them\" → Research then act\n\nWhen unclear, bias toward action—users chose a CLI tool because they want things done.\n\n### Your Environment\n- **Finding files**: Use list_directory and find tools instead of shell commands. They respect .gitignore and are more efficient.\n- **Working directory**: Start where the user runs the command unless specified otherwise\n- **Project context**: Check for README, package.json, requirements.txt, SIDEKICK.md to understand the project\n- **Available tools**: You have built-in tools AND may have additional MCP tools (check tool list at runtime)\n\n### Built-in Tools (in order of preference)\n\n#### Discovery & Navigation\n- **list_directory**: List directory contents with tree structure, respects .gitignore\n- **find**: Search for files/directories by name or content (uses ripgrep/fd when available)\n  - Use `pattern` for filename wildcards (e.g., \"*.py\")\n  - Use `content` to search text within files\n  - Use `dirs=True` to search for directories instead of files\n\n#### File Operations\n- **read_file**: Read any file. Use this before modifying code to understand context\n- **write_file**: Create new files. Only use when file doesn't exist\n- **update_file**: Modify existing files. Requires exact string matching—read first!\n\n#### Git Operations  \n- **git_add**: Stage files (provides visual preview)\n- **git_commit**: Create commits (shows changes and message)\n\n#### Fallback\n- **run_command**: For EVERYTHING else (npm install, pytest, ls, mkdir, etc.)\n\n### Important Patterns\n\n1. **Read before writing**: Always read files before modifying them\n2. **Chain operations**: Use multiple tools to complete tasks fully\n3. **Parallel execution**: Can call multiple read operations simultaneously\n4. **Use specialized tools**: Only use run_command when no specific tool exists\n5. **Complete the cycle**: If you install packages, run them. If you create tests, execute them.\n\n### Response Style\n- **Minimal output**: Terminal shows your actions—don't narrate them\n- **Success**: \"Created auth.py with login function\" ✓\n- **Information**: \"Found 3 TODO comments in: main.py (line 45), utils.py (lines 23, 67)\"\n- **Errors**: \"Failed to import pandas. Run: pip install pandas\"\n\n### Example Workflows\n\n**Finding and modifying code:**\n1. list_directory (understand project structure)\n2. find (search for files by name or content)\n3. read_file (examine the code in detail)\n4. update_file (make changes)\n5. run_command (test the changes)\n\n**Creating a feature:**\n1. find (find related files)\n2. read_file (understand existing code)\n3. write_file or update_file (implement feature)  \n4. run_command (test the feature)\n\n**Git workflow:**\n1. Make changes using file tools\n2. git_add (stage changes)\n3. git_commit (commit with descriptive message)\n\n**Debugging:**\n1. find (with content parameter to find error-related code)\n2. read_file (examine problematic code)\n3. run_command (run tests to see error)\n4. update_file (fix the issue)\n5. run_command (verify fix)\n\n### Remember\n- External confirmation protects users—don't hesitate to use tools\n- You might have additional MCP tools available—check your tool list\n- Users want results, not explanations of what you'll do\n- When in doubt, take action and report outcomes\n"
  },
  {
    "path": "src/sidekick/repl.py",
    "content": "import asyncio\nimport logging\nimport os\nimport signal\nimport subprocess\nimport sys\n\nfrom sidekick import ui\nfrom sidekick.agent import process_request\nfrom sidekick.commands import handle_command\nfrom sidekick.mcp import load_mcp_servers\nfrom sidekick.messages import MessageHistory\nfrom sidekick.session import session\nfrom sidekick.usage import usage_tracker\nfrom sidekick.utils.error import ErrorContext\nfrom sidekick.utils.input import create_multiline_prompt_session, get_multiline_input\n\nlog = logging.getLogger(__name__)\n\n\ndef _restore_default_signal_handler():\n    \"\"\"Restore the default SIGINT handler.\"\"\"\n    signal.signal(signal.SIGINT, signal.default_int_handler)\n\n\ndef _should_exit(user_input: str) -> bool:\n    \"\"\"Check if user wants to exit.\"\"\"\n    return user_input.lower() in [\"exit\", \"quit\"]\n\n\nasync def _display_server_info():\n    \"\"\"Display information about configured MCP servers.\"\"\"\n    servers = load_mcp_servers()\n    ui.info(\"Starting MCP servers\")\n    if servers:\n        for server in servers:\n            ui.bullet(server.display_name)\n    else:\n        ui.bullet(\"No servers configured\")\n\n\nclass Repl:\n    \"\"\"Manages the application's Read-Eval-Print Loop and interrupt handling.\"\"\"\n\n    def __init__(self, project_guide=None):\n        \"\"\"Initializes the REPL manager with signal handler.\"\"\"\n        self.loop = asyncio.get_event_loop()\n        self.current_task = None\n        self.signal_handler = self._setup_signal_handler()\n        self.message_history = MessageHistory()\n        if project_guide:\n            self.message_history.set_project_guide(project_guide)\n\n    def _kill_child_processes(self):\n        \"\"\"Kill all child processes of the current process.\"\"\"\n        if sys.platform == \"win32\":\n            return\n\n        pid = os.getpid()\n        try:\n            import psutil\n\n            parent = psutil.Process(pid)\n            for child in parent.children(recursive=True):\n                try:\n                    child.kill()\n                except Exception:\n                    pass\n        except ImportError:\n            try:\n                subprocess.run([\"pkill\", \"-P\", str(pid)], capture_output=True)\n            except Exception:\n                pass\n\n    def _setup_signal_handler(self):\n        \"\"\"Set up SIGINT handler for immediate cancellation.\"\"\"\n\n        def signal_handler(signum, frame):\n            if self.current_task and not self.current_task.done():\n                ui.stop_spinner()\n                self._kill_child_processes()\n                self.loop.call_soon_threadsafe(self.current_task.cancel)\n            else:\n                raise KeyboardInterrupt()\n\n        signal.signal(signal.SIGINT, signal_handler)\n        return signal_handler\n\n    async def _handle_user_request(self, user_input: str):\n        \"\"\"Process a user request with proper exception handling.\"\"\"\n        log.debug(f\"Handling user request: {user_input.replace('\\n', ' ')[:100]}...\")\n        ui.start_spinner()\n\n        request_task = asyncio.create_task(process_request(user_input, self.message_history))\n        self.current_task = request_task\n\n        ctx = ErrorContext(\"request\", ui)\n\n        try:\n            resp = await request_task\n            ui.stop_spinner()\n            if resp:\n                has_footer = bool(usage_tracker.last_request)\n                ui.agent(resp, has_footer=has_footer)\n                if usage_tracker.last_request:\n                    ui.usage(usage_tracker.last_request)\n        except asyncio.CancelledError:\n            ui.stop_spinner()\n            ui.warning(\"Request interrupted\")\n            self.message_history.add_cancellation_note()\n        except Exception as e:\n            await ctx.handle(e)\n        finally:\n            self.current_task = None\n\n    async def run(self):\n        \"\"\"Runs the main read-eval-print loop.\"\"\"\n        ui.info(f\"Using model {session.current_model}\")\n        await _display_server_info()\n\n        ui.success(\"Go kick some ass!\")\n        prompt_session = create_multiline_prompt_session()\n\n        while True:\n            ui.line()\n\n            try:\n                user_input = await get_multiline_input(prompt_session)\n            except EOFError:\n                break\n            except KeyboardInterrupt:\n                ui.muted(\"Use Ctrl+D or 'exit' to quit\")\n                continue\n\n            ui.reset_context()\n\n            if not user_input:\n                continue\n\n            if _should_exit(user_input):\n                break\n\n            if await handle_command(user_input, self.message_history):\n                continue\n\n            await self._handle_user_request(user_input)\n\n        _restore_default_signal_handler()\n\n        ui.line()\n        ui.info(\"Thanks for all the fish.\")\n"
  },
  {
    "path": "src/sidekick/session.py",
    "content": "from dataclasses import dataclass, field\nfrom typing import Any, Dict, Optional, Set\n\n\n@dataclass\nclass Session:\n    current_model: Optional[str] = None\n    allowed_commands: Set[str] = field(default_factory=set)\n    disabled_confirmations: Set[str] = field(default_factory=set)\n    confirmation_enabled: bool = True\n    debug_enabled: bool = False\n\n    def init(self, config: Dict[str, Any], model: str):\n        \"\"\"Initialize the session state.\"\"\"\n        self.current_model = model\n\n        if \"settings\" in config:\n            if \"allowed_commands\" in config[\"settings\"]:\n                self.allowed_commands.update(config[\"settings\"][\"allowed_commands\"])\n\n\n# Create global session instance\nsession = Session()\n"
  },
  {
    "path": "src/sidekick/setup.py",
    "content": "import json\nfrom pathlib import Path\nfrom typing import Dict, Optional\n\nfrom rich.console import Console\nfrom rich.panel import Panel\nfrom rich.prompt import Confirm, Prompt\n\nfrom .config import deep_merge_dicts, ensure_config_structure\nfrom .constants import DEFAULT_USER_CONFIG\n\nconsole = Console()\n\n\ndef validate_json_file(config_path: Path) -> Optional[Dict]:\n    \"\"\"Validate if a JSON file is valid and return its content.\"\"\"\n    try:\n        with open(config_path, \"r\") as f:\n            return json.load(f)\n    except (json.JSONDecodeError, FileNotFoundError):\n        return None\n\n\ndef collect_api_keys() -> Dict[str, str]:\n    \"\"\"Collect API keys from user input.\"\"\"\n    console.print(\"\\n[bold]API Keys Configuration[/bold]\\n\")\n    console.print(\"Enter your API keys (press Enter to skip):\\n\")\n\n    api_keys = {}\n    providers = [\n        (\"ANTHROPIC_API_KEY\", \"Anthropic (Claude)\"),\n        (\"OPENAI_API_KEY\", \"OpenAI (GPT)\"),\n        (\"GEMINI_API_KEY\", \"Google (Gemini)\"),\n    ]\n\n    for key, name in providers:\n        value = Prompt.ask(f\"{name} API Key\", password=True, default=\"\")\n        if value:\n            api_keys[key] = value\n\n    return api_keys\n\n\ndef select_default_model(api_keys: Dict[str, str]) -> str:\n    \"\"\"Select default model based on available API keys.\"\"\"\n    available_models = []\n\n    if \"ANTHROPIC_API_KEY\" in api_keys:\n        available_models.extend(\n            [\n                \"claude-3-5-sonnet-20241022\",\n                \"claude-3-5-haiku-20241022\",\n            ]\n        )\n\n    if \"OPENAI_API_KEY\" in api_keys:\n        available_models.extend(\n            [\n                \"gpt-4o\",\n                \"gpt-4o-mini\",\n            ]\n        )\n\n    if \"GEMINI_API_KEY\" in api_keys:\n        available_models.extend(\n            [\n                \"gemini-2.0-flash-exp\",\n                \"gemini-1.5-pro-latest\",\n            ]\n        )\n\n    if not available_models:\n        console.print(\"[yellow]No API keys provided. Using default model.[/yellow]\")\n        return DEFAULT_USER_CONFIG[\"default_model\"]\n\n    console.print(\"\\n[bold]Default Model Selection[/bold]\\n\")\n    for i, model in enumerate(available_models, 1):\n        console.print(f\"{i}. {model}\")\n\n    while True:\n        choice = Prompt.ask(\n            \"\\nSelect default model\", choices=[str(i) for i in range(1, len(available_models) + 1)]\n        )\n        return available_models[int(choice) - 1]\n\n\ndef create_config(config_path: Path) -> Dict:\n    \"\"\"Create a new configuration file.\"\"\"\n    console.print(\n        Panel.fit(\n            \"[bold cyan]Sidekick CLI Setup[/bold cyan]\\n\\n\"\n            \"Welcome! Let's set up your configuration.\",\n            border_style=\"cyan\",\n        )\n    )\n\n    api_keys = collect_api_keys()\n\n    if not api_keys:\n        console.print(\"\\n[red]No API keys provided. At least one API key is required.[/red]\")\n        if not Confirm.ask(\"Continue anyway?\", default=False):\n            raise KeyboardInterrupt(\"Setup cancelled\")\n\n    default_model = select_default_model(api_keys)\n\n    # Start with user's choices\n    user_config = {\"default_model\": default_model, \"env\": api_keys if api_keys else {}}\n\n    # Merge with defaults to get all fields\n    config = deep_merge_dicts(DEFAULT_USER_CONFIG, user_config)\n\n    # Remove placeholder API keys if user didn't provide any\n    if not api_keys:\n        config[\"env\"] = {}\n\n    config_path.parent.mkdir(parents=True, exist_ok=True)\n\n    with open(config_path, \"w\") as f:\n        json.dump(config, f, indent=2)\n\n    console.print(f\"\\n[green]✓ Configuration saved to {config_path}[/green]\")\n\n    return config\n\n\ndef handle_invalid_config(config_path: Path) -> Dict:\n    \"\"\"Handle invalid configuration file.\"\"\"\n    console.print(\n        Panel.fit(\n            \"[bold red]Invalid Configuration File[/bold red]\\n\\n\"\n            f\"The configuration file at {config_path} is invalid or corrupted.\",\n            border_style=\"red\",\n        )\n    )\n\n    console.print(\"\\nOptions:\")\n    console.print(\"1. Reset configuration (create new)\")\n    console.print(\"2. Exit and fix manually\")\n\n    choice = Prompt.ask(\"\\nWhat would you like to do?\", choices=[\"1\", \"2\"])\n\n    if choice == \"1\":\n        config_path.unlink()\n        return create_config(config_path)\n    else:\n        raise SystemExit(\"Please fix the configuration file manually and try again.\")\n\n\ndef run_setup() -> Dict:\n    \"\"\"Run the setup flow and return the configuration.\"\"\"\n    config_path = Path.home() / \".config\" / \"sidekick.json\"\n\n    if config_path.exists():\n        config = validate_json_file(config_path)\n        if config is None:\n            return handle_invalid_config(config_path)\n\n        required_fields = [\"default_model\", \"env\"]\n        if all(field in config for field in required_fields):\n            # Ensure all default fields are present\n            return ensure_config_structure()\n        else:\n            console.print(\"[yellow]Configuration file is missing required fields.[/yellow]\")\n            return handle_invalid_config(config_path)\n\n    return create_config(config_path)\n"
  },
  {
    "path": "src/sidekick/tools/__init__.py",
    "content": "from .wrapper import create_tools\n\nTOOLS = create_tools()\n"
  },
  {
    "path": "src/sidekick/tools/common.py",
    "content": "EXCLUDE_DIRS = {\n    \".git\",\n    \".svn\",\n    \".hg\",\n    \".bzr\",\n    \"node_modules\",\n    \"bower_components\",\n    \"vendor\",\n    \"packages\",\n    \"__pycache__\",\n    \"*.pyc\",\n    \".pytest_cache\",\n    \".mypy_cache\",\n    \".ruff_cache\",\n    \"venv\",\n    \".venv\",\n    \"env\",\n    \".env\",\n    \"virtualenv\",\n    \"*.egg-info\",\n    \".eggs\",\n    \".tox\",\n    \"pip-wheel-metadata\",\n    \"build\",\n    \"dist\",\n    \"out\",\n    \"target\",\n    \"bin\",\n    \"obj\",\n    \"_build\",\n    \"_site\",\n    \".build\",\n    \".idea\",\n    \".vscode\",\n    \".vs\",\n    \".sublime-project\",\n    \".sublime-workspace\",\n    \"*.swp\",\n    \"*.swo\",\n    \"*~\",\n    \".DS_Store\",\n    \"Thumbs.db\",\n    \"coverage\",\n    \".coverage\",\n    \"htmlcov\",\n    \".nyc_output\",\n    \".cache\",\n    \".parcel-cache\",\n    \".next\",\n    \".nuxt\",\n    \".vuepress\",\n    \".docusaurus\",\n    \".serverless\",\n    \".fusebox\",\n    \".dynamodb\",\n    \"logs\",\n    \"*.log\",\n    \"npm-debug.log*\",\n    \"yarn-debug.log*\",\n    \"yarn-error.log*\",\n    \".npm\",\n    \".yarn\",\n    \".pnp.*\",\n    \"debug\",\n    \"tmp\",\n    \"temp\",\n    \".tmp\",\n    \".temp\",\n    \".sass-cache\",\n    \".gradle\",\n    \".m2\",\n    \".terraform\",\n    \"*.tfstate\",\n    \"*.tfstate.*\",\n    \".vagrant\",\n    \".kitchen\",\n    \".bundle\",\n    \"__MACOSX\",\n    \".pytest_cache\",\n    \".hypothesis\",\n}\n\nBINARY_EXTENSIONS = {\n    \".pyc\",\n    \".pyo\",\n    \".so\",\n    \".dylib\",\n    \".dll\",\n    \".exe\",\n    \".bin\",\n    \".dat\",\n    \".db\",\n    \".sqlite\",\n    \".sqlite3\",\n    \".jpg\",\n    \".jpeg\",\n    \".png\",\n    \".gif\",\n    \".bmp\",\n    \".ico\",\n    \".svg\",\n    \".webp\",\n    \".mp3\",\n    \".mp4\",\n    \".avi\",\n    \".mov\",\n    \".wmv\",\n    \".flv\",\n    \".wav\",\n    \".flac\",\n    \".zip\",\n    \".tar\",\n    \".gz\",\n    \".bz2\",\n    \".7z\",\n    \".rar\",\n    \".pdf\",\n    \".doc\",\n    \".docx\",\n    \".xls\",\n    \".xlsx\",\n    \".ppt\",\n    \".pptx\",\n    \".woff\",\n    \".woff2\",\n    \".ttf\",\n    \".otf\",\n    \".eot\",\n}\n"
  },
  {
    "path": "src/sidekick/tools/find.py",
    "content": "import asyncio\nimport fnmatch\nimport os\nimport re\nimport shutil\nfrom pathlib import Path\nfrom typing import List, Optional, Set\n\nfrom pydantic_ai import RunContext\n\nfrom sidekick.deps import ToolDeps\nfrom sidekick.tools.common import BINARY_EXTENSIONS, EXCLUDE_DIRS\n\n\nasync def _run_external_tool(tool_name: str, cmd: List[str]) -> Optional[str]:\n    \"\"\"Common helper for running external tools with subprocess.\"\"\"\n    if not shutil.which(tool_name):\n        return None\n\n    try:\n        process = await asyncio.create_subprocess_exec(\n            *cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE\n        )\n        stdout, stderr = await process.communicate()\n\n        if process.returncode == 0:\n            output = stdout.decode().strip()\n            return output if output else \"No results found.\"\n        elif process.returncode == 1:  # Common \"no matches found\" exit code\n            return \"No results found.\"\n        return None\n    except Exception:\n        return None\n\n\ndef _get_gitignore_patterns() -> Set[str]:\n    gitignore_path = Path(\".gitignore\")\n    if not gitignore_path.exists():\n        return set()\n\n    patterns = set()\n    try:\n        with open(gitignore_path, \"r\", encoding=\"utf-8\") as f:\n            for line in f:\n                line = line.strip()\n                if line and not line.startswith(\"#\"):\n                    patterns.add(line)\n    except Exception:\n        pass\n\n    return patterns\n\n\nasync def _find_files_with_fd(pattern: str, dirs: bool, max_depth: Optional[int]) -> Optional[str]:\n    cmd = [\"fd\"]\n    if dirs:\n        cmd.extend([\"--type\", \"d\"])\n    else:\n        cmd.extend([\"--type\", \"f\"])\n\n    if max_depth:\n        cmd.extend([\"--max-depth\", str(max_depth)])\n\n    cmd.append(pattern)\n\n    return await _run_external_tool(\"fd\", cmd)\n\n\nasync def _find_files_with_rg(pattern: str, max_depth: Optional[int]) -> Optional[str]:\n    cmd = [\"rg\", \"--files\"]\n    if max_depth:\n        cmd.extend([\"--max-depth\", str(max_depth)])\n\n    result = await _run_external_tool(\"rg\", cmd)\n    if result and result != \"No results found.\":\n        # rg --files lists all files, so we need to filter by pattern\n        files = result.strip().split(\"\\n\")\n        matching = [f for f in files if fnmatch.fnmatch(f, pattern)]\n        return \"\\n\".join(matching) if matching else \"No results found.\"\n    return result\n\n\nasync def _find_content_with_rg(\n    content: str,\n    include_pattern: Optional[str] = None,\n    case_sensitive: bool = True,\n    max_results: Optional[int] = None,\n) -> Optional[str]:\n    cmd = [\"rg\", \"--line-number\"]\n\n    if not case_sensitive:\n        cmd.append(\"-i\")\n\n    if include_pattern:\n        cmd.extend([\"--glob\", include_pattern])\n\n    if max_results:\n        cmd.extend([\"--max-count\", str(max_results)])\n\n    cmd.append(content)\n\n    return await _run_external_tool(\"rg\", cmd)\n\n\nasync def _find_content_with_ag(\n    content: str,\n    include_pattern: Optional[str] = None,\n    case_sensitive: bool = True,\n    max_results: Optional[int] = None,\n) -> Optional[str]:\n    cmd = [\"ag\", \"--line-numbers\"]\n\n    if not case_sensitive:\n        cmd.append(\"-i\")\n\n    if include_pattern:\n        cmd.extend([\"-G\", include_pattern])\n\n    if max_results:\n        cmd.extend([\"--max-count\", str(max_results)])\n\n    cmd.append(content)\n\n    return await _run_external_tool(\"ag\", cmd)\n\n\ndef _find_files_python(pattern: str, dirs: bool, max_depth: Optional[int]) -> str:\n    exclude_patterns = EXCLUDE_DIRS.copy()\n    exclude_patterns.update(_get_gitignore_patterns())\n\n    results = []\n    for root, directories, files in os.walk(\".\", followlinks=False):\n        current_depth = root.count(os.sep)\n        if max_depth and current_depth >= max_depth:\n            directories[:] = []\n            continue\n\n        skip_root = False\n        for exclude in exclude_patterns:\n            if exclude in root:\n                skip_root = True\n                break\n        if skip_root:\n            continue\n\n        directories[:] = [d for d in directories if d not in exclude_patterns]\n\n        items = directories if dirs else files\n        for item in items:\n            if fnmatch.fnmatch(item, pattern):\n                path = os.path.join(root, item)\n                results.append(path)\n\n    return \"\\n\".join(sorted(results)) if results else \"No results found.\"\n\n\ndef _find_content_python(\n    pattern: str,\n    include_pattern: Optional[str] = None,\n    case_sensitive: bool = True,\n    max_results: Optional[int] = None,\n) -> str:\n    try:\n        flags = 0 if case_sensitive else re.IGNORECASE\n        regex = re.compile(pattern, flags)\n    except re.error as e:\n        return f\"Invalid regex pattern: {e}\"\n\n    exclude_patterns = EXCLUDE_DIRS.copy()\n    exclude_patterns.update(_get_gitignore_patterns())\n\n    results = []\n    count = 0\n\n    for root, dirs, files in os.walk(\".\"):\n        dirs[:] = [d for d in dirs if d not in exclude_patterns and not d.startswith(\".\")]\n\n        skip_root = False\n        for exclude in exclude_patterns:\n            if exclude in root:\n                skip_root = True\n                break\n        if skip_root:\n            continue\n\n        for file in files:\n            if max_results and count >= max_results:\n                results.append(f\"... (showing first {max_results} results)\")\n                return \"\\n\".join(results) if results else \"No results found.\"\n\n            if any(file.endswith(ext) for ext in BINARY_EXTENSIONS):\n                continue\n\n            if include_pattern:\n                if not fnmatch.fnmatch(file, include_pattern):\n                    continue\n\n            filepath = os.path.join(root, file)\n\n            try:\n                with open(filepath, \"r\", encoding=\"utf-8\", errors=\"ignore\") as f:\n                    for line_num, line in enumerate(f, 1):\n                        if regex.search(line):\n                            result_line = f\"{filepath}:{line_num}:{line.rstrip()}\"\n                            results.append(result_line)\n                            count += 1\n\n                            if max_results and count >= max_results:\n                                break\n            except (OSError, PermissionError):\n                continue\n\n    return \"\\n\".join(results) if results else \"No results found.\"\n\n\nasync def find(\n    ctx: RunContext[ToolDeps],\n    directory: str = \".\",\n    pattern: str = \"*\",\n    *,\n    content: Optional[str] = None,\n    dirs: bool = False,\n    max_depth: Optional[int] = None,\n    case_sensitive: bool = True,\n    max_results: Optional[int] = None,\n    include_pattern: Optional[str] = None,\n) -> str:\n    \"\"\"Find files/directories by name or content.\n\n    Examples:\n        find(\".\", \"*.py\")                    # Find all Python files\n        find(\"src\", \"*test*\")                # Find files with \"test\" in name under src/\n        find(\".\", \"*config*\", dirs=True)     # Find directories with \"config\" in name\n        find(\".\", \"*.js\", max_depth=2)       # Find JS files, max 2 levels deep\n        find(\".\", content=\"TODO\")            # Find all files containing \"TODO\"\n        find(\".\", \"*.py\", content=\"def main\") # Find Python files containing \"def main\"\n        find(\".\", content=\"error\", case_sensitive=False) # Case-insensitive content search\n\n    Args:\n        directory: Directory to search in (default: current directory \".\")\n        pattern: Shell-style wildcard pattern for filename (default: \"*\" matches all)\n            - * matches any characters (e.g., \"*.py\" matches all .py files)\n            - ? matches single character (e.g., \"test?.py\" matches test1.py, test2.py)\n            - [seq] matches any character in seq (e.g., \"test[123].py\")\n        content: Text or regex pattern to search for in file contents\n        dirs: If True, search for directories instead of files (default: False)\n        max_depth: Maximum depth to search (default: None for unlimited)\n        case_sensitive: Whether content search is case-sensitive (default: True)\n        max_results: Maximum number of results to return (default: None for all)\n        include_pattern: When searching content, only search files matching this pattern\n\n    Returns:\n        For name search: Newline-separated list of matching paths\n        For content search: Newline-separated results in format \"filepath:line_number:matching_line\"\n        Returns \"No results found.\" if no matches.\n\n    Note:\n        Automatically excludes common non-project directories (node_modules, .git, etc.)\n        and respects .gitignore when using external tools.\n    \"\"\"\n\n    if ctx.deps and ctx.deps.display_tool_status:\n        status_info = {\"pattern\": pattern, \"dirs\": dirs, \"depth\": max_depth}\n        if content:\n            status_info[\"content\"] = content\n        await ctx.deps.display_tool_status(\"Find\", directory, **status_info)\n\n    directory = directory or \".\"\n    orig_dir = os.getcwd()\n\n    try:\n        os.chdir(os.path.expanduser(directory))\n\n        if content:\n            result = await _find_content_with_rg(\n                content, include_pattern, case_sensitive, max_results\n            )\n            if result is not None:\n                return result\n\n            result = await _find_content_with_ag(\n                content, include_pattern, case_sensitive, max_results\n            )\n            if result is not None:\n                return result\n\n            return _find_content_python(content, include_pattern, case_sensitive, max_results)\n        else:\n            result = await _find_files_with_fd(pattern, dirs, max_depth)\n            if result is not None:\n                return result\n\n            if not dirs:\n                result = await _find_files_with_rg(pattern, max_depth)\n                if result is not None:\n                    return result\n\n            return _find_files_python(pattern, dirs, max_depth)\n\n    finally:\n        os.chdir(orig_dir)\n"
  },
  {
    "path": "src/sidekick/tools/git.py",
    "content": "import asyncio\nimport subprocess\n\nfrom pydantic_ai import ModelRetry, RunContext\n\nfrom sidekick.deps import ToolDeps\n\n\nasync def git_add(ctx: RunContext[ToolDeps], files: str) -> str:\n    \"\"\"Stage files for commit using git add.\n\n    Args:\n        files: Files to stage (can be paths, patterns, or '.' for all)\n\n    Returns:\n        Success message with staged files count\n    \"\"\"\n    # Ignore for now, we already show panel\n    #\n    # if ctx.deps and ctx.deps.display_tool_status:\n    #     await ctx.deps.display_tool_status(\"Git Add\", f\"{len(files)} files\")\n\n    try:\n        # First check git status to show what will be staged\n        status_result = subprocess.run(\n            [\"git\", \"status\", \"--porcelain\"], capture_output=True, text=True, check=True\n        )\n\n        if not status_result.stdout.strip():\n            return \"No changes to stage\"\n\n        if ctx.deps and ctx.deps.confirm_action:\n            files_to_stage = []\n            for line in status_result.stdout.splitlines():\n                if line.strip():\n                    status = line[:2]\n                    filename = line[3:]\n                    if files.strip() == \".\" or any(\n                        f in filename for f in (files.split() if \" \" in files else [files])\n                    ):\n                        files_to_stage.append(f\"{status} {filename}\")\n\n            if files_to_stage:\n                preview = \"\\n\".join(files_to_stage[:20])\n                if len(files_to_stage) > 20:\n                    preview += f\"\\n... and {len(files_to_stage) - 20} more files\"\n\n                if not await ctx.deps.confirm_action(\"Git Add\", preview):\n                    raise asyncio.CancelledError(\"Tool execution cancelled by user\")\n\n        # Parse files argument - could be '.', specific files, or patterns\n        if files.strip() == \".\":\n            # Stage all changes\n            subprocess.run([\"git\", \"add\", \".\"], capture_output=True, text=True, check=True)\n        else:\n            # Stage specific files/patterns\n            file_list = files.split() if \" \" in files else [files]\n            subprocess.run([\"git\", \"add\"] + file_list, capture_output=True, text=True, check=True)\n\n        # Get updated status to show what was staged\n        new_status = subprocess.run(\n            [\"git\", \"status\", \"--porcelain\"], capture_output=True, text=True, check=True\n        )\n\n        # Count staged files\n        staged_count = sum(\n            1 for line in new_status.stdout.splitlines() if line and line[0] in [\"A\", \"M\", \"D\", \"R\"]\n        )\n\n        return f\"Successfully staged {staged_count} file(s)\"\n\n    except subprocess.CalledProcessError as e:\n        error_msg = e.stderr.strip() if e.stderr else str(e)\n        raise ModelRetry(f\"Git add failed: {error_msg}\")\n    except Exception as e:\n        raise ModelRetry(f\"Error running git add: {str(e)}\")\n\n\nasync def git_commit(ctx: RunContext[ToolDeps], message: str) -> str:\n    \"\"\"Create a git commit with the given message.\n\n    Args:\n        message: Commit message\n\n    Returns:\n        Success message with commit hash\n    \"\"\"\n    # Ignore for now, we already show panel\n    #\n    # if ctx.deps and ctx.deps.display_tool_status:\n    #     short_message = message[:25].replace(\"\\n\", \" \")\n    #     await ctx.deps.display_tool_status(\"Git Commit\", short_message)\n\n    try:\n        # Check if there are staged changes\n        status_result = subprocess.run(\n            [\"git\", \"status\", \"--porcelain\"], capture_output=True, text=True, check=True\n        )\n\n        # Check for staged files\n        staged_files = [\n            line\n            for line in status_result.stdout.splitlines()\n            if line and line[0] in [\"A\", \"M\", \"D\", \"R\"]\n        ]\n\n        if not staged_files:\n            return \"No staged changes to commit\"\n\n        if ctx.deps and ctx.deps.confirm_action:\n            preview = f\"Message: {message}\\n\\nStaged changes:\\n\\n\"\n            preview += \"\\n\".join(staged_files[:20])\n            if len(staged_files) > 20:\n                preview += f\"\\n... and {len(staged_files) - 20} more files\"\n\n            if not await ctx.deps.confirm_action(\"Git Commit\", preview):\n                raise asyncio.CancelledError(\"Tool execution cancelled by user\")\n\n        # Create the commit\n        commit_result = subprocess.run(\n            [\"git\", \"commit\", \"-m\", message], capture_output=True, text=True, check=True\n        )\n\n        # Extract commit hash from output\n        output_lines = commit_result.stdout.strip().split(\"\\n\")\n        commit_info = output_lines[0] if output_lines else \"Commit created\"\n\n        return f\"Successfully created commit: {commit_info}\"\n\n    except subprocess.CalledProcessError as e:\n        error_msg = e.stderr.strip() if e.stderr else str(e)\n        raise ModelRetry(f\"Git commit failed: {error_msg}\")\n    except Exception as e:\n        raise ModelRetry(f\"Error running git commit: {str(e)}\")\n"
  },
  {
    "path": "src/sidekick/tools/list.py",
    "content": "import asyncio\nimport os\nimport shutil\nfrom pathlib import Path\nfrom typing import Dict, List, Tuple\n\nfrom pydantic_ai import RunContext\n\nfrom sidekick.deps import ToolDeps\n\nfrom .common import EXCLUDE_DIRS\n\n\ndef _should_exclude(path: str, gitignore_patterns: List[str]) -> bool:\n    \"\"\"Check if a path should be excluded based on patterns.\"\"\"\n    path_obj = Path(path)\n\n    # Check against EXCLUDE_DIRS\n    for part in path_obj.parts:\n        if part in EXCLUDE_DIRS:\n            return True\n\n    # Check against gitignore patterns (simplified)\n    for pattern in gitignore_patterns:\n        pattern = pattern.strip()\n        if not pattern or pattern.startswith(\"#\"):\n            continue\n\n        # Simple pattern matching (not full gitignore spec)\n        if pattern.endswith(\"/\"):\n            # Directory pattern\n            if pattern[:-1] in path_obj.parts:\n                return True\n        else:\n            # File pattern\n            if path_obj.match(pattern):\n                return True\n\n    return False\n\n\ndef _read_gitignore(base_path: str) -> List[str]:\n    \"\"\"Read .gitignore patterns from the given directory.\"\"\"\n    gitignore_path = os.path.join(base_path, \".gitignore\")\n    if os.path.exists(gitignore_path):\n        try:\n            with open(gitignore_path, \"r\") as f:\n                return f.readlines()\n        except Exception:\n            pass\n    return []\n\n\ndef _format_tree(items: List[Tuple[str, bool, int]], prefix: str = \"\") -> List[str]:\n    \"\"\"Format directory structure as a tree.\"\"\"\n    lines = []\n    for i, (name, is_dir, file_count) in enumerate(items):\n        is_last = i == len(items) - 1\n        current_prefix = \"└── \" if is_last else \"├── \"\n\n        if is_dir and file_count > 0:\n            lines.append(f\"{prefix}{current_prefix}{name}/ ({file_count} files)\")\n        elif is_dir:\n            lines.append(f\"{prefix}{current_prefix}{name}/\")\n        else:\n            lines.append(f\"{prefix}{current_prefix}{name}\")\n\n    return lines\n\n\nasync def _run_rg_files(path: str, max_depth: int) -> str:\n    \"\"\"Use ripgrep to list files efficiently.\"\"\"\n    cmd = [\"rg\", \"--files\", \"--max-depth\", str(max_depth), path]\n\n    try:\n        process = await asyncio.create_subprocess_exec(\n            *cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE\n        )\n        stdout, stderr = await process.communicate()\n\n        if process.returncode == 0:\n            files = stdout.decode().strip().split(\"\\n\")\n            # Convert flat file list to tree structure\n            return _build_tree_from_files(files, path)\n        else:\n            # Fall back to Python implementation\n            return None\n    except Exception:\n        return None\n\n\ndef _build_tree_from_files(files: List[str], base_path: str) -> str:\n    \"\"\"Build a tree structure from a flat list of file paths.\"\"\"\n    # This is a simplified version - for now, fallback to Python implementation\n    # A full implementation would parse the paths and build a proper tree\n    return None\n\n\ndef _walk_directory(\n    path: str, max_depth: int, current_depth: int = 0, gitignore_patterns: List[str] = None\n) -> Tuple[List[str], Dict[str, int]]:\n    \"\"\"Walk directory and return formatted tree lines and stats.\"\"\"\n    if gitignore_patterns is None:\n        gitignore_patterns = _read_gitignore(path)\n\n    lines = []\n    total_files = 0\n    total_dirs = 0\n\n    if current_depth >= max_depth:\n        return lines, {\"files\": total_files, \"dirs\": total_dirs}\n\n    try:\n        items = []\n        for item in sorted(os.listdir(path)):\n            if item.startswith(\".\") and item not in [\".gitignore\", \".env.example\"]:\n                continue\n\n            item_path = os.path.join(path, item)\n            if _should_exclude(item_path, gitignore_patterns):\n                continue\n\n            is_dir = os.path.isdir(item_path)\n            file_count = 0\n\n            if is_dir:\n                # Count files in subdirectory (for display)\n                try:\n                    sub_items = os.listdir(item_path)\n                    file_count = sum(\n                        1\n                        for i in sub_items\n                        if not os.path.isdir(os.path.join(item_path, i)) and not i.startswith(\".\")\n                    )\n                except Exception:\n                    pass\n                total_dirs += 1\n            else:\n                total_files += 1\n\n            items.append((item, is_dir, file_count))\n\n        # Sort: directories first, then files\n        items.sort(key=lambda x: (not x[1], x[0].lower()))\n\n        # Format current level\n        if current_depth == 0:\n            lines.append(f\"{path}\")\n\n        # Add tree lines\n        tree_lines = _format_tree(items)\n        lines.extend(tree_lines)\n\n        # Recursively process subdirectories\n        for i, (name, is_dir, _) in enumerate(items):\n            if is_dir:\n                item_path = os.path.join(path, name)\n                is_last = i == len(items) - 1\n                prefix = \"    \" if is_last else \"│   \"\n\n                sub_lines, sub_stats = _walk_directory(\n                    item_path, max_depth, current_depth + 1, gitignore_patterns\n                )\n\n                # Add subdirectory content with proper indentation\n                for line in sub_lines:\n                    if line:  # Skip empty lines\n                        lines.append(prefix + line)\n\n                total_files += sub_stats[\"files\"]\n                total_dirs += sub_stats[\"dirs\"]\n\n    except PermissionError:\n        lines.append(f\"Permission denied: {path}\")\n    except Exception as e:\n        lines.append(f\"Error reading directory: {str(e)}\")\n\n    return lines, {\"files\": total_files, \"dirs\": total_dirs}\n\n\nasync def list_directory(ctx: RunContext[ToolDeps], path: str = \".\", max_depth: int = 3) -> str:\n    \"\"\"\n    List directory contents in a tree structure, respecting .gitignore and common exclusions.\n\n    Args:\n        path: Directory path to list (default: current directory)\n        max_depth: Maximum depth to traverse (default: 3)\n\n    Returns:\n        Formatted directory tree as a string\n    \"\"\"\n    if ctx.deps and ctx.deps.display_tool_status:\n        await ctx.deps.display_tool_status(\"List\", path, depth=max_depth)\n\n    path = os.path.abspath(os.path.expanduser(path))\n\n    if not os.path.exists(path):\n        return f\"Error: Path does not exist: {path}\"\n\n    if not os.path.isdir(path):\n        return f\"Error: Path is not a directory: {path}\"\n\n    # Try using ripgrep first if available\n    if shutil.which(\"rg\"):\n        result = await _run_rg_files(path, max_depth)\n        if result:\n            return result\n\n    # Fall back to Python implementation\n    lines, stats = _walk_directory(path, max_depth)\n\n    # Give AI the context of result\n    lines.append(\"\")\n    lines.append(f\"Total: {stats['files']} files, {stats['dirs']} directories\")\n    return \"\\n\".join(lines)\n"
  },
  {
    "path": "src/sidekick/tools/read_file.py",
    "content": "import logging\n\nfrom pydantic_ai import RunContext\n\nfrom sidekick.deps import ToolDeps\n\nlog = logging.getLogger(__name__)\n\n\nasync def read_file(ctx: RunContext[ToolDeps], filepath: str) -> str:\n    \"\"\"Read the contents of a file.\"\"\"\n    log.debug(f\"read_file called with filepath: {filepath}\")\n\n    if ctx.deps and ctx.deps.display_tool_status:\n        await ctx.deps.display_tool_status(\"Read\", filepath)\n\n    try:\n        with open(filepath, \"r\", encoding=\"utf-8\") as file:\n            content = file.read()\n            log.debug(f\"Successfully read {len(content)} characters from {filepath}\")\n            return content\n    except FileNotFoundError:\n        return f\"Error: File not found: {filepath}\"\n    except PermissionError:\n        return f\"Error: Permission denied: {filepath}\"\n    except Exception as e:\n        return f\"Error reading file {filepath}: {str(e)}\"\n"
  },
  {
    "path": "src/sidekick/tools/run_command.py",
    "content": "import asyncio\nimport subprocess\n\nfrom pydantic_ai import RunContext\n\nfrom sidekick import ui\nfrom sidekick.deps import ToolDeps\nfrom sidekick.session import session\nfrom sidekick.utils.command import extract_commands, is_command_allowed\n\n\nasync def run_command(ctx: RunContext[ToolDeps], command: str) -> str:\n    \"\"\"Run a shell command and return its output.\"\"\"\n    # Ignore for now, we already show panel\n    #\n    # if ctx.deps and ctx.deps.display_tool_status:\n    #     await ctx.deps.display_tool_status(\"Run\", command)\n\n    if ctx.deps and ctx.deps.confirm_action:\n        if not is_command_allowed(command, session.allowed_commands):\n            command_display = ui.create_shell_syntax(command)\n\n            if not await ctx.deps.confirm_action(\"Run Command\", command_display):\n                raise asyncio.CancelledError(\"Tool execution cancelled by user\")\n\n            commands = extract_commands(command)\n            session.allowed_commands.update(commands)\n        else:\n            # If command is allowed, show as a status\n            await ctx.deps.display_tool_status(\"Run\", command)\n\n    result = subprocess.run(\n        command,\n        shell=True,\n        capture_output=True,\n        text=True,\n        timeout=30,\n    )\n    output = result.stdout + result.stderr\n    return output if output else \"(no output)\"\n"
  },
  {
    "path": "src/sidekick/tools/update_file.py",
    "content": "import asyncio\n\nfrom pydantic_ai import ModelRetry, RunContext\n\nfrom sidekick import ui\nfrom sidekick.deps import ToolDeps\n\n\nasync def update_file(\n    ctx: RunContext[ToolDeps], filepath: str, old_content: str, new_content: str\n) -> str:\n    \"\"\"Update specific content in a file.\"\"\"\n    # Ignore for now, we already show panel\n    #\n    # if ctx.deps and ctx.deps.display_tool_status:\n    #     await ctx.deps.display_tool_status(\"Update\", filepath)\n\n    if old_content == new_content:\n        raise ModelRetry(\n            \"The old_content and new_content are identical. \"\n            \"Please provide different content for the replacement.\"\n        )\n\n    try:\n        with open(filepath, \"r\", encoding=\"utf-8\") as file:\n            content = file.read()\n    except FileNotFoundError:\n        raise ModelRetry(f\"File not found: {filepath}. Please check the file path and try again.\")\n    except Exception as e:\n        raise ModelRetry(f\"Error reading file {filepath}: {str(e)}\")\n\n    if old_content not in content:\n        preview = old_content[:100] + \"...\" if len(old_content) > 100 else old_content\n        raise ModelRetry(\n            f\"Content to replace not found in {filepath}. \"\n            f\"Searched for: '{preview}'. \"\n            \"Please re-read the file and ensure the exact content matches, including whitespace.\"\n        )\n\n    if ctx.deps and ctx.deps.confirm_action:\n        updated_content = content.replace(old_content, new_content, 1)\n        diff_preview = ui.create_unified_diff(content, updated_content, filepath)\n        footer = f\"File: {filepath}\"\n\n        if not await ctx.deps.confirm_action(\"Update File\", diff_preview, footer):\n            raise asyncio.CancelledError(\"Tool execution cancelled by user\")\n\n    try:\n        updated_content = content.replace(old_content, new_content, 1)\n        with open(filepath, \"w\", encoding=\"utf-8\") as file:\n            file.write(updated_content)\n    except Exception as e:\n        raise ModelRetry(f\"Error writing to file {filepath}: {str(e)}\")\n\n    return f\"Successfully updated {filepath}\"\n"
  },
  {
    "path": "src/sidekick/tools/wrapper.py",
    "content": "from pydantic_ai import Tool\n\nfrom sidekick.tools.find import find\nfrom sidekick.tools.git import git_add, git_commit\nfrom sidekick.tools.list import list_directory\nfrom sidekick.tools.read_file import read_file\nfrom sidekick.tools.run_command import run_command\nfrom sidekick.tools.update_file import update_file\nfrom sidekick.tools.write_file import write_file\n\nTOOL_RETRY_LIMIT = 10\n\n\ndef create_tools():\n    \"\"\"Create Tool instances for all tools.\"\"\"\n    tools = [\n        read_file,\n        write_file,\n        update_file,\n        run_command,\n        git_add,\n        git_commit,\n        find,\n        list_directory,\n    ]\n\n    return [Tool(tool, max_retries=TOOL_RETRY_LIMIT) for tool in tools]\n"
  },
  {
    "path": "src/sidekick/tools/write_file.py",
    "content": "import asyncio\nimport logging\nfrom pathlib import Path\n\nfrom pydantic_ai import RunContext\n\nfrom sidekick import ui\nfrom sidekick.deps import ToolDeps\n\nlog = logging.getLogger(__name__)\n\n\nasync def write_file(ctx: RunContext[ToolDeps], filepath: str, content: str) -> str:\n    \"\"\"Write content to a file.\"\"\"\n    log.debug(f\"write_file called with filepath: {filepath}, content length: {len(content)}\")\n\n    # Write content is in a panel already, showing this here feels redundant\n    # Commenting out for now\n    #\n    # if ctx.deps and ctx.deps.display_tool_status:\n    #     await ctx.deps.display_tool_status(\"Write\", filepath)\n\n    if ctx.deps and ctx.deps.confirm_action:\n        syntax = ui.create_syntax_highlighted(content, filepath)\n        footer = f\"File: {filepath}\"\n        if not await ctx.deps.confirm_action(\"Write File\", syntax, footer):\n            raise asyncio.CancelledError(\"Tool execution cancelled by user\")\n\n    Path(filepath).parent.mkdir(parents=True, exist_ok=True)\n\n    with open(filepath, \"w\", encoding=\"utf-8\") as file:\n        file.write(content)\n\n    return f\"Successfully wrote to {filepath}\"\n"
  },
  {
    "path": "src/sidekick/ui/__init__.py",
    "content": "\"\"\"Clean, simplified UI module.\"\"\"\n\nfrom sidekick.ui.core import BANNER, SpinnerStyle\nfrom sidekick.ui.formatting import (\n    create_inline_diff,\n    create_shell_syntax,\n    create_syntax_highlighted,\n    create_unified_diff,\n    format_server_name,\n    get_command_display_name,\n    get_file_language,\n)\nfrom sidekick.ui.manager import MessageType, OutputType, PanelType, UIManager\nfrom sidekick.ui.special import update_available as _update_available\nfrom sidekick.ui.special import usage as _usage\nfrom sidekick.ui.special import version as _version\nfrom sidekick.ui.spinner import SpinnerManager\n\n_ui = UIManager()\n_spinner = SpinnerManager(_ui.console)\n\n# Core API\npanel = _ui.panel\nmessage = _ui.message\nline = _ui.line\nreset_context = _ui.reset_context\n\n# Convenience methods\nagent = _ui.agent\ntool = _ui.tool\ninfo = _ui.info\nerror = _ui.error\nwarning = _ui.warning\nsuccess = _ui.success\nbullet = _ui.bullet\nmuted = _ui.muted\nthinking = _ui.thinking\n\n# Special panels\nthinking_panel = _ui.thinking_panel\nconfirmation_panel = _ui.confirmation_panel\ninfo_panel = _ui.info_panel\nerror_panel = _ui.error_panel\n\n# Special functions\ndump = _ui.dump\nhelp = _ui.help\n\n\ndef version():\n    \"\"\"Display version information.\"\"\"\n    _version(_ui)\n\n\ndef update_available(latest_version: str):\n    \"\"\"Display update available message.\"\"\"\n    _update_available(_ui, latest_version)\n\n\ndef usage(usage_data: dict):\n    \"\"\"Display usage statistics.\"\"\"\n    _usage(_ui, usage_data)\n\n\ndef banner():\n    \"\"\"Display the application banner.\"\"\"\n    from rich.padding import Padding\n\n    from sidekick.constants import APP_VERSION\n    from sidekick.ui.colors import colors\n\n    _ui.console.clear()\n    banner_padding = Padding(BANNER, (1, 0, 0, 2))\n    version_padding = Padding(f\"v{APP_VERSION}\", (0, 0, 1, 2))\n    _ui.console.print(banner_padding, style=colors.primary)\n    _ui.console.print(version_padding, style=colors.muted)\n    _ui._last_output = None\n\n\ndef start_spinner(message: str = \"\", style: str = SpinnerStyle.DEFAULT):\n    \"\"\"Start the spinner with a message.\"\"\"\n    # Add spacing before spinner if coming after user input\n    if _ui._last_output == OutputType.USER_INPUT:\n        _ui.console.print()\n\n    _spinner.start(message, style)\n    _ui.set_spinner_active(True)\n\n\ndef stop_spinner():\n    \"\"\"Stop the spinner.\"\"\"\n    _spinner.stop()\n    _ui.set_spinner_active(False)\n\n\nconsole = _ui.console\n\n__all__ = [\n    # Core API\n    \"panel\",\n    \"message\",\n    \"line\",\n    \"reset_context\",\n    # Messages\n    \"info\",\n    \"error\",\n    \"warning\",\n    \"success\",\n    \"bullet\",\n    \"muted\",\n    \"thinking\",\n    # Panels\n    \"agent\",\n    \"tool\",\n    \"thinking_panel\",\n    \"confirmation_panel\",\n    \"info_panel\",\n    \"error_panel\",\n    # Special functions\n    \"dump\",\n    \"help\",\n    \"version\",\n    \"update_available\",\n    \"usage\",\n    # Utilities\n    \"banner\",\n    \"start_spinner\",\n    \"stop_spinner\",\n    \"console\",\n    # Formatting\n    \"create_inline_diff\",\n    \"create_shell_syntax\",\n    \"create_syntax_highlighted\",\n    \"create_unified_diff\",\n    \"format_server_name\",\n    \"get_command_display_name\",\n    \"get_file_language\",\n    # Types\n    \"PanelType\",\n    \"MessageType\",\n    \"SpinnerStyle\",\n]\n"
  },
  {
    "path": "src/sidekick/ui/colors.py",
    "content": "\"\"\"Color definitions for the UI module.\"\"\"\n\n\nclass Colors:\n    primary = \"medium_purple1\"  # Agent responses\n    secondary = \"medium_purple3\"  # Secondary purple\n    success = \"green\"  # Success messages\n    warning = \"orange1\"  # Confirmations/warnings\n    error = \"red\"  # Errors\n    muted = \"grey62\"  # Info/help\n    tool_data = \"bright_blue\"  # Tool output data\n\n\ncolors = Colors()\n"
  },
  {
    "path": "src/sidekick/ui/core.py",
    "content": "\"\"\"Core UI functions including banner and spinner management.\"\"\"\n\nfrom rich.console import Console\nfrom rich.padding import Padding\n\nfrom sidekick.constants import APP_VERSION\nfrom sidekick.ui.colors import colors\nfrom sidekick.ui.spinner import SpinnerManager\n\nconsole = Console()\n_spinner_manager = SpinnerManager(console)\n\nBANNER = \"\"\"\n███████╗██╗██████╗ ███████╗██╗  ██╗██╗ ██████╗██╗  ██╗\n██╔════╝██║██╔══██╗██╔════╝██║ ██╔╝██║██╔════╝██║ ██╔╝\n███████╗██║██║  ██║█████╗  █████╔╝ ██║██║     █████╔╝\n╚════██║██║██║  ██║██╔══╝  ██╔═██╗ ██║██║     ██╔═██╗\n███████║██║██████╔╝███████╗██║  ██╗██║╚██████╗██║  ██╗\n╚══════╝╚═╝╚═════╝ ╚══════╝╚═╝  ╚═╝╚═╝ ╚═════╝╚═╝  ╚═╝\"\"\"\n\n\nclass SpinnerStyle:\n    DEFAULT = f\"[bold {colors.primary}]{{}}[/bold {colors.primary}]\"\n    MUTED = f\"[{colors.muted}]{{}}[/{colors.muted}]\"\n    WARNING = f\"[{colors.warning}]{{}}[/{colors.warning}]\"\n    ERROR = f\"[{colors.error}]{{}}[/{colors.error}]\"\n\n\ndef banner():\n    \"\"\"Display the application banner.\"\"\"\n    console.clear()\n    banner_padding = Padding(BANNER, (0, 0, 0, 2))\n    version_padding = Padding(f\"v{APP_VERSION}\", (0, 0, 1, 2))\n    console.print(banner_padding, style=colors.primary)\n    console.print(version_padding, style=colors.muted)\n\n\ndef start_spinner(message: str = \"\", style: str = SpinnerStyle.DEFAULT):\n    \"\"\"Start the spinner with a message.\"\"\"\n    _spinner_manager.start(message, style)\n\n\ndef stop_spinner():\n    \"\"\"Stop the spinner.\"\"\"\n    _spinner_manager.stop()\n"
  },
  {
    "path": "src/sidekick/ui/formatting.py",
    "content": "\"\"\"Formatting functions for syntax highlighting, diffs, and display.\"\"\"\n\nimport difflib\nfrom pathlib import Path\n\nfrom rich.syntax import Syntax\nfrom rich.text import Text\n\nSYNTAX_THEME = \"nord\"\n\n\ndef get_file_language(filepath: str) -> str:\n    \"\"\"Determine the language for syntax highlighting based on file extension.\n\n    Args:\n        filepath: Path to the file\n\n    Returns:\n        Language identifier for rich.syntax.Syntax\n    \"\"\"\n    ext_map = {\n        \".py\": \"python\",\n        \".js\": \"javascript\",\n        \".ts\": \"typescript\",\n        \".jsx\": \"jsx\",\n        \".tsx\": \"tsx\",\n        \".json\": \"json\",\n        \".html\": \"html\",\n        \".css\": \"css\",\n        \".scss\": \"scss\",\n        \".sass\": \"sass\",\n        \".less\": \"less\",\n        \".xml\": \"xml\",\n        \".yaml\": \"yaml\",\n        \".yml\": \"yaml\",\n        \".toml\": \"toml\",\n        \".ini\": \"ini\",\n        \".cfg\": \"ini\",\n        \".conf\": \"ini\",\n        \".sh\": \"bash\",\n        \".bash\": \"bash\",\n        \".zsh\": \"zsh\",\n        \".fish\": \"fish\",\n        \".ps1\": \"powershell\",\n        \".bat\": \"batch\",\n        \".cmd\": \"batch\",\n        \".go\": \"go\",\n        \".rs\": \"rust\",\n        \".java\": \"java\",\n        \".kt\": \"kotlin\",\n        \".swift\": \"swift\",\n        \".c\": \"c\",\n        \".h\": \"c\",\n        \".cpp\": \"cpp\",\n        \".cxx\": \"cpp\",\n        \".cc\": \"cpp\",\n        \".hpp\": \"cpp\",\n        \".cs\": \"csharp\",\n        \".php\": \"php\",\n        \".rb\": \"ruby\",\n        \".lua\": \"lua\",\n        \".pl\": \"perl\",\n        \".r\": \"r\",\n        \".R\": \"r\",\n        \".m\": \"matlab\",\n        \".jl\": \"julia\",\n        \".scala\": \"scala\",\n        \".clj\": \"clojure\",\n        \".elm\": \"elm\",\n        \".ex\": \"elixir\",\n        \".exs\": \"elixir\",\n        \".erl\": \"erlang\",\n        \".hrl\": \"erlang\",\n        \".vim\": \"vim\",\n        \".vimrc\": \"vim\",\n        \".sql\": \"sql\",\n        \".dockerfile\": \"docker\",\n        \".Dockerfile\": \"docker\",\n        \".md\": \"markdown\",\n        \".markdown\": \"markdown\",\n        \".rst\": \"rst\",\n        \".tex\": \"latex\",\n        \".vue\": \"vue\",\n        \".svelte\": \"svelte\",\n    }\n\n    # Get the file extension\n    ext = Path(filepath).suffix.lower()\n\n    # Check if we have a mapping for this extension\n    if ext in ext_map:\n        return ext_map[ext]\n\n    # Check for some special filenames\n    filename = Path(filepath).name.lower()\n    if filename == \"dockerfile\":\n        return \"docker\"\n    elif filename == \"makefile\":\n        return \"makefile\"\n    elif filename == \".gitignore\":\n        return \"gitignore\"\n    elif filename == \".env\":\n        return \"dotenv\"\n\n    # Default to text if we don't recognize the extension\n    return \"text\"\n\n\ndef create_syntax_highlighted(content: str, filepath: str, theme: str = None) -> Syntax:\n    \"\"\"Create syntax-highlighted content.\n\n    Args:\n        content: The content to highlight\n        filepath: Path to determine language\n        theme: Optional theme override\n\n    Returns:\n        Syntax object for rendering\n    \"\"\"\n    if theme is None:\n        theme = SYNTAX_THEME\n\n    language = get_file_language(filepath)\n    return Syntax(\n        content,\n        language,\n        theme=theme,\n        line_numbers=True,\n        word_wrap=True,\n    )\n\n\ndef create_shell_syntax(command: str, theme: str = None) -> Syntax:\n    \"\"\"Create syntax-highlighted shell command.\n\n    Args:\n        command: Shell command to highlight\n        theme: Optional theme override\n\n    Returns:\n        Syntax object for rendering\n    \"\"\"\n    if theme is None:\n        theme = SYNTAX_THEME\n\n    return Syntax(\n        command,\n        \"bash\",\n        theme=theme,\n        line_numbers=False,\n        word_wrap=True,\n    )\n\n\ndef create_unified_diff(\n    old_content: str, new_content: str, filepath: str = \"file\", context_lines: int = 3\n) -> Syntax:\n    \"\"\"Create a unified diff with syntax highlighting.\n\n    Args:\n        old_content: Original file content\n        new_content: Modified file content\n        filepath: Path for diff header\n        context_lines: Number of context lines\n\n    Returns:\n        Syntax object with highlighted diff\n    \"\"\"\n    old_lines = old_content.splitlines(keepends=True)\n    new_lines = new_content.splitlines(keepends=True)\n\n    diff = difflib.unified_diff(\n        old_lines,\n        new_lines,\n        fromfile=f\"a/{filepath}\",\n        tofile=f\"b/{filepath}\",\n        n=context_lines,\n        lineterm=\"\",\n    )\n\n    diff_text = \"\".join(diff)\n\n    return Syntax(\n        diff_text,\n        \"diff\",\n        theme=SYNTAX_THEME,\n        line_numbers=False,\n        word_wrap=True,\n    )\n\n\ndef create_inline_diff(old_content: str, new_content: str) -> tuple[Text, Text]:\n    \"\"\"Create inline diffs showing character-level changes.\n\n    Args:\n        old_content: Original content\n        new_content: New content\n\n    Returns:\n        Tuple of (old_text, new_text) with highlighting\n    \"\"\"\n    old_text = Text()\n    new_text = Text()\n\n    # Use difflib to find character-level differences\n    matcher = difflib.SequenceMatcher(None, old_content, new_content)\n\n    for tag, i1, i2, j1, j2 in matcher.get_opcodes():\n        if tag == \"equal\":\n            old_text.append(old_content[i1:i2])\n            new_text.append(new_content[j1:j2])\n        elif tag == \"delete\":\n            old_text.append(old_content[i1:i2], style=\"red strike\")\n        elif tag == \"insert\":\n            new_text.append(new_content[j1:j2], style=\"green\")\n        elif tag == \"replace\":\n            old_text.append(old_content[i1:i2], style=\"red strike\")\n            new_text.append(new_content[j1:j2], style=\"green\")\n\n    return old_text, new_text\n\n\ndef format_server_name(key: str) -> str:\n    \"\"\"Convert server key to human-readable name.\n\n    Args:\n        key: Server key (e.g., 'npmScripts')\n\n    Returns:\n        Human-readable name (e.g., 'NPM Scripts')\n    \"\"\"\n    # Handle camelCase\n    result = \"\"\n    for i, char in enumerate(key):\n        if i > 0 and char.isupper() and key[i - 1].islower():\n            result += \" \"\n        result += char\n\n    # Handle snake_case and hyphenated names\n    result = result.replace(\"_\", \" \").replace(\"-\", \" \")\n\n    # Capitalize words, but preserve certain acronyms\n    words = result.split()\n    formatted_words = []\n    acronyms = {\"npm\", \"mcp\", \"api\", \"cli\", \"url\", \"uri\", \"id\", \"ui\"}\n\n    for word in words:\n        if word.lower() in acronyms:\n            formatted_words.append(word.upper())\n        else:\n            formatted_words.append(word.capitalize())\n\n    return \" \".join(formatted_words)\n\n\ndef get_command_display_name(command_string: str) -> str:\n    \"\"\"Get a display-friendly version of the commands for UI.\n\n    Args:\n        command_string: The full shell command string\n\n    Returns:\n        A comma-separated list of command names\n    \"\"\"\n    # Import here to avoid circular dependency\n    from sidekick.utils.command import extract_commands\n\n    commands = extract_commands(command_string)\n    if len(commands) == 1:\n        return f\"'{commands[0]}'\"\n    else:\n        return \", \".join(f\"'{cmd}'\" for cmd in commands)\n"
  },
  {
    "path": "src/sidekick/ui/manager.py",
    "content": "\"\"\"UI Manager for centralized output control and spacing logic.\"\"\"\n\nfrom enum import Enum, auto\nfrom typing import Optional, Union\n\nfrom rich.console import Console\nfrom rich.markdown import Markdown\nfrom rich.padding import Padding\nfrom rich.panel import Panel\nfrom rich.pretty import Pretty\nfrom rich.table import Table\nfrom rich.text import Text\n\nfrom sidekick.ui.colors import colors\nfrom sidekick.ui.formatting import create_syntax_highlighted\n\n\nclass OutputType(Enum):\n    \"\"\"Types of output that affect spacing decisions.\"\"\"\n\n    STATUS = auto()  # Info, error, warning messages\n    PANEL = auto()  # Boxed content panels\n    USER_INPUT = auto()  # After user input\n    THINKING = auto()  # Thinking messages (special status)\n    SPINNER = auto()  # While spinner is active\n\n\nclass PanelType(Enum):\n    \"\"\"Types of panels with different styling.\"\"\"\n\n    DEFAULT = auto()\n    AGENT = auto()\n    TOOL = auto()\n    ERROR = auto()\n    WARNING = auto()\n    INFO = auto()\n    CONFIRMATION = auto()\n    THINKING = auto()\n\n\nclass MessageType(Enum):\n    \"\"\"Types of status messages.\"\"\"\n\n    INFO = auto()\n    ERROR = auto()\n    WARNING = auto()\n    SUCCESS = auto()\n    BULLET = auto()\n    MUTED = auto()\n    THINKING = auto()\n\n\n# Style configuration for different panel types\nPANEL_STYLES = {\n    PanelType.DEFAULT: {\"border_style\": colors.muted, \"title_prefix\": \"\"},\n    PanelType.AGENT: {\"border_style\": colors.primary, \"title_prefix\": \"Sidekick\"},\n    PanelType.TOOL: {\"border_style\": colors.tool_data, \"title_prefix\": \"\"},\n    PanelType.ERROR: {\"border_style\": colors.error, \"title_prefix\": \"Error\"},\n    PanelType.WARNING: {\"border_style\": colors.warning, \"title_prefix\": \"Warning\"},\n    PanelType.INFO: {\"border_style\": colors.muted, \"title_prefix\": \"\"},\n    PanelType.CONFIRMATION: {\"border_style\": colors.warning, \"title_prefix\": \"Confirm Action\"},\n    PanelType.THINKING: {\"border_style\": colors.muted, \"title_prefix\": \"Thinking\"},\n}\n\n# Style configuration for different message types\nMESSAGE_STYLES = {\n    MessageType.INFO: {\"prefix\": \"•\", \"style\": colors.primary},\n    MessageType.ERROR: {\"prefix\": \"✗\", \"style\": colors.error},\n    MessageType.WARNING: {\"prefix\": \"⚠\", \"style\": colors.warning},\n    MessageType.SUCCESS: {\"prefix\": \"✓\", \"style\": colors.success},\n    MessageType.BULLET: {\"prefix\": \"  -\", \"style\": colors.muted},\n    MessageType.MUTED: {\"prefix\": \"ℹ\", \"style\": colors.muted},\n    MessageType.THINKING: {\"prefix\": \"›\", \"style\": colors.muted},\n}\n\n\nclass UIManager:\n    \"\"\"Manages UI output with automatic spacing and consistent styling.\"\"\"\n\n    PANEL_CONTENT_PADDING = 1\n    PANEL_WRAPPER_PADDING = (0, 0, 0, 1)\n\n    def __init__(self):\n        self.console = Console()\n        self._last_output: Optional[OutputType] = None\n        self._spinner_active = False\n\n    def _prepare_spacing(self, new_type: OutputType):\n        \"\"\"Add spacing based on output transitions.\n\n        Rules:\n        - Panel -> Status: add blank line\n        - Any -> Panel: add blank line (except after user input)\n        - Status -> Status: no spacing\n        \"\"\"\n        if self._last_output is None:\n            return\n\n        if new_type == OutputType.STATUS and self._last_output == OutputType.PANEL:\n            self.console.print()\n        elif new_type == OutputType.PANEL and self._last_output != OutputType.USER_INPUT:\n            self.console.print()\n\n    def _prepare_panel_content(self, content, markdown, syntax):\n        \"\"\"Prepare content for display in panel.\n\n        Args:\n            content: Raw content to display\n            markdown: Whether to render as markdown\n            syntax: Language for syntax highlighting\n\n        Returns:\n            Formatted content ready for panel display\n        \"\"\"\n        if markdown and isinstance(content, str):\n            return Markdown(content)\n        elif syntax and isinstance(content, str):\n            return create_syntax_highlighted(content, syntax)\n        return content\n\n    def _determine_panel_title(self, title, panel_type, config):\n        \"\"\"Determine final panel title based on type and configuration.\n\n        Args:\n            title: Optional title override\n            panel_type: Type of panel for special handling\n            config: Panel style configuration\n\n        Returns:\n            Final title string or None\n        \"\"\"\n        if title is None and config[\"title_prefix\"]:\n            return config[\"title_prefix\"]\n        elif title and config[\"title_prefix\"] and panel_type == PanelType.AGENT:\n            return title\n        elif title and config[\"title_prefix\"]:\n            return f\"{config['title_prefix']}: {title}\"\n        return title\n\n    def panel(\n        self,\n        content: Union[str, Text],\n        *,\n        title: Optional[str] = None,\n        panel_type: PanelType = PanelType.DEFAULT,\n        footer: Optional[str] = None,\n        markdown: bool = False,\n        syntax: Optional[str] = None,\n        has_footer: bool = False,\n    ):\n        \"\"\"Display a panel with automatic spacing and styling.\n\n        Args:\n            content: Content to display in the panel\n            title: Optional title override\n            panel_type: Type of panel for styling\n            footer: Optional footer text (displayed below panel)\n            markdown: Whether to render content as markdown\n            syntax: Language for syntax highlighting (alternative to markdown)\n            has_footer: Whether external footer will be shown (affects padding)\n        \"\"\"\n        self._prepare_spacing(OutputType.PANEL)\n\n        config = PANEL_STYLES[panel_type]\n        display_content = self._prepare_panel_content(content, markdown, syntax)\n        final_title = self._determine_panel_title(title, panel_type, config)\n\n        panel = Panel(\n            Padding(display_content, self.PANEL_CONTENT_PADDING),\n            title=final_title,\n            title_align=\"left\",\n            border_style=config[\"border_style\"],\n        )\n\n        self.console.print(Padding(panel, self.PANEL_WRAPPER_PADDING))\n\n        if footer:\n            self.console.print(f\"  {footer}\", style=colors.muted)\n\n        self._last_output = OutputType.PANEL\n\n    def message(\n        self,\n        text: str,\n        *,\n        message_type: MessageType = MessageType.INFO,\n        indent: int = 0,\n        detail: Optional[str] = None,\n    ):\n        \"\"\"Display a status message with automatic spacing.\n\n        Args:\n            text: Message text\n            message_type: Type of message for styling\n            indent: Additional spaces to indent\n            detail: Optional detail text (for errors)\n        \"\"\"\n        self._prepare_spacing(OutputType.STATUS)\n\n        config = MESSAGE_STYLES[message_type]\n        prefix = config[\"prefix\"]\n        style = config[\"style\"]\n\n        if message_type == MessageType.THINKING:\n            lines = text.strip().split(\"\\n\")\n            if lines:\n                self.console.print(f\"{prefix} {lines[0]}\", style=style)\n                for line in lines[1:]:\n                    self.console.print(f\"  {line}\", style=style)\n        else:\n            indent_str = \" \" * indent\n            if prefix:\n                msg = f\"{indent_str}{prefix} {text}\"\n            else:\n                msg = f\"{indent_str}{text}\"\n\n            if detail and message_type == MessageType.ERROR:\n                msg = f\"{msg}: {detail}\"\n\n            self.console.print(msg, style=style)\n\n        self._last_output = OutputType.STATUS\n\n    def line(self):\n        \"\"\"Print a blank line without affecting output context.\"\"\"\n        self.console.print()\n\n    def reset_context(self):\n        \"\"\"Reset output context (typically after user input).\"\"\"\n        self._last_output = OutputType.USER_INPUT\n\n    def set_spinner_active(self, active: bool):\n        \"\"\"Update spinner state and preserve spacing context.\n\n        Args:\n            active: Whether spinner is active\n        \"\"\"\n        self._spinner_active = active\n        if active and self._last_output not in (OutputType.PANEL, OutputType.USER_INPUT):\n            self._last_output = OutputType.SPINNER\n\n    def agent(self, content: str, has_footer: bool = False):\n        \"\"\"Display agent response panel.\"\"\"\n        self.panel(\n            content,\n            title=\"Sidekick\",\n            panel_type=PanelType.AGENT,\n            markdown=True,\n            has_footer=has_footer,\n        )\n\n    def tool(self, content: Union[str, Text], title: str, footer: Optional[str] = None):\n        \"\"\"Display tool output panel.\"\"\"\n        self.panel(\n            content,\n            title=title,\n            panel_type=PanelType.TOOL,\n            footer=footer,\n        )\n\n    def error_panel(self, message: str, detail: Optional[str] = None, title: Optional[str] = None):\n        \"\"\"Display error panel.\"\"\"\n        content = f\"{message}\\n\\n{detail}\" if detail else message\n        self.panel(\n            content,\n            title=title,\n            panel_type=PanelType.ERROR,\n        )\n\n    def info(self, text: str):\n        \"\"\"Display info message.\"\"\"\n        self.message(text, message_type=MessageType.INFO)\n\n    def error(self, text: str, detail: Optional[str] = None):\n        \"\"\"Display error message.\"\"\"\n        self.message(text, message_type=MessageType.ERROR, detail=detail)\n\n    def warning(self, text: str):\n        \"\"\"Display warning message.\"\"\"\n        self.message(text, message_type=MessageType.WARNING)\n\n    def success(self, text: str):\n        \"\"\"Display success message.\"\"\"\n        self.message(text, message_type=MessageType.SUCCESS)\n\n    def bullet(self, text: str):\n        \"\"\"Display bullet point.\"\"\"\n        self.message(text, message_type=MessageType.BULLET)\n\n    def muted(self, text: str, indent: int = 0):\n        \"\"\"Display muted text.\"\"\"\n        self.message(text, message_type=MessageType.MUTED, indent=indent)\n\n    def thinking(self, text: str):\n        \"\"\"Display thinking message.\"\"\"\n        self.message(text, message_type=MessageType.THINKING)\n\n    def thinking_panel(self, content: str):\n        \"\"\"Display thinking panel.\"\"\"\n        self.panel(content, panel_type=PanelType.THINKING)\n\n    def confirmation_panel(self, content: str):\n        \"\"\"Display confirmation panel.\"\"\"\n        self.panel(content, panel_type=PanelType.CONFIRMATION)\n\n    def info_panel(self, content, title: str):\n        \"\"\"Display info panel.\"\"\"\n        self.panel(content, title=title, panel_type=PanelType.INFO)\n\n    def dump(self, data):\n        \"\"\"Display data in a pretty format.\n\n        Args:\n            data: Any data structure to display\n        \"\"\"\n        self.console.print(Pretty(data))\n\n    def help(self):\n        \"\"\"Display help information.\"\"\"\n        commands = [\n            (\"/help\", \"Show this help message\"),\n            (\"/yolo\", \"Toggle tool confirmation prompts\"),\n            (\"/clear\", \"Clear conversation history\"),\n            (\"/model\", \"List available models\"),\n            (\"/model <num>\", \"Switch to a specific model\"),\n            (\"/model <num> default\", \"Set a model as default\"),\n            (\"/usage\", \"Show session usage statistics\"),\n            (\"exit\", \"Exit the application\"),\n        ]\n\n        table = Table(show_header=False, box=None, padding=(0, 2, 0, 0))\n        table.add_column(\"Command\", style=colors.primary, no_wrap=True)\n        table.add_column(\"Description\", style=\"white\")\n\n        for cmd, desc in commands:\n            table.add_row(cmd, desc)\n\n        self.panel(table, title=\"Available Commands\", panel_type=PanelType.INFO)\n"
  },
  {
    "path": "src/sidekick/ui/special.py",
    "content": "\"\"\"Special UI functions that need external dependencies.\"\"\"\n\nfrom rich.padding import Padding\nfrom rich.text import Text\n\nfrom sidekick.constants import APP_NAME, APP_VERSION, MODELS\nfrom sidekick.session import session\nfrom sidekick.ui.colors import colors\nfrom sidekick.usage import usage_tracker\n\n\ndef version(ui_manager):\n    \"\"\"Display version information.\"\"\"\n    ui_manager.console.print(f\"{APP_NAME} v{APP_VERSION}\", style=colors.muted)\n\n\ndef update_available(ui_manager, latest_version: str):\n    \"\"\"Display update available message.\"\"\"\n    ui_manager.muted(f\"Update available: {APP_VERSION} → {latest_version}\")\n\n\ndef usage(ui_manager, usage_data: dict):\n    \"\"\"Display usage statistics.\"\"\"\n    content = Text()\n    content.append(\"Input: \", style=colors.muted)\n    content.append(f\"{usage_data['input_tokens']:,} tokens\")\n    if usage_data[\"cached_tokens\"] > 0:\n        content.append(f\" ({usage_data['cached_tokens']:,} cached)\", style=colors.muted)\n\n    content.append(\" | \", style=colors.muted)\n    content.append(\"Output: \", style=colors.muted)\n    content.append(f\"{usage_data['output_tokens']:,} tokens\")\n\n    content.append(\" | \", style=colors.muted)\n    content.append(\"Cost: \", style=colors.muted)\n    content.append(f\"${usage_data['request_cost']:.5f}\")\n\n    if session.current_model and usage_tracker.total_tokens:\n        model_info = MODELS.get(session.current_model)\n        if model_info and \"context_window\" in model_info:\n            token_limit = model_info[\"context_window\"]\n            if token_limit > 0:\n                remaining_percentage = (\n                    (token_limit - usage_tracker.total_tokens) / token_limit\n                ) * 100\n                # Ensure percentage doesn't go below 0\n                remaining_percentage = max(0, remaining_percentage)\n                content.append(\" | \", style=colors.muted)\n                content.append(f\"{remaining_percentage:.0f}% \")\n                content.append(\"Context remaining\", style=colors.muted)\n\n    ui_manager.console.print(Padding(content, (0, 0, 0, 2)))\n"
  },
  {
    "path": "src/sidekick/ui/spinner.py",
    "content": "\"\"\"Spinner management for the UI module.\"\"\"\n\nimport asyncio\nimport random\nfrom typing import Optional\n\nfrom rich.console import Console\n\n\nclass SpinnerManager:\n    \"\"\"Manages spinner state and rotation task.\"\"\"\n\n    _THINKING_MESSAGES = [\n        \"Cracking knuckles...\",\n        \"Polishing grappling hook...\",\n        \"Consulting the manual...\",\n        \"Adjusting utility belt...\",\n        \"Calibrating gadgets...\",\n        \"Dusting off cape...\",\n        \"Sharpening batarangs...\",\n        \"Pressing buttons...\",\n        \"Looking busy...\",\n        \"Doing stretches...\",\n        \"Putting on thinking mask...\",\n        \"Running diagnostics...\",\n        \"Preparing witty comeback...\",\n        \"Calculating trajectories...\",\n        \"Donning thinking cape...\",\n    ]\n\n    def __init__(self, console: Console):\n        self.console = console\n        self.spinner = None\n        self.rotation_task: Optional[asyncio.Task] = None\n\n    def _get_thinking_message(self) -> str:\n        \"\"\"Get a random thinking message.\"\"\"\n        return random.choice(self._THINKING_MESSAGES)\n\n    async def _rotate_messages(self, style: str, interval: float = 5.0):\n        \"\"\"Rotate thinking messages at specified interval.\"\"\"\n        while True:\n            try:\n                await asyncio.sleep(interval)\n                if self.spinner:\n                    message = self._get_thinking_message()\n                    formatted_message = style.format(message)\n                    self.spinner.update(formatted_message)\n            except asyncio.CancelledError:\n                break\n            except Exception:\n                break\n\n    def start(self, message: str = \"\", style: str = None):\n        \"\"\"Start the spinner with a message.\"\"\"\n        if self.spinner:\n            # Spinner already running, just update the message\n            if message == \"\":\n                message = self._get_thinking_message()\n            if style:\n                formatted_message = style.format(message)\n            else:\n                formatted_message = message\n            self.spinner.update(formatted_message)\n            return\n\n        if message == \"\":\n            message = self._get_thinking_message()\n        if style:\n            formatted_message = style.format(message)\n        else:\n            formatted_message = message\n\n        self.spinner = self.console.status(formatted_message, spinner=\"dots\")\n        self.spinner.start()\n\n        if self.rotation_task:\n            self.rotation_task.cancel()\n        if style:\n            self.rotation_task = asyncio.create_task(self._rotate_messages(style))\n\n    def stop(self):\n        \"\"\"Stop the spinner.\"\"\"\n        if self.spinner:\n            self.spinner.stop()\n            self.spinner = None\n\n        if self.rotation_task:\n            self.rotation_task.cancel()\n            self.rotation_task = None\n"
  },
  {
    "path": "src/sidekick/usage.py",
    "content": "\"\"\"Usage tracking for model API calls.\"\"\"\n\nfrom dataclasses import dataclass, field\nfrom typing import Any, Dict, Optional\n\nfrom sidekick.constants import MODELS\n\n\n@dataclass\nclass ModelUsage:\n    \"\"\"Usage statistics for a specific model.\"\"\"\n\n    requests: int = 0\n    input_tokens: int = 0\n    cached_tokens: int = 0\n    output_tokens: int = 0\n    total_cost: float = 0.0\n\n    def add_usage(\n        self, input_tokens: int, cached_tokens: int, output_tokens: int, cost: float\n    ) -> None:\n        \"\"\"Add usage data to this model's totals.\"\"\"\n        self.requests += 1\n        self.input_tokens += input_tokens\n        self.cached_tokens += cached_tokens\n        self.output_tokens += output_tokens\n        self.total_cost += cost\n\n\n@dataclass\nclass UsageTracker:\n    \"\"\"Tracks token usage and costs across multiple models.\"\"\"\n\n    # Usage per model\n    model_usage: Dict[str, ModelUsage] = field(default_factory=dict)\n\n    # Last request details (for display)\n    last_request: Optional[Dict[str, Any]] = None\n\n    def record_usage(self, model: str, usage: Any) -> None:\n        \"\"\"Record usage from a model run.\n\n        Args:\n            model: Model identifier\n            usage: Usage object from pydantic_ai\n        \"\"\"\n        # Extract token counts\n        cached_tokens = 0\n        if hasattr(usage, \"details\") and usage.details:\n            for detail in usage.details:\n                if hasattr(detail, \"cached_tokens\"):\n                    cached_tokens += detail.cached_tokens\n\n        input_tokens = usage.request_tokens\n        non_cached_input = input_tokens - cached_tokens\n        output_tokens = usage.response_tokens\n\n        # Calculate costs\n        model_ids = list(MODELS.keys())\n        pricing = MODELS.get(model, MODELS[model_ids[0]])[\"pricing\"]\n\n        input_cost = non_cached_input / 1_000_000 * pricing[\"input\"]\n        cached_cost = cached_tokens / 1_000_000 * pricing[\"cached_input\"]\n        output_cost = output_tokens / 1_000_000 * pricing[\"output\"]\n        request_cost = input_cost + cached_cost + output_cost\n\n        # Store usage\n        if model not in self.model_usage:\n            self.model_usage[model] = ModelUsage()\n\n        self.model_usage[model].add_usage(input_tokens, cached_tokens, output_tokens, request_cost)\n\n        # Store last request details for display\n        self.last_request = {\n            \"model\": model,\n            \"input_tokens\": input_tokens,\n            \"cached_tokens\": cached_tokens,\n            \"output_tokens\": output_tokens,\n            \"input_cost\": input_cost,\n            \"cached_cost\": cached_cost,\n            \"output_cost\": output_cost,\n            \"request_cost\": request_cost,\n            \"total_cost\": self.total_cost,\n        }\n\n    @property\n    def total_tokens(self) -> int:\n        \"\"\"Get total tokens across all models.\"\"\"\n        return sum(usage.input_tokens + usage.output_tokens for usage in self.model_usage.values())\n\n    @property\n    def total_cost(self) -> float:\n        \"\"\"Get total cost across all models.\"\"\"\n        return sum(usage.total_cost for usage in self.model_usage.values())\n\n    @property\n    def total_requests(self) -> int:\n        \"\"\"Get total requests across all models.\"\"\"\n        return sum(usage.requests for usage in self.model_usage.values())\n\n\n# Global usage tracker instance\nusage_tracker = UsageTracker()\n"
  },
  {
    "path": "src/sidekick/utils/__init__.py",
    "content": ""
  },
  {
    "path": "src/sidekick/utils/command.py",
    "content": "\"\"\"Command parser for extracting individual commands from shell command strings.\"\"\"\n\nimport shlex\nfrom typing import List, Set\n\n\ndef extract_commands(command_string: str) -> List[str]:\n    \"\"\"\n    Extract individual command names from a shell command string.\n\n    Handles:\n    - Simple commands: \"ls -la\" -> [\"ls\"]\n    - Chained commands: \"ls && mkdir foo\" -> [\"ls\", \"mkdir\"]\n    - Piped commands: \"ls | grep foo\" -> [\"ls\", \"grep\"]\n    - Commands with semicolons: \"cd /tmp; ls\" -> [\"cd\", \"ls\"]\n\n    Args:\n        command_string: The full shell command string\n\n    Returns:\n        List of command names (without arguments)\n    \"\"\"\n    commands = []\n\n    # First, we need to handle quoted strings to avoid splitting on separators inside quotes\n    # We'll use a simple state machine to track whether we're inside quotes\n    parts = []\n    current_part = []\n    in_single_quote = False\n    in_double_quote = False\n    i = 0\n\n    while i < len(command_string):\n        char = command_string[i]\n\n        # Handle quotes\n        if char == \"'\" and not in_double_quote:\n            in_single_quote = not in_single_quote\n            current_part.append(char)\n        elif char == '\"' and not in_single_quote:\n            in_double_quote = not in_double_quote\n            current_part.append(char)\n        # Handle separators when not in quotes\n        elif not in_single_quote and not in_double_quote:\n            # Check for two-character separators\n            if i + 1 < len(command_string):\n                two_char = command_string[i : i + 2]\n                if two_char in [\"&&\", \"||\"]:\n                    if current_part:\n                        parts.append(\"\".join(current_part))\n                        current_part = []\n                    i += 1  # Skip the second character\n                    i += 1\n                    continue\n\n            # Check for single-character separators\n            if char in [\";\", \"|\"]:\n                if current_part:\n                    parts.append(\"\".join(current_part))\n                    current_part = []\n            else:\n                current_part.append(char)\n        else:\n            current_part.append(char)\n\n        i += 1\n\n    # Don't forget the last part\n    if current_part:\n        parts.append(\"\".join(current_part))\n\n    # Extract the command name from each part\n    for part in parts:\n        part = part.strip()\n        if not part:\n            continue\n\n        try:\n            # Use shlex to properly parse the command considering quotes\n            tokens = shlex.split(part)\n            if tokens:\n                # The first token is the command name\n                command = tokens[0]\n                # Remove any path components to get just the command name\n                command = command.split(\"/\")[-1]\n                commands.append(command)\n        except ValueError:\n            # If shlex parsing fails, fall back to simple split\n            tokens = part.split()\n            if tokens:\n                command = tokens[0].split(\"/\")[-1]\n                commands.append(command)\n\n    return commands\n\n\ndef is_command_allowed(command_string: str, allowed_commands: Set[str]) -> bool:\n    \"\"\"\n    Check if all commands in a command string are allowed.\n\n    Args:\n        command_string: The full shell command string\n        allowed_commands: Set of allowed command names\n\n    Returns:\n        True if all commands are allowed, False otherwise\n    \"\"\"\n    commands = extract_commands(command_string)\n    return all(cmd in allowed_commands for cmd in commands)\n"
  },
  {
    "path": "src/sidekick/utils/error.py",
    "content": "\"\"\"Simplified error handling for Sidekick CLI.\"\"\"\n\nimport asyncio\nimport re\nimport tempfile\nimport traceback\nfrom datetime import datetime\nfrom pathlib import Path\nfrom typing import Any, Callable, List, Optional\n\nfrom pydantic_ai.exceptions import ModelHTTPError, ModelRetry\n\n\nasync def handle_error(error: Exception, display_func) -> None:\n    \"\"\"Handle any error in the application.\n\n    Args:\n        error: The exception to handle\n        display_func: Function to display error (typically ui.error)\n    \"\"\"\n    message = extract_error_message(error)\n\n    if should_log_error(error):\n        log_file = save_error_log(error)\n        display_func(message, detail=f\"Error log: {log_file}\")\n    else:\n        display_func(message)\n\n\ndef extract_error_message(error: Exception) -> str:\n    \"\"\"Extract a clean error message from any exception.\"\"\"\n    if isinstance(error, ModelHTTPError):\n        return f\"{error.model_name}: {_get_api_message(error)}\"\n\n    error_str = str(error)\n\n    if \"MALFORMED_FUNCTION_CALL\" in error_str:\n        return \"The AI model had trouble executing a function. Please try again.\"\n\n    if \"Content field missing\" in error_str:\n        return (\n            \"The AI model returned an unexpected response format. This might be a temporary issue.\"\n        )\n\n    error_type = type(error).__name__\n    module_name = type(error).__module__ if hasattr(type(error), \"__module__\") else \"\"\n\n    if error_type in [\"ClientError\", \"APIStatusError\", \"BadRequestError\", \"AuthenticationError\"]:\n        clean_msg = _extract_provider_message(error)\n        if clean_msg:\n            provider = _get_provider_name(module_name)\n            return f\"{provider}: {clean_msg}\"\n\n    if len(error_str) > 150:\n        message_match = re.search(\n            r'[\"\\']?message[\"\\']?:\\s*[\"\\']([^\"\\'\\n]+)[\"\\']', error_str, re.IGNORECASE\n        )\n        if message_match:\n            error_str = message_match.group(1)\n        else:\n            error_str = error_str[:150] + \"...\"\n\n    return f\"Unexpected error ({error_type}): {error_str}\"\n\n\ndef should_log_error(error: Exception) -> bool:\n    \"\"\"Determine if error should be logged to file.\"\"\"\n    known_errors = (asyncio.CancelledError, KeyboardInterrupt, ModelHTTPError)\n    return not isinstance(error, known_errors)\n\n\ndef save_error_log(error: Exception) -> Path:\n    \"\"\"Save error traceback to a temp file and return the path.\"\"\"\n    timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n    temp_dir = Path(tempfile.gettempdir())\n    log_file = temp_dir / f\"sidekick_error_{timestamp}.log\"\n\n    tb = traceback.format_exc()\n\n    with open(log_file, \"w\") as f:\n        f.write(\"Sidekick Error Log\\n\")\n        f.write(\"==================\\n\\n\")\n        f.write(f\"Timestamp: {datetime.now().isoformat()}\\n\")\n        f.write(f\"Error Type: {type(error).__name__}\\n\")\n        module = type(error).__module__ if hasattr(type(error), \"__module__\") else \"Unknown\"\n        f.write(f\"Error Module: {module}\\n\\n\")\n        f.write(f\"Error Message:\\n{str(error)}\\n\\n\")\n        f.write(f\"Full Traceback:\\n{tb}\")\n\n    return log_file\n\n\ndef _get_api_message(error: ModelHTTPError) -> str:\n    \"\"\"Extract API error message from ModelHTTPError.\"\"\"\n    error_msg = str(error)\n    if isinstance(error.body, dict):\n        # Try to extract message from common error structures\n        if \"error\" in error.body and isinstance(error.body[\"error\"], dict):\n            error_msg = error.body[\"error\"].get(\"message\", str(error))\n        elif \"message\" in error.body:\n            error_msg = error.body[\"message\"]\n    return error_msg\n\n\ndef _extract_provider_message(error: Exception) -> str:\n    \"\"\"Extract clean message from provider-specific errors.\"\"\"\n    if hasattr(error, \"message\"):\n        return error.message\n    elif hasattr(error, \"body\") and isinstance(error.body, dict):\n        if \"error\" in error.body and isinstance(error.body[\"error\"], dict):\n            return error.body[\"error\"].get(\"message\", \"\")\n        elif \"message\" in error.body:\n            return error.body[\"message\"]\n    elif hasattr(error, \"details\") and isinstance(error.details, dict):\n        if \"error\" in error.details and isinstance(error.details[\"error\"], dict):\n            return error.details[\"error\"].get(\"message\", \"\")\n        elif \"message\" in error.details:\n            return error.details[\"message\"]\n    return \"\"\n\n\ndef _get_provider_name(module_name: str) -> str:\n    \"\"\"Get provider name from module name.\"\"\"\n    if \"openai\" in module_name:\n        return \"OpenAI\"\n    elif \"anthropic\" in module_name:\n        return \"Anthropic\"\n    elif \"google\" in module_name or \"genai\" in module_name:\n        return \"Google\"\n    return \"Provider\"\n\n\nclass ErrorContext:\n    \"\"\"Context for error handling with cleanup callbacks.\"\"\"\n\n    def __init__(self, operation: str, ui: Any):\n        self.operation = operation\n        self.ui = ui\n        self.cleanup_callbacks: List[Callable] = []\n\n    def add_cleanup(self, callback: Callable) -> None:\n        self.cleanup_callbacks.append(callback)\n\n    async def handle(self, error: Exception) -> Optional[Any]:\n        \"\"\"Handle error with context-specific cleanup.\"\"\"\n        if isinstance(error, ModelRetry):\n            raise error\n\n        self.ui.stop_spinner()\n\n        for callback in self.cleanup_callbacks:\n            if asyncio.iscoroutinefunction(callback):\n                await callback()\n            else:\n                callback(error)\n\n        if isinstance(error, asyncio.CancelledError):\n            return None\n\n        await handle_error(error, self.ui.error_panel)\n        return None\n"
  },
  {
    "path": "src/sidekick/utils/guide.py",
    "content": "from pathlib import Path\n\n\ndef load_guide():\n    \"\"\"Load the project guide from SIDEKICK.md if it exists.\"\"\"\n    guide_path = Path.cwd() / \"SIDEKICK.md\"\n    if guide_path.exists():\n        return guide_path.read_text(encoding=\"utf-8\").strip()\n    return None\n"
  },
  {
    "path": "src/sidekick/utils/input.py",
    "content": "\"\"\"Input utilities for the Sidekick CLI.\"\"\"\n\nfrom typing import Optional\n\nfrom prompt_toolkit import PromptSession\nfrom prompt_toolkit.formatted_text import HTML, to_formatted_text\nfrom prompt_toolkit.key_binding import KeyBindings\nfrom prompt_toolkit.styles import Style\n\n# Note: We don't import colors from ui module because prompt_toolkit\n# uses a different color system than Rich\n\nPROMPT_SYMBOL = \"λ \"\nPROMPT_CONTINUATION_INDENT = \"  \"  # Same width as prompt symbol\nPLACEHOLDER_TEXT = \"Esc+Enter to submit, /help for commands\"\nPLACEHOLDER_STYLE = \"italic fg:#666666\"\n\n\ndef create_multiline_keybindings() -> KeyBindings:\n    \"\"\"Create key bindings for multiline input.\n\n    Returns:\n        KeyBindings configured for ESC+Enter to submit.\n    \"\"\"\n    bindings = KeyBindings()\n\n    @bindings.add(\"escape\", \"enter\")\n    def _handle_multiline_submit(event):\n        \"\"\"Accept the buffer on ESC+Enter.\"\"\"\n        event.current_buffer.validate_and_handle()\n\n    return bindings\n\n\ndef create_prompt_style() -> Style:\n    \"\"\"Create consistent style for prompts.\n\n    Returns:\n        Style dictionary for prompt_toolkit.\n    \"\"\"\n    return Style.from_dict(\n        {\n            \"placeholder\": PLACEHOLDER_STYLE,\n        }\n    )\n\n\ndef prompt_continuation(width: int, line_number: int, is_soft_wrap: bool) -> str:\n    \"\"\"Provide continuation prompt for multiline input.\n\n    Args:\n        width: Terminal width (unused)\n        line_number: Current line number (unused)\n        is_soft_wrap: Whether this is a soft wrap (unused)\n\n    Returns:\n        Continuation prompt string.\n    \"\"\"\n    return PROMPT_CONTINUATION_INDENT\n\n\ndef create_multiline_prompt_session() -> PromptSession:\n    \"\"\"Create a configured prompt session for multiline input.\n\n    Returns:\n        PromptSession configured for multiline input with ESC+Enter submission.\n    \"\"\"\n    placeholder_tokens = [(\"class:placeholder\", PLACEHOLDER_TEXT)]\n    placeholder_ft = to_formatted_text(placeholder_tokens)\n\n    prompt_html = HTML(f\"<ansicyan>{PROMPT_SYMBOL}</ansicyan>\")\n\n    return PromptSession(\n        message=prompt_html,\n        style=create_prompt_style(),\n        placeholder=placeholder_ft,\n        key_bindings=create_multiline_keybindings(),\n        multiline=True,\n        prompt_continuation=prompt_continuation,\n    )\n\n\nasync def get_multiline_input(session: Optional[PromptSession] = None) -> str:\n    \"\"\"Get multiline input from the user.\n\n    Args:\n        session: Optional existing PromptSession. If not provided, a new one is created.\n\n    Returns:\n        The user's input, stripped of leading/trailing whitespace.\n\n    Raises:\n        EOFError: When Ctrl+D is pressed\n        KeyboardInterrupt: When Ctrl+C is pressed\n    \"\"\"\n    if session is None:\n        session = create_multiline_prompt_session()\n\n    return (await session.prompt_async()).strip()\n"
  },
  {
    "path": "src/sidekick/utils/logger.py",
    "content": "\"\"\"Debug logging configuration for Sidekick CLI.\"\"\"\n\nimport logging\n\nfrom sidekick import ui\n\n\nclass UILogHandler(logging.Handler):\n    \"\"\"A logging handler that outputs messages to the UI's muted function.\"\"\"\n\n    def emit(self, record):\n        # Only manipulate spinner if one is active\n        # The spinner manager will handle the state internally\n        from sidekick.ui.core import _spinner_manager\n\n        spinner_was_active = _spinner_manager.spinner is not None\n        if spinner_was_active:\n            ui.stop_spinner()\n\n        ui.muted(self.format(record))\n\n        if spinner_was_active:\n            ui.start_spinner()\n\n\ndef _is_allowed_module(name: str) -> bool:\n    \"\"\"Check if a logger name is from an allowed module.\"\"\"\n    # allowed = [\"sidekick\", \"openai\", \"google_genai\", \"anthropic\", \"pydantic_ai\"]\n    allowed = [\"sidekick\"]\n    return any(name == module or name.startswith(module + \".\") for module in allowed)\n\n\ndef setup_logging(debug_enabled: bool):\n    \"\"\"Configure logging for debug mode or disable completely.\"\"\"\n    if debug_enabled:\n        logging.root.setLevel(logging.DEBUG)\n\n        handler = UILogHandler()\n        handler.setFormatter(logging.Formatter(\"⚙︎ %(levelname)s (%(name)s): %(message)s\"))\n        handler.addFilter(lambda record: _is_allowed_module(record.name))\n\n        logging.root.addHandler(handler)\n    else:\n        logging.disable(logging.CRITICAL)\n"
  },
  {
    "path": "tests/__init__.py",
    "content": "# Test package for Sidekick CLI\n"
  },
  {
    "path": "tests/agent/__init__.py",
    "content": ""
  },
  {
    "path": "tests/agent/test_process_node.py",
    "content": "from unittest.mock import Mock, patch\n\nimport pytest\nfrom pydantic_ai import messages\n\nfrom sidekick.agent import _process_node\n\n\n@pytest.mark.asyncio\nasync def test_process_node_with_request():\n    \"\"\"Test processing a node with a request.\"\"\"\n    node = Mock()\n    node.request = Mock(spec=messages.ModelRequest)\n    node.request.parts = []\n\n    delattr(node, \"model_response\")\n\n    mock_message_history = Mock()\n\n    await _process_node(node, mock_message_history)\n\n    mock_message_history.add_request.assert_called_once_with(node.request)\n\n\n@pytest.mark.asyncio\nasync def test_process_node_with_model_response_no_tools():\n    \"\"\"Test processing a node with model response but no tool calls.\"\"\"\n    node = Mock()\n    delattr(node, \"request\")\n\n    text_part = Mock(spec=messages.TextPart)\n    text_part.part_kind = \"text\"\n\n    node.model_response = Mock(spec=messages.ModelResponse)\n    node.model_response.parts = [text_part]\n\n    mock_message_history = Mock()\n\n    await _process_node(node, mock_message_history)\n\n    mock_message_history.add_response.assert_called_once_with(node.model_response)\n\n\n@pytest.mark.asyncio\nasync def test_process_node_with_tool_call():\n    \"\"\"Test processing a node with tool call.\"\"\"\n    node = Mock()\n    delattr(node, \"request\")\n\n    tool_call = Mock(spec=messages.ToolCallPart)\n    tool_call.part_kind = \"tool-call\"\n    tool_call.tool_name = \"test_tool\"\n    tool_call.tool_call_id = \"test_123\"\n    tool_call.args_as_dict = Mock(return_value={\"arg1\": \"value1\"})\n\n    node.model_response = Mock(spec=messages.ModelResponse)\n    node.model_response.parts = [tool_call]\n\n    mock_message_history = Mock()\n\n    await _process_node(node, mock_message_history)\n\n    mock_message_history.add_response.assert_called_once_with(node.model_response)\n\n\n@pytest.mark.asyncio\nasync def test_process_node_with_tool_return():\n    \"\"\"Test processing a node with tool return.\"\"\"\n    node = Mock()\n    delattr(node, \"model_response\")\n\n    tool_return = Mock()\n    tool_return.part_kind = \"tool-return\"\n    tool_return.tool_call_id = \"test_123\"\n\n    node.request = Mock(spec=messages.ModelRequest)\n    node.request.parts = [tool_return]\n\n    mock_message_history = Mock()\n\n    await _process_node(node, mock_message_history)\n\n    mock_message_history.add_request.assert_called_once_with(node.request)\n\n\n@pytest.mark.asyncio\nasync def test_process_node_with_retry_prompt():\n    \"\"\"Test processing a node with retry prompt.\"\"\"\n    node = Mock()\n    delattr(node, \"model_response\")\n\n    retry_part = Mock()\n    retry_part.part_kind = \"retry-prompt\"\n    retry_part.content = \"Trying a different approach\"\n\n    node.request = Mock(spec=messages.ModelRequest)\n    node.request.parts = [retry_part]\n\n    mock_message_history = Mock()\n\n    with patch(\"sidekick.agent.ui.muted\") as mock_muted:\n        await _process_node(node, mock_message_history)\n\n        mock_muted.assert_called_once_with(\"Trying a different approach\")\n"
  },
  {
    "path": "tests/commands/__init__.py",
    "content": ""
  },
  {
    "path": "tests/commands/test_handle_command.py",
    "content": "\"\"\"Command handler routing tests (DRY parametrised).\"\"\"\n\nfrom unittest.mock import AsyncMock\n\nimport pytest\n\nfrom sidekick.commands import handle_command\n\n\n@pytest.mark.parametrize(\n    \"user_input, patch_target, expected_args\",\n    [\n        (\"/dump\", \"handle_dump\", [None]),\n        (\"/clear\", \"handle_clear\", [None]),\n        (\"/yolo\", \"handle_yolo\", []),\n        (\"/model 2\", \"handle_model\", [[\"2\"]]),\n    ],\n)\n@pytest.mark.asyncio\nasync def test_handle_command_routes(monkeypatch, user_input, patch_target, expected_args):\n    mock_func = AsyncMock()\n    monkeypatch.setattr(f\"sidekick.commands.{patch_target}\", mock_func)\n\n    result = await handle_command(user_input)\n\n    assert result is True\n\n    call_args = mock_func.call_args[0]\n    if expected_args:\n        if len(expected_args) == 1 and isinstance(expected_args[0], list):\n            assert call_args[0] == expected_args[0]\n        else:\n            assert list(call_args) == expected_args\n    else:\n        assert call_args == ()\n\n\n@pytest.mark.asyncio\nasync def test_handle_command_non_command():\n    \"\"\"Input without leading slash should not be treated as command.\"\"\"\n    assert await handle_command(\"not a command\") is False\n\n\n@pytest.mark.asyncio\nasync def test_handle_command_unknown():\n    \"\"\"Unknown command string should return True and display error.\"\"\"\n    assert await handle_command(\"/unknown\") is True\n"
  },
  {
    "path": "tests/commands/test_handle_dump.py",
    "content": "\"\"\"Test /dump command handler.\"\"\"\n\nfrom unittest.mock import Mock, patch\n\nimport pytest\n\nfrom sidekick.commands import handle_dump\n\n\n@pytest.mark.asyncio\nasync def test_handle_dump_writes_to_file_and_pretty_prints(mock_ui, tmp_path):\n    \"\"\"Test /dump command writes message history to dump.log and pretty prints it.\"\"\"\n    temp_dump_file = tmp_path / \"dump.log\"\n\n    mock_message_history = Mock()\n    mock_message_history.__iter__ = Mock(\n        return_value=iter(\n            [\n                {\"role\": \"user\", \"content\": \"hello\"},\n                {\"role\": \"agent\", \"content\": \"hi\"},\n            ]\n        )\n    )\n\n    with (\n        patch(\"sidekick.commands.dump.ui\", mock_ui),\n        patch(\"sidekick.commands.dump.DUMP_FILE_PATH\", str(temp_dump_file)),\n    ):\n        await handle_dump(mock_message_history)\n\n        mock_ui.success.assert_called_once_with(f\"Message history dumped to {temp_dump_file}\")\n\n        assert temp_dump_file.exists()\n        content = temp_dump_file.read_text()\n\n        assert \"Message #0 - Type: dict\" in content\n        assert \"Message #1 - Type: dict\" in content\n        assert \"'role': 'user'\" in content\n        assert \"'content': 'hello'\" in content\n        assert \"'role': 'agent'\" in content\n        assert \"'content': 'hi'\" in content\n        assert \"=\" * 80 in content\n"
  },
  {
    "path": "tests/commands/test_handle_model.py",
    "content": "\"\"\"Test /model command handler.\"\"\"\n\nfrom unittest.mock import MagicMock, patch\n\nimport pytest\n\nfrom sidekick.commands import handle_model\n\n\n@pytest.mark.asyncio\nasync def test_handle_model_list(mock_ui, mock_session, mock_models):\n    \"\"\"Test /model with no args lists available models.\"\"\"\n    mock_session.current_model = \"model2\"\n\n    with (\n        patch(\"sidekick.commands.model.ui\", mock_ui),\n        patch(\"sidekick.commands.model.session\", mock_session),\n        patch(\"sidekick.commands.model.MODELS\", mock_models),\n    ):\n        await handle_model([])\n\n        mock_ui.info_panel.assert_called_once()\n\n\n@pytest.mark.asyncio\nasync def test_handle_model_switch(mock_ui, mock_session, mock_models):\n    \"\"\"Test /model <num> switches to selected model.\"\"\"\n\n    with (\n        patch(\"sidekick.commands.model.ui\", mock_ui),\n        patch(\"sidekick.commands.model.session\", mock_session),\n        patch(\"sidekick.commands.model.MODELS\", mock_models),\n    ):\n        await handle_model([\"2\"])\n\n        assert mock_session.current_model == \"model2\"\n        mock_ui.info.assert_called_with(\"Switched to model: model2\")\n\n\n@pytest.mark.asyncio\nasync def test_handle_model_invalid_number(mock_ui):\n    \"\"\"Test /model with invalid number shows error.\"\"\"\n    with (\n        patch(\"sidekick.commands.model.ui\", mock_ui),\n        patch(\"sidekick.commands.model.MODELS\", {\"model1\": {}, \"model2\": {}}),\n    ):\n        await handle_model([\"5\"])\n        mock_ui.error.assert_called_with(\"Invalid model number. Choose between 1 and 2\")\n\n\n@pytest.mark.asyncio\nasync def test_handle_model_set_default(mock_ui):\n    \"\"\"Test /model <num> default sets default model in config.\"\"\"\n    mock_update = MagicMock()\n\n    with (\n        patch(\"sidekick.commands.model.ui\", mock_ui),\n        patch(\"sidekick.commands.model.MODELS\", {\"model1\": {}, \"model2\": {}}),\n        patch(\"sidekick.commands.model.update_config_file\", mock_update),\n    ):\n        await handle_model([\"2\", \"default\"])\n\n        mock_update.assert_called_once_with({\"default_model\": \"model2\"})\n        mock_ui.success.assert_called_with(\"Set model2 as default model\")\n"
  },
  {
    "path": "tests/commands/test_handle_yolo.py",
    "content": "\"\"\"Test /yolo command handler.\"\"\"\n\nfrom unittest.mock import patch\n\nimport pytest\n\nfrom sidekick.commands import handle_yolo\n\n\n@pytest.mark.asyncio\nasync def test_handle_yolo_disables_confirmation(mock_ui, mock_session):\n    \"\"\"Test /yolo toggles confirmation from enabled to disabled.\"\"\"\n    mock_session.confirmation_enabled = True\n\n    with (\n        patch(\"sidekick.commands.yolo.ui\", mock_ui),\n        patch(\"sidekick.commands.yolo.session\", mock_session),\n    ):\n        await handle_yolo()\n        assert mock_session.confirmation_enabled is False\n        mock_ui.info.assert_called_with(\"Tool confirmations disabled (YOLO mode)\")\n\n\n@pytest.mark.asyncio\nasync def test_handle_yolo_enables_confirmation(mock_ui, mock_session):\n    \"\"\"Test /yolo toggles confirmation from disabled to enabled.\"\"\"\n    mock_session.confirmation_enabled = False\n\n    with (\n        patch(\"sidekick.commands.yolo.ui\", mock_ui),\n        patch(\"sidekick.commands.yolo.session\", mock_session),\n    ):\n        await handle_yolo()\n        assert mock_session.confirmation_enabled is True\n        mock_ui.info.assert_called_with(\"Tool confirmations enabled\")\n"
  },
  {
    "path": "tests/config/__init__.py",
    "content": "# Config tests package\n"
  },
  {
    "path": "tests/config/test_config_exists.py",
    "content": "\"\"\"Tests for config_exists function.\"\"\"\n\nfrom unittest.mock import patch\n\nfrom sidekick.config import config_exists\n\n\ndef test_returns_true_when_exists():\n    \"\"\"Test config_exists returns True when file exists.\"\"\"\n    with patch(\"pathlib.Path.exists\", return_value=True):\n        assert config_exists() is True\n\n\ndef test_returns_false_when_not_exists():\n    \"\"\"Test config_exists returns False when file doesn't exist.\"\"\"\n    with patch(\"pathlib.Path.exists\", return_value=False):\n        assert config_exists() is False\n"
  },
  {
    "path": "tests/config/test_deep_merge_dicts.py",
    "content": "from sidekick.config import deep_merge_dicts\n\n\ndef test_simple_merge():\n    \"\"\"Test merging simple dictionaries.\"\"\"\n    base = {\"a\": 1, \"b\": 2}\n    update = {\"b\": 3, \"c\": 4}\n\n    result = deep_merge_dicts(base, update)\n\n    assert result == {\"a\": 1, \"b\": 3, \"c\": 4}\n\n\ndef test_nested_dict_merge():\n    \"\"\"Test merging nested dictionaries.\"\"\"\n    base = {\"settings\": {\"allowed_tools\": [\"read_file\"], \"allowed_commands\": [\"ls\", \"cat\"]}}\n    update = {\"settings\": {\"allowed_commands\": [\"grep\", \"pwd\"], \"custom_field\": \"value\"}}\n\n    result = deep_merge_dicts(base, update)\n\n    assert result == {\n        \"settings\": {\n            \"allowed_tools\": [\"read_file\"],\n            \"allowed_commands\": [\"grep\", \"pwd\"],\n            \"custom_field\": \"value\",\n        }\n    }\n\n\ndef test_deep_nested_merge():\n    \"\"\"Test merging deeply nested structures.\"\"\"\n    base = {\"a\": {\"b\": {\"c\": 1, \"d\": 2}, \"e\": 3}}\n    update = {\"a\": {\"b\": {\"c\": 10, \"f\": 4}, \"g\": 5}}\n\n    result = deep_merge_dicts(base, update)\n\n    assert result == {\"a\": {\"b\": {\"c\": 10, \"d\": 2, \"f\": 4}, \"e\": 3, \"g\": 5}}\n\n\ndef test_list_override():\n    \"\"\"Test that lists are overridden, not merged.\"\"\"\n    base = {\"items\": [1, 2, 3]}\n    update = {\"items\": [4, 5]}\n\n    result = deep_merge_dicts(base, update)\n\n    assert result == {\"items\": [4, 5]}\n\n\ndef test_mixed_types_override():\n    \"\"\"Test that mixed types result in override.\"\"\"\n    base = {\"field\": {\"nested\": \"value\"}}\n    update = {\"field\": \"string\"}\n\n    result = deep_merge_dicts(base, update)\n\n    assert result == {\"field\": \"string\"}\n\n\ndef test_empty_dicts():\n    \"\"\"Test merging with empty dictionaries.\"\"\"\n    assert deep_merge_dicts({}, {}) == {}\n    assert deep_merge_dicts({\"a\": 1}, {}) == {\"a\": 1}\n    assert deep_merge_dicts({}, {\"b\": 2}) == {\"b\": 2}\n\n\ndef test_none_values():\n    \"\"\"Test handling of None values.\"\"\"\n    base = {\"a\": 1, \"b\": None}\n    update = {\"b\": 2, \"c\": None}\n\n    result = deep_merge_dicts(base, update)\n\n    assert result == {\"a\": 1, \"b\": 2, \"c\": None}\n\n\ndef test_preserves_update_values():\n    \"\"\"Test that update values always take precedence.\"\"\"\n    base = {\"env\": {\"API_KEY\": \"default-key\", \"OTHER_KEY\": \"default-other\"}}\n    update = {\"env\": {\"API_KEY\": \"user-key\"}}\n\n    result = deep_merge_dicts(base, update)\n\n    assert result[\"env\"][\"API_KEY\"] == \"user-key\"\n    assert result[\"env\"][\"OTHER_KEY\"] == \"default-other\"\n"
  },
  {
    "path": "tests/config/test_ensure_config_structure.py",
    "content": "import json\nimport tempfile\nimport time\nfrom pathlib import Path\nfrom unittest.mock import patch\n\nimport pytest\n\nfrom src.sidekick.config import ConfigError, ensure_config_structure\nfrom src.sidekick.constants import DEFAULT_USER_CONFIG\n\n\ndef test_preserves_user_settings():\n    \"\"\"Test that existing user settings are preserved.\"\"\"\n    user_config = {\n        \"default_model\": \"gpt-4o\",\n        \"env\": {\"OPENAI_API_KEY\": \"sk-user123\", \"CUSTOM_KEY\": \"custom-value\"},\n        \"settings\": {\n            \"allowed_tools\": [\"bash\", \"write_file\"],\n            \"allowed_commands\": [\"rm\", \"mv\", \"cp\"],\n        },\n        \"mcpServers\": {\"myserver\": {\"command\": \"node\", \"args\": [\"server.js\"]}},\n    }\n\n    with tempfile.NamedTemporaryFile(mode=\"w\", suffix=\".json\", delete=False) as f:\n        json.dump(user_config, f, indent=2)\n        temp_path = Path(f.name)\n\n    try:\n        with patch(\"src.sidekick.config.get_config_path\", return_value=temp_path):\n            result = ensure_config_structure()\n\n            # User values should be preserved\n            assert result[\"default_model\"] == \"gpt-4o\"\n            assert result[\"env\"][\"OPENAI_API_KEY\"] == \"sk-user123\"\n            assert result[\"env\"][\"CUSTOM_KEY\"] == \"custom-value\"\n            assert result[\"settings\"][\"allowed_tools\"] == [\"bash\", \"write_file\"]\n            assert result[\"settings\"][\"allowed_commands\"] == [\"rm\", \"mv\", \"cp\"]\n            assert \"myserver\" in result[\"mcpServers\"]\n\n            # Default values should still be present for missing keys\n            assert \"ANTHROPIC_API_KEY\" in result[\"env\"]\n            assert \"GEMINI_API_KEY\" in result[\"env\"]\n    finally:\n        temp_path.unlink()\n\n\ndef test_adds_missing_defaults():\n    \"\"\"Test that missing fields are added with defaults.\"\"\"\n    minimal_config = {\n        \"default_model\": \"claude-3-5-sonnet\",\n        \"env\": {\"ANTHROPIC_API_KEY\": \"sk-ant123\"},\n    }\n\n    with tempfile.NamedTemporaryFile(mode=\"w\", suffix=\".json\", delete=False) as f:\n        json.dump(minimal_config, f, indent=2)\n        temp_path = Path(f.name)\n\n    try:\n        with patch(\"src.sidekick.config.get_config_path\", return_value=temp_path):\n            result = ensure_config_structure()\n\n            # User values preserved\n            assert result[\"default_model\"] == \"claude-3-5-sonnet\"\n            assert result[\"env\"][\"ANTHROPIC_API_KEY\"] == \"sk-ant123\"\n\n            # Missing fields added\n            assert \"mcpServers\" in result\n            assert result[\"mcpServers\"] == {}\n            assert \"settings\" in result\n            assert (\n                result[\"settings\"][\"allowed_tools\"]\n                == DEFAULT_USER_CONFIG[\"settings\"][\"allowed_tools\"]\n            )\n            assert len(result[\"settings\"][\"allowed_commands\"]) > 0\n            assert \"ls\" in result[\"settings\"][\"allowed_commands\"]\n\n            # Verify file was updated\n            with open(temp_path) as f:\n                file_content = json.load(f)\n            assert \"settings\" in file_content\n            assert \"mcpServers\" in file_content\n    finally:\n        temp_path.unlink()\n\n\ndef test_does_not_add_tool_ignore_field():\n    \"\"\"Test that tool_ignore is not added to config when updating.\"\"\"\n    config_with_legacy = {\n        \"default_model\": \"gpt-4o\",\n        \"env\": {\"OPENAI_API_KEY\": \"sk-123\"},\n        \"settings\": {\"tool_ignore\": [\"bash\", \"write_file\"]},\n    }\n\n    with tempfile.NamedTemporaryFile(mode=\"w\", suffix=\".json\", delete=False) as f:\n        json.dump(config_with_legacy, f, indent=2)\n        temp_path = Path(f.name)\n\n    try:\n        with patch(\"src.sidekick.config.get_config_path\", return_value=temp_path):\n            result = ensure_config_structure()\n\n            # Should have allowed_tools from defaults\n            assert \"allowed_tools\" in result[\"settings\"]\n\n            # Should still have tool_ignore (preserved, not removed)\n            assert \"tool_ignore\" in result[\"settings\"]\n            assert result[\"settings\"][\"tool_ignore\"] == [\"bash\", \"write_file\"]\n\n            # Verify the file still has tool_ignore\n            with open(temp_path) as f:\n                file_content = json.load(f)\n            assert file_content[\"settings\"][\"tool_ignore\"] == [\"bash\", \"write_file\"]\n    finally:\n        temp_path.unlink()\n\n\ndef test_empty_config_gets_full_defaults():\n    \"\"\"Test that an empty config gets all defaults.\"\"\"\n    empty_config = {}\n\n    with tempfile.NamedTemporaryFile(mode=\"w\", suffix=\".json\", delete=False) as f:\n        json.dump(empty_config, f, indent=2)\n        temp_path = Path(f.name)\n\n    try:\n        with patch(\"src.sidekick.config.get_config_path\", return_value=temp_path):\n            # This should fail validation before ensure_config_structure is called\n            # but let's test the merge behavior anyway\n            with patch(\"src.sidekick.config.validate_config_structure\"):\n                result = ensure_config_structure()\n\n                # Should have all defaults\n                assert result == DEFAULT_USER_CONFIG\n    finally:\n        temp_path.unlink()\n\n\ndef test_no_file_update_when_no_changes():\n    \"\"\"Test that file is not rewritten when no changes are needed.\"\"\"\n    # Config that already has all expected fields\n    complete_config = {\n        \"default_model\": \"claude-3-5-sonnet-20241022\",\n        \"env\": {\n            \"ANTHROPIC_API_KEY\": \"your-anthropic-api-key\",\n            \"OPENAI_API_KEY\": \"your-openai-api-key\",\n            \"GEMINI_API_KEY\": \"your-gemini-api-key\",\n        },\n        \"mcpServers\": {},\n        \"settings\": {\n            \"allowed_tools\": [\"read_file\"],\n            \"allowed_commands\": [\n                \"ls\",\n                \"cat\",\n                \"rg\",\n                \"find\",\n                \"pwd\",\n                \"echo\",\n                \"which\",\n                \"head\",\n                \"tail\",\n                \"wc\",\n                \"sort\",\n                \"uniq\",\n                \"diff\",\n                \"tree\",\n                \"file\",\n                \"stat\",\n                \"du\",\n                \"df\",\n                \"ps\",\n                \"top\",\n                \"env\",\n                \"date\",\n                \"whoami\",\n                \"hostname\",\n                \"uname\",\n                \"id\",\n                \"groups\",\n                \"history\",\n            ],\n        },\n    }\n\n    with tempfile.NamedTemporaryFile(mode=\"w\", suffix=\".json\", delete=False) as f:\n        json.dump(complete_config, f, indent=2)\n        temp_path = Path(f.name)\n\n    # Small delay to ensure different mtime if file is modified\n    time.sleep(0.01)\n    original_mtime = temp_path.stat().st_mtime\n\n    try:\n        with patch(\"src.sidekick.config.get_config_path\", return_value=temp_path):\n            result = ensure_config_structure()\n\n            # Should return the same config\n            assert result == complete_config\n\n            # File should not have been modified\n            assert temp_path.stat().st_mtime == original_mtime\n    finally:\n        temp_path.unlink()\n\n\ndef test_raises_config_error_on_missing_file():\n    \"\"\"Test that ConfigError is raised when config file doesn't exist.\"\"\"\n    non_existent_path = Path(\"/tmp/does_not_exist_12345.json\")\n\n    with patch(\"src.sidekick.config.get_config_path\", return_value=non_existent_path):\n        with pytest.raises(ConfigError):\n            ensure_config_structure()\n"
  },
  {
    "path": "tests/config/test_get_config_path.py",
    "content": "\"\"\"Tests for get_config_path function.\"\"\"\n\nfrom pathlib import Path\n\nfrom sidekick.config import get_config_path\n\n\ndef test_returns_correct_path():\n    \"\"\"Test that get_config_path returns the expected path.\"\"\"\n    expected = Path.home() / \".config\" / \"sidekick.json\"\n    assert get_config_path() == expected\n"
  },
  {
    "path": "tests/config/test_parse_mcp_servers.py",
    "content": "\"\"\"Tests for parse_mcp_servers function.\"\"\"\n\nimport pytest\n\nfrom sidekick.config import ConfigValidationError, parse_mcp_servers\n\n\ndef test_returns_empty_dict_when_no_mcp_servers():\n    \"\"\"Test returns empty dict when mcpServers not present.\"\"\"\n    config = {\"default_model\": \"test\", \"env\": {}}\n    assert parse_mcp_servers(config) == {}\n\n\ndef test_returns_valid_mcp_servers():\n    \"\"\"Test returns MCP servers when valid.\"\"\"\n    config = {\"mcpServers\": {\"fetch\": {\"command\": \"uvx\", \"args\": [\"mcp-server-fetch\"]}}}\n    result = parse_mcp_servers(config)\n    assert result == config[\"mcpServers\"]\n\n\ndef test_accepts_server_with_name_field():\n    \"\"\"Test accepts server config with optional name field.\"\"\"\n    config = {\n        \"mcpServers\": {\n            \"fetch\": {\"command\": \"uvx\", \"args\": [\"mcp-server-fetch\"], \"name\": \"Fetch Server\"}\n        }\n    }\n    result = parse_mcp_servers(config)\n    assert result == config[\"mcpServers\"]\n\n\ndef test_accepts_server_with_env():\n    \"\"\"Test accepts server config with env vars.\"\"\"\n    config = {\n        \"mcpServers\": {\n            \"brave\": {\n                \"command\": \"npx\",\n                \"args\": [\"-y\", \"@modelcontextprotocol/server-brave-search\"],\n                \"env\": {\"BRAVE_API_KEY\": \"test-key\"},\n            }\n        }\n    }\n    result = parse_mcp_servers(config)\n    assert result == config[\"mcpServers\"]\n\n\ndef test_raises_for_non_dict_mcp_servers():\n    \"\"\"Test ConfigValidationError when mcpServers not dict.\"\"\"\n    with pytest.raises(ConfigValidationError) as exc_info:\n        parse_mcp_servers({\"mcpServers\": \"not a dict\"})\n    assert \"'mcpServers' field must be an object\" in str(exc_info.value)\n\n\ndef test_raises_for_invalid_server_config():\n    \"\"\"Test ConfigValidationError for invalid server config.\"\"\"\n    with pytest.raises(ConfigValidationError) as exc_info:\n        parse_mcp_servers({\"mcpServers\": {\"fetch\": \"not a dict\"}})\n    assert \"MCP server 'fetch' configuration must be an object\" in str(exc_info.value)\n\n\ndef test_raises_for_missing_command():\n    \"\"\"Test ConfigValidationError when command missing.\"\"\"\n    with pytest.raises(ConfigValidationError) as exc_info:\n        parse_mcp_servers({\"mcpServers\": {\"fetch\": {\"args\": []}}})\n    assert \"MCP server 'fetch' missing required field 'command'\" in str(exc_info.value)\n\n\ndef test_raises_for_missing_args():\n    \"\"\"Test ConfigValidationError when args missing.\"\"\"\n    with pytest.raises(ConfigValidationError) as exc_info:\n        parse_mcp_servers({\"mcpServers\": {\"fetch\": {\"command\": \"uvx\"}}})\n    assert \"MCP server 'fetch' missing required field 'args'\" in str(exc_info.value)\n\n\ndef test_raises_for_non_string_command():\n    \"\"\"Test ConfigValidationError when command not string.\"\"\"\n    with pytest.raises(ConfigValidationError) as exc_info:\n        parse_mcp_servers({\"mcpServers\": {\"fetch\": {\"command\": 123}}})\n    assert \"MCP server 'fetch' field 'command' must be a string\" in str(exc_info.value)\n\n\ndef test_raises_for_non_list_args():\n    \"\"\"Test ConfigValidationError when args not list.\"\"\"\n    with pytest.raises(ConfigValidationError) as exc_info:\n        parse_mcp_servers({\"mcpServers\": {\"fetch\": {\"command\": \"uvx\", \"args\": \"not a list\"}}})\n    assert \"MCP server 'fetch' field 'args' must be an array\" in str(exc_info.value)\n\n\ndef test_raises_for_empty_args():\n    \"\"\"Test ConfigValidationError when args is empty list.\"\"\"\n    with pytest.raises(ConfigValidationError) as exc_info:\n        parse_mcp_servers({\"mcpServers\": {\"fetch\": {\"command\": \"uvx\", \"args\": []}}})\n    assert \"MCP server 'fetch' field 'args' must contain at least one argument\" in str(\n        exc_info.value\n    )\n\n\ndef test_raises_for_non_dict_env():\n    \"\"\"Test ConfigValidationError when env not dict.\"\"\"\n    with pytest.raises(ConfigValidationError) as exc_info:\n        parse_mcp_servers(\n            {\n                \"mcpServers\": {\n                    \"fetch\": {\"command\": \"uvx\", \"args\": [\"mcp-server-fetch\"], \"env\": \"not a dict\"}\n                }\n            }\n        )\n    assert \"MCP server 'fetch' field 'env' must be an object\" in str(exc_info.value)\n"
  },
  {
    "path": "tests/config/test_read_config_file.py",
    "content": "\"\"\"Tests for read_config_file function.\"\"\"\n\nfrom unittest.mock import mock_open, patch\n\nimport pytest\n\nfrom sidekick.config import ConfigError, ConfigValidationError, read_config_file\n\n\ndef test_reads_valid_json():\n    \"\"\"Test reading valid JSON config.\"\"\"\n    mock_json = '{\"default_model\": \"test-model\", \"env\": {}}'\n    with patch(\"pathlib.Path.exists\", return_value=True):\n        with patch(\"builtins.open\", mock_open(read_data=mock_json)):\n            config = read_config_file()\n            assert config == {\"default_model\": \"test-model\", \"env\": {}}\n\n\ndef test_raises_file_not_found():\n    \"\"\"Test ConfigError when config doesn't exist.\"\"\"\n    with patch(\"pathlib.Path.exists\", return_value=False):\n        with pytest.raises(ConfigError) as exc_info:\n            read_config_file()\n        assert \"Config file not found\" in str(exc_info.value)\n\n\ndef test_raises_permission_error():\n    \"\"\"Test ConfigError when can't access file.\"\"\"\n    with patch(\"pathlib.Path.exists\", return_value=True):\n        with patch(\"builtins.open\", side_effect=PermissionError(\"Access denied\")):\n            with pytest.raises(ConfigError) as exc_info:\n                read_config_file()\n            assert \"Cannot access config file\" in str(exc_info.value)\n\n\ndef test_raises_json_decode_error():\n    \"\"\"Test ConfigValidationError for invalid JSON.\"\"\"\n    with patch(\"pathlib.Path.exists\", return_value=True):\n        with patch(\"builtins.open\", mock_open(read_data=\"invalid json\")):\n            with pytest.raises(ConfigValidationError):\n                read_config_file()\n"
  },
  {
    "path": "tests/config/test_set_env_vars.py",
    "content": "\"\"\"Tests for set_env_vars function.\"\"\"\n\nimport os\nfrom unittest.mock import patch\n\nfrom sidekick.config import set_env_vars\n\n\ndef test_sets_string_env_vars():\n    \"\"\"Test that string environment variables are set.\"\"\"\n    env_dict = {\"API_KEY\": \"test-key\", \"ANOTHER_VAR\": \"test-value\"}\n\n    with patch.dict(os.environ, {}, clear=True):\n        set_env_vars(env_dict)\n        assert os.environ.get(\"API_KEY\") == \"test-key\"\n        assert os.environ.get(\"ANOTHER_VAR\") == \"test-value\"\n\n\ndef test_skips_empty_values():\n    \"\"\"Test that empty string values are skipped.\"\"\"\n    env_dict = {\"API_KEY\": \"test-key\", \"EMPTY_VAR\": \"\"}\n\n    with patch.dict(os.environ, {}, clear=True):\n        set_env_vars(env_dict)\n        assert os.environ.get(\"API_KEY\") == \"test-key\"\n        assert \"EMPTY_VAR\" not in os.environ\n\n\ndef test_skips_non_string_values():\n    \"\"\"Test that non-string values are skipped.\"\"\"\n    env_dict = {\"API_KEY\": \"test-key\", \"NUMBER_VAR\": 123, \"BOOL_VAR\": True, \"NONE_VAR\": None}\n\n    with patch.dict(os.environ, {}, clear=True):\n        set_env_vars(env_dict)\n        assert os.environ.get(\"API_KEY\") == \"test-key\"\n        assert \"NUMBER_VAR\" not in os.environ\n        assert \"BOOL_VAR\" not in os.environ\n        assert \"NONE_VAR\" not in os.environ\n\n\ndef test_handles_empty_dict():\n    \"\"\"Test that empty dict is handled gracefully.\"\"\"\n    with patch.dict(os.environ, {\"EXISTING\": \"value\"}, clear=True):\n        set_env_vars({})\n        # Should not change existing env\n        assert os.environ.get(\"EXISTING\") == \"value\"\n"
  },
  {
    "path": "tests/config/test_update_config_file.py",
    "content": "\"\"\"Test update_config_file function.\"\"\"\n\nimport json\nimport tempfile\nfrom pathlib import Path\nfrom unittest.mock import patch\n\nimport pytest\n\nfrom sidekick.config import ConfigError, update_config_file\n\n\ndef test_update_config_file_success():\n    \"\"\"Test successful config update.\"\"\"\n    with tempfile.NamedTemporaryFile(mode=\"w\", suffix=\".json\", delete=False) as f:\n        config = {\"default_model\": \"old-model\", \"env\": {\"KEY\": \"value\"}}\n        json.dump(config, f)\n        temp_path = Path(f.name)\n\n    try:\n        with patch(\"sidekick.config.get_config_path\", return_value=temp_path):\n            update_config_file({\"default_model\": \"new-model\"})\n\n            # Read back the updated config\n            with open(temp_path) as f:\n                updated = json.load(f)\n\n            assert updated[\"default_model\"] == \"new-model\"\n            assert updated[\"env\"][\"KEY\"] == \"value\"  # Other fields preserved\n    finally:\n        temp_path.unlink()\n\n\ndef test_update_config_file_no_config():\n    \"\"\"Test update when config doesn't exist.\"\"\"\n    with patch(\"sidekick.config.get_config_path\", return_value=Path(\"/nonexistent/path\")):\n        with pytest.raises(ConfigError, match=\"Config file not found\"):\n            update_config_file({\"default_model\": \"new-model\"})\n\n\ndef test_update_config_file_merge_nested():\n    \"\"\"Test that nested dicts are merged, not replaced.\"\"\"\n    with tempfile.NamedTemporaryFile(mode=\"w\", suffix=\".json\", delete=False) as f:\n        config = {\"env\": {\"KEY1\": \"value1\", \"KEY2\": \"value2\"}}\n        json.dump(config, f)\n        temp_path = Path(f.name)\n\n    try:\n        with patch(\"sidekick.config.get_config_path\", return_value=temp_path):\n            update_config_file({\"env\": {\"KEY2\": \"updated\", \"KEY3\": \"new\"}})\n\n            with open(temp_path) as f:\n                updated = json.load(f)\n\n            assert updated[\"env\"][\"KEY1\"] == \"value1\"  # Preserved\n            assert updated[\"env\"][\"KEY2\"] == \"updated\"  # Updated\n            assert updated[\"env\"][\"KEY3\"] == \"new\"  # Added\n    finally:\n        temp_path.unlink()\n"
  },
  {
    "path": "tests/config/test_validate_config_structure.py",
    "content": "\"\"\"Tests for validate_config_structure function.\"\"\"\n\nimport pytest\n\nfrom sidekick.config import ConfigValidationError, validate_config_structure\n\n\ndef test_valid_config_passes():\n    \"\"\"Test that valid config passes validation.\"\"\"\n    config = {\"default_model\": \"test-model\", \"env\": {\"API_KEY\": \"test\"}}\n    # Should not raise\n    validate_config_structure(config)\n\n\ndef test_valid_config_with_empty_env():\n    \"\"\"Test valid config with empty env dict.\"\"\"\n    config = {\"default_model\": \"test-model\", \"env\": {}}\n    # Should not raise\n    validate_config_structure(config)\n\n\ndef test_raises_for_non_dict():\n    \"\"\"Test ConfigValidationError for non-dict config.\"\"\"\n    with pytest.raises(ConfigValidationError) as exc_info:\n        validate_config_structure(\"not a dict\")\n    assert \"Config must be a JSON object\" in str(exc_info.value)\n\n\ndef test_raises_for_missing_default_model():\n    \"\"\"Test ConfigValidationError when default_model missing.\"\"\"\n    with pytest.raises(ConfigValidationError) as exc_info:\n        validate_config_structure({\"env\": {}})\n    assert \"Config missing required field 'default_model'\" in str(exc_info.value)\n\n\ndef test_raises_for_non_string_default_model():\n    \"\"\"Test ConfigValidationError when default_model not string.\"\"\"\n    with pytest.raises(ConfigValidationError) as exc_info:\n        validate_config_structure({\"default_model\": 123, \"env\": {}})\n    assert \"'default_model' must be a string\" in str(exc_info.value)\n\n\ndef test_raises_for_missing_env():\n    \"\"\"Test ConfigValidationError when env field missing.\"\"\"\n    with pytest.raises(ConfigValidationError) as exc_info:\n        validate_config_structure({\"default_model\": \"test-model\"})\n    assert \"Config missing required field 'env'\" in str(exc_info.value)\n\n\ndef test_raises_for_non_dict_env():\n    \"\"\"Test ConfigValidationError when env is not a dict.\"\"\"\n    with pytest.raises(ConfigValidationError) as exc_info:\n        validate_config_structure({\"default_model\": \"test\", \"env\": \"not a dict\"})\n    assert \"'env' field must be an object\" in str(exc_info.value)\n"
  },
  {
    "path": "tests/conftest.py",
    "content": "\"\"\"Shared fixtures and helpers for tests.\"\"\"\n\nfrom unittest.mock import AsyncMock, MagicMock, Mock\n\nimport pytest\nfrom pydantic_ai import messages\n\n# ---------------------------------------------------------------------------\n# Generic mocks\n# ---------------------------------------------------------------------------\n\n\n@pytest.fixture\ndef mock_ui():\n    \"\"\"Return a mocked *ui* module/object.\"\"\"\n    return MagicMock()\n\n\n@pytest.fixture\ndef mock_session():\n    \"\"\"Return a mocked *session* object.\"\"\"\n    return MagicMock()\n\n\n@pytest.fixture\ndef mock_models():\n    \"\"\"Return a simple models mapping for tests that need it.\"\"\"\n    return {\"model1\": {}, \"model2\": {}, \"model3\": {}}\n\n\n@pytest.fixture\ndef mock_config():\n    \"\"\"Return a minimal config dict.\"\"\"\n    return {\n        \"default_model\": \"claude-3-5-sonnet\",\n        \"env\": {},\n        \"settings\": {},\n    }\n\n\n@pytest.fixture\ndef mock_usage():\n    \"\"\"Return a usage-like object with default numbers (can be tweaked inside tests).\"\"\"\n    usage = MagicMock()\n    usage.requests = 1\n    usage.request_tokens = 1000\n    usage.response_tokens = 500\n    usage.total_tokens = 1500\n    usage.details = []\n    return usage\n\n\n# ---------------------------------------------------------------------------\n# Helper / factory fixtures\n# ---------------------------------------------------------------------------\n\n\n@pytest.fixture\ndef make_mock_process():\n    \"\"\"Factory to create an *asyncio* subprocess mock with the given stdout/stderr/returncode.\"\"\"\n\n    def _factory(stdout: str = \"\", stderr: str = \"\", returncode: int = 0):\n        proc = AsyncMock()\n        proc.communicate = AsyncMock(return_value=(stdout.encode(), stderr.encode()))\n        proc.returncode = returncode\n        return proc\n\n    return _factory\n\n\n@pytest.fixture\ndef make_tool_call():\n    \"\"\"Factory that produces a *messages.ToolCallPart*-like mock used in agent tests.\"\"\"\n\n    def _factory(name: str = \"test_tool\", call_id: str = \"call_id\"):\n        tc = Mock(spec=messages.ToolCallPart)\n        tc.part_kind = \"tool-call\"\n        tc.tool_name = name\n        tc.tool_call_id = call_id\n        return tc\n\n    return _factory\n\n\n# ---------------------------------------------------------------------------\n# Composite fixtures for common patching scenarios\n# ---------------------------------------------------------------------------\n\n\n@pytest.fixture\ndef patched_commands_env(monkeypatch, mock_ui, mock_session):\n    \"\"\"Patch *sidekick.commands.ui* and *.session* once for a whole test function.\"\"\"\n    import sidekick.commands as _cmd\n\n    monkeypatch.setattr(_cmd, \"ui\", mock_ui)\n    monkeypatch.setattr(_cmd, \"session\", mock_session)\n    yield\n"
  },
  {
    "path": "tests/main/test_error_handling.py",
    "content": "\"\"\"Tests for error handling integration in the REPL.\"\"\"\n\nimport asyncio\nfrom unittest.mock import MagicMock, patch\n\nimport pytest\nfrom pydantic_ai.exceptions import ModelHTTPError\n\nfrom sidekick.repl import Repl\n\n\n@pytest.fixture\ndef mock_repl():\n    \"\"\"Fixture to create a mock Repl instance for testing.\"\"\"\n    with patch(\"signal.signal\"):\n        repl = Repl(project_guide=None)\n        return repl\n\n\n@pytest.mark.asyncio\nasync def test_handle_user_request_with_error(mock_repl):\n    \"\"\"Test that _handle_user_request properly calls the error handler.\"\"\"\n    mock_ui = MagicMock()\n    mock_session = MagicMock(sigint_received=False, current_task=None)\n\n    with patch(\"sidekick.repl.process_request\") as mock_process:\n        mock_process.side_effect = ValueError(\"Test error\")\n\n        with patch(\"sidekick.repl.ui\", mock_ui), patch(\"sidekick.repl.session\", mock_session):\n            await mock_repl._handle_user_request(\"test input\")\n\n            mock_ui.stop_spinner.assert_called()\n            mock_ui.error_panel.assert_called_once()\n            call_args = mock_ui.error_panel.call_args\n            assert \"ValueError\" in call_args[0][0]\n            assert \"Test error\" in call_args[0][0]\n            assert \"detail\" in call_args[1]\n            assert \"Error log:\" in call_args[1][\"detail\"]\n\n\n@pytest.mark.asyncio\nasync def test_handle_user_request_with_model_http_error(mock_repl):\n    \"\"\"Test that ModelHTTPError doesn't create a log file.\"\"\"\n    mock_ui = MagicMock()\n    mock_session = MagicMock(sigint_received=False, current_task=None)\n\n    with patch(\"sidekick.repl.process_request\") as mock_process:\n        error = ModelHTTPError(\n            status_code=400, model_name=\"test-model\", body={\"error\": {\"message\": \"Bad request\"}}\n        )\n        mock_process.side_effect = error\n\n        with patch(\"sidekick.repl.ui\", mock_ui), patch(\"sidekick.repl.session\", mock_session):\n            await mock_repl._handle_user_request(\"test input\")\n\n            mock_ui.error_panel.assert_called_once_with(\"test-model: Bad request\")\n\n\n@pytest.mark.asyncio\nasync def test_handle_user_request_success(mock_repl):\n    \"\"\"Test that a successful request doesn't trigger error handling.\"\"\"\n    mock_ui = MagicMock()\n    mock_session = MagicMock(sigint_received=False, current_task=None, last_usage=None)\n\n    with patch(\"sidekick.repl.process_request\") as mock_process:\n        mock_process.return_value = \"Success response\"\n\n        with patch(\"sidekick.repl.ui\", mock_ui), patch(\"sidekick.repl.session\", mock_session):\n            await mock_repl._handle_user_request(\"test input\")\n\n            mock_ui.error_panel.assert_not_called()\n            mock_ui.agent.assert_called_once_with(\"Success response\", has_footer=False)\n\n\n@pytest.mark.asyncio\nasync def test_handle_user_request_cancellation(mock_repl):\n    \"\"\"Test that cancellation is handled gracefully without error logging.\"\"\"\n    mock_ui = MagicMock()\n    mock_session = MagicMock(\n        sigint_received=False,\n        current_task=None,\n        current_model=\"test-model\",\n    )\n\n    with patch(\"sidekick.repl.process_request\") as mock_process:\n        mock_process.side_effect = asyncio.CancelledError()\n\n        with patch(\"sidekick.repl.ui\", mock_ui), patch(\"sidekick.repl.session\", mock_session):\n            await mock_repl._handle_user_request(\"test input\")\n\n            mock_ui.error_panel.assert_not_called()\n"
  },
  {
    "path": "tests/mcp/__init__.py",
    "content": "# MCP tests package\n"
  },
  {
    "path": "tests/mcp/test_create_mcp_server.py",
    "content": "\"\"\"Tests for create_mcp_server function.\"\"\"\n\nfrom sidekick.mcp.servers import SilentMCPServerStdio, create_mcp_server\n\n\ndef test_creates_server_with_minimal_config():\n    \"\"\"Test creating server with minimal valid config.\"\"\"\n    config = {\"command\": \"uvx\", \"args\": [\"mcp-server-fetch\"]}\n\n    server = create_mcp_server(\"fetch\", config)\n\n    assert isinstance(server, SilentMCPServerStdio)\n    assert server.command == \"uvx\"\n    assert server.args == [\"mcp-server-fetch\"]\n    assert server.env == {}\n    assert server.display_name == \"Fetch\"\n\n\ndef test_creates_server_with_custom_name():\n    \"\"\"Test creating server with custom name field.\"\"\"\n    config = {\"command\": \"uvx\", \"args\": [\"mcp-server-fetch\"], \"name\": \"Custom Fetch Server\"}\n\n    server = create_mcp_server(\"fetch\", config)\n\n    assert server.display_name == \"Custom Fetch Server\"\n\n\ndef test_creates_server_with_env_vars():\n    \"\"\"Test creating server with environment variables.\"\"\"\n    config = {\n        \"command\": \"npx\",\n        \"args\": [\"-y\", \"@modelcontextprotocol/server-brave-search\"],\n        \"env\": {\"BRAVE_API_KEY\": \"test-key\"},\n    }\n\n    server = create_mcp_server(\"brave-search\", config)\n\n    assert server.env == {\"BRAVE_API_KEY\": \"test-key\"}\n    assert server.display_name == \"Brave Search\"\n\n\ndef test_formats_display_name_from_key():\n    \"\"\"Test display name formatting from server key.\"\"\"\n    config = {\"command\": \"test\", \"args\": [\"arg\"]}\n\n    # Test various key formats\n    server1 = create_mcp_server(\"simple\", config)\n    assert server1.display_name == \"Simple\"\n\n    server2 = create_mcp_server(\"hyphen-name\", config)\n    assert server2.display_name == \"Hyphen Name\"\n\n    server3 = create_mcp_server(\"underscore_name\", config)\n    assert server3.display_name == \"Underscore Name\"\n"
  },
  {
    "path": "tests/mcp/test_format_display_name.py",
    "content": "\"\"\"Tests for format_server_name function.\"\"\"\n\nfrom sidekick.ui import format_server_name\n\n\ndef test_converts_simple_lowercase():\n    \"\"\"Test conversion of simple lowercase name.\"\"\"\n    assert format_server_name(\"fetch\") == \"Fetch\"\n\n\ndef test_converts_hyphenated_names():\n    \"\"\"Test conversion of hyphenated names.\"\"\"\n    assert format_server_name(\"brave-search\") == \"Brave Search\"\n    assert format_server_name(\"mcp-server-fetch\") == \"MCP Server Fetch\"\n\n\ndef test_converts_underscored_names():\n    \"\"\"Test conversion of underscored names.\"\"\"\n    assert format_server_name(\"brave_search\") == \"Brave Search\"\n    assert format_server_name(\"mcp_server_fetch\") == \"MCP Server Fetch\"\n\n\ndef test_converts_mixed_separators():\n    \"\"\"Test conversion with mixed separators.\"\"\"\n    assert format_server_name(\"brave-search_api\") == \"Brave Search API\"\n    assert format_server_name(\"mcp_server-fetch\") == \"MCP Server Fetch\"\n\n\ndef test_handles_already_capitalized():\n    \"\"\"Test handling of already capitalized input.\"\"\"\n    assert format_server_name(\"FETCH\") == \"Fetch\"\n    assert format_server_name(\"Brave-Search\") == \"Brave Search\"\n\n\ndef test_handles_empty_string():\n    \"\"\"Test handling of empty string.\"\"\"\n    assert format_server_name(\"\") == \"\"\n"
  },
  {
    "path": "tests/mcp/test_load_mcp_servers.py",
    "content": "\"\"\"Tests for load_mcp_servers function.\"\"\"\n\nimport logging\nfrom unittest.mock import patch\n\nfrom sidekick.config import ConfigError\nfrom sidekick.mcp.servers import load_mcp_servers\n\n\ndef test_loads_valid_servers():\n    \"\"\"Test loading valid MCP servers from config.\"\"\"\n    mock_config = {\n        \"default_model\": \"test\",\n        \"env\": {},\n        \"mcpServers\": {\n            \"fetch\": {\"command\": \"uvx\", \"args\": [\"mcp-server-fetch\"]},\n            \"brave\": {\n                \"command\": \"npx\",\n                \"args\": [\"-y\", \"@modelcontextprotocol/server-brave-search\"],\n                \"env\": {\"BRAVE_API_KEY\": \"test\"},\n            },\n        },\n    }\n\n    with patch(\"sidekick.mcp.servers.read_config_file\", return_value=mock_config):\n        servers = load_mcp_servers()\n\n    assert len(servers) == 2\n    assert servers[0].display_name == \"Fetch\"\n    assert servers[1].display_name == \"Brave\"\n\n\ndef test_returns_empty_list_when_no_mcp_servers():\n    \"\"\"Test returns empty list when no mcpServers in config.\"\"\"\n    mock_config = {\"default_model\": \"test\", \"env\": {}}\n\n    with patch(\"sidekick.mcp.servers.read_config_file\", return_value=mock_config):\n        servers = load_mcp_servers()\n\n    assert servers == []\n\n\ndef test_returns_empty_list_on_config_error():\n    \"\"\"Test returns empty list when config loading fails.\"\"\"\n    with patch(\"sidekick.mcp.servers.read_config_file\", side_effect=ConfigError(\"Test error\")):\n        servers = load_mcp_servers()\n\n    assert servers == []\n\n\ndef test_handles_server_creation_failure(caplog):\n    \"\"\"Test handles individual server creation failures gracefully.\"\"\"\n    mock_config = {\n        \"default_model\": \"test\",\n        \"env\": {},\n        \"mcpServers\": {\n            \"valid\": {\"command\": \"uvx\", \"args\": [\"mcp-server-fetch\"]},\n            \"failing\": {\"command\": \"test\", \"args\": [\"test\"]},\n        },\n    }\n\n    with patch(\"sidekick.mcp.servers.read_config_file\", return_value=mock_config):\n        # Make create_mcp_server fail for one server\n        original_create = __import__(\n            \"sidekick.mcp.servers\", fromlist=[\"create_mcp_server\"]\n        ).create_mcp_server\n\n        def mock_create(key, config):\n            if key == \"failing\":\n                raise RuntimeError(\"Simulated server creation failure\")\n            return original_create(key, config)\n\n        with patch(\"sidekick.mcp.servers.create_mcp_server\", side_effect=mock_create):\n            with caplog.at_level(logging.WARNING):\n                servers = load_mcp_servers()\n\n    assert len(servers) == 1\n    assert servers[0].display_name == \"Valid\"\n    assert \"Failed to create server 'failing'\" in caplog.text\n\n\ndef test_warns_when_all_servers_fail(caplog):\n    \"\"\"Test warns when no servers could be created.\"\"\"\n    mock_config = {\n        \"default_model\": \"test\",\n        \"env\": {},\n        \"mcpServers\": {\n            \"server1\": {\"command\": \"test1\", \"args\": [\"test\"]},\n            \"server2\": {\"command\": \"test2\", \"args\": [\"test\"]},\n        },\n    }\n\n    with patch(\"sidekick.mcp.servers.read_config_file\", return_value=mock_config):\n        # Make all server creations fail\n        with patch(\"sidekick.mcp.servers.create_mcp_server\", side_effect=Exception(\"Failed\")):\n            with patch(\"sidekick.mcp.servers.ui.error\") as mock_error:\n                with patch(\"sidekick.mcp.servers.ui.warning\"):\n                    with patch(\"sidekick.mcp.servers.ui.bullet\"):\n                        with caplog.at_level(logging.WARNING):\n                            servers = load_mcp_servers()\n\n    assert servers == []\n    # Check that ui.error was called with the expected message\n    mock_error.assert_called_with(\"No MCP servers could be loaded successfully\")\n\n\ndef test_handles_unexpected_errors(caplog):\n    \"\"\"Test handles unexpected errors gracefully.\"\"\"\n    with patch(\"sidekick.mcp.servers.read_config_file\", side_effect=Exception(\"Unexpected\")):\n        with caplog.at_level(logging.ERROR):\n            servers = load_mcp_servers()\n\n    assert servers == []\n    assert \"Unexpected error loading config\" in caplog.text\n"
  },
  {
    "path": "tests/mcp/test_validate_server_config.py",
    "content": "\"\"\"Tests for MCP server config validation.\"\"\"\n\nimport pytest\n\nfrom sidekick.config import ConfigValidationError, parse_mcp_servers\n\n\ndef test_valid_config_passes():\n    \"\"\"Test that valid server config passes validation.\"\"\"\n    config = {\"mcpServers\": {\"test-server\": {\"command\": \"uvx\", \"args\": [\"mcp-server-fetch\"]}}}\n    # Should not raise\n    result = parse_mcp_servers(config)\n    assert \"test-server\" in result\n\n\ndef test_valid_config_with_multiple_args():\n    \"\"\"Test valid config with multiple args.\"\"\"\n    config = {\n        \"mcpServers\": {\n            \"test-server\": {\n                \"command\": \"npx\",\n                \"args\": [\"-y\", \"@modelcontextprotocol/server-brave-search\"],\n            }\n        }\n    }\n    # Should not raise\n    result = parse_mcp_servers(config)\n    assert \"test-server\" in result\n\n\ndef test_raises_for_non_dict():\n    \"\"\"Test ConfigValidationError for non-dict config.\"\"\"\n    config = {\"mcpServers\": {\"test-server\": \"not a dict\"}}\n    with pytest.raises(ConfigValidationError) as exc_info:\n        parse_mcp_servers(config)\n    assert \"MCP server 'test-server' configuration must be an object\" in str(exc_info.value)\n\n\ndef test_raises_for_missing_command():\n    \"\"\"Test ConfigValidationError when command missing.\"\"\"\n    config = {\"mcpServers\": {\"test-server\": {\"args\": [\"test\"]}}}\n    with pytest.raises(ConfigValidationError) as exc_info:\n        parse_mcp_servers(config)\n    assert \"MCP server 'test-server' missing required field 'command'\" in str(exc_info.value)\n\n\ndef test_raises_for_empty_command():\n    \"\"\"Test that empty command is allowed (validation doesn't check for empty strings).\"\"\"\n    config = {\"mcpServers\": {\"test-server\": {\"command\": \"\", \"args\": [\"test\"]}}}\n    # parse_mcp_servers doesn't validate empty command strings, only type\n    result = parse_mcp_servers(config)\n    assert \"test-server\" in result\n\n\ndef test_raises_for_missing_args():\n    \"\"\"Test ConfigValidationError when args missing.\"\"\"\n    config = {\"mcpServers\": {\"test-server\": {\"command\": \"uvx\"}}}\n    with pytest.raises(ConfigValidationError) as exc_info:\n        parse_mcp_servers(config)\n    assert \"MCP server 'test-server' missing required field 'args'\" in str(exc_info.value)\n\n\ndef test_raises_for_non_list_args():\n    \"\"\"Test ConfigValidationError when args is not a list.\"\"\"\n    config = {\"mcpServers\": {\"test-server\": {\"command\": \"uvx\", \"args\": \"not-a-list\"}}}\n    with pytest.raises(ConfigValidationError) as exc_info:\n        parse_mcp_servers(config)\n    assert \"MCP server 'test-server' field 'args' must be an array\" in str(exc_info.value)\n\n\ndef test_raises_for_empty_args():\n    \"\"\"Test ConfigValidationError when args is empty list.\"\"\"\n    config = {\"mcpServers\": {\"test-server\": {\"command\": \"uvx\", \"args\": []}}}\n    with pytest.raises(ConfigValidationError) as exc_info:\n        parse_mcp_servers(config)\n    assert \"MCP server 'test-server' field 'args' must contain at least one argument\" in str(\n        exc_info.value\n    )\n\n\ndef test_accepts_optional_fields():\n    \"\"\"Test that optional fields are accepted.\"\"\"\n    config = {\n        \"mcpServers\": {\n            \"test-server\": {\n                \"command\": \"uvx\",\n                \"args\": [\"mcp-server-fetch\"],\n                \"env\": {\"API_KEY\": \"test\"},\n                \"name\": \"Custom Name\",\n            }\n        }\n    }\n    # Should not raise\n    result = parse_mcp_servers(config)\n    assert \"test-server\" in result\n    assert result[\"test-server\"][\"env\"] == {\"API_KEY\": \"test\"}\n\n\ndef test_raises_for_non_dict_mcpservers():\n    \"\"\"Test ConfigValidationError when mcpServers is not a dict.\"\"\"\n    config = {\"mcpServers\": \"not a dict\"}\n    with pytest.raises(ConfigValidationError) as exc_info:\n        parse_mcp_servers(config)\n    assert \"'mcpServers' field must be an object\" in str(exc_info.value)\n\n\ndef test_returns_empty_dict_when_no_mcpservers():\n    \"\"\"Test that empty dict is returned when no mcpServers field.\"\"\"\n    config = {}\n    result = parse_mcp_servers(config)\n    assert result == {}\n\n\ndef test_raises_for_non_dict_env():\n    \"\"\"Test ConfigValidationError when env field is not a dict.\"\"\"\n    config = {\n        \"mcpServers\": {\n            \"test-server\": {\"command\": \"uvx\", \"args\": [\"mcp-server-fetch\"], \"env\": \"not a dict\"}\n        }\n    }\n    with pytest.raises(ConfigValidationError) as exc_info:\n        parse_mcp_servers(config)\n    assert \"MCP server 'test-server' field 'env' must be an object\" in str(exc_info.value)\n"
  },
  {
    "path": "tests/setup/test_create_config.py",
    "content": "import json\nimport tempfile\nfrom pathlib import Path\nfrom unittest.mock import patch\n\nfrom sidekick.constants import DEFAULT_USER_CONFIG\nfrom sidekick.setup import create_config\n\n\ndef test_create_config_includes_all_defaults():\n    \"\"\"Test that create_config includes all default fields.\"\"\"\n    # Mock user inputs\n    with patch(\"sidekick.setup.collect_api_keys\") as mock_collect:\n        with patch(\"sidekick.setup.select_default_model\") as mock_select:\n            mock_collect.return_value = {\"OPENAI_API_KEY\": \"sk-test123\"}\n            mock_select.return_value = \"gpt-4o\"\n\n            # Create temporary config file\n            with tempfile.TemporaryDirectory() as temp_dir:\n                config_path = Path(temp_dir) / \".config\" / \"sidekick.json\"\n\n                # Mock console to suppress output\n                with patch(\"sidekick.setup.console\"):\n                    result = create_config(config_path)\n\n                # Verify the returned config has all fields\n                assert \"default_model\" in result\n                assert result[\"default_model\"] == \"gpt-4o\"\n\n                assert \"env\" in result\n                assert result[\"env\"][\"OPENAI_API_KEY\"] == \"sk-test123\"\n                # Should also have default placeholder values\n                assert \"ANTHROPIC_API_KEY\" in result[\"env\"]\n                assert \"GEMINI_API_KEY\" in result[\"env\"]\n\n                assert \"mcpServers\" in result\n                assert result[\"mcpServers\"] == {}\n\n                assert \"settings\" in result\n                assert \"allowed_tools\" in result[\"settings\"]\n                from sidekick.constants import DEFAULT_USER_CONFIG\n\n                assert (\n                    result[\"settings\"][\"allowed_tools\"]\n                    == DEFAULT_USER_CONFIG[\"settings\"][\"allowed_tools\"]\n                )\n                assert \"allowed_commands\" in result[\"settings\"]\n                assert len(result[\"settings\"][\"allowed_commands\"]) > 0\n\n                # Verify the file was written correctly\n                with open(config_path) as f:\n                    file_content = json.load(f)\n\n                assert file_content == result\n\n\ndef test_create_config_with_no_api_keys():\n    \"\"\"Test create_config when user provides no API keys but continues anyway.\"\"\"\n    with patch(\"sidekick.setup.collect_api_keys\") as mock_collect:\n        with patch(\"sidekick.setup.select_default_model\") as mock_select:\n            with patch(\"sidekick.setup.Confirm.ask\") as mock_confirm:\n                mock_collect.return_value = {}\n                mock_select.return_value = DEFAULT_USER_CONFIG[\"default_model\"]\n                mock_confirm.return_value = True  # Continue anyway\n\n                with tempfile.TemporaryDirectory() as temp_dir:\n                    config_path = Path(temp_dir) / \".config\" / \"sidekick.json\"\n\n                    with patch(\"sidekick.setup.console\"):\n                        result = create_config(config_path)\n\n                    # Should have empty env dict, not the placeholder values\n                    assert result[\"env\"] == {}\n\n                    # But should still have all other defaults\n                    assert \"settings\" in result\n                    assert \"mcpServers\" in result\n\n\ndef test_create_config_filters_empty_api_keys():\n    \"\"\"Test that empty string API keys are not included in the config.\"\"\"\n    # Mock user inputs - simulate user pressing enter without entering values\n    with patch(\"sidekick.setup.Prompt.ask\") as mock_prompt:\n        with patch(\"sidekick.setup.select_default_model\") as mock_select:\n            # Simulate user pressing enter (empty string) for all API keys\n            mock_prompt.side_effect = [\"\", \"\", \"\"]  # Empty strings for all 3 API keys\n            mock_select.return_value = DEFAULT_USER_CONFIG[\"default_model\"]\n\n            with tempfile.TemporaryDirectory() as temp_dir:\n                config_path = Path(temp_dir) / \".config\" / \"sidekick.json\"\n\n                with patch(\"sidekick.setup.console\"):\n                    with patch(\"sidekick.setup.Confirm.ask\", return_value=True):\n                        result = create_config(config_path)\n\n                # Should have empty env dict when all API keys are empty strings\n                assert result[\"env\"] == {}\n\n                # But should still have all other defaults\n                assert \"default_model\" in result\n                assert \"settings\" in result\n                assert \"mcpServers\" in result\n"
  },
  {
    "path": "tests/test_data.py",
    "content": "\"\"\"Shared test data for sidekick tests.\"\"\"\n\n# Valid MCP server configurations\nVALID_MCP_SERVER = {\"command\": \"uvx\", \"args\": [\"mcp-server-fetch\"]}\n\nVALID_MCP_SERVER_WITH_NAME = {\n    \"command\": \"uvx\",\n    \"args\": [\"mcp-server-fetch\"],\n    \"name\": \"Fetch Server\",\n}\n\nVALID_MCP_SERVER_WITH_ENV = {\n    \"command\": \"npx\",\n    \"args\": [\"-y\", \"@modelcontextprotocol/server-brave-search\"],\n    \"env\": {\"BRAVE_API_KEY\": \"test-key\"},\n}\n\nVALID_MCP_CONFIG = {\"mcpServers\": {\"fetch\": VALID_MCP_SERVER}}\n\n# Invalid MCP server configurations\nINVALID_MCP_SERVER_NOT_DICT = \"not a dict\"\nINVALID_MCP_SERVER_NO_COMMAND = {\"args\": [\"test\"]}\nINVALID_MCP_SERVER_NO_ARGS = {\"command\": \"test\"}\nINVALID_MCP_SERVER_EMPTY_ARGS = {\"command\": \"test\", \"args\": []}\nINVALID_MCP_SERVER_NON_LIST_ARGS = {\"command\": \"test\", \"args\": \"not-a-list\"}\nINVALID_MCP_SERVER_NON_DICT_ENV = {\"command\": \"test\", \"args\": [\"arg\"], \"env\": \"not-a-dict\"}\n\n# Mock tool calls\nMOCK_TOOL_CALL = {\n    \"tool_name\": \"test_tool\",\n    \"tool_call_id\": \"tc_123\",\n    \"tool_args\": {\"arg1\": \"value1\"},\n}\n"
  },
  {
    "path": "tests/tools/__init__.py",
    "content": ""
  },
  {
    "path": "tests/tools/conftest.py",
    "content": "\"\"\"Shared fixtures for tool tests.\"\"\"\n\nimport pytest\nfrom pydantic_ai import RunContext\n\n\n@pytest.fixture\ndef mock_ctx():\n    \"\"\"Create a mock RunContext for testing.\"\"\"\n    return RunContext(\n        deps=None,\n        model=None,\n        usage=None,\n        prompt=None,\n        messages=[],\n        tool_call_id=None,\n        tool_name=None,\n        retry=0,\n        run_step=0,\n    )\n"
  },
  {
    "path": "tests/tools/test_find.py",
    "content": "\"\"\"Tests for the consolidated find tool.\"\"\"\n\nfrom unittest.mock import MagicMock, patch\n\nimport pytest\nfrom pydantic_ai import RunContext\n\nfrom sidekick.deps import ToolDeps\nfrom sidekick.tools.find import find\n\n\n@pytest.fixture\ndef mock_context():\n    \"\"\"Create a mock RunContext with ToolDeps.\"\"\"\n    mock_deps = MagicMock(spec=ToolDeps)\n    mock_deps.display_tool_status = None\n    mock_ctx = MagicMock(spec=RunContext)\n    mock_ctx.deps = mock_deps\n    return mock_ctx\n\n\n@pytest.mark.asyncio\n@patch(\"shutil.which\")\n@patch(\"asyncio.create_subprocess_exec\")\nasync def test_find_files_with_fd(\n    mock_subprocess_exec, mock_which, make_mock_process, mock_context\n):\n    mock_which.side_effect = lambda cmd: \"/usr/bin/fd\" if cmd == \"fd\" else None\n    mock_subprocess_exec.return_value = make_mock_process(\"./src/main.py\\n./src/agent.py\")\n\n    result = await find(mock_context, \".\", \"*.py\")\n\n    args = mock_subprocess_exec.call_args[0]\n    assert args[0] == \"fd\"\n    assert \"--type\" in args and \"f\" in args\n    assert \"*.py\" in args\n    assert result == \"./src/main.py\\n./src/agent.py\"\n\n\n@pytest.mark.asyncio\n@patch(\"shutil.which\")\n@patch(\"asyncio.create_subprocess_exec\")\nasync def test_find_dirs_with_fd(mock_subprocess_exec, mock_which, make_mock_process, mock_context):\n    mock_which.side_effect = lambda cmd: \"/usr/bin/fd\" if cmd == \"fd\" else None\n    mock_subprocess_exec.return_value = make_mock_process(\"./src/utils\\n./tests/tools\")\n\n    result = await find(mock_context, \".\", \"*tools*\", dirs=True)\n\n    args = mock_subprocess_exec.call_args[0]\n    assert args[0] == \"fd\"\n    assert \"--type\" in args and \"d\" in args\n    assert \"*tools*\" in args\n    assert result == \"./src/utils\\n./tests/tools\"\n\n\n@pytest.mark.asyncio\n@patch(\"shutil.which\", return_value=\"/usr/bin/rg\")\n@patch(\"asyncio.create_subprocess_exec\")\nasync def test_find_content_with_rg(\n    mock_subprocess_exec, mock_which, make_mock_process, mock_context\n):\n    mock_subprocess_exec.return_value = make_mock_process(\"main.py:10:def main():\")\n\n    result = await find(mock_context, \".\", content=\"def main\")\n\n    mock_which.assert_called_once_with(\"rg\")\n    args = mock_subprocess_exec.call_args[0]\n    assert args[0] == \"rg\"\n    assert \"--line-number\" in args\n    assert args[-1] == \"def main\"\n    assert result == \"main.py:10:def main():\"\n\n\n@pytest.mark.asyncio\n@patch(\"shutil.which\")\n@patch(\"asyncio.create_subprocess_exec\")\nasync def test_find_content_case_insensitive(\n    mock_subprocess_exec, mock_which, make_mock_process, mock_context\n):\n    mock_which.return_value = \"/usr/bin/rg\"\n    mock_subprocess_exec.return_value = make_mock_process(\"main.py:10:def MAIN():\")\n\n    result = await find(mock_context, \".\", content=\"main\", case_sensitive=False)\n\n    args = mock_subprocess_exec.call_args[0]\n    assert args[0] == \"rg\"\n    assert \"-i\" in args\n    assert result == \"main.py:10:def MAIN():\"\n\n\n@pytest.mark.asyncio\n@patch(\"shutil.which\")\n@patch(\"asyncio.create_subprocess_exec\")\nasync def test_find_content_with_include_pattern(\n    mock_subprocess_exec, mock_which, make_mock_process, mock_context\n):\n    mock_which.return_value = \"/usr/bin/rg\"\n    mock_subprocess_exec.return_value = make_mock_process(\"test.py:5:import pytest\")\n\n    result = await find(mock_context, \".\", content=\"import\", include_pattern=\"*.py\")\n\n    args = mock_subprocess_exec.call_args[0]\n    assert args[0] == \"rg\"\n    assert \"--glob\" in args\n    glob_idx = args.index(\"--glob\")\n    assert args[glob_idx + 1] == \"*.py\"\n    assert result == \"test.py:5:import pytest\"\n\n\n@pytest.mark.asyncio\n@patch(\"shutil.which\")\n@patch(\"asyncio.create_subprocess_exec\")\nasync def test_find_content_with_ag_fallback(\n    mock_subprocess_exec, mock_which, make_mock_process, mock_context\n):\n    mock_which.side_effect = lambda cmd: \"/usr/bin/ag\" if cmd == \"ag\" else None\n    mock_subprocess_exec.return_value = make_mock_process(\"main.py:10:def main():\")\n\n    result = await find(mock_context, \".\", content=\"def main\")\n\n    args = mock_subprocess_exec.call_args[0]\n    assert args[0] == \"ag\"\n    assert \"--line-numbers\" in args\n    assert args[-1] == \"def main\"\n    assert result == \"main.py:10:def main():\"\n\n\n@pytest.mark.asyncio\n@patch(\"shutil.which\")\n@patch(\"os.walk\")\n@patch(\"builtins.open\")\nasync def test_find_content_python_fallback(mock_open, mock_walk, mock_which, mock_context):\n    mock_which.return_value = None\n    mock_walk.return_value = [\n        (\".\", [\"src\"], [\"README.md\"]),\n        (\"./src\", [], [\"main.py\"]),\n    ]\n\n    mock_file = MagicMock()\n    mock_file.__enter__.return_value = [\"def main():\\n\", \"    print('hello')\\n\"]\n    mock_open.return_value = mock_file\n\n    result = await find(mock_context, \".\", content=\"def main\")\n\n    assert \"main.py:1:def main():\" in result\n\n\n@pytest.mark.asyncio\n@patch(\"shutil.which\")\n@patch(\"asyncio.create_subprocess_exec\")\nasync def test_find_no_results(mock_subprocess_exec, mock_which, make_mock_process, mock_context):\n    mock_which.side_effect = lambda cmd: \"/usr/bin/fd\" if cmd == \"fd\" else None\n    mock_subprocess_exec.return_value = make_mock_process(\"\")\n\n    result = await find(mock_context, \".\", \"*.nonexistent\")\n\n    assert result == \"No results found.\"\n\n\n@pytest.mark.asyncio\n@patch(\"shutil.which\")\n@patch(\"os.walk\")\nasync def test_find_files_python_fallback(mock_walk, mock_which, mock_context):\n    mock_which.return_value = None\n    mock_walk.return_value = [\n        (\".\", [\"src\"], [\"README.md\"]),\n        (\"./src\", [], [\"main.py\", \"utils.py\"]),\n    ]\n\n    result = await find(mock_context, \".\", \"*.py\")\n\n    assert \"./src/main.py\" in result\n    assert \"./src/utils.py\" in result\n    assert \"README.md\" not in result\n"
  },
  {
    "path": "tests/tools/test_read_file.py",
    "content": "\"\"\"Tests for read_file tool.\"\"\"\n\nfrom unittest.mock import MagicMock, mock_open, patch\n\nimport pytest\nfrom pydantic_ai import RunContext\n\nfrom sidekick.deps import ToolDeps\nfrom sidekick.tools.read_file import read_file\n\n\n@pytest.fixture\ndef mock_context():\n    \"\"\"Create a mock RunContext with ToolDeps.\"\"\"\n    mock_deps = MagicMock(spec=ToolDeps)\n    mock_deps.display_tool_status = None\n    mock_ctx = MagicMock(spec=RunContext)\n    mock_ctx.deps = mock_deps\n    return mock_ctx\n\n\n@pytest.mark.asyncio\nasync def test_read_file_success(mock_context):\n    content = \"Hello, World!\\nThis is a test file.\"\n    with patch(\"builtins.open\", mock_open(read_data=content)) as m:\n        assert await read_file(mock_context, \"/test/file.txt\") == content\n        m.assert_called_once_with(\"/test/file.txt\", \"r\", encoding=\"utf-8\")\n\n\n@pytest.mark.parametrize(\n    \"side_effect, expected\",\n    [\n        (FileNotFoundError(), \"Error: File not found: /x\"),\n        (PermissionError(\"Access denied\"), \"Error: Permission denied: /x\"),\n        (IOError(\"Disk error\"), \"Error reading file /x: Disk error\"),\n    ],\n)\n@pytest.mark.asyncio\nasync def test_read_file_errors(side_effect, expected, mock_context):\n    with patch(\"builtins.open\", side_effect=side_effect):\n        assert await read_file(mock_context, \"/x\") == expected\n\n\n@pytest.mark.asyncio\nasync def test_read_file_real_file(tmp_path, mock_context):\n    tmp_file = tmp_path / \"sample.txt\"\n    tmp_file.write_text(\"Line1\\nLine2\")\n    assert await read_file(mock_context, str(tmp_file)) == \"Line1\\nLine2\"\n\n\n@pytest.mark.asyncio\nasync def test_read_file_empty_file(mock_context):\n    with patch(\"builtins.open\", mock_open(read_data=\"\")):\n        assert await read_file(mock_context, \"/empty.txt\") == \"\"\n\n\n@pytest.mark.asyncio\nasync def test_read_file_large_content(mock_context):\n    large_content = \"x\" * 10000 + \"\\n\" + \"y\" * 10000\n    with patch(\"builtins.open\", mock_open(read_data=large_content)):\n        result = await read_file(mock_context, \"/big.txt\")\n        assert result == large_content and len(result) == 20001\n"
  },
  {
    "path": "tests/tools/test_update_file.py",
    "content": "\"\"\"Tests for update_file tool.\"\"\"\n\nimport tempfile\nfrom pathlib import Path\nfrom unittest.mock import mock_open, patch\n\nimport pytest\nfrom pydantic_ai import ModelRetry\n\nfrom sidekick.tools.update_file import update_file\n\n\n@pytest.mark.asyncio\nasync def test_update_file_success(mock_ctx):\n    \"\"\"Test successful file update.\"\"\"\n    original_content = \"Hello, World!\\nThis is a test file.\\nGoodbye!\"\n    old_content = \"This is a test file.\"\n    new_content = \"This is an updated file.\"\n    expected_content = \"Hello, World!\\nThis is an updated file.\\nGoodbye!\"\n\n    with patch(\"builtins.open\", mock_open(read_data=original_content)) as mock_file:\n        result = await update_file(mock_ctx, \"/test/file.txt\", old_content, new_content)\n\n        # Verify the file was opened for reading and writing\n        assert mock_file.call_count == 2\n        mock_file.assert_any_call(\"/test/file.txt\", \"r\", encoding=\"utf-8\")\n        mock_file.assert_any_call(\"/test/file.txt\", \"w\", encoding=\"utf-8\")\n\n        # Verify the updated content was written\n        handle = mock_file()\n        handle.write.assert_called_once_with(expected_content)\n\n        assert result == \"Successfully updated /test/file.txt\"\n\n\n@pytest.mark.asyncio\nasync def test_update_file_content_not_found(mock_ctx):\n    \"\"\"Test ModelRetry when content to replace is not found.\"\"\"\n    original_content = \"Hello, World!\\nThis is a test file.\\nGoodbye!\"\n    old_content = \"This content does not exist\"\n    new_content = \"This is an updated file.\"\n\n    with patch(\"builtins.open\", mock_open(read_data=original_content)):\n        with pytest.raises(ModelRetry) as exc_info:\n            await update_file(mock_ctx, \"/test/file.txt\", old_content, new_content)\n\n        error_msg = str(exc_info.value)\n        assert \"Content to replace not found in /test/file.txt\" in error_msg\n        assert \"Searched for: 'This content does not exist'\" in error_msg\n        assert \"re-read the file\" in error_msg\n\n\n@pytest.mark.asyncio\nasync def test_update_file_content_not_found_long_content(mock_ctx):\n    \"\"\"Test ModelRetry with truncated preview for long content.\"\"\"\n    original_content = \"Short content\"\n    old_content = \"a\" * 150  # Long content that doesn't exist\n    new_content = \"New content\"\n\n    with patch(\"builtins.open\", mock_open(read_data=original_content)):\n        with pytest.raises(ModelRetry) as exc_info:\n            await update_file(mock_ctx, \"/test/file.txt\", old_content, new_content)\n\n        error_msg = str(exc_info.value)\n        assert \"Content to replace not found\" in error_msg\n        # Check that long content is truncated\n        assert f\"Searched for: '{'a' * 100}...'\" in error_msg\n\n\n@pytest.mark.asyncio\nasync def test_update_file_not_found(mock_ctx):\n    \"\"\"Test ModelRetry when file doesn't exist.\"\"\"\n    with patch(\"builtins.open\", side_effect=FileNotFoundError()):\n        with pytest.raises(ModelRetry) as exc_info:\n            await update_file(mock_ctx, \"/nonexistent/file.txt\", \"old\", \"new\")\n\n        error_msg = str(exc_info.value)\n        assert \"File not found: /nonexistent/file.txt\" in error_msg\n        assert \"check the file path\" in error_msg\n\n\n@pytest.mark.asyncio\nasync def test_update_file_read_error(mock_ctx):\n    \"\"\"Test ModelRetry on generic read error.\"\"\"\n    with patch(\"builtins.open\", side_effect=PermissionError(\"Access denied\")):\n        with pytest.raises(ModelRetry) as exc_info:\n            await update_file(mock_ctx, \"/test/file.txt\", \"old\", \"new\")\n\n        error_msg = str(exc_info.value)\n        assert \"Error reading file /test/file.txt\" in error_msg\n        assert \"Access denied\" in error_msg\n\n\n@pytest.mark.asyncio\nasync def test_update_file_write_error(mock_ctx):\n    \"\"\"Test ModelRetry on write error.\"\"\"\n    original_content = \"Hello, World!\\nThis is a test file.\\nGoodbye!\"\n    old_content = \"This is a test file.\"\n    new_content = \"This is an updated file.\"\n\n    # Mock successful read but failed write\n    read_mock = mock_open(read_data=original_content)\n\n    def open_side_effect(filepath, mode, encoding=\"utf-8\"):\n        if mode == \"r\":\n            return read_mock(filepath, mode, encoding)\n        else:  # mode == \"w\"\n            raise PermissionError(\"Cannot write to file\")\n\n    with patch(\"builtins.open\", side_effect=open_side_effect):\n        with pytest.raises(ModelRetry) as exc_info:\n            await update_file(mock_ctx, \"/test/file.txt\", old_content, new_content)\n\n        error_msg = str(exc_info.value)\n        assert \"Error writing to file /test/file.txt\" in error_msg\n        assert \"Cannot write to file\" in error_msg\n\n\n@pytest.mark.asyncio\nasync def test_update_file_with_real_file(mock_ctx):\n    \"\"\"Integration test with actual file operations.\"\"\"\n    with tempfile.NamedTemporaryFile(mode=\"w\", delete=False, suffix=\".txt\") as tmp:\n        tmp.write(\"Line 1\\nLine 2\\nLine 3\\n\")\n        tmp_path = tmp.name\n\n    try:\n        # Test successful update\n        result = await update_file(mock_ctx, tmp_path, \"Line 2\", \"Updated Line 2\")\n        assert result == f\"Successfully updated {tmp_path}\"\n\n        # Verify content was updated\n        with open(tmp_path, \"r\") as f:\n            content = f.read()\n        assert content == \"Line 1\\nUpdated Line 2\\nLine 3\\n\"\n\n        # Test content not found\n        with pytest.raises(ModelRetry) as exc_info:\n            await update_file(mock_ctx, tmp_path, \"Non-existent line\", \"New line\")\n        assert \"Content to replace not found\" in str(exc_info.value)\n\n    finally:\n        # Clean up\n        Path(tmp_path).unlink()\n\n\n@pytest.mark.asyncio\nasync def test_update_file_only_first_occurrence(mock_ctx):\n    \"\"\"Test that only the first occurrence is replaced.\"\"\"\n    original_content = \"foo\\nbar\\nfoo\\nbaz\"\n    old_content = \"foo\"\n    new_content = \"replaced\"\n    expected_content = \"replaced\\nbar\\nfoo\\nbaz\"\n\n    with patch(\"builtins.open\", mock_open(read_data=original_content)) as mock_file:\n        result = await update_file(mock_ctx, \"/test/file.txt\", old_content, new_content)\n\n        # Verify only first occurrence was replaced\n        handle = mock_file()\n        handle.write.assert_called_once_with(expected_content)\n\n        assert result == \"Successfully updated /test/file.txt\"\n\n\n@pytest.mark.asyncio\nasync def test_update_file_preserves_encoding(mock_ctx):\n    \"\"\"Test that UTF-8 encoding is properly handled.\"\"\"\n    original_content = \"Hello 世界!\\nThis is a test file with émojis 🎉\\nGoodbye!\"\n    old_content = \"This is a test file with émojis 🎉\"\n    new_content = \"This is an updated file with émojis 🎊\"\n    expected_content = \"Hello 世界!\\nThis is an updated file with émojis 🎊\\nGoodbye!\"\n\n    with patch(\"builtins.open\", mock_open(read_data=original_content)) as mock_file:\n        result = await update_file(mock_ctx, \"/test/file.txt\", old_content, new_content)\n\n        # Verify encoding was specified\n        mock_file.assert_any_call(\"/test/file.txt\", \"r\", encoding=\"utf-8\")\n        mock_file.assert_any_call(\"/test/file.txt\", \"w\", encoding=\"utf-8\")\n\n        # Verify the content was correctly updated\n        handle = mock_file()\n        handle.write.assert_called_once_with(expected_content)\n\n        assert result == \"Successfully updated /test/file.txt\"\n\n\n@pytest.mark.asyncio\nasync def test_update_file_identical_content(mock_ctx):\n    \"\"\"Test ModelRetry when old_content and new_content are identical.\"\"\"\n    with pytest.raises(ModelRetry) as exc_info:\n        await update_file(mock_ctx, \"/test/file.txt\", \"same content\", \"same content\")\n\n    error_msg = str(exc_info.value)\n    assert \"old_content and new_content are identical\" in error_msg\n    assert \"provide different content\" in error_msg\n"
  },
  {
    "path": "tests/ui/__init__.py",
    "content": ""
  },
  {
    "path": "tests/usage/__init__.py",
    "content": "\"\"\"Tests for usage tracking functionality.\"\"\"\n"
  },
  {
    "path": "tests/usage/test_usage_tracker.py",
    "content": "\"\"\"Tests for the UsageTracker class.\"\"\"\n\nfrom unittest.mock import Mock\n\nimport pytest\n\nfrom sidekick.usage import ModelUsage, UsageTracker\n\n# Tests for ModelUsage dataclass\n\n\ndef test_model_usage_initial_state():\n    \"\"\"Test initial state of ModelUsage.\"\"\"\n    usage = ModelUsage()\n    assert usage.requests == 0\n    assert usage.input_tokens == 0\n    assert usage.cached_tokens == 0\n    assert usage.output_tokens == 0\n    assert usage.total_cost == 0.0\n\n\ndef test_model_usage_add_usage():\n    \"\"\"Test adding usage data.\"\"\"\n    usage = ModelUsage()\n    usage.add_usage(1000, 200, 500, 0.01)\n\n    assert usage.requests == 1\n    assert usage.input_tokens == 1000\n    assert usage.cached_tokens == 200\n    assert usage.output_tokens == 500\n    assert usage.total_cost == 0.01\n\n    # Add more usage\n    usage.add_usage(500, 100, 250, 0.005)\n\n    assert usage.requests == 2\n    assert usage.input_tokens == 1500\n    assert usage.cached_tokens == 300\n    assert usage.output_tokens == 750\n    assert usage.total_cost == 0.015\n\n\n# Tests for UsageTracker class\n\n\ndef test_usage_tracker_initial_state():\n    \"\"\"Test initial state of UsageTracker.\"\"\"\n    tracker = UsageTracker()\n    assert tracker.model_usage == {}\n    assert tracker.last_request is None\n    assert tracker.total_tokens == 0\n    assert tracker.total_cost == 0.0\n    assert tracker.total_requests == 0\n\n\ndef test_record_usage_basic():\n    \"\"\"Test basic usage recording without cached tokens.\"\"\"\n    tracker = UsageTracker()\n\n    # Mock usage object\n    usage = Mock()\n    usage.request_tokens = 1000\n    usage.response_tokens = 500\n    usage.details = []\n\n    tracker.record_usage(\"openai:o4-mini\", usage)\n\n    # Check model usage was recorded\n    assert \"openai:o4-mini\" in tracker.model_usage\n    model_usage = tracker.model_usage[\"openai:o4-mini\"]\n    assert model_usage.requests == 1\n    assert model_usage.input_tokens == 1000\n    assert model_usage.output_tokens == 500\n    assert model_usage.cached_tokens == 0\n\n    # Check last request\n    assert tracker.last_request is not None\n    assert tracker.last_request[\"model\"] == \"openai:o4-mini\"\n    assert tracker.last_request[\"input_tokens\"] == 1000\n    assert tracker.last_request[\"output_tokens\"] == 500\n\n    # Check totals\n    assert tracker.total_tokens == 1500\n    assert tracker.total_requests == 1\n\n\ndef test_record_usage_with_cached_tokens():\n    \"\"\"Test usage recording with cached tokens.\"\"\"\n    tracker = UsageTracker()\n\n    # Mock usage object with cached tokens\n    usage = Mock()\n    usage.request_tokens = 1000\n    usage.response_tokens = 500\n\n    detail = Mock()\n    detail.cached_tokens = 300\n    usage.details = [detail]\n\n    tracker.record_usage(\"anthropic:claude-3-7-sonnet-latest\", usage)\n\n    # Check cached tokens were recorded\n    model_usage = tracker.model_usage[\"anthropic:claude-3-7-sonnet-latest\"]\n    assert model_usage.cached_tokens == 300\n\n    # Check last request has cached tokens\n    assert tracker.last_request[\"cached_tokens\"] == 300\n\n\ndef test_record_usage_multiple_models():\n    \"\"\"Test recording usage for multiple models.\"\"\"\n    tracker = UsageTracker()\n\n    # First model\n    usage1 = Mock()\n    usage1.request_tokens = 1000\n    usage1.response_tokens = 500\n    usage1.details = []\n\n    tracker.record_usage(\"openai:o4-mini\", usage1)\n\n    # Second model\n    usage2 = Mock()\n    usage2.request_tokens = 2000\n    usage2.response_tokens = 1000\n    usage2.details = []\n\n    tracker.record_usage(\"anthropic:claude-3-7-sonnet-latest\", usage2)\n\n    # Check both models are tracked\n    assert len(tracker.model_usage) == 2\n    assert \"openai:o4-mini\" in tracker.model_usage\n    assert \"anthropic:claude-3-7-sonnet-latest\" in tracker.model_usage\n\n    # Check totals include both models\n    assert tracker.total_tokens == 4500  # 1500 + 3000\n    assert tracker.total_requests == 2\n\n\ndef test_record_usage_same_model_multiple_times():\n    \"\"\"Test recording multiple usages for the same model.\"\"\"\n    tracker = UsageTracker()\n\n    # First usage\n    usage1 = Mock()\n    usage1.request_tokens = 1000\n    usage1.response_tokens = 500\n    usage1.details = []\n\n    tracker.record_usage(\"openai:o4-mini\", usage1)\n\n    # Second usage for same model\n    usage2 = Mock()\n    usage2.request_tokens = 500\n    usage2.response_tokens = 250\n    usage2.details = []\n\n    tracker.record_usage(\"openai:o4-mini\", usage2)\n\n    # Check cumulative stats\n    model_usage = tracker.model_usage[\"openai:o4-mini\"]\n    assert model_usage.requests == 2\n    assert model_usage.input_tokens == 1500\n    assert model_usage.output_tokens == 750\n\n    assert tracker.total_tokens == 2250\n    assert tracker.total_requests == 2\n\n\ndef test_cost_calculation():\n    \"\"\"Test that costs are calculated correctly.\"\"\"\n    tracker = UsageTracker()\n\n    usage = Mock()\n    usage.request_tokens = 1000\n    usage.response_tokens = 500\n    usage.details = []\n\n    tracker.record_usage(\"openai:o4-mini\", usage)\n\n    # Check cost calculation (o4-mini pricing: $1.10/$0.275/$4.40 per 1M)\n    expected_input_cost = 1000 / 1_000_000 * 1.10\n    expected_output_cost = 500 / 1_000_000 * 4.40\n    expected_total = expected_input_cost + expected_output_cost\n\n    assert tracker.last_request[\"input_cost\"] == pytest.approx(expected_input_cost)\n    assert tracker.last_request[\"output_cost\"] == pytest.approx(expected_output_cost)\n    assert tracker.last_request[\"request_cost\"] == pytest.approx(expected_total)\n    assert tracker.total_cost == pytest.approx(expected_total)\n\n\ndef test_unknown_model_fallback():\n    \"\"\"Test that unknown models fall back to first model pricing.\"\"\"\n    tracker = UsageTracker()\n\n    usage = Mock()\n    usage.request_tokens = 1000\n    usage.response_tokens = 500\n    usage.details = []\n\n    tracker.record_usage(\"unknown:model\", usage)\n\n    # Should use first model's pricing (anthropic:claude-opus-4-0)\n    expected_input_cost = 1000 / 1_000_000 * 3.00\n    expected_output_cost = 500 / 1_000_000 * 15.00\n\n    assert tracker.last_request[\"input_cost\"] == pytest.approx(expected_input_cost)\n    assert tracker.last_request[\"output_cost\"] == pytest.approx(expected_output_cost)\n"
  },
  {
    "path": "tests/utils/__init__.py",
    "content": "\"\"\"Tests for utilities.\"\"\"\n"
  },
  {
    "path": "tests/utils/test_command_parser.py",
    "content": "\"\"\"Tests for command parser.\"\"\"\n\nfrom sidekick.ui import get_command_display_name\nfrom sidekick.utils.command import extract_commands, is_command_allowed\n\n\ndef test_simple_command():\n    assert extract_commands(\"ls -la\") == [\"ls\"]\n    assert extract_commands(\"mkdir -p /some/path\") == [\"mkdir\"]\n    assert extract_commands(\"cd /tmp\") == [\"cd\"]\n\n\ndef test_commands_with_paths():\n    assert extract_commands(\"/usr/bin/ls -la\") == [\"ls\"]\n    assert extract_commands(\"./scripts/deploy.sh\") == [\"deploy.sh\"]\n    assert extract_commands(\"/bin/bash script.sh\") == [\"bash\"]\n\n\ndef test_chained_commands_with_and():\n    assert extract_commands(\"ls && mkdir foo\") == [\"ls\", \"mkdir\"]\n    assert extract_commands(\"cd /tmp && ls -la && pwd\") == [\"cd\", \"ls\", \"pwd\"]\n\n\ndef test_chained_commands_with_or():\n    assert extract_commands(\"ls || echo 'failed'\") == [\"ls\", \"echo\"]\n\n\ndef test_piped_commands():\n    assert extract_commands(\"ls | grep foo\") == [\"ls\", \"grep\"]\n    assert extract_commands(\"cat file.txt | grep pattern | wc -l\") == [\"cat\", \"grep\", \"wc\"]\n\n\ndef test_semicolon_separated():\n    assert extract_commands(\"cd /tmp; ls\") == [\"cd\", \"ls\"]\n    assert extract_commands(\"echo 'hello'; echo 'world'\") == [\"echo\", \"echo\"]\n\n\ndef test_mixed_separators():\n    assert extract_commands(\"ls && cd /tmp; pwd | grep tmp\") == [\"ls\", \"cd\", \"pwd\", \"grep\"]\n\n\ndef test_quoted_arguments():\n    assert extract_commands('echo \"hello && world\"') == [\"echo\"]\n    assert extract_commands(\"echo 'ls | grep'\") == [\"echo\"]\n    assert extract_commands('mkdir \"my folder\" && ls') == [\"mkdir\", \"ls\"]\n\n\ndef test_empty_and_whitespace():\n    assert extract_commands(\"\") == []\n    assert extract_commands(\"   \") == []\n    assert extract_commands(\"ls &&   && pwd\") == [\"ls\", \"pwd\"]\n\n\ndef test_single_command_allowed():\n    allowed = {\"ls\", \"mkdir\", \"cd\"}\n    assert is_command_allowed(\"ls -la\", allowed) is True\n    assert is_command_allowed(\"rm -rf /\", allowed) is False\n\n\ndef test_all_commands_must_be_allowed():\n    allowed = {\"ls\", \"cd\"}\n    assert is_command_allowed(\"ls && cd /tmp\", allowed) is True\n    assert is_command_allowed(\"ls && rm file\", allowed) is False\n\n\ndef test_empty_allowed_set():\n    allowed = set()\n    assert is_command_allowed(\"ls\", allowed) is False\n\n\ndef test_command_with_path():\n    allowed = {\"ls\"}\n    assert is_command_allowed(\"/usr/bin/ls\", allowed) is True\n\n\ndef test_single_command_display():\n    assert get_command_display_name(\"ls -la\") == \"'ls'\"\n\n\ndef test_multiple_commands_display():\n    assert get_command_display_name(\"ls && cd\") == \"'ls', 'cd'\"\n    assert get_command_display_name(\"cat | grep | wc\") == \"'cat', 'grep', 'wc'\"\n"
  },
  {
    "path": "tests/utils/test_error_handler.py",
    "content": "\"\"\"Tests for error_handler utility.\"\"\"\n\nfrom unittest.mock import MagicMock\n\nimport pytest\nfrom pydantic_ai.exceptions import ModelHTTPError\n\nfrom sidekick.utils.error import (\n    extract_error_message,\n    handle_error,\n    save_error_log,\n    should_log_error,\n)\n\n\ndef test_extract_error_message_model_http_error():\n    \"\"\"Test extracting message from ModelHTTPError.\"\"\"\n    error = ModelHTTPError(\n        status_code=400,\n        model_name=\"TestModel\",\n        body={\"error\": {\"message\": \"Invalid API key\"}},\n    )\n\n    result = extract_error_message(error)\n    assert result == \"TestModel: Invalid API key\"\n\n\ndef test_extract_error_message_model_http_error_direct_message():\n    \"\"\"Test extracting message from ModelHTTPError with message in body.\"\"\"\n    error = ModelHTTPError(\n        status_code=429,\n        model_name=\"TestModel\",\n        body={\"message\": \"Rate limit exceeded\"},\n    )\n\n    result = extract_error_message(error)\n    assert result == \"TestModel: Rate limit exceeded\"\n\n\ndef test_extract_error_message_malformed_function_call():\n    \"\"\"Test extracting message for MALFORMED_FUNCTION_CALL error.\"\"\"\n    error = Exception(\"Content field missing, MALFORMED_FUNCTION_CALL\")\n\n    result = extract_error_message(error)\n    assert result == \"The AI model had trouble executing a function. Please try again.\"\n\n\ndef test_extract_error_message_content_field_missing():\n    \"\"\"Test extracting message for Content field missing error.\"\"\"\n    error = Exception(\"Content field missing from response\")\n\n    result = extract_error_message(error)\n    expected = (\n        \"The AI model returned an unexpected response format. This might be a temporary issue.\"\n    )\n    assert result == expected\n\n\ndef test_extract_error_message_long_error():\n    \"\"\"Test that long error messages are truncated.\"\"\"\n    long_message = \"x\" * 200\n    error = Exception(long_message)\n\n    result = extract_error_message(error)\n    assert len(result) < 200\n    assert result.endswith(\"...\")\n    assert \"Exception\" in result\n\n\ndef test_extract_error_message_provider_error_openai():\n    \"\"\"Test extracting message from OpenAI provider error.\"\"\"\n\n    class MockOpenAIError(Exception):\n        def __init__(self):\n            self.body = {\"error\": {\"message\": \"Rate limit exceeded\"}}\n            super().__init__()\n\n    error = MockOpenAIError()\n    error.__class__.__name__ = \"APIStatusError\"\n    error.__class__.__module__ = \"openai\"\n\n    result = extract_error_message(error)\n    assert result == \"OpenAI: Rate limit exceeded\"\n\n\ndef test_extract_error_message_provider_error_anthropic():\n    \"\"\"Test extracting message from Anthropic provider error.\"\"\"\n\n    class MockAnthropicError(Exception):\n        def __init__(self):\n            self.message = \"Invalid API key\"\n            super().__init__()\n\n    error = MockAnthropicError()\n    error.__class__.__name__ = \"AuthenticationError\"\n    error.__class__.__module__ = \"anthropic\"\n\n    result = extract_error_message(error)\n    assert result == \"Anthropic: Invalid API key\"\n\n\ndef test_extract_error_message_provider_error_google():\n    \"\"\"Test extracting message from Google provider error.\"\"\"\n\n    class MockGoogleError(Exception):\n        def __init__(self):\n            self.details = {\"error\": {\"message\": \"API key not valid\"}}\n            super().__init__()\n\n    error = MockGoogleError()\n    error.__class__.__name__ = \"ClientError\"\n    error.__class__.__module__ = \"google.genai\"\n\n    result = extract_error_message(error)\n    assert result == \"Google: API key not valid\"\n\n\ndef test_should_log_error_known_errors():\n    \"\"\"Test that known errors should not be logged.\"\"\"\n    from asyncio import CancelledError\n\n    # Known errors that shouldn't be logged\n    assert not should_log_error(CancelledError())\n    assert not should_log_error(KeyboardInterrupt())\n    assert not should_log_error(\n        ModelHTTPError(\n            status_code=400,\n            model_name=\"test\",\n            body={},\n        )\n    )\n\n\ndef test_should_log_error_unknown_errors():\n    \"\"\"Test that unknown errors should be logged.\"\"\"\n    assert should_log_error(ValueError(\"test\"))\n    assert should_log_error(Exception(\"test\"))\n    assert should_log_error(RuntimeError(\"test\"))\n\n\ndef test_save_error_log():\n    \"\"\"Test saving error log to file.\"\"\"\n    error = ValueError(\"Test error message\")\n\n    log_file = save_error_log(error)\n\n    # Check file exists\n    assert log_file.exists()\n    assert log_file.name.startswith(\"sidekick_error_\")\n    assert log_file.suffix == \".log\"\n\n    # Check content\n    content = log_file.read_text()\n    assert \"Sidekick Error Log\" in content\n    assert \"Test error message\" in content\n    assert \"ValueError\" in content\n    assert \"Traceback\" in content\n\n    # Clean up\n    log_file.unlink()\n\n\n@pytest.mark.asyncio\nasync def test_handle_error_with_logging():\n    \"\"\"Test handle_error function with error that should be logged.\"\"\"\n    mock_display = MagicMock()\n    error = ValueError(\"Unexpected error\")\n\n    await handle_error(error, mock_display)\n\n    # Check display function was called\n    mock_display.assert_called_once()\n    call_args = mock_display.call_args\n\n    # Check message\n    assert \"ValueError\" in call_args[0][0]\n    assert \"Unexpected error\" in call_args[0][0]\n\n    # Check detail contains log file path\n    assert \"detail\" in call_args[1]\n    assert \"Error log:\" in call_args[1][\"detail\"]\n    assert \"sidekick_error_\" in call_args[1][\"detail\"]\n\n\n@pytest.mark.asyncio\nasync def test_handle_error_without_logging():\n    \"\"\"Test handle_error function with error that shouldn't be logged.\"\"\"\n    mock_display = MagicMock()\n    error = ModelHTTPError(\n        status_code=400,\n        model_name=\"TestModel\",\n        body={\"error\": {\"message\": \"Bad request\"}},\n    )\n\n    await handle_error(error, mock_display)\n\n    # Check display function was called without detail\n    mock_display.assert_called_once_with(\"TestModel: Bad request\")\n\n\ndef test_extract_error_message_with_regex_extraction():\n    \"\"\"Test that regex extraction works for embedded messages.\"\"\"\n    # Test with a complex error message containing embedded JSON-like structure\n    error_msg = (\n        'APIError: {\"error\": {\"message\": \"Your credit balance is too low\", '\n        '\"type\": \"insufficient_funds\", \"code\": 1234}}'\n    )\n    error = Exception(error_msg)\n\n    result = extract_error_message(error)\n    assert \"Your credit balance is too low\" in result\n"
  },
  {
    "path": "tests/utils/test_guide.py",
    "content": "\"\"\"Tests for guide utility functions.\"\"\"\n\nfrom sidekick.utils.guide import load_guide\n\n\ndef test_load_guide_with_file(tmp_path, monkeypatch):\n    \"\"\"`load_guide` should return the file contents.\"\"\"\n\n    guide_content = \"# Project Guide\\nUse Python 3.11\"\n    guide_path = tmp_path / \"SIDEKICK.md\"\n    guide_path.write_text(guide_content)\n\n    monkeypatch.chdir(tmp_path)\n\n    result = load_guide()\n    assert result == guide_content\n\n\ndef test_load_guide_without_file(tmp_path, monkeypatch):\n    \"\"\"`load_guide` should return ``None`` when no guide file is present.\"\"\"\n\n    monkeypatch.chdir(tmp_path)\n\n    result = load_guide()\n    assert result is None\n"
  },
  {
    "path": "tests/utils/test_input.py",
    "content": "\"\"\"Tests for input utilities.\"\"\"\n\nfrom prompt_toolkit import PromptSession\nfrom prompt_toolkit.key_binding import KeyBindings\nfrom prompt_toolkit.styles import Style\n\nfrom sidekick.utils.input import (\n    PLACEHOLDER_STYLE,\n    PLACEHOLDER_TEXT,\n    PROMPT_CONTINUATION_INDENT,\n    PROMPT_SYMBOL,\n    create_multiline_keybindings,\n    create_multiline_prompt_session,\n    create_prompt_style,\n    prompt_continuation,\n)\n\n\ndef test_create_multiline_keybindings():\n    \"\"\"Test that keybindings are created correctly.\"\"\"\n    bindings = create_multiline_keybindings()\n    assert isinstance(bindings, KeyBindings)\n    assert len(bindings.bindings) > 0\n\n\ndef test_create_prompt_style():\n    \"\"\"Test that prompt style is created correctly.\"\"\"\n    style = create_prompt_style()\n    assert isinstance(style, Style)\n    style_dict = dict(style.style_rules)\n    assert \"placeholder\" in style_dict\n    assert style_dict[\"placeholder\"] == PLACEHOLDER_STYLE\n\n\ndef test_prompt_continuation():\n    \"\"\"Test prompt continuation returns correct indentation.\"\"\"\n    # Should always return the same indentation regardless of parameters\n    assert prompt_continuation(80, 0, False) == PROMPT_CONTINUATION_INDENT\n    assert prompt_continuation(120, 5, True) == PROMPT_CONTINUATION_INDENT\n    assert prompt_continuation(60, 10, False) == PROMPT_CONTINUATION_INDENT\n\n\ndef test_create_multiline_prompt_session():\n    \"\"\"Test that prompt session is created with correct configuration.\"\"\"\n    session = create_multiline_prompt_session()\n    assert isinstance(session, PromptSession)\n    assert session.multiline is True\n    assert session.prompt_continuation == prompt_continuation\n\n\ndef test_constants():\n    \"\"\"Test that constants have expected values.\"\"\"\n    assert PROMPT_SYMBOL == \"λ \"\n    assert PROMPT_CONTINUATION_INDENT == \"  \"\n    assert PLACEHOLDER_TEXT == \"Esc+Enter to submit, /help for commands\"\n"
  }
]