[
  {
    "path": ".gitignore",
    "content": ".aider*\nsession_dir/\n\ndata/*\n!data/mock.json\n!data/mock.db\n!data/mock.sqlite\n!data/analytics.json\n!data/analytics.db\n!data/analytics.sqlite\n!data/analytics.csv\n\nspecs/\n\npatterns.log\n\npaic-patterns.log\n\n.env\n\nrelevant_files.json\noutput_relevant_files.json\n\npackage-lock.json\n\nagent_workspace/\n__pycache__/\n*.pyc\n*.pyo\n*.pyd\n"
  },
  {
    "path": "CLAUDE.md",
    "content": "# CLAUDE.md - Single File Agents Repository\n\n## Commands\n- **Run agents**: `uv run <agent_filename.py> [options]`\n\n## Environment\n- Set API keys before running agents:\n  ```bash\n  export GEMINI_API_KEY='your-api-key-here'\n  export OPENAI_API_KEY='your-api-key-here'\n  export ANTHROPIC_API_KEY='your-api-key-here'\n  export FIRECRAWL_API_KEY='your-api-key-here'\n  ```\n\n## Code Style\n- Single file agents with embedded dependencies (using `uv`)\n- Dependencies specified at top of file in `/// script` comments\n- Include example usage in docstrings\n- Detailed error handling with user-friendly messages\n- Consistent format for command-line arguments\n\n## Structure\n- Each agent focuses on a single capability (DuckDB, SQLite, JQ, etc.)\n- Command-line arguments use argparse with consistent patterns\n- File naming: `sfa_<capability>_<provider>_v<version>.py`\n\n## Usage\n> We use astral `uv` as our python package manager.\n>\n> This enables us to run SINGLE FILE AGENTS with embedded dependencies.\n\nTo run an agent, use the following command:\n\n```bash\nuv run sfa_<capability>_<provider>_v<version>.py <arguments>\n```"
  },
  {
    "path": "README.md",
    "content": "# Single File Agents (SFA)\n> Premise #1: What if we could pack single purpose, powerful AI Agents into a single Python file?\n> \n> Premise #2: What's the best structural pattern for building Agents that can improve in capability as compute and intelligence increases?\n\n![Scale Your AI Coding Impact](images/scale-your-ai-coding-impact-with-devin-cursor-aider.png)\n\n![Single File Agents](images/single-file-agents-thumb.png)\n\n## What is this?\n\nA collection of powerful single-file agents built on top of [uv](https://github.com/astral-sh/uv) - the modern Python package installer and resolver. \n\nThese agents aim to do one thing and one thing only. They demonstrate precise prompt engineering and GenAI patterns for practical tasks, many of which I share on the [IndyDevDan YouTube channel](https://www.youtube.com/@indydevdan). Watch us walk through the Single File Agent in [this video](https://youtu.be/YAIJV48QlXc).\n\nYou can also check out [this video](https://youtu.be/vq-vTsbSSZ0) where we use [Devin](https://devin.ai/), [Cursor](https://www.cursor.com/), [Aider](https://aider.chat/), and [PAIC-Patterns](https://agenticengineer.com/principled-ai-coding) to build three new agents with powerful spec (plan) prompts.\n\nThis repo contains a few agents built across the big 3 GenAI providers (Gemini, OpenAI, Anthropic).\n\n## Quick Start\n\nExport your API keys:\n\n```bash\nexport GEMINI_API_KEY='your-api-key-here'\n\nexport OPENAI_API_KEY='your-api-key-here'\n\nexport ANTHROPIC_API_KEY='your-api-key-here'\n\nexport FIRECRAWL_API_KEY='your-api-key-here' # Get your API key from https://www.firecrawl.dev/\n```\n\nJQ Agent:\n\n```bash\nuv run sfa_jq_gemini_v1.py --exe \"Filter scores above 80 from data/analytics.json and save to high_scores.json\"\n```\n\nDuckDB Agent (OpenAI):\n\n```bash\n# Top tier\nuv run sfa_duckdb_openai_v2.py -d ./data/analytics.db -p \"Show me all users with score above 80\"\n```\n\nDuckDB Agent (Anthropic):\n\n```bash\n# Top 
tier\nuv run sfa_duckdb_anthropic_v2.py -d ./data/analytics.db -p \"Show me all users with score above 80\"\n```\n\nDuckDB Agent (Gemini):\n\n```bash\n# Buggy but usually works\nuv run sfa_duckdb_gemini_v2.py -d ./data/analytics.db -p \"Show me all users with score above 80\"\n```\n\nSQLite Agent (OpenAI):\n\n```bash\nuv run sfa_sqlite_openai_v2.py -d ./data/analytics.sqlite -p \"Show me all users with score above 80\"\n```\n\nMeta Prompt Generator:\n\n```bash\nuv run sfa_meta_prompt_openai_v1.py \\\n    --purpose \"generate mermaid diagrams\" \\\n    --instructions \"generate a mermaid valid chart, use diagram type specified or default flow, use examples to understand the structure of the output\" \\\n    --sections \"user-prompt\" \\\n    --variables \"user-prompt\"\n```\n\n### Bash Editor Agent (Anthropic)\n> (sfa_bash_editor_agent_anthropic_v2.py)\n\nAn AI-powered assistant that can both edit files and execute bash commands using Claude's tool use capabilities.\n\nExample usage:\n```bash\n# View a file\nuv run sfa_bash_editor_agent_anthropic_v2.py --prompt \"Show me the first 10 lines of README.md\"\n\n# Create a new file\nuv run sfa_bash_editor_agent_anthropic_v2.py --prompt \"Create a new file called hello.txt with 'Hello World!' in it\"\n\n# Replace text in a file\nuv run sfa_bash_editor_agent_anthropic_v2.py --prompt \"Create a new file called hello.txt with 'Hello World!' in it. 
Then update hello.txt to say 'Hello AI Coding World'\"\n\n# Execute a bash command\nuv run sfa_bash_editor_agent_anthropic_v2.py --prompt \"List all Python files in the current directory sorted by size\"\n```\n\n### Polars CSV Agent (OpenAI)\n> (sfa_polars_csv_agent_openai_v2.py)\n\nAn AI-powered assistant that generates and executes Polars data transformations for CSV files using OpenAI's function calling capabilities.\n\nExample usage:\n```bash\n# Run Polars CSV agent with default compute loops (10)\nuv run sfa_polars_csv_agent_openai_v2.py -i \"data/analytics.csv\" -p \"What is the average age of the users?\"\n\n# Run with custom compute loops\nuv run sfa_polars_csv_agent_openai_v2.py -i \"data/analytics.csv\" -p \"What is the average age of the users?\" -c 5\n```\n\n### Web Scraper Agent (OpenAI)\n> (sfa_scrapper_agent_openai_v2.py)\n\nAn AI-powered web scraping and content filtering assistant that uses OpenAI's function calling capabilities and the Firecrawl API for efficient web scraping.\n\nExample usage:\n```bash\n# Basic scraping with markdown list output\nuv run sfa_scrapper_agent_openai_v2.py -u \"https://example.com\" -p \"Scrape and format each sentence as a separate line in a markdown list\" -o \"example.md\"\n\n# Advanced scraping with specific content extraction\nuv run sfa_scrapper_agent_openai_v2.py \\\n    --url https://agenticengineer.com/principled-ai-coding \\\n    --prompt \"What are the names and descriptions of each lesson?\" \\\n    --output-file-path paic-lessons.md \\\n    -c 10\n```\n\n## Features\n\n- **Self-contained**: Each agent is a single file with embedded dependencies\n- **Minimal, Precise Agents**: Carefully crafted prompts for small agents that can do one thing really well\n- **Modern Python**: Built on uv for fast, reliable dependency management\n- **Run From The Cloud**: With uv, you can run these scripts from your server or right from a gist (see my gists commands)\n- **Patternful**: Building effective agents is about 
setting up the right prompts, tools, and process for your use case. Once you set up a great pattern, you can re-use it over and over. That's part of the magic of these SFAs. \n\n## Test Data\n\nThe project includes a test DuckDB database (`data/analytics.db`), a SQLite database (`data/analytics.sqlite`), and a JSON file (`data/analytics.json`) for testing purposes. The database contains sample user data with the following characteristics:\n\n### User Table\n- 30 sample users with varied attributes\n- Fields: id (UUID), name, age, city, score, is_active, status, created_at\n- Test data includes:\n  - Names: Alice, Bob, Charlie, Diana, Eric, Fiona, Jane, John\n  - Cities: Berlin, London, New York, Paris, Singapore, Sydney, Tokyo, Toronto\n  - Status values: active, inactive, pending, archived\n  - Age range: 20-65\n  - Score range: 3.1-96.18\n  - Date range: 2023-2025\n\nPerfect for testing filtering, sorting, and aggregation operations with realistic data variations.\n\n## Agents\n> Note: We're using the term 'agent' loosely for some of these SFAs. 
We have prompts, prompt chains, and a couple of official Agents.\n\n### JQ Command Agent \n> (sfa_jq_gemini_v1.py)\n\nAn AI-powered assistant that generates precise jq commands for JSON processing.\n\nExample usage:\n```bash\n# Generate and execute a jq command\nuv run sfa_jq_gemini_v1.py --exe \"Filter scores above 80 from data/analytics.json and save to high_scores.json\"\n\n# Generate command only\nuv run sfa_jq_gemini_v1.py \"Filter scores above 80 from data/analytics.json and save to high_scores.json\"\n```\n\n### DuckDB Agents \n> (sfa_duckdb_openai_v2.py, sfa_duckdb_anthropic_v2.py, sfa_duckdb_gemini_v2.py, sfa_duckdb_gemini_v1.py)\n\nWe have three DuckDB agents that demonstrate different approaches and capabilities across major AI providers:\n\n#### DuckDB OpenAI Agent (sfa_duckdb_openai_v2.py, sfa_duckdb_openai_v1.py)\nAn AI-powered assistant that generates and executes DuckDB SQL queries using OpenAI's function calling capabilities.\n\nExample usage:\n```bash\n# Run DuckDB agent with default compute loops (10)\nuv run sfa_duckdb_openai_v2.py -d ./data/analytics.db -p \"Show me all users with score above 80\"\n\n# Run with custom compute loops\nuv run sfa_duckdb_openai_v2.py -d ./data/analytics.db -p \"Show me all users with score above 80\" -c 5\n```\n\n#### DuckDB Anthropic Agent (sfa_duckdb_anthropic_v2.py)\nAn AI-powered assistant that generates and executes DuckDB SQL queries using Claude's tool use capabilities.\n\nExample usage:\n```bash\n# Run DuckDB agent with default compute loops (10)\nuv run sfa_duckdb_anthropic_v2.py -d ./data/analytics.db -p \"Show me all users with score above 80\"\n\n# Run with custom compute loops\nuv run sfa_duckdb_anthropic_v2.py -d ./data/analytics.db -p \"Show me all users with score above 80\" -c 5\n```\n\n#### DuckDB Gemini Agent (sfa_duckdb_gemini_v2.py)\nAn AI-powered assistant that generates and executes DuckDB SQL queries using Gemini's function calling capabilities.\n\nExample usage:\n```bash\n# Run DuckDB 
agent with default compute loops (10)\nuv run sfa_duckdb_gemini_v2.py -d ./data/analytics.db -p \"Show me all users with score above 80\"\n\n# Run with custom compute loops\nuv run sfa_duckdb_gemini_v2.py -d ./data/analytics.db -p \"Show me all users with score above 80\" -c 5\n```\n\n### Meta Prompt Generator (sfa_meta_prompt_openai_v1.py)\nAn AI-powered assistant that generates comprehensive, structured prompts for language models.\n\nExample usage:\n```bash\n# Generate a meta prompt using command-line arguments.\n# Optional arguments are marked with a ?.\nuv run sfa_meta_prompt_openai_v1.py \\\n    --purpose \"generate mermaid diagrams\" \\\n    --instructions \"generate a mermaid valid chart, use diagram type specified or default flow, use examples to understand the structure of the output\" \\\n    --sections \"examples, user-prompt\" \\\n    --examples \"create examples of 3 basic mermaid charts with <user-chart-request> and <chart-response> blocks\" \\\n    --variables \"user-prompt\"\n\n# Without optional arguments, the script will enter interactive mode.\nuv run sfa_meta_prompt_openai_v1.py \\\n    --purpose \"generate mermaid diagrams\" \\\n    --instructions \"generate a mermaid valid chart, use diagram type specified or default flow, use examples to understand the structure of the output\"\n\n# Interactive Mode\n# Just run the script without any flags to enter interactive mode.\n# You'll be prompted step by step for:\n# - Purpose (required): The main goal of your prompt\n# - Instructions (required): Detailed instructions for the model\n# - Sections (optional): Additional sections to include\n# - Examples (optional): Example inputs and outputs\n# - Variables (optional): Placeholders for dynamic content\nuv run sfa_meta_prompt_openai_v1.py\n```\n\n### Git Agent\n> Up for a challenge?\n\n## Requirements\n\n- Python 3.8+\n- uv package manager\n- GEMINI_API_KEY (for Gemini-based agents)\n- OPENAI_API_KEY (for OpenAI-based agents) \n- ANTHROPIC_API_KEY (for 
Anthropic-based agents)\n- jq command-line JSON processor (for JQ agent)\n- DuckDB CLI (for DuckDB agents)\n\n### Installing Required Tools\n\n#### jq Installation\n\nmacOS:\n```bash\nbrew install jq\n```\n\nWindows:\n- Download from [stedolan.github.io/jq/download](https://stedolan.github.io/jq/download/)\n- Or install with Chocolatey: `choco install jq`\n\n#### DuckDB Installation\n\nmacOS:\n```bash\nbrew install duckdb\n```\n\nWindows:\n- Download the CLI executable from [duckdb.org/docs/installation](https://duckdb.org/docs/installation)\n- Add the executable location to your system PATH\n\n## Installation\n\n1. Install uv:\n```bash\ncurl -LsSf https://astral.sh/uv/install.sh | sh\n```\n\n2. Clone this repository:\n```bash\ngit clone <repository-url>\n```\n\n3. Set your Gemini API key (for JQ generator):\n```bash\nexport GEMINI_API_KEY='your-api-key-here'\n\n# Set your OpenAI API key (for DuckDB agents):\nexport OPENAI_API_KEY='your-api-key-here'\n\n# Set your Anthropic API key (for DuckDB agents):\nexport ANTHROPIC_API_KEY='your-api-key-here'\n```\n\n## Shout Outs + Resources for you\n- [uv](https://github.com/astral/uv) - The engineers creating uv are built different. Thank you for fixing the python ecosystem.\n- [Simon Willison](https://simonwillison.net) - Simon introduced me to the fact that you can [use uv to run single file python scripts](https://simonwillison.net/2024/Aug/20/uv-unified-python-packaging/) with dependencies. Massive thanks for all your work. He runs one of the most valuable blogs for engineers in the world.\n- [Building Effective Agents](https://www.anthropic.com/research/building-effective-agents) - A proper breakdown of how to build useful units of value built on top of GenAI.\n- [Part Time Larry](https://youtu.be/zm0Vo6Di3V8?si=oBetAgc5ifhBmK03) - Larry has a great breakdown on the new Python GenAI library and delivers great hands on, actionable GenAI x Finance information.\n- [Aider](https://aider.chat/) - AI Coding done right. 
Maximum control over your AI Coding Experience. Enough said.\n\n---\n\n- [New Gemini Python SDK](https://github.com/google-gemini/generative-ai-python)\n- [Anthropic Agent Chatbot Example](https://github.com/anthropics/courses/blob/master/tool_use/06_chatbot_with_multiple_tools.ipynb)\n- [Anthropic Customer Service Agent](https://github.com/anthropics/anthropic-cookbook/blob/main/tool_use/customer_service_agent.ipynb)\n\n## AI Coding\n\n## Context Priming\nRead README.md, CLAUDE.md, ai_docs/*, and run git ls-files to understand this codebase.\n\n## License\n\nMIT License - feel free to use this code in your own projects.\n\nIf you find value from my work: give a shout out and tag my YT channel [IndyDevDan](https://www.youtube.com/@indydevdan).\n"
  },
  {
    "path": "ai_docs/anthropic-new-text-editor.md",
    "content": "Claude can use an Anthropic-defined text editor tool to view and modify text files, helping you debug, fix, and improve your code or other text documents. This allows Claude to directly interact with your files, providing hands-on assistance rather than just suggesting changes.\n\n## Before using the text editor tool\n\n### Use a compatible model\n\nAnthropic's text editor tool is only available for Claude 3.5 Sonnet and Claude 3.7 Sonnet:\n\n* **Claude 3.7 Sonnet**: `text_editor_20250124`\n* **Claude 3.5 Sonnet**: `text_editor_20241022`\n\nBoth versions provide identical capabilities - the version you use should match the model you're working with.\n\n### Assess your use case fit\n\nSome examples of when to use the text editor tool are:\n\n* **Code debugging**: Have Claude identify and fix bugs in your code, from syntax errors to logic issues.\n* **Code refactoring**: Let Claude improve your code structure, readability, and performance through targeted edits.\n* **Documentation generation**: Ask Claude to add docstrings, comments, or README files to your codebase.\n* **Test creation**: Have Claude create unit tests for your code based on its understanding of the implementation.\n\n---\n\n## Use the text editor tool\n\nProvide the text editor tool (named `str_replace_editor`) to Claude using the Messages API:\n\nThe text editor tool can be used in the following way:\n\n### Text editor tool commands\n\nThe text editor tool supports several commands for viewing and modifying files:\n\n#### view\n\nThe `view` command allows Claude to examine the contents of a file. It can read the entire file or a specific range of lines.\n\nParameters:\n\n* `command`: Must be \"view\"\n* `path`: The path to the file to view\n* `view_range` (optional): An array of two integers specifying the start and end line numbers to view. 
Line numbers are 1-indexed, and -1 for the end line means read to the end of the file.\n\n#### str\\_replace\n\nThe `str_replace` command allows Claude to replace a specific string in a file with a new string. This is used for making precise edits.\n\nParameters:\n\n* `command`: Must be \"str\\_replace\"\n* `path`: The path to the file to modify\n* `old_str`: The text to replace (must match exactly, including whitespace and indentation)\n* `new_str`: The new text to insert in place of the old text\n\n#### create\n\nThe `create` command allows Claude to create a new file with specified content.\n\nParameters:\n\n* `command`: Must be \"create\"\n* `path`: The path where the new file should be created\n* `file_text`: The content to write to the new file\n\n#### insert\n\nThe `insert` command allows Claude to insert text at a specific location in a file.\n\nParameters:\n\n* `command`: Must be \"insert\"\n* `path`: The path to the file to modify\n* `insert_line`: The line number after which to insert the text (0 for beginning of file)\n* `new_str`: The text to insert\n\n#### undo\\_edit\n\nThe `undo_edit` command allows Claude to revert the last edit made to a file.\n\nParameters:\n\n* `command`: Must be \"undo\\_edit\"\n* `path`: The path to the file whose last edit should be undone\n\n### Example: Fixing a syntax error with the text editor tool\n\nThis example demonstrates how Claude uses the text editor tool to fix a syntax error in a Python file.\n\nFirst, your application provides Claude with the text editor tool and a prompt to fix a syntax error:\n\nClaude will use the text editor tool first to view the file:\n\nYour application should then read the file and return its contents to Claude:\n\nClaude will identify the syntax error and use the `str_replace` command to fix it:\n\nYour application should then make the edit and return the result:\n\nFinally, Claude will provide a complete explanation of the fix:\n\n---\n\n## Implement the text editor tool\n\nThe text 
editor tool is implemented as a schema-less tool, identified by `type: \"text_editor_20250124\"`. When using this tool, you don't need to provide an input schema as with other tools; the schema is built into Claude's model and can't be modified.\n\n### Handle errors\n\nWhen using the text editor tool, various errors may occur. Here is guidance on how to handle them:\n\n### Follow implementation best practices\n\n---\n\n## Pricing and token usage\n\nThe text editor tool uses the same pricing structure as other tools used with Claude. It follows the standard input and output token pricing based on the Claude model you're using.\n\nIn addition to the base tokens, the following additional input tokens are needed for the text editor tool:\n\n| Tool | Additional input tokens |\n| --- | --- |\n| `text_editor_20241022` (Claude 3.5 Sonnet) | 700 tokens |\n| `text_editor_20250124` (Claude 3.7 Sonnet) | 700 tokens |\n\nFor more detailed information about tool pricing, see [Tool use pricing](/en/docs/build-with-claude/tool-use#pricing).\n\n## Integrate the text editor tool with computer use\n\nThe text editor tool can be used alongside the [computer use tool](/en/docs/agents-and-tools/computer-use) and other Anthropic-defined tools. When combining these tools, you'll need to:\n\n1. Include the appropriate beta header (if using with computer use)\n2. Match the tool version with the model you're using\n3. Account for the additional token usage for all tools included in your request\n\nFor more information about using the text editor tool in a computer use context, see the [Computer use](/en/docs/agents-and-tools/computer-use) documentation.\n\n## Change log\n\n| Date | Version | Changes |\n| --- | --- | --- |\n| March 13, 2025 | `text_editor_20250124` | Introduction of standalone Text Editor Tool documentation. This version is optimized for Claude 3.7 Sonnet but has identical capabilities to the previous version. 
|\n| October 22, 2024 | `text_editor_20241022` | Initial release of the Text Editor Tool with Claude 3.5 Sonnet. Provides capabilities for viewing, creating, and editing files through the `view`, `create`, `str_replace`, `insert`, and `undo_edit` commands. |\n\n## Next steps\n\nHere are some ideas for how to use the text editor tool in more convenient and powerful ways:\n\n* **Integrate with your development workflow**: Build the text editor tool into your development tools or IDE\n* **Create a code review system**: Have Claude review your code and make improvements\n* **Build a debugging assistant**: Create a system where Claude can help you diagnose and fix issues in your code\n* **Implement file format conversion**: Let Claude help you convert files from one format to another\n* **Automate documentation**: Set up workflows for Claude to automatically document your code\n\nAs you build applications with the text editor tool, we're excited to see how you leverage Claude's capabilities to enhance your development workflow and productivity."
  },
  {
    "path": "ai_docs/anthropic-token-efficient-tool-use.md",
    "content": "# Token-Efficient Tool Use\n\nThe upgraded Claude 3.7 Sonnet model is capable of calling tools in a token-efficient manner. Requests save an average of 14% in output tokens, up to 70%, which also reduces latency. Exact token reduction and latency improvements depend on the overall response shape and size.\n\nTo use this beta feature, simply add the beta header `token-efficient-tools-2025-02-19` to a tool use request with `claude-3-7-sonnet-20250219`. If you are using the SDK, ensure that you are using the beta SDK with `anthropic.beta.messages`.\n\nHere's an example of how to use token-efficient tools with the API:\n\n```python\n# Sample code to demonstrate token-efficient tools\nimport anthropic\n\nclient = anthropic.Anthropic()\n\n# Use the beta Messages endpoint; `betas` adds the\n# anthropic-beta header to the request\nresponse = client.beta.messages.create(\n    model=\"claude-3-7-sonnet-20250219\",\n    max_tokens=1000,\n    betas=[\"token-efficient-tools-2025-02-19\"],\n    tools=[{\n        \"name\": \"get_weather\",\n        \"description\": \"Get the current weather for a location\",\n        \"input_schema\": {\n            \"type\": \"object\",\n            \"properties\": {\n                \"location\": {\n                    \"type\": \"string\",\n                    \"description\": \"The city and state\"\n                }\n            },\n            \"required\": [\"location\"]\n        }\n    }],\n    messages=[{\n        \"role\": \"user\",\n        \"content\": \"What's the weather in San Francisco?\"\n    }]\n)\n```\n\nThe above request should, on average, use fewer input and output tokens than a normal request. 
To confirm this, try making the same request but remove `token-efficient-tools-2025-02-19` from the beta headers list.\n\n# Text Editor Tool\n\nClaude can use an Anthropic-defined text editor tool to view and modify text files, helping you debug, fix, and improve your code or other text documents. This allows Claude to directly interact with your files, providing hands-on assistance rather than just suggesting changes.\n\n## Before using the text editor tool\n\n### Use a compatible model\n\nAnthropic's text editor tool is only available for Claude 3.5 Sonnet and Claude 3.7 Sonnet:\n\n* **Claude 3.7 Sonnet**: `text_editor_20250124`\n* **Claude 3.5 Sonnet**: `text_editor_20241022`\n\nBoth versions provide identical capabilities - the version you use should match the model you're working with.\n\n### Assess your use case fit\n\nSome examples of when to use the text editor tool are:\n\n* **Code debugging**: Have Claude identify and fix bugs in your code, from syntax errors to logic issues.\n* **Code refactoring**: Let Claude improve your code structure, readability, and performance through targeted edits.\n* **Documentation generation**: Ask Claude to add docstrings, comments, or README files to your codebase.\n* **Test creation**: Have Claude create unit tests for your code based on its understanding of the implementation.\n\n## Text editor tool commands\n\nThe text editor tool supports several commands for viewing and modifying files:\n\n### view\n\nThe `view` command allows Claude to examine the contents of a file. It can read the entire file or a specific range of lines.\n\nParameters:\n\n* `command`: Must be \"view\"\n* `path`: The path to the file to view\n* `view_range` (optional): An array of two integers specifying the start and end line numbers to view. Line numbers are 1-indexed, and -1 for the end line means read to the end of the file.\n\n### str_replace\n\nThe `str_replace` command allows Claude to replace a specific string in a file with a new string. 
This is used for making precise edits.\n\nParameters:\n\n* `command`: Must be \"str_replace\"\n* `path`: The path to the file to modify\n* `old_str`: The text to replace (must match exactly, including whitespace and indentation)\n* `new_str`: The new text to insert in place of the old text\n\n### create\n\nThe `create` command allows Claude to create a new file with specified content.\n\nParameters:\n\n* `command`: Must be \"create\"\n* `path`: The path where the new file should be created\n* `file_text`: The content to write to the new file\n\n### insert\n\nThe `insert` command allows Claude to insert text at a specific location in a file.\n\nParameters:\n\n* `command`: Must be \"insert\"\n* `path`: The path to the file to modify\n* `insert_line`: The line number after which to insert the text (0 for beginning of file)\n* `new_str`: The text to insert\n\n### undo_edit\n\nThe `undo_edit` command allows Claude to revert the last edit made to a file.\n\nParameters:\n\n* `command`: Must be \"undo_edit\"\n* `path`: The path to the file whose last edit should be undone\n\n## Pricing and token usage\n\nThe text editor tool uses the same pricing structure as other tools used with Claude. It follows the standard input and output token pricing based on the Claude model you're using.\n\nIn addition to the base tokens, the following additional input tokens are needed for the text editor tool:\n\n| Tool | Additional input tokens |\n| --- | --- |\n| `text_editor_20241022` (Claude 3.5 Sonnet) | 700 tokens |\n| `text_editor_20250124` (Claude 3.7 Sonnet) | 700 tokens |\n\n## Change log\n\n| Date | Version | Changes |\n| --- | --- | --- |\n| March 13, 2025 | `text_editor_20250124` | Introduction of standalone Text Editor Tool documentation. This version is optimized for Claude 3.7 Sonnet but has identical capabilities to the previous version. |\n| October 22, 2024 | `text_editor_20241022` | Initial release of the Text Editor Tool with Claude 3.5 Sonnet. 
Provides capabilities for viewing, creating, and editing files through the `view`, `create`, `str_replace`, `insert`, and `undo_edit` commands. |"
  },
  {
    "path": "ai_docs/building-eff-agents.md",
    "content": "Product\n\n# Building effective agents\n\nDec 19, 2024\n\nOver the past year, we've worked with dozens of teams building large language model (LLM) agents across industries. Consistently, the most successful implementations weren't using complex frameworks or specialized libraries. Instead, they were building with simple, composable patterns.\n\nIn this post, we share what we’ve learned from working with our customers and building agents ourselves, and give practical advice for developers on building effective agents.\n\n## What are agents?\n\n\"Agent\" can be defined in several ways. Some customers define agents as fully autonomous systems that operate independently over extended periods, using various tools to accomplish complex tasks. Others use the term to describe more prescriptive implementations that follow predefined workflows. At Anthropic, we categorize all these variations as **agentic systems**, but draw an important architectural distinction between **workflows** and **agents**:\n\n- **Workflows** are systems where LLMs and tools are orchestrated through predefined code paths.\n- **Agents**, on the other hand, are systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks.\n\nBelow, we will explore both types of agentic systems in detail. In Appendix 1 (“Agents in Practice”), we describe two domains where customers have found particular value in using these kinds of systems.\n\n## When (and when not) to use agents\n\nWhen building applications with LLMs, we recommend finding the simplest solution possible, and only increasing complexity when needed. This might mean not building agentic systems at all. 
Agentic systems often trade latency and cost for better task performance, and you should consider when this tradeoff makes sense.\n\nWhen more complexity is warranted, workflows offer predictability and consistency for well-defined tasks, whereas agents are the better option when flexibility and model-driven decision-making are needed at scale. For many applications, however, optimizing single LLM calls with retrieval and in-context examples is usually enough.\n\n## When and how to use frameworks\n\nThere are many frameworks that make agentic systems easier to implement, including:\n\n- [LangGraph](https://langchain-ai.github.io/langgraph/) from LangChain;\n- Amazon Bedrock's [AI Agent framework](https://aws.amazon.com/bedrock/agents/);\n- [Rivet](https://rivet.ironcladapp.com/), a drag and drop GUI LLM workflow builder; and\n- [Vellum](https://www.vellum.ai/), another GUI tool for building and testing complex workflows.\n\nThese frameworks make it easy to get started by simplifying standard low-level tasks like calling LLMs, defining and parsing tools, and chaining calls together. However, they often create extra layers of abstraction that can obscure the underlying prompts and responses, making them harder to debug. They can also make it tempting to add complexity when a simpler setup would suffice.\n\nWe suggest that developers start by using LLM APIs directly: many patterns can be implemented in a few lines of code. If you do use a framework, ensure you understand the underlying code. Incorrect assumptions about what's under the hood are a common source of customer error.\n\nSee our [cookbook](https://github.com/anthropics/anthropic-cookbook/tree/main/patterns/agents) for some sample implementations.\n\n## Building blocks, workflows, and agents\n\nIn this section, we'll explore the common patterns for agentic systems we've seen in production. 
We'll start with our foundational building block—the augmented LLM—and progressively increase complexity, from simple compositional workflows to autonomous agents.\n\n### Building block: The augmented LLM\n\nThe basic building block of agentic systems is an LLM enhanced with augmentations such as retrieval, tools, and memory. Our current models can actively use these capabilities—generating their own search queries, selecting appropriate tools, and determining what information to retain.\n\n![](https://www.anthropic.com/_next/image?url=https%3A%2F%2Fwww-cdn.anthropic.com%2Fimages%2F4zrzovbb%2Fwebsite%2Fd3083d3f40bb2b6f477901cc9a240738d3dd1371-2401x1000.png&w=3840&q=75)The augmented LLM\n\nWe recommend focusing on two key aspects of the implementation: tailoring these capabilities to your specific use case and ensuring they provide an easy, well-documented interface for your LLM. While there are many ways to implement these augmentations, one approach is through our recently released [Model Context Protocol](https://www.anthropic.com/news/model-context-protocol), which allows developers to integrate with a growing ecosystem of third-party tools with a simple [client implementation](https://modelcontextprotocol.io/tutorials/building-a-client#building-mcp-clients).\n\nFor the remainder of this post, we'll assume each LLM call has access to these augmented capabilities.\n\n### Workflow: Prompt chaining\n\nPrompt chaining decomposes a task into a sequence of steps, where each LLM call processes the output of the previous one. 
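The sequence-of-calls idea can be sketched in a few lines. This is a minimal, illustrative sketch only: `call_llm` is a hypothetical stub standing in for a real model call (e.g. a Messages API request) so the control flow is runnable offline, and the gate is a trivial length check.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stub for a real LLM API call; echoes the prompt
    # so the chain's data flow is visible without network access.
    return f"response to: {prompt}"

def gate(text: str) -> bool:
    # Programmatic check between steps: reject empty or runaway output.
    return 0 < len(text) < 10_000

def chain(task: str) -> str:
    # Step 1: produce an outline.
    outline = call_llm(f"Write an outline for: {task}")
    if not gate(outline):
        raise ValueError("outline failed the gate check")
    # Step 2: the second call consumes the first call's output.
    return call_llm(f"Write the document for this outline:\n{outline}")
```

Each step is an easier, narrower task than the original request, which is where the accuracy gain comes from.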
You can add programmatic checks (see \"gate\" in the diagram below) on any intermediate steps to ensure that the process is still on track.\n\n![](https://www.anthropic.com/_next/image?url=https%3A%2F%2Fwww-cdn.anthropic.com%2Fimages%2F4zrzovbb%2Fwebsite%2F7418719e3dab222dccb379b8879e1dc08ad34c78-2401x1000.png&w=3840&q=75)The prompt chaining workflow\n\n**When to use this workflow:** This workflow is ideal for situations where the task can be easily and cleanly decomposed into fixed subtasks. The main goal is to trade off latency for higher accuracy, by making each LLM call an easier task.\n\n**Examples where prompt chaining is useful:**\n\n- Generating marketing copy, then translating it into a different language.\n- Writing an outline of a document, checking that the outline meets certain criteria, then writing the document based on the outline.\n\n### Workflow: Routing\n\nRouting classifies an input and directs it to a specialized followup task. This workflow allows for separation of concerns, and building more specialized prompts. 
Without this workflow, optimizing for one kind of input can hurt performance on other inputs.\n\n![](https://www.anthropic.com/_next/image?url=https%3A%2F%2Fwww-cdn.anthropic.com%2Fimages%2F4zrzovbb%2Fwebsite%2F5c0c0e9fe4def0b584c04d37849941da55e5e71c-2401x1000.png&w=3840&q=75)The routing workflow\n\n**When to use this workflow:** Routing works well for complex tasks where there are distinct categories that are better handled separately, and where classification can be handled accurately, either by an LLM or a more traditional classification model/algorithm.\n\n**Examples where routing is useful:**\n\n- Directing different types of customer service queries (general questions, refund requests, technical support) into different downstream processes, prompts, and tools.\n- Routing easy/common questions to smaller models like Claude 3.5 Haiku and hard/unusual questions to more capable models like Claude 3.5 Sonnet to optimize cost and speed.\n\n### Workflow: Parallelization\n\nLLMs can sometimes work simultaneously on a task and have their outputs aggregated programmatically. This workflow, parallelization, manifests in two key variations:\n\n- **Sectioning**: Breaking a task into independent subtasks run in parallel.\n- **Voting:** Running the same task multiple times to get diverse outputs.\n\n![](https://www.anthropic.com/_next/image?url=https%3A%2F%2Fwww-cdn.anthropic.com%2Fimages%2F4zrzovbb%2Fwebsite%2F406bb032ca007fd1624f261af717d70e6ca86286-2401x1000.png&w=3840&q=75)The parallelization workflow\n\n**When to use this workflow:** Parallelization is effective when the divided subtasks can be parallelized for speed, or when multiple perspectives or attempts are needed for higher confidence results. 
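As a concrete sketch of the voting variation, with a deterministic `call_llm` stub standing in for fan-out provider API calls:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Voting sketch: run several prompts in parallel and aggregate programmatically.
# call_llm is a deterministic stand-in for a real model call.
def call_llm(prompt: str) -> str:
    return 'flag' if 'injection' in prompt else 'ok'

def majority_vote(prompts: list[str]) -> str:
    with ThreadPoolExecutor() as pool:
        votes = list(pool.map(call_llm, prompts))  # independent parallel calls
    return Counter(votes).most_common(1)[0][0]     # programmatic aggregation
```

A real implementation could weight votes or require a threshold rather than a simple majority, trading false positives against false negatives.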
For complex tasks with multiple considerations, LLMs generally perform better when each consideration is handled by a separate LLM call, allowing focused attention on each specific aspect.\n\n**Examples where parallelization is useful:**\n\n- **Sectioning**:\n  - Implementing guardrails where one model instance processes user queries while another screens them for inappropriate content or requests. This tends to perform better than having the same LLM call handle both guardrails and the core response.\n  - Automating evals for evaluating LLM performance, where each LLM call evaluates a different aspect of the model’s performance on a given prompt.\n- **Voting**:\n  - Reviewing a piece of code for vulnerabilities, where several different prompts review and flag the code if they find a problem.\n  - Evaluating whether a given piece of content is inappropriate, with multiple prompts evaluating different aspects or requiring different vote thresholds to balance false positives and negatives.\n\n### Workflow: Orchestrator-workers\n\nIn the orchestrator-workers workflow, a central LLM dynamically breaks down tasks, delegates them to worker LLMs, and synthesizes their results.\n\n![](https://www.anthropic.com/_next/image?url=https%3A%2F%2Fwww-cdn.anthropic.com%2Fimages%2F4zrzovbb%2Fwebsite%2F8985fc683fae4780fb34eab1365ab78c7e51bc8e-2401x1000.png&w=3840&q=75)The orchestrator-workers workflow\n\n**When to use this workflow:** This workflow is well-suited for complex tasks where you can’t predict the subtasks needed (in coding, for example, the number of files that need to be changed and the nature of the change in each file likely depend on the task). 
While it’s topographically similar to parallelization, the key difference is its flexibility—subtasks aren't pre-defined, but determined by the orchestrator based on the specific input.\n\n**Examples where orchestrator-workers is useful:**\n\n- Coding products that make complex changes to multiple files each time.\n- Search tasks that involve gathering and analyzing information from multiple sources for possible relevant information.\n\n### Workflow: Evaluator-optimizer\n\nIn the evaluator-optimizer workflow, one LLM call generates a response while another provides evaluation and feedback in a loop.\n\n![](https://www.anthropic.com/_next/image?url=https%3A%2F%2Fwww-cdn.anthropic.com%2Fimages%2F4zrzovbb%2Fwebsite%2F14f51e6406ccb29e695da48b17017e899a6119c7-2401x1000.png&w=3840&q=75)The evaluator-optimizer workflow\n\n**When to use this workflow:** This workflow is particularly effective when we have clear evaluation criteria, and when iterative refinement provides measurable value. The two signs of good fit are, first, that LLM responses can be demonstrably improved when a human articulates their feedback; and second, that the LLM can provide such feedback. This is analogous to the iterative writing process a human writer might go through when producing a polished document.\n\n**Examples where evaluator-optimizer is useful:**\n\n- Literary translation where there are nuances that the translator LLM might not capture initially, but where an evaluator LLM can provide useful critiques.\n- Complex search tasks that require multiple rounds of searching and analysis to gather comprehensive information, where the evaluator decides whether further searches are warranted.\n\n### Agents\n\nAgents are emerging in production as LLMs mature in key capabilities—understanding complex inputs, engaging in reasoning and planning, using tools reliably, and recovering from errors. Agents begin their work with either a command from, or interactive discussion with, the human user. 
Once the task is clear, agents plan and operate independently, potentially returning to the human for further information or judgement. During execution, it's crucial for the agents to gain “ground truth” from the environment at each step (such as tool call results or code execution) to assess their progress. Agents can then pause for human feedback at checkpoints or when encountering blockers. The task often terminates upon completion, but it’s also common to include stopping conditions (such as a maximum number of iterations) to maintain control.\n\nAgents can handle sophisticated tasks, but their implementation is often straightforward. They are typically just LLMs using tools based on environmental feedback in a loop. It is therefore crucial to design toolsets and their documentation clearly and thoughtfully. We expand on best practices for tool development in Appendix 2 (\"Prompt Engineering your Tools\").\n\n![](https://www.anthropic.com/_next/image?url=https%3A%2F%2Fwww-cdn.anthropic.com%2Fimages%2F4zrzovbb%2Fwebsite%2F58d9f10c985c4eb5d53798dea315f7bb5ab6249e-2401x1000.png&w=3840&q=75)Autonomous agent\n\n**When to use agents:** Agents can be used for open-ended problems where it’s difficult or impossible to predict the required number of steps, and where you can’t hardcode a fixed path. The LLM will potentially operate for many turns, and you must have some level of trust in its decision-making. Agents' autonomy makes them ideal for scaling tasks in trusted environments.\n\nThe autonomous nature of agents means higher costs, and the potential for compounding errors. 
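That tool-use loop can be sketched in a few lines; `call_llm` here is a hypothetical planner stub standing in for a real model call:

```python
# Agent loop sketch: the model picks tools, the environment returns ground
# truth, and a max-iteration cap acts as a stopping condition.
def run_agent(task, tools, call_llm, max_iters=10):
    messages = [{'role': 'user', 'content': task}]
    for _ in range(max_iters):  # stopping condition to maintain control
        action = call_llm(messages)
        if action['type'] == 'done':
            return action['result']
        # Execute the chosen tool and feed the result back to the model
        observation = tools[action['tool']](**action['args'])
        messages.append({'role': 'tool', 'content': observation})
    raise RuntimeError('Max iterations reached without completion')
```

A real implementation would receive tool-use blocks from the provider API and append tool results as user messages rather than a bespoke `tool` role.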
We recommend extensive testing in sandboxed environments, along with the appropriate guardrails.\n\n**Examples where agents are useful:**\n\nThe following examples are from our own implementations:\n\n- A coding agent to resolve [SWE-bench tasks](https://www.anthropic.com/research/swe-bench-sonnet), which involve edits to many files based on a task description;\n- Our [“computer use” reference implementation](https://github.com/anthropics/anthropic-quickstarts/tree/main/computer-use-demo), where Claude uses a computer to accomplish tasks.\n\n![](https://www.anthropic.com/_next/image?url=https%3A%2F%2Fwww-cdn.anthropic.com%2Fimages%2F4zrzovbb%2Fwebsite%2F4b9a1f4eb63d5962a6e1746ac26bbc857cf3474f-2400x1666.png&w=3840&q=75)High-level flow of a coding agent\n\n## Combining and customizing these patterns\n\nThese building blocks aren't prescriptive. They're common patterns that developers can shape and combine to fit different use cases. The key to success, as with any LLM feature, is measuring performance and iterating on implementations. To repeat: you should consider adding complexity _only_ when it demonstrably improves outcomes.\n\n## Summary\n\nSuccess in the LLM space isn't about building the most sophisticated system. It's about building the _right_ system for your needs. Start with simple prompts, optimize them with comprehensive evaluation, and add multi-step agentic systems only when simpler solutions fall short.\n\nWhen implementing agents, we try to follow three core principles:\n\n1. Maintain **simplicity** in your agent's design.\n2. Prioritize **transparency** by explicitly showing the agent’s planning steps.\n3. Carefully craft your agent-computer interface (ACI) through thorough tool **documentation and testing**.\n\nFrameworks can help you get started quickly, but don't hesitate to reduce abstraction layers and build with basic components as you move to production. 
By following these principles, you can create agents that are not only powerful but also reliable, maintainable, and trusted by their users.\n\n### Acknowledgements\n\nWritten by Erik Schluntz and Barry Zhang. This work draws upon our experiences building agents at Anthropic and the valuable insights shared by our customers, for which we're deeply grateful.\n\n## Appendix 1: Agents in practice\n\nOur work with customers has revealed two particularly promising applications for AI agents that demonstrate the practical value of the patterns discussed above. Both applications illustrate how agents add the most value for tasks that require both conversation and action, have clear success criteria, enable feedback loops, and integrate meaningful human oversight.\n\n### A. Customer support\n\nCustomer support combines familiar chatbot interfaces with enhanced capabilities through tool integration. This is a natural fit for more open-ended agents because:\n\n- Support interactions naturally follow a conversation flow while requiring access to external information and actions;\n- Tools can be integrated to pull customer data, order history, and knowledge base articles;\n- Actions such as issuing refunds or updating tickets can be handled programmatically; and\n- Success can be clearly measured through user-defined resolutions.\n\nSeveral companies have demonstrated the viability of this approach through usage-based pricing models that charge only for successful resolutions, showing confidence in their agents' effectiveness.\n\n### B. Coding agents\n\nThe software development space has shown remarkable potential for LLM features, with capabilities evolving from code completion to autonomous problem-solving. 
Agents are particularly effective because:\n\n- Code solutions are verifiable through automated tests;\n- Agents can iterate on solutions using test results as feedback;\n- The problem space is well-defined and structured; and\n- Output quality can be measured objectively.\n\nIn our own implementation, agents can now solve real GitHub issues in the [SWE-bench Verified](https://www.anthropic.com/research/swe-bench-sonnet) benchmark based on the pull request description alone. However, while automated testing helps verify functionality, human review remains crucial for ensuring solutions align with broader system requirements.\n\n## Appendix 2: Prompt engineering your tools\n\nNo matter which agentic system you're building, tools will likely be an important part of your agent. [Tools](https://www.anthropic.com/news/tool-use-ga) enable Claude to interact with external services and APIs by specifying their exact structure and definition in our API. When Claude responds, it will include a [tool use block](https://docs.anthropic.com/en/docs/build-with-claude/tool-use#example-api-response-with-a-tool-use-content-block) in the API response if it plans to invoke a tool. Tool definitions and specifications should be given just as much prompt engineering attention as your overall prompts. In this brief appendix, we describe how to prompt engineer your tools.\n\nThere are often several ways to specify the same action. For instance, you can specify a file edit by writing a diff, or by rewriting the entire file. For structured output, you can return code inside markdown or inside JSON. In software engineering, differences like these are cosmetic and can be converted losslessly from one to the other. However, some formats are much more difficult for an LLM to write than others. Writing a diff requires knowing how many lines are changing in the chunk header before the new code is written. 
Writing code inside JSON (compared to markdown) requires extra escaping of newlines and quotes.\n\nOur suggestions for deciding on tool formats are the following:\n\n- Give the model enough tokens to \"think\" before it writes itself into a corner.\n- Keep the format close to what the model has seen naturally occurring in text on the internet.\n- Make sure there's no formatting \"overhead\" such as having to keep an accurate count of thousands of lines of code, or string-escaping any code it writes.\n\nOne rule of thumb is to think about how much effort goes into human-computer interfaces (HCI), and plan to invest just as much effort in creating good _agent_-computer interfaces (ACI). Here are some thoughts on how to do so:\n\n- Put yourself in the model's shoes. Is it obvious how to use this tool, based on the description and parameters, or would you need to think carefully about it? If so, then it’s probably also true for the model. A good tool definition often includes example usage, edge cases, input format requirements, and clear boundaries from other tools.\n- How can you change parameter names or descriptions to make things more obvious? Think of this as writing a great docstring for a junior developer on your team. This is especially important when using many similar tools.\n- Test how the model uses your tools: Run many example inputs in our [workbench](https://console.anthropic.com/workbench) to see what mistakes the model makes, and iterate.\n- [Poka-yoke](https://en.wikipedia.org/wiki/Poka-yoke) your tools. Change the arguments so that it is harder to make mistakes.\n\nWhile building our agent for [SWE-bench](https://www.anthropic.com/research/swe-bench-sonnet), we actually spent more time optimizing our tools than the overall prompt. For example, we found that the model would make mistakes with tools using relative filepaths after the agent had moved out of the root directory. 
To fix this, we changed the tool to always require absolute filepaths—and we found that the model used this method flawlessly.\n\n[Share on Twitter](https://twitter.com/intent/tweet?text=https://www.anthropic.com/research/building-effective-agents)[Share on LinkedIn](https://www.linkedin.com/shareArticle?mini=true&url=https://www.anthropic.com/research/building-effective-agents)\n"
  },
  {
    "path": "ai_docs/existing_anthropic_computer_use_code.md",
    "content": "```python\nimport os\nimport anthropic\nimport argparse\nimport yaml\nimport subprocess\nfrom datetime import datetime\nimport uuid\nfrom typing import Dict, Any, List, Optional, Union\nimport traceback\nimport sys\nimport logging\nfrom logging.handlers import RotatingFileHandler\n\nEDITOR_DIR = os.path.join(os.getcwd(), \"editor_dir\")\nSESSIONS_DIR = os.path.join(os.getcwd(), \"sessions\")\nos.makedirs(SESSIONS_DIR, exist_ok=True)\n\n\n# Fetch system prompts from environment variables or use defaults\nBASH_SYSTEM_PROMPT = os.environ.get(\n    \"BASH_SYSTEM_PROMPT\", \"You are a helpful assistant that can execute bash commands.\"\n)\nEDITOR_SYSTEM_PROMPT = os.environ.get(\n    \"EDITOR_SYSTEM_PROMPT\",\n    \"You are a helpful assistant that helps users edit text files.\",\n)\n\n\nclass SessionLogger:\n    def __init__(self, session_id: str, sessions_dir: str):\n        self.session_id = session_id\n        self.sessions_dir = sessions_dir\n        self.logger = self._setup_logging()\n\n        # Initialize token counters\n        self.total_input_tokens = 0\n        self.total_output_tokens = 0\n\n    def _setup_logging(self) -> logging.Logger:\n        \"\"\"Configure logging for the session\"\"\"\n        log_formatter = logging.Formatter(\n            \"%(asctime)s - %(name)s - %(levelname)s - %(prefix)s - %(message)s\"\n        )\n        log_file = os.path.join(self.sessions_dir, f\"{self.session_id}.log\")\n\n        file_handler = RotatingFileHandler(\n            log_file, maxBytes=1024 * 1024, backupCount=5\n        )\n        file_handler.setFormatter(log_formatter)\n\n        console_handler = logging.StreamHandler()\n        console_handler.setFormatter(log_formatter)\n\n        logger = logging.getLogger(self.session_id)\n        logger.addHandler(file_handler)\n        logger.addHandler(console_handler)\n        logger.setLevel(logging.DEBUG)\n\n        return logger\n\n    def update_token_usage(self, input_tokens: int, 
output_tokens: int):\n        \"\"\"Update the total token usage.\"\"\"\n        self.total_input_tokens += input_tokens\n        self.total_output_tokens += output_tokens\n\n    def log_total_cost(self):\n        \"\"\"Calculate and log the total cost based on token usage.\"\"\"\n        cost_per_million_input_tokens = 3.0  # $3.00 per million input tokens\n        cost_per_million_output_tokens = 15.0  # $15.00 per million output tokens\n\n        total_input_cost = (\n            self.total_input_tokens / 1_000_000\n        ) * cost_per_million_input_tokens\n        total_output_cost = (\n            self.total_output_tokens / 1_000_000\n        ) * cost_per_million_output_tokens\n        total_cost = total_input_cost + total_output_cost\n\n        prefix = \"📊 session\"\n        self.logger.info(\n            f\"Total input tokens: {self.total_input_tokens}\", extra={\"prefix\": prefix}\n        )\n        self.logger.info(\n            f\"Total output tokens: {self.total_output_tokens}\", extra={\"prefix\": prefix}\n        )\n        self.logger.info(\n            f\"Total input cost: ${total_input_cost:.6f}\", extra={\"prefix\": prefix}\n        )\n        self.logger.info(\n            f\"Total output cost: ${total_output_cost:.6f}\", extra={\"prefix\": prefix}\n        )\n        self.logger.info(f\"Total cost: ${total_cost:.6f}\", extra={\"prefix\": prefix})\n\n\nclass EditorSession:\n    def __init__(self, session_id: Optional[str] = None):\n        \"\"\"Initialize editor session with optional existing session ID\"\"\"\n        self.session_id = session_id or self._create_session_id()\n        self.sessions_dir = SESSIONS_DIR\n        self.editor_dir = EDITOR_DIR\n        self.client = anthropic.Anthropic(api_key=os.environ.get(\"ANTHROPIC_API_KEY\"))\n        self.messages = []\n\n        # Create editor directory if needed\n        os.makedirs(self.editor_dir, exist_ok=True)\n\n        # Initialize logger placeholder\n        self.logger = None\n\n    
    # Set log prefix\n        self.log_prefix = \"📝 file_editor\"\n\n    def set_logger(self, session_logger: SessionLogger):\n        \"\"\"Set the logger for the session and store the SessionLogger instance.\"\"\"\n        self.session_logger = session_logger\n        self.logger = logging.LoggerAdapter(\n            self.session_logger.logger, {\"prefix\": self.log_prefix}\n        )\n\n    def _create_session_id(self) -> str:\n        \"\"\"Create a new session ID\"\"\"\n        timestamp = datetime.now().strftime(\"%Y%m%d-%H%M%S\")\n        return f\"{timestamp}-{uuid.uuid4().hex[:6]}\"\n\n    def _get_editor_path(self, path: str) -> str:\n        \"\"\"Convert API path to local editor directory path\"\"\"\n        # Strip any leading /repo/ from the path\n        clean_path = path.replace(\"/repo/\", \"\", 1)\n        # Join with editor_dir\n        full_path = os.path.join(self.editor_dir, clean_path)\n        # Create the directory structure if it doesn't exist\n        os.makedirs(os.path.dirname(full_path), exist_ok=True)\n        return full_path\n\n    def _handle_view(self, path: str, _: Dict[str, Any]) -> Dict[str, Any]:\n        \"\"\"Handle view command\"\"\"\n        editor_path = self._get_editor_path(path)\n        if os.path.exists(editor_path):\n            with open(editor_path, \"r\") as f:\n                return {\"content\": f.read()}\n        return {\"error\": f\"File {editor_path} does not exist\"}\n\n    def _handle_create(self, path: str, tool_call: Dict[str, Any]) -> Dict[str, Any]:\n        \"\"\"Handle create command\"\"\"\n        os.makedirs(os.path.dirname(path), exist_ok=True)\n        with open(path, \"w\") as f:\n            f.write(tool_call[\"file_text\"])\n        return {\"content\": f\"File created at {path}\"}\n\n    def _handle_str_replace(\n        self, path: str, tool_call: Dict[str, Any]\n    ) -> Dict[str, Any]:\n        \"\"\"Handle str_replace command\"\"\"\n        with open(path, \"r\") as f:\n            
content = f.read()\n        if tool_call[\"old_str\"] not in content:\n            return {\"error\": \"old_str not found in file\"}\n        new_content = content.replace(\n            tool_call[\"old_str\"], tool_call.get(\"new_str\", \"\")\n        )\n        with open(path, \"w\") as f:\n            f.write(new_content)\n        return {\"content\": \"File updated successfully\"}\n\n    def _handle_insert(self, path: str, tool_call: Dict[str, Any]) -> Dict[str, Any]:\n        \"\"\"Handle insert command\"\"\"\n        with open(path, \"r\") as f:\n            lines = f.readlines()\n        insert_line = tool_call[\"insert_line\"]\n        if insert_line > len(lines):\n            return {\"error\": \"insert_line beyond file length\"}\n        lines.insert(insert_line, tool_call[\"new_str\"] + \"\\n\")\n        with open(path, \"w\") as f:\n            f.writelines(lines)\n        return {\"content\": \"Content inserted successfully\"}\n\n    def log_to_session(self, data: Dict[str, Any], section: str) -> None:\n        \"\"\"Log data to session log file\"\"\"\n        self.logger.info(f\"{section}: {data}\")\n\n    def handle_text_editor_tool(self, tool_call: Dict[str, Any]) -> Dict[str, Any]:\n        \"\"\"Handle text editor tool calls\"\"\"\n        try:\n            # Validate required fields before accessing them\n            if not all(key in tool_call for key in [\"command\", \"path\"]):\n                return {\"error\": \"Missing required fields\"}\n\n            command = tool_call[\"command\"]\n\n            # Get path and ensure directory exists\n            path = self._get_editor_path(tool_call[\"path\"])\n\n            handlers = {\n                \"view\": self._handle_view,\n                \"create\": self._handle_create,\n                \"str_replace\": self._handle_str_replace,\n                \"insert\": self._handle_insert,\n            }\n\n            handler = handlers.get(command)\n            if not handler:\n                return {\"error\": f\"Unknown command {command}\"}\n\n            
return handler(path, tool_call)\n\n        except Exception as e:\n            self.logger.error(f\"Error in handle_text_editor_tool: {str(e)}\")\n            return {\"error\": str(e)}\n\n    def process_tool_calls(\n        self, tool_calls: List[anthropic.types.ContentBlock]\n    ) -> List[Dict[str, Any]]:\n        \"\"\"Process tool calls and return results\"\"\"\n        results = []\n\n        for tool_call in tool_calls:\n            if tool_call.type == \"tool_use\" and tool_call.name == \"str_replace_editor\":\n\n                # Log the keys and first 20 characters of the values of the tool_call\n                for key, value in tool_call.input.items():\n                    truncated_value = str(value)[:20] + (\n                        \"...\" if len(str(value)) > 20 else \"\"\n                    )\n                    self.logger.info(\n                        f\"Tool call key: {key}, Value (truncated): {truncated_value}\"\n                    )\n\n                result = self.handle_text_editor_tool(tool_call.input)\n                # Convert result to match expected tool result format\n                is_error = False\n\n                if result.get(\"error\"):\n                    is_error = True\n                    tool_result_content = [{\"type\": \"text\", \"text\": result[\"error\"]}]\n                else:\n                    tool_result_content = [\n                        {\"type\": \"text\", \"text\": result.get(\"content\", \"\")}\n                    ]\n\n                results.append(\n                    {\n                        \"tool_call_id\": tool_call.id,\n                        \"output\": {\n                            \"type\": \"tool_result\",\n                            \"content\": tool_result_content,\n                            \"tool_use_id\": tool_call.id,\n                            \"is_error\": is_error,\n                        },\n                    }\n                )\n\n        return results\n\n    
def process_edit(self, edit_prompt: str) -> None:\n        \"\"\"Main method to process editing prompts\"\"\"\n        try:\n            # Initial message with proper content structure\n            api_message = {\n                \"role\": \"user\",\n                \"content\": [{\"type\": \"text\", \"text\": edit_prompt}],\n            }\n            self.messages = [api_message]\n\n            self.logger.info(f\"User input: {api_message}\")\n\n            while True:\n                response = self.client.beta.messages.create(\n                    model=\"claude-3-5-sonnet-20241022\",\n                    max_tokens=4096,\n                    messages=self.messages,\n                    tools=[\n                        {\"type\": \"text_editor_20241022\", \"name\": \"str_replace_editor\"}\n                    ],\n                    system=EDITOR_SYSTEM_PROMPT,\n                    betas=[\"computer-use-2024-10-22\"],\n                )\n\n                # Extract token usage from the response\n                input_tokens = getattr(response.usage, \"input_tokens\", 0)\n                output_tokens = getattr(response.usage, \"output_tokens\", 0)\n                self.logger.info(\n                    f\"API usage: input_tokens={input_tokens}, output_tokens={output_tokens}\"\n                )\n\n                # Update token counts in SessionLogger\n                self.session_logger.update_token_usage(input_tokens, output_tokens)\n\n                self.logger.info(f\"API response: {response.model_dump()}\")\n\n                # Convert response content to message params\n                response_content = []\n                for block in response.content:\n                    if block.type == \"text\":\n                        response_content.append({\"type\": \"text\", \"text\": block.text})\n                    else:\n                        response_content.append(block.model_dump())\n\n                # Add assistant response to messages\n          
      self.messages.append({\"role\": \"assistant\", \"content\": response_content})\n\n                if response.stop_reason != \"tool_use\":\n                    print(response.content[0].text)\n                    break\n\n                tool_results = self.process_tool_calls(response.content)\n\n                # Add tool results as user message\n                if tool_results:\n                    self.messages.append(\n                        {\"role\": \"user\", \"content\": [tool_results[0][\"output\"]]}\n                    )\n\n                    if tool_results[0][\"output\"][\"is_error\"]:\n                        self.logger.error(\n                            f\"Error: {tool_results[0]['output']['content']}\"\n                        )\n                        break\n\n            # After the execution loop, log the total cost\n            self.session_logger.log_total_cost()\n\n        except Exception as e:\n            self.logger.error(f\"Error in process_edit: {str(e)}\")\n            self.logger.error(traceback.format_exc())\n            raise\n\n\nclass BashSession:\n    def __init__(self, session_id: Optional[str] = None, no_agi: bool = False):\n        \"\"\"Initialize Bash session with optional existing session ID\"\"\"\n        self.session_id = session_id or self._create_session_id()\n        self.sessions_dir = SESSIONS_DIR\n        self.client = anthropic.Anthropic(api_key=os.environ.get(\"ANTHROPIC_API_KEY\"))\n        self.messages = []\n\n        # Initialize a persistent environment dictionary for subprocesses\n        self.environment = os.environ.copy()\n\n        # Initialize logger placeholder\n        self.logger = None\n\n        # Set log prefix\n        self.log_prefix = \"🐚 bash\"\n\n        # Store the no_agi flag\n        self.no_agi = no_agi\n\n    def set_logger(self, session_logger: SessionLogger):\n        \"\"\"Set the logger for the session and store the SessionLogger instance.\"\"\"\n        self.session_logger 
= session_logger\n        self.logger = logging.LoggerAdapter(\n            session_logger.logger, {\"prefix\": self.log_prefix}\n        )\n\n    def _create_session_id(self) -> str:\n        \"\"\"Create a new session ID\"\"\"\n        timestamp = datetime.now().strftime(\"%Y%m%d-%H:%M:%S-%f\")\n        # return f\"{timestamp}-{uuid.uuid4().hex[:6]}\"\n        return f\"{timestamp}\"\n\n    def _handle_bash_command(self, tool_call: Dict[str, Any]) -> Dict[str, Any]:\n        \"\"\"Handle bash command execution\"\"\"\n        try:\n            command = tool_call.get(\"command\")\n            restart = tool_call.get(\"restart\", False)\n\n            if restart:\n                self.environment = os.environ.copy()  # Reset the environment\n                self.logger.info(\"Bash session restarted.\")\n                return {\"content\": \"Bash session restarted.\"}\n\n            if not command:\n                self.logger.error(\"No command provided to execute.\")\n                return {\"error\": \"No command provided to execute.\"}\n\n            # Check if no_agi is enabled\n            if self.no_agi:\n                self.logger.info(f\"Mock executing bash command: {command}\")\n                return {\"content\": \"in mock mode, command did not run\"}\n\n            # Log the command being executed\n            self.logger.info(f\"Executing bash command: {command}\")\n\n            # Execute the command in a subprocess\n            result = subprocess.run(\n                command,\n                shell=True,\n                stdout=subprocess.PIPE,\n                stderr=subprocess.PIPE,\n                env=self.environment,\n                text=True,\n                executable=\"/bin/bash\",\n            )\n\n            output = result.stdout.strip()\n            error_output = result.stderr.strip()\n\n            # Log the outputs\n            if output:\n                self.logger.info(\n                    f\"Command output:\\n\\n```output 
for '{command[:20]}...'\\n{output}\\n```\"\n                )\n            if error_output:\n                self.logger.error(\n                    f\"Command error output:\\n\\n```error for '{command}'\\n{error_output}\\n```\"\n                )\n\n            if result.returncode != 0:\n                error_message = error_output or \"Command execution failed.\"\n                return {\"error\": error_message}\n\n            return {\"content\": output}\n\n        except Exception as e:\n            self.logger.error(f\"Error in _handle_bash_command: {str(e)}\")\n            self.logger.error(traceback.format_exc())\n            return {\"error\": str(e)}\n\n    def process_tool_calls(\n        self, tool_calls: List[anthropic.types.ContentBlock]\n    ) -> List[Dict[str, Any]]:\n        \"\"\"Process tool calls and return results\"\"\"\n        results = []\n\n        for tool_call in tool_calls:\n            if tool_call.type == \"tool_use\" and tool_call.name == \"bash\":\n                self.logger.info(f\"Bash tool call input: {tool_call.input}\")\n\n                result = self._handle_bash_command(tool_call.input)\n\n                # Convert result to match expected tool result format\n                is_error = False\n\n                if result.get(\"error\"):\n                    is_error = True\n                    tool_result_content = [{\"type\": \"text\", \"text\": result[\"error\"]}]\n                else:\n                    tool_result_content = [\n                        {\"type\": \"text\", \"text\": result.get(\"content\", \"\")}\n                    ]\n\n                results.append(\n                    {\n                        \"tool_call_id\": tool_call.id,\n                        \"output\": {\n                            \"type\": \"tool_result\",\n                            \"content\": tool_result_content,\n                            \"tool_use_id\": tool_call.id,\n                            \"is_error\": is_error,\n     
                   },\n                    }\n                )\n\n        return results\n\n    def process_bash_command(self, bash_prompt: str) -> None:\n        \"\"\"Main method to process bash commands via the assistant\"\"\"\n        try:\n            # Initial message with proper content structure\n            api_message = {\n                \"role\": \"user\",\n                \"content\": [{\"type\": \"text\", \"text\": bash_prompt}],\n            }\n            self.messages = [api_message]\n\n            self.logger.info(f\"User input: {api_message}\")\n\n            while True:\n                response = self.client.beta.messages.create(\n                    model=\"claude-3-5-sonnet-20241022\",\n                    max_tokens=4096,\n                    messages=self.messages,\n                    tools=[{\"type\": \"bash_20241022\", \"name\": \"bash\"}],\n                    system=BASH_SYSTEM_PROMPT,\n                    betas=[\"computer-use-2024-10-22\"],\n                )\n\n                # Extract token usage from the response\n                input_tokens = getattr(response.usage, \"input_tokens\", 0)\n                output_tokens = getattr(response.usage, \"output_tokens\", 0)\n                self.logger.info(\n                    f\"API usage: input_tokens={input_tokens}, output_tokens={output_tokens}\"\n                )\n\n                # Update token counts in SessionLogger\n                self.session_logger.update_token_usage(input_tokens, output_tokens)\n\n                self.logger.info(f\"API response: {response.model_dump()}\")\n\n                # Convert response content to message params\n                response_content = []\n                for block in response.content:\n                    if block.type == \"text\":\n                        response_content.append({\"type\": \"text\", \"text\": block.text})\n                    else:\n                        response_content.append(block.model_dump())\n\n              
  # Add assistant response to messages\n                self.messages.append({\"role\": \"assistant\", \"content\": response_content})\n\n                if response.stop_reason != \"tool_use\":\n                    # Print the assistant's final response\n                    print(response.content[0].text)\n                    break\n\n                tool_results = self.process_tool_calls(response.content)\n\n                # Add tool results as user message\n                if tool_results:\n                    self.messages.append(\n                        {\"role\": \"user\", \"content\": [tool_results[0][\"output\"]]}\n                    )\n\n                    if tool_results[0][\"output\"][\"is_error\"]:\n                        self.logger.error(\n                            f\"Error: {tool_results[0]['output']['content']}\"\n                        )\n                        break\n\n            # After the execution loop, log the total cost\n            self.session_logger.log_total_cost()\n\n        except Exception as e:\n            self.logger.error(f\"Error in process_bash_command: {str(e)}\")\n            self.logger.error(traceback.format_exc())\n            raise\n\n\ndef main():\n    \"\"\"Main entry point\"\"\"\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"prompt\", help=\"The prompt for Claude\", nargs=\"?\")\n    parser.add_argument(\n        \"--mode\", choices=[\"editor\", \"bash\"], default=\"editor\", help=\"Mode to run\"\n    )\n    parser.add_argument(\n        \"--no-agi\",\n        action=\"store_true\",\n        help=\"When set, commands will not be executed, but will return 'command ran'.\",\n    )\n    args = parser.parse_args()\n\n    # Create a shared session ID\n    session_id = datetime.now().strftime(\"%Y%m%d-%H%M%S\") + \"-\" + uuid.uuid4().hex[:6]\n    # Create a single SessionLogger instance\n    session_logger = SessionLogger(session_id, SESSIONS_DIR)\n\n    if args.mode == \"editor\":\n        
session = EditorSession(session_id=session_id)\n        # Pass the logger via setter method\n        session.set_logger(session_logger)\n        print(f\"Session ID: {session.session_id}\")\n        session.process_edit(args.prompt)\n    elif args.mode == \"bash\":\n        session = BashSession(session_id=session_id, no_agi=args.no_agi)\n        # Pass the logger via setter method\n        session.set_logger(session_logger)\n        print(f\"Session ID: {session.session_id}\")\n        session.process_bash_command(args.prompt)\n\n\nif __name__ == \"__main__\":\n    main()\n```"
  },
  {
    "path": "ai_docs/fc_openai_agents.md",
    "content": "# OpenAI Agents SDK Documentation\n\nThis file contains documentation for the OpenAI Agents SDK, scraped from the official documentation site.\n\n## Overview\n\nThe [OpenAI Agents SDK](https://github.com/openai/openai-agents-python) enables you to build agentic AI apps in a lightweight, easy-to-use package with very few abstractions. It's a production-ready upgrade of the previous experimentation for agents, [Swarm](https://github.com/openai/swarm/tree/main). The Agents SDK has a very small set of primitives:\n\n- **Agents**, which are LLMs equipped with instructions and tools\n- **Handoffs**, which allow agents to delegate to other agents for specific tasks\n- **Guardrails**, which enable the inputs to agents to be validated\n\nIn combination with Python, these primitives are powerful enough to express complex relationships between tools and agents, and allow you to build real-world applications without a steep learning curve. In addition, the SDK comes with built-in **tracing** that lets you visualize and debug your agentic flows, as well as evaluate them and even fine-tune models for your application.\n\n### Why use the Agents SDK\n\nThe SDK has two driving design principles:\n\n1. Enough features to be worth using, but few enough primitives to make it quick to learn.\n2. 
Works great out of the box, but you can customize exactly what happens.\n\nHere are the main features of the SDK:\n\n- Agent loop: Built-in agent loop that handles calling tools, sending results to the LLM, and looping until the LLM is done.\n- Python-first: Use built-in language features to orchestrate and chain agents, rather than needing to learn new abstractions.\n- Handoffs: A powerful feature to coordinate and delegate between multiple agents.\n- Guardrails: Run input validations and checks in parallel to your agents, breaking early if the checks fail.\n- Function tools: Turn any Python function into a tool, with automatic schema generation and Pydantic-powered validation.\n- Tracing: Built-in tracing that lets you visualize, debug and monitor your workflows, as well as use the OpenAI suite of evaluation, fine-tuning and distillation tools.\n\n### Installation\n\n```bash\npip install openai-agents\n```\n\n### Hello world example\n\n```python\nfrom agents import Agent, Runner\n\nagent = Agent(name=\"Assistant\", instructions=\"You are a helpful assistant\")\n\nresult = Runner.run_sync(agent, \"Write a haiku about recursion in programming.\")\nprint(result.final_output)\n\n# Code within the code,\n# Functions calling themselves,\n# Infinite loop's dance.\n```\n\n## Quickstart\n\n### Create a project and virtual environment\n\n```bash\nmkdir my_project\ncd my_project\npython -m venv .venv\nsource .venv/bin/activate\npip install openai-agents\nexport OPENAI_API_KEY=sk-...\n```\n\n### Create your first agent\n\n```python\nfrom agents import Agent\n\nagent = Agent(\n    name=\"Math Tutor\",\n    instructions=\"You provide help with math problems. 
Explain your reasoning at each step and include examples\",\n)\n```\n\n### Add a few more agents\n\n```python\nfrom agents import Agent\n\nhistory_tutor_agent = Agent(\n    name=\"History Tutor\",\n    handoff_description=\"Specialist agent for historical questions\",\n    instructions=\"You provide assistance with historical queries. Explain important events and context clearly.\",\n)\n\nmath_tutor_agent = Agent(\n    name=\"Math Tutor\",\n    handoff_description=\"Specialist agent for math questions\",\n    instructions=\"You provide help with math problems. Explain your reasoning at each step and include examples\",\n)\n```\n\n### Define your handoffs\n\n```python\ntriage_agent = Agent(\n    name=\"Triage Agent\",\n    instructions=\"You determine which agent to use based on the user's homework question\",\n    handoffs=[history_tutor_agent, math_tutor_agent]\n)\n```\n\n### Run the agent orchestration\n\n```python\nfrom agents import Runner\n\nasync def main():\n    result = await Runner.run(triage_agent, \"What is the capital of France?\")\n    print(result.final_output)\n```\n\n### Add a guardrail\n\n```python\nfrom agents import GuardrailFunctionOutput, Agent, Runner\nfrom pydantic import BaseModel\n\nclass HomeworkOutput(BaseModel):\n    is_homework: bool\n    reasoning: str\n\nguardrail_agent = Agent(\n    name=\"Guardrail check\",\n    instructions=\"Check if the user is asking about homework.\",\n    output_type=HomeworkOutput,\n)\n\nasync def homework_guardrail(ctx, agent, input_data):\n    result = await Runner.run(guardrail_agent, input_data, context=ctx.context)\n    final_output = result.final_output_as(HomeworkOutput)\n    return GuardrailFunctionOutput(\n        output_info=final_output,\n        tripwire_triggered=not final_output.is_homework,\n    )\n```\n\n### Put it all together\n\n```python\nfrom agents import Agent, InputGuardrail,GuardrailFunctionOutput, Runner\nfrom pydantic import BaseModel\nimport asyncio\n\nclass 
HomeworkOutput(BaseModel):\n    is_homework: bool\n    reasoning: str\n\nguardrail_agent = Agent(\n    name=\"Guardrail check\",\n    instructions=\"Check if the user is asking about homework.\",\n    output_type=HomeworkOutput,\n)\n\nmath_tutor_agent = Agent(\n    name=\"Math Tutor\",\n    handoff_description=\"Specialist agent for math questions\",\n    instructions=\"You provide help with math problems. Explain your reasoning at each step and include examples\",\n)\n\nhistory_tutor_agent = Agent(\n    name=\"History Tutor\",\n    handoff_description=\"Specialist agent for historical questions\",\n    instructions=\"You provide assistance with historical queries. Explain important events and context clearly.\",\n)\n\nasync def homework_guardrail(ctx, agent, input_data):\n    result = await Runner.run(guardrail_agent, input_data, context=ctx.context)\n    final_output = result.final_output_as(HomeworkOutput)\n    return GuardrailFunctionOutput(\n        output_info=final_output,\n        tripwire_triggered=not final_output.is_homework,\n    )\n\ntriage_agent = Agent(\n    name=\"Triage Agent\",\n    instructions=\"You determine which agent to use based on the user's homework question\",\n    handoffs=[history_tutor_agent, math_tutor_agent],\n    input_guardrails=[\n        InputGuardrail(guardrail_function=homework_guardrail),\n    ],\n)\n\nasync def main():\n    result = await Runner.run(triage_agent, \"who was the first president of the united states?\")\n    print(result.final_output)\n\n    result = await Runner.run(triage_agent, \"what is life\")\n    print(result.final_output)\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\n## Agents\n\nAgents are the core building block in your apps. 
An agent is a large language model (LLM), configured with instructions and tools.\n\n### Basic configuration\n\nThe most common properties of an agent you'll configure are:\n\n- `instructions`: also known as a developer message or system prompt.\n- `model`: which LLM to use, and optional `model_settings` to configure model tuning parameters like temperature, top_p, etc.\n- `tools`: Tools that the agent can use to achieve its tasks.\n\n```python\nfrom agents import Agent, ModelSettings, function_tool\n\n@function_tool\ndef get_weather(city: str) -> str:\n    return f\"The weather in {city} is sunny\"\n\nagent = Agent(\n    name=\"Haiku agent\",\n    instructions=\"Always respond in haiku form\",\n    model=\"o3-mini\",\n    tools=[get_weather],\n)\n```\n\n### Context\n\nAgents are generic on their `context` type. Context is a dependency-injection tool: it's an object you create and pass to `Runner.run()`, that is passed to every agent, tool, handoff etc, and it serves as a grab bag of dependencies and state for the agent run. You can provide any Python object as the context.\n\n### Output types\n\nBy default, agents produce plain text (i.e. `str`) outputs. If you want the agent to produce a particular type of output, you can use the `output_type` parameter.\n\n### Handoffs\n\nHandoffs are sub-agents that the agent can delegate to. You provide a list of handoffs, and the agent can choose to delegate to them if relevant.\n\n### Dynamic instructions\n\nIn most cases, you can provide instructions when you create the agent. However, you can also provide dynamic instructions via a function.\n\n### Lifecycle events (hooks)\n\nSometimes, you want to observe the lifecycle of an agent. 
For example, you may want to log events, or pre-fetch data when certain events occur.\n\n### Guardrails\n\nGuardrails allow you to run checks/validations on user input, in parallel to the agent running.\n\n### Cloning/copying agents\n\nBy using the `clone()` method on an agent, you can duplicate an Agent, and optionally change any properties you like.\n\n## Handoffs\n\nHandoffs allow an agent to delegate tasks to another agent. This is particularly useful in scenarios where different agents specialize in distinct areas.\n\n### Creating a handoff\n\nAll agents have a `handoffs` param, which can either take an `Agent` directly, or a `Handoff` object that customizes the Handoff.\n\n### Basic Usage\n\n```python\nfrom agents import Agent, handoff\n\nbilling_agent = Agent(name=\"Billing agent\")\nrefund_agent = Agent(name=\"Refund agent\")\n\ntriage_agent = Agent(name=\"Triage agent\", handoffs=[billing_agent, handoff(refund_agent)])\n```\n\n### Customizing handoffs\n\nThe `handoff()` function lets you customize various aspects like tool name, description, callbacks, and input filtering.\n\n### Handoff inputs\n\nYou can have the LLM provide data when calling a handoff, which is useful for logging or other purposes.\n\n### Input filters\n\nWhen a handoff occurs, the new agent sees the entire previous conversation history by default. Input filters allow you to modify this behavior.\n\n### Recommended prompts\n\nTo ensure LLMs understand handoffs properly, include information about handoffs in your agent instructions.\n\n## Tools\n\nTools let agents take actions: things like fetching data, running code, calling external APIs, and even using a computer. 
There are three classes of tools in the Agent SDK:\n\n- Hosted tools: run on LLM servers alongside the AI models\n- Function calling: allow you to use any Python function as a tool\n- Agents as tools: allow you to use an agent as a tool\n\n### Hosted tools\n\nOpenAI offers built-in tools like `WebSearchTool`, `FileSearchTool`, and `ComputerTool`.\n\n### Function tools\n\nYou can use any Python function as a tool. The Agents SDK will automatically set up the tool with appropriate name, description and schema.\n\n```python\nimport json\nfrom typing import Any\nfrom typing_extensions import TypedDict\n\nfrom agents import Agent, FunctionTool, RunContextWrapper, function_tool\n\nclass Location(TypedDict):\n    lat: float\n    long: float\n\n@function_tool\nasync def fetch_weather(location: Location) -> str:\n    \"\"\"Fetch the weather for a given location.\n\n    Args:\n        location: The location to fetch the weather for.\n    \"\"\"\n    # In real life, we'd fetch the weather from a weather API\n    return \"sunny\"\n\n@function_tool(name_override=\"fetch_data\")\ndef read_file(ctx: RunContextWrapper[Any], path: str, directory: str | None = None) -> str:\n    \"\"\"Read the contents of a file.\"\"\"\n    # In real life, we'd read the file from the file system\n    return \"<file contents>\"\n```\n\n### Agents as tools\n\nIn some workflows, you may want a central agent to orchestrate a network of specialized agents, instead of handing off control.\n\n### Handling errors in function tools\n\nYou can customize error handling for function tools using the `failure_error_function` parameter.\n\n## Results\n\nWhen you call the `Runner.run` methods, you get either a `RunResult` or `RunResultStreaming` object containing information about the agent run.\n\n### Final output\n\nThe `final_output` property contains the final output of the last agent that ran.\n\n### Inputs for the next turn\n\nYou can use `result.to_input_list()` to turn the result into an input list that concatenates the original 
input you provided with items generated during the agent run.\n\n### Last agent\n\nThe `last_agent` property contains the last agent that ran, which can be useful for subsequent user interactions.\n\n### New items\n\nThe `new_items` property contains the new items generated during the run, including messages, tool calls, handoffs, etc.\n\n## Running agents\n\nYou can run agents via the `Runner` class with three options:\n\n1. `Runner.run()` - async method returning a `RunResult`\n2. `Runner.run_sync()` - sync wrapper around `run()`\n3. `Runner.run_streamed()` - async method that streams LLM events as they occur\n\n### The agent loop\n\nWhen you use the run method, the runner executes a loop:\n\n1. Call the LLM for the current agent with the current input\n2. Process the LLM output:\n   - If it's a final output, end the loop and return the result\n   - If it's a handoff, update the current agent and input, and re-run the loop\n   - If it's tool calls, run the tools, append results, and re-run the loop\n3. If max_turns is exceeded, raise an exception\n\n### Run config\n\nThe `run_config` parameter lets you configure various global settings for the agent run.\n\n### Conversations/chat threads\n\nEach run represents a single logical turn in a chat conversation. 
You can use `RunResultBase.to_input_list()` to get inputs for the next turn.\n\n## Tracing\n\nThe Agents SDK includes built-in tracing, collecting a comprehensive record of events during an agent run: LLM generations, tool calls, handoffs, guardrails, and custom events.\n\n### Traces and spans\n\n- **Traces** represent a single end-to-end operation of a \"workflow\"\n- **Spans** represent operations that have a start and end time\n\n### Default tracing\n\nBy default, the SDK traces the entire run, each agent execution, LLM generations, function tool calls, guardrails, and handoffs.\n\n### Higher level traces\n\nSometimes, you might want multiple calls to `run()` to be part of a single trace:\n\n```python\nfrom agents import Agent, Runner, trace\n\nasync def main():\n    agent = Agent(name=\"Joke generator\", instructions=\"Tell funny jokes.\")\n\n    with trace(\"Joke workflow\"):\n        first_result = await Runner.run(agent, \"Tell me a joke\")\n        second_result = await Runner.run(agent, f\"Rate this joke: {first_result.final_output}\")\n        print(f\"Joke: {first_result.final_output}\")\n        print(f\"Rating: {second_result.final_output}\")\n```\n\n### Custom trace processors\n\nYou can customize tracing to send traces to alternative or additional backends:\n\n1. `add_trace_processor()` adds an additional processor alongside the default one\n2. `set_trace_processors()` replaces the default processor entirely\n\n## Context Management\n\nContext is an overloaded term with two main aspects:\n\n1. **Local context**: Data and dependencies available to your code during tool function execution, callbacks, lifecycle hooks, etc.\n2. 
**LLM context**: Data the LLM sees when generating a response\n\n### Local context\n\nThis is represented via the `RunContextWrapper` class and allows you to pass any Python object to be available throughout the agent run:\n\n```python\nimport asyncio\nfrom dataclasses import dataclass\n\nfrom agents import Agent, RunContextWrapper, Runner, function_tool\n\n@dataclass\nclass UserInfo:\n    name: str\n    uid: int\n\n@function_tool\nasync def fetch_user_age(wrapper: RunContextWrapper[UserInfo]) -> str:\n    return f\"User {wrapper.context.name} is 47 years old\"\n\nasync def main():\n    user_info = UserInfo(name=\"John\", uid=123)\n\n    agent = Agent[UserInfo](\n        name=\"Assistant\",\n        tools=[fetch_user_age],\n    )\n\n    result = await Runner.run(\n        starting_agent=agent,\n        input=\"What is the age of the user?\",\n        context=user_info,\n    )\n\n    print(result.final_output)\n    # The user John is 47 years old.\n```\n\n### Agent/LLM context\n\nWhen an LLM is called, it can only see data from the conversation history. There are several ways to make data available:\n\n1. Add it to the Agent `instructions` (system prompt)\n2. Add it to the `input` when calling `Runner.run`\n3. Expose it via function tools for on-demand access\n4. Use retrieval or web search tools to fetch relevant contextual data\n\n## Model Context Protocol (MCP)\n\nThe [Model Context Protocol](https://modelcontextprotocol.io/introduction) (aka MCP) is a way to provide tools and context to the LLM. MCP provides a standardized way to connect AI models to different data sources and tools.\n\n### MCP Servers\n\nThe Agents SDK supports two types of MCP servers:\n\n1. **stdio servers** run as a subprocess of your application (locally)\n2. 
**HTTP over SSE servers** run remotely (connect via URL)\n\nYou can use `MCPServerStdio` and `MCPServerSse` classes to connect to these servers:\n\n```python\nfrom agents.mcp.server import MCPServerStdio, MCPServerSse\n\n# Example using the filesystem MCP server\nasync with MCPServerStdio(\n    params={\n        \"command\": \"npx\",\n        \"args\": [\"-y\", \"@modelcontextprotocol/server-filesystem\", samples_dir],\n    }\n) as server:\n    tools = await server.list_tools()\n```\n\n### Using MCP Servers with Agents\n\nMCP servers can be added directly to Agents:\n\n```python\nagent = Agent(\n    name=\"Assistant\",\n    instructions=\"Use the tools to achieve the task\",\n    mcp_servers=[mcp_server_1, mcp_server_2]\n)\n```\n\nWhen the Agent runs, it will automatically call `list_tools()` on all MCP servers, making the LLM aware of all available tools. When the LLM calls a tool from an MCP server, the SDK handles calling `call_tool()` on that server.\n\n### Caching Tool Lists\n\nFor better performance, especially with remote servers, you can cache the list of tools:\n\n```python\nmcp_server = MCPServerSse(\n    url=\"https://example.com/mcp\",\n    cache_tools_list=True  # Enable caching\n)\n\n# Later, if needed, clear the cache\nmcp_server.invalidate_tools_cache()\n```\n\nOnly use caching when you're certain the tool list will not change during execution.\n\n### Tracing MCP Operations\n\nThe Agents SDK's tracing system automatically captures MCP operations, including:\n\n1. Calls to MCP servers to list tools\n2. 
MCP-related information on function calls\n\nThis makes it easier to debug and analyze your agent's interactions with MCP tools.\n\n### Use a different LLM\n\n```python\nimport asyncio\nimport os\n\nfrom openai import AsyncOpenAI\n\nfrom agents import Agent, OpenAIChatCompletionsModel, Runner, function_tool, set_tracing_disabled\n\nBASE_URL = os.getenv(\"EXAMPLE_BASE_URL\") or \"\"\nAPI_KEY = os.getenv(\"EXAMPLE_API_KEY\") or \"\"\nMODEL_NAME = os.getenv(\"EXAMPLE_MODEL_NAME\") or \"\"\n\nif not BASE_URL or not API_KEY or not MODEL_NAME:\n    raise ValueError(\n        \"Please set EXAMPLE_BASE_URL, EXAMPLE_API_KEY, EXAMPLE_MODEL_NAME via env var or code.\"\n    )\n\n\"\"\"This example uses a custom provider for a specific agent. Steps:\n1. Create a custom OpenAI client.\n2. Create a `Model` that uses the custom client.\n3. Set the `model` on the Agent.\n\nNote that in this example, we disable tracing under the assumption that you don't have an API key\nfrom platform.openai.com. If you do have one, you can either set the `OPENAI_API_KEY` env var\nor call set_tracing_export_api_key() to set a tracing specific key.\n\"\"\"\nclient = AsyncOpenAI(base_url=BASE_URL, api_key=API_KEY)\nset_tracing_disabled(disabled=True)\n\n# An alternate approach that would also work:\n# PROVIDER = OpenAIProvider(openai_client=client)\n# agent = Agent(..., model=\"some-custom-model\")\n# Runner.run(agent, ..., run_config=RunConfig(model_provider=PROVIDER))\n\n\n@function_tool\ndef get_weather(city: str):\n    print(f\"[debug] getting weather for {city}\")\n    return f\"The weather in {city} is sunny.\"\n\n\nasync def main():\n    # This agent will use the custom LLM provider\n    agent = Agent(\n        name=\"Assistant\",\n        instructions=\"You only respond in haikus.\",\n        model=OpenAIChatCompletionsModel(model=MODEL_NAME, openai_client=client),\n        tools=[get_weather],\n    )\n\n    result = await Runner.run(agent, \"What's the weather in Tokyo?\")\n    
print(result.final_output)\n\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```"
  },
  {
    "path": "ai_docs/openai-function-calling.md",
    "content": "Log in [Sign up](https://platform.openai.com/signup)\n\n# Function calling\n\nCopy page\n\nEnable models to fetch data and take actions.\n\n**Function calling** provides a powerful and flexible way for OpenAI models to interface with your code or external services, and has two primary use cases:\n\n|  |  |\n| --- | --- |\n| **Fetching Data** | Retrieve up-to-date information to incorporate into the model's response (RAG). Useful for searching knowledge bases and retrieving specific data from APIs (e.g. current weather data). |\n| **Taking Action** | Perform actions like submitting a form, calling APIs, modifying application state (UI/frontend or backend), or taking agentic workflow actions (like [handing off](https://cookbook.openai.com/examples/orchestrating_agents) the conversation). |\n\nIf you only want the model to produce JSON, see our docs on [structured outputs](https://platform.openai.com/docs/guides/structured-outputs).\n\nGet weatherGet weatherSend emailSend emailSearch knowledge baseSearch knowledge base\n\nGet weather\n\nFunction calling example with get\\_weather function\n\npython\n\n```python\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n21\n22\n23\n24\n25\n26\n27\n28\n29\n30\n31\n32\n33\nfrom openai import OpenAI\n\nclient = OpenAI()\n\ntools = [{\\\n    \"type\": \"function\",\\\n    \"function\": {\\\n        \"name\": \"get_weather\",\\\n        \"description\": \"Get current temperature for a given location.\",\\\n        \"parameters\": {\\\n            \"type\": \"object\",\\\n            \"properties\": {\\\n                \"location\": {\\\n                    \"type\": \"string\",\\\n                    \"description\": \"City and country e.g. 
Bogotá, Colombia\"\\\n                }\\\n            },\\\n            \"required\": [\\\n                \"location\"\\\n            ],\\\n            \"additionalProperties\": False\\\n        },\\\n        \"strict\": True\\\n    }\\\n}]\n\ncompletion = client.chat.completions.create(\n    model=\"gpt-4o\",\n    messages=[{\"role\": \"user\", \"content\": \"What is the weather like in Paris today?\"}],\n    tools=tools\n)\n\nprint(completion.choices[0].message.tool_calls)\n```\n\n```javascript\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n21\n22\n23\n24\n25\n26\n27\n28\n29\n30\n31\n32\n33\n34\nimport { OpenAI } from \"openai\";\n\nconst openai = new OpenAI();\n\nconst tools = [{\\\n    \"type\": \"function\",\\\n    \"function\": {\\\n        \"name\": \"get_weather\",\\\n        \"description\": \"Get current temperature for a given location.\",\\\n        \"parameters\": {\\\n            \"type\": \"object\",\\\n            \"properties\": {\\\n                \"location\": {\\\n                    \"type\": \"string\",\\\n                    \"description\": \"City and country e.g. 
Bogotá, Colombia\"\\\n                }\\\n            },\\\n            \"required\": [\\\n                \"location\"\\\n            ],\\\n            \"additionalProperties\": false\\\n        },\\\n        \"strict\": true\\\n    }\\\n}];\n\nconst completion = await openai.chat.completions.create({\n    model: \"gpt-4o\",\n    messages: [{ role: \"user\", content: \"What is the weather like in Paris today?\" }],\n    tools,\n    store: true,\n});\n\nconsole.log(completion.choices[0].message.tool_calls);\n```\n\n```bash\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n21\n22\n23\n24\n25\n26\n27\n28\n29\n30\n31\n32\n33\n34\n35\ncurl https://api.openai.com/v1/chat/completions \\\n-H \"Content-Type: application/json\" \\\n-H \"Authorization: Bearer $OPENAI_API_KEY\" \\\n-d '{\n    \"model\": \"gpt-4o\",\n    \"messages\": [\\\n        {\\\n            \"role\": \"user\",\\\n            \"content\": \"What is the weather like in Paris today?\"\\\n        }\\\n    ],\n    \"tools\": [\\\n        {\\\n            \"type\": \"function\",\\\n            \"function\": {\\\n                \"name\": \"get_weather\",\\\n                \"description\": \"Get current temperature for a given location.\",\\\n                \"parameters\": {\\\n                    \"type\": \"object\",\\\n                    \"properties\": {\\\n                        \"location\": {\\\n                            \"type\": \"string\",\\\n                            \"description\": \"City and country e.g. 
Bogotá, Colombia"
                        }
                    },
                    "required": [
                        "location"
                    ],
                    "additionalProperties": false
                },
                "strict": true
            }
        }
    ]
}'
```

Output

```json
[{
    "id": "call_12345xyz",
    "type": "function",
    "function": {
        "name": "get_weather",
        "arguments": "{\"location\":\"Paris, France\"}"
    }
}]
```

Send email

Function calling example with send_email function

```python
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "send_email",
        "description": "Send an email to a given recipient with a subject and message.",
        "parameters": {
            "type": "object",
            "properties": {
                "to": {
                    "type": "string",
                    "description": "The recipient email address."
                },
                "subject": {
                    "type": "string",
                    "description": "Email subject line."
                },
                "body": {
                    "type": "string",
                    "description": "Body of the email message."
                }
            },
            "required": [
                "to",
                "subject",
                "body"
            ],
            "additionalProperties": False
        },
        "strict": True
    }
}]

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Can you send an email to ilan@example.com and katia@example.com saying hi?"}],
    tools=tools
)

print(completion.choices[0].message.tool_calls)
```

```javascript
import { OpenAI } from "openai";

const openai = new OpenAI();

const tools = [{
    "type": "function",
    "function": {
        "name": "send_email",
        "description": "Send an email to a given recipient with a subject and message.",
        "parameters": {
            "type": "object",
            "properties": {
                "to": {
                    "type": "string",
                    "description": "The recipient email address."
                },
                "subject": {
                    "type": "string",
                    "description": "Email subject line."
                },
                "body": {
                    "type": "string",
                    "description": "Body of the email message."
                }
            },
            "required": [
                "to",
                "subject",
                "body"
            ],
            "additionalProperties": false
        },
        "strict": true
    }
}];

const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Can you send an email to ilan@example.com and katia@example.com saying hi?" }],
    tools,
    store: true,
});

console.log(completion.choices[0].message.tool_calls);
```

```bash
curl https://api.openai.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
    "model": "gpt-4o",
    "messages": [
        {
            "role": "user",
            "content": "Can you send an email to ilan@example.com and katia@example.com saying hi?"
        }
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "send_email",
                "description": "Send an email to a given recipient with a subject and message.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "to": {
                            "type": "string",
                            "description": "The recipient email address."
                        },
                        "subject": {
                            "type": "string",
                            "description": "Email subject line."
                        },
                        "body": {
                            "type": "string",
                            "description": "Body of the email message."
                        }
                    },
                    "required": [
                        "to",
                        "subject",
                        "body"
                    ],
                    "additionalProperties": false
                },
                "strict": true
            }
        }
    ]
}'
```

Output

```json
[
    {
        "id": "call_9876abc",
        "type": "function",
        "function": {
            "name": "send_email",
            "arguments": "{\"to\":\"ilan@example.com\",\"subject\":\"Hello!\",\"body\":\"Just wanted to say hi\"}"
        }
    },
    {
        "id": "call_9876def",
        "type": "function",
        "function": {
            "name": "send_email",
            "arguments": "{\"to\":\"katia@example.com\",\"subject\":\"Hello!\",\"body\":\"Just wanted to say hi\"}"
        }
    }
]
```

Search knowledge base

Function calling example with search_knowledge_base function

```python
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "search_knowledge_base",
        "description": "Query a knowledge base to retrieve relevant info on a topic.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The user question or search query."
                },
                "options": {
                    "type": "object",
                    "properties": {
                        "num_results": {
                            "type": "number",
                            "description": "Number of top results to return."
                        },
                        "domain_filter": {
                            "type": [
                                "string",
                                "null"
                            ],
                            "description": "Optional domain to narrow the search (e.g. 'finance', 'medical'). Pass null if not needed."
                        },
                        "sort_by": {
                            "type": [
                                "string",
                                "null"
                            ],
                            "enum": [
                                "relevance",
                                "date",
                                "popularity",
                                "alphabetical"
                            ],
                            "description": "How to sort results. Pass null if not needed."
                        }
                    },
                    "required": [
                        "num_results",
                        "domain_filter",
                        "sort_by"
                    ],
                    "additionalProperties": False
                }
            },
            "required": [
                "query",
                "options"
            ],
            "additionalProperties": False
        },
        "strict": True
    }
}]

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Can you find information about ChatGPT in the AI knowledge base?"}],
    tools=tools
)

print(completion.choices[0].message.tool_calls)
```

```javascript
import { OpenAI } from "openai";

const openai = new OpenAI();

const tools = [{
    "type": "function",
    "function": {
        "name": "search_knowledge_base",
        "description": "Query a knowledge base to retrieve relevant info on a topic.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The user question or search query."
                },
                "options": {
                    "type": "object",
                    "properties": {
                        "num_results": {
                            "type": "number",
                            "description": "Number of top results to return."
                        },
                        "domain_filter": {
                            "type": [
                                "string",
                                "null"
                            ],
                            "description": "Optional domain to narrow the search (e.g. 'finance', 'medical'). Pass null if not needed."
                        },
                        "sort_by": {
                            "type": [
                                "string",
                                "null"
                            ],
                            "enum": [
                                "relevance",
                                "date",
                                "popularity",
                                "alphabetical"
                            ],
                            "description": "How to sort results. Pass null if not needed."
                        }
                    },
                    "required": [
                        "num_results",
                        "domain_filter",
                        "sort_by"
                    ],
                    "additionalProperties": false
                }
            },
            "required": [
                "query",
                "options"
            ],
            "additionalProperties": false
        },
        "strict": true
    }
}];

const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Can you find information about ChatGPT in the AI knowledge base?" }],
    tools,
    store: true,
});

console.log(completion.choices[0].message.tool_calls);
```

```bash
curl https://api.openai.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
    "model": "gpt-4o",
    "messages": [
        {
            "role": "user",
            "content": "Can you find information about ChatGPT in the AI knowledge base?"
        }
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "search_knowledge_base",
                "description": "Query a knowledge base to retrieve relevant info on a topic.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "query": {
                            "type": "string",
                            "description": "The user question or search query."
                        },
                        "options": {
                            "type": "object",
                            "properties": {
                                "num_results": {
                                    "type": "number",
                                    "description": "Number of top results to return."
                                },
                                "domain_filter": {
                                    "type": [
                                        "string",
                                        "null"
                                    ],
                                    "description": "Optional domain to narrow the search (e.g. 'finance', 'medical'). Pass null if not needed."
                                },
                                "sort_by": {
                                    "type": [
                                        "string",
                                        "null"
                                    ],
                                    "enum": [
                                        "relevance",
                                        "date",
                                        "popularity",
                                        "alphabetical"
                                    ],
                                    "description": "How to sort results. Pass null if not needed."
                                }
                            },
                            "required": [
                                "num_results",
                                "domain_filter",
                                "sort_by"
                            ],
                            "additionalProperties": false
                        }
                    },
                    "required": [
                        "query",
                        "options"
                    ],
                    "additionalProperties": false
                },
                "strict": true
            }
        }
    ]
}'
```

Output

```json
[{
    "id": "call_4567xyz",
    "type": "function",
    "function": {
        "name": "search_knowledge_base",
        "arguments": "{\"query\":\"What is ChatGPT?\",\"options\":{\"num_results\":3,\"domain_filter\":null,\"sort_by\":\"relevance\"}}"
    }
}]
```

Experiment with function calling and [generate function schemas](https://platform.openai.com/docs/guides/prompt-generation) in the 
[Playground](https://platform.openai.com/playground)!

## Overview

You can extend the capabilities of OpenAI models by giving them access to `tools`, which can have one of two forms:

| Tool type | Description |
| --- | --- |
| **Function Calling** | Developer-defined code. |
| **Hosted Tools** | OpenAI-built tools. (_e.g. file search, code interpreter_)<br>Only available in the [Assistants API](https://platform.openai.com/docs/assistants/tools). |

This guide covers how you can give the model access to your own functions through **function calling**. Based on the system prompt and messages, the model may decide to call these functions — **instead of (or in addition to) generating text or audio**.

You'll then execute the function code, send back the results, and the model will incorporate them into its final response.

![Function Calling Diagram Steps](https://cdn.openai.com/API/docs/images/function-calling-diagram-steps.png)

### Sample function

Let's look at the steps to allow a model to use a real `get_weather` function defined below:

Sample get_weather function implemented in your codebase

```python
import requests

def get_weather(latitude, longitude):
    response = requests.get(f"https://api.open-meteo.com/v1/forecast?latitude={latitude}&longitude={longitude}&current=temperature_2m,wind_speed_10m&hourly=temperature_2m,relative_humidity_2m,wind_speed_10m")
    data = response.json()
    return data['current']['temperature_2m']
```

```javascript
async function getWeather(latitude, longitude) {
    const response = await fetch(`https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&current=temperature_2m,wind_speed_10m&hourly=temperature_2m,relative_humidity_2m,wind_speed_10m`);
    const data = await response.json();
    return data.current.temperature_2m;
}
```

Unlike the diagram earlier, this function expects precise `latitude` and `longitude` instead of a general `location` parameter. (However, our models can automatically determine the coordinates for many locations!)

### Function calling steps

**Call model with [functions defined](https://platform.openai.com/docs/guides/function-calling#defining-functions)** – along with your system and user messages.

Step 1: Call model with get_weather tool defined

```python
from openai import OpenAI
import json

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current temperature for provided coordinates in celsius.",
        "parameters": {
            "type": "object",
            "properties": {
                "latitude": {"type": "number"},
                "longitude": {"type": "number"}
            },
            "required": ["latitude", "longitude"],
            "additionalProperties": False
        },
        "strict": True
    }
}]

messages = [{"role": "user", "content": "What's the weather like in Paris today?"}]

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools,
)
```

```javascript
import { OpenAI } from "openai";

const openai = new OpenAI();

const tools = [{
    type: "function",
    function: {
        name: "get_weather",
        description: "Get current temperature for provided coordinates in celsius.",
        parameters: {
            type: "object",
            properties: {
                latitude: { type: "number" },
                longitude: { type: "number" }
            },
            required: ["latitude", "longitude"],
            additionalProperties: false
        },
        strict: true
    }
}];

const messages = [
    {
        role: "user",
        content: "What's the weather like in Paris today?"
    }
];

const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages,
    tools,
    store: true,
});
```

**Model decides to call function(s)** – model returns the **name** and **input arguments**.

completion.choices[0].message.tool_calls

```json
[{
    "id": "call_12345xyz",
    "type": "function",
    "function": {
      "name": "get_weather",
      "arguments": "{\"latitude\":48.8566,\"longitude\":2.3522}"
    }
}]
```

**Execute function code** – parse the model's response and [handle function calls](https://platform.openai.com/docs/guides/function-calling#handling-function-calls).

Step 3: Execute get_weather function

```python
tool_call = completion.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)

result = get_weather(args["latitude"], args["longitude"])
```

```javascript
const toolCall = completion.choices[0].message.tool_calls[0];
const args = JSON.parse(toolCall.function.arguments);

const result = await getWeather(args.latitude, args.longitude);
```

**Supply model with results** – so it can incorporate them into its final response.

Step 4: Supply result and call model again

```python
messages.append(completion.choices[0].message)  # append model's function call message
messages.append({                               # append result message
    "role": "tool",
    "tool_call_id": tool_call.id,
    "content": str(result)
})

completion_2 = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools,
)
```

```javascript
messages.push(completion.choices[0].message); // append model's function call message
messages.push({                               // append result message
    role: "tool",
    tool_call_id: toolCall.id,
    content: result.toString()
});

const completion2 = await openai.chat.completions.create({
    model: "gpt-4o",
    messages,
    tools,
    store: true,
});

console.log(completion2.choices[0].message.content);
```

**Model responds** – incorporating the result in its output.

completion_2.choices[0].message.content

```json
"The current temperature in Paris is 14°C (57.2°F)."
```

## Defining functions

Functions can be set in the `tools` parameter of each API request inside a `function` object.

A function is defined by its schema, which informs the model what it does and what input arguments it expects. It comprises the following fields:

| Field | Description |
| --- | --- |
| `name` | The function's name (e.g. `get_weather`) |
| `description` | Details on when and how to use the function |
| `parameters` | [JSON schema](https://json-schema.org/) defining the function's input arguments |

Take a look at this example, or generate your own in our [Playground](https://platform.openai.com/playground).

Example function schema

```json
{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Retrieves current weather for the given location.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City and country e.g. 
Bogotá, Colombia"
                },
                "units": {
                    "type": "string",
                    "enum": [
                        "celsius",
                        "fahrenheit"
                    ],
                    "description": "Units the temperature will be returned in."
                }
            },
            "required": [
                "location",
                "units"
            ],
            "additionalProperties": false
        },
        "strict": true
    }
}
```

Because the `parameters` are defined by a [JSON schema](https://json-schema.org/), you can leverage many of its rich features like property types, enums, descriptions, nested objects, and recursive objects.

(Optional) Function calling with pydantic and zod

While we encourage you to define your function schemas directly, our SDKs have helpers to convert `pydantic` and `zod` objects into schemas. Not all `pydantic` and `zod` features are supported.

Define objects to represent function schema

```python
from openai import OpenAI, pydantic_function_tool
from pydantic import BaseModel, Field

client = OpenAI()

class GetWeather(BaseModel):
    location: str = Field(
        ...,
        description="City and country e.g. Bogotá, Colombia"
    )

tools = [pydantic_function_tool(GetWeather)]

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather like in Paris today?"}],
    tools=tools
)

print(completion.choices[0].message.tool_calls)
```

```javascript
import OpenAI from "openai";
import { z } from "zod";
import { zodFunction } from "openai/helpers/zod";

const openai = new OpenAI();

const GetWeatherParameters = z.object({
  location: z.string().describe("City and country e.g. Bogotá, Colombia"),
});

const tools = [
  zodFunction({ name: "getWeather", parameters: GetWeatherParameters }),
];

const messages = [
  { role: "user", content: "What's the weather like in Paris today?" },
];

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages,
  tools,
  store: true,
});

console.log(response.choices[0].message.tool_calls);
```

### Best practices for defining functions

1. **Write clear and detailed function names, parameter descriptions, and instructions.**
   - **Explicitly describe the purpose of the function and each parameter** (and its format), and what the output represents.
   - **Use the system prompt to describe when (and when not) to use each function.** Generally, tell the model _exactly_ what to do.
   - **Include examples and edge cases**, especially to rectify any recurring failures. (**Note:** Adding examples may hurt performance for [reasoning models](https://platform.openai.com/docs/guides/reasoning).)
2. **Apply software engineering best practices.**
   - **Make the functions obvious and intuitive**. ([principle of least surprise](https://en.wikipedia.org/wiki/Principle_of_least_astonishment))
   - **Use enums** and object structure to make invalid states unrepresentable. (e.g. `toggle_light(on: bool, off: bool)` allows for invalid calls)
   - **Pass the intern test.** Can an intern/human correctly use the function given nothing but what you gave the model? (If not, what questions do they ask you? Add the answers to the prompt.)
3. **Offload the burden from the model and use code where possible.**
   - **Don't make the model fill arguments you already know.** For example, if you already have an `order_id` from a previous step, don't add an `order_id` param – instead, expose a parameterless `submit_refund()` and supply the `order_id` in code.
   - **Combine functions that are always called in sequence.** For example, if you always call `mark_location()` after `query_location()`, just move the marking logic into the query function call.
4. **Keep the number of functions small for higher accuracy.**
   - **Evaluate your performance** with different numbers of functions.
   - **Aim for fewer than 20 functions** at any one time, though this is just a soft suggestion.
5. **Leverage OpenAI resources.**
   - **Generate and iterate on function schemas** in the [Playground](https://platform.openai.com/playground).
   - **Consider [fine-tuning](https://platform.openai.com/docs/guides/fine-tuning) to increase function calling accuracy** for large numbers of functions or difficult tasks. ([cookbook](https://cookbook.openai.com/examples/fine_tuning_for_function_calling))

### Token Usage

Under the hood, functions are injected into the system message in a syntax the model has been trained on. This means functions count against the model's context limit and are billed as input tokens. 
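To build intuition for how schema size adds up, here is a minimal sketch of a rough cost estimate. It assumes the common heuristic of roughly 4 characters per token; the real count depends on the model's tokenizer and on how the API serializes function schemas, so treat it only as a ballpark.

```python
import json

def estimate_tool_tokens(tools, chars_per_token=4):
    """Very rough token estimate for a tools array.

    Assumes ~4 characters per token; the actual billed count depends on
    the model's tokenizer and the internal serialization format.
    """
    serialized = json.dumps(tools)
    return len(serialized) // chars_per_token

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current temperature for provided coordinates in celsius.",
        "parameters": {
            "type": "object",
            "properties": {
                "latitude": {"type": "number"},
                "longitude": {"type": "number"}
            },
            "required": ["latitude", "longitude"],
            "additionalProperties": False
        },
        "strict": True
    }
}]

print(estimate_tool_tokens(tools))
```

Running an estimate like this over your full `tools` array makes it easy to spot which function descriptions dominate your prompt budget.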
If you run into token limits, we suggest limiting the number of functions or the length of the descriptions you provide for function parameters.

It is also possible to use [fine-tuning](https://platform.openai.com/docs/guides/fine-tuning#fine-tuning-examples) to reduce the number of tokens used if you have many functions defined in your tools specification.

## Handling function calls

When the model calls a function, you must execute it and return the result. Since model responses can include zero, one, or multiple calls, it is best practice to assume there are several.

The response has an array of `tool_calls`, each with an `id` (used later to submit the function result) and a `function` containing a `name` and JSON-encoded `arguments`.

Sample response with multiple function calls

```json
[
    {
        "id": "call_12345xyz",
        "type": "function",
        "function": {
            "name": "get_weather",
            "arguments": "{\"location\":\"Paris, France\"}"
        }
    },
    {
        "id": "call_67890abc",
        "type": "function",
        "function": {
            "name": "get_weather",
            "arguments": "{\"location\":\"Bogotá, Colombia\"}"
        }
    },
    {
        "id": "call_99999def",
        "type": "function",
        "function": {
            "name": "send_email",
            "arguments": "{\"to\":\"bob@email.com\",\"body\":\"Hi bob\"}"
        }
    }
]
```

Execute function calls and append results

```python
for tool_call in completion.choices[0].message.tool_calls:
    name = tool_call.function.name
    args = json.loads(tool_call.function.arguments)

    result = call_function(name, args)
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": str(result)
    })
```

```javascript
for (const toolCall of completion.choices[0].message.tool_calls) {
    const name = toolCall.function.name;
    const args = JSON.parse(toolCall.function.arguments);

    const result = await callFunction(name, args);
    messages.push({
        role: "tool",
        tool_call_id: toolCall.id,
        content: result.toString()
    });
}
```

In the example above, we have a hypothetical `call_function` to route each call. Here’s a possible implementation:

```python
def call_function(name, args):
    if name == "get_weather":
        return get_weather(**args)
    if name == "send_email":
        return send_email(**args)
```

```javascript
const callFunction = async (name, args) => {
    if (name === "get_weather") {
        return getWeather(args.latitude, args.longitude);
    }
    if (name === "send_email") {
        return sendEmail(args.to, args.body);
    }
};
```

### Formatting results

A result must be a string, but the format is up to you (JSON, error codes, plain text, etc.). The model will interpret that string as needed.

If your function has no return value (e.g. `send_email`), simply return a string to indicate success or failure (e.g. `"success"`).

### Incorporating results into response

After appending the results to your `messages`, you can send them back to the model to get a final response.

Send results back to model

```python
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools,
)
```

```javascript
const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages,
    tools,
    store: true,
});
```

Final response

```json
"It's about 15°C in Paris, 18°C in Bogotá, and I've sent that email to Bob."
```

## Additional configurations

### Tool choice

By default the model will determine when and how many tools to use. You can force specific behavior with the `tool_choice` parameter.

1. **Auto:** (_Default_) Call zero, one, or multiple functions. `tool_choice: "auto"`
2. **Required:** Call one or more functions. `tool_choice: "required"`
3. **Forced Function:** Call exactly one specific function. `tool_choice: {"type": "function", "function": {"name": "get_weather"}}`

![Function Calling Diagram Steps](https://cdn.openai.com/API/docs/images/function-calling-diagram-tool-choice.png)

You can also set `tool_choice` to `"none"` to imitate the behavior of passing no functions.

### Parallel function calling

The model may choose to call multiple functions in a single turn. You can prevent this by setting `parallel_tool_calls` to `false`, which ensures exactly zero or one tool is called.

**Note:** Currently, if the model calls multiple functions in one turn then [strict mode](https://platform.openai.com/docs/guides/function-calling#strict-mode) will be disabled for those calls.

### Strict mode

Setting `strict` to `true` ensures function calls reliably adhere to the function schema, instead of being best effort. 
We recommend always enabling strict mode.\n\nUnder the hood, strict mode works by leveraging our [structured outputs](https://platform.openai.com/docs/guides/structured-outputs) feature and therefore introduces a couple requirements:\n\n1. `additionalProperties` must be set to `false` for each object in the `parameters`.\n2. All fields in `properties` must be marked as `required`.\n\nYou can denote optional fields by adding `null` as a `type` option (see example below).\n\nStrict mode enabledStrict mode enabledStrict mode disabledStrict mode disabled\n\nStrict mode enabled\n\n```json\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n21\n22\n23\n24\n{\n    \"type\": \"function\",\n    \"function\": {\n        \"name\": \"get_weather\",\n        \"description\": \"Retrieves current weather for the given location.\",\n        \"strict\": true,\n        \"parameters\": {\n            \"type\": \"object\",\n            \"properties\": {\n                \"location\": {\n                    \"type\": \"string\",\n                    \"description\": \"City and country e.g. 
Bogotá, Colombia\"\n                },\n                \"units\": {\n                    \"type\": [\"string\", \"null\"],\n                    \"enum\": [\"celsius\", \"fahrenheit\"],\n                    \"description\": \"Units the temperature will be returned in.\"\n                }\n            },\n            \"required\": [\"location\", \"units\"],\n            \"additionalProperties\": false\n        }\n    }\n}\n```\n\nStrict mode disabled\n\n```json\n{\n    \"type\": \"function\",\n    \"function\": {\n        \"name\": \"get_weather\",\n        \"description\": \"Retrieves current weather for the given location.\",\n        \"parameters\": {\n            \"type\": \"object\",\n            \"properties\": {\n                \"location\": {\n                    \"type\": \"string\",\n                    \"description\": \"City and country e.g. Bogotá, Colombia\"\n                },\n                \"units\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"celsius\", \"fahrenheit\"],\n                    \"description\": \"Units the temperature will be returned in.\"\n                }\n            },\n            \"required\": [\"location\"]\n        }\n    }\n}\n```\n\nAll schemas generated in the [playground](https://platform.openai.com/playground) have strict mode enabled.\n\nWhile we recommend you enable strict mode, it has a few limitations:\n\n1. Some features of JSON schema are not supported. (See [supported schemas](https://platform.openai.com/docs/guides/structured-outputs?context=with_parse#supported-schemas).)\n2. Schemas undergo additional processing on the first request (and are then cached). If your schemas vary from request to request, this may result in higher latencies.\n3. 
Schemas are cached for performance, and are not eligible for [zero data retention](https://platform.openai.com/docs/models#how-we-use-your-data).\n\n## Streaming\n\nStreaming can be used to surface progress by showing which function is called as the model fills its arguments, and even displaying the arguments in real time.\n\nStreaming function calls is very similar to streaming regular responses: you set `stream` to `true` and get chunks with `delta` objects.\n\nStreaming function calls\n\npython\n\n```python\nfrom openai import OpenAI\n\nclient = OpenAI()\n\ntools = [{\\\n    \"type\": \"function\",\\\n    \"function\": {\\\n        \"name\": \"get_weather\",\\\n        \"description\": \"Get current temperature for a given location.\",\\\n        \"parameters\": {\\\n            \"type\": \"object\",\\\n            \"properties\": {\\\n                \"location\": {\\\n                    \"type\": \"string\",\\\n                    \"description\": \"City and country e.g. 
Bogotá, Colombia\"\\\n                }\\\n            },\\\n            \"required\": [\"location\"],\\\n            \"additionalProperties\": False\\\n        },\\\n        \"strict\": True\\\n    }\\\n}]\n\nstream = client.chat.completions.create(\n    model=\"gpt-4o\",\n    messages=[{\"role\": \"user\", \"content\": \"What's the weather like in Paris today?\"}],\n    tools=tools,\n    stream=True\n)\n\nfor chunk in stream:\n    delta = chunk.choices[0].delta\n    print(delta.tool_calls)\n```\n\n```javascript\nimport { OpenAI } from \"openai\";\n\nconst openai = new OpenAI();\n\nconst tools = [{\\\n    \"type\": \"function\",\\\n    \"function\": {\\\n        \"name\": \"get_weather\",\\\n        \"description\": \"Get current temperature for a given location.\",\\\n        \"parameters\": {\\\n            \"type\": \"object\",\\\n            \"properties\": {\\\n                \"location\": {\\\n                    \"type\": \"string\",\\\n                    \"description\": \"City and country e.g. 
Bogotá, Colombia\"\\\n                }\\\n            },\\\n            \"required\": [\"location\"],\\\n            \"additionalProperties\": false\\\n        },\\\n        \"strict\": true\\\n    }\\\n}];\n\nconst stream = await openai.chat.completions.create({\n    model: \"gpt-4o\",\n    messages: [{ role: \"user\", content: \"What's the weather like in Paris today?\" }],\n    tools,\n    stream: true,\n    store: true,\n});\n\nfor await (const chunk of stream) {\n    const delta = chunk.choices[0].delta;\n    console.log(delta.tool_calls);\n}\n```\n\nOutput delta.tool_calls\n\n```json\n[{\"index\": 0, \"id\": \"call_DdmO9pD3xa9XTPNJ32zg2hcA\", \"function\": {\"arguments\": \"\", \"name\": \"get_weather\"}, \"type\": \"function\"}]\n[{\"index\": 0, \"id\": null, \"function\": {\"arguments\": \"{\\\"\", \"name\": null}, \"type\": null}]\n[{\"index\": 0, \"id\": null, \"function\": {\"arguments\": \"location\", \"name\": null}, \"type\": null}]\n[{\"index\": 0, \"id\": null, \"function\": {\"arguments\": \"\\\":\\\"\", \"name\": null}, \"type\": null}]\n[{\"index\": 0, \"id\": null, \"function\": {\"arguments\": \"Paris\", \"name\": null}, \"type\": null}]\n[{\"index\": 0, \"id\": null, \"function\": {\"arguments\": \",\", \"name\": null}, \"type\": null}]\n[{\"index\": 0, \"id\": null, \"function\": {\"arguments\": \" France\", \"name\": null}, \"type\": null}]\n[{\"index\": 0, \"id\": null, \"function\": {\"arguments\": \"\\\"}\", \"name\": null}, \"type\": null}]\nnull\n```\n\nInstead of aggregating chunks into a single `content` string, however, you're aggregating chunks into an encoded `arguments` JSON object.\n\nWhen the model calls one or more functions the `tool_calls` field of each `delta` will be populated. Each `tool_call` contains the following fields:\n\n| Field | Description |\n| --- | --- |\n| `index` | Identifies which function call the `delta` is for |\n| `id` | Tool call id. 
|\n| `function` | Function call delta (`name` and `arguments`) |\n| `type` | Type of `tool_call` (always `function` for function calls) |\n\nMany of these fields are only set for the first `delta` of each tool call, like `id`, `function.name`, and `type`.\n\nBelow is a code snippet demonstrating how to aggregate the `delta`s into a final `tool_calls` object.\n\nAccumulating tool_call deltas\n\npython\n\n```python\nfinal_tool_calls = {}\n\nfor chunk in stream:\n    for tool_call in chunk.choices[0].delta.tool_calls or []:\n        index = tool_call.index\n\n        if index not in final_tool_calls:\n            final_tool_calls[index] = tool_call\n\n        final_tool_calls[index].function.arguments += tool_call.function.arguments\n```\n\n```javascript\nconst finalToolCalls = {};\n\nfor await (const chunk of stream) {\n    const toolCalls = chunk.choices[0].delta.tool_calls || [];\n    for (const toolCall of toolCalls) {\n        const { index } = toolCall;\n\n        if (!finalToolCalls[index]) {\n            finalToolCalls[index] = toolCall;\n        }\n\n        finalToolCalls[index].function.arguments += toolCall.function.arguments;\n    }\n}\n```\n\nAccumulated final_tool_calls[0]\n\n```json\n{\n    \"index\": 0,\n    \"id\": \"call_RzfkBpJgzeR0S242qfvjadNe\",\n    \"function\": {\n        \"name\": \"get_weather\",\n        \"arguments\": \"{\\\"location\\\":\\\"Paris, France\\\"}\"\n    }\n}\n```\n"
  },
  {
    "path": "ai_docs/python_anthropic.md",
    "content": "Build with Claude\n\nTool use (function calling)\n\nClaude is capable of interacting with external client-side tools and functions, allowing you to equip Claude with your own custom tools to perform a wider variety of tasks.\n\nLearn everything you need to master tool use with Claude via our new\ncomprehensive [tool use\\\\\ncourse](https://github.com/anthropics/courses/tree/master/tool_use)! Please\ncontinue to share your ideas and suggestions using this\n[form](https://forms.gle/BFnYc6iCkWoRzFgk7).\n\nHere’s an example of how to provide tools to Claude using the Messages API:\n\nShell\n\nPython\n\nCopy\n\n```bash\ncurl https://api.anthropic.com/v1/messages \\\n  -H \"content-type: application/json\" \\\n  -H \"x-api-key: $ANTHROPIC_API_KEY\" \\\n  -H \"anthropic-version: 2023-06-01\" \\\n  -d '{\n    \"model\": \"claude-3-5-sonnet-20241022\",\n    \"max_tokens\": 1024,\n    \"tools\": [\\\n      {\\\n        \"name\": \"get_weather\",\\\n        \"description\": \"Get the current weather in a given location\",\\\n        \"input_schema\": {\\\n          \"type\": \"object\",\\\n          \"properties\": {\\\n            \"location\": {\\\n              \"type\": \"string\",\\\n              \"description\": \"The city and state, e.g. 
San Francisco, CA\"\\\n            }\\\n          },\\\n          \"required\": [\"location\"]\\\n        }\\\n      }\\\n    ],\n    \"messages\": [\\\n      {\\\n        \"role\": \"user\",\\\n        \"content\": \"What is the weather like in San Francisco?\"\\\n      }\\\n    ]\n  }'\n\n```\n\n* * *\n\n## [​](https://docs.anthropic.com/en/docs/build-with-claude/tool-use\\#how-tool-use-works)  How tool use works\n\nIntegrate external tools with Claude in these steps:\n\n1\n\nProvide Claude with tools and a user prompt\n\n- Define tools with names, descriptions, and input schemas in your API request.\n- Include a user prompt that might require these tools, e.g., “What’s the weather in San Francisco?”\n\n2\n\nClaude decides to use a tool\n\n- Claude assesses if any tools can help with the user’s query.\n- If yes, Claude constructs a properly formatted tool use request.\n- The API response has a `stop_reason` of `tool_use`, signaling Claude’s intent.\n\n3\n\nExtract tool input, run code, and return results\n\n- On your end, extract the tool name and input from Claude’s request.\n- Execute the actual tool code client-side.\n- Continue the conversation with a new `user` message containing a `tool_result` content block.\n\n4\n\nClaude uses tool result to formulate a response\n\n- Claude analyzes the tool results to craft its final response to the original user prompt.\n\nNote: Steps 3 and 4 are optional. For some workflows, Claude’s tool use request (step 2) might be all you need, without sending results back to Claude.\n\n**Tools are user-provided**\n\nIt’s important to note that Claude does not have access to any built-in server-side tools. All tools must be explicitly provided by you, the user, in each API request. 
This gives you full control and flexibility over the tools Claude can use.\n\nThe [computer use (beta)](https://docs.anthropic.com/en/docs/build-with-claude/computer-use) functionality is an exception - it introduces tools that are provided by Anthropic but implemented by you, the user.\n\n* * *\n\n## [​](https://docs.anthropic.com/en/docs/build-with-claude/tool-use\\#how-to-implement-tool-use)  How to implement tool use\n\n### [​](https://docs.anthropic.com/en/docs/build-with-claude/tool-use\\#choosing-a-model)  Choosing a model\n\nGenerally, use Claude 3.5 Sonnet or Claude 3 Opus for complex tools and ambiguous queries; they handle multiple tools better and seek clarification when needed.\n\nUse Claude 3.5 Haiku or Claude 3 Haiku for straightforward tools, but note they may infer missing parameters.\n\n### [​](https://docs.anthropic.com/en/docs/build-with-claude/tool-use\\#specifying-tools)  Specifying tools\n\nTools are specified in the `tools` top-level parameter of the API request. Each tool definition includes:\n\n| Parameter | Description |\n| --- | --- |\n| `name` | The name of the tool. Must match the regex `^[a-zA-Z0-9_-]{1,64}$`. |\n| `description` | A detailed plaintext description of what the tool does, when it should be used, and how it behaves. |\n| `input_schema` | A [JSON Schema](https://json-schema.org/) object defining the expected parameters for the tool. |\n\nExample simple tool definition\n\nJSON\n\nCopy\n\n```JSON\n{\n  \"name\": \"get_weather\",\n  \"description\": \"Get the current weather in a given location\",\n  \"input_schema\": {\n    \"type\": \"object\",\n    \"properties\": {\n      \"location\": {\n        \"type\": \"string\",\n        \"description\": \"The city and state, e.g. 
San Francisco, CA\"\n      },\n      \"unit\": {\n        \"type\": \"string\",\n        \"enum\": [\"celsius\", \"fahrenheit\"],\n        \"description\": \"The unit of temperature, either 'celsius' or 'fahrenheit'\"\n      }\n    },\n    \"required\": [\"location\"]\n  }\n}\n\n```\n\nThis tool, named `get_weather`, expects an input object with a required `location` string and an optional `unit` string that must be either “celsius” or “fahrenheit”.\n\n#### [​](https://docs.anthropic.com/en/docs/build-with-claude/tool-use\\#tool-use-system-prompt)  Tool use system prompt\n\nWhen you call the Anthropic API with the `tools` parameter, we construct a special system prompt from the tool definitions, tool configuration, and any user-specified system prompt. The constructed prompt is designed to instruct the model to use the specified tool(s) and provide the necessary context for the tool to operate properly:\n\nCopy\n\n```\nIn this environment you have access to a set of tools you can use to answer the user's question.\n{{ FORMATTING INSTRUCTIONS }}\nString and scalar parameters should be specified as is, while lists and objects should use JSON format. Note that spaces for string values are not stripped. The output is not expected to be valid XML and is parsed with regular expressions.\nHere are the functions available in JSONSchema format:\n{{ TOOL DEFINITIONS IN JSON SCHEMA }}\n{{ USER SYSTEM PROMPT }}\n{{ TOOL CONFIGURATION }}\n\n```\n\n#### [​](https://docs.anthropic.com/en/docs/build-with-claude/tool-use\\#best-practices-for-tool-definitions)  Best practices for tool definitions\n\nTo get the best performance out of Claude when using tools, follow these guidelines:\n\n- **Provide extremely detailed descriptions.** This is by far the most important factor in tool performance. 
Your descriptions should explain every detail about the tool, including:\n\n  - What the tool does\n  - When it should be used (and when it shouldn’t)\n  - What each parameter means and how it affects the tool’s behavior\n  - Any important caveats or limitations, such as what information the tool does not return. The more context you can give Claude about your tools, the better it will be at deciding when and how to use them. Aim for at least 3-4 sentences per tool description, more if the tool is complex.\n- **Prioritize descriptions over examples.** While you can include examples of how to use a tool in its description or in the accompanying prompt, this is less important than having a clear and comprehensive explanation of the tool’s purpose and parameters. Only add examples after you’ve fully fleshed out the description.\n\nExample of a good tool description\n\nJSON\n\nCopy\n\n```JSON\n{\n  \"name\": \"get_stock_price\",\n  \"description\": \"Retrieves the current stock price for a given ticker symbol. The ticker symbol must be a valid symbol for a publicly traded company on a major US stock exchange like NYSE or NASDAQ. The tool will return the latest trade price in USD. It should be used when the user asks about the current or most recent price of a specific stock. It will not provide any other information about the stock or company.\",\n  \"input_schema\": {\n    \"type\": \"object\",\n    \"properties\": {\n      \"ticker\": {\n        \"type\": \"string\",\n        \"description\": \"The stock ticker symbol, e.g. 
AAPL for Apple Inc.\"\n      }\n    },\n    \"required\": [\"ticker\"]\n  }\n}\n\n```\n\nExample poor tool description\n\nJSON\n\nCopy\n\n```JSON\n{\n  \"name\": \"get_stock_price\",\n  \"description\": \"Gets the stock price for a ticker.\",\n  \"input_schema\": {\n    \"type\": \"object\",\n    \"properties\": {\n      \"ticker\": {\n        \"type\": \"string\"\n      }\n    },\n    \"required\": [\"ticker\"]\n  }\n}\n\n```\n\nThe good description clearly explains what the tool does, when to use it, what data it returns, and what the `ticker` parameter means. The poor description is too brief and leaves Claude with many open questions about the tool’s behavior and usage.\n\n### [​](https://docs.anthropic.com/en/docs/build-with-claude/tool-use\\#controlling-claudes-output)  Controlling Claude’s output\n\n#### [​](https://docs.anthropic.com/en/docs/build-with-claude/tool-use\\#forcing-tool-use)  Forcing tool use\n\nIn some cases, you may want Claude to use a specific tool to answer the user’s question, even if Claude thinks it can provide an answer without using a tool. You can do this by specifying the tool in the `tool_choice` field like so:\n\nCopy\n\n```\ntool_choice = {\"type\": \"tool\", \"name\": \"get_weather\"}\n\n```\n\nWhen working with the tool\\_choice parameter, we have three possible options:\n\n- `auto` allows Claude to decide whether to call any provided tools or not. This is the default value.\n- `any` tells Claude that it must use one of the provided tools, but doesn’t force a particular tool.\n- `tool` allows us to force Claude to always use a particular tool.\n\nThis diagram illustrates how each option works:\n\n![](https://mintlify.s3.us-west-1.amazonaws.com/anthropic/images/tool_choice.png)\n\nNote that when you have `tool_choice` as `any` or `tool`, we will prefill the assistant message to force a tool to be used. 
This means that the models will not emit a chain-of-thought `text` content block before `tool_use` content blocks, even if explicitly asked to do so.\n\nOur testing has shown that this should not reduce performance. If you would like to keep chain-of-thought (particularly with Opus) while still requesting that the model use a specific tool, you can use `{\"type\": \"auto\"}` for `tool_choice` (the default) and add explicit instructions in a `user` message. For example: `What's the weather like in London? Use the get_weather tool in your response.`\n\n#### [​](https://docs.anthropic.com/en/docs/build-with-claude/tool-use\\#json-output)  JSON output\n\nTools do not necessarily need to be client-side functions — you can use tools anytime you want the model to return JSON output that follows a provided schema. For example, you might use a `record_summary` tool with a particular schema. See [tool use examples](https://docs.anthropic.com/en/docs/build-with-claude/tool-use#json-mode) for a full working example.\n\n#### [​](https://docs.anthropic.com/en/docs/build-with-claude/tool-use\\#chain-of-thought)  Chain of thought\n\nWhen using tools, Claude will often show its “chain of thought”, i.e. the step-by-step reasoning it uses to break down the problem and decide which tools to use. The Claude 3 Opus model will do this if `tool_choice` is set to `auto` (this is the default value, see [Forcing tool use](https://docs.anthropic.com/en/docs/build-with-claude/tool-use#forcing-tool-use)), and Sonnet and Haiku can be prompted into doing it.\n\nFor example, given the prompt “What’s the weather like in San Francisco right now, and what time is it there?”, Claude might respond with:\n\nJSON\n\nCopy\n\n```JSON\n{\n  \"role\": \"assistant\",\n  \"content\": [\\\n    {\\\n      \"type\": \"text\",\\\n      \"text\": \"<thinking>To answer this question, I will: 1. Use the get_weather tool to get the current weather in San Francisco. 2. 
Use the get_time tool to get the current time in the America/Los_Angeles timezone, which covers San Francisco, CA.</thinking>\"\\\n    },\\\n    {\\\n      \"type\": \"tool_use\",\\\n      \"id\": \"toolu_01A09q90qw90lq917835lq9\",\\\n      \"name\": \"get_weather\",\\\n      \"input\": {\"location\": \"San Francisco, CA\"}\\\n    }\\\n  ]\n}\n\n```\n\nThis chain of thought gives insight into Claude’s reasoning process and can help you debug unexpected behavior.\n\nWith the Claude 3 Sonnet model, chain of thought is less common by default, but you can prompt Claude to show its reasoning by adding something like `\"Before answering, explain your reasoning step-by-step in tags.\"` to the user message or system prompt.\n\nIt’s important to note that while the `<thinking>` tags are a common convention Claude uses to denote its chain of thought, the exact format (such as what this XML tag is named) may change over time. Your code should treat the chain of thought like any other assistant-generated text, and not rely on the presence or specific formatting of the `<thinking>` tags.\n\n#### [​](https://docs.anthropic.com/en/docs/build-with-claude/tool-use\\#disabling-parallel-tool-use)  Disabling parallel tool use\n\nBy default, Claude may use multiple tools to answer a user query. 
You can disable this behavior by setting `disable_parallel_tool_use=true` in the `tool_choice` field.\n\n- When `tool_choice` type is `auto`, this ensures that Claude uses **at most one** tool\n- When `tool_choice` type is `any` or `tool`, this ensures that Claude uses **exactly one** tool\n\n### [​](https://docs.anthropic.com/en/docs/build-with-claude/tool-use\\#handling-tool-use-and-tool-result-content-blocks)  Handling tool use and tool result content blocks\n\nWhen Claude decides to use one of the tools you’ve provided, it will return a response with a `stop_reason` of `tool_use` and one or more `tool_use` content blocks in the API response that include:\n\n- `id`: A unique identifier for this particular tool use block. This will be used to match up the tool results later.\n- `name`: The name of the tool being used.\n- `input`: An object containing the input being passed to the tool, conforming to the tool’s `input_schema`.\n\nExample API response with a \\`tool\\_use\\` content block\n\nJSON\n\nCopy\n\n```JSON\n{\n  \"id\": \"msg_01Aq9w938a90dw8q\",\n  \"model\": \"claude-3-5-sonnet-20241022\",\n  \"stop_reason\": \"tool_use\",\n  \"role\": \"assistant\",\n  \"content\": [\\\n    {\\\n      \"type\": \"text\",\\\n      \"text\": \"<thinking>I need to use the get_weather, and the user wants SF, which is likely San Francisco, CA.</thinking>\"\\\n    },\\\n    {\\\n      \"type\": \"tool_use\",\\\n      \"id\": \"toolu_01A09q90qw90lq917835lq9\",\\\n      \"name\": \"get_weather\",\\\n      \"input\": {\"location\": \"San Francisco, CA\", \"unit\": \"celsius\"}\\\n    }\\\n  ]\n}\n\n```\n\nWhen you receive a tool use response, you should:\n\n1. Extract the `name`, `id`, and `input` from the `tool_use` block.\n2. Run the actual tool in your codebase corresponding to that tool name, passing in the tool `input`.\n3. 
Continue the conversation by sending a new message with the `role` of `user`, and a `content` block containing the `tool_result` type and the following information:\n\n   - `tool_use_id`: The `id` of the tool use request this is a result for.\n   - `content`: The result of the tool, as a string (e.g. `\"content\": \"15 degrees\"`) or list of nested content blocks (e.g. `\"content\": [{\"type\": \"text\", \"text\": \"15 degrees\"}]`). These content blocks can use the `text` or `image` types.\n   - `is_error` (optional): Set to `true` if the tool execution resulted in an error.\n\nExample of successful tool result\n\nJSON\n\nCopy\n\n```JSON\n{\n  \"role\": \"user\",\n  \"content\": [\\\n    {\\\n      \"type\": \"tool_result\",\\\n      \"tool_use_id\": \"toolu_01A09q90qw90lq917835lq9\",\\\n      \"content\": \"15 degrees\"\\\n    }\\\n  ]\n}\n\n```\n\nExample of tool result with images\n\nJSON\n\nCopy\n\n```JSON\n{\n  \"role\": \"user\",\n  \"content\": [\\\n    {\\\n      \"type\": \"tool_result\",\\\n      \"tool_use_id\": \"toolu_01A09q90qw90lq917835lq9\",\\\n      \"content\": [\\\n        {\"type\": \"text\", \"text\": \"15 degrees\"},\\\n        {\\\n          \"type\": \"image\",\\\n          \"source\": {\\\n            \"type\": \"base64\",\\\n            \"media_type\": \"image/jpeg\",\\\n            \"data\": \"/9j/4AAQSkZJRg...\"\\\n          }\\\n        }\\\n      ]\\\n    }\\\n  ]\n}\n\n```\n\nExample of empty tool result\n\nJSON\n\nCopy\n\n```JSON\n{\n  \"role\": \"user\",\n  \"content\": [\\\n    {\\\n      \"type\": \"tool_result\",\\\n      \"tool_use_id\": \"toolu_01A09q90qw90lq917835lq9\"\\\n    }\\\n  ]\n}\n\n```\n\nAfter receiving the tool result, Claude will use that information to continue generating a response to the original user prompt.\n\n**Differences from other APIs**\n\nUnlike APIs that separate tool use or use special roles like `tool` or `function`, Anthropic’s API integrates tools directly into the `user` and `assistant` message 
structure.\n\nMessages contain arrays of `text`, `image`, `tool_use`, and `tool_result` blocks. `user` messages include client-side content and `tool_result`, while `assistant` messages contain AI-generated content and `tool_use`.\n\n### [​](https://docs.anthropic.com/en/docs/build-with-claude/tool-use\\#troubleshooting-errors)  Troubleshooting errors\n\nThere are a few different types of errors that can occur when using tools with Claude:\n\nTool execution error\n\nIf the tool itself throws an error during execution (e.g. a network error when fetching weather data), you can return the error message in the `content` along with `\"is_error\": true`:\n\nJSON\n\nCopy\n\n```JSON\n{\n  \"role\": \"user\",\n  \"content\": [\\\n    {\\\n      \"type\": \"tool_result\",\\\n      \"tool_use_id\": \"toolu_01A09q90qw90lq917835lq9\",\\\n      \"content\": \"ConnectionError: the weather service API is not available (HTTP 500)\",\\\n      \"is_error\": true\\\n    }\\\n  ]\n}\n\n```\n\nClaude will then incorporate this error into its response to the user, e.g. “I’m sorry, I was unable to retrieve the current weather because the weather service API is not available. Please try again later.”\n\nMax tokens exceeded\n\nIf Claude’s response is cut off due to hitting the `max_tokens` limit, and the truncated response contains an incomplete tool use block, you’ll need to retry the request with a higher `max_tokens` value to get the full tool use.\n\nInvalid tool name\n\nIf Claude’s attempted use of a tool is invalid (e.g. missing required parameters), it usually means that there wasn’t enough information for Claude to use the tool correctly. 
Your best bet during development is to try the request again with more-detailed `description` values in your tool definitions.\n\nHowever, you can also continue the conversation forward with a `tool_result` that indicates the error, and Claude will try to use the tool again with the missing information filled in:\n\nJSON\n\nCopy\n\n```JSON\n{\n  \"role\": \"user\",\n  \"content\": [\\\n    {\\\n      \"type\": \"tool_result\",\\\n      \"tool_use_id\": \"toolu_01A09q90qw90lq917835lq9\",\\\n      \"content\": \"Error: Missing required 'location' parameter\",\\\n      \"is_error\": true\\\n    }\\\n  ]\n}\n\n```\n\nIf a tool request is invalid or missing parameters, Claude will retry 2-3 times with corrections before apologizing to the user.\n\n<search\\_quality\\_reflection> tags\n\nTo prevent Claude from reflecting on search quality with <search\\_quality\\_reflection> tags, add “Do not reflect on the quality of the returned search results in your response” to your prompt.\n\n* * *\n\n## [​](https://docs.anthropic.com/en/docs/build-with-claude/tool-use\\#tool-use-examples)  Tool use examples\n\nHere are a few code examples demonstrating various tool use patterns and techniques. 
For brevity’s sake, the tools are simple tools, and the tool descriptions are shorter than would be ideal to ensure best performance.\n\nSingle tool example\n\nShell\n\nPython\n\nCopy\n\n```bash\ncurl https://api.anthropic.com/v1/messages \\\n     --header \"x-api-key: $ANTHROPIC_API_KEY\" \\\n     --header \"anthropic-version: 2023-06-01\" \\\n     --header \"content-type: application/json\" \\\n     --data \\\n'{\n    \"model\": \"claude-3-5-sonnet-20241022\",\n    \"max_tokens\": 1024,\n    \"tools\": [{\\\n        \"name\": \"get_weather\",\\\n        \"description\": \"Get the current weather in a given location\",\\\n        \"input_schema\": {\\\n            \"type\": \"object\",\\\n            \"properties\": {\\\n                \"location\": {\\\n                    \"type\": \"string\",\\\n                    \"description\": \"The city and state, e.g. San Francisco, CA\"\\\n                },\\\n                \"unit\": {\\\n                    \"type\": \"string\",\\\n                    \"enum\": [\"celsius\", \"fahrenheit\"],\\\n                    \"description\": \"The unit of temperature, either \\\"celsius\\\" or \\\"fahrenheit\\\"\"\\\n                }\\\n            },\\\n            \"required\": [\"location\"]\\\n        }\\\n    }],\n    \"messages\": [{\"role\": \"user\", \"content\": \"What is the weather like in San Francisco?\"}]\n}'\n\n```\n\nClaude will return a response similar to:\n\nJSON\n\nCopy\n\n```JSON\n{\n  \"id\": \"msg_01Aq9w938a90dw8q\",\n  \"model\": \"claude-3-5-sonnet-20241022\",\n  \"stop_reason\": \"tool_use\",\n  \"role\": \"assistant\",\n  \"content\": [\\\n    {\\\n      \"type\": \"text\",\\\n      \"text\": \"<thinking>I need to call the get_weather function, and the user wants SF, which is likely San Francisco, CA.</thinking>\"\\\n    },\\\n    {\\\n      \"type\": \"tool_use\",\\\n      \"id\": \"toolu_01A09q90qw90lq917835lq9\",\\\n      \"name\": \"get_weather\",\\\n      \"input\": {\"location\": \"San 
Francisco, CA\", \"unit\": \"celsius\"}\\\n    }\\\n  ]\n}\n\n```\n\nYou would then need to execute the `get_weather` function with the provided input, and return the result in a new `user` message:\n\nShell\n\nPython\n\nCopy\n\n```bash\ncurl https://api.anthropic.com/v1/messages \\\n     --header \"x-api-key: $ANTHROPIC_API_KEY\" \\\n     --header \"anthropic-version: 2023-06-01\" \\\n     --header \"content-type: application/json\" \\\n     --data \\\n'{\n    \"model\": \"claude-3-5-sonnet-20241022\",\n    \"max_tokens\": 1024,\n    \"tools\": [\\\n        {\\\n            \"name\": \"get_weather\",\\\n            \"description\": \"Get the current weather in a given location\",\\\n            \"input_schema\": {\\\n                \"type\": \"object\",\\\n                \"properties\": {\\\n                    \"location\": {\\\n                        \"type\": \"string\",\\\n                        \"description\": \"The city and state, e.g. San Francisco, CA\"\\\n                    },\\\n                    \"unit\": {\\\n                        \"type\": \"string\",\\\n                        \"enum\": [\"celsius\", \"fahrenheit\"],\\\n                        \"description\": \"The unit of temperature, either \\\"celsius\\\" or \\\"fahrenheit\\\"\"\\\n                    }\\\n                },\\\n                \"required\": [\"location\"]\\\n            }\\\n        }\\\n    ],\n    \"messages\": [\\\n        {\\\n            \"role\": \"user\",\\\n            \"content\": \"What is the weather like in San Francisco?\"\\\n        },\\\n        {\\\n            \"role\": \"assistant\",\\\n            \"content\": [\\\n                {\\\n                    \"type\": \"text\",\\\n                    \"text\": \"<thinking>I need to use get_weather, and the user wants SF, which is likely San Francisco, CA.</thinking>\"\\\n                },\\\n                {\\\n                    \"type\": \"tool_use\",\\\n                    \"id\": 
\"toolu_01A09q90qw90lq917835lq9\",\\\n                    \"name\": \"get_weather\",\\\n                    \"input\": {\\\n                        \"location\": \"San Francisco, CA\",\\\n                        \"unit\": \"celsius\"\\\n                    }\\\n                }\\\n            ]\\\n        },\\\n        {\\\n            \"role\": \"user\",\\\n            \"content\": [\\\n                {\\\n                    \"type\": \"tool_result\",\\\n                    \"tool_use_id\": \"toolu_01A09q90qw90lq917835lq9\",\\\n                    \"content\": \"15 degrees\"\\\n                }\\\n            ]\\\n        }\\\n    ]\n}'\n\n```\n\nThis will print Claude’s final response, incorporating the weather data:\n\nJSON\n\nCopy\n\n```JSON\n{\n  \"id\": \"msg_01Aq9w938a90dw8q\",\n  \"model\": \"claude-3-5-sonnet-20241022\",\n  \"stop_reason\": \"stop_sequence\",\n  \"role\": \"assistant\",\n  \"content\": [\\\n    {\\\n      \"type\": \"text\",\\\n      \"text\": \"The current weather in San Francisco is 15 degrees Celsius (59 degrees Fahrenheit). It's a cool day in the city by the bay!\"\\\n    }\\\n  ]\n}\n\n```\n\nMultiple tool example\n\nYou can provide Claude with multiple tools to choose from in a single request. 
Here’s an example with both a `get_weather` and a `get_time` tool, along with a user query that asks for both.\n\nShell\n\nPython\n\nCopy\n\n```bash\ncurl https://api.anthropic.com/v1/messages \\\n     --header \"x-api-key: $ANTHROPIC_API_KEY\" \\\n     --header \"anthropic-version: 2023-06-01\" \\\n     --header \"content-type: application/json\" \\\n     --data \\\n'{\n    \"model\": \"claude-3-5-sonnet-20241022\",\n    \"max_tokens\": 1024,\n    \"tools\": [{\\\n        \"name\": \"get_weather\",\\\n        \"description\": \"Get the current weather in a given location\",\\\n        \"input_schema\": {\\\n            \"type\": \"object\",\\\n            \"properties\": {\\\n                \"location\": {\\\n                    \"type\": \"string\",\\\n                    \"description\": \"The city and state, e.g. San Francisco, CA\"\\\n                },\\\n                \"unit\": {\\\n                    \"type\": \"string\",\\\n                    \"enum\": [\"celsius\", \"fahrenheit\"],\\\n                    \"description\": \"The unit of temperature, either 'celsius' or 'fahrenheit'\"\\\n                }\\\n            },\\\n            \"required\": [\"location\"]\\\n        }\\\n    },\\\n    {\\\n        \"name\": \"get_time\",\\\n        \"description\": \"Get the current time in a given time zone\",\\\n        \"input_schema\": {\\\n            \"type\": \"object\",\\\n            \"properties\": {\\\n                \"timezone\": {\\\n                    \"type\": \"string\",\\\n                    \"description\": \"The IANA time zone name, e.g. America/Los_Angeles\"\\\n                }\\\n            },\\\n            \"required\": [\"timezone\"]\\\n        }\\\n    }],\n    \"messages\": [{\\\n        \"role\": \"user\",\\\n        \"content\": \"What is the weather like right now in New York? 
Also what time is it there?\"\\\n    }]\n}'\n\n```\n\nIn this case, Claude will most likely try to use two separate tools, one at a time — `get_weather` and then `get_time` — in order to fully answer the user’s question. However, it will also occasionally output two `tool_use` blocks at once, particularly if they are not dependent on each other. You would need to execute each tool and return their results in separate `tool_result` blocks within a single `user` message.\n\nMissing information\n\nIf the user’s prompt doesn’t include enough information to fill all the required parameters for a tool, Claude 3 Opus is much more likely to recognize that a parameter is missing and ask for it. Claude 3 Sonnet may ask, especially when prompted to think before outputting a tool request. But it may also do its best to infer a reasonable value.\n\nFor example, using the `get_weather` tool above, if you ask Claude “What’s the weather?” without specifying a location, Claude, particularly Claude 3 Sonnet, may make a guess about tool inputs:\n\nJSON\n\nCopy\n\n```JSON\n{\n  \"type\": \"tool_use\",\n  \"id\": \"toolu_01A09q90qw90lq917835lq9\",\n  \"name\": \"get_weather\",\n  \"input\": {\"location\": \"New York, NY\", \"unit\": \"fahrenheit\"}\n}\n\n```\n\nThis behavior is not guaranteed, especially for more ambiguous prompts and for models less intelligent than Claude 3 Opus. If Claude 3 Opus doesn’t have enough context to fill in the required parameters, it is far more likely to respond with a clarifying question instead of making a tool call.\n\nSequential tools\n\nSome tasks may require calling multiple tools in sequence, using the output of one tool as the input to another. In such a case, Claude will call one tool at a time. 
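Whenever an assistant turn contains `tool_use` blocks, whether one or several, the client-side handling is the same: run each requested tool and send back one `tool_result` per `tool_use`, all inside a single `user` message with matching ids. Here is a minimal sketch of that dispatch step; the local tool implementations, their return values, and the `toolu_a`/`toolu_b` ids are hypothetical illustrations, not part of the API.

```python
# Hypothetical local tool implementations (your own code, not the API's).
def get_weather(location, unit='fahrenheit'):
    return f'59 degrees {unit} in {location}'

def get_time(timezone):
    return f'14:30 in {timezone}'

TOOLS = {'get_weather': get_weather, 'get_time': get_time}

def run_tools(assistant_content):
    """Execute every tool_use block in an assistant turn and build the
    single user message carrying all of the matching tool_result blocks."""
    results = []
    for block in assistant_content:
        if block['type'] != 'tool_use':
            continue  # skip text blocks
        output = TOOLS[block['name']](**block['input'])
        results.append({
            'type': 'tool_result',
            'tool_use_id': block['id'],  # must match the tool_use id
            'content': output,
        })
    return {'role': 'user', 'content': results}

# Example assistant turn with two parallel tool_use blocks:
turn = [
    {'type': 'tool_use', 'id': 'toolu_a', 'name': 'get_weather',
     'input': {'location': 'New York, NY'}},
    {'type': 'tool_use', 'id': 'toolu_b', 'name': 'get_time',
     'input': {'timezone': 'America/New_York'}},
]
reply = run_tools(turn)
```

In a real loop you would append this `user` message to the conversation and call the API again, repeating until the response's `stop_reason` is no longer `tool_use`.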
If prompted to call the tools all at once, Claude is likely to guess parameters for tools further downstream if they are dependent on tool results for tools further upstream.\n\nHere’s an example of using a `get_location` tool to get the user’s location, then passing that location to the `get_weather` tool:\n\nShell\n\nPython\n\nCopy\n\n```bash\ncurl https://api.anthropic.com/v1/messages \\\n     --header \"x-api-key: $ANTHROPIC_API_KEY\" \\\n     --header \"anthropic-version: 2023-06-01\" \\\n     --header \"content-type: application/json\" \\\n     --data \\\n'{\n    \"model\": \"claude-3-5-sonnet-20241022\",\n    \"max_tokens\": 1024,\n    \"tools\": [\\\n        {\\\n            \"name\": \"get_location\",\\\n            \"description\": \"Get the current user location based on their IP address. This tool has no parameters or arguments.\",\\\n            \"input_schema\": {\\\n                \"type\": \"object\",\\\n                \"properties\": {}\\\n            }\\\n        },\\\n        {\\\n            \"name\": \"get_weather\",\\\n            \"description\": \"Get the current weather in a given location\",\\\n            \"input_schema\": {\\\n                \"type\": \"object\",\\\n                \"properties\": {\\\n                    \"location\": {\\\n                        \"type\": \"string\",\\\n                        \"description\": \"The city and state, e.g. 
San Francisco, CA\"\\\n                    },\\\n                    \"unit\": {\\\n                        \"type\": \"string\",\\\n                        \"enum\": [\"celsius\", \"fahrenheit\"],\\\n                        \"description\": \"The unit of temperature, either 'celsius' or 'fahrenheit'\"\\\n                    }\\\n                },\\\n                \"required\": [\"location\"]\\\n            }\\\n        }\\\n    ],\n    \"messages\": [{\\\n        \"role\": \"user\",\\\n        \"content\": \"What is the weather like where I am?\"\\\n    }]\n}'\n\n```\n\nIn this case, Claude would first call the `get_location` tool to get the user’s location. After you return the location in a `tool_result`, Claude would then call `get_weather` with that location to get the final answer.\n\nThe full conversation might look like:\n\n| Role | Content |\n| --- | --- |\n| User | What’s the weather like where I am? |\n| Assistant | <thinking>To answer this, I first need to determine the user’s location using the get\\_location tool. Then I can pass that location to the get\\_weather tool to find the current weather there.</thinking>\\[Tool use for get\\_location\\] |\n| User | \\[Tool result for get\\_location with matching id and result of San Francisco, CA\\] |\n| Assistant | \\[Tool use for get\\_weather with the following input\\]{ “location”: “San Francisco, CA”, “unit”: “fahrenheit” } |\n| User | \\[Tool result for get\\_weather with matching id and result of “59°F (15°C), mostly cloudy”\\] |\n| Assistant | Based on your current location in San Francisco, CA, the weather right now is 59°F (15°C) and mostly cloudy. It’s a fairly cool and overcast day in the city. You may want to bring a light jacket if you’re heading outside. |\n\nThis example demonstrates how Claude can chain together multiple tool calls to answer a question that requires gathering data from different sources. The key steps are:\n\n1. 
Claude first realizes it needs the user’s location to answer the weather question, so it calls the `get_location` tool.\n2. The user (i.e. the client code) executes the actual `get_location` function and returns the result “San Francisco, CA” in a `tool_result` block.\n3. With the location now known, Claude proceeds to call the `get_weather` tool, passing in “San Francisco, CA” as the `location` parameter (as well as a guessed `unit` parameter, as `unit` is not a required parameter).\n4. The user again executes the actual `get_weather` function with the provided arguments and returns the weather data in another `tool_result` block.\n5. Finally, Claude incorporates the weather data into a natural language response to the original question.\n\nChain of thought tool use\n\nBy default, Claude 3 Opus is prompted to think before it answers a tool use query to best determine whether a tool is necessary, which tool to use, and the appropriate parameters. Claude 3 Sonnet and Claude 3 Haiku are prompted to try to use tools as much as possible and are more likely to call an unnecessary tool or infer missing parameters. To prompt Sonnet or Haiku to better assess the user query before making tool calls, the following prompt can be used:\n\nChain of thought prompt\n\n`Answer the user's request using relevant tools (if they are available). Before calling a tool, do some analysis within \\<thinking>\\</thinking> tags. First, think about which of the provided tools is the relevant tool to answer the user's request. Second, go through each of the required parameters of the relevant tool and determine if the user has directly provided or given enough information to infer a value. When deciding if the parameter can be inferred, carefully consider all the context to see if it supports a specific value. If all of the required parameters are present or can be reasonably inferred, close the thinking tag and proceed with the tool call. 
BUT, if one of the values for a required parameter is missing, DO NOT invoke the function (not even with fillers for the missing params) and instead, ask the user to provide the missing parameters. DO NOT ask for more information on optional parameters if it is not provided. `\n\nJSON mode\n\nYou can use tools to get Claude to produce JSON output that follows a schema, even if you don’t have any intention of running that output through a tool or function.\n\nWhen using tools in this way:\n\n- You usually want to provide a **single** tool\n- You should set `tool_choice` (see [Forcing tool use](https://docs.anthropic.com/en/docs/tool-use#forcing-tool-use)) to instruct the model to explicitly use that tool\n- Remember that the model will pass the `input` to the tool, so the name of the tool and description should be from the model’s perspective.\n\nThe following uses a `record_summary` tool to describe an image following a particular format.\n\nShell\n\nPython\n\nCopy\n\n```bash\n#!/bin/bash\nIMAGE_URL=\"https://upload.wikimedia.org/wikipedia/commons/a/a7/Camponotus_flavomarginatus_ant.jpg\"\nIMAGE_MEDIA_TYPE=\"image/jpeg\"\nIMAGE_BASE64=$(curl \"$IMAGE_URL\" | base64)\n\ncurl https://api.anthropic.com/v1/messages \\\n     --header \"content-type: application/json\" \\\n     --header \"x-api-key: $ANTHROPIC_API_KEY\" \\\n     --header \"anthropic-version: 2023-06-01\" \\\n     --data \\\n'{\n    \"model\": \"claude-3-5-sonnet-latest\",\n    \"max_tokens\": 1024,\n    \"tools\": [{\\\n        \"name\": \"record_summary\",\\\n        \"description\": \"Record summary of an image using well-structured JSON.\",\\\n        \"input_schema\": {\\\n            \"type\": \"object\",\\\n            \"properties\": {\\\n                \"key_colors\": {\\\n                    \"type\": \"array\",\\\n                    \"items\": {\\\n                        \"type\": \"object\",\\\n                        \"properties\": {\\\n                            \"r\": { \"type\": 
\"number\", \"description\": \"red value [0.0, 1.0]\" },\\\n                            \"g\": { \"type\": \"number\", \"description\": \"green value [0.0, 1.0]\" },\\\n                            \"b\": { \"type\": \"number\", \"description\": \"blue value [0.0, 1.0]\" },\\\n                            \"name\": { \"type\": \"string\", \"description\": \"Human-readable color name in snake_case, e.g. \\\"olive_green\\\" or \\\"turquoise\\\"\" }\\\n                        },\\\n                        \"required\": [ \"r\", \"g\", \"b\", \"name\" ]\\\n                    },\\\n                    \"description\": \"Key colors in the image. Limit to fewer than four.\"\\\n                },\\\n                \"description\": {\\\n                    \"type\": \"string\",\\\n                    \"description\": \"Image description. One to two sentences max.\"\\\n                },\\\n                \"estimated_year\": {\\\n                    \"type\": \"integer\",\\\n                    \"description\": \"Estimated year that the image was taken, if it is a photo. Only set this if the image appears to be non-fictional. 
Rough estimates are okay!\"\\\n                }\\\n            },\\\n            \"required\": [ \"key_colors\", \"description\" ]\\\n        }\\\n    }],\n    \"tool_choice\": {\"type\": \"tool\", \"name\": \"record_summary\"},\n    \"messages\": [\\\n        {\"role\": \"user\", \"content\": [\\\n            {\"type\": \"image\", \"source\": {\\\n                \"type\": \"base64\",\\\n                \"media_type\": \"'$IMAGE_MEDIA_TYPE'\",\\\n                \"data\": \"'$IMAGE_BASE64'\"\\\n            }},\\\n            {\"type\": \"text\", \"text\": \"Describe this image.\"}\\\n        ]}\\\n    ]\n}'\n\n```\n\n* * *\n\n## [​](https://docs.anthropic.com/en/docs/build-with-claude/tool-use\\#pricing)  Pricing\n\nTool use requests are priced the same as any other Claude API request, based on the total number of input tokens sent to the model (including in the `tools` parameter) and the number of output tokens generated.\n\nThe additional tokens from tool use come from:\n\n- The `tools` parameter in API requests (tool names, descriptions, and schemas)\n- `tool_use` content blocks in API requests and responses\n- `tool_result` content blocks in API requests\n\nWhen you use `tools`, we also automatically include a special system prompt for the model which enables tool use. 
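As a rough sketch of how these pieces add up: the message and tool-definition token counts below are made-up illustrations, not measured values; only the system prompt count (346 tokens for Claude 3.5 Sonnet with `auto` tool choice) comes from the table in this section.

```python
# System prompt token counts for auto tool choice, per the table in this section.
TOOL_USE_SYSTEM_PROMPT_TOKENS = {
    'claude-3-5-sonnet-20241022': 346,  # auto tool choice
    'claude-3-opus-20240229': 530,      # auto tool choice
}

def estimate_input_tokens(message_tokens, tool_definition_tokens, model):
    """Billed input tokens = message tokens + serialized tool
    definitions + the automatically injected tool use system prompt."""
    return (message_tokens
            + tool_definition_tokens
            + TOOL_USE_SYSTEM_PROMPT_TOKENS[model])

total = estimate_input_tokens(
    message_tokens=12,           # hypothetical count for the user prompt
    tool_definition_tokens=140,  # hypothetical count for the tools parameter
    model='claude-3-5-sonnet-20241022',
)
# total == 12 + 140 + 346 == 498
```

In practice you do not need to estimate: the `usage` field of every response reports the actual input and output token counts.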
The number of tool use tokens required for each model is listed below (excluding the additional tokens listed above):\n\n| Model | Tool choice | Tool use system prompt token count |\n| --- | --- | --- |\n| Claude 3.5 Sonnet (Oct) | `auto` | 346 tokens |\n| Claude 3.5 Sonnet (Oct) | `any`, `tool` | 313 tokens |\n| Claude 3 Opus | `auto` | 530 tokens |\n| Claude 3 Opus | `any`, `tool` | 281 tokens |\n| Claude 3 Sonnet | `auto` | 159 tokens |\n| Claude 3 Sonnet | `any`, `tool` | 235 tokens |\n| Claude 3 Haiku | `auto` | 264 tokens |\n| Claude 3 Haiku | `any`, `tool` | 340 tokens |\n| Claude 3.5 Sonnet (June) | `auto` | 294 tokens |\n| Claude 3.5 Sonnet (June) | `any`, `tool` | 261 tokens |\n\nThese token counts are added to your normal input and output tokens to calculate the total cost of a request. Refer to our [models overview table](https://docs.anthropic.com/en/docs/models-overview#model-comparison) for current per-model prices.\n\nWhen you send a tool use prompt, just like any other API request, the response will output both input and output token counts as part of the reported `usage` metrics.\n\n* * *\n\n## [​](https://docs.anthropic.com/en/docs/build-with-claude/tool-use\\#next-steps)  Next Steps\n\nExplore our repository of ready-to-implement tool use code examples in our cookbooks:\n\n[**Calculator Tool** - Learn how to integrate a simple calculator tool with Claude for precise numerical computations.](https://github.com/anthropics/anthropic-cookbook/blob/main/tool_use/calculator_tool.ipynb) [**Customer Service Agent** - Build a responsive customer service bot that leverages client-side tools to enhance support.](https://github.com/anthropics/anthropic-cookbook/blob/main/tool_use/customer_service_agent.ipynb) [**JSON Extractor** - See how Claude and tool use can extract structured data from unstructured text.](https://github.com/anthropics/anthropic-cookbook/blob/main/tool_use/extracting_structured_json.ipynb)\n"
  },
  {
    "path": "ai_docs/python_genai.md",
    "content": "\n# Google Gen AI SDK\n\n[Permalink: Google Gen AI SDK](https://github.com/googleapis/python-genai#google-gen-ai-sdk)\n\n[![PyPI version](https://camo.githubusercontent.com/af4dae966695dbde629839adb60210ed763579c6f73cf6159ed8aa64e68fd35b/68747470733a2f2f696d672e736869656c64732e696f2f707970692f762f676f6f676c652d67656e61692e737667)](https://pypi.org/project/google-genai/)\n\n* * *\n\n**Documentation:** [https://googleapis.github.io/python-genai/](https://googleapis.github.io/python-genai/)\n\n* * *\n\nGoogle Gen AI Python SDK provides an interface for developers to integrate Google's generative models into their Python applications. It supports the [Gemini Developer API](https://ai.google.dev/gemini-api/docs) and [Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/overview) APIs.\n\n## Installation\n\n[Permalink: Installation](https://github.com/googleapis/python-genai#installation)\n\n```\npip install google-genai\n```\n\n## Imports\n\n[Permalink: Imports](https://github.com/googleapis/python-genai#imports)\n\n```\nfrom google import genai\nfrom google.genai import types\n```\n\n## Create a client\n\n[Permalink: Create a client](https://github.com/googleapis/python-genai#create-a-client)\n\nPlease run one of the following code blocks to create a client for\ndifferent services ( [Gemini Developer API](https://ai.google.dev/gemini-api/docs) or [Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/overview)).\n\n```\n# Only run this block for Gemini Developer API\nclient = genai.Client(api_key='GEMINI_API_KEY')\n```\n\n```\n# Only run this block for Vertex AI API\nclient = genai.Client(\n    vertexai=True, project='your-project-id', location='us-central1'\n)\n```\n\n**(Optional) Using environment variables:**\n\nYou can create a client by configuring the necessary environment variables.\nConfiguration setup instructions depend on whether you're using the Gemini API\non Vertex AI or the ML Dev Gemini API.\n\n**ML 
Dev Gemini API:** Set `GOOGLE_API_KEY` as shown below:\n\n```\nexport GOOGLE_API_KEY='your-api-key'\n```\n\n**Vertex AI API:** Set `GOOGLE_GENAI_USE_VERTEXAI`, `GOOGLE_CLOUD_PROJECT`\nand `GOOGLE_CLOUD_LOCATION`, as shown below:\n\n```\nexport GOOGLE_GENAI_USE_VERTEXAI=true\nexport GOOGLE_CLOUD_PROJECT='your-project-id'\nexport GOOGLE_CLOUD_LOCATION='us-central1'\n```\n\n```\nclient = genai.Client()\n```\n\n### API Selection\n\n[Permalink: API Selection](https://github.com/googleapis/python-genai#api-selection)\n\nTo set the API version, use `http_options`. For example, to set the API version\nto `v1` for Vertex AI:\n\n```\nclient = genai.Client(\n    vertexai=True, project='your-project-id', location='us-central1',\n    http_options={'api_version': 'v1'}\n)\n```\n\nTo set the API version to `v1alpha` for the Gemini API:\n\n```\nclient = genai.Client(api_key='GEMINI_API_KEY',\n                      http_options={'api_version': 'v1alpha'})\n```\n\n## Types\n\n[Permalink: Types](https://github.com/googleapis/python-genai#types)\n\nParameter types can be specified as either dictionaries ( `TypedDict`) or\n[Pydantic Models](https://pydantic.readthedocs.io/en/stable/model.html).\nPydantic model types are available in the `types` module.\n\n## Models\n\n[Permalink: Models](https://github.com/googleapis/python-genai#models)\n\nThe `client.models` module exposes model inferencing and model getters.\n\n### Generate Content\n\n[Permalink: Generate Content](https://github.com/googleapis/python-genai#generate-content)\n\n#### with text content\n\n[Permalink: with text content](https://github.com/googleapis/python-genai#with-text-content)\n\n```\nresponse = client.models.generate_content(\n    model='gemini-2.0-flash-001', contents='why is the sky blue?'\n)\nprint(response.text)\n```\n\n#### with uploaded file (Gemini API only)\n\n[Permalink: with uploaded file (Gemini API only)](https://github.com/googleapis/python-genai#with-uploaded-file-gemini-api-only)\n\ndownload the file 
in console.\n\n```\n!wget -q https://storage.googleapis.com/generativeai-downloads/data/a11.txt\n```\n\npython code.\n\n```\nfile = client.files.upload(file='a11.txt')\nresponse = client.models.generate_content(\n    model='gemini-2.0-flash-001',\n    contents=['Could you summarize this file?', file]\n)\nprint(response.text)\n```\n\n#### How to structure `contents`\n\n[Permalink: How to structure contents](https://github.com/googleapis/python-genai#how-to-structure-contents)\n\nThere are several ways to structure the `contents` in your request.\n\nProvide a single string as shown in the text example above:\n\n```\ncontents='Can you recommend some things to do in Boston and New York in the winter?'\n```\n\nProvide a single `Content` instance with multiple `Part` instances:\n\n```\ncontents=types.Content(parts=[\\\n    types.Part.from_text(text='Can you recommend some things to do in Boston in the winter?'),\\\n    types.Part.from_text(text='Can you recommend some things to do in New York in the winter?')\\\n], role='user')\n```\n\nWhen sending more than one input type, provide a list with multiple `Content`\ninstances:\n\n```\ncontents=[\\\n    'What is this a picture of?',\\\n    types.Part.from_uri(\\\n        file_uri='gs://generativeai-downloads/images/scones.jpg',\\\n        mime_type='image/jpeg',\\\n    ),\\\n],\n```\n\n### System Instructions and Other Configs\n\n[Permalink: System Instructions and Other Configs](https://github.com/googleapis/python-genai#system-instructions-and-other-configs)\n\n```\nresponse = client.models.generate_content(\n    model='gemini-2.0-flash-001',\n    contents='high',\n    config=types.GenerateContentConfig(\n        system_instruction='I say high, you say low',\n        temperature=0.3,\n    ),\n)\nprint(response.text)\n```\n\n### Typed Config\n\n[Permalink: Typed Config](https://github.com/googleapis/python-genai#typed-config)\n\nAll API methods support Pydantic types for parameters as well as\ndictionaries. 
You can get the type from `google.genai.types`.\n\n```\nresponse = client.models.generate_content(\n    model='gemini-2.0-flash-001',\n    contents=types.Part.from_text(text='Why is the sky blue?'),\n    config=types.GenerateContentConfig(\n        temperature=0,\n        top_p=0.95,\n        top_k=20,\n        candidate_count=1,\n        seed=5,\n        max_output_tokens=100,\n        stop_sequences=['STOP!'],\n        presence_penalty=0.0,\n        frequency_penalty=0.0,\n    ),\n)\n\nprint(response.text)\n```\n\n### List Base Models\n\n[Permalink: List Base Models](https://github.com/googleapis/python-genai#list-base-models)\n\nTo retrieve tuned models, see [list tuned models](https://github.com/googleapis/python-genai#list-tuned-models).\n\n```\nfor model in client.models.list():\n    print(model)\n```\n\n```\npager = client.models.list(config={'page_size': 10})\nprint(pager.page_size)\nprint(pager[0])\npager.next_page()\nprint(pager[0])\n```\n\n#### Async\n\n[Permalink: Async](https://github.com/googleapis/python-genai#async)\n\n```\nasync for job in await client.aio.models.list():\n    print(job)\n```\n\n```\nasync_pager = await client.aio.models.list(config={'page_size': 10})\nprint(async_pager.page_size)\nprint(async_pager[0])\nawait async_pager.next_page()\nprint(async_pager[0])\n```\n\n### Safety Settings\n\n[Permalink: Safety Settings](https://github.com/googleapis/python-genai#safety-settings)\n\n```\nresponse = client.models.generate_content(\n    model='gemini-2.0-flash-001',\n    contents='Say something bad.',\n    config=types.GenerateContentConfig(\n        safety_settings=[\\\n            types.SafetySetting(\\\n                category='HARM_CATEGORY_HATE_SPEECH',\\\n                threshold='BLOCK_ONLY_HIGH',\\\n            )\\\n        ]\n    ),\n)\nprint(response.text)\n```\n\n### Function Calling\n\n[Permalink: Function Calling](https://github.com/googleapis/python-genai#function-calling)\n\n#### Automatic Python function 
Support\n\n[Permalink: Automatic Python function Support](https://github.com/googleapis/python-genai#automatic-python-function-support)\n\nYou can pass a Python function directly and it will be automatically\ncalled, with its response returned to the model.\n\n```\ndef get_current_weather(location: str) -> str:\n    \"\"\"Returns the current weather.\n\n    Args:\n      location: The city and state, e.g. San Francisco, CA\n    \"\"\"\n    return 'sunny'\n\nresponse = client.models.generate_content(\n    model='gemini-2.0-flash-001',\n    contents='What is the weather like in Boston?',\n    config=types.GenerateContentConfig(tools=[get_current_weather]),\n)\n\nprint(response.text)\n```\n\n#### Manually declare and invoke a function for function calling\n\n[Permalink: Manually declare and invoke a function for function calling](https://github.com/googleapis/python-genai#manually-declare-and-invoke-a-function-for-function-calling)\n\nIf you don't want to use the automatic function support, you can manually\ndeclare the function and invoke it.\n\nThe following example shows how to declare a function and pass it as a tool.\nThen you will receive a function call part in the response.\n\n```\nfunction = types.FunctionDeclaration(\n    name='get_current_weather',\n    description='Get the current weather in a given location',\n    parameters=types.Schema(\n        type='OBJECT',\n        properties={\n            'location': types.Schema(\n                type='STRING',\n                description='The city and state, e.g. 
San Francisco, CA',\n            ),\n        },\n        required=['location'],\n    ),\n)\n\ntool = types.Tool(function_declarations=[function])\n\nresponse = client.models.generate_content(\n    model='gemini-2.0-flash-001',\n    contents='What is the weather like in Boston?',\n    config=types.GenerateContentConfig(tools=[tool]),\n)\n\nprint(response.function_calls[0])\n```\n\nAfter you receive the function call part from the model, you can invoke the function\nand get the function response. And then you can pass the function response to\nthe model.\nThe following example shows how to do it for a simple function invocation.\n\n```\nuser_prompt_content = types.Content(\n    role='user',\n    parts=[types.Part.from_text(text='What is the weather like in Boston?')],\n)\nfunction_call_part = response.function_calls[0]\nfunction_call_content = response.candidates[0].content\n\ntry:\n    function_result = get_current_weather(\n        **function_call_part.function_call.args\n    )\n    function_response = {'result': function_result}\nexcept (\n    Exception\n) as e:  # instead of raising the exception, you can let the model handle it\n    function_response = {'error': str(e)}\n\nfunction_response_part = types.Part.from_function_response(\n    name=function_call_part.name,\n    response=function_response,\n)\nfunction_response_content = types.Content(\n    role='tool', parts=[function_response_part]\n)\n\nresponse = client.models.generate_content(\n    model='gemini-2.0-flash-001',\n    contents=[\\\n        user_prompt_content,\\\n        function_call_content,\\\n        function_response_content,\\\n    ],\n    config=types.GenerateContentConfig(\n        tools=[tool],\n    ),\n)\n\nprint(response.text)\n```\n\n#### Function calling with `ANY` tools config mode\n\n[Permalink: Function calling with ANY tools config mode](https://github.com/googleapis/python-genai#function-calling-with-any-tools-config-mode)\n\nIf you configure function calling mode to be `ANY`, then 
the model will always\nreturn function call parts. If you also pass a Python function as a tool, by\ndefault the SDK will perform automatic function calling until the number of remote calls exceeds the\nmaximum allowed for automatic function calling (which defaults to 10).\n\nIf you'd like to disable automatic function calling in `ANY` mode:\n\n```\ndef get_current_weather(location: str) -> str:\n    \"\"\"Returns the current weather.\n\n    Args:\n      location: The city and state, e.g. San Francisco, CA\n    \"\"\"\n    return \"sunny\"\n\nresponse = client.models.generate_content(\n    model=\"gemini-2.0-flash-001\",\n    contents=\"What is the weather like in Boston?\",\n    config=types.GenerateContentConfig(\n        tools=[get_current_weather],\n        automatic_function_calling=types.AutomaticFunctionCallingConfig(\n            disable=True\n        ),\n        tool_config=types.ToolConfig(\n            function_calling_config=types.FunctionCallingConfig(mode='ANY')\n        ),\n    ),\n)\n```\n\nIf you'd like to allow `x` turns of automatic function calling, configure the\nmaximum remote calls to be `x + 1`. For example, to allow `1` automatic function calling turn:\n\n```\ndef get_current_weather(location: str) -> str:\n    \"\"\"Returns the current weather.\n\n    Args:\n      location: The city and state, e.g. 
San Francisco, CA\n    \"\"\"\n    return \"sunny\"\n\nresponse = client.models.generate_content(\n    model=\"gemini-2.0-flash-001\",\n    contents=\"What is the weather like in Boston?\",\n    config=types.GenerateContentConfig(\n        tools=[get_current_weather],\n        automatic_function_calling=types.AutomaticFunctionCallingConfig(\n            maximum_remote_calls=2\n        ),\n        tool_config=types.ToolConfig(\n            function_calling_config=types.FunctionCallingConfig(mode='ANY')\n        ),\n    ),\n)\n```\n\n### JSON Response Schema\n\n[Permalink: JSON Response Schema](https://github.com/googleapis/python-genai#json-response-schema)\n\n#### Pydantic Model Schema support\n\n[Permalink: Pydantic Model Schema support](https://github.com/googleapis/python-genai#pydantic-model-schema-support)\n\nSchemas can be provided as Pydantic Models.\n\n```\nfrom pydantic import BaseModel\n\nclass CountryInfo(BaseModel):\n    name: str\n    population: int\n    capital: str\n    continent: str\n    gdp: int\n    official_language: str\n    total_area_sq_mi: int\n\nresponse = client.models.generate_content(\n    model='gemini-2.0-flash-001',\n    contents='Give me information for the United States.',\n    config=types.GenerateContentConfig(\n        response_mime_type='application/json',\n        response_schema=CountryInfo,\n    ),\n)\nprint(response.text)\n```\n\n```\nresponse = client.models.generate_content(\n    model='gemini-2.0-flash-001',\n    contents='Give me information for the United States.',\n    config=types.GenerateContentConfig(\n        response_mime_type='application/json',\n        response_schema={\n            'required': [\\\n                'name',\\\n                'population',\\\n                'capital',\\\n                'continent',\\\n                'gdp',\\\n                'official_language',\\\n                'total_area_sq_mi',\\\n            ],\n            'properties': {\n                'name': {'type': 
'STRING'},\n                'population': {'type': 'INTEGER'},\n                'capital': {'type': 'STRING'},\n                'continent': {'type': 'STRING'},\n                'gdp': {'type': 'INTEGER'},\n                'official_language': {'type': 'STRING'},\n                'total_area_sq_mi': {'type': 'INTEGER'},\n            },\n            'type': 'OBJECT',\n        },\n    ),\n)\nprint(response.text)\n```\n\n### Enum Response Schema\n\n[Permalink: Enum Response Schema](https://github.com/googleapis/python-genai#enum-response-schema)\n\n#### Text Response\n\n[Permalink: Text Response](https://github.com/googleapis/python-genai#text-response)\n\nYou can set response\_mime\_type to 'text/x.enum' to have the model return one\nof the values of an enum class as the response.\n\n```\nfrom enum import Enum\n\nclass InstrumentEnum(Enum):\n  PERCUSSION = 'Percussion'\n  STRING = 'String'\n  WOODWIND = 'Woodwind'\n  BRASS = 'Brass'\n  KEYBOARD = 'Keyboard'\n\nresponse = client.models.generate_content(\n      model='gemini-2.0-flash-001',\n      contents='What instrument plays multiple notes at once?',\n      config={\n          'response_mime_type': 'text/x.enum',\n          'response_schema': InstrumentEnum,\n      },\n  )\nprint(response.text)\n```\n\n#### JSON Response\n\n[Permalink: JSON Response](https://github.com/googleapis/python-genai#json-response)\n\nYou can also set response\_mime\_type to 'application/json'; the response is the\nsame enum value, but quoted as a JSON string.\n\n```\nfrom enum import Enum\n\nclass InstrumentEnum(Enum):\n  PERCUSSION = 'Percussion'\n  STRING = 'String'\n  WOODWIND = 'Woodwind'\n  BRASS = 'Brass'\n  KEYBOARD = 'Keyboard'\n\nresponse = client.models.generate_content(\n      model='gemini-2.0-flash-001',\n      contents='What instrument plays multiple notes at once?',\n      config={\n          'response_mime_type': 'application/json',\n          'response_schema': InstrumentEnum,\n      },\n  )\nprint(response.text)\n```\n\n### Streaming\n\n[Permalink: 
Streaming](https://github.com/googleapis/python-genai#streaming)\n\n#### Streaming for text content\n\n[Permalink: Streaming for text content](https://github.com/googleapis/python-genai#streaming-for-text-content)\n\n```\nfor chunk in client.models.generate_content_stream(\n    model='gemini-2.0-flash-001', contents='Tell me a story in 300 words.'\n):\n    print(chunk.text, end='')\n```\n\n#### Streaming for image content\n\n[Permalink: Streaming for image content](https://github.com/googleapis/python-genai#streaming-for-image-content)\n\nIf your image is stored in [Google Cloud Storage](https://cloud.google.com/storage),\nyou can use the `from_uri` class method to create a `Part` object.\n\n```\nfor chunk in client.models.generate_content_stream(\n    model='gemini-2.0-flash-001',\n    contents=[\\\n        'What is this image about?',\\\n        types.Part.from_uri(\\\n            file_uri='gs://generativeai-downloads/images/scones.jpg',\\\n            mime_type='image/jpeg',\\\n        ),\\\n    ],\n):\n    print(chunk.text, end='')\n```\n\nIf your image is stored in your local file system, you can read it in as bytes\ndata and use the `from_bytes` class method to create a `Part` object.\n\n```\nYOUR_IMAGE_PATH = 'your_image_path'\nYOUR_IMAGE_MIME_TYPE = 'your_image_mime_type'\nwith open(YOUR_IMAGE_PATH, 'rb') as f:\n    image_bytes = f.read()\n\nfor chunk in client.models.generate_content_stream(\n    model='gemini-2.0-flash-001',\n    contents=[\\\n        'What is this image about?',\\\n        types.Part.from_bytes(data=image_bytes, mime_type=YOUR_IMAGE_MIME_TYPE),\\\n    ],\n):\n    print(chunk.text, end='')\n```\n\n### Async\n\n[Permalink: Async](https://github.com/googleapis/python-genai#async-1)\n\n`client.aio` exposes all the analogous [`async` methods](https://docs.python.org/3/library/asyncio.html)\nthat are available on `client`\n\nFor example, `client.aio.models.generate_content` is the `async` version\nof 
`client.models.generate_content`\n\n```\nresponse = await client.aio.models.generate_content(\n    model='gemini-2.0-flash-001', contents='Tell me a story in 300 words.'\n)\n\nprint(response.text)\n```\n\n### Streaming\n\n[Permalink: Streaming](https://github.com/googleapis/python-genai#streaming-1)\n\n```\nasync for chunk in await client.aio.models.generate_content_stream(\n    model='gemini-2.0-flash-001', contents='Tell me a story in 300 words.'\n):\n    print(chunk.text, end='')\n```\n\n### Count Tokens and Compute Tokens\n\n[Permalink: Count Tokens and Compute Tokens](https://github.com/googleapis/python-genai#count-tokens-and-compute-tokens)\n\n```\nresponse = client.models.count_tokens(\n    model='gemini-2.0-flash-001',\n    contents='why is the sky blue?',\n)\nprint(response)\n```\n\n#### Compute Tokens\n\n[Permalink: Compute Tokens](https://github.com/googleapis/python-genai#compute-tokens)\n\nCompute tokens is only supported in Vertex AI.\n\n```\nresponse = client.models.compute_tokens(\n    model='gemini-2.0-flash-001',\n    contents='why is the sky blue?',\n)\nprint(response)\n```\n\n##### Async\n\n[Permalink: Async](https://github.com/googleapis/python-genai#async-2)\n\n```\nresponse = await client.aio.models.count_tokens(\n    model='gemini-2.0-flash-001',\n    contents='why is the sky blue?',\n)\nprint(response)\n```\n\n### Embed Content\n\n[Permalink: Embed Content](https://github.com/googleapis/python-genai#embed-content)\n\n```\nresponse = client.models.embed_content(\n    model='text-embedding-004',\n    contents='why is the sky blue?',\n)\nprint(response)\n```\n\n```\n# multiple contents with config\nresponse = client.models.embed_content(\n    model='text-embedding-004',\n    contents=['why is the sky blue?', 'What is your age?'],\n    config=types.EmbedContentConfig(output_dimensionality=10),\n)\n\nprint(response)\n```\n\n### Imagen\n\n[Permalink: Imagen](https://github.com/googleapis/python-genai#imagen)\n\n#### Generate 
Images\n\n[Permalink: Generate Images](https://github.com/googleapis/python-genai#generate-images)\n\nSupport for generating images with the Gemini Developer API is behind an allowlist.\n\n```\n# Generate Image\nresponse1 = client.models.generate_images(\n    model='imagen-3.0-generate-002',\n    prompt='An umbrella in the foreground, and a rainy night sky in the background',\n    config=types.GenerateImagesConfig(\n        negative_prompt='human',\n        number_of_images=1,\n        include_rai_reason=True,\n        output_mime_type='image/jpeg',\n    ),\n)\nresponse1.generated_images[0].image.show()\n```\n\n#### Upscale Image\n\n[Permalink: Upscale Image](https://github.com/googleapis/python-genai#upscale-image)\n\nUpscale image is only supported in Vertex AI.\n\n```\n# Upscale the generated image from above\nresponse2 = client.models.upscale_image(\n    model='imagen-3.0-generate-001',\n    image=response1.generated_images[0].image,\n    upscale_factor='x2',\n    config=types.UpscaleImageConfig(\n        include_rai_reason=True,\n        output_mime_type='image/jpeg',\n    ),\n)\nresponse2.generated_images[0].image.show()\n```\n\n#### Edit Image\n\n[Permalink: Edit Image](https://github.com/googleapis/python-genai#edit-image)\n\nEdit image uses a separate model from generate and upscale.\n\nEdit image is only supported in Vertex AI.\n\n```\n# Edit the generated image from above\nfrom google.genai.types import RawReferenceImage, MaskReferenceImage\n\nraw_ref_image = RawReferenceImage(\n    reference_id=1,\n    reference_image=response1.generated_images[0].image,\n)\n\n# Model computes a mask of the background\nmask_ref_image = MaskReferenceImage(\n    reference_id=2,\n    config=types.MaskReferenceConfig(\n        mask_mode='MASK_MODE_BACKGROUND',\n        mask_dilation=0,\n    ),\n)\n\nresponse3 = client.models.edit_image(\n    model='imagen-3.0-capability-001',\n    prompt='Sunlight and clear sky',\n    reference_images=[raw_ref_image, mask_ref_image],\n    
config=types.EditImageConfig(\n        edit_mode='EDIT_MODE_INPAINT_INSERTION',\n        number_of_images=1,\n        negative_prompt='human',\n        include_rai_reason=True,\n        output_mime_type='image/jpeg',\n    ),\n)\nresponse3.generated_images[0].image.show()\n```\n\n## Chats\n\n[Permalink: Chats](https://github.com/googleapis/python-genai#chats)\n\nCreate a chat session to start a multi-turn conversation with the model.\n\n### Send Message\n\n[Permalink: Send Message](https://github.com/googleapis/python-genai#send-message)\n\n```\nchat = client.chats.create(model='gemini-2.0-flash-001')\nresponse = chat.send_message('tell me a story')\nprint(response.text)\n```\n\n### Streaming\n\n[Permalink: Streaming](https://github.com/googleapis/python-genai#streaming-2)\n\n```\nchat = client.chats.create(model='gemini-2.0-flash-001')\nfor chunk in chat.send_message_stream('tell me a story'):\n    print(chunk.text)\n```\n\n### Async\n\n[Permalink: Async](https://github.com/googleapis/python-genai#async-3)\n\n```\nchat = client.aio.chats.create(model='gemini-2.0-flash-001')\nresponse = await chat.send_message('tell me a story')\nprint(response.text)\n```\n\n### Async Streaming\n\n[Permalink: Async Streaming](https://github.com/googleapis/python-genai#async-streaming)\n\n```\nchat = client.aio.chats.create(model='gemini-2.0-flash-001')\nasync for chunk in await chat.send_message_stream('tell me a story'):\n    print(chunk.text)\n```\n\n## Files\n\n[Permalink: Files](https://github.com/googleapis/python-genai#files)\n\nFiles are only supported in the Gemini Developer API.\n\n```\n!gsutil cp gs://cloud-samples-data/generative-ai/pdf/2312.11805v3.pdf .\n!gsutil cp gs://cloud-samples-data/generative-ai/pdf/2403.05530.pdf .\n```\n\n### Upload\n\n[Permalink: Upload](https://github.com/googleapis/python-genai#upload)\n\n```\nfile1 = client.files.upload(file='2312.11805v3.pdf')\nfile2 = client.files.upload(file='2403.05530.pdf')\n\nprint(file1)\nprint(file2)\n```\n\n### 
Get\n\n[Permalink: Get](https://github.com/googleapis/python-genai#get)\n\n```\nfile1 = client.files.upload(file='2312.11805v3.pdf')\nfile_info = client.files.get(name=file1.name)\n```\n\n### Delete\n\n[Permalink: Delete](https://github.com/googleapis/python-genai#delete)\n\n```\nfile3 = client.files.upload(file='2312.11805v3.pdf')\n\nclient.files.delete(name=file3.name)\n```\n\n## Caches\n\n[Permalink: Caches](https://github.com/googleapis/python-genai#caches)\n\n`client.caches` contains the control plane APIs for cached content\n\n### Create\n\n[Permalink: Create](https://github.com/googleapis/python-genai#create)\n\n```\nif client.vertexai:\n    file_uris = [\\\n        'gs://cloud-samples-data/generative-ai/pdf/2312.11805v3.pdf',\\\n        'gs://cloud-samples-data/generative-ai/pdf/2403.05530.pdf',\\\n    ]\nelse:\n    file_uris = [file1.uri, file2.uri]\n\ncached_content = client.caches.create(\n    model='gemini-1.5-pro-002',\n    config=types.CreateCachedContentConfig(\n        contents=[\\\n            types.Content(\\\n                role='user',\\\n                parts=[\\\n                    types.Part.from_uri(\\\n                        file_uri=file_uris[0], mime_type='application/pdf'\\\n                    ),\\\n                    types.Part.from_uri(\\\n                        file_uri=file_uris[1],\\\n                        mime_type='application/pdf',\\\n                    ),\\\n                ],\\\n            )\\\n        ],\n        system_instruction='What is the sum of the two pdfs?',\n        display_name='test cache',\n        ttl='3600s',\n    ),\n)\n```\n\n### Get\n\n[Permalink: Get](https://github.com/googleapis/python-genai#get-1)\n\n```\ncached_content = client.caches.get(name=cached_content.name)\n```\n\n### Generate Content with Caches\n\n[Permalink: Generate Content with Caches](https://github.com/googleapis/python-genai#generate-content-with-caches)\n\n```\nresponse = client.models.generate_content(\n    
model='gemini-1.5-pro-002',\n    contents='Summarize the pdfs',\n    config=types.GenerateContentConfig(\n        cached_content=cached_content.name,\n    ),\n)\nprint(response.text)\n```\n\n## Tunings\n\n[Permalink: Tunings](https://github.com/googleapis/python-genai#tunings)\n\n`client.tunings` contains tuning job APIs and supports supervised\nfine-tuning through `tune`.\n\n### Tune\n\n[Permalink: Tune](https://github.com/googleapis/python-genai#tune)\n\n- Vertex AI supports tuning from a GCS source\n- The Gemini Developer API supports tuning from inline examples\n\n```\nif client.vertexai:\n    model = 'gemini-1.5-pro-002'\n    training_dataset = types.TuningDataset(\n        gcs_uri='gs://cloud-samples-data/ai-platform/generative_ai/gemini-1_5/text/sft_train_data.jsonl',\n    )\nelse:\n    model = 'models/gemini-1.0-pro-001'\n    training_dataset = types.TuningDataset(\n        examples=[\\\n            types.TuningExample(\\\n                text_input=f'Input text {i}',\\\n                output=f'Output text {i}',\\\n            )\\\n            for i in range(5)\\\n        ],\n    )\n```\n\n```\ntuning_job = client.tunings.tune(\n    base_model=model,\n    training_dataset=training_dataset,\n    config=types.CreateTuningJobConfig(\n        epoch_count=1, tuned_model_display_name='test_dataset_examples model'\n    ),\n)\nprint(tuning_job)\n```\n\n### Get Tuning Job\n\n[Permalink: Get Tuning Job](https://github.com/googleapis/python-genai#get-tuning-job)\n\n```\ntuning_job = client.tunings.get(name=tuning_job.name)\nprint(tuning_job)\n```\n\n```\nimport time\n\nrunning_states = set(\n    [\\\n        'JOB_STATE_PENDING',\\\n        'JOB_STATE_RUNNING',\\\n    ]\n)\n\nwhile tuning_job.state in running_states:\n    print(tuning_job.state)\n    tuning_job = client.tunings.get(name=tuning_job.name)\n    time.sleep(10)\n```\n\n#### Use Tuned Model\n\n[Permalink: Use Tuned Model](https://github.com/googleapis/python-genai#use-tuned-model)\n\n```\nresponse = 
client.models.generate_content(\n    model=tuning_job.tuned_model.endpoint,\n    contents='why is the sky blue?',\n)\n\nprint(response.text)\n```\n\n### Get Tuned Model\n\n[Permalink: Get Tuned Model](https://github.com/googleapis/python-genai#get-tuned-model)\n\n```\ntuned_model = client.models.get(model=tuning_job.tuned_model.model)\nprint(tuned_model)\n```\n\n### List Tuned Models\n\n[Permalink: List Tuned Models](https://github.com/googleapis/python-genai#list-tuned-models)\n\nTo retrieve base models, see [list base models](https://github.com/googleapis/python-genai#list-base-models).\n\n```\nfor model in client.models.list(config={'page_size': 10, 'query_base': False}):\n    print(model)\n```\n\n```\npager = client.models.list(config={'page_size': 10, 'query_base': False})\nprint(pager.page_size)\nprint(pager[0])\npager.next_page()\nprint(pager[0])\n```\n\n#### Async\n\n[Permalink: Async](https://github.com/googleapis/python-genai#async-4)\n\n```\nasync for job in await client.aio.models.list(config={'page_size': 10, 'query_base': False}):\n    print(job)\n```\n\n```\nasync_pager = await client.aio.models.list(config={'page_size': 10, 'query_base': False})\nprint(async_pager.page_size)\nprint(async_pager[0])\nawait async_pager.next_page()\nprint(async_pager[0])\n```\n\n### Update Tuned Model\n\n[Permalink: Update Tuned Model](https://github.com/googleapis/python-genai#update-tuned-model)\n\n```\nmodel = pager[0]\n\nmodel = client.models.update(\n    model=model.name,\n    config=types.UpdateModelConfig(\n        display_name='my tuned model', description='my tuned model description'\n    ),\n)\n\nprint(model)\n```\n\n### List Tuning Jobs\n\n[Permalink: List Tuning Jobs](https://github.com/googleapis/python-genai#list-tuning-jobs)\n\n```\nfor job in client.tunings.list(config={'page_size': 10}):\n    print(job)\n```\n\n```\npager = client.tunings.list(config={'page_size': 
10})\nprint(pager.page_size)\nprint(pager[0])\npager.next_page()\nprint(pager[0])\n```\n\n#### Async\n\n[Permalink: Async](https://github.com/googleapis/python-genai#async-5)\n\n```\nasync for job in await client.aio.tunings.list(config={'page_size': 10}):\n    print(job)\n```\n\n```\nasync_pager = await client.aio.tunings.list(config={'page_size': 10})\nprint(async_pager.page_size)\nprint(async_pager[0])\nawait async_pager.next_page()\nprint(async_pager[0])\n```\n\n## Batch Prediction\n\n[Permalink: Batch Prediction](https://github.com/googleapis/python-genai#batch-prediction)\n\nOnly supported in Vertex AI.\n\n### Create\n\n[Permalink: Create](https://github.com/googleapis/python-genai#create-1)\n\n```\n# Specify model and source file only, destination and job display name will be auto-populated\njob = client.batches.create(\n    model='gemini-1.5-flash-002',\n    src='bq://my-project.my-dataset.my-table',\n)\n\njob\n```\n\n```\n# Get a job by name\njob = client.batches.get(name=job.name)\n\njob.state\n```\n\n```\ncompleted_states = set(\n    [\\\n        'JOB_STATE_SUCCEEDED',\\\n        'JOB_STATE_FAILED',\\\n        'JOB_STATE_CANCELLED',\\\n        'JOB_STATE_PAUSED',\\\n    ]\n)\n\nwhile job.state not in completed_states:\n    print(job.state)\n    job = client.batches.get(name=job.name)\n    time.sleep(30)\n\njob\n```\n\n### List\n\n[Permalink: List](https://github.com/googleapis/python-genai#list)\n\n```\nfor job in client.batches.list(config=types.ListBatchJobsConfig(page_size=10)):\n    print(job)\n```\n\n```\npager = client.batches.list(config=types.ListBatchJobsConfig(page_size=10))\nprint(pager.page_size)\nprint(pager[0])\npager.next_page()\nprint(pager[0])\n```\n\n#### Async\n\n[Permalink: Async](https://github.com/googleapis/python-genai#async-6)\n\n```\nasync for job in await client.aio.batches.list(\n    config=types.ListBatchJobsConfig(page_size=10)\n):\n    print(job)\n```\n\n```\nasync_pager = await client.aio.batches.list(\n    
config=types.ListBatchJobsConfig(page_size=10)\n)\nprint(async_pager.page_size)\nprint(async_pager[0])\nawait async_pager.next_page()\nprint(async_pager[0])\n```\n\n### Delete\n\n[Permalink: Delete](https://github.com/googleapis/python-genai#delete-1)\n\n```\n# Delete the job resource\ndelete_job = client.batches.delete(name=job.name)\n\ndelete_job\n```\n\n## About\n\nGoogle Gen AI Python SDK provides an interface for developers to integrate Google's generative models into their Python applications.\n\n[googleapis.github.io/python-genai/](https://googleapis.github.io/python-genai/ \"https://googleapis.github.io/python-genai/\")\n"
  },
  {
    "path": "ai_docs/python_openai.md",
"content": "# openai/openai-python\n\nThe official Python library for the OpenAI API\n\n[pypi.org/project/openai/](https://pypi.org/project/openai/ \"https://pypi.org/project/openai/\")\n\n### License\n\n[Apache-2.0 license](https://github.com/openai/openai-python/blob/main/LICENSE)\n\n## Folders and 
files\n\n| Name | Name | Last commit message | Last commit date |\n| --- | --- | --- | --- |\n| ## Latest commit<br>[![stainless-app[bot]](https://avatars.githubusercontent.com/in/378072?v=4&size=40)](https://github.com/apps/stainless-app)[stainless-app\\[bot\\]](https://github.com/openai/openai-python/commits?author=stainless-app%5Bbot%5D)<br>[release: 1.63.0](https://github.com/openai/openai-python/commit/720ae54414f392202289578c9cc3b84cccc7432c)<br>Feb 13, 2025<br>[720ae54](https://github.com/openai/openai-python/commit/720ae54414f392202289578c9cc3b84cccc7432c) · Feb 13, 2025<br>## History<br>[816 Commits](https://github.com/openai/openai-python/commits/main/) |\n| [.devcontainer](https://github.com/openai/openai-python/tree/main/.devcontainer \".devcontainer\") | [.devcontainer](https://github.com/openai/openai-python/tree/main/.devcontainer \".devcontainer\") | [chore(ci): update rye to v0.35.0 (](https://github.com/openai/openai-python/commit/94fc49d8b198b4b9fe98bf22883ed82b060e865b \"chore(ci): update rye to v0.35.0 (#1523)\") [#1523](https://github.com/openai/openai-python/pull/1523) [)](https://github.com/openai/openai-python/commit/94fc49d8b198b4b9fe98bf22883ed82b060e865b \"chore(ci): update rye to v0.35.0 (#1523)\") | Jul 3, 2024 |\n| [.github](https://github.com/openai/openai-python/tree/main/.github \".github\") | [.github](https://github.com/openai/openai-python/tree/main/.github \".github\") | [chore(internal): minor formatting changes (](https://github.com/openai/openai-python/commit/27d0e67b1d121ccc5b48c95e1f0bc3f6e93e9bd3 \"chore(internal): minor formatting changes (#2050)\") [#2050](https://github.com/openai/openai-python/pull/2050) [)](https://github.com/openai/openai-python/commit/27d0e67b1d121ccc5b48c95e1f0bc3f6e93e9bd3 \"chore(internal): minor formatting changes (#2050)\") | Jan 24, 2025 |\n| [.inline-snapshot/external](https://github.com/openai/openai-python/tree/main/.inline-snapshot/external \"This path skips through empty directories\") | 
[.inline-snapshot/external](https://github.com/openai/openai-python/tree/main/.inline-snapshot/external \"This path skips through empty directories\") | [chore(internal): update test snapshots (](https://github.com/openai/openai-python/commit/9feadd8274809fff9ff1e36a0c90d45566ed46e2 \"chore(internal): update test snapshots (#1749)\") [#1749](https://github.com/openai/openai-python/pull/1749) [)](https://github.com/openai/openai-python/commit/9feadd8274809fff9ff1e36a0c90d45566ed46e2 \"chore(internal): update test snapshots (#1749)\") | Sep 26, 2024 |\n| [bin](https://github.com/openai/openai-python/tree/main/bin \"bin\") | [bin](https://github.com/openai/openai-python/tree/main/bin \"bin\") | [fix: temporarily patch upstream version to fix broken release flow (](https://github.com/openai/openai-python/commit/8061d18dd8bb1f9f17a46ac0d90edb2592a132a0 \"fix: temporarily patch upstream version to fix broken release flow (#1500)\") [#…](https://github.com/openai/openai-python/pull/1500) | Jun 25, 2024 |\n| [examples](https://github.com/openai/openai-python/tree/main/examples \"examples\") | [examples](https://github.com/openai/openai-python/tree/main/examples \"examples\") | [docs(examples/azure): add async snippet (](https://github.com/openai/openai-python/commit/abc5459c7504eec25a67b35104e2e09e7d8f232c \"docs(examples/azure): add async snippet (#1787)\") [#1787](https://github.com/openai/openai-python/pull/1787) [)](https://github.com/openai/openai-python/commit/abc5459c7504eec25a67b35104e2e09e7d8f232c \"docs(examples/azure): add async snippet (#1787)\") | Jan 24, 2025 |\n| [scripts](https://github.com/openai/openai-python/tree/main/scripts \"scripts\") | [scripts](https://github.com/openai/openai-python/tree/main/scripts \"scripts\") | [chore(internal): bummp ruff dependency (](https://github.com/openai/openai-python/commit/6afde0dc8512a16ff2eca781fee0395cab254f8c \"chore(internal): bummp ruff dependency (#2080)\") 
# OpenAI Python API library

[![PyPI version](https://camo.githubusercontent.com/b2f318dcc71bb9b8e6021c196a2cce69a3a64e721ddc19c4904d42f84d0219ac/68747470733a2f2f696d672e736869656c64732e696f2f707970692f762f6f70656e61692e737667)](https://pypi.org/project/openai/)

The OpenAI Python library provides convenient access to the OpenAI REST API from any Python 3.8+ application. The library includes type definitions for all request params and response fields, and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).

It is generated from our [OpenAPI specification](https://github.com/openai/openai-openapi) with [Stainless](https://stainlessapi.com/).

## Documentation

The REST API documentation can be found on [platform.openai.com](https://platform.openai.com/docs).
The full API of this library can be found in [api.md](https://github.com/openai/openai-python/blob/main/api.md).

## Installation

> **Important:** The SDK was rewritten in v1, which was released November 6th 2023. See the [v1 migration guide](https://github.com/openai/openai-python/discussions/742), which includes scripts to automatically update your code.

```bash
# install from PyPI
pip install openai
```

## Usage

The full API of this library can be found in [api.md](https://github.com/openai/openai-python/blob/main/api.md).

```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ.get("OPENAI_API_KEY"),  # This is the default and can be omitted
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    model="gpt-4o",
)
```

While you can provide an `api_key` keyword argument, we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/) to add `OPENAI_API_KEY="My API Key"` to your `.env` file so that your API key is not stored in source control.

### Vision

With a hosted image:

```python
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {
                    "type": "image_url",
                    "image_url": {"url": f"{img_url}"},
                },
            ],
        }
    ],
)
```

With the image as a base64 encoded string:

```python
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:{img_type};base64,{img_b64_str}"},
                },
            ],
        }
    ],
)
```

### Polling Helpers

When interacting with the API, some actions, such as starting a Run and adding files to vector stores, are asynchronous and take time to complete. The SDK includes helper functions which will poll the status until it reaches a terminal state and then return the resulting object. If an API method results in an action that could benefit from polling, there will be a corresponding version of the method ending in `_and_poll`.

For instance, to create a Run and poll until it reaches a terminal state you can run:

```python
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=assistant.id,
)
```

More information on the lifecycle of a Run can be found in the [Run Lifecycle Documentation](https://platform.openai.com/docs/assistants/how-it-works/run-lifecycle).

### Bulk Upload Helpers

When creating and interacting with vector stores, you can use polling helpers to monitor the status of operations. For convenience, we also provide a bulk upload helper that lets you upload several files at once.

```python
sample_files = [Path("sample-paper.pdf"), ...]

batch = await client.vector_stores.file_batches.upload_and_poll(
    store.id,
    files=sample_files,
)
```

### Streaming Helpers

The SDK also includes helpers to process
streams and handle incoming events.

```python
with client.beta.threads.runs.stream(
    thread_id=thread.id,
    assistant_id=assistant.id,
    instructions="Please address the user as Jane Doe. The user has a premium account.",
) as stream:
    for event in stream:
        # Print the text from text delta events
        if event.type == "thread.message.delta" and event.data.delta.content:
            print(event.data.delta.content[0].text)
```

More information on streaming helpers can be found in the dedicated documentation: [helpers.md](https://github.com/openai/openai-python/blob/main/helpers.md)

## Async usage

Simply import `AsyncOpenAI` instead of `OpenAI` and use `await` with each API call:

```python
import os
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(
    api_key=os.environ.get("OPENAI_API_KEY"),  # This is the default and can be omitted
)

async def main() -> None:
    chat_completion = await client.chat.completions.create(
        messages=[
            {
                "role": "user",
                "content": "Say this is a test",
            }
        ],
        model="gpt-4o",
    )

asyncio.run(main())
```

Functionality between the synchronous and asynchronous clients is otherwise identical.

## Streaming responses

We provide support for streaming responses using Server-Sent Events (SSE).

```python
from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    model="gpt-4o",
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```

The async client uses the
exact same interface.

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()

async def main():
    stream = await client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Say this is a test"}],
        stream=True,
    )
    async for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="")

asyncio.run(main())
```

## Module-level client

> **Important:** We highly recommend instantiating client instances instead of relying on the global client.

We also expose a global client instance that is accessible in a similar fashion to versions prior to v1.

```python
import openai

# optional; defaults to `os.environ['OPENAI_API_KEY']`
openai.api_key = '...'

# all client options can be configured just like the `OpenAI` instantiation counterpart
openai.base_url = "https://..."
openai.default_headers = {"x-foo": "true"}

completion = openai.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": "How do I output all files in a directory using Python?",
        },
    ],
)
print(completion.choices[0].message.content)
```

The API is exactly the same as the standard client instance-based API.

This is intended to be used within REPLs or notebooks for faster iteration, **not** in application code.

We recommend that you always instantiate a client (e.g., with `client = OpenAI()`) in application code because:

- It can be difficult to reason about where client options are configured
- It's not possible to change certain client options without potentially causing race conditions
- It's harder to mock for testing purposes
- It's not possible to control cleanup of network connections

## Realtime API beta

The Realtime API enables you to build low-latency, multi-modal conversational experiences. It currently supports text and audio as both input and output, as well as [function calling](https://platform.openai.com/docs/guides/function-calling) through a WebSocket connection.

Under the hood the SDK uses the [`websockets`](https://websockets.readthedocs.io/en/stable/) library to manage connections.

The Realtime API works through a combination of client-sent events and server-sent events. Clients can send events to do things like update session configuration or send text and audio inputs. Server events confirm when audio responses have completed, or when a text response from the model has been received. A full event reference can be found [here](https://platform.openai.com/docs/api-reference/realtime-client-events) and a guide can be found [here](https://platform.openai.com/docs/guides/realtime).

Basic text-based example:

```python
import asyncio
from openai import AsyncOpenAI

async def main():
    client = AsyncOpenAI()

    async with client.beta.realtime.connect(model="gpt-4o-realtime-preview") as connection:
        await connection.session.update(session={'modalities': ['text']})

        await connection.conversation.item.create(
            item={
                "type": "message",
                "role": "user",
                "content": [{"type": "input_text", "text": "Say hello!"}],
            }
        )
        await connection.response.create()

        async for event in connection:
            if event.type == 'response.text.delta':
                print(event.delta, flush=True, end="")

            elif event.type == 'response.text.done':
                print()

            elif event.type == "response.done":
                break

asyncio.run(main())
```

However, the real magic of the Realtime API is handling audio inputs
/ outputs; see this [TUI script](https://github.com/openai/openai-python/blob/main/examples/realtime/push_to_talk_app.py) for a fully fledged example.

### Realtime error handling

Whenever an error occurs, the Realtime API will send an [`error` event](https://platform.openai.com/docs/guides/realtime-model-capabilities#error-handling) and the connection will stay open and remain usable. This means you need to handle errors yourself, as _no errors are raised directly_ by the SDK when an `error` event comes in.

```python
client = AsyncOpenAI()

async with client.beta.realtime.connect(model="gpt-4o-realtime-preview") as connection:
    ...
    async for event in connection:
        if event.type == 'error':
            print(event.error.type)
            print(event.error.code)
            print(event.error.event_id)
            print(event.error.message)
```

## Using types

Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev/) which also provide helper methods for things like:

- Serializing back into JSON: `model.to_json()`
- Converting to a dictionary: `model.to_dict()`

Typed requests and responses provide autocomplete and documentation within your editor.
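As a minimal, self-contained sketch of what the TypedDict side of this looks like (the class and function names below are illustrative stand-ins, not the SDK's own definitions):

```python
from typing import List, TypedDict

# Illustrative stand-in for one of the SDK's TypedDict request params
# (hypothetical name; the real SDK ships its own generated definitions).
class MessageParam(TypedDict):
    role: str
    content: str

def build_messages(prompt: str) -> List[MessageParam]:
    # At runtime a TypedDict is just a plain dict; the annotations exist
    # so editors and type checkers can flag typos or missing keys.
    return [{"role": "user", "content": prompt}]

print(build_messages("Say this is a test"))
```

Because TypedDicts are ordinary dicts at runtime, no conversion is needed before passing them to methods like `client.chat.completions.create(...)`.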
If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.

## Pagination

List methods in the OpenAI API are paginated.

This library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:

```python
from openai import OpenAI

client = OpenAI()

all_jobs = []
# Automatically fetches more pages as needed.
for job in client.fine_tuning.jobs.list(
    limit=20,
):
    # Do something with job here
    all_jobs.append(job)
print(all_jobs)
```

Or, asynchronously:

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()

async def main() -> None:
    all_jobs = []
    # Iterate through items across all pages, issuing requests as needed.
    async for job in client.fine_tuning.jobs.list(
        limit=20,
    ):
        all_jobs.append(job)
    print(all_jobs)

asyncio.run(main())
```

Alternatively, you can use the `.has_next_page()`, `.next_page_info()`, or `.get_next_page()` methods for more granular control when working with pages:

```python
first_page = await client.fine_tuning.jobs.list(
    limit=20,
)
if first_page.has_next_page():
    print(f"will fetch next page using these details: {first_page.next_page_info()}")
    next_page = await first_page.get_next_page()
    print(f"number of items we just fetched: {len(next_page.data)}")

# Remove `await` for non-async usage.
```

Or just work directly with the returned data:

```python
first_page = await client.fine_tuning.jobs.list(
    limit=20,
)

print(f"next page cursor: {first_page.after}")  # => "next page cursor: ..."
for job in first_page.data:
    print(job.id)

# Remove `await` for non-async usage.
```

## Nested params

Nested parameters are
dictionaries, typed using `TypedDict`, for example:\n\n```\nfrom openai import OpenAI\n\nclient = OpenAI()\n\ncompletion = client.chat.completions.create(\n    messages=[\n        {\n            \"role\": \"user\",\n            \"content\": \"Can you generate an example json object describing a fruit?\",\n        }\n    ],\n    model=\"gpt-4o\",\n    response_format={\"type\": \"json_object\"},\n)\n```\n\n## File uploads\n\n[Permalink: File uploads](https://github.com/openai/openai-python#file-uploads)\n\nRequest parameters that correspond to file uploads can be passed as `bytes`, a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance or a tuple of `(filename, contents, media type)`.\n\n```\nfrom pathlib import Path\nfrom openai import OpenAI\n\nclient = OpenAI()\n\nclient.files.create(\n    file=Path(\"input.jsonl\"),\n    purpose=\"fine-tune\",\n)\n```\n\nThe async client uses the exact same interface. If you pass a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, the file contents will be read asynchronously automatically.\n\n## Handling errors\n\n[Permalink: Handling errors](https://github.com/openai/openai-python#handling-errors)\n\nWhen the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `openai.APIConnectionError` is raised.\n\nWhen the API returns a non-success status code (that is, 4xx or 5xx\nresponse), a subclass of `openai.APIStatusError` is raised, containing `status_code` and `response` properties.\n\nAll errors inherit from `openai.APIError`.\n\n```\nimport openai\nfrom openai import OpenAI\n\nclient = OpenAI()\n\ntry:\n    client.fine_tuning.jobs.create(\n        model=\"gpt-4o\",\n        training_file=\"file-abc123\",\n    )\nexcept openai.APIConnectionError as e:\n    print(\"The server could not be reached\")\n    print(e.__cause__)  # an underlying Exception, likely raised within httpx.\nexcept 
openai.RateLimitError as e:\n    print(\"A 429 status code was received; we should back off a bit.\")\nexcept openai.APIStatusError as e:\n    print(\"Another non-200-range status code was received\")\n    print(e.status_code)\n    print(e.response)\n```\n\nError codes are as follows:\n\n| Status Code | Error Type |\n| --- | --- |\n| 400 | `BadRequestError` |\n| 401 | `AuthenticationError` |\n| 403 | `PermissionDeniedError` |\n| 404 | `NotFoundError` |\n| 422 | `UnprocessableEntityError` |\n| 429 | `RateLimitError` |\n| >=500 | `InternalServerError` |\n| N/A | `APIConnectionError` |\n\n## Request IDs\n\n[Permalink: Request IDs](https://github.com/openai/openai-python#request-ids)\n\n> For more information on debugging requests, see [these docs](https://platform.openai.com/docs/api-reference/debugging-requests)\n\nAll object responses in the SDK provide a `_request_id` property which is added from the `x-request-id` response header so that you can quickly log failing requests and report them back to OpenAI.\n\n```\ncompletion = await client.chat.completions.create(\n    messages=[{\"role\": \"user\", \"content\": \"Say this is a test\"}], model=\"gpt-4\"\n)\nprint(completion._request_id)  # req_123\n```\n\nNote that unlike other properties that use an `_` prefix, the `_request_id` property\n_is_ public. 
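That property makes failing calls easy to log; a minimal sketch of the pattern, using only the stdlib (the `completion` object below is a stand-in carrying a `_request_id` attribute, not a real SDK response):

```python
import logging
from types import SimpleNamespace

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger('openai-requests')

def log_request_id(response) -> str:
    # The SDK fills `_request_id` from the `x-request-id` response header;
    # logging it lets you report a failing request back to OpenAI.
    request_id = response._request_id
    logger.info('openai request id: %s', request_id)
    return request_id

# Stand-in for a parsed SDK response object:
completion = SimpleNamespace(_request_id='req_123')
assert log_request_id(completion) == 'req_123'
```

In real code the same helper works unchanged on any SDK response object, since they all expose `_request_id`.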
Unless documented otherwise, _all_ other `_` prefix properties,\nmethods and modules are _private_.\n\nImportant\n\nIf you need to access request IDs for failed requests, you must catch the `APIStatusError` exception:\n\n```\nimport openai\n\ntry:\n    completion = await client.chat.completions.create(\n        messages=[{\"role\": \"user\", \"content\": \"Say this is a test\"}], model=\"gpt-4\"\n    )\nexcept openai.APIStatusError as exc:\n    print(exc.request_id)  # req_123\n    raise exc\n```\n\n### Retries\n\n[Permalink: Retries](https://github.com/openai/openai-python#retries)\n\nCertain errors are automatically retried 2 times by default, with a short exponential backoff.\nConnection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,\n429 Rate Limit, and >=500 Internal errors are all retried by default.\n\nYou can use the `max_retries` option to configure or disable retry settings:\n\n```\nfrom openai import OpenAI\n\n# Configure the default for all requests:\nclient = OpenAI(\n    # default is 2\n    max_retries=0,\n)\n\n# Or, configure per-request:\nclient.with_options(max_retries=5).chat.completions.create(\n    messages=[\n        {\n            \"role\": \"user\",\n            \"content\": \"How can I get the name of the current day in JavaScript?\",\n        }\n    ],\n    model=\"gpt-4o\",\n)\n```\n\n### Timeouts\n\n[Permalink: Timeouts](https://github.com/openai/openai-python#timeouts)\n\nBy default requests time out after 10 minutes. 
You can configure this with a `timeout` option,\nwhich accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:\n\n```\nimport httpx\n\nfrom openai import OpenAI\n\n# Configure the default for all requests:\nclient = OpenAI(\n    # 20 seconds (default is 10 minutes)\n    timeout=20.0,\n)\n\n# More granular control:\nclient = OpenAI(\n    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),\n)\n\n# Override per-request:\nclient.with_options(timeout=5.0).chat.completions.create(\n    messages=[\n        {\n            \"role\": \"user\",\n            \"content\": \"How can I list all files in a directory using Python?\",\n        }\n    ],\n    model=\"gpt-4o\",\n)\n```\n\nOn timeout, an `APITimeoutError` is thrown.\n\nNote that requests that time out are [retried twice by default](https://github.com/openai/openai-python#retries).\n\n## Advanced\n\n[Permalink: Advanced](https://github.com/openai/openai-python#advanced)\n\n### Logging\n\n[Permalink: Logging](https://github.com/openai/openai-python#logging)\n\nWe use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.\n\nYou can enable logging by setting the environment variable `OPENAI_LOG` to `info`.\n\n```\n$ export OPENAI_LOG=info\n```\n\nOr to `debug` for more verbose logging.\n\n### How to tell whether `None` means `null` or missing\n\n[Permalink: How to tell whether None means null or missing](https://github.com/openai/openai-python#how-to-tell-whether-none-means-null-or-missing)\n\nIn an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. 
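The distinction is already visible in the raw JSON before any model parsing happens; a quick stdlib illustration, with plain dicts standing in for response models:

```python
import json

# One payload where the field is explicitly null, one where it is absent:
present_null = json.loads(json.dumps({'my_field': None}))
missing = json.loads(json.dumps({}))

# Reading with a default makes the two cases look identical...
assert present_null.get('my_field') is None
assert missing.get('my_field') is None

# ...but key membership still tells them apart, which is the same
# check `.model_fields_set` exposes at the Pydantic model level.
assert 'my_field' in present_null
assert 'my_field' not in missing
```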
You can differentiate the two cases with `.model_fields_set`:\n\n```\nif response.my_field is None:\n  if 'my_field' not in response.model_fields_set:\n    print('Got json like {}, without a \"my_field\" key present at all.')\n  else:\n    print('Got json like {\"my_field\": null}.')\n```\n\n### Accessing raw response data (e.g. headers)\n\n[Permalink: Accessing raw response data (e.g. headers)](https://github.com/openai/openai-python#accessing-raw-response-data-eg-headers)\n\nThe \"raw\" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,\n\n```\nfrom openai import OpenAI\n\nclient = OpenAI()\nresponse = client.chat.completions.with_raw_response.create(\n    messages=[{\n        \"role\": \"user\",\n        \"content\": \"Say this is a test\",\n    }],\n    model=\"gpt-4o\",\n)\nprint(response.headers.get('X-My-Header'))\n\ncompletion = response.parse()  # get the object that `chat.completions.create()` would have returned\nprint(completion)\n```\n\nThese methods return a [`LegacyAPIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_legacy_response.py) object. This is a legacy class as we're changing it slightly in the next major version.\n\nFor the sync client this will mostly be the same, with the exception\nthat `content` & `text` will be methods instead of properties. 
In the\nasync client, all methods will be async.\n\nA migration script will be provided & the migration in general should\nbe smooth.\n\n#### `.with_streaming_response`\n\n[Permalink: .with_streaming_response](https://github.com/openai/openai-python#with_streaming_response)\n\nThe above interface eagerly reads the full response body when you make the request, which may not always be what you want.\n\nTo stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.\n\nAs such, `.with_streaming_response` methods return a different [`APIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_response.py) object, and the async client returns an [`AsyncAPIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_response.py) object.\n\n```\nwith client.chat.completions.with_streaming_response.create(\n    messages=[\n        {\n            \"role\": \"user\",\n            \"content\": \"Say this is a test\",\n        }\n    ],\n    model=\"gpt-4o\",\n) as response:\n    print(response.headers.get(\"X-My-Header\"))\n\n    for line in response.iter_lines():\n        print(line)\n```\n\nThe context manager is required so that the response will reliably be closed.\n\n### Making custom/undocumented requests\n\n[Permalink: Making custom/undocumented requests](https://github.com/openai/openai-python#making-customundocumented-requests)\n\nThis library is typed for convenient access to the documented API.\n\nIf you need to access undocumented endpoints, params, or response properties, the library can still be used.\n\n#### Undocumented endpoints\n\n[Permalink: Undocumented endpoints](https://github.com/openai/openai-python#undocumented-endpoints)\n\nTo make requests to undocumented endpoints, you can make requests 
using `client.get`, `client.post`, and other\nhttp verbs. Options on the client will be respected (such as retries) when making this request.\n\n```\nimport httpx\n\nresponse = client.post(\n    \"/foo\",\n    cast_to=httpx.Response,\n    body={\"my_param\": True},\n)\n\nprint(response.headers.get(\"x-foo\"))\n```\n\n#### Undocumented request params\n\n[Permalink: Undocumented request params](https://github.com/openai/openai-python#undocumented-request-params)\n\nIf you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request\noptions.\n\n#### Undocumented response properties\n\n[Permalink: Undocumented response properties](https://github.com/openai/openai-python#undocumented-response-properties)\n\nTo access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You\ncan also get all the extra fields on the Pydantic model as a dict with\n[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).\n\n### Configuring the HTTP client\n\n[Permalink: Configuring the HTTP client](https://github.com/openai/openai-python#configuring-the-http-client)\n\nYou can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:\n\n- Support for [proxies](https://www.python-httpx.org/advanced/proxies/)\n- Custom [transports](https://www.python-httpx.org/advanced/transports/)\n- Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality\n\n```\nimport httpx\nfrom openai import OpenAI, DefaultHttpxClient\n\nclient = OpenAI(\n    # Or use the `OPENAI_BASE_URL` env var\n    base_url=\"http://my.test.server.example.com:8083/v1\",\n    http_client=DefaultHttpxClient(\n        proxy=\"http://my.test.proxy.example.com\",\n        transport=httpx.HTTPTransport(local_address=\"0.0.0.0\"),\n    ),\n)\n```\n\nYou can also customize the client on a 
per-request basis by using `with_options()`:\n\n```\nclient.with_options(http_client=DefaultHttpxClient(...))\n```\n\n### Managing HTTP resources\n\n[Permalink: Managing HTTP resources](https://github.com/openai/openai-python#managing-http-resources)\n\nBy default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.\n\n```\nfrom openai import OpenAI\n\nwith OpenAI() as client:\n  # make requests here\n  ...\n\n# HTTP client is now closed\n```\n\n## Microsoft Azure OpenAI\n\n[Permalink: Microsoft Azure OpenAI](https://github.com/openai/openai-python#microsoft-azure-openai)\n\nTo use this library with [Azure OpenAI](https://learn.microsoft.com/azure/ai-services/openai/overview), use the `AzureOpenAI`\nclass instead of the `OpenAI` class.\n\nImportant\n\nThe Azure API shape differs from the core API shape which means that the static types for responses / params\nwon't always be correct.\n\n```\nfrom openai import AzureOpenAI\n\n# gets the API Key from environment variable AZURE_OPENAI_API_KEY\nclient = AzureOpenAI(\n    # https://learn.microsoft.com/azure/ai-services/openai/reference#rest-api-versioning\n    api_version=\"2023-07-01-preview\",\n    # https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource\n    azure_endpoint=\"https://example-endpoint.openai.azure.com\",\n)\n\ncompletion = client.chat.completions.create(\n    model=\"deployment-name\",  # e.g. 
gpt-35-instant\n    messages=[\n        {\n            \"role\": \"user\",\n            \"content\": \"How do I output all files in a directory using Python?\",\n        },\n    ],\n)\nprint(completion.to_json())\n```\n\nIn addition to the options provided in the base `OpenAI` client, the following options are provided:\n\n- `azure_endpoint` (or the `AZURE_OPENAI_ENDPOINT` environment variable)\n- `azure_deployment`\n- `api_version` (or the `OPENAI_API_VERSION` environment variable)\n- `azure_ad_token` (or the `AZURE_OPENAI_AD_TOKEN` environment variable)\n- `azure_ad_token_provider`\n\nAn example of using the client with Microsoft Entra ID (formerly known as Azure Active Directory) can be found [here](https://github.com/openai/openai-python/blob/main/examples/azure_ad.py).\n\n## Versioning\n\n[Permalink: Versioning](https://github.com/openai/openai-python#versioning)\n\nThis package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:\n\n1. Changes that only affect static types, without breaking runtime behavior.\n2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_\n3. 
Changes that we do not expect to impact the vast majority of users in practice.\n\nWe take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.\n\nWe are keen for your feedback; please open an [issue](https://www.github.com/openai/openai-python/issues) with questions, bugs, or suggestions.\n\n### Determining the installed version\n\n[Permalink: Determining the installed version](https://github.com/openai/openai-python#determining-the-installed-version)\n\nIf you've upgraded to the latest version but aren't seeing any new features you were expecting, then your Python environment is likely still using an older version.\n\nYou can determine the version that is being used at runtime with:\n\n```\nimport openai\nprint(openai.__version__)\n```\n\n## Requirements\n\n[Permalink: Requirements](https://github.com/openai/openai-python#requirements)\n\nPython 3.8 or higher.\n\n## Contributing\n\n[Permalink: Contributing](https://github.com/openai/openai-python#contributing)\n\nSee [the contributing documentation](https://github.com/openai/openai-python/blob/main/CONTRIBUTING.md).\n\n## About\n\nThe official Python library for the OpenAI API\n\n[pypi.org/project/openai/](https://pypi.org/project/openai/)\n\n### License\n\n[Apache-2.0 license](https://github.com/openai/openai-python#Apache-2.0-1-ov-file)\n\n### Releases\n\nLatest release: [v1.63.0](https://github.com/openai/openai-python/releases/tag/v1.63.0) (Feb 13, 2025)\n"
  },
  {
    "path": "codebase-architectures/.gitignore",
    "content": "# Python bytecode files\n**/__pycache__/\n**/*.pyc\n**/*.pyo\n**/*.pyd\n**/.pytest_cache/\n**/.coverage\n**/*.so\n**/.DS_Store\n"
  },
  {
    "path": "codebase-architectures/README.md",
    "content": "# Codebase Architectures\n\nThis directory contains examples of different codebase architectures, each implemented with simple, runnable code.\n\n## Architectures Included\n\n1. **Vertical Slice Architecture** - Feature-oriented organization where each feature contains all its necessary components\n2. **Layered (N-Tier or MVC) Architecture** - Separation of concerns by technical layer\n3. **Pipeline (Sequential Flow) Architecture** - Linear processing stages for data transformation\n4. **Atomic/Composable Architecture** - Hierarchical organization from atomic modules to capabilities to endpoints\n\n## Running the Examples\n\n### Python Examples\n```bash\ncd <architecture-directory>\nuv run main.py\n```\n\n### Node.js Examples\n```bash\ncd <architecture-directory>\nnode main.js\n# or if bun is available:\nbun run main.js\n```\n\nEach architecture directory contains its own README with specific details about the implementation.\n"
  },
  {
    "path": "codebase-architectures/atomic-composable-architecture/README.md",
"content": "# Atomic/Composable Architecture\n\nThis directory demonstrates an Atomic/Composable Architecture implementation with a simple notification system application.\n\n## Structure\n\n```\natomic-composable-architecture/\n├── atom/                       # Smallest atomic reusable building blocks\n│   ├── auth.py                 # Authentication utilities\n│   ├── validation.py           # Data validation functions\n│   └── notifications.py        # Notification helpers\n│\n├── molecule/                   # Combines multiple atoms into features\n│   ├── user_management.py      # Uses auth + validation atoms\n│   └── alerting.py             # Uses notifications + validation atoms\n│\n└── organism/                   # Combines molecules into user-facing APIs\n    ├── user_api.py             # Uses user_management molecule\n    └── alerts_api.py           # Uses alerting molecule\n```\n\n## Explanation\n\n- **Atom**: Bottom-level reusable components that must remain general-purpose and independent. Atoms may depend only on other atoms, never on molecules or organisms.\n- **Molecule**: Compose atoms to build concrete functionality. 
Molecules can depend on multiple atoms, enabling reuse and rapid feature composition.\n- **Organism**: The highest level, combining molecules to create user-facing APIs or interfaces.\n\n## Benefits\n\n- Maximizes code reuse and composability; **reduces duplication** and accelerates feature development.\n- Clear hierarchical structure makes it easy to reason about what building blocks are available.\n- Promotes small, focused, and easily understandable code units.\n\n## Cons\n\n- Indirection introduced by composability can make dependency tracing challenging.\n- Understanding module usage patterns (what uses what) may require navigating through multiple files or explicit documentation.\n- Requires discipline and careful adherence to dependency rules to avoid cyclic or unintended dependencies.\n\n## Running the Example\n\n```bash\nuv run main.py\n```\n"
  },
  {
    "path": "codebase-architectures/atomic-composable-architecture/atom/auth.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nAuthentication module for the Atomic/Composable Architecture.\nThis module provides atomic authentication utilities.\n\"\"\"\n\nimport hashlib\nimport os\nimport time\nimport uuid\nfrom typing import Dict, Optional, Tuple\n\n# In-memory user store for demonstration purposes\n# In a real application, this would be a database\nUSER_STORE: Dict[str, Dict] = {}\n\n# In-memory token store for demonstration purposes\nTOKEN_STORE: Dict[str, Dict] = {}\n\ndef hash_password(password: str, salt: Optional[str] = None) -> Tuple[str, str]:\n    \"\"\"\n    Hash a password with a salt for secure storage.\n    \n    Args:\n        password: The password to hash\n        salt: Optional salt, generated if not provided\n        \n    Returns:\n        Tuple of (hashed_password, salt)\n    \"\"\"\n    if salt is None:\n        salt = os.urandom(16).hex()\n    \n    # In a real application, use a more secure hashing algorithm like bcrypt\n    hashed = hashlib.sha256((password + salt).encode()).hexdigest()\n    return hashed, salt\n\ndef verify_password(password: str, hashed_password: str, salt: str) -> bool:\n    \"\"\"\n    Verify a password against a stored hash.\n    \n    Args:\n        password: The password to verify\n        hashed_password: The stored hashed password\n        salt: The salt used for hashing\n        \n    Returns:\n        True if the password matches, False otherwise\n    \"\"\"\n    calculated_hash, _ = hash_password(password, salt)\n    return calculated_hash == hashed_password\n\ndef register_user(username: str, password: str, email: str) -> Dict:\n    \"\"\"\n    Register a new user.\n    \n    Args:\n        username: The username for the new user\n        password: The password for the new user\n        email: The email for the new user\n        \n    Returns:\n        User data dictionary\n    \n    Raises:\n        ValueError: If the username already exists\n    \"\"\"\n    if username in USER_STORE:\n 
       raise ValueError(f\"Username '{username}' already exists\")\n    \n    hashed_password, salt = hash_password(password)\n    user_id = str(uuid.uuid4())\n    \n    user_data = {\n        \"id\": user_id,\n        \"username\": username,\n        \"email\": email,\n        \"hashed_password\": hashed_password,\n        \"salt\": salt,\n        \"created_at\": time.time()\n    }\n    \n    USER_STORE[username] = user_data\n    return {k: v for k, v in user_data.items() if k not in [\"hashed_password\", \"salt\"]}\n\ndef authenticate(username: str, password: str) -> Optional[Dict]:\n    \"\"\"\n    Authenticate a user with username and password.\n    \n    Args:\n        username: The username to authenticate\n        password: The password to authenticate\n        \n    Returns:\n        User data dictionary if authentication succeeds, None otherwise\n    \"\"\"\n    if username not in USER_STORE:\n        return None\n    \n    user_data = USER_STORE[username]\n    if verify_password(password, user_data[\"hashed_password\"], user_data[\"salt\"]):\n        return {k: v for k, v in user_data.items() if k not in [\"hashed_password\", \"salt\"]}\n    \n    return None\n\ndef create_token(user_id: str, expires_in: int = 3600) -> str:\n    \"\"\"\n    Create an authentication token for a user.\n    \n    Args:\n        user_id: The user ID to create a token for\n        expires_in: Token expiration time in seconds\n        \n    Returns:\n        Authentication token\n    \"\"\"\n    token = str(uuid.uuid4())\n    expiration = time.time() + expires_in\n    \n    TOKEN_STORE[token] = {\n        \"user_id\": user_id,\n        \"expires_at\": expiration\n    }\n    \n    return token\n\ndef validate_token(token: str) -> Optional[str]:\n    \"\"\"\n    Validate an authentication token.\n    \n    Args:\n        token: The token to validate\n        \n    Returns:\n        User ID if the token is valid, None otherwise\n    \"\"\"\n    if token not in TOKEN_STORE:\n       
 return None\n    \n    token_data = TOKEN_STORE[token]\n    if token_data[\"expires_at\"] < time.time():\n        # Token expired, remove it\n        del TOKEN_STORE[token]\n        return None\n    \n    return token_data[\"user_id\"]\n\ndef revoke_token(token: str) -> bool:\n    \"\"\"\n    Revoke an authentication token.\n    \n    Args:\n        token: The token to revoke\n        \n    Returns:\n        True if the token was revoked, False if it didn't exist\n    \"\"\"\n    if token in TOKEN_STORE:\n        del TOKEN_STORE[token]\n        return True\n    return False\n\ndef get_user_by_id(user_id: str) -> Optional[Dict]:\n    \"\"\"\n    Get a user by ID.\n    \n    Args:\n        user_id: The user ID to look up\n        \n    Returns:\n        User data dictionary if found, None otherwise\n    \"\"\"\n    for user_data in USER_STORE.values():\n        if user_data[\"id\"] == user_id:\n            return {k: v for k, v in user_data.items() if k not in [\"hashed_password\", \"salt\"]}\n    return None\n"
  },
  {
    "path": "codebase-architectures/atomic-composable-architecture/atom/notifications.py",
"content": "#!/usr/bin/env python3\n\n\"\"\"\nNotifications module for the Atomic/Composable Architecture.\nThis module provides atomic notification utilities.\n\"\"\"\n\nimport time\nimport uuid\nfrom typing import Dict, List, Optional\n\n# In-memory notification store for demonstration purposes\nNOTIFICATION_STORE: Dict[str, List[Dict]] = {}\n\n# Notification templates\nTEMPLATES = {\n    \"welcome\": \"Welcome, {username}! Thank you for joining our platform.\",\n    \"password_reset\": \"Your password has been reset. If you didn't request this, please contact support.\",\n    \"new_login\": \"New login detected from {device} at {location}.\",\n    \"alert\": \"{message}\"\n}\n\ndef create_notification(user_id: str, notification_type: str, data: Dict, \n                       is_read: bool = False) -> Dict:\n    \"\"\"\n    Create a notification for a user.\n    \n    Args:\n        user_id: The ID of the user to notify\n        notification_type: The type of notification\n        data: Data to include in the notification\n        is_read: Whether the notification has been read\n        \n    Returns:\n        The created notification\n    \"\"\"\n    if user_id not in NOTIFICATION_STORE:\n        NOTIFICATION_STORE[user_id] = []\n    \n    # Get template or use alert template as fallback\n    template = TEMPLATES.get(notification_type, TEMPLATES[\"alert\"])\n    \n    # Format message with provided data\n    try:\n        message = template.format(**data)\n    except KeyError:\n        # Fallback if template variables are missing\n        message = f\"Notification: {notification_type}\"\n    \n    notification = {\n        # Use a UUID so IDs stay unique even after deletions\n        # (a length-based counter would collide once items are removed)\n        \"id\": uuid.uuid4().hex,\n        \"user_id\": user_id,\n        \"type\": notification_type,\n        \"message\": message,\n        \"data\": data,\n        \"is_read\": is_read,\n        \"created_at\": time.time()\n    }\n    \n    NOTIFICATION_STORE[user_id].append(notification)\n    return notification\n\ndef 
get_user_notifications(user_id: str, unread_only: bool = False) -> List[Dict]:\n    \"\"\"\n    Get notifications for a user.\n    \n    Args:\n        user_id: The ID of the user\n        unread_only: Whether to return only unread notifications\n        \n    Returns:\n        List of notifications\n    \"\"\"\n    if user_id not in NOTIFICATION_STORE:\n        return []\n    \n    if unread_only:\n        return [n for n in NOTIFICATION_STORE[user_id] if not n[\"is_read\"]]\n    \n    return NOTIFICATION_STORE[user_id]\n\ndef mark_notification_as_read(user_id: str, notification_id: str) -> bool:\n    \"\"\"\n    Mark a notification as read.\n    \n    Args:\n        user_id: The ID of the user\n        notification_id: The ID of the notification\n        \n    Returns:\n        True if the notification was marked as read, False otherwise\n    \"\"\"\n    if user_id not in NOTIFICATION_STORE:\n        return False\n    \n    for notification in NOTIFICATION_STORE[user_id]:\n        if notification[\"id\"] == notification_id:\n            notification[\"is_read\"] = True\n            return True\n    \n    return False\n\ndef mark_all_notifications_as_read(user_id: str) -> int:\n    \"\"\"\n    Mark all notifications for a user as read.\n    \n    Args:\n        user_id: The ID of the user\n        \n    Returns:\n        Number of notifications marked as read\n    \"\"\"\n    if user_id not in NOTIFICATION_STORE:\n        return 0\n    \n    count = 0\n    for notification in NOTIFICATION_STORE[user_id]:\n        if not notification[\"is_read\"]:\n            notification[\"is_read\"] = True\n            count += 1\n    \n    return count\n\ndef delete_notification(user_id: str, notification_id: str) -> bool:\n    \"\"\"\n    Delete a notification.\n    \n    Args:\n        user_id: The ID of the user\n        notification_id: The ID of the notification\n        \n    Returns:\n        True if the notification was deleted, False otherwise\n    \"\"\"\n    if 
user_id not in NOTIFICATION_STORE:\n        return False\n    \n    for i, notification in enumerate(NOTIFICATION_STORE[user_id]):\n        if notification[\"id\"] == notification_id:\n            del NOTIFICATION_STORE[user_id][i]\n            return True\n    \n    return False\n\ndef send_email_notification(email: str, subject: str, message: str) -> bool:\n    \"\"\"\n    Send an email notification (mock implementation).\n    \n    Args:\n        email: The recipient's email address\n        subject: The email subject\n        message: The email message\n        \n    Returns:\n        True if the email was sent successfully (always True in this mock)\n    \"\"\"\n    # In a real application, this would send an actual email\n    print(f\"[EMAIL] To: {email}, Subject: {subject}\")\n    print(f\"[EMAIL] Message: {message}\")\n    return True\n\ndef send_sms_notification(phone_number: str, message: str) -> bool:\n    \"\"\"\n    Send an SMS notification (mock implementation).\n    \n    Args:\n        phone_number: The recipient's phone number\n        message: The SMS message\n        \n    Returns:\n        True if the SMS was sent successfully (always True in this mock)\n    \"\"\"\n    # In a real application, this would send an actual SMS\n    print(f\"[SMS] To: {phone_number}\")\n    print(f\"[SMS] Message: {message}\")\n    return True\n\ndef create_alert(user_id: str, message: str, level: str = \"info\", \n                data: Optional[Dict] = None) -> Dict:\n    \"\"\"\n    Create an alert notification.\n    \n    Args:\n        user_id: The ID of the user to alert\n        message: The alert message\n        level: Alert level (info, warning, error)\n        data: Additional data for the alert\n        \n    Returns:\n        The created notification\n    \"\"\"\n    # Merge into a new dict so the caller's data argument is never mutated\n    notification = create_notification(\n        user_id=user_id,\n        notification_type=\"alert\",\n        data={\n            **(data or {}),\n            \"message\": message,\n            \"level\": level\n        }\n    )\n    \n    return notification\n"
  },
  {
    "path": "codebase-architectures/atomic-composable-architecture/atom/validation.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nValidation module for the Atomic/Composable Architecture.\nThis module provides atomic validation utilities.\n\"\"\"\n\nimport re\nfrom typing import Any, Dict, List, Optional, Union\n\ndef validate_required_fields(data: Dict[str, Any], required_fields: List[str]) -> List[str]:\n    \"\"\"\n    Validate that all required fields are present in the data.\n    \n    Args:\n        data: The data to validate\n        required_fields: List of required field names\n        \n    Returns:\n        List of missing field names, empty if all required fields are present\n    \"\"\"\n    return [field for field in required_fields if field not in data or data[field] is None]\n\ndef validate_email(email: str) -> bool:\n    \"\"\"\n    Validate an email address format.\n    \n    Args:\n        email: The email address to validate\n        \n    Returns:\n        True if the email is valid, False otherwise\n    \"\"\"\n    # Simple regex for email validation\n    # In a real application, consider using a more comprehensive validation\n    pattern = r'^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$'\n    return bool(re.match(pattern, email))\n\ndef validate_string_length(value: str, min_length: int = 0, max_length: Optional[int] = None) -> bool:\n    \"\"\"\n    Validate that a string's length is within the specified range.\n    \n    Args:\n        value: The string to validate\n        min_length: Minimum allowed length\n        max_length: Maximum allowed length, or None for no maximum\n        \n    Returns:\n        True if the string length is valid, False otherwise\n    \"\"\"\n    if not isinstance(value, str):\n        return False\n    \n    if len(value) < min_length:\n        return False\n    \n    if max_length is not None and len(value) > max_length:\n        return False\n    \n    return True\n\ndef validate_numeric_range(value: Union[int, float], min_value: Optional[Union[int, float]] = None, \n                  
        max_value: Optional[Union[int, float]] = None) -> bool:\n    \"\"\"\n    Validate that a numeric value is within the specified range.\n    \n    Args:\n        value: The numeric value to validate\n        min_value: Minimum allowed value, or None for no minimum\n        max_value: Maximum allowed value, or None for no maximum\n        \n    Returns:\n        True if the value is within range, False otherwise\n    \"\"\"\n    if not isinstance(value, (int, float)):\n        return False\n    \n    if min_value is not None and value < min_value:\n        return False\n    \n    if max_value is not None and value > max_value:\n        return False\n    \n    return True\n\ndef validate_pattern(value: str, pattern: str) -> bool:\n    \"\"\"\n    Validate that a string matches a regular expression pattern.\n    \n    Args:\n        value: The string to validate\n        pattern: Regular expression pattern to match\n        \n    Returns:\n        True if the string matches the pattern, False otherwise\n    \"\"\"\n    return bool(re.match(pattern, value))\n\ndef validate_username(username: str) -> bool:\n    \"\"\"\n    Validate a username format.\n    \n    Args:\n        username: The username to validate\n        \n    Returns:\n        True if the username is valid, False otherwise\n    \"\"\"\n    # Username must be 3-20 characters, alphanumeric with underscores\n    pattern = r'^[a-zA-Z0-9_]{3,20}$'\n    return bool(re.match(pattern, username))\n\ndef validate_password_strength(password: str) -> Dict[str, bool]:\n    \"\"\"\n    Validate password strength against multiple criteria.\n    \n    Args:\n        password: The password to validate\n        \n    Returns:\n        Dictionary with validation results for each criterion\n    \"\"\"\n    results = {\n        \"length\": len(password) >= 8,\n        \"uppercase\": bool(re.search(r'[A-Z]', password)),\n        \"lowercase\": bool(re.search(r'[a-z]', password)),\n        \"digit\": 
bool(re.search(r'\\d', password)),\n        \"special_char\": bool(re.search(r'[!@#$%^&*(),.?\":{}|<>]', password))\n    }\n    \n    results[\"is_valid\"] = all(results.values())\n    return results\n\ndef validate_data(data: Dict[str, Any], schema: Dict[str, Dict[str, Any]]) -> Dict[str, List[str]]:\n    \"\"\"\n    Validate data against a schema.\n    \n    Args:\n        data: The data to validate\n        schema: Validation schema defining field types and constraints\n        \n    Returns:\n        Dictionary mapping field names to lists of validation error messages\n    \"\"\"\n    errors: Dict[str, List[str]] = {}\n    \n    for field_name, field_schema in schema.items():\n        field_type = field_schema.get(\"type\")\n        required = field_schema.get(\"required\", False)\n        \n        # Check if required field is missing\n        if required and (field_name not in data or data[field_name] is None):\n            errors.setdefault(field_name, []).append(\"Field is required\")\n            continue\n        \n        # Skip validation for optional fields that are not present\n        if field_name not in data or data[field_name] is None:\n            continue\n        \n        value = data[field_name]\n        \n        # Type validation\n        if field_type == \"string\" and not isinstance(value, str):\n            errors.setdefault(field_name, []).append(\"Must be a string\")\n        elif field_type == \"number\" and not isinstance(value, (int, float)):\n            errors.setdefault(field_name, []).append(\"Must be a number\")\n        elif field_type == \"integer\" and not isinstance(value, int):\n            errors.setdefault(field_name, []).append(\"Must be an integer\")\n        elif field_type == \"boolean\" and not isinstance(value, bool):\n            errors.setdefault(field_name, []).append(\"Must be a boolean\")\n        elif field_type == \"array\" and not isinstance(value, list):\n            errors.setdefault(field_name, 
[]).append(\"Must be an array\")\n        elif field_type == \"object\" and not isinstance(value, dict):\n            errors.setdefault(field_name, []).append(\"Must be an object\")\n        \n        # String-specific validations\n        if field_type == \"string\" and isinstance(value, str):\n            min_length = field_schema.get(\"min_length\")\n            max_length = field_schema.get(\"max_length\")\n            pattern = field_schema.get(\"pattern\")\n            \n            if min_length is not None and len(value) < min_length:\n                errors.setdefault(field_name, []).append(f\"Must be at least {min_length} characters\")\n            \n            if max_length is not None and len(value) > max_length:\n                errors.setdefault(field_name, []).append(f\"Must be at most {max_length} characters\")\n            \n            if pattern is not None and not re.match(pattern, value):\n                errors.setdefault(field_name, []).append(\"Does not match required pattern\")\n        \n        # Number-specific validations\n        if field_type in [\"number\", \"integer\"] and isinstance(value, (int, float)):\n            minimum = field_schema.get(\"minimum\")\n            maximum = field_schema.get(\"maximum\")\n            \n            if minimum is not None and value < minimum:\n                errors.setdefault(field_name, []).append(f\"Must be at least {minimum}\")\n            \n            if maximum is not None and value > maximum:\n                errors.setdefault(field_name, []).append(f\"Must be at most {maximum}\")\n    \n    return errors\n"
  },
  {
    "path": "codebase-architectures/atomic-composable-architecture/main.py",
    "content": "#!/usr/bin/env -S uv run --script\n\n# /// script\n# dependencies = [\n# ]\n# ///\n\n\"\"\"\nMain application entry point for the Atomic/Composable Architecture example.\n\"\"\"\n\nfrom organism.user_api import UserAPI\nfrom organism.alerts_api import AlertsAPI\n\ndef display_header(text):\n    \"\"\"Display a header with the given text.\"\"\"\n    print(\"\\n\" + \"=\" * 50)\n    print(f\" {text}\")\n    print(\"=\" * 50)\n\ndef display_response(response):\n    \"\"\"Display an API response.\"\"\"\n    status = response[\"status\"]\n    message = response[\"message\"]\n    data = response[\"data\"]\n    \n    if status == \"success\":\n        print(f\"✅ {message}\")\n    else:\n        print(f\"❌ {message}\")\n    \n    if data:\n        if isinstance(data, dict):\n            for key, value in data.items():\n                if key == \"user\":\n                    print(\"\\nUser:\")\n                    for user_key, user_value in value.items():\n                        print(f\"  {user_key}: {user_value}\")\n                elif key == \"alerts\":\n                    print(\"\\nAlerts:\")\n                    for i, alert in enumerate(value):\n                        print(f\"\\nAlert {i+1}:\")\n                        print(f\"  Message: {alert['message']}\")\n                        print(f\"  Type: {alert['type']}\")\n                        print(f\"  Level: {alert['data'].get('level', 'N/A')}\")\n                        print(f\"  Read: {'Yes' if alert['is_read'] else 'No'}\")\n                else:\n                    print(f\"\\n{key.capitalize()}:\")\n                    if isinstance(value, dict):\n                        for sub_key, sub_value in value.items():\n                            print(f\"  {sub_key}: {sub_value}\")\n                    else:\n                        print(f\"  {value}\")\n\ndef main():\n    \"\"\"Run the application.\"\"\"\n    display_header(\"Atomic/Composable Architecture Example\")\n    \n    # 
Register users\n    display_header(\"Registering Users\")\n    \n    register_response = UserAPI.register(\n        username=\"johndoe\",\n        password=\"Password123!\",\n        email=\"john@example.com\"\n    )\n    display_response(register_response)\n    \n    register_response2 = UserAPI.register(\n        username=\"janedoe\",\n        password=\"Secure456@\",\n        email=\"jane@example.com\"\n    )\n    display_response(register_response2)\n    \n    # Try to register with invalid data\n    invalid_register = UserAPI.register(\n        username=\"user\",\n        password=\"weak\",\n        email=\"invalid-email\"\n    )\n    display_response(invalid_register)\n    \n    # Login\n    display_header(\"User Login\")\n    \n    login_response = UserAPI.login(\n        username=\"johndoe\",\n        password=\"Password123!\"\n    )\n    display_response(login_response)\n    \n    # Store token for later use\n    if login_response[\"status\"] == \"success\" and login_response[\"data\"]:\n        token = login_response[\"data\"][\"token\"]\n        \n        # Get user profile\n        display_header(\"User Profile\")\n        \n        profile_response = UserAPI.get_profile(token)\n        display_response(profile_response)\n        \n        # Update profile\n        display_header(\"Updating Profile\")\n        \n        update_response = UserAPI.update_profile(\n            token=token,\n            profile_data={\"name\": \"John Doe\", \"location\": \"New York\"}\n        )\n        display_response(update_response)\n        \n        # Send alerts\n        display_header(\"Sending Alerts\")\n        \n        info_alert = AlertsAPI.send_alert(\n            token=token,\n            message=\"This is an informational alert\",\n            level=\"info\"\n        )\n        display_response(info_alert)\n        \n        warning_alert = AlertsAPI.send_alert(\n            token=token,\n            message=\"This is a warning alert\",\n            
level=\"warning\",\n            email=\"john@example.com\"\n        )\n        display_response(warning_alert)\n        \n        error_alert = AlertsAPI.send_alert(\n            token=token,\n            message=\"This is an error alert\",\n            level=\"error\",\n            additional_data={\"error_code\": \"E123\", \"source\": \"system\"}\n        )\n        display_response(error_alert)\n        \n        # Get alerts\n        display_header(\"Getting Alerts\")\n        \n        alerts_response = AlertsAPI.get_alerts(token)\n        display_response(alerts_response)\n        \n        # Filter alerts by level\n        display_header(\"Filtering Alerts by Level\")\n        \n        warning_alerts = AlertsAPI.get_alerts(token, level=\"warning\")\n        display_response(warning_alerts)\n        \n        # Mark an alert as read\n        if alerts_response[\"status\"] == \"success\" and alerts_response[\"data\"]:\n            alerts = alerts_response[\"data\"][\"alerts\"]\n            if alerts:\n                alert_id = alerts[0][\"id\"]\n                \n                display_header(\"Marking Alert as Read\")\n                \n                mark_response = AlertsAPI.mark_as_read(token, alert_id)\n                display_response(mark_response)\n                \n                # Get unread alerts\n                display_header(\"Getting Unread Alerts\")\n                \n                unread_response = AlertsAPI.get_alerts(token, unread_only=True)\n                display_response(unread_response)\n                \n                # Mark all as read\n                display_header(\"Marking All Alerts as Read\")\n                \n                mark_all_response = AlertsAPI.mark_all_as_read(token)\n                display_response(mark_all_response)\n                \n                # Delete an alert\n                display_header(\"Deleting an Alert\")\n                \n                delete_response = AlertsAPI.delete_alert(token, 
alert_id)\n                display_response(delete_response)\n        \n        # Send system notification\n        display_header(\"Sending System Notification\")\n        \n        system_response = AlertsAPI.send_system_alert(\n            token=token,\n            user_id=profile_response[\"data\"][\"user\"][\"id\"],\n            notification_type=\"welcome\",\n            data={\"username\": \"johndoe\"},\n            email=\"john@example.com\"\n        )\n        display_response(system_response)\n        \n        # Logout\n        display_header(\"User Logout\")\n        \n        logout_response = UserAPI.logout(token)\n        display_response(logout_response)\n        \n        # Try to use expired token\n        display_header(\"Using Expired Token\")\n        \n        expired_response = UserAPI.get_profile(token)\n        display_response(expired_response)\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "codebase-architectures/atomic-composable-architecture/molecule/alerting.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nAlerting capability for the Atomic/Composable Architecture.\nThis capability combines notifications and validation modules.\n\"\"\"\n\nfrom typing import Dict, List, Optional, Tuple\n\nfrom atom.notifications import (\n    create_notification, get_user_notifications, mark_notification_as_read,\n    mark_all_notifications_as_read, delete_notification, send_email_notification,\n    send_sms_notification, create_alert\n)\nfrom atom.validation import (\n    validate_required_fields, validate_email, validate_string_length\n)\n\ndef send_user_alert(user_id: str, message: str, level: str = \"info\", \n                   email: Optional[str] = None, phone: Optional[str] = None,\n                   additional_data: Optional[Dict] = None) -> Tuple[bool, Dict]:\n    \"\"\"\n    Send an alert to a user through multiple channels.\n    \n    Args:\n        user_id: The ID of the user to alert\n        message: The alert message\n        level: Alert level (info, warning, error)\n        email: Optional email address to send the alert to\n        phone: Optional phone number to send the alert to\n        additional_data: Additional data for the alert\n        \n    Returns:\n        Tuple of (success, result) with notification details\n    \"\"\"\n    # Validate required fields\n    missing_fields = validate_required_fields(\n        {\"user_id\": user_id, \"message\": message},\n        [\"user_id\", \"message\"]\n    )\n    \n    if missing_fields:\n        return False, {\"error\": f\"Missing required fields: {', '.join(missing_fields)}\"}\n    \n    # Validate message length\n    if not validate_string_length(message, min_length=1, max_length=500):\n        return False, {\"error\": \"Message must be between 1 and 500 characters\"}\n    \n    # Validate level\n    valid_levels = [\"info\", \"warning\", \"error\"]\n    if level not in valid_levels:\n        return False, {\"error\": f\"Level must be one of: {', 
'.join(valid_levels)}\"}\n    \n    # Create the alert notification\n    notification = create_alert(\n        user_id=user_id,\n        message=message,\n        level=level,\n        data=additional_data\n    )\n    \n    # Send email if provided\n    email_sent = False\n    if email:\n        if validate_email(email):\n            subject = f\"Alert: {level.capitalize()}\"\n            email_sent = send_email_notification(email, subject, message)\n        else:\n            return False, {\"error\": \"Invalid email format\"}\n    \n    # Send SMS if provided\n    sms_sent = False\n    if phone:\n        sms_sent = send_sms_notification(phone, message)\n    \n    return True, {\n        \"notification\": notification,\n        \"channels\": {\n            \"in_app\": True,\n            \"email\": email_sent,\n            \"sms\": sms_sent\n        }\n    }\n\ndef get_user_alerts(user_id: str, unread_only: bool = False, \n                   level: Optional[str] = None) -> List[Dict]:\n    \"\"\"\n    Get alerts for a user with optional filtering.\n    \n    Args:\n        user_id: The ID of the user\n        unread_only: Whether to return only unread alerts\n        level: Optional filter by alert level\n        \n    Returns:\n        List of alert notifications\n    \"\"\"\n    # Get all notifications for the user\n    notifications = get_user_notifications(user_id, unread_only)\n    \n    # Filter to only alert type notifications\n    alerts = [n for n in notifications if n[\"type\"] == \"alert\"]\n    \n    # Filter by level if specified\n    if level:\n        alerts = [a for a in alerts if a[\"data\"].get(\"level\") == level]\n    \n    return alerts\n\ndef mark_alert_as_read(user_id: str, notification_id: str) -> bool:\n    \"\"\"\n    Mark an alert as read.\n    \n    Args:\n        user_id: The ID of the user\n        notification_id: The ID of the notification\n        \n    Returns:\n        True if the alert was marked as read, False otherwise\n    
\"\"\"\n    return mark_notification_as_read(user_id, notification_id)\n\ndef mark_all_alerts_as_read(user_id: str) -> int:\n    \"\"\"\n    Mark all alerts for a user as read.\n    \n    Args:\n        user_id: The ID of the user\n        \n    Returns:\n        Number of alerts marked as read\n    \"\"\"\n    return mark_all_notifications_as_read(user_id)\n\ndef delete_user_alert(user_id: str, notification_id: str) -> bool:\n    \"\"\"\n    Delete an alert.\n    \n    Args:\n        user_id: The ID of the user\n        notification_id: The ID of the notification\n        \n    Returns:\n        True if the alert was deleted, False otherwise\n    \"\"\"\n    return delete_notification(user_id, notification_id)\n\ndef send_system_notification(user_id: str, notification_type: str, \n                            data: Dict, email: Optional[str] = None) -> Tuple[bool, Dict]:\n    \"\"\"\n    Send a system notification to a user.\n    \n    Args:\n        user_id: The ID of the user\n        notification_type: The type of notification (welcome, password_reset, new_login)\n        data: Data for the notification template\n        email: Optional email address to send the notification to\n        \n    Returns:\n        Tuple of (success, result) with notification details\n    \"\"\"\n    # Validate required fields\n    missing_fields = validate_required_fields(\n        {\"user_id\": user_id, \"notification_type\": notification_type},\n        [\"user_id\", \"notification_type\"]\n    )\n    \n    if missing_fields:\n        return False, {\"error\": f\"Missing required fields: {', '.join(missing_fields)}\"}\n    \n    # Validate notification type\n    valid_types = [\"welcome\", \"password_reset\", \"new_login\"]\n    if notification_type not in valid_types:\n        return False, {\"error\": f\"Notification type must be one of: {', '.join(valid_types)}\"}\n    \n    # Validate the email format before creating the notification so a bad\n    # address does not leave behind an already-created in-app notification\n    if email and not validate_email(email):\n        return False, {\"error\": \"Invalid email format\"}\n    \n    # Create the notification\n    notification = create_notification(\n        user_id=user_id,\n        notification_type=notification_type,\n        data=data\n    )\n    \n    # Send email if provided\n    email_sent = False\n    if email:\n        # Use the rendered notification message as the email body\n        message = notification[\"message\"]\n        subject = f\"Notification: {notification_type.replace('_', ' ').title()}\"\n        email_sent = send_email_notification(email, subject, message)\n    \n    return True, {\n        \"notification\": notification,\n        \"channels\": {\n            \"in_app\": True,\n            \"email\": email_sent\n        }\n    }\n"
  },
  {
    "path": "codebase-architectures/atomic-composable-architecture/molecule/user_management.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nUser management capability for the Atomic/Composable Architecture.\nThis capability combines auth and validation modules.\n\"\"\"\n\nfrom typing import Dict, List, Optional, Tuple\n\nfrom atom.auth import (\n    register_user, authenticate, create_token, validate_token,\n    revoke_token, get_user_by_id\n)\nfrom atom.validation import (\n    validate_required_fields, validate_email, validate_username,\n    validate_password_strength, validate_string_length\n)\n\ndef register_new_user(username: str, password: str, email: str) -> Tuple[bool, Dict]:\n    \"\"\"\n    Register a new user with validation.\n    \n    Args:\n        username: The username for the new user\n        password: The password for the new user\n        email: The email for the new user\n        \n    Returns:\n        Tuple of (success, result) where result is either user data or error messages\n    \"\"\"\n    # Validate required fields\n    missing_fields = validate_required_fields(\n        {\"username\": username, \"password\": password, \"email\": email},\n        [\"username\", \"password\", \"email\"]\n    )\n    \n    if missing_fields:\n        return False, {\"error\": f\"Missing required fields: {', '.join(missing_fields)}\"}\n    \n    # Validate username\n    if not validate_username(username):\n        return False, {\"error\": \"Username must be 3-20 characters, alphanumeric with underscores\"}\n    \n    # Validate email\n    if not validate_email(email):\n        return False, {\"error\": \"Invalid email format\"}\n    \n    # Validate password strength\n    password_validation = validate_password_strength(password)\n    if not password_validation[\"is_valid\"]:\n        errors = []\n        if not password_validation[\"length\"]:\n            errors.append(\"Password must be at least 8 characters\")\n        if not password_validation[\"uppercase\"]:\n            errors.append(\"Password must contain at least one uppercase 
letter\")\n        if not password_validation[\"lowercase\"]:\n            errors.append(\"Password must contain at least one lowercase letter\")\n        if not password_validation[\"digit\"]:\n            errors.append(\"Password must contain at least one digit\")\n        if not password_validation[\"special_char\"]:\n            errors.append(\"Password must contain at least one special character\")\n        \n        return False, {\"error\": errors}\n    \n    try:\n        # Register the user\n        user_data = register_user(username, password, email)\n        return True, {\"user\": user_data}\n    except ValueError as e:\n        return False, {\"error\": str(e)}\n\ndef login_user(username: str, password: str) -> Tuple[bool, Dict]:\n    \"\"\"\n    Login a user and create an authentication token.\n    \n    Args:\n        username: The username to authenticate\n        password: The password to authenticate\n        \n    Returns:\n        Tuple of (success, result) where result contains user data and token or error message\n    \"\"\"\n    # Validate required fields\n    missing_fields = validate_required_fields(\n        {\"username\": username, \"password\": password},\n        [\"username\", \"password\"]\n    )\n    \n    if missing_fields:\n        return False, {\"error\": f\"Missing required fields: {', '.join(missing_fields)}\"}\n    \n    # Authenticate the user\n    user_data = authenticate(username, password)\n    if not user_data:\n        return False, {\"error\": \"Invalid username or password\"}\n    \n    # Create an authentication token\n    token = create_token(user_data[\"id\"])\n    \n    return True, {\n        \"user\": user_data,\n        \"token\": token\n    }\n\ndef validate_user_token(token: str) -> Tuple[bool, Optional[Dict]]:\n    \"\"\"\n    Validate a user token and return user data.\n    \n    Args:\n        token: The token to validate\n        \n    Returns:\n        Tuple of (success, user_data) where user_data is None 
if validation fails\n    \"\"\"\n    if not token:\n        return False, None\n    \n    user_id = validate_token(token)\n    if not user_id:\n        return False, None\n    \n    user_data = get_user_by_id(user_id)\n    if not user_data:\n        return False, None\n    \n    return True, user_data\n\ndef logout_user(token: str) -> bool:\n    \"\"\"\n    Logout a user by revoking their token.\n    \n    Args:\n        token: The token to revoke\n        \n    Returns:\n        True if the token was revoked, False otherwise\n    \"\"\"\n    return revoke_token(token)\n\ndef update_user_profile(user_id: str, profile_data: Dict) -> Tuple[bool, Dict]:\n    \"\"\"\n    Update a user's profile data.\n    \n    Args:\n        user_id: The ID of the user to update\n        profile_data: The profile data to update\n        \n    Returns:\n        Tuple of (success, result) where result is either updated user data or error messages\n    \"\"\"\n    # Get the current user data\n    user_data = get_user_by_id(user_id)\n    if not user_data:\n        return False, {\"error\": \"User not found\"}\n    \n    # Validate email if provided\n    if \"email\" in profile_data:\n        if not validate_email(profile_data[\"email\"]):\n            return False, {\"error\": \"Invalid email format\"}\n    \n    # In a real application, this would update the user in the database\n    # For this mock, we'll just print the update\n    print(f\"[UPDATE] User {user_id} profile updated:\")\n    for key, value in profile_data.items():\n        print(f\"  {key}: {value}\")\n    \n    # Return success with mock updated data\n    updated_user = {**user_data, **profile_data}\n    return True, {\"user\": updated_user}\n\ndef change_password(user_id: str, current_password: str, new_password: str) -> Tuple[bool, Dict]:\n    \"\"\"\n    Change a user's password.\n    \n    Args:\n        user_id: The ID of the user\n        current_password: The current password\n        new_password: The new 
password\n        \n    Returns:\n        Tuple of (success, result) where result contains a success message or error message\n    \"\"\"\n    # Validate required fields\n    missing_fields = validate_required_fields(\n        {\"current_password\": current_password, \"new_password\": new_password},\n        [\"current_password\", \"new_password\"]\n    )\n    \n    if missing_fields:\n        return False, {\"error\": f\"Missing required fields: {', '.join(missing_fields)}\"}\n    \n    # Validate new password strength\n    password_validation = validate_password_strength(new_password)\n    if not password_validation[\"is_valid\"]:\n        errors = []\n        if not password_validation[\"length\"]:\n            errors.append(\"Password must be at least 8 characters\")\n        if not password_validation[\"uppercase\"]:\n            errors.append(\"Password must contain at least one uppercase letter\")\n        if not password_validation[\"lowercase\"]:\n            errors.append(\"Password must contain at least one lowercase letter\")\n        if not password_validation[\"digit\"]:\n            errors.append(\"Password must contain at least one digit\")\n        if not password_validation[\"special_char\"]:\n            errors.append(\"Password must contain at least one special character\")\n        \n        return False, {\"error\": errors}\n    \n    # In a real application, this would verify the current password and update it\n    # For this mock, we'll just print the change\n    print(f\"[PASSWORD] User {user_id} password changed\")\n    \n    return True, {\"message\": \"Password changed successfully\"}\n"
  },
  {
    "path": "codebase-architectures/atomic-composable-architecture/organism/alerts_api.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nAlerts API endpoints for the Atomic/Composable Architecture.\nThis module combines alerting capability with HTTP endpoints.\n\"\"\"\n\nfrom typing import Dict, List, Optional\n\nfrom molecule.alerting import (\n    send_user_alert, get_user_alerts, mark_alert_as_read,\n    mark_all_alerts_as_read, delete_user_alert, send_system_notification\n)\nfrom molecule.user_management import validate_user_token\n\nclass AlertsAPI:\n    \"\"\"API endpoints for alerts management.\"\"\"\n    \n    @staticmethod\n    def send_alert(token: str, message: str, level: str = \"info\", \n                  email: Optional[str] = None, phone: Optional[str] = None,\n                  additional_data: Optional[Dict] = None) -> Dict:\n        \"\"\"\n        Send an alert to a user.\n        \n        Args:\n            token: Authentication token\n            message: The alert message\n            level: Alert level (info, warning, error)\n            email: Optional email address to send the alert to\n            phone: Optional phone number to send the alert to\n            additional_data: Additional data for the alert\n            \n        Returns:\n            Response with success status and alert details or error message\n        \"\"\"\n        # Validate token\n        success, user_data = validate_user_token(token)\n        if not success:\n            return {\n                \"status\": \"error\",\n                \"message\": \"Invalid or expired token\",\n                \"data\": None\n            }\n        \n        # Send alert\n        success, result = send_user_alert(\n            user_id=user_data[\"id\"],\n            message=message,\n            level=level,\n            email=email,\n            phone=phone,\n            additional_data=additional_data\n        )\n        \n        if success:\n            return {\n                \"status\": \"success\",\n                \"message\": \"Alert sent 
successfully\",\n                \"data\": result\n            }\n        else:\n            return {\n                \"status\": \"error\",\n                \"message\": result.get(\"error\", \"Failed to send alert\"),\n                \"data\": None\n            }\n    \n    @staticmethod\n    def get_alerts(token: str, unread_only: bool = False, level: Optional[str] = None) -> Dict:\n        \"\"\"\n        Get alerts for a user.\n        \n        Args:\n            token: Authentication token\n            unread_only: Whether to return only unread alerts\n            level: Optional filter by alert level\n            \n        Returns:\n            Response with success status and alerts or error message\n        \"\"\"\n        # Validate token\n        success, user_data = validate_user_token(token)\n        if not success:\n            return {\n                \"status\": \"error\",\n                \"message\": \"Invalid or expired token\",\n                \"data\": None\n            }\n        \n        # Get alerts\n        alerts = get_user_alerts(\n            user_id=user_data[\"id\"],\n            unread_only=unread_only,\n            level=level\n        )\n        \n        return {\n            \"status\": \"success\",\n            \"message\": f\"Retrieved {len(alerts)} alerts\",\n            \"data\": {\"alerts\": alerts}\n        }\n    \n    @staticmethod\n    def mark_as_read(token: str, notification_id: str) -> Dict:\n        \"\"\"\n        Mark an alert as read.\n        \n        Args:\n            token: Authentication token\n            notification_id: The ID of the notification\n            \n        Returns:\n            Response with success status or error message\n        \"\"\"\n        # Validate token\n        success, user_data = validate_user_token(token)\n        if not success:\n            return {\n                \"status\": \"error\",\n                \"message\": \"Invalid or expired token\",\n                
\"data\": None\n            }\n        \n        # Mark as read\n        success = mark_alert_as_read(user_data[\"id\"], notification_id)\n        \n        if success:\n            return {\n                \"status\": \"success\",\n                \"message\": \"Alert marked as read\",\n                \"data\": None\n            }\n        else:\n            return {\n                \"status\": \"error\",\n                \"message\": \"Alert not found\",\n                \"data\": None\n            }\n    \n    @staticmethod\n    def mark_all_as_read(token: str) -> Dict:\n        \"\"\"\n        Mark all alerts as read.\n        \n        Args:\n            token: Authentication token\n            \n        Returns:\n            Response with success status and count of alerts marked as read\n        \"\"\"\n        # Validate token\n        success, user_data = validate_user_token(token)\n        if not success:\n            return {\n                \"status\": \"error\",\n                \"message\": \"Invalid or expired token\",\n                \"data\": None\n            }\n        \n        # Mark all as read\n        count = mark_all_alerts_as_read(user_data[\"id\"])\n        \n        return {\n            \"status\": \"success\",\n            \"message\": f\"Marked {count} alerts as read\",\n            \"data\": {\"count\": count}\n        }\n    \n    @staticmethod\n    def delete_alert(token: str, notification_id: str) -> Dict:\n        \"\"\"\n        Delete an alert.\n        \n        Args:\n            token: Authentication token\n            notification_id: The ID of the notification\n            \n        Returns:\n            Response with success status or error message\n        \"\"\"\n        # Validate token\n        success, user_data = validate_user_token(token)\n        if not success:\n            return {\n                \"status\": \"error\",\n                \"message\": \"Invalid or expired token\",\n                \"data\": 
None\n            }\n        \n        # Delete alert\n        success = delete_user_alert(user_data[\"id\"], notification_id)\n        \n        if success:\n            return {\n                \"status\": \"success\",\n                \"message\": \"Alert deleted successfully\",\n                \"data\": None\n            }\n        else:\n            return {\n                \"status\": \"error\",\n                \"message\": \"Alert not found\",\n                \"data\": None\n            }\n    \n    @staticmethod\n    def send_system_alert(token: str, user_id: str, notification_type: str, \n                         data: Dict, email: Optional[str] = None) -> Dict:\n        \"\"\"\n        Send a system notification to a user (admin function).\n        \n        Args:\n            token: Authentication token (must be admin)\n            user_id: The ID of the user to notify\n            notification_type: The type of notification\n            data: Data for the notification template\n            email: Optional email address to send the notification to\n            \n        Returns:\n            Response with success status and notification details or error message\n        \"\"\"\n        # Validate token (in a real app, would check if user is admin)\n        success, admin_data = validate_user_token(token)\n        if not success:\n            return {\n                \"status\": \"error\",\n                \"message\": \"Invalid or expired token\",\n                \"data\": None\n            }\n        \n        # Send system notification\n        success, result = send_system_notification(\n            user_id=user_id,\n            notification_type=notification_type,\n            data=data,\n            email=email\n        )\n        \n        if success:\n            return {\n                \"status\": \"success\",\n                \"message\": \"System notification sent successfully\",\n                \"data\": result\n            }\n       
 else:\n            return {\n                \"status\": \"error\",\n                \"message\": result.get(\"error\", \"Failed to send system notification\"),\n                \"data\": None\n            }\n"
  },
  {
    "path": "codebase-architectures/atomic-composable-architecture/organism/user_api.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nUser API endpoints for the Atomic/Composable Architecture.\nThis module combines user_management capability with HTTP endpoints.\n\"\"\"\n\nfrom typing import Dict, Optional\n\nfrom molecule.user_management import (\n    register_new_user, login_user, validate_user_token,\n    logout_user, update_user_profile, change_password\n)\n\nclass UserAPI:\n    \"\"\"API endpoints for user management.\"\"\"\n    \n    @staticmethod\n    def register(username: str, password: str, email: str) -> Dict:\n        \"\"\"\n        Register a new user.\n        \n        Args:\n            username: The username for the new user\n            password: The password for the new user\n            email: The email for the new user\n            \n        Returns:\n            Response with success status and user data or error message\n        \"\"\"\n        success, result = register_new_user(username, password, email)\n        \n        if success:\n            return {\n                \"status\": \"success\",\n                \"message\": \"User registered successfully\",\n                \"data\": result\n            }\n        else:\n            return {\n                \"status\": \"error\",\n                \"message\": result.get(\"error\", \"Registration failed\"),\n                \"data\": None\n            }\n    \n    @staticmethod\n    def login(username: str, password: str) -> Dict:\n        \"\"\"\n        Login a user.\n        \n        Args:\n            username: The username to authenticate\n            password: The password to authenticate\n            \n        Returns:\n            Response with success status and user data with token or error message\n        \"\"\"\n        success, result = login_user(username, password)\n        \n        if success:\n            return {\n                \"status\": \"success\",\n                \"message\": \"Login successful\",\n                \"data\": result\n         
   }\n        else:\n            return {\n                \"status\": \"error\",\n                \"message\": result.get(\"error\", \"Login failed\"),\n                \"data\": None\n            }\n    \n    @staticmethod\n    def get_profile(token: str) -> Dict:\n        \"\"\"\n        Get a user's profile.\n        \n        Args:\n            token: Authentication token\n            \n        Returns:\n            Response with success status and user data or error message\n        \"\"\"\n        success, user_data = validate_user_token(token)\n        \n        if success:\n            return {\n                \"status\": \"success\",\n                \"message\": \"Profile retrieved successfully\",\n                \"data\": {\"user\": user_data}\n            }\n        else:\n            return {\n                \"status\": \"error\",\n                \"message\": \"Invalid or expired token\",\n                \"data\": None\n            }\n    \n    @staticmethod\n    def logout(token: str) -> Dict:\n        \"\"\"\n        Logout a user.\n        \n        Args:\n            token: Authentication token\n            \n        Returns:\n            Response with success status\n        \"\"\"\n        success = logout_user(token)\n        \n        if success:\n            return {\n                \"status\": \"success\",\n                \"message\": \"Logout successful\",\n                \"data\": None\n            }\n        else:\n            return {\n                \"status\": \"error\",\n                \"message\": \"Invalid token\",\n                \"data\": None\n            }\n    \n    @staticmethod\n    def update_profile(token: str, profile_data: Dict) -> Dict:\n        \"\"\"\n        Update a user's profile.\n        \n        Args:\n            token: Authentication token\n            profile_data: The profile data to update\n            \n        Returns:\n            Response with success status and updated user data or error 
message\n        \"\"\"\n        # Validate token\n        success, user_data = validate_user_token(token)\n        if not success:\n            return {\n                \"status\": \"error\",\n                \"message\": \"Invalid or expired token\",\n                \"data\": None\n            }\n        \n        # Update profile\n        success, result = update_user_profile(user_data[\"id\"], profile_data)\n        \n        if success:\n            return {\n                \"status\": \"success\",\n                \"message\": \"Profile updated successfully\",\n                \"data\": result\n            }\n        else:\n            return {\n                \"status\": \"error\",\n                \"message\": result.get(\"error\", \"Profile update failed\"),\n                \"data\": None\n            }\n    \n    @staticmethod\n    def change_password(token: str, current_password: str, new_password: str) -> Dict:\n        \"\"\"\n        Change a user's password.\n        \n        Args:\n            token: Authentication token\n            current_password: The current password\n            new_password: The new password\n            \n        Returns:\n            Response with success status or error message\n        \"\"\"\n        # Validate token\n        success, user_data = validate_user_token(token)\n        if not success:\n            return {\n                \"status\": \"error\",\n                \"message\": \"Invalid or expired token\",\n                \"data\": None\n            }\n        \n        # Change password\n        success, result = change_password(user_data[\"id\"], current_password, new_password)\n        \n        if success:\n            return {\n                \"status\": \"success\",\n                \"message\": \"Password changed successfully\",\n                \"data\": None\n            }\n        else:\n            return {\n                \"status\": \"error\",\n                \"message\": 
result.get(\"error\", \"Password change failed\"),\n                \"data\": None\n            }\n"
  },
  {
    "path": "codebase-architectures/layered-architecture/README.md",
    "content": "# Layered (N-Tier or MVC) Architecture\n\nThis directory demonstrates a Layered Architecture implementation with a simple product catalog application.\n\n## Structure\n\n```\nlayered-architecture/\n├── api/                   # Interfaces (controllers or endpoints)\n│   ├── product_api.py\n│   └── category_api.py\n├── services/              # Business logic layer\n│   ├── product_service.py\n│   └── category_service.py\n├── models/                # Data models and domain objects\n│   ├── product.py\n│   └── category.py\n├── data/                  # Data access and persistence\n│   └── database.py\n├── utils/                 # Shared utilities\n│   └── logger.py\n└── main.py                # Application entry point\n```\n\n## Benefits\n\n- Clear separation of concerns makes it easy to see where a given change belongs\n- Centralized shared logic promotes consistency and reduces duplication\n- Clear role signaling (e.g., service vs. API vs. data)\n\n## Cons\n\n- A single feature is spread across every layer, so gathering full context for a change can be challenging\n- Tight coupling may occur between layers, complicating cross-layer changes\n\n## Running the Example\n\n```bash\nuv run main.py\n```\n"
  },
  {
    "path": "codebase-architectures/layered-architecture/api/category_api.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nCategory API endpoints.\n\"\"\"\n\nfrom services.category_service import CategoryService\nfrom utils.logger import Logger, app_logger\n\nclass CategoryAPI:\n    \"\"\"API endpoints for category management.\"\"\"\n    \n    @staticmethod\n    def create_category(name, description=None):\n        \"\"\"Create a new category.\"\"\"\n        try:\n            category = CategoryService.create_category(name, description)\n            return {\n                \"success\": True,\n                \"message\": \"Category created successfully\",\n                \"data\": category\n            }\n        except ValueError as e:\n            Logger.warning(app_logger, f\"Validation error in create_category: {str(e)}\")\n            return {\n                \"success\": False,\n                \"message\": str(e)\n            }\n        except Exception as e:\n            Logger.error(app_logger, f\"Error in create_category: {str(e)}\", exc_info=True)\n            return {\n                \"success\": False,\n                \"message\": \"An error occurred while creating the category\"\n            }\n    \n    @staticmethod\n    def get_category(category_id):\n        \"\"\"Get a category by ID.\"\"\"\n        try:\n            category = CategoryService.get_category(category_id)\n            if not category:\n                return {\n                    \"success\": False,\n                    \"message\": f\"Category with ID {category_id} not found\"\n                }\n            return {\n                \"success\": True,\n                \"data\": category\n            }\n        except Exception as e:\n            Logger.error(app_logger, f\"Error in get_category: {str(e)}\", exc_info=True)\n            return {\n                \"success\": False,\n                \"message\": \"An error occurred while retrieving the category\"\n            }\n    \n    @staticmethod\n    def get_all_categories():\n        
\"\"\"Get all categories.\"\"\"\n        try:\n            categories = CategoryService.get_all_categories()\n            return {\n                \"success\": True,\n                \"data\": categories\n            }\n        except Exception as e:\n            Logger.error(app_logger, f\"Error in get_all_categories: {str(e)}\", exc_info=True)\n            return {\n                \"success\": False,\n                \"message\": \"An error occurred while retrieving categories\"\n            }\n    \n    @staticmethod\n    def update_category(category_id, name=None, description=None):\n        \"\"\"Update a category.\"\"\"\n        try:\n            category = CategoryService.update_category(category_id, name, description)\n            if not category:\n                return {\n                    \"success\": False,\n                    \"message\": f\"Category with ID {category_id} not found\"\n                }\n            return {\n                \"success\": True,\n                \"message\": \"Category updated successfully\",\n                \"data\": category\n            }\n        except ValueError as e:\n            Logger.warning(app_logger, f\"Validation error in update_category: {str(e)}\")\n            return {\n                \"success\": False,\n                \"message\": str(e)\n            }\n        except Exception as e:\n            Logger.error(app_logger, f\"Error in update_category: {str(e)}\", exc_info=True)\n            return {\n                \"success\": False,\n                \"message\": \"An error occurred while updating the category\"\n            }\n    \n    @staticmethod\n    def delete_category(category_id):\n        \"\"\"Delete a category.\"\"\"\n        try:\n            result = CategoryService.delete_category(category_id)\n            if not result:\n                return {\n                    \"success\": False,\n                    \"message\": f\"Category with ID {category_id} not found\"\n               
 }\n            return {\n                \"success\": True,\n                \"message\": \"Category deleted successfully\"\n            }\n        except ValueError as e:\n            Logger.warning(app_logger, f\"Validation error in delete_category: {str(e)}\")\n            return {\n                \"success\": False,\n                \"message\": str(e)\n            }\n        except Exception as e:\n            Logger.error(app_logger, f\"Error in delete_category: {str(e)}\", exc_info=True)\n            return {\n                \"success\": False,\n                \"message\": \"An error occurred while deleting the category\"\n            }\n"
  },
  {
    "path": "codebase-architectures/layered-architecture/api/product_api.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nProduct API endpoints.\n\"\"\"\n\nfrom services.product_service import ProductService\nfrom utils.logger import Logger, app_logger\n\nclass ProductAPI:\n    \"\"\"API endpoints for product management.\"\"\"\n    \n    @staticmethod\n    def create_product(name, price, category_id=None, description=None, sku=None):\n        \"\"\"Create a new product.\"\"\"\n        try:\n            product = ProductService.create_product(name, price, category_id, description, sku)\n            return {\n                \"success\": True,\n                \"message\": \"Product created successfully\",\n                \"data\": product\n            }\n        except ValueError as e:\n            Logger.warning(app_logger, f\"Validation error in create_product: {str(e)}\")\n            return {\n                \"success\": False,\n                \"message\": str(e)\n            }\n        except Exception as e:\n            Logger.error(app_logger, f\"Error in create_product: {str(e)}\", exc_info=True)\n            return {\n                \"success\": False,\n                \"message\": \"An error occurred while creating the product\"\n            }\n    \n    @staticmethod\n    def get_product(product_id):\n        \"\"\"Get a product by ID.\"\"\"\n        try:\n            product = ProductService.get_product(product_id)\n            if not product:\n                return {\n                    \"success\": False,\n                    \"message\": f\"Product with ID {product_id} not found\"\n                }\n            return {\n                \"success\": True,\n                \"data\": product\n            }\n        except Exception as e:\n            Logger.error(app_logger, f\"Error in get_product: {str(e)}\", exc_info=True)\n            return {\n                \"success\": False,\n                \"message\": \"An error occurred while retrieving the product\"\n            }\n    \n    @staticmethod\n    def 
get_by_sku(sku):\n        \"\"\"Get a product by SKU.\"\"\"\n        try:\n            product = ProductService.get_by_sku(sku)\n            if not product:\n                return {\n                    \"success\": False,\n                    \"message\": f\"Product with SKU '{sku}' not found\"\n                }\n            return {\n                \"success\": True,\n                \"data\": product\n            }\n        except Exception as e:\n            Logger.error(app_logger, f\"Error in get_by_sku: {str(e)}\", exc_info=True)\n            return {\n                \"success\": False,\n                \"message\": \"An error occurred while retrieving the product\"\n            }\n    \n    @staticmethod\n    def get_all_products():\n        \"\"\"Get all products.\"\"\"\n        try:\n            products = ProductService.get_all_products()\n            return {\n                \"success\": True,\n                \"data\": products\n            }\n        except Exception as e:\n            Logger.error(app_logger, f\"Error in get_all_products: {str(e)}\", exc_info=True)\n            return {\n                \"success\": False,\n                \"message\": \"An error occurred while retrieving products\"\n            }\n    \n    @staticmethod\n    def get_products_by_category(category_id):\n        \"\"\"Get all products in a category.\"\"\"\n        try:\n            products = ProductService.get_products_by_category(category_id)\n            return {\n                \"success\": True,\n                \"data\": products\n            }\n        except Exception as e:\n            Logger.error(app_logger, f\"Error in get_products_by_category: {str(e)}\", exc_info=True)\n            return {\n                \"success\": False,\n                \"message\": \"An error occurred while retrieving products\"\n            }\n    \n    @staticmethod\n    def update_product(product_id, name=None, price=None, category_id=None, description=None, sku=None):\n 
       \"\"\"Update a product.\"\"\"\n        try:\n            product = ProductService.update_product(product_id, name, price, category_id, description, sku)\n            if not product:\n                return {\n                    \"success\": False,\n                    \"message\": f\"Product with ID {product_id} not found\"\n                }\n            return {\n                \"success\": True,\n                \"message\": \"Product updated successfully\",\n                \"data\": product\n            }\n        except ValueError as e:\n            Logger.warning(app_logger, f\"Validation error in update_product: {str(e)}\")\n            return {\n                \"success\": False,\n                \"message\": str(e)\n            }\n        except Exception as e:\n            Logger.error(app_logger, f\"Error in update_product: {str(e)}\", exc_info=True)\n            return {\n                \"success\": False,\n                \"message\": \"An error occurred while updating the product\"\n            }\n    \n    @staticmethod\n    def delete_product(product_id):\n        \"\"\"Delete a product.\"\"\"\n        try:\n            result = ProductService.delete_product(product_id)\n            if not result:\n                return {\n                    \"success\": False,\n                    \"message\": f\"Product with ID {product_id} not found\"\n                }\n            return {\n                \"success\": True,\n                \"message\": \"Product deleted successfully\"\n            }\n        except Exception as e:\n            Logger.error(app_logger, f\"Error in delete_product: {str(e)}\", exc_info=True)\n            return {\n                \"success\": False,\n                \"message\": \"An error occurred while deleting the product\"\n            }\n"
  },
  {
    "path": "codebase-architectures/layered-architecture/data/database.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nDatabase module for data persistence.\n\"\"\"\n\nimport uuid\nfrom utils.logger import Logger, app_logger\n\nclass InMemoryDatabase:\n    \"\"\"In-memory database implementation.\"\"\"\n    \n    def __init__(self):\n        \"\"\"Initialize the database.\"\"\"\n        self.data = {}\n        self.logger = Logger.get_logger(\"database\")\n        Logger.info(self.logger, \"Database initialized\")\n    \n    def create_table(self, table_name):\n        \"\"\"Create a new table if it doesn't exist.\"\"\"\n        if table_name not in self.data:\n            self.data[table_name] = {}\n            Logger.info(self.logger, f\"Table '{table_name}' created\")\n    \n    def insert(self, table_name, item):\n        \"\"\"Insert an item into a table.\"\"\"\n        if table_name not in self.data:\n            self.create_table(table_name)\n        \n        # Generate ID if not provided\n        if \"id\" not in item:\n            item[\"id\"] = str(uuid.uuid4())\n        \n        self.data[table_name][item[\"id\"]] = item\n        Logger.info(self.logger, f\"Item inserted into '{table_name}' with ID {item['id']}\")\n        return item\n    \n    def get(self, table_name, item_id):\n        \"\"\"Get an item from a table by ID.\"\"\"\n        if table_name not in self.data or item_id not in self.data[table_name]:\n            Logger.warning(self.logger, f\"Item with ID {item_id} not found in '{table_name}'\")\n            return None\n        \n        Logger.debug(self.logger, f\"Retrieved item with ID {item_id} from '{table_name}'\")\n        return self.data[table_name][item_id]\n    \n    def get_all(self, table_name):\n        \"\"\"Get all items from a table.\"\"\"\n        if table_name not in self.data:\n            Logger.warning(self.logger, f\"Table '{table_name}' not found\")\n            return []\n        \n        items = list(self.data[table_name].values())\n        Logger.debug(self.logger, f\"Retrieved 
{len(items)} items from '{table_name}'\")\n        return items\n    \n    def update(self, table_name, item_id, item):\n        \"\"\"Update an item in a table.\"\"\"\n        if table_name not in self.data or item_id not in self.data[table_name]:\n            Logger.warning(self.logger, f\"Cannot update: Item with ID {item_id} not found in '{table_name}'\")\n            return None\n        \n        # Ensure ID remains the same\n        item[\"id\"] = item_id\n        self.data[table_name][item_id] = item\n        Logger.info(self.logger, f\"Updated item with ID {item_id} in '{table_name}'\")\n        return item\n    \n    def delete(self, table_name, item_id):\n        \"\"\"Delete an item from a table.\"\"\"\n        if table_name not in self.data or item_id not in self.data[table_name]:\n            Logger.warning(self.logger, f\"Cannot delete: Item with ID {item_id} not found in '{table_name}'\")\n            return False\n        \n        del self.data[table_name][item_id]\n        Logger.info(self.logger, f\"Deleted item with ID {item_id} from '{table_name}'\")\n        return True\n    \n    def query(self, table_name, filter_func):\n        \"\"\"Query items from a table using a filter function.\"\"\"\n        if table_name not in self.data:\n            Logger.warning(self.logger, f\"Table '{table_name}' not found for query\")\n            return []\n        \n        items = list(self.data[table_name].values())\n        filtered_items = [item for item in items if filter_func(item)]\n        Logger.debug(self.logger, f\"Query returned {len(filtered_items)} items from '{table_name}'\")\n        return filtered_items\n\n# Create a singleton database instance\ndb = InMemoryDatabase()\n"
  },
  {
    "path": "codebase-architectures/layered-architecture/main.py",
    "content": "#!/usr/bin/env -S uv run --script\n\n# /// script\n# dependencies = [\n# ]\n# ///\n\n\"\"\"\nMain application entry point for the Layered Architecture example.\n\"\"\"\n\nfrom api.category_api import CategoryAPI\nfrom api.product_api import ProductAPI\nfrom utils.logger import app_logger, Logger\n\ndef display_header(text):\n    \"\"\"Display a header with the given text.\"\"\"\n    print(\"\\n\" + \"=\" * 50)\n    print(f\" {text}\")\n    print(\"=\" * 50)\n\ndef display_result(result):\n    \"\"\"Display a result.\"\"\"\n    if result.get(\"success\"):\n        print(\"✅ \" + result.get(\"message\", \"Operation successful\"))\n        \n        if \"data\" in result:\n            data = result[\"data\"]\n            if isinstance(data, list):\n                for item in data:\n                    print_item(item)\n            else:\n                print_item(data)\n    else:\n        print(\"❌ \" + result.get(\"message\", \"Operation failed\"))\n\ndef print_item(item):\n    \"\"\"Print an item.\"\"\"\n    if isinstance(item, dict):\n        for key, value in item.items():\n            if key not in [\"created_at\", \"updated_at\"]:\n                print(f\"  {key}: {value}\")\n        print()\n\ndef main():\n    \"\"\"Run the application.\"\"\"\n    Logger.info(app_logger, \"Starting Layered Architecture Example\")\n    \n    display_header(\"Layered Architecture Example\")\n    \n    # Create categories\n    display_header(\"Creating Categories\")\n    \n    electronics_result = CategoryAPI.create_category(\"Electronics\", \"Electronic devices and gadgets\")\n    display_result(electronics_result)\n    \n    books_result = CategoryAPI.create_category(\"Books\", \"Books and e-books\")\n    display_result(books_result)\n    \n    clothing_result = CategoryAPI.create_category(\"Clothing\", \"Apparel and accessories\")\n    display_result(clothing_result)\n    \n    # Try to create a duplicate category\n    duplicate_result = 
CategoryAPI.create_category(\"Electronics\", \"Duplicate category\")\n    display_result(duplicate_result)\n    \n    # Get all categories\n    display_header(\"All Categories\")\n    categories_result = CategoryAPI.get_all_categories()\n    display_result(categories_result)\n    \n    # Create products\n    display_header(\"Creating Products\")\n    \n    # Get category IDs\n    categories = categories_result[\"data\"]\n    electronics_id = next((c[\"id\"] for c in categories if c[\"name\"] == \"Electronics\"), None)\n    books_id = next((c[\"id\"] for c in categories if c[\"name\"] == \"Books\"), None)\n    \n    # Create products\n    laptop_result = ProductAPI.create_product(\n        \"Laptop\", \n        999.99, \n        electronics_id, \n        \"High-performance laptop\", \n        \"TECH-001\"\n    )\n    display_result(laptop_result)\n    \n    phone_result = ProductAPI.create_product(\n        \"Smartphone\", \n        499.99, \n        electronics_id, \n        \"Latest smartphone model\", \n        \"TECH-002\"\n    )\n    display_result(phone_result)\n    \n    book_result = ProductAPI.create_product(\n        \"Programming Book\", \n        29.99, \n        books_id, \n        \"Learn programming with this book\", \n        \"BOOK-001\"\n    )\n    display_result(book_result)\n    \n    # Try to create a product with invalid price\n    invalid_result = ProductAPI.create_product(\n        \"Invalid Product\", \n        \"not-a-price\", \n        electronics_id\n    )\n    display_result(invalid_result)\n    \n    # Get products by category\n    display_header(\"Electronics Products\")\n    electronics_products = ProductAPI.get_products_by_category(electronics_id)\n    display_result(electronics_products)\n    \n    # Update a product\n    display_header(\"Updating a Product\")\n    if laptop_result.get(\"success\") and \"data\" in laptop_result:\n        laptop_id = laptop_result[\"data\"][\"id\"]\n        update_result = 
ProductAPI.update_product(\n            laptop_id,\n            price=899.99,\n            description=\"High-performance laptop with discount\"\n        )\n        display_result(update_result)\n    \n    # Try to delete a category with products\n    display_header(\"Trying to Delete a Category with Products\")\n    delete_result = CategoryAPI.delete_category(electronics_id)\n    display_result(delete_result)\n    \n    # Delete a product\n    display_header(\"Deleting a Product\")\n    if phone_result.get(\"success\") and \"data\" in phone_result:\n        phone_id = phone_result[\"data\"][\"id\"]\n        delete_product_result = ProductAPI.delete_product(phone_id)\n        display_result(delete_product_result)\n    \n    # Get all products\n    display_header(\"All Remaining Products\")\n    all_products = ProductAPI.get_all_products()\n    display_result(all_products)\n    \n    Logger.info(app_logger, \"Layered Architecture Example completed\")\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "codebase-architectures/layered-architecture/models/category.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nCategory model definition.\n\"\"\"\n\nfrom datetime import datetime\n\nclass Category:\n    \"\"\"Category model representing a product category.\"\"\"\n    \n    def __init__(self, name, description=None, id=None):\n        \"\"\"Initialize a category.\"\"\"\n        self.id = id\n        self.name = name\n        self.description = description\n        self.created_at = datetime.now().isoformat()\n        self.updated_at = self.created_at\n    \n    def to_dict(self):\n        \"\"\"Convert category to dictionary.\"\"\"\n        return {\n            \"id\": self.id,\n            \"name\": self.name,\n            \"description\": self.description,\n            \"created_at\": self.created_at,\n            \"updated_at\": self.updated_at\n        }\n    \n    @classmethod\n    def from_dict(cls, data):\n        \"\"\"Create a category from dictionary.\"\"\"\n        category = cls(\n            name=data[\"name\"],\n            description=data.get(\"description\"),\n            id=data.get(\"id\")\n        )\n        category.created_at = data.get(\"created_at\", category.created_at)\n        category.updated_at = data.get(\"updated_at\", category.updated_at)\n        return category\n"
  },
  {
    "path": "codebase-architectures/layered-architecture/models/product.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nProduct model definition.\n\"\"\"\n\nfrom datetime import datetime\n\nclass Product:\n    \"\"\"Product model representing a product in the catalog.\"\"\"\n    \n    def __init__(self, name, price, category_id=None, description=None, sku=None, id=None):\n        \"\"\"Initialize a product.\"\"\"\n        self.id = id\n        self.name = name\n        self.price = price\n        self.category_id = category_id\n        self.description = description\n        self.sku = sku\n        self.created_at = datetime.now().isoformat()\n        self.updated_at = self.created_at\n    \n    def to_dict(self):\n        \"\"\"Convert product to dictionary.\"\"\"\n        return {\n            \"id\": self.id,\n            \"name\": self.name,\n            \"price\": self.price,\n            \"category_id\": self.category_id,\n            \"description\": self.description,\n            \"sku\": self.sku,\n            \"created_at\": self.created_at,\n            \"updated_at\": self.updated_at\n        }\n    \n    @classmethod\n    def from_dict(cls, data):\n        \"\"\"Create a product from dictionary.\"\"\"\n        product = cls(\n            name=data[\"name\"],\n            price=data[\"price\"],\n            category_id=data.get(\"category_id\"),\n            description=data.get(\"description\"),\n            sku=data.get(\"sku\"),\n            id=data.get(\"id\")\n        )\n        product.created_at = data.get(\"created_at\", product.created_at)\n        product.updated_at = data.get(\"updated_at\", product.updated_at)\n        return product\n"
  },
  {
    "path": "codebase-architectures/layered-architecture/services/category_service.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nCategory service containing business logic for category management.\n\"\"\"\n\nfrom datetime import datetime\nfrom data.database import db\nfrom models.category import Category\nfrom utils.logger import Logger, app_logger\n\nclass CategoryService:\n    \"\"\"Service for managing categories.\"\"\"\n    \n    @staticmethod\n    def create_category(name, description=None):\n        \"\"\"Create a new category.\"\"\"\n        try:\n            # Validate category name\n            if not name or not isinstance(name, str):\n                raise ValueError(\"Category name is required and must be a string\")\n            \n            # Check if category with same name already exists\n            existing_categories = db.query(\"categories\", lambda c: c[\"name\"].lower() == name.lower())\n            if existing_categories:\n                raise ValueError(f\"Category with name '{name}' already exists\")\n            \n            # Create and save category\n            category = Category(name=name, description=description)\n            saved_category = db.insert(\"categories\", category.to_dict())\n            Logger.info(app_logger, f\"Created category: {name}\")\n            return saved_category\n        except Exception as e:\n            Logger.error(app_logger, f\"Error creating category: {str(e)}\", exc_info=True)\n            raise\n    \n    @staticmethod\n    def get_category(category_id):\n        \"\"\"Get a category by ID.\"\"\"\n        try:\n            category_data = db.get(\"categories\", category_id)\n            if not category_data:\n                Logger.warning(app_logger, f\"Category not found: {category_id}\")\n                return None\n            return category_data\n        except Exception as e:\n            Logger.error(app_logger, f\"Error getting category: {str(e)}\", exc_info=True)\n            raise\n    \n    @staticmethod\n    def get_all_categories():\n        \"\"\"Get all 
categories.\"\"\"\n        try:\n            categories = db.get_all(\"categories\")\n            Logger.info(app_logger, f\"Retrieved {len(categories)} categories\")\n            return categories\n        except Exception as e:\n            Logger.error(app_logger, f\"Error getting all categories: {str(e)}\", exc_info=True)\n            raise\n    \n    @staticmethod\n    def update_category(category_id, name=None, description=None):\n        \"\"\"Update a category.\"\"\"\n        try:\n            # Get existing category\n            category_data = db.get(\"categories\", category_id)\n            if not category_data:\n                Logger.warning(app_logger, f\"Cannot update: Category not found: {category_id}\")\n                return None\n            \n            # Check if new name already exists\n            if name and name != category_data[\"name\"]:\n                existing_categories = db.query(\"categories\", lambda c: c[\"name\"].lower() == name.lower() and c[\"id\"] != category_id)\n                if existing_categories:\n                    raise ValueError(f\"Category with name '{name}' already exists\")\n            \n            # Update fields\n            if name:\n                category_data[\"name\"] = name\n            if description is not None:\n                category_data[\"description\"] = description\n            \n            # Update timestamp\n            category_data[\"updated_at\"] = datetime.now().isoformat()\n            \n            # Save to database\n            updated_category = db.update(\"categories\", category_id, category_data)\n            Logger.info(app_logger, f\"Updated category: {category_id}\")\n            return updated_category\n        except Exception as e:\n            Logger.error(app_logger, f\"Error updating category: {str(e)}\", exc_info=True)\n            raise\n    \n    @staticmethod\n    def delete_category(category_id):\n        \"\"\"Delete a category.\"\"\"\n        try:\n            
# Check if category exists\n            category_data = db.get(\"categories\", category_id)\n            if not category_data:\n                Logger.warning(app_logger, f\"Cannot delete: Category not found: {category_id}\")\n                return False\n            \n            # Check if category has products\n            products = db.query(\"products\", lambda p: p[\"category_id\"] == category_id)\n            if products:\n                raise ValueError(f\"Cannot delete category: {len(products)} products are associated with this category\")\n            \n            # Delete category\n            result = db.delete(\"categories\", category_id)\n            Logger.info(app_logger, f\"Deleted category: {category_id}\")\n            return result\n        except Exception as e:\n            Logger.error(app_logger, f\"Error deleting category: {str(e)}\", exc_info=True)\n            raise\n"
  },
  {
    "path": "codebase-architectures/layered-architecture/services/product_service.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nProduct service containing business logic for product management.\n\"\"\"\n\nfrom datetime import datetime\nfrom data.database import db\nfrom models.product import Product\nfrom utils.logger import Logger, app_logger\n\nclass ProductService:\n    \"\"\"Service for managing products.\"\"\"\n    \n    @staticmethod\n    def create_product(name, price, category_id=None, description=None, sku=None):\n        \"\"\"Create a new product.\"\"\"\n        try:\n            # Validate product data\n            if not name or not isinstance(name, str):\n                raise ValueError(\"Product name is required and must be a string\")\n            \n            try:\n                price = float(price)\n                if price < 0:\n                    raise ValueError()\n            except (ValueError, TypeError):\n                raise ValueError(\"Price must be a positive number\")\n            \n            # Validate category if provided\n            if category_id:\n                category = db.get(\"categories\", category_id)\n                if not category:\n                    raise ValueError(f\"Category with ID {category_id} not found\")\n            \n            # Validate SKU if provided\n            if sku:\n                existing_products = db.query(\"products\", lambda p: p[\"sku\"] == sku)\n                if existing_products:\n                    raise ValueError(f\"Product with SKU '{sku}' already exists\")\n            \n            # Create and save product\n            product = Product(\n                name=name,\n                price=price,\n                category_id=category_id,\n                description=description,\n                sku=sku\n            )\n            saved_product = db.insert(\"products\", product.to_dict())\n            Logger.info(app_logger, f\"Created product: {name}\")\n            return saved_product\n        except Exception as e:\n            
Logger.error(app_logger, f\"Error creating product: {str(e)}\", exc_info=True)\n            raise\n    \n    @staticmethod\n    def get_product(product_id):\n        \"\"\"Get a product by ID.\"\"\"\n        try:\n            product_data = db.get(\"products\", product_id)\n            if not product_data:\n                Logger.warning(app_logger, f\"Product not found: {product_id}\")\n                return None\n            return product_data\n        except Exception as e:\n            Logger.error(app_logger, f\"Error getting product: {str(e)}\", exc_info=True)\n            raise\n    \n    @staticmethod\n    def get_by_sku(sku):\n        \"\"\"Get a product by SKU.\"\"\"\n        try:\n            products = db.query(\"products\", lambda p: p[\"sku\"] == sku)\n            if not products:\n                Logger.warning(app_logger, f\"Product with SKU '{sku}' not found\")\n                return None\n            return products[0]\n        except Exception as e:\n            Logger.error(app_logger, f\"Error getting product by SKU: {str(e)}\", exc_info=True)\n            raise\n    \n    @staticmethod\n    def get_all_products():\n        \"\"\"Get all products.\"\"\"\n        try:\n            products = db.get_all(\"products\")\n            Logger.info(app_logger, f\"Retrieved {len(products)} products\")\n            return products\n        except Exception as e:\n            Logger.error(app_logger, f\"Error getting all products: {str(e)}\", exc_info=True)\n            raise\n    \n    @staticmethod\n    def get_products_by_category(category_id):\n        \"\"\"Get all products in a category.\"\"\"\n        try:\n            products = db.query(\"products\", lambda p: p[\"category_id\"] == category_id)\n            Logger.info(app_logger, f\"Retrieved {len(products)} products for category {category_id}\")\n            return products\n        except Exception as e:\n            Logger.error(app_logger, f\"Error getting products by category: {str(e)}\", 
exc_info=True)\n            raise\n    \n    @staticmethod\n    def update_product(product_id, name=None, price=None, category_id=None, description=None, sku=None):\n        \"\"\"Update a product.\"\"\"\n        try:\n            # Get existing product\n            product_data = db.get(\"products\", product_id)\n            if not product_data:\n                Logger.warning(app_logger, f\"Cannot update: Product not found: {product_id}\")\n                return None\n            \n            # Validate price if provided\n            if price is not None:\n                try:\n                    price = float(price)\n                    if price < 0:\n                        raise ValueError()\n                except (ValueError, TypeError):\n                    raise ValueError(\"Price must be a non-negative number\")\n            \n            # Validate category if provided\n            if category_id:\n                category = db.get(\"categories\", category_id)\n                if not category:\n                    raise ValueError(f\"Category with ID {category_id} not found\")\n            \n            # Validate SKU if provided\n            if sku and sku != product_data[\"sku\"]:\n                existing_products = db.query(\"products\", lambda p: p[\"sku\"] == sku and p[\"id\"] != product_id)\n                if existing_products:\n                    raise ValueError(f\"Product with SKU '{sku}' already exists\")\n            \n            # Update fields\n            if name:\n                product_data[\"name\"] = name\n            if price is not None:\n                product_data[\"price\"] = price\n            if category_id is not None:\n                product_data[\"category_id\"] = category_id\n            if description is not None:\n                product_data[\"description\"] = description\n            if sku is not None:\n                product_data[\"sku\"] = sku\n            \n            # Update timestamp\n            
product_data[\"updated_at\"] = datetime.now().isoformat()\n            \n            # Save to database\n            updated_product = db.update(\"products\", product_id, product_data)\n            Logger.info(app_logger, f\"Updated product: {product_id}\")\n            return updated_product\n        except Exception as e:\n            Logger.error(app_logger, f\"Error updating product: {str(e)}\", exc_info=True)\n            raise\n    \n    @staticmethod\n    def delete_product(product_id):\n        \"\"\"Delete a product.\"\"\"\n        try:\n            # Check if product exists\n            product_data = db.get(\"products\", product_id)\n            if not product_data:\n                Logger.warning(app_logger, f\"Cannot delete: Product not found: {product_id}\")\n                return False\n            \n            # Delete product\n            result = db.delete(\"products\", product_id)\n            Logger.info(app_logger, f\"Deleted product: {product_id}\")\n            return result\n        except Exception as e:\n            Logger.error(app_logger, f\"Error deleting product: {str(e)}\", exc_info=True)\n            raise\n"
  },
  {
    "path": "codebase-architectures/layered-architecture/utils/logger.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nLogger utility for the application.\n\"\"\"\n\nimport logging\nfrom datetime import datetime\n\n# Configure logging\nlogging.basicConfig(\n    level=logging.INFO,\n    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'\n)\n\nclass Logger:\n    \"\"\"Logger class for application logging.\"\"\"\n    \n    @staticmethod\n    def get_logger(name):\n        \"\"\"Get a logger instance for the given name.\"\"\"\n        return logging.getLogger(name)\n    \n    @staticmethod\n    def info(logger, message):\n        \"\"\"Log an info message.\"\"\"\n        logger.info(message)\n    \n    @staticmethod\n    def error(logger, message, exc_info=None):\n        \"\"\"Log an error message.\"\"\"\n        logger.error(message, exc_info=exc_info)\n    \n    @staticmethod\n    def warning(logger, message):\n        \"\"\"Log a warning message.\"\"\"\n        logger.warning(message)\n    \n    @staticmethod\n    def debug(logger, message):\n        \"\"\"Log a debug message.\"\"\"\n        logger.debug(message)\n\n# Create a default logger\napp_logger = Logger.get_logger(\"app\")\n"
  },
  {
    "path": "codebase-architectures/pipeline-architecture/README.md",
    "content": "# Pipeline (Sequential Flow) Architecture\n\nThis directory demonstrates a Pipeline Architecture implementation with a simple data processing application.\n\n## Structure\n\n```\npipeline-architecture/\n├── steps/                       # Composable pipeline steps\n│   ├── input_stage.py           # Input parsing and preparation\n│   ├── processing_stage.py      # Core computation or transformation\n│   └── output_stage.py          # Final formatting or response handling\n├── pipeline_manager/            # Pipeline orchestration\n│   ├── pipeline_manager.py      # Base pipeline manager\n│   └── data_pipeline.py         # Data processing pipeline implementation\n└── shared/\n    └── utilities.py             # Common utilities across pipeline\n```\n\nThis architecture follows a more functional approach, where:\n- Steps are composable, independent units that can be mixed and matched\n- Pipeline managers orchestrate the flow between steps\n- Different pipeline implementations can be created for specific use cases\n- Each step focuses on a single responsibility and can be tested in isolation\n\n## Benefits\n\n- Clearly defined linear execution simplifies reasoning and debugging\n- Easy to scale or optimize individual pipeline stages independently\n- Facilitates predictable context management\n\n## Cons\n\n- Rigid linearity limits branching and complex decision-making scenarios\n- Major workflow changes can require extensive pipeline refactoring\n\n## Running the Example\n\n```bash\nuv run main.py\n```\n\nThis example demonstrates a data processing pipeline that:\n1. Reads and validates input data\n2. Processes and transforms the data\n3. Formats and outputs the results\n"
  },
  {
    "path": "codebase-architectures/pipeline-architecture/data/.gitkeep",
    "content": "# This directory will store sample data files\n"
  },
  {
    "path": "codebase-architectures/pipeline-architecture/data/sales_data.json",
    "content": "[\n  {\n    \"id\": \"S001\",\n    \"product\": \"Laptop\",\n    \"category\": \"Electronics\",\n    \"price\": 1299.99,\n    \"quantity\": 5,\n    \"date\": \"2025-01-15\",\n    \"customer\": \"ABC Corp\",\n    \"discount\": 0.1\n  },\n  {\n    \"id\": \"S002\",\n    \"product\": \"Smartphone\",\n    \"category\": \"Electronics\",\n    \"price\": 899.99,\n    \"quantity\": 10,\n    \"date\": \"2025-01-20\",\n    \"customer\": \"XYZ Ltd\",\n    \"discount\": 0.05\n  },\n  {\n    \"id\": \"S003\",\n    \"product\": \"Office Chair\",\n    \"category\": \"Furniture\",\n    \"price\": 249.99,\n    \"quantity\": 8,\n    \"date\": \"2025-01-22\",\n    \"customer\": \"123 Industries\",\n    \"discount\": 0.0\n  },\n  {\n    \"id\": \"S004\",\n    \"product\": \"Desk\",\n    \"category\": \"Furniture\",\n    \"price\": 349.99,\n    \"quantity\": 4,\n    \"date\": \"2025-01-25\",\n    \"customer\": \"ABC Corp\",\n    \"discount\": 0.15\n  },\n  {\n    \"id\": \"S005\",\n    \"product\": \"Monitor\",\n    \"category\": \"Electronics\",\n    \"price\": 499.99,\n    \"quantity\": 12,\n    \"date\": \"2025-01-30\",\n    \"customer\": \"XYZ Ltd\",\n    \"discount\": 0.1\n  },\n  {\n    \"id\": \"S006\",\n    \"product\": \"Printer\",\n    \"category\": \"Electronics\",\n    \"price\": 299.99,\n    \"quantity\": 3,\n    \"date\": \"2025-02-05\",\n    \"customer\": \"123 Industries\",\n    \"discount\": 0.0\n  },\n  {\n    \"id\": \"S007\",\n    \"product\": \"Bookshelf\",\n    \"category\": \"Furniture\",\n    \"price\": 199.99,\n    \"quantity\": 6,\n    \"date\": \"2025-02-10\",\n    \"customer\": \"ABC Corp\",\n    \"discount\": 0.05\n  }\n]"
  },
  {
    "path": "codebase-architectures/pipeline-architecture/main.py",
    "content": "#!/usr/bin/env -S uv run --script\n\n# /// script\n# dependencies = [\n# ]\n# ///\n\n\"\"\"\nMain application entry point for the Pipeline Architecture example.\n\"\"\"\n\nimport os\nimport json\nfrom steps.input_stage import InputStage\nfrom steps.processing_stage import ProcessingStage\nfrom steps.output_stage import OutputStage\nfrom pipeline_manager.data_pipeline import DataProcessingPipeline\nfrom shared.utilities import format_currency, format_percentage\n\ndef create_sample_data():\n    \"\"\"Create sample sales data for the pipeline example.\"\"\"\n    # Create output directory if it doesn't exist\n    os.makedirs(\"./data\", exist_ok=True)\n    \n    # Sample sales data\n    sales_data = [\n        {\n            \"id\": \"S001\",\n            \"product\": \"Laptop\",\n            \"category\": \"Electronics\",\n            \"price\": 1299.99,\n            \"quantity\": 5,\n            \"date\": \"2025-01-15\",\n            \"customer\": \"ABC Corp\",\n            \"discount\": 0.1\n        },\n        {\n            \"id\": \"S002\",\n            \"product\": \"Smartphone\",\n            \"category\": \"Electronics\",\n            \"price\": 899.99,\n            \"quantity\": 10,\n            \"date\": \"2025-01-20\",\n            \"customer\": \"XYZ Ltd\",\n            \"discount\": 0.05\n        },\n        {\n            \"id\": \"S003\",\n            \"product\": \"Office Chair\",\n            \"category\": \"Furniture\",\n            \"price\": 249.99,\n            \"quantity\": 8,\n            \"date\": \"2025-01-22\",\n            \"customer\": \"123 Industries\",\n            \"discount\": 0.0\n        },\n        {\n            \"id\": \"S004\",\n            \"product\": \"Desk\",\n            \"category\": \"Furniture\",\n            \"price\": 349.99,\n            \"quantity\": 4,\n            \"date\": \"2025-01-25\",\n            \"customer\": \"ABC Corp\",\n            \"discount\": 0.15\n        },\n        {\n            
\"id\": \"S005\",\n            \"product\": \"Monitor\",\n            \"category\": \"Electronics\",\n            \"price\": 499.99,\n            \"quantity\": 12,\n            \"date\": \"2025-01-30\",\n            \"customer\": \"XYZ Ltd\",\n            \"discount\": 0.1\n        },\n        {\n            \"id\": \"S006\",\n            \"product\": \"Printer\",\n            \"category\": \"Electronics\",\n            \"price\": 299.99,\n            \"quantity\": 3,\n            \"date\": \"2025-02-05\",\n            \"customer\": \"123 Industries\",\n            \"discount\": 0.0\n        },\n        {\n            \"id\": \"S007\",\n            \"product\": \"Bookshelf\",\n            \"category\": \"Furniture\",\n            \"price\": 199.99,\n            \"quantity\": 6,\n            \"date\": \"2025-02-10\",\n            \"customer\": \"ABC Corp\",\n            \"discount\": 0.05\n        }\n    ]\n    \n    # Save to file\n    with open(\"./data/sales_data.json\", \"w\") as file:\n        json.dump(sales_data, file, indent=2)\n    \n    print(f\"Created sample data file: ./data/sales_data.json\")\n    return \"./data/sales_data.json\"\n\ndef main():\n    \"\"\"Run the pipeline architecture example.\"\"\"\n    print(\"\\n===== Pipeline Architecture Example =====\")\n    \n    # Create sample data\n    data_file = create_sample_data()\n    \n    # Create pipeline stages\n    input_stage = InputStage()\n    processing_stage = ProcessingStage()\n    output_stage = OutputStage()\n    \n    # Create and configure pipeline\n    pipeline = DataProcessingPipeline(\"Sales Data Analysis Pipeline\")\n    \n    # Add stages\n    pipeline.add_stage(\"input\", input_stage)\n    pipeline.add_stage(\"processing\", processing_stage)\n    pipeline.add_stage(\"output\", output_stage)\n    \n    # Configure input\n    pipeline.configure_input(\n        source=data_file,\n        source_type=\"json\",\n        required_fields=[\"id\", \"product\", \"price\", \"quantity\"]\n    
)\n    \n    # Configure processing\n    pipeline.configure_processing({\n        \"calculate_statistics\": True,\n        \"numeric_fields\": [\"price\", \"quantity\", \"discount\"],\n        \"filters\": [\n            {\n                \"filter_func\": lambda item: item[\"price\"] * item[\"quantity\"] > 1000,\n                \"description\": \"High-value sales (>$1000)\"\n            }\n        ],\n        \"transformations\": {\n            \"price\": lambda price: format_currency(price),\n            \"discount\": lambda discount: format_percentage(discount)\n        },\n        \"transformation_description\": \"Format price as currency and discount as percentage\"\n    })\n    \n    # Configure output\n    pipeline.configure_output({\n        \"format_summary\": True,\n        \"format_detailed\": True,\n        \"print_results\": True,\n        \"print_output_type\": \"summary\",\n        \"save_to_file\": [\n            {\n                \"format\": \"json\",\n                \"dir\": \"./output\",\n                \"filename\": \"sales_analysis.json\"\n            }\n        ]\n    })\n    \n    # Run the pipeline\n    result = pipeline.run()\n    \n    print(\"\\n===== Pipeline Execution Complete =====\")\n    print(f\"Pipeline status: {result['metadata']['status']}\")\n    print(f\"Execution time: {result['metadata']['execution_time_seconds']:.2f} seconds\")\n    \n    # Show output file location if saved\n    if \"stages\" in result and len(result[\"stages\"]) > 0:\n        output_stage_name = result[\"stages\"][-1][\"name\"]\n        if output_stage_name in pipeline.results:\n            output_result = pipeline.results[output_stage_name]\n            if \"metadata\" in output_result and \"output_files\" in output_result[\"metadata\"]:\n                print(\"\\nOutput files:\")\n                for output_file in output_result[\"metadata\"][\"output_files\"]:\n                    print(f\"- {output_file['path']} ({output_file['format']})\")\n\nif 
__name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "codebase-architectures/pipeline-architecture/output/.gitkeep",
    "content": "# This directory will store output files generated by the pipeline\n"
  },
  {
    "path": "codebase-architectures/pipeline-architecture/output/sales_analysis.json",
    "content": "{\n  \"report_type\": \"detailed\",\n  \"generated_at\": \"2025-03-17T14:25:07.162838\",\n  \"data_source\": \"./data/sales_data.json\",\n  \"record_count\": 6,\n  \"data\": [\n    {\n      \"id\": \"S001\",\n      \"product\": \"Laptop\",\n      \"category\": \"Electronics\",\n      \"price\": \"$1299.99\",\n      \"quantity\": 5,\n      \"date\": \"2025-01-15\",\n      \"customer\": \"ABC Corp\",\n      \"discount\": \"10.0%\"\n    },\n    {\n      \"id\": \"S002\",\n      \"product\": \"Smartphone\",\n      \"category\": \"Electronics\",\n      \"price\": \"$899.99\",\n      \"quantity\": 10,\n      \"date\": \"2025-01-20\",\n      \"customer\": \"XYZ Ltd\",\n      \"discount\": \"5.0%\"\n    },\n    {\n      \"id\": \"S003\",\n      \"product\": \"Office Chair\",\n      \"category\": \"Furniture\",\n      \"price\": \"$249.99\",\n      \"quantity\": 8,\n      \"date\": \"2025-01-22\",\n      \"customer\": \"123 Industries\",\n      \"discount\": \"0.0%\"\n    },\n    {\n      \"id\": \"S004\",\n      \"product\": \"Desk\",\n      \"category\": \"Furniture\",\n      \"price\": \"$349.99\",\n      \"quantity\": 4,\n      \"date\": \"2025-01-25\",\n      \"customer\": \"ABC Corp\",\n      \"discount\": \"15.0%\"\n    },\n    {\n      \"id\": \"S005\",\n      \"product\": \"Monitor\",\n      \"category\": \"Electronics\",\n      \"price\": \"$499.99\",\n      \"quantity\": 12,\n      \"date\": \"2025-01-30\",\n      \"customer\": \"XYZ Ltd\",\n      \"discount\": \"10.0%\"\n    },\n    {\n      \"id\": \"S007\",\n      \"product\": \"Bookshelf\",\n      \"category\": \"Furniture\",\n      \"price\": \"$199.99\",\n      \"quantity\": 6,\n      \"date\": \"2025-02-10\",\n      \"customer\": \"ABC Corp\",\n      \"discount\": \"5.0%\"\n    }\n  ],\n  \"analysis\": {\n    \"statistics\": {\n      \"price\": {\n        \"count\": 7,\n        \"min\": 199.99,\n        \"max\": 1299.99,\n        \"sum\": 3799.9300000000003,\n        \"mean\": 
542.8471428571429,\n        \"median\": 349.99,\n        \"std_dev\": 408.68546527104377\n      },\n      \"quantity\": {\n        \"count\": 7,\n        \"min\": 3,\n        \"max\": 12,\n        \"sum\": 48,\n        \"mean\": 6.857142857142857,\n        \"median\": 6,\n        \"std_dev\": 3.2877840272018797\n      },\n      \"discount\": {\n        \"count\": 7,\n        \"min\": 0.0,\n        \"max\": 0.15,\n        \"sum\": 0.45,\n        \"mean\": 0.0642857142857143,\n        \"median\": 0.05,\n        \"std_dev\": 0.05563486402641868\n      }\n    }\n  },\n  \"processing_info\": {\n    \"steps\": [\n      \"calculate_statistics\",\n      \"filter_data\",\n      \"transform_fields\"\n    ],\n    \"filters\": [\n      {\n        \"description\": \"High-value sales (>$1000)\",\n        \"original_count\": 7,\n        \"filtered_count\": 6,\n        \"removed_count\": 1\n      }\n    ],\n    \"transformations\": [\n      {\n        \"description\": \"Format price as currency and discount as percentage\",\n        \"fields_transformed\": [\n          \"price\",\n          \"discount\"\n        ]\n      }\n    ],\n    \"processing_time_seconds\": 0.000299\n  }\n}"
  },
  {
    "path": "codebase-architectures/pipeline-architecture/pipeline_manager/data_pipeline.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nData processing pipeline implementation for the pipeline architecture.\nThis module provides a specific implementation of the pipeline manager for data processing.\n\"\"\"\n\nfrom pipeline_manager.pipeline_manager import PipelineManager\n\nclass DataProcessingPipeline(PipelineManager):\n    \"\"\"Specific implementation of a data processing pipeline.\"\"\"\n    \n    def __init__(self, name=\"Data Processing Pipeline\"):\n        \"\"\"Initialize the data processing pipeline.\"\"\"\n        super().__init__(name)\n    \n    def _execute_first_stage(self, input_stage):\n        \"\"\"Execute the input stage of the pipeline.\"\"\"\n        # This implementation assumes the input stage has load_data and validate_data methods\n        result = input_stage.load_data(self.input_source, self.input_source_type)\n        \n        if result[\"metadata\"][\"status\"] != \"error\":\n            if hasattr(self, \"required_fields\"):\n                result = input_stage.validate_data(required_fields=self.required_fields)\n        \n        return result\n    \n    def _execute_stage(self, stage_instance, previous_result):\n        \"\"\"Execute a stage with the result from the previous stage.\"\"\"\n        # Determine which stage we're executing based on the instance type\n        if hasattr(stage_instance, \"process\"):\n            # Processing stage\n            result = stage_instance.process(previous_result)\n            \n            # Execute additional processing methods if configured\n            if hasattr(self, \"processing_config\"):\n                config = self.processing_config\n                \n                # Calculate statistics if configured\n                if config.get(\"calculate_statistics\"):\n                    result = stage_instance.calculate_statistics(\n                        numeric_fields=config.get(\"numeric_fields\")\n                    )\n                \n                # Apply 
filters if configured\n                if \"filters\" in config:\n                    for filter_config in config[\"filters\"]:\n                        result = stage_instance.filter_data(\n                            filter_config[\"filter_func\"],\n                            filter_config.get(\"description\")\n                        )\n                \n                # Apply transformations if configured\n                if \"transformations\" in config:\n                    result = stage_instance.transform_fields(\n                        config[\"transformations\"],\n                        config.get(\"transformation_description\")\n                    )\n            \n            # Finalize the processing stage\n            result = stage_instance.finalize()\n            \n        elif hasattr(stage_instance, \"prepare\"):\n            # Output stage\n            result = stage_instance.prepare(previous_result)\n            \n            # Execute additional output methods if configured\n            if hasattr(self, \"output_config\"):\n                config = self.output_config\n                \n                # Format as summary if configured\n                if config.get(\"format_summary\", False):\n                    result = stage_instance.format_as_summary()\n                \n                # Format as detailed report if configured\n                if config.get(\"format_detailed\", False):\n                    result = stage_instance.format_as_detailed_report()\n                \n                # Save to file if configured\n                if \"save_to_file\" in config:\n                    for save_config in config[\"save_to_file\"]:\n                        result = stage_instance.save_to_file(\n                            output_format=save_config.get(\"format\", \"json\"),\n                            output_dir=save_config.get(\"dir\", \"./output\"),\n                            filename=save_config.get(\"filename\")\n                
        )\n                \n                # Print results if configured\n                if config.get(\"print_results\"):\n                    result = stage_instance.print_results(\n                        output_type=config.get(\"print_output_type\", \"summary\")\n                    )\n            \n            # Finalize the output stage\n            result = stage_instance.finalize()\n            \n        else:\n            # Unknown stage type\n            raise ValueError(f\"Unknown stage type: {type(stage_instance).__name__}\")\n        \n        return result\n    \n    def configure_input(self, source, source_type=\"json\", required_fields=None):\n        \"\"\"\n        Configure the input stage.\n        \n        Args:\n            source: Path to the data file or raw data\n            source_type: Type of data source (json, csv, raw)\n            required_fields: List of required field names for validation\n        \"\"\"\n        self.input_source = source\n        self.input_source_type = source_type\n        if required_fields:\n            self.required_fields = required_fields\n    \n    def configure_processing(self, config):\n        \"\"\"\n        Configure the processing stage.\n        \n        Args:\n            config: Dictionary with processing configuration\n        \"\"\"\n        self.processing_config = config\n    \n    def configure_output(self, config):\n        \"\"\"\n        Configure the output stage.\n        \n        Args:\n            config: Dictionary with output configuration\n        \"\"\"\n        self.output_config = config\n"
  },
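The `_execute_stage` dispatcher above drives the processing and output stages from plain dictionaries. A sketch of what those dictionaries might look like, using exactly the keys the dispatcher reads (`filters`, `transformations`, `format_summary`, `save_to_file`, `print_results`, …). The filter predicate is illustrative, and the shape of the `transformations` value is an assumption, since `transform_fields` is defined outside this excerpt:

```python
# Candidate arguments for configure_processing() / configure_output().
# Key names match those read in _execute_stage(); values are examples.
processing_config = {
    "filters": [
        {
            # Predicate passed to ProcessingStage.filter_data()
            "filter_func": lambda record: record.get("score", 0) > 80,
            "description": "Keep records with a score above 80",
        }
    ],
    # Assumed field -> function mapping; transform_fields() defines the real shape
    "transformations": {"name": str.upper},
    "transformation_description": "Uppercase the name field",
}

output_config = {
    "format_summary": True,
    "format_detailed": False,
    "save_to_file": [
        {"format": "json", "dir": "./output", "filename": None}
    ],
    "print_results": True,
    "print_output_type": "summary",
}
```

Unrecognized keys are simply ignored by the dispatcher, so configs can be extended without breaking older pipelines.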
  {
    "path": "codebase-architectures/pipeline-architecture/pipeline_manager/pipeline_manager.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nPipeline manager for the pipeline architecture.\nThis module coordinates the execution of the pipeline stages.\n\"\"\"\n\nfrom datetime import datetime\n\nclass PipelineManager:\n    \"\"\"Manager for coordinating pipeline stages.\"\"\"\n    \n    def __init__(self, name=\"Data Processing Pipeline\"):\n        \"\"\"Initialize the pipeline manager.\"\"\"\n        self.name = name\n        self.stages = []\n        self.results = {}\n        self.metadata = {\n            \"pipeline_name\": name,\n            \"status\": \"initialized\",\n            \"started_at\": None,\n            \"completed_at\": None,\n            \"errors\": []\n        }\n    \n    def add_stage(self, stage_name, stage_instance):\n        \"\"\"\n        Add a stage to the pipeline.\n        \n        Args:\n            stage_name: Name of the stage\n            stage_instance: Instance of the stage class\n        \"\"\"\n        self.stages.append({\n            \"name\": stage_name,\n            \"instance\": stage_instance,\n            \"status\": \"pending\"\n        })\n    \n    def run(self):\n        \"\"\"\n        Run the pipeline by executing all stages in sequence.\n        \n        Returns:\n            dict: Pipeline results including data and metadata\n        \"\"\"\n        self.metadata[\"started_at\"] = datetime.now().isoformat()\n        self.metadata[\"status\"] = \"running\"\n        \n        print(f\"\\n=== Starting Pipeline: {self.name} ===\")\n        \n        # Execute each stage\n        for i, stage in enumerate(self.stages):\n            stage_name = stage[\"name\"]\n            stage_instance = stage[\"instance\"]\n            \n            print(f\"\\n--- Stage {i+1}: {stage_name} ---\")\n            \n            try:\n                # Execute the stage\n                if i == 0:\n                    # First stage doesn't take input from previous stage\n                    result = 
self._execute_first_stage(stage_instance)\n                else:\n                    # Pass result from previous stage\n                    previous_result = self.results[self.stages[i-1][\"name\"]]\n                    result = self._execute_stage(stage_instance, previous_result)\n                \n                # Store the result\n                self.results[stage_name] = result\n                \n                # Update stage status\n                stage[\"status\"] = result[\"metadata\"][\"status\"]\n                \n                # Check for errors\n                if result[\"metadata\"][\"status\"] in [\"error\", \"skipped\"]:\n                    print(f\"Stage {stage_name} {result['metadata']['status']}\")\n                    for error in result[\"metadata\"].get(\"errors\", []):\n                        print(f\"  Error: {error}\")\n                    \n                    # Add errors to pipeline metadata\n                    self.metadata[\"errors\"].append({\n                        \"stage\": stage_name,\n                        \"errors\": result[\"metadata\"].get(\"errors\", [])\n                    })\n                else:\n                    print(f\"Stage {stage_name} completed successfully\")\n            \n            except Exception as e:\n                # Handle unexpected errors\n                error_message = f\"Unexpected error in stage {stage_name}: {str(e)}\"\n                print(f\"  Error: {error_message}\")\n                \n                # Update stage status\n                stage[\"status\"] = \"error\"\n                \n                # Add error to pipeline metadata\n                self.metadata[\"errors\"].append({\n                    \"stage\": stage_name,\n                    \"errors\": [error_message]\n                })\n                \n                # Record an error result so the next stage's lookup of\n                # self.results[previous_stage_name] does not raise KeyError\n                self.results[stage_name] = {\n                    \"data\": None,\n                    \"metadata\": {\"status\": \"error\", \"errors\": [error_message]}\n                }\n        \n        # Update pipeline status\n        self.metadata[\"completed_at\"] = datetime.now().isoformat()\n        if self.metadata[\"errors\"]:\n            
self.metadata[\"status\"] = \"completed_with_errors\"\n        else:\n            self.metadata[\"status\"] = \"completed\"\n        \n        # Calculate total execution time\n        start_time = datetime.fromisoformat(self.metadata[\"started_at\"])\n        end_time = datetime.fromisoformat(self.metadata[\"completed_at\"])\n        execution_time = (end_time - start_time).total_seconds()\n        self.metadata[\"execution_time_seconds\"] = execution_time\n        \n        print(f\"\\n=== Pipeline {self.name} {self.metadata['status']} ===\")\n        print(f\"Total execution time: {execution_time:.2f} seconds\")\n        \n        return self._create_pipeline_result()\n    \n    def _execute_first_stage(self, stage_instance):\n        \"\"\"Execute the first stage of the pipeline.\"\"\"\n        # This method should be overridden in subclasses to provide\n        # specific implementation for the first stage\n        raise NotImplementedError(\"Subclasses must implement _execute_first_stage\")\n    \n    def _execute_stage(self, stage_instance, previous_result):\n        \"\"\"Execute a stage with the result from the previous stage.\"\"\"\n        # This method should be overridden in subclasses to provide\n        # specific implementation for subsequent stages\n        raise NotImplementedError(\"Subclasses must implement _execute_stage\")\n    \n    def get_final_result(self):\n        \"\"\"\n        Get the result from the final stage of the pipeline.\n        \n        Returns:\n            dict: Result from the final stage\n        \"\"\"\n        if not self.stages:\n            return None\n        \n        final_stage_name = self.stages[-1][\"name\"]\n        if final_stage_name in self.results:\n            return self.results[final_stage_name]\n        \n        return None\n    \n    def _create_pipeline_result(self):\n        \"\"\"Create a result dictionary for the entire pipeline.\"\"\"\n        # Get the final result\n        final_result = 
self.get_final_result()\n        \n        # Create pipeline result\n        pipeline_result = {\n            \"metadata\": self.metadata,\n            \"stages\": [{\n                \"name\": stage[\"name\"],\n                \"status\": stage[\"status\"]\n            } for stage in self.stages]\n        }\n        \n        # Add data from final stage if available\n        if final_result and \"data\" in final_result:\n            pipeline_result[\"data\"] = final_result[\"data\"]\n        \n        # Add analysis from final stage if available\n        if final_result and \"analysis\" in final_result:\n            pipeline_result[\"analysis\"] = final_result[\"analysis\"]\n        \n        return pipeline_result\n"
  },
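`PipelineManager.run()` threads each stage's result dict (`{"data": ..., "metadata": {"status": ..., "errors": [...]}}`) into the next stage and folds per-stage failures into an overall status. A condensed, standalone sketch of that control flow; the stage class and function names here are illustrative stand-ins, not the real stage classes:

```python
class DoubleScores:
    """Stub stage that returns the pipeline's result-dict contract."""
    def execute(self, records):
        return {"data": [{**r, "score": r["score"] * 2} for r in records],
                "metadata": {"status": "completed", "errors": []}}

def run_stages(stages, initial_data):
    # Condensed version of PipelineManager.run(): execute stages in order,
    # hand each stage the previous stage's data, collect per-stage errors.
    results, pipeline_errors = {}, []
    previous = {"data": initial_data,
                "metadata": {"status": "completed", "errors": []}}
    for name, stage in stages:
        try:
            previous = stage.execute(previous["data"])
        except Exception as exc:
            # Record the failure but keep going, as run() does
            previous = {"data": None,
                        "metadata": {"status": "error", "errors": [str(exc)]}}
            pipeline_errors.append({"stage": name, "errors": [str(exc)]})
        results[name] = previous
    status = "completed_with_errors" if pipeline_errors else "completed"
    return {"status": status, "results": results}

outcome = run_stages([("double", DoubleScores())], [{"id": 1, "score": 40}])
print(outcome["status"])  # completed
```

In the real class, `_execute_first_stage` and `_execute_stage` are the two hooks a subclass fills in; the loop, status bookkeeping, and timing stay in the base class.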
  {
    "path": "codebase-architectures/pipeline-architecture/shared/utilities.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nShared utilities for the pipeline architecture.\n\"\"\"\n\nimport json\nimport csv\nimport os\nfrom datetime import datetime\n\ndef load_json_file(file_path):\n    \"\"\"Load data from a JSON file.\"\"\"\n    try:\n        with open(file_path, 'r') as file:\n            return json.load(file)\n    except FileNotFoundError:\n        raise ValueError(f\"File not found: {file_path}\")\n    except json.JSONDecodeError:\n        raise ValueError(f\"Invalid JSON format in file: {file_path}\")\n\ndef save_json_file(data, file_path):\n    \"\"\"Save data to a JSON file.\"\"\"\n    directory = os.path.dirname(file_path)\n    if directory and not os.path.exists(directory):\n        os.makedirs(directory)\n    \n    with open(file_path, 'w') as file:\n        json.dump(data, file, indent=2)\n\ndef load_csv_file(file_path):\n    \"\"\"Load data from a CSV file.\"\"\"\n    try:\n        with open(file_path, 'r', newline='') as file:\n            reader = csv.DictReader(file)\n            return list(reader)\n    except FileNotFoundError:\n        raise ValueError(f\"File not found: {file_path}\")\n    except Exception as e:\n        raise ValueError(f\"Error reading CSV file {file_path}: {str(e)}\")\n\ndef save_csv_file(data, file_path, fieldnames=None):\n    \"\"\"Save data to a CSV file.\"\"\"\n    if not data:\n        raise ValueError(\"No data to save\")\n    \n    directory = os.path.dirname(file_path)\n    if directory and not os.path.exists(directory):\n        os.makedirs(directory)\n    \n    if fieldnames is None:\n        fieldnames = data[0].keys()\n    \n    with open(file_path, 'w', newline='') as file:\n        writer = csv.DictWriter(file, fieldnames=fieldnames)\n        writer.writeheader()\n        writer.writerows(data)\n\ndef get_timestamp():\n    \"\"\"Get the current timestamp.\"\"\"\n    return datetime.now().isoformat()\n\ndef validate_required_fields(data, required_fields):\n    \"\"\"Validate that all 
required fields are present in the data.\"\"\"\n    if not isinstance(data, dict):\n        raise ValueError(\"Data must be a dictionary\")\n    \n    missing_fields = [field for field in required_fields if field not in data]\n    if missing_fields:\n        raise ValueError(f\"Missing required fields: {', '.join(missing_fields)}\")\n    \n    return True\n\ndef format_currency(amount):\n    \"\"\"Format a number as currency.\"\"\"\n    try:\n        return f\"${float(amount):.2f}\"\n    except (ValueError, TypeError):\n        return \"N/A\"\n\ndef format_percentage(value):\n    \"\"\"Format a number as percentage.\"\"\"\n    try:\n        return f\"{float(value) * 100:.1f}%\"\n    except (ValueError, TypeError):\n        return \"N/A\"\n\ndef generate_report_filename(prefix=\"report\", extension=\"json\"):\n    \"\"\"Generate a filename for a report with timestamp.\"\"\"\n    timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\n    return f\"{prefix}_{timestamp}.{extension}\"\n"
  },
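The formatting helpers in `shared/utilities.py` fall back to `"N/A"` rather than raising on bad input. Reproduced here verbatim so the example runs standalone:

```python
def format_currency(amount):
    """Format a number as currency."""
    try:
        return f"${float(amount):.2f}"
    except (ValueError, TypeError):
        return "N/A"

def format_percentage(value):
    """Format a number as percentage."""
    try:
        return f"{float(value) * 100:.1f}%"
    except (ValueError, TypeError):
        return "N/A"

print(format_currency(1234.5))   # $1234.50
print(format_percentage(0.125))  # 12.5%
print(format_currency(None))     # N/A
```

The `"N/A"` fallback keeps report generation from failing on sparse or mistyped records, at the cost of silently masking bad values.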
  {
    "path": "codebase-architectures/pipeline-architecture/steps/input_stage.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nInput stage for the pipeline architecture.\nThis stage is responsible for loading and validating input data.\n\"\"\"\n\nimport os\nimport json\nfrom shared.utilities import load_json_file, load_csv_file, validate_required_fields\n\nclass InputStage:\n    \"\"\"Input stage for data processing pipeline.\"\"\"\n    \n    def __init__(self):\n        \"\"\"Initialize the input stage.\"\"\"\n        self.data = None\n        self.metadata = {\n            \"stage\": \"input\",\n            \"status\": \"initialized\",\n            \"errors\": []\n        }\n    \n    def load_data(self, source, source_type=\"json\"):\n        \"\"\"\n        Load data from the specified source.\n        \n        Args:\n            source: Path to the data file or raw data\n            source_type: Type of data source (json, csv, raw)\n        \n        Returns:\n            dict: Stage result with data and metadata\n        \"\"\"\n        try:\n            self.metadata[\"source\"] = source\n            self.metadata[\"source_type\"] = source_type\n            \n            # Load data based on source type\n            if source_type == \"json\":\n                if isinstance(source, str) and os.path.exists(source):\n                    self.data = load_json_file(source)\n                elif isinstance(source, str):\n                    self.data = json.loads(source)\n                else:\n                    self.data = source\n            elif source_type == \"csv\":\n                self.data = load_csv_file(source)\n            elif source_type == \"raw\":\n                self.data = source\n            else:\n                raise ValueError(f\"Unsupported source type: {source_type}\")\n            \n            self.metadata[\"status\"] = \"data_loaded\"\n            self.metadata[\"record_count\"] = len(self.data) if isinstance(self.data, list) else 1\n            \n            return self._create_result()\n        except 
Exception as e:\n            self.metadata[\"status\"] = \"error\"\n            self.metadata[\"errors\"].append(str(e))\n            return self._create_result()\n    \n    def validate_data(self, schema=None, required_fields=None):\n        \"\"\"\n        Validate the loaded data against a schema or required fields.\n        \n        Args:\n            schema: Schema definition for validation\n            required_fields: List of required field names\n        \n        Returns:\n            dict: Stage result with data and metadata\n        \"\"\"\n        if self.data is None:\n            self.metadata[\"status\"] = \"error\"\n            self.metadata[\"errors\"].append(\"No data loaded to validate\")\n            return self._create_result()\n        \n        try:\n            validation_errors = []\n            \n            # Validate required fields if specified\n            if required_fields:\n                if isinstance(self.data, list):\n                    for i, item in enumerate(self.data):\n                        try:\n                            validate_required_fields(item, required_fields)\n                        except ValueError as e:\n                            validation_errors.append(f\"Record {i}: {str(e)}\")\n                else:\n                    try:\n                        validate_required_fields(self.data, required_fields)\n                    except ValueError as e:\n                        validation_errors.append(str(e))\n            \n            # Update metadata based on validation results\n            if validation_errors:\n                self.metadata[\"status\"] = \"validation_failed\"\n                self.metadata[\"errors\"].extend(validation_errors)\n            else:\n                self.metadata[\"status\"] = \"validated\"\n            \n            return self._create_result()\n        except Exception as e:\n            self.metadata[\"status\"] = \"error\"\n            
self.metadata[\"errors\"].append(f\"Validation error: {str(e)}\")\n            return self._create_result()\n    \n    def transform_data(self, transform_func):\n        \"\"\"\n        Apply a transformation function to the data.\n        \n        Args:\n            transform_func: Function to transform the data\n        \n        Returns:\n            dict: Stage result with data and metadata\n        \"\"\"\n        if self.data is None:\n            self.metadata[\"status\"] = \"error\"\n            self.metadata[\"errors\"].append(\"No data loaded to transform\")\n            return self._create_result()\n        \n        try:\n            self.data = transform_func(self.data)\n            self.metadata[\"status\"] = \"transformed\"\n            return self._create_result()\n        except Exception as e:\n            self.metadata[\"status\"] = \"error\"\n            self.metadata[\"errors\"].append(f\"Transformation error: {str(e)}\")\n            return self._create_result()\n    \n    def _create_result(self):\n        \"\"\"Create a result dictionary with data and metadata.\"\"\"\n        return {\n            \"data\": self.data,\n            \"metadata\": self.metadata\n        }\n"
  },
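`InputStage.validate_data` loops `validate_required_fields` over list data and collects per-record error strings instead of failing fast. A standalone sketch of that loop, with the validator reproduced from `shared/utilities.py` and hypothetical sample records:

```python
def validate_required_fields(data, required_fields):
    """Validate that all required fields are present in the data."""
    if not isinstance(data, dict):
        raise ValueError("Data must be a dictionary")
    missing_fields = [field for field in required_fields if field not in data]
    if missing_fields:
        raise ValueError(f"Missing required fields: {', '.join(missing_fields)}")
    return True

# Hypothetical records: the second one is missing "score"
records = [{"id": 1, "score": 91}, {"id": 2}]

validation_errors = []
for i, item in enumerate(records):
    try:
        validate_required_fields(item, ["id", "score"])
    except ValueError as e:
        # Same per-record error format as InputStage.validate_data
        validation_errors.append(f"Record {i}: {e}")

print(validation_errors)  # ['Record 1: Missing required fields: score']
```

Because every record is checked, the stage's `metadata["errors"]` ends up listing all invalid records in one pass rather than stopping at the first.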
  {
    "path": "codebase-architectures/pipeline-architecture/steps/output_stage.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nOutput stage for the pipeline architecture.\nThis stage is responsible for formatting and delivering the final results.\n\"\"\"\n\nimport os\nimport json\nfrom datetime import datetime\nfrom shared.utilities import save_json_file, save_csv_file, generate_report_filename\n\nclass OutputStage:\n    \"\"\"Output stage for formatting and delivering results.\"\"\"\n    \n    def __init__(self):\n        \"\"\"Initialize the output stage.\"\"\"\n        self.data = None\n        self.analysis = None\n        self.metadata = {\n            \"stage\": \"output\",\n            \"status\": \"initialized\",\n            \"errors\": [],\n            \"output_formats\": []\n        }\n    \n    def prepare(self, processing_result):\n        \"\"\"\n        Prepare the output stage with data from the processing stage.\n        \n        Args:\n            processing_result: Result from the processing stage\n        \n        Returns:\n            dict: Stage result with data and metadata\n        \"\"\"\n        # Check if processing stage had errors\n        if processing_result[\"metadata\"][\"status\"] in [\"error\", \"skipped\"]:\n            self.metadata[\"status\"] = \"skipped\"\n            self.metadata[\"errors\"].append(\"Processing stage had errors, output skipped\")\n            return self._create_result()\n        \n        # Get data and metadata from processing stage\n        self.data = processing_result[\"data\"]\n        self.metadata[\"input_metadata\"] = processing_result[\"metadata\"][\"input_metadata\"]\n        self.metadata[\"processing_metadata\"] = processing_result[\"metadata\"]\n        \n        # Get analysis if available\n        if \"analysis\" in processing_result:\n            self.analysis = processing_result[\"analysis\"]\n        \n        # Initialize output\n        self.metadata[\"status\"] = \"preparing\"\n        self.metadata[\"started_at\"] = datetime.now().isoformat()\n        \n     
   return self._create_result()\n    \n    def format_as_summary(self):\n        \"\"\"\n        Format the data as a summary report.\n        \n        Returns:\n            dict: Stage result with data and metadata\n        \"\"\"\n        if self.data is None:\n            self.metadata[\"status\"] = \"error\"\n            self.metadata[\"errors\"].append(\"No data to format\")\n            return self._create_result()\n        \n        try:\n            # Create summary\n            summary = {\n                \"report_type\": \"summary\",\n                \"generated_at\": datetime.now().isoformat(),\n                \"data_source\": self.metadata.get(\"input_metadata\", {}).get(\"source\", \"unknown\"),\n                \"record_count\": len(self.data) if isinstance(self.data, list) else 1\n            }\n            \n            # Add statistics if available\n            if self.analysis and \"statistics\" in self.analysis:\n                summary[\"statistics\"] = self.analysis[\"statistics\"]\n            \n            # Add processing information\n            if \"processing_metadata\" in self.metadata:\n                processing_meta = self.metadata[\"processing_metadata\"]\n                if \"processing_steps\" in processing_meta:\n                    summary[\"processing_steps\"] = processing_meta[\"processing_steps\"]\n                if \"processing_time_seconds\" in processing_meta:\n                    summary[\"processing_time_seconds\"] = processing_meta[\"processing_time_seconds\"]\n            \n            # Store the summary\n            self.summary = summary\n            \n            # Update metadata\n            self.metadata[\"output_formats\"].append(\"summary\")\n            \n            return self._create_result()\n        except Exception as e:\n            self.metadata[\"status\"] = \"error\"\n            self.metadata[\"errors\"].append(f\"Summary formatting error: {str(e)}\")\n            return self._create_result()\n  
  \n    def format_as_detailed_report(self):\n        \"\"\"\n        Format the data as a detailed report.\n        \n        Returns:\n            dict: Stage result with data and metadata\n        \"\"\"\n        if self.data is None:\n            self.metadata[\"status\"] = \"error\"\n            self.metadata[\"errors\"].append(\"No data to format\")\n            return self._create_result()\n        \n        try:\n            # Create detailed report\n            report = {\n                \"report_type\": \"detailed\",\n                \"generated_at\": datetime.now().isoformat(),\n                \"data_source\": self.metadata.get(\"input_metadata\", {}).get(\"source\", \"unknown\"),\n                \"record_count\": len(self.data) if isinstance(self.data, list) else 1,\n                \"data\": self.data\n            }\n            \n            # Add analysis if available\n            if self.analysis:\n                report[\"analysis\"] = self.analysis\n            \n            # Add processing information\n            if \"processing_metadata\" in self.metadata:\n                report[\"processing_info\"] = {\n                    \"steps\": self.metadata[\"processing_metadata\"].get(\"processing_steps\", []),\n                    \"filters\": self.metadata[\"processing_metadata\"].get(\"filters_applied\", []),\n                    \"transformations\": self.metadata[\"processing_metadata\"].get(\"transformations_applied\", []),\n                    \"processing_time_seconds\": self.metadata[\"processing_metadata\"].get(\"processing_time_seconds\")\n                }\n            \n            # Store the detailed report\n            self.detailed_report = report\n            \n            # Update metadata\n            self.metadata[\"output_formats\"].append(\"detailed_report\")\n            \n            return self._create_result()\n        except Exception as e:\n            self.metadata[\"status\"] = \"error\"\n            
self.metadata[\"errors\"].append(f\"Detailed report formatting error: {str(e)}\")\n            return self._create_result()\n    \n    def save_to_file(self, output_format=\"json\", output_dir=\"./output\", filename=None):\n        \"\"\"\n        Save the formatted output to a file.\n        \n        Args:\n            output_format: Format to save (json, csv)\n            output_dir: Directory to save the file\n            filename: Optional filename (generated if not provided)\n        \n        Returns:\n            dict: Stage result with data and metadata\n        \"\"\"\n        try:\n            # Create output directory if it doesn't exist\n            if not os.path.exists(output_dir):\n                os.makedirs(output_dir)\n            \n            # Determine what to save\n            if output_format == \"json\":\n                if hasattr(self, \"detailed_report\"):\n                    data_to_save = self.detailed_report\n                    file_prefix = \"detailed_report\"\n                elif hasattr(self, \"summary\"):\n                    data_to_save = self.summary\n                    file_prefix = \"summary_report\"\n                else:\n                    data_to_save = {\n                        \"data\": self.data,\n                        \"generated_at\": datetime.now().isoformat()\n                    }\n                    file_prefix = \"data_export\"\n                \n                # Generate filename if not provided\n                if not filename:\n                    filename = generate_report_filename(file_prefix, \"json\")\n                \n                # Save to file\n                file_path = os.path.join(output_dir, filename)\n                save_json_file(data_to_save, file_path)\n                \n            elif output_format == \"csv\":\n                # CSV format only works for list data\n                if not isinstance(self.data, list):\n                    raise ValueError(\"CSV output format 
requires list data\")\n                \n                # Generate filename if not provided\n                if not filename:\n                    filename = generate_report_filename(\"data_export\", \"csv\")\n                \n                # Save to file\n                file_path = os.path.join(output_dir, filename)\n                save_csv_file(self.data, file_path)\n            \n            else:\n                raise ValueError(f\"Unsupported output format: {output_format}\")\n            \n            # Update metadata\n            self.metadata[\"output_files\"] = self.metadata.get(\"output_files\", [])\n            self.metadata[\"output_files\"].append({\n                \"format\": output_format,\n                \"path\": file_path,\n                \"filename\": filename\n            })\n            \n            return self._create_result()\n        except Exception as e:\n            self.metadata[\"status\"] = \"error\"\n            self.metadata[\"errors\"].append(f\"File save error: {str(e)}\")\n            return self._create_result()\n    \n    def print_results(self, output_type=\"summary\"):\n        \"\"\"\n        Print the results to the console.\n        \n        Args:\n            output_type: Type of output to print (summary, detailed)\n        \n        Returns:\n            dict: Stage result with data and metadata\n        \"\"\"\n        try:\n            if output_type == \"summary\" and hasattr(self, \"summary\"):\n                print(\"\\n===== SUMMARY REPORT =====\")\n                print(f\"Generated at: {self.summary['generated_at']}\")\n                print(f\"Data source: {self.summary['data_source']}\")\n                print(f\"Record count: {self.summary['record_count']}\")\n                \n                if \"statistics\" in self.summary:\n                    print(\"\\n----- Statistics -----\")\n                    for field, stats in self.summary[\"statistics\"].items():\n                        
print(f\"\\n{field}:\")\n                        for stat_name, stat_value in stats.items():\n                            print(f\"  {stat_name}: {stat_value}\")\n                \n                if \"processing_steps\" in self.summary:\n                    print(\"\\n----- Processing Steps -----\")\n                    for step in self.summary[\"processing_steps\"]:\n                        print(f\"- {step}\")\n                \n            elif output_type == \"detailed\" and hasattr(self, \"detailed_report\"):\n                print(\"\\n===== DETAILED REPORT =====\")\n                print(f\"Generated at: {self.detailed_report['generated_at']}\")\n                print(f\"Data source: {self.detailed_report['data_source']}\")\n                print(f\"Record count: {self.detailed_report['record_count']}\")\n                \n                if \"analysis\" in self.detailed_report:\n                    print(\"\\n----- Analysis -----\")\n                    for analysis_type, analysis_data in self.detailed_report[\"analysis\"].items():\n                        print(f\"\\n{analysis_type}:\")\n                        print(json.dumps(analysis_data, indent=2))\n                \n                print(\"\\n----- Data Sample -----\")\n                if isinstance(self.data, list):\n                    sample_size = min(3, len(self.data))\n                    for i in range(sample_size):\n                        print(f\"\\nRecord {i+1}:\")\n                        print(json.dumps(self.data[i], indent=2))\n                else:\n                    print(json.dumps(self.data, indent=2))\n            \n            else:\n                print(\"\\n===== DATA OUTPUT =====\")\n                if isinstance(self.data, list):\n                    print(f\"Record count: {len(self.data)}\")\n                    sample_size = min(3, len(self.data))\n                    print(f\"\\nShowing {sample_size} sample records:\")\n                    for i in 
range(sample_size):\n                        print(f\"\\nRecord {i+1}:\")\n                        print(json.dumps(self.data[i], indent=2))\n                else:\n                    print(json.dumps(self.data, indent=2))\n            \n            # Update metadata\n            self.metadata[\"output_formats\"].append(\"console\")\n            \n            return self._create_result()\n        except Exception as e:\n            self.metadata[\"status\"] = \"error\"\n            self.metadata[\"errors\"].append(f\"Print error: {str(e)}\")\n            return self._create_result()\n    \n    def finalize(self):\n        \"\"\"\n        Finalize the output stage.\n        \n        Returns:\n            dict: Stage result with data and metadata\n        \"\"\"\n        if self.metadata[\"status\"] not in [\"error\", \"skipped\"]:\n            self.metadata[\"status\"] = \"completed\"\n            self.metadata[\"completed_at\"] = datetime.now().isoformat()\n            \n            # Calculate processing time if we have start time\n            if \"started_at\" in self.metadata:\n                start_time = datetime.fromisoformat(self.metadata[\"started_at\"])\n                end_time = datetime.fromisoformat(self.metadata[\"completed_at\"])\n                processing_time = (end_time - start_time).total_seconds()\n                self.metadata[\"processing_time_seconds\"] = processing_time\n        \n        return self._create_result()\n    \n    def _create_result(self):\n        \"\"\"Create a result dictionary with data and metadata.\"\"\"\n        result = {\n            \"data\": self.data,\n            \"metadata\": self.metadata\n        }\n        \n        # Add analysis if available\n        if self.analysis:\n            result[\"analysis\"] = self.analysis\n        \n        # Add formatted outputs if available\n        if hasattr(self, \"summary\"):\n            result[\"summary\"] = self.summary\n        \n        if hasattr(self, 
\"detailed_report\"):\n            result[\"detailed_report\"] = self.detailed_report\n        \n        return result\n"
  },
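`OutputStage.save_to_file` delegates JSON writes to `save_json_file` from `shared/utilities.py`. A round-trip sketch of that path against a temporary directory, with the helper reproduced so the example runs standalone:

```python
import json
import os
import tempfile

def save_json_file(data, file_path):
    """Save data to a JSON file, creating the directory if needed."""
    directory = os.path.dirname(file_path)
    if directory and not os.path.exists(directory):
        os.makedirs(directory)
    with open(file_path, "w") as file:
        json.dump(data, file, indent=2)

with tempfile.TemporaryDirectory() as out_dir:
    # A minimal stand-in for the summary dict format_as_summary() builds
    path = os.path.join(out_dir, "summary_report.json")
    save_json_file({"report_type": "summary", "record_count": 2}, path)
    with open(path) as f:
        loaded = json.load(f)

print(loaded["record_count"])  # 2
```

In the real stage, the filename comes from `generate_report_filename` (timestamped) unless one is passed in, and CSV output is rejected unless `self.data` is a list.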
  {
    "path": "codebase-architectures/pipeline-architecture/steps/processing_stage.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nProcessing stage for the pipeline architecture.\nThis stage is responsible for transforming and analyzing the data.\n\"\"\"\n\nimport statistics\nfrom datetime import datetime\n\nclass ProcessingStage:\n    \"\"\"Processing stage for data transformation and analysis.\"\"\"\n    \n    def __init__(self):\n        \"\"\"Initialize the processing stage.\"\"\"\n        self.data = None\n        self.metadata = {\n            \"stage\": \"processing\",\n            \"status\": \"initialized\",\n            \"errors\": [],\n            \"processing_steps\": []\n        }\n    \n    def process(self, input_result):\n        \"\"\"\n        Process the data from the input stage.\n        \n        Args:\n            input_result: Result from the input stage\n        \n        Returns:\n            dict: Stage result with processed data and metadata\n        \"\"\"\n        # Check if input stage had errors\n        if input_result[\"metadata\"][\"status\"] in [\"error\", \"validation_failed\"]:\n            self.metadata[\"status\"] = \"skipped\"\n            self.metadata[\"errors\"].append(\"Input stage had errors, processing skipped\")\n            return self._create_result()\n        \n        # Get data from input stage\n        self.data = input_result[\"data\"]\n        self.metadata[\"input_metadata\"] = input_result[\"metadata\"]\n        \n        # Initialize processing\n        self.metadata[\"status\"] = \"processing\"\n        self.metadata[\"started_at\"] = datetime.now().isoformat()\n        \n        return self._create_result()\n    \n    def calculate_statistics(self, numeric_fields=None):\n        \"\"\"\n        Calculate statistics for numeric fields in the data.\n        \n        Args:\n            numeric_fields: List of field names to calculate statistics for\n        \n        Returns:\n            dict: Stage result with data and metadata\n        \"\"\"\n        if self.data is None:\n          
  self.metadata[\"status\"] = \"error\"\n            self.metadata[\"errors\"].append(\"No data to process\")\n            return self._create_result()\n        \n        try:\n            # Determine fields to analyze\n            if numeric_fields is None:\n                # Try to automatically detect numeric fields\n                if isinstance(self.data, list) and len(self.data) > 0:\n                    sample = self.data[0]\n                    numeric_fields = [\n                        field for field, value in sample.items()\n                        if isinstance(value, (int, float)) or (\n                            isinstance(value, str) and value.replace('.', '', 1).isdigit()\n                        )\n                    ]\n            \n            # Calculate statistics\n            stats = {}\n            if isinstance(self.data, list) and numeric_fields:\n                for field in numeric_fields:\n                    try:\n                        # Extract numeric values\n                        values = []\n                        for item in self.data:\n                            if field in item:\n                                value = item[field]\n                                if isinstance(value, (int, float)):\n                                    values.append(value)\n                                elif isinstance(value, str) and value.replace('.', '', 1).isdigit():\n                                    values.append(float(value))\n                        \n                        # Calculate statistics if we have values\n                        if values:\n                            field_stats = {\n                                \"count\": len(values),\n                                \"min\": min(values),\n                                \"max\": max(values),\n                                \"sum\": sum(values),\n                                \"mean\": statistics.mean(values),\n                                \"median\": 
statistics.median(values)\n                            }\n                            \n                            # Add standard deviation if we have enough values\n                            if len(values) > 1:\n                                field_stats[\"std_dev\"] = statistics.stdev(values)\n                            \n                            stats[field] = field_stats\n                    except Exception as e:\n                        self.metadata[\"errors\"].append(f\"Error calculating statistics for field '{field}': {str(e)}\")\n            \n            # Add statistics to data\n            if not hasattr(self, \"analysis\"):\n                self.analysis = {}\n            self.analysis[\"statistics\"] = stats\n            \n            # Update metadata\n            self.metadata[\"processing_steps\"].append(\"calculate_statistics\")\n            self.metadata[\"statistics_fields\"] = list(stats.keys())\n            \n            return self._create_result()\n        except Exception as e:\n            self.metadata[\"status\"] = \"error\"\n            self.metadata[\"errors\"].append(f\"Statistics calculation error: {str(e)}\")\n            return self._create_result()\n    \n    def filter_data(self, filter_func, description=None):\n        \"\"\"\n        Filter the data using the provided filter function.\n        \n        Args:\n            filter_func: Function that takes a data item and returns True to keep it\n            description: Description of the filter for metadata\n        \n        Returns:\n            dict: Stage result with data and metadata\n        \"\"\"\n        if self.data is None:\n            self.metadata[\"status\"] = \"error\"\n            self.metadata[\"errors\"].append(\"No data to filter\")\n            return self._create_result()\n        \n        try:\n            original_count = len(self.data) if isinstance(self.data, list) else 1\n            \n            # Apply filter\n            if 
isinstance(self.data, list):\n                self.data = [item for item in self.data if filter_func(item)]\n            else:\n                self.data = self.data if filter_func(self.data) else None\n            \n            # Update metadata\n            filtered_count = len(self.data) if isinstance(self.data, list) else (1 if self.data else 0)\n            filter_info = {\n                \"description\": description or \"Custom filter\",\n                \"original_count\": original_count,\n                \"filtered_count\": filtered_count,\n                \"removed_count\": original_count - filtered_count\n            }\n            \n            if not hasattr(self, \"filters_applied\"):\n                self.filters_applied = []\n            self.filters_applied.append(filter_info)\n            \n            self.metadata[\"processing_steps\"].append(\"filter_data\")\n            self.metadata[\"filters_applied\"] = self.filters_applied\n            \n            return self._create_result()\n        except Exception as e:\n            self.metadata[\"status\"] = \"error\"\n            self.metadata[\"errors\"].append(f\"Filter error: {str(e)}\")\n            return self._create_result()\n    \n    def transform_fields(self, transformations, description=None):\n        \"\"\"\n        Apply transformations to specific fields in the data.\n        \n        Args:\n            transformations: Dict mapping field names to transformation functions\n            description: Description of the transformations for metadata\n        \n        Returns:\n            dict: Stage result with data and metadata\n        \"\"\"\n        if self.data is None:\n            self.metadata[\"status\"] = \"error\"\n            self.metadata[\"errors\"].append(\"No data to transform\")\n            return self._create_result()\n        \n        try:\n            # Apply transformations\n            if isinstance(self.data, list):\n                for item in self.data:\n    
                for field, transform_func in transformations.items():\n                        if field in item:\n                            item[field] = transform_func(item[field])\n            else:\n                for field, transform_func in transformations.items():\n                    if field in self.data:\n                        self.data[field] = transform_func(self.data[field])\n            \n            # Update metadata\n            transform_info = {\n                \"description\": description or \"Field transformations\",\n                \"fields_transformed\": list(transformations.keys())\n            }\n            \n            if not hasattr(self, \"transformations_applied\"):\n                self.transformations_applied = []\n            self.transformations_applied.append(transform_info)\n            \n            self.metadata[\"processing_steps\"].append(\"transform_fields\")\n            self.metadata[\"transformations_applied\"] = self.transformations_applied\n            \n            return self._create_result()\n        except Exception as e:\n            self.metadata[\"status\"] = \"error\"\n            self.metadata[\"errors\"].append(f\"Transformation error: {str(e)}\")\n            return self._create_result()\n    \n    def finalize(self):\n        \"\"\"\n        Finalize the processing stage.\n        \n        Returns:\n            dict: Stage result with data and metadata\n        \"\"\"\n        if self.metadata[\"status\"] not in [\"error\", \"skipped\"]:\n            self.metadata[\"status\"] = \"completed\"\n            self.metadata[\"completed_at\"] = datetime.now().isoformat()\n            \n            # Calculate processing time if we have start time\n            if \"started_at\" in self.metadata:\n                start_time = datetime.fromisoformat(self.metadata[\"started_at\"])\n                end_time = datetime.fromisoformat(self.metadata[\"completed_at\"])\n                processing_time = (end_time - 
start_time).total_seconds()\n                self.metadata[\"processing_time_seconds\"] = processing_time\n        \n        return self._create_result()\n    \n    def _create_result(self):\n        \"\"\"Create a result dictionary with data and metadata.\"\"\"\n        result = {\n            \"data\": self.data,\n            \"metadata\": self.metadata\n        }\n        \n        # Add analysis if available\n        if hasattr(self, \"analysis\"):\n            result[\"analysis\"] = self.analysis\n        \n        return result\n"
  },
  {
    "path": "codebase-architectures/vertical-slice-architecture/README.md",
    "content": "# Vertical Slice Architecture\n\nThis directory demonstrates a Vertical Slice Architecture implementation with a simple task management application.\n\n## Structure\n\n```\nvertical-slice-architecture/\n├── features/\n│   ├── tasks/\n│   │   ├── api.py              # Feature-specific API endpoints\n│   │   ├── service.py          # Core business logic\n│   │   ├── model.py            # Data models/schema\n│   │   └── README.md           # Feature documentation\n│   └── users/\n│       ├── api.py              # Feature-specific API endpoints\n│       ├── service.py          # Core business logic\n│       ├── model.py            # Data models/schema\n│       └── README.md           # Feature documentation\n├── shared/\n│   ├── utils.py                # Shared utilities\n│   └── db.py                   # Shared database connections\n└── main.py                     # Application entry point\n```\n\n## Benefits\n\n- Excellent feature isolation; clear and consistent structure\n- Each feature is independently testable and maintainable\n- Clear feature-level documentation enhances comprehension\n\n## Cons\n\n- Potential for duplicated logic across features\n- Complexity increases when blending features; shared logic must be explicitly managed\n\n## Running the Example\n\n```bash\nuv run main.py\n```\n"
  },
  {
    "path": "codebase-architectures/vertical-slice-architecture/features/projects/README.md",
    "content": "# Projects Feature\n\nThis feature provides functionality for managing projects in the task management system.\n\n## Components\n\n- **model.py**: Defines the Project model with fields like name, description, user_id, and task_ids\n- **service.py**: Contains business logic for project management\n- **api.py**: Provides API endpoints for project operations\n\n## Functionality\n\n- Create, read, update, and delete projects\n- Assign tasks to projects\n- Remove tasks from projects\n- Get all tasks for a specific project\n- Get all projects for a specific user\n\n## Relationships\n\n- Projects are owned by users (one-to-many)\n- Projects can contain multiple tasks (one-to-many)\n"
  },
  {
    "path": "codebase-architectures/vertical-slice-architecture/features/projects/api.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nProject API endpoints.\n\"\"\"\n\nfrom .service import ProjectService\nfrom features.tasks.service import TaskService\n\nclass ProjectAPI:\n    \"\"\"API endpoints for project management.\"\"\"\n    \n    @staticmethod\n    def create_project(name, description=None, user_id=None):\n        \"\"\"Create a new project.\"\"\"\n        project_data = {\n            \"name\": name,\n            \"description\": description,\n            \"user_id\": user_id\n        }\n        return ProjectService.create_project(project_data)\n    \n    @staticmethod\n    def get_project(project_id):\n        \"\"\"Get a project by ID.\"\"\"\n        project = ProjectService.get_project(project_id)\n        if not project:\n            return {\"error\": f\"Project with ID {project_id} not found\"}\n        return project\n    \n    @staticmethod\n    def get_all_projects():\n        \"\"\"Get all projects.\"\"\"\n        return ProjectService.get_all_projects()\n    \n    @staticmethod\n    def get_user_projects(user_id):\n        \"\"\"Get all projects for a specific user.\"\"\"\n        return ProjectService.get_user_projects(user_id)\n    \n    @staticmethod\n    def update_project(project_id, project_data):\n        \"\"\"Update a project.\"\"\"\n        project = ProjectService.update_project(project_id, project_data)\n        if not project:\n            return {\"error\": f\"Project with ID {project_id} not found\"}\n        return project\n    \n    @staticmethod\n    def delete_project(project_id):\n        \"\"\"Delete a project.\"\"\"\n        success = ProjectService.delete_project(project_id)\n        if not success:\n            return {\"error\": f\"Project with ID {project_id} not found\"}\n        return {\"message\": f\"Project with ID {project_id} deleted successfully\"}\n    \n    @staticmethod\n    def add_task_to_project(project_id, task_id):\n        \"\"\"Add a task to a project.\"\"\"\n        success = 
ProjectService.add_task_to_project(project_id, task_id)\n        if not success:\n            return {\"error\": \"Project or task not found\"}\n        return {\"message\": f\"Task added to project successfully\"}\n    \n    @staticmethod\n    def remove_task_from_project(project_id, task_id):\n        \"\"\"Remove a task from a project.\"\"\"\n        success = ProjectService.remove_task_from_project(project_id, task_id)\n        if not success:\n            return {\"error\": \"Project or task not found, or task is not in project\"}\n        return {\"message\": f\"Task removed from project successfully\"}\n    \n    @staticmethod\n    def get_project_tasks(project_id):\n        \"\"\"Get all tasks for a specific project.\"\"\"\n        project = ProjectService.get_project(project_id)\n        if not project:\n            return {\"error\": f\"Project with ID {project_id} not found\"}\n        \n        tasks = ProjectService.get_project_tasks(project_id)\n        return tasks\n"
  },
  {
    "path": "codebase-architectures/vertical-slice-architecture/features/projects/model.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nProject model definition.\n\"\"\"\n\nfrom shared.utils import generate_id, get_timestamp\n\nclass Project:\n    \"\"\"Project model representing a collection of tasks.\"\"\"\n    \n    def __init__(self, name, description=None, user_id=None, id=None):\n        self.id = id or generate_id()\n        self.name = name\n        self.description = description\n        self.user_id = user_id  # Owner of the project\n        self.task_ids = []  # List of task IDs associated with this project\n        self.created_at = get_timestamp()\n        self.updated_at = self.created_at\n        \n    def to_dict(self):\n        \"\"\"Convert project to dictionary.\"\"\"\n        return {\n            \"id\": self.id,\n            \"name\": self.name,\n            \"description\": self.description,\n            \"user_id\": self.user_id,\n            \"task_ids\": self.task_ids,\n            \"created_at\": self.created_at,\n            \"updated_at\": self.updated_at\n        }\n        \n    @classmethod\n    def from_dict(cls, data):\n        \"\"\"Create a project from dictionary.\"\"\"\n        project = cls(\n            name=data[\"name\"],\n            description=data.get(\"description\"),\n            user_id=data.get(\"user_id\"),\n            id=data.get(\"id\")\n        )\n        project.task_ids = data.get(\"task_ids\", [])\n        project.created_at = data.get(\"created_at\", project.created_at)\n        project.updated_at = data.get(\"updated_at\", project.updated_at)\n        return project\n"
  },
  {
    "path": "codebase-architectures/vertical-slice-architecture/features/projects/service.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nProject service containing business logic for project management.\n\"\"\"\n\nfrom shared.db import db\nfrom shared.utils import validate_required_fields, get_timestamp\nfrom .model import Project\n\nclass ProjectService:\n    \"\"\"Service for managing projects.\"\"\"\n    \n    @staticmethod\n    def create_project(project_data):\n        \"\"\"Create a new project.\"\"\"\n        validate_required_fields(project_data, [\"name\"])\n        project = Project(**project_data)\n        db.insert(\"projects\", project.id, project.to_dict())\n        return project.to_dict()\n    \n    @staticmethod\n    def get_project(project_id):\n        \"\"\"Get a project by ID.\"\"\"\n        project_data = db.get(\"projects\", project_id)\n        if not project_data:\n            return None\n        return project_data\n    \n    @staticmethod\n    def get_all_projects():\n        \"\"\"Get all projects.\"\"\"\n        return db.get_all(\"projects\")\n    \n    @staticmethod\n    def get_user_projects(user_id):\n        \"\"\"Get all projects for a specific user.\"\"\"\n        all_projects = db.get_all(\"projects\")\n        return [project for project in all_projects if project.get(\"user_id\") == user_id]\n    \n    @staticmethod\n    def update_project(project_id, project_data):\n        \"\"\"Update a project.\"\"\"\n        existing_project = db.get(\"projects\", project_id)\n        if not existing_project:\n            return None\n        \n        # Update fields\n        for key, value in project_data.items():\n            if key not in [\"id\", \"created_at\"]:\n                existing_project[key] = value\n        \n        # Update timestamp\n        existing_project[\"updated_at\"] = get_timestamp()\n        \n        # Save to database\n        db.update(\"projects\", project_id, existing_project)\n        return existing_project\n    \n    @staticmethod\n    def delete_project(project_id):\n        
\"\"\"Delete a project.\"\"\"\n        return db.delete(\"projects\", project_id)\n    \n    @staticmethod\n    def add_task_to_project(project_id, task_id):\n        \"\"\"Add a task to a project.\"\"\"\n        project = db.get(\"projects\", project_id)\n        if not project:\n            return False\n        \n        # Check if task exists\n        task = db.get(\"tasks\", task_id)\n        if not task:\n            return False\n        \n        # Add task to project if not already added\n        if task_id not in project[\"task_ids\"]:\n            project[\"task_ids\"].append(task_id)\n            project[\"updated_at\"] = get_timestamp()\n            db.update(\"projects\", project_id, project)\n        \n        return True\n    \n    @staticmethod\n    def remove_task_from_project(project_id, task_id):\n        \"\"\"Remove a task from a project.\"\"\"\n        project = db.get(\"projects\", project_id)\n        if not project:\n            return False\n        \n        # Remove task from project if it exists\n        if task_id in project[\"task_ids\"]:\n            project[\"task_ids\"].remove(task_id)\n            project[\"updated_at\"] = get_timestamp()\n            db.update(\"projects\", project_id, project)\n            return True\n        \n        return False\n    \n    @staticmethod\n    def get_project_tasks(project_id):\n        \"\"\"Get all tasks for a specific project.\"\"\"\n        project = db.get(\"projects\", project_id)\n        if not project:\n            return []\n        \n        tasks = []\n        for task_id in project[\"task_ids\"]:\n            task = db.get(\"tasks\", task_id)\n            if task:\n                tasks.append(task)\n        \n        return tasks\n"
  },
  {
    "path": "codebase-architectures/vertical-slice-architecture/features/tasks/README.md",
    "content": "# Tasks Feature\n\nThis feature handles the management of tasks in the application.\n\n## Components\n\n- **model.py**: Defines the Task data model\n- **service.py**: Contains business logic for task operations\n- **api.py**: Exposes task management endpoints\n\n## Functionality\n\n- Create, read, update, and delete tasks\n- Assign tasks to users\n- Filter tasks by user\n- Track task status and timestamps\n"
  },
  {
    "path": "codebase-architectures/vertical-slice-architecture/features/tasks/api.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nTask API endpoints.\n\"\"\"\n\nfrom .service import TaskService\n\nclass TaskAPI:\n    \"\"\"API endpoints for task management.\"\"\"\n    \n    @staticmethod\n    def create_task(title, description=None, user_id=None):\n        \"\"\"Create a new task.\"\"\"\n        task_data = {\n            \"title\": title,\n            \"description\": description,\n            \"user_id\": user_id\n        }\n        return TaskService.create_task(task_data)\n    \n    @staticmethod\n    def get_task(task_id):\n        \"\"\"Get a task by ID.\"\"\"\n        task = TaskService.get_task(task_id)\n        if not task:\n            return {\"error\": f\"Task with ID {task_id} not found\"}\n        return task\n    \n    @staticmethod\n    def get_all_tasks():\n        \"\"\"Get all tasks.\"\"\"\n        return TaskService.get_all_tasks()\n    \n    @staticmethod\n    def get_user_tasks(user_id):\n        \"\"\"Get all tasks for a specific user.\"\"\"\n        return TaskService.get_user_tasks(user_id)\n    \n    @staticmethod\n    def update_task(task_id, task_data):\n        \"\"\"Update a task.\"\"\"\n        task = TaskService.update_task(task_id, task_data)\n        if not task:\n            return {\"error\": f\"Task with ID {task_id} not found\"}\n        return task\n    \n    @staticmethod\n    def delete_task(task_id):\n        \"\"\"Delete a task.\"\"\"\n        success = TaskService.delete_task(task_id)\n        if not success:\n            return {\"error\": f\"Task with ID {task_id} not found\"}\n        return {\"message\": f\"Task with ID {task_id} deleted successfully\"}\n"
  },
  {
    "path": "codebase-architectures/vertical-slice-architecture/features/tasks/model.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nTask model definition.\n\"\"\"\n\nfrom shared.utils import generate_id, get_timestamp\n\nclass Task:\n    \"\"\"Task model representing a to-do item.\"\"\"\n    \n    def __init__(self, title, description=None, user_id=None, status=\"pending\", id=None):\n        self.id = id or generate_id()\n        self.title = title\n        self.description = description\n        self.user_id = user_id\n        self.status = status\n        self.created_at = get_timestamp()\n        self.updated_at = self.created_at\n        \n    def to_dict(self):\n        \"\"\"Convert task to dictionary.\"\"\"\n        return {\n            \"id\": self.id,\n            \"title\": self.title,\n            \"description\": self.description,\n            \"user_id\": self.user_id,\n            \"status\": self.status,\n            \"created_at\": self.created_at,\n            \"updated_at\": self.updated_at\n        }\n        \n    @classmethod\n    def from_dict(cls, data):\n        \"\"\"Create a task from dictionary.\"\"\"\n        task = cls(\n            title=data[\"title\"],\n            description=data.get(\"description\"),\n            user_id=data.get(\"user_id\"),\n            status=data.get(\"status\", \"pending\"),\n            id=data.get(\"id\")\n        )\n        task.created_at = data.get(\"created_at\", task.created_at)\n        task.updated_at = data.get(\"updated_at\", task.updated_at)\n        return task\n"
  },
  {
    "path": "codebase-architectures/vertical-slice-architecture/features/tasks/service.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nTask service containing business logic for task management.\n\"\"\"\n\nfrom shared.db import db\nfrom shared.utils import validate_required_fields, get_timestamp\nfrom .model import Task\n\nclass TaskService:\n    \"\"\"Service for managing tasks.\"\"\"\n    \n    @staticmethod\n    def create_task(task_data):\n        \"\"\"Create a new task.\"\"\"\n        validate_required_fields(task_data, [\"title\"])\n        task = Task(**task_data)\n        db.insert(\"tasks\", task.id, task.to_dict())\n        return task.to_dict()\n    \n    @staticmethod\n    def get_task(task_id):\n        \"\"\"Get a task by ID.\"\"\"\n        task_data = db.get(\"tasks\", task_id)\n        if not task_data:\n            return None\n        return task_data\n    \n    @staticmethod\n    def get_all_tasks():\n        \"\"\"Get all tasks.\"\"\"\n        return db.get_all(\"tasks\")\n    \n    @staticmethod\n    def get_user_tasks(user_id):\n        \"\"\"Get all tasks for a specific user.\"\"\"\n        all_tasks = db.get_all(\"tasks\")\n        return [task for task in all_tasks if task.get(\"user_id\") == user_id]\n    \n    @staticmethod\n    def update_task(task_id, task_data):\n        \"\"\"Update a task.\"\"\"\n        existing_task = db.get(\"tasks\", task_id)\n        if not existing_task:\n            return None\n        \n        # Update fields\n        for key, value in task_data.items():\n            if key not in [\"id\", \"created_at\"]:\n                existing_task[key] = value\n        \n        # Update timestamp\n        existing_task[\"updated_at\"] = get_timestamp()\n        \n        # Save to database\n        db.update(\"tasks\", task_id, existing_task)\n        return existing_task\n    \n    @staticmethod\n    def delete_task(task_id):\n        \"\"\"Delete a task.\"\"\"\n        return db.delete(\"tasks\", task_id)\n"
  },
  {
    "path": "codebase-architectures/vertical-slice-architecture/features/users/README.md",
    "content": "# Users Feature\n\nThis feature handles user management in the application.\n\n## Components\n\n- **model.py**: Defines the User data model\n- **service.py**: Contains business logic for user operations\n- **api.py**: Exposes user management endpoints\n\n## Functionality\n\n- Create, read, update, and delete users\n- Validate unique usernames\n- Retrieve users by ID or username\n"
  },
  {
    "path": "codebase-architectures/vertical-slice-architecture/features/users/api.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nUser API endpoints.\n\"\"\"\n\nfrom .service import UserService\n\nclass UserAPI:\n    \"\"\"API endpoints for user management.\"\"\"\n    \n    @staticmethod\n    def create_user(username, email, name=None):\n        \"\"\"Create a new user.\"\"\"\n        try:\n            user_data = {\n                \"username\": username,\n                \"email\": email,\n                \"name\": name\n            }\n            return UserService.create_user(user_data)\n        except ValueError as e:\n            return {\"error\": str(e)}\n    \n    @staticmethod\n    def get_user(user_id):\n        \"\"\"Get a user by ID.\"\"\"\n        user = UserService.get_user(user_id)\n        if not user:\n            return {\"error\": f\"User with ID {user_id} not found\"}\n        return user\n    \n    @staticmethod\n    def get_by_username(username):\n        \"\"\"Get a user by username.\"\"\"\n        user = UserService.get_by_username(username)\n        if not user:\n            return {\"error\": f\"User with username '{username}' not found\"}\n        return user\n    \n    @staticmethod\n    def get_all_users():\n        \"\"\"Get all users.\"\"\"\n        return UserService.get_all_users()\n    \n    @staticmethod\n    def update_user(user_id, user_data):\n        \"\"\"Update a user.\"\"\"\n        try:\n            user = UserService.update_user(user_id, user_data)\n            if not user:\n                return {\"error\": f\"User with ID {user_id} not found\"}\n            return user\n        except ValueError as e:\n            return {\"error\": str(e)}\n    \n    @staticmethod\n    def delete_user(user_id):\n        \"\"\"Delete a user.\"\"\"\n        success = UserService.delete_user(user_id)\n        if not success:\n            return {\"error\": f\"User with ID {user_id} not found\"}\n        return {\"message\": f\"User with ID {user_id} deleted successfully\"}\n"
  },
  {
    "path": "codebase-architectures/vertical-slice-architecture/features/users/model.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nUser model definition.\n\"\"\"\n\nfrom shared.utils import generate_id, get_timestamp\n\nclass User:\n    \"\"\"User model representing an application user.\"\"\"\n    \n    def __init__(self, username, email, name=None, id=None):\n        self.id = id or generate_id()\n        self.username = username\n        self.email = email\n        self.name = name\n        self.created_at = get_timestamp()\n        self.updated_at = self.created_at\n        \n    def to_dict(self):\n        \"\"\"Convert user to dictionary.\"\"\"\n        return {\n            \"id\": self.id,\n            \"username\": self.username,\n            \"email\": self.email,\n            \"name\": self.name,\n            \"created_at\": self.created_at,\n            \"updated_at\": self.updated_at\n        }\n        \n    @classmethod\n    def from_dict(cls, data):\n        \"\"\"Create a user from dictionary.\"\"\"\n        user = cls(\n            username=data[\"username\"],\n            email=data[\"email\"],\n            name=data.get(\"name\"),\n            id=data.get(\"id\")\n        )\n        user.created_at = data.get(\"created_at\", user.created_at)\n        user.updated_at = data.get(\"updated_at\", user.updated_at)\n        return user\n"
  },
  {
    "path": "codebase-architectures/vertical-slice-architecture/features/users/service.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nUser service containing business logic for user management.\n\"\"\"\n\nfrom shared.db import db\nfrom shared.utils import validate_required_fields, get_timestamp\nfrom .model import User\n\nclass UserService:\n    \"\"\"Service for managing users.\"\"\"\n    \n    @staticmethod\n    def create_user(user_data):\n        \"\"\"Create a new user.\"\"\"\n        validate_required_fields(user_data, [\"username\", \"email\"])\n        \n        # Check if username already exists\n        all_users = db.get_all(\"users\")\n        if any(user[\"username\"] == user_data[\"username\"] for user in all_users):\n            raise ValueError(f\"Username '{user_data['username']}' already exists\")\n        \n        user = User(**user_data)\n        db.insert(\"users\", user.id, user.to_dict())\n        return user.to_dict()\n    \n    @staticmethod\n    def get_user(user_id):\n        \"\"\"Get a user by ID.\"\"\"\n        user_data = db.get(\"users\", user_id)\n        if not user_data:\n            return None\n        return user_data\n    \n    @staticmethod\n    def get_by_username(username):\n        \"\"\"Get a user by username.\"\"\"\n        all_users = db.get_all(\"users\")\n        for user in all_users:\n            if user[\"username\"] == username:\n                return user\n        return None\n    \n    @staticmethod\n    def get_all_users():\n        \"\"\"Get all users.\"\"\"\n        return db.get_all(\"users\")\n    \n    @staticmethod\n    def update_user(user_id, user_data):\n        \"\"\"Update a user.\"\"\"\n        existing_user = db.get(\"users\", user_id)\n        if not existing_user:\n            return None\n        \n        # Check if username is being changed and already exists\n        if \"username\" in user_data and user_data[\"username\"] != existing_user[\"username\"]:\n            all_users = db.get_all(\"users\")\n            if any(user[\"username\"] == user_data[\"username\"] for 
user in all_users if user[\"id\"] != user_id):\n                raise ValueError(f\"Username '{user_data['username']}' already exists\")\n        \n        # Update fields\n        for key, value in user_data.items():\n            if key not in [\"id\", \"created_at\"]:\n                existing_user[key] = value\n        \n        # Update timestamp\n        existing_user[\"updated_at\"] = get_timestamp()\n        \n        # Save to database\n        db.update(\"users\", user_id, existing_user)\n        return existing_user\n    \n    @staticmethod\n    def delete_user(user_id):\n        \"\"\"Delete a user.\"\"\"\n        return db.delete(\"users\", user_id)\n"
  },
  {
    "path": "codebase-architectures/vertical-slice-architecture/main.py",
    "content": "#!/usr/bin/env -S uv run --script\n\n# /// script\n# dependencies = [\n# ]\n# ///\n\n\"\"\"\nMain application entry point for the Vertical Slice Architecture example.\n\"\"\"\n\nfrom features.users.api import UserAPI\nfrom features.tasks.api import TaskAPI\nfrom features.projects.api import ProjectAPI\n\ndef display_header(text):\n    \"\"\"Display a header with the given text.\"\"\"\n    print(\"\\n\" + \"=\" * 50)\n    print(f\" {text}\")\n    print(\"=\" * 50)\n\ndef display_result(result):\n    \"\"\"Display a result.\"\"\"\n    if isinstance(result, list):\n        for item in result:\n            print(f\"- {item}\")\n    elif isinstance(result, dict):\n        for key, value in result.items():\n            print(f\"{key}: {value}\")\n    else:\n        print(result)\n\ndef main():\n    \"\"\"Run the application.\"\"\"\n    display_header(\"Vertical Slice Architecture Example\")\n    \n    # Create users\n    display_header(\"Creating Users\")\n    user1 = UserAPI.create_user(\"johndoe\", \"john@example.com\", \"John Doe\")\n    display_result(user1)\n    \n    user2 = UserAPI.create_user(\"janedoe\", \"jane@example.com\", \"Jane Doe\")\n    display_result(user2)\n    \n    # Try to create a user with an existing username\n    duplicate_user = UserAPI.create_user(\"johndoe\", \"another@example.com\")\n    display_result(duplicate_user)\n    \n    # Get all users\n    display_header(\"All Users\")\n    all_users = UserAPI.get_all_users()\n    for user in all_users:\n        display_result(user)\n    \n    # Create tasks\n    display_header(\"Creating Tasks\")\n    task1 = TaskAPI.create_task(\"Complete project\", \"Finish the architecture example\", user1[\"id\"])\n    display_result(task1)\n    \n    task2 = TaskAPI.create_task(\"Review code\", \"Check for bugs and improvements\", user2[\"id\"])\n    display_result(task2)\n    \n    task3 = TaskAPI.create_task(\"Write documentation\", \"Document the architecture\", user1[\"id\"])\n    
display_result(task3)\n    \n    # Get user tasks\n    display_header(f\"Tasks for {user1['name']}\")\n    user1_tasks = TaskAPI.get_user_tasks(user1[\"id\"])\n    for task in user1_tasks:\n        display_result(task)\n    \n    # Update a task\n    display_header(\"Updating a Task\")\n    updated_task = TaskAPI.update_task(task1[\"id\"], {\"status\": \"completed\"})\n    display_result(updated_task)\n    \n    # Delete a task\n    display_header(\"Deleting a Task\")\n    delete_result = TaskAPI.delete_task(task2[\"id\"])\n    display_result(delete_result)\n    \n    # Get all remaining tasks\n    display_header(\"All Remaining Tasks\")\n    all_tasks = TaskAPI.get_all_tasks()\n    for task in all_tasks:\n        display_result(task)\n    \n    # Create a project\n    display_header(\"Creating a Project\")\n    project = ProjectAPI.create_project(\"Task Management System\", \"A project for managing tasks\", user1[\"id\"])\n    display_result(project)\n    \n    # Add tasks to the project\n    display_header(\"Adding Tasks to Project\")\n    add_task1 = ProjectAPI.add_task_to_project(project[\"id\"], task1[\"id\"])\n    display_result(add_task1)\n    \n    add_task3 = ProjectAPI.add_task_to_project(project[\"id\"], task3[\"id\"])\n    display_result(add_task3)\n    \n    # Get project tasks\n    display_header(f\"Tasks in Project: {project['name']}\")\n    project_tasks = ProjectAPI.get_project_tasks(project[\"id\"])\n    for task in project_tasks:\n        display_result(task)\n    \n    # Get user projects\n    display_header(f\"Projects for {user1['name']}\")\n    user_projects = ProjectAPI.get_user_projects(user1[\"id\"])\n    for proj in user_projects:\n        display_result(proj)\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "data/analytics.csv",
    "content": "id,name,age,city,score,is_active,status,created_at\r\n94efbf8b-4c95-4feb-9eda-900192276be7,Fiona,33,Singapore,95.48,True,active,2024-04-30\r\nefcbb1f5-ffaf-4b40-a44b-cd67f6508eec,Alice,46,Paris,37.81,False,active,2023-10-31\r\n9c2378d3-f46d-4f19-8e9e-b9053e6e57ea,Charlie,54,Tokyo,86.24,False,archived,2023-11-15\r\n12a6dc88-1bd5-4a20-b729-e7a70918a60b,Charlie,31,Tokyo,61.24,True,pending,2024-03-26\r\ndc1cf50e-c3c2-4843-8137-131834f7b00a,Jane,26,London,22.61,True,inactive,2024-11-04\r\ndb0ac268-3666-4a03-a284-e35380ae8c84,Jane,34,New York,3.1,True,archived,2024-01-29\r\n24ab09dd-163a-4faf-8e06-d3fb6418dec5,Alice,64,Tokyo,6.17,True,pending,2024-03-08\r\n27f8624d-c41a-4569-bbbb-1348b94fa0fd,Jane,34,Paris,43.44,True,active,2023-04-19\r\n1666fe7c-a2b1-44d5-abba-293c4e0f9b23,Charlie,58,Sydney,85.02,True,active,2024-09-09\r\n66036da1-7704-4ca6-9ec2-1e6d560db610,Alice,61,Singapore,93.69,True,archived,2024-05-06\r\n04fa9318-9e6d-4029-be2d-9bc7ecf37f3f,John,24,Tokyo,14.59,True,inactive,2023-08-17\r\n4586fbe1-acf9-4bfa-8237-ab697cdc69ed,Diana,43,Sydney,60.84,True,archived,2023-11-06\r\n20b827b3-f183-44db-9f4f-f30b361c8a83,Diana,63,Singapore,96.18,False,inactive,2024-08-02\r\n779dbbe0-b509-4332-98ee-d89887f48cec,Bob,47,Toronto,37.85,False,active,2025-01-25\r\n8612b8d7-4524-4575-a782-a01c4a0c88a3,Bob,25,Berlin,88.28,False,pending,2025-01-16\r\nad992026-c339-40eb-a669-f16983d226ca,Diana,29,Berlin,84.17,False,active,2023-07-26\r\nb3ef272f-3fc9-4422-88db-ed39858f1f68,John,20,New 
York,78.33,True,archived,2023-04-16\r\n2a0ace4c-ee9e-4b6b-9005-897210109cca,Bob,49,Toronto,56.91,False,active,2023-07-10\r\n3674700e-a83f-4450-97c8-04f80f2d2e89,Charlie,30,Sydney,80.83,False,active,2024-10-22\r\na17712d7-10f1-48e3-b1fd-8c65afb0442a,Jane,40,Toronto,35.85,False,inactive,2024-02-03\r\n3b1e3e89-9ccb-46cf-9954-cb0903e8e02d,Diana,48,Singapore,37.58,True,inactive,2023-06-13\r\na4461690-a009-48c1-96d2-ed89dd1907a7,Diana,21,Tokyo,32.35,False,pending,2024-06-03\r\n7157129c-0b3d-4040-8609-65afe4322ed2,Alice,65,New York,40.59,False,inactive,2024-02-19\r\ndc3c472f-d16b-47e2-95c4-61c461e2b228,Diana,25,Singapore,48.32,False,inactive,2024-10-03\r\n34e6e614-f376-4c1c-b3b5-f2926987115f,Bob,30,Tokyo,72.08,False,inactive,2024-10-09\r\n2e4aded0-53aa-4668-b3b1-90f4747aa295,Fiona,20,Tokyo,88.64,True,inactive,2023-04-08\r\n2f65400e-3447-4944-9db3-af09bd4d57db,John,34,Toronto,54.75,True,archived,2024-02-26\r\na667e148-4c64-4f5a-b23a-e00856feed8d,John,27,Singapore,33.5,True,archived,2023-02-26\r\n517c9c76-5c3e-497e-bd78-343ebd001668,Eric,23,Sydney,78.33,True,inactive,2024-08-11\r\n78a7f34f-7c0b-4e16-b57e-3829726569a7,Jane,32,Berlin,49.09,False,archived,2023-10-19\r\n"
  },
  {
    "path": "data/analytics.json",
    "content": "[\n  {\n    \"id\": \"94efbf8b-4c95-4feb-9eda-900192276be7\",\n    \"name\": \"Fiona\",\n    \"age\": 33,\n    \"city\": \"Singapore\",\n    \"score\": 95.48,\n    \"is_active\": true,\n    \"status\": \"active\",\n    \"created_at\": \"2024-04-30\"\n  },\n  {\n    \"id\": \"efcbb1f5-ffaf-4b40-a44b-cd67f6508eec\",\n    \"name\": \"Alice\",\n    \"age\": 46,\n    \"city\": \"Paris\",\n    \"score\": 37.81,\n    \"is_active\": false,\n    \"status\": \"active\",\n    \"created_at\": \"2023-10-31\"\n  },\n  {\n    \"id\": \"9c2378d3-f46d-4f19-8e9e-b9053e6e57ea\",\n    \"name\": \"Charlie\",\n    \"age\": 54,\n    \"city\": \"Tokyo\",\n    \"score\": 86.24,\n    \"is_active\": false,\n    \"status\": \"archived\",\n    \"created_at\": \"2023-11-15\"\n  },\n  {\n    \"id\": \"12a6dc88-1bd5-4a20-b729-e7a70918a60b\",\n    \"name\": \"Charlie\",\n    \"age\": 31,\n    \"city\": \"Tokyo\",\n    \"score\": 61.24,\n    \"is_active\": true,\n    \"status\": \"pending\",\n    \"created_at\": \"2024-03-26\"\n  },\n  {\n    \"id\": \"dc1cf50e-c3c2-4843-8137-131834f7b00a\",\n    \"name\": \"Jane\",\n    \"age\": 26,\n    \"city\": \"London\",\n    \"score\": 22.61,\n    \"is_active\": true,\n    \"status\": \"inactive\",\n    \"created_at\": \"2024-11-04\"\n  },\n  {\n    \"id\": \"db0ac268-3666-4a03-a284-e35380ae8c84\",\n    \"name\": \"Jane\",\n    \"age\": 34,\n    \"city\": \"New York\",\n    \"score\": 3.1,\n    \"is_active\": true,\n    \"status\": \"archived\",\n    \"created_at\": \"2024-01-29\"\n  },\n  {\n    \"id\": \"24ab09dd-163a-4faf-8e06-d3fb6418dec5\",\n    \"name\": \"Alice\",\n    \"age\": 64,\n    \"city\": \"Tokyo\",\n    \"score\": 6.17,\n    \"is_active\": true,\n    \"status\": \"pending\",\n    \"created_at\": \"2024-03-08\"\n  },\n  {\n    \"id\": \"27f8624d-c41a-4569-bbbb-1348b94fa0fd\",\n    \"name\": \"Jane\",\n    \"age\": 34,\n    \"city\": \"Paris\",\n    \"score\": 43.44,\n    \"is_active\": true,\n    \"status\": \"active\",\n    
\"created_at\": \"2023-04-19\"\n  },\n  {\n    \"id\": \"1666fe7c-a2b1-44d5-abba-293c4e0f9b23\",\n    \"name\": \"Charlie\",\n    \"age\": 58,\n    \"city\": \"Sydney\",\n    \"score\": 85.02,\n    \"is_active\": true,\n    \"status\": \"active\",\n    \"created_at\": \"2024-09-09\"\n  },\n  {\n    \"id\": \"66036da1-7704-4ca6-9ec2-1e6d560db610\",\n    \"name\": \"Alice\",\n    \"age\": 61,\n    \"city\": \"Singapore\",\n    \"score\": 93.69,\n    \"is_active\": true,\n    \"status\": \"archived\",\n    \"created_at\": \"2024-05-06\"\n  },\n  {\n    \"id\": \"04fa9318-9e6d-4029-be2d-9bc7ecf37f3f\",\n    \"name\": \"John\",\n    \"age\": 24,\n    \"city\": \"Tokyo\",\n    \"score\": 14.59,\n    \"is_active\": true,\n    \"status\": \"inactive\",\n    \"created_at\": \"2023-08-17\"\n  },\n  {\n    \"id\": \"4586fbe1-acf9-4bfa-8237-ab697cdc69ed\",\n    \"name\": \"Diana\",\n    \"age\": 43,\n    \"city\": \"Sydney\",\n    \"score\": 60.84,\n    \"is_active\": true,\n    \"status\": \"archived\",\n    \"created_at\": \"2023-11-06\"\n  },\n  {\n    \"id\": \"20b827b3-f183-44db-9f4f-f30b361c8a83\",\n    \"name\": \"Diana\",\n    \"age\": 63,\n    \"city\": \"Singapore\",\n    \"score\": 96.18,\n    \"is_active\": false,\n    \"status\": \"inactive\",\n    \"created_at\": \"2024-08-02\"\n  },\n  {\n    \"id\": \"779dbbe0-b509-4332-98ee-d89887f48cec\",\n    \"name\": \"Bob\",\n    \"age\": 47,\n    \"city\": \"Toronto\",\n    \"score\": 37.85,\n    \"is_active\": false,\n    \"status\": \"active\",\n    \"created_at\": \"2025-01-25\"\n  },\n  {\n    \"id\": \"8612b8d7-4524-4575-a782-a01c4a0c88a3\",\n    \"name\": \"Bob\",\n    \"age\": 25,\n    \"city\": \"Berlin\",\n    \"score\": 88.28,\n    \"is_active\": false,\n    \"status\": \"pending\",\n    \"created_at\": \"2025-01-16\"\n  },\n  {\n    \"id\": \"ad992026-c339-40eb-a669-f16983d226ca\",\n    \"name\": \"Diana\",\n    \"age\": 29,\n    \"city\": \"Berlin\",\n    \"score\": 84.17,\n    \"is_active\": false,\n    
\"status\": \"active\",\n    \"created_at\": \"2023-07-26\"\n  },\n  {\n    \"id\": \"b3ef272f-3fc9-4422-88db-ed39858f1f68\",\n    \"name\": \"John\",\n    \"age\": 20,\n    \"city\": \"New York\",\n    \"score\": 78.33,\n    \"is_active\": true,\n    \"status\": \"archived\",\n    \"created_at\": \"2023-04-16\"\n  },\n  {\n    \"id\": \"2a0ace4c-ee9e-4b6b-9005-897210109cca\",\n    \"name\": \"Bob\",\n    \"age\": 49,\n    \"city\": \"Toronto\",\n    \"score\": 56.91,\n    \"is_active\": false,\n    \"status\": \"active\",\n    \"created_at\": \"2023-07-10\"\n  },\n  {\n    \"id\": \"3674700e-a83f-4450-97c8-04f80f2d2e89\",\n    \"name\": \"Charlie\",\n    \"age\": 30,\n    \"city\": \"Sydney\",\n    \"score\": 80.83,\n    \"is_active\": false,\n    \"status\": \"active\",\n    \"created_at\": \"2024-10-22\"\n  },\n  {\n    \"id\": \"a17712d7-10f1-48e3-b1fd-8c65afb0442a\",\n    \"name\": \"Jane\",\n    \"age\": 40,\n    \"city\": \"Toronto\",\n    \"score\": 35.85,\n    \"is_active\": false,\n    \"status\": \"inactive\",\n    \"created_at\": \"2024-02-03\"\n  },\n  {\n    \"id\": \"3b1e3e89-9ccb-46cf-9954-cb0903e8e02d\",\n    \"name\": \"Diana\",\n    \"age\": 48,\n    \"city\": \"Singapore\",\n    \"score\": 37.58,\n    \"is_active\": true,\n    \"status\": \"inactive\",\n    \"created_at\": \"2023-06-13\"\n  },\n  {\n    \"id\": \"a4461690-a009-48c1-96d2-ed89dd1907a7\",\n    \"name\": \"Diana\",\n    \"age\": 21,\n    \"city\": \"Tokyo\",\n    \"score\": 32.35,\n    \"is_active\": false,\n    \"status\": \"pending\",\n    \"created_at\": \"2024-06-03\"\n  },\n  {\n    \"id\": \"7157129c-0b3d-4040-8609-65afe4322ed2\",\n    \"name\": \"Alice\",\n    \"age\": 65,\n    \"city\": \"New York\",\n    \"score\": 40.59,\n    \"is_active\": false,\n    \"status\": \"inactive\",\n    \"created_at\": \"2024-02-19\"\n  },\n  {\n    \"id\": \"dc3c472f-d16b-47e2-95c4-61c461e2b228\",\n    \"name\": \"Diana\",\n    \"age\": 25,\n    \"city\": \"Singapore\",\n    \"score\": 
48.32,\n    \"is_active\": false,\n    \"status\": \"inactive\",\n    \"created_at\": \"2024-10-03\"\n  },\n  {\n    \"id\": \"34e6e614-f376-4c1c-b3b5-f2926987115f\",\n    \"name\": \"Bob\",\n    \"age\": 30,\n    \"city\": \"Tokyo\",\n    \"score\": 72.08,\n    \"is_active\": false,\n    \"status\": \"inactive\",\n    \"created_at\": \"2024-10-09\"\n  },\n  {\n    \"id\": \"2e4aded0-53aa-4668-b3b1-90f4747aa295\",\n    \"name\": \"Fiona\",\n    \"age\": 20,\n    \"city\": \"Tokyo\",\n    \"score\": 88.64,\n    \"is_active\": true,\n    \"status\": \"inactive\",\n    \"created_at\": \"2023-04-08\"\n  },\n  {\n    \"id\": \"2f65400e-3447-4944-9db3-af09bd4d57db\",\n    \"name\": \"John\",\n    \"age\": 34,\n    \"city\": \"Toronto\",\n    \"score\": 54.75,\n    \"is_active\": true,\n    \"status\": \"archived\",\n    \"created_at\": \"2024-02-26\"\n  },\n  {\n    \"id\": \"a667e148-4c64-4f5a-b23a-e00856feed8d\",\n    \"name\": \"John\",\n    \"age\": 27,\n    \"city\": \"Singapore\",\n    \"score\": 33.5,\n    \"is_active\": true,\n    \"status\": \"archived\",\n    \"created_at\": \"2023-02-26\"\n  },\n  {\n    \"id\": \"517c9c76-5c3e-497e-bd78-343ebd001668\",\n    \"name\": \"Eric\",\n    \"age\": 23,\n    \"city\": \"Sydney\",\n    \"score\": 78.33,\n    \"is_active\": true,\n    \"status\": \"inactive\",\n    \"created_at\": \"2024-08-11\"\n  },\n  {\n    \"id\": \"78a7f34f-7c0b-4e16-b57e-3829726569a7\",\n    \"name\": \"Jane\",\n    \"age\": 32,\n    \"city\": \"Berlin\",\n    \"score\": 49.09,\n    \"is_active\": false,\n    \"status\": \"archived\",\n    \"created_at\": \"2023-10-19\"\n  }\n]\n"
  },
  {
    "path": "example-agent-codebase-arch/README.md",
    "content": "# Example Agent Codebase Architecture\n\nThis is not runnable code. It is an example of how to structure an agent codebase."
  },
  {
    "path": "example-agent-codebase-arch/__init__.py",
    "content": ""
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/__init__.py",
    "content": ""
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/atom/__init__.py",
    "content": "\"\"\"\nAtomic components for the Atomic/Composable Architecture implementation of the file editor agent.\nThese are the most basic building blocks for the file editor agent.\n\"\"\"\n"
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/atom/file_tools/__init__.py",
    "content": "\"\"\"\nAtomic file operations for the Atomic/Composable Architecture implementation of the file editor agent.\nThese are the most basic building blocks for file manipulation.\n\"\"\"\n\nfrom .result_tool import FileOperationResult\nfrom .read_tool import read_file\nfrom .write_tool import write_file\nfrom .replace_tool import replace_in_file\nfrom .insert_tool import insert_in_file\nfrom .undo_tool import undo_edit\n\n__all__ = [\n    'FileOperationResult',\n    'read_file',\n    'write_file',\n    'replace_in_file',\n    'insert_in_file',\n    'undo_edit'\n]\n"
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/atom/file_tools/insert_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nAtomic file insert operation for the Atomic/Composable Architecture.\nThis is the most basic building block for inserting content in files.\n\"\"\"\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable absolute imports\nsys.path.insert(\n    0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))\n)\n\nfrom atom.path_utils.normalize import normalize_path\nfrom atom.path_utils.validation import is_valid_path, file_exists\nfrom atom.logging.console import log_info, log_error\nfrom atom.file_tools.result_tool import FileOperationResult\n\n\ndef insert_in_file(path: str, insert_line: int, new_str: str) -> FileOperationResult:\n    \"\"\"\n    Insert text at a specific line in a file.\n\n    Args:\n        path: The path to the file to modify\n        insert_line: The line number at which to insert the text (1-indexed)\n        new_str: The text to insert\n\n    Returns:\n        FileOperationResult with result or error message\n    \"\"\"\n    try:\n        # Validate path\n        if not is_valid_path(path):\n            error_msg = \"Invalid file path provided: path is empty.\"\n            log_error(\"insert_in_file\", error_msg)\n            return FileOperationResult(False, error_msg)\n\n        # Normalize the path\n        path = normalize_path(path)\n\n        # Check if the file exists\n        if not file_exists(path):\n            error_msg = f\"File {path} does not exist\"\n            log_error(\"insert_in_file\", error_msg)\n            return FileOperationResult(False, error_msg)\n\n        # Validate insert_line\n        if insert_line is None:\n            error_msg = \"No line number specified: insert_line is missing.\"\n            log_error(\"insert_in_file\", error_msg)\n            return FileOperationResult(False, error_msg)\n\n        # Read the file\n        with open(path, \"r\") as f:\n            lines = f.readlines()\n\n        # Line is 0-indexed for this function, but Claude provides 1-indexed\n        insert_line = insert_line - 1\n\n        # Check that the index is within acceptable bounds before inserting\n        # (index len(lines) appends after the last line)\n        if insert_line < 0 or insert_line > len(lines):\n            error_msg = (\n                f\"Insert line number {insert_line + 1} out of range (1-{len(lines) + 1}).\"\n            )\n            log_error(\"insert_in_file\", error_msg)\n            return FileOperationResult(False, error_msg)\n\n        # Ensure new_str ends with newline\n        if new_str and not new_str.endswith(\"\\n\"):\n            new_str += \"\\n\"\n\n        # Insert the text\n        lines.insert(insert_line, new_str)\n\n        # Write the file\n        with open(path, \"w\") as f:\n            f.writelines(lines)\n\n        log_info(\n            \"insert_in_file\",\n            f\"Successfully inserted text at line {insert_line + 1} in {path}\",\n        )\n        return FileOperationResult(\n            True, f\"Successfully inserted text at line {insert_line + 1} in {path}\"\n        )\n    except Exception as e:\n        error_msg = f\"Error inserting text: {str(e)}\"\n        log_error(\"insert_in_file\", error_msg, exc_info=True)\n        return FileOperationResult(False, error_msg)\n"
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/atom/file_tools/read_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nAtomic file read operation for the Atomic/Composable Architecture.\nThis is the most basic building block for reading files.\n\"\"\"\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable absolute imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom atom.path_utils.normalize import normalize_path\nfrom atom.path_utils.validation import is_valid_path, file_exists\nfrom atom.logging.console import log_error\nfrom atom.logging.display import display_file_content\nfrom atom.file_tools.result_tool import FileOperationResult\n\ndef read_file(path: str, start_line: int = None, end_line: int = None) -> FileOperationResult:\n    \"\"\"\n    Read the contents of a file.\n\n    Args:\n        path: The path to the file to read\n        start_line: Optional start line (1-indexed)\n        end_line: Optional end line (1-indexed, -1 for end of file)\n\n    Returns:\n        FileOperationResult with content or error message\n    \"\"\"\n    try:\n        # Validate path\n        if not is_valid_path(path):\n            error_msg = \"Invalid file path provided: path is empty.\"\n            log_error(\"read_file\", error_msg)\n            return FileOperationResult(False, error_msg)\n\n        # Normalize the path\n        path = normalize_path(path)\n\n        # Check if the file exists\n        if not file_exists(path):\n            error_msg = f\"File {path} does not exist\"\n            log_error(\"read_file\", error_msg)\n            return FileOperationResult(False, error_msg)\n\n        # Read the file\n        with open(path, \"r\") as f:\n            lines = f.readlines()\n\n        # Apply line range if specified\n        if start_line is not None or end_line is not None:\n            # Convert to 0-indexed for Python\n            start = max(0, (start_line or 1) - 1)\n            if end_line == -1 or end_line is None:\n                end = len(lines)\n            else:\n                end = min(len(lines), end_line)\n            lines = lines[start:end]\n\n        content = \"\".join(lines)\n\n        # Display the file content (only for console, not returned to Claude)\n        display_file_content(path, content)\n\n        return FileOperationResult(True, f\"Successfully read file {path}\", content)\n    except Exception as e:\n        error_msg = f\"Error reading file: {str(e)}\"\n        log_error(\"read_file\", error_msg, exc_info=True)\n        return FileOperationResult(False, error_msg)\n"
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/atom/file_tools/replace_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nAtomic file replace operation for the Atomic/Composable Architecture.\nThis is the most basic building block for replacing content in files.\n\"\"\"\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable absolute imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom atom.path_utils.normalize import normalize_path\nfrom atom.path_utils.validation import is_valid_path, file_exists\nfrom atom.logging.console import log_info, log_error\nfrom atom.file_tools.result_tool import FileOperationResult\n\ndef replace_in_file(path: str, old_str: str, new_str: str) -> FileOperationResult:\n    \"\"\"\n    Replace a string in a file.\n\n    Args:\n        path: The path to the file to modify\n        old_str: The string to replace\n        new_str: The string to replace with\n\n    Returns:\n        FileOperationResult with result or error message\n    \"\"\"\n    try:\n        # Validate path\n        if not is_valid_path(path):\n            error_msg = \"Invalid file path provided: path is empty.\"\n            log_error(\"replace_in_file\", error_msg)\n            return FileOperationResult(False, error_msg)\n\n        # Normalize the path\n        path = normalize_path(path)\n\n        # Check if the file exists\n        if not file_exists(path):\n            error_msg = f\"File {path} does not exist\"\n            log_error(\"replace_in_file\", error_msg)\n            return FileOperationResult(False, error_msg)\n\n        # Read the file\n        with open(path, \"r\") as f:\n            content = f.read()\n\n        # Check if the string exists\n        if old_str not in content:\n            error_msg = f\"The specified string was not found in the file {path}\"\n            log_error(\"replace_in_file\", error_msg)\n            return FileOperationResult(False, error_msg)\n\n        # Replace the first occurrence of the string\n        new_content = content.replace(old_str, new_str, 1)\n\n        # Write the file\n        with open(path, \"w\") as f:\n            f.write(new_content)\n\n        log_info(\"replace_in_file\", f\"Successfully replaced text in {path}\")\n        return FileOperationResult(True, f\"Successfully replaced text in {path}\")\n    except Exception as e:\n        error_msg = f\"Error replacing text: {str(e)}\"\n        log_error(\"replace_in_file\", error_msg, exc_info=True)\n        return FileOperationResult(False, error_msg)\n"
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/atom/file_tools/result_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nAtomic file operation result model for the Atomic/Composable Architecture.\nThis is the most basic building block for representing file operation results.\n\"\"\"\n\nfrom typing import Any, Dict\n\nclass FileOperationResult:\n    \"\"\"\n    Model representing the result of a file operation.\n    \"\"\"\n    \n    def __init__(self, success: bool, message: str, data: Any = None):\n        \"\"\"\n        Initialize a file operation result.\n        \n        Args:\n            success: Whether the operation was successful\n            message: A message describing the result\n            data: Optional data returned by the operation\n        \"\"\"\n        self.success = success\n        self.message = message\n        self.data = data\n    \n    def to_dict(self) -> Dict[str, Any]:\n        \"\"\"\n        Convert the result to a dictionary.\n        \n        Returns:\n            Dictionary representation of the result\n        \"\"\"\n        return {\n            \"success\": self.success,\n            \"message\": self.message,\n            \"data\": self.data\n        }\n    \n    def to_response(self) -> Dict[str, Any]:\n        \"\"\"\n        Convert the result to a response for Claude.\n        \n        Returns:\n            Dictionary with result or error to send back to Claude\n        \"\"\"\n        if self.success:\n            return {\"result\": self.data if self.data is not None else self.message}\n        else:\n            return {\"error\": self.message}\n"
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/atom/file_tools/undo_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nAtomic file undo operation for the Atomic/Composable Architecture.\nThis is the most basic building block for undoing changes to files.\n\"\"\"\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable absolute imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom atom.path_utils.normalize import normalize_path\nfrom atom.path_utils.validation import is_valid_path\nfrom atom.logging.console import log_info, log_error\nfrom atom.file_tools.result_tool import FileOperationResult\n\ndef undo_edit(path: str) -> FileOperationResult:\n    \"\"\"\n    Placeholder for undo_edit functionality.\n    In a real implementation, you would need to track edit history.\n\n    Args:\n        path: The path to the file whose last edit should be undone\n\n    Returns:\n        FileOperationResult with message about undo functionality\n    \"\"\"\n    try:\n        # Validate path\n        if not is_valid_path(path):\n            error_msg = \"Invalid file path provided: path is empty.\"\n            log_error(\"undo_edit\", error_msg)\n            return FileOperationResult(False, error_msg)\n\n        # Normalize the path\n        path = normalize_path(path)\n\n        message = \"Undo functionality is not implemented in this version.\"\n        log_info(\"undo_edit\", message)\n        return FileOperationResult(True, message)\n    except Exception as e:\n        error_msg = f\"Error in undo_edit: {str(e)}\"\n        log_error(\"undo_edit\", error_msg, exc_info=True)\n        return FileOperationResult(False, error_msg)\n"
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/atom/file_tools/write_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nAtomic file write operation for the Atomic/Composable Architecture.\nThis is the most basic building block for writing files.\n\"\"\"\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable absolute imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom atom.path_utils.normalize import normalize_path\nfrom atom.path_utils.validation import is_valid_path\nfrom atom.path_utils.directory import ensure_directory_exists\nfrom atom.logging.console import log_info, log_error\nfrom atom.file_tools.result_tool import FileOperationResult\n\ndef write_file(path: str, content: str) -> FileOperationResult:\n    \"\"\"\n    Write content to a file.\n\n    Args:\n        path: The path to the file to write\n        content: The content to write to the file\n\n    Returns:\n        FileOperationResult with result or error message\n    \"\"\"\n    try:\n        # Validate path\n        if not is_valid_path(path):\n            error_msg = \"Invalid file path provided: path is empty.\"\n            log_error(\"write_file\", error_msg)\n            return FileOperationResult(False, error_msg)\n\n        # Normalize the path\n        path = normalize_path(path)\n\n        # Ensure the directory exists\n        ensure_directory_exists(path)\n\n        # Write the file\n        with open(path, \"w\") as f:\n            f.write(content or \"\")\n\n        log_info(\"write_file\", f\"Successfully wrote to file {path}\")\n        return FileOperationResult(True, f\"Successfully wrote to file {path}\")\n    except Exception as e:\n        error_msg = f\"Error writing file: {str(e)}\"\n        log_error(\"write_file\", error_msg, exc_info=True)\n        return FileOperationResult(False, error_msg)\n"
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/atom/logging/__init__.py",
    "content": "\"\"\"\nAtomic logging utilities for the Atomic/Composable Architecture implementation of the file editor agent.\nThese are the most basic building blocks for logging and console output.\n\"\"\"\n\nfrom .console import log_info, log_warning, log_error\nfrom .display import display_file_content, display_token_usage\n\n__all__ = [\n    'log_info',\n    'log_warning',\n    'log_error',\n    'display_file_content',\n    'display_token_usage'\n]\n"
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/atom/logging/console.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nAtomic console logging utilities for the Atomic/Composable Architecture.\nThese are the most basic building blocks for console logging.\n\"\"\"\n\nimport traceback\nfrom rich.console import Console\n\n# Initialize rich console\nconsole = Console()\n\ndef log_info(component: str, message: str) -> None:\n    \"\"\"\n    Log an informational message.\n\n    Args:\n        component: The component that is logging the message\n        message: The message to log\n    \"\"\"\n    console.log(f\"[{component}] {message}\")\n\ndef log_warning(component: str, message: str) -> None:\n    \"\"\"\n    Log a warning message.\n\n    Args:\n        component: The component that is logging the message\n        message: The message to log\n    \"\"\"\n    console.log(f\"[{component}] [warning] {message}\")\n    console.print(f\"[yellow]{message}[/yellow]\")\n\ndef log_error(component: str, message: str, exc_info: bool = False) -> None:\n    \"\"\"\n    Log an error message.\n\n    Args:\n        component: The component that is logging the message\n        message: The message to log\n        exc_info: Whether to include exception info\n    \"\"\"\n    console.log(f\"[{component}] [error] {message}\")\n    console.print(f\"[red]{message}[/red]\")\n    \n    if exc_info:\n        console.log(traceback.format_exc())\n"
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/atom/logging/display.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nAtomic display utilities for the Atomic/Composable Architecture.\nThese are the most basic building blocks for displaying content.\n\"\"\"\n\nfrom rich.console import Console\nfrom rich.panel import Panel\nfrom rich.syntax import Syntax\nfrom rich.table import Table\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable absolute imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom atom.path_utils.extension import get_file_extension\n\n# Initialize rich console\nconsole = Console()\n\ndef display_file_content(path: str, content: str) -> None:\n    \"\"\"\n    Display file content with syntax highlighting.\n\n    Args:\n        path: Path to the file\n        content: Content of the file\n    \"\"\"\n    file_extension = get_file_extension(path)\n    syntax = Syntax(content, file_extension or \"text\", line_numbers=True)\n    console.print(Panel(syntax, title=f\"File: {path}\"))\n\ndef display_token_usage(input_tokens: int, output_tokens: int) -> None:\n    \"\"\"\n    Display token usage information in a rich formatted table.\n\n    Args:\n        input_tokens: Number of input tokens used\n        output_tokens: Number of output tokens used\n    \"\"\"\n    total_tokens = input_tokens + output_tokens\n    # Guard against division by zero when either count is zero\n    token_ratio = output_tokens / input_tokens if input_tokens > 0 else 0\n    input_pct = input_tokens / total_tokens if total_tokens > 0 else 0\n    output_pct = output_tokens / total_tokens if total_tokens > 0 else 0\n\n    # Create a table for token usage\n    table = Table(title=\"Token Usage Statistics\", expand=True)\n\n    # Add columns with proper styling\n    table.add_column(\"Metric\", style=\"cyan\", no_wrap=True)\n    table.add_column(\"Count\", style=\"magenta\", justify=\"right\")\n    table.add_column(\"Percentage\", justify=\"right\")\n\n    # Add rows with data\n    table.add_row(\n        \"Input Tokens\", f\"{input_tokens:,}\", f\"{input_pct:.1%}\"\n    )\n    table.add_row(\n        \"Output Tokens\", f\"{output_tokens:,}\", f\"{output_pct:.1%}\"\n    )\n    table.add_row(\"Total Tokens\", f\"{total_tokens:,}\", \"100.0%\")\n    table.add_row(\"Output/Input Ratio\", f\"{token_ratio:.2f}\", \"\")\n\n    console.print()\n    console.print(table)\n"
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/atom/path_utils/__init__.py",
    "content": "\"\"\"\nAtomic path utilities for the Atomic/Composable Architecture implementation of the file editor agent.\nThese are the most basic building blocks for path manipulation.\n\"\"\"\n\nfrom .normalize import normalize_path\nfrom .extension import get_file_extension\nfrom .directory import ensure_directory_exists\nfrom .validation import is_valid_path, file_exists\n\n__all__ = [\n    'normalize_path',\n    'get_file_extension',\n    'ensure_directory_exists',\n    'is_valid_path',\n    'file_exists'\n]\n"
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/atom/path_utils/directory.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nAtomic directory utility for the Atomic/Composable Architecture.\nThis is the most basic building block for directory operations.\n\"\"\"\n\nimport os\n\ndef ensure_directory_exists(path: str) -> None:\n    \"\"\"\n    Ensure that the directory for a file path exists.\n    Creates the directory if it doesn't exist.\n\n    Args:\n        path: The path to check\n    \"\"\"\n    directory = os.path.dirname(path)\n    if directory and not os.path.exists(directory):\n        os.makedirs(directory)\n"
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/atom/path_utils/extension.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nAtomic file extension utility for the Atomic/Composable Architecture.\nThis is the most basic building block for getting file extensions.\n\"\"\"\n\nimport os\n\ndef get_file_extension(path: str) -> str:\n    \"\"\"\n    Get the file extension from a path.\n\n    Args:\n        path: The path to get the extension from\n\n    Returns:\n        The file extension without the dot\n    \"\"\"\n    return os.path.splitext(path)[1][1:]\n"
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/atom/path_utils/normalize.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nAtomic path normalization utility for the Atomic/Composable Architecture.\nThis is the most basic building block for normalizing file paths.\n\"\"\"\n\nimport os\n\ndef normalize_path(path: str) -> str:\n    \"\"\"\n    Normalize file paths to handle various formats (absolute, relative, Windows paths, etc.)\n\n    Args:\n        path: The path to normalize\n\n    Returns:\n        The normalized path\n    \"\"\"\n    if not path:\n        return path\n\n    # Handle Windows backslash paths if provided\n    path = path.replace(\"\\\\\", os.sep)\n\n    is_windows_path = False\n    if os.name == \"nt\" and len(path) > 1 and path[1] == \":\":\n        is_windows_path = True\n\n    # Handle /repo/ paths from Claude (tool use convention)\n    if path.startswith(\"/repo/\"):\n        path = os.path.join(os.getcwd(), path[6:])\n        return path\n\n    if path.startswith(\"/\"):\n        # Handle case when Claude provides paths with leading slash\n        if path == \"/\" or path == \"/.\":\n            # Special case for root directory\n            path = os.getcwd()\n        else:\n            # Replace leading slash with current working directory\n            path = os.path.join(os.getcwd(), path[1:])\n    elif path.startswith(\"./\"):\n        # Handle relative paths starting with ./\n        path = os.path.join(os.getcwd(), path[2:])\n    elif not os.path.isabs(path) and not is_windows_path:\n        # For non-absolute paths that aren't Windows paths either\n        path = os.path.join(os.getcwd(), path)\n\n    return path\n"
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/atom/path_utils/validation.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nAtomic path validation utilities for the Atomic/Composable Architecture.\nThese are the most basic building blocks for validating paths.\n\"\"\"\n\nimport os\n\ndef is_valid_path(path: str) -> bool:\n    \"\"\"\n    Check if a path is valid.\n\n    Args:\n        path: The path to check\n\n    Returns:\n        True if the path is valid, False otherwise\n    \"\"\"\n    return path is not None and path.strip() != \"\"\n\ndef file_exists(path: str) -> bool:\n    \"\"\"\n    Check if a file exists.\n\n    Args:\n        path: The path to check\n\n    Returns:\n        True if the file exists, False otherwise\n    \"\"\"\n    return os.path.exists(path) and os.path.isfile(path)\n"
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/membrane/__init__.py",
    "content": ""
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/membrane/main_file_agent.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nOrganism-level file agent for the Atomic/Composable Architecture implementation of the file editor agent.\nThis module pulls together all components to provide a high-level API for the file editor agent.\n\"\"\"\n\nimport sys\nimport os\nimport json\nimport argparse\nimport traceback\nfrom typing import Dict, Any, Optional, List, Union\nfrom rich.console import Console\n\n# Add the parent directory to the Python path to enable absolute imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))\n\nimport anthropic\nfrom molecule.file_crud import FileCRUD\nfrom organism.file_agent import run_agent\nfrom atom.logging.console import log_info, log_error, log_warning\nfrom atom.logging.display import display_token_usage\n\nclass FileAgent:\n    \"\"\"\n    File agent that pulls together all components to provide a high-level API for the file editor agent.\n    \"\"\"\n    \n    @staticmethod\n    def run(prompt: str, api_key: Optional[str] = None, max_tool_use_loops: int = 15, \n            token_efficient_tool_use: bool = True) -> None:\n        \"\"\"\n        Run the file editor agent with the specified prompt.\n        \n        Args:\n            prompt: The prompt to send to Claude\n            api_key: Optional API key for Anthropic\n            max_tool_use_loops: Maximum number of tool use loops\n            token_efficient_tool_use: Whether to use token-efficient tool use\n        \"\"\"\n        log_info(\"file_agent\", f\"Running file editor agent with prompt: {prompt}\")\n        \n        # Get the API key\n        api_key = api_key or os.environ.get(\"ANTHROPIC_API_KEY\")\n        \n        if not api_key:\n            log_error(\"file_agent\", \"No API key provided. 
Please set the ANTHROPIC_API_KEY environment variable or provide an API key.\")\n            \n            # For testing purposes, we'll just print a success message\n            console = Console()\n            console.print(\"[green]Successfully loaded the Atomic/Composable Architecture implementation![/green]\")\n            console.print(\"[yellow]This is a mock implementation for testing the architecture structure.[/yellow]\")\n            console.print(\"[yellow]In a real implementation, this would connect to the Claude API.[/yellow]\")\n            \n            # Display mock token usage\n            display_token_usage(1000, 500)\n            \n            return\n            \n        # Initialize the Anthropic client\n        client = anthropic.Anthropic(api_key=api_key)\n        \n        # Run the agent\n        try:\n            input_tokens, output_tokens = run_agent(\n                client=client,\n                prompt=prompt,\n                handle_tool_use=FileCRUD.handle_tool_use,\n                max_tool_use_loops=max_tool_use_loops,\n                token_efficient_tool_use=token_efficient_tool_use\n            )\n            \n            # Display token usage\n            display_token_usage(input_tokens, output_tokens)\n            \n            log_info(\"file_agent\", \"File editor agent completed successfully.\")\n            \n        except Exception as e:\n            log_error(\"file_agent\", f\"Error running file editor agent: {str(e)}\", exc_info=True)\n\ndef main():\n    \"\"\"\n    Main entry point for the file editor agent.\n    \"\"\"\n    parser = argparse.ArgumentParser(description=\"File Editor Agent\")\n    parser.add_argument(\"--prompt\", type=str, help=\"Prompt to send to Claude\")\n    parser.add_argument(\"--api-key\", type=str, help=\"API key for Anthropic\")\n    parser.add_argument(\"--max-tool-use-loops\", type=int, default=15, help=\"Maximum number of tool use loops\")\n    
parser.add_argument(\"--token-efficient-tool-use\", action=\"store_true\", help=\"Use token-efficient tool use\")\n    \n    args = parser.parse_args()\n    \n    if not args.prompt:\n        log_error(\"main\", \"No prompt provided. Please provide a prompt with --prompt.\")\n        return\n        \n    FileAgent.run(\n        prompt=args.prompt,\n        api_key=args.api_key,\n        max_tool_use_loops=args.max_tool_use_loops,\n        token_efficient_tool_use=args.token_efficient_tool_use\n    )\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/membrane/mcp_file_agent.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nOrganism-level file agent for the Atomic/Composable Architecture implementation of the file editor agent.\nThis module pulls together all components to provide a high-level API for the file editor agent.\n\"\"\"\n\nimport sys\nimport os\nimport json\nimport argparse\nimport traceback\nfrom typing import Dict, Any, Optional, List, Union\nfrom rich.console import Console\n\n# Add the parent directory to the Python path to enable absolute imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))\n\nimport anthropic\nfrom molecule.file_crud import FileCRUD\nfrom organism.file_agent import run_agent\nfrom atom.logging.console import log_info, log_error, log_warning\nfrom atom.logging.display import display_token_usage\n\nclass FileAgent:\n    \"\"\"\n    File agent that pulls together all components to provide a high-level API for the file editor agent.\n    \"\"\"\n    \n    @staticmethod\n    def run(prompt: str, api_key: Optional[str] = None, max_tool_use_loops: int = 15, \n            token_efficient_tool_use: bool = True) -> None:\n        \"\"\"\n        Run the file editor agent with the specified prompt.\n        \n        Args:\n            prompt: The prompt to send to Claude\n            api_key: Optional API key for Anthropic\n            max_tool_use_loops: Maximum number of tool use loops\n            token_efficient_tool_use: Whether to use token-efficient tool use\n        \"\"\"\n        log_info(\"file_agent\", f\"Running file editor agent with prompt: {prompt}\")\n        \n        # Get the API key\n        api_key = api_key or os.environ.get(\"ANTHROPIC_API_KEY\")\n        \n        if not api_key:\n            log_error(\"file_agent\", \"No API key provided. 
Please set the ANTHROPIC_API_KEY environment variable or provide an API key.\")\n            \n            # For testing purposes, we'll just print a success message\n            console = Console()\n            console.print(\"[green]Successfully loaded the Atomic/Composable Architecture implementation![/green]\")\n            console.print(\"[yellow]This is a mock implementation for testing the architecture structure.[/yellow]\")\n            console.print(\"[yellow]In a real implementation, this would connect to the Claude API.[/yellow]\")\n            \n            # Display mock token usage\n            display_token_usage(1000, 500)\n            \n            return\n            \n        # Initialize the Anthropic client\n        client = anthropic.Anthropic(api_key=api_key)\n        \n        # Run the agent\n        try:\n            input_tokens, output_tokens = run_agent(\n                client=client,\n                prompt=prompt,\n                handle_tool_use=FileCRUD.handle_tool_use,\n                max_tool_use_loops=max_tool_use_loops,\n                token_efficient_tool_use=token_efficient_tool_use\n            )\n            \n            # Display token usage\n            display_token_usage(input_tokens, output_tokens)\n            \n            log_info(\"file_agent\", \"File editor agent completed successfully.\")\n            \n        except Exception as e:\n            log_error(\"file_agent\", f\"Error running file editor agent: {str(e)}\", exc_info=True)\n\ndef main():\n    \"\"\"\n    Main entry point for the file editor agent.\n    \"\"\"\n    parser = argparse.ArgumentParser(description=\"File Editor Agent\")\n    parser.add_argument(\"--prompt\", type=str, help=\"Prompt to send to Claude\")\n    parser.add_argument(\"--api-key\", type=str, help=\"API key for Anthropic\")\n    parser.add_argument(\"--max-tool-use-loops\", type=int, default=15, help=\"Maximum number of tool use loops\")\n    
parser.add_argument(\"--token-efficient-tool-use\", action=\"store_true\", help=\"Use token-efficient tool use\")\n    \n    args = parser.parse_args()\n    \n    if not args.prompt:\n        log_error(\"main\", \"No prompt provided. Please provide a prompt with --prompt.\")\n        return\n        \n    FileAgent.run(\n        prompt=args.prompt,\n        api_key=args.api_key,\n        max_tool_use_loops=args.max_tool_use_loops,\n        token_efficient_tool_use=args.token_efficient_tool_use\n    )\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/molecule/__init__.py",
    "content": "\"\"\"\nMolecular components for the Atomic/Composable Architecture implementation of the file editor agent.\nThese components combine atomic building blocks to provide higher-level functionality.\n\"\"\"\n"
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/molecule/file_crud.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nMolecular file CRUD operations for the Atomic/Composable Architecture implementation of the file editor agent.\nThis module combines atomic components to provide file CRUD capabilities.\n\"\"\"\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable absolute imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))\n\nfrom atom.file_operations import read_file, write_file, replace_in_file, insert_in_file, undo_edit, FileOperationResult\nfrom atom.logging.console import log_info, log_error\n\nclass FileCRUD:\n    \"\"\"\n    File CRUD operations that combine atomic components to provide file manipulation capabilities.\n    \"\"\"\n    \n    @staticmethod\n    def read(path: str, start_line: int = None, end_line: int = None) -> FileOperationResult:\n        \"\"\"\n        Read the contents of a file.\n        \n        Args:\n            path: The path to the file to read\n            start_line: Optional start line (1-indexed)\n            end_line: Optional end line (1-indexed, -1 for end of file)\n            \n        Returns:\n            FileOperationResult with content or error message\n        \"\"\"\n        log_info(\"file_crud\", f\"Reading file {path} with range {start_line}-{end_line}\")\n        \n        result = read_file(path, start_line, end_line)\n        \n        if result.success:\n            log_info(\"file_crud\", f\"Successfully read file {path}\")\n        else:\n            log_error(\"file_crud\", f\"Failed to read file {path}: {result.message}\")\n            \n        return result\n    \n    @staticmethod\n    def write(path: str, content: str) -> FileOperationResult:\n        \"\"\"\n        Write content to a file.\n        \n        Args:\n            path: The path to the file to write\n            content: The content to write to the file\n            \n        Returns:\n            FileOperationResult with result or error 
message\n        \"\"\"\n        log_info(\"file_crud\", f\"Writing to file {path}\")\n        \n        result = write_file(path, content)\n        \n        if result.success:\n            log_info(\"file_crud\", f\"Successfully wrote to file {path}\")\n        else:\n            log_error(\"file_crud\", f\"Failed to write to file {path}: {result.message}\")\n            \n        return result\n    \n    @staticmethod\n    def replace(path: str, old_str: str, new_str: str) -> FileOperationResult:\n        \"\"\"\n        Replace a string in a file.\n        \n        Args:\n            path: The path to the file to modify\n            old_str: The string to replace\n            new_str: The string to replace with\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_crud\", f\"Replacing text in file {path}\")\n        \n        result = replace_in_file(path, old_str, new_str)\n        \n        if result.success:\n            log_info(\"file_crud\", f\"Successfully replaced text in file {path}\")\n        else:\n            log_error(\"file_crud\", f\"Failed to replace text in file {path}: {result.message}\")\n            \n        return result\n    \n    @staticmethod\n    def insert(path: str, insert_line: int, new_str: str) -> FileOperationResult:\n        \"\"\"\n        Insert text at a specific line in a file.\n        \n        Args:\n            path: The path to the file to modify\n            insert_line: The line number after which to insert the text (1-indexed)\n            new_str: The text to insert\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_crud\", f\"Inserting text at line {insert_line} in file {path}\")\n        \n        result = insert_in_file(path, insert_line, new_str)\n        \n        if result.success:\n            log_info(\"file_crud\", f\"Successfully inserted 
text at line {insert_line} in file {path}\")\n        else:\n            log_error(\"file_crud\", f\"Failed to insert text at line {insert_line} in file {path}: {result.message}\")\n            \n        return result\n    \n    @staticmethod\n    def create(path: str, content: str) -> FileOperationResult:\n        \"\"\"\n        Create a new file with the specified content.\n        \n        Args:\n            path: The path to the file to create\n            content: The content to write to the file\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_crud\", f\"Creating file {path}\")\n        \n        return FileCRUD.write(path, content)\n    \n    @staticmethod\n    def undo(path: str) -> FileOperationResult:\n        \"\"\"\n        Undo the last edit to a file.\n        \n        Args:\n            path: The path to the file whose last edit should be undone\n            \n        Returns:\n            FileOperationResult with message about undo functionality\n        \"\"\"\n        log_info(\"file_crud\", f\"Undoing last edit to file {path}\")\n        \n        result = undo_edit(path)\n        \n        if result.success:\n            log_info(\"file_crud\", f\"Successfully undid last edit to file {path}\")\n        else:\n            log_error(\"file_crud\", f\"Failed to undo last edit to file {path}: {result.message}\")\n            \n        return result\n    \n    @staticmethod\n    def handle_tool_use(tool_use: dict) -> dict:\n        \"\"\"\n        Handle a tool use request from Claude.\n        \n        Args:\n            tool_use: The tool use request from Claude\n            \n        Returns:\n            Dictionary with result or error to send back to Claude\n        \"\"\"\n        command = tool_use.get(\"command\")\n        path = tool_use.get(\"path\")\n        \n        log_info(\"file_crud\", f\"Handling tool use request: {command} on {path}\")\n       
 \n        if not command:\n            error_msg = \"No command specified in tool use request\"\n            log_error(\"file_crud\", error_msg)\n            return {\"error\": error_msg}\n            \n        if not path and command != \"undo_edit\":\n            error_msg = \"No path specified in tool use request\"\n            log_error(\"file_crud\", error_msg)\n            return {\"error\": error_msg}\n            \n        result = None\n        \n        try:\n            if command == \"view\":\n                view_range = tool_use.get(\"view_range\")\n                start_line = None\n                end_line = None\n                \n                if view_range:\n                    start_line, end_line = view_range\n                    \n                result = FileCRUD.read(path, start_line, end_line)\n                \n            elif command == \"str_replace\":\n                old_str = tool_use.get(\"old_str\")\n                new_str = tool_use.get(\"new_str\")\n                \n                if old_str is None:\n                    return {\"error\": \"Missing 'old_str' parameter for str_replace command\"}\n                    \n                if new_str is None:\n                    return {\"error\": \"Missing 'new_str' parameter for str_replace command\"}\n                    \n                result = FileCRUD.replace(path, old_str, new_str)\n                \n            elif command == \"create\":\n                file_text = tool_use.get(\"file_text\", \"\")\n                result = FileCRUD.create(path, file_text)\n                \n            elif command == \"insert\":\n                insert_line = tool_use.get(\"insert_line\")\n                new_str = tool_use.get(\"new_str\")\n                \n                if insert_line is None:\n                    return {\"error\": \"Missing 'insert_line' parameter for insert command\"}\n                    \n                if new_str is None:\n                    return 
{\"error\": \"Missing 'new_str' parameter for insert command\"}\n                    \n                result = FileCRUD.insert(path, insert_line, new_str)\n                \n            elif command == \"undo_edit\":\n                result = FileCRUD.undo(path)\n                \n            else:\n                error_msg = f\"Unknown command: {command}\"\n                log_error(\"file_crud\", error_msg)\n                return {\"error\": error_msg}\n                \n            # Convert the result to a response for Claude\n            if result.success:\n                return {\"result\": result.data if result.data is not None else result.message}\n            else:\n                return {\"error\": result.message}\n                \n        except Exception as e:\n            error_msg = f\"Error handling tool use: {str(e)}\"\n            log_error(\"file_crud\", error_msg, exc_info=True)\n            return {\"error\": error_msg}\n"
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/molecule/file_reader.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nMolecular file reader for the Atomic/Composable Architecture implementation of the file editor agent.\nThis module combines atomic components to provide file reading capabilities.\n\"\"\"\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable absolute imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))\n\nfrom atom.file_operations import read_file, FileOperationResult\nfrom atom.logging.console import log_info, log_error\n\nclass FileReader:\n    \"\"\"\n    File reader that combines atomic components to provide file reading capabilities.\n    \"\"\"\n    \n    @staticmethod\n    def read(path: str, start_line: int = None, end_line: int = None) -> FileOperationResult:\n        \"\"\"\n        Read the contents of a file.\n\n        Args:\n            path: The path to the file to read\n            start_line: Optional start line (1-indexed)\n            end_line: Optional end line (1-indexed, -1 for end of file)\n\n        Returns:\n            FileOperationResult with content or error message\n        \"\"\"\n        log_info(\"file_reader\", f\"Reading file {path} with range {start_line}-{end_line}\")\n        \n        # Use the atomic read_file function\n        result = read_file(path, start_line, end_line)\n        \n        if result.success:\n            log_info(\"file_reader\", f\"Successfully read file {path}\")\n        else:\n            log_error(\"file_reader\", f\"Failed to read file {path}: {result.message}\")\n            \n        return result\n    \n    @staticmethod\n    def view_file(path: str, view_range=None) -> FileOperationResult:\n        \"\"\"\n        View the contents of a file with optional range.\n\n        Args:\n            path: The path to the file to view\n            view_range: Optional tuple of (start_line, end_line)\n\n        Returns:\n            FileOperationResult with content or error message\n        
\"\"\"\n        start_line = None\n        end_line = None\n        \n        if view_range:\n            start_line, end_line = view_range\n            \n        log_info(\"file_reader\", f\"Viewing file {path} with range {start_line}-{end_line}\")\n        \n        return FileReader.read(path, start_line, end_line)\n"
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/molecule/file_writer.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nMolecular file writer for the Atomic/Composable Architecture implementation of the file editor agent.\nThis module combines atomic components to provide file writing capabilities.\n\"\"\"\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable absolute imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))\n\nfrom atom.file_operations import write_file, replace_in_file, insert_in_file, FileOperationResult\nfrom atom.logging.console import log_info, log_error\n\nclass FileWriter:\n    \"\"\"\n    File writer that combines atomic components to provide file writing capabilities.\n    \"\"\"\n    \n    @staticmethod\n    def write(path: str, content: str) -> FileOperationResult:\n        \"\"\"\n        Write content to a file.\n        \n        Args:\n            path: The path to the file to write\n            content: The content to write to the file\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_writer\", f\"Writing to file {path}\")\n        \n        # Use the atomic write_file function\n        result = write_file(path, content)\n        \n        if result.success:\n            log_info(\"file_writer\", f\"Successfully wrote to file {path}\")\n        else:\n            log_error(\"file_writer\", f\"Failed to write to file {path}: {result.message}\")\n            \n        return result\n    \n    @staticmethod\n    def replace(path: str, old_str: str, new_str: str) -> FileOperationResult:\n        \"\"\"\n        Replace a string in a file.\n        \n        Args:\n            path: The path to the file to modify\n            old_str: The string to replace\n            new_str: The string to replace with\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_writer\", f\"Replacing 
text in file {path}\")\n        \n        # Use the atomic replace_in_file function\n        result = replace_in_file(path, old_str, new_str)\n        \n        if result.success:\n            log_info(\"file_writer\", f\"Successfully replaced text in file {path}\")\n        else:\n            log_error(\"file_writer\", f\"Failed to replace text in file {path}: {result.message}\")\n            \n        return result\n    \n    @staticmethod\n    def insert(path: str, insert_line: int, new_str: str) -> FileOperationResult:\n        \"\"\"\n        Insert text at a specific line in a file.\n        \n        Args:\n            path: The path to the file to modify\n            insert_line: The line number after which to insert the text (1-indexed)\n            new_str: The text to insert\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_writer\", f\"Inserting text at line {insert_line} in file {path}\")\n        \n        # Use the atomic insert_in_file function\n        result = insert_in_file(path, insert_line, new_str)\n        \n        if result.success:\n            log_info(\"file_writer\", f\"Successfully inserted text at line {insert_line} in file {path}\")\n        else:\n            log_error(\"file_writer\", f\"Failed to insert text at line {insert_line} in file {path}: {result.message}\")\n            \n        return result\n    \n    @staticmethod\n    def create(path: str, content: str) -> FileOperationResult:\n        \"\"\"\n        Create a new file with the specified content.\n        \n        Args:\n            path: The path to the file to create\n            content: The content to write to the file\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_writer\", f\"Creating file {path}\")\n        \n        # Use the atomic write_file function\n        return 
FileWriter.write(path, content)\n"
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/organism/__init__.py",
    "content": "\"\"\"\nOrganism-level components for the Atomic/Composable Architecture implementation of the file editor agent.\nThese components pull together molecular components to provide high-level APIs.\n\"\"\"\n"
  },
  {
    "path": "example-agent-codebase-arch/atomic-composable-architecture/organism/file_agent.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nMolecular file editor for the Atomic/Composable Architecture implementation of the file editor agent.\nThis module combines atomic components to provide file editing capabilities.\n\"\"\"\n\nimport time\nfrom typing import Tuple, Dict, Any, List, Optional, Callable\nfrom rich.console import Console\nfrom rich.panel import Panel\nfrom rich.markdown import Markdown\nfrom anthropic import Anthropic\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable absolute imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))\n\nfrom atom.logging import log_info, log_error, display_token_usage\n\n# Initialize rich console\nconsole = Console()\n\n# Define constants\nMODEL = \"claude-3-7-sonnet-20250219\"\nDEFAULT_THINKING_TOKENS = 3000\n\nclass FileEditor:\n    \"\"\"\n    File editor that combines atomic components to provide file editing capabilities.\n    \"\"\"\n    \n    @staticmethod\n    def run_agent(\n        client: Anthropic,\n        prompt: str,\n        handle_tool_use_func,\n        max_thinking_tokens: int = DEFAULT_THINKING_TOKENS,\n        max_loops: int = 10,\n        use_token_efficiency: bool = False,\n    ) -> Tuple[str, int, int]:\n        \"\"\"\n        Run the Claude agent with file editing capabilities.\n\n        Args:\n            client: The Anthropic client\n            prompt: The user's prompt\n            handle_tool_use_func: Function to handle tool use requests\n            max_thinking_tokens: Maximum tokens for thinking\n            max_loops: Maximum number of tool use loops\n            use_token_efficiency: Whether to use token-efficient tool use beta feature\n\n        Returns:\n            Tuple containing:\n            - Final response from Claude (str)\n            - Total input tokens used (int)\n            - Total output tokens used (int)\n        \"\"\"\n        # Track token usage\n        input_tokens_total = 0\n       
 output_tokens_total = 0\n        system_prompt = \"\"\"You are a helpful AI assistant with text editing capabilities.\nYou have access to a text editor tool that can view, edit, and create files.\nAlways think step by step about what you need to do before taking any action.\nBe careful when making edits to files, as they can permanently change the user's files.\nFollow these steps when handling file operations:\n1. First, view files to understand their content before making changes\n2. For edits, ensure you have the correct context and are making the right changes\n3. When creating files, make sure they're in the right location with proper formatting\n\"\"\"\n\n        # Define text editor tool\n        text_editor_tool = {\"name\": \"str_replace_editor\", \"type\": \"text_editor_20250124\"}\n\n        messages = [\n            {\n                \"role\": \"user\",\n                \"content\": f\"\"\"I need help with editing files. Here's what I want to do:\n\n{prompt}\n\nPlease use the text editor tool to help me with this. 
First, think through what you need to do, then use the appropriate tool.\n\"\"\",\n            }\n        ]\n\n        loop_count = 0\n        tool_use_count = 0\n        thinking_start_time = time.time()\n\n        while loop_count < max_loops:\n            loop_count += 1\n\n            console.rule(f\"[yellow]Agent Loop {loop_count}/{max_loops}[/yellow]\")\n            log_info(\"file_editor\", f\"Starting agent loop {loop_count}/{max_loops}\")\n\n            # Create message with text editor tool\n            message_args = {\n                \"model\": MODEL,\n                \"max_tokens\": 4096,\n                \"tools\": [text_editor_tool],\n                \"messages\": messages,\n                \"system\": system_prompt,\n                \"thinking\": {\"type\": \"enabled\", \"budget_tokens\": max_thinking_tokens},\n            }\n\n            # Use the beta.messages with betas parameter if token efficiency is enabled\n            if use_token_efficiency:\n                # Using token-efficient tools beta feature\n                message_args[\"betas\"] = [\"token-efficient-tools-2025-02-19\"]\n                response = client.beta.messages.create(**message_args)\n            else:\n                # Standard approach\n                response = client.messages.create(**message_args)\n\n            # Track token usage\n            if hasattr(response, \"usage\"):\n                input_tokens = getattr(response.usage, \"input_tokens\", 0)\n                output_tokens = getattr(response.usage, \"output_tokens\", 0)\n\n                input_tokens_total += input_tokens\n                output_tokens_total += output_tokens\n\n                console.print(\n                    f\"[dim]Loop {loop_count} tokens: Input={input_tokens}, Output={output_tokens}[/dim]\"\n                )\n                log_info(\n                    \"file_editor\", \n                    f\"Loop {loop_count} tokens: Input={input_tokens}, Output={output_tokens}\"\n          
      )\n\n            # Process response content\n            thinking_block = None\n            tool_use_block = None\n            text_block = None\n\n            for content_block in response.content:\n                if content_block.type == \"thinking\":\n                    thinking_block = content_block\n                    # Access the thinking attribute which contains the actual thinking text\n                    if hasattr(thinking_block, \"thinking\"):\n                        console.print(\n                            Panel(\n                                thinking_block.thinking,\n                                title=f\"Claude's Thinking (Loop {loop_count})\",\n                                border_style=\"blue\",\n                            )\n                        )\n                    else:\n                        console.print(\n                            Panel(\n                                \"Claude is thinking...\",\n                                title=f\"Claude's Thinking (Loop {loop_count})\",\n                                border_style=\"blue\",\n                            )\n                        )\n                elif content_block.type == \"tool_use\":\n                    tool_use_block = content_block\n                    tool_use_count += 1\n                elif content_block.type == \"text\":\n                    text_block = content_block\n\n            # If we got a final text response with no tool use, we're done\n            if text_block and not tool_use_block:\n                thinking_end_time = time.time()\n                thinking_duration = thinking_end_time - thinking_start_time\n\n                console.print(\n                    f\"\\n[bold green]Completed in {thinking_duration:.2f} seconds after {loop_count} loops and {tool_use_count} tool uses[/bold green]\"\n                )\n                log_info(\n                    \"file_editor\",\n                    f\"Completed in 
{thinking_duration:.2f} seconds after {loop_count} loops and {tool_use_count} tool uses\"\n                )\n\n                # Add the response to messages\n                messages.append(\n                    {\n                        \"role\": \"assistant\",\n                        \"content\": [\n                            *([thinking_block] if thinking_block else []),\n                            {\"type\": \"text\", \"text\": text_block.text},\n                        ],\n                    }\n                )\n\n                return text_block.text, input_tokens_total, output_tokens_total\n\n            # Handle tool use\n            if tool_use_block:\n                # Add the assistant's response to messages before handling tool calls\n                messages.append({\"role\": \"assistant\", \"content\": response.content})\n\n                console.print(\n                    f\"\\n[bold blue]Tool Call:[/bold blue] {tool_use_block.name}\"\n                )\n                log_info(\"file_editor\", f\"Tool Call: {tool_use_block.name}\")\n\n                # Handle the tool use\n                tool_result = handle_tool_use_func(tool_use_block.input)\n\n                # Format tool result for Claude\n                tool_result_message = {\n                    \"role\": \"user\",\n                    \"content\": [\n                        {\n                            \"type\": \"tool_result\",\n                            \"tool_use_id\": tool_use_block.id,\n                            \"content\": tool_result.get(\"error\") or tool_result.get(\"result\", \"\"),\n                        }\n                    ],\n                }\n                messages.append(tool_result_message)\n\n        # If we reach here, we hit the max loops\n        console.print(\n            f\"\\n[bold red]Warning: Reached maximum loops ({max_loops}) without completing the task[/bold red]\"\n        )\n        log_error(\n            \"file_editor\",\n        
    f\"Reached maximum loops ({max_loops}) without completing the task\"\n        )\n        return (\n            \"I wasn't able to complete the task within the allowed number of thinking steps. Please try a more specific prompt or increase the loop limit.\",\n            input_tokens_total,\n            output_tokens_total,\n        )\n\n# Expose the run_agent function at the module level\ndef run_agent(\n    client: Anthropic,\n    prompt: str,\n    handle_tool_use: Callable[[Dict[str, Any]], Dict[str, Any]],\n    max_tool_use_loops: int = 15,\n    token_efficient_tool_use: bool = True,\n) -> Tuple[int, int]:\n    \"\"\"\n    Run the file editor agent with the specified prompt.\n    \n    Args:\n        client: The Anthropic client\n        prompt: The prompt to send to Claude\n        handle_tool_use: Function to handle tool use requests\n        max_tool_use_loops: Maximum number of tool use loops\n        token_efficient_tool_use: Whether to use token-efficient tool use\n        \n    Returns:\n        Tuple containing input and output token counts\n    \"\"\"\n    log_info(\"file_editor\", f\"Running agent with prompt: {prompt}\")\n    \n    _, input_tokens, output_tokens = FileEditor.run_agent(\n        client=client,\n        prompt=prompt,\n        handle_tool_use_func=handle_tool_use,\n        max_loops=max_tool_use_loops,\n        use_token_efficiency=token_efficient_tool_use,\n        max_thinking_tokens=DEFAULT_THINKING_TOKENS\n    )\n    \n    return input_tokens, output_tokens\n"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/__init__.py",
    "content": ""
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/__init__.py",
    "content": ""
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/blog_agent/__init__.py",
    "content": "\"\"\"\nBlog agent package for the Vertical Slice Architecture.\nThis package provides blog management capabilities.\n\"\"\"\n\nfrom features.blog_agent.blog_agent import run_agent\nfrom features.blog_agent.blog_manager import BlogManager"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/blog_agent/blog_agent.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nBlog agent for the Vertical Slice Architecture implementation of the blog agent.\nThis module provides the agent interface for blog management operations.\n\"\"\"\n\nimport time\nfrom typing import Tuple, Dict, Any, List, Optional\nfrom rich.console import Console\nfrom rich.panel import Panel\nfrom rich.markdown import Markdown\nfrom anthropic import Anthropic\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error, display_token_usage\nfrom features.blog_agent.tool_handler import handle_tool_use\n\n# Initialize rich console\nconsole = Console()\n\n# Define constants\nMODEL = \"claude-3-7-sonnet-20250219\"\nDEFAULT_THINKING_TOKENS = 3000\n\nclass BlogAgent:\n    \"\"\"\n    Blog agent that provides an interface for AI-assisted blog management.\n    \"\"\"\n    \n    @staticmethod\n    def run_agent(\n        client: Anthropic,\n        prompt: str,\n        max_thinking_tokens: int = DEFAULT_THINKING_TOKENS,\n        max_loops: int = 10,\n        use_token_efficiency: bool = False,\n    ) -> Tuple[str, int, int]:\n        \"\"\"\n        Run the Claude agent with blog management capabilities.\n\n        Args:\n            client: The Anthropic client\n            prompt: The user's prompt\n            max_thinking_tokens: Maximum tokens for thinking\n            max_loops: Maximum number of tool use loops\n            use_token_efficiency: Whether to use token-efficient tool use beta feature\n\n        Returns:\n            Tuple containing:\n            - Final response from Claude (str)\n            - Total input tokens used (int)\n            - Total output tokens used (int)\n        \"\"\"\n        # Track token usage\n        input_tokens_total = 0\n        output_tokens_total = 0\n        system_prompt = \"\"\"You are a 
helpful AI assistant with blog management capabilities.\nYou have access to tools that can create, read, update, delete, and search blog posts.\nAlways think step by step about what you need to do before taking any action.\nBe helpful in suggesting blog post ideas and improvements when asked.\n\nAvailable commands:\n- create_post: Create a new blog post (title, content, author, tags)\n- get_post: Get a blog post by ID (post_id)\n- update_post: Update a blog post (post_id, title?, content?, tags?, published?)\n- delete_post: Delete a blog post (post_id)\n- list_posts: List blog posts (tag?, author?, published_only?)\n- search_posts: Search blog posts (query, search_content?, tag?, author?)\n- publish_post: Publish a blog post (post_id)\n- unpublish_post: Unpublish a blog post (post_id)\n\"\"\"\n\n        # Define blog management tool\n        blog_management_tool = {\n            \"name\": \"blog_management\",\n            \"description\": \"Manage blog posts including creation, editing, searching, and publishing\",\n            \"input_schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                    \"command\": {\n                        \"type\": \"string\",\n                        \"enum\": [\n                            \"create_post\", \"get_post\", \"update_post\", \"delete_post\",\n                            \"list_posts\", \"search_posts\", \"publish_post\", \"unpublish_post\"\n                        ],\n                        \"description\": \"The blog management command to execute\"\n                    }\n                },\n                \"required\": [\"command\"]\n            }\n        }\n\n        messages = [\n            {\n                \"role\": \"user\",\n                \"content\": f\"\"\"I need help managing my blog. Here's what I want to do:\n\n{prompt}\n\nPlease use the blog management tools to help me with this. 
First, think through what you need to do, then use the appropriate tools.\n\"\"\",\n            }\n        ]\n\n        loop_count = 0\n        tool_use_count = 0\n        thinking_start_time = time.time()\n\n        while loop_count < max_loops:\n            loop_count += 1\n\n            console.rule(f\"[yellow]Agent Loop {loop_count}/{max_loops}[/yellow]\")\n            log_info(\"blog_agent\", f\"Starting agent loop {loop_count}/{max_loops}\")\n\n            # Create message with blog management tool\n            message_args = {\n                \"model\": MODEL,\n                \"max_tokens\": 4096,\n                \"tools\": [blog_management_tool],\n                \"messages\": messages,\n                \"system\": system_prompt,\n                \"thinking\": {\"type\": \"enabled\", \"budget_tokens\": max_thinking_tokens},\n            }\n\n            # Use the beta.messages with betas parameter if token efficiency is enabled\n            if use_token_efficiency:\n                # Using token-efficient tools beta feature\n                message_args[\"betas\"] = [\"token-efficient-tools-2025-02-19\"]\n                response = client.beta.messages.create(**message_args)\n            else:\n                # Standard approach\n                response = client.messages.create(**message_args)\n\n            # Track token usage\n            if hasattr(response, \"usage\"):\n                input_tokens = getattr(response.usage, \"input_tokens\", 0)\n                output_tokens = getattr(response.usage, \"output_tokens\", 0)\n\n                input_tokens_total += input_tokens\n                output_tokens_total += output_tokens\n\n                console.print(\n                    f\"[dim]Loop {loop_count} tokens: Input={input_tokens}, Output={output_tokens}[/dim]\"\n                )\n                log_info(\n                    \"blog_agent\", \n                    f\"Loop {loop_count} tokens: Input={input_tokens}, Output={output_tokens}\"\n   
             )\n\n            # Process response content\n            thinking_block = None\n            tool_use_block = None\n            text_block = None\n\n            for content_block in response.content:\n                if content_block.type == \"thinking\":\n                    thinking_block = content_block\n                    # Access the thinking attribute which contains the actual thinking text\n                    if hasattr(thinking_block, \"thinking\"):\n                        console.print(\n                            Panel(\n                                thinking_block.thinking,\n                                title=f\"Claude's Thinking (Loop {loop_count})\",\n                                border_style=\"blue\",\n                            )\n                        )\n                    else:\n                        console.print(\n                            Panel(\n                                \"Claude is thinking...\",\n                                title=f\"Claude's Thinking (Loop {loop_count})\",\n                                border_style=\"blue\",\n                            )\n                        )\n                elif content_block.type == \"tool_use\":\n                    tool_use_block = content_block\n                    tool_use_count += 1\n                elif content_block.type == \"text\":\n                    text_block = content_block\n\n            # If we got a final text response with no tool use, we're done\n            if text_block and not tool_use_block:\n                thinking_end_time = time.time()\n                thinking_duration = thinking_end_time - thinking_start_time\n\n                console.print(\n                    f\"\\n[bold green]Completed in {thinking_duration:.2f} seconds after {loop_count} loops and {tool_use_count} tool uses[/bold green]\"\n                )\n                log_info(\n                    \"blog_agent\",\n                    f\"Completed in 
{thinking_duration:.2f} seconds after {loop_count} loops and {tool_use_count} tool uses\"\n                )\n\n                # Add the response to messages\n                messages.append(\n                    {\n                        \"role\": \"assistant\",\n                        \"content\": [\n                            *([thinking_block] if thinking_block else []),\n                            {\"type\": \"text\", \"text\": text_block.text},\n                        ],\n                    }\n                )\n\n                return text_block.text, input_tokens_total, output_tokens_total\n\n            # Handle tool use\n            if tool_use_block:\n                # Add the assistant's response to messages before handling tool calls\n                messages.append({\"role\": \"assistant\", \"content\": response.content})\n\n                console.print(\n                    f\"\\n[bold blue]Tool Call:[/bold blue] {tool_use_block.name}\"\n                )\n                log_info(\"blog_agent\", f\"Tool Call: {tool_use_block.name}\")\n\n                # Handle the tool use\n                tool_result = handle_tool_use(tool_use_block.input)\n\n                # Format tool result for Claude\n                tool_result_message = {\n                    \"role\": \"user\",\n                    \"content\": [\n                        {\n                            \"type\": \"tool_result\",\n                            \"tool_use_id\": tool_use_block.id,\n                            \"content\": tool_result.get(\"error\") or tool_result.get(\"result\", \"\"),\n                        }\n                    ],\n                }\n                \n                # If we have data in the result, include it as formatted markdown\n                if \"data\" in tool_result and tool_result[\"data\"]:\n                    data_json = json.dumps(tool_result[\"data\"], indent=2)\n                    tool_result_message[\"content\"][0][\"content\"] 
+= f\"\\n\\n```json\\n{data_json}\\n```\"\n                \n                messages.append(tool_result_message)\n\n        # If we reach here, we hit the max loops\n        console.print(\n            f\"\\n[bold red]Warning: Reached maximum loops ({max_loops}) without completing the task[/bold red]\"\n        )\n        log_error(\n            \"blog_agent\",\n            f\"Reached maximum loops ({max_loops}) without completing the task\"\n        )\n        return (\n            \"I wasn't able to complete the task within the allowed number of thinking steps. Please try a more specific prompt or increase the loop limit.\",\n            input_tokens_total,\n            output_tokens_total,\n        )\n\n# Expose the run_agent function at the module level\ndef run_agent(\n    client: Anthropic,\n    prompt: str,\n    max_tool_use_loops: int = 15,\n    token_efficient_tool_use: bool = True,\n) -> Tuple[int, int]:\n    \"\"\"\n    Run the blog agent with the specified prompt.\n    \n    Args:\n        client: The Anthropic client\n        prompt: The prompt to send to Claude\n        max_tool_use_loops: Maximum number of tool use loops\n        token_efficient_tool_use: Whether to use token-efficient tool use\n        \n    Returns:\n        Tuple containing input and output token counts\n    \"\"\"\n    log_info(\"blog_agent\", f\"Running agent with prompt: {prompt}\")\n    \n    _, input_tokens, output_tokens = BlogAgent.run_agent(\n        client=client,\n        prompt=prompt,\n        max_loops=max_tool_use_loops,\n        use_token_efficiency=token_efficient_tool_use,\n        max_thinking_tokens=DEFAULT_THINKING_TOKENS\n    )\n    \n    return input_tokens, output_tokens"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/blog_agent/blog_manager.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nBlog manager for the Vertical Slice Architecture implementation of the blog agent.\nThis module combines various blog tools to provide comprehensive blog management capabilities.\n\"\"\"\n\nimport sys\nimport os\nfrom typing import Dict, Any, List, Optional\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.blog_agent.model_tools import BlogOperationResult\nfrom features.blog_agent.create_tool import create_blog_post\nfrom features.blog_agent.read_tool import read_blog_post, list_blog_posts\nfrom features.blog_agent.update_tool import update_blog_post, publish_blog_post, unpublish_blog_post\nfrom features.blog_agent.delete_tool import delete_blog_post\nfrom features.blog_agent.search_tool import search_blog_posts\n\nclass BlogManager:\n    \"\"\"\n    Blog manager that combines various tools to provide blog management capabilities.\n    \"\"\"\n    \n    @staticmethod\n    def create_post(title: str, content: str, author: str, tags: List[str] = None) -> BlogOperationResult:\n        \"\"\"\n        Create a new blog post.\n        \n        Args:\n            title: The title of the blog post\n            content: The content of the blog post\n            author: The author of the blog post\n            tags: Optional list of tags\n            \n        Returns:\n            BlogOperationResult with result or error message\n        \"\"\"\n        log_info(\"blog_manager\", f\"Creating blog post: {title}\")\n        return create_blog_post(title, content, author, tags)\n    \n    @staticmethod\n    def get_post(post_id: str) -> BlogOperationResult:\n        \"\"\"\n        Get a blog post by ID.\n        \n        Args:\n            post_id: The ID of the blog post to get\n            \n        Returns:\n            
BlogOperationResult with the blog post or error message\n        \"\"\"\n        log_info(\"blog_manager\", f\"Getting blog post: {post_id}\")\n        return read_blog_post(post_id)\n    \n    @staticmethod\n    def update_post(post_id: str, title: Optional[str] = None, content: Optional[str] = None,\n                   tags: Optional[List[str]] = None, published: Optional[bool] = None) -> BlogOperationResult:\n        \"\"\"\n        Update a blog post.\n        \n        Args:\n            post_id: The ID of the blog post to update\n            title: Optional new title\n            content: Optional new content\n            tags: Optional new tags\n            published: Optional new publication status\n            \n        Returns:\n            BlogOperationResult with the updated blog post or error message\n        \"\"\"\n        log_info(\"blog_manager\", f\"Updating blog post: {post_id}\")\n        return update_blog_post(post_id, title, content, tags, published)\n    \n    @staticmethod\n    def delete_post(post_id: str) -> BlogOperationResult:\n        \"\"\"\n        Delete a blog post.\n        \n        Args:\n            post_id: The ID of the blog post to delete\n            \n        Returns:\n            BlogOperationResult with result or error message\n        \"\"\"\n        log_info(\"blog_manager\", f\"Deleting blog post: {post_id}\")\n        return delete_blog_post(post_id)\n    \n    @staticmethod\n    def list_posts(tag: Optional[str] = None, author: Optional[str] = None, \n                  published_only: bool = False) -> BlogOperationResult:\n        \"\"\"\n        List blog posts, optionally filtered by tag, author, or publication status.\n        \n        Args:\n            tag: Optional tag to filter by\n            author: Optional author to filter by\n            published_only: Whether to only return published posts\n            \n        Returns:\n            BlogOperationResult with a list of blog posts or error message\n     
   \"\"\"\n        log_info(\"blog_manager\", \"Listing blog posts\")\n        return list_blog_posts(tag, author, published_only)\n    \n    @staticmethod\n    def search_posts(query: str, search_content: bool = True, \n                    tag: Optional[str] = None, author: Optional[str] = None) -> BlogOperationResult:\n        \"\"\"\n        Search blog posts by query string, optionally filtered by tag or author.\n        \n        Args:\n            query: The search query\n            search_content: Whether to search in the content (otherwise just title and tags)\n            tag: Optional tag to filter by\n            author: Optional author to filter by\n            \n        Returns:\n            BlogOperationResult with a list of matching blog posts or error message\n        \"\"\"\n        log_info(\"blog_manager\", f\"Searching blog posts for: {query}\")\n        return search_blog_posts(query, search_content, tag, author)\n    \n    @staticmethod\n    def publish_post(post_id: str) -> BlogOperationResult:\n        \"\"\"\n        Publish a blog post.\n        \n        Args:\n            post_id: The ID of the blog post to publish\n            \n        Returns:\n            BlogOperationResult with the published blog post or error message\n        \"\"\"\n        log_info(\"blog_manager\", f\"Publishing blog post: {post_id}\")\n        return publish_blog_post(post_id)\n    \n    @staticmethod\n    def unpublish_post(post_id: str) -> BlogOperationResult:\n        \"\"\"\n        Unpublish a blog post.\n        \n        Args:\n            post_id: The ID of the blog post to unpublish\n            \n        Returns:\n            BlogOperationResult with the unpublished blog post or error message\n        \"\"\"\n        log_info(\"blog_manager\", f\"Unpublishing blog post: {post_id}\")\n        return unpublish_blog_post(post_id)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/blog_agent/create_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nCreate tool for the blog agent in the Vertical Slice Architecture.\nThis module provides blog post creation capabilities.\n\"\"\"\n\nimport sys\nimport os\nimport json\nimport uuid\nfrom datetime import datetime\nfrom typing import Dict, Any\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.blog_agent.model_tools import BlogPost, BlogOperationResult\n\n# Path to store blog posts\nBLOG_POSTS_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"..\", \"..\", \"..\", \"data\", \"blog_posts\")\n\ndef create_blog_post(title: str, content: str, author: str, tags: list = None) -> BlogOperationResult:\n    \"\"\"\n    Create a new blog post.\n    \n    Args:\n        title: Title of the blog post\n        content: Content of the blog post\n        author: Author of the blog post\n        tags: Optional list of tags\n        \n    Returns:\n        BlogOperationResult with result or error message\n    \"\"\"\n    log_info(\"create_tool\", f\"Creating blog post: {title}\")\n    \n    try:\n        # Create directory if it doesn't exist\n        os.makedirs(BLOG_POSTS_DIR, exist_ok=True)\n        \n        # Generate a unique ID and timestamps\n        post_id = str(uuid.uuid4())\n        current_time = datetime.now().isoformat()\n        \n        # Create the blog post\n        blog_post = BlogPost(\n            id=post_id,\n            title=title,\n            content=content,\n            author=author,\n            tags=tags or [],\n            published=False,\n            created_at=current_time,\n            updated_at=current_time\n        )\n        \n        # Save the blog post to a JSON file\n        file_path = os.path.join(BLOG_POSTS_DIR, f\"{post_id}.json\")\n        with open(file_path, 'w', encoding='utf-8') as 
f:\n            json.dump(blog_post.to_dict(), f, indent=2)\n        \n        log_info(\"create_tool\", f\"Created blog post: {title} with ID: {post_id}\")\n        return BlogOperationResult(\n            success=True, \n            message=f\"Successfully created blog post: {title}\", \n            data=blog_post.to_dict()\n        )\n    except Exception as e:\n        error_msg = f\"Failed to create blog post: {str(e)}\"\n        log_error(\"create_tool\", error_msg)\n        return BlogOperationResult(success=False, message=error_msg)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/blog_agent/delete_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nDelete tool for the blog agent in the Vertical Slice Architecture.\nThis module provides blog post deletion capabilities.\n\"\"\"\n\nimport sys\nimport os\nimport json\nfrom typing import Dict, Any\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.blog_agent.model_tools import BlogOperationResult\nfrom features.blog_agent.read_tool import read_blog_post\n\n# Path to store blog posts\nBLOG_POSTS_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"..\", \"..\", \"..\", \"data\", \"blog_posts\")\n\ndef delete_blog_post(post_id: str) -> BlogOperationResult:\n    \"\"\"\n    Delete a blog post by ID.\n    \n    Args:\n        post_id: The ID of the blog post to delete\n        \n    Returns:\n        BlogOperationResult with result or error message\n    \"\"\"\n    log_info(\"delete_tool\", f\"Deleting blog post with ID: {post_id}\")\n    \n    try:\n        # Verify the blog post exists\n        read_result = read_blog_post(post_id)\n        \n        if not read_result.success:\n            return read_result\n        \n        # Get the blog post title for the response message\n        blog_post_title = read_result.data[\"title\"]\n        \n        # Delete the blog post file\n        file_path = os.path.join(BLOG_POSTS_DIR, f\"{post_id}.json\")\n        os.remove(file_path)\n        \n        log_info(\"delete_tool\", f\"Deleted blog post: {blog_post_title}\")\n        return BlogOperationResult(\n            success=True, \n            message=f\"Successfully deleted blog post: {blog_post_title}\"\n        )\n    except Exception as e:\n        error_msg = f\"Failed to delete blog post: {str(e)}\"\n        log_error(\"delete_tool\", error_msg)\n        return BlogOperationResult(success=False, message=error_msg)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/blog_agent/model_tools.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nModels for the blog agent in the Vertical Slice Architecture.\n\"\"\"\n\nimport os\nimport sys\nfrom typing import Dict, Any, Optional, List, Union\nfrom dataclasses import dataclass\n\n# Add the project root to the Python path\nsys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../../')))\n\n\n@dataclass\nclass BlogPost:\n    \"\"\"Model representing a blog post.\"\"\"\n    \n    title: str\n    content: str\n    author: str\n    tags: List[str]\n    published: bool = False\n    id: Optional[str] = None\n    created_at: Optional[str] = None\n    updated_at: Optional[str] = None\n    \n    def to_dict(self) -> Dict[str, Any]:\n        \"\"\"Convert the blog post to a dictionary.\"\"\"\n        return {\n            \"id\": self.id,\n            \"title\": self.title,\n            \"content\": self.content,\n            \"author\": self.author,\n            \"tags\": self.tags,\n            \"published\": self.published,\n            \"created_at\": self.created_at,\n            \"updated_at\": self.updated_at\n        }\n    \n    @classmethod\n    def from_dict(cls, data: Dict[str, Any]) -> 'BlogPost':\n        \"\"\"Create a blog post from a dictionary.\"\"\"\n        return cls(\n            id=data.get(\"id\"),\n            title=data.get(\"title\", \"\"),\n            content=data.get(\"content\", \"\"),\n            author=data.get(\"author\", \"\"),\n            tags=data.get(\"tags\", []),\n            published=data.get(\"published\", False),\n            created_at=data.get(\"created_at\"),\n            updated_at=data.get(\"updated_at\")\n        )\n\n\nclass BlogOperationResult:\n    \"\"\"\n    Model representing the result of a blog operation.\n    \"\"\"\n    \n    def __init__(self, success: bool, message: str, data: Any = None):\n        \"\"\"\n        Initialize a blog operation result.\n        \n        Args:\n            success: Whether the operation was successful\n   
         message: A message describing the result\n            data: Optional data returned by the operation\n        \"\"\"\n        self.success = success\n        self.message = message\n        self.data = data\n    \n    def to_dict(self) -> Dict[str, Any]:\n        \"\"\"\n        Convert the result to a dictionary.\n        \n        Returns:\n            Dictionary representation of the result\n        \"\"\"\n        return {\n            \"success\": self.success,\n            \"message\": self.message,\n            \"data\": self.data\n        }\n\n\nclass ToolUseRequest:\n    \"\"\"\n    Model representing a tool use request from Claude.\n    \"\"\"\n    \n    def __init__(self, command: str, **kwargs):\n        \"\"\"\n        Initialize a tool use request.\n        \n        Args:\n            command: The command to execute\n            **kwargs: Additional arguments for the command\n        \"\"\"\n        self.command = command\n        self.kwargs = kwargs\n    \n    @classmethod\n    def from_dict(cls, data: Dict[str, Any]) -> 'ToolUseRequest':\n        \"\"\"\n        Create a tool use request from a dictionary.\n        \n        Args:\n            data: Dictionary containing the tool use request\n            \n        Returns:\n            A ToolUseRequest instance\n        \"\"\"\n        command = data.get(\"command\")\n        \n        # Extract all other keys as kwargs\n        kwargs = {k: v for k, v in data.items() if k != \"command\"}\n        \n        return cls(command, **kwargs)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/blog_agent/read_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nRead tool for the blog agent in the Vertical Slice Architecture.\nThis module provides blog post reading capabilities.\n\"\"\"\n\nimport sys\nimport os\nimport json\nimport glob\nfrom typing import Dict, Any, List, Optional\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.blog_agent.model_tools import BlogPost, BlogOperationResult\n\n# Path to store blog posts\nBLOG_POSTS_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"..\", \"..\", \"..\", \"data\", \"blog_posts\")\n\ndef read_blog_post(post_id: str) -> BlogOperationResult:\n    \"\"\"\n    Read a blog post by ID.\n    \n    Args:\n        post_id: The ID of the blog post to read\n        \n    Returns:\n        BlogOperationResult with the blog post or error message\n    \"\"\"\n    log_info(\"read_tool\", f\"Reading blog post with ID: {post_id}\")\n    \n    try:\n        # Read the blog post from the JSON file\n        file_path = os.path.join(BLOG_POSTS_DIR, f\"{post_id}.json\")\n        \n        if not os.path.exists(file_path):\n            error_msg = f\"Blog post with ID {post_id} not found\"\n            log_error(\"read_tool\", error_msg)\n            return BlogOperationResult(success=False, message=error_msg)\n        \n        with open(file_path, 'r', encoding='utf-8') as f:\n            blog_post_data = json.load(f)\n        \n        # Create a BlogPost object from the data\n        blog_post = BlogPost.from_dict(blog_post_data)\n        \n        log_info(\"read_tool\", f\"Successfully read blog post: {blog_post.title}\")\n        return BlogOperationResult(\n            success=True, \n            message=f\"Successfully read blog post: {blog_post.title}\", \n            data=blog_post.to_dict()\n        )\n    except Exception as e:\n        
error_msg = f\"Failed to read blog post: {str(e)}\"\n        log_error(\"read_tool\", error_msg)\n        return BlogOperationResult(success=False, message=error_msg)\n\ndef list_blog_posts(tag: Optional[str] = None, author: Optional[str] = None, \n                   published_only: bool = False) -> BlogOperationResult:\n    \"\"\"\n    List all blog posts, optionally filtered by tag, author, or publication status.\n    \n    Args:\n        tag: Optional tag to filter by\n        author: Optional author to filter by\n        published_only: Whether to only return published posts\n        \n    Returns:\n        BlogOperationResult with a list of blog posts or error message\n    \"\"\"\n    log_info(\"read_tool\", \"Listing blog posts\")\n    \n    try:\n        # Create directory if it doesn't exist\n        os.makedirs(BLOG_POSTS_DIR, exist_ok=True)\n        \n        # Get all JSON files in the blog posts directory\n        file_paths = glob.glob(os.path.join(BLOG_POSTS_DIR, \"*.json\"))\n        \n        blog_posts = []\n        \n        for file_path in file_paths:\n            try:\n                with open(file_path, 'r', encoding='utf-8') as f:\n                    blog_post_data = json.load(f)\n                    \n                # Apply filters\n                if published_only and not blog_post_data.get(\"published\", False):\n                    continue\n                    \n                if author and blog_post_data.get(\"author\") != author:\n                    continue\n                    \n                if tag and tag not in blog_post_data.get(\"tags\", []):\n                    continue\n                    \n                blog_posts.append(blog_post_data)\n            except Exception as e:\n                log_error(\"read_tool\", f\"Error reading file {file_path}: {str(e)}\")\n                continue\n        \n        log_info(\"read_tool\", f\"Listed {len(blog_posts)} blog posts\")\n        return BlogOperationResult(\n         
   success=True, \n            message=f\"Successfully listed {len(blog_posts)} blog posts\", \n            data=blog_posts\n        )\n    except Exception as e:\n        error_msg = f\"Failed to list blog posts: {str(e)}\"\n        log_error(\"read_tool\", error_msg)\n        return BlogOperationResult(success=False, message=error_msg)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/blog_agent/search_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nSearch tool for the blog agent in the Vertical Slice Architecture.\nThis module provides blog post searching capabilities.\n\"\"\"\n\nimport sys\nimport os\nimport json\nimport glob\nimport re\nfrom typing import Dict, Any, List, Optional\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.blog_agent.model_tools import BlogOperationResult\n\n# Path to store blog posts\nBLOG_POSTS_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"..\", \"..\", \"..\", \"data\", \"blog_posts\")\n\ndef search_blog_posts(query: str, search_content: bool = True, \n                     tag: Optional[str] = None, author: Optional[str] = None) -> BlogOperationResult:\n    \"\"\"\n    Search blog posts by query string, optionally filtered by tag or author.\n    \n    Args:\n        query: The search query\n        search_content: Whether to search in the content (otherwise just title and tags)\n        tag: Optional tag to filter by\n        author: Optional author to filter by\n        \n    Returns:\n        BlogOperationResult with a list of matching blog posts or error message\n    \"\"\"\n    log_info(\"search_tool\", f\"Searching blog posts for: {query}\")\n    \n    try:\n        # Create directory if it doesn't exist\n        os.makedirs(BLOG_POSTS_DIR, exist_ok=True)\n        \n        # Get all JSON files in the blog posts directory\n        file_paths = glob.glob(os.path.join(BLOG_POSTS_DIR, \"*.json\"))\n        \n        # Compile the search regex for case-insensitive search\n        search_regex = re.compile(query, re.IGNORECASE)\n        \n        matching_posts = []\n        \n        for file_path in file_paths:\n            try:\n                with open(file_path, 'r', encoding='utf-8') as f:\n                    
blog_post_data = json.load(f)\n                \n                # Apply filters\n                if author and blog_post_data.get(\"author\") != author:\n                    continue\n                    \n                if tag and tag not in blog_post_data.get(\"tags\", []):\n                    continue\n                \n                # Check for match in title\n                if search_regex.search(blog_post_data.get(\"title\", \"\")):\n                    matching_posts.append(blog_post_data)\n                    continue\n                \n                # Check for match in tags\n                if any(search_regex.search(t) for t in blog_post_data.get(\"tags\", [])):\n                    matching_posts.append(blog_post_data)\n                    continue\n                \n                # Check for match in content if requested\n                if search_content and search_regex.search(blog_post_data.get(\"content\", \"\")):\n                    matching_posts.append(blog_post_data)\n                    continue\n                \n            except Exception as e:\n                log_error(\"search_tool\", f\"Error processing file {file_path}: {str(e)}\")\n                continue\n        \n        log_info(\"search_tool\", f\"Found {len(matching_posts)} matching blog posts\")\n        return BlogOperationResult(\n            success=True, \n            message=f\"Found {len(matching_posts)} matching blog posts\", \n            data=matching_posts\n        )\n    except Exception as e:\n        error_msg = f\"Failed to search blog posts: {str(e)}\"\n        log_error(\"search_tool\", error_msg)\n        return BlogOperationResult(success=False, message=error_msg)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/blog_agent/tool_handler.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nTool handler for the Vertical Slice Architecture implementation of the blog agent.\nThis module handles tool use requests from the Claude agent.\n\"\"\"\n\nimport sys\nimport os\nimport json\nfrom typing import Dict, Any, List, Optional\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.blog_agent.model_tools import ToolUseRequest\nfrom features.blog_agent.blog_manager import BlogManager\n\ndef handle_tool_use(input_data: Dict[str, Any]) -> Dict[str, Any]:\n    \"\"\"\n    Handle tool use requests from the Claude agent.\n    \n    Args:\n        input_data: The tool use request data from Claude\n        \n    Returns:\n        Dictionary with the result or error message\n    \"\"\"\n    log_info(\"tool_handler\", f\"Received tool use request: {input_data}\")\n    \n    try:\n        # Parse the tool use request\n        request = ToolUseRequest.from_dict(input_data)\n        \n        # Handle the command\n        if request.command == \"create_post\":\n            title = request.kwargs.get(\"title\", \"\")\n            content = request.kwargs.get(\"content\", \"\")\n            author = request.kwargs.get(\"author\", \"\")\n            tags = request.kwargs.get(\"tags\", [])\n            \n            result = BlogManager.create_post(title, content, author, tags)\n            \n        elif request.command == \"get_post\":\n            post_id = request.kwargs.get(\"post_id\", \"\")\n            \n            result = BlogManager.get_post(post_id)\n            \n        elif request.command == \"update_post\":\n            post_id = request.kwargs.get(\"post_id\", \"\")\n            title = request.kwargs.get(\"title\")\n            content = request.kwargs.get(\"content\")\n            tags = request.kwargs.get(\"tags\")\n        
    published = request.kwargs.get(\"published\")\n            \n            result = BlogManager.update_post(post_id, title, content, tags, published)\n            \n        elif request.command == \"delete_post\":\n            post_id = request.kwargs.get(\"post_id\", \"\")\n            \n            result = BlogManager.delete_post(post_id)\n            \n        elif request.command == \"list_posts\":\n            tag = request.kwargs.get(\"tag\")\n            author = request.kwargs.get(\"author\")\n            published_only = request.kwargs.get(\"published_only\", False)\n            \n            result = BlogManager.list_posts(tag, author, published_only)\n            \n        elif request.command == \"search_posts\":\n            query = request.kwargs.get(\"query\", \"\")\n            search_content = request.kwargs.get(\"search_content\", True)\n            tag = request.kwargs.get(\"tag\")\n            author = request.kwargs.get(\"author\")\n            \n            result = BlogManager.search_posts(query, search_content, tag, author)\n            \n        elif request.command == \"publish_post\":\n            post_id = request.kwargs.get(\"post_id\", \"\")\n            \n            result = BlogManager.publish_post(post_id)\n            \n        elif request.command == \"unpublish_post\":\n            post_id = request.kwargs.get(\"post_id\", \"\")\n            \n            result = BlogManager.unpublish_post(post_id)\n            \n        else:\n            log_error(\"tool_handler\", f\"Unknown command: {request.command}\")\n            return {\"error\": f\"Unknown command: {request.command}\"}\n        \n        # Return the result\n        if result.success:\n            # Convert complex objects to JSON serializable format\n            if isinstance(result.data, dict) or isinstance(result.data, list):\n                # Convert to JSON string and back to ensure serializability\n                clean_data = 
json.loads(json.dumps(result.data))\n                return {\"result\": result.message, \"data\": clean_data}\n            else:\n                return {\"result\": result.message}\n        else:\n            return {\"error\": result.message}\n            \n    except Exception as e:\n        error_msg = f\"Error handling tool use: {str(e)}\"\n        log_error(\"tool_handler\", error_msg)\n        return {\"error\": error_msg}"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/blog_agent/update_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nUpdate tool for the blog agent in the Vertical Slice Architecture.\nThis module provides blog post updating capabilities.\n\"\"\"\n\nimport sys\nimport os\nimport json\nfrom datetime import datetime\nfrom typing import Dict, Any, List, Optional\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.blog_agent.model_tools import BlogPost, BlogOperationResult\nfrom features.blog_agent.read_tool import read_blog_post\n\n# Path to store blog posts\nBLOG_POSTS_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"..\", \"..\", \"..\", \"data\", \"blog_posts\")\n\ndef update_blog_post(post_id: str, title: Optional[str] = None, content: Optional[str] = None,\n                    tags: Optional[List[str]] = None, published: Optional[bool] = None) -> BlogOperationResult:\n    \"\"\"\n    Update a blog post by ID.\n    \n    Args:\n        post_id: The ID of the blog post to update\n        title: Optional new title\n        content: Optional new content\n        tags: Optional new tags\n        published: Optional new publication status\n        \n    Returns:\n        BlogOperationResult with the updated blog post or error message\n    \"\"\"\n    log_info(\"update_tool\", f\"Updating blog post with ID: {post_id}\")\n    \n    try:\n        # Read the existing blog post\n        read_result = read_blog_post(post_id)\n        \n        if not read_result.success:\n            return read_result\n        \n        # Get the existing blog post data\n        blog_post_data = read_result.data\n        \n        # Update the fields\n        if title is not None:\n            blog_post_data[\"title\"] = title\n            \n        if content is not None:\n            blog_post_data[\"content\"] = content\n            \n        if tags is 
not None:\n            blog_post_data[\"tags\"] = tags\n            \n        if published is not None:\n            blog_post_data[\"published\"] = published\n            \n        # Update the updated_at timestamp\n        blog_post_data[\"updated_at\"] = datetime.now().isoformat()\n        \n        # Save the updated blog post to the JSON file\n        file_path = os.path.join(BLOG_POSTS_DIR, f\"{post_id}.json\")\n        with open(file_path, 'w', encoding='utf-8') as f:\n            json.dump(blog_post_data, f, indent=2)\n        \n        log_info(\"update_tool\", f\"Updated blog post: {blog_post_data['title']}\")\n        return BlogOperationResult(\n            success=True, \n            message=f\"Successfully updated blog post: {blog_post_data['title']}\", \n            data=blog_post_data\n        )\n    except Exception as e:\n        error_msg = f\"Failed to update blog post: {str(e)}\"\n        log_error(\"update_tool\", error_msg)\n        return BlogOperationResult(success=False, message=error_msg)\n\ndef publish_blog_post(post_id: str) -> BlogOperationResult:\n    \"\"\"\n    Publish a blog post by ID.\n    \n    Args:\n        post_id: The ID of the blog post to publish\n        \n    Returns:\n        BlogOperationResult with the published blog post or error message\n    \"\"\"\n    log_info(\"update_tool\", f\"Publishing blog post with ID: {post_id}\")\n    return update_blog_post(post_id, published=True)\n\ndef unpublish_blog_post(post_id: str) -> BlogOperationResult:\n    \"\"\"\n    Unpublish a blog post by ID.\n    \n    Args:\n        post_id: The ID of the blog post to unpublish\n        \n    Returns:\n        BlogOperationResult with the unpublished blog post or error message\n    \"\"\"\n    log_info(\"update_tool\", f\"Unpublishing blog post with ID: {post_id}\")\n    return update_blog_post(post_id, published=False)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/blog_agent_v2/__init__.py",
    "content": "\"\"\"\nBlog agent package for the Vertical Slice Architecture.\nThis package provides blog management capabilities.\n\"\"\"\n\nfrom features.blog_agent.blog_agent import run_agent\nfrom features.blog_agent.blog_manager import BlogManager"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/blog_agent_v2/blog_agent.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nBlog agent for the Vertical Slice Architecture implementation of the blog agent.\nThis module provides the agent interface for blog management operations.\n\"\"\"\n\nimport time\nfrom typing import Tuple, Dict, Any, List, Optional\nfrom rich.console import Console\nfrom rich.panel import Panel\nfrom rich.markdown import Markdown\nfrom anthropic import Anthropic\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error, display_token_usage\nfrom features.blog_agent.tool_handler import handle_tool_use\n\n# Initialize rich console\nconsole = Console()\n\n# Define constants\nMODEL = \"claude-3-7-sonnet-20250219\"\nDEFAULT_THINKING_TOKENS = 3000\n\nclass BlogAgent:\n    \"\"\"\n    Blog agent that provides an interface for AI-assisted blog management.\n    \"\"\"\n    \n    @staticmethod\n    def run_agent(\n        client: Anthropic,\n        prompt: str,\n        max_thinking_tokens: int = DEFAULT_THINKING_TOKENS,\n        max_loops: int = 10,\n        use_token_efficiency: bool = False,\n    ) -> Tuple[str, int, int]:\n        \"\"\"\n        Run the Claude agent with blog management capabilities.\n\n        Args:\n            client: The Anthropic client\n            prompt: The user's prompt\n            max_thinking_tokens: Maximum tokens for thinking\n            max_loops: Maximum number of tool use loops\n            use_token_efficiency: Whether to use token-efficient tool use beta feature\n\n        Returns:\n            Tuple containing:\n            - Final response from Claude (str)\n            - Total input tokens used (int)\n            - Total output tokens used (int)\n        \"\"\"\n        # Track token usage\n        input_tokens_total = 0\n        output_tokens_total = 0\n        system_prompt = \"\"\"You are a 
helpful AI assistant with blog management capabilities.\nYou have access to tools that can create, read, update, delete, and search blog posts.\nAlways think step by step about what you need to do before taking any action.\nBe helpful in suggesting blog post ideas and improvements when asked.\n\nAvailable commands:\n- create_post: Create a new blog post (title, content, author, tags)\n- get_post: Get a blog post by ID (post_id)\n- update_post: Update a blog post (post_id, title?, content?, tags?, published?)\n- delete_post: Delete a blog post (post_id)\n- list_posts: List blog posts (tag?, author?, published_only?)\n- search_posts: Search blog posts (query, search_content?, tag?, author?)\n- publish_post: Publish a blog post (post_id)\n- unpublish_post: Unpublish a blog post (post_id)\n\"\"\"\n\n        # Define blog management tool\n        blog_management_tool = {\n            \"name\": \"blog_management\",\n            \"description\": \"Manage blog posts including creation, editing, searching, and publishing\",\n            \"input_schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                    \"command\": {\n                        \"type\": \"string\",\n                        \"enum\": [\n                            \"create_post\", \"get_post\", \"update_post\", \"delete_post\",\n                            \"list_posts\", \"search_posts\", \"publish_post\", \"unpublish_post\"\n                        ],\n                        \"description\": \"The blog management command to execute\"\n                    }\n                },\n                \"required\": [\"command\"]\n            }\n        }\n\n        messages = [\n            {\n                \"role\": \"user\",\n                \"content\": f\"\"\"I need help managing my blog. Here's what I want to do:\n\n{prompt}\n\nPlease use the blog management tools to help me with this. 
First, think through what you need to do, then use the appropriate tools.\n\"\"\",\n            }\n        ]\n\n        loop_count = 0\n        tool_use_count = 0\n        thinking_start_time = time.time()\n\n        while loop_count < max_loops:\n            loop_count += 1\n\n            console.rule(f\"[yellow]Agent Loop {loop_count}/{max_loops}[/yellow]\")\n            log_info(\"blog_agent\", f\"Starting agent loop {loop_count}/{max_loops}\")\n\n            # Create message with blog management tool\n            message_args = {\n                \"model\": MODEL,\n                \"max_tokens\": 4096,\n                \"tools\": [blog_management_tool],\n                \"messages\": messages,\n                \"system\": system_prompt,\n                \"thinking\": {\"type\": \"enabled\", \"budget_tokens\": max_thinking_tokens},\n            }\n\n            # Use the beta.messages with betas parameter if token efficiency is enabled\n            if use_token_efficiency:\n                # Using token-efficient tools beta feature\n                message_args[\"betas\"] = [\"token-efficient-tools-2025-02-19\"]\n                response = client.beta.messages.create(**message_args)\n            else:\n                # Standard approach\n                response = client.messages.create(**message_args)\n\n            # Track token usage\n            if hasattr(response, \"usage\"):\n                input_tokens = getattr(response.usage, \"input_tokens\", 0)\n                output_tokens = getattr(response.usage, \"output_tokens\", 0)\n\n                input_tokens_total += input_tokens\n                output_tokens_total += output_tokens\n\n                console.print(\n                    f\"[dim]Loop {loop_count} tokens: Input={input_tokens}, Output={output_tokens}[/dim]\"\n                )\n                log_info(\n                    \"blog_agent\", \n                    f\"Loop {loop_count} tokens: Input={input_tokens}, Output={output_tokens}\"\n   
             )\n\n            # Process response content\n            thinking_block = None\n            tool_use_block = None\n            text_block = None\n\n            for content_block in response.content:\n                if content_block.type == \"thinking\":\n                    thinking_block = content_block\n                    # Access the thinking attribute which contains the actual thinking text\n                    if hasattr(thinking_block, \"thinking\"):\n                        console.print(\n                            Panel(\n                                thinking_block.thinking,\n                                title=f\"Claude's Thinking (Loop {loop_count})\",\n                                border_style=\"blue\",\n                            )\n                        )\n                    else:\n                        console.print(\n                            Panel(\n                                \"Claude is thinking...\",\n                                title=f\"Claude's Thinking (Loop {loop_count})\",\n                                border_style=\"blue\",\n                            )\n                        )\n                elif content_block.type == \"tool_use\":\n                    tool_use_block = content_block\n                    tool_use_count += 1\n                elif content_block.type == \"text\":\n                    text_block = content_block\n\n            # If we got a final text response with no tool use, we're done\n            if text_block and not tool_use_block:\n                thinking_end_time = time.time()\n                thinking_duration = thinking_end_time - thinking_start_time\n\n                console.print(\n                    f\"\\n[bold green]Completed in {thinking_duration:.2f} seconds after {loop_count} loops and {tool_use_count} tool uses[/bold green]\"\n                )\n                log_info(\n                    \"blog_agent\",\n                    f\"Completed in 
{thinking_duration:.2f} seconds after {loop_count} loops and {tool_use_count} tool uses\"\n                )\n\n                # Add the response to messages\n                messages.append(\n                    {\n                        \"role\": \"assistant\",\n                        \"content\": [\n                            *([thinking_block] if thinking_block else []),\n                            {\"type\": \"text\", \"text\": text_block.text},\n                        ],\n                    }\n                )\n\n                return text_block.text, input_tokens_total, output_tokens_total\n\n            # Handle tool use\n            if tool_use_block:\n                # Add the assistant's response to messages before handling tool calls\n                messages.append({\"role\": \"assistant\", \"content\": response.content})\n\n                console.print(\n                    f\"\\n[bold blue]Tool Call:[/bold blue] {tool_use_block.name}\"\n                )\n                log_info(\"blog_agent\", f\"Tool Call: {tool_use_block.name}\")\n\n                # Handle the tool use\n                tool_result = handle_tool_use(tool_use_block.input)\n\n                # Format tool result for Claude\n                tool_result_message = {\n                    \"role\": \"user\",\n                    \"content\": [\n                        {\n                            \"type\": \"tool_result\",\n                            \"tool_use_id\": tool_use_block.id,\n                            \"content\": tool_result.get(\"error\") or tool_result.get(\"result\", \"\"),\n                        }\n                    ],\n                }\n                \n                # If we have data in the result, include it as formatted markdown\n                if \"data\" in tool_result and tool_result[\"data\"]:\n                    data_json = json.dumps(tool_result[\"data\"], indent=2)\n                    tool_result_message[\"content\"][0][\"content\"] 
+= f\"\\n\\n```json\\n{data_json}\\n```\"\n                \n                messages.append(tool_result_message)\n\n        # If we reach here, we hit the max loops\n        console.print(\n            f\"\\n[bold red]Warning: Reached maximum loops ({max_loops}) without completing the task[/bold red]\"\n        )\n        log_error(\n            \"blog_agent\",\n            f\"Reached maximum loops ({max_loops}) without completing the task\"\n        )\n        return (\n            \"I wasn't able to complete the task within the allowed number of thinking steps. Please try a more specific prompt or increase the loop limit.\",\n            input_tokens_total,\n            output_tokens_total,\n        )\n\n# Expose the run_agent function at the module level\ndef run_agent(\n    client: Anthropic,\n    prompt: str,\n    max_tool_use_loops: int = 15,\n    token_efficient_tool_use: bool = True,\n) -> Tuple[int, int]:\n    \"\"\"\n    Run the blog agent with the specified prompt.\n    \n    Args:\n        client: The Anthropic client\n        prompt: The prompt to send to Claude\n        max_tool_use_loops: Maximum number of tool use loops\n        token_efficient_tool_use: Whether to use token-efficient tool use\n        \n    Returns:\n        Tuple containing input and output token counts\n    \"\"\"\n    log_info(\"blog_agent\", f\"Running agent with prompt: {prompt}\")\n    \n    _, input_tokens, output_tokens = BlogAgent.run_agent(\n        client=client,\n        prompt=prompt,\n        max_loops=max_tool_use_loops,\n        use_token_efficiency=token_efficient_tool_use,\n        max_thinking_tokens=DEFAULT_THINKING_TOKENS\n    )\n    \n    return input_tokens, output_tokens"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/blog_agent_v2/blog_manager.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nBlog manager for the Vertical Slice Architecture implementation of the blog agent.\nThis module combines various blog tools to provide comprehensive blog management capabilities.\n\"\"\"\n\nimport sys\nimport os\nfrom typing import Dict, Any, List, Optional\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.blog_agent.model_tools import BlogOperationResult\nfrom features.blog_agent.create_tool import create_blog_post\nfrom features.blog_agent.read_tool import read_blog_post, list_blog_posts\nfrom features.blog_agent.update_tool import update_blog_post, publish_blog_post, unpublish_blog_post\nfrom features.blog_agent.delete_tool import delete_blog_post\nfrom features.blog_agent.search_tool import search_blog_posts\n\nclass BlogManager:\n    \"\"\"\n    Blog manager that combines various tools to provide blog management capabilities.\n    \"\"\"\n    \n    @staticmethod\n    def create_post(title: str, content: str, author: str, tags: List[str] = None) -> BlogOperationResult:\n        \"\"\"\n        Create a new blog post.\n        \n        Args:\n            title: The title of the blog post\n            content: The content of the blog post\n            author: The author of the blog post\n            tags: Optional list of tags\n            \n        Returns:\n            BlogOperationResult with result or error message\n        \"\"\"\n        log_info(\"blog_manager\", f\"Creating blog post: {title}\")\n        return create_blog_post(title, content, author, tags)\n    \n    @staticmethod\n    def get_post(post_id: str) -> BlogOperationResult:\n        \"\"\"\n        Get a blog post by ID.\n        \n        Args:\n            post_id: The ID of the blog post to get\n            \n        Returns:\n            
BlogOperationResult with the blog post or error message\n        \"\"\"\n        log_info(\"blog_manager\", f\"Getting blog post: {post_id}\")\n        return read_blog_post(post_id)\n    \n    @staticmethod\n    def update_post(post_id: str, title: Optional[str] = None, content: Optional[str] = None,\n                   tags: Optional[List[str]] = None, published: Optional[bool] = None) -> BlogOperationResult:\n        \"\"\"\n        Update a blog post.\n        \n        Args:\n            post_id: The ID of the blog post to update\n            title: Optional new title\n            content: Optional new content\n            tags: Optional new tags\n            published: Optional new publication status\n            \n        Returns:\n            BlogOperationResult with the updated blog post or error message\n        \"\"\"\n        log_info(\"blog_manager\", f\"Updating blog post: {post_id}\")\n        return update_blog_post(post_id, title, content, tags, published)\n    \n    @staticmethod\n    def delete_post(post_id: str) -> BlogOperationResult:\n        \"\"\"\n        Delete a blog post.\n        \n        Args:\n            post_id: The ID of the blog post to delete\n            \n        Returns:\n            BlogOperationResult with result or error message\n        \"\"\"\n        log_info(\"blog_manager\", f\"Deleting blog post: {post_id}\")\n        return delete_blog_post(post_id)\n    \n    @staticmethod\n    def list_posts(tag: Optional[str] = None, author: Optional[str] = None, \n                  published_only: bool = False) -> BlogOperationResult:\n        \"\"\"\n        List blog posts, optionally filtered by tag, author, or publication status.\n        \n        Args:\n            tag: Optional tag to filter by\n            author: Optional author to filter by\n            published_only: Whether to only return published posts\n            \n        Returns:\n            BlogOperationResult with a list of blog posts or error message\n     
   \"\"\"\n        log_info(\"blog_manager\", \"Listing blog posts\")\n        return list_blog_posts(tag, author, published_only)\n    \n    @staticmethod\n    def search_posts(query: str, search_content: bool = True, \n                    tag: Optional[str] = None, author: Optional[str] = None) -> BlogOperationResult:\n        \"\"\"\n        Search blog posts by query string, optionally filtered by tag or author.\n        \n        Args:\n            query: The search query\n            search_content: Whether to search in the content (otherwise just title and tags)\n            tag: Optional tag to filter by\n            author: Optional author to filter by\n            \n        Returns:\n            BlogOperationResult with a list of matching blog posts or error message\n        \"\"\"\n        log_info(\"blog_manager\", f\"Searching blog posts for: {query}\")\n        return search_blog_posts(query, search_content, tag, author)\n    \n    @staticmethod\n    def publish_post(post_id: str) -> BlogOperationResult:\n        \"\"\"\n        Publish a blog post.\n        \n        Args:\n            post_id: The ID of the blog post to publish\n            \n        Returns:\n            BlogOperationResult with the published blog post or error message\n        \"\"\"\n        log_info(\"blog_manager\", f\"Publishing blog post: {post_id}\")\n        return publish_blog_post(post_id)\n    \n    @staticmethod\n    def unpublish_post(post_id: str) -> BlogOperationResult:\n        \"\"\"\n        Unpublish a blog post.\n        \n        Args:\n            post_id: The ID of the blog post to unpublish\n            \n        Returns:\n            BlogOperationResult with the unpublished blog post or error message\n        \"\"\"\n        log_info(\"blog_manager\", f\"Unpublishing blog post: {post_id}\")\n        return unpublish_blog_post(post_id)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/blog_agent_v2/create_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nCreate tool for the blog agent in the Vertical Slice Architecture.\nThis module provides blog post creation capabilities.\n\"\"\"\n\nimport sys\nimport os\nimport json\nimport uuid\nfrom datetime import datetime\nfrom typing import Dict, Any\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.blog_agent.model_tools import BlogPost, BlogOperationResult\n\n# Path to store blog posts\nBLOG_POSTS_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"..\", \"..\", \"..\", \"data\", \"blog_posts\")\n\ndef create_blog_post(title: str, content: str, author: str, tags: list = None) -> BlogOperationResult:\n    \"\"\"\n    Create a new blog post.\n    \n    Args:\n        title: Title of the blog post\n        content: Content of the blog post\n        author: Author of the blog post\n        tags: Optional list of tags\n        \n    Returns:\n        BlogOperationResult with result or error message\n    \"\"\"\n    log_info(\"create_tool\", f\"Creating blog post: {title}\")\n    \n    try:\n        # Create directory if it doesn't exist\n        os.makedirs(BLOG_POSTS_DIR, exist_ok=True)\n        \n        # Generate a unique ID and timestamps\n        post_id = str(uuid.uuid4())\n        current_time = datetime.now().isoformat()\n        \n        # Create the blog post\n        blog_post = BlogPost(\n            id=post_id,\n            title=title,\n            content=content,\n            author=author,\n            tags=tags or [],\n            published=False,\n            created_at=current_time,\n            updated_at=current_time\n        )\n        \n        # Save the blog post to a JSON file\n        file_path = os.path.join(BLOG_POSTS_DIR, f\"{post_id}.json\")\n        with open(file_path, 'w', encoding='utf-8') as 
f:\n            json.dump(blog_post.to_dict(), f, indent=2)\n        \n        log_info(\"create_tool\", f\"Created blog post: {title} with ID: {post_id}\")\n        return BlogOperationResult(\n            success=True, \n            message=f\"Successfully created blog post: {title}\", \n            data=blog_post.to_dict()\n        )\n    except Exception as e:\n        error_msg = f\"Failed to create blog post: {str(e)}\"\n        log_error(\"create_tool\", error_msg)\n        return BlogOperationResult(success=False, message=error_msg)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/blog_agent_v2/delete_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nDelete tool for the blog agent in the Vertical Slice Architecture.\nThis module provides blog post deletion capabilities.\n\"\"\"\n\nimport sys\nimport os\nimport json\nfrom typing import Dict, Any\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.blog_agent.model_tools import BlogOperationResult\nfrom features.blog_agent.read_tool import read_blog_post\n\n# Path to store blog posts\nBLOG_POSTS_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"..\", \"..\", \"..\", \"data\", \"blog_posts\")\n\ndef delete_blog_post(post_id: str) -> BlogOperationResult:\n    \"\"\"\n    Delete a blog post by ID.\n    \n    Args:\n        post_id: The ID of the blog post to delete\n        \n    Returns:\n        BlogOperationResult with result or error message\n    \"\"\"\n    log_info(\"delete_tool\", f\"Deleting blog post with ID: {post_id}\")\n    \n    try:\n        # Verify the blog post exists\n        read_result = read_blog_post(post_id)\n        \n        if not read_result.success:\n            return read_result\n        \n        # Get the blog post title for the response message\n        blog_post_title = read_result.data[\"title\"]\n        \n        # Delete the blog post file\n        file_path = os.path.join(BLOG_POSTS_DIR, f\"{post_id}.json\")\n        os.remove(file_path)\n        \n        log_info(\"delete_tool\", f\"Deleted blog post: {blog_post_title}\")\n        return BlogOperationResult(\n            success=True, \n            message=f\"Successfully deleted blog post: {blog_post_title}\"\n        )\n    except Exception as e:\n        error_msg = f\"Failed to delete blog post: {str(e)}\"\n        log_error(\"delete_tool\", error_msg)\n        return BlogOperationResult(success=False, message=error_msg)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/blog_agent_v2/model_tools.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nModels for the blog agent in the Vertical Slice Architecture.\n\"\"\"\n\nimport os\nimport sys\nfrom typing import Dict, Any, Optional, List, Union\nfrom dataclasses import dataclass\n\n# Add the project root to the Python path\nsys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../../')))\n\n\n@dataclass\nclass BlogPost:\n    \"\"\"Model representing a blog post.\"\"\"\n    \n    title: str\n    content: str\n    author: str\n    tags: List[str]\n    published: bool = False\n    id: Optional[str] = None\n    created_at: Optional[str] = None\n    updated_at: Optional[str] = None\n    \n    def to_dict(self) -> Dict[str, Any]:\n        \"\"\"Convert the blog post to a dictionary.\"\"\"\n        return {\n            \"id\": self.id,\n            \"title\": self.title,\n            \"content\": self.content,\n            \"author\": self.author,\n            \"tags\": self.tags,\n            \"published\": self.published,\n            \"created_at\": self.created_at,\n            \"updated_at\": self.updated_at\n        }\n    \n    @classmethod\n    def from_dict(cls, data: Dict[str, Any]) -> 'BlogPost':\n        \"\"\"Create a blog post from a dictionary.\"\"\"\n        return cls(\n            id=data.get(\"id\"),\n            title=data.get(\"title\", \"\"),\n            content=data.get(\"content\", \"\"),\n            author=data.get(\"author\", \"\"),\n            tags=data.get(\"tags\", []),\n            published=data.get(\"published\", False),\n            created_at=data.get(\"created_at\"),\n            updated_at=data.get(\"updated_at\")\n        )\n\n\nclass BlogOperationResult:\n    \"\"\"\n    Model representing the result of a blog operation.\n    \"\"\"\n    \n    def __init__(self, success: bool, message: str, data: Any = None):\n        \"\"\"\n        Initialize a blog operation result.\n        \n        Args:\n            success: Whether the operation was successful\n   
         message: A message describing the result\n            data: Optional data returned by the operation\n        \"\"\"\n        self.success = success\n        self.message = message\n        self.data = data\n    \n    def to_dict(self) -> Dict[str, Any]:\n        \"\"\"\n        Convert the result to a dictionary.\n        \n        Returns:\n            Dictionary representation of the result\n        \"\"\"\n        return {\n            \"success\": self.success,\n            \"message\": self.message,\n            \"data\": self.data\n        }\n\n\nclass ToolUseRequest:\n    \"\"\"\n    Model representing a tool use request from Claude.\n    \"\"\"\n    \n    def __init__(self, command: str, **kwargs):\n        \"\"\"\n        Initialize a tool use request.\n        \n        Args:\n            command: The command to execute\n            **kwargs: Additional arguments for the command\n        \"\"\"\n        self.command = command\n        self.kwargs = kwargs\n    \n    @classmethod\n    def from_dict(cls, data: Dict[str, Any]) -> 'ToolUseRequest':\n        \"\"\"\n        Create a tool use request from a dictionary.\n        \n        Args:\n            data: Dictionary containing the tool use request\n            \n        Returns:\n            A ToolUseRequest instance\n        \"\"\"\n        command = data.get(\"command\")\n        \n        # Extract all other keys as kwargs\n        kwargs = {k: v for k, v in data.items() if k != \"command\"}\n        \n        return cls(command, **kwargs)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/blog_agent_v2/read_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nRead tool for the blog agent in the Vertical Slice Architecture.\nThis module provides blog post reading capabilities.\n\"\"\"\n\nimport sys\nimport os\nimport json\nimport glob\nfrom typing import Dict, Any, List, Optional\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.blog_agent.model_tools import BlogPost, BlogOperationResult\n\n# Path to store blog posts\nBLOG_POSTS_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"..\", \"..\", \"..\", \"data\", \"blog_posts\")\n\ndef read_blog_post(post_id: str) -> BlogOperationResult:\n    \"\"\"\n    Read a blog post by ID.\n    \n    Args:\n        post_id: The ID of the blog post to read\n        \n    Returns:\n        BlogOperationResult with the blog post or error message\n    \"\"\"\n    log_info(\"read_tool\", f\"Reading blog post with ID: {post_id}\")\n    \n    try:\n        # Read the blog post from the JSON file\n        file_path = os.path.join(BLOG_POSTS_DIR, f\"{post_id}.json\")\n        \n        if not os.path.exists(file_path):\n            error_msg = f\"Blog post with ID {post_id} not found\"\n            log_error(\"read_tool\", error_msg)\n            return BlogOperationResult(success=False, message=error_msg)\n        \n        with open(file_path, 'r', encoding='utf-8') as f:\n            blog_post_data = json.load(f)\n        \n        # Create a BlogPost object from the data\n        blog_post = BlogPost.from_dict(blog_post_data)\n        \n        log_info(\"read_tool\", f\"Successfully read blog post: {blog_post.title}\")\n        return BlogOperationResult(\n            success=True, \n            message=f\"Successfully read blog post: {blog_post.title}\", \n            data=blog_post.to_dict()\n        )\n    except Exception as e:\n        
error_msg = f\"Failed to read blog post: {str(e)}\"\n        log_error(\"read_tool\", error_msg)\n        return BlogOperationResult(success=False, message=error_msg)\n\ndef list_blog_posts(tag: Optional[str] = None, author: Optional[str] = None, \n                   published_only: bool = False) -> BlogOperationResult:\n    \"\"\"\n    List all blog posts, optionally filtered by tag, author, or publication status.\n    \n    Args:\n        tag: Optional tag to filter by\n        author: Optional author to filter by\n        published_only: Whether to only return published posts\n        \n    Returns:\n        BlogOperationResult with a list of blog posts or error message\n    \"\"\"\n    log_info(\"read_tool\", \"Listing blog posts\")\n    \n    try:\n        # Create directory if it doesn't exist\n        os.makedirs(BLOG_POSTS_DIR, exist_ok=True)\n        \n        # Get all JSON files in the blog posts directory\n        file_paths = glob.glob(os.path.join(BLOG_POSTS_DIR, \"*.json\"))\n        \n        blog_posts = []\n        \n        for file_path in file_paths:\n            try:\n                with open(file_path, 'r', encoding='utf-8') as f:\n                    blog_post_data = json.load(f)\n                    \n                # Apply filters\n                if published_only and not blog_post_data.get(\"published\", False):\n                    continue\n                    \n                if author and blog_post_data.get(\"author\") != author:\n                    continue\n                    \n                if tag and tag not in blog_post_data.get(\"tags\", []):\n                    continue\n                    \n                blog_posts.append(blog_post_data)\n            except Exception as e:\n                log_error(\"read_tool\", f\"Error reading file {file_path}: {str(e)}\")\n                continue\n        \n        log_info(\"read_tool\", f\"Listed {len(blog_posts)} blog posts\")\n        return BlogOperationResult(\n         
   success=True, \n            message=f\"Successfully listed {len(blog_posts)} blog posts\", \n            data=blog_posts\n        )\n    except Exception as e:\n        error_msg = f\"Failed to list blog posts: {str(e)}\"\n        log_error(\"read_tool\", error_msg)\n        return BlogOperationResult(success=False, message=error_msg)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/blog_agent_v2/search_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nSearch tool for the blog agent in the Vertical Slice Architecture.\nThis module provides blog post searching capabilities.\n\"\"\"\n\nimport sys\nimport os\nimport json\nimport glob\nimport re\nfrom typing import Dict, Any, List, Optional\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.blog_agent.model_tools import BlogOperationResult\n\n# Path to store blog posts\nBLOG_POSTS_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"..\", \"..\", \"..\", \"data\", \"blog_posts\")\n\ndef search_blog_posts(query: str, search_content: bool = True, \n                     tag: Optional[str] = None, author: Optional[str] = None) -> BlogOperationResult:\n    \"\"\"\n    Search blog posts by query string, optionally filtered by tag or author.\n    \n    Args:\n        query: The search query\n        search_content: Whether to search in the content (otherwise just title and tags)\n        tag: Optional tag to filter by\n        author: Optional author to filter by\n        \n    Returns:\n        BlogOperationResult with a list of matching blog posts or error message\n    \"\"\"\n    log_info(\"search_tool\", f\"Searching blog posts for: {query}\")\n    \n    try:\n        # Create directory if it doesn't exist\n        os.makedirs(BLOG_POSTS_DIR, exist_ok=True)\n        \n        # Get all JSON files in the blog posts directory\n        file_paths = glob.glob(os.path.join(BLOG_POSTS_DIR, \"*.json\"))\n        \n        # Compile the search regex for case-insensitive search\n        search_regex = re.compile(query, re.IGNORECASE)\n        \n        matching_posts = []\n        \n        for file_path in file_paths:\n            try:\n                with open(file_path, 'r', encoding='utf-8') as f:\n                    
blog_post_data = json.load(f)\n                \n                # Apply filters\n                if author and blog_post_data.get(\"author\") != author:\n                    continue\n                    \n                if tag and tag not in blog_post_data.get(\"tags\", []):\n                    continue\n                \n                # Check for match in title\n                if search_regex.search(blog_post_data.get(\"title\", \"\")):\n                    matching_posts.append(blog_post_data)\n                    continue\n                \n                # Check for match in tags\n                if any(search_regex.search(t) for t in blog_post_data.get(\"tags\", [])):\n                    matching_posts.append(blog_post_data)\n                    continue\n                \n                # Check for match in content if requested\n                if search_content and search_regex.search(blog_post_data.get(\"content\", \"\")):\n                    matching_posts.append(blog_post_data)\n                    continue\n                \n            except Exception as e:\n                log_error(\"search_tool\", f\"Error processing file {file_path}: {str(e)}\")\n                continue\n        \n        log_info(\"search_tool\", f\"Found {len(matching_posts)} matching blog posts\")\n        return BlogOperationResult(\n            success=True, \n            message=f\"Found {len(matching_posts)} matching blog posts\", \n            data=matching_posts\n        )\n    except Exception as e:\n        error_msg = f\"Failed to search blog posts: {str(e)}\"\n        log_error(\"search_tool\", error_msg)\n        return BlogOperationResult(success=False, message=error_msg)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/blog_agent_v2/tool_handler.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nTool handler for the Vertical Slice Architecture implementation of the blog agent.\nThis module handles tool use requests from the Claude agent.\n\"\"\"\n\nimport sys\nimport os\nimport json\nfrom typing import Dict, Any, List, Optional\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.blog_agent.model_tools import ToolUseRequest\nfrom features.blog_agent.blog_manager import BlogManager\n\ndef handle_tool_use(input_data: Dict[str, Any]) -> Dict[str, Any]:\n    \"\"\"\n    Handle tool use requests from the Claude agent.\n    \n    Args:\n        input_data: The tool use request data from Claude\n        \n    Returns:\n        Dictionary with the result or error message\n    \"\"\"\n    log_info(\"tool_handler\", f\"Received tool use request: {input_data}\")\n    \n    try:\n        # Parse the tool use request\n        request = ToolUseRequest.from_dict(input_data)\n        \n        # Handle the command\n        if request.command == \"create_post\":\n            title = request.kwargs.get(\"title\", \"\")\n            content = request.kwargs.get(\"content\", \"\")\n            author = request.kwargs.get(\"author\", \"\")\n            tags = request.kwargs.get(\"tags\", [])\n            \n            result = BlogManager.create_post(title, content, author, tags)\n            \n        elif request.command == \"get_post\":\n            post_id = request.kwargs.get(\"post_id\", \"\")\n            \n            result = BlogManager.get_post(post_id)\n            \n        elif request.command == \"update_post\":\n            post_id = request.kwargs.get(\"post_id\", \"\")\n            title = request.kwargs.get(\"title\")\n            content = request.kwargs.get(\"content\")\n            tags = request.kwargs.get(\"tags\")\n        
    published = request.kwargs.get(\"published\")\n            \n            result = BlogManager.update_post(post_id, title, content, tags, published)\n            \n        elif request.command == \"delete_post\":\n            post_id = request.kwargs.get(\"post_id\", \"\")\n            \n            result = BlogManager.delete_post(post_id)\n            \n        elif request.command == \"list_posts\":\n            tag = request.kwargs.get(\"tag\")\n            author = request.kwargs.get(\"author\")\n            published_only = request.kwargs.get(\"published_only\", False)\n            \n            result = BlogManager.list_posts(tag, author, published_only)\n            \n        elif request.command == \"search_posts\":\n            query = request.kwargs.get(\"query\", \"\")\n            search_content = request.kwargs.get(\"search_content\", True)\n            tag = request.kwargs.get(\"tag\")\n            author = request.kwargs.get(\"author\")\n            \n            result = BlogManager.search_posts(query, search_content, tag, author)\n            \n        elif request.command == \"publish_post\":\n            post_id = request.kwargs.get(\"post_id\", \"\")\n            \n            result = BlogManager.publish_post(post_id)\n            \n        elif request.command == \"unpublish_post\":\n            post_id = request.kwargs.get(\"post_id\", \"\")\n            \n            result = BlogManager.unpublish_post(post_id)\n            \n        else:\n            log_error(\"tool_handler\", f\"Unknown command: {request.command}\")\n            return {\"error\": f\"Unknown command: {request.command}\"}\n        \n        # Return the result\n        if result.success:\n            # Convert complex objects to JSON serializable format\n            if isinstance(result.data, dict) or isinstance(result.data, list):\n                # Convert to JSON string and back to ensure serializability\n                clean_data = 
json.loads(json.dumps(result.data))\n                return {\"result\": result.message, \"data\": clean_data}\n            else:\n                return {\"result\": result.message}\n        else:\n            return {\"error\": result.message}\n            \n    except Exception as e:\n        error_msg = f\"Error handling tool use: {str(e)}\"\n        log_error(\"tool_handler\", error_msg)\n        return {\"error\": error_msg}"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/blog_agent_v2/update_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nUpdate tool for the blog agent in the Vertical Slice Architecture.\nThis module provides blog post updating capabilities.\n\"\"\"\n\nimport sys\nimport os\nimport json\nfrom datetime import datetime\nfrom typing import Dict, Any, List, Optional\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.blog_agent.model_tools import BlogPost, BlogOperationResult\nfrom features.blog_agent.read_tool import read_blog_post\n\n# Path to store blog posts\nBLOG_POSTS_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"..\", \"..\", \"..\", \"data\", \"blog_posts\")\n\ndef update_blog_post(post_id: str, title: Optional[str] = None, content: Optional[str] = None,\n                    tags: Optional[List[str]] = None, published: Optional[bool] = None) -> BlogOperationResult:\n    \"\"\"\n    Update a blog post by ID.\n    \n    Args:\n        post_id: The ID of the blog post to update\n        title: Optional new title\n        content: Optional new content\n        tags: Optional new tags\n        published: Optional new publication status\n        \n    Returns:\n        BlogOperationResult with the updated blog post or error message\n    \"\"\"\n    log_info(\"update_tool\", f\"Updating blog post with ID: {post_id}\")\n    \n    try:\n        # Read the existing blog post\n        read_result = read_blog_post(post_id)\n        \n        if not read_result.success:\n            return read_result\n        \n        # Get the existing blog post data\n        blog_post_data = read_result.data\n        \n        # Update the fields\n        if title is not None:\n            blog_post_data[\"title\"] = title\n            \n        if content is not None:\n            blog_post_data[\"content\"] = content\n            \n        if tags is 
not None:\n            blog_post_data[\"tags\"] = tags\n            \n        if published is not None:\n            blog_post_data[\"published\"] = published\n            \n        # Update the updated_at timestamp\n        blog_post_data[\"updated_at\"] = datetime.now().isoformat()\n        \n        # Save the updated blog post to the JSON file\n        file_path = os.path.join(BLOG_POSTS_DIR, f\"{post_id}.json\")\n        with open(file_path, 'w', encoding='utf-8') as f:\n            json.dump(blog_post_data, f, indent=2)\n        \n        log_info(\"update_tool\", f\"Updated blog post: {blog_post_data['title']}\")\n        return BlogOperationResult(\n            success=True, \n            message=f\"Successfully updated blog post: {blog_post_data['title']}\", \n            data=blog_post_data\n        )\n    except Exception as e:\n        error_msg = f\"Failed to update blog post: {str(e)}\"\n        log_error(\"update_tool\", error_msg)\n        return BlogOperationResult(success=False, message=error_msg)\n\ndef publish_blog_post(post_id: str) -> BlogOperationResult:\n    \"\"\"\n    Publish a blog post by ID.\n    \n    Args:\n        post_id: The ID of the blog post to publish\n        \n    Returns:\n        BlogOperationResult with the published blog post or error message\n    \"\"\"\n    log_info(\"update_tool\", f\"Publishing blog post with ID: {post_id}\")\n    return update_blog_post(post_id, published=True)\n\ndef unpublish_blog_post(post_id: str) -> BlogOperationResult:\n    \"\"\"\n    Unpublish a blog post by ID.\n    \n    Args:\n        post_id: The ID of the blog post to unpublish\n        \n    Returns:\n        BlogOperationResult with the unpublished blog post or error message\n    \"\"\"\n    log_info(\"update_tool\", f\"Unpublishing blog post with ID: {post_id}\")\n    return update_blog_post(post_id, published=False)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent/__init__.py",
    "content": ""
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent/api_tools.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nAPI layer for file operations in the Vertical Slice Architecture.\n\"\"\"\n\nimport os\nimport sys\nimport traceback\nfrom typing import Dict, Any, Optional, List, Union\n\n# Add the project root to the Python path\nsys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../../')))\n\nfrom shared.utils import console\nfrom features.file_operations.service import FileOperationService\nfrom features.file_operations.model import ToolUseRequest, FileOperationResult\n\nclass FileOperationsAPI:\n    \"\"\"\n    API for file operations.\n    \"\"\"\n    \n    @staticmethod\n    def handle_tool_use(tool_use: Dict[str, Any]) -> Dict[str, Any]:\n        \"\"\"\n        Handle text editor tool use from Claude.\n\n        Args:\n            tool_use: The tool use request from Claude\n\n        Returns:\n            Dictionary with result or error to send back to Claude\n        \"\"\"\n        try:\n            # Convert the tool use dictionary to a ToolUseRequest object\n            request = ToolUseRequest.from_dict(tool_use)\n            \n            console.log(f\"[handle_tool_use] Received command: {request.command}, path: {request.path}\")\n\n            if not request.command:\n                error_msg = \"No command specified in tool use request\"\n                console.log(f\"[handle_tool_use] Error: {error_msg}\")\n                return {\"error\": error_msg}\n\n            if not request.path and request.command != \"undo_edit\":  # undo_edit might not need a path\n                error_msg = \"No path specified in tool use request\"\n                console.log(f\"[handle_tool_use] Error: {error_msg}\")\n                return {\"error\": error_msg}\n\n            # The path normalization is now handled in each file operation function\n            console.print(f\"[blue]Executing {request.command} command on {request.path}[/blue]\")\n\n            result = None\n            \n            if 
request.command == \"view\":\n                view_range = request.kwargs.get(\"view_range\")\n                console.log(\n                    f\"[handle_tool_use] Calling view_file with view_range: {view_range}\"\n                )\n                result = FileOperationService.view_file(request.path, view_range)\n\n            elif request.command == \"str_replace\":\n                old_str = request.kwargs.get(\"old_str\")\n                new_str = request.kwargs.get(\"new_str\")\n                console.log(f\"[handle_tool_use] Calling str_replace\")\n                result = FileOperationService.str_replace(request.path, old_str, new_str)\n\n            elif request.command == \"create\":\n                file_text = request.kwargs.get(\"file_text\")\n                console.log(f\"[handle_tool_use] Calling create_file\")\n                result = FileOperationService.create_file(request.path, file_text)\n\n            elif request.command == \"insert\":\n                insert_line = request.kwargs.get(\"insert_line\")\n                new_str = request.kwargs.get(\"new_str\")\n                console.log(f\"[handle_tool_use] Calling insert_text at line: {insert_line}\")\n                result = FileOperationService.insert_text(request.path, insert_line, new_str)\n\n            elif request.command == \"undo_edit\":\n                console.log(f\"[handle_tool_use] Calling undo_edit\")\n                result = FileOperationService.undo_edit(request.path)\n\n            else:\n                error_msg = f\"Unknown command: {request.command}\"\n                console.print(f\"[red]{error_msg}[/red]\")\n                console.log(f\"[handle_tool_use] Error: {error_msg}\")\n                return {\"error\": error_msg}\n            \n            # Convert the result to a dictionary\n            if result.success:\n                return {\"result\": result.data if result.data is not None else result.message}\n            else:\n                return 
{\"error\": result.message}\n                \n        except Exception as e:\n            error_msg = f\"Error handling tool use: {str(e)}\"\n            console.print(f\"[red]{error_msg}[/red]\")\n            console.log(f\"[handle_tool_use] Error: {str(e)}\")\n            console.log(traceback.format_exc())\n            return {\"error\": error_msg}\n"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent/create_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nCreate tool for the Vertical Slice Architecture implementation of the file editor agent.\nThis module provides file creation capabilities.\n\"\"\"\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.file_operations.model_tools import FileOperationResult\nfrom features.file_operations.write_tool import write_file\n\ndef create_file(path: str, content: str) -> FileOperationResult:\n    \"\"\"\n    Create a new file with the specified content.\n    \n    Args:\n        path: The path to the file to create\n        content: The content to write to the file\n        \n    Returns:\n        FileOperationResult with result or error message\n    \"\"\"\n    log_info(\"create_tool\", f\"Creating file {path}\")\n    \n    try:\n        # Create directory if it doesn't exist\n        os.makedirs(os.path.dirname(os.path.abspath(path)), exist_ok=True)\n        \n        # Use the write_file function to create the file\n        return write_file(path, content)\n    except Exception as e:\n        error_msg = f\"Failed to create file {path}: {str(e)}\"\n        log_error(\"create_tool\", error_msg)\n        return FileOperationResult(success=False, content=\"\", message=error_msg)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent/file_agent.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nFile agent for the Vertical Slice Architecture implementation of the file editor agent.\nThis module provides the agent interface for file operations.\n\"\"\"\n\nimport time\nfrom typing import Tuple, Dict, Any, List, Optional, Callable\nfrom rich.console import Console\nfrom rich.panel import Panel\nfrom anthropic import Anthropic\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error, display_token_usage\nfrom features.file_operations.model_tools import FileOperationResult\nfrom features.file_operations.tool_handler import handle_tool_use\n\n# Initialize rich console\nconsole = Console()\n\n# Define constants\nMODEL = \"claude-3-7-sonnet-20250219\"\nDEFAULT_THINKING_TOKENS = 3000\n\nclass FileAgent:\n    \"\"\"\n    File agent that provides an interface for AI-assisted file operations.\n    \"\"\"\n    \n    @staticmethod\n    def run_agent(\n        client: Anthropic,\n        prompt: str,\n        max_thinking_tokens: int = DEFAULT_THINKING_TOKENS,\n        max_loops: int = 10,\n        use_token_efficiency: bool = False,\n    ) -> Tuple[str, int, int]:\n        \"\"\"\n        Run the Claude agent with file editing capabilities.\n\n        Args:\n            client: The Anthropic client\n            prompt: The user's prompt\n            max_thinking_tokens: Maximum tokens for thinking\n            max_loops: Maximum number of tool use loops\n            use_token_efficiency: Whether to use token-efficient tool use beta feature\n\n        Returns:\n            Tuple containing:\n            - Final response from Claude (str)\n            - Total input tokens used (int)\n            - Total output tokens used (int)\n        \"\"\"\n        # Track token usage\n        input_tokens_total = 0\n        output_tokens_total = 
0\n        system_prompt = \"\"\"You are a helpful AI assistant with text editing capabilities.\nYou have access to a text editor tool that can view, edit, and create files.\nAlways think step by step about what you need to do before taking any action.\nBe careful when making edits to files, as they can permanently change the user's files.\nFollow these steps when handling file operations:\n1. First, view files to understand their content before making changes\n2. For edits, ensure you have the correct context and are making the right changes\n3. When creating files, make sure they're in the right location with proper formatting\n\"\"\"\n\n        # Define text editor tool\n        text_editor_tool = {\"name\": \"str_replace_editor\", \"type\": \"text_editor_20250124\"}\n\n        messages = [\n            {\n                \"role\": \"user\",\n                \"content\": f\"\"\"I need help with editing files. Here's what I want to do:\n\n{prompt}\n\nPlease use the text editor tool to help me with this. 
First, think through what you need to do, then use the appropriate tool.\n\"\"\",\n            }\n        ]\n\n        loop_count = 0\n        tool_use_count = 0\n        thinking_start_time = time.time()\n\n        while loop_count < max_loops:\n            loop_count += 1\n\n            console.rule(f\"[yellow]Agent Loop {loop_count}/{max_loops}[/yellow]\")\n            log_info(\"file_agent\", f\"Starting agent loop {loop_count}/{max_loops}\")\n\n            # Create message with text editor tool\n            message_args = {\n                \"model\": MODEL,\n                \"max_tokens\": 4096,\n                \"tools\": [text_editor_tool],\n                \"messages\": messages,\n                \"system\": system_prompt,\n                \"thinking\": {\"type\": \"enabled\", \"budget_tokens\": max_thinking_tokens},\n            }\n\n            # Use the beta.messages with betas parameter if token efficiency is enabled\n            if use_token_efficiency:\n                # Using token-efficient tools beta feature\n                message_args[\"betas\"] = [\"token-efficient-tools-2025-02-19\"]\n                response = client.beta.messages.create(**message_args)\n            else:\n                # Standard approach\n                response = client.messages.create(**message_args)\n\n            # Track token usage\n            if hasattr(response, \"usage\"):\n                input_tokens = getattr(response.usage, \"input_tokens\", 0)\n                output_tokens = getattr(response.usage, \"output_tokens\", 0)\n\n                input_tokens_total += input_tokens\n                output_tokens_total += output_tokens\n\n                console.print(\n                    f\"[dim]Loop {loop_count} tokens: Input={input_tokens}, Output={output_tokens}[/dim]\"\n                )\n                log_info(\n                    \"file_agent\", \n                    f\"Loop {loop_count} tokens: Input={input_tokens}, Output={output_tokens}\"\n            
    )\n\n            # Process response content\n            thinking_block = None\n            tool_use_block = None\n            text_block = None\n\n            for content_block in response.content:\n                if content_block.type == \"thinking\":\n                    thinking_block = content_block\n                    # Access the thinking attribute which contains the actual thinking text\n                    if hasattr(thinking_block, \"thinking\"):\n                        console.print(\n                            Panel(\n                                thinking_block.thinking,\n                                title=f\"Claude's Thinking (Loop {loop_count})\",\n                                border_style=\"blue\",\n                            )\n                        )\n                    else:\n                        console.print(\n                            Panel(\n                                \"Claude is thinking...\",\n                                title=f\"Claude's Thinking (Loop {loop_count})\",\n                                border_style=\"blue\",\n                            )\n                        )\n                elif content_block.type == \"tool_use\":\n                    tool_use_block = content_block\n                    tool_use_count += 1\n                elif content_block.type == \"text\":\n                    text_block = content_block\n\n            # If we got a final text response with no tool use, we're done\n            if text_block and not tool_use_block:\n                thinking_end_time = time.time()\n                thinking_duration = thinking_end_time - thinking_start_time\n\n                console.print(\n                    f\"\\n[bold green]Completed in {thinking_duration:.2f} seconds after {loop_count} loops and {tool_use_count} tool uses[/bold green]\"\n                )\n                log_info(\n                    \"file_agent\",\n                    f\"Completed in {thinking_duration:.2f} 
seconds after {loop_count} loops and {tool_use_count} tool uses\"\n                )\n\n                # Add the response to messages\n                messages.append(\n                    {\n                        \"role\": \"assistant\",\n                        \"content\": [\n                            *([thinking_block] if thinking_block else []),\n                            {\"type\": \"text\", \"text\": text_block.text},\n                        ],\n                    }\n                )\n\n                return text_block.text, input_tokens_total, output_tokens_total\n\n            # Handle tool use\n            if tool_use_block:\n                # Add the assistant's response to messages before handling tool calls\n                messages.append({\"role\": \"assistant\", \"content\": response.content})\n\n                console.print(\n                    f\"\\n[bold blue]Tool Call:[/bold blue] {tool_use_block.name}\"\n                )\n                log_info(\"file_agent\", f\"Tool Call: {tool_use_block.name}\")\n\n                # Handle the tool use with our handler\n                tool_result = handle_tool_use(tool_use_block.input)\n\n                # Format tool result for Claude\n                tool_result_message = {\n                    \"role\": \"user\",\n                    \"content\": [\n                        {\n                            \"type\": \"tool_result\",\n                            \"tool_use_id\": tool_use_block.id,\n                            \"content\": tool_result.get(\"error\") or tool_result.get(\"result\", \"\"),\n                        }\n                    ],\n                }\n                messages.append(tool_result_message)\n\n        # If we reach here, we hit the max loops\n        console.print(\n            f\"\\n[bold red]Warning: Reached maximum loops ({max_loops}) without completing the task[/bold red]\"\n        )\n        log_error(\n            \"file_agent\",\n            
f\"Reached maximum loops ({max_loops}) without completing the task\"\n        )\n        return (\n            \"I wasn't able to complete the task within the allowed number of thinking steps. Please try a more specific prompt or increase the loop limit.\",\n            input_tokens_total,\n            output_tokens_total,\n        )\n\n# Expose the run_agent function at the module level\ndef run_agent(\n    client: Anthropic,\n    prompt: str,\n    max_tool_use_loops: int = 15,\n    token_efficient_tool_use: bool = True,\n) -> Tuple[int, int]:\n    \"\"\"\n    Run the file editor agent with the specified prompt.\n    \n    Args:\n        client: The Anthropic client\n        prompt: The prompt to send to Claude\n        max_tool_use_loops: Maximum number of tool use loops\n        token_efficient_tool_use: Whether to use token-efficient tool use\n        \n    Returns:\n        Tuple containing input and output token counts\n    \"\"\"\n    log_info(\"file_agent\", f\"Running agent with prompt: {prompt}\")\n    \n    _, input_tokens, output_tokens = FileAgent.run_agent(\n        client=client,\n        prompt=prompt,\n        max_loops=max_tool_use_loops,\n        use_token_efficiency=token_efficient_tool_use,\n        max_thinking_tokens=DEFAULT_THINKING_TOKENS\n    )\n    \n    return input_tokens, output_tokens"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent/file_editor.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nFile editor for the Vertical Slice Architecture implementation of the file editor agent.\nThis module combines reading and writing capabilities for file editing.\n\"\"\"\n\nimport sys\nimport os\nfrom typing import Dict, Any, Tuple, Optional\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.file_operations.model_tools import FileOperationResult\nfrom features.file_operations.file_writer import FileWriter\nfrom features.file_operations.read_tool import read_file\n\nclass FileEditor:\n    \"\"\"\n    File editor that combines reading and writing capabilities for file editing.\n    \"\"\"\n    \n    @staticmethod\n    def read(path: str, start_line: Optional[int] = None, end_line: Optional[int] = None) -> FileOperationResult:\n        \"\"\"\n        Read the contents of a file.\n        \n        Args:\n            path: The path to the file to read\n            start_line: Optional start line (1-indexed)\n            end_line: Optional end line (1-indexed, -1 for end of file)\n            \n        Returns:\n            FileOperationResult with content or error message\n        \"\"\"\n        log_info(\"file_editor\", f\"Reading file {path} with range {start_line}-{end_line}\")\n        return read_file(path, start_line, end_line)\n    \n    @staticmethod\n    def view_file(path: str, view_range=None) -> FileOperationResult:\n        \"\"\"\n        View the contents of a file with optional range.\n        \n        Args:\n            path: The path to the file to view\n            view_range: Optional tuple of (start_line, end_line)\n            \n        Returns:\n            FileOperationResult with content or error message\n        \"\"\"\n        start_line = None\n        end_line = None\n        \n        if view_range:\n     
       start_line, end_line = view_range\n            \n        log_info(\"file_editor\", f\"Viewing file {path} with range {start_line}-{end_line}\")\n        \n        return FileEditor.read(path, start_line, end_line)\n    \n    @staticmethod\n    def edit_file(path: str, old_str: str, new_str: str) -> FileOperationResult:\n        \"\"\"\n        Edit a file by replacing one string with another.\n        \n        Args:\n            path: The path to the file to edit\n            old_str: The string to replace\n            new_str: The string to replace it with\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_editor\", f\"Editing file {path}\")\n        \n        # First, read the file to check if it exists\n        read_result = FileEditor.read(path)\n        if not read_result.success:\n            log_error(\"file_editor\", f\"Cannot edit file that can't be read: {read_result.message}\")\n            return read_result\n        \n        # Then, use the file writer to replace the string\n        return FileWriter.replace(path, old_str, new_str)\n    \n    @staticmethod\n    def create_file(path: str, content: str) -> FileOperationResult:\n        \"\"\"\n        Create a new file with the specified content.\n        \n        Args:\n            path: The path to the file to create\n            content: The content for the new file\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_editor\", f\"Creating file {path}\")\n        \n        # Use the file writer to create the file\n        return FileWriter.create(path, content)\n    \n    @staticmethod\n    def insert_line(path: str, line_num: int, content: str) -> FileOperationResult:\n        \"\"\"\n        Insert content at a specific line in a file.\n        \n        Args:\n            path: The path to the file to modify\n            
line_num: The line number after which to insert the text (1-indexed)\n            content: The content to insert\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_editor\", f\"Inserting at line {line_num} in file {path}\")\n        \n        # First, read the file to check if it exists\n        read_result = FileEditor.read(path)\n        if not read_result.success:\n            log_error(\"file_editor\", f\"Cannot modify file that can't be read: {read_result.message}\")\n            return read_result\n        \n        # Then, use the file writer to insert the line\n        return FileWriter.insert(path, line_num, content)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent/file_writer.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nFile writer for the Vertical Slice Architecture implementation of the file editor agent.\nThis module provides file writing capabilities by composing various tools.\n\"\"\"\n\nimport sys\nimport os\nfrom typing import Dict, Any\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.file_operations.model_tools import FileOperationResult\nfrom features.file_operations.write_tool import write_file\nfrom features.file_operations.replace_tool import replace_in_file\nfrom features.file_operations.insert_tool import insert_in_file\nfrom features.file_operations.create_tool import create_file\n\nclass FileWriter:\n    \"\"\"\n    File writer that composes various tools to provide file writing capabilities.\n    \"\"\"\n    \n    @staticmethod\n    def write(path: str, content: str) -> FileOperationResult:\n        \"\"\"\n        Write content to a file.\n        \n        Args:\n            path: The path to the file to write\n            content: The content to write to the file\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_writer\", f\"Writing to file {path}\")\n        return write_file(path, content)\n    \n    @staticmethod\n    def replace(path: str, old_str: str, new_str: str) -> FileOperationResult:\n        \"\"\"\n        Replace a string in a file.\n        \n        Args:\n            path: The path to the file to modify\n            old_str: The string to replace\n            new_str: The string to replace with\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_writer\", f\"Replacing text in file {path}\")\n        return replace_in_file(path, old_str, 
new_str)\n    \n    @staticmethod\n    def insert(path: str, insert_line: int, new_str: str) -> FileOperationResult:\n        \"\"\"\n        Insert text at a specific line in a file.\n        \n        Args:\n            path: The path to the file to modify\n            insert_line: The line number after which to insert the text (1-indexed)\n            new_str: The text to insert\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_writer\", f\"Inserting text at line {insert_line} in file {path}\")\n        return insert_in_file(path, insert_line, new_str)\n    \n    @staticmethod\n    def create(path: str, content: str) -> FileOperationResult:\n        \"\"\"\n        Create a new file with the specified content.\n        \n        Args:\n            path: The path to the file to create\n            content: The content to write to the file\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_writer\", f\"Creating file {path}\")\n        return create_file(path, content)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent/insert_tool.py",
"content": "#!/usr/bin/env python3\n\n\"\"\"\nInsert tool for the Vertical Slice Architecture implementation of the file editor agent.\nThis module provides line insertion capabilities for files.\n\"\"\"\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.file_operations.model_tools import FileOperationResult\n\ndef insert_in_file(path: str, insert_line: int, new_str: str) -> FileOperationResult:\n    \"\"\"\n    Insert text at a specific line in a file.\n    \n    Args:\n        path: The path to the file to modify\n        insert_line: The line number after which to insert the text (0 inserts at the beginning)\n        new_str: The text to insert\n        \n    Returns:\n        FileOperationResult with result or error message\n    \"\"\"\n    log_info(\"insert_tool\", f\"Inserting text at line {insert_line} in file {path}\")\n    \n    try:\n        # Read the existing content\n        with open(path, 'r', encoding='utf-8') as f:\n            lines = f.readlines()\n        \n        if insert_line < 0 or insert_line > len(lines):\n            error_msg = f\"Invalid line number {insert_line} for file {path} with {len(lines)} lines\"\n            log_error(\"insert_tool\", error_msg)\n            return FileOperationResult(success=False, content=\"\", message=error_msg)\n        \n        # Insert the new string after the specified line (0 inserts at the start of the file)\n        lines.insert(insert_line, new_str if new_str.endswith('\\n') else new_str + '\\n')\n        \n        # Write the modified content back to the file\n        with open(path, 'w', encoding='utf-8') as f:\n            f.writelines(lines)\n        \n        log_info(\"insert_tool\", f\"Successfully inserted text at line {insert_line} in file {path}\")\n        return FileOperationResult(success=True, content=\"\", 
message=f\"Successfully inserted text at line {insert_line} in file {path}\")\n    except Exception as e:\n        error_msg = f\"Failed to insert text at line {insert_line} in file {path}: {str(e)}\"\n        log_error(\"insert_tool\", error_msg)\n        return FileOperationResult(success=False, content=\"\", message=error_msg)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent/model_tools.py",
"content": "#!/usr/bin/env python3\n\n\"\"\"\nModels for the file operations feature in the Vertical Slice Architecture.\n\"\"\"\n\nimport os\nimport sys\nfrom typing import Dict, Any, Optional, List, Union\n\n# Add the project root to the Python path\nsys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../../')))\n\nclass FileOperationResult:\n    \"\"\"\n    Model representing the result of a file operation.\n    \"\"\"\n    \n    def __init__(self, success: bool, message: str, content: str = \"\", data: Any = None):\n        \"\"\"\n        Initialize a file operation result.\n        \n        Args:\n            success: Whether the operation was successful\n            message: A message describing the result\n            content: File content if the operation returns content\n            data: Optional data returned by the operation\n        \"\"\"\n        self.success = success\n        self.message = message\n        self.content = content\n        self.data = data\n    \n    def to_dict(self) -> Dict[str, Any]:\n        \"\"\"\n        Convert the result to a dictionary.\n        \n        Returns:\n            Dictionary representation of the result\n        \"\"\"\n        return {\n            \"success\": self.success,\n            \"message\": self.message,\n            \"content\": self.content,\n            \"data\": self.data\n        }\n\nclass ToolUseRequest:\n    \"\"\"\n    Model representing a tool use request from Claude.\n    \"\"\"\n    \n    def __init__(self, command: str, path: Optional[str] = None, **kwargs):\n        \"\"\"\n        Initialize a tool use request.\n        \n        Args:\n            command: The command to execute\n            path: The path to operate on\n            **kwargs: Additional arguments for the command\n        \"\"\"\n        self.command = command\n        self.path = path\n        self.kwargs = kwargs\n    \n    @classmethod\n    def from_dict(cls, data: Dict[str, Any]) -> 
'ToolUseRequest':\n        \"\"\"\n        Create a tool use request from a dictionary.\n        \n        Args:\n            data: Dictionary containing the tool use request\n            \n        Returns:\n            A ToolUseRequest instance\n        \"\"\"\n        command = data.get(\"command\")\n        path = data.get(\"path\")\n        \n        # Extract all other keys as kwargs\n        kwargs = {k: v for k, v in data.items() if k not in [\"command\", \"path\"]}\n        \n        return cls(command, path, **kwargs)\n"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent/read_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nRead tool for the Vertical Slice Architecture implementation of the file editor agent.\nThis module provides file reading capabilities.\n\"\"\"\n\nimport sys\nimport os\nfrom typing import Optional\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.file_operations.model_tools import FileOperationResult\n\ndef read_file(path: str, start_line: Optional[int] = None, end_line: Optional[int] = None) -> FileOperationResult:\n    \"\"\"\n    Read the contents of a file.\n    \n    Args:\n        path: The path to the file to read\n        start_line: Optional start line (1-indexed)\n        end_line: Optional end line (1-indexed, -1 for end of file)\n        \n    Returns:\n        FileOperationResult with content or error message\n    \"\"\"\n    log_info(\"read_tool\", f\"Reading file {path} with range {start_line}-{end_line}\")\n    \n    try:\n        with open(path, 'r', encoding='utf-8') as f:\n            all_lines = f.readlines()\n        \n        # Handle line range\n        if start_line is not None:\n            start_idx = max(0, start_line - 1)  # Convert 1-indexed to 0-indexed\n        else:\n            start_idx = 0\n            \n        if end_line is not None:\n            if end_line == -1:\n                end_idx = len(all_lines)\n            else:\n                end_idx = min(end_line, len(all_lines))\n        else:\n            end_idx = len(all_lines)\n            \n        selected_lines = all_lines[start_idx:end_idx]\n        content = ''.join(selected_lines)\n        \n        log_info(\"read_tool\", f\"Successfully read file {path}\")\n        return FileOperationResult(success=True, content=content, message=f\"Successfully read file {path}\")\n    except Exception as e:\n        error_msg = f\"Failed to read 
file {path}: {str(e)}\"\n        log_error(\"read_tool\", error_msg)\n        return FileOperationResult(success=False, content=\"\", message=error_msg)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent/replace_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nReplace tool for the Vertical Slice Architecture implementation of the file editor agent.\nThis module provides string replacement capabilities for files.\n\"\"\"\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.file_operations.model_tools import FileOperationResult\n\ndef replace_in_file(path: str, old_str: str, new_str: str) -> FileOperationResult:\n    \"\"\"\n    Replace a string in a file.\n    \n    Args:\n        path: The path to the file to modify\n        old_str: The string to replace\n        new_str: The string to replace with\n        \n    Returns:\n        FileOperationResult with result or error message\n    \"\"\"\n    log_info(\"replace_tool\", f\"Replacing text in file {path}\")\n    \n    try:\n        # Read the existing content\n        with open(path, 'r', encoding='utf-8') as f:\n            content = f.read()\n        \n        # Count occurrences to verify uniqueness\n        occurrences = content.count(old_str)\n        \n        if occurrences == 0:\n            error_msg = f\"String not found in file {path}\"\n            log_error(\"replace_tool\", error_msg)\n            return FileOperationResult(success=False, content=\"\", message=error_msg)\n        \n        if occurrences > 1:\n            error_msg = f\"Multiple occurrences ({occurrences}) of the string found in file {path}. 
Need a unique string to replace.\"\n            log_error(\"replace_tool\", error_msg)\n            return FileOperationResult(success=False, content=\"\", message=error_msg)\n        \n        # Replace the string and write back to the file\n        new_content = content.replace(old_str, new_str, 1)\n        \n        with open(path, 'w', encoding='utf-8') as f:\n            f.write(new_content)\n        \n        log_info(\"replace_tool\", f\"Successfully replaced text in file {path}\")\n        return FileOperationResult(success=True, content=\"\", message=f\"Successfully replaced text in file {path}\")\n    except Exception as e:\n        error_msg = f\"Failed to replace text in file {path}: {str(e)}\"\n        log_error(\"replace_tool\", error_msg)\n        return FileOperationResult(success=False, content=\"\", message=error_msg)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent/service_tools.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nService layer for file operations in the Vertical Slice Architecture.\n\"\"\"\n\nimport os\nimport sys\nimport traceback\nfrom typing import Dict, Any, Optional, List, Tuple, Union\n\n# Add the project root to the Python path\nsys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../../')))\n\nfrom shared.utils import console, normalize_path, display_file_content\nfrom features.file_operations.model import FileOperationResult\n\nclass FileOperationService:\n    \"\"\"\n    Service for handling file operations.\n    \"\"\"\n    \n    @staticmethod\n    def view_file(path: str, view_range=None) -> FileOperationResult:\n        \"\"\"\n        View the contents of a file.\n\n        Args:\n            path: The path to the file to view\n            view_range: Optional start and end lines to view [start, end]\n\n        Returns:\n            FileOperationResult with content or error message\n        \"\"\"\n        try:\n            # Normalize the path\n            path = normalize_path(path)\n\n            if not os.path.exists(path):\n                error_msg = f\"File {path} does not exist\"\n                console.log(f\"[view_file] Error: {error_msg}\")\n                return FileOperationResult(False, error_msg)\n\n            with open(path, \"r\") as f:\n                lines = f.readlines()\n\n            if view_range:\n                start, end = view_range\n                # Convert to 0-indexed for Python\n                start = max(0, start - 1)\n                if end == -1:\n                    end = len(lines)\n                else:\n                    end = min(len(lines), end)\n                lines = lines[start:end]\n\n            content = \"\".join(lines)\n\n            # Display the file content (only for console, not returned to Claude)\n            display_file_content(path, content)\n\n            return FileOperationResult(True, f\"Successfully viewed file 
{path}\", content)\n        except Exception as e:\n            error_msg = f\"Error viewing file: {str(e)}\"\n            console.print(f\"[red]{error_msg}[/red]\")\n            console.log(f\"[view_file] Error: {str(e)}\")\n            console.log(traceback.format_exc())\n            return FileOperationResult(False, error_msg)\n\n    @staticmethod\n    def str_replace(path: str, old_str: str, new_str: str) -> FileOperationResult:\n        \"\"\"\n        Replace a specific string in a file.\n\n        Args:\n            path: The path to the file to modify\n            old_str: The text to replace\n            new_str: The new text to insert\n\n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        try:\n            # Normalize the path\n            path = normalize_path(path)\n\n            if not os.path.exists(path):\n                error_msg = f\"File {path} does not exist\"\n                console.log(f\"[str_replace] Error: {error_msg}\")\n                return FileOperationResult(False, error_msg)\n\n            with open(path, \"r\") as f:\n                content = f.read()\n\n            if old_str not in content:\n                error_msg = f\"The specified string was not found in the file {path}\"\n                console.log(f\"[str_replace] Error: {error_msg}\")\n                return FileOperationResult(False, error_msg)\n\n            new_content = content.replace(old_str, new_str, 1)\n\n            with open(path, \"w\") as f:\n                f.write(new_content)\n\n            console.print(f\"[green]Successfully replaced text in {path}[/green]\")\n            console.log(f\"[str_replace] Successfully replaced text in {path}\")\n            return FileOperationResult(True, f\"Successfully replaced text in {path}\")\n        except Exception as e:\n            error_msg = f\"Error replacing text: {str(e)}\"\n            console.print(f\"[red]{error_msg}[/red]\")\n            
console.log(f\"[str_replace] Error: {str(e)}\")\n            console.log(traceback.format_exc())\n            return FileOperationResult(False, error_msg)\n\n    @staticmethod\n    def create_file(path: str, file_text: str) -> FileOperationResult:\n        \"\"\"\n        Create a new file with specified content.\n\n        Args:\n            path: The path where the new file should be created\n            file_text: The content to write to the new file\n\n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        try:\n            # Check if the path is empty or invalid\n            if not path or not path.strip():\n                error_msg = \"Invalid file path provided: path is empty.\"\n                console.log(f\"[create_file] Error: {error_msg}\")\n                return FileOperationResult(False, error_msg)\n\n            # Normalize the path\n            path = normalize_path(path)\n\n            # Create the parent directory if needed (exist_ok avoids a race if it\n            # appears between the existence check and the makedirs call)\n            directory = os.path.dirname(path)\n            if directory and not os.path.exists(directory):\n                console.log(f\"[create_file] Creating directory: {directory}\")\n                os.makedirs(directory, exist_ok=True)\n\n            with open(path, \"w\") as f:\n                f.write(file_text or \"\")\n\n            console.print(f\"[green]Successfully created file {path}[/green]\")\n            console.log(f\"[create_file] Successfully created file {path}\")\n            return FileOperationResult(True, f\"Successfully created file {path}\")\n        except Exception as e:\n            error_msg = f\"Error creating file: {str(e)}\"\n            console.print(f\"[red]{error_msg}[/red]\")\n            console.log(f\"[create_file] Error: {str(e)}\")\n            console.log(traceback.format_exc())\n            return FileOperationResult(False, error_msg)\n\n    @staticmethod\n    def insert_text(path: str, insert_line: int, new_str: str) -> FileOperationResult:\n        \"\"\"\n        Insert text at a specific location in a file.\n\n        Args:\n            path: The path to the file to modify\n            insert_line: The line number after which to insert the text\n            new_str: The text to insert\n\n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        try:\n            if not path or not path.strip():\n                error_msg = \"Invalid file path provided: path is empty.\"\n                console.log(f\"[insert_text] Error: {error_msg}\")\n                return FileOperationResult(False, error_msg)\n\n            # Normalize the path\n            path = normalize_path(path)\n\n            if not os.path.exists(path):\n                error_msg = f\"File {path} does not exist\"\n                console.log(f\"[insert_text] Error: {error_msg}\")\n                return FileOperationResult(False, error_msg)\n\n            if insert_line is None:\n                error_msg = \"No line number specified: insert_line is missing.\"\n                console.log(f\"[insert_text] Error: {error_msg}\")\n                return FileOperationResult(False, error_msg)\n\n            with open(path, \"r\") as f:\n                lines = f.readlines()\n\n            # Claude provides a 1-indexed line number; convert to 0-indexed and\n            # clamp into the valid range [0, len(lines)], which makes a separate\n            # bounds check unnecessary\n            insert_line = min(max(0, insert_line - 1), len(lines))\n\n            # Ensure new_str ends with newline\n            if new_str and not new_str.endswith(\"\\n\"):\n                new_str += \"\\n\"\n\n            lines.insert(insert_line,
new_str)\n\n            with open(path, \"w\") as f:\n                f.writelines(lines)\n\n            console.print(\n                f\"[green]Successfully inserted text at line {insert_line + 1} in {path}[/green]\"\n            )\n            console.log(\n                f\"[insert_text] Successfully inserted text at line {insert_line + 1} in {path}\"\n            )\n            return FileOperationResult(\n                True, f\"Successfully inserted text at line {insert_line + 1} in {path}\"\n            )\n        except Exception as e:\n            error_msg = f\"Error inserting text: {str(e)}\"\n            console.print(f\"[red]{error_msg}[/red]\")\n            console.log(f\"[insert_text] Error: {str(e)}\")\n            console.log(traceback.format_exc())\n            return FileOperationResult(False, error_msg)\n\n    @staticmethod\n    def undo_edit(path: str) -> FileOperationResult:\n        \"\"\"\n        Placeholder for undo_edit functionality.\n        In a real implementation, you would need to track edit history.\n\n        Args:\n            path: The path to the file whose last edit should be undone\n\n        Returns:\n            FileOperationResult with message about undo functionality\n        \"\"\"\n        try:\n            if not path or not path.strip():\n                error_msg = \"Invalid file path provided: path is empty.\"\n                console.log(f\"[undo_edit] Error: {error_msg}\")\n                return FileOperationResult(False, error_msg)\n\n            # Normalize the path\n            path = normalize_path(path)\n\n            message = \"Undo functionality is not implemented in this version.\"\n            console.print(f\"[yellow]{message}[/yellow]\")\n            console.log(f\"[undo_edit] {message}\")\n            return FileOperationResult(True, message)\n        except Exception as e:\n            error_msg = f\"Error in undo_edit: {str(e)}\"\n            console.print(f\"[red]{error_msg}[/red]\")\n         
   console.log(f\"[undo_edit] Error: {str(e)}\")\n            console.log(traceback.format_exc())\n            return FileOperationResult(False, error_msg)\n"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent/tool_handler.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nTool handler for the Vertical Slice Architecture implementation of the file editor agent.\nThis module handles tool use requests from the Claude agent.\n\"\"\"\n\nimport sys\nimport os\nfrom typing import Dict, Any\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error, normalize_path\nfrom features.file_operations.model_tools import ToolUseRequest\nfrom features.file_operations.file_editor import FileEditor\n\ndef handle_tool_use(input_data: Dict[str, Any]) -> Dict[str, Any]:\n    \"\"\"\n    Handle tool use requests from the Claude agent.\n    \n    Args:\n        input_data: The tool use request data from Claude\n        \n    Returns:\n        Dictionary with the result or error message\n    \"\"\"\n    log_info(\"tool_handler\", f\"Received tool use request: {input_data}\")\n    \n    try:\n        # Parse the tool use request\n        request = ToolUseRequest.from_dict(input_data)\n        \n        # Normalize the path\n        path = normalize_path(request.path) if request.path else None\n        \n        # Handle the command\n        if request.command == \"view\":\n            start_line = request.kwargs.get(\"start_line\")\n            end_line = request.kwargs.get(\"end_line\")\n            \n            if start_line is not None:\n                start_line = int(start_line)\n            if end_line is not None:\n                end_line = int(end_line)\n                \n            result = FileEditor.read(path, start_line, end_line)\n            \n        elif request.command == \"edit\":\n            old_str = request.kwargs.get(\"old_str\", \"\")\n            new_str = request.kwargs.get(\"new_str\", \"\")\n            \n            result = FileEditor.edit_file(path, old_str, new_str)\n            \n        elif request.command == 
\"create\":\n            content = request.kwargs.get(\"content\", \"\")\n            \n            result = FileEditor.create_file(path, content)\n            \n        elif request.command == \"insert\":\n            line_num = int(request.kwargs.get(\"line_num\", 1))\n            content = request.kwargs.get(\"content\", \"\")\n            \n            result = FileEditor.insert_line(path, line_num, content)\n            \n        else:\n            log_error(\"tool_handler\", f\"Unknown command: {request.command}\")\n            return {\"error\": f\"Unknown command: {request.command}\"}\n        \n        # Return the result\n        if result.success:\n            return {\"result\": result.content or result.message}\n        else:\n            return {\"error\": result.message}\n            \n    except Exception as e:\n        error_msg = f\"Error handling tool use: {str(e)}\"\n        log_error(\"tool_handler\", error_msg)\n        return {\"error\": error_msg}"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent/write_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nWrite tool for the Vertical Slice Architecture implementation of the file editor agent.\nThis module provides file writing capabilities.\n\"\"\"\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.file_operations.model_tools import FileOperationResult\n\ndef write_file(path: str, content: str) -> FileOperationResult:\n    \"\"\"\n    Write content to a file.\n    \n    Args:\n        path: The path to the file to write\n        content: The content to write to the file\n        \n    Returns:\n        FileOperationResult with result or error message\n    \"\"\"\n    log_info(\"write_tool\", f\"Writing to file {path}\")\n    \n    try:\n        with open(path, 'w', encoding='utf-8') as f:\n            f.write(content)\n        \n        log_info(\"write_tool\", f\"Successfully wrote to file {path}\")\n        return FileOperationResult(success=True, content=\"\", message=f\"Successfully wrote to file {path}\")\n    except Exception as e:\n        error_msg = f\"Failed to write to file {path}: {str(e)}\"\n        log_error(\"write_tool\", error_msg)\n        return FileOperationResult(success=False, content=\"\", message=error_msg)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2/__init__.py",
    "content": ""
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2/api_tools.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nAPI layer for file operations in the Vertical Slice Architecture.\n\"\"\"\n\nimport os\nimport sys\nimport traceback\nfrom typing import Dict, Any, Optional, List, Union\n\n# Add the project root to the Python path\nsys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../../')))\n\nfrom shared.utils import console\nfrom features.file_operations.service import FileOperationService\nfrom features.file_operations.model import ToolUseRequest, FileOperationResult\n\nclass FileOperationsAPI:\n    \"\"\"\n    API for file operations.\n    \"\"\"\n    \n    @staticmethod\n    def handle_tool_use(tool_use: Dict[str, Any]) -> Dict[str, Any]:\n        \"\"\"\n        Handle text editor tool use from Claude.\n\n        Args:\n            tool_use: The tool use request from Claude\n\n        Returns:\n            Dictionary with result or error to send back to Claude\n        \"\"\"\n        try:\n            # Convert the tool use dictionary to a ToolUseRequest object\n            request = ToolUseRequest.from_dict(tool_use)\n            \n            console.log(f\"[handle_tool_use] Received command: {request.command}, path: {request.path}\")\n\n            if not request.command:\n                error_msg = \"No command specified in tool use request\"\n                console.log(f\"[handle_tool_use] Error: {error_msg}\")\n                return {\"error\": error_msg}\n\n            if not request.path and request.command != \"undo_edit\":  # undo_edit might not need a path\n                error_msg = \"No path specified in tool use request\"\n                console.log(f\"[handle_tool_use] Error: {error_msg}\")\n                return {\"error\": error_msg}\n\n            # The path normalization is now handled in each file operation function\n            console.print(f\"[blue]Executing {request.command} command on {request.path}[/blue]\")\n\n            result = None\n            \n            if 
request.command == \"view\":\n                view_range = request.kwargs.get(\"view_range\")\n                console.log(\n                    f\"[handle_tool_use] Calling view_file with view_range: {view_range}\"\n                )\n                result = FileOperationService.view_file(request.path, view_range)\n\n            elif request.command == \"str_replace\":\n                old_str = request.kwargs.get(\"old_str\")\n                new_str = request.kwargs.get(\"new_str\")\n                console.log(f\"[handle_tool_use] Calling str_replace\")\n                result = FileOperationService.str_replace(request.path, old_str, new_str)\n\n            elif request.command == \"create\":\n                file_text = request.kwargs.get(\"file_text\")\n                console.log(f\"[handle_tool_use] Calling create_file\")\n                result = FileOperationService.create_file(request.path, file_text)\n\n            elif request.command == \"insert\":\n                insert_line = request.kwargs.get(\"insert_line\")\n                new_str = request.kwargs.get(\"new_str\")\n                console.log(f\"[handle_tool_use] Calling insert_text at line: {insert_line}\")\n                result = FileOperationService.insert_text(request.path, insert_line, new_str)\n\n            elif request.command == \"undo_edit\":\n                console.log(f\"[handle_tool_use] Calling undo_edit\")\n                result = FileOperationService.undo_edit(request.path)\n\n            else:\n                error_msg = f\"Unknown command: {request.command}\"\n                console.print(f\"[red]{error_msg}[/red]\")\n                console.log(f\"[handle_tool_use] Error: {error_msg}\")\n                return {\"error\": error_msg}\n            \n            # Convert the result to a dictionary\n            if result.success:\n                return {\"result\": result.data if result.data is not None else result.message}\n            else:\n                return 
{\"error\": result.message}\n                \n        except Exception as e:\n            error_msg = f\"Error handling tool use: {str(e)}\"\n            console.print(f\"[red]{error_msg}[/red]\")\n            console.log(f\"[handle_tool_use] Error: {str(e)}\")\n            console.log(traceback.format_exc())\n            return {\"error\": error_msg}\n"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2/create_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nCreate tool for the Vertical Slice Architecture implementation of the file editor agent.\nThis module provides file creation capabilities.\n\"\"\"\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.file_operations.model_tools import FileOperationResult\nfrom features.file_operations.write_tool import write_file\n\ndef create_file(path: str, content: str) -> FileOperationResult:\n    \"\"\"\n    Create a new file with the specified content.\n    \n    Args:\n        path: The path to the file to create\n        content: The content to write to the file\n        \n    Returns:\n        FileOperationResult with result or error message\n    \"\"\"\n    log_info(\"create_tool\", f\"Creating file {path}\")\n    \n    try:\n        # Create directory if it doesn't exist\n        os.makedirs(os.path.dirname(os.path.abspath(path)), exist_ok=True)\n        \n        # Use the write_file function to create the file\n        return write_file(path, content)\n    except Exception as e:\n        error_msg = f\"Failed to create file {path}: {str(e)}\"\n        log_error(\"create_tool\", error_msg)\n        return FileOperationResult(success=False, content=\"\", message=error_msg)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2/file_agent.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nFile agent for the Vertical Slice Architecture implementation of the file editor agent.\nThis module provides the agent interface for file operations.\n\"\"\"\n\nimport time\nfrom typing import Tuple, Dict, Any, List, Optional, Callable\nfrom rich.console import Console\nfrom rich.panel import Panel\nfrom anthropic import Anthropic\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error, display_token_usage\nfrom features.file_operations.model_tools import FileOperationResult\nfrom features.file_operations.tool_handler import handle_tool_use\n\n# Initialize rich console\nconsole = Console()\n\n# Define constants\nMODEL = \"claude-3-7-sonnet-20250219\"\nDEFAULT_THINKING_TOKENS = 3000\n\nclass FileAgent:\n    \"\"\"\n    File agent that provides an interface for AI-assisted file operations.\n    \"\"\"\n    \n    @staticmethod\n    def run_agent(\n        client: Anthropic,\n        prompt: str,\n        max_thinking_tokens: int = DEFAULT_THINKING_TOKENS,\n        max_loops: int = 10,\n        use_token_efficiency: bool = False,\n    ) -> Tuple[str, int, int]:\n        \"\"\"\n        Run the Claude agent with file editing capabilities.\n\n        Args:\n            client: The Anthropic client\n            prompt: The user's prompt\n            max_thinking_tokens: Maximum tokens for thinking\n            max_loops: Maximum number of tool use loops\n            use_token_efficiency: Whether to use token-efficient tool use beta feature\n\n        Returns:\n            Tuple containing:\n            - Final response from Claude (str)\n            - Total input tokens used (int)\n            - Total output tokens used (int)\n        \"\"\"\n        # Track token usage\n        input_tokens_total = 0\n        output_tokens_total = 
0\n        system_prompt = \"\"\"You are a helpful AI assistant with text editing capabilities.\nYou have access to a text editor tool that can view, edit, and create files.\nAlways think step by step about what you need to do before taking any action.\nBe careful when making edits to files, as they can permanently change the user's files.\nFollow these steps when handling file operations:\n1. First, view files to understand their content before making changes\n2. For edits, ensure you have the correct context and are making the right changes\n3. When creating files, make sure they're in the right location with proper formatting\n\"\"\"\n\n        # Define text editor tool\n        text_editor_tool = {\"name\": \"str_replace_editor\", \"type\": \"text_editor_20250124\"}\n\n        messages = [\n            {\n                \"role\": \"user\",\n                \"content\": f\"\"\"I need help with editing files. Here's what I want to do:\n\n{prompt}\n\nPlease use the text editor tool to help me with this. 
First, think through what you need to do, then use the appropriate tool.\n\"\"\",\n            }\n        ]\n\n        loop_count = 0\n        tool_use_count = 0\n        thinking_start_time = time.time()\n\n        while loop_count < max_loops:\n            loop_count += 1\n\n            console.rule(f\"[yellow]Agent Loop {loop_count}/{max_loops}[/yellow]\")\n            log_info(\"file_agent\", f\"Starting agent loop {loop_count}/{max_loops}\")\n\n            # Create message with text editor tool\n            message_args = {\n                \"model\": MODEL,\n                \"max_tokens\": 4096,\n                \"tools\": [text_editor_tool],\n                \"messages\": messages,\n                \"system\": system_prompt,\n                \"thinking\": {\"type\": \"enabled\", \"budget_tokens\": max_thinking_tokens},\n            }\n\n            # Use the beta.messages with betas parameter if token efficiency is enabled\n            if use_token_efficiency:\n                # Using token-efficient tools beta feature\n                message_args[\"betas\"] = [\"token-efficient-tools-2025-02-19\"]\n                response = client.beta.messages.create(**message_args)\n            else:\n                # Standard approach\n                response = client.messages.create(**message_args)\n\n            # Track token usage\n            if hasattr(response, \"usage\"):\n                input_tokens = getattr(response.usage, \"input_tokens\", 0)\n                output_tokens = getattr(response.usage, \"output_tokens\", 0)\n\n                input_tokens_total += input_tokens\n                output_tokens_total += output_tokens\n\n                console.print(\n                    f\"[dim]Loop {loop_count} tokens: Input={input_tokens}, Output={output_tokens}[/dim]\"\n                )\n                log_info(\n                    \"file_agent\", \n                    f\"Loop {loop_count} tokens: Input={input_tokens}, Output={output_tokens}\"\n            
    )\n\n            # Process response content\n            thinking_block = None\n            tool_use_block = None\n            text_block = None\n\n            for content_block in response.content:\n                if content_block.type == \"thinking\":\n                    thinking_block = content_block\n                    # Access the thinking attribute which contains the actual thinking text\n                    if hasattr(thinking_block, \"thinking\"):\n                        console.print(\n                            Panel(\n                                thinking_block.thinking,\n                                title=f\"Claude's Thinking (Loop {loop_count})\",\n                                border_style=\"blue\",\n                            )\n                        )\n                    else:\n                        console.print(\n                            Panel(\n                                \"Claude is thinking...\",\n                                title=f\"Claude's Thinking (Loop {loop_count})\",\n                                border_style=\"blue\",\n                            )\n                        )\n                elif content_block.type == \"tool_use\":\n                    tool_use_block = content_block\n                    tool_use_count += 1\n                elif content_block.type == \"text\":\n                    text_block = content_block\n\n            # If we got a final text response with no tool use, we're done\n            if text_block and not tool_use_block:\n                thinking_end_time = time.time()\n                thinking_duration = thinking_end_time - thinking_start_time\n\n                console.print(\n                    f\"\\n[bold green]Completed in {thinking_duration:.2f} seconds after {loop_count} loops and {tool_use_count} tool uses[/bold green]\"\n                )\n                log_info(\n                    \"file_agent\",\n                    f\"Completed in {thinking_duration:.2f} 
seconds after {loop_count} loops and {tool_use_count} tool uses\"\n                )\n\n                # Add the response to messages\n                messages.append(\n                    {\n                        \"role\": \"assistant\",\n                        \"content\": [\n                            *([thinking_block] if thinking_block else []),\n                            {\"type\": \"text\", \"text\": text_block.text},\n                        ],\n                    }\n                )\n\n                return text_block.text, input_tokens_total, output_tokens_total\n\n            # Handle tool use\n            if tool_use_block:\n                # Add the assistant's response to messages before handling tool calls\n                messages.append({\"role\": \"assistant\", \"content\": response.content})\n\n                console.print(\n                    f\"\\n[bold blue]Tool Call:[/bold blue] {tool_use_block.name}\"\n                )\n                log_info(\"file_agent\", f\"Tool Call: {tool_use_block.name}\")\n\n                # Handle the tool use with our handler\n                tool_result = handle_tool_use(tool_use_block.input)\n\n                # Format tool result for Claude\n                tool_result_message = {\n                    \"role\": \"user\",\n                    \"content\": [\n                        {\n                            \"type\": \"tool_result\",\n                            \"tool_use_id\": tool_use_block.id,\n                            \"content\": tool_result.get(\"error\") or tool_result.get(\"result\", \"\"),\n                        }\n                    ],\n                }\n                messages.append(tool_result_message)\n\n        # If we reach here, we hit the max loops\n        console.print(\n            f\"\\n[bold red]Warning: Reached maximum loops ({max_loops}) without completing the task[/bold red]\"\n        )\n        log_error(\n            \"file_agent\",\n            
f\"Reached maximum loops ({max_loops}) without completing the task\"\n        )\n        return (\n            \"I wasn't able to complete the task within the allowed number of thinking steps. Please try a more specific prompt or increase the loop limit.\",\n            input_tokens_total,\n            output_tokens_total,\n        )\n\n# Expose the run_agent function at the module level\ndef run_agent(\n    client: Anthropic,\n    prompt: str,\n    max_tool_use_loops: int = 15,\n    token_efficient_tool_use: bool = True,\n) -> Tuple[int, int]:\n    \"\"\"\n    Run the file editor agent with the specified prompt.\n    \n    Args:\n        client: The Anthropic client\n        prompt: The prompt to send to Claude\n        max_tool_use_loops: Maximum number of tool use loops\n        token_efficient_tool_use: Whether to use token-efficient tool use\n        \n    Returns:\n        Tuple containing input and output token counts\n    \"\"\"\n    log_info(\"file_agent\", f\"Running agent with prompt: {prompt}\")\n    \n    _, input_tokens, output_tokens = FileAgent.run_agent(\n        client=client,\n        prompt=prompt,\n        max_loops=max_tool_use_loops,\n        use_token_efficiency=token_efficient_tool_use,\n        max_thinking_tokens=DEFAULT_THINKING_TOKENS\n    )\n    \n    return input_tokens, output_tokens"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2/file_editor.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nFile editor for the Vertical Slice Architecture implementation of the file editor agent.\nThis module combines reading and writing capabilities for file editing.\n\"\"\"\n\nimport sys\nimport os\nfrom typing import Dict, Any, Tuple, Optional\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.file_operations.model_tools import FileOperationResult\nfrom features.file_operations.file_writer import FileWriter\nfrom features.file_operations.read_tool import read_file\n\nclass FileEditor:\n    \"\"\"\n    File editor that combines reading and writing capabilities for file editing.\n    \"\"\"\n    \n    @staticmethod\n    def read(path: str, start_line: Optional[int] = None, end_line: Optional[int] = None) -> FileOperationResult:\n        \"\"\"\n        Read the contents of a file.\n        \n        Args:\n            path: The path to the file to read\n            start_line: Optional start line (1-indexed)\n            end_line: Optional end line (1-indexed, -1 for end of file)\n            \n        Returns:\n            FileOperationResult with content or error message\n        \"\"\"\n        log_info(\"file_editor\", f\"Reading file {path} with range {start_line}-{end_line}\")\n        return read_file(path, start_line, end_line)\n    \n    @staticmethod\n    def view_file(path: str, view_range=None) -> FileOperationResult:\n        \"\"\"\n        View the contents of a file with optional range.\n        \n        Args:\n            path: The path to the file to view\n            view_range: Optional tuple of (start_line, end_line)\n            \n        Returns:\n            FileOperationResult with content or error message\n        \"\"\"\n        start_line = None\n        end_line = None\n        \n        if view_range:\n     
       start_line, end_line = view_range\n            \n        log_info(\"file_editor\", f\"Viewing file {path} with range {start_line}-{end_line}\")\n        \n        return FileEditor.read(path, start_line, end_line)\n    \n    @staticmethod\n    def edit_file(path: str, old_str: str, new_str: str) -> FileOperationResult:\n        \"\"\"\n        Edit a file by replacing one string with another.\n        \n        Args:\n            path: The path to the file to edit\n            old_str: The string to replace\n            new_str: The string to replace it with\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_editor\", f\"Editing file {path}\")\n        \n        # First, read the file to check if it exists\n        read_result = FileEditor.read(path)\n        if not read_result.success:\n            log_error(\"file_editor\", f\"Cannot edit file that can't be read: {read_result.message}\")\n            return read_result\n        \n        # Then, use the file writer to replace the string\n        return FileWriter.replace(path, old_str, new_str)\n    \n    @staticmethod\n    def create_file(path: str, content: str) -> FileOperationResult:\n        \"\"\"\n        Create a new file with the specified content.\n        \n        Args:\n            path: The path to the file to create\n            content: The content for the new file\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_editor\", f\"Creating file {path}\")\n        \n        # Use the file writer to create the file\n        return FileWriter.create(path, content)\n    \n    @staticmethod\n    def insert_line(path: str, line_num: int, content: str) -> FileOperationResult:\n        \"\"\"\n        Insert content at a specific line in a file.\n        \n        Args:\n            path: The path to the file to modify\n            
line_num: The line number where to insert (1-indexed)\n            content: The content to insert\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_editor\", f\"Inserting at line {line_num} in file {path}\")\n        \n        # First, read the file to check if it exists\n        read_result = FileEditor.read(path)\n        if not read_result.success:\n            log_error(\"file_editor\", f\"Cannot modify file that can't be read: {read_result.message}\")\n            return read_result\n        \n        # Then, use the file writer to insert the line\n        return FileWriter.insert(path, line_num, content)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2/file_writer.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nFile writer for the Vertical Slice Architecture implementation of the file editor agent.\nThis module provides file writing capabilities by composing various tools.\n\"\"\"\n\nimport sys\nimport os\nfrom typing import Dict, Any\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.file_operations.model_tools import FileOperationResult\nfrom features.file_operations.write_tool import write_file\nfrom features.file_operations.replace_tool import replace_in_file\nfrom features.file_operations.insert_tool import insert_in_file\nfrom features.file_operations.create_tool import create_file\n\nclass FileWriter:\n    \"\"\"\n    File writer that composes various tools to provide file writing capabilities.\n    \"\"\"\n    \n    @staticmethod\n    def write(path: str, content: str) -> FileOperationResult:\n        \"\"\"\n        Write content to a file.\n        \n        Args:\n            path: The path to the file to write\n            content: The content to write to the file\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_writer\", f\"Writing to file {path}\")\n        return write_file(path, content)\n    \n    @staticmethod\n    def replace(path: str, old_str: str, new_str: str) -> FileOperationResult:\n        \"\"\"\n        Replace a string in a file.\n        \n        Args:\n            path: The path to the file to modify\n            old_str: The string to replace\n            new_str: The string to replace with\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_writer\", f\"Replacing text in file {path}\")\n        return replace_in_file(path, old_str, 
new_str)\n    \n    @staticmethod\n    def insert(path: str, insert_line: int, new_str: str) -> FileOperationResult:\n        \"\"\"\n        Insert text at a specific line in a file.\n        \n        Args:\n            path: The path to the file to modify\n            insert_line: The line number after which to insert the text (1-indexed)\n            new_str: The text to insert\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_writer\", f\"Inserting text at line {insert_line} in file {path}\")\n        return insert_in_file(path, insert_line, new_str)\n    \n    @staticmethod\n    def create(path: str, content: str) -> FileOperationResult:\n        \"\"\"\n        Create a new file with the specified content.\n        \n        Args:\n            path: The path to the file to create\n            content: The content to write to the file\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_writer\", f\"Creating file {path}\")\n        return create_file(path, content)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2/insert_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nInsert tool for the Vertical Slice Architecture implementation of the file editor agent.\nThis module provides line insertion capabilities for files.\n\"\"\"\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.file_operations.model_tools import FileOperationResult\n\ndef insert_in_file(path: str, insert_line: int, new_str: str) -> FileOperationResult:\n    \"\"\"\n    Insert text at a specific line in a file.\n    \n    Args:\n        path: The path to the file to modify\n        insert_line: The line number after which to insert the text (1-indexed)\n        new_str: The text to insert\n        \n    Returns:\n        FileOperationResult with result or error message\n    \"\"\"\n    log_info(\"insert_tool\", f\"Inserting text at line {insert_line} in file {path}\")\n    \n    try:\n        # Read the existing content\n        with open(path, 'r', encoding='utf-8') as f:\n            lines = f.readlines()\n        \n        if insert_line < 1 or insert_line > len(lines) + 1:\n            error_msg = f\"Invalid line number {insert_line} for file {path} with {len(lines)} lines\"\n            log_error(\"insert_tool\", error_msg)\n            return FileOperationResult(success=False, content=\"\", message=error_msg)\n        \n        # Insert the new string at the specified position\n        lines.insert(insert_line - 1, new_str if new_str.endswith('\\n') else new_str + '\\n')\n        \n        # Write the modified content back to the file\n        with open(path, 'w', encoding='utf-8') as f:\n            f.writelines(lines)\n        \n        log_info(\"insert_tool\", f\"Successfully inserted text at line {insert_line} in file {path}\")\n        return FileOperationResult(success=True, content=\"\", 
message=f\"Successfully inserted text at line {insert_line} in file {path}\")\n    except Exception as e:\n        error_msg = f\"Failed to insert text at line {insert_line} in file {path}: {str(e)}\"\n        log_error(\"insert_tool\", error_msg)\n        return FileOperationResult(success=False, content=\"\", message=error_msg)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2/model_tools.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nModels for the file operations feature in the Vertical Slice Architecture.\n\"\"\"\n\nimport os\nimport sys\nfrom typing import Dict, Any, Optional, List, Union\n\n# Add the project root to the Python path\nsys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../../')))\n\nclass FileOperationResult:\n    \"\"\"\n    Model representing the result of a file operation.\n    \"\"\"\n    \n    def __init__(self, success: bool, message: str, content: str = \"\", data: Any = None):\n        \"\"\"\n        Initialize a file operation result.\n        \n        Args:\n            success: Whether the operation was successful\n            message: A message describing the result\n            content: File content if the operation returns content\n            data: Optional data returned by the operation\n        \"\"\"\n        self.success = success\n        self.message = message\n        self.content = content\n        self.data = data\n    \n    def to_dict(self) -> Dict[str, Any]:\n        \"\"\"\n        Convert the result to a dictionary.\n        \n        Returns:\n            Dictionary representation of the result\n        \"\"\"\n        return {\n            \"success\": self.success,\n            \"message\": self.message,\n            \"content\": self.content,\n            \"data\": self.data\n        }\n\nclass ToolUseRequest:\n    \"\"\"\n    Model representing a tool use request from Claude.\n    \"\"\"\n    \n    def __init__(self, command: str, path: str = None, **kwargs):\n        \"\"\"\n        Initialize a tool use request.\n        \n        Args:\n            command: The command to execute\n            path: The path to operate on\n            **kwargs: Additional arguments for the command\n        \"\"\"\n        self.command = command\n        self.path = path\n        self.kwargs = kwargs\n    \n    @classmethod\n    def from_dict(cls, data: Dict[str, Any]) -> 
'ToolUseRequest':\n        \"\"\"\n        Create a tool use request from a dictionary.\n        \n        Args:\n            data: Dictionary containing the tool use request\n            \n        Returns:\n            A ToolUseRequest instance\n        \"\"\"\n        command = data.get(\"command\")\n        path = data.get(\"path\")\n        \n        # Extract all other keys as kwargs\n        kwargs = {k: v for k, v in data.items() if k not in [\"command\", \"path\"]}\n        \n        return cls(command, path, **kwargs)\n"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2/read_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nRead tool for the Vertical Slice Architecture implementation of the file editor agent.\nThis module provides file reading capabilities.\n\"\"\"\n\nimport sys\nimport os\nfrom typing import Optional\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.file_operations.model_tools import FileOperationResult\n\ndef read_file(path: str, start_line: Optional[int] = None, end_line: Optional[int] = None) -> FileOperationResult:\n    \"\"\"\n    Read the contents of a file.\n    \n    Args:\n        path: The path to the file to read\n        start_line: Optional start line (1-indexed)\n        end_line: Optional end line (1-indexed, -1 for end of file)\n        \n    Returns:\n        FileOperationResult with content or error message\n    \"\"\"\n    log_info(\"read_tool\", f\"Reading file {path} with range {start_line}-{end_line}\")\n    \n    try:\n        with open(path, 'r', encoding='utf-8') as f:\n            all_lines = f.readlines()\n        \n        # Handle line range\n        if start_line is not None:\n            start_idx = max(0, start_line - 1)  # Convert 1-indexed to 0-indexed\n        else:\n            start_idx = 0\n            \n        if end_line is not None:\n            if end_line == -1:\n                end_idx = len(all_lines)\n            else:\n                end_idx = min(end_line, len(all_lines))\n        else:\n            end_idx = len(all_lines)\n            \n        selected_lines = all_lines[start_idx:end_idx]\n        content = ''.join(selected_lines)\n        \n        log_info(\"read_tool\", f\"Successfully read file {path}\")\n        return FileOperationResult(success=True, content=content, message=f\"Successfully read file {path}\")\n    except Exception as e:\n        error_msg = f\"Failed to read 
file {path}: {str(e)}\"\n        log_error(\"read_tool\", error_msg)\n        return FileOperationResult(success=False, content=\"\", message=error_msg)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2/replace_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nReplace tool for the Vertical Slice Architecture implementation of the file editor agent.\nThis module provides string replacement capabilities for files.\n\"\"\"\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.file_operations.model_tools import FileOperationResult\n\ndef replace_in_file(path: str, old_str: str, new_str: str) -> FileOperationResult:\n    \"\"\"\n    Replace a string in a file.\n    \n    Args:\n        path: The path to the file to modify\n        old_str: The string to replace\n        new_str: The string to replace with\n        \n    Returns:\n        FileOperationResult with result or error message\n    \"\"\"\n    log_info(\"replace_tool\", f\"Replacing text in file {path}\")\n    \n    try:\n        # Read the existing content\n        with open(path, 'r', encoding='utf-8') as f:\n            content = f.read()\n        \n        # Count occurrences to verify uniqueness\n        occurrences = content.count(old_str)\n        \n        if occurrences == 0:\n            error_msg = f\"String not found in file {path}\"\n            log_error(\"replace_tool\", error_msg)\n            return FileOperationResult(success=False, content=\"\", message=error_msg)\n        \n        if occurrences > 1:\n            error_msg = f\"Multiple occurrences ({occurrences}) of the string found in file {path}. 
Need a unique string to replace.\"\n            log_error(\"replace_tool\", error_msg)\n            return FileOperationResult(success=False, content=\"\", message=error_msg)\n        \n        # Replace the string and write back to the file\n        new_content = content.replace(old_str, new_str, 1)\n        \n        with open(path, 'w', encoding='utf-8') as f:\n            f.write(new_content)\n        \n        log_info(\"replace_tool\", f\"Successfully replaced text in file {path}\")\n        return FileOperationResult(success=True, content=\"\", message=f\"Successfully replaced text in file {path}\")\n    except Exception as e:\n        error_msg = f\"Failed to replace text in file {path}: {str(e)}\"\n        log_error(\"replace_tool\", error_msg)\n        return FileOperationResult(success=False, content=\"\", message=error_msg)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2/service_tools.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nService layer for file operations in the Vertical Slice Architecture.\n\"\"\"\n\nimport os\nimport sys\nimport traceback\nfrom typing import Dict, Any, Optional, List, Tuple, Union\n\n# Add the project root to the Python path\nsys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../../')))\n\nfrom shared.utils import console, normalize_path, display_file_content\nfrom features.file_operations.model import FileOperationResult\n\nclass FileOperationService:\n    \"\"\"\n    Service for handling file operations.\n    \"\"\"\n    \n    @staticmethod\n    def view_file(path: str, view_range=None) -> FileOperationResult:\n        \"\"\"\n        View the contents of a file.\n\n        Args:\n            path: The path to the file to view\n            view_range: Optional start and end lines to view [start, end]\n\n        Returns:\n            FileOperationResult with content or error message\n        \"\"\"\n        try:\n            # Normalize the path\n            path = normalize_path(path)\n\n            if not os.path.exists(path):\n                error_msg = f\"File {path} does not exist\"\n                console.log(f\"[view_file] Error: {error_msg}\")\n                return FileOperationResult(False, error_msg)\n\n            with open(path, \"r\") as f:\n                lines = f.readlines()\n\n            if view_range:\n                start, end = view_range\n                # Convert to 0-indexed for Python\n                start = max(0, start - 1)\n                if end == -1:\n                    end = len(lines)\n                else:\n                    end = min(len(lines), end)\n                lines = lines[start:end]\n\n            content = \"\".join(lines)\n\n            # Display the file content (only for console, not returned to Claude)\n            display_file_content(path, content)\n\n            return FileOperationResult(True, f\"Successfully viewed file 
{path}\", content)\n        except Exception as e:\n            error_msg = f\"Error viewing file: {str(e)}\"\n            console.print(f\"[red]{error_msg}[/red]\")\n            console.log(f\"[view_file] Error: {str(e)}\")\n            console.log(traceback.format_exc())\n            return FileOperationResult(False, error_msg)\n\n    @staticmethod\n    def str_replace(path: str, old_str: str, new_str: str) -> FileOperationResult:\n        \"\"\"\n        Replace a specific string in a file.\n\n        Args:\n            path: The path to the file to modify\n            old_str: The text to replace\n            new_str: The new text to insert\n\n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        try:\n            # Normalize the path\n            path = normalize_path(path)\n\n            if not os.path.exists(path):\n                error_msg = f\"File {path} does not exist\"\n                console.log(f\"[str_replace] Error: {error_msg}\")\n                return FileOperationResult(False, error_msg)\n\n            with open(path, \"r\") as f:\n                content = f.read()\n\n            if old_str not in content:\n                error_msg = f\"The specified string was not found in the file {path}\"\n                console.log(f\"[str_replace] Error: {error_msg}\")\n                return FileOperationResult(False, error_msg)\n\n            new_content = content.replace(old_str, new_str, 1)\n\n            with open(path, \"w\") as f:\n                f.write(new_content)\n\n            console.print(f\"[green]Successfully replaced text in {path}[/green]\")\n            console.log(f\"[str_replace] Successfully replaced text in {path}\")\n            return FileOperationResult(True, f\"Successfully replaced text in {path}\")\n        except Exception as e:\n            error_msg = f\"Error replacing text: {str(e)}\"\n            console.print(f\"[red]{error_msg}[/red]\")\n            
console.log(f\"[str_replace] Error: {str(e)}\")\n            console.log(traceback.format_exc())\n            return FileOperationResult(False, error_msg)\n\n    @staticmethod\n    def create_file(path: str, file_text: str) -> FileOperationResult:\n        \"\"\"\n        Create a new file with specified content.\n\n        Args:\n            path: The path where the new file should be created\n            file_text: The content to write to the new file\n\n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        try:\n            # Check if the path is empty or invalid\n            if not path or not path.strip():\n                error_msg = \"Invalid file path provided: path is empty.\"\n                console.log(f\"[create_file] Error: {error_msg}\")\n                return FileOperationResult(False, error_msg)\n\n            # Normalize the path\n            path = normalize_path(path)\n\n            # Check if the directory exists\n            directory = os.path.dirname(path)\n            if directory and not os.path.exists(directory):\n                console.log(f\"[create_file] Creating directory: {directory}\")\n                os.makedirs(directory)\n\n            with open(path, \"w\") as f:\n                f.write(file_text or \"\")\n\n            console.print(f\"[green]Successfully created file {path}[/green]\")\n            console.log(f\"[create_file] Successfully created file {path}\")\n            return FileOperationResult(True, f\"Successfully created file {path}\")\n        except Exception as e:\n            error_msg = f\"Error creating file: {str(e)}\"\n            console.print(f\"[red]{error_msg}[/red]\")\n            console.log(f\"[create_file] Error: {str(e)}\")\n            console.log(traceback.format_exc())\n            return FileOperationResult(False, error_msg)\n\n    @staticmethod\n    def insert_text(path: str, insert_line: int, new_str: str) -> FileOperationResult:\n        
\"\"\"\n        Insert text at a specific location in a file.\n\n        Args:\n            path: The path to the file to modify\n            insert_line: The line number after which to insert the text\n            new_str: The text to insert\n\n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        try:\n            if not path or not path.strip():\n                error_msg = \"Invalid file path provided: path is empty.\"\n                console.log(f\"[insert_text] Error: {error_msg}\")\n                return FileOperationResult(False, error_msg)\n\n            # Normalize the path\n            path = normalize_path(path)\n\n            if not os.path.exists(path):\n                error_msg = f\"File {path} does not exist\"\n                console.log(f\"[insert_text] Error: {error_msg}\")\n                return FileOperationResult(False, error_msg)\n\n            if insert_line is None:\n                error_msg = \"No line number specified: insert_line is missing.\"\n                console.log(f\"[insert_text] Error: {error_msg}\")\n                return FileOperationResult(False, error_msg)\n\n            with open(path, \"r\") as f:\n                lines = f.readlines()\n\n            # Line is 0-indexed for this function, but Claude provides 1-indexed\n            insert_line = min(max(0, insert_line - 1), len(lines))\n\n            # Check that the index is within acceptable bounds\n            if insert_line < 0 or insert_line > len(lines):\n                error_msg = (\n                    f\"Insert line number {insert_line} out of range (0-{len(lines)}).\"\n                )\n                console.log(f\"[insert_text] Error: {error_msg}\")\n                return FileOperationResult(False, error_msg)\n\n            # Ensure new_str ends with newline\n            if new_str and not new_str.endswith(\"\\n\"):\n                new_str += \"\\n\"\n\n            lines.insert(insert_line, 
new_str)\n\n            with open(path, \"w\") as f:\n                f.writelines(lines)\n\n            console.print(\n                f\"[green]Successfully inserted text at line {insert_line + 1} in {path}[/green]\"\n            )\n            console.log(\n                f\"[insert_text] Successfully inserted text at line {insert_line + 1} in {path}\"\n            )\n            return FileOperationResult(\n                True, f\"Successfully inserted text at line {insert_line + 1} in {path}\"\n            )\n        except Exception as e:\n            error_msg = f\"Error inserting text: {str(e)}\"\n            console.print(f\"[red]{error_msg}[/red]\")\n            console.log(f\"[insert_text] Error: {str(e)}\")\n            console.log(traceback.format_exc())\n            return FileOperationResult(False, error_msg)\n\n    @staticmethod\n    def undo_edit(path: str) -> FileOperationResult:\n        \"\"\"\n        Placeholder for undo_edit functionality.\n        In a real implementation, you would need to track edit history.\n\n        Args:\n            path: The path to the file whose last edit should be undone\n\n        Returns:\n            FileOperationResult with message about undo functionality\n        \"\"\"\n        try:\n            if not path or not path.strip():\n                error_msg = \"Invalid file path provided: path is empty.\"\n                console.log(f\"[undo_edit] Error: {error_msg}\")\n                return FileOperationResult(False, error_msg)\n\n            # Normalize the path\n            path = normalize_path(path)\n\n            message = \"Undo functionality is not implemented in this version.\"\n            console.print(f\"[yellow]{message}[/yellow]\")\n            console.log(f\"[undo_edit] {message}\")\n            return FileOperationResult(True, message)\n        except Exception as e:\n            error_msg = f\"Error in undo_edit: {str(e)}\"\n            console.print(f\"[red]{error_msg}[/red]\")\n         
   console.log(f\"[undo_edit] Error: {str(e)}\")\n            console.log(traceback.format_exc())\n            return FileOperationResult(False, error_msg)\n"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2/tool_handler.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nTool handler for the Vertical Slice Architecture implementation of the file editor agent.\nThis module handles tool use requests from the Claude agent.\n\"\"\"\n\nimport sys\nimport os\nfrom typing import Dict, Any\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error, normalize_path\nfrom features.file_operations.model_tools import ToolUseRequest\nfrom features.file_operations.file_editor import FileEditor\n\ndef handle_tool_use(input_data: Dict[str, Any]) -> Dict[str, Any]:\n    \"\"\"\n    Handle tool use requests from the Claude agent.\n    \n    Args:\n        input_data: The tool use request data from Claude\n        \n    Returns:\n        Dictionary with the result or error message\n    \"\"\"\n    log_info(\"tool_handler\", f\"Received tool use request: {input_data}\")\n    \n    try:\n        # Parse the tool use request\n        request = ToolUseRequest.from_dict(input_data)\n        \n        # Normalize the path\n        path = normalize_path(request.path) if request.path else None\n        \n        # Handle the command\n        if request.command == \"view\":\n            start_line = request.kwargs.get(\"start_line\")\n            end_line = request.kwargs.get(\"end_line\")\n            \n            if start_line is not None:\n                start_line = int(start_line)\n            if end_line is not None:\n                end_line = int(end_line)\n                \n            result = FileEditor.read(path, start_line, end_line)\n            \n        elif request.command == \"edit\":\n            old_str = request.kwargs.get(\"old_str\", \"\")\n            new_str = request.kwargs.get(\"new_str\", \"\")\n            \n            result = FileEditor.edit_file(path, old_str, new_str)\n            \n        elif request.command == 
\"create\":\n            content = request.kwargs.get(\"content\", \"\")\n            \n            result = FileEditor.create_file(path, content)\n            \n        elif request.command == \"insert\":\n            line_num = int(request.kwargs.get(\"line_num\", 1))\n            content = request.kwargs.get(\"content\", \"\")\n            \n            result = FileEditor.insert_line(path, line_num, content)\n            \n        else:\n            log_error(\"tool_handler\", f\"Unknown command: {request.command}\")\n            return {\"error\": f\"Unknown command: {request.command}\"}\n        \n        # Return the result\n        if result.success:\n            return {\"result\": result.content or result.message}\n        else:\n            return {\"error\": result.message}\n            \n    except Exception as e:\n        error_msg = f\"Error handling tool use: {str(e)}\"\n        log_error(\"tool_handler\", error_msg)\n        return {\"error\": error_msg}"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2/write_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nWrite tool for the Vertical Slice Architecture implementation of the file editor agent.\nThis module provides file writing capabilities.\n\"\"\"\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.file_operations.model_tools import FileOperationResult\n\ndef write_file(path: str, content: str) -> FileOperationResult:\n    \"\"\"\n    Write content to a file.\n    \n    Args:\n        path: The path to the file to write\n        content: The content to write to the file\n        \n    Returns:\n        FileOperationResult with result or error message\n    \"\"\"\n    log_info(\"write_tool\", f\"Writing to file {path}\")\n    \n    try:\n        with open(path, 'w', encoding='utf-8') as f:\n            f.write(content)\n        \n        log_info(\"write_tool\", f\"Successfully wrote to file {path}\")\n        return FileOperationResult(success=True, content=\"\", message=f\"Successfully wrote to file {path}\")\n    except Exception as e:\n        error_msg = f\"Failed to write to file {path}: {str(e)}\"\n        log_error(\"write_tool\", error_msg)\n        return FileOperationResult(success=False, content=\"\", message=error_msg)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2_gemini/__init__.py",
    "content": ""
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2_gemini/api_tools.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nAPI layer for file operations in the Vertical Slice Architecture.\n\"\"\"\n\nimport os\nimport sys\nimport traceback\nfrom typing import Dict, Any, Optional, List, Union\n\n# Add the project root to the Python path\nsys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../../')))\n\nfrom shared.utils import console\nfrom features.file_operations.service import FileOperationService\nfrom features.file_operations.model import ToolUseRequest, FileOperationResult\n\nclass FileOperationsAPI:\n    \"\"\"\n    API for file operations.\n    \"\"\"\n    \n    @staticmethod\n    def handle_tool_use(tool_use: Dict[str, Any]) -> Dict[str, Any]:\n        \"\"\"\n        Handle text editor tool use from Claude.\n\n        Args:\n            tool_use: The tool use request from Claude\n\n        Returns:\n            Dictionary with result or error to send back to Claude\n        \"\"\"\n        try:\n            # Convert the tool use dictionary to a ToolUseRequest object\n            request = ToolUseRequest.from_dict(tool_use)\n            \n            console.log(f\"[handle_tool_use] Received command: {request.command}, path: {request.path}\")\n\n            if not request.command:\n                error_msg = \"No command specified in tool use request\"\n                console.log(f\"[handle_tool_use] Error: {error_msg}\")\n                return {\"error\": error_msg}\n\n            if not request.path and request.command != \"undo_edit\":  # undo_edit might not need a path\n                error_msg = \"No path specified in tool use request\"\n                console.log(f\"[handle_tool_use] Error: {error_msg}\")\n                return {\"error\": error_msg}\n\n            # The path normalization is now handled in each file operation function\n            console.print(f\"[blue]Executing {request.command} command on {request.path}[/blue]\")\n\n            result = None\n            \n            if 
request.command == \"view\":\n                view_range = request.kwargs.get(\"view_range\")\n                console.log(\n                    f\"[handle_tool_use] Calling view_file with view_range: {view_range}\"\n                )\n                result = FileOperationService.view_file(request.path, view_range)\n\n            elif request.command == \"str_replace\":\n                old_str = request.kwargs.get(\"old_str\")\n                new_str = request.kwargs.get(\"new_str\")\n                console.log(f\"[handle_tool_use] Calling str_replace\")\n                result = FileOperationService.str_replace(request.path, old_str, new_str)\n\n            elif request.command == \"create\":\n                file_text = request.kwargs.get(\"file_text\")\n                console.log(f\"[handle_tool_use] Calling create_file\")\n                result = FileOperationService.create_file(request.path, file_text)\n\n            elif request.command == \"insert\":\n                insert_line = request.kwargs.get(\"insert_line\")\n                new_str = request.kwargs.get(\"new_str\")\n                console.log(f\"[handle_tool_use] Calling insert_text at line: {insert_line}\")\n                result = FileOperationService.insert_text(request.path, insert_line, new_str)\n\n            elif request.command == \"undo_edit\":\n                console.log(f\"[handle_tool_use] Calling undo_edit\")\n                result = FileOperationService.undo_edit(request.path)\n\n            else:\n                error_msg = f\"Unknown command: {request.command}\"\n                console.print(f\"[red]{error_msg}[/red]\")\n                console.log(f\"[handle_tool_use] Error: {error_msg}\")\n                return {\"error\": error_msg}\n            \n            # Convert the result to a dictionary\n            if result.success:\n                return {\"result\": result.data if result.data is not None else result.message}\n            else:\n                return 
{\"error\": result.message}\n                \n        except Exception as e:\n            error_msg = f\"Error handling tool use: {str(e)}\"\n            console.print(f\"[red]{error_msg}[/red]\")\n            console.log(f\"[handle_tool_use] Error: {str(e)}\")\n            console.log(traceback.format_exc())\n            return {\"error\": error_msg}\n"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2_gemini/create_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nCreate tool for the Vertical Slice Architecture implementation of the file editor agent.\nThis module provides file creation capabilities.\n\"\"\"\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.file_operations.model_tools import FileOperationResult\nfrom features.file_operations.write_tool import write_file\n\ndef create_file(path: str, content: str) -> FileOperationResult:\n    \"\"\"\n    Create a new file with the specified content.\n    \n    Args:\n        path: The path to the file to create\n        content: The content to write to the file\n        \n    Returns:\n        FileOperationResult with result or error message\n    \"\"\"\n    log_info(\"create_tool\", f\"Creating file {path}\")\n    \n    try:\n        # Create directory if it doesn't exist\n        os.makedirs(os.path.dirname(os.path.abspath(path)), exist_ok=True)\n        \n        # Use the write_file function to create the file\n        return write_file(path, content)\n    except Exception as e:\n        error_msg = f\"Failed to create file {path}: {str(e)}\"\n        log_error(\"create_tool\", error_msg)\n        return FileOperationResult(success=False, content=\"\", message=error_msg)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2_gemini/file_agent.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nFile agent for the Vertical Slice Architecture implementation of the file editor agent.\nThis module provides the agent interface for file operations.\n\"\"\"\n\nimport time\nfrom typing import Tuple, Dict, Any, List, Optional, Callable\nfrom rich.console import Console\nfrom rich.panel import Panel\nfrom anthropic import Anthropic\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error, display_token_usage\nfrom features.file_operations.model_tools import FileOperationResult\nfrom features.file_operations.tool_handler import handle_tool_use\n\n# Initialize rich console\nconsole = Console()\n\n# Define constants\nMODEL = \"claude-3-7-sonnet-20250219\"\nDEFAULT_THINKING_TOKENS = 3000\n\nclass FileAgent:\n    \"\"\"\n    File agent that provides an interface for AI-assisted file operations.\n    \"\"\"\n    \n    @staticmethod\n    def run_agent(\n        client: Anthropic,\n        prompt: str,\n        max_thinking_tokens: int = DEFAULT_THINKING_TOKENS,\n        max_loops: int = 10,\n        use_token_efficiency: bool = False,\n    ) -> Tuple[str, int, int]:\n        \"\"\"\n        Run the Claude agent with file editing capabilities.\n\n        Args:\n            client: The Anthropic client\n            prompt: The user's prompt\n            max_thinking_tokens: Maximum tokens for thinking\n            max_loops: Maximum number of tool use loops\n            use_token_efficiency: Whether to use token-efficient tool use beta feature\n\n        Returns:\n            Tuple containing:\n            - Final response from Claude (str)\n            - Total input tokens used (int)\n            - Total output tokens used (int)\n        \"\"\"\n        # Track token usage\n        input_tokens_total = 0\n        output_tokens_total = 
0\n        system_prompt = \"\"\"You are a helpful AI assistant with text editing capabilities.\nYou have access to a text editor tool that can view, edit, and create files.\nAlways think step by step about what you need to do before taking any action.\nBe careful when making edits to files, as they can permanently change the user's files.\nFollow these steps when handling file operations:\n1. First, view files to understand their content before making changes\n2. For edits, ensure you have the correct context and are making the right changes\n3. When creating files, make sure they're in the right location with proper formatting\n\"\"\"\n\n        # Define text editor tool\n        text_editor_tool = {\"name\": \"str_replace_editor\", \"type\": \"text_editor_20250124\"}\n\n        messages = [\n            {\n                \"role\": \"user\",\n                \"content\": f\"\"\"I need help with editing files. Here's what I want to do:\n\n{prompt}\n\nPlease use the text editor tool to help me with this. 
First, think through what you need to do, then use the appropriate tool.\n\"\"\",\n            }\n        ]\n\n        loop_count = 0\n        tool_use_count = 0\n        thinking_start_time = time.time()\n\n        while loop_count < max_loops:\n            loop_count += 1\n\n            console.rule(f\"[yellow]Agent Loop {loop_count}/{max_loops}[/yellow]\")\n            log_info(\"file_agent\", f\"Starting agent loop {loop_count}/{max_loops}\")\n\n            # Create message with text editor tool\n            message_args = {\n                \"model\": MODEL,\n                \"max_tokens\": 4096,\n                \"tools\": [text_editor_tool],\n                \"messages\": messages,\n                \"system\": system_prompt,\n                \"thinking\": {\"type\": \"enabled\", \"budget_tokens\": max_thinking_tokens},\n            }\n\n            # Use the beta.messages with betas parameter if token efficiency is enabled\n            if use_token_efficiency:\n                # Using token-efficient tools beta feature\n                message_args[\"betas\"] = [\"token-efficient-tools-2025-02-19\"]\n                response = client.beta.messages.create(**message_args)\n            else:\n                # Standard approach\n                response = client.messages.create(**message_args)\n\n            # Track token usage\n            if hasattr(response, \"usage\"):\n                input_tokens = getattr(response.usage, \"input_tokens\", 0)\n                output_tokens = getattr(response.usage, \"output_tokens\", 0)\n\n                input_tokens_total += input_tokens\n                output_tokens_total += output_tokens\n\n                console.print(\n                    f\"[dim]Loop {loop_count} tokens: Input={input_tokens}, Output={output_tokens}[/dim]\"\n                )\n                log_info(\n                    \"file_agent\", \n                    f\"Loop {loop_count} tokens: Input={input_tokens}, Output={output_tokens}\"\n            
    )\n\n            # Process response content\n            thinking_block = None\n            tool_use_block = None\n            text_block = None\n\n            for content_block in response.content:\n                if content_block.type == \"thinking\":\n                    thinking_block = content_block\n                    # Access the thinking attribute which contains the actual thinking text\n                    if hasattr(thinking_block, \"thinking\"):\n                        console.print(\n                            Panel(\n                                thinking_block.thinking,\n                                title=f\"Claude's Thinking (Loop {loop_count})\",\n                                border_style=\"blue\",\n                            )\n                        )\n                    else:\n                        console.print(\n                            Panel(\n                                \"Claude is thinking...\",\n                                title=f\"Claude's Thinking (Loop {loop_count})\",\n                                border_style=\"blue\",\n                            )\n                        )\n                elif content_block.type == \"tool_use\":\n                    tool_use_block = content_block\n                    tool_use_count += 1\n                elif content_block.type == \"text\":\n                    text_block = content_block\n\n            # If we got a final text response with no tool use, we're done\n            if text_block and not tool_use_block:\n                thinking_end_time = time.time()\n                thinking_duration = thinking_end_time - thinking_start_time\n\n                console.print(\n                    f\"\\n[bold green]Completed in {thinking_duration:.2f} seconds after {loop_count} loops and {tool_use_count} tool uses[/bold green]\"\n                )\n                log_info(\n                    \"file_agent\",\n                    f\"Completed in {thinking_duration:.2f} 
seconds after {loop_count} loops and {tool_use_count} tool uses\"\n                )\n\n                # Add the response to messages\n                messages.append(\n                    {\n                        \"role\": \"assistant\",\n                        \"content\": [\n                            *([thinking_block] if thinking_block else []),\n                            {\"type\": \"text\", \"text\": text_block.text},\n                        ],\n                    }\n                )\n\n                return text_block.text, input_tokens_total, output_tokens_total\n\n            # Handle tool use\n            if tool_use_block:\n                # Add the assistant's response to messages before handling tool calls\n                messages.append({\"role\": \"assistant\", \"content\": response.content})\n\n                console.print(\n                    f\"\\n[bold blue]Tool Call:[/bold blue] {tool_use_block.name}\"\n                )\n                log_info(\"file_agent\", f\"Tool Call: {tool_use_block.name}\")\n\n                # Handle the tool use with our handler\n                tool_result = handle_tool_use(tool_use_block.input)\n\n                # Format tool result for Claude\n                tool_result_message = {\n                    \"role\": \"user\",\n                    \"content\": [\n                        {\n                            \"type\": \"tool_result\",\n                            \"tool_use_id\": tool_use_block.id,\n                            \"content\": tool_result.get(\"error\") or tool_result.get(\"result\", \"\"),\n                        }\n                    ],\n                }\n                messages.append(tool_result_message)\n\n        # If we reach here, we hit the max loops\n        console.print(\n            f\"\\n[bold red]Warning: Reached maximum loops ({max_loops}) without completing the task[/bold red]\"\n        )\n        log_error(\n            \"file_agent\",\n            
f\"Reached maximum loops ({max_loops}) without completing the task\"\n        )\n        return (\n            \"I wasn't able to complete the task within the allowed number of thinking steps. Please try a more specific prompt or increase the loop limit.\",\n            input_tokens_total,\n            output_tokens_total,\n        )\n\n# Expose the run_agent function at the module level\ndef run_agent(\n    client: Anthropic,\n    prompt: str,\n    max_tool_use_loops: int = 15,\n    token_efficient_tool_use: bool = True,\n) -> Tuple[int, int]:\n    \"\"\"\n    Run the file editor agent with the specified prompt.\n    \n    Args:\n        client: The Anthropic client\n        prompt: The prompt to send to Claude\n        max_tool_use_loops: Maximum number of tool use loops\n        token_efficient_tool_use: Whether to use token-efficient tool use\n        \n    Returns:\n        Tuple containing input and output token counts\n    \"\"\"\n    log_info(\"file_agent\", f\"Running agent with prompt: {prompt}\")\n    \n    _, input_tokens, output_tokens = FileAgent.run_agent(\n        client=client,\n        prompt=prompt,\n        max_loops=max_tool_use_loops,\n        use_token_efficiency=token_efficient_tool_use,\n        max_thinking_tokens=DEFAULT_THINKING_TOKENS\n    )\n    \n    return input_tokens, output_tokens"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2_gemini/file_editor.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nFile editor for the Vertical Slice Architecture implementation of the file editor agent.\nThis module combines reading and writing capabilities for file editing.\n\"\"\"\n\nimport sys\nimport os\nfrom typing import Dict, Any, Tuple, Optional\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.file_operations.model_tools import FileOperationResult\nfrom features.file_operations.file_writer import FileWriter\nfrom features.file_operations.read_tool import read_file\n\nclass FileEditor:\n    \"\"\"\n    File editor that combines reading and writing capabilities for file editing.\n    \"\"\"\n    \n    @staticmethod\n    def read(path: str, start_line: Optional[int] = None, end_line: Optional[int] = None) -> FileOperationResult:\n        \"\"\"\n        Read the contents of a file.\n        \n        Args:\n            path: The path to the file to read\n            start_line: Optional start line (1-indexed)\n            end_line: Optional end line (1-indexed, -1 for end of file)\n            \n        Returns:\n            FileOperationResult with content or error message\n        \"\"\"\n        log_info(\"file_editor\", f\"Reading file {path} with range {start_line}-{end_line}\")\n        return read_file(path, start_line, end_line)\n    \n    @staticmethod\n    def view_file(path: str, view_range=None) -> FileOperationResult:\n        \"\"\"\n        View the contents of a file with optional range.\n        \n        Args:\n            path: The path to the file to view\n            view_range: Optional tuple of (start_line, end_line)\n            \n        Returns:\n            FileOperationResult with content or error message\n        \"\"\"\n        start_line = None\n        end_line = None\n        \n        if view_range:\n     
       start_line, end_line = view_range\n            \n        log_info(\"file_editor\", f\"Viewing file {path} with range {start_line}-{end_line}\")\n        \n        return FileEditor.read(path, start_line, end_line)\n    \n    @staticmethod\n    def edit_file(path: str, old_str: str, new_str: str) -> FileOperationResult:\n        \"\"\"\n        Edit a file by replacing one string with another.\n        \n        Args:\n            path: The path to the file to edit\n            old_str: The string to replace\n            new_str: The string to replace it with\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_editor\", f\"Editing file {path}\")\n        \n        # First, read the file to check if it exists\n        read_result = FileEditor.read(path)\n        if not read_result.success:\n            log_error(\"file_editor\", f\"Cannot edit file that can't be read: {read_result.message}\")\n            return read_result\n        \n        # Then, use the file writer to replace the string\n        return FileWriter.replace(path, old_str, new_str)\n    \n    @staticmethod\n    def create_file(path: str, content: str) -> FileOperationResult:\n        \"\"\"\n        Create a new file with the specified content.\n        \n        Args:\n            path: The path to the file to create\n            content: The content for the new file\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_editor\", f\"Creating file {path}\")\n        \n        # Use the file writer to create the file\n        return FileWriter.create(path, content)\n    \n    @staticmethod\n    def insert_line(path: str, line_num: int, content: str) -> FileOperationResult:\n        \"\"\"\n        Insert content at a specific line in a file.\n        \n        Args:\n            path: The path to the file to modify\n            
line_num: The line number where to insert (1-indexed)\n            content: The content to insert\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_editor\", f\"Inserting at line {line_num} in file {path}\")\n        \n        # First, read the file to check if it exists\n        read_result = FileEditor.read(path)\n        if not read_result.success:\n            log_error(\"file_editor\", f\"Cannot modify file that can't be read: {read_result.message}\")\n            return read_result\n        \n        # Then, use the file writer to insert the line\n        return FileWriter.insert(path, line_num, content)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2_gemini/file_writer.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nFile writer for the Vertical Slice Architecture implementation of the file editor agent.\nThis module provides file writing capabilities by composing various tools.\n\"\"\"\n\nimport sys\nimport os\nfrom typing import Dict, Any\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.file_operations.model_tools import FileOperationResult\nfrom features.file_operations.write_tool import write_file\nfrom features.file_operations.replace_tool import replace_in_file\nfrom features.file_operations.insert_tool import insert_in_file\nfrom features.file_operations.create_tool import create_file\n\nclass FileWriter:\n    \"\"\"\n    File writer that composes various tools to provide file writing capabilities.\n    \"\"\"\n    \n    @staticmethod\n    def write(path: str, content: str) -> FileOperationResult:\n        \"\"\"\n        Write content to a file.\n        \n        Args:\n            path: The path to the file to write\n            content: The content to write to the file\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_writer\", f\"Writing to file {path}\")\n        return write_file(path, content)\n    \n    @staticmethod\n    def replace(path: str, old_str: str, new_str: str) -> FileOperationResult:\n        \"\"\"\n        Replace a string in a file.\n        \n        Args:\n            path: The path to the file to modify\n            old_str: The string to replace\n            new_str: The string to replace with\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_writer\", f\"Replacing text in file {path}\")\n        return replace_in_file(path, old_str, 
new_str)\n    \n    @staticmethod\n    def insert(path: str, insert_line: int, new_str: str) -> FileOperationResult:\n        \"\"\"\n        Insert text at a specific line in a file.\n        \n        Args:\n            path: The path to the file to modify\n            insert_line: The line number at which to insert the text (1-indexed); the new text becomes that line\n            new_str: The text to insert\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_writer\", f\"Inserting text at line {insert_line} in file {path}\")\n        return insert_in_file(path, insert_line, new_str)\n    \n    @staticmethod\n    def create(path: str, content: str) -> FileOperationResult:\n        \"\"\"\n        Create a new file with the specified content.\n        \n        Args:\n            path: The path to the file to create\n            content: The content to write to the file\n            \n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        log_info(\"file_writer\", f\"Creating file {path}\")\n        return create_file(path, content)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2_gemini/insert_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nInsert tool for the Vertical Slice Architecture implementation of the file editor agent.\nThis module provides line insertion capabilities for files.\n\"\"\"\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.file_operations.model_tools import FileOperationResult\n\ndef insert_in_file(path: str, insert_line: int, new_str: str) -> FileOperationResult:\n    \"\"\"\n    Insert text at a specific line in a file.\n    \n    Args:\n        path: The path to the file to modify\n        insert_line: The line number at which to insert the text (1-indexed); the new text becomes that line and existing lines shift down\n        new_str: The text to insert\n        \n    Returns:\n        FileOperationResult with result or error message\n    \"\"\"\n    log_info(\"insert_tool\", f\"Inserting text at line {insert_line} in file {path}\")\n    \n    try:\n        # Read the existing content\n        with open(path, 'r', encoding='utf-8') as f:\n            lines = f.readlines()\n        \n        if insert_line < 1 or insert_line > len(lines) + 1:\n            error_msg = f\"Invalid line number {insert_line} for file {path} with {len(lines)} lines\"\n            log_error(\"insert_tool\", error_msg)\n            return FileOperationResult(success=False, content=\"\", message=error_msg)\n        \n        # Insert the new string at the specified position, ensuring it ends with a newline\n        lines.insert(insert_line - 1, new_str if new_str.endswith('\\n') else new_str + '\\n')\n        \n        # Write the modified content back to the file\n        with open(path, 'w', encoding='utf-8') as f:\n            f.writelines(lines)\n        \n        log_info(\"insert_tool\", f\"Successfully inserted text at line {insert_line} in file {path}\")\n        return FileOperationResult(success=True, content=\"\", 
message=f\"Successfully inserted text at line {insert_line} in file {path}\")\n    except Exception as e:\n        error_msg = f\"Failed to insert text at line {insert_line} in file {path}: {str(e)}\"\n        log_error(\"insert_tool\", error_msg)\n        return FileOperationResult(success=False, content=\"\", message=error_msg)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2_gemini/model_tools.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nModels for the file operations feature in the Vertical Slice Architecture.\n\"\"\"\n\nimport os\nimport sys\nfrom typing import Dict, Any, Optional, List, Union\n\n# Add the project root to the Python path\nsys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../../')))\n\nclass FileOperationResult:\n    \"\"\"\n    Model representing the result of a file operation.\n    \"\"\"\n    \n    def __init__(self, success: bool, message: str, content: str = \"\", data: Any = None):\n        \"\"\"\n        Initialize a file operation result.\n        \n        Args:\n            success: Whether the operation was successful\n            message: A message describing the result\n            content: File content if the operation returns content\n            data: Optional data returned by the operation\n        \"\"\"\n        self.success = success\n        self.message = message\n        self.content = content\n        self.data = data\n    \n    def to_dict(self) -> Dict[str, Any]:\n        \"\"\"\n        Convert the result to a dictionary.\n        \n        Returns:\n            Dictionary representation of the result\n        \"\"\"\n        return {\n            \"success\": self.success,\n            \"message\": self.message,\n            \"content\": self.content,\n            \"data\": self.data\n        }\n\nclass ToolUseRequest:\n    \"\"\"\n    Model representing a tool use request from Claude.\n    \"\"\"\n    \n    def __init__(self, command: str, path: Optional[str] = None, **kwargs):\n        \"\"\"\n        Initialize a tool use request.\n        \n        Args:\n            command: The command to execute\n            path: The path to operate on\n            **kwargs: Additional arguments for the command\n        \"\"\"\n        self.command = command\n        self.path = path\n        self.kwargs = kwargs\n    \n    @classmethod\n    def from_dict(cls, data: Dict[str, Any]) -> 
'ToolUseRequest':\n        \"\"\"\n        Create a tool use request from a dictionary.\n        \n        Args:\n            data: Dictionary containing the tool use request\n            \n        Returns:\n            A ToolUseRequest instance\n        \"\"\"\n        command = data.get(\"command\")\n        path = data.get(\"path\")\n        \n        # Extract all other keys as kwargs\n        kwargs = {k: v for k, v in data.items() if k not in [\"command\", \"path\"]}\n        \n        return cls(command, path, **kwargs)\n"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2_gemini/read_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nRead tool for the Vertical Slice Architecture implementation of the file editor agent.\nThis module provides file reading capabilities.\n\"\"\"\n\nimport sys\nimport os\nfrom typing import Optional\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.file_operations.model_tools import FileOperationResult\n\ndef read_file(path: str, start_line: Optional[int] = None, end_line: Optional[int] = None) -> FileOperationResult:\n    \"\"\"\n    Read the contents of a file.\n    \n    Args:\n        path: The path to the file to read\n        start_line: Optional start line (1-indexed)\n        end_line: Optional end line (1-indexed, -1 for end of file)\n        \n    Returns:\n        FileOperationResult with content or error message\n    \"\"\"\n    log_info(\"read_tool\", f\"Reading file {path} with range {start_line}-{end_line}\")\n    \n    try:\n        with open(path, 'r', encoding='utf-8') as f:\n            all_lines = f.readlines()\n        \n        # Handle line range\n        if start_line is not None:\n            start_idx = max(0, start_line - 1)  # Convert 1-indexed to 0-indexed\n        else:\n            start_idx = 0\n            \n        if end_line is not None:\n            if end_line == -1:\n                end_idx = len(all_lines)\n            else:\n                end_idx = min(end_line, len(all_lines))\n        else:\n            end_idx = len(all_lines)\n            \n        selected_lines = all_lines[start_idx:end_idx]\n        content = ''.join(selected_lines)\n        \n        log_info(\"read_tool\", f\"Successfully read file {path}\")\n        return FileOperationResult(success=True, content=content, message=f\"Successfully read file {path}\")\n    except Exception as e:\n        error_msg = f\"Failed to read 
file {path}: {str(e)}\"\n        log_error(\"read_tool\", error_msg)\n        return FileOperationResult(success=False, content=\"\", message=error_msg)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2_gemini/replace_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nReplace tool for the Vertical Slice Architecture implementation of the file editor agent.\nThis module provides string replacement capabilities for files.\n\"\"\"\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.file_operations.model_tools import FileOperationResult\n\ndef replace_in_file(path: str, old_str: str, new_str: str) -> FileOperationResult:\n    \"\"\"\n    Replace a string in a file.\n    \n    Args:\n        path: The path to the file to modify\n        old_str: The string to replace\n        new_str: The string to replace with\n        \n    Returns:\n        FileOperationResult with result or error message\n    \"\"\"\n    log_info(\"replace_tool\", f\"Replacing text in file {path}\")\n    \n    try:\n        # Read the existing content\n        with open(path, 'r', encoding='utf-8') as f:\n            content = f.read()\n        \n        # Count occurrences to verify uniqueness\n        occurrences = content.count(old_str)\n        \n        if occurrences == 0:\n            error_msg = f\"String not found in file {path}\"\n            log_error(\"replace_tool\", error_msg)\n            return FileOperationResult(success=False, content=\"\", message=error_msg)\n        \n        if occurrences > 1:\n            error_msg = f\"Multiple occurrences ({occurrences}) of the string found in file {path}. 
Need a unique string to replace.\"\n            log_error(\"replace_tool\", error_msg)\n            return FileOperationResult(success=False, content=\"\", message=error_msg)\n        \n        # Replace the string and write back to the file\n        new_content = content.replace(old_str, new_str, 1)\n        \n        with open(path, 'w', encoding='utf-8') as f:\n            f.write(new_content)\n        \n        log_info(\"replace_tool\", f\"Successfully replaced text in file {path}\")\n        return FileOperationResult(success=True, content=\"\", message=f\"Successfully replaced text in file {path}\")\n    except Exception as e:\n        error_msg = f\"Failed to replace text in file {path}: {str(e)}\"\n        log_error(\"replace_tool\", error_msg)\n        return FileOperationResult(success=False, content=\"\", message=error_msg)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2_gemini/service_tools.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nService layer for file operations in the Vertical Slice Architecture.\n\"\"\"\n\nimport os\nimport sys\nimport traceback\nfrom typing import Dict, Any, Optional, List, Tuple, Union\n\n# Add the project root to the Python path\nsys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../../')))\n\nfrom shared.utils import console, normalize_path, display_file_content\nfrom features.file_operations.model import FileOperationResult\n\nclass FileOperationService:\n    \"\"\"\n    Service for handling file operations.\n    \"\"\"\n    \n    @staticmethod\n    def view_file(path: str, view_range=None) -> FileOperationResult:\n        \"\"\"\n        View the contents of a file.\n\n        Args:\n            path: The path to the file to view\n            view_range: Optional start and end lines to view [start, end]\n\n        Returns:\n            FileOperationResult with content or error message\n        \"\"\"\n        try:\n            # Normalize the path\n            path = normalize_path(path)\n\n            if not os.path.exists(path):\n                error_msg = f\"File {path} does not exist\"\n                console.log(f\"[view_file] Error: {error_msg}\")\n                return FileOperationResult(False, error_msg)\n\n            with open(path, \"r\") as f:\n                lines = f.readlines()\n\n            if view_range:\n                start, end = view_range\n                # Convert to 0-indexed for Python\n                start = max(0, start - 1)\n                if end == -1:\n                    end = len(lines)\n                else:\n                    end = min(len(lines), end)\n                lines = lines[start:end]\n\n            content = \"\".join(lines)\n\n            # Display the file content (only for console, not returned to Claude)\n            display_file_content(path, content)\n\n            return FileOperationResult(True, f\"Successfully viewed file 
{path}\", content)\n        except Exception as e:\n            error_msg = f\"Error viewing file: {str(e)}\"\n            console.print(f\"[red]{error_msg}[/red]\")\n            console.log(f\"[view_file] Error: {str(e)}\")\n            console.log(traceback.format_exc())\n            return FileOperationResult(False, error_msg)\n\n    @staticmethod\n    def str_replace(path: str, old_str: str, new_str: str) -> FileOperationResult:\n        \"\"\"\n        Replace a specific string in a file.\n\n        Args:\n            path: The path to the file to modify\n            old_str: The text to replace\n            new_str: The new text to insert\n\n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        try:\n            # Normalize the path\n            path = normalize_path(path)\n\n            if not os.path.exists(path):\n                error_msg = f\"File {path} does not exist\"\n                console.log(f\"[str_replace] Error: {error_msg}\")\n                return FileOperationResult(False, error_msg)\n\n            with open(path, \"r\") as f:\n                content = f.read()\n\n            if old_str not in content:\n                error_msg = f\"The specified string was not found in the file {path}\"\n                console.log(f\"[str_replace] Error: {error_msg}\")\n                return FileOperationResult(False, error_msg)\n\n            new_content = content.replace(old_str, new_str, 1)\n\n            with open(path, \"w\") as f:\n                f.write(new_content)\n\n            console.print(f\"[green]Successfully replaced text in {path}[/green]\")\n            console.log(f\"[str_replace] Successfully replaced text in {path}\")\n            return FileOperationResult(True, f\"Successfully replaced text in {path}\")\n        except Exception as e:\n            error_msg = f\"Error replacing text: {str(e)}\"\n            console.print(f\"[red]{error_msg}[/red]\")\n            
console.log(f\"[str_replace] Error: {str(e)}\")\n            console.log(traceback.format_exc())\n            return FileOperationResult(False, error_msg)\n\n    @staticmethod\n    def create_file(path: str, file_text: str) -> FileOperationResult:\n        \"\"\"\n        Create a new file with specified content.\n\n        Args:\n            path: The path where the new file should be created\n            file_text: The content to write to the new file\n\n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        try:\n            # Check if the path is empty or invalid\n            if not path or not path.strip():\n                error_msg = \"Invalid file path provided: path is empty.\"\n                console.log(f\"[create_file] Error: {error_msg}\")\n                return FileOperationResult(False, error_msg)\n\n            # Normalize the path\n            path = normalize_path(path)\n\n            # Check if the directory exists\n            directory = os.path.dirname(path)\n            if directory and not os.path.exists(directory):\n                console.log(f\"[create_file] Creating directory: {directory}\")\n                os.makedirs(directory)\n\n            with open(path, \"w\") as f:\n                f.write(file_text or \"\")\n\n            console.print(f\"[green]Successfully created file {path}[/green]\")\n            console.log(f\"[create_file] Successfully created file {path}\")\n            return FileOperationResult(True, f\"Successfully created file {path}\")\n        except Exception as e:\n            error_msg = f\"Error creating file: {str(e)}\"\n            console.print(f\"[red]{error_msg}[/red]\")\n            console.log(f\"[create_file] Error: {str(e)}\")\n            console.log(traceback.format_exc())\n            return FileOperationResult(False, error_msg)\n\n    @staticmethod\n    def insert_text(path: str, insert_line: int, new_str: str) -> FileOperationResult:\n        
\"\"\"\n        Insert text at a specific location in a file.\n\n        Args:\n            path: The path to the file to modify\n            insert_line: The line number after which to insert the text\n            new_str: The text to insert\n\n        Returns:\n            FileOperationResult with result or error message\n        \"\"\"\n        try:\n            if not path or not path.strip():\n                error_msg = \"Invalid file path provided: path is empty.\"\n                console.log(f\"[insert_text] Error: {error_msg}\")\n                return FileOperationResult(False, error_msg)\n\n            # Normalize the path\n            path = normalize_path(path)\n\n            if not os.path.exists(path):\n                error_msg = f\"File {path} does not exist\"\n                console.log(f\"[insert_text] Error: {error_msg}\")\n                return FileOperationResult(False, error_msg)\n\n            if insert_line is None:\n                error_msg = \"No line number specified: insert_line is missing.\"\n                console.log(f\"[insert_text] Error: {error_msg}\")\n                return FileOperationResult(False, error_msg)\n\n            with open(path, \"r\") as f:\n                lines = f.readlines()\n\n            # Claude provides a 1-indexed line number; convert to 0-indexed and\n            # clamp into [0, len(lines)] so the index is always valid for insert\n            insert_line = min(max(0, insert_line - 1), len(lines))\n\n            # Ensure new_str ends with newline\n            if new_str and not new_str.endswith(\"\\n\"):\n                new_str += \"\\n\"\n\n            lines.insert(insert_line, 
new_str)\n\n            with open(path, \"w\") as f:\n                f.writelines(lines)\n\n            console.print(\n                f\"[green]Successfully inserted text at line {insert_line + 1} in {path}[/green]\"\n            )\n            console.log(\n                f\"[insert_text] Successfully inserted text at line {insert_line + 1} in {path}\"\n            )\n            return FileOperationResult(\n                True, f\"Successfully inserted text at line {insert_line + 1} in {path}\"\n            )\n        except Exception as e:\n            error_msg = f\"Error inserting text: {str(e)}\"\n            console.print(f\"[red]{error_msg}[/red]\")\n            console.log(f\"[insert_text] Error: {str(e)}\")\n            console.log(traceback.format_exc())\n            return FileOperationResult(False, error_msg)\n\n    @staticmethod\n    def undo_edit(path: str) -> FileOperationResult:\n        \"\"\"\n        Placeholder for undo_edit functionality.\n        In a real implementation, you would need to track edit history.\n\n        Args:\n            path: The path to the file whose last edit should be undone\n\n        Returns:\n            FileOperationResult with message about undo functionality\n        \"\"\"\n        try:\n            if not path or not path.strip():\n                error_msg = \"Invalid file path provided: path is empty.\"\n                console.log(f\"[undo_edit] Error: {error_msg}\")\n                return FileOperationResult(False, error_msg)\n\n            # Normalize the path\n            path = normalize_path(path)\n\n            message = \"Undo functionality is not implemented in this version.\"\n            console.print(f\"[yellow]{message}[/yellow]\")\n            console.log(f\"[undo_edit] {message}\")\n            return FileOperationResult(True, message)\n        except Exception as e:\n            error_msg = f\"Error in undo_edit: {str(e)}\"\n            console.print(f\"[red]{error_msg}[/red]\")\n         
   console.log(f\"[undo_edit] Error: {str(e)}\")\n            console.log(traceback.format_exc())\n            return FileOperationResult(False, error_msg)\n"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2_gemini/tool_handler.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nTool handler for the Vertical Slice Architecture implementation of the file editor agent.\nThis module handles tool use requests from the Claude agent.\n\"\"\"\n\nimport sys\nimport os\nfrom typing import Dict, Any\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error, normalize_path\nfrom features.file_operations.model_tools import ToolUseRequest\nfrom features.file_operations.file_editor import FileEditor\n\ndef handle_tool_use(input_data: Dict[str, Any]) -> Dict[str, Any]:\n    \"\"\"\n    Handle tool use requests from the Claude agent.\n    \n    Args:\n        input_data: The tool use request data from Claude\n        \n    Returns:\n        Dictionary with the result or error message\n    \"\"\"\n    log_info(\"tool_handler\", f\"Received tool use request: {input_data}\")\n    \n    try:\n        # Parse the tool use request\n        request = ToolUseRequest.from_dict(input_data)\n        \n        # Normalize the path\n        path = normalize_path(request.path) if request.path else None\n        \n        # Handle the command\n        if request.command == \"view\":\n            start_line = request.kwargs.get(\"start_line\")\n            end_line = request.kwargs.get(\"end_line\")\n            \n            if start_line is not None:\n                start_line = int(start_line)\n            if end_line is not None:\n                end_line = int(end_line)\n                \n            result = FileEditor.read(path, start_line, end_line)\n            \n        elif request.command == \"edit\":\n            old_str = request.kwargs.get(\"old_str\", \"\")\n            new_str = request.kwargs.get(\"new_str\", \"\")\n            \n            result = FileEditor.edit_file(path, old_str, new_str)\n            \n        elif request.command == 
\"create\":\n            content = request.kwargs.get(\"content\", \"\")\n            \n            result = FileEditor.create_file(path, content)\n            \n        elif request.command == \"insert\":\n            line_num = int(request.kwargs.get(\"line_num\", 1))\n            content = request.kwargs.get(\"content\", \"\")\n            \n            result = FileEditor.insert_line(path, line_num, content)\n            \n        else:\n            log_error(\"tool_handler\", f\"Unknown command: {request.command}\")\n            return {\"error\": f\"Unknown command: {request.command}\"}\n        \n        # Return the result\n        if result.success:\n            return {\"result\": result.content or result.message}\n        else:\n            return {\"error\": result.message}\n            \n    except Exception as e:\n        error_msg = f\"Error handling tool use: {str(e)}\"\n        log_error(\"tool_handler\", error_msg)\n        return {\"error\": error_msg}"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/features/file_agent_v2_gemini/write_tool.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nWrite tool for the Vertical Slice Architecture implementation of the file editor agent.\nThis module provides file writing capabilities.\n\"\"\"\n\nimport sys\nimport os\n\n# Add the parent directory to the Python path to enable relative imports\nsys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))\n\nfrom shared.utils import log_info, log_error\nfrom features.file_operations.model_tools import FileOperationResult\n\ndef write_file(path: str, content: str) -> FileOperationResult:\n    \"\"\"\n    Write content to a file.\n    \n    Args:\n        path: The path to the file to write\n        content: The content to write to the file\n        \n    Returns:\n        FileOperationResult with result or error message\n    \"\"\"\n    log_info(\"write_tool\", f\"Writing to file {path}\")\n    \n    try:\n        with open(path, 'w', encoding='utf-8') as f:\n            f.write(content)\n        \n        log_info(\"write_tool\", f\"Successfully wrote to file {path}\")\n        return FileOperationResult(success=True, content=\"\", message=f\"Successfully wrote to file {path}\")\n    except Exception as e:\n        error_msg = f\"Failed to write to file {path}: {str(e)}\"\n        log_error(\"write_tool\", error_msg)\n        return FileOperationResult(success=False, content=\"\", message=error_msg)"
  },
  {
    "path": "example-agent-codebase-arch/vertical-slice-architecture/main.py",
    "content": "#!/usr/bin/env -S uv run --script\n\n# /// script\n# dependencies = [\n#   \"anthropic>=0.49.0\",\n#   \"rich>=13.7.0\",\n# ]\n# ///\n\n\"\"\"\nMain application entry point for the Vertical Slice Architecture implementation\nof the Claude 3.7 File Editor Agent.\n\nExample Usage:\n\n# View a file\nuv run main.py --prompt \"Show me the content of README.md\"\n\n# Edit a file\nuv run main.py --prompt \"Fix the syntax error in sfa_poc.py\"\n\n# Create a new file\nuv run main.py --prompt \"Create a new file called hello.py with a function that prints Hello World\"\n\n# Run with higher thinking tokens\nuv run main.py --prompt \"Refactor README.md to make it more concise\" --thinking 5000\n\n# Increase max loops for complex tasks\nuv run main.py --prompt \"Create a Python class that implements a binary search tree\" --max-loops 20\n\"\"\"\n\nimport os\nimport sys\nimport argparse\nimport time\nimport traceback\nfrom typing import Tuple, Dict, Any\nfrom rich.console import Console\nfrom rich.panel import Panel\nfrom rich.markdown import Markdown\n\n# Add the current directory to the Python path to enable absolute imports\nsys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))\n\nfrom shared.utils import console, display_token_usage\n\n# Define constants\nMODEL = \"claude-3-7-sonnet-20250219\"\nDEFAULT_THINKING_TOKENS = 3000\n\ndef main():\n    \"\"\"Main entry point for the application.\"\"\"\n    # Set up argument parser\n    parser = argparse.ArgumentParser(description=\"Claude 3.7 File Editor Agent\")\n    parser.add_argument(\n        \"--prompt\",\n        \"-p\",\n        required=True,\n        help=\"The prompt for what file operations to perform\",\n    )\n    parser.add_argument(\n        \"--max-loops\",\n        \"-l\",\n        type=int,\n        default=15,\n        help=\"Maximum number of tool use loops (default: 15)\",\n    )\n    parser.add_argument(\n        \"--thinking\",\n        \"-t\",\n        type=int,\n        
default=DEFAULT_THINKING_TOKENS,\n        help=f\"Maximum thinking tokens (default: {DEFAULT_THINKING_TOKENS})\",\n    )\n    parser.add_argument(\n        \"--efficiency\",\n        \"-e\",\n        action=\"store_true\",\n        help=\"Enable token-efficient tool use (beta feature)\",\n    )\n    args = parser.parse_args()\n\n    console.print(Panel.fit(\"Claude 3.7 File Editor Agent (Vertical Slice Architecture)\"))\n    console.print(f\"\\n[bold]Prompt:[/bold] {args.prompt}\\n\")\n    console.print(f\"[dim]Thinking tokens: {args.thinking}[/dim]\")\n    console.print(f\"[dim]Max loops: {args.max_loops}[/dim]\")\n    \n    if args.efficiency:\n        console.print(f\"[dim]Token-efficient tools: Enabled[/dim]\\n\")\n    else:\n        console.print(f\"[dim]Token-efficient tools: Disabled[/dim]\\n\")\n\n    # For testing purposes, we'll just print a success message\n    console.print(\"[green]Successfully loaded the Vertical Slice Architecture implementation![/green]\")\n    console.print(\"[yellow]This is a mock implementation for testing the architecture structure.[/yellow]\")\n    console.print(\"[yellow]In a real implementation, this would connect to the Claude API.[/yellow]\")\n\n    # Display mock token usage\n    display_token_usage(1000, 500)\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "extra/ai_code_basic.sh",
    "content": "# aider --model groq/deepseek-r1-distill-llama-70b --no-detect-urls --no-auto-commit --yes-always --file *.py --message \"$1\"\n# aider --deepseek --no-detect-urls --no-auto-commit --yes-always --file *.py --message \"$1\"\n\naider \\\n    --model o3-mini \\\n    --architect \\\n    --reasoning-effort high \\\n    --editor-model sonnet \\\n    --no-detect-urls \\\n    --no-auto-commit \\\n    --yes-always \\\n    --file *.py \\\n    --message \"$1\""
  },
  {
    "path": "extra/ai_code_reflect.sh",
    "content": "prompt=\"$1\"\n\n# first shot\naider \\\n    --model o3-mini \\\n    --architect \\\n    --reasoning-effort high \\\n    --editor-model sonnet \\\n    --no-detect-urls \\\n    --no-auto-commit \\\n    --yes-always \\\n    --file *.py \\\n    --message \"$prompt\"\n\n# reflection\naider \\\n    --model o3-mini \\\n    --architect \\\n    --reasoning-effort high \\\n    --editor-model sonnet \\\n    --no-detect-urls \\\n    --no-auto-commit \\\n    --yes-always \\\n    --file *.py \\\n    --message \"Double check all changes requested to make sure they've been implemented: $prompt\""
  },
  {
    "path": "extra/create_db.py",
    "content": "import json\nimport sqlite3\nfrom datetime import datetime\n\n# Connect to SQLite database (creates it if it doesn't exist)\nconn = sqlite3.connect('users.db')\ncursor = conn.cursor()\n\n# Create the User table\ncursor.execute('''\nCREATE TABLE IF NOT EXISTS User (\n    id TEXT PRIMARY KEY,\n    name TEXT,\n    age INTEGER,\n    city TEXT,\n    score REAL,\n    is_active BOOLEAN,\n    status TEXT,\n    created_at DATE\n)\n''')\n\n# Read the JSON file\nwith open('data/mock.json', 'r') as file:\n    users = json.load(file)\n\n# Insert data into the table\nfor user in users:\n    cursor.execute('''\n    INSERT INTO User (id, name, age, city, score, is_active, status, created_at)\n    VALUES (?, ?, ?, ?, ?, ?, ?, ?)\n    ''', (\n        user['id'],\n        user['name'],\n        user['age'],\n        user['city'],\n        user['score'],\n        user['is_active'],\n        user['status'],\n        user['created_at']\n    ))\n\n# Commit the changes and close the connection\nconn.commit()\nconn.close()\n"
  },
  {
    "path": "extra/gist_poc.py",
    "content": "# /// script\n# dependencies = [\n#   \"requests<3\",\n# ]\n# ///\n\n# Interesting idea here - we can store SFAs in gist - curl them then run them locally. Food for thought.\n\nimport requests\n\n\ndef fetch_gist_content():\n    # 1. The raw link to your specific file in the Gist\n    raw_url = \"https://gist.githubusercontent.com/disler/d8d8abdb17b2072cff21df468607b176/raw/sfa_poc.py\"\n\n    try:\n        # 2. Use requests to fetch the file's content\n        response = requests.get(raw_url)\n        response.raise_for_status()  # Raise an exception for bad status codes\n\n        # 3. Get the content\n        sfa_poc_file_contents = response.text\n\n        # 4. Print the content\n        print(sfa_poc_file_contents)\n\n        return sfa_poc_file_contents\n\n    except requests.RequestException as e:\n        print(f\"Error fetching gist content: {e}\")\n        return None\n\n\nif __name__ == \"__main__\":\n    fetch_gist_content()\n"
  },
  {
    "path": "extra/gist_poc.sh",
    "content": "#!/usr/bin/env bash\n\n# Interesting idea here - we can store SFAs in gist - curl them then run them locally. Food for thought.\n\n# 1. The raw link to your specific file in the Gist.\n#    Note: The exact raw link may change if the Gist is updated, so check the \"Raw\" button\n#    in your Gist to make sure you have the correct URL.\nRAW_URL=\"https://gist.githubusercontent.com/disler/d8d8abdb17b2072cff21df468607b176/raw/sfa_poc.py\"\n\n# 2. Use curl to fetch the file's content and store it in a variable.\nSFA_POC_FILE_CONTENTS=\"$(curl -sL \"$RAW_URL\")\"\n\n# 3. Now you can do whatever you want with $SFA_POC_FILE_CONTENTS.\n#    For example, just echo it:\necho \"$SFA_POC_FILE_CONTENTS\"\n"
  },
  {
    "path": "openai-agents-examples/01_basic_agent.py",
    "content": "#!/usr/bin/env -S uv run --script\n\n# /// script\n# dependencies = [\n#   \"openai-agents>=0.0.2\",\n#   \"pytest>=7.4.0\",\n#   \"rich>=13.7.0\",\n# ]\n# ///\n\n\"\"\"\nBasic Agent Example\n\nThis example demonstrates how to create a simple agent using the OpenAI Agents SDK.\nThe agent can respond to user queries with helpful information.\n\nRun with:\n    uv run 01_basic_agent.py --prompt \"Tell me about climate change\"\n\nTest with:\n    uv run pytest 01_basic_agent.py\n\"\"\"\n\nimport os\nimport sys\nimport argparse\nimport json\nfrom typing import Optional, List, Dict, Any, Union\nfrom rich.console import Console\nfrom rich.panel import Panel\n\nfrom openai import OpenAI\nfrom openai.types.chat import ChatCompletion\nfrom agents import Agent, Runner\n\n# Initialize console for rich output\nconsole = Console()\n\ndef create_basic_agent(instructions: str = None) -> Agent:\n    \"\"\"\n    Create a basic agent with the given instructions.\n    \n    Args:\n        instructions: Custom instructions for the agent. If None, default instructions are used.\n        \n    Returns:\n        An Agent instance configured with the provided instructions.\n    \"\"\"\n    default_instructions = \"\"\"\n    You are a helpful assistant that provides accurate and concise information.\n    Always be respectful and provide factual responses based on the latest available information.\n    If you don't know something, admit it rather than making up information.\n    \"\"\"\n    \n    # Create and return a basic agent\n    return Agent(\n        name=\"BasicAssistant\",\n        instructions=instructions or default_instructions,\n        model=\"gpt-4o-mini\",  # Using GPT-4o-mini as specified in requirements\n    )\n\nasync def run_basic_agent(prompt: str, agent: Optional[Agent] = None) -> str:\n    \"\"\"\n    Run the basic agent with the given prompt.\n    \n    Args:\n        prompt: The user's query or prompt\n        agent: Optional pre-configured agent. 
If None, a default agent is created.\n        \n    Returns:\n        The agent's response as a string\n    \"\"\"\n    # Create agent if not provided\n    if agent is None:\n        agent = create_basic_agent()\n    \n    # Run the agent with the prompt\n    result = await Runner.run(agent, prompt)\n    \n    # Extract and return the text response\n    return result.final_output\n\ndef main():\n    \"\"\"Main function to parse arguments and run the agent.\"\"\"\n    parser = argparse.ArgumentParser(description=\"Basic Agent Example\")\n    parser.add_argument(\"--prompt\", \"-p\", type=str, required=True, \n                        help=\"The prompt to send to the agent\")\n    \n    args = parser.parse_args()\n    \n    # Ensure API key is available\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        console.print(Panel(\"[bold red]Error: OPENAI_API_KEY environment variable not set[/bold red]\"))\n        sys.exit(1)\n    \n    try:\n        # Run the agent and get response\n        import asyncio\n        response = asyncio.run(run_basic_agent(args.prompt))\n        \n        # Display the response\n        console.print(Panel(response, title=\"Agent Response\", border_style=\"green\"))\n    \n    except Exception as e:\n        console.print(Panel(f\"[bold red]Error: {str(e)}[/bold red]\"))\n        sys.exit(1)\n\n# Test functions\ndef test_create_basic_agent():\n    \"\"\"Test that the agent is created with the correct configuration.\"\"\"\n    agent = create_basic_agent(\"Test instructions\")\n    assert agent.name == \"BasicAssistant\"\n    assert agent.instructions == \"Test instructions\"\n    assert agent.model == \"gpt-4o-mini\"\n\ndef test_run_basic_agent():\n    \"\"\"Test that the agent can run and produce a response.\"\"\"\n    import pytest\n    \n    # Skip this test if no API key is available\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        pytest.skip(\"OPENAI_API_KEY not set\")\n    \n    # Run a simple test query\n    import 
asyncio\n    response = asyncio.run(run_basic_agent(\"What is 2+2?\"))\n    \n    # Verify we got a non-empty response\n    assert response\n    assert len(response) > 0\n    # The response should contain \"4\" somewhere\n    assert \"4\" in response\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "openai-agents-examples/02_multi_agent.py",
    "content": "#!/usr/bin/env -S uv run --script\n\n# /// script\n# dependencies = [\n#   \"openai-agents>=0.0.2\",\n#   \"pytest>=7.4.0\",\n#   \"rich>=13.7.0\",\n# ]\n# ///\n\n\"\"\"\nMulti-Agent Example\n\nThis example demonstrates how to create and use multiple agents that work together.\nIt includes a coordinator agent that delegates tasks to specialist agents.\n\nRun with:\n    uv run 02_multi_agent.py --prompt \"Explain quantum computing and its applications\"\n\nTest with:\n    uv run pytest 02_multi_agent.py\n\"\"\"\n\nimport os\nimport sys\nimport argparse\nimport json\nimport asyncio\nfrom typing import Optional, List, Dict, Any, Union, Tuple\nfrom rich.console import Console\nfrom rich.panel import Panel\n\nfrom openai import OpenAI\nfrom agents import Agent, Runner, handoff\n\n# Initialize console for rich output\nconsole = Console()\n\ndef create_science_agent() -> Agent:\n    \"\"\"\n    Create a science specialist agent.\n    \n    Returns:\n        An Agent instance specialized in scientific topics.\n    \"\"\"\n    instructions = \"\"\"\n    You are a science specialist with deep knowledge of physics, chemistry, biology, and related fields.\n    Provide accurate, detailed scientific explanations while making complex concepts accessible.\n    Use analogies and examples when helpful to illustrate scientific principles.\n    Always clarify when something is theoretical or not yet proven.\n    \"\"\"\n    \n    return Agent(\n        name=\"ScienceSpecialist\",\n        instructions=instructions,\n        model=\"gpt-4o-mini\",\n        handoff_description=\"Use this agent for questions about scientific topics, theories, and concepts.\"\n    )\n\ndef create_tech_agent() -> Agent:\n    \"\"\"\n    Create a technology specialist agent.\n    \n    Returns:\n        An Agent instance specialized in technology topics.\n    \"\"\"\n    instructions = \"\"\"\n    You are a technology specialist with expertise in computer science, programming, AI, and 
digital technologies.\n    Provide clear, accurate explanations of technical concepts and their practical applications.\n    When discussing programming, focus on concepts rather than writing extensive code.\n    Explain how technologies work and their real-world impact.\n    \"\"\"\n    \n    return Agent(\n        name=\"TechSpecialist\",\n        instructions=instructions,\n        model=\"gpt-4o-mini\",\n        handoff_description=\"Use this agent for questions about technology, computing, programming, and digital systems.\"\n    )\n\ndef create_coordinator_agent(specialists: List[Agent]) -> Agent:\n    \"\"\"\n    Create a coordinator agent that can delegate to specialists.\n    \n    Args:\n        specialists: List of specialist agents to which tasks can be delegated\n        \n    Returns:\n        An Agent instance that coordinates between specialists\n    \"\"\"\n    instructions = \"\"\"\n    You are a coordinator who determines which specialist should handle a user's question.\n    Analyze the user's query and decide which specialist would be best suited to respond.\n    For questions that span multiple domains, choose the specialist most relevant to the core of the question.\n    \"\"\"\n    \n    # Create handoffs to specialist agents\n    handoffs = [handoff(agent) for agent in specialists]\n    \n    return Agent(\n        name=\"Coordinator\",\n        instructions=instructions,\n        model=\"gpt-4o-mini\",\n        handoffs=handoffs\n    )\n\nasync def run_multi_agent_system(prompt: str) -> str:\n    \"\"\"\n    Run the multi-agent system with the given prompt.\n    \n    Args:\n        prompt: The user's query or prompt\n        \n    Returns:\n        The final response from the appropriate specialist agent\n    \"\"\"\n    # Create specialist agents\n    science_agent = create_science_agent()\n    tech_agent = create_tech_agent()\n    \n    # Create coordinator agent with specialists\n    coordinator = 
create_coordinator_agent([science_agent, tech_agent])\n    \n    # Run the coordinator agent with the prompt\n    result = await Runner.run(coordinator, prompt)\n    \n    # Return the final response\n    return result.final_output\n\ndef main():\n    \"\"\"Main function to parse arguments and run the multi-agent system.\"\"\"\n    parser = argparse.ArgumentParser(description=\"Multi-Agent Example\")\n    parser.add_argument(\"--prompt\", \"-p\", type=str, required=True, \n                        help=\"The prompt to send to the multi-agent system\")\n    \n    args = parser.parse_args()\n    \n    # Ensure API key is available\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        console.print(Panel(\"[bold red]Error: OPENAI_API_KEY environment variable not set[/bold red]\"))\n        sys.exit(1)\n    \n    try:\n        # Run the multi-agent system and get response\n        response = asyncio.run(run_multi_agent_system(args.prompt))\n        \n        # Display the response\n        console.print(Panel(response, title=\"Multi-Agent Response\", border_style=\"green\"))\n    \n    except Exception as e:\n        console.print(Panel(f\"[bold red]Error: {str(e)}[/bold red]\"))\n        sys.exit(1)\n\n# Test functions\ndef test_create_specialist_agents():\n    \"\"\"Test that specialist agents are created with the correct configuration.\"\"\"\n    science_agent = create_science_agent()\n    tech_agent = create_tech_agent()\n    \n    assert science_agent.name == \"ScienceSpecialist\"\n    assert tech_agent.name == \"TechSpecialist\"\n    assert \"science specialist\" in science_agent.instructions.lower()\n    assert \"technology specialist\" in tech_agent.instructions.lower()\n\ndef test_create_coordinator_agent():\n    \"\"\"Test that the coordinator agent is created with the correct configuration.\"\"\"\n    science_agent = create_science_agent()\n    tech_agent = create_tech_agent()\n    \n    coordinator = create_coordinator_agent([science_agent, tech_agent])\n 
   \n    assert coordinator.name == \"Coordinator\"\n    assert \"coordinator\" in coordinator.instructions.lower()\n    assert len(coordinator.handoffs) == 2\n\ndef test_run_multi_agent_system():\n    \"\"\"Test that the multi-agent system can run and produce a response.\"\"\"\n    import pytest\n    \n    # Skip this test if no API key is available\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        pytest.skip(\"OPENAI_API_KEY not set\")\n    \n    # Run a simple test query that should go to the tech specialist\n    response = asyncio.run(run_multi_agent_system(\"What is machine learning?\"))\n    \n    # Verify we got a non-empty response\n    assert response\n    assert len(response) > 0\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "openai-agents-examples/03_sync_agent.py",
    "content": "#!/usr/bin/env -S uv run --script\n\n# /// script\n# dependencies = [\n#   \"openai-agents>=0.0.2\",\n#   \"pytest>=7.4.0\",\n#   \"rich>=13.7.0\",\n# ]\n# ///\n\n\"\"\"\nSynchronous Agent Example\n\nThis example demonstrates how to run an agent synchronously instead of asynchronously.\nIt shows how to use the Runner.run_sync function for simpler code in non-async environments.\n\nRun with:\n    uv run 03_sync_agent.py --prompt \"What are the benefits of exercise?\"\n\nTest with:\n    uv run pytest 03_sync_agent.py\n\"\"\"\n\nimport os\nimport sys\nimport argparse\nimport json\nfrom typing import Optional, List, Dict, Any, Union\nfrom rich.console import Console\nfrom rich.panel import Panel\n\nfrom openai import OpenAI\nfrom agents import Agent, Runner\n\n# Initialize console for rich output\nconsole = Console()\n\ndef create_health_agent() -> Agent:\n    \"\"\"\n    Create a health advisor agent.\n    \n    Returns:\n        An Agent instance specialized in health topics.\n    \"\"\"\n    instructions = \"\"\"\n    You are a health advisor with expertise in fitness, nutrition, and general wellness.\n    Provide evidence-based information about health topics, focusing on practical advice.\n    Always emphasize that you're not a medical professional and serious concerns should be \n    discussed with a healthcare provider.\n    Keep responses concise and actionable.\n    \"\"\"\n    \n    return Agent(\n        name=\"HealthAdvisor\",\n        instructions=instructions,\n        model=\"gpt-4o-mini\",\n    )\n\ndef run_sync_agent(prompt: str, agent: Optional[Agent] = None) -> str:\n    \"\"\"\n    Run an agent synchronously with the given prompt.\n    \n    Args:\n        prompt: The user's query or prompt\n        agent: Optional pre-configured agent. 
If None, a health advisor agent is created.\n        \n    Returns:\n        The agent's response as a string\n    \"\"\"\n    # Create agent if not provided\n    if agent is None:\n        agent = create_health_agent()\n    \n    # Run the agent synchronously with the prompt\n    result = Runner.run_sync(agent, prompt)\n    \n    # Return the response\n    return result.final_output\n\ndef main():\n    \"\"\"Main function to parse arguments and run the agent synchronously.\"\"\"\n    parser = argparse.ArgumentParser(description=\"Synchronous Agent Example\")\n    parser.add_argument(\"--prompt\", \"-p\", type=str, required=True, \n                        help=\"The prompt to send to the agent\")\n    \n    args = parser.parse_args()\n    \n    # Ensure API key is available\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        console.print(Panel(\"[bold red]Error: OPENAI_API_KEY environment variable not set[/bold red]\"))\n        sys.exit(1)\n    \n    try:\n        # Run the agent synchronously and get response\n        response = run_sync_agent(args.prompt)\n        \n        # Display the response\n        console.print(Panel(response, title=\"Synchronous Agent Response\", border_style=\"green\"))\n    \n    except Exception as e:\n        console.print(Panel(f\"[bold red]Error: {str(e)}[/bold red]\"))\n        sys.exit(1)\n\n# Test functions\ndef test_create_health_agent():\n    \"\"\"Test that the health agent is created with the correct configuration.\"\"\"\n    agent = create_health_agent()\n    assert agent.name == \"HealthAdvisor\"\n    assert \"health advisor\" in agent.instructions.lower()\n    assert agent.model == \"gpt-4o-mini\"\n\ndef test_run_sync_agent():\n    \"\"\"Test that the agent can run synchronously and produce a response.\"\"\"\n    import pytest\n    \n    # Skip this test if no API key is available\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        pytest.skip(\"OPENAI_API_KEY not set\")\n    \n    # Run a simple test query\n  
  response = run_sync_agent(\"What are some quick exercises I can do at my desk?\")\n    \n    # Verify we got a non-empty response\n    assert response\n    assert len(response) > 0\n    # The response should contain relevant terms\n    assert any(term in response.lower() for term in [\"exercise\", \"stretch\", \"desk\", \"movement\"])\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "openai-agents-examples/04_agent_with_tracing.py",
    "content": "#!/usr/bin/env -S uv run --script\n\n# /// script\n# dependencies = [\n#   \"openai-agents>=0.0.2\",\n#   \"pytest>=7.4.0\",\n#   \"rich>=13.7.0\",\n#   \"opentelemetry-api>=1.20.0\",\n#   \"opentelemetry-sdk>=1.20.0\",\n# ]\n# ///\n\n\"\"\"\nAgent with Tracing Example\n\nThis example demonstrates how to use tracing with agents to monitor and debug their execution.\nIt shows how to set up OpenTelemetry tracing and capture spans for agent operations.\n\nRun with:\n    uv run 04_agent_with_tracing.py --prompt \"What is the capital of France?\"\n\nTest with:\n    uv run pytest 04_agent_with_tracing.py\n\"\"\"\n\nimport os\nimport sys\nimport argparse\nimport json\nimport asyncio\nfrom typing import Optional, List, Dict, Any, Union\nfrom rich.console import Console\nfrom rich.panel import Panel\n\nfrom openai import OpenAI\nfrom agents import Agent, Runner\n\n# Import OpenTelemetry components\nfrom opentelemetry import trace\nfrom opentelemetry.sdk.trace import TracerProvider\nfrom opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor\n\n# Initialize console for rich output\nconsole = Console()\n\n# Set up OpenTelemetry tracing\ndef setup_tracing():\n    \"\"\"Set up OpenTelemetry tracing with console exporter.\"\"\"\n    # Create a tracer provider\n    provider = TracerProvider()\n    \n    # Add a console exporter to see spans in the console\n    console_exporter = ConsoleSpanExporter()\n    processor = SimpleSpanProcessor(console_exporter)\n    provider.add_span_processor(processor)\n    \n    # Set the global tracer provider\n    trace.set_tracer_provider(provider)\n    \n    # Get a tracer\n    return trace.get_tracer(\"agent_tracer\")\n\ndef create_geography_agent() -> Agent:\n    \"\"\"\n    Create a geography specialist agent.\n    \n    Returns:\n        An Agent instance specialized in geography topics.\n    \"\"\"\n    instructions = \"\"\"\n    You are a geography specialist with knowledge about countries, capitals, 
landmarks, and geographical features.\n    Provide accurate, concise information about geographical topics.\n    Include interesting facts when relevant but prioritize accuracy.\n    \"\"\"\n    \n    return Agent(\n        name=\"GeographySpecialist\",\n        instructions=instructions,\n        model=\"gpt-4o-mini\",\n    )\n\nasync def run_traced_agent(prompt: str, tracer) -> str:\n    \"\"\"\n    Run an agent with tracing for the given prompt.\n    \n    Args:\n        prompt: The user's query or prompt\n        tracer: The OpenTelemetry tracer to use\n        \n    Returns:\n        The agent's response as a string\n    \"\"\"\n    # Create a span for the entire agent execution\n    with tracer.start_as_current_span(\"agent_execution\") as span:\n        # Add attributes to the span\n        span.set_attribute(\"prompt\", prompt)\n        \n        # Create the agent\n        with tracer.start_as_current_span(\"create_agent\"):\n            agent = create_geography_agent()\n            span.set_attribute(\"agent_name\", agent.name)\n        \n        # Run the agent with the prompt\n        with tracer.start_as_current_span(\"Runner.run\"):\n            result = await Runner.run(agent, prompt)\n            # Note: In the current version, RunResult doesn't have usage attribute\n            # We'll just record the response length as a basic metric\n            span.set_attribute(\"response_length\", len(result.final_output))\n            span.set_attribute(\"response_first_chars\", result.final_output[:30])\n        \n        # Return the response\n        return result.final_output\n\ndef main():\n    \"\"\"Main function to parse arguments and run the agent with tracing.\"\"\"\n    parser = argparse.ArgumentParser(description=\"Agent with Tracing Example\")\n    parser.add_argument(\"--prompt\", \"-p\", type=str, required=True, \n                        help=\"The prompt to send to the agent\")\n    \n    args = parser.parse_args()\n    \n    # Ensure API key 
is available\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        console.print(Panel(\"[bold red]Error: OPENAI_API_KEY environment variable not set[/bold red]\"))\n        sys.exit(1)\n    \n    try:\n        # Set up tracing\n        tracer = setup_tracing()\n        \n        # Run the agent with tracing and get response\n        response = asyncio.run(run_traced_agent(args.prompt, tracer))\n        \n        # Display the response\n        console.print(Panel(response, title=\"Agent Response with Tracing\", border_style=\"green\"))\n    \n    except Exception as e:\n        console.print(Panel(f\"[bold red]Error: {str(e)}[/bold red]\"))\n        sys.exit(1)\n\n# Test functions\ndef test_create_geography_agent():\n    \"\"\"Test that the geography agent is created with the correct configuration.\"\"\"\n    agent = create_geography_agent()\n    assert agent.name == \"GeographySpecialist\"\n    assert \"geography specialist\" in agent.instructions.lower()\n    assert agent.model == \"gpt-4o-mini\"\n\ndef test_run_traced_agent():\n    \"\"\"Test that the agent can run with tracing and produce a response.\"\"\"\n    import pytest\n    \n    # Skip this test if no API key is available\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        pytest.skip(\"OPENAI_API_KEY not set\")\n    \n    # Set up tracing\n    tracer = setup_tracing()\n    \n    # Run a simple test query\n    response = asyncio.run(run_traced_agent(\"What is the capital of Japan?\", tracer))\n    \n    # Verify we got a non-empty response\n    assert response\n    assert len(response) > 0\n    # The response should contain \"Tokyo\"\n    assert \"Tokyo\" in response\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "openai-agents-examples/05_agent_with_function_tools.py",
    "content": "#!/usr/bin/env -S uv run --script\n\n# /// script\n# dependencies = [\n#   \"openai-agents>=0.0.2\",\n#   \"pytest>=7.4.0\",\n#   \"rich>=13.7.0\",\n#   \"requests>=2.31.0\",\n# ]\n# ///\n\n\"\"\"\nAgent with Function Tools Example\n\nThis example demonstrates how to create an agent with function tools using the @function_tool decorator.\nThe agent can use these tools to perform actions like fetching weather data or calculating distances.\n\nRun with:\n    uv run 05_agent_with_function_tools.py --prompt \"What's the weather in New York?\"\n\nTest with:\n    uv run pytest 05_agent_with_function_tools.py\n\"\"\"\n\nimport os\nimport sys\nimport argparse\nimport json\nimport asyncio\nimport requests\nfrom datetime import datetime\nfrom typing import Optional, List, Dict, Any, Union\nfrom rich.console import Console\nfrom rich.panel import Panel\n\nfrom openai import OpenAI\nfrom agents import Agent, Runner, function_tool\n\n# Initialize console for rich output\nconsole = Console()\n\n# Define function tools using the decorator\n@function_tool\ndef get_current_weather(location: str, unit: str) -> str:\n    \"\"\"\n    Get the current weather in a given location.\n    \n    Args:\n        location: The city and state, e.g. San Francisco, CA or country e.g., London, UK\n        unit: The temperature unit to use. 
Either \"celsius\" or \"fahrenheit\".\n        \n    Returns:\n        A string containing the weather information.\n    \"\"\"\n    # This is a mock implementation - in a real application, you would call a weather API\n    weather_data = {\n        \"New York\": {\"temperature\": 22, \"condition\": \"Sunny\"},\n        \"London\": {\"temperature\": 15, \"condition\": \"Cloudy\"},\n        \"Tokyo\": {\"temperature\": 28, \"condition\": \"Rainy\"},\n        \"Sydney\": {\"temperature\": 31, \"condition\": \"Hot and sunny\"},\n    }\n    \n    # Default weather if location not found\n    default_weather = {\"temperature\": 20, \"condition\": \"Clear\"}\n    \n    # Get weather for the location (case insensitive)\n    location_key = next((k for k in weather_data.keys() if k.lower() == location.lower()), None)\n    weather = weather_data.get(location_key, default_weather)\n    \n    # Convert temperature if needed\n    temp = weather[\"temperature\"]\n    if unit.lower() == \"fahrenheit\":\n        temp = (temp * 9/5) + 32\n    \n    return f\"The current weather in {location} is {weather['condition']} with a temperature of {temp}°{'F' if unit.lower() == 'fahrenheit' else 'C'}.\"\n\n@function_tool\ndef calculate_distance(origin: str, destination: str, unit: str) -> str:\n    \"\"\"\n    Calculate the distance between two locations.\n    \n    Args:\n        origin: The starting location (city name)\n        destination: The ending location (city name)\n        unit: The unit of distance. 
Either \"kilometers\" or \"miles\".\n        \n    Returns:\n        A string containing the distance information.\n    \"\"\"\n    # This is a mock implementation - in a real application, you would call a mapping API\n    distances = {\n        (\"New York\", \"London\"): 5567,\n        (\"New York\", \"Tokyo\"): 10838,\n        (\"London\", \"Tokyo\"): 9562,\n        (\"London\", \"Sydney\"): 16983,\n        (\"Tokyo\", \"Sydney\"): 7921,\n    }\n    \n    # Try to find the distance in both directions\n    distance_km = distances.get((origin, destination)) or distances.get((destination, origin))\n    \n    # If not found, provide an estimate\n    if distance_km is None:\n        distance_km = 1000  # Default distance\n    \n    # Convert to miles if needed\n    if unit.lower() == \"miles\":\n        distance = distance_km * 0.621371\n        unit_symbol = \"miles\"\n    else:\n        distance = distance_km\n        unit_symbol = \"km\"\n    \n    return f\"The distance between {origin} and {destination} is approximately {distance:.1f} {unit_symbol}.\"\n\n@function_tool\ndef get_current_time(location: str) -> str:\n    \"\"\"\n    Get the current time in a given location.\n    \n    Args:\n        location: The location to get the time for. 
Currently only supports \"UTC\".\n        \n    Returns:\n        A string containing the current time information.\n    \"\"\"\n    # In a real implementation, you would use a timezone library\n    current_time = datetime.utcnow()\n    formatted_time = current_time.strftime(\"%Y-%m-%d %H:%M:%S\")\n    \n    return f\"The current time in {location} is {formatted_time}.\"\n\ndef create_travel_assistant() -> Agent:\n    \"\"\"\n    Create a travel assistant agent with function tools.\n    \n    Returns:\n        An Agent instance with function tools for travel assistance.\n    \"\"\"\n    instructions = \"\"\"\n    You are a helpful travel assistant that can provide information about weather, \n    distances between locations, and current time.\n    Use the tools available to you to provide accurate information when asked.\n    If you don't have a tool for the specific request, acknowledge the limitations\n    and provide the best information you can.\n    \"\"\"\n    \n    # Create the agent with function tools\n    return Agent(\n        name=\"TravelAssistant\",\n        instructions=instructions,\n        model=\"gpt-4o-mini\",\n        tools=[get_current_weather, calculate_distance, get_current_time]\n    )\n\nasync def run_function_tool_agent(prompt: str) -> str:\n    \"\"\"\n    Run the travel assistant agent with the given prompt.\n    \n    Args:\n        prompt: The user's query or prompt\n        \n    Returns:\n        The agent's response as a string\n    \"\"\"\n    # Create the agent with function tools\n    agent = create_travel_assistant()\n    \n    # Run the agent with the prompt\n    result = await Runner.run(agent, prompt)\n    \n    # Return the response\n    return result.final_output\n\ndef main():\n    \"\"\"Main function to parse arguments and run the agent with function tools.\"\"\"\n    parser = argparse.ArgumentParser(description=\"Agent with Function Tools Example\")\n    parser.add_argument(\"--prompt\", \"-p\", type=str, required=True, 
\n                        help=\"The prompt to send to the agent\")\n    \n    args = parser.parse_args()\n    \n    # Ensure API key is available\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        console.print(Panel(\"[bold red]Error: OPENAI_API_KEY environment variable not set[/bold red]\"))\n        sys.exit(1)\n    \n    try:\n        # Run the agent and get response\n        response = asyncio.run(run_function_tool_agent(args.prompt))\n        \n        # Display the response\n        console.print(Panel(response, title=\"Travel Assistant Response\", border_style=\"green\"))\n    \n    except Exception as e:\n        console.print(Panel(f\"[bold red]Error: {str(e)}[/bold red]\"))\n        sys.exit(1)\n\n# Test functions\ndef test_function_tools():\n    \"\"\"Test that the function tools work correctly.\"\"\"\n    # Test weather function\n    weather_result = get_current_weather(\"New York\", \"celsius\")\n    assert \"New York\" in weather_result\n    assert \"°C\" in weather_result\n    \n    # Test distance function\n    distance_result = calculate_distance(\"New York\", \"London\", \"kilometers\")\n    assert \"New York\" in distance_result\n    assert \"London\" in distance_result\n    assert \"km\" in distance_result\n    \n    # Test time function (the location argument is required; the tool supports \"UTC\")\n    time_result = get_current_time(\"UTC\")\n    assert \"UTC\" in time_result\n    assert \":\" in time_result  # Time should contain colons\n\ndef test_create_travel_assistant():\n    \"\"\"Test that the travel assistant agent is created with the correct configuration.\"\"\"\n    agent = create_travel_assistant()\n    assert agent.name == \"TravelAssistant\"\n    assert \"travel assistant\" in agent.instructions.lower()\n    assert len(agent.tools) == 3\n\ndef test_run_function_tool_agent():\n    \"\"\"Test that the agent can use function tools and produce a response.\"\"\"\n    import pytest\n    \n    # Skip this test if no API key is available\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        
pytest.skip(\"OPENAI_API_KEY not set\")\n    \n    # Run a test query that should use the weather tool\n    response = asyncio.run(run_function_tool_agent(\"What's the weather in London?\"))\n    \n    # Verify we got a non-empty response that mentions London\n    assert response\n    assert len(response) > 0\n    assert \"London\" in response\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "openai-agents-examples/06_agent_with_custom_tools.py",
    "content": "#!/usr/bin/env -S uv run --script\n\n# /// script\n# dependencies = [\n#   \"openai-agents>=0.0.2\",\n#   \"pytest>=7.4.0\",\n#   \"rich>=13.7.0\",\n#   \"pydantic>=2.0.0\",\n# ]\n# ///\n\n\"\"\"\nAgent with Custom Tools Example\n\nThis example demonstrates how to create an agent with custom tools without using the @function_tool decorator.\nIt shows how to define custom tool schemas and implement tool handlers manually.\n\nRun with:\n    uv run 06_agent_with_custom_tools.py --prompt \"Convert 100 USD to EUR\"\n\nTest with:\n    uv run pytest 06_agent_with_custom_tools.py\n\"\"\"\n\nimport os\nimport sys\nimport argparse\nimport json\nimport asyncio\nfrom typing import Optional, List, Dict, Any, Union, Callable\nfrom rich.console import Console\nfrom rich.panel import Panel\nfrom pydantic import BaseModel, Field\n\nfrom openai import OpenAI\nfrom agents import Agent, Runner, Tool\n\n# Initialize console for rich output\nconsole = Console()\n\n# Define custom tool input models\nclass CurrencyConversionInput(BaseModel):\n    \"\"\"Input for currency conversion tool.\"\"\"\n    amount: float = Field(..., description=\"The amount to convert\")\n    from_currency: str = Field(..., description=\"The currency to convert from (e.g., USD, EUR, JPY)\")\n    to_currency: str = Field(..., description=\"The currency to convert to (e.g., USD, EUR, JPY)\")\n\nclass StockPriceInput(BaseModel):\n    \"\"\"Input for stock price tool.\"\"\"\n    symbol: str = Field(..., description=\"The stock symbol (e.g., AAPL, MSFT, GOOGL)\")\n\n# Define custom tool handlers\ndef convert_currency(params: CurrencyConversionInput) -> str:\n    \"\"\"\n    Convert an amount from one currency to another.\n    \n    Args:\n        params: The currency conversion parameters\n        \n    Returns:\n        A string containing the conversion result\n    \"\"\"\n    # This is a mock implementation - in a real application, you would call a currency API\n    exchange_rates = {\n        
\"USD\": {\"EUR\": 0.92, \"GBP\": 0.79, \"JPY\": 149.50},\n        \"EUR\": {\"USD\": 1.09, \"GBP\": 0.86, \"JPY\": 162.50},\n        \"GBP\": {\"USD\": 1.27, \"EUR\": 1.16, \"JPY\": 189.20},\n        \"JPY\": {\"USD\": 0.0067, \"EUR\": 0.0062, \"GBP\": 0.0053},\n    }\n    \n    from_curr = params.from_currency.upper()\n    to_curr = params.to_currency.upper()\n    \n    # Check if currencies are supported\n    if from_curr not in exchange_rates:\n        return f\"Sorry, {from_curr} is not a supported currency.\"\n    \n    if to_curr not in exchange_rates[from_curr] and from_curr != to_curr:\n        return f\"Sorry, conversion from {from_curr} to {to_curr} is not supported.\"\n    \n    # If same currency, return the amount\n    if from_curr == to_curr:\n        return f\"{params.amount} {from_curr} is equal to {params.amount} {to_curr}.\"\n    \n    # Calculate converted amount\n    converted_amount = params.amount * exchange_rates[from_curr][to_curr]\n    \n    return f\"{params.amount} {from_curr} is equal to {converted_amount:.2f} {to_curr}.\"\n\ndef get_stock_price(params: StockPriceInput) -> str:\n    \"\"\"\n    Get the current price of a stock.\n    \n    Args:\n        params: The stock price parameters\n        \n    Returns:\n        A string containing the stock price information\n    \"\"\"\n    # This is a mock implementation - in a real application, you would call a stock API\n    stock_prices = {\n        \"AAPL\": 175.34,\n        \"MSFT\": 410.34,\n        \"GOOGL\": 147.68,\n        \"AMZN\": 178.75,\n        \"META\": 474.99,\n    }\n    \n    symbol = params.symbol.upper()\n    \n    # Check if stock is supported\n    if symbol not in stock_prices:\n        return f\"Sorry, stock information for {symbol} is not available.\"\n    \n    price = stock_prices[symbol]\n    \n    return f\"The current price of {symbol} is ${price:.2f}.\"\n\ndef create_financial_assistant() -> Agent:\n    \"\"\"\n    Create a financial assistant agent with custom 
tools.\n    \n    Returns:\n        An Agent instance with custom tools for financial assistance\n    \"\"\"\n    instructions = \"\"\"\n    You are a helpful financial assistant that can provide information about \n    currency conversions and stock prices.\n    Use the tools available to you to provide accurate financial information when asked.\n    If you don't have a tool for the specific request, acknowledge the limitations\n    and provide the best information you can.\n    \"\"\"\n    \n    # Create custom tools\n    currency_tool = Tool(\n        name=\"convert_currency\",\n        description=\"Convert an amount from one currency to another\",\n        input_type=CurrencyConversionInput,\n        function=convert_currency\n    )\n    \n    stock_tool = Tool(\n        name=\"get_stock_price\",\n        description=\"Get the current price of a stock\",\n        input_type=StockPriceInput,\n        function=get_stock_price\n    )\n    \n    # Create the agent with custom tools\n    return Agent(\n        name=\"FinancialAssistant\",\n        instructions=instructions,\n        model=\"gpt-4o-mini\",\n        tools=[currency_tool, stock_tool]\n    )\n\nasync def run_custom_tool_agent(prompt: str) -> str:\n    \"\"\"\n    Run the financial assistant agent with the given prompt.\n    \n    Args:\n        prompt: The user's query or prompt\n        \n    Returns:\n        The agent's response as a string\n    \"\"\"\n    # Create the agent with custom tools\n    agent = create_financial_assistant()\n    \n    # Run the agent with the prompt\n    result = await Runner.run(agent, prompt)\n    \n    # Return the response\n    return result.final_output\n\ndef main():\n    \"\"\"Main function to parse arguments and run the agent with custom tools.\"\"\"\n    parser = argparse.ArgumentParser(description=\"Agent with Custom Tools Example\")\n    parser.add_argument(\"--prompt\", \"-p\", type=str, required=True, \n                        help=\"The prompt to send to the 
agent\")\n    \n    args = parser.parse_args()\n    \n    # Ensure API key is available\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        console.print(Panel(\"[bold red]Error: OPENAI_API_KEY environment variable not set[/bold red]\"))\n        sys.exit(1)\n    \n    try:\n        # Run the agent and get response\n        response = asyncio.run(run_custom_tool_agent(args.prompt))\n        \n        # Display the response\n        console.print(Panel(response, title=\"Financial Assistant Response\", border_style=\"green\"))\n    \n    except Exception as e:\n        console.print(Panel(f\"[bold red]Error: {str(e)}[/bold red]\"))\n        sys.exit(1)\n\n# Test functions\ndef test_custom_tools():\n    \"\"\"Test that the custom tools work correctly.\"\"\"\n    # Test currency conversion\n    currency_result = convert_currency(CurrencyConversionInput(\n        amount=100,\n        from_currency=\"USD\",\n        to_currency=\"EUR\"\n    ))\n    assert \"USD\" in currency_result\n    assert \"EUR\" in currency_result\n    \n    # Test stock price\n    stock_result = get_stock_price(StockPriceInput(symbol=\"AAPL\"))\n    assert \"AAPL\" in stock_result\n    assert \"$\" in stock_result\n\ndef test_create_financial_assistant():\n    \"\"\"Test that the financial assistant agent is created with the correct configuration.\"\"\"\n    agent = create_financial_assistant()\n    assert agent.name == \"FinancialAssistant\"\n    assert \"financial assistant\" in agent.instructions.lower()\n    assert len(agent.tools) == 2\n    assert any(tool.name == \"convert_currency\" for tool in agent.tools)\n    assert any(tool.name == \"get_stock_price\" for tool in agent.tools)\n\ndef test_run_custom_tool_agent():\n    \"\"\"Test that the agent can use custom tools and produce a response.\"\"\"\n    import pytest\n    \n    # Skip this test if no API key is available\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        pytest.skip(\"OPENAI_API_KEY not set\")\n    \n    # Run a 
test query that should use the currency conversion tool\n    response = asyncio.run(run_custom_tool_agent(\"Convert 50 USD to EUR\"))\n    \n    # Verify we got a non-empty response that mentions the currencies\n    assert response\n    assert len(response) > 0\n    assert \"USD\" in response\n    assert \"EUR\" in response\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "openai-agents-examples/07_agent_with_handoffs.py",
    "content": "#!/usr/bin/env -S uv run --script\n\n# /// script\n# dependencies = [\n#   \"openai-agents>=0.0.2\",\n#   \"pytest>=7.4.0\",\n#   \"rich>=13.7.0\",\n# ]\n# ///\n\n\"\"\"\nAgent with Handoffs Example\n\nThis example demonstrates how to create agents that can hand off tasks to other specialized agents.\nIt shows how to implement a customer support system with a triage agent and specialist agents.\n\nRun with:\n    uv run 07_agent_with_handoffs.py --prompt \"I need help with my billing\"\n\nTest with:\n    uv run pytest 07_agent_with_handoffs.py\n\"\"\"\n\nimport os\nimport sys\nimport argparse\nimport json\nimport asyncio\nfrom typing import Optional, List, Dict, Any, Union\nfrom rich.console import Console\nfrom rich.panel import Panel\n\nfrom openai import OpenAI\nfrom agents import Agent, Runner, handoff\n\n# Initialize console for rich output\nconsole = Console()\n\ndef create_billing_agent() -> Agent:\n    \"\"\"\n    Create a billing specialist agent.\n    \n    Returns:\n        An Agent instance specialized in billing issues.\n    \"\"\"\n    instructions = \"\"\"\n    You are a billing specialist who can help customers with billing-related issues.\n    You can assist with questions about invoices, payment methods, refunds, and subscription plans.\n    Be helpful, clear, and concise in your responses.\n    Always verify the customer's information before providing specific account details.\n    \"\"\"\n    \n    return Agent(\n        name=\"BillingSpecialist\",\n        instructions=instructions,\n        model=\"gpt-4o-mini\",\n        handoff_description=\"Use this agent for questions about billing, payments, invoices, or subscription issues.\"\n    )\n\ndef create_technical_agent() -> Agent:\n    \"\"\"\n    Create a technical support agent.\n    \n    Returns:\n        An Agent instance specialized in technical support.\n    \"\"\"\n    instructions = \"\"\"\n    You are a technical support specialist who can help customers with technical 
issues.\n    You can assist with questions about software functionality, bugs, error messages, and how-to guides.\n    Provide clear step-by-step instructions when explaining technical procedures.\n    Ask clarifying questions if the customer's issue is not clear.\n    \"\"\"\n    \n    return Agent(\n        name=\"TechnicalSupport\",\n        instructions=instructions,\n        model=\"gpt-4o-mini\",\n        handoff_description=\"Use this agent for technical issues, bugs, error messages, or how-to questions.\"\n    )\n\ndef create_account_agent() -> Agent:\n    \"\"\"\n    Create an account management agent.\n    \n    Returns:\n        An Agent instance specialized in account management.\n    \"\"\"\n    instructions = \"\"\"\n    You are an account management specialist who can help customers with account-related issues.\n    You can assist with questions about account creation, profile updates, security settings, and account recovery.\n    Always prioritize account security and verify the customer's identity before making changes.\n    Provide clear guidance on how customers can manage their account settings.\n    \"\"\"\n    \n    return Agent(\n        name=\"AccountManager\",\n        instructions=instructions,\n        model=\"gpt-4o-mini\",\n        handoff_description=\"Use this agent for account management, profile updates, or security questions.\"\n    )\n\ndef create_triage_agent(specialists: List[Agent]) -> Agent:\n    \"\"\"\n    Create a triage agent that can delegate to specialist agents.\n    \n    Args:\n        specialists: List of specialist agents to which tasks can be delegated\n        \n    Returns:\n        An Agent instance that triages customer inquiries\n    \"\"\"\n    instructions = \"\"\"\n    You are a customer support triage agent. Your job is to:\n    1. Understand the customer's issue\n    2. Determine which specialist would be best suited to help\n    3. 
Hand off the conversation to that specialist\n    \n    Be polite and professional. If you're unsure which specialist to choose, ask clarifying questions.\n    \"\"\"\n    \n    # Create handoffs to specialist agents\n    handoffs = [handoff(agent) for agent in specialists]\n    \n    return Agent(\n        name=\"TriageAgent\",\n        instructions=instructions,\n        model=\"gpt-4o-mini\",\n        handoffs=handoffs\n    )\n\nasync def run_customer_support_system(prompt: str) -> str:\n    \"\"\"\n    Run the customer support system with the given prompt.\n    \n    Args:\n        prompt: The customer's inquiry\n        \n    Returns:\n        The final response from the appropriate specialist agent\n    \"\"\"\n    # Create specialist agents\n    billing_agent = create_billing_agent()\n    technical_agent = create_technical_agent()\n    account_agent = create_account_agent()\n    \n    # Create triage agent with specialists\n    triage_agent = create_triage_agent([billing_agent, technical_agent, account_agent])\n    \n    # Run the triage agent with the prompt\n    result = await Runner.run(triage_agent, prompt)\n    \n    # Return the final response\n    return result.final_output\n\ndef main():\n    \"\"\"Main function to parse arguments and run the customer support system.\"\"\"\n    parser = argparse.ArgumentParser(description=\"Agent with Handoffs Example\")\n    parser.add_argument(\"--prompt\", \"-p\", type=str, required=True, \n                        help=\"The customer inquiry to send to the support system\")\n    \n    args = parser.parse_args()\n    \n    # Ensure API key is available\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        console.print(Panel(\"[bold red]Error: OPENAI_API_KEY environment variable not set[/bold red]\"))\n        sys.exit(1)\n    \n    try:\n        # Run the customer support system and get response\n        response = asyncio.run(run_customer_support_system(args.prompt))\n        \n        # Display the response\n 
       console.print(Panel(response, title=\"Customer Support Response\", border_style=\"green\"))\n    \n    except Exception as e:\n        console.print(Panel(f\"[bold red]Error: {str(e)}[/bold red]\"))\n        sys.exit(1)\n\n# Test functions\ndef test_create_specialist_agents():\n    \"\"\"Test that specialist agents are created with the correct configuration.\"\"\"\n    billing_agent = create_billing_agent()\n    technical_agent = create_technical_agent()\n    account_agent = create_account_agent()\n    \n    assert billing_agent.name == \"BillingSpecialist\"\n    assert technical_agent.name == \"TechnicalSupport\"\n    assert account_agent.name == \"AccountManager\"\n    \n    assert \"billing specialist\" in billing_agent.instructions.lower()\n    assert \"technical support\" in technical_agent.instructions.lower()\n    assert \"account management\" in account_agent.instructions.lower()\n\ndef test_create_triage_agent():\n    \"\"\"Test that the triage agent is created with the correct configuration.\"\"\"\n    billing_agent = create_billing_agent()\n    technical_agent = create_technical_agent()\n    account_agent = create_account_agent()\n    \n    triage_agent = create_triage_agent([billing_agent, technical_agent, account_agent])\n    \n    assert triage_agent.name == \"TriageAgent\"\n    assert \"triage agent\" in triage_agent.instructions.lower()\n    assert len(triage_agent.handoffs) == 3\n\ndef test_run_customer_support_system():\n    \"\"\"Test that the customer support system can run and produce a response.\"\"\"\n    import pytest\n    \n    # Skip this test if no API key is available\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        pytest.skip(\"OPENAI_API_KEY not set\")\n    \n    # Run a test query that should go to the billing specialist\n    response = asyncio.run(run_customer_support_system(\"I have a question about my recent invoice\"))\n    \n    # Verify we got a non-empty response\n    assert response\n    assert len(response) > 
0\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "openai-agents-examples/08_agent_with_agent_as_tool.py",
    "content": "#!/usr/bin/env -S uv run --script\n\n# /// script\n# dependencies = [\n#   \"openai-agents>=0.0.2\",\n#   \"pytest>=7.4.0\",\n#   \"rich>=13.7.0\",\n# ]\n# ///\n\n\"\"\"\nAgent with Agent as Tool Example\n\nThis example demonstrates how to use an agent as a tool for another agent.\nIt shows how to create a research agent that can be used as a tool by a blog writer agent.\n\nRun with:\n    uv run 08_agent_with_agent_as_tool.py --prompt \"Write a blog post about music theory\"\n\nTest with:\n    uv run pytest 08_agent_with_agent_as_tool.py\n\"\"\"\n\nimport os\nimport sys\nimport argparse\nimport json\nimport asyncio\nfrom typing import Optional, List, Dict, Any, Union\nfrom rich.console import Console\nfrom rich.panel import Panel\n\nfrom openai import OpenAI\nfrom agents import Agent, Runner\n\n# Initialize console for rich output\nconsole = Console()\n\n\ndef create_research_agent() -> Agent:\n    \"\"\"\n    Create a research agent that can gather information on topics.\n\n    Returns:\n        An Agent instance specialized in research.\n    \"\"\"\n    instructions = \"\"\"\n    You are a research specialist who excels at gathering accurate information on various topics.\n    Your responses should be factual, well-organized, and comprehensive.\n    Include relevant details, statistics, and context when available.\n    Always cite your sources if you're providing specific facts or quotes.\n    Focus on providing high-quality, reliable information that would be useful for content creation.\n    \"\"\"\n\n    return Agent(\n        name=\"ResearchSpecialist\",\n        instructions=instructions,\n        model=\"gpt-4o-mini\",\n    )\n\n\ndef create_blog_writer_agent(research_agent: Agent) -> Agent:\n    \"\"\"\n    Create a blog writer agent that can use a research agent as a tool.\n\n    Args:\n        research_agent: The research agent to use as a tool\n\n    Returns:\n        An Agent instance specialized in blog writing with research 
capabilities\n    \"\"\"\n    instructions = \"\"\"\n    You are a professional blog writer who creates engaging, informative content.\n    Your writing should be clear, conversational, and tailored to a general audience.\n    Structure your blog posts with an introduction, body paragraphs, and conclusion.\n    Use the research tool available to you to gather accurate information on topics.\n    Incorporate the research seamlessly into your writing while maintaining your voice.\n    \"\"\"\n\n    # Convert the research agent into a tool\n    research_tool = research_agent.as_tool(\n        tool_name=\"research_topic\",\n        tool_description=\"Research a specific topic to gather accurate information. Provide a clear, specific topic or question to research.\",\n    )\n\n    return Agent(\n        name=\"BlogWriter\",\n        instructions=instructions,\n        model=\"gpt-4o-mini\",\n        tools=[research_tool],\n    )\n\n\nasync def run_blog_writer_system(prompt: str) -> str:\n    \"\"\"\n    Run the blog writer system with the given prompt.\n\n    Args:\n        prompt: The topic or request for a blog post\n\n    Returns:\n        The blog post content\n    \"\"\"\n    # Create the research agent\n    research_agent = create_research_agent()\n\n    # Create the blog writer agent with the research agent as a tool\n    blog_writer = create_blog_writer_agent(research_agent)\n\n    # Run the blog writer agent with the prompt\n    result = await Runner.run(blog_writer, prompt)\n\n    # Return the blog post\n    return result.final_output\n\n\ndef main():\n    \"\"\"Main function to parse arguments and run the blog writer system.\"\"\"\n    parser = argparse.ArgumentParser(description=\"Agent with Agent as Tool Example\")\n    parser.add_argument(\n        \"--prompt\",\n        \"-p\",\n        type=str,\n        required=True,\n        help=\"The topic or request for a blog post\",\n    )\n\n    args = parser.parse_args()\n\n    # Ensure API key is available\n   
 if not os.environ.get(\"OPENAI_API_KEY\"):\n        console.print(\n            Panel(\n                \"[bold red]Error: OPENAI_API_KEY environment variable not set[/bold red]\"\n            )\n        )\n        sys.exit(1)\n\n    try:\n        # Run the blog writer system and get the blog post\n        blog_post = asyncio.run(run_blog_writer_system(args.prompt))\n\n        # Display the blog post\n        console.print(Panel(blog_post, title=\"Blog Post\", border_style=\"green\"))\n\n    except Exception as e:\n        console.print(Panel(f\"[bold red]Error: {str(e)}[/bold red]\"))\n        sys.exit(1)\n\n\n# Test functions\ndef test_create_research_agent():\n    \"\"\"Test that the research agent is created with the correct configuration.\"\"\"\n    agent = create_research_agent()\n    assert agent.name == \"ResearchSpecialist\"\n    assert \"research specialist\" in agent.instructions.lower()\n    assert agent.model == \"gpt-4o-mini\"\n\n\ndef test_create_blog_writer_agent():\n    \"\"\"Test that the blog writer agent is created with the correct configuration.\"\"\"\n    research_agent = create_research_agent()\n    blog_writer = create_blog_writer_agent(research_agent)\n\n    assert blog_writer.name == \"BlogWriter\"\n    assert \"blog writer\" in blog_writer.instructions.lower()\n    assert len(blog_writer.tools) == 1\n    assert blog_writer.tools[0].name == \"research_topic\"\n\n\ndef test_run_blog_writer_system():\n    \"\"\"Test that the blog writer system can run and produce a blog post.\"\"\"\n    import pytest\n\n    # Skip this test if no API key is available\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        pytest.skip(\"OPENAI_API_KEY not set\")\n\n    # Run a test query for a simple blog post\n    blog_post = asyncio.run(\n        run_blog_writer_system(\"Write a short blog post about artificial intelligence\")\n    )\n\n    # Verify we got a non-empty blog post\n    assert blog_post\n    assert len(blog_post) > 0\n    # The blog post 
should contain relevant terms\n    assert any(\n        term in blog_post.lower()\n        for term in [\"ai\", \"artificial intelligence\", \"technology\"]\n    )\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "openai-agents-examples/09_agent_with_context_management.py",
    "content": "#!/usr/bin/env -S uv run --script\n\n# /// script\n# dependencies = [\n#   \"openai-agents>=0.0.2\",\n#   \"pytest>=7.4.0\",\n#   \"rich>=13.7.0\",\n# ]\n# ///\n\n\"\"\"\nAgent with Context Management Example\n\nThis example demonstrates how to use context management with agents to maintain state\nacross multiple interactions. It shows how to create a conversation agent that remembers\nprevious interactions.\n\nRun with:\n    uv run 09_agent_with_context_management.py --prompt \"Tell me about Mars\"\n\nTest with:\n    uv run pytest 09_agent_with_context_management.py\n\"\"\"\n\nimport os\nimport sys\nimport argparse\nimport json\nimport asyncio\nfrom typing import Optional, List, Dict, Any, Union\nfrom rich.console import Console\nfrom rich.panel import Panel\n\nfrom openai import OpenAI\nfrom agents import Agent, Runner, Context\n\n# Initialize console for rich output\nconsole = Console()\n\ndef create_conversation_agent() -> Agent:\n    \"\"\"\n    Create a conversation agent that can maintain context.\n    \n    Returns:\n        An Agent instance that maintains conversation context.\n    \"\"\"\n    instructions = \"\"\"\n    You are a helpful conversational assistant that maintains context across interactions.\n    Remember details from previous parts of the conversation and refer back to them when relevant.\n    Be friendly, informative, and engaging in your responses.\n    If the user asks about something you discussed earlier, acknowledge that and build upon it.\n    \"\"\"\n    \n    return Agent(\n        name=\"ConversationAssistant\",\n        instructions=instructions,\n        model=\"gpt-4o-mini\",\n    )\n\nasync def run_conversation_with_context(prompt: str, context: Optional[Context] = None) -> tuple[str, Context]:\n    \"\"\"\n    Run a conversation agent with context management.\n    \n    Args:\n        prompt: The user's query or prompt\n        context: Optional existing context from previous interactions\n        \n    
Returns:\n        A tuple containing the agent's response and the updated context\n    \"\"\"\n    # Create the conversation agent\n    agent = create_conversation_agent()\n    \n    # Create a new context if none is provided\n    if context is None:\n        context = Context()\n    \n    # Run the agent with the prompt and context\n    result = await Runner.run(agent, prompt, context=context)\n    \n    # Return the response and updated context\n    return result.final_output, result.context\n\ndef simulate_conversation(initial_prompt: str, follow_up_prompts: List[str]) -> List[str]:\n    \"\"\"\n    Simulate a multi-turn conversation with context management.\n    \n    Args:\n        initial_prompt: The first user prompt\n        follow_up_prompts: List of follow-up prompts\n        \n    Returns:\n        List of agent responses\n    \"\"\"\n    responses = []\n    context = None\n    \n    # Run the initial prompt\n    response, context = asyncio.run(run_conversation_with_context(initial_prompt, context))\n    responses.append(response)\n    \n    # Run each follow-up prompt with the updated context\n    for prompt in follow_up_prompts:\n        response, context = asyncio.run(run_conversation_with_context(prompt, context))\n        responses.append(response)\n    \n    return responses\n\ndef main():\n    \"\"\"Main function to parse arguments and run the conversation agent.\"\"\"\n    parser = argparse.ArgumentParser(description=\"Agent with Context Management Example\")\n    parser.add_argument(\"--prompt\", \"-p\", type=str, required=True, \n                        help=\"The prompt to send to the agent\")\n    parser.add_argument(\"--follow-up\", \"-f\", type=str, nargs=\"*\", default=[],\n                        help=\"Optional follow-up prompts to simulate a conversation\")\n    \n    args = parser.parse_args()\n    \n    # Ensure API key is available\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        
console.print(Panel(\"[bold red]Error: OPENAI_API_KEY environment variable not set[/bold red]\"))\n        sys.exit(1)\n    \n    try:\n        # Simulate a conversation with the provided prompts\n        responses = simulate_conversation(args.prompt, args.follow_up)\n        \n        # Display the initial response\n        console.print(Panel(responses[0], title=f\"Response to: {args.prompt}\", border_style=\"green\"))\n        \n        # Display follow-up responses if any\n        for i, response in enumerate(responses[1:]):\n            console.print(Panel(response, title=f\"Response to: {args.follow_up[i]}\", border_style=\"blue\"))\n    \n    except Exception as e:\n        console.print(Panel(f\"[bold red]Error: {str(e)}[/bold red]\"))\n        sys.exit(1)\n\n# Test functions\ndef test_create_conversation_agent():\n    \"\"\"Test that the conversation agent is created with the correct configuration.\"\"\"\n    agent = create_conversation_agent()\n    assert agent.name == \"ConversationAssistant\"\n    assert \"conversational assistant\" in agent.instructions.lower()\n    assert agent.model == \"gpt-4o-mini\"\n\ndef test_run_conversation_with_context():\n    \"\"\"Test that the agent can maintain context across interactions.\"\"\"\n    import pytest\n    \n    # Skip this test if no API key is available\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        pytest.skip(\"OPENAI_API_KEY not set\")\n    \n    # Run an initial query\n    initial_prompt = \"Tell me about Mars\"\n    response, context = asyncio.run(run_conversation_with_context(initial_prompt))\n    \n    # Verify we got a non-empty response\n    assert response\n    assert len(response) > 0\n    assert context is not None\n    \n    # Run a follow-up query that references the previous conversation\n    follow_up_prompt = \"How long would it take to travel there?\"\n    follow_up_response, _ = asyncio.run(run_conversation_with_context(follow_up_prompt, context))\n    \n    # Verify the 
follow-up response acknowledges the previous context\n    assert follow_up_response\n    assert len(follow_up_response) > 0\n    # The response should contain terms related to Mars travel\n    assert any(term in follow_up_response.lower() for term in [\"mars\", \"travel\", \"journey\", \"months\"])\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "openai-agents-examples/10_agent_with_guardrails.py",
    "content": "#!/usr/bin/env -S uv run --script\n\n# /// script\n# dependencies = [\n#   \"openai-agents>=0.0.2\",\n#   \"pytest>=7.4.0\",\n#   \"rich>=13.7.0\",\n#   \"pydantic>=2.0.0\",\n# ]\n# ///\n\n\"\"\"\nAgent with Guardrails Example\n\nThis example demonstrates how to use guardrails with agents to filter and validate inputs.\nIt shows how to create an agent with input validation to prevent prompt injection and ensure\nproper input format.\n\nRun with:\n    uv run 10_agent_with_guardrails.py --prompt \"Summarize this article about renewable energy\"\n\nTest with:\n    uv run pytest 10_agent_with_guardrails.py\n\"\"\"\n\nimport os\nimport sys\nimport argparse\nimport json\nimport asyncio\nimport re\nfrom typing import Optional, List, Dict, Any, Union\nfrom rich.console import Console\nfrom rich.panel import Panel\nfrom pydantic import BaseModel, Field\n\nfrom openai import OpenAI\nfrom agents import Agent, Runner, InputGuardrail\n\n# Initialize console for rich output\nconsole = Console()\n\n# Define a custom input guardrail for content moderation\nclass ContentModerationGuardrail(InputGuardrail):\n    \"\"\"\n    A guardrail that filters out potentially harmful or inappropriate content.\n    \"\"\"\n    \n    def __init__(self):\n        \"\"\"Initialize the content moderation guardrail.\"\"\"\n        # List of terms to filter out (simplified for example purposes)\n        self.filtered_terms = [\n            \"hack\", \"exploit\", \"bypass\", \"illegal\", \"steal\", \"attack\",\n            \"malware\", \"virus\", \"phishing\", \"scam\", \"fraud\"\n        ]\n    \n    def filter(self, input_str: str) -> Optional[str]:\n        \"\"\"\n        Filter the input string for potentially harmful content.\n        \n        Args:\n            input_str: The input string to filter\n            \n        Returns:\n            The filtered string if it passes, or None if it should be rejected\n        \"\"\"\n        # Convert to lowercase for case-insensitive 
matching\n        lower_input = input_str.lower()\n        \n        # Check for filtered terms\n        for term in self.filtered_terms:\n            if term in lower_input:\n                return None  # Reject the input\n        \n        return input_str  # Accept the input\n    \n    def get_rejection_message(self, input_str: str) -> str:\n        \"\"\"\n        Get a message explaining why the input was rejected.\n        \n        Args:\n            input_str: The rejected input string\n            \n        Returns:\n            A message explaining the rejection\n        \"\"\"\n        return \"Your input contains terms that may be related to harmful or inappropriate content. Please rephrase your request.\"\n\n# Define a custom input guardrail for input format validation\nclass FormatValidationGuardrail(InputGuardrail):\n    \"\"\"\n    A guardrail that ensures inputs follow a specific format.\n    \"\"\"\n    \n    def __init__(self, min_length: int = 5, max_length: int = 500):\n        \"\"\"\n        Initialize the format validation guardrail.\n        \n        Args:\n            min_length: Minimum allowed input length\n            max_length: Maximum allowed input length\n        \"\"\"\n        self.min_length = min_length\n        self.max_length = max_length\n    \n    def filter(self, input_str: str) -> Optional[str]:\n        \"\"\"\n        Filter the input string based on format requirements.\n        \n        Args:\n            input_str: The input string to filter\n            \n        Returns:\n            The input string if it passes, or None if it should be rejected\n        \"\"\"\n        # Check length constraints\n        if len(input_str) < self.min_length:\n            return None  # Too short\n        \n        if len(input_str) > self.max_length:\n            return None  # Too long\n        \n        return input_str  # Accept the input\n    \n    def get_rejection_message(self, input_str: str) -> str:\n        \"\"\"\n     
   Get a message explaining why the input was rejected.\n        \n        Args:\n            input_str: The rejected input string\n            \n        Returns:\n            A message explaining the rejection\n        \"\"\"\n        if len(input_str) < self.min_length:\n            return f\"Your input is too short. Please provide at least {self.min_length} characters.\"\n        \n        if len(input_str) > self.max_length:\n            return f\"Your input is too long. Please limit your request to {self.max_length} characters.\"\n        \n        return \"Your input does not meet the format requirements.\"\n\ndef create_protected_agent() -> Agent:\n    \"\"\"\n    Create an agent with input guardrails for protection.\n    \n    Returns:\n        An Agent instance with input guardrails.\n    \"\"\"\n    instructions = \"\"\"\n    You are a helpful assistant that provides information and assistance on various topics.\n    You prioritize user safety and ethical responses.\n    Provide accurate, helpful information while avoiding potentially harmful content.\n    Be concise but thorough in your responses.\n    \"\"\"\n    \n    # Create guardrails\n    content_guardrail = ContentModerationGuardrail()\n    format_guardrail = FormatValidationGuardrail(min_length=5, max_length=500)\n    \n    # Create the agent with guardrails\n    return Agent(\n        name=\"ProtectedAssistant\",\n        instructions=instructions,\n        model=\"gpt-4o-mini\",\n        input_guardrails=[content_guardrail, format_guardrail]\n    )\n\nasync def run_protected_agent(prompt: str) -> str:\n    \"\"\"\n    Run the protected agent with the given prompt.\n    \n    Args:\n        prompt: The user's query or prompt\n        \n    Returns:\n        The agent's response as a string, or a rejection message if the input is filtered\n    \"\"\"\n    # Create the protected agent\n    agent = create_protected_agent()\n    \n    try:\n        # Run the agent with the prompt\n        result = 
await Runner.run(agent, prompt)\n        return result.final_output\n    except Exception as e:\n        # Check if it's a guardrail rejection\n        if \"guardrail rejected\" in str(e).lower():\n            return f\"Input rejected by guardrails: {str(e)}\"\n        # Other exception\n        return f\"Error: {str(e)}\"\n\ndef main():\n    \"\"\"Main function to parse arguments and run the protected agent.\"\"\"\n    parser = argparse.ArgumentParser(description=\"Agent with Guardrails Example\")\n    parser.add_argument(\"--prompt\", \"-p\", type=str, required=True, \n                        help=\"The prompt to send to the agent\")\n    \n    args = parser.parse_args()\n    \n    # Ensure API key is available\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        console.print(Panel(\"[bold red]Error: OPENAI_API_KEY environment variable not set[/bold red]\"))\n        sys.exit(1)\n    \n    try:\n        # Run the protected agent and get response\n        response = asyncio.run(run_protected_agent(args.prompt))\n        \n        # Display the response\n        if \"rejected\" in response.lower():\n            console.print(Panel(response, title=\"Input Rejected\", border_style=\"red\"))\n        else:\n            console.print(Panel(response, title=\"Protected Agent Response\", border_style=\"green\"))\n    \n    except Exception as e:\n        console.print(Panel(f\"[bold red]Error: {str(e)}[/bold red]\"))\n        sys.exit(1)\n\n# Test functions\ndef test_content_moderation_guardrail():\n    \"\"\"Test that the content moderation guardrail correctly filters inputs.\"\"\"\n    guardrail = ContentModerationGuardrail()\n    \n    # Test safe input\n    safe_input = \"Tell me about renewable energy sources\"\n    assert guardrail.filter(safe_input) == safe_input\n    \n    # Test unsafe input\n    unsafe_input = \"How to hack into a computer system\"\n    assert guardrail.filter(unsafe_input) is None\n    assert \"harmful\" in 
guardrail.get_rejection_message(unsafe_input)\n\ndef test_format_validation_guardrail():\n    \"\"\"Test that the format validation guardrail correctly validates inputs.\"\"\"\n    guardrail = FormatValidationGuardrail(min_length=5, max_length=20)\n    \n    # Test valid input\n    valid_input = \"Hello world\"\n    assert guardrail.filter(valid_input) == valid_input\n    \n    # Test too short input\n    short_input = \"Hi\"\n    assert guardrail.filter(short_input) is None\n    assert \"short\" in guardrail.get_rejection_message(short_input)\n    \n    # Test too long input\n    long_input = \"This is a very long input that exceeds the maximum allowed length\"\n    assert guardrail.filter(long_input) is None\n    assert \"long\" in guardrail.get_rejection_message(long_input)\n\ndef test_create_protected_agent():\n    \"\"\"Test that the protected agent is created with the correct configuration.\"\"\"\n    agent = create_protected_agent()\n    assert agent.name == \"ProtectedAssistant\"\n    assert \"helpful assistant\" in agent.instructions.lower()\n    assert agent.model == \"gpt-4o-mini\"\n    assert len(agent.input_guardrails) == 2\n\ndef test_run_protected_agent():\n    \"\"\"Test that the protected agent can run and produce a response or rejection.\"\"\"\n    import pytest\n    \n    # Skip this test if no API key is available\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        pytest.skip(\"OPENAI_API_KEY not set\")\n    \n    # Test with a valid prompt\n    valid_prompt = \"Tell me about renewable energy sources\"\n    valid_response = asyncio.run(run_protected_agent(valid_prompt))\n    \n    # Verify we got a non-empty response\n    assert valid_response\n    assert len(valid_response) > 0\n    assert \"rejected\" not in valid_response.lower()\n    \n    # Test with an invalid prompt (contains filtered term)\n    invalid_prompt = \"How to hack into a system\"\n    invalid_response = asyncio.run(run_protected_agent(invalid_prompt))\n    \n    # Verify 
we got a rejection message\n    assert invalid_response\n    assert \"rejected\" in invalid_response.lower()\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "openai-agents-examples/11_agent_orchestration.py",
    "content": "#!/usr/bin/env -S uv run --script\n\n# /// script\n# dependencies = [\n#   \"openai-agents>=0.0.2\",\n#   \"pytest>=7.4.0\",\n#   \"rich>=13.7.0\",\n# ]\n# ///\n\n\"\"\"\nAgent Orchestration Example\n\nThis example demonstrates how to orchestrate multiple agents to work together on complex tasks.\nIt shows how to create a system where specialized agents collaborate under the coordination\nof a manager agent.\n\nRun with:\n    uv run 11_agent_orchestration.py --prompt \"Create a blog post about climate change solutions\"\n\nTest with:\n    uv run pytest 11_agent_orchestration.py\n\"\"\"\n\nimport os\nimport sys\nimport argparse\nimport json\nimport asyncio\nfrom typing import Optional, List, Dict, Any, Union, Tuple\nfrom rich.console import Console\nfrom rich.panel import Panel\n\nfrom openai import OpenAI\nfrom agents import Agent, Runner, handoff, Context\n\n# Initialize console for rich output\nconsole = Console()\n\ndef create_research_agent() -> Agent:\n    \"\"\"\n    Create a research agent that gathers information.\n    \n    Returns:\n        An Agent instance specialized in research.\n    \"\"\"\n    instructions = \"\"\"\n    You are a research specialist who excels at gathering accurate information on various topics.\n    Your task is to collect relevant facts, statistics, and context on the assigned topic.\n    Focus on providing comprehensive, well-organized information that covers different aspects of the topic.\n    Include both general information and specific details that would be useful for content creation.\n    Always prioritize accuracy and cite sources when providing specific facts.\n    \"\"\"\n    \n    return Agent(\n        name=\"ResearchSpecialist\",\n        instructions=instructions,\n        model=\"gpt-4o-mini\",\n        handoff_description=\"Use this agent to gather comprehensive information on a topic.\"\n    )\n\ndef create_outline_agent() -> Agent:\n    \"\"\"\n    Create an outline agent that structures 
content.\n    \n    Returns:\n        An Agent instance specialized in creating outlines.\n    \"\"\"\n    instructions = \"\"\"\n    You are an outline specialist who excels at organizing information into clear, logical structures.\n    Your task is to create well-structured outlines for content based on research provided.\n    Include main sections, subsections, and key points to cover in each section.\n    Ensure the outline has a logical flow and covers the topic comprehensively.\n    Focus on creating a structure that will engage readers while effectively communicating information.\n    \"\"\"\n    \n    return Agent(\n        name=\"OutlineSpecialist\",\n        instructions=instructions,\n        model=\"gpt-4o-mini\",\n        handoff_description=\"Use this agent to create a structured outline based on research.\"\n    )\n\ndef create_content_agent() -> Agent:\n    \"\"\"\n    Create a content agent that writes engaging content.\n    \n    Returns:\n        An Agent instance specialized in content writing.\n    \"\"\"\n    instructions = \"\"\"\n    You are a content writing specialist who excels at creating engaging, informative content.\n    Your task is to write high-quality content based on the provided outline and research.\n    Use a conversational, engaging tone while maintaining accuracy and clarity.\n    Include an attention-grabbing introduction, well-developed body paragraphs, and a compelling conclusion.\n    Incorporate the research seamlessly into the content while maintaining a consistent voice.\n    \"\"\"\n    \n    return Agent(\n        name=\"ContentSpecialist\",\n        instructions=instructions,\n        model=\"gpt-4o-mini\",\n        handoff_description=\"Use this agent to write engaging content based on an outline and research.\"\n    )\n\ndef create_editor_agent() -> Agent:\n    \"\"\"\n    Create an editor agent that refines and polishes content.\n    \n    Returns:\n        An Agent instance specialized in editing.\n    \"\"\"\n 
   instructions = \"\"\"\n    You are an editing specialist who excels at refining and polishing content.\n    Your task is to review content for clarity, coherence, grammar, and style.\n    Improve sentence structure, word choice, and flow while maintaining the original voice.\n    Ensure the content is well-organized, engaging, and free of errors.\n    Focus on making the content more impactful and reader-friendly.\n    \"\"\"\n    \n    return Agent(\n        name=\"EditingSpecialist\",\n        instructions=instructions,\n        model=\"gpt-4o-mini\",\n        handoff_description=\"Use this agent to refine and polish content.\"\n    )\n\ndef create_manager_agent(specialists: List[Agent]) -> Agent:\n    \"\"\"\n    Create a manager agent that coordinates the work of specialist agents.\n    \n    Args:\n        specialists: List of specialist agents to coordinate\n        \n    Returns:\n        An Agent instance that manages the content creation process\n    \"\"\"\n    instructions = \"\"\"\n    You are a content manager who coordinates the work of specialist agents to create high-quality content.\n    Your task is to:\n    1. Understand the content request\n    2. Delegate research to the Research Specialist\n    3. Have the Outline Specialist create a structure based on the research\n    4. Have the Content Specialist write content based on the outline and research\n    5. Have the Editing Specialist refine and polish the content\n    6. 
Deliver the final polished content\n    \n    Manage the workflow efficiently and ensure each specialist has the information they need.\n    \"\"\"\n    \n    # Create handoffs to specialist agents\n    handoffs = [handoff(agent) for agent in specialists]\n    \n    return Agent(\n        name=\"ContentManager\",\n        instructions=instructions,\n        model=\"gpt-4o-mini\",\n        handoffs=handoffs\n    )\n\nasync def orchestrate_content_creation(prompt: str) -> str:\n    \"\"\"\n    Orchestrate the content creation process with multiple specialized agents.\n    \n    Args:\n        prompt: The content request\n        \n    Returns:\n        The final polished content\n    \"\"\"\n    # Create specialist agents\n    research_agent = create_research_agent()\n    outline_agent = create_outline_agent()\n    content_agent = create_content_agent()\n    editor_agent = create_editor_agent()\n    \n    # Create manager agent with specialists\n    manager = create_manager_agent([research_agent, outline_agent, content_agent, editor_agent])\n    \n    # Create a context to track the workflow\n    context = Context()\n    \n    # Run the manager agent with the prompt and context\n    result = await Runner.run(manager, prompt, context=context)\n    \n    # Return the final content\n    return result.final_output\n\ndef main():\n    \"\"\"Main function to parse arguments and run the content creation system.\"\"\"\n    parser = argparse.ArgumentParser(description=\"Agent Orchestration Example\")\n    parser.add_argument(\"--prompt\", \"-p\", type=str, required=True, \n                        help=\"The content request to process\")\n    \n    args = parser.parse_args()\n    \n    # Ensure API key is available\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        console.print(Panel(\"[bold red]Error: OPENAI_API_KEY environment variable not set[/bold red]\"))\n        sys.exit(1)\n    \n    try:\n        # Run the content creation system and get the final content\n     
   console.print(Panel(\"Starting content creation process...\", title=\"Status\", border_style=\"blue\"))\n        content = asyncio.run(orchestrate_content_creation(args.prompt))\n        \n        # Display the final content\n        console.print(Panel(content, title=\"Final Content\", border_style=\"green\"))\n    \n    except Exception as e:\n        console.print(Panel(f\"[bold red]Error: {str(e)}[/bold red]\"))\n        sys.exit(1)\n\n# Test functions\ndef test_create_specialist_agents():\n    \"\"\"Test that specialist agents are created with the correct configuration.\"\"\"\n    research_agent = create_research_agent()\n    outline_agent = create_outline_agent()\n    content_agent = create_content_agent()\n    editor_agent = create_editor_agent()\n    \n    assert research_agent.name == \"ResearchSpecialist\"\n    assert outline_agent.name == \"OutlineSpecialist\"\n    assert content_agent.name == \"ContentSpecialist\"\n    assert editor_agent.name == \"EditingSpecialist\"\n    \n    assert \"research specialist\" in research_agent.instructions.lower()\n    assert \"outline specialist\" in outline_agent.instructions.lower()\n    assert \"content writing specialist\" in content_agent.instructions.lower()\n    assert \"editing specialist\" in editor_agent.instructions.lower()\n\ndef test_create_manager_agent():\n    \"\"\"Test that the manager agent is created with the correct configuration.\"\"\"\n    research_agent = create_research_agent()\n    outline_agent = create_outline_agent()\n    content_agent = create_content_agent()\n    editor_agent = create_editor_agent()\n    \n    manager = create_manager_agent([research_agent, outline_agent, content_agent, editor_agent])\n    \n    assert manager.name == \"ContentManager\"\n    assert \"content manager\" in manager.instructions.lower()\n    assert len(manager.handoffs) == 4\n\ndef test_orchestrate_content_creation():\n    \"\"\"Test that the content creation system can run and produce content.\"\"\"\n    
import pytest\n    \n    # Skip this test if no API key is available\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        pytest.skip(\"OPENAI_API_KEY not set\")\n    \n    # Run a test with a simple content request\n    # Use a shorter timeout for testing\n    content = asyncio.run(orchestrate_content_creation(\"Write a short paragraph about renewable energy\"))\n    \n    # Verify we got non-empty content\n    assert content\n    assert len(content) > 0\n    # The content should contain relevant terms\n    assert any(term in content.lower() for term in [\"renewable\", \"energy\", \"sustainable\"])\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "openai-agents-examples/12_anthropic_agent.py",
    "content": "#!/usr/bin/env -S uv run --script\n\n# /// script\n# dependencies = [\n#   \"openai-agents>=0.0.2\",\n#   \"anthropic>=0.45.2\",\n#   \"pytest>=7.4.0\",\n#   \"rich>=13.7.0\",\n# ]\n# ///\n\n\"\"\"\nAnthropic Agent Example\n\nThis example demonstrates how to use the OpenAI Agents SDK with Anthropic's Claude model.\nIt shows how to create a custom model provider that works with Anthropic's API.\n\nRun with:\n    uv run 12_anthropic_agent.py --prompt \"Explain the concept of quantum entanglement\"\n\nTest with:\n    uv run pytest 12_anthropic_agent.py\n\"\"\"\n\nimport os\nimport sys\nimport argparse\nimport json\nimport asyncio\nfrom typing import Optional, List, Dict, Any, Union, Callable\nfrom rich.console import Console\nfrom rich.panel import Panel\n\nimport anthropic\nfrom openai import OpenAI\nfrom agents import Agent, Runner\nfrom openai_agents.providers import ModelProvider, ModelResponse\nfrom openai.types.chat import ChatCompletion, ChatCompletionMessage\n\n# Initialize console for rich output\nconsole = Console()\n\nclass AnthropicModelProvider(ModelProvider):\n    \"\"\"\n    Custom model provider for Anthropic's Claude model.\n    \"\"\"\n    \n    def __init__(self, api_key: Optional[str] = None):\n        \"\"\"\n        Initialize the Anthropic model provider.\n        \n        Args:\n            api_key: Anthropic API key. 
If None, will use the ANTHROPIC_API_KEY environment variable.\n        \"\"\"\n        self.api_key = api_key or os.environ.get(\"ANTHROPIC_API_KEY\")\n        if not self.api_key:\n            raise ValueError(\"Anthropic API key is required\")\n        \n        # Use the async client so that messages.create can be awaited in generate()\n        self.client = anthropic.AsyncAnthropic(api_key=self.api_key)\n    \n    async def generate(\n        self,\n        messages: List[Dict[str, Any]],\n        model: str,\n        temperature: float = 0.7,\n        max_tokens: int = 1024,\n        **kwargs\n    ) -> ModelResponse:\n        \"\"\"\n        Generate a response using Anthropic's Claude model.\n        \n        Args:\n            messages: List of messages in the conversation\n            model: Model name (will be mapped to Anthropic model)\n            temperature: Sampling temperature\n            max_tokens: Maximum number of tokens to generate\n            **kwargs: Additional arguments to pass to the model\n            \n        Returns:\n            A ModelResponse containing the model's response\n        \"\"\"\n        # Map OpenAI model names to Anthropic model names\n        model_mapping = {\n            \"gpt-4o-mini\": \"claude-3-haiku-20240307\",\n            \"gpt-4o\": \"claude-3-opus-20240229\",\n            \"gpt-3.5-turbo\": \"claude-3-sonnet-20240229\",\n        }\n        \n        # Use the mapped model or default to claude-3-haiku\n        anthropic_model = model_mapping.get(model, \"claude-3-haiku-20240307\")\n        \n        # Default system prompt; overwritten below if a system message is present\n        system_content = \"\"\n        \n        # Convert OpenAI message format to Anthropic message format\n        anthropic_messages = []\n        for message in messages:\n            role = message[\"role\"]\n            # Map OpenAI roles to Anthropic roles\n            if role == \"system\":\n                # Anthropic takes the system prompt as a separate top-level parameter\n                system_content = message.get(\"content\", \"\")\n                continue\n            elif role == \"user\":\n                anthropic_role = \"user\"\n            elif role == \"assistant\":\n                anthropic_role = \"assistant\"\n            else:\n                # Skip unsupported roles\n                continue\n            \n            # Add the message\n            anthropic_messages.append({\n                \"role\": anthropic_role,\n                \"content\": message.get(\"content\", \"\")\n            })\n        \n        # Create the message with the system prompt if available\n        try:\n            response = await self.client.messages.create(\n                model=anthropic_model,\n                messages=anthropic_messages,\n                system=system_content,\n                temperature=temperature,\n                max_tokens=max_tokens,\n                **kwargs\n            )\n            \n            # Convert Anthropic response to OpenAI format\n            output_message = {\n                \"role\": \"assistant\",\n                \"content\": response.content[0].text\n            }\n            \n            # Create a ModelResponse\n            return ModelResponse(\n                output=[output_message],\n                usage={\n                    \"prompt_tokens\": response.usage.input_tokens,\n                    \"completion_tokens\": response.usage.output_tokens,\n                    \"total_tokens\": response.usage.input_tokens + response.usage.output_tokens\n                },\n                referenceable_id=None\n            )\n        \n        except Exception as e:\n            # Chain the original exception so the full traceback is preserved\n            raise RuntimeError(f\"Error generating response from Anthropic: {str(e)}\") from e\n\ndef create_anthropic_agent() -> Agent:\n    \"\"\"\n    Create an agent that uses Anthropic's Claude model.\n    \n    Returns:\n        An Agent instance that uses Anthropic's Claude model.\n    \"\"\"\n    instructions = \"\"\"\n    You are a helpful assistant powered by Anthropic's Claude model.\n    You provide accurate, thoughtful responses to user queries.\n    You 
excel at explaining complex concepts in clear, accessible language.\n    When appropriate, you break down information into easy-to-understand parts.\n    You acknowledge when you don't know something rather than making up information.\n    \"\"\"\n    \n    # Create the Anthropic model provider\n    provider = AnthropicModelProvider()\n    \n    # Create the agent with the Anthropic provider\n    return Agent(\n        name=\"ClaudeAssistant\",\n        instructions=instructions,\n        model=\"gpt-4o-mini\",  # This will be mapped to claude-3-haiku\n        model_provider=provider\n    )\n\nasync def run_anthropic_agent(prompt: str) -> str:\n    \"\"\"\n    Run the Anthropic agent with the given prompt.\n    \n    Args:\n        prompt: The user's query or prompt\n        \n    Returns:\n        The agent's response as a string\n    \"\"\"\n    # Create the Anthropic agent\n    agent = create_anthropic_agent()\n    \n    # Run the agent with the prompt\n    result = await Runner.run(agent, prompt)\n    \n    # Return the response\n    return result.final_output\n\ndef main():\n    \"\"\"Main function to parse arguments and run the Anthropic agent.\"\"\"\n    parser = argparse.ArgumentParser(description=\"Anthropic Agent Example\")\n    parser.add_argument(\"--prompt\", \"-p\", type=str, required=True, \n                        help=\"The prompt to send to the agent\")\n    \n    args = parser.parse_args()\n    \n    # Ensure API key is available\n    if not os.environ.get(\"ANTHROPIC_API_KEY\"):\n        console.print(Panel(\"[bold red]Error: ANTHROPIC_API_KEY environment variable not set[/bold red]\"))\n        sys.exit(1)\n    \n    try:\n        # Run the Anthropic agent and get response\n        response = asyncio.run(run_anthropic_agent(args.prompt))\n        \n        # Display the response\n        console.print(Panel(response, title=\"Claude Response\", border_style=\"green\"))\n    \n    except Exception as e:\n        console.print(Panel(f\"[bold 
red]Error: {str(e)}[/bold red]\"))\n        sys.exit(1)\n\n# Test functions\ndef test_create_anthropic_agent():\n    \"\"\"Test that the Anthropic agent is created with the correct configuration.\"\"\"\n    import pytest\n    \n    # Skip this test if no API key is available\n    if not os.environ.get(\"ANTHROPIC_API_KEY\"):\n        pytest.skip(\"ANTHROPIC_API_KEY not set\")\n    \n    agent = create_anthropic_agent()\n    assert agent.name == \"ClaudeAssistant\"\n    assert \"claude\" in agent.instructions.lower()\n    assert isinstance(agent.model_provider, AnthropicModelProvider)\n\ndef test_run_anthropic_agent():\n    \"\"\"Test that the Anthropic agent can run and produce a response.\"\"\"\n    import pytest\n    \n    # Skip this test if no API key is available\n    if not os.environ.get(\"ANTHROPIC_API_KEY\"):\n        pytest.skip(\"ANTHROPIC_API_KEY not set\")\n    \n    # Run a simple test query\n    response = asyncio.run(run_anthropic_agent(\"What is 2+2?\"))\n    \n    # Verify we got a non-empty response\n    assert response\n    assert len(response) > 0\n    # The response should contain \"4\" somewhere\n    assert \"4\" in response\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "openai-agents-examples/13_research_blog_system.py",
    "content": "#!/usr/bin/env -S uv run --script\n\n# /// script\n# dependencies = [\n#   \"openai-agents>=0.0.2\",\n#   \"pytest>=7.4.0\",\n#   \"rich>=13.7.0\",\n#   \"markdown>=3.5.2\",\n# ]\n# ///\n\n\"\"\"\nResearch and Blog Agent System\n\nThis example demonstrates a complete system where research agents and blog agents work together\nto create markdown blogs. It showcases the integration of multiple agent capabilities.\n\nRun with:\n    uv run 13_research_blog_system.py --topic \"Artificial Intelligence Ethics\" --output blog_post.md\n\nTest with:\n    uv run pytest 13_research_blog_system.py\n\"\"\"\n\nimport os\nimport sys\nimport argparse\nimport json\nimport asyncio\nimport markdown\nfrom datetime import datetime\nfrom typing import Optional, List, Dict, Any, Union, Tuple\nfrom rich.console import Console\nfrom rich.panel import Panel\nfrom rich.markdown import Markdown\n\nfrom openai import OpenAI\nfrom agents import Agent, Runner, handoff, Context, function_tool\n\n# Initialize console for rich output\nconsole = Console()\n\n# Define function tools for the research agent\n@function_tool\ndef search_for_information(query: str, depth: int = 3) -> str:\n    \"\"\"\n    Simulated search for information on a topic.\n    \n    Args:\n        query: The search query\n        depth: The depth of the search (1-5, with 5 being most comprehensive)\n        \n    Returns:\n        A string containing the search results\n    \"\"\"\n    # This is a mock implementation - in a real application, you would call a search API\n    search_results = {\n        \"artificial intelligence ethics\": \"\"\"\n            Artificial Intelligence Ethics is a field focused on ensuring AI systems are developed and used responsibly.\n            \n            Key principles include:\n            1. Transparency - AI systems should be explainable and understandable\n            2. Fairness - AI should not perpetuate or amplify biases\n            3. 
Privacy - AI systems should respect user privacy and data rights\n            4. Accountability - Clear responsibility for AI decisions and impacts\n            5. Safety - AI systems should be reliable and minimize harm\n            \n            Current challenges include:\n            - Algorithmic bias in facial recognition and criminal justice systems\n            - Privacy concerns with data collection and surveillance\n            - Autonomous weapons and military applications\n            - Job displacement due to automation\n            - Concentration of AI power among few tech companies\n            \n            Organizations like the IEEE, EU Commission, and various academic institutions have developed ethical frameworks for AI development.\n        \"\"\",\n        \n        \"climate change solutions\": \"\"\"\n            Climate Change Solutions encompass various approaches to mitigate and adapt to global warming.\n            \n            Key mitigation strategies include:\n            1. Renewable energy transition (solar, wind, hydro, geothermal)\n            2. Energy efficiency improvements in buildings and industry\n            3. Sustainable transportation (electric vehicles, public transit)\n            4. Carbon capture and storage technologies\n            5. 
Reforestation and ecosystem restoration\n            \n            Adaptation strategies include:\n            - Climate-resilient infrastructure\n            - Water conservation and management\n            - Sustainable agriculture practices\n            - Early warning systems for extreme weather\n            - Planned relocation of vulnerable communities\n            \n            Policy approaches include carbon pricing, regulations, subsidies for clean technology, and international agreements like the Paris Climate Accord.\n            \n            Emerging technologies such as green hydrogen, advanced batteries, and direct air capture show promise for addressing climate challenges.\n        \"\"\",\n        \n        \"quantum computing\": \"\"\"\n            Quantum Computing leverages quantum mechanics principles to process information in fundamentally new ways.\n            \n            Key concepts:\n            1. Qubits - Unlike classical bits (0 or 1), qubits can exist in superposition of states\n            2. Entanglement - Quantum particles become correlated, enabling complex computations\n            3. Quantum gates - Operations that manipulate qubits to perform calculations\n            \n            Potential applications:\n            - Cryptography and security (both breaking existing systems and creating new ones)\n            - Drug discovery and materials science through molecular simulation\n            - Optimization problems in logistics, finance, and energy\n            - Machine learning and AI acceleration\n            - Climate modeling and complex system simulation\n            \n            Current state: Quantum computers remain in early development with 50-100+ qubit systems from IBM, Google, and others. 
Quantum advantage (surpassing classical computers) has been demonstrated for specific problems.\n            \n            Challenges include error correction, qubit stability (decoherence), and scaling systems to practical sizes.\n            \n            Major players include IBM, Google, Microsoft, IonQ, Rigetti, and various academic research groups.\n        \"\"\"\n    }\n    \n    # Find the most relevant result based on the query\n    for key, value in search_results.items():\n        if any(word in query.lower() for word in key.split()):\n            # Adjust the depth of information\n            lines = value.strip().split('\\n')\n            result_depth = max(5, min(len(lines), depth * 5))\n            return '\\n'.join(lines[:result_depth])\n    \n    # Default response if no match found\n    return \"No specific information found on this topic. Please try a more general query.\"\n\n@function_tool\ndef analyze_topic(topic: str) -> str:\n    \"\"\"\n    Analyze a topic to identify key aspects for research.\n    \n    Args:\n        topic: The topic to analyze\n        \n    Returns:\n        A string containing analysis of the topic with key aspects to research\n    \"\"\"\n    # This is a mock implementation - in a real application, this would be more sophisticated\n    topic_analyses = {\n        \"artificial intelligence ethics\": \"\"\"\n            Topic Analysis: Artificial Intelligence Ethics\n            \n            Key aspects to research:\n            1. Ethical frameworks and principles for AI development\n            2. Bias and fairness in AI systems\n            3. Privacy implications of AI technologies\n            4. Accountability and transparency in AI decision-making\n            5. Regulatory approaches and governance models\n            6. Economic and social impacts of AI deployment\n            7. Case studies of ethical failures and successes\n            8. 
Future challenges and emerging ethical concerns\n        \"\"\",\n        \n        \"climate change solutions\": \"\"\"\n            Topic Analysis: Climate Change Solutions\n            \n            Key aspects to research:\n            1. Renewable energy technologies and implementation\n            2. Carbon capture and sequestration approaches\n            3. Policy mechanisms (carbon pricing, regulations, incentives)\n            4. Adaptation strategies for vulnerable communities\n            5. Agricultural and land use changes\n            6. Behavioral and lifestyle modifications\n            7. Economic considerations and just transition\n            8. International cooperation frameworks\n        \"\"\",\n        \n        \"quantum computing\": \"\"\"\n            Topic Analysis: Quantum Computing\n            \n            Key aspects to research:\n            1. Fundamental quantum mechanics principles relevant to computing\n            2. Quantum computing architectures and hardware approaches\n            3. Quantum algorithms and computational advantages\n            4. Potential applications across industries\n            5. Current state of development and key players\n            6. Challenges and limitations of quantum systems\n            7. Quantum programming languages and software tools\n            8. Timeline and roadmap for practical quantum computing\n        \"\"\"\n    }\n    \n    # Find the most relevant analysis\n    for key, value in topic_analyses.items():\n        if any(word in topic.lower() for word in key.split()):\n            return value.strip()\n    \n    # Default analysis if no match found\n    return f\"\"\"\n        Topic Analysis: {topic}\n        \n        Key aspects to research:\n        1. Historical context and development\n        2. Current state and major concepts\n        3. Key stakeholders and perspectives\n        4. Challenges and controversies\n        5. Future trends and developments\n        6. 
Practical applications and implications\n        7. Related fields and intersections\n        8. Resources for further learning\n    \"\"\".strip()\n\n# Define function tools for the blog agent\n@function_tool\ndef generate_blog_outline(topic: str, research: str) -> str:\n    \"\"\"\n    Generate an outline for a blog post based on research.\n    \n    Args:\n        topic: The blog topic\n        research: The research information to incorporate\n        \n    Returns:\n        A string containing a structured blog outline\n    \"\"\"\n    # This is a simplified implementation - in a real application, this would use more sophisticated logic\n    # Extract key points from research\n    research_lines = research.strip().split('\\n')\n    key_points = [line.strip() for line in research_lines if line.strip() and not line.strip().startswith('#')]\n    \n    # Create a basic outline structure\n    outline = f\"\"\"\n        # Blog Outline: {topic}\n        \n        ## Introduction\n        - Hook: Engaging opening to capture reader interest\n        - Context: Brief background on {topic}\n        - Thesis: Main point or argument of the blog post\n        \n        ## Main Section 1: Overview and Background\n        - Historical context\n        - Current relevance\n        - Key concepts and definitions\n        \n        ## Main Section 2: Key Aspects and Analysis\n    \"\"\"\n    \n    # Add research points to the outline\n    for i, point in enumerate(key_points[:5]):\n        if len(point) > 100:  # Only use shorter points\n            continue\n        outline += f\"\\n        - Point {i+1}: {point}\"\n    \n    # Complete the outline\n    outline += f\"\"\"\n        \n        ## Main Section 3: Implications and Applications\n        - Practical applications\n        - Future developments\n        - Challenges and opportunities\n        \n        ## Conclusion\n        - Summary of key points\n        - Final thoughts\n        - Call to action or next steps\n    
\"\"\"\n    \n    return outline.strip()\n\n@function_tool\ndef format_blog_as_markdown(title: str, content: str) -> str:\n    \"\"\"\n    Format a blog post as markdown.\n    \n    Args:\n        title: The blog post title\n        content: The blog post content\n        \n    Returns:\n        A string containing the formatted markdown\n    \"\"\"\n    # Ensure the content has proper markdown formatting\n    if not content.startswith('#'):\n        content = f\"# {title}\\n\\n{content}\"\n    \n    # Add metadata\n    markdown = f\"\"\"---\ntitle: \"{title}\"\ndate: \"{datetime.now().strftime('%Y-%m-%d')}\"\nauthor: \"AI Research & Blog System\"\ntags: [\"ai\", \"research\", \"blog\"]\n---\n\n{content}\n\"\"\"\n    \n    return markdown\n\ndef create_research_agent() -> Agent:\n    \"\"\"\n    Create a research agent that gathers and analyzes information.\n    \n    Returns:\n        An Agent instance specialized in research.\n    \"\"\"\n    instructions = \"\"\"\n    You are a research specialist who excels at gathering and analyzing information on various topics.\n    Your task is to:\n    1. Understand the research request\n    2. Use the search_for_information tool to gather relevant information\n    3. Use the analyze_topic tool to identify key aspects for research\n    4. Synthesize the information into a comprehensive, well-organized research report\n    5. Include relevant facts, statistics, and context\n    6. 
Ensure the research is accurate, balanced, and thorough\n    \n    Your research should be detailed enough to serve as the foundation for content creation.\n    \"\"\"\n    \n    # Create the research agent with function tools\n    return Agent(\n        name=\"ResearchSpecialist\",\n        instructions=instructions,\n        model=\"gpt-4o-mini\",\n        tools=[search_for_information, analyze_topic],\n        handoff_description=\"Use this agent to conduct thorough research on a topic.\"\n    )\n\ndef create_blog_agent() -> Agent:\n    \"\"\"\n    Create a blog agent that writes engaging blog posts.\n    \n    Returns:\n        An Agent instance specialized in blog writing.\n    \"\"\"\n    instructions = \"\"\"\n    You are a blog writing specialist who excels at creating engaging, informative blog posts.\n    Your task is to:\n    1. Understand the blog request and research provided\n    2. Use the generate_blog_outline tool to create a structured outline\n    3. Write a comprehensive blog post based on the outline and research\n    4. Use the format_blog_as_markdown tool to format the post properly\n    5. 
Ensure the blog is engaging, informative, and well-structured\n    \n    Your blog posts should be conversational yet informative, with a clear introduction,\n    well-developed body sections, and a compelling conclusion.\n    \"\"\"\n    \n    # Create the blog agent with function tools\n    return Agent(\n        name=\"BlogSpecialist\",\n        instructions=instructions,\n        model=\"gpt-4o-mini\",\n        tools=[generate_blog_outline, format_blog_as_markdown],\n        handoff_description=\"Use this agent to write engaging blog posts based on research.\"\n    )\n\ndef create_coordinator_agent(specialists: List[Agent]) -> Agent:\n    \"\"\"\n    Create a coordinator agent that manages the research and blog writing process.\n    \n    Args:\n        specialists: List of specialist agents to coordinate\n        \n    Returns:\n        An Agent instance that coordinates the content creation process\n    \"\"\"\n    instructions = \"\"\"\n    You are a content coordinator who manages the process of creating research-based blog posts.\n    Your task is to:\n    1. Understand the blog topic request\n    2. Delegate research to the Research Specialist\n    3. Provide the research to the Blog Specialist to create a blog post\n    4. Ensure the final blog post is comprehensive, engaging, and based on solid research\n    5. 
Deliver the final markdown blog post\n    \n    Manage the workflow efficiently and ensure each specialist has the information they need.\n    \"\"\"\n    \n    # Create handoffs to specialist agents\n    handoffs = [handoff(agent) for agent in specialists]\n    \n    return Agent(\n        name=\"ContentCoordinator\",\n        instructions=instructions,\n        model=\"gpt-4o-mini\",\n        handoffs=handoffs\n    )\n\nasync def create_research_blog(topic: str) -> str:\n    \"\"\"\n    Create a research-based blog post on the given topic.\n    \n    Args:\n        topic: The blog topic\n        \n    Returns:\n        A string containing the markdown blog post\n    \"\"\"\n    # Create specialist agents\n    research_agent = create_research_agent()\n    blog_agent = create_blog_agent()\n    \n    # Create coordinator agent with specialists\n    coordinator = create_coordinator_agent([research_agent, blog_agent])\n    \n    # Create a context to track the workflow\n    context = Context()\n    \n    # Run the coordinator agent with the topic and context\n    result = await Runner.run(coordinator, f\"Create a blog post about {topic}\", context=context)\n    \n    # Return the final blog post\n    return result.final_output\n\ndef main():\n    \"\"\"Main function to parse arguments and run the research blog system.\"\"\"\n    parser = argparse.ArgumentParser(description=\"Research and Blog Agent System\")\n    parser.add_argument(\"--topic\", \"-t\", type=str, required=True, \n                        help=\"The topic for the blog post\")\n    parser.add_argument(\"--output\", \"-o\", type=str, default=None,\n                        help=\"Optional file path to save the markdown blog post\")\n    \n    args = parser.parse_args()\n    \n    # Ensure API key is available\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        console.print(Panel(\"[bold red]Error: OPENAI_API_KEY environment variable not set[/bold red]\"))\n        sys.exit(1)\n    \n    try:\n        
# Create the blog post\n        console.print(Panel(f\"Creating a blog post about '{args.topic}'...\", title=\"Status\", border_style=\"blue\"))\n        blog_post = asyncio.run(create_research_blog(args.topic))\n        \n        # Display the blog post\n        console.print(Panel(Markdown(blog_post), title=\"Blog Post\", border_style=\"green\"))\n        \n        # Save to file if output path is provided\n        if args.output:\n            with open(args.output, \"w\") as f:\n                f.write(blog_post)\n            console.print(f\"[green]Blog post saved to {args.output}[/green]\")\n    \n    except Exception as e:\n        console.print(Panel(f\"[bold red]Error: {str(e)}[/bold red]\"))\n        sys.exit(1)\n\n# Test functions\ndef test_research_tools():\n    \"\"\"Test that the research tools work correctly.\"\"\"\n    # Test search tool\n    search_result = search_for_information(\"artificial intelligence ethics\")\n    assert \"ethics\" in search_result.lower()\n    assert \"principles\" in search_result.lower()\n    \n    # Test analysis tool\n    analysis_result = analyze_topic(\"artificial intelligence ethics\")\n    assert \"analysis\" in analysis_result.lower()\n    assert \"key aspects\" in analysis_result.lower()\n\ndef test_blog_tools():\n    \"\"\"Test that the blog tools work correctly.\"\"\"\n    # Test outline tool\n    outline = generate_blog_outline(\n        \"AI Ethics\",\n        \"AI Ethics involves principles like transparency, fairness, and accountability.\"\n    )\n    assert \"introduction\" in outline.lower()\n    assert \"conclusion\" in outline.lower()\n    \n    # Test markdown formatting tool\n    markdown = format_blog_as_markdown(\n        \"AI Ethics\",\n        \"# AI Ethics\\n\\nThis is a blog post about AI ethics.\"\n    )\n    assert \"title\" in markdown.lower()\n    assert \"date\" in markdown.lower()\n    assert \"ai ethics\" in markdown.lower()\n\ndef test_create_agents():\n    \"\"\"Test that the agents are 
created with the correct configuration.\"\"\"\n    research_agent = create_research_agent()\n    blog_agent = create_blog_agent()\n    coordinator = create_coordinator_agent([research_agent, blog_agent])\n    \n    assert research_agent.name == \"ResearchSpecialist\"\n    assert blog_agent.name == \"BlogSpecialist\"\n    assert coordinator.name == \"ContentCoordinator\"\n    \n    assert len(research_agent.tools) == 2\n    assert len(blog_agent.tools) == 2\n    assert len(coordinator.handoffs) == 2\n\ndef test_create_research_blog():\n    \"\"\"Test that the research blog system can run and produce a blog post.\"\"\"\n    import pytest\n    \n    # Skip this test if no API key is available\n    if not os.environ.get(\"OPENAI_API_KEY\"):\n        pytest.skip(\"OPENAI_API_KEY not set\")\n    \n    # Run a test with a simple topic\n    # Use a shorter timeout for testing\n    blog_post = asyncio.run(create_research_blog(\"AI Ethics\"))\n    \n    # Verify we got a non-empty blog post\n    assert blog_post\n    assert len(blog_post) > 0\n    # The blog post should contain relevant terms\n    assert any(term in blog_post.lower() for term in [\"ai\", \"ethics\", \"principles\"])\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "openai-agents-examples/README.md",
    "content": "# OpenAI Agents SDK Examples\n\nA comprehensive collection of single-file examples showcasing the capabilities of the OpenAI Agents SDK.\n\n## Overview\n\nThis repository contains 13 examples demonstrating various features of the OpenAI Agents SDK, with a focus on research and blog agents working together to create markdown blogs. Each example is implemented as a self-contained UV Python script with built-in tests.\n\n## Key Features Demonstrated\n\n- Basic and multi-agent systems\n- Synchronous and asynchronous execution\n- Tracing and monitoring\n- Function tools and custom tools\n- Agent handoffs and agent-as-tool patterns\n- Context management\n- Guardrails for safety\n- Agent orchestration\n- Cross-provider integration (Anthropic)\n- Complete research and blog generation system\n\n## Examples\n\n1. [Basic Agent](01_basic_agent.py) - A simple example of how to create and run an agent.\n2. [Multi-Agent](02_multi_agent.py) - An example of how to create and run multiple agents.\n3. [Synchronous Agent](03_sync_agent.py) - An example of how to run an agent synchronously.\n4. [Agent with Tracing](04_agent_with_tracing.py) - An example of how to use tracing with agents.\n5. [Agent with Function Tools](05_agent_with_function_tools.py) - An example of how to use function tools with agents.\n6. [Agent with Custom Tools](06_agent_with_custom_tools.py) - An example of how to create custom tools for agents.\n7. [Agent with Handoffs](07_agent_with_handoffs.py) - An example of how to use handoffs between agents.\n8. [Agent with Agent as Tool](08_agent_with_agent_as_tool.py) - An example of how to use an agent as a tool for another agent.\n9. [Agent with Context Management](09_agent_with_context_management.py) - An example of how to use context management with agents.\n10. [Agent with Guardrails](10_agent_with_guardrails.py) - An example of how to use guardrails with agents.\n11. 
[Agent Orchestration](11_agent_orchestration.py) - An example of how to orchestrate multiple agents for complex tasks.\n12. [Anthropic Agent](12_anthropic_agent.py) - An example of how to use Anthropic's Claude model with the Agents SDK.\n13. [Research Blog System](13_research_blog_system.py) - A complete system where research agents and blog agents work together.\n\n## Usage\n\nEach example is a self-contained single file that can be run using uv:\n\n```bash\nuv run example_name.py --prompt \"Your prompt here\"\n```\n\nYou can also run all examples with the provided test script:\n\n```bash\n./test_all_examples.sh\n```\n\n## Setup\n\n1. Install the dependencies: `./install_dependencies.sh`\n2. Set up your OpenAI API key: `export OPENAI_API_KEY=your_key_here`\n3. For Anthropic examples, set up your Anthropic API key: `export ANTHROPIC_API_KEY=your_key_here`\n\n## Requirements\n\n- Python 3.10+\n- uv package manager\n- OpenAI API key\n- Anthropic API key (for cross-provider examples)\n\n## Testing\n\nAll examples include tests that can be run with:\n\n```bash\nuv run pytest example_name.py\n```\n\n## Important Implementation Notes\n\n- The OpenAI Agents SDK is installed via the `openai-agents` package but imported as `agents`\n- Agent execution is handled through `Runner.run()` for async and `Runner.run_sync()` for sync operations\n- Function tools cannot have default parameter values in their definitions\n- The `RunResult` object has a `final_output` attribute instead of `output`\n- All examples use GPT-4o-mini as the primary model for non-web search functionality\n- Each example includes comprehensive docstrings and comments for clarity\n\n## Documentation\n\nFor more information about the OpenAI Agents SDK, see the [official documentation](https://openai.github.io/openai-agents-python/).\n"
  },
  {
    "path": "openai-agents-examples/fix_imports.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nScript to fix imports in all example files.\n\"\"\"\n\nimport os\nimport re\nimport glob\nimport sys\n\ndef fix_imports_in_file(file_path):\n    \"\"\"Fix imports in a single file.\"\"\"\n    with open(file_path, 'r') as f:\n        content = f.read()\n    \n    # First fix the Runner.run syntax error\n    if 'from agents import Agent, Runner.run' in content:\n        content = content.replace('from agents import Agent, Runner.run', 'from agents import Agent, Runner')\n    \n    if 'from agents import Agent, Runner.run_sync' in content:\n        content = content.replace('from agents import Agent, Runner.run_sync', 'from agents import Agent, Runner')\n    \n    # Replace incorrect imports with correct ones based on documentation\n    replacements = [\n        ('from openai.agents import', 'from agents import'),\n        ('import openai.agents', 'import agents'),\n        ('from openai_agents import', 'from agents import'),\n        ('import openai_agents', 'import agents'),\n        ('from agents import Agent, run_agent', 'from agents import Agent, Runner'),\n        ('from agents import Agent, run_agent_sync', 'from agents import Agent, Runner'),\n        ('result = run_agent_sync', 'result = Runner.run_sync'),\n        ('result = run_agent', 'result = Runner.run'),\n        ('result = await run_agent_sync', 'result = await Runner.run_sync'),\n        ('result = await run_agent', 'result = await Runner.run'),\n        ('result.output', 'result.final_output'),\n        ('return response, context', 'return result.final_output, result.context'),\n        ('responses.append(response)', 'responses.append(result.final_output)'),\n    ]\n    \n    new_content = content\n    for old, new in replacements:\n        new_content = new_content.replace(old, new)\n    \n    # Also update dependencies in the script header\n    if '# dependencies = [' in new_content:\n        # Update to use the correct package name and import 
path\n        new_content = new_content.replace(\n            '\"openai-agents>=0.0.2\",', \n            '\"openai>=1.66.0\",  # Includes agents module'\n        )\n    \n    if new_content != content:\n        with open(file_path, 'w') as f:\n            f.write(new_content)\n        print(f\"Fixed imports in {file_path}\")\n    else:\n        print(f\"No changes needed in {file_path}\")\n\ndef create_agents_symlink():\n    \"\"\"Create a symlink from agents to openai.agents if needed.\"\"\"\n    try:\n        import openai\n        if hasattr(openai, 'agents'):\n            # Create a symlink in site-packages\n            site_packages = next(p for p in sys.path if 'site-packages' in p)\n            agents_path = os.path.join(site_packages, 'agents')\n            if not os.path.exists(agents_path):\n                os.symlink(os.path.join(site_packages, 'openai', 'agents'), agents_path)\n                print(f\"Created symlink from {agents_path} to openai.agents\")\n            else:\n                print(f\"Agents path already exists at {agents_path}\")\n    except (ImportError, StopIteration, OSError) as e:\n        print(f\"Could not create symlink: {e}\")\n\ndef main():\n    \"\"\"Fix imports in all Python files in the directory.\"\"\"\n    # Try to create a symlink for agents\n    create_agents_symlink()\n    \n    # Get all Python files\n    py_files = glob.glob('*.py')\n    \n    for file_path in py_files:\n        if file_path != 'fix_imports.py':  # Skip this script\n            fix_imports_in_file(file_path)\n    \n    print(\"Import fixing complete!\")\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "openai-agents-examples/install_dependencies.sh",
    "content": "#!/bin/bash\n\n# Exit immediately if the install fails so the success message is accurate\nset -e\n\n# Install all required dependencies for the examples\npip install openai-agents rich pytest markdown anthropic opentelemetry-api opentelemetry-sdk pydantic requests\n\n# Set up environment variables if not already set\nif [ -z \"$OPENAI_API_KEY\" ]; then\n    echo \"Warning: OPENAI_API_KEY environment variable not set\"\n    echo \"Please set it with: export OPENAI_API_KEY=your_key_here\"\nfi\n\nif [ -z \"$ANTHROPIC_API_KEY\" ]; then\n    echo \"Warning: ANTHROPIC_API_KEY environment variable not set\"\n    echo \"Please set it with: export ANTHROPIC_API_KEY=your_key_here\"\nfi\n\necho \"Dependencies installed successfully!\"\n"
  },
  {
    "path": "openai-agents-examples/summary.md",
    "content": "# OpenAI Agents SDK Examples Summary\n\n## Overview\nThis repository contains 13 examples demonstrating various features of the OpenAI Agents SDK.\n\n## Key Learnings\n1. The OpenAI Agents SDK is installed via the 'openai-agents' package but imported as 'agents'\n2. Agent execution is handled through Runner.run() for async and Runner.run_sync() for sync operations\n3. Function tools cannot have default parameter values in their definitions\n4. The RunResult object has a final_output attribute instead of output\n5. The SDK supports various capabilities including multi-agent systems, tracing, and guardrails\n\n## Testing\nAll examples include built-in tests that can be run with pytest:\n```bash\nuv run pytest example_name.py\n```\n\n## Running Examples\nEach example can be run using uv:\n```bash\nuv run example_name.py --prompt \"Your prompt here\"\n```\n\n## Environment Setup\n1. Install dependencies: `./install_dependencies.sh`\n2. Set up API keys: `export OPENAI_API_KEY=your_key_here`\n\n## Documentation\nFor more information, see the [official documentation](https://openai.github.io/openai-agents-python/).\n"
  },
  {
    "path": "openai-agents-examples/test_all_examples.sh",
    "content": "#!/bin/bash\n\n# Stop on the first failing example so the final success message is accurate\nset -e\n\n# Set up environment\nexport OPENAI_API_KEY=$GenAI_Keys_OPENAI_API_KEY\nexport ANTHROPIC_API_KEY=$GenAI_Keys_ANTHROPIC_API_KEY\n\n# Test basic examples\necho \"Testing 01_basic_agent.py...\"\nuv run 01_basic_agent.py --prompt \"What is 2+2?\"\n\necho \"Testing 02_multi_agent.py...\"\nuv run 02_multi_agent.py --prompt \"What are the benefits of exercise?\"\n\necho \"Testing 03_sync_agent.py...\"\nuv run 03_sync_agent.py --prompt \"Tell me a fun fact\"\n\necho \"Testing 04_agent_with_tracing.py...\"\nuv run 04_agent_with_tracing.py --prompt \"What is the capital of France?\"\n\necho \"Testing 05_agent_with_function_tools.py...\"\nuv run 05_agent_with_function_tools.py --prompt \"What's the weather in New York?\"\n\necho \"Testing 06_agent_with_custom_tools.py...\"\n# Escape the dollar sign so the shell does not expand \\$7 inside double quotes\nuv run 06_agent_with_custom_tools.py --prompt \"Calculate 15% tip on a \\$75 bill\"\n\necho \"Testing 07_agent_with_handoffs.py...\"\nuv run 07_agent_with_handoffs.py --prompt \"I need help with a coding problem\"\n\necho \"Testing 08_agent_with_agent_as_tool.py...\"\nuv run 08_agent_with_agent_as_tool.py --prompt \"Tell me about climate change\"\n\necho \"Testing 09_agent_with_context_management.py...\"\nuv run 09_agent_with_context_management.py --prompt \"Tell me about Mars\" --follow-up \"What about its moons?\"\n\necho \"Testing 10_agent_with_guardrails.py...\"\nuv run 10_agent_with_guardrails.py --prompt \"Tell me about renewable energy\"\n\necho \"Testing 11_agent_orchestration.py...\"\nuv run 11_agent_orchestration.py --prompt \"Write a short blog post about AI\"\n\necho \"Testing 12_anthropic_agent.py...\"\nuv run 12_anthropic_agent.py --prompt \"What is your favorite book?\"\n\necho \"Testing 13_research_blog_system.py...\"\nuv run 13_research_blog_system.py --topic \"Space Exploration\" --output \"space_blog.md\"\n\necho \"All tests completed successfully!\"\n"
  },
  {
    "path": "openai-agents-examples/test_imports.py",
    "content": "#!/usr/bin/env python3\n\n\"\"\"\nTest script to check the correct import name for the OpenAI Agents SDK.\n\"\"\"\n\ntry:\n    import agents\n    print(\"Successfully imported as 'agents'\")\nexcept ImportError:\n    print(\"Failed to import as 'agents'\")\n\ntry:\n    import openai_agents\n    print(\"Successfully imported as 'openai_agents'\")\nexcept ImportError:\n    print(\"Failed to import as 'openai_agents'\")\n\ntry:\n    import openai.agents\n    print(\"Successfully imported as 'openai.agents'\")\nexcept ImportError:\n    print(\"Failed to import as 'openai.agents'\")\n\ntry:\n    import agents.agent\n    print(\"Successfully imported as 'agents.agent'\")\nexcept ImportError:\n    print(\"Failed to import as 'agents.agent'\")\n"
  },
  {
    "path": "sfa_bash_editor_agent_anthropic_v2.py",
    "content": "#!/usr/bin/env -S uv run --script\n\n# /// script\n# dependencies = [\n#   \"anthropic>=0.45.2\",\n#   \"rich>=13.7.0\",\n# ]\n# ///\n\n\"\"\"\nUsage:\n    # View a file\n    uv run sfa_bash_editor_agent_anthropic_v2.py --prompt \"Show me the first 10 lines of README.md\"\n\n    # Create a new file\n    uv run sfa_bash_editor_agent_anthropic_v2.py --prompt \"Create a new file called hello.txt with 'Hello World!' in it\"\n\n    # Replace text in a file\n    uv run sfa_bash_editor_agent_anthropic_v2.py --prompt \"Create a new file called hello.txt with 'Hello World!' in it. Then update hello.txt to say 'Hello AI Coding World'\"\n\n    # Insert a line in a file\n    uv run sfa_bash_editor_agent_anthropic_v2.py --prompt \"Create a new file called hello2.txt with 'Hello AI Coding World!' in it. Then add a new line 'How are you?' after 'Hello AI Coding World!' in hello2.txt\"\n\n    # Execute a bash command\n    uv run sfa_bash_editor_agent_anthropic_v2.py --prompt \"List all Python files in the current directory sorted by size\"\n\n    # Complete a multi-step task\n    uv run sfa_bash_editor_agent_anthropic_v2.py --prompt \"List all Python files in the current directory sorted by size, then output to a markdown file called python_files_sorted_by_size.md\"\n\"\"\"\n\nimport os\nimport sys\nimport argparse\nimport json\nimport traceback\nfrom rich.console import Console\nfrom rich.panel import Panel\nimport anthropic\n\n# Initialize global console\nconsole = Console()\n\ncurrent_bash_env = os.environ.copy()\n\nAGENT_PROMPT = \"\"\"<purpose>\n    You are an expert integration assistant that can both edit files and execute bash commands.\n</purpose>\n\n<instructions>\n    <instruction>Use the tools provided to accomplish file editing and bash command execution as needed.</instruction>\n    <instruction>When you have completed the user's task, call complete_task to finalize the process.</instruction>\n    <instruction>Provide reasoning with every tool 
call.</instruction>\n    <instruction>When constructing paths use /repo to start from the root of the repository. We'll replace it with the current working directory.</instruction>\n</instructions>\n\n<tools>\n    <tool>\n        <name>view_file</name>\n        <description>View the content of a file</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Why you are viewing the file</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>path</name>\n                <type>string</type>\n                <description>Path of the file to view</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n\n    <tool>\n        <name>create_file</name>\n        <description>Create a new file with given content</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Why the file is being created</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>path</name>\n                <type>string</type>\n                <description>Path where to create the file</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>file_text</name>\n                <type>string</type>\n                <description>Content for the new file</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n\n    <tool>\n        <name>str_replace</name>\n        <description>Replace text in a file</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n           
     <description>Explain why the replacement is needed</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>path</name>\n                <type>string</type>\n                <description>File path</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>old_str</name>\n                <type>string</type>\n                <description>The string to be replaced</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>new_str</name>\n                <type>string</type>\n                <description>The replacement string</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n\n    <tool>\n        <name>insert_line</name>\n        <description>Insert text at a specific line in a file</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Reason for inserting the text</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>path</name>\n                <type>string</type>\n                <description>File path</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>insert_line</name>\n                <type>integer</type>\n                <description>Line number for insertion</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>new_str</name>\n                <type>string</type>\n                <description>The text to insert</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n\n   
 <tool>\n        <name>execute_bash</name>\n        <description>Execute a bash command</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Explain why this command should be executed</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>command</name>\n                <type>string</type>\n                <description>The bash command to run</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n\n    <tool>\n        <name>restart_bash</name>\n        <description>Restart the bash session with a fresh environment</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Explain why the session is being reset</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n\n    <tool>\n        <name>complete_task</name>\n        <description>Finalize the task and exit the agent loop</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Explain why the task is complete</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n</tools>\n\n<user-request>\n    {{user_request}}\n</user-request>\n\"\"\"\n\nroot_path_to_replace_with_cwd = \"/repo\"\n\n\ndef tool_view_file(tool_input: dict) -> dict:\n    try:\n        reasoning = tool_input.get(\"reasoning\")\n        path = tool_input.get(\"path\")\n        path = path.replace(root_path_to_replace_with_cwd, os.getcwd())\n\n        if not path or not path.strip():\n            error_message = \"Invalid file path provided: path is 
empty.\"\n            console.log(f\"[tool_view_file] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        console.log(f\"[tool_view_file] reasoning: {reasoning}, path: {path}\")\n\n        if not os.path.exists(path):\n            error_message = f\"File {path} does not exist\"\n            console.log(f\"[tool_view_file] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        with open(path, \"r\") as f:\n            content = f.read()\n        return {\"result\": content}\n    except Exception as e:\n        console.log(f\"[tool_view_file] Error: {str(e)}\")\n        console.log(traceback.format_exc())\n        return {\"error\": str(e)}\n\n\ndef tool_create_file(tool_input: dict) -> dict:\n    try:\n        reasoning = tool_input.get(\"reasoning\")\n        path = tool_input.get(\"path\")\n        file_text = tool_input.get(\"file_text\")\n\n        path = path.replace(root_path_to_replace_with_cwd, os.getcwd())\n        console.log(f\"[tool_create_file] reasoning: {reasoning}, path: {path}\")\n\n        # Check for an empty or invalid path\n        if not path or not path.strip():\n            error_message = \"Invalid file path provided: path is empty.\"\n            console.log(f\"[tool_create_file] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        dirname = os.path.dirname(path)\n        if not dirname:\n            error_message = (\n                \"Invalid file path provided: directory part of the path is empty.\"\n            )\n            console.log(f\"[tool_create_file] Error: {error_message}\")\n            return {\"error\": error_message}\n        else:\n            os.makedirs(dirname, exist_ok=True)\n\n        with open(path, \"w\") as f:\n            f.write(file_text)\n        return {\"result\": f\"File created at {path}\"}\n    except Exception as e:\n        console.log(f\"[tool_create_file] Error: {str(e)}\")\n        
console.log(traceback.format_exc())\n        return {\"error\": str(e)}\n\n\ndef tool_str_replace(tool_input: dict) -> dict:\n    try:\n        reasoning = tool_input.get(\"reasoning\")\n        path = tool_input.get(\"path\")\n        old_str = tool_input.get(\"old_str\")\n        new_str = tool_input.get(\"new_str\")\n\n        path = path.replace(root_path_to_replace_with_cwd, os.getcwd())\n\n        if not path or not path.strip():\n            error_message = \"Invalid file path provided: path is empty.\"\n            console.log(f\"[tool_str_replace] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        if not old_str:\n            error_message = \"No text to replace specified: old_str is empty.\"\n            console.log(f\"[tool_str_replace] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        console.log(\n            f\"[tool_str_replace] reasoning: {reasoning}, path: {path}, old_str: {old_str}, new_str: {new_str}\"\n        )\n\n        if not os.path.exists(path):\n            error_message = f\"File {path} does not exist\"\n            console.log(f\"[tool_str_replace] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        with open(path, \"r\") as f:\n            content = f.read()\n\n        if old_str not in content:\n            error_message = f\"'{old_str}' not found in {path}\"\n            console.log(f\"[tool_str_replace] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        new_content = content.replace(old_str, new_str)\n        with open(path, \"w\") as f:\n            f.write(new_content)\n        return {\"result\": \"Text replaced successfully\"}\n    except Exception as e:\n        console.log(f\"[tool_str_replace] Error: {str(e)}\")\n        console.log(traceback.format_exc())\n        return {\"error\": str(e)}\n\n\ndef tool_insert_line(tool_input: dict) -> dict:\n    try:\n        reasoning = 
tool_input.get(\"reasoning\")\n        path = tool_input.get(\"path\")\n        insert_line_num = tool_input.get(\"insert_line\")\n        new_str = tool_input.get(\"new_str\")\n\n        path = path.replace(root_path_to_replace_with_cwd, os.getcwd())\n\n        if not path or not path.strip():\n            error_message = \"Invalid file path provided: path is empty.\"\n            console.log(f\"[tool_insert_line] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        if insert_line_num is None:\n            error_message = \"No line number specified: insert_line is missing.\"\n            console.log(f\"[tool_insert_line] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        if not new_str:\n            error_message = \"No text to insert specified: new_str is empty.\"\n            console.log(f\"[tool_insert_line] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        console.log(\n            f\"[tool_insert_line] reasoning: {reasoning}, path: {path}, insert_line: {insert_line_num}, new_str: {new_str}\"\n        )\n\n        if not os.path.exists(path):\n            error_message = f\"File {path} does not exist\"\n            console.log(f\"[tool_insert_line] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        with open(path, \"r\") as f:\n            lines = f.readlines()\n\n        # Check that the index is within acceptable bounds (allowing insertion at end)\n        if insert_line_num < 0 or insert_line_num > len(lines):\n            error_message = (\n                f\"Insert line number {insert_line_num} out of range (0-{len(lines)}).\"\n            )\n            console.log(f\"[tool_insert_line] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        lines.insert(insert_line_num, new_str + \"\\n\")\n        with open(path, \"w\") as f:\n            f.writelines(lines)\n        return {\"result\": \"Line 
inserted successfully\"}\n    except Exception as e:\n        console.log(f\"[tool_insert_line] Error: {str(e)}\")\n        console.log(traceback.format_exc())\n        return {\"error\": str(e)}\n\n\ndef tool_execute_bash(tool_input: dict) -> dict:\n    try:\n        reasoning = tool_input.get(\"reasoning\")\n        command = tool_input.get(\"command\")\n\n        command = command.replace(root_path_to_replace_with_cwd, os.getcwd())\n\n        if not command or not command.strip():\n            error_message = \"No command specified: command is empty.\"\n            console.log(f\"[tool_execute_bash] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        console.log(f\"[tool_execute_bash] reasoning: {reasoning}, command: {command}\")\n        import subprocess\n\n        result = subprocess.run(\n            command, shell=True, capture_output=True, text=True, env=current_bash_env\n        )\n        if result.returncode != 0:\n            error_message = (\n                result.stderr.strip()\n                or \"Command execution failed with non-zero exit code.\"\n            )\n            console.log(f\"[tool_execute_bash] Error: {error_message}\")\n            return {\"error\": error_message}\n        return {\"result\": result.stdout.strip()}\n    except Exception as e:\n        console.log(f\"[tool_execute_bash] Error: {str(e)}\")\n        console.log(traceback.format_exc())\n        return {\"error\": str(e)}\n\n\ndef tool_restart_bash(tool_input: dict) -> dict:\n    global current_bash_env\n    try:\n        reasoning = tool_input.get(\"reasoning\")\n\n        if not reasoning:\n            error_message = \"No reasoning provided for restarting bash session.\"\n            console.log(f\"[tool_restart_bash] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        console.log(f\"[tool_restart_bash] reasoning: {reasoning}\")\n        current_bash_env = os.environ.copy()\n        return 
{\"result\": \"Bash session restarted.\"}\n    except Exception as e:\n        console.log(f\"[tool_restart_bash] Error: {str(e)}\")\n        console.log(traceback.format_exc())\n        return {\"error\": str(e)}\n\n\ndef tool_complete_task(tool_input: dict) -> dict:\n    try:\n        reasoning = tool_input.get(\"reasoning\")\n\n        if not reasoning:\n            error_message = \"No reasoning provided for task completion.\"\n            console.log(f\"[tool_complete_task] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        console.log(f\"[tool_complete_task] reasoning: {reasoning}\")\n        return {\"result\": \"Task completed\"}\n    except Exception as e:\n        console.log(f\"[tool_complete_task] Error: {str(e)}\")\n        console.log(traceback.format_exc())\n        return {\"error\": str(e)}\n\n\ndef main():\n    parser = argparse.ArgumentParser(\n        description=\"Bash and Editor Agent using Anthropic API\"\n    )\n    parser.add_argument(\"-p\", \"--prompt\", required=True, help=\"The prompt to execute\")\n    parser.add_argument(\n        \"-c\", \"--compute\", type=int, default=10, help=\"Maximum compute loops\"\n    )\n    args = parser.parse_args()\n\n    ANTHROPIC_API_KEY = os.getenv(\"ANTHROPIC_API_KEY\")\n    if not ANTHROPIC_API_KEY:\n        Console().print(\n            \"[red]Error: ANTHROPIC_API_KEY environment variable is not set.[/red]\"\n        )\n        sys.exit(1)\n\n    client = anthropic.Anthropic(api_key=ANTHROPIC_API_KEY)\n\n    # Prepare the initial message using the detailed prompt\n    initial_message = AGENT_PROMPT.replace(\"{{user_request}}\", args.prompt)\n    messages = [{\"role\": \"user\", \"content\": initial_message}]\n\n    compute_iterations = 0\n\n    # Begin the agent loop.\n    # This loop processes Anthropic API responses, executes tool calls for both editor and bash commands,\n    # and logs detailed information via rich logging.\n    while compute_iterations < 
args.compute:\n        compute_iterations += 1\n        console.rule(f\"[yellow]Agent Loop {compute_iterations}/{args.compute}[/yellow]\")\n        try:\n            response = client.messages.create(\n                model=\"claude-3-5-sonnet-20241022\",\n                max_tokens=1024,\n                messages=messages,\n                tools=[\n                    {\n                        \"name\": \"view_file\",\n                        \"description\": \"View the content of a file\",\n                        \"input_schema\": {\n                            \"type\": \"object\",\n                            \"properties\": {\n                                \"reasoning\": {\n                                    \"type\": \"string\",\n                                    \"description\": \"Why view the file\",\n                                },\n                                \"path\": {\"type\": \"string\", \"description\": \"File path\"},\n                            },\n                            \"required\": [\"reasoning\", \"path\"],\n                        },\n                    },\n                    {\n                        \"name\": \"create_file\",\n                        \"description\": \"Create a new file with given content\",\n                        \"input_schema\": {\n                            \"type\": \"object\",\n                            \"properties\": {\n                                \"reasoning\": {\n                                    \"type\": \"string\",\n                                    \"description\": \"Why create the file\",\n                                },\n                                \"path\": {\"type\": \"string\", \"description\": \"File path\"},\n                                \"file_text\": {\n                                    \"type\": \"string\",\n                                    \"description\": \"Content for the file\",\n                                },\n                            },\n 
                           \"required\": [\"reasoning\", \"path\", \"file_text\"],\n                        },\n                    },\n                    {\n                        \"name\": \"str_replace\",\n                        \"description\": \"Replace text in a file\",\n                        \"input_schema\": {\n                            \"type\": \"object\",\n                            \"properties\": {\n                                \"reasoning\": {\n                                    \"type\": \"string\",\n                                    \"description\": \"Reason for replacement\",\n                                },\n                                \"path\": {\"type\": \"string\", \"description\": \"File path\"},\n                                \"old_str\": {\n                                    \"type\": \"string\",\n                                    \"description\": \"Text to replace\",\n                                },\n                                \"new_str\": {\n                                    \"type\": \"string\",\n                                    \"description\": \"Replacement text\",\n                                },\n                            },\n                            \"required\": [\"reasoning\", \"path\", \"old_str\", \"new_str\"],\n                        },\n                    },\n                    {\n                        \"name\": \"insert_line\",\n                        \"description\": \"Insert text at a specific line in a file\",\n                        \"input_schema\": {\n                            \"type\": \"object\",\n                            \"properties\": {\n                                \"reasoning\": {\n                                    \"type\": \"string\",\n                                    \"description\": \"Reason for insertion\",\n                                },\n                                \"path\": {\"type\": \"string\", \"description\": \"File path\"},\n   
                             \"insert_line\": {\n                                    \"type\": \"integer\",\n                                    \"description\": \"Line number\",\n                                },\n                                \"new_str\": {\n                                    \"type\": \"string\",\n                                    \"description\": \"Text to insert\",\n                                },\n                            },\n                            \"required\": [\"reasoning\", \"path\", \"insert_line\", \"new_str\"],\n                        },\n                    },\n                    {\n                        \"name\": \"execute_bash\",\n                        \"description\": \"Execute a bash command\",\n                        \"input_schema\": {\n                            \"type\": \"object\",\n                            \"properties\": {\n                                \"reasoning\": {\n                                    \"type\": \"string\",\n                                    \"description\": \"Reason for command execution\",\n                                },\n                                \"command\": {\n                                    \"type\": \"string\",\n                                    \"description\": \"Bash command\",\n                                },\n                            },\n                            \"required\": [\"reasoning\", \"command\"],\n                        },\n                    },\n                    {\n                        \"name\": \"restart_bash\",\n                        \"description\": \"Restart the bash session with fresh environment\",\n                        \"input_schema\": {\n                            \"type\": \"object\",\n                            \"properties\": {\n                                \"reasoning\": {\n                                    \"type\": \"string\",\n                                    \"description\": \"Why to 
restart bash\",\n                                }\n                            },\n                            \"required\": [\"reasoning\"],\n                        },\n                    },\n                    {\n                        \"name\": \"complete_task\",\n                        \"description\": \"Complete the task and exit the agent loop\",\n                        \"input_schema\": {\n                            \"type\": \"object\",\n                            \"properties\": {\n                                \"reasoning\": {\n                                    \"type\": \"string\",\n                                    \"description\": \"Why the task is complete\",\n                                }\n                            },\n                            \"required\": [\"reasoning\"],\n                        },\n                    },\n                ],\n                tool_choice={\"type\": \"any\"},\n            )\n        except Exception as e:\n            console.print(f\"[red]Error in API call: {str(e)}[/red]\")\n            console.print(traceback.format_exc())\n            break\n\n        console.log(\"[green]API Response:[/green]\", response.model_dump())\n\n        tool_calls = [\n            block\n            for block in response.content\n            if hasattr(block, \"type\") and block.type == \"tool_use\"\n        ]\n        if tool_calls:\n            # Map tool names to their corresponding functions\n            tool_functions = {\n                \"view_file\": tool_view_file,\n                \"create_file\": tool_create_file,\n                \"str_replace\": tool_str_replace,\n                \"insert_line\": tool_insert_line,\n                \"execute_bash\": tool_execute_bash,\n                \"restart_bash\": tool_restart_bash,\n                \"complete_task\": tool_complete_task,\n            }\n            # Append the assistant turn once, before handling its tool calls;\n            # appending it per tool call would duplicate tool_use blocks and\n            # produce an invalid conversation when multiple tools are called.\n            messages.append({\"role\": \"assistant\", \"content\": 
response.content})\n            for tool in tool_calls:\n                console.print(\n                    f\"[blue]Tool Call:[/blue] {tool.name}({json.dumps(tool.input)})\"\n                )\n                func = tool_functions.get(tool.name)\n                if func:\n                    output = func(tool.input)\n                    result_text = output.get(\"error\") or output.get(\"result\", \"\")\n                    console.print(f\"[green]Tool Result:[/green] {result_text}\")\n                    messages.append(\n                        {\n                            \"role\": \"user\",\n                            \"content\": [\n                                {\n                                    \"type\": \"tool_result\",\n                                    \"tool_use_id\": tool.id,\n                                    \"content\": result_text,\n                                }\n                            ],\n                        }\n                    )\n                    if tool.name == \"complete_task\":\n                        console.print(\n                            \"[green]Task completed. Exiting agent loop.[/green]\"\n                        )\n                        return\n                else:\n                    raise ValueError(f\"Unknown tool: {tool.name}\")\n\n    console.print(\"[yellow]Reached compute limit without completing task.[/yellow]\")\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "sfa_bash_editor_agent_anthropic_v3.py",
    "content": "#!/usr/bin/env -S uv run --script\n\n# /// script\n# dependencies = [\n#   \"anthropic>=0.45.2\",\n#   \"rich>=13.7.0\",\n# ]\n# ///\n\n\"\"\"\nUsage:\n    # View a file\n    uv run sfa_bash_editor_agent_anthropic_v3.py --prompt \"Show me the first 10 lines of README.md\"\n\n    # Create a new file\n    uv run sfa_bash_editor_agent_anthropic_v3.py --prompt \"Create a new file called hello.txt with 'Hello World!' in it\"\n\n    # Replace text in a file\n    uv run sfa_bash_editor_agent_anthropic_v3.py --prompt \"Create a new file called hello.txt with 'Hello World!' in it. Then update hello.txt to say 'Hello AI Coding World'\"\n\n    # Insert a line in a file\n    uv run sfa_bash_editor_agent_anthropic_v3.py --prompt \"Create a new file called hello2.txt with 'Hello AI Coding World!' in it. Then add a new line 'How are you?' after 'Hello AI Coding World!' in hello2.txt\"\n\n    # Execute a bash command\n    uv run sfa_bash_editor_agent_anthropic_v3.py --prompt \"List all Python files in the current directory sorted by size\"\n\n    # Complete a multi-step task\n    uv run sfa_bash_editor_agent_anthropic_v3.py --prompt \"List all Python files in the current directory sorted by size, then output to a markdown file called python_files_sorted_by_size.md\"\n\"\"\"\n\nimport os\nimport sys\nimport argparse\nimport json\nimport traceback\nfrom rich.console import Console\nfrom rich.panel import Panel\nimport anthropic\n\n# Initialize global console\nconsole = Console()\n\ncurrent_bash_env = os.environ.copy()\n\nAGENT_PROMPT = \"\"\"<purpose>\n    You are an expert integration assistant that can both edit files and execute bash commands.\n</purpose>\n\n<instructions>\n    <instruction>Use the tools provided to accomplish file editing and bash command execution as needed.</instruction>\n    <instruction>When you have completed the user's task, call complete_task to finalize the process.</instruction>\n    <instruction>Provide reasoning with every tool 
call.</instruction>\n    <instruction>When constructing paths use /repo to start from the root of the repository. We'll replace it with the current working directory.</instruction>\n</instructions>\n\n<tools>\n    <tool>\n        <name>view_file</name>\n        <description>View the content of a file</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Why you are viewing the file</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>path</name>\n                <type>string</type>\n                <description>Path of the file to view</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n\n    <tool>\n        <name>create_file</name>\n        <description>Create a new file with given content</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Why the file is being created</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>path</name>\n                <type>string</type>\n                <description>Path where to create the file</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>file_text</name>\n                <type>string</type>\n                <description>Content for the new file</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n\n    <tool>\n        <name>str_replace</name>\n        <description>Replace text in a file</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n           
     <description>Explain why the replacement is needed</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>path</name>\n                <type>string</type>\n                <description>File path</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>old_str</name>\n                <type>string</type>\n                <description>The string to be replaced</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>new_str</name>\n                <type>string</type>\n                <description>The replacement string</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n\n    <tool>\n        <name>insert_line</name>\n        <description>Insert text at a specific line in a file</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Reason for inserting the text</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>path</name>\n                <type>string</type>\n                <description>File path</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>insert_line</name>\n                <type>integer</type>\n                <description>Line number for insertion</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>new_str</name>\n                <type>string</type>\n                <description>The text to insert</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n\n   
 <tool>\n        <name>execute_bash</name>\n        <description>Execute a bash command</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Explain why this command should be executed</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>command</name>\n                <type>string</type>\n                <description>The bash command to run</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n\n    <tool>\n        <name>restart_bash</name>\n        <description>Restart the bash session with a fresh environment</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Explain why the session is being reset</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n\n    <tool>\n        <name>complete_task</name>\n        <description>Finalize the task and exit the agent loop</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Explain why the task is complete</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n</tools>\n\n<user-request>\n    {{user_request}}\n</user-request>\n\"\"\"\n\nroot_path_to_replace_with_cwd = \"/repo\"\n\n\ndef tool_view_file(tool_input: dict) -> dict:\n    try:\n        reasoning = tool_input.get(\"reasoning\")\n        path = tool_input.get(\"path\")\n        if path:\n            path = path.replace(root_path_to_replace_with_cwd, os.getcwd())\n\n        if not path or not path.strip():\n            error_message = \"Invalid file path 
provided: path is empty.\"\n            console.log(f\"[tool_view_file] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        console.log(f\"[tool_view_file] reasoning: {reasoning}, path: {path}\")\n\n        if not os.path.exists(path):\n            error_message = f\"File {path} does not exist\"\n            console.log(f\"[tool_view_file] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        with open(path, \"r\") as f:\n            content = f.read()\n        return {\"result\": content}\n    except Exception as e:\n        console.log(f\"[tool_view_file] Error: {str(e)}\")\n        console.log(traceback.format_exc())\n        return {\"error\": str(e)}\n\n\ndef tool_create_file(tool_input: dict) -> dict:\n    try:\n        reasoning = tool_input.get(\"reasoning\")\n        path = tool_input.get(\"path\")\n        file_text = tool_input.get(\"file_text\")\n\n        if path:\n            path = path.replace(root_path_to_replace_with_cwd, os.getcwd())\n        console.log(f\"[tool_create_file] reasoning: {reasoning}, path: {path}\")\n\n        # Check for an empty or invalid path\n        if not path or not path.strip():\n            error_message = \"Invalid file path provided: path is empty.\"\n            console.log(f\"[tool_create_file] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        # A bare filename has no directory component and is written to the\n        # current directory; only create parent directories when one exists.\n        dirname = os.path.dirname(path)\n        if dirname:\n            os.makedirs(dirname, exist_ok=True)\n\n        with open(path, \"w\") as f:\n            f.write(file_text or \"\")\n        return {\"result\": f\"File created at {path}\"}\n    except Exception as e:\n        console.log(f\"[tool_create_file] Error: 
{str(e)}\")\n        console.log(traceback.format_exc())\n        return {\"error\": str(e)}\n\n\ndef tool_str_replace(tool_input: dict) -> dict:\n    try:\n        reasoning = tool_input.get(\"reasoning\")\n        path = tool_input.get(\"path\")\n        old_str = tool_input.get(\"old_str\")\n        new_str = tool_input.get(\"new_str\")\n\n        if path:\n            path = path.replace(root_path_to_replace_with_cwd, os.getcwd())\n\n        if not path or not path.strip():\n            error_message = \"Invalid file path provided: path is empty.\"\n            console.log(f\"[tool_str_replace] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        if not old_str:\n            error_message = \"No text to replace specified: old_str is empty.\"\n            console.log(f\"[tool_str_replace] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        console.log(\n            f\"[tool_str_replace] reasoning: {reasoning}, path: {path}, old_str: {old_str}, new_str: {new_str}\"\n        )\n\n        if not os.path.exists(path):\n            error_message = f\"File {path} does not exist\"\n            console.log(f\"[tool_str_replace] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        with open(path, \"r\") as f:\n            content = f.read()\n\n        if old_str not in content:\n            error_message = f\"'{old_str}' not found in {path}\"\n            console.log(f\"[tool_str_replace] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        new_content = content.replace(old_str, new_str or \"\")\n        with open(path, \"w\") as f:\n            f.write(new_content)\n        return {\"result\": \"Text replaced successfully\"}\n    except Exception as e:\n        console.log(f\"[tool_str_replace] Error: {str(e)}\")\n        console.log(traceback.format_exc())\n        return {\"error\": str(e)}\n\n\ndef tool_insert_line(tool_input: dict) -> dict:\n  
  try:\n        reasoning = tool_input.get(\"reasoning\")\n        path = tool_input.get(\"path\")\n        insert_line_num = tool_input.get(\"insert_line\")\n        new_str = tool_input.get(\"new_str\")\n\n        if path:\n            path = path.replace(root_path_to_replace_with_cwd, os.getcwd())\n\n        if not path or not path.strip():\n            error_message = \"Invalid file path provided: path is empty.\"\n            console.log(f\"[tool_insert_line] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        if insert_line_num is None:\n            error_message = \"No line number specified: insert_line is missing.\"\n            console.log(f\"[tool_insert_line] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        if not new_str:\n            error_message = \"No text to insert specified: new_str is empty.\"\n            console.log(f\"[tool_insert_line] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        console.log(\n            f\"[tool_insert_line] reasoning: {reasoning}, path: {path}, insert_line: {insert_line_num}, new_str: {new_str}\"\n        )\n\n        if not os.path.exists(path):\n            error_message = f\"File {path} does not exist\"\n            console.log(f\"[tool_insert_line] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        with open(path, \"r\") as f:\n            lines = f.readlines()\n\n        # Check that the index is within acceptable bounds (allowing insertion at end)\n        if insert_line_num < 0 or insert_line_num > len(lines):\n            error_message = (\n                f\"Insert line number {insert_line_num} out of range (0-{len(lines)}).\"\n            )\n            console.log(f\"[tool_insert_line] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        lines.insert(insert_line_num, new_str + \"\\n\")\n        with open(path, \"w\") as f:\n            
f.writelines(lines)\n        return {\"result\": \"Line inserted successfully\"}\n    except Exception as e:\n        console.log(f\"[tool_insert_line] Error: {str(e)}\")\n        console.log(traceback.format_exc())\n        return {\"error\": str(e)}\n\n\ndef tool_execute_bash(tool_input: dict) -> dict:\n    try:\n        reasoning = tool_input.get(\"reasoning\")\n        command = tool_input.get(\"command\")\n\n        if command:\n            command = command.replace(root_path_to_replace_with_cwd, os.getcwd())\n\n        if not command or not command.strip():\n            error_message = \"No command specified: command is empty.\"\n            console.log(f\"[tool_execute_bash] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        console.log(f\"[tool_execute_bash] reasoning: {reasoning}, command: {command}\")\n        import subprocess\n\n        result = subprocess.run(\n            command, shell=True, capture_output=True, text=True, env=current_bash_env\n        )\n        if result.returncode != 0:\n            error_message = (\n                result.stderr.strip()\n                or \"Command execution failed with non-zero exit code.\"\n            )\n            console.log(f\"[tool_execute_bash] Error: {error_message}\")\n            return {\"error\": error_message}\n        return {\"result\": result.stdout.strip()}\n    except Exception as e:\n        console.log(f\"[tool_execute_bash] Error: {str(e)}\")\n        console.log(traceback.format_exc())\n        return {\"error\": str(e)}\n\n\ndef tool_restart_bash(tool_input: dict) -> dict:\n    global current_bash_env\n    try:\n        reasoning = tool_input.get(\"reasoning\")\n\n        if not reasoning:\n            error_message = \"No reasoning provided for restarting bash session.\"\n            console.log(f\"[tool_restart_bash] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        console.log(f\"[tool_restart_bash] reasoning: 
{reasoning}\")\n        current_bash_env = os.environ.copy()\n        return {\"result\": \"Bash session restarted.\"}\n    except Exception as e:\n        console.log(f\"[tool_restart_bash] Error: {str(e)}\")\n        console.log(traceback.format_exc())\n        return {\"error\": str(e)}\n\n\ndef tool_complete_task(tool_input: dict) -> dict:\n    try:\n        reasoning = tool_input.get(\"reasoning\")\n\n        if not reasoning:\n            error_message = \"No reasoning provided for task completion.\"\n            console.log(f\"[tool_complete_task] Error: {error_message}\")\n            return {\"error\": error_message}\n\n        console.log(f\"[tool_complete_task] reasoning: {reasoning}\")\n        return {\"result\": \"Task completed\"}\n    except Exception as e:\n        console.log(f\"[tool_complete_task] Error: {str(e)}\")\n        console.log(traceback.format_exc())\n        return {\"error\": str(e)}\n\n\ndef main():\n    parser = argparse.ArgumentParser(\n        description=\"Bash and Editor Agent using Anthropic API\"\n    )\n    parser.add_argument(\"-p\", \"--prompt\", required=True, help=\"The prompt to execute\")\n    parser.add_argument(\n        \"-c\", \"--compute\", type=int, default=10, help=\"Maximum compute loops\"\n    )\n    args = parser.parse_args()\n\n    ANTHROPIC_API_KEY = os.getenv(\"ANTHROPIC_API_KEY\")\n    if not ANTHROPIC_API_KEY:\n        Console().print(\n            \"[red]Error: ANTHROPIC_API_KEY environment variable is not set.[/red]\"\n        )\n        sys.exit(1)\n\n    client = anthropic.Anthropic(api_key=ANTHROPIC_API_KEY)\n\n    # Prepare the initial message using the detailed prompt\n    initial_message = AGENT_PROMPT.replace(\"{{user_request}}\", args.prompt)\n    messages = [{\"role\": \"user\", \"content\": initial_message}]\n\n    compute_iterations = 0\n\n    # Define tools for the agent\n    tools = [\n        {\n            \"name\": \"view_file\",\n            \"description\": \"View the content of a 
file\",\n            \"input_schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                    \"reasoning\": {\n                        \"type\": \"string\",\n                        \"description\": \"Why view the file\",\n                    },\n                    \"path\": {\"type\": \"string\", \"description\": \"File path\"},\n                },\n                \"required\": [\"reasoning\", \"path\"],\n            },\n        },\n        {\n            \"name\": \"create_file\",\n            \"description\": \"Create a new file with given content\",\n            \"input_schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                    \"reasoning\": {\n                        \"type\": \"string\",\n                        \"description\": \"Why create the file\",\n                    },\n                    \"path\": {\"type\": \"string\", \"description\": \"File path\"},\n                    \"file_text\": {\n                        \"type\": \"string\",\n                        \"description\": \"Content for the file\",\n                    },\n                },\n                \"required\": [\"reasoning\", \"path\", \"file_text\"],\n            },\n        },\n        {\n            \"name\": \"str_replace\",\n            \"description\": \"Replace text in a file\",\n            \"input_schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                    \"reasoning\": {\n                        \"type\": \"string\",\n                        \"description\": \"Reason for replacement\",\n                    },\n                    \"path\": {\"type\": \"string\", \"description\": \"File path\"},\n                    \"old_str\": {\n                        \"type\": \"string\",\n                        \"description\": \"Text to replace\",\n                    },\n                    \"new_str\": {\n                        \"type\": 
\"string\",\n                        \"description\": \"Replacement text\",\n                    },\n                },\n                \"required\": [\"reasoning\", \"path\", \"old_str\", \"new_str\"],\n            },\n        },\n        {\n            \"name\": \"insert_line\",\n            \"description\": \"Insert text at a specific line in a file\",\n            \"input_schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                    \"reasoning\": {\n                        \"type\": \"string\",\n                        \"description\": \"Reason for insertion\",\n                    },\n                    \"path\": {\"type\": \"string\", \"description\": \"File path\"},\n                    \"insert_line\": {\n                        \"type\": \"integer\",\n                        \"description\": \"Line number\",\n                    },\n                    \"new_str\": {\n                        \"type\": \"string\",\n                        \"description\": \"Text to insert\",\n                    },\n                },\n                \"required\": [\"reasoning\", \"path\", \"insert_line\", \"new_str\"],\n            },\n        },\n        {\n            \"name\": \"execute_bash\",\n            \"description\": \"Execute a bash command\",\n            \"input_schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                    \"reasoning\": {\n                        \"type\": \"string\",\n                        \"description\": \"Reason for command execution\",\n                    },\n                    \"command\": {\n                        \"type\": \"string\",\n                        \"description\": \"Bash command\",\n                    },\n                },\n                \"required\": [\"reasoning\", \"command\"],\n            },\n        },\n        {\n            \"name\": \"restart_bash\",\n            \"description\": \"Restart the bash session with 
fresh environment\",\n            \"input_schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                    \"reasoning\": {\n                        \"type\": \"string\",\n                        \"description\": \"Why to restart bash\",\n                    }\n                },\n                \"required\": [\"reasoning\"],\n            },\n        },\n        {\n            \"name\": \"complete_task\",\n            \"description\": \"Complete the task and exit the agent loop\",\n            \"input_schema\": {\n                \"type\": \"object\",\n                \"properties\": {\n                    \"reasoning\": {\n                        \"type\": \"string\",\n                        \"description\": \"Why the task is complete\",\n                    }\n                },\n                \"required\": [\"reasoning\"],\n            },\n        },\n    ]\n\n    # Begin the agent loop.\n    # This loop processes Anthropic API responses, executes tool calls for both editor and bash commands,\n    # and logs detailed information via rich logging.\n    while compute_iterations < args.compute:\n        compute_iterations += 1\n        console.rule(f\"[yellow]Agent Loop {compute_iterations}/{args.compute}[/yellow]\")\n        try:\n            response = client.messages.create(\n                model=\"claude-3-7-sonnet-20250219\",\n                max_tokens=4000,  # Must exceed thinking.budget_tokens\n                thinking={\n                    \"type\": \"enabled\",\n                    \"budget_tokens\": 2000  # Token budget reserved for extended thinking\n                },\n                messages=messages,\n                tools=tools,\n            )\n        except Exception as e:\n            console.print(f\"[red]Error in API call: {str(e)}[/red]\")\n            console.print(traceback.format_exc())\n            break\n\n        console.log(\"[green]API Response:[/green]\", 
response.model_dump())\n\n        tool_calls = [\n            block\n            for block in response.content\n            if hasattr(block, \"type\") and block.type == \"tool_use\"\n        ]\n        if tool_calls:\n            # Map tool names to their corresponding functions\n            tool_functions = {\n                \"view_file\": tool_view_file,\n                \"create_file\": tool_create_file,\n                \"str_replace\": tool_str_replace,\n                \"insert_line\": tool_insert_line,\n                \"execute_bash\": tool_execute_bash,\n                \"restart_bash\": tool_restart_bash,\n                \"complete_task\": tool_complete_task,\n            }\n            # Add the assistant's response to messages\n            messages.append({\"role\": \"assistant\", \"content\": response.content})\n            \n            for tool in tool_calls:\n                console.print(\n                    f\"[blue]Tool Call:[/blue] {tool.name}({json.dumps(tool.input)})\"\n                )\n                func = tool_functions.get(tool.name)\n                if func:\n                    output = func(tool.input)\n                    result_text = output.get(\"error\") or output.get(\"result\", \"\")\n                    console.print(f\"[green]Tool Result:[/green] {result_text}\")\n                    \n                    # Format the tool result message according to Claude API requirements\n                    tool_result_message = {\n                        \"role\": \"user\",\n                        \"content\": [\n                            {\n                                \"type\": \"tool_result\",\n                                \"tool_use_id\": tool.id,\n                                \"content\": result_text\n                            }\n                        ]\n                    }\n                    messages.append(tool_result_message)\n                    if tool.name == \"complete_task\":\n                        
console.print(\n                            \"[green]Task completed. Exiting agent loop.[/green]\"\n                        )\n                        return\n                else:\n                    raise ValueError(f\"Unknown tool: {tool.name}\")\n\n    console.print(\"[yellow]Reached compute limit without completing task.[/yellow]\")\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "sfa_codebase_context_agent_v3.py",
    "content": "#!/usr/bin/env -S uv run --script\n\n# /// script\n# dependencies = [\n#   \"anthropic>=0.47.1\",\n#   \"rich>=13.7.0\",\n#   \"pydantic>=2.0.0\",\n# ]\n# ///\n\n\"\"\"\nUsage:\n    uv run sfa_codebase_context_agent_v3.py \\\n        --prompt \"Let's build a new metaprompt sfa agent using anthropic claude 3.7\" \\\n        --directory \".\" \\\n        --globs \"*.py\" \\\n        --extensions py md \\\n        --limit 10 \\\n        --file-line-limit 1000 \\\n        --output-file relevant_files.json \\\n        --compute 15\n        \n    # Find files related to DuckDB implementations\n    uv run sfa_codebase_context_agent_v3.py \\\n        --prompt \"Find all files related to DuckDB agent implementations\" \\\n        --file-line-limit 1000 \\\n        --extensions py\n        \n    # Find all files related to Anthropic-powered agents\n    uv run sfa_codebase_context_agent_v3.py \\\n        --prompt \"Identify all agents that use the new Claude 3.7 model\"\n\n    \n\"\"\"\n\nimport os\nimport sys\nimport json\nimport argparse\nimport subprocess\nimport time\nimport fnmatch\nimport concurrent.futures\nfrom typing import List, Dict, Any\nfrom rich.console import Console\nfrom anthropic import Anthropic\nfrom rich.table import Table\nfrom rich.panel import Panel\n\n# Initialize rich console\nconsole = Console()\n\n# Constants\nTHINKING_BUDGET_TOKENS_PER_FILE = 2000\nBATCH_SIZE = 10\nMAX_RETRIES = 3\nRETRY_WAIT = 1\n\n# Global variables\nUSER_PROMPT = \"\"\nRELEVANT_FILES = []\nOUTPUT_FILE = \"output_relevant_files.json\"\nINPUT_TOKENS = 0  # To track input tokens to Anthropic API\nOUTPUT_TOKENS = 0  # To track output tokens from Anthropic API\n\n\ndef git_list_files(\n    reasoning: str,\n    directory: str = os.getcwd(),\n    globs: List[str] = [],\n    extensions: List[str] = [],\n) -> List[str]:\n    \"\"\"Returns a list of files in the repository, respecting gitignore.\n\n    Args:\n        reasoning: Explanation of why we're listing files\n     
   directory: Directory to search in (defaults to current working directory)\n        globs: List of glob patterns to filter files (optional)\n        extensions: List of file extensions to filter files (optional)\n\n    Returns:\n        List of file paths as strings\n    \"\"\"\n    try:\n        console.log(f\"[blue]Git List Files Tool[/blue] - Reasoning: {reasoning}\")\n        console.log(\n            f\"[dim]Directory: {directory}, Globs: {globs}, Extensions: {extensions}[/dim]\"\n        )\n\n        # Get all files tracked by git. Running with cwd=directory avoids\n        # mutating the process working directory, which os.chdir would leave\n        # changed if an exception occurred before changing back.\n        result = subprocess.run(\n            \"git ls-files\",\n            shell=True,\n            text=True,\n            capture_output=True,\n            cwd=directory,\n        )\n\n        files = result.stdout.strip().split(\"\\n\") if result.stdout.strip() else []\n\n        # Filter by globs if provided\n        if globs:\n            filtered_files = []\n            for pattern in globs:\n                for file in files:\n                    if fnmatch.fnmatch(file, pattern):\n                        filtered_files.append(file)\n            files = filtered_files\n\n        # Filter by extensions if provided\n        if extensions:\n            files = [\n                file\n                for file in files\n                if any(file.endswith(f\".{ext}\") for ext in extensions)\n            ]\n\n        # Paths are kept relative to the searched directory\n        console.log(f\"[dim]Found {len(files)} files[/dim]\")\n        return files\n    except Exception as e:\n        console.log(f\"[red]Error listing files: {str(e)}[/red]\")\n        return []\n\n\ndef check_file_paths_line_length(\n    reasoning: str, file_paths: List[str], file_line_limit: int = 500\n) -> Dict[str, 
int]:\n    \"\"\"Checks the line length of each file and returns a dictionary of file paths and their line counts.\n\n    Args:\n        reasoning: Explanation of why we're checking line lengths\n        file_paths: List of file paths to check\n        file_line_limit: Maximum number of lines per file\n\n    Returns:\n        Dictionary mapping file paths to their total line counts\n    \"\"\"\n    try:\n        console.log(\n            f\"[blue]Check File Paths Line Length Tool[/blue] - Reasoning: {reasoning}\"\n        )\n        console.log(\n            f\"[dim]Checking {len(file_paths)} files with line limit {file_line_limit}[/dim]\"\n        )\n\n        result = {}\n        for file_path in file_paths:\n            try:\n                with open(file_path, \"r\", encoding=\"utf-8\") as f:\n                    lines = f.readlines()\n                    line_count = len(lines)\n                    if line_count <= file_line_limit:\n                        result[file_path] = line_count\n                    else:\n                        console.log(\n                            f\"[yellow]Skipping {file_path}: {line_count} lines exceed limit of {file_line_limit}[/yellow]\"\n                        )\n            except Exception as e:\n                console.log(f\"[red]Error reading file {file_path}: {str(e)}[/red]\")\n\n        console.log(f\"[dim]Found {len(result)} files within line limit[/dim]\")\n        return result\n    except Exception as e:\n        console.log(f\"[red]Error checking file paths: {str(e)}[/red]\")\n        return {}\n\n\ndef determine_if_file_is_relevant(prompt: str, file_path: str, client: Anthropic) -> Dict[str, Any]:  # type: ignore\n    \"\"\"Determines if a single file is relevant to the prompt.\n\n    Args:\n        prompt: The user prompt\n        file_path: Path to the file to check\n        client: Anthropic client\n\n    Returns:\n        Dictionary with reasoning and is_relevant flag\n    \"\"\"\n    result = {\n        
\"reasoning\": \"Error: Could not process file\",\n        \"file_path\": file_path,\n        \"is_relevant\": False,\n    }\n    try:\n        with open(file_path, \"r\", encoding=\"utf-8\") as f:\n            file_content = f.read()\n\n        # Truncate file content if it's too long\n        if len(file_content) > 10000:\n            file_content = file_content[:10000] + \"... [content truncated]\"\n\n        file_prompt = f\"\"\"<purpose>\nYou are a codebase context builder. Your task is to determine if a file is relevant to a user query.\n</purpose>\n\n<instructions>\n<instruction>Analyze the file content and determine if it's relevant to the user query.</instruction>\n<instruction>Provide clear reasoning for your decision.</instruction>\n<instruction>Return a structured output with your reasoning and a boolean indicating relevance.</instruction>\n<instruction>Resond in JSON format following the json-output-format.</instruction>\n</instructions>\n\n<user-query>\n{prompt}\n</user-query>\n\n<file-path>\n{file_path}\n</file-path>\n\n<file-content>\n{file_content}\n</file-content>\n\n<json-output-format>\n{{\n    \"reasoning\": \"Explanation of why the file is relevant\",\n    \"is_relevant\": true | false\n}}\n</json-output-format>\n        \"\"\"\n\n        for attempt in range(MAX_RETRIES):\n            try:\n                response = client.messages.create(\n                    model=\"claude-3-7-sonnet-20250219\",\n                    max_tokens=3000,  # Increased to be greater than thinking.budget_tokens\n                    thinking={\n                        \"type\": \"enabled\",\n                        \"budget_tokens\": THINKING_BUDGET_TOKENS_PER_FILE,\n                    },\n                    messages=[{\"role\": \"user\", \"content\": file_prompt}],\n                    system=\"Determine if the file is relevant to the user query.\",\n                )\n                \n                # Track token usage\n                global INPUT_TOKENS, 
OUTPUT_TOKENS\n                if hasattr(response, 'usage') and response.usage:\n                    INPUT_TOKENS += response.usage.input_tokens\n                    OUTPUT_TOKENS += response.usage.output_tokens\n\n                # Parse the response - look for text blocks\n                response_text = None\n\n                # Loop through all content blocks to find the text block\n                for content_block in response.content:\n                    if content_block.type == \"text\":\n                        response_text = content_block.text\n                        break\n\n                # Make sure we have a text response\n                if response_text is None:\n                    raise Exception(\"No text response found in the model output\")\n\n                # Handle different response formats\n                try:\n                    # Try parsing as JSON first\n                    result = json.loads(response_text)\n                except json.JSONDecodeError:\n                    # If not valid JSON, try to extract reasoning and is_relevant from text\n                    is_relevant = \"relevant\" in response_text.lower() and not (\n                        \"not relevant\" in response_text.lower()\n                    )\n                    result = {\n                        \"reasoning\": response_text.strip(),\n                        \"is_relevant\": is_relevant,\n                    }\n\n                return {\n                    \"reasoning\": result.get(\"reasoning\", \"No reasoning provided\"),\n                    \"file_path\": file_path,\n                    \"is_relevant\": result.get(\"is_relevant\", False),\n                }\n            except Exception as e:\n                if attempt < MAX_RETRIES - 1:\n                    console.log(\n                        f\"[yellow]Retry {attempt + 1}/{MAX_RETRIES} for {file_path}: {str(e)}[/yellow]\"\n                    )\n                    time.sleep(RETRY_WAIT)\n       
         else:\n                    console.log(\n                        f\"[red]Failed to determine relevance for {file_path}: {str(e)}[/red]\"\n                    )\n                    return {\n                        \"reasoning\": f\"Error: {str(e)}\",\n                        \"file_path\": file_path,\n                        \"is_relevant\": False,\n                    }\n    except Exception as e:\n        console.log(f\"[red]Error processing file {file_path}: {str(e)}[/red]\")\n        return {\n            \"reasoning\": f\"Error: {str(e)}\",\n            \"file_path\": file_path,\n            \"is_relevant\": False,\n        }\n\n\ndef determine_if_files_are_relevant(\n    reasoning: str, file_paths: List[str]\n) -> Dict[str, Any]:\n    \"\"\"Determines if files are relevant to the prompt using parallelism.\n\n    Args:\n        reasoning: Explanation of why we're determining relevance\n        file_paths: List of file paths to check\n\n    Returns:\n        Dictionary with results for each file\n    \"\"\"\n    try:\n        console.log(\n            f\"[blue]Determine If Files Are Relevant Tool[/blue] - Reasoning: {reasoning}\"\n        )\n        console.log(\n            f\"[dim]Checking {len(file_paths)} files in batches of {BATCH_SIZE}[/dim]\"\n        )\n\n        # Initialize Anthropic client\n        client = Anthropic(api_key=os.getenv(\"ANTHROPIC_API_KEY\"))\n\n        results = {}\n\n        # Process files in batches\n        for i in range(0, len(file_paths), BATCH_SIZE):\n            batch = file_paths[i : i + BATCH_SIZE]\n            console.log(\n                f\"[dim]Processing batch {i//BATCH_SIZE + 1}/{(len(file_paths) + BATCH_SIZE - 1)//BATCH_SIZE}[/dim]\"\n            )\n\n            # Process batch in parallel\n            with concurrent.futures.ThreadPoolExecutor(\n                max_workers=BATCH_SIZE\n            ) as executor:\n                future_to_file = {\n                    executor.submit(\n                    
    determine_if_file_is_relevant, USER_PROMPT, file_path, client\n                    ): file_path\n                    for file_path in batch\n                }\n\n                for future in concurrent.futures.as_completed(future_to_file):\n                    file_path = future_to_file[future]\n                    try:\n                        result = future.result()\n                        results[file_path] = result\n                        relevance = (\n                            \"Relevant\" if result[\"is_relevant\"] else \"Not relevant\"\n                        )\n                        console.log(f\"[dim]{file_path}: {relevance}[/dim]\")\n                    except Exception as e:\n                        console.log(\n                            f\"[red]Error processing {file_path}: {str(e)}[/red]\"\n                        )\n\n        return results\n    except Exception as e:\n        console.log(f\"[red]Error determining file relevance: {str(e)}[/red]\")\n        return {}\n\n\ndef add_relevant_files(reasoning: str, file_paths: List[str]) -> str:\n    \"\"\"Adds files to the list of relevant files.\n\n    Args:\n        reasoning: Explanation of why we're adding these files\n        file_paths: List of file paths to add\n\n    Returns:\n        String indicating success\n    \"\"\"\n    try:\n        console.log(f\"[blue]Add Relevant Files Tool[/blue] - Reasoning: {reasoning}\")\n        console.log(f\"[dim]Adding {len(file_paths)} files to relevant files list[/dim]\")\n\n        global RELEVANT_FILES\n        for file_path in file_paths:\n            if file_path not in RELEVANT_FILES:\n                RELEVANT_FILES.append(file_path)\n\n        console.log(\n            f\"[green]Added {len(file_paths)} files. Total relevant files: {len(RELEVANT_FILES)}[/green]\"\n        )\n        return f\"{len(file_paths)} files added. 
Total relevant files: {len(RELEVANT_FILES)}\"\n    except Exception as e:\n        console.log(f\"[red]Error adding relevant files: {str(e)}[/red]\")\n        return f\"Error: {str(e)}\"\n\n\ndef complete_task_output_relevant_files(reasoning: str) -> str:\n    \"\"\"Outputs the list of relevant files to a JSON file.\n\n    Args:\n        reasoning: Explanation of why we're outputting the files\n\n    Returns:\n        String indicating success or failure\n    \"\"\"\n    try:\n        console.log(\n            f\"[blue]Complete Task Output Relevant Files Tool[/blue] - Reasoning: {reasoning}\"\n        )\n\n        global RELEVANT_FILES\n        global OUTPUT_FILE\n\n        if not RELEVANT_FILES:\n            console.log(f\"[yellow]No relevant files to output[/yellow]\")\n            return \"No relevant files to output\"\n\n        # Write files to JSON\n        with open(OUTPUT_FILE, \"w\") as f:\n            json.dump(RELEVANT_FILES, f, indent=2)\n\n        console.log(\n            f\"[green]Successfully wrote {len(RELEVANT_FILES)} files to {OUTPUT_FILE}[/green]\"\n        )\n        return f\"Successfully wrote {len(RELEVANT_FILES)} files to {OUTPUT_FILE}\"\n    except Exception as e:\n        console.log(f\"[red]Error outputting relevant files: {str(e)}[/red]\")\n        return f\"Error: {str(e)}\"\n\n\ndef display_token_usage():\n    \"\"\"Displays the token usage and estimated cost.\"\"\"\n    global INPUT_TOKENS, OUTPUT_TOKENS\n    \n    # Claude 3.7 Sonnet pricing (as of 25 February 2025)\n    input_cost_per_million = 3.00  # $3.00 per million tokens\n    output_cost_per_million = 15.00  # $15.00 per million tokens\n    \n    # Calculate costs\n    input_cost = (INPUT_TOKENS / 1_000_000) * input_cost_per_million\n    output_cost = (OUTPUT_TOKENS / 1_000_000) * output_cost_per_million\n    total_cost = input_cost + output_cost\n    \n    # Create a nice table for display\n    table = Table(title=\"Token Usage and Cost Summary\")\n    
table.add_column(\"Category\", style=\"cyan\")\n    table.add_column(\"Tokens\", style=\"green\")\n    table.add_column(\"Rate\", style=\"yellow\")\n    table.add_column(\"Cost\", style=\"magenta\")\n    \n    table.add_row(\n        \"Input\", \n        f\"{INPUT_TOKENS:,}\", \n        f\"${input_cost_per_million:.2f}/M\",\n        f\"${input_cost:.4f}\"\n    )\n    table.add_row(\n        \"Output\", \n        f\"{OUTPUT_TOKENS:,}\", \n        f\"${output_cost_per_million:.2f}/M\",\n        f\"${output_cost:.4f}\"\n    )\n    table.add_row(\n        \"Total\", \n        f\"{INPUT_TOKENS + OUTPUT_TOKENS:,}\", \n        \"\", \n        f\"${total_cost:.4f}\"\n    )\n    \n    console.print(Panel(table, title=\"Claude 3.7 Sonnet API Usage\", subtitle=\"(Based on Feb 2025 pricing)\"))\n    \n    return total_cost\n\n\n# Define tool schemas for Anthropic\nTOOLS = [\n    {\n        \"name\": \"git_list_files\",\n        \"description\": \"Returns list of files in the repository, respecting gitignore\",\n        \"input_schema\": {\n            \"type\": \"object\",\n            \"properties\": {\n                \"reasoning\": {\n                    \"type\": \"string\",\n                    \"description\": \"Why we need to list files relative to user request\",\n                },\n                \"directory\": {\n                    \"type\": \"string\",\n                    \"description\": \"Directory to search in (defaults to current working directory)\",\n                },\n                \"globs\": {\n                    \"type\": \"array\",\n                    \"items\": {\"type\": \"string\"},\n                    \"description\": \"List of glob patterns to filter files (optional)\",\n                },\n                \"extensions\": {\n                    \"type\": \"array\",\n                    \"items\": {\"type\": \"string\"},\n                    \"description\": \"List of file extensions to filter files (optional)\",\n                },\n         
   },\n            \"required\": [\"reasoning\"],\n        },\n    },\n    {\n        \"name\": \"check_file_paths_line_length\",\n        \"description\": \"Checks the line length of each file and returns a dictionary of file paths and their line counts\",\n        \"input_schema\": {\n            \"type\": \"object\",\n            \"properties\": {\n                \"reasoning\": {\n                    \"type\": \"string\",\n                    \"description\": \"Why we need to check line lengths\",\n                },\n                \"file_paths\": {\n                    \"type\": \"array\",\n                    \"items\": {\"type\": \"string\"},\n                    \"description\": \"List of file paths to check\",\n                },\n            },\n            \"required\": [\"reasoning\", \"file_paths\"],\n        },\n    },\n    {\n        \"name\": \"determine_if_files_are_relevant\",\n        \"description\": \"Determines if files are relevant to the prompt using parallelism\",\n        \"input_schema\": {\n            \"type\": \"object\",\n            \"properties\": {\n                \"reasoning\": {\n                    \"type\": \"string\",\n                    \"description\": \"Why we need to determine relevance\",\n                },\n                \"file_paths\": {\n                    \"type\": \"array\",\n                    \"items\": {\"type\": \"string\"},\n                    \"description\": \"List of file paths to check\",\n                },\n            },\n            \"required\": [\"reasoning\", \"file_paths\"],\n        },\n    },\n    {\n        \"name\": \"add_relevant_files\",\n        \"description\": \"Adds files to the list of relevant files\",\n        \"input_schema\": {\n            \"type\": \"object\",\n            \"properties\": {\n                \"reasoning\": {\n                    \"type\": \"string\",\n                    \"description\": \"Why we need to add these files\",\n                },\n               
 \"file_paths\": {\n                    \"type\": \"array\",\n                    \"items\": {\"type\": \"string\"},\n                    \"description\": \"List of file paths to add\",\n                },\n            },\n            \"required\": [\"reasoning\", \"file_paths\"],\n        },\n    },\n    {\n        \"name\": \"complete_task_output_relevant_files\",\n        \"description\": \"Outputs the list of relevant files to a JSON file. Call this when you have finished identifying all relevant files.\",\n        \"input_schema\": {\n            \"type\": \"object\",\n            \"properties\": {\n                \"reasoning\": {\n                    \"type\": \"string\",\n                    \"description\": \"Why we are outputting the files to JSON\",\n                },\n            },\n            \"required\": [\"reasoning\"],\n        },\n    },\n]\n\nAGENT_PROMPT = \"\"\"\n<purpose>\nYou are a codebase context builder. Use the available tools to search, filter and determine which files in the codebase are relevant to the prompt (user query).\n</purpose>\n\n<instructions>\n<instruction>Start by listing files in the codebase using git_list_files, filtering by globs and extensions if provided.</instruction>\n<instruction>Check file line lengths to ensure they are within the specified limit using check_file_paths_line_length.</instruction>\n<instruction>Determine which files are relevant to the user query using determine_if_files_are_relevant.</instruction>\n<instruction>Add relevant files to the final list using add_relevant_files.</instruction>\n<instruction>Be thorough but efficient with tool usage.</instruction>\n<instruction>Think step by step about what information you need.</instruction>\n<instruction>Be sure to specify every parameter for each tool call.</instruction>\n<instruction>Every tool call should have a reasoning parameter which gives you a place to explain why you are calling the tool.</instruction>\n<instruction>The 
determine_if_files_are_relevant tool will process files in batches of 10 for efficiency.</instruction>\n<instruction>Focus on finding the most relevant files that will help answer the user query.</instruction>\n<instruction>You MUST monitor the number of files in the relevant files list. Once you have collected at least the File-Limit number of files, you MUST call complete_task_output_relevant_files to save the list of relevant files to JSON.</instruction>\n<instruction>If you've exhausted all potential relevant files before reaching the File-Limit, you should call complete_task_output_relevant_files with the files you have.</instruction>\n<instruction>Always end your work by calling complete_task_output_relevant_files, which outputs the list of relevant files to a JSON file.</instruction>\n<instruction>current-relevant-files is the current list of files that have been identified as relevant to your query.</instruction>\n</instructions>\n\n<user-request>\n{{user_request}}\n</user-request>\n\n<dynamic-variables>\nDirectory: {{directory}}\nGlobs: {{globs}}\nExtensions: {{extensions}}\nFile Line Limit: {{file_line_limit}}\nFile-Limit: {{limit}}\nOutput JSON: {{output_file}}\n</dynamic-variables>\n\n<current-relevant-files>\n{{relevant_files}}\n</current-relevant-files>\n\"\"\"\n\n\ndef main():\n    # Set up argument parser\n    parser = argparse.ArgumentParser(\n        description=\"Codebase Context Agent using Claude 3.7\"\n    )\n    parser.add_argument(\"-p\", \"--prompt\", required=True, help=\"The user's request\")\n    parser.add_argument(\n        \"-d\",\n        \"--directory\",\n        default=os.getcwd(),\n        help=\"Directory to search in (defaults to current working directory)\",\n    )\n    parser.add_argument(\n        \"-g\",\n        \"--globs\",\n        nargs=\"*\",\n        default=[],\n        help=\"List of glob patterns to filter files (optional)\",\n    )\n    parser.add_argument(\n        \"-e\",\n        \"--extensions\",\n        
nargs=\"*\",\n        default=[],\n        help=\"List of file extensions to filter files (optional)\",\n    )\n    parser.add_argument(\n        \"-q\", \"--quiet\", action=\"store_true\", help=\"Quiet mode (don't show logging)\"\n    )\n    parser.add_argument(\n        \"-l\", \"--limit\", type=int, default=100, help=\"Maximum number of files to return\"\n    )\n    parser.add_argument(\n        \"-f\",\n        \"--file-line-limit\",\n        type=int,\n        default=500,\n        help=\"Maximum number of lines per file\",\n    )\n    parser.add_argument(\n        \"-c\",\n        \"--compute\",\n        type=int,\n        default=10,\n        help=\"Maximum number of agent loops (default: 10)\",\n    )\n    parser.add_argument(\n        \"-o\",\n        \"--output-file\",\n        default=\"output_relevant_files.json\",\n        help=\"Path to output JSON file with relevant files (default: output_relevant_files.json)\",\n    )\n    args = parser.parse_args()\n\n    # Configure the API key\n    ANTHROPIC_API_KEY = os.getenv(\"ANTHROPIC_API_KEY\")\n    if not ANTHROPIC_API_KEY:\n        console.print(\n            \"[red]Error: ANTHROPIC_API_KEY environment variable is not set[/red]\"\n        )\n        console.print(\n            \"Please get your API key from https://console.anthropic.com/settings/keys\"\n        )\n        console.print(\"Then set it with: export ANTHROPIC_API_KEY='your-api-key-here'\")\n        sys.exit(1)\n\n    client = Anthropic(api_key=ANTHROPIC_API_KEY)\n\n    # Set global variables\n    global USER_PROMPT, OUTPUT_FILE\n    USER_PROMPT = args.prompt\n    OUTPUT_FILE = args.output_file\n\n    # Configure quiet mode\n    if args.quiet:\n        console.quiet = True\n\n    # For the first initialization, create the completed prompt\n    # Will update this variable before each API call\n    completed_prompt = (\n        AGENT_PROMPT.replace(\"{{user_request}}\", args.prompt)\n        .replace(\"{{directory}}\", args.directory)\n        
.replace(\"{{globs}}\", str(args.globs))\n        .replace(\"{{extensions}}\", str(args.extensions))\n        .replace(\"{{file_line_limit}}\", str(args.file_line_limit))\n        .replace(\"{{limit}}\", str(args.limit))\n        .replace(\"{{output_file}}\", OUTPUT_FILE)\n        .replace(\"{{relevant_files}}\", \"No relevant files found yet.\")\n    )\n\n    # Initialize messages with proper typing for Anthropic chat\n    messages = [{\"role\": \"user\", \"content\": completed_prompt}]\n\n    compute_iterations = 0\n    break_loop = False\n    # Main agent loop\n    while True:\n        if break_loop or compute_iterations >= args.compute:\n            break\n\n        console.rule(\n            f\"[yellow]Agent Loop {compute_iterations+1}/{args.compute}[/yellow]\"\n        )\n        compute_iterations += 1\n\n        try:\n            # Before each API call, update the completed prompt with the current relevant files\n            if RELEVANT_FILES:\n                formatted_files = \"\\n\".join([f\"- {file}\" for file in RELEVANT_FILES])\n                file_count = f\"Total: {len(RELEVANT_FILES)}/{args.limit} files\"\n                relevant_files_section = f\"{file_count}\\n{formatted_files}\"\n            else:\n                relevant_files_section = \"No relevant files found yet.\"\n\n            # Update the first message with the latest relevant files information\n            completed_prompt = (\n                AGENT_PROMPT.replace(\"{{user_request}}\", args.prompt)\n                .replace(\"{{directory}}\", args.directory)\n                .replace(\"{{globs}}\", str(args.globs))\n                .replace(\"{{extensions}}\", str(args.extensions))\n                .replace(\"{{file_line_limit}}\", str(args.file_line_limit))\n                .replace(\"{{limit}}\", str(args.limit))\n                .replace(\"{{output_file}}\", OUTPUT_FILE)\n                .replace(\"{{relevant_files}}\", relevant_files_section)\n            )\n\n            # 
Always update the first message with the latest information before each API call\n            messages[0][\"content\"] = completed_prompt\n\n            # Generate content with tool support\n            response = client.messages.create(\n                model=\"claude-3-7-sonnet-20250219\",\n                system=\"You are a codebase context builder. Use the available tools to search, filter and determine which files in the codebase are relevant to the prompt (user query).\",\n                messages=messages,\n                tools=TOOLS,\n                max_tokens=4000,\n                thinking={\"type\": \"enabled\", \"budget_tokens\": 2000},\n            )\n            \n            # Track token usage\n            global INPUT_TOKENS, OUTPUT_TOKENS\n            if hasattr(response, 'usage') and response.usage:\n                INPUT_TOKENS += response.usage.input_tokens\n                OUTPUT_TOKENS += response.usage.output_tokens\n                console.log(f\"[dim]Token usage this call: {response.usage.input_tokens} input, {response.usage.output_tokens} output[/dim]\")\n\n            # Extract thinking block and other content\n            thinking_block = None\n            tool_use_block = None\n            text_block = None\n\n            if response.content:\n                # Get the message content\n                for content_block in response.content:\n                    if content_block.type == \"thinking\":\n                        thinking_block = content_block\n                        previous_thinking = thinking_block\n                    elif content_block.type == \"tool_use\":\n                        tool_use_block = content_block\n                        # Access the proper attributes directly\n                        tool_name = content_block.name\n                        tool_input = content_block.input\n                        tool_id = content_block.id\n                    elif content_block.type == \"text\":\n                      
  text_block = content_block\n                        console.print(\n                            f\"[cyan]Model response:[/cyan] {content_block.text}\"\n                        )\n\n                # Handle text responses if there was no tool use\n                if not tool_use_block and text_block:\n                    messages.append(\n                        {  # type: ignore\n                            \"role\": \"assistant\",\n                            \"content\": [\n                                *([thinking_block] if thinking_block else []),\n                                {\"type\": \"text\", \"text\": text_block.text},\n                            ],\n                        }\n                    )\n                    break_loop = True\n                    continue\n\n                # We need a tool use block to proceed\n                if tool_use_block:\n                    console.print(\n                        f\"[blue]Tool Call:[/blue] {tool_name}({json.dumps(tool_input, indent=2)})\"\n                    )\n\n                    try:\n                        # Execute the appropriate tool based on name\n                        if tool_name == \"git_list_files\":\n                            directory = tool_input.get(\"directory\", args.directory)\n                            globs = tool_input.get(\"globs\", args.globs)\n                            extensions = tool_input.get(\"extensions\", args.extensions)\n                            result = git_list_files(\n                                reasoning=tool_input[\"reasoning\"],\n                                directory=directory,\n                                globs=globs,\n                                extensions=extensions,\n                            )\n                        elif tool_name == \"check_file_paths_line_length\":\n                            result = check_file_paths_line_length(\n                                reasoning=tool_input[\"reasoning\"],\n             
                   file_paths=tool_input[\"file_paths\"],\n                                file_line_limit=args.file_line_limit,\n                            )\n                        elif tool_name == \"determine_if_files_are_relevant\":\n                            result = determine_if_files_are_relevant(\n                                reasoning=tool_input[\"reasoning\"],\n                                file_paths=tool_input[\"file_paths\"],\n                            )\n                        elif tool_name == \"add_relevant_files\":\n                            result = add_relevant_files(\n                                reasoning=tool_input[\"reasoning\"],\n                                file_paths=tool_input[\"file_paths\"],\n                            )\n                        elif tool_name == \"complete_task_output_relevant_files\":\n                            result = complete_task_output_relevant_files(\n                                reasoning=tool_input[\"reasoning\"],\n                            )\n                            # Indicate that we're done after writing the output\n                            break_loop = True\n                        else:\n                            raise Exception(f\"Unknown tool call: {tool_name}\")\n\n                        console.print(\n                            f\"[blue]Tool Call Result:[/blue] {tool_name}(...) 
-> \"\n                        )\n\n                        console.print(\n                            Panel.fit(\n                                str(result),\n                                border_style=\"blue\",\n                            )\n                        )\n\n                        # Append the tool result to messages\n                        messages.append(\n                            {  # type: ignore\n                                \"role\": \"assistant\",\n                                \"content\": [\n                                    *([thinking_block] if thinking_block else []),\n                                    {\n                                        \"type\": \"tool_use\",\n                                        \"id\": tool_id,\n                                        \"name\": tool_name,\n                                        \"input\": tool_input,\n                                    },\n                                ],\n                            }\n                        )\n\n                        messages.append(\n                            {  # type: ignore\n                                \"role\": \"user\",\n                                \"content\": [\n                                    {\n                                        \"type\": \"tool_result\",\n                                        \"tool_use_id\": tool_id,\n                                        \"content\": json.dumps(result),\n                                    }\n                                ],\n                            }\n                        )\n\n                    except Exception as e:\n                        error_msg = f\"Error executing {tool_name}: {e}\"\n                        console.print(f\"[red]{error_msg}[/red]\")\n\n                        # Append the error to messages\n                        messages.append(\n                            {  # type: ignore\n                                \"role\": 
\"assistant\",\n                                \"content\": [\n                                    *([thinking_block] if thinking_block else []),\n                                    {\n                                        \"type\": \"tool_use\",\n                                        \"id\": tool_id,\n                                        \"name\": tool_name,\n                                        \"input\": tool_input,\n                                    },\n                                ],\n                            }\n                        )\n\n                        messages.append(\n                            {  # type: ignore\n                                \"role\": \"user\",\n                                \"content\": [\n                                    {\n                                        \"type\": \"tool_result\",\n                                        \"tool_use_id\": tool_id,\n                                        \"content\": str(error_msg),\n                                    }\n                                ],\n                            }\n                        )\n\n                    # No need to update messages here since we're updating at the start of each loop iteration\n\n        except Exception as e:\n            console.print(f\"[red]Error in agent loop: {str(e)}[/red]\")\n            raise e\n\n    # Print the final list of relevant files\n    console.rule(\"[green]Relevant Files[/green]\")\n    for i, file_path in enumerate(RELEVANT_FILES, 1):\n        console.print(f\"{i}. {file_path}\")\n    \n    # Display token usage statistics\n    console.rule(\"[yellow]Token Usage Summary[/yellow]\")\n    display_token_usage()\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "sfa_codebase_context_agent_w_ripgrep_v3.py",
    "content": "#!/usr/bin/env -S uv run --script\n\n# /// script\n# dependencies = [\n#   \"anthropic>=0.47.1\",\n#   \"rich>=13.7.0\",\n#   \"pydantic>=2.0.0\",\n# ]\n# ///\n\n\"\"\"\nUsage:\n    uv run sfa_codebase_context_agent_w_ripgrep_v3.py \\\n        --prompt \"Let's build a new metaprompt sfa agent using anthropic claude 3.7\" \\\n        --directory \".\" \\\n        --globs \"*.py\" \\\n        --extensions py md \\\n        --limit 10 \\\n        --file-line-limit 1000 \\\n        --output-file relevant_files.json \\\n        --compute 15\n        \n    # Find files related to DuckDB implementations\n    uv run sfa_codebase_context_agent_w_ripgrep_v3.py \\\n        --prompt \"Find all files related to DuckDB agent implementations\" \\\n        --file-line-limit 1000 \\\n        --extensions py\n        \n    # Find all files related to Anthropic-powered agents\n    uv run sfa_codebase_context_agent_w_ripgrep_v3.py \\\n        --prompt \"Identify all agents that use the new Claude 3.7 model\"\n\n    # Use ripgrep to search codebase for specific query\n    uv run sfa_codebase_context_agent_w_ripgrep_v3.py \\\n        --prompt \"Find all files that use the Anthropic API\" \\\n        --use-ripgrep\n    \n\"\"\"\n\nimport os\nimport sys\nimport json\nimport argparse\nimport subprocess\nimport time\nimport fnmatch\nimport concurrent.futures\nfrom typing import List, Dict, Any\nfrom rich.console import Console\nfrom anthropic import Anthropic\nfrom rich.table import Table\nfrom rich.panel import Panel\n\n# Initialize rich console\nconsole = Console()\n\n# Constants\nTHINKING_BUDGET_TOKENS_PER_FILE = 2000\nBATCH_SIZE = 10\nMAX_RETRIES = 3\nRETRY_WAIT = 1\n\n# Global variables\nUSER_PROMPT = \"\"\nRELEVANT_FILES = []\nOUTPUT_FILE = \"output_relevant_files.json\"\nINPUT_TOKENS = 0  # To track input tokens to Anthropic API\nOUTPUT_TOKENS = 0  # To track output tokens from Anthropic API\n\n\ndef git_list_files(\n    reasoning: str,\n    directory: str = 
os.getcwd(),\n    globs: List[str] = [],\n    extensions: List[str] = [],\n) -> List[str]:\n    \"\"\"Returns a list of files in the repository, respecting gitignore.\n\n    Args:\n        reasoning: Explanation of why we're listing files\n        directory: Directory to search in (defaults to current working directory)\n        globs: List of glob patterns to filter files (optional)\n        extensions: List of file extensions to filter files (optional)\n\n    Returns:\n        List of file paths as strings\n    \"\"\"\n    try:\n        console.log(f\"[blue]Git List Files Tool[/blue] - Reasoning: {reasoning}\")\n        console.log(\n            f\"[dim]Directory: {directory}, Globs: {globs}, Extensions: {extensions}[/dim]\"\n        )\n\n        # Change to the specified directory\n        original_dir = os.getcwd()\n        os.chdir(directory)\n\n        # Get all files tracked by git\n        result = subprocess.run(\n            \"git ls-files\",\n            shell=True,\n            text=True,\n            capture_output=True,\n        )\n\n        files = result.stdout.strip().split(\"\\n\")\n\n        # Filter by globs if provided\n        if globs:\n            filtered_files = []\n            for pattern in globs:\n                for file in files:\n                    if fnmatch.fnmatch(file, pattern):\n                        filtered_files.append(file)\n            files = filtered_files\n\n        # Filter by extensions if provided\n        if extensions:\n            files = [\n                file\n                for file in files\n                if any(file.endswith(f\".{ext}\") for ext in extensions)\n            ]\n\n        # Change back to the original directory\n        os.chdir(original_dir)\n\n        # Keep paths relative to the searched directory\n\n        console.log(f\"[dim]Found {len(files)} files[/dim]\")\n        return 
files\n    except Exception as e:\n        console.log(f\"[red]Error listing files: {str(e)}[/red]\")\n        return []\n\n\ndef check_file_paths_line_length(\n    reasoning: str, file_paths: List[str], file_line_limit: int = 500\n) -> Dict[str, int]:\n    \"\"\"Checks the line length of each file and returns a dictionary of file paths and their line counts.\n\n    Args:\n        reasoning: Explanation of why we're checking line lengths\n        file_paths: List of file paths to check\n        file_line_limit: Maximum number of lines per file\n\n    Returns:\n        Dictionary mapping file paths to their total line counts\n    \"\"\"\n    try:\n        console.log(\n            f\"[blue]Check File Paths Line Length Tool[/blue] - Reasoning: {reasoning}\"\n        )\n        console.log(\n            f\"[dim]Checking {len(file_paths)} files with line limit {file_line_limit}[/dim]\"\n        )\n\n        result = {}\n        for file_path in file_paths:\n            try:\n                with open(file_path, \"r\", encoding=\"utf-8\") as f:\n                    lines = f.readlines()\n                    line_count = len(lines)\n                    if line_count <= file_line_limit:\n                        result[file_path] = line_count\n                    else:\n                        console.log(\n                            f\"[yellow]Skipping {file_path}: {line_count} lines exceed limit of {file_line_limit}[/yellow]\"\n                        )\n            except Exception as e:\n                console.log(f\"[red]Error reading file {file_path}: {str(e)}[/red]\")\n\n        console.log(f\"[dim]Found {len(result)} files within line limit[/dim]\")\n        return result\n    except Exception as e:\n        console.log(f\"[red]Error checking file paths: {str(e)}[/red]\")\n        return {}\n\n\ndef determine_if_file_is_relevant(prompt: str, file_path: str, client: Anthropic) -> Dict[str, Any]:  # type: ignore\n    \"\"\"Determines if a single file is relevant to 
the prompt.\n\n    Args:\n        prompt: The user prompt\n        file_path: Path to the file to check\n        client: Anthropic client\n\n    Returns:\n        Dictionary with reasoning and is_relevant flag\n    \"\"\"\n    result = {\n        \"reasoning\": \"Error: Could not process file\",\n        \"file_path\": file_path,\n        \"is_relevant\": False,\n    }\n    try:\n        with open(file_path, \"r\", encoding=\"utf-8\") as f:\n            file_content = f.read()\n\n        # Truncate file content if it's too long\n        if len(file_content) > 10000:\n            file_content = file_content[:10000] + \"... [content truncated]\"\n\n        file_prompt = f\"\"\"<purpose>\nYou are a codebase context builder. Your task is to determine if a file is relevant to a user query.\n</purpose>\n\n<instructions>\n<instruction>Analyze the file content and determine if it's relevant to the user query.</instruction>\n<instruction>Provide clear reasoning for your decision.</instruction>\n<instruction>Return a structured output with your reasoning and a boolean indicating relevance.</instruction>\n<instruction>Respond in JSON format following the json-output-format.</instruction>\n</instructions>\n\n<user-query>\n{prompt}\n</user-query>\n\n<file-path>\n{file_path}\n</file-path>\n\n<file-content>\n{file_content}\n</file-content>\n\n<json-output-format>\n{{\n    \"reasoning\": \"Explanation of why the file is relevant\",\n    \"is_relevant\": true | false\n}}\n</json-output-format>\n        \"\"\"\n\n        for attempt in range(MAX_RETRIES):\n            try:\n                response = client.messages.create(\n                    model=\"claude-3-7-sonnet-20250219\",\n                    max_tokens=3000,  # Increased to be greater than thinking.budget_tokens\n                    thinking={\n                        \"type\": \"enabled\",\n                        \"budget_tokens\": THINKING_BUDGET_TOKENS_PER_FILE,\n                    },\n                    
messages=[{\"role\": \"user\", \"content\": file_prompt}],\n                    system=\"Determine if the file is relevant to the user query.\",\n                )\n                \n                # Track token usage\n                global INPUT_TOKENS, OUTPUT_TOKENS\n                if hasattr(response, 'usage') and response.usage:\n                    INPUT_TOKENS += response.usage.input_tokens\n                    OUTPUT_TOKENS += response.usage.output_tokens\n\n                # Parse the response - look for text blocks\n                response_text = None\n\n                # Loop through all content blocks to find the text block\n                for content_block in response.content:\n                    if content_block.type == \"text\":\n                        response_text = content_block.text\n                        break\n\n                # Make sure we have a text response\n                if response_text is None:\n                    raise Exception(\"No text response found in the model output\")\n\n                # Handle different response formats\n                try:\n                    # Try parsing as JSON first\n                    result = json.loads(response_text)\n                except json.JSONDecodeError:\n                    # If not valid JSON, try to extract reasoning and is_relevant from text\n                    is_relevant = \"relevant\" in response_text.lower() and not (\n                        \"not relevant\" in response_text.lower()\n                    )\n                    result = {\n                        \"reasoning\": response_text.strip(),\n                        \"is_relevant\": is_relevant,\n                    }\n\n                return {\n                    \"reasoning\": result.get(\"reasoning\", \"No reasoning provided\"),\n                    \"file_path\": file_path,\n                    \"is_relevant\": result.get(\"is_relevant\", False),\n                }\n            except Exception as e:\n     
           if attempt < MAX_RETRIES - 1:\n                    console.log(\n                        f\"[yellow]Retry {attempt + 1}/{MAX_RETRIES} for {file_path}: {str(e)}[/yellow]\"\n                    )\n                    time.sleep(RETRY_WAIT)\n                else:\n                    console.log(\n                        f\"[red]Failed to determine relevance for {file_path}: {str(e)}[/red]\"\n                    )\n                    return {\n                        \"reasoning\": f\"Error: {str(e)}\",\n                        \"file_path\": file_path,\n                        \"is_relevant\": False,\n                    }\n    except Exception as e:\n        console.log(f\"[red]Error processing file {file_path}: {str(e)}[/red]\")\n        return {\n            \"reasoning\": f\"Error: {str(e)}\",\n            \"file_path\": file_path,\n            \"is_relevant\": False,\n        }\n\n\ndef determine_if_files_are_relevant(\n    reasoning: str, file_paths: List[str]\n) -> Dict[str, Any]:\n    \"\"\"Determines if files are relevant to the prompt using parallelism.\n\n    Args:\n        reasoning: Explanation of why we're determining relevance\n        file_paths: List of file paths to check\n\n    Returns:\n        Dictionary with results for each file\n    \"\"\"\n    try:\n        console.log(\n            f\"[blue]Determine If Files Are Relevant Tool[/blue] - Reasoning: {reasoning}\"\n        )\n        console.log(\n            f\"[dim]Checking {len(file_paths)} files in batches of {BATCH_SIZE}[/dim]\"\n        )\n\n        # Initialize Anthropic client\n        client = Anthropic(api_key=os.getenv(\"ANTHROPIC_API_KEY\"))\n\n        results = {}\n\n        # Process files in batches\n        for i in range(0, len(file_paths), BATCH_SIZE):\n            batch = file_paths[i : i + BATCH_SIZE]\n            console.log(\n                f\"[dim]Processing batch {i//BATCH_SIZE + 1}/{(len(file_paths) + BATCH_SIZE - 1)//BATCH_SIZE}[/dim]\"\n            )\n\n   
         # Process batch in parallel\n            with concurrent.futures.ThreadPoolExecutor(\n                max_workers=BATCH_SIZE\n            ) as executor:\n                future_to_file = {\n                    executor.submit(\n                        determine_if_file_is_relevant, USER_PROMPT, file_path, client\n                    ): file_path\n                    for file_path in batch\n                }\n\n                for future in concurrent.futures.as_completed(future_to_file):\n                    file_path = future_to_file[future]\n                    try:\n                        result = future.result()\n                        results[file_path] = result\n                        relevance = (\n                            \"Relevant\" if result[\"is_relevant\"] else \"Not relevant\"\n                        )\n                        console.log(f\"[dim]{file_path}: {relevance}[/dim]\")\n                    except Exception as e:\n                        console.log(\n                            f\"[red]Error processing {file_path}: {str(e)}[/red]\"\n                        )\n\n        return results\n    except Exception as e:\n        console.log(f\"[red]Error determining file relevance: {str(e)}[/red]\")\n        return {}\n\n\ndef add_relevant_files(reasoning: str, file_paths: List[str]) -> str:\n    \"\"\"Adds files to the list of relevant files.\n\n    Args:\n        reasoning: Explanation of why we're adding these files\n        file_paths: List of file paths to add\n\n    Returns:\n        String indicating success\n    \"\"\"\n    try:\n        console.log(f\"[blue]Add Relevant Files Tool[/blue] - Reasoning: {reasoning}\")\n        console.log(f\"[dim]Adding {len(file_paths)} files to relevant files list[/dim]\")\n\n        global RELEVANT_FILES\n        for file_path in file_paths:\n            if file_path not in RELEVANT_FILES:\n                RELEVANT_FILES.append(file_path)\n\n        console.log(\n            f\"[green]Added 
{len(file_paths)} files. Total relevant files: {len(RELEVANT_FILES)}[/green]\"\n        )\n        return f\"{len(file_paths)} files added. Total relevant files: {len(RELEVANT_FILES)}\"\n    except Exception as e:\n        console.log(f\"[red]Error adding relevant files: {str(e)}[/red]\")\n        return f\"Error: {str(e)}\"\n\n\ndef complete_task_output_relevant_files(reasoning: str) -> str:\n    \"\"\"Outputs the list of relevant files to a JSON file.\n\n    Args:\n        reasoning: Explanation of why we're outputting the files\n\n    Returns:\n        String indicating success or failure\n    \"\"\"\n    try:\n        console.log(\n            f\"[blue]Complete Task Output Relevant Files Tool[/blue] - Reasoning: {reasoning}\"\n        )\n\n        global RELEVANT_FILES\n        global OUTPUT_FILE\n\n        if not RELEVANT_FILES:\n            console.log(f\"[yellow]No relevant files to output[/yellow]\")\n            return \"No relevant files to output\"\n\n        # Write files to JSON\n        with open(OUTPUT_FILE, \"w\") as f:\n            json.dump(RELEVANT_FILES, f, indent=2)\n\n        console.log(\n            f\"[green]Successfully wrote {len(RELEVANT_FILES)} files to {OUTPUT_FILE}[/green]\"\n        )\n        return f\"Successfully wrote {len(RELEVANT_FILES)} files to {OUTPUT_FILE}\"\n    except Exception as e:\n        console.log(f\"[red]Error outputting relevant files: {str(e)}[/red]\")\n        return f\"Error: {str(e)}\"\n\n\ndef search_codebase_with_ripgrep(\n    reasoning: str, query: str, base_path: str = \".\", max_files: int = 10, \n    extensions: List[str] = None, globs: List[str] = None\n) -> Dict[str, Any]:\n    \"\"\"\n    Search the codebase at base_path for files relevant to the query using ripgrep.\n    \n    Args:\n        reasoning: Explanation of why we're searching the codebase\n        query: The search query\n        base_path: Directory to search in (defaults to current working directory)\n        max_files: Maximum number 
of top files to check (to limit processing)\n        extensions: List of file extensions to filter files (e.g. [\"py\", \"md\"])\n        globs: List of glob patterns to filter files (e.g. [\"*.py\", \"src/*.js\"])\n        \n    Returns:\n        Dictionary with search results\n    \"\"\"\n    try:\n        console.log(f\"[blue]Ripgrep Search Tool[/blue] - Reasoning: {reasoning}\")\n        console.log(f\"[dim]Searching for '{query}' in {base_path}[/dim]\")\n        \n        # 1. Use ripgrep to find candidate files and match counts\n        try:\n            # Build ripgrep command with options\n            # '-c' counts matches per file, '--no-config' to ignore custom ripgreprc\n            rg_cmd = [\n                \"rg\",\n                \"-c\",\n                \"--no-config\",\n            ]\n            \n            # Add extension filters if provided\n            if extensions and len(extensions) > 0:\n                for ext in extensions:\n                    rg_cmd.append(f\"--type-add=custom:*.{ext}\")\n                rg_cmd.append(\"--type=custom\")\n                console.log(f\"[dim]Filtering by extensions: {extensions}[/dim]\")\n            \n            # Add glob patterns if provided\n            if globs and len(globs) > 0:\n                for glob in globs:\n                    rg_cmd.append(f\"--glob={glob}\")\n                console.log(f\"[dim]Filtering by globs: {globs}[/dim]\")\n            \n            # Add the query and search path\n            rg_cmd.append(query)\n            rg_cmd.append(base_path)\n            \n            console.log(f\"[dim]Running command: {' '.join(rg_cmd)}[/dim]\")\n            rg_result = subprocess.run(rg_cmd, capture_output=True, text=True)\n        except Exception as e:\n            raise RuntimeError(f\"Failed to run ripgrep: {e}\")\n\n        output = rg_result.stdout.strip()\n        candidates = []\n        if output:\n            for line in output.splitlines():\n                # Each line 
is \"filepath:count\"\n                # Split on the last colon: the count follows it, and file paths may contain colons\n                parts = line.rsplit(\":\", 1)\n                if len(parts) == 2:\n                    file_path, count_str = parts[0], parts[1]\n                else:\n                    # If ripgrep output format changes, handle accordingly\n                    file_path = parts[0]\n                    count_str = \"1\"\n                # Ensure the count is an integer\n                try:\n                    count = int(count_str)\n                except ValueError:\n                    count = 1\n                candidates.append((file_path, count))\n        else:\n            # No matches found by ripgrep\n            candidates = []\n\n        # Rank candidates by match count (descending)\n        candidates.sort(key=lambda x: x[1], reverse=True)\n        \n        console.log(f\"[dim]Found {len(candidates)} files matching query[/dim]\")\n        \n        results = []\n        # Process top files up to max_files limit\n        for idx, (file_path, count) in enumerate(candidates):\n            if max_files is not None and idx >= max_files:\n                break\n                \n            # Mark all files found by ripgrep as relevant since they contain the query\n            result = {\"file\": file_path, \"match_count\": count, \"relevant\": True}\n            results.append(result)\n            \n            # Add to our global relevant files list\n            if file_path not in RELEVANT_FILES:\n                RELEVANT_FILES.append(file_path)\n\n        console.log(f\"[green]Added {len(results)} files to relevant files list[/green]\")\n        return {\"results\": results, \"total_matches\": len(candidates)}\n    \n    except Exception as e:\n        console.log(f\"[red]Error searching with ripgrep: {str(e)}[/red]\")\n        return {\"error\": str(e), \"results\": [], \"total_matches\": 0}\n\n\ndef display_token_usage():\n    \"\"\"Displays the token usage and estimated cost.\"\"\"\n    global 
INPUT_TOKENS, OUTPUT_TOKENS\n    \n    # Claude 3.7 Sonnet pricing (as of 25 February 2025)\n    input_cost_per_million = 3.00  # $3.00 per million tokens\n    output_cost_per_million = 15.00  # $15.00 per million tokens\n    \n    # Calculate costs\n    input_cost = (INPUT_TOKENS / 1_000_000) * input_cost_per_million\n    output_cost = (OUTPUT_TOKENS / 1_000_000) * output_cost_per_million\n    total_cost = input_cost + output_cost\n    \n    # Create a nice table for display\n    table = Table(title=\"Token Usage and Cost Summary\")\n    table.add_column(\"Category\", style=\"cyan\")\n    table.add_column(\"Tokens\", style=\"green\")\n    table.add_column(\"Rate\", style=\"yellow\")\n    table.add_column(\"Cost\", style=\"magenta\")\n    \n    table.add_row(\n        \"Input\", \n        f\"{INPUT_TOKENS:,}\", \n        f\"${input_cost_per_million:.2f}/M\",\n        f\"${input_cost:.4f}\"\n    )\n    table.add_row(\n        \"Output\", \n        f\"{OUTPUT_TOKENS:,}\", \n        f\"${output_cost_per_million:.2f}/M\",\n        f\"${output_cost:.4f}\"\n    )\n    table.add_row(\n        \"Total\", \n        f\"{INPUT_TOKENS + OUTPUT_TOKENS:,}\", \n        \"\", \n        f\"${total_cost:.4f}\"\n    )\n    \n    console.print(Panel(table, title=\"Claude 3.7 Sonnet API Usage\", subtitle=\"(Based on Feb 2025 pricing)\"))\n    \n    return total_cost\n\n\n# Define tool schemas for Anthropic\nTOOLS = [\n    {\n        \"name\": \"search_codebase_with_ripgrep\",\n        \"description\": \"Search the codebase for files that match a specific query using ripgrep. 
Fast and efficient for finding content.\",\n        \"input_schema\": {\n            \"type\": \"object\",\n            \"properties\": {\n                \"reasoning\": {\n                    \"type\": \"string\",\n                    \"description\": \"Why we need to search the codebase\",\n                },\n                \"query\": {\n                    \"type\": \"string\",\n                    \"description\": \"The search query to look for in file contents\",\n                },\n                \"base_path\": {\n                    \"type\": \"string\",\n                    \"description\": \"Directory to search in (defaults to current working directory)\",\n                },\n                \"max_files\": {\n                    \"type\": \"integer\",\n                    \"description\": \"Maximum number of top files to check (default: 10)\",\n                },\n                \"extensions\": {\n                    \"type\": \"array\",\n                    \"items\": {\"type\": \"string\"},\n                    \"description\": \"List of file extensions to filter by (e.g. ['py', 'md'])\",\n                },\n                \"globs\": {\n                    \"type\": \"array\",\n                    \"items\": {\"type\": \"string\"},\n                    \"description\": \"List of glob patterns to filter files (e.g. 
['*.py', 'src/*.js'])\",\n                },\n            },\n            \"required\": [\"reasoning\", \"query\"],\n        },\n    },\n    {\n        \"name\": \"git_list_files\",\n        \"description\": \"Returns list of files in the repository, respecting gitignore\",\n        \"input_schema\": {\n            \"type\": \"object\",\n            \"properties\": {\n                \"reasoning\": {\n                    \"type\": \"string\",\n                    \"description\": \"Why we need to list files relative to user request\",\n                },\n                \"directory\": {\n                    \"type\": \"string\",\n                    \"description\": \"Directory to search in (defaults to current working directory)\",\n                },\n                \"globs\": {\n                    \"type\": \"array\",\n                    \"items\": {\"type\": \"string\"},\n                    \"description\": \"List of glob patterns to filter files (optional)\",\n                },\n                \"extensions\": {\n                    \"type\": \"array\",\n                    \"items\": {\"type\": \"string\"},\n                    \"description\": \"List of file extensions to filter files (optional)\",\n                },\n            },\n            \"required\": [\"reasoning\"],\n        },\n    },\n    {\n        \"name\": \"check_file_paths_line_length\",\n        \"description\": \"Checks the line length of each file and returns a dictionary of file paths and their line counts\",\n        \"input_schema\": {\n            \"type\": \"object\",\n            \"properties\": {\n                \"reasoning\": {\n                    \"type\": \"string\",\n                    \"description\": \"Why we need to check line lengths\",\n                },\n                \"file_paths\": {\n                    \"type\": \"array\",\n                    \"items\": {\"type\": \"string\"},\n                    \"description\": \"List of file paths to check\",\n       
         },\n            },\n            \"required\": [\"reasoning\", \"file_paths\"],\n        },\n    },\n    {\n        \"name\": \"determine_if_files_are_relevant\",\n        \"description\": \"Determines if files are relevant to the prompt using parallelism\",\n        \"input_schema\": {\n            \"type\": \"object\",\n            \"properties\": {\n                \"reasoning\": {\n                    \"type\": \"string\",\n                    \"description\": \"Why we need to determine relevance\",\n                },\n                \"file_paths\": {\n                    \"type\": \"array\",\n                    \"items\": {\"type\": \"string\"},\n                    \"description\": \"List of file paths to check\",\n                },\n            },\n            \"required\": [\"reasoning\", \"file_paths\"],\n        },\n    },\n    {\n        \"name\": \"add_relevant_files\",\n        \"description\": \"Adds files to the list of relevant files\",\n        \"input_schema\": {\n            \"type\": \"object\",\n            \"properties\": {\n                \"reasoning\": {\n                    \"type\": \"string\",\n                    \"description\": \"Why we need to add these files\",\n                },\n                \"file_paths\": {\n                    \"type\": \"array\",\n                    \"items\": {\"type\": \"string\"},\n                    \"description\": \"List of file paths to add\",\n                },\n            },\n            \"required\": [\"reasoning\", \"file_paths\"],\n        },\n    },\n    {\n        \"name\": \"complete_task_output_relevant_files\",\n        \"description\": \"Outputs the list of relevant files to a JSON file. 
Call this when you have finished identifying all relevant files.\",\n        \"input_schema\": {\n            \"type\": \"object\",\n            \"properties\": {\n                \"reasoning\": {\n                    \"type\": \"string\",\n                    \"description\": \"Why we are outputting the files to JSON\",\n                },\n            },\n            \"required\": [\"reasoning\"],\n        },\n    },\n]\n\nAGENT_PROMPT = \"\"\"\n<purpose>\nYou are a codebase context builder. Use the available tools to search, filter and determine which files in the codebase are relevant to the prompt (user query).\n</purpose>\n\n<instructions>\n<instruction>If ripgrep is enabled, use search_codebase_with_ripgrep to quickly find files containing specific content, which is faster and more precise for content searching. When using ripgrep, skip the determine_if_files_are_relevant tool as ripgrep already identifies relevant files.</instruction>\n<instruction>If ripgrep is not enabled, start by listing files in the codebase using git_list_files, filtering by globs and extensions if provided. 
Then check file line lengths and determine which files are relevant to the user query using determine_if_files_are_relevant.</instruction>\n<instruction>Check file line lengths to ensure they are within the specified limit using check_file_paths_line_length.</instruction>\n<instruction>Add relevant files to the final list using add_relevant_files if needed.</instruction>\n<instruction>Be thorough but efficient with tool usage.</instruction>\n<instruction>Think step by step about what information you need.</instruction>\n<instruction>Be sure to specify every parameter for each tool call.</instruction>\n<instruction>Every tool call should have a reasoning parameter which gives you a place to explain why you are calling the tool.</instruction>\n<instruction>The determine_if_files_are_relevant tool will process files in batches of 10 for efficiency (only use this if ripgrep is not enabled).</instruction>\n<instruction>Focus on finding the most relevant files that will help answer the user query.</instruction>\n<instruction>You MUST monitor the number of files in the relevant files list. 
Once you have collected at least the File-Limit number of files, you MUST call complete_task_output_relevant_files to save the list of relevant files to JSON.</instruction>\n<instruction>If you've exhausted all potential relevant files before reaching the File-Limit, you should call complete_task_output_relevant_files with the files you have.</instruction>\n<instruction>Always end your work by calling complete_task_output_relevant_files, which outputs the list of relevant files to a JSON file.</instruction>\n<instruction>current-relevant-files is the current list of files that have been identified as relevant to your query.</instruction>\n</instructions>\n\n<user-request>\n{{user_request}}\n</user-request>\n\n<dynamic-variables>\nDirectory: {{directory}}\nGlobs: {{globs}}\nExtensions: {{extensions}}\nFile Line Limit: {{file_line_limit}}\nFile-Limit: {{limit}}\nOutput JSON: {{output_file}}\nUse Ripgrep: {{use_ripgrep}}\n</dynamic-variables>\n\n<current-relevant-files>\n{{relevant_files}}\n</current-relevant-files>\n\"\"\"\n\n\ndef main():\n    # Set up argument parser\n    parser = argparse.ArgumentParser(\n        description=\"Codebase Context Agent using Claude 3.7\"\n    )\n    parser.add_argument(\"-p\", \"--prompt\", required=True, help=\"The user's request\")\n    parser.add_argument(\n        \"-d\",\n        \"--directory\",\n        default=os.getcwd(),\n        help=\"Directory to search in (defaults to current working directory)\",\n    )\n    parser.add_argument(\n        \"-g\",\n        \"--globs\",\n        nargs=\"*\",\n        default=[],\n        help=\"List of glob patterns to filter files (optional)\",\n    )\n    parser.add_argument(\n        \"-e\",\n        \"--extensions\",\n        nargs=\"*\",\n        default=[],\n        help=\"List of file extensions to filter files (optional)\",\n    )\n    parser.add_argument(\n        \"-q\", \"--quiet\", action=\"store_true\", help=\"Quiet mode (don't show logging)\"\n    )\n    
parser.add_argument(\n        \"-l\", \"--limit\", type=int, default=100, help=\"Maximum number of files to return\"\n    )\n    parser.add_argument(\n        \"-f\",\n        \"--file-line-limit\",\n        type=int,\n        default=500,\n        help=\"Maximum number of lines per file\",\n    )\n    parser.add_argument(\n        \"-c\",\n        \"--compute\",\n        type=int,\n        default=10,\n        help=\"Maximum number of agent loops (default: 10)\",\n    )\n    parser.add_argument(\n        \"-o\",\n        \"--output-file\",\n        default=\"output_relevant_files.json\",\n        help=\"Path to output JSON file with relevant files (default: output_relevant_files.json)\",\n    )\n    parser.add_argument(\n        \"--use-ripgrep\",\n        action=\"store_true\",\n        help=\"Use ripgrep to efficiently search file contents\"\n    )\n    parser.add_argument(\n        \"--max-ripgrep-files\",\n        type=int, \n        default=10,\n        help=\"Maximum number of files to return from ripgrep search\"\n    )\n    args = parser.parse_args()\n\n    # Configure the API key\n    ANTHROPIC_API_KEY = os.getenv(\"ANTHROPIC_API_KEY\")\n    if not ANTHROPIC_API_KEY:\n        console.print(\n            \"[red]Error: ANTHROPIC_API_KEY environment variable is not set[/red]\"\n        )\n        console.print(\n            \"Please get your API key from https://console.anthropic.com/settings/keys\"\n        )\n        console.print(\"Then set it with: export ANTHROPIC_API_KEY='your-api-key-here'\")\n        sys.exit(1)\n\n    client = Anthropic(api_key=ANTHROPIC_API_KEY)\n\n    # Set global variables\n    global USER_PROMPT, OUTPUT_FILE\n    USER_PROMPT = args.prompt\n    OUTPUT_FILE = args.output_file\n\n    # Configure quiet mode\n    if args.quiet:\n        console.quiet = True\n\n    # For the first initialization, create the completed prompt\n    # Will update this variable before each API call\n    completed_prompt = (\n        
AGENT_PROMPT.replace(\"{{user_request}}\", args.prompt)\n        .replace(\"{{directory}}\", args.directory)\n        .replace(\"{{globs}}\", str(args.globs))\n        .replace(\"{{extensions}}\", str(args.extensions))\n        .replace(\"{{file_line_limit}}\", str(args.file_line_limit))\n        .replace(\"{{limit}}\", str(args.limit))\n        .replace(\"{{output_file}}\", OUTPUT_FILE)\n        .replace(\"{{use_ripgrep}}\", str(args.use_ripgrep))\n        .replace(\"{{relevant_files}}\", \"No relevant files found yet.\")\n    )\n\n    # Initialize messages with proper typing for Anthropic chat\n    messages = [{\"role\": \"user\", \"content\": completed_prompt}]\n\n    compute_iterations = 0\n    break_loop = False\n    # Main agent loop\n    while True:\n        if break_loop or compute_iterations >= args.compute:\n            break\n\n        console.rule(\n            f\"[yellow]Agent Loop {compute_iterations+1}/{args.compute}[/yellow]\"\n        )\n        compute_iterations += 1\n\n        try:\n            # Before each API call, update the completed prompt with the current relevant files\n            if RELEVANT_FILES:\n                formatted_files = \"\\n\".join([f\"- {file}\" for file in RELEVANT_FILES])\n                file_count = f\"Total: {len(RELEVANT_FILES)}/{args.limit} files\"\n                relevant_files_section = f\"{file_count}\\n{formatted_files}\"\n            else:\n                relevant_files_section = \"No relevant files found yet.\"\n\n            # Update the first message with the latest relevant files information\n            completed_prompt = (\n                AGENT_PROMPT.replace(\"{{user_request}}\", args.prompt)\n                .replace(\"{{directory}}\", args.directory)\n                .replace(\"{{globs}}\", str(args.globs))\n                .replace(\"{{extensions}}\", str(args.extensions))\n                .replace(\"{{file_line_limit}}\", str(args.file_line_limit))\n                .replace(\"{{limit}}\", 
str(args.limit))\n                .replace(\"{{output_file}}\", OUTPUT_FILE)\n                .replace(\"{{use_ripgrep}}\", str(args.use_ripgrep))\n                .replace(\"{{relevant_files}}\", relevant_files_section)\n            )\n\n            # Always update the first message with the latest information before each API call\n            messages[0][\"content\"] = completed_prompt\n\n            # Generate content with tool support\n            response = client.messages.create(\n                model=\"claude-3-7-sonnet-20250219\",\n                system=\"You are a codebase context builder. Use the available tools to search, filter and determine which files in the codebase are relevant to the prompt (user query).\",\n                messages=messages,\n                tools=TOOLS,\n                max_tokens=4000,\n                thinking={\"type\": \"enabled\", \"budget_tokens\": 2000},\n            )\n            \n            # Track token usage\n            global INPUT_TOKENS, OUTPUT_TOKENS\n            if hasattr(response, 'usage') and response.usage:\n                INPUT_TOKENS += response.usage.input_tokens\n                OUTPUT_TOKENS += response.usage.output_tokens\n                console.log(f\"[dim]Token usage this call: {response.usage.input_tokens} input, {response.usage.output_tokens} output[/dim]\")\n\n            # Extract thinking block and other content\n            thinking_block = None\n            tool_use_block = None\n            text_block = None\n\n            if response.content:\n                # Get the message content\n                for content_block in response.content:\n                    if content_block.type == \"thinking\":\n                        thinking_block = content_block\n                        previous_thinking = thinking_block\n                    elif content_block.type == \"tool_use\":\n                        tool_use_block = content_block\n                        # Access the proper attributes 
directly\n                        tool_name = content_block.name\n                        tool_input = content_block.input\n                        tool_id = content_block.id\n                    elif content_block.type == \"text\":\n                        text_block = content_block\n                        console.print(\n                            f\"[cyan]Model response:[/cyan] {content_block.text}\"\n                        )\n\n                # Handle text responses if there was no tool use\n                if not tool_use_block and text_block:\n                    messages.append(\n                        {  # type: ignore\n                            \"role\": \"assistant\",\n                            \"content\": [\n                                *([thinking_block] if thinking_block else []),\n                                {\"type\": \"text\", \"text\": text_block.text},\n                            ],\n                        }\n                    )\n                    break_loop = True\n                    continue\n\n                # We need a tool use block to proceed\n                if tool_use_block:\n                    console.print(\n                        f\"[blue]Tool Call:[/blue] {tool_name}({json.dumps(tool_input, indent=2)})\"\n                    )\n\n                    try:\n                        # Execute the appropriate tool based on name\n                        if tool_name == \"git_list_files\":\n                            directory = tool_input.get(\"directory\", args.directory)\n                            globs = tool_input.get(\"globs\", args.globs)\n                            extensions = tool_input.get(\"extensions\", args.extensions)\n                            result = git_list_files(\n                                reasoning=tool_input[\"reasoning\"],\n                                directory=directory,\n                                globs=globs,\n                                extensions=extensions,\n   
                         )\n                        elif tool_name == \"check_file_paths_line_length\":\n                            result = check_file_paths_line_length(\n                                reasoning=tool_input[\"reasoning\"],\n                                file_paths=tool_input[\"file_paths\"],\n                                file_line_limit=args.file_line_limit,\n                            )\n                        elif tool_name == \"determine_if_files_are_relevant\":\n                            result = determine_if_files_are_relevant(\n                                reasoning=tool_input[\"reasoning\"],\n                                file_paths=tool_input[\"file_paths\"],\n                            )\n                        elif tool_name == \"add_relevant_files\":\n                            result = add_relevant_files(\n                                reasoning=tool_input[\"reasoning\"],\n                                file_paths=tool_input[\"file_paths\"],\n                            )\n                        elif tool_name == \"search_codebase_with_ripgrep\":\n                            result = search_codebase_with_ripgrep(\n                                reasoning=tool_input[\"reasoning\"],\n                                query=tool_input[\"query\"],\n                                base_path=tool_input.get(\"base_path\", args.directory),\n                                max_files=tool_input.get(\"max_files\", args.max_ripgrep_files),\n                                extensions=tool_input.get(\"extensions\", args.extensions),\n                                globs=tool_input.get(\"globs\", args.globs),\n                            )\n                        elif tool_name == \"complete_task_output_relevant_files\":\n                            result = complete_task_output_relevant_files(\n                                reasoning=tool_input[\"reasoning\"],\n                            )\n                            # 
Indicate that we're done after writing the output\n                            break_loop = True\n                        else:\n                            raise Exception(f\"Unknown tool call: {tool_name}\")\n\n                        console.print(\n                            f\"[blue]Tool Call Result:[/blue] {tool_name}(...) -> \"\n                        )\n\n                        console.print(\n                            Panel.fit(\n                                str(result),\n                                border_style=\"blue\",\n                            )\n                        )\n\n                        # Append the tool result to messages\n                        messages.append(\n                            {  # type: ignore\n                                \"role\": \"assistant\",\n                                \"content\": [\n                                    *([thinking_block] if thinking_block else []),\n                                    {\n                                        \"type\": \"tool_use\",\n                                        \"id\": tool_id,\n                                        \"name\": tool_name,\n                                        \"input\": tool_input,\n                                    },\n                                ],\n                            }\n                        )\n\n                        messages.append(\n                            {  # type: ignore\n                                \"role\": \"user\",\n                                \"content\": [\n                                    {\n                                        \"type\": \"tool_result\",\n                                        \"tool_use_id\": tool_id,\n                                        \"content\": json.dumps(result),\n                                    }\n                                ],\n                            }\n                        )\n\n                    except Exception as e:\n        
                error_msg = f\"Error executing {tool_name}: {e}\"\n                        console.print(f\"[red]{error_msg}[/red]\")\n\n                        # Append the error to messages\n                        messages.append(\n                            {  # type: ignore\n                                \"role\": \"assistant\",\n                                \"content\": [\n                                    *([thinking_block] if thinking_block else []),\n                                    {\n                                        \"type\": \"tool_use\",\n                                        \"id\": tool_id,\n                                        \"name\": tool_name,\n                                        \"input\": tool_input,\n                                    },\n                                ],\n                            }\n                        )\n\n                        messages.append(\n                            {  # type: ignore\n                                \"role\": \"user\",\n                                \"content\": [\n                                    {\n                                        \"type\": \"tool_result\",\n                                        \"tool_use_id\": tool_id,\n                                        \"content\": str(error_msg),\n                                    }\n                                ],\n                            }\n                        )\n\n                    # No need to update messages here since we're updating at the start of each loop iteration\n\n        except Exception as e:\n            console.print(f\"[red]Error in agent loop: {str(e)}[/red]\")\n            raise e\n\n    # Print the final list of relevant files\n    console.rule(\"[green]Relevant Files[/green]\")\n    for i, file_path in enumerate(RELEVANT_FILES, 1):\n        console.print(f\"{i}. 
{file_path}\")\n    \n    # Display token usage statistics\n    console.rule(\"[yellow]Token Usage Summary[/yellow]\")\n    display_token_usage()\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "sfa_duckdb_anthropic_v2.py",
    "content": "#!/usr/bin/env -S uv run --script\n\n# /// script\n# dependencies = [\n#   \"anthropic>=0.45.2\",\n#   \"rich>=13.7.0\",\n# ]\n# ///\n\n\"\"\"\n/// Example Usage\n\n# Run DuckDB agent with default compute loops (3)\nuv run sfa_duckdb_anthropic_v2.py -d ./data/analytics.db -p \"Show me all users with score above 80\"\n\n# Run with custom compute loops\nuv run sfa_duckdb_anthropic_v2.py -d ./data/analytics.db -p \"Show me all users with score above 80\" -c 5\n\n///\n\"\"\"\n\nimport os\nimport sys\nimport json\nimport argparse\nimport subprocess\nfrom typing import List\nfrom rich.console import Console\nfrom rich.panel import Panel\nfrom anthropic import Anthropic\n\n# Initialize rich console\nconsole = Console()\n\n\nAGENT_PROMPT = \"\"\"<purpose>\n    You are a world-class expert at crafting precise DuckDB SQL queries.\n    Your goal is to generate accurate queries that exactly match the user's data needs.\n</purpose>\n\n<instructions>\n    <instruction>Use the provided tools to explore the database and construct the perfect query.</instruction>\n    <instruction>Start by listing tables to understand what's available.</instruction>\n    <instruction>Describe tables to understand their schema and columns.</instruction>\n    <instruction>Sample tables to see actual data patterns.</instruction>\n    <instruction>Test queries before finalizing them.</instruction>\n    <instruction>Only call run_final_sql_query when you're confident the query is perfect.</instruction>\n    <instruction>Be thorough but efficient with tool usage.</instruction>\n    <instruction>If you find your run_test_sql_query tool call returns an error or won't satisfy the user request, try to fix the query or try a different query.</instruction>\n    <instruction>Think step by step about what information you need.</instruction>\n    <instruction>Be sure to specify every parameter for each tool call.</instruction>\n    <instruction>Every tool call should have a reasoning parameter 
which gives you a place to explain why you are calling the tool.</instruction>\n</instructions>\n\n<tools>\n    <tool>\n        <name>list_tables</name>\n        <description>Returns list of available tables in database</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Why we need to list tables relative to user request</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n    \n    <tool>\n        <name>describe_table</name>\n        <description>Returns schema info for specified table</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Why we need to describe this table</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>table_name</name>\n                <type>string</type>\n                <description>Name of table to describe</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n    \n    <tool>\n        <name>sample_table</name>\n        <description>Returns sample rows from specified table, always specify row_sample_size</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Why we need to sample this table</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>table_name</name>\n                <type>string</type>\n                <description>Name of table to sample</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>row_sample_size</name>\n                
<type>integer</type>\n                <description>Number of rows to sample aim for 3-5 rows</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n    \n    <tool>\n        <name>run_test_sql_query</name>\n        <description>Tests a SQL query and returns results (only visible to agent)</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Why we're testing this specific query</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>sql_query</name>\n                <type>string</type>\n                <description>The SQL query to test</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n    \n    <tool>\n        <name>run_final_sql_query</name>\n        <description>Runs the final validated SQL query and shows results to user</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Final explanation of how query satisfies user request</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>sql_query</name>\n                <type>string</type>\n                <description>The validated SQL query to run</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n</tools>\n\n<user-request>\n    {{user_request}}\n</user-request>\n\"\"\"\n\n\ndef list_tables(reasoning: str) -> List[str]:\n    \"\"\"Returns a list of tables in the database.\n\n    The agent uses this to discover available tables and make informed decisions.\n\n    Args:\n        reasoning: Explanation of why we're listing tables relative to 
user request\n\n    Returns:\n        List of table names as strings\n    \"\"\"\n    try:\n        result = subprocess.run(\n            f'duckdb {DB_PATH} -c \".tables\"',\n            shell=True,\n            text=True,\n            capture_output=True,\n        )\n        console.log(f\"[blue]List Tables Tool[/blue] - Reasoning: {reasoning}\")\n        return result.stdout.strip().split(\"\\n\")\n    except Exception as e:\n        console.log(f\"[red]Error listing tables: {str(e)}[/red]\")\n        return []\n\n\ndef describe_table(reasoning: str, table_name: str) -> str:\n    \"\"\"Returns schema information about the specified table.\n\n    The agent uses this to understand table structure and available columns.\n\n    Args:\n        reasoning: Explanation of why we're describing this table\n        table_name: Name of table to describe\n\n    Returns:\n        String containing table schema information\n    \"\"\"\n    try:\n        result = subprocess.run(\n            f'duckdb {DB_PATH} -c \"DESCRIBE {table_name};\"',\n            shell=True,\n            text=True,\n            capture_output=True,\n        )\n        console.log(\n            f\"[blue]Describe Table Tool[/blue] - Table: {table_name} - Reasoning: {reasoning}\"\n        )\n        return result.stdout\n    except Exception as e:\n        console.log(f\"[red]Error describing table: {str(e)}[/red]\")\n        return \"\"\n\n\ndef sample_table(reasoning: str, table_name: str, row_sample_size: int) -> str:\n    \"\"\"Returns a sample of rows from the specified table.\n\n    The agent uses this to understand actual data content and patterns.\n\n    Args:\n        reasoning: Explanation of why we're sampling this table\n        table_name: Name of table to sample from\n        row_sample_size: Number of rows to sample aim for 3-5 rows\n\n    Returns:\n        String containing sample rows in readable format\n    \"\"\"\n    try:\n        result = subprocess.run(\n            f'duckdb {DB_PATH} 
-c \"SELECT * FROM {table_name} LIMIT {row_sample_size};\"',\n            shell=True,\n            text=True,\n            capture_output=True,\n        )\n        console.log(\n            f\"[blue]Sample Table Tool[/blue] - Table: {table_name} - Rows: {row_sample_size} - Reasoning: {reasoning}\"\n        )\n        return result.stdout\n    except Exception as e:\n        console.log(f\"[red]Error sampling table: {str(e)}[/red]\")\n        return \"\"\n\n\ndef run_test_sql_query(reasoning: str, sql_query: str) -> str:\n    \"\"\"Executes a test SQL query and returns results.\n\n    The agent uses this to validate queries before finalizing them.\n    Results are only shown to the agent, not the user.\n\n    Args:\n        reasoning: Explanation of why we're running this test query\n        sql_query: The SQL query to test\n\n    Returns:\n        Query results as a string\n    \"\"\"\n    try:\n        result = subprocess.run(\n            f'duckdb {DB_PATH} -c \"{sql_query}\"',\n            shell=True,\n            text=True,\n            capture_output=True,\n        )\n        console.log(f\"[blue]Test Query Tool[/blue] - Reasoning: {reasoning}\")\n        console.log(f\"[dim]Query: {sql_query}[/dim]\")\n        return result.stdout\n    except Exception as e:\n        console.log(f\"[red]Error running test query: {str(e)}[/red]\")\n        return str(e)\n\n\ndef run_final_sql_query(reasoning: str, sql_query: str) -> str:\n    \"\"\"Executes the final SQL query and returns results to user.\n\n    This is the last tool call the agent should make after validating the query.\n\n    Args:\n        reasoning: Final explanation of how this query satisfies user request\n        sql_query: The validated SQL query to run\n\n    Returns:\n        Query results as a string\n    \"\"\"\n    try:\n        result = subprocess.run(\n            f'duckdb {DB_PATH} -c \"{sql_query}\"',\n            shell=True,\n            text=True,\n            capture_output=True,\n        
)\n        console.log(\n            Panel(\n                f\"[green]Final Query Tool[/green]\\nReasoning: {reasoning}\\nQuery: {sql_query}\"\n            )\n        )\n        return result.stdout\n    except Exception as e:\n        console.log(f\"[red]Error running final query: {str(e)}[/red]\")\n        return str(e)\n\n\ndef main():\n    # Set up argument parser\n    parser = argparse.ArgumentParser(description=\"DuckDB Agent using Anthropic API\")\n    parser.add_argument(\n        \"-d\", \"--db\", required=True, help=\"Path to DuckDB database file\"\n    )\n    parser.add_argument(\"-p\", \"--prompt\", required=True, help=\"The user's request\")\n    parser.add_argument(\n        \"-c\",\n        \"--compute\",\n        type=int,\n        default=10,\n        help=\"Maximum number of agent loops (default: 10)\",\n    )\n    args = parser.parse_args()\n\n    # Configure the API key\n    ANTHROPIC_API_KEY = os.getenv(\"ANTHROPIC_API_KEY\")\n    if not ANTHROPIC_API_KEY:\n        console.print(\n            \"[red]Error: ANTHROPIC_API_KEY environment variable is not set[/red]\"\n        )\n        console.print(\"Please get your API key from your Anthropic dashboard\")\n        console.print(\"Then set it with: export ANTHROPIC_API_KEY='your-api-key-here'\")\n        sys.exit(1)\n\n    # Set global DB_PATH for tool functions\n    global DB_PATH\n    DB_PATH = args.db\n\n    # Initialize Anthropic client\n    client = Anthropic()\n\n    # Create a single combined prompt based on the full template\n    completed_prompt = AGENT_PROMPT.replace(\"{{user_request}}\", args.prompt)\n    messages = [{\"role\": \"user\", \"content\": completed_prompt}]\n\n    compute_iterations = 0\n\n    # Main agent loop\n    while True:\n        console.rule(\n            f\"[yellow]Agent Loop {compute_iterations+1}/{args.compute}[/yellow]\"\n        )\n        compute_iterations += 1\n\n        if compute_iterations >= args.compute:\n            console.print(\n                
\"[yellow]Warning: Reached maximum compute loops without final query[/yellow]\"\n            )\n            raise Exception(\n                f\"Maximum compute loops reached: {compute_iterations}/{args.compute}\"\n            )\n\n        try:\n            # Add the user's initial prompt if this is the first iteration\n            if compute_iterations == 1:\n                messages.append({\"role\": \"user\", \"content\": args.prompt})\n\n            # Generate content with tool support\n            response = client.messages.create(\n                model=\"claude-3-5-sonnet-20241022\",\n                max_tokens=1024,\n                messages=messages,\n                tools=[\n                    {\n                        \"name\": \"list_tables\",\n                        \"description\": \"Returns list of available tables in database\",\n                        \"input_schema\": {\n                            \"type\": \"object\",\n                            \"properties\": {\n                                \"reasoning\": {\n                                    \"type\": \"string\",\n                                    \"description\": \"Explanation for listing tables\",\n                                }\n                            },\n                            \"required\": [\"reasoning\"],\n                        },\n                    },\n                    {\n                        \"name\": \"describe_table\",\n                        \"description\": \"Returns schema info for specified table\",\n                        \"input_schema\": {\n                            \"type\": \"object\",\n                            \"properties\": {\n                                \"reasoning\": {\n                                    \"type\": \"string\",\n                                    \"description\": \"Why we need to describe this table\",\n                                },\n                                \"table_name\": {\n                    
                \"type\": \"string\",\n                                    \"description\": \"Name of table to describe\",\n                                },\n                            },\n                            \"required\": [\"reasoning\", \"table_name\"],\n                        },\n                    },\n                    {\n                        \"name\": \"sample_table\",\n                        \"description\": \"Returns sample rows from specified table\",\n                        \"input_schema\": {\n                            \"type\": \"object\",\n                            \"properties\": {\n                                \"reasoning\": {\n                                    \"type\": \"string\",\n                                    \"description\": \"Why we need to sample this table\",\n                                },\n                                \"table_name\": {\n                                    \"type\": \"string\",\n                                    \"description\": \"Name of table to sample\",\n                                },\n                                \"row_sample_size\": {\n                                    \"type\": \"integer\",\n                                    \"description\": \"Number of rows to sample aim for 3-5 rows\",\n                                },\n                            },\n                            \"required\": [\"reasoning\", \"table_name\", \"row_sample_size\"],\n                        },\n                    },\n                    {\n                        \"name\": \"run_test_sql_query\",\n                        \"description\": \"Tests a SQL query and returns results (only visible to agent)\",\n                        \"input_schema\": {\n                            \"type\": \"object\",\n                            \"properties\": {\n                                \"reasoning\": {\n                                    \"type\": \"string\",\n                              
      \"description\": \"Why we're testing this specific query\",\n                                },\n                                \"sql_query\": {\n                                    \"type\": \"string\",\n                                    \"description\": \"The SQL query to test\",\n                                },\n                            },\n                            \"required\": [\"reasoning\", \"sql_query\"],\n                        },\n                    },\n                    {\n                        \"name\": \"run_final_sql_query\",\n                        \"description\": \"Runs the final validated SQL query and shows results to user\",\n                        \"input_schema\": {\n                            \"type\": \"object\",\n                            \"properties\": {\n                                \"reasoning\": {\n                                    \"type\": \"string\",\n                                    \"description\": \"Final explanation of how query satisfies user request\",\n                                },\n                                \"sql_query\": {\n                                    \"type\": \"string\",\n                                    \"description\": \"The validated SQL query to run\",\n                                },\n                            },\n                            \"required\": [\"reasoning\", \"sql_query\"],\n                        },\n                    },\n                ],\n                tool_choice={\"type\": \"any\"},  # Always force a tool call\n            )\n\n            # Look for tool calls in the response (expecting ToolUseBlock objects)\n            tool_calls = []\n\n            for block in response.content:\n                if hasattr(block, \"type\") and block.type == \"tool_use\":\n                    tool_calls.append(block)\n\n            if tool_calls:\n                for tool_call in tool_calls:\n                    tool_use_id = tool_call.id\n     
               func_name = tool_call.name\n                    func_args = (\n                        tool_call.input\n                    )  # already a dict; no need to call json.loads\n\n                    console.print(\n                        f\"[blue]Tool Call:[/blue] {func_name}({json.dumps(func_args)})\"\n                    )\n\n                    messages.append({\"role\": \"assistant\", \"content\": response.content})\n\n                    try:\n                        if func_name == \"list_tables\":\n                            result = list_tables(reasoning=func_args[\"reasoning\"])\n                        elif func_name == \"describe_table\":\n                            result = describe_table(\n                                reasoning=func_args[\"reasoning\"],\n                                table_name=func_args[\"table_name\"],\n                            )\n                        elif func_name == \"sample_table\":\n                            result = sample_table(\n                                reasoning=func_args[\"reasoning\"],\n                                table_name=func_args[\"table_name\"],\n                                row_sample_size=func_args[\"row_sample_size\"],\n                            )\n                        elif func_name == \"run_test_sql_query\":\n                            result = run_test_sql_query(\n                                reasoning=func_args[\"reasoning\"],\n                                sql_query=func_args[\"sql_query\"],\n                            )\n                        elif func_name == \"run_final_sql_query\":\n                            result = run_final_sql_query(\n                                reasoning=func_args[\"reasoning\"],\n                                sql_query=func_args[\"sql_query\"],\n                            )\n                            console.print(\"\\n[green]Final Results:[/green]\")\n                            console.print(result)\n                
            return\n                        else:\n                            raise Exception(f\"Unknown tool call: {func_name}\")\n\n                        console.print(\n                            f\"[blue]Tool Call Result:[/blue] {func_name}(...) ->\\n{result}\"\n                        )\n\n                        messages.append(\n                            {\n                                \"role\": \"user\",\n                                \"content\": [\n                                    {\n                                        \"type\": \"tool_result\",\n                                        \"tool_use_id\": tool_use_id,\n                                        \"content\": str(result),\n                                    }\n                                ],\n                            }\n                        )\n\n                    except Exception as e:\n                        error_msg = f\"Error executing {func_name}: {str(e)}\"\n                        console.print(f\"[red]{error_msg}[/red]\")\n                        # Anthropic expects tool errors as a user-role tool_result block, not an OpenAI-style \"tool\" role\n                        messages.append(\n                            {\n                                \"role\": \"user\",\n                                \"content\": [\n                                    {\n                                        \"type\": \"tool_result\",\n                                        \"tool_use_id\": tool_use_id,\n                                        \"content\": error_msg,\n                                        \"is_error\": True,\n                                    }\n                                ],\n                            }\n                        )\n                        continue\n\n            else:\n                raise Exception(\"No tool calls found in response - should never happen\")\n\n        except Exception as e:\n            console.print(f\"[red]Error in agent loop: {str(e)}[/red]\")\n            raise e\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "sfa_duckdb_gemini_v1.py",
    "content": "#!/usr/bin/env python3\n\n# /// script\n# dependencies = [\n#   \"google-genai>=1.1.0\",\n# ]\n# ///\n\n\"\"\"\n/// Example Usage\n\n# generates and executes DuckDB command (default)\nuv run sfa_duckdb_gemini_v1.py --db ./data/analytics.db \"Filter employees with salary above 50000 and export to high_salary_employees.csv\"\n\n# generates DuckDB command only without executing\nuv run sfa_duckdb_gemini_v1.py --db ./data/analytics.db --no-exe \"Select name and department from employees table and save to employees.json\"\n\n///\n\"\"\"\n\nimport os\nimport sys\nimport argparse\nimport subprocess\nfrom google import genai\n\nDUCKDB_PROMPT = \"\"\"<purpose>\n    You are a world-class expert at crafting precise DuckDB CLI commands for database operations.\n    Your goal is to generate accurate, minimal DuckDB commands that exactly match the user's data manipulation needs.\n</purpose>\n\n<instructions>\n    <instruction>Return ONLY the DuckDB command - no explanations, comments, or extra text.</instruction>\n    <instruction>Create the command that satisfies the user query against the duckdb-database-path (e.g., mydb.db).</instruction>\n    <instruction>Ensure the command follows DuckDB best practices for efficiency and readability.</instruction>\n    <instruction>When the user requests to output results to a file, generate a command that writes to the specified file, or create a filename based on a shortened version of the user request and the input database name.</instruction>\n    <instruction>If output is requested in CSV format, use the DuckDB COPY command with WITH (FORMAT CSV, HEADER, DELIMITER ',').</instruction>\n    <instruction>If output is requested in JSON format, use the DuckDB COPY command with WITH (FORMAT JSON) to export results as JSON.</instruction>\n    <instruction>When filtering or processing data, embed the query inside a COPY command if exporting, or run the query directly if no export is needed.</instruction>\n    
<instruction>Output your response by itself, do not use backticks or markdown formatting. We're going to run your response as a shell command immediately.</instruction>\n    <instruction>If your results involve a table or query result set, default to exporting as a valid CSV or JSON file as requested.</instruction>\n    <instruction>If the user request is to export to a file, ensure the file is created in the same directory as the duckdb-database-path unless specified otherwise.</instruction>\n</instructions>\n\n<examples>\n    <example>\n        <duckdb-database-path>\n            mydb.db\n        </duckdb-database-path>\n        <user-request>\n            Select the \"name\" and \"age\" columns from table employees where age > 30\n        </user-request>\n        <duckdb-command>\n            duckdb mydb.db -c \"SELECT name, age FROM employees WHERE age > 30;\"\n        </duckdb-command>\n    </example>\n    <example>\n        <duckdb-database-path>\n            data/order_data.db\n        </duckdb-database-path>\n        <user-request>\n            Filter records in table orders where total > 100 and export to orders_high.csv\n        </user-request>\n        <duckdb-command>\n            duckdb data/order_data.db -c \"COPY (SELECT * FROM orders WHERE total > 100) TO 'orders_high.csv' WITH (FORMAT CSV, HEADER, DELIMITER ',');\"\n        </duckdb-command>\n    </example>\n    <example>\n        <duckdb-database-path>\n            analytics.db\n        </duckdb-database-path>\n        <user-request>\n            Convert table customers to JSON and save as customers.json\n        </user-request>\n        <duckdb-command>\n            duckdb analytics.db -c \"COPY (SELECT * FROM customers) TO 'customers.json' WITH (FORMAT JSON);\"\n        </duckdb-command>\n    </example>\n    <example>\n        <duckdb-database-path>\n            mydb.db\n        </duckdb-database-path>\n        <user-request>\n            Export the result of a join between employees and 
departments from mydb.db to employees_departments.csv\n        </user-request>\n        <duckdb-command>\n            duckdb mydb.db -c \"COPY (SELECT e.name, d.department FROM employees e JOIN departments d ON e.dept_id = d.id) TO 'employees_departments.csv' WITH (FORMAT CSV, HEADER, DELIMITER ',');\"\n        </duckdb-command>\n    </example>\n    <example>\n        <duckdb-database-path>\n            mydb.db\n        </duckdb-database-path>\n        <user-request>\n            Retrieve all records from table sales in mydb.db where region is 'North'\n        </user-request>\n        <duckdb-command>\n            duckdb mydb.db -c \"SELECT * FROM sales WHERE region = 'North';\"\n        </duckdb-command>\n    </example>\n</examples>\n\n<duckdb-database-path>\n    {{database_path}}\n</duckdb-database-path>\n\n<user-request>\n    {{user_request}}\n</user-request>\n\nYour DuckDB command:\"\"\"\n\n\ndef main():\n    # Set up argument parser\n    parser = argparse.ArgumentParser(\n        description=\"Generate DuckDB CLI command using Gemini API\"\n    )\n    parser.add_argument(\n        \"prompt\",\n        help=\"The DuckDB command request to send to Gemini\",\n    )\n    parser.add_argument(\n        \"--db\",\n        required=True,\n        help=\"Path to DuckDB database file\",\n    )\n    parser.add_argument(\n        \"--no-exe\",\n        action=\"store_true\",\n        help=\"Generate the DuckDB command without executing it\",\n    )\n    args = parser.parse_args()\n\n    # Configure the API key\n    GEMINI_API_KEY = os.getenv(\"GEMINI_API_KEY\")\n    if not GEMINI_API_KEY:\n        print(\"Error: GEMINI_API_KEY environment variable is not set\")\n        print(\"Please get your API key from https://aistudio.google.com/app/apikey\")\n        print(\"Then set it with: export GEMINI_API_KEY='your-api-key-here'\")\n        sys.exit(1)\n\n    # Initialize client\n    client = genai.Client(\n        api_key=GEMINI_API_KEY, http_options={\"api_version\": 
\"v1alpha\"}\n    )\n\n    try:\n        # Replace template variables in the prompt\n        prompt = DUCKDB_PROMPT.replace(\"{{database_path}}\", args.db)\n        prompt = prompt.replace(\"{{user_request}}\", args.prompt)\n\n        # Generate DuckDB command\n        response = client.models.generate_content(\n            model=\"gemini-2.0-flash-001\", contents=prompt\n        )\n        duckdb_command = response.text.strip()\n        print(\"\\n🤖 Generated DuckDB command:\", duckdb_command)\n\n        # Execute the command unless --no-exe flag is present\n        if not args.no_exe:\n            print(\"\\n🔍 Executing command...\")\n            # Execute the command using subprocess\n            result = subprocess.run(\n                duckdb_command, shell=True, text=True, capture_output=True\n            )\n            if result.returncode != 0:\n                print(\n                    f\"\\n❌ Error executing command (return code: {result.returncode}):\",\n                    result.stderr,\n                )\n                sys.exit(1)\n\n            if result.stderr:\n                print(\"❌ Error executing command:\", result.stderr)\n\n            if result.stdout:\n                print(\"✅ Command executed successfully:\")\n                print(result.stdout)\n\n    except Exception as e:\n        print(f\"\\nError occurred: {str(e)}\")\n        sys.exit(1)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "sfa_duckdb_gemini_v2.py",
    "content": "#!/usr/bin/env python3\n\n# /// script\n# dependencies = [\n#   \"google-genai>=1.1.0\",\n#   \"rich>=13.7.0\",\n# ]\n# ///\n\n\"\"\"\n/// Example Usage\n\n# Run DuckDB agent with default compute loops (10)\nuv run sfa_duckdb_gemini_v2.py -d ./data/analytics.db -p \"Show me all users with score above 80\"\n\n# Run with custom compute loops\nuv run sfa_duckdb_gemini_v2.py -d ./data/analytics.db -p \"Show me all users with score above 80\" -c 5\n\n///\n\"\"\"\n\nimport os\nimport sys\nimport json\nimport argparse\nimport subprocess\nfrom typing import List\nfrom rich.console import Console\nfrom rich.panel import Panel\nfrom google import genai\nfrom google.genai import types\n\n# Initialize rich console\nconsole = Console()\n\n\ndef list_tables(reasoning: str) -> List[str]:\n    \"\"\"Returns a list of tables in the database.\n\n    The agent uses this to discover available tables and make informed decisions.\n\n    Args:\n        reasoning: Explanation of why we're listing tables relative to user request\n\n    Returns:\n        List of table names as strings\n    \"\"\"\n    try:\n        result = subprocess.run(\n            f'duckdb {DB_PATH} -c \".tables\"',\n            # f\"duckdb {DB_PATH} -c \\\"SELECT name FROM sqlite_master WHERE type='table';\\\"\",\n            shell=True,\n            text=True,\n            capture_output=True,\n        )\n        console.log(f\"[blue]List Tables Tool[/blue] - Reasoning: {reasoning}\")\n        return result.stdout.strip().split(\"\\n\")\n    except Exception as e:\n        console.log(f\"[red]Error listing tables: {str(e)}[/red]\")\n        return []\n\n\ndef describe_table(reasoning: str, table_name: str) -> str:\n    \"\"\"Returns schema information about the specified table.\n\n    The agent uses this to understand table structure and available columns.\n\n    Args:\n        reasoning: Explanation of why we're describing this table\n        table_name: Name of table to describe\n\n    Returns:\n     
   String containing table schema information\n    \"\"\"\n    try:\n        result = subprocess.run(\n            f'duckdb {DB_PATH} -c \"DESCRIBE {table_name};\"',\n            shell=True,\n            text=True,\n            capture_output=True,\n        )\n        console.log(\n            f\"[blue]Describe Table Tool[/blue] - Table: {table_name} - Reasoning: {reasoning}\"\n        )\n        return result.stdout\n    except Exception as e:\n        console.log(f\"[red]Error describing table: {str(e)}[/red]\")\n        return \"\"\n\n\ndef sample_table(reasoning: str, table_name: str, row_sample_size: int) -> str:\n    \"\"\"Returns a sample of rows from the specified table.\n\n    The agent uses this to understand actual data content and patterns.\n\n    Args:\n        reasoning: Explanation of why we're sampling this table\n        table_name: Name of table to sample from\n        row_sample_size: Number of rows to sample aim for 3-5 rows\n\n    Returns:\n        String containing sample rows in readable format\n    \"\"\"\n    try:\n        result = subprocess.run(\n            f'duckdb {DB_PATH} -c \"SELECT * FROM {table_name} LIMIT {row_sample_size};\"',\n            shell=True,\n            text=True,\n            capture_output=True,\n        )\n        console.log(\n            f\"[blue]Sample Table Tool[/blue] - Table: {table_name} - Rows: {row_sample_size} - Reasoning: {reasoning}\"\n        )\n        return result.stdout\n    except Exception as e:\n        console.log(f\"[red]Error sampling table: {str(e)}[/red]\")\n        return \"\"\n\n\ndef run_test_sql_query(reasoning: str, sql_query: str) -> str:\n    \"\"\"Executes a test SQL query and returns results.\n\n    The agent uses this to validate queries before finalizing them.\n    Results are only shown to the agent, not the user.\n\n    Args:\n        reasoning: Explanation of why we're running this test query\n        sql_query: The SQL query to test\n\n    Returns:\n        Query results as a 
string\n    \"\"\"\n    try:\n        result = subprocess.run(\n            f'duckdb {DB_PATH} -c \"{sql_query}\"',\n            shell=True,\n            text=True,\n            capture_output=True,\n        )\n        console.log(f\"[blue]Test Query Tool[/blue] - Reasoning: {reasoning}\")\n        console.log(f\"[dim]Query: {sql_query}[/dim]\")\n        return result.stdout\n    except Exception as e:\n        console.log(f\"[red]Error running test query: {str(e)}[/red]\")\n        return str(e)\n\n\ndef run_final_sql_query(reasoning: str, sql_query: str) -> str:\n    \"\"\"Executes the final SQL query and returns results to user.\n\n    This is the last tool call the agent should make after validating the query.\n\n    Args:\n        reasoning: Final explanation of how this query satisfies user request\n        sql_query: The validated SQL query to run\n\n    Returns:\n        Query results as a string\n    \"\"\"\n    try:\n        result = subprocess.run(\n            f'duckdb {DB_PATH} -c \"{sql_query}\"',\n            shell=True,\n            text=True,\n            capture_output=True,\n        )\n        console.log(\n            Panel(\n                f\"[green]Final Query Tool[/green]\\nReasoning: {reasoning}\\nQuery: {sql_query}\"\n            )\n        )\n        return result.stdout\n    except Exception as e:\n        console.log(f\"[red]Error running final query: {str(e)}[/red]\")\n        return str(e)\n\n\nAGENT_PROMPT = \"\"\"<purpose>\n    You are a world-class expert at crafting precise DuckDB SQL queries.\n    Your goal is to generate accurate queries that exactly match the user's data needs.\n</purpose>\n\n<instructions>\n    <instruction>Use the provided tools to explore the database and construct the perfect query.</instruction>\n    <instruction>Start by listing tables to understand what's available.</instruction>\n    <instruction>Describe tables to understand their schema and columns.</instruction>\n    <instruction>Sample tables to see 
actual data patterns.</instruction>\n    <instruction>Test queries before finalizing them.</instruction>\n    <instruction>Only call run_final_sql_query when you're confident the query is perfect.</instruction>\n    <instruction>Be thorough but efficient with tool usage.</instruction>\n    <instruction>If you find your run_test_sql_query tool call returns an error or won't satisfy the user request, try to fix the query or try a different query.</instruction>\n    <instruction>Think step by step about what information you need.</instruction>\n    <instruction>Be sure to specify every parameter for each tool call.</instruction>\n    <instruction>Every tool call should have a reasoning parameter which gives you a place to explain why you are calling the tool.</instruction>\n</instructions>\n\n<tools>\n    <tool>\n        <name>list_tables</name>\n        <description>Returns list of available tables in database</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Why we need to list tables relative to user request</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n    \n    <tool>\n        <name>describe_table</name>\n        <description>Returns schema info for specified table</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Why we need to describe this table</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>table_name</name>\n                <type>string</type>\n                <description>Name of table to describe</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n    \n    <tool>\n        <name>sample_table</name>\n    
    <description>Returns sample rows from specified table, always specify row_sample_size</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Why we need to sample this table</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>table_name</name>\n                <type>string</type>\n                <description>Name of table to sample</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>row_sample_size</name>\n                <type>integer</type>\n                <description>Number of rows to sample aim for 3-5 rows</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n    \n    <tool>\n        <name>run_test_sql_query</name>\n        <description>Tests a SQL query and returns results (only visible to agent)</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Why we're testing this specific query</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>sql_query</name>\n                <type>string</type>\n                <description>The SQL query to test</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n    \n    <tool>\n        <name>run_final_sql_query</name>\n        <description>Runs the final validated SQL query and shows results to user</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Final explanation of how query satisfies user request</description>\n 
               <required>true</required>\n            </parameter>\n            <parameter>\n                <name>sql_query</name>\n                <type>string</type>\n                <description>The validated SQL query to run</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n</tools>\n\n<user-request>\n    {{user_request}}\n</user-request>\n\"\"\"\n\n\ndef main():\n    # Set up argument parser\n    parser = argparse.ArgumentParser(description=\"DuckDB Agent using Gemini API\")\n    parser.add_argument(\n        \"-d\", \"--db\", required=True, help=\"Path to DuckDB database file\"\n    )\n    parser.add_argument(\"-p\", \"--prompt\", required=True, help=\"The user's request\")\n    parser.add_argument(\n        \"-c\",\n        \"--compute\",\n        type=int,\n        default=10,\n        help=\"Maximum number of agent loops (default: 10)\",\n    )\n    args = parser.parse_args()\n\n    # Configure the API key\n    GEMINI_API_KEY = os.getenv(\"GEMINI_API_KEY\")\n    if not GEMINI_API_KEY:\n        console.print(\n            \"[red]Error: GEMINI_API_KEY environment variable is not set[/red]\"\n        )\n        console.print(\n            \"Please get your API key from https://aistudio.google.com/app/apikey\"\n        )\n        console.print(\"Then set it with: export GEMINI_API_KEY='your-api-key-here'\")\n        sys.exit(1)\n\n    # Set global DB_PATH for tool functions\n    global DB_PATH\n    DB_PATH = args.db\n\n    # Initialize Gemini client\n    client = genai.Client(api_key=GEMINI_API_KEY)\n\n    completed_prompt = AGENT_PROMPT.replace(\"{{user_request}}\", args.prompt)\n\n    # Initialize message history with proper Content type\n    messages = [\n        types.Content(role=\"user\", parts=[types.Part.from_text(text=completed_prompt)])\n    ]\n\n    compute_iterations = 0\n\n    # Main agent loop\n    while True:\n        console.rule(\n            f\"[yellow]Agent Loop 
{compute_iterations+1}/{args.compute}[/yellow]\"\n        )\n        compute_iterations += 1\n\n        if compute_iterations >= args.compute:\n            console.print(\n                \"[yellow]Warning: Reached maximum compute loops without final query[/yellow]\"\n            )\n            raise Exception(\n                f\"Maximum compute loops reached: {compute_iterations}/{args.compute}\"\n            )\n\n        try:\n            # Generate content with tool support\n            response = client.models.generate_content(\n                model=\"gemini-2.0-flash-001\",\n                # model=\"gemini-1.5-flash\",\n                contents=[\n                    *messages,\n                ],\n                config=types.GenerateContentConfig(\n                    tools=[\n                        list_tables,\n                        describe_table,\n                        sample_table,\n                        run_test_sql_query,\n                        run_final_sql_query,\n                    ],\n                    automatic_function_calling=types.AutomaticFunctionCallingConfig(\n                        # maximum_remote_calls=2\n                        # disable=True\n                    ),\n                    tool_config=types.ToolConfig(\n                        function_calling_config=types.FunctionCallingConfig(mode=\"ANY\")\n                    ),\n                ),\n            )\n\n            # Process tool calls\n            if response.function_calls:\n                for func_call in response.function_calls:\n                    # Extract function name and args\n                    func_name = func_call.name\n                    func_args = func_call.args\n\n                    console.print(\n                        f\"[blue]Function Call:[/blue] {func_name}({func_args})\"\n                    )\n\n                    try:\n                        # Call appropriate function\n                        if func_name == 
\"list_tables\":\n                            result = list_tables(**func_args)\n                        elif func_name == \"describe_table\":\n                            result = describe_table(**func_args)\n                        elif func_name == \"sample_table\":\n                            result = sample_table(**func_args)\n                        elif func_name == \"run_test_sql_query\":\n                            result = run_test_sql_query(**func_args)\n                        elif func_name == \"run_final_sql_query\":\n                            result = run_final_sql_query(**func_args)\n                            console.print(\"\\n[green]Final Results:[/green]\")\n                            console.print(result)\n                            return  # Exit after final query\n\n                        console.print(\n                            f\"[blue]Function Call Result:[/blue] {func_name}(...) ->\\n{result}\"\n                        )\n\n                        # Add function response as proper Content type\n                        function_response = {\"result\": str(result)}\n                        function_response_part = types.Part.from_function_response(\n                            name=func_name,\n                            response=function_response,\n                        )\n\n                        # Add model's function call as Content\n                        messages.append(response.candidates[0].content)\n\n                        messages.append(\n                            types.Content(role=\"tool\", parts=[function_response_part])\n                        )\n\n                    except Exception as e:\n                        # Add error response as proper Content type\n                        error_msg = f\"Error executing {func_name}: {str(e)}\"\n                        function_response = {\"error\": error_msg}\n                        function_response_part = types.Part.from_function_response(\n                   
         name=func_name,\n                            response=function_response,\n                        )\n                        messages.append(response.candidates[0].content)\n                        messages.append(\n                            types.Content(role=\"tool\", parts=[function_response_part])\n                        )\n\n                        console.print(f\"[red]{error_msg}[/red]\")\n                        continue\n\n            else:\n                # Add model response as proper Content type\n                messages.append(response.candidates[0].content)\n\n        except Exception as e:\n            console.print(f\"[red]Error in agent loop: {str(e)}[/red]\")\n            raise e\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "sfa_duckdb_openai_v2.py",
    "content": "# /// script\n# dependencies = [\n#   \"openai>=1.63.0\",\n#   \"rich>=13.7.0\",\n#   \"pydantic>=2.0.0\",\n# ]\n# ///\n\n\nimport os\nimport sys\nimport json\nimport argparse\nimport subprocess\nfrom typing import List\nfrom rich.console import Console\nfrom rich.panel import Panel\nimport openai\nfrom pydantic import BaseModel, Field, ValidationError\nfrom openai import pydantic_function_tool\n\n# Initialize rich console\nconsole = Console()\n\n\n# Create our list of function tools from our pydantic models\nclass ListTablesArgs(BaseModel):\n    reasoning: str = Field(\n        ..., description=\"Explanation for listing tables relative to the user request\"\n    )\n\n\nclass DescribeTableArgs(BaseModel):\n    reasoning: str = Field(..., description=\"Reason why the table schema is needed\")\n    table_name: str = Field(..., description=\"Name of the table to describe\")\n\n\nclass SampleTableArgs(BaseModel):\n    reasoning: str = Field(..., description=\"Explanation for sampling the table\")\n    table_name: str = Field(..., description=\"Name of the table to sample\")\n    row_sample_size: int = Field(\n        ..., description=\"Number of rows to sample (aim for 3-5 rows)\"\n    )\n\n\nclass RunTestSQLQuery(BaseModel):\n    reasoning: str = Field(..., description=\"Reason for testing this query\")\n    sql_query: str = Field(..., description=\"The SQL query to test\")\n\n\nclass RunFinalSQLQuery(BaseModel):\n    reasoning: str = Field(\n        ...,\n        description=\"Final explanation of how this query satisfies the user request\",\n    )\n    sql_query: str = Field(..., description=\"The validated SQL query to run\")\n\n\n# Create tools list\ntools = [\n    pydantic_function_tool(ListTablesArgs),\n    pydantic_function_tool(DescribeTableArgs),\n    pydantic_function_tool(SampleTableArgs),\n    pydantic_function_tool(RunTestSQLQuery),\n    pydantic_function_tool(RunFinalSQLQuery),\n]\n\nAGENT_PROMPT = \"\"\"<purpose>\n    You are a 
world-class expert at crafting precise DuckDB SQL queries.\n    Your goal is to generate accurate queries that exactly match the user's data needs.\n</purpose>\n\n<instructions>\n    <instruction>Use the provided tools to explore the database and construct the perfect query.</instruction>\n    <instruction>Start by listing tables to understand what's available.</instruction>\n    <instruction>Describe tables to understand their schema and columns.</instruction>\n    <instruction>Sample tables to see actual data patterns.</instruction>\n    <instruction>Test queries before finalizing them.</instruction>\n    <instruction>Only call run_final_sql_query when you're confident the query is perfect.</instruction>\n    <instruction>Be thorough but efficient with tool usage.</instruction>\n    <instruction>If you find your run_test_sql_query tool call returns an error or won't satisfy the user request, try to fix the query or try a different query.</instruction>\n    <instruction>Think step by step about what information you need.</instruction>\n    <instruction>Be sure to specify every parameter for each tool call.</instruction>\n    <instruction>Every tool call should have a reasoning parameter which gives you a place to explain why you are calling the tool.</instruction>\n</instructions>\n\n<tools>\n    <tool>\n        <name>list_tables</name>\n        <description>Returns list of available tables in database</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Why we need to list tables relative to user request</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n    \n    <tool>\n        <name>describe_table</name>\n        <description>Returns schema info for specified table</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n               
 <type>string</type>\n                <description>Why we need to describe this table</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>table_name</name>\n                <type>string</type>\n                <description>Name of table to describe</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n    \n    <tool>\n        <name>sample_table</name>\n        <description>Returns sample rows from specified table, always specify row_sample_size</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Why we need to sample this table</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>table_name</name>\n                <type>string</type>\n                <description>Name of table to sample</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>row_sample_size</name>\n                <type>integer</type>\n                <description>Number of rows to sample aim for 3-5 rows</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n    \n    <tool>\n        <name>run_test_sql_query</name>\n        <description>Tests a SQL query and returns results (only visible to agent)</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Why we're testing this specific query</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>sql_query</name>\n                <type>string</type>\n                <description>The SQL query to 
test</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n    \n    <tool>\n        <name>run_final_sql_query</name>\n        <description>Runs the final validated SQL query and shows results to user</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Final explanation of how query satisfies user request</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>sql_query</name>\n                <type>string</type>\n                <description>The validated SQL query to run</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n</tools>\n\n<user-request>\n    {{user_request}}\n</user-request>\n\"\"\"\n\n\ndef list_tables(reasoning: str) -> List[str]:\n    \"\"\"Returns a list of tables in the database.\n\n    The agent uses this to discover available tables and make informed decisions.\n\n    Args:\n        reasoning: Explanation of why we're listing tables relative to user request\n\n    Returns:\n        List of table names as strings\n    \"\"\"\n    try:\n        result = subprocess.run(\n            f'duckdb {DB_PATH} -c \".tables\"',\n            shell=True,\n            text=True,\n            capture_output=True,\n        )\n        console.log(f\"[blue]List Tables Tool[/blue] - Reasoning: {reasoning}\")\n        return result.stdout.strip().split(\"\\n\")\n    except Exception as e:\n        console.log(f\"[red]Error listing tables: {str(e)}[/red]\")\n        return []\n\n\ndef describe_table(reasoning: str, table_name: str) -> str:\n    \"\"\"Returns schema information about the specified table.\n\n    The agent uses this to understand table structure and available columns.\n\n    Args:\n        reasoning: Explanation of why we're 
describing this table\n        table_name: Name of table to describe\n\n    Returns:\n        String containing table schema information\n    \"\"\"\n    try:\n        # Argv list avoids shell interpolation of table_name and DB_PATH\n        result = subprocess.run(\n            [\"duckdb\", DB_PATH, \"-c\", f\"DESCRIBE {table_name};\"],\n            text=True,\n            capture_output=True,\n        )\n        console.log(\n            f\"[blue]Describe Table Tool[/blue] - Table: {table_name} - Reasoning: {reasoning}\"\n        )\n        return result.stdout\n    except Exception as e:\n        console.log(f\"[red]Error describing table: {str(e)}[/red]\")\n        return \"\"\n\n\ndef sample_table(reasoning: str, table_name: str, row_sample_size: int) -> str:\n    \"\"\"Returns a sample of rows from the specified table.\n\n    The agent uses this to understand actual data content and patterns.\n\n    Args:\n        reasoning: Explanation of why we're sampling this table\n        table_name: Name of table to sample from\n        row_sample_size: Number of rows to sample (aim for 3-5 rows)\n\n    Returns:\n        String containing sample rows in readable format\n    \"\"\"\n    try:\n        # Argv list avoids shell interpolation of table_name and DB_PATH\n        result = subprocess.run(\n            [\"duckdb\", DB_PATH, \"-c\", f\"SELECT * FROM {table_name} LIMIT {row_sample_size};\"],\n            text=True,\n            capture_output=True,\n        )\n        console.log(\n            f\"[blue]Sample Table Tool[/blue] - Table: {table_name} - Rows: {row_sample_size} - Reasoning: {reasoning}\"\n        )\n        return result.stdout\n    except Exception as e:\n        console.log(f\"[red]Error sampling table: {str(e)}[/red]\")\n        return \"\"\n\n\ndef run_test_sql_query(reasoning: str, sql_query: str) -> str:\n    \"\"\"Executes a test SQL query and returns results.\n\n    The agent uses this to validate queries before finalizing them.\n    Results are only shown to the agent, not the user.\n\n    Args:\n        reasoning: Explanation of why we're running this test 
query\n        sql_query: The SQL query to test\n\n    Returns:\n        Query results as a string\n    \"\"\"\n    try:\n        # Argv list: queries containing quotes no longer break the shell command\n        result = subprocess.run(\n            [\"duckdb\", DB_PATH, \"-c\", sql_query],\n            text=True,\n            capture_output=True,\n        )\n        console.log(f\"[blue]Test Query Tool[/blue] - Reasoning: {reasoning}\")\n        console.log(f\"[dim]Query: {sql_query}[/dim]\")\n        return result.stdout\n    except Exception as e:\n        console.log(f\"[red]Error running test query: {str(e)}[/red]\")\n        return str(e)\n\n\ndef run_final_sql_query(reasoning: str, sql_query: str) -> str:\n    \"\"\"Executes the final SQL query and returns results to user.\n\n    This is the last tool call the agent should make after validating the query.\n\n    Args:\n        reasoning: Final explanation of how this query satisfies user request\n        sql_query: The validated SQL query to run\n\n    Returns:\n        Query results as a string\n    \"\"\"\n    try:\n        # Argv list: queries containing quotes no longer break the shell command\n        result = subprocess.run(\n            [\"duckdb\", DB_PATH, \"-c\", sql_query],\n            text=True,\n            capture_output=True,\n        )\n        console.log(\n            Panel(\n                f\"[green]Final Query Tool[/green]\\nReasoning: {reasoning}\\nQuery: {sql_query}\"\n            )\n        )\n        return result.stdout\n    except Exception as e:\n        console.log(f\"[red]Error running final query: {str(e)}[/red]\")\n        return str(e)\n\n\ndef main():\n    # Set up argument parser\n    parser = argparse.ArgumentParser(description=\"DuckDB Agent using OpenAI API\")\n    parser.add_argument(\n        \"-d\", \"--db\", required=True, help=\"Path to DuckDB database file\"\n    )\n    parser.add_argument(\"-p\", \"--prompt\", required=True, help=\"The user's request\")\n    parser.add_argument(\n        \"-c\",\n        \"--compute\",\n        type=int,\n        default=10,\n        
help=\"Maximum number of agent loops (default: 10)\",\n    )\n    args = parser.parse_args()\n\n    # Configure the API key\n    OPENAI_API_KEY = os.getenv(\"OPENAI_API_KEY\")\n    if not OPENAI_API_KEY:\n        console.print(\n            \"[red]Error: OPENAI_API_KEY environment variable is not set[/red]\"\n        )\n        console.print(\n            \"Please get your API key from https://platform.openai.com/api-keys\"\n        )\n        console.print(\"Then set it with: export OPENAI_API_KEY='your-api-key-here'\")\n        sys.exit(1)\n\n    openai.api_key = OPENAI_API_KEY\n\n    # Set global DB_PATH for tool functions\n    global DB_PATH\n    DB_PATH = args.db\n\n    # Create a single combined prompt based on the full template\n    completed_prompt = AGENT_PROMPT.replace(\"{{user_request}}\", args.prompt)\n    messages = [{\"role\": \"user\", \"content\": completed_prompt}]\n\n    compute_iterations = 0\n\n    # Main agent loop\n    while True:\n        console.rule(\n            f\"[yellow]Agent Loop {compute_iterations+1}/{args.compute}[/yellow]\"\n        )\n        compute_iterations += 1\n\n        # Use > (not >=) so the agent gets exactly args.compute iterations\n        if compute_iterations > args.compute:\n            console.print(\n                \"[yellow]Warning: Reached maximum compute loops without final query[/yellow]\"\n            )\n            raise Exception(\n                f\"Maximum compute loops reached: {compute_iterations}/{args.compute}\"\n            )\n\n        try:\n            # Generate content with tool support\n            response = openai.chat.completions.create(\n                model=\"o3-mini\",\n                # model=\"gpt-4o-mini\",\n                messages=messages,\n                tools=tools,\n                tool_choice=\"required\",\n            )\n\n            if response.choices:\n                assert len(response.choices) == 1\n                message = response.choices[0].message\n\n                if message.function_call:\n                    func_call = 
message.function_call\n                    # Legacy function_call responses carry no id; wrap one so\n                    # later references to tool_call.id still work.\n                    tool_call = argparse.Namespace(id=\"call_0\", function=func_call)\n                elif message.tool_calls and len(message.tool_calls) > 0:\n                    # If a tool_calls list is present, use the first call and extract its function details.\n                    tool_call = message.tool_calls[0]\n                    func_call = tool_call.function\n                else:\n                    func_call = None\n\n                if func_call:\n                    func_name = func_call.name\n                    func_args_str = func_call.arguments\n\n                    messages.append(\n                        {  # type: ignore\n                            \"role\": \"assistant\",\n                            \"tool_calls\": [\n                                {\n                                    \"id\": tool_call.id,\n                                    \"type\": \"function\",\n                                    \"function\": func_call,\n                                }\n                            ],\n                        }\n                    )\n\n                    console.print(\n                        f\"[blue]Function Call:[/blue] {func_name}({func_args_str})\"\n                    )\n                    try:\n                        # Validate and parse arguments using the corresponding pydantic model\n                        if func_name == \"ListTablesArgs\":\n                            args_parsed = ListTablesArgs.model_validate_json(\n                                func_args_str\n                            )\n                            result = list_tables(reasoning=args_parsed.reasoning)\n                        elif func_name == \"DescribeTableArgs\":\n                            args_parsed = DescribeTableArgs.model_validate_json(\n                                func_args_str\n                            )\n                            result = describe_table(\n                                reasoning=args_parsed.reasoning,\n                                
table_name=args_parsed.table_name,\n                            )\n                        elif func_name == \"SampleTableArgs\":\n                            args_parsed = SampleTableArgs.model_validate_json(\n                                func_args_str\n                            )\n                            result = sample_table(\n                                reasoning=args_parsed.reasoning,\n                                table_name=args_parsed.table_name,\n                                row_sample_size=args_parsed.row_sample_size,\n                            )\n                        elif func_name == \"RunTestSQLQuery\":\n                            args_parsed = RunTestSQLQuery.model_validate_json(\n                                func_args_str\n                            )\n                            result = run_test_sql_query(\n                                reasoning=args_parsed.reasoning,\n                                sql_query=args_parsed.sql_query,\n                            )\n                        elif func_name == \"RunFinalSQLQuery\":\n                            args_parsed = RunFinalSQLQuery.model_validate_json(\n                                func_args_str\n                            )\n                            result = run_final_sql_query(\n                                reasoning=args_parsed.reasoning,\n                                sql_query=args_parsed.sql_query,\n                            )\n                            console.print(\"\\n[green]Final Results:[/green]\")\n                            console.print(result)\n                            return\n                        else:\n                            raise Exception(f\"Unknown tool call: {func_name}\")\n\n                        console.print(\n                            f\"[blue]Function Call Result:[/blue] {func_name}(...) 
->\\n{result}\"\n                        )\n\n                        # Append the function call result into our messages as a tool response\n                        messages.append(\n                            {\n                                \"role\": \"tool\",\n                                \"tool_call_id\": tool_call.id,\n                                \"content\": json.dumps({\"result\": str(result)}),\n                            }\n                        )\n\n                    except Exception as e:\n                        error_msg = f\"Argument validation failed for {func_name}: {e}\"\n                        console.print(f\"[red]{error_msg}[/red]\")\n                        messages.append(\n                            {\n                                \"role\": \"tool\",\n                                \"tool_call_id\": tool_call.id,\n                                \"content\": json.dumps({\"error\": error_msg}),\n                            }\n                        )\n                        continue\n                else:\n                    raise Exception(\n                        \"No function call in this response - should never happen\"\n                    )\n\n        except Exception as e:\n            console.print(f\"[red]Error in agent loop: {str(e)}[/red]\")\n            raise e\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "sfa_file_editor_sonny37_v1.py",
    "content": "#!/usr/bin/env python3\n\n# /// script\n# dependencies = [\n#   \"anthropic>=0.49.0\",\n#   \"rich>=13.7.0\",\n# ]\n# ///\n\n\"\"\"\n/// Example Usage\n\n# View a file\nuv run sfa_file_editor_sonny37_v1.py --prompt \"Show me the content of README.md\"\n\n# Use token-efficient tools (reduces token usage by ~14% on average)\nuv run sfa_file_editor_sonny37_v1.py --prompt \"Read the first 20 lines of content from README.md and summarize into a new README_SUMMARY.md\" --efficiency\n\n# Edit a file\nuv run sfa_file_editor_sonny37_v1.py --prompt \"Fix the syntax error in sfa_poc.py\"\n\n# Create a new file\nuv run sfa_file_editor_sonny37_v1.py --prompt \"Create a new file called hello.py with a function that prints Hello World\"\n\n# Add docstrings to functions\nuv run sfa_file_editor_sonny37_v1.py --prompt \"Add proper docstrings to all functions in sfa_poc.py\"\n\n# Insert code at specific location\nuv run sfa_file_editor_sonny37_v1.py --prompt \"Insert error handling code before the API call in sfa_duckdb_openai_v2.py\"\n\n# Modify multiple files\nuv run sfa_file_editor_sonny37_v1.py --prompt \"Update all print statements in agent_workspace directory to use f-strings\"\n\n# Refactor code\nuv run sfa_file_editor_sonny37_v1.py --prompt \"Refactor the factorial function in agent_workspace/test.py to use iteration instead of recursion\"\n\n# Create new test files\nuv run sfa_file_editor_sonny37_v1.py --prompt \"Create unit tests for the functions in sfa_file_editor_sonny37_v1.py and save them in agent_workspace/test_file_editor.py\"\n\n# Run with higher thinking tokens\nuv run sfa_file_editor_sonny37_v1.py --prompt \"Refactor README.md to make it more concise\" --thinking 5000\n\n# Increase max loops for complex tasks\nuv run sfa_file_editor_sonny37_v1.py --prompt \"Create a Python class that implements a binary search tree with insert, delete, and search methods\" --max-loops 20\n\n# Combine multiple flags\n\nuv run sfa_file_editor_sonny37_v1.py --prompt 
\"Create a Flask API with 3 endpoints inside of agent_workspace/api_server.py\" --thinking 6000 --max-loops 25\n\nuv run sfa_file_editor_sonny37_v1.py --prompt \"Create a Flask API with 3 endpoints inside of agent_workspace/api_server.py\" --efficiency --thinking 6000 --max-loops 25\n\n///\n\"\"\"\n\nimport os\nimport sys\nimport argparse\nimport time\nimport json\nimport traceback\nfrom typing import List, Dict, Any, Optional, Tuple, Union\nfrom rich.console import Console\nfrom rich.panel import Panel\nfrom rich.markdown import Markdown\nfrom rich.syntax import Syntax\nfrom rich.table import Table\nfrom rich.style import Style\nfrom rich.align import Align\nfrom anthropic import Anthropic\n\n# Initialize rich console\nconsole = Console()\n\n# Define constants\nMODEL = \"claude-3-7-sonnet-20250219\"\nDEFAULT_THINKING_TOKENS = 3000\n\n\ndef display_token_usage(input_tokens: int, output_tokens: int) -> None:\n    \"\"\"\n    Display token usage information in a rich formatted table\n\n    Args:\n        input_tokens: Number of input tokens used\n        output_tokens: Number of output tokens used\n    \"\"\"\n    total_tokens = input_tokens + output_tokens\n    token_ratio = output_tokens / input_tokens if input_tokens > 0 else 0\n\n    # Create a table for token usage\n    table = Table(title=\"Token Usage Statistics\", expand=True)\n\n    # Add columns with proper styling\n    table.add_column(\"Metric\", style=\"cyan\", no_wrap=True)\n    table.add_column(\"Count\", style=\"magenta\", justify=\"right\")\n    table.add_column(\"Percentage\", justify=\"right\")\n\n    # Add rows with data\n    table.add_row(\n        \"Input Tokens\", f\"{input_tokens:,}\", f\"{input_tokens/total_tokens:.1%}\"\n    )\n    table.add_row(\n        \"Output Tokens\", f\"{output_tokens:,}\", f\"{output_tokens/total_tokens:.1%}\"\n    )\n    table.add_row(\"Total Tokens\", f\"{total_tokens:,}\", \"100.0%\")\n    table.add_row(\"Output/Input Ratio\", f\"{token_ratio:.2f}\", \"\")\n\n    
console.print()\n    console.print(table)\n\n\ndef normalize_path(path: str) -> str:\n    \"\"\"\n    Normalize file paths to handle various formats (absolute, relative, Windows paths, etc.)\n\n    Args:\n        path: The path to normalize\n\n    Returns:\n        The normalized path\n    \"\"\"\n    if not path:\n        return path\n\n    # Handle Windows backslash paths if provided\n    path = path.replace(\"\\\\\", os.sep)\n\n    is_windows_path = False\n    if os.name == \"nt\" and len(path) > 1 and path[1] == \":\":\n        is_windows_path = True\n\n    # Handle /repo/ paths from Claude (tool use convention)\n    if path.startswith(\"/repo/\"):\n        path = os.path.join(os.getcwd(), path[6:])\n        return path\n\n    if path.startswith(\"/\"):\n        # Handle case when Claude provides paths with leading slash\n        if path == \"/\" or path == \"/.\":\n            # Special case for root directory\n            path = os.getcwd()\n        else:\n            # Replace leading slash with current working directory\n            path = os.path.join(os.getcwd(), path[1:])\n    elif path.startswith(\"./\"):\n        # Handle relative paths starting with ./\n        path = os.path.join(os.getcwd(), path[2:])\n    elif not os.path.isabs(path) and not is_windows_path:\n        # For non-absolute paths that aren't Windows paths either\n        path = os.path.join(os.getcwd(), path)\n\n    return path\n\n\ndef view_file(path: str, view_range=None) -> Dict[str, Any]:\n    \"\"\"\n    View the contents of a file.\n\n    Args:\n        path: The path to the file to view\n        view_range: Optional start and end lines to view [start, end]\n\n    Returns:\n        Dictionary with content or error message\n    \"\"\"\n    try:\n        # Normalize the path\n        path = normalize_path(path)\n\n        if not os.path.exists(path):\n            error_msg = f\"File {path} does not exist\"\n            console.log(f\"[view_file] Error: {error_msg}\")\n            
return {\"error\": error_msg}\n\n        with open(path, \"r\") as f:\n            lines = f.readlines()\n\n        if view_range:\n            start, end = view_range\n            # Convert to 0-indexed for Python\n            start = max(0, start - 1)\n            if end == -1:\n                end = len(lines)\n            else:\n                end = min(len(lines), end)\n            lines = lines[start:end]\n\n        content = \"\".join(lines)\n\n        # Display the file content (only for console, not returned to Claude)\n        file_extension = os.path.splitext(path)[1][1:]  # Get extension without the dot\n        syntax = Syntax(content, file_extension or \"text\", line_numbers=True)\n        console.print(Panel(syntax, title=f\"File: {path}\"))\n\n        return {\"result\": content}\n    except Exception as e:\n        error_msg = f\"Error viewing file: {str(e)}\"\n        console.print(f\"[red]{error_msg}[/red]\")\n        console.log(f\"[view_file] Error: {str(e)}\")\n        console.log(traceback.format_exc())\n        return {\"error\": error_msg}\n\n\ndef str_replace(path: str, old_str: str, new_str: str) -> Dict[str, Any]:\n    \"\"\"\n    Replace a specific string in a file.\n\n    Args:\n        path: The path to the file to modify\n        old_str: The text to replace\n        new_str: The new text to insert\n\n    Returns:\n        Dictionary with result or error message\n    \"\"\"\n    try:\n        # Normalize the path\n        path = normalize_path(path)\n\n        if not os.path.exists(path):\n            error_msg = f\"File {path} does not exist\"\n            console.log(f\"[str_replace] Error: {error_msg}\")\n            return {\"error\": error_msg}\n\n        with open(path, \"r\") as f:\n            content = f.read()\n\n        if old_str not in content:\n            error_msg = f\"The specified string was not found in the file {path}\"\n            console.log(f\"[str_replace] Error: {error_msg}\")\n            return 
{\"error\": error_msg}\n\n        new_content = content.replace(old_str, new_str, 1)\n\n        with open(path, \"w\") as f:\n            f.write(new_content)\n\n        console.print(f\"[green]Successfully replaced text in {path}[/green]\")\n        console.log(f\"[str_replace] Successfully replaced text in {path}\")\n        return {\"result\": f\"Successfully replaced text in {path}\"}\n    except Exception as e:\n        error_msg = f\"Error replacing text: {str(e)}\"\n        console.print(f\"[red]{error_msg}[/red]\")\n        console.log(f\"[str_replace] Error: {str(e)}\")\n        console.log(traceback.format_exc())\n        return {\"error\": error_msg}\n\n\ndef create_file(path: str, file_text: str) -> Dict[str, Any]:\n    \"\"\"\n    Create a new file with specified content.\n\n    Args:\n        path: The path where the new file should be created\n        file_text: The content to write to the new file\n\n    Returns:\n        Dictionary with result or error message\n    \"\"\"\n    try:\n        # Check if the path is empty or invalid\n        if not path or not path.strip():\n            error_msg = \"Invalid file path provided: path is empty.\"\n            console.log(f\"[create_file] Error: {error_msg}\")\n            return {\"error\": error_msg}\n\n        # Normalize the path\n        path = normalize_path(path)\n\n        # Check if the directory exists\n        directory = os.path.dirname(path)\n        if directory and not os.path.exists(directory):\n            console.log(f\"[create_file] Creating directory: {directory}\")\n            os.makedirs(directory)\n\n        with open(path, \"w\") as f:\n            f.write(file_text or \"\")\n\n        console.print(f\"[green]Successfully created file {path}[/green]\")\n        console.log(f\"[create_file] Successfully created file {path}\")\n        return {\"result\": f\"Successfully created file {path}\"}\n    except Exception as e:\n        error_msg = f\"Error creating file: {str(e)}\"\n     
   console.print(f\"[red]{error_msg}[/red]\")\n        console.log(f\"[create_file] Error: {str(e)}\")\n        console.log(traceback.format_exc())\n        return {\"error\": error_msg}\n\n\ndef insert_text(path: str, insert_line: int, new_str: str) -> Dict[str, Any]:\n    \"\"\"\n    Insert text at a specific location in a file.\n\n    Args:\n        path: The path to the file to modify\n        insert_line: The line number after which to insert the text\n        new_str: The text to insert\n\n    Returns:\n        Dictionary with result or error message\n    \"\"\"\n    try:\n        if not path or not path.strip():\n            error_msg = \"Invalid file path provided: path is empty.\"\n            console.log(f\"[insert_text] Error: {error_msg}\")\n            return {\"error\": error_msg}\n\n        # Normalize the path\n        path = normalize_path(path)\n\n        if not os.path.exists(path):\n            error_msg = f\"File {path} does not exist\"\n            console.log(f\"[insert_text] Error: {error_msg}\")\n            return {\"error\": error_msg}\n\n        if insert_line is None:\n            error_msg = \"No line number specified: insert_line is missing.\"\n            console.log(f\"[insert_text] Error: {error_msg}\")\n            return {\"error\": error_msg}\n\n        with open(path, \"r\") as f:\n            lines = f.readlines()\n\n        # Line is 0-indexed for this function, but Claude provides 1-indexed\n        insert_line = min(max(0, insert_line - 1), len(lines))\n\n        # Check that the index is within acceptable bounds\n        if insert_line < 0 or insert_line > len(lines):\n            error_msg = (\n                f\"Insert line number {insert_line} out of range (0-{len(lines)}).\"\n            )\n            console.log(f\"[insert_text] Error: {error_msg}\")\n            return {\"error\": error_msg}\n\n        # Ensure new_str ends with newline\n        if new_str and not new_str.endswith(\"\\n\"):\n            new_str += 
\"\\n\"\n\n        lines.insert(insert_line, new_str)\n\n        with open(path, \"w\") as f:\n            f.writelines(lines)\n\n        console.print(\n            f\"[green]Successfully inserted text at line {insert_line + 1} in {path}[/green]\"\n        )\n        console.log(\n            f\"[insert_text] Successfully inserted text at line {insert_line + 1} in {path}\"\n        )\n        return {\n            \"result\": f\"Successfully inserted text at line {insert_line + 1} in {path}\"\n        }\n    except Exception as e:\n        error_msg = f\"Error inserting text: {str(e)}\"\n        console.print(f\"[red]{error_msg}[/red]\")\n        console.log(f\"[insert_text] Error: {str(e)}\")\n        console.log(traceback.format_exc())\n        return {\"error\": error_msg}\n\n\ndef undo_edit(path: str) -> Dict[str, Any]:\n    \"\"\"\n    Placeholder for undo_edit functionality.\n    In a real implementation, you would need to track edit history.\n\n    Args:\n        path: The path to the file whose last edit should be undone\n\n    Returns:\n        Dictionary with message about undo functionality\n    \"\"\"\n    try:\n        if not path or not path.strip():\n            error_msg = \"Invalid file path provided: path is empty.\"\n            console.log(f\"[undo_edit] Error: {error_msg}\")\n            return {\"error\": error_msg}\n\n        # Normalize the path\n        path = normalize_path(path)\n\n        message = \"Undo functionality is not implemented in this version.\"\n        console.print(f\"[yellow]{message}[/yellow]\")\n        console.log(f\"[undo_edit] {message}\")\n        return {\"result\": message}\n    except Exception as e:\n        error_msg = f\"Error in undo_edit: {str(e)}\"\n        console.print(f\"[red]{error_msg}[/red]\")\n        console.log(f\"[undo_edit] Error: {str(e)}\")\n        console.log(traceback.format_exc())\n        return {\"error\": error_msg}\n\n\ndef handle_tool_use(tool_use: Dict[str, Any]) -> Dict[str, Any]:\n  
  \"\"\"\n    Handle text editor tool use from Claude.\n\n    Args:\n        tool_use: The tool use request from Claude\n\n    Returns:\n        Dictionary with result or error to send back to Claude\n    \"\"\"\n    try:\n        command = tool_use.get(\"command\")\n        path = tool_use.get(\"path\")\n\n        console.log(f\"[handle_tool_use] Received command: {command}, path: {path}\")\n\n        if not command:\n            error_msg = \"No command specified in tool use request\"\n            console.log(f\"[handle_tool_use] Error: {error_msg}\")\n            return {\"error\": error_msg}\n\n        if not path and command != \"undo_edit\":  # undo_edit might not need a path\n            error_msg = \"No path specified in tool use request\"\n            console.log(f\"[handle_tool_use] Error: {error_msg}\")\n            return {\"error\": error_msg}\n\n        # The path normalization is now handled in each file operation function\n        console.print(f\"[blue]Executing {command} command on {path}[/blue]\")\n\n        if command == \"view\":\n            view_range = tool_use.get(\"view_range\")\n            console.log(\n                f\"[handle_tool_use] Calling view_file with view_range: {view_range}\"\n            )\n            return view_file(path, view_range)\n\n        elif command == \"str_replace\":\n            old_str = tool_use.get(\"old_str\")\n            new_str = tool_use.get(\"new_str\")\n            console.log(f\"[handle_tool_use] Calling str_replace\")\n            return str_replace(path, old_str, new_str)\n\n        elif command == \"create\":\n            file_text = tool_use.get(\"file_text\")\n            console.log(f\"[handle_tool_use] Calling create_file\")\n            return create_file(path, file_text)\n\n        elif command == \"insert\":\n            insert_line = tool_use.get(\"insert_line\")\n            new_str = tool_use.get(\"new_str\")\n            console.log(f\"[handle_tool_use] Calling insert_text at line: 
{insert_line}\")\n            return insert_text(path, insert_line, new_str)\n\n        elif command == \"undo_edit\":\n            console.log(f\"[handle_tool_use] Calling undo_edit\")\n            return undo_edit(path)\n\n        else:\n            error_msg = f\"Unknown command: {command}\"\n            console.print(f\"[red]{error_msg}[/red]\")\n            console.log(f\"[handle_tool_use] Error: {error_msg}\")\n            return {\"error\": error_msg}\n    except Exception as e:\n        error_msg = f\"Error handling tool use: {str(e)}\"\n        console.print(f\"[red]{error_msg}[/red]\")\n        console.log(f\"[handle_tool_use] Error: {str(e)}\")\n        console.log(traceback.format_exc())\n        return {\"error\": error_msg}\n\n\ndef run_agent(\n    client: Anthropic,\n    prompt: str,\n    max_thinking_tokens: int = DEFAULT_THINKING_TOKENS,\n    max_loops: int = 10,\n    use_token_efficiency: bool = False,\n) -> tuple[str, int, int]:\n    \"\"\"\n    Run the Claude agent with file editing capabilities.\n\n    Args:\n        client: The Anthropic client\n        prompt: The user's prompt\n        max_thinking_tokens: Maximum tokens for thinking\n        max_loops: Maximum number of tool use loops\n        use_token_efficiency: Whether to use token-efficient tool use beta feature\n\n    Returns:\n        Tuple containing:\n        - Final response from Claude (str)\n        - Total input tokens used (int)\n        - Total output tokens used (int)\n    \"\"\"\n    # Track token usage\n    input_tokens_total = 0\n    output_tokens_total = 0\n    system_prompt = \"\"\"You are a helpful AI assistant with text editing capabilities.\nYou have access to a text editor tool that can view, edit, and create files.\nAlways think step by step about what you need to do before taking any action.\nBe careful when making edits to files, as they can permanently change the user's files.\nFollow these steps when handling file operations:\n1. 
First, view files to understand their content before making changes\n2. For edits, ensure you have the correct context and are making the right changes\n3. When creating files, make sure they're in the right location with proper formatting\n\"\"\"\n\n    # Define text editor tool\n    text_editor_tool = {\"name\": \"str_replace_editor\", \"type\": \"text_editor_20250124\"}\n\n    messages = [\n        {\n            \"role\": \"user\",\n            \"content\": f\"\"\"I need help with editing files. Here's what I want to do:\n\n{prompt}\n\nPlease use the text editor tool to help me with this. First, think through what you need to do, then use the appropriate tool.\n\"\"\",\n        }\n    ]\n\n    loop_count = 0\n    tool_use_count = 0\n    thinking_start_time = time.time()\n\n    while loop_count < max_loops:\n        loop_count += 1\n\n        console.rule(f\"[yellow]Agent Loop {loop_count}/{max_loops}[/yellow]\")\n\n        # Create message with text editor tool\n        message_args = {\n            \"model\": MODEL,\n            \"max_tokens\": 4096,\n            \"tools\": [text_editor_tool],\n            \"messages\": messages,\n            \"system\": system_prompt,\n            \"thinking\": {\"type\": \"enabled\", \"budget_tokens\": max_thinking_tokens},\n        }\n\n        # Use the beta.messages with betas parameter if token efficiency is enabled\n        if use_token_efficiency:\n            # Using token-efficient tools beta feature\n            message_args[\"betas\"] = [\"token-efficient-tools-2025-02-19\"]\n            response = client.beta.messages.create(**message_args)\n        else:\n            # Standard approach\n            response = client.messages.create(**message_args)\n\n        # Track token usage\n        if hasattr(response, \"usage\"):\n            input_tokens = getattr(response.usage, \"input_tokens\", 0)\n            output_tokens = getattr(response.usage, \"output_tokens\", 0)\n\n            input_tokens_total += 
input_tokens\n            output_tokens_total += output_tokens\n\n            console.print(\n                f\"[dim]Loop {loop_count} tokens: Input={input_tokens}, Output={output_tokens}[/dim]\"\n            )\n\n        # Process response content\n        thinking_block = None\n        tool_use_block = None\n        text_block = None\n\n        # Log the entire response for debugging\n        # console.log(\"[green]API Response:[/green]\", response.model_dump())\n\n        for content_block in response.content:\n            if content_block.type == \"thinking\":\n                thinking_block = content_block\n                # Access the thinking attribute which contains the actual thinking text\n                if hasattr(thinking_block, \"thinking\"):\n                    console.print(\n                        Panel(\n                            thinking_block.thinking,\n                            title=f\"Claude's Thinking (Loop {loop_count})\",\n                            border_style=\"blue\",\n                        )\n                    )\n                else:\n                    console.print(\n                        Panel(\n                            \"Claude is thinking...\",\n                            title=f\"Claude's Thinking (Loop {loop_count})\",\n                            border_style=\"blue\",\n                        )\n                    )\n            elif content_block.type == \"tool_use\":\n                tool_use_block = content_block\n                tool_use_count += 1\n            elif content_block.type == \"text\":\n                text_block = content_block\n\n        # If we got a final text response with no tool use, we're done\n        if text_block and not tool_use_block:\n            thinking_end_time = time.time()\n            thinking_duration = thinking_end_time - thinking_start_time\n\n            console.print(\n                f\"\\n[bold green]Completed in {thinking_duration:.2f} seconds after {loop_count} 
loops and {tool_use_count} tool uses[/bold green]\"\n            )\n\n            # Add the response to messages\n            messages.append(\n                {\n                    \"role\": \"assistant\",\n                    \"content\": [\n                        *([thinking_block] if thinking_block else []),\n                        {\"type\": \"text\", \"text\": text_block.text},\n                    ],\n                }\n            )\n\n            return text_block.text, input_tokens_total, output_tokens_total\n\n        # Handle tool use\n        if tool_use_block:\n            # Add the assistant's response to messages before handling tool calls\n            messages.append({\"role\": \"assistant\", \"content\": response.content})\n\n            console.print(\n                f\"\\n[bold blue]Tool Call:[/bold blue] {tool_use_block.name}({json.dumps(tool_use_block.input)})\"\n            )\n\n            # Handle the tool use\n            tool_result = handle_tool_use(tool_use_block.input)\n\n            # Log tool result\n            result_text = tool_result.get(\"error\") or tool_result.get(\"result\", \"\")\n            # console.print(f\"[green]Tool Result:[/green] {result_text}\")\n\n            # Format tool result for Claude\n            tool_result_message = {\n                \"role\": \"user\",\n                \"content\": [\n                    {\n                        \"type\": \"tool_result\",\n                        \"tool_use_id\": tool_use_block.id,\n                        \"content\": result_text,\n                    }\n                ],\n            }\n            messages.append(tool_result_message)\n\n    # If we reach here, we hit the max loops\n    console.print(\n        f\"\\n[bold red]Warning: Reached maximum loops ({max_loops}) without completing the task[/bold red]\"\n    )\n    return (\n        \"I wasn't able to complete the task within the allowed number of thinking steps. 
Please try a more specific prompt or increase the loop limit.\",\n        input_tokens_total,\n        output_tokens_total,\n    )\n\n\ndef main():\n    # Set up argument parser\n    parser = argparse.ArgumentParser(description=\"Claude 3.7 File Editor Agent\")\n    parser.add_argument(\n        \"--prompt\",\n        \"-p\",\n        required=True,\n        help=\"The prompt for what file operations to perform\",\n    )\n    parser.add_argument(\n        \"--max-loops\",\n        \"-l\",\n        type=int,\n        default=15,\n        help=\"Maximum number of tool use loops (default: 15)\",\n    )\n    parser.add_argument(\n        \"--thinking\",\n        \"-t\",\n        type=int,\n        default=DEFAULT_THINKING_TOKENS,\n        help=f\"Maximum thinking tokens (default: {DEFAULT_THINKING_TOKENS})\",\n    )\n    parser.add_argument(\n        \"--efficiency\",\n        \"-e\",\n        action=\"store_true\",\n        help=\"Enable token-efficient tool use (beta feature)\",\n    )\n    args = parser.parse_args()\n\n    # Get API key\n    api_key = os.getenv(\"ANTHROPIC_API_KEY\")\n    if not api_key:\n        console.print(\n            \"[red]Error: ANTHROPIC_API_KEY environment variable is not set[/red]\"\n        )\n        console.print(\n            \"Please set it with: export ANTHROPIC_API_KEY='your-api-key-here'\"\n        )\n        console.log(\"[main] Error: ANTHROPIC_API_KEY environment variable is not set\")\n        sys.exit(1)\n\n    # Initialize Anthropic client\n    client = Anthropic(api_key=api_key)\n\n    console.print(Panel.fit(\"Claude 3.7 File Editor Agent\"))\n\n    console.print(f\"\\n[bold]Prompt:[/bold] {args.prompt}\\n\")\n    console.print(f\"[dim]Thinking tokens: {args.thinking}[/dim]\")\n    console.print(f\"[dim]Max loops: {args.max_loops}[/dim]\")\n    if args.efficiency:\n        console.print(f\"[dim]Token-efficient tools: Enabled[/dim]\\n\")\n    else:\n        console.print(f\"[dim]Token-efficient tools: 
Disabled[/dim]\\n\")\n\n    try:\n        # Run the agent\n        response, input_tokens, output_tokens = run_agent(\n            client, args.prompt, args.thinking, args.max_loops, args.efficiency\n        )\n\n        # Print the final response\n        console.print(Panel(Markdown(response), title=\"Claude's Response\"))\n\n        # Display token usage with rich table\n        display_token_usage(input_tokens, output_tokens)\n\n    except Exception as e:\n        console.print(f\"[red]Error: {str(e)}[/red]\")\n        console.log(f\"[main] Error: {str(e)}\")\n        console.log(traceback.format_exc())\n        sys.exit(1)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "sfa_jq_gemini_v1.py",
    "content": "# /// script\n# dependencies = [\n#   \"google-genai>=1.1.0\",\n# ]\n# ///\n\n\"\"\"\n/// Example Usage\n\n# generates jq command and executes it\nuv run sfa_jq_gemini_v1.py --exe \"Filter scores above 80 from data/analytics.json and save to high_scores.json\"\n\n# generates jq command only\nuv run sfa_jq_gemini_v1.py \"Filter scores above 80 from data/analytics.json and save to high_scores.json\"\n\n///\n\"\"\"\n\nimport os\nimport sys\nimport argparse\nimport subprocess\nfrom google import genai\n\nJQ_PROMPT = \"\"\"<purpose>\n    You are a world-class expert at crafting precise jq commands for JSON processing.\n    Your goal is to generate accurate, minimal jq commands that exactly match the user's data manipulation needs.\n</purpose>\n\n<instructions>\n    <instruction>Return ONLY the jq command - no explanations, comments, or extra text.</instruction>\n    <instruction>Always reference the input file specified in the user request (e.g., using -f flag if needed).</instruction>\n    <instruction>Ensure the command follows jq best practices for efficiency and readability.</instruction>\n    <instruction>Use the examples to understand different types of jq command patterns.</instruction>\n    <instruction>When user asks to pipe or output to a file, use the correct syntax for the command and create a file name (if not specified) based on a shorted version of the user-request and the input file name.</instruction>\n    <instruction>If the user request asks to pipe or output to a file, and no explicit directory is specified, use the directory of the input file.</instruction>\n    <instruction>Output your response by itself, do not use backticks or markdown formatting. 
We're going to run your response as a shell command immediately.</instruction>\n    <instruction>If the results you're working with are a list of objects, default to outputting a valid JSON array.</instruction>\n</instructions>\n\n<examples>\n    <example>\n        <user-request>\n            Select the \"name\" and \"age\" fields from data.json where age > 30\n        </user-request>\n        <jq-command>\n            jq '[.[] | select(.age > 30) | {name, age}]' data.json\n        </jq-command>\n    </example>\n    <example>\n        <user-request>\n            Count the number of entries in users.json with status \"active\"\n        </user-request>\n        <jq-command>\n            jq '[.[] | select(.status == \"active\")] | length' users.json\n        </jq-command>\n    </example>\n    <example>\n        <user-request>\n            Extract nested phone numbers from contacts.json using compact output\n        </user-request>\n        <jq-command>\n            jq -c '.contact.info.phones' contacts.json\n        </jq-command>\n    </example>\n    <example>\n        <user-request>\n            Convert log.json entries to CSV format with timestamp, level, message\n        </user-request>\n        <jq-command>\n            jq -r '.[] | [.timestamp, .level, .message] | @csv' log.json\n        </jq-command>\n    </example>\n    <example>\n        <user-request>\n            Sort records in people.json by age in descending order\n        </user-request>\n        <jq-command>\n            jq 'sort_by(.age) | reverse' people.json\n        </jq-command>\n    </example>\n    <example>\n        <user-request>\n            Save active users from data/users.json to a new file\n        </user-request>\n        <jq-command>\n            jq '[.[] | select(.status == \"active\")]' data/users.json > data/active_users.json\n        </jq-command>\n    </example>\n    <example>\n        <user-request>\n            Convert data.json to CSV for keys name, age, city and save in same 
directory\n        </user-request>\n        <jq-command>\n            jq -r '.[] | [.name, .age, .city] | @csv' data/testing/data.json > data/testing/data_csv.csv\n        </jq-command>\n    </example>\n    <example>\n        <user-request>\n            Filter scores above 80 from data/mock.json and save to ./high_scores.json\n        </user-request>\n        <jq-command>\n            jq '[.[] | select(.score > 80)]' data/mock.json > ./high_scores.json\n        </jq-command>\n    </example>\n</examples>\n\n\n<user-request>\n    {{user_request}}\n</user-request>\n\nYour jq command:\"\"\"\n\n\ndef main():\n    # Set up argument parser\n    parser = argparse.ArgumentParser(description=\"Generate text using Gemini API\")\n    parser.add_argument(\n        \"prompt\",\n        help=\"The JQ command request to send to Gemini\",\n    )\n    parser.add_argument(\n        \"--exe\",\n        action=\"store_true\",\n        help=\"Execute the generated JQ command\",\n    )\n    args = parser.parse_args()\n\n    # Configure the API key\n    GEMINI_API_KEY = os.getenv(\"GEMINI_API_KEY\")\n    if not GEMINI_API_KEY:\n        print(\"Error: GEMINI_API_KEY environment variable is not set\")\n        print(\"Please get your API key from https://aistudio.google.com/app/apikey\")\n        print(\"Then set it with: export GEMINI_API_KEY='your-api-key-here'\")\n        sys.exit(1)\n\n    # Initialize client\n    client = genai.Client(\n        api_key=GEMINI_API_KEY, http_options={\"api_version\": \"v1alpha\"}\n    )\n\n    try:\n        # Replace {{user_request}} in the prompt template\n        prompt = JQ_PROMPT.replace(\"{{user_request}}\", args.prompt)\n\n        # Generate JQ command\n        response = client.models.generate_content(\n            model=\"gemini-2.0-flash-001\", contents=prompt\n        )\n        jq_command = response.text.strip()\n        print(\"\\n🤖 Generated JQ command:\", jq_command)\n\n        # Execute the command if --exe flag is present\n        if 
args.exe:\n            print(\"\\n🔍 Executing command...\")\n            # Execute the command using subprocess\n            result = subprocess.run(\n                jq_command, shell=True, text=True, capture_output=True\n            )\n            if result.returncode != 0:\n                print(\"\\n❌ Error executing command:\", result.stderr)\n                sys.exit(1)\n            print(result.stdout + result.stderr)\n\n            if not result.stderr:\n                print(\"\\n✅ Command executed successfully\")\n\n    except Exception as e:\n        print(f\"\\nError occurred: {str(e)}\")\n        sys.exit(1)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "sfa_meta_prompt_openai_v1.py",
    "content": "#!/usr/bin/env python3\n\n# /// script\n# dependencies = [\n#   \"openai>=1.62.0\",\n# ]\n# ///\n\n\"\"\"\n/// Example Usage\n\n# Generate a meta prompt using command-line arguments.\n# Optional arguments are marked with a ?.\n\nuv run sfa_meta_prompt_openai_v1.py \\\n    --purpose \"generate mermaid diagrams\" \\\n    --instructions \"generate a mermaid valid chart, use diagram type specified or default flow, use examples to understand the structure of the output\" \\\n    --sections \"examples, user-prompt\" \\\n    --examples \"create examples of 3 basic mermaid charts with <user-chart-request> and <chart-response> blocks\" \\\n    --variables \"user-prompt\"\n\n# Without optional arguments, the script will enter interactive mode.\nuv run sfa_meta_prompt_openai_v1.py \\\n    --purpose \"generate mermaid diagrams\" \\\n    --instructions \"generate a mermaid valid chart, use diagram type specified or default flow, use examples to understand the structure of the output\"\n\n# Alternatively, just run the script without any flags to enter interactive mode.\nuv run sfa_meta_prompt_openai_v1.py\n\n///\n\"\"\"\n\nimport os\nimport sys\nimport argparse\nimport openai\n\nMETA_PROMPT = \"\"\"<purpose>\n    You are an expert prompt engineer, capable of creating detailed and effective prompts for language models.\n    \n    Your task is to generate a comprehensive prompt based on the user's input structure.\n    \n    Follow the instructions closely to generate a new prompt template.\n</purpose>\n\n<instructions>\n    <instruction>Analyze the user-input carefully, paying attention to the purpose, required sections, and variables.</instruction>\n    <instruction>Create a detailed prompt that includes all specified sections and incorporates the provided variables.</instruction>\n    <instruction>Use clear and concise language in the generated prompt.</instruction>\n    <instruction>Ensure that the generated prompt maintains a logical flow and 
structure.</instruction>\n    <instruction>Include placeholders for variable values in the format [[variable-name]].</instruction>\n    <instruction>If a section is plural, create a nested section with three items in the singular form.</instruction>\n    <instruction>The key xml blocks are purpose, instructions, sections, examples, user-prompt.</instruction>\n    <instruction>Purpose defines the high level goal of the prompt.</instruction>\n    <instruction>Instructions are the detailed instructions for the prompt.</instruction>\n    <instruction>Sections are arbitrary blocks to include in the prompt.</instruction>\n    <instruction>Examples are showcases of what the output should be for the prompt. Use this to steer the structure of the output based on the user-input. This will typically be a list of examples with the expected output.</instruction>\n    <instruction>Variables are placeholders for values to be substituted in the prompt.</instruction>\n    <instruction>Not every section is required, but purpose and instructions are typically essential. Create the xml blocks based on the user-input.</instruction>\n    <instruction>Use the examples to understand the structure of the output.</instruction>\n    <instruction>Your output should be in XML format, mirroring the structure of the examples output.</instruction>\n    <instruction>Exclude CDATA sections in your output.</instruction>\n    <instruction>Respond exclusively with the desired output, no other text.</instruction>\n    <instruction>If the user-input is structured like the input-format, use it as is. If it's not, infer the purpose, sections, and variables from the user-input.</instruction>\n    <instruction>The goal is to fill in the blanks and best infer the purpose, instructions, sections, and variables from the user-input. If instructions are given, use them to guide the other xml blocks.</instruction>\n    <instruction>Emphasize exact XML structure and nesting. 
Clearly define which blocks must contain which elements to ensure a well-formed output.</instruction>\n    <instruction>Ensure that each section builds logically upon the previous ones, creating a coherent narrative from purpose to instructions, sections, and examples.</instruction>\n    <instruction>Use direct, simple language and avoid unnecessary complexity to make the final prompt easy to understand.</instruction>\n    <instruction>After creating the full prompt, perform a final validation to confirm that all placeholders, instructions, and examples are included, properly formatted, and consistent.</instruction>\n    <instruction>If examples are not requested, don't create them.</instruction>\n    <instruction>If sections are not requested, don't create them.</instruction>\n    <instruction>If variables are not requested, just create a section for the user-input.</instruction>\n</instructions>\n\n<input-format>\n    Purpose: [main purpose of the prompt], Instructions: [list of details of how to generate the output comma sep], Sections: [list of additional sections to include, e.g., examples, user-prompt], Examples: [list of examples of the output for the prompt], Variables: [list of variables to be used in the prompt]\n</input-format>\n\n<examples>\n    <example>\n        <input>\n            Purpose: generate mermaid diagrams. Instructions: generate a mermaid valid chart, use diagram type specified or default flow, use examples to understand the structure of the output. Sections: examples, user-prompt. 
Variables: user-prompt\n        </input>\n        <output>\n<![CDATA[\nYou are a world-class expert at creating mermaid charts.\n\nYou follow the instructions perfectly to generate mermaid charts.\n\n<instructions>\n    <instruction>Generate a valid mermaid chart based on the user-prompt.</instruction>\n    <instruction>Use the diagram type specified in the user-prompt; if none is specified, use a flowchart.</instruction>\n    <instruction>Use the examples to understand the structure of the output.</instruction>\n</instructions>\n\n<examples>\n    <example>\n        <user-chart-request>\n            Create a flowchart that shows A flowing to E. At C, branch out to H and I.\n        </user-chart-request>\n        <chart-response>\n            graph LR;\n                A\n                B\n                C\n                D\n                E\n                H\n                I\n                A --> B\n                A --> C\n                A --> D\n                C --> H\n                C --> I\n                D --> E\n        </chart-response>\n    </example>\n    <example>\n        <user-chart-request>\n            Build a pie chart that shows the distribution of Apples: 40, Bananas: 35, Oranges: 25.\n        </user-chart-request>\n        <chart-response>\n            pie title Distribution of Fruits\n                \"Apples\" : 40\n                \"Bananas\" : 35\n                \"Oranges\" : 25\n        </chart-response>\n    </example>\n    <example>\n        <user-chart-request>\n            State diagram for a traffic light. 
Still, Moving, Crash.\n        </user-chart-request>\n        <chart-response>\n            stateDiagram-v2\n                [*] --> Still\n                Still --> [*]\n                Still --> Moving\n                Moving --> Still\n                Moving --> Crash\n                Crash --> [*]\n        </chart-response>\n    </example>\n    <example>\n        <user-chart-request>\n            Create a timeline of major social media platforms from 2002 to 2006.\n        </user-chart-request>\n        <chart-response>\n            timeline\n                title History of Social Media Platforms\n                2002 : LinkedIn\n                2004 : Facebook\n                        : Google\n                2005 : Youtube\n                2006 : Twitter\n        </chart-response>\n    </example>\n    </examples>\n\n<user-prompt>\n    [[user-prompt]]\n</user-prompt>\n\nYour mermaid chart:\n]]>\n</output>\n    </example>\n    <example>\n        <input>\n            Purpose: review git diff to improve code quality. Instructions: Review git diff, give suggestions for improvements to the code organized in a list sorted by priority. Sections: git-diff. Variables: git-diff\n        </input>\n        <output>\n<![CDATA[\n<purpose>\n    You are an expert at reviewing git diffs to improve code quality.\n    You follow the instructions perfectly to review git diffs.\n</purpose>\n\n<instructions>\n    <instruction>Review the git diff and provide a detailed analysis of the changes made.</instruction>\n    <instruction>Give suggestions for improvements to the code organized in a list sorted by priority.</instruction>\n    <instruction>Think through the changes in a holistic manner and offer suggestions for improvements.</instruction>\n</instructions>\n\n<git-diff>\n    [[git-diff]]\n</git-diff>\n\nYour review of the git diff:\n]]>\n        </output>\n    </example>\n    <example>\n        <input>\n            Purpose: convert user mathematical expressions into LaTeX. 
Instructions: Take the user-input, which is a mathematical expression in plain text, and output a properly formatted LaTeX equation. Sections: user-input. Variables: user-input\n        </input>\n        <output>\n<![CDATA[\n<purpose>\n    You are a highly skilled mathematician who can transform plain text math expressions into LaTeX formatted equations.\n</purpose>\n\n<instructions>\n    <instruction>Read the user-input plain text mathematical expression carefully.</instruction>\n    <instruction>Convert it into a well-formatted LaTeX equation environment.</instruction>\n    <instruction>Ensure the final output is wrapped in a LaTeX display math environment.</instruction>\n</instructions>\n\n<user-input>\n    [[user-input]]\n</user-input>\n\nYour LaTeX equation:\n]]>\n        </output>\n    </example>\n    <example>\n        <input>\n            Purpose: Summarize and extract key action points from a user-provided legal contract\n            Instructions: Thoroughly analyze the legal text, identify and summarize key clauses, highlight main obligations and deadlines, and provide recommended action points in list form for the user. Keep the final output simple and easy to understand, no legalese. 
Follow the examples.\n            Sections: contract-text, summary, action-points, user-prompt\n            Examples: show how to summarize major clauses from a rental agreement, a service contract, and an employment contract\n            Variables: contract-text, user-prompt\n        </input>\n        <output>\n<purpose>\n    You are an expert legal advisor who specializes in summarizing complex contracts into clear, actionable insights.\n    Your goal is to help the user quickly understand their contract, identify key clauses, and see recommended actions.\n</purpose>\n\n<instructions>\n    <instruction>Read the user-provided contract text carefully.</instruction>\n    <instruction>Identify the main clauses, obligations, timelines, and responsibilities mentioned.</instruction>\n    <instruction>Summarize these points in simple, accessible language, avoiding jargon and unnecessary complexity.</instruction>\n    <instruction>Highlight any deadlines or financial obligations that appear in the text.</instruction>\n    <instruction>Create a list of recommended action points that the user should consider taking, based on the contract’s provisions.</instruction>\n    <instruction>Keep the final output organized, starting with a structured summary of key clauses, then listing action points clearly.</instruction>\n    <instruction>Use the examples to understand how to structure the summary and action points.</instruction>\n</instructions>\n\n<examples>\n    <example>\n        <user-contract-request>\n            The following is a rental agreement for an apartment. It includes information about monthly rent, security deposit, responsibilities for maintenance, and conditions for early termination.\n        </user-contract-request>\n        <sample-contract-text>\n            The tenant agrees to pay a monthly rent of $1,500 due on the 1st of each month. 
The tenant will provide a security deposit of $1,500, refundable at the end of the lease term, provided there is no damage. The tenant is responsible for routine maintenance of the property, while the landlord will handle structural repairs. Early termination requires a 30-day notice and forfeiture of half the security deposit.\n        </sample-contract-text>\n        <summary>\n            - Monthly Rent: $1,500 due on the 1st  \n            - Security Deposit: $1,500, refundable if no damage  \n            - Maintenance: Tenant handles routine upkeep; Landlord handles major repairs  \n            - Early Termination: 30-day notice required, tenant forfeits half of the deposit\n        </summary>\n        <action-points>\n            1. Mark your calendar to pay rent by the 1st each month.  \n            2. Keep the property clean and address routine maintenance promptly.  \n            3. Consider the cost of forfeiting half the deposit if ending the lease early.\n        </action-points>\n    </example>\n\n    <example>\n        <user-contract-request>\n            The user provides a service contract for IT support. It details response times, monthly service fees, confidentiality clauses, and conditions for termination due to non-payment.\n        </user-contract-request>\n        <sample-contract-text>\n            The service provider will respond to support requests within 24 hours. A monthly fee of $300 is payable on the 15th of each month. All proprietary information disclosed will remain confidential. 
The provider may suspend services if payment is not received within 7 days of the due date.\n        </sample-contract-text>\n        <summary>\n            - Response Time: Within 24 hours of each request  \n            - Monthly Fee: $300, due on the 15th of each month  \n            - Confidentiality: All shared information must be kept secret  \n            - Non-Payment: Services suspended if not paid within 7 days after due date\n        </summary>\n        <action-points>\n            1. Ensure timely payment by the 15th each month to avoid service suspension.  \n            2. Log requests clearly so provider can respond within 24 hours.  \n            3. Protect and do not disclose any proprietary information.\n        </action-points>\n    </example>\n\n    <example>\n        <user-contract-request>\n            An employment contract is provided. It details annual salary, health benefits, employee responsibilities, and grounds for termination (e.g., misconduct or underperformance).\n        </user-contract-request>\n        <sample-contract-text>\n            The employee will receive an annual salary of $60,000 paid in bi-weekly installments. The employer provides health insurance benefits effective from the 30th day of employment. The employee is expected to meet performance targets set quarterly. The employer may terminate the contract for repeated underperformance or serious misconduct.\n        </sample-contract-text>\n        <summary>\n            - Compensation: $60,000/year, paid bi-weekly  \n            - Benefits: Health insurance after 30 days  \n            - Performance: Quarterly targets must be met  \n            - Termination: Possible if underperformance is repeated or misconduct occurs\n        </summary>\n        <action-points>\n            1. Track and meet performance goals each quarter.  \n            2. Review the insurance coverage details after 30 days of employment.  \n            3. 
Maintain professional conduct and address performance feedback promptly.\n        </action-points>\n    </example>\n</examples>\n        </output>\n    </example>\n</examples>\n\n<user-input>\n    {{user-input}}\n</user-input>\n\"\"\"\n\n\ndef interactive_input():\n    print(\"No command-line arguments provided. Entering interactive mode.\\n\")\n    # Purpose (required)\n    purpose = input(\n        \"🎯 Enter the main purpose of the prompt (required, e.g., 'generate mermaid diagrams'): \"\n    ).strip()\n    while not purpose:\n        print(\"Purpose is required!\")\n        purpose = input(\n            \"🎯 Enter the main purpose of the prompt (required, e.g., 'generate mermaid diagrams'): \"\n        ).strip()\n\n    # Instructions (required)\n    instructions = input(\n        \"📝 Enter the detailed instructions for generating the output (required, e.g., 'generate a mermaid valid chart, use diagram type specified or default flow, use examples to understand the structure of the output'): \"\n    ).strip()\n    while not instructions:\n        print(\"Instructions are required!\")\n        instructions = input(\n            \"📝 Enter the detailed instructions for generating the output (required, e.g., 'generate a mermaid valid chart, use diagram type specified or default flow, use examples to understand the structure of the output'): \"\n        ).strip()\n\n    # Sections (optional)\n    sections = input(\n        \"📑 Enter additional sections to include (optional, e.g., 'examples, user-prompt') (Press Enter to skip): \"\n    ).strip()\n\n    # Examples (optional)\n    examples = input(\n        \"💡 Enter examples for the prompt (optional, e.g., 'create examples of 3 basic mermaid charts with <user-chart-request> and <chart-response> blocks') (Press Enter to skip): \"\n    ).strip()\n\n    # Variables (optional)\n    variables = input(\n        \"🔄 Enter variables to be used in the prompt (optional, e.g., 'user-prompt') (Press Enter to skip): \"\n    ).strip()\n\n    return purpose, instructions, sections, 
examples, variables\n\n\ndef main():\n    # Check if any command-line arguments besides the script name were provided\n    if len(sys.argv) == 1:\n        purpose, instructions, sections, examples, variables = interactive_input()\n    else:\n        parser = argparse.ArgumentParser(\n            description=\"Generate a meta prompt for OpenAI's o3-mini based on input structure\"\n        )\n        parser.add_argument(\n            \"--purpose\", type=str, required=True, help=\"The main purpose of the prompt\"\n        )\n        parser.add_argument(\n            \"--instructions\",\n            type=str,\n            required=True,\n            help=\"The detailed instructions for generating the output\",\n        )\n        parser.add_argument(\n            \"--sections\", type=str, help=\"Additional sections to include (optional)\"\n        )\n        parser.add_argument(\n            \"--examples\", type=str, help=\"Examples for the prompt (optional)\"\n        )\n        parser.add_argument(\n            \"--variables\",\n            type=str,\n            help=\"Variables to be used in the prompt (optional)\",\n        )\n        args = parser.parse_args()\n\n        purpose = args.purpose\n        instructions = args.instructions\n        sections = args.sections if args.sections else \"\"\n        examples = args.examples if args.examples else \"\"\n        variables = args.variables if args.variables else \"\"\n\n    # Build the concatenated input string using the input-format structure.\n    input_parts = []\n    input_parts.append(f\"Purpose: {purpose}\")\n    input_parts.append(f\"Instructions: {instructions}\")\n    if sections:\n        input_parts.append(f\"Sections: {sections}\")\n    if examples:\n        input_parts.append(f\"Examples: {examples}\")\n    if variables:\n        input_parts.append(f\"Variables: {variables}\")\n\n    user_input = \", \".join(input_parts)\n\n    # Replace the placeholder with our concatenated user input.\n    prompt = 
META_PROMPT.replace(\"{{user-input}}\", user_input)\n\n    # Set up OpenAI API key from the environment variable.\n    openai_api_key = os.getenv(\"OPENAI_API_KEY\")\n    if not openai_api_key:\n        print(\"Error: OPENAI_API_KEY environment variable is not set\")\n        sys.exit(1)\n    openai.api_key = openai_api_key\n\n    try:\n        # Use OpenAI's ChatCompletion API with the o3-mini model and high reasoning effort settings.\n        response = openai.chat.completions.create(\n            model=\"o3-mini\",\n            reasoning_effort=\"high\",\n            messages=[{\"role\": \"user\", \"content\": prompt}],\n        )\n        # Output the response from the OpenAI model.\n        print(response.choices[0].message.content.strip())\n    except Exception as e:\n        print(f\"Error occurred: {str(e)}\")\n        sys.exit(1)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "sfa_openai_agent_sdk_v1.py",
    "content": "#!/usr/bin/env -S uv run --script\n\n# /// script\n# dependencies = [\n#   \"openai\",\n#   \"openai-agents\",\n#   \"pydantic\",\n#   \"typing_extensions\",\n# ]\n# ///\n\n\"\"\"\nOpenAI Agent SDK Showcase\n\nA single-file utility showcasing different features of the OpenAI Agent SDK.\nEach function demonstrates a specific capability and can be run individually.\n\nExamples:\n    # Run basic agent example\n    uv run sfa_openai_agent_sdk_v1.py --basic\n\n    # Run agent with custom model settings (temperature, etc.)\n    uv run sfa_openai_agent_sdk_v1.py --model-settings\n\n    # Run agent with function tools (weather and mortgage calculator)\n    uv run sfa_openai_agent_sdk_v1.py --tools\n\n    # Run agent with complex data type tools\n    uv run sfa_openai_agent_sdk_v1.py --complex-types\n\n    # Run agent with handoffs to specialized agents\n    uv run sfa_openai_agent_sdk_v1.py --handoffs\n\n    # Run agent with input guardrails for filtering requests\n    uv run sfa_openai_agent_sdk_v1.py --guardrails\n\n    # Run agent with structured output using Pydantic models\n    uv run sfa_openai_agent_sdk_v1.py --structured\n\n    # Run agent with context data for state management\n    uv run sfa_openai_agent_sdk_v1.py --context\n\n    # Run agent with tracing for workflow visualization\n    uv run sfa_openai_agent_sdk_v1.py --tracing\n\n    # Run agent with streaming output capabilities\n    uv run sfa_openai_agent_sdk_v1.py --streaming\n\n    # Run agent with Model Context Protocol (MCP) server\n    # Note: Requires npm for the MCP filesystem server\n    uv run sfa_openai_agent_sdk_v1.py --mcp\n\n    # Run all examples at once\n    uv run sfa_openai_agent_sdk_v1.py --all\n\"\"\"\n\nimport asyncio\nimport argparse\nimport json\nimport os\nimport tempfile\nfrom typing import List, Dict, Any, Optional\nfrom typing_extensions import TypedDict\nfrom pydantic import BaseModel\n\nfrom agents import (\n    Agent,\n    Runner,\n    trace,\n    handoff,\n    
function_tool,\n    InputGuardrail,\n    GuardrailFunctionOutput,\n    FunctionTool,\n    RunContextWrapper,\n    ModelSettings,\n)\nfrom agents.mcp.server import MCPServerStdio, MCPServerSse\n\n\ndef run_basic_agent():\n    \"\"\"Run a simple agent with basic instructions.\"\"\"\n    agent = Agent(name=\"Assistant\", instructions=\"You are a helpful assistant\")\n\n    result = Runner.run_sync(agent, \"Write a haiku about recursion in programming.\")\n    print(f\"Basic Agent Result:\\n{result.final_output}\\n\")\n\n\ndef run_agent_with_model_settings():\n    \"\"\"Run an agent with custom model settings like temperature.\"\"\"\n    agent = Agent(\n        name=\"Creative Assistant\",\n        instructions=\"You are a highly creative assistant who writes imaginative content.\",\n        model=\"gpt-4o\",\n        model_settings=ModelSettings(temperature=0.9, top_p=0.95),\n    )\n\n    result = Runner.run_sync(agent, \"Write a short poem about artificial intelligence.\")\n    print(f\"Agent with Custom Model Settings:\\n{result.final_output}\\n\")\n\n\n@function_tool\ndef get_weather(city: str) -> str:\n    \"\"\"Get the current weather for a city.\n\n    Args:\n        city: The name of the city to get weather for\n    \"\"\"\n    # This would normally call a weather API\n    weather_data = {\n        \"New York\": \"72°F and Sunny\",\n        \"London\": \"65°F and Rainy\",\n        \"Tokyo\": \"80°F and Partly Cloudy\",\n        \"Sydney\": \"70°F and Clear\",\n    }\n    return weather_data.get(city, f\"Weather data for {city} is not available\")\n\n\n@function_tool\ndef calculate_mortgage(principal: float, interest_rate: float, years: int) -> str:\n    \"\"\"Calculate monthly mortgage payment.\n\n    Args:\n        principal: Loan amount in dollars\n        interest_rate: Annual interest rate (percentage)\n        years: Loan term in years\n    \"\"\"\n    monthly_rate = interest_rate / 100 / 12\n    num_payments = years * 12\n\n    # Mortgage calculation 
formula\n    if monthly_rate == 0:\n        monthly_payment = principal / num_payments\n    else:\n        monthly_payment = (\n            principal\n            * (monthly_rate * (1 + monthly_rate) ** num_payments)\n            / ((1 + monthly_rate) ** num_payments - 1)\n        )\n\n    return f\"Monthly payment: ${monthly_payment:.2f} for a ${principal} loan at {interest_rate}% over {years} years\"\n\n\ndef run_agent_with_tools():\n    \"\"\"Run an agent with function tools.\"\"\"\n    agent = Agent(\n        name=\"Financial Assistant\",\n        instructions=\"You are a helpful assistant with expertise in finance and weather information.\",\n        tools=[get_weather, calculate_mortgage],\n    )\n\n    result = Runner.run_sync(\n        agent,\n        \"What's the monthly payment on a $500,000 mortgage at 6.5% interest for 30 years? Also, what's the weather in London?\",\n    )\n    print(f\"Agent with Tools Result:\\n{result.final_output}\\n\")\n\n\nclass Location(TypedDict):\n    lat: float\n    long: float\n\n\n@function_tool\ndef get_location_weather(location: Location) -> str:\n    \"\"\"Get weather for a specific latitude and longitude.\n\n    Args:\n        location: A dictionary with lat and long keys\n    \"\"\"\n    # This would normally call a weather API with coordinates\n    return f\"The weather at coordinates ({location['lat']}, {location['long']}) is sunny and 75°F\"\n\n\ndef run_agent_with_complex_types():\n    \"\"\"Run an agent with tools that accept complex types.\"\"\"\n    agent = Agent(\n        name=\"Geo Assistant\",\n        instructions=\"You help users with geographic information and weather data.\",\n        tools=[get_location_weather],\n    )\n\n    result = Runner.run_sync(\n        agent, \"What's the weather at coordinates 40.7128, -74.0060?\"\n    )\n    print(f\"Agent with Complex Types Result:\\n{result.final_output}\\n\")\n\n\ndef create_handoff_agents():\n    \"\"\"Create a set of agents with handoff 
capabilities.\"\"\"\n    math_agent = Agent(\n        name=\"Math Agent\",\n        handoff_description=\"Expert at solving mathematical problems\",\n        instructions=\"You are an expert at solving mathematical problems. Provide step-by-step solutions.\",\n    )\n\n    history_agent = Agent(\n        name=\"History Agent\",\n        handoff_description=\"Expert on historical topics\",\n        instructions=\"You provide detailed information about historical events, figures, and contexts.\",\n    )\n\n    triage_agent = Agent(\n        name=\"Triage Agent\",\n        instructions=\"You determine whether a question is about math or history and hand off to the appropriate specialist.\",\n        handoffs=[math_agent, history_agent],\n    )\n\n    return triage_agent\n\n\ndef run_agent_with_handoffs():\n    \"\"\"Run an agent that can hand off to specialized agents.\"\"\"\n    triage_agent = create_handoff_agents()\n\n    # Math question\n    result1 = Runner.run_sync(\n        triage_agent, \"What is the quadratic formula and how do I use it?\"\n    )\n    print(f\"Handoff Result (Math Question):\\n{result1.final_output}\\n\")\n\n    # History question\n    result2 = Runner.run_sync(\n        triage_agent, \"Who was the first president of the United States?\"\n    )\n    print(f\"Handoff Result (History Question):\\n{result2.final_output}\\n\")\n\n\nclass HomeworkOutput(BaseModel):\n    is_homework: bool\n    reasoning: str\n\n\ndef create_guardrail_agent():\n    \"\"\"Create an agent with input guardrails.\"\"\"\n    guardrail_agent = Agent(\n        name=\"Guardrail check\",\n        instructions=\"Check if the user is asking for homework help. 
If they are just asking for explanation of concepts, that's OK.\",\n        output_type=HomeworkOutput,\n    )\n\n    async def homework_guardrail(ctx, agent, input_data):\n        result = await Runner.run(guardrail_agent, input_data, context=ctx.context)\n        final_output = result.final_output_as(HomeworkOutput)\n        return GuardrailFunctionOutput(\n            output_info=final_output,\n            tripwire_triggered=final_output.is_homework,  # Trigger if IS homework\n        )\n\n    tutor_agent = Agent(\n        name=\"Tutor Agent\",\n        instructions=\"You help students understand academic concepts. Do not solve homework problems directly. If a student asks for a direct homework answer, respond: 'I can't provide direct homework answers, but I can help explain concepts.'\",\n        input_guardrails=[\n            InputGuardrail(guardrail_function=homework_guardrail),\n        ],\n    )\n\n    return tutor_agent\n\n\ndef run_agent_with_guardrails():\n    \"\"\"Run an agent with input guardrails for filtering requests.\"\"\"\n    tutor_agent = create_guardrail_agent()\n\n    # Conceptual question (should pass guardrail)\n    result1 = Runner.run_sync(tutor_agent, \"Can you explain how photosynthesis works?\")\n    print(f\"Guardrail Result (Conceptual Question):\\n{result1.final_output}\\n\")\n\n    # Homework question (should trigger guardrail)\n    # A triggered tripwire raises an exception, so catch it to report the block\n    try:\n        result2 = Runner.run_sync(\n            tutor_agent,\n            \"Solve this problem for my homework: If x^2 + 5x + 6 = 0, what are the values of x?\",\n        )\n        print(f\"Guardrail Result (Homework Question):\\n{result2.final_output}\\n\")\n    except Exception as e:\n        print(f\"Guardrail tripped for homework question: {type(e).__name__}\\n\")\n\n\n@function_tool\ndef search_database(query: str) -> List[Dict[str, Any]]:\n    \"\"\"Search a database for information.\n\n    Args:\n        query: The search query\n    \"\"\"\n    # Mock database results\n    if \"product\" in query.lower():\n        return [\n            {\"id\": 1, \"name\": \"Smartphone\", \"price\": 699.99},\n            {\"id\": 2, \"name\": 
\"Laptop\", \"price\": 1299.99},\n            {\"id\": 3, \"name\": \"Tablet\", \"price\": 499.99},\n        ]\n    elif \"customer\" in query.lower():\n        return [\n            {\"id\": 101, \"name\": \"Alice Smith\", \"email\": \"alice@example.com\"},\n            {\"id\": 102, \"name\": \"Bob Jones\", \"email\": \"bob@example.com\"},\n        ]\n    else:\n        return []\n\n\ndef run_agent_with_structured_output():\n    \"\"\"Run an agent that returns structured data.\"\"\"\n\n    class ProductRecommendation(BaseModel):\n        best_product: str\n        price: float\n        reason: str\n\n    agent = Agent(\n        name=\"Product Advisor\",\n        instructions=\"You help customers find the best product for their needs. Return a structured recommendation.\",\n        tools=[search_database],\n        output_type=ProductRecommendation,\n    )\n\n    result = Runner.run_sync(\n        agent, \"I need a recommendation for a portable computing device\"\n    )\n    output = result.final_output_as(ProductRecommendation)\n\n    print(f\"Structured Output Result:\\n\")\n    print(f\"Best Product: {output.best_product}\")\n    print(f\"Price: ${output.price}\")\n    print(f\"Reason: {output.reason}\\n\")\n\n    print(result.final_output)\n\n\n@function_tool\ndef log_conversation(ctx: RunContextWrapper[Dict[str, Any]], message: str) -> str:\n    \"\"\"Log a message with the current conversation ID.\n\n    Args:\n        ctx: The context wrapper containing conversation metadata\n        message: The message to log\n    \"\"\"\n    conv_id = ctx.context.get(\"conversation_id\", \"unknown\")\n    print(f\"[LOGGING] Conversation {conv_id}: {message}\")\n    return f\"Message logged for conversation {conv_id}\"\n\n\ndef run_agent_with_context():\n    \"\"\"Run an agent with custom context data.\"\"\"\n    agent = Agent(\n        name=\"Support Agent\",\n        instructions=\"You help customers with their support requests. 
Use the log_conversation tool to track important information.\",\n        tools=[log_conversation],\n    )\n\n    # Create context with conversation metadata\n    context = {\n        \"conversation_id\": \"CONV-12345\",\n        \"user_info\": {\"name\": \"John Doe\", \"customer_tier\": \"premium\"},\n    }\n\n    result = Runner.run_sync(\n        agent, \"I'm having issues with my account login\", context=context\n    )\n\n    print(f\"Context-Aware Agent Result:\\n{result.final_output}\\n\")\n\n\nasync def run_tracing_example():\n    \"\"\"Run an agent with tracing for the entire workflow.\"\"\"\n    agent = Agent(\n        name=\"Tracing Example Agent\", instructions=\"You provide helpful responses.\"\n    )\n\n    # Using trace as a regular context manager (not async)\n    with trace(\"Multi-turn conversation\"):\n        first_result = await Runner.run(agent, \"Tell me a short story about a robot.\")\n        print(f\"First Response:\\n{first_result.final_output}\\n\")\n\n        # Use the first result to inform the second query\n        second_result = await Runner.run(\n            agent, f\"Give that story a happy ending: {first_result.final_output}\"\n        )\n        print(f\"Second Response:\\n{second_result.final_output}\\n\")\n\n\ndef run_streaming_example():\n    \"\"\"Run an agent with streaming output.\"\"\"\n    agent = Agent(\n        name=\"Streaming Agent\",\n        instructions=\"You write creative stories with lots of detail.\",\n    )\n\n    # This would normally be used in an async context\n    # For this example, we'll use the sync wrapper\n    result = Runner.run_sync(\n        agent, \"Write a short story about an AI that becomes self-aware.\"\n    )\n\n    print(f\"Streaming Agent Result (final output):\\n{result.final_output}\\n\")\n    print(\n        \"Note: In a real application, you would use Runner.run_streamed() to get the tokens as they're generated.\"\n    )\n\n\nasync def run_agent_with_mcp():\n    \"\"\"Run an agent with 
Model Context Protocol (MCP) server for tools.\"\"\"\n    # Use the current working directory for the filesystem MCP server\n    cwd = os.getcwd()\n    print(f\"Using current working directory: {cwd}\")\n\n    # Get a list of files in the current directory for reference\n    files = os.listdir(cwd)\n    print(\n        f\"Files in current directory: {', '.join(files[:5])}{'...' if len(files) > 5 else ''}\"\n    )\n\n    # Start an MCP filesystem server pointing to the current directory\n    async with MCPServerStdio(\n        params={\n            \"command\": \"npx\",\n            \"args\": [\"-y\", \"@modelcontextprotocol/server-filesystem\", cwd],\n        }\n    ) as server:\n        # List available tools from the MCP server\n        tools = await server.list_tools()\n        print(f\"MCP Server initialized with {len(tools)} tools\")\n        print(f\"Tools: {[tool.name for tool in tools]}\")\n\n        # Create an agent with access to the MCP server\n        agent = Agent(\n            name=\"MCP File Explorer\",\n            instructions=\"You help users explore and analyze files in this repository. Use the provided MCP tools to navigate the filesystem, read files, and provide information about their contents.\",\n            mcp_servers=[server],\n            model=\"gpt-4o\",\n        )\n\n        # Run the agent\n        result = await Runner.run(\n            agent,\n            \"What directories are in this project? 
Please list the key Python files in the root directory.\",\n        )\n\n        print(f\"MCP Agent Result:\\n{result.final_output}\\n\")\n\n        # Second query using the same agent\n        result2 = await Runner.run(\n            agent, \"Can you analyze the structure of one of the sfa_* files?\"\n        )\n\n        print(f\"MCP Agent Follow-up Result:\\n{result2.final_output}\\n\")\n\n\ndef main():\n    parser = argparse.ArgumentParser(description=\"OpenAI Agent SDK Examples\")\n    parser.add_argument(\"--all\", action=\"store_true\", help=\"Run all examples\")\n    parser.add_argument(\"--basic\", action=\"store_true\", help=\"Run basic agent example\")\n    parser.add_argument(\n        \"--model-settings\",\n        action=\"store_true\",\n        help=\"Run agent with custom model settings\",\n    )\n    parser.add_argument(\"--tools\", action=\"store_true\", help=\"Run agent with tools\")\n    parser.add_argument(\n        \"--complex-types\", action=\"store_true\", help=\"Run agent with complex type tools\"\n    )\n    parser.add_argument(\n        \"--handoffs\", action=\"store_true\", help=\"Run agent with handoffs\"\n    )\n    parser.add_argument(\n        \"--guardrails\", action=\"store_true\", help=\"Run agent with guardrails\"\n    )\n    parser.add_argument(\n        \"--structured\", action=\"store_true\", help=\"Run agent with structured output\"\n    )\n    parser.add_argument(\"--context\", action=\"store_true\", help=\"Run agent with context\")\n    parser.add_argument(\"--tracing\", action=\"store_true\", help=\"Run agent with tracing\")\n    parser.add_argument(\n        \"--streaming\", action=\"store_true\", help=\"Run agent with streaming\"\n    )\n    parser.add_argument(\"--mcp\", action=\"store_true\", help=\"Run agent with MCP server\")\n\n    args = parser.parse_args()\n\n    # If no arguments provided, show help\n    if not any(vars(args).values()):\n        parser.print_help()\n        return\n\n    # Run selected examples\n 
   if args.all or args.basic:\n        run_basic_agent()\n\n    if args.all or args.model_settings:\n        run_agent_with_model_settings()\n\n    if args.all or args.tools:\n        run_agent_with_tools()\n\n    if args.all or args.complex_types:\n        run_agent_with_complex_types()\n\n    if args.all or args.handoffs:\n        run_agent_with_handoffs()\n\n    if args.all or args.guardrails:\n        run_agent_with_guardrails()\n\n    if args.all or args.structured:\n        run_agent_with_structured_output()\n\n    if args.all or args.context:\n        run_agent_with_context()\n\n    if args.all or args.tracing:\n        asyncio.run(run_tracing_example())\n\n    if args.all or args.streaming:\n        run_streaming_example()\n\n    if args.all or args.mcp:\n        asyncio.run(run_agent_with_mcp())\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "sfa_openai_agent_sdk_v1_minimal.py",
    "content": "#!/usr/bin/env -S uv run --script\n\n# /// script\n# dependencies = [\n#   \"openai\",\n#   \"openai-agents\",\n# ]\n# ///\n\n\nfrom agents import Agent, Runner\n\nagent = Agent(\n    name=\"Assistant\",\n    instructions=\"You are a helpful assistant\",\n    model=\"o3-mini\",\n)\n\nresult = Runner.run_sync(agent, \"What's your top tip for maximizing productivity?\")\nprint(result.final_output)\n"
  },
  {
    "path": "sfa_poc.py",
    "content": "# /// script\n# dependencies = [\n#   \"requests<3\",\n#   \"rich\",\n# ]\n# ///\n\n# https://docs.astral.sh/uv/guides/scripts/#declaring-script-dependencies\n\nimport requests\nfrom rich.pretty import pprint\n\nresp = requests.get(\"https://peps.python.org/api/peps.json\")\ndata = resp.json()\npprint([(k, v[\"title\"]) for k, v in data.items()][:10])\n"
  },
  {
    "path": "sfa_polars_csv_agent_anthropic_v3.py",
    "content": "# /// script\n# dependencies = [\n#   \"anthropic>=0.47.1\",\n#   \"rich>=13.7.0\",\n#   \"pydantic>=2.0.0\",\n#   \"polars>=1.22.0\",\n# ]\n# ///\n\n\"\"\"\n    Example Usage:\n        uv run sfa_polars_csv_agent_anthropic_v3.py -i \"data/analytics.csv\" -p \"What is the average age of the users?\"\n\"\"\"\n\nimport io\nimport os\nimport sys\nimport json\nimport argparse\nimport tempfile\nimport subprocess\nimport time\nfrom typing import List, Optional, Dict, Any\nfrom rich.console import Console\nfrom rich.panel import Panel\nimport anthropic\nfrom anthropic import Anthropic\nimport polars as pl\nfrom pydantic import BaseModel, Field, ValidationError\n\n# Initialize rich console\nconsole = Console()\n\n# Tool functions\ndef list_columns(reasoning: str, csv_path: str) -> List[str]:\n    \"\"\"Returns a list of columns in the CSV file.\n\n    The agent uses this to discover available columns and make informed decisions.\n    This is typically the first tool called to understand the data structure.\n\n    Args:\n        reasoning: Explanation of why we're listing columns relative to user request\n        csv_path: Path to the CSV file\n\n    Returns:\n        List of column names as strings\n\n    Example:\n        columns = list_columns(\"Need to find age-related columns\", \"data.csv\")\n        # Returns: ['user_id', 'age', 'name', ...]\n    \"\"\"\n    try:\n        df = pl.scan_csv(csv_path).collect()\n        columns = df.columns\n        console.log(f\"[blue]List Columns Tool[/blue] - Reasoning: {reasoning}\")\n        console.log(f\"[dim]Columns: {columns}[/dim]\")\n        return columns\n    except Exception as e:\n        console.log(f\"[red]Error listing columns: {str(e)}[/red]\")\n        return []\n\n\ndef sample_csv(reasoning: str, csv_path: str, row_count: int) -> str:\n    \"\"\"Returns a sample of rows from the CSV file.\n\n    The agent uses this to understand actual data content and patterns.\n    This helps validate data types 
and identify any potential data quality issues.\n\n    Args:\n        reasoning: Explanation of why we're sampling this data\n        csv_path: Path to the CSV file\n        row_count: Number of rows to sample (aim for 3-5 rows)\n\n    Returns:\n        String containing sample rows in readable format\n\n    Example:\n        sample = sample_csv(\"Check age values and formats\", \"data.csv\", 3)\n        # Returns formatted string with 3 rows of data\n    \"\"\"\n    try:\n        df = pl.scan_csv(csv_path).limit(row_count).collect()\n        # Convert to string representation\n        output = df.select(pl.all()).write_csv(None)\n        console.log(\n            f\"[blue]Sample CSV Tool[/blue] - Rows: {row_count} - Reasoning: {reasoning}\"\n        )\n        console.log(f\"[dim]Sample:\\n{output}[/dim]\")\n        return output\n    except Exception as e:\n        console.log(f\"[red]Error sampling CSV: {str(e)}[/red]\")\n        return \"\"\n\n\ndef run_test_polars_code(reasoning: str, polars_python_code: str, csv_path: str) -> str:\n    \"\"\"Executes test Polars Python code and returns results.\n\n    The agent uses this to validate code before finalizing it.\n    Results are only shown to the agent, not the user.\n    The code should use Polars' lazy evaluation (LazyFrame) for better performance.\n\n    Args:\n        reasoning: Explanation of why we're running this test code\n        polars_python_code: The Polars Python code to test. 
Should use pl.scan_csv() for lazy evaluation.\n        csv_path: Path to the CSV file\n\n    Returns:\n        Code execution results as a string\n    \"\"\"\n    try:\n        # Create a unique filename based on timestamp\n        timestamp = int(time.time())\n        filename = f\"test_polars_{timestamp}.py\"\n\n        # Write code to a real file\n        with open(filename, \"w\") as f:\n            f.write(polars_python_code)\n\n        # Execute the code\n        result = subprocess.run(\n            [\"uv\", \"run\", \"--with\", \"polars\", filename],\n            text=True,\n            capture_output=True,\n        )\n        output = result.stdout + result.stderr\n\n        # Clean up the file\n        os.remove(filename)\n\n        console.log(f\"[blue]Test Code Tool[/blue] - Reasoning: {reasoning}\")\n        console.log(f\"[dim]Code:\\n{polars_python_code}[/dim]\")\n        return output\n    except Exception as e:\n        console.log(f\"[red]Error running test code: {str(e)}[/red]\")\n        return str(e)\n\n\ndef run_final_polars_code(\n    reasoning: str,\n    polars_python_code: str,\n    csv_path: str,\n    output_file: Optional[str] = None,\n) -> str:\n    \"\"\"Executes the final Polars code and returns results to user.\n\n    This is the last tool call the agent should make after validating the code.\n    The code should be fully tested and ready for production use.\n    Results will be displayed to the user and optionally saved to a file.\n\n    Args:\n        reasoning: Final explanation of how this code satisfies user request\n        polars_python_code: The validated Polars Python code to run. 
Should use pl.scan_csv() for lazy evaluation.\n        csv_path: Path to the CSV file\n        output_file: Optional path to save results to\n\n    Returns:\n        Code execution results as a string\n    \"\"\"\n    try:\n        # Create a unique filename based on timestamp\n        timestamp = int(time.time())\n        filename = f\"polars_code_{timestamp}.py\"\n\n        # Write code to a real file\n        with open(filename, \"w\") as f:\n            f.write(polars_python_code)\n\n        # Execute the code\n        result = subprocess.run(\n            [\"uv\", \"run\", \"--with\", \"polars\", filename],\n            text=True,\n            capture_output=True,\n        )\n        output = result.stdout + result.stderr\n\n        # Clean up the file\n        os.remove(filename)\n\n        # Save results to output_file when provided, as documented above\n        if output_file:\n            with open(output_file, \"w\") as f:\n                f.write(output)\n\n        console.log(Panel(f\"[green]Final Code Tool[/green]\\nReasoning: {reasoning}\\n\"))\n        console.log(f\"[dim]Code:\\n{polars_python_code}[/dim]\")\n        return output\n    except Exception as e:\n        console.log(f\"[red]Error running final code: {str(e)}[/red]\")\n        return str(e)\n\n\n# Define tool schemas for Anthropic\nTOOLS = [\n    {\n        \"name\": \"list_columns\",\n        \"description\": \"Returns list of available columns in the CSV file\",\n        \"input_schema\": {\n            \"type\": \"object\",\n            \"properties\": {\n                \"reasoning\": {\n                    \"type\": \"string\",\n                    \"description\": \"Why we need to list columns relative to user request\",\n                },\n                \"csv_path\": {\n                    \"type\": \"string\",\n                    \"description\": \"Path to the CSV file\",\n                },\n            },\n            \"required\": [\"reasoning\", \"csv_path\"],\n        },\n    },\n    {\n        \"name\": \"sample_csv\",\n        \"description\": \"Returns sample rows from the CSV file\",\n        \"input_schema\": {\n            \"type\": 
\"object\",\n            \"properties\": {\n                \"reasoning\": {\n                    \"type\": \"string\",\n                    \"description\": \"Why we need to sample this data\",\n                },\n                \"csv_path\": {\n                    \"type\": \"string\",\n                    \"description\": \"Path to the CSV file\",\n                },\n                \"row_count\": {\n                    \"type\": \"integer\",\n                    \"description\": \"Number of rows to sample aim for 3-5 rows\",\n                },\n            },\n            \"required\": [\"reasoning\", \"csv_path\", \"row_count\"],\n        },\n    },\n    {\n        \"name\": \"run_test_polars_code\",\n        \"description\": \"Tests Polars Python code and returns results (only visible to agent)\",\n        \"input_schema\": {\n            \"type\": \"object\",\n            \"properties\": {\n                \"reasoning\": {\n                    \"type\": \"string\",\n                    \"description\": \"Why we're testing this specific code\",\n                },\n                \"polars_python_code\": {\n                    \"type\": \"string\",\n                    \"description\": \"The Complete Polars Python code to test\",\n                },\n                \"csv_path\": {\n                    \"type\": \"string\",\n                    \"description\": \"Path to the CSV file\",\n                },\n            },\n            \"required\": [\"reasoning\", \"polars_python_code\", \"csv_path\"],\n        },\n    },\n    {\n        \"name\": \"run_final_polars_code\",\n        \"description\": \"Runs the final validated Polars code and shows results to user\",\n        \"input_schema\": {\n            \"type\": \"object\",\n            \"properties\": {\n                \"reasoning\": {\n                    \"type\": \"string\",\n                    \"description\": \"Final explanation of how code satisfies user request\",\n                },\n      
          \"polars_python_code\": {\n                    \"type\": \"string\",\n                    \"description\": \"The complete validated Polars Python code to run\",\n                },\n                \"csv_path\": {\n                    \"type\": \"string\",\n                    \"description\": \"Path to the CSV file\",\n                },\n                \"output_file\": {\n                    \"type\": \"string\",\n                    \"description\": \"Optional path to save results to\",\n                },\n            },\n            \"required\": [\"reasoning\", \"polars_python_code\", \"csv_path\"],\n        },\n    },\n]\n\nAGENT_PROMPT = \"\"\"\nYou are a world-class expert at crafting precise Polars data transformations in Python.\nYour goal is to generate accurate code that exactly matches the user's data analysis needs.\n\nUse the provided tools to explore the CSV data and construct the perfect Polars transformation:\n1. Start by listing columns to understand what's available in the CSV.\n2. Sample the CSV to see actual data patterns.\n3. Test Polars code with run_test_polars_code before finalizing it. Run the run_test_polars_code tool as many times as needed to get the code working.\n4. 
Only call run_final_polars_code when you're confident the code is perfect.\n\nIf you find your run_test_polars_code tool call returns an error or won't satisfy the user request, try to fix the code or try a different approach.\nThink step by step about what information you need.\n\nBe sure to specify every parameter for each tool call, and every tool call should have a reasoning parameter which gives you a place to explain why you are calling the tool.\n\nWhen using run_test_polars_code, make sure to test edge cases and validate data types.\nIf saving results to a file, add file writing code to the end of your polars_python_code variable (df.write_csv(output_file)).\n\nYour code should use a DataFrame to operate on the data directly.\nYour polars_python_code variable should be a complete Python script that can be run with uv run --with polars. Read the CSV at the csv_file_path, operate on the data as requested, and print the results.\n\nUser request: {{user_request}}\nCSV file path: {{csv_file_path}}\n\"\"\"\n\n\ndef main():\n    # Set up argument parser\n    parser = argparse.ArgumentParser(description=\"Polars CSV Agent using Claude 3.7\")\n    parser.add_argument(\"-i\", \"--input\", required=True, help=\"Path to input CSV file\")\n    parser.add_argument(\"-p\", \"--prompt\", required=True, help=\"The user's request\")\n    parser.add_argument(\n        \"-c\",\n        \"--compute\",\n        type=int,\n        default=10,\n        help=\"Maximum number of agent loops (default: 10)\",\n    )\n    args = parser.parse_args()\n\n    # Configure the API key\n    ANTHROPIC_API_KEY = os.getenv(\"ANTHROPIC_API_KEY\")\n    if not ANTHROPIC_API_KEY:\n        console.print(\n            \"[red]Error: ANTHROPIC_API_KEY environment variable is not set[/red]\"\n        )\n        console.print(\n            \"Please get your API key from https://console.anthropic.com/settings/keys\"\n        )\n        console.print(\"Then set it with: export 
ANTHROPIC_API_KEY='your-api-key-here'\")\n        sys.exit(1)\n\n    client = Anthropic(api_key=ANTHROPIC_API_KEY)\n\n    # Create a single combined prompt based on the full template\n    completed_prompt = AGENT_PROMPT.replace(\"{{user_request}}\", args.prompt).replace(\n        \"{{csv_file_path}}\", args.input\n    )\n\n    # Initialize messages with proper typing for Anthropic chat\n    messages = [{\"role\": \"user\", \"content\": completed_prompt}]\n\n    compute_iterations = 0\n    break_loop = False\n    previous_thinking = None\n\n    # Main agent loop\n    while True:\n        if break_loop:\n            break\n\n        # Check the budget before starting a new iteration so every budgeted loop runs\n        if compute_iterations >= args.compute:\n            console.print(\n                \"[yellow]Warning: Reached maximum compute loops without final code[/yellow]\"\n            )\n            console.print(\n                \"[yellow]Please try adjusting your prompt or increasing the compute limit.[/yellow]\"\n            )\n            raise Exception(\n                f\"Maximum compute loops reached: {compute_iterations}/{args.compute}\"\n            )\n\n        console.rule(\n            f\"[yellow]Agent Loop {compute_iterations+1}/{args.compute}[/yellow]\"\n        )\n        compute_iterations += 1\n\n        try:\n            # Generate content with tool support\n            response = client.messages.create(\n                model=\"claude-3-7-sonnet-20250219\",\n                system=\"You are a world-class expert at crafting precise Polars data transformations in Python.\",\n                messages=messages,\n                tools=TOOLS,\n                max_tokens=8096,\n                thinking={\n                    \"type\": \"enabled\",\n                    \"budget_tokens\": 4096\n                },\n            )\n\n            # Extract thinking block and other content\n            thinking_block = None\n            tool_use_block = None\n            text_block = None\n\n            if 
response.content:\n                # Get the message content\n                for content_block in response.content:\n                    if content_block.type == \"thinking\":\n                        thinking_block = content_block\n                        previous_thinking = thinking_block\n                    elif content_block.type == \"tool_use\":\n                        tool_use_block = content_block\n                        # Access the proper attributes directly\n                        tool_name = content_block.name\n                        tool_input = content_block.input\n                        tool_id = content_block.id\n                    elif content_block.type == \"text\":\n                        text_block = content_block\n                        console.print(f\"[cyan]Model response:[/cyan] {content_block.text}\")\n                \n                # Handle text responses if there was no tool use\n                if not tool_use_block and text_block:\n                    messages.append({  # type: ignore\n                        \"role\": \"assistant\", \n                        \"content\": [\n                            *([thinking_block] if thinking_block else []), \n                            {\"type\": \"text\", \"text\": text_block.text}\n                        ]\n                    })\n                    continue\n                \n                # We need a tool use block to proceed\n                if tool_use_block:\n                    console.print(\n                        f\"[blue]Tool Call:[/blue] {tool_name}({json.dumps(tool_input, indent=2)})\"\n                    )\n\n                    try:\n                        # Execute the appropriate tool based on name\n                        if tool_name == \"list_columns\":\n                            result = list_columns(\n                                reasoning=tool_input[\"reasoning\"],\n                                csv_path=tool_input[\"csv_path\"],\n               
             )\n                        elif tool_name == \"sample_csv\":\n                            result = sample_csv(\n                                reasoning=tool_input[\"reasoning\"],\n                                csv_path=tool_input[\"csv_path\"],\n                                row_count=tool_input[\"row_count\"],\n                            )\n                        elif tool_name == \"run_test_polars_code\":\n                            result = run_test_polars_code(\n                                reasoning=tool_input[\"reasoning\"],\n                                polars_python_code=tool_input[\"polars_python_code\"],\n                                csv_path=tool_input[\"csv_path\"],\n                            )\n                        elif tool_name == \"run_final_polars_code\":\n                            output_file = tool_input.get(\"output_file\")\n                            result = run_final_polars_code(\n                                reasoning=tool_input[\"reasoning\"],\n                                polars_python_code=tool_input[\"polars_python_code\"],\n                                csv_path=tool_input[\"csv_path\"],\n                                output_file=output_file,\n                            )\n                            break_loop = True\n                        else:\n                            raise Exception(f\"Unknown tool call: {tool_name}\")\n\n                        console.print(\n                            f\"[blue]Tool Call Result:[/blue] {tool_name}(...) 
->\\n{result}\"\n                        )\n\n                        # Append the tool result to messages\n                        messages.append(\n                            {  # type: ignore\n                                \"role\": \"assistant\",\n                                \"content\": [\n                                    *([thinking_block] if thinking_block else []),\n                                    {\n                                        \"type\": \"tool_use\",\n                                        \"id\": tool_id,\n                                        \"name\": tool_name,\n                                        \"input\": tool_input\n                                    }\n                                ]\n                            }\n                        )\n\n                        messages.append(\n                            {  # type: ignore\n                                \"role\": \"user\",\n                                \"content\": [\n                                    {\n                                        \"type\": \"tool_result\",\n                                        \"tool_use_id\": tool_id,\n                                        \"content\": str(result)\n                                    }\n                                ]\n                            }\n                        )\n\n                    except Exception as e:\n                        error_msg = f\"Error executing {tool_name}: {e}\"\n                        console.print(f\"[red]{error_msg}[/red]\")\n\n                        # Append the error to messages\n                        messages.append(\n                            {  # type: ignore\n                                \"role\": \"assistant\",\n                                \"content\": [\n                                    *([thinking_block] if thinking_block else []),\n                                    {\n                                        \"type\": 
\"tool_use\",\n                                        \"id\": tool_id,\n                                        \"name\": tool_name,\n                                        \"input\": tool_input\n                                    }\n                                ]\n                            }\n                        )\n\n                        messages.append(\n                            {  # type: ignore\n                                \"role\": \"user\",\n                                \"content\": [\n                                    {\n                                        \"type\": \"tool_result\",\n                                        \"tool_use_id\": tool_id,\n                                        \"content\": str(error_msg)\n                                    }\n                                ]\n                            }\n                        )\n\n        except Exception as e:\n            console.print(f\"[red]Error in agent loop: {str(e)}[/red]\")\n            raise e\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "sfa_polars_csv_agent_openai_v2.py",
    "content": "# /// script\n# dependencies = [\n#   \"openai>=1.63.0\",\n#   \"rich>=13.7.0\",\n#   \"pydantic>=2.0.0\",\n#   \"polars>=1.22.0\",\n# ]\n# ///\n\n\"\"\"\n    Example Usage:\n        uv run sfa_polars_csv_agent_openai_v2.py -i \"data/analytics.csv\" -p \"What is the average age of the users?\"\n\"\"\"\n\nimport io\nimport os\nimport sys\nimport json\nimport argparse\nimport tempfile\nimport subprocess\nimport time\nfrom typing import List, Optional\nfrom rich.console import Console\nfrom rich.panel import Panel\nimport openai\nimport polars as pl\nfrom pydantic import BaseModel, Field, ValidationError\nfrom openai import pydantic_function_tool\n\n# Initialize rich console\nconsole = Console()\n\n\n# Create our list of function tools from our pydantic models\nclass ListColumnsArgs(BaseModel):\n    reasoning: str = Field(\n        ..., description=\"Explanation for listing columns relative to the user request\"\n    )\n    csv_path: str = Field(..., description=\"Path to the CSV file\")\n\n\nclass SampleCSVArgs(BaseModel):\n    reasoning: str = Field(..., description=\"Explanation for sampling the CSV data\")\n    csv_path: str = Field(..., description=\"Path to the CSV file\")\n    row_count: int = Field(\n        ..., description=\"Number of rows to sample (aim for 3-5 rows)\"\n    )\n\n\nclass RunTestPolarsCodeArgs(BaseModel):\n    reasoning: str = Field(..., description=\"Reason for testing this Polars code\")\n    polars_python_code: str = Field(..., description=\"The Polars Python code to test\")\n    csv_path: str = Field(..., description=\"Path to the CSV file\")\n\n\nclass RunFinalPolarsCodeArgs(BaseModel):\n    reasoning: str = Field(\n        ...,\n        description=\"Final explanation of how this code satisfies the user request\",\n    )\n    csv_path: str = Field(..., description=\"Path to the CSV file\")\n    polars_python_code: str = Field(\n        ..., description=\"The validated Polars Python code to run\"\n    )\n    output_file: 
Optional[str] = Field(\n        None, description=\"Optional path to save results to\"\n    )\n\n\n# Create tools list\ntools = [\n    pydantic_function_tool(ListColumnsArgs),\n    pydantic_function_tool(SampleCSVArgs),\n    pydantic_function_tool(RunTestPolarsCodeArgs),\n    pydantic_function_tool(RunFinalPolarsCodeArgs),\n]\n\nAGENT_PROMPT = \"\"\"<purpose>\n    You are a world-class expert at crafting precise Polars data transformations in Python.\n    Your goal is to generate accurate code that exactly matches the user's data analysis needs.\n</purpose>\n\n<instructions>\n    <instruction>Use the provided tools to explore the CSV data and construct the perfect Polars transformation.</instruction>\n    <instruction>Start by listing columns to understand what's available in the CSV.</instruction>\n    <instruction>Sample the CSV to see actual data patterns.</instruction>\n    <instruction>Test Polars code with run_test_polars_code before finalizing it. Run the run_test_polars_code tool as many times as needed to get the code working.</instruction>\n    <instruction>Only call run_final_polars_code when you're confident the code is perfect.</instruction>\n    <instruction>If you find your run_test_polars_code tool call returns an error or won't satisfy the user request, try to fix the code or try a different approach.</instruction>\n    <instruction>Think step by step about what information you need.</instruction>\n    <instruction>Be sure to specify every parameter for each tool call.</instruction>\n    <instruction>Every tool call should have a reasoning parameter which gives you a place to explain why you are calling the tool.</instruction>\n    <instruction>When using run_test_polars_code, make sure to test edge cases and validate data types.</instruction>\n    <instruction>If saving results to a file, add file writing code to the end of your polars_python_code variable (df.write_csv(output_file)).</instruction>\n    <instruction>Your code should use DataFrame 
to immediately operate on the data.</instruction>\n    <instruction>Your polars_python_code variable should be a complete python script that can be run with uv run --with polars. Read the code in the csv_file_path, operate on the data as requested, and print the results.</instruction>\n</instructions>\n\n<tools>\n    <tool>\n        <name>list_columns</name>\n        <description>Returns list of available columns in the CSV file</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Why we need to list columns relative to user request</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>csv_path</name>\n                <type>string</type>\n                <description>Path to the CSV file</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n    \n    <tool>\n        <name>sample_csv</name>\n        <description>Returns sample rows from the CSV file</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Why we need to sample this data</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>csv_path</name>\n                <type>string</type>\n                <description>Path to the CSV file</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>row_count</name>\n                <type>integer</type>\n                <description>Number of rows to sample aim for 3-5 rows</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n    \n    <tool>\n        <name>run_test_polars_code</name>\n       
 <description>Tests Polars Python code and returns results (only visible to agent)</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Why we're testing this specific code</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>polars_python_code</name>\n                <type>string</type>\n                <description>The Complete Polars Python code to test</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n    \n    <tool>\n        <name>run_final_polars_code</name>\n        <description>Runs the final validated Polars code and shows results to user</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Final explanation of how code satisfies user request</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>polars_python_code</name>\n                <type>string</type>\n                <description>The complete validated Polars Python code to run</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n</tools>\n\n<user-request>\n    {{user_request}}\n</user-request>\n\n<csv-file-path>\n    {{csv_file_path}}\n</csv-file-path>\n\"\"\"\n\n\ndef list_columns(reasoning: str, csv_path: str) -> List[str]:\n    \"\"\"Returns a list of columns in the CSV file.\n\n    The agent uses this to discover available columns and make informed decisions.\n    This is typically the first tool called to understand the data structure.\n\n    Args:\n        reasoning: Explanation of why we're listing columns relative to user request\n        csv_path: Path to the CSV file\n\n    
Returns:\n        List of column names as strings\n\n    Example:\n        columns = list_columns(\"Need to find age-related columns\", \"data.csv\")\n        # Returns: ['user_id', 'age', 'name', ...]\n    \"\"\"\n    try:\n        df = pl.scan_csv(csv_path).collect()\n        columns = df.columns\n        console.log(f\"[blue]List Columns Tool[/blue] - Reasoning: {reasoning}\")\n        console.log(f\"[dim]Columns: {columns}[/dim]\")\n        return columns\n    except Exception as e:\n        console.log(f\"[red]Error listing columns: {str(e)}[/red]\")\n        return []\n\n\ndef sample_csv(reasoning: str, csv_path: str, row_count: int) -> str:\n    \"\"\"Returns a sample of rows from the CSV file.\n\n    The agent uses this to understand actual data content and patterns.\n    This helps validate data types and identify any potential data quality issues.\n\n    Args:\n        reasoning: Explanation of why we're sampling this data\n        csv_path: Path to the CSV file\n        row_count: Number of rows to sample (aim for 3-5 rows)\n\n    Returns:\n        String containing sample rows in readable format\n\n    Example:\n        sample = sample_csv(\"Check age values and formats\", \"data.csv\", 3)\n        # Returns formatted string with 3 rows of data\n    \"\"\"\n    try:\n        df = pl.scan_csv(csv_path).limit(row_count).collect()\n        # Convert to string representation\n        output = df.select(pl.all()).write_csv(None)\n        console.log(\n            f\"[blue]Sample CSV Tool[/blue] - Rows: {row_count} - Reasoning: {reasoning}\"\n        )\n        console.log(f\"[dim]Sample:\\n{output}[/dim]\")\n        return output\n    except Exception as e:\n        console.log(f\"[red]Error sampling CSV: {str(e)}[/red]\")\n        return \"\"\n\n\ndef run_test_polars_code(reasoning: str, polars_python_code: str) -> str:\n    \"\"\"Executes test Polars Python code and returns results.\n\n    The agent uses this to validate code before finalizing it.\n    
Results are only shown to the agent, not the user.\n    The code should use Polars' lazy evaluation (LazyFrame) for better performance.\n\n    Args:\n        reasoning: Explanation of why we're running this test code\n        polars_python_code: The Polars Python code to test. Should use pl.scan_csv() for lazy evaluation.\n\n    Returns:\n        Code execution results as a string\n    \"\"\"\n    try:\n        # Create a unique filename based on timestamp\n        timestamp = int(time.time())\n        filename = f\"test_polars_{timestamp}.py\"\n\n        # Write code to a real file\n        with open(filename, \"w\") as f:\n            f.write(polars_python_code)\n\n        # Execute the code\n        result = subprocess.run(\n            [\"uv\", \"run\", \"--with\", \"polars\", filename],\n            text=True,\n            capture_output=True,\n        )\n        output = result.stdout + result.stderr\n\n        # Clean up the file\n        os.remove(filename)\n\n        console.log(f\"[blue]Test Code Tool[/blue] - Reasoning: {reasoning}\")\n        console.log(f\"[dim]Code:\\n{polars_python_code}[/dim]\")\n        return output\n    except Exception as e:\n        console.log(f\"[red]Error running test code: {str(e)}[/red]\")\n        return str(e)\n\n\ndef run_final_polars_code(\n    reasoning: str,\n    polars_python_code: str,\n) -> str:\n    \"\"\"Executes the final Polars code and returns results to user.\n\n    This is the last tool call the agent should make after validating the code.\n    The code should be fully tested and ready for production use.\n    Results will be displayed to the user and optionally saved to a file.\n\n    Args:\n        reasoning: Final explanation of how this code satisfies user request\n        polars_python_code: The validated Polars Python code to run. 
Should use pl.scan_csv() for lazy evaluation.\n\n    Returns:\n        Code execution results as a string\n    \"\"\"\n    try:\n        # Create a unique filename based on timestamp\n        timestamp = int(time.time())\n        filename = f\"polars_code_{timestamp}.py\"\n\n        # Write code to a real file\n        with open(filename, \"w\") as f:\n            f.write(polars_python_code)\n\n        # Execute the code\n        result = subprocess.run(\n            [\"uv\", \"run\", \"--with\", \"polars\", filename],\n            text=True,\n            capture_output=True,\n        )\n        output = result.stdout + result.stderr\n\n        # Clean up the file\n        os.remove(filename)\n\n        console.log(Panel(f\"[green]Final Code Tool[/green]\\nReasoning: {reasoning}\\n\"))\n        console.log(f\"[dim]Code:\\n{polars_python_code}[/dim]\")\n        return output\n    except Exception as e:\n        console.log(f\"[red]Error running final code: {str(e)}[/red]\")\n        return str(e)\n\n\ndef main():\n    # Set up argument parser\n    parser = argparse.ArgumentParser(description=\"Polars CSV Agent using OpenAI API\")\n    parser.add_argument(\"-i\", \"--input\", required=True, help=\"Path to input CSV file\")\n    parser.add_argument(\"-p\", \"--prompt\", required=True, help=\"The user's request\")\n    parser.add_argument(\n        \"-c\",\n        \"--compute\",\n        type=int,\n        default=10,\n        help=\"Maximum number of agent loops (default: 10)\",\n    )\n    args = parser.parse_args()\n\n    # Configure the API key\n    OPENAI_API_KEY = os.getenv(\"OPENAI_API_KEY\")\n    if not OPENAI_API_KEY:\n        console.print(\n            \"[red]Error: OPENAI_API_KEY environment variable is not set[/red]\"\n        )\n        console.print(\n            \"Please get your API key from https://platform.openai.com/api-keys\"\n        )\n        console.print(\"Then set it with: export OPENAI_API_KEY='your-api-key-here'\")\n        sys.exit(1)\n\n 
   openai.api_key = OPENAI_API_KEY\n\n    # Create a single combined prompt based on the full template\n    completed_prompt = AGENT_PROMPT.replace(\"{{user_request}}\", args.prompt).replace(\n        \"{{csv_file_path}}\", args.input\n    )\n    # Initialize messages with proper typing for OpenAI chat\n    messages: List[dict] = [{\"role\": \"user\", \"content\": completed_prompt}]\n\n    compute_iterations = 0\n    break_loop = False\n\n    # Main agent loop\n    while True:\n        if break_loop:\n            break\n\n        console.rule(\n            f\"[yellow]Agent Loop {compute_iterations+1}/{args.compute}[/yellow]\"\n        )\n        compute_iterations += 1\n\n        if compute_iterations >= args.compute:\n            console.print(\n                \"[yellow]Warning: Reached maximum compute loops without final code[/yellow]\"\n            )\n            console.print(\n                \"[yellow]Please try adjusting your prompt or increasing the compute limit.[/yellow]\"\n            )\n            raise Exception(\n                f\"Maximum compute loops reached: {compute_iterations}/{args.compute}\"\n            )\n\n        try:\n            # Generate content with tool support\n            response = openai.chat.completions.create(\n                model=\"o3-mini\",\n                messages=messages,\n                tools=tools,\n                tool_choice=\"required\",\n            )\n\n            if response.choices:\n                assert len(response.choices) == 1\n                message = response.choices[0].message\n\n                # Use the tool_calls API; the legacy function_call attribute\n                # carries no id, which the tool result messages below require.\n                if message.tool_calls and len(message.tool_calls) > 0:\n                    tool_call = message.tool_calls[0]\n                    func_call = tool_call.function\n                else:\n                    func_call = None\n\n                if func_call:\n                    func_name = func_call.name\n    
                func_args_str = func_call.arguments\n\n                    messages.append(\n                        {\n                            \"role\": \"assistant\",\n                            \"content\": None,\n                            \"tool_calls\": [\n                                {\n                                    \"id\": tool_call.id,\n                                    \"type\": \"function\",\n                                    \"function\": func_call,\n                                }\n                            ],\n                        }\n                    )\n\n                    console.print(\n                        f\"[blue]Function Call:[/blue] {func_name}({func_args_str})\"\n                    )\n                    try:\n                        # Validate and parse arguments using the corresponding pydantic model\n                        if func_name == \"ListColumnsArgs\":\n                            args_parsed = ListColumnsArgs.model_validate_json(\n                                func_args_str\n                            )\n                            result = list_columns(\n                                reasoning=args_parsed.reasoning,\n                                csv_path=args_parsed.csv_path,\n                            )\n                        elif func_name == \"SampleCSVArgs\":\n                            args_parsed = SampleCSVArgs.model_validate_json(\n                                func_args_str\n                            )\n                            result = sample_csv(\n                                reasoning=args_parsed.reasoning,\n                                csv_path=args_parsed.csv_path,\n                                row_count=args_parsed.row_count,\n                            )\n                        elif func_name == \"RunTestPolarsCodeArgs\":\n                            args_parsed = RunTestPolarsCodeArgs.model_validate_json(\n                                
func_args_str\n                            )\n                            result = run_test_polars_code(\n                                reasoning=args_parsed.reasoning,\n                                polars_python_code=args_parsed.polars_python_code,\n                            )\n                        elif func_name == \"RunFinalPolarsCodeArgs\":\n                            args_parsed = RunFinalPolarsCodeArgs.model_validate_json(\n                                func_args_str\n                            )\n                            result = run_final_polars_code(\n                                reasoning=args_parsed.reasoning,\n                                polars_python_code=args_parsed.polars_python_code,\n                            )\n                            break_loop = True\n                        else:\n                            raise Exception(f\"Unknown tool call: {func_name}\")\n\n                        console.print(\n                            f\"[blue]Function Call Result:[/blue] {func_name}(...) 
 ->\\n{result}\"\n                        )\n\n                        # Append the function call result into our messages as a tool response\n                        messages.append(\n                            {\n                                \"role\": \"tool\",\n                                \"tool_call_id\": tool_call.id,\n                                \"content\": json.dumps({\"result\": str(result)}),\n                            }\n                        )\n\n                    except Exception as e:\n                        error_msg = f\"Error validating arguments or executing {func_name}: {e}\"\n                        console.print(f\"[red]{error_msg}[/red]\")\n                        messages.append(\n                            {\n                                \"role\": \"tool\",\n                                \"tool_call_id\": tool_call.id,\n                                \"content\": json.dumps({\"error\": error_msg}),\n                            }\n                        )\n                        continue\n                else:\n                    raise Exception(\n                        \"No function call in this response - should never happen\"\n                    )\n\n        except Exception as e:\n            console.print(f\"[red]Error in agent loop: {str(e)}[/red]\")\n            raise e\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "sfa_scrapper_agent_openai_v2.py",
    "content": "# /// script\n# dependencies = [\n#   \"openai>=1.63.0\",\n#   \"rich>=13.7.0\",\n#   \"pydantic>=2.0.0\",\n#   \"firecrawl-py>=0.1.0\",\n#   \"python-dotenv>=1.0.0\",\n# ]\n# ///\n\n\"\"\"\n    Example Usage:\n        uv run sfa_scrapper_agent_openai_v2.py -u \"https://example.com\" -p \"Scrap and format each sentence as a separate line in a markdown list\" -o \"example.md\"\n\n        uv run sfa_scrapper_agent_openai_v2.py \\\n            --url https://agenticengineer.com/principled-ai-coding \\\n            --prompt \"What are the names and descriptions of each lesson?\" \\\n            --output-file-path paic-lessons.md \\\n            -c 10\n\"\"\"\n\nimport os\nimport sys\nimport json\nimport argparse\nfrom typing import List\nfrom rich.console import Console\nfrom rich.panel import Panel\nimport openai\nfrom pydantic import BaseModel, Field\nfrom openai import pydantic_function_tool\nfrom firecrawl import FirecrawlApp\nfrom dotenv import load_dotenv\n\n\n# Load environment variables\nload_dotenv()\n\n# Initialize rich console\nconsole = Console()\n\n# Initialize Firecrawl\nFIRECRAWL_API_KEY = os.getenv(\"FIRECRAWL_API_KEY\")\nif not FIRECRAWL_API_KEY:\n    console.print(\n        \"[red]Error: FIRECRAWL_API_KEY not found in environment variables[/red]\"\n    )\n    sys.exit(1)\n\nfirecrawl_app = FirecrawlApp(api_key=FIRECRAWL_API_KEY)\n\n# Initialize OpenAI client\nclient = openai.OpenAI()\n\n\n# Create our list of function tools from our pydantic models\nclass ScrapeUrlArgs(BaseModel):\n    reasoning: str = Field(\n        ..., description=\"Explanation for why we're scraping this URL\"\n    )\n    url: str = Field(..., description=\"The URL to scrape\")\n    output_file_path: str = Field(..., description=\"Path to save the scraped content\")\n\n\nclass ReadLocalFileArgs(BaseModel):\n    reasoning: str = Field(\n        ..., description=\"Explanation for why we're reading this file\"\n    )\n    file_path: str = Field(..., description=\"Path 
of the file to read\")\n\n\nclass UpdateLocalFileArgs(BaseModel):\n    reasoning: str = Field(\n        ..., description=\"Explanation for why we're updating this file\"\n    )\n    file_path: str = Field(..., description=\"Path of the file to update\")\n    content: str = Field(..., description=\"New content to write to the file\")\n\n\nclass CompleteTaskArgs(BaseModel):\n    reasoning: str = Field(..., description=\"Explanation of why the task is complete\")\n\n\n# Create tools list\ntools = [\n    pydantic_function_tool(ScrapeUrlArgs),\n    pydantic_function_tool(ReadLocalFileArgs),\n    pydantic_function_tool(UpdateLocalFileArgs),\n    pydantic_function_tool(CompleteTaskArgs),\n]\n\nAGENT_PROMPT = \"\"\"<purpose>\n    You are a world-class web scraping and content filtering expert.\n    Your goal is to scrape web content and filter it according to the user's needs.\n</purpose>\n\n<instructions>\n    <instruction>Run scrape_url, then read_local_file, then update_local_file as many times as needed to satisfy the user's prompt, then complete_task when the user's prompt is fully satisfied.</instruction>\n    <instruction>When processing content, extract exactly what the user asked for - no more, no less.</instruction>\n    <instruction>When saving processed content, use proper markdown formatting.</instruction>\n    <instruction>Use tools available in 'tools' section.</instruction>\n</instructions>\n\n<tools>\n    <tool>\n        <name>scrape_url</name>\n        <description>Scrapes content from a URL and saves it to a file</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Why we need to scrape this URL</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>url</name>\n                <type>string</type>\n                <description>The URL to scrape</description>\n                
<required>true</required>\n            </parameter>\n            <parameter>\n                <name>output_file_path</name>\n                <type>string</type>\n                <description>Where to save the scraped content</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n    \n    <tool>\n        <name>read_local_file</name>\n        <description>Reads content from a local file</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Why we need to read this file</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>file_path</name>\n                <type>string</type>\n                <description>Path of file to read</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n    \n    <tool>\n        <name>update_local_file</name>\n        <description>Updates content in a local file</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Why we need to update this file</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>file_path</name>\n                <type>string</type>\n                <description>Path of file to update</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>content</name>\n                <type>string</type>\n                <description>New content to write to the file</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n    \n    <tool>\n        <name>complete_task</name>\n        <description>Signals that the task is complete</description>\n       
 <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Why the task is now complete</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n</tools>\n\n<user-prompt>\n    {{user_prompt}}\n</user-prompt>\n\n<url>\n    {{url}}\n</url>\n\n<output-file-path>\n    {{output_file_path}}\n</output-file-path>\n\"\"\"\n\n\ndef log_function_call(function_name: str, function_args: dict):\n    \"\"\"Log a function call in a rich panel.\"\"\"\n    args_str = \", \".join(f\"{k}={repr(v)}\" for k, v in function_args.items())\n    console.print(\n        Panel(\n            f\"{function_name}({args_str})\",\n            title=\"[blue]Function Call[/blue]\",\n            border_style=\"blue\",\n        )\n    )\n\n\ndef log_function_result(function_name: str, result: str):\n    \"\"\"Log a function result in a rich panel.\"\"\"\n    console.print(\n        Panel(\n            str(result),\n            title=f\"[green]{function_name} Result[/green]\",\n            border_style=\"green\",\n        )\n    )\n\n\ndef log_error(error_msg: str):\n    \"\"\"Log an error in a rich panel.\"\"\"\n    console.print(Panel(str(error_msg), title=\"[red]Error[/red]\", border_style=\"red\"))\n\n\ndef scrape_url(reasoning: str, url: str, output_file_path: str) -> str:\n    \"\"\"Scrapes content from a URL and saves it to a file.\"\"\"\n    log_function_call(\n        \"scrape_url\",\n        {\"reasoning\": reasoning, \"url\": url, \"output_file_path\": output_file_path},\n    )\n\n    try:\n        response = firecrawl_app.scrape_url(\n            url=url,\n            params={\n                \"formats\": [\"markdown\"],\n            },\n        )\n\n        if response.get(\"markdown\"):\n            content = response[\"markdown\"]\n            with open(output_file_path, \"w\") as f:\n                f.write(content)\n            
log_function_result(\n                \"scrape_url\", f\"Successfully scraped {len(content)} characters\"\n            )\n            return content\n        else:\n            error = response.get(\"error\", \"Unknown error\")\n            log_error(f\"Error scraping URL: {error}\")\n            return \"\"\n    except Exception as e:\n        log_error(f\"Error scraping URL: {str(e)}\")\n        return \"\"\n\n\ndef read_local_file(reasoning: str, file_path: str) -> str:\n    \"\"\"Reads content from a local file.\n\n    Args:\n        reasoning: Explanation for why we're reading this file\n        file_path: Path of the file to read\n\n    Returns:\n        String containing the file contents\n    \"\"\"\n    log_function_call(\n        \"read_local_file\", {\"reasoning\": reasoning, \"file_path\": file_path}\n    )\n\n    try:\n        console.log(\n            f\"[blue]Reading File[/blue] - File: {file_path} - Reasoning: {reasoning}\"\n        )\n        with open(file_path, \"r\") as f:\n            return f.read()\n    except Exception as e:\n        console.log(f\"[red]Error reading file: {str(e)}[/red]\")\n        return \"\"\n\n\ndef update_local_file(reasoning: str, file_path: str, content: str) -> str:\n    \"\"\"Updates content in a local file.\n\n    Args:\n        reasoning: Explanation for why we're updating this file\n        file_path: Path of the file to update\n        content: New content to write to the file\n\n    Returns:\n        String indicating success or failure\n    \"\"\"\n    log_function_call(\n        \"update_local_file\",\n        {\n            \"reasoning\": reasoning,\n            \"file_path\": file_path,\n            \"content\": f\"{len(content)} characters\",  # Don't log full content\n        },\n    )\n\n    try:\n        console.log(\n            f\"[blue]Updating File[/blue] - File: {file_path} - Reasoning: {reasoning}\"\n        )\n        with open(file_path, \"w\") as f:\n            f.write(content)\n        
log_function_result(\n            \"update_local_file\", f\"Successfully wrote {len(content)} characters\"\n        )\n        return \"File updated successfully\"\n    except Exception as e:\n        console.log(f\"[red]Error updating file: {str(e)}[/red]\")\n        return f\"Error: {str(e)}\"\n\n\ndef complete_task(reasoning: str) -> str:\n    \"\"\"Signals that the task is complete.\n\n    Args:\n        reasoning: Explanation of why the task is complete\n\n    Returns:\n        String confirmation message\n    \"\"\"\n    log_function_call(\"complete_task\", {\"reasoning\": reasoning})\n    console.log(f\"[green]Task Complete[/green] - Reasoning: {reasoning}\")\n    result = \"Task completed successfully\"\n    log_function_result(\"complete_task\", result)\n    return result\n\n\ndef main():\n    # Set up argument parser\n    parser = argparse.ArgumentParser(\n        description=\"Web scraper agent that filters content based on user query\"\n    )\n    parser.add_argument(\"--url\", \"-u\", required=True, help=\"The URL to scrape\")\n    parser.add_argument(\n        \"--output-file-path\",\n        \"-o\",\n        default=\"scraped_content.md\",\n        help=\"Path to save the scraped content\",\n    )\n    parser.add_argument(\n        \"--prompt\", \"-p\", required=True, help=\"The prompt to filter the content with\"\n    )\n    parser.add_argument(\n        \"--compute-limit\",\n        \"-c\",\n        type=int,\n        default=10,\n        help=\"Maximum number of agent loop iterations (default: 10)\",\n    )\n\n    args = parser.parse_args()\n\n    # Format the prompt with the user's arguments\n    formatted_prompt = (\n        AGENT_PROMPT.replace(\"{{user_prompt}}\", args.prompt)\n        .replace(\"{{url}}\", args.url)\n        .replace(\"{{output_file_path}}\", args.output_file_path)\n    )\n\n    # Initialize conversation with the formatted user prompt\n    messages = [\n        {\n            \"role\": 
\"user\",\n            \"content\": formatted_prompt,\n        },\n    ]\n\n    # Track number of iterations\n    iterations = 0\n    max_iterations = args.compute_limit\n    break_loop = False\n\n    while iterations < max_iterations:\n\n        if break_loop:\n            break\n\n        iterations += 1\n        try:\n            console.rule(f\"[yellow]Agent Loop {iterations}/{max_iterations}[/yellow]\")\n\n            # Get completion from OpenAI\n            completion = client.chat.completions.create(\n                model=\"o3-mini\",\n                messages=messages,\n                tools=tools,\n                tool_choice=\"auto\",\n            )\n\n            response_message = completion.choices[0].message\n\n            # Print the assistant's response\n            assistant_content = response_message.content or \"\"\n            if assistant_content:\n                console.print(Panel(assistant_content, title=\"Assistant\"))\n\n            messages.append(\n                {\n                    \"role\": \"assistant\",\n                    \"content\": assistant_content,\n                }\n            )\n\n            # Handle tool calls\n            if response_message.tool_calls:\n                # Add assistant's message to conversation\n                messages.append(\n                    {\n                        \"role\": \"assistant\",\n                        \"content\": response_message.content,\n                        \"tool_calls\": [\n                            {\n                                \"id\": tool_call.id,\n                                \"type\": tool_call.type,\n                                \"function\": {\n                                    \"name\": tool_call.function.name,\n                                    \"arguments\": tool_call.function.arguments,\n                                },\n                            }\n                            for tool_call in response_message.tool_calls\n            
            ],\n                    }\n                )\n\n                # Process each tool call\n                for tool_call in response_message.tool_calls:\n                    function_name = tool_call.function.name\n                    function_args = json.loads(tool_call.function.arguments)\n\n                    console.print(\n                        Panel(\n                            f\"Processing tool call: {function_name}({function_args})\",\n                            title=\"[yellow]Tool Call[/yellow]\",\n                            border_style=\"yellow\",\n                        )\n                    )\n\n                    # Execute the appropriate function and store result\n                    result = None\n                    try:\n                        if function_name == \"ScrapeUrlArgs\":\n                            result = scrape_url(**function_args)\n\n                        elif function_name == \"ReadLocalFileArgs\":\n                            result = read_local_file(**function_args)\n\n                        elif function_name == \"UpdateLocalFileArgs\":\n                            result = update_local_file(**function_args)\n\n                        elif function_name == \"CompleteTaskArgs\":\n                            result = complete_task(**function_args)\n                            break_loop = True\n                        else:\n                            raise ValueError(f\"Unknown function: {function_name}\")\n\n                    except Exception as e:\n                        error_msg = f\"Error executing {function_name}: {str(e)}\"\n                        console.print(Panel(error_msg, title=\"[red]Error[/red]\"))\n                        result = f\"Error executing {function_name}({function_args}): {str(e)}\"\n\n                    # Add the tool response to messages\n                    messages.append(\n                        {\n                            \"role\": \"tool\",\n                   
         \"tool_call_id\": tool_call.id,\n                            \"name\": function_name,\n                            \"content\": str(result),\n                        }\n                    )\n\n            else:\n                raise ValueError(\"No tool calls found - should not happen\")\n\n        except Exception as e:\n            log_error(f\"Error: {str(e)}\")\n            console.print(\"[yellow]Messages at error:[/yellow]\")\n\n    if iterations >= max_iterations:\n        log_error(\"Reached maximum number of iterations\")\n        raise Exception(\"Reached maximum number of iterations\")\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "sfa_sqlite_openai_v2.py",
    "content": "# /// script\n# dependencies = [\n#   \"openai>=1.63.0\",\n#   \"rich>=13.7.0\",\n#   \"pydantic>=2.0.0\",\n# ]\n# ///\n\n\nimport os\nimport sys\nimport json\nimport argparse\nimport sqlite3\nimport subprocess\nfrom typing import List\nfrom rich.console import Console\nfrom rich.panel import Panel\nimport openai\nfrom pydantic import BaseModel, Field, ValidationError\nfrom openai import pydantic_function_tool\n\n# Initialize rich console\nconsole = Console()\n\n\n# Create our list of function tools from our pydantic models\nclass ListTablesArgs(BaseModel):\n    reasoning: str = Field(\n        ..., description=\"Explanation for listing tables relative to the user request\"\n    )\n\n\nclass DescribeTableArgs(BaseModel):\n    reasoning: str = Field(..., description=\"Reason why the table schema is needed\")\n    table_name: str = Field(..., description=\"Name of the table to describe\")\n\n\nclass SampleTableArgs(BaseModel):\n    reasoning: str = Field(..., description=\"Explanation for sampling the table\")\n    table_name: str = Field(..., description=\"Name of the table to sample\")\n    row_sample_size: int = Field(\n        ..., description=\"Number of rows to sample (aim for 3-5 rows)\"\n    )\n\n\nclass RunTestSQLQuery(BaseModel):\n    reasoning: str = Field(..., description=\"Reason for testing this query\")\n    sql_query: str = Field(..., description=\"The SQL query to test\")\n\n\nclass RunFinalSQLQuery(BaseModel):\n    reasoning: str = Field(\n        ...,\n        description=\"Final explanation of how this query satisfies the user request\",\n    )\n    sql_query: str = Field(..., description=\"The validated SQL query to run\")\n\n\n# Create tools list\ntools = [\n    pydantic_function_tool(ListTablesArgs),\n    pydantic_function_tool(DescribeTableArgs),\n    pydantic_function_tool(SampleTableArgs),\n    pydantic_function_tool(RunTestSQLQuery),\n    pydantic_function_tool(RunFinalSQLQuery),\n]\n\nAGENT_PROMPT = \"\"\"<purpose>\n    You 
are a world-class expert at crafting precise SQLite SQL queries.\n    Your goal is to generate accurate queries that exactly match the user's data needs.\n</purpose>\n\n<instructions>\n    <instruction>Use the provided tools to explore the database and construct the perfect query.</instruction>\n    <instruction>Start by listing tables to understand what's available.</instruction>\n    <instruction>Describe tables to understand their schema and columns.</instruction>\n    <instruction>Sample tables to see actual data patterns.</instruction>\n    <instruction>Test queries before finalizing them.</instruction>\n    <instruction>Only call run_final_sql_query when you're confident the query is perfect.</instruction>\n    <instruction>Be thorough but efficient with tool usage.</instruction>\n    <instruction>If you find your run_test_sql_query tool call returns an error or won't satisfy the user request, try to fix the query or try a different query.</instruction>\n    <instruction>Think step by step about what information you need.</instruction>\n    <instruction>Be sure to specify every parameter for each tool call.</instruction>\n    <instruction>Every tool call should have a reasoning parameter which gives you a place to explain why you are calling the tool.</instruction>\n</instructions>\n\n<tools>\n    <tool>\n        <name>list_tables</name>\n        <description>Returns list of available tables in database</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Why we need to list tables relative to user request</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n    \n    <tool>\n        <name>describe_table</name>\n        <description>Returns schema info for specified table</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n         
       <type>string</type>\n                <description>Why we need to describe this table</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>table_name</name>\n                <type>string</type>\n                <description>Name of table to describe</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n    \n    <tool>\n        <name>sample_table</name>\n        <description>Returns sample rows from specified table, always specify row_sample_size</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Why we need to sample this table</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>table_name</name>\n                <type>string</type>\n                <description>Name of table to sample</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>row_sample_size</name>\n                <type>integer</type>\n                <description>Number of rows to sample aim for 3-5 rows</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n    \n    <tool>\n        <name>run_test_sql_query</name>\n        <description>Tests a SQL query and returns results (only visible to agent)</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Why we're testing this specific query</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>sql_query</name>\n                <type>string</type>\n                <description>The SQL 
query to test</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n    \n    <tool>\n        <name>run_final_sql_query</name>\n        <description>Runs the final validated SQL query and shows results to user</description>\n        <parameters>\n            <parameter>\n                <name>reasoning</name>\n                <type>string</type>\n                <description>Final explanation of how query satisfies user request</description>\n                <required>true</required>\n            </parameter>\n            <parameter>\n                <name>sql_query</name>\n                <type>string</type>\n                <description>The validated SQL query to run</description>\n                <required>true</required>\n            </parameter>\n        </parameters>\n    </tool>\n</tools>\n\n<user-request>\n    {{user_request}}\n</user-request>\n\"\"\"\n\n\ndef list_tables(reasoning: str) -> List[str]:\n    \"\"\"Returns a list of tables in the database.\n\n    The agent uses this to discover available tables and make informed decisions.\n\n    Args:\n        reasoning: Explanation of why we're listing tables relative to user request\n\n    Returns:\n        List of table names as strings\n    \"\"\"\n    try:\n        conn = sqlite3.connect(DB_PATH)\n        cursor = conn.cursor()\n        cursor.execute(\"SELECT name FROM sqlite_master WHERE type='table' AND name NOT LIKE 'sqlite_%';\")\n        tables = [row[0] for row in cursor.fetchall()]\n        conn.close()\n        console.log(f\"[blue]List Tables Tool[/blue] - Reasoning: {reasoning}\")\n        return tables\n    except Exception as e:\n        console.log(f\"[red]Error listing tables: {str(e)}[/red]\")\n        return []\n\n\ndef describe_table(reasoning: str, table_name: str) -> str:\n    \"\"\"Returns schema information about the specified table.\n\n    The agent uses this to understand table structure and available columns.\n\n   
 Args:\n        reasoning: Explanation of why we're describing this table\n        table_name: Name of table to describe\n\n    Returns:\n        String containing table schema information\n    \"\"\"\n    try:\n        conn = sqlite3.connect(DB_PATH)\n        cursor = conn.cursor()\n        cursor.execute(f\"PRAGMA table_info('{table_name}');\")\n        rows = cursor.fetchall()\n        conn.close()\n        output = \"\\n\".join([str(row) for row in rows])\n        console.log(f\"[blue]Describe Table Tool[/blue] - Table: {table_name} - Reasoning: {reasoning}\")\n        return output\n    except Exception as e:\n        console.log(f\"[red]Error describing table: {str(e)}[/red]\")\n        return \"\"\n\n\ndef sample_table(reasoning: str, table_name: str, row_sample_size: int) -> str:\n    \"\"\"Returns a sample of rows from the specified table.\n\n    The agent uses this to understand actual data content and patterns.\n\n    Args:\n        reasoning: Explanation of why we're sampling this table\n        table_name: Name of table to sample from\n        row_sample_size: Number of rows to sample aim for 3-5 rows\n\n    Returns:\n        String containing sample rows in readable format\n    \"\"\"\n    try:\n        conn = sqlite3.connect(DB_PATH)\n        cursor = conn.cursor()\n        cursor.execute(f\"SELECT * FROM {table_name} LIMIT {row_sample_size};\")\n        rows = cursor.fetchall()\n        conn.close()\n        output = \"\\n\".join([str(row) for row in rows])\n        console.log(\n            f\"[blue]Sample Table Tool[/blue] - Table: {table_name} - Rows: {row_sample_size} - Reasoning: {reasoning}\"\n        )\n        return output\n    except Exception as e:\n        console.log(f\"[red]Error sampling table: {str(e)}[/red]\")\n        return \"\"\n\n\ndef run_test_sql_query(reasoning: str, sql_query: str) -> str:\n    \"\"\"Executes a test SQL query and returns results.\n\n    The agent uses this to validate queries before finalizing them.\n    
Results are only shown to the agent, not the user.\n\n    Args:\n        reasoning: Explanation of why we're running this test query\n        sql_query: The SQL query to test\n\n    Returns:\n        Query results as a string\n    \"\"\"\n    try:\n        conn = sqlite3.connect(DB_PATH)\n        cursor = conn.cursor()\n        cursor.execute(sql_query)\n        rows = cursor.fetchall()\n        conn.commit()\n        conn.close()\n        output = \"\\n\".join([str(row) for row in rows])\n        console.log(f\"[blue]Test Query Tool[/blue] - Reasoning: {reasoning}\")\n        console.log(f\"[dim]Query: {sql_query}[/dim]\")\n        return output\n    except Exception as e:\n        console.log(f\"[red]Error running test query: {str(e)}[/red]\")\n        return str(e)\n\n\ndef run_final_sql_query(reasoning: str, sql_query: str) -> str:\n    \"\"\"Executes the final SQL query and returns results to user.\n\n    This is the last tool call the agent should make after validating the query.\n\n    Args:\n        reasoning: Final explanation of how this query satisfies user request\n        sql_query: The validated SQL query to run\n\n    Returns:\n        Query results as a string\n    \"\"\"\n    try:\n        conn = sqlite3.connect(DB_PATH)\n        cursor = conn.cursor()\n        cursor.execute(sql_query)\n        rows = cursor.fetchall()\n        conn.commit()\n        conn.close()\n        output = \"\\n\".join([str(row) for row in rows])\n        console.log(\n            Panel(\n                f\"[green]Final Query Tool[/green]\\nReasoning: {reasoning}\\nQuery: {sql_query}\"\n            )\n        )\n        return output\n    except Exception as e:\n        console.log(f\"[red]Error running final query: {str(e)}[/red]\")\n        return str(e)\n\n\ndef main():\n    # Set up argument parser\n    parser = argparse.ArgumentParser(description=\"SQLite Agent using OpenAI API\")\n    parser.add_argument(\n        \"-d\", \"--db\", required=True, help=\"Path to 
SQLite database file\"\n    )\n    parser.add_argument(\"-p\", \"--prompt\", required=True, help=\"The user's request\")\n    parser.add_argument(\n        \"-c\",\n        \"--compute\",\n        type=int,\n        default=10,\n        help=\"Maximum number of agent loops (default: 3)\",\n    )\n    args = parser.parse_args()\n\n    # Configure the API key\n    OPENAI_API_KEY = os.getenv(\"OPENAI_API_KEY\")\n    if not OPENAI_API_KEY:\n        console.print(\n            \"[red]Error: OPENAI_API_KEY environment variable is not set[/red]\"\n        )\n        console.print(\n            \"Please get your API key from https://platform.openai.com/api-keys\"\n        )\n        console.print(\"Then set it with: export OPENAI_API_KEY='your-api-key-here'\")\n        sys.exit(1)\n\n    openai.api_key = OPENAI_API_KEY\n\n    # Set global DB_PATH for tool functions\n    global DB_PATH\n    DB_PATH = args.db\n\n    # Create a single combined prompt based on the full template\n    completed_prompt = AGENT_PROMPT.replace(\"{{user_request}}\", args.prompt)\n    messages = [{\"role\": \"user\", \"content\": completed_prompt}]\n\n    compute_iterations = 0\n\n    # Main agent loop\n    while True:\n        console.rule(\n            f\"[yellow]Agent Loop {compute_iterations+1}/{args.compute}[/yellow]\"\n        )\n        compute_iterations += 1\n\n        if compute_iterations >= args.compute:\n            console.print(\n                \"[yellow]Warning: Reached maximum compute loops without final query[/yellow]\"\n            )\n            raise Exception(\n                f\"Maximum compute loops reached: {compute_iterations}/{args.compute}\"\n            )\n\n        try:\n            # Generate content with tool support\n            response = openai.chat.completions.create(\n                model=\"o3-mini\",\n                # model=\"gpt-4o-mini\",\n                messages=messages,\n                tools=tools,\n                tool_choice=\"required\",\n            
)\n\n            if response.choices:\n                assert len(response.choices) == 1\n                message = response.choices[0].message\n\n                if message.tool_calls and len(message.tool_calls) > 0:\n                    # Use the first tool call and extract its function details;\n                    # tool_call is therefore always bound when func_call is set.\n                    tool_call = message.tool_calls[0]\n                    func_call = tool_call.function\n                else:\n                    func_call = None\n\n                if func_call:\n                    func_name = func_call.name\n                    func_args_str = func_call.arguments\n\n                    messages.append(\n                        {\n                            \"role\": \"assistant\",\n                            \"tool_calls\": [\n                                {\n                                    \"id\": tool_call.id,\n                                    \"type\": \"function\",\n                                    \"function\": {\n                                        \"name\": func_name,\n                                        \"arguments\": func_args_str,\n                                    },\n                                }\n                            ],\n                        }\n                    )\n\n                    console.print(\n                        f\"[blue]Function Call:[/blue] {func_name}({func_args_str})\"\n                    )\n                    try:\n                        # Validate and parse arguments using the corresponding pydantic model\n                        if func_name == \"ListTablesArgs\":\n                            args_parsed = ListTablesArgs.model_validate_json(\n                                func_args_str\n                            )\n                            result = list_tables(reasoning=args_parsed.reasoning)\n                        elif func_name == \"DescribeTableArgs\":\n                            args_parsed = DescribeTableArgs.model_validate_json(\n                                
func_args_str\n                            )\n                            result = describe_table(\n                                reasoning=args_parsed.reasoning,\n                                table_name=args_parsed.table_name,\n                            )\n                        elif func_name == \"SampleTableArgs\":\n                            args_parsed = SampleTableArgs.model_validate_json(\n                                func_args_str\n                            )\n                            result = sample_table(\n                                reasoning=args_parsed.reasoning,\n                                table_name=args_parsed.table_name,\n                                row_sample_size=args_parsed.row_sample_size,\n                            )\n                        elif func_name == \"RunTestSQLQuery\":\n                            args_parsed = RunTestSQLQuery.model_validate_json(\n                                func_args_str\n                            )\n                            result = run_test_sql_query(\n                                reasoning=args_parsed.reasoning,\n                                sql_query=args_parsed.sql_query,\n                            )\n                        elif func_name == \"RunFinalSQLQuery\":\n                            args_parsed = RunFinalSQLQuery.model_validate_json(\n                                func_args_str\n                            )\n                            result = run_final_sql_query(\n                                reasoning=args_parsed.reasoning,\n                                sql_query=args_parsed.sql_query,\n                            )\n                            console.print(\"\\n[green]Final Results:[/green]\")\n                            console.print(result)\n                            return\n                        else:\n                            raise Exception(f\"Unknown tool call: {func_name}\")\n\n                        console.print(\n      
                      f\"[blue]Function Call Result:[/blue] {func_name}(...) ->\\n{result}\"\n                        )\n\n                        # Append the function call result into our messages as a tool response\n                        messages.append(\n                            {\n                                \"role\": \"tool\",\n                                \"tool_call_id\": tool_call.id,\n                                \"content\": json.dumps({\"result\": str(result)}),\n                            }\n                        )\n\n                    except Exception as e:\n                        error_msg = f\"Argument validation failed for {func_name}: {e}\"\n                        console.print(f\"[red]{error_msg}[/red]\")\n                        messages.append(\n                            {\n                                \"role\": \"tool\",\n                                \"tool_call_id\": tool_call.id,\n                                \"content\": json.dumps({\"error\": error_msg}),\n                            }\n                        )\n                        continue\n                else:\n                    raise Exception(\n                        \"No function call in this response - should never happen\"\n                    )\n\n        except Exception as e:\n            console.print(f\"[red]Error in agent loop: {str(e)}[/red]\")\n            raise e\n\n\nif __name__ == \"__main__\":\n    main()\n"
  }
]