[
  {
    "path": ".github/workflows/main.yml",
    "content": "name: Ruby\n\non:\n  push:\n    branches:\n      - main\n\n  pull_request:\n\njobs:\n  build:\n    runs-on: ubuntu-latest\n    name: Ruby ${{ matrix.ruby }}\n    strategy:\n      matrix:\n        ruby:\n          - '3.2.2'\n\n    steps:\n    - uses: actions/checkout@v3\n    - name: Set up Ruby\n      uses: ruby/setup-ruby@v1\n      with:\n        ruby-version: ${{ matrix.ruby }}\n        bundler-cache: true\n    - name: Run the default task\n      run: bundle exec rake ci\n      env:\n        OR_ACCESS_TOKEN: ${{ secrets.OR_ACCESS_TOKEN }}\n        OAI_ACCESS_TOKEN: ${{ secrets.OAI_ACCESS_TOKEN }}\n"
  },
  {
    "path": ".gitignore",
    "content": "/.bundle/\n/.yardoc\n/_yardoc/\n/coverage/\n/doc/\n/pkg/\n/spec/reports/\n/tmp/\n\n# rspec failure tracking\n.rspec_status\n*.gem\n.env\n.envrc\n.claude/settings.local.json\n"
  },
  {
    "path": ".rspec",
    "content": "--format documentation\n--color\n--require spec_helper\n"
  },
  {
    "path": ".rubocop.yml",
    "content": "AllCops:\n  NewCops: enable\n  SuggestExtensions: false\n  TargetRubyVersion: 3.2.1\n\nGemspec/RequireMFA:\n  Enabled: false\n\nStyle/OpenStructUse:\n  Enabled: false\n\nStyle/StringLiterals:\n  Enabled: true\n  EnforcedStyle: double_quotes\n\nStyle/StringLiteralsInInterpolation:\n  Enabled: true\n  EnforcedStyle: double_quotes\n\nStyle/IfUnlessModifier:\n  Enabled: false\nLayout/LineLength:\n  Enabled: false\n\nMetrics/BlockLength:\n  Enabled: false\n\nMetrics/MethodLength:\n  Enabled: false\n\nMetrics/ModuleLength:\n  Enabled: false\n\nMetrics/AbcSize:\n  Enabled: false\n\nMetrics/CyclomaticComplexity:\n  Enabled: false\n\nMetrics/PerceivedComplexity:\n  Enabled: false\n\nMetrics/ParameterLists:\n  Enabled: false\n\nMetrics/ClassLength:\n  Enabled: false\n\nStyle/FrozenStringLiteralComment:\n  Enabled: false\n\nStyle/MultilineBlockChain:\n  Enabled: false\n"
  },
  {
    "path": ".ruby-version",
    "content": "3.4.2\n"
  },
  {
    "path": "CHANGELOG.md",
    "content": "## [Unreleased]\n\n## [2.0.3] - 2026-04-30\n\n### Fixed\n- `NoMethodError: undefined method 'strip' for nil` in `Raix::ChatCompletion` when an LLM (notably Gemini under certain stop conditions) returns a final assistant message with `\"content\": null`. Three call sites in `lib/raix/chat_completion.rb` now use `content.to_s.strip` so a nil response coerces to `\"\"` instead of raising.\n\n## [2.0.2] - 2026-03-27\n\n### Fixed\n- Ensure gem files are world-readable (644) for Docker deployments where gems are installed as root but the app runs as a non-root user\n- Added gemspec-level safety net that normalizes file permissions at build time\n\n## [2.0.1] - 2026-03-20\n\n### Changed\n- Replaced `require_relative` with Zeitwerk autoloading (thanks @seuros, PR #47)\n\n## [2.0.0] - 2025-12-17\n\n### Breaking Changes\n- **Migrated from OpenRouter/OpenAI gems to RubyLLM** - Raix now uses [RubyLLM](https://github.com/crmne/ruby_llm) as its unified backend for all LLM providers. This provides better multi-provider support and a more consistent API.\n- **Configuration changes** - API keys are now configured through RubyLLM's configuration system instead of separate client instances.\n- **Removed direct client dependencies** - `openrouter` and `ruby-openai` gems are no longer direct dependencies; RubyLLM handles provider connections.\n\n### Added\n- **`before_completion` hook** - New hook system for intercepting and modifying chat completion requests before they're sent to the AI provider.\n  - Configure at global, class, or instance levels\n  - Hooks receive a `CompletionContext` with access to messages, params, and the chat completion instance\n  - Messages are mutable for content filtering, PII redaction, adding system prompts, etc.\n  - Params can be modified for dynamic model selection, A/B testing, and more\n  - Supports any callable object (Proc, Lambda, or object responding to `#call`)\n  - Use cases: database-backed configuration, logging, PII 
redaction, content filtering, cost tracking\n- **`FunctionToolAdapter`** - New adapter for converting Raix function declarations to RubyLLM tool format\n- **`TranscriptAdapter`** - New adapter for bridging Raix's abbreviated message format with standard OpenAI format\n\n### Changed\n- Chat completions now use RubyLLM's unified API for all providers (OpenAI, Anthropic, Google, etc.)\n- Improved provider detection based on model name patterns\n- Streamlined internal architecture with dedicated adapters\n\n### Migration Guide\nUpdate your configuration from:\n```ruby\nRaix.configure do |config|\n  config.openrouter_client = OpenRouter::Client.new(access_token: \"...\")\n  config.openai_client = OpenAI::Client.new(access_token: \"...\")\nend\n```\n\nTo:\n```ruby\nRubyLLM.configure do |config|\n  config.openrouter_api_key = ENV[\"OPENROUTER_API_KEY\"]\n  config.openai_api_key = ENV[\"OPENAI_API_KEY\"]\n  # Also supports: anthropic_api_key, gemini_api_key\nend\n```\n\n## [1.0.2] - 2025-07-16\n### Added\n- Added method to check for API client availability in Configuration\n\n### Changed\n- Updated ruby-openai dependency to ~> 8.1\n\n### Fixed\n- Fixed gemspec file reference\n\n## [1.0.1] - 2025-06-04\n### Fixed\n- Fixed PromptDeclarations module namespace - now properly namespaced under Raix\n- Removed Rails.logger dependencies from PromptDeclarations for non-Rails environments\n- Fixed documentation example showing incorrect `openai: true` usage (should be model string)\n- Added comprehensive tests for PromptDeclarations module\n\n### Changed\n- Improved error handling in PromptDeclarations to catch StandardError instead of generic rescue\n\n## [1.0.0] - 2025-06-04\n### Breaking Changes\n- **Deprecated `loop` parameter in ChatCompletion** - The system now automatically continues conversations after tool calls until the AI provides a text response. 
The `loop` parameter shows a deprecation warning but still works for backwards compatibility.\n- **Tool-based completions now return strings instead of arrays** - When functions are called, the final response is a string containing the AI's text response, not an array of function results.\n- **`stop_looping!` renamed to `stop_tool_calls_and_respond!`** - Better reflects the new automatic continuation behavior.\n\n### Added\n- **Automatic conversation continuation** - Chat completions automatically continue after tool execution without needing the `loop` parameter.\n- **`max_tool_calls` parameter** - Controls the maximum number of tool invocations to prevent infinite loops (default: 25).\n- **Configuration for `max_tool_calls`** - Added `max_tool_calls` to the Configuration class with sensible defaults.\n\n### Changed\n- ChatCompletion handles continuation after tool function calls automatically.\n- Improved CI/CD workflow to use `bundle exec rake ci` for consistent testing.\n\n### Fixed\n- Resolved conflict between `loop` attribute and Ruby's `Kernel.loop` method (fixes #11).\n- Fixed various RuboCop warnings using keyword argument forwarding.\n- Improved error handling with proper warning messages instead of puts.\n\n## [0.9.2] - 2025-06-03\n### Fixed\n- Fixed OpenAI chat completion compatibility\n- Fixed SHA256 hexdigest generation for MCP tool names\n- Added ostruct as explicit dependency to prevent warnings\n- Fixed rubocop lint error for alphabetized gemspec dependencies\n- Updated default OpenRouter model\n\n## [0.9.1] - 2025-05-30\n### Added\n- **MCP Type Coercion** - Automatic type conversion for MCP tool arguments based on JSON schema\n  - Supports integer, number, boolean, array, and object types\n  - Handles nested objects and arrays of objects with proper coercion\n  - Gracefully handles invalid JSON and type mismatches\n- **MCP Image Support** - MCP tools can now return image content as structured JSON\n\n### Fixed\n- Fixed handling of nil values in 
MCP argument coercion\n\n## [0.9.0] - 2025-05-30\n### Added\n- **MCP (Model Context Protocol) Support**\n  - New `stdio_mcp` method for stdio-based MCP servers\n  - Refactored existing MCP code into `SseClient` and `StdioClient`\n  - Split top-level `mcp` method into `sse_mcp` and `stdio_mcp`\n  - Added authentication support for MCP servers\n- **Class-Level Configuration**\n  - Moved configuration to separate `Configuration` class\n  - Added fallback mechanism for configuration options\n  - Cleaner metaprogramming implementation\n\n### Fixed\n- Fixed method signature of functions added via MCP\n\n## [0.8.6] - 2025-05-19\n- add `required` and `optional` flags for parameters in `function` declarations\n\n## [0.8.5] - 2025-05-08\n- renamed `tools` argument to `chat_completion` to `available_tools` to prevent shadowing the existing tool attribute (potentially breaking change to enhancement introduced in 0.8.1)\n\n## [0.8.4] - 2025-05-07\n- Calls strip instead of squish on response of chat_completion in order to not clobber linebreaks\n\n## [0.8.3] - 2025-04-30\n- Adds optional ActiveSupport Cache parameter to `dispatch_tool_function` for caching tool calls\n\n## [0.8.2] - 2025-04-29\n- Extracts function call dispatch into a public `dispatch_tool_function` that can be overridden in subclasses\n- Uses `public_send` instead of `send` for better security and explicitness\n\n## [0.8.1] - 2025-04-24\nAdded ability to filter tool functions (or disable completely) when calling `chat_completion`. 
Thanks to @parruda for the contribution.\n\n## [0.8.0] - 2025-04-23\n### Added\n* **MCP integration (Experimental)** — new `Raix::MCP` concern and `mcp` DSL for declaring remote MCP servers.\n  * Automatically fetches `tools/list`, registers remote tools as OpenAI‑compatible function schemas, and defines proxy methods that forward `tools/call`.\n  * `ChatCompletion#tools` now returns remote MCP tools alongside local `function` declarations.\n\n### Changed\n* `lib/raix.rb` now requires `raix/mcp` so the concern is auto‑loaded.\n\n### Fixed\n* Internal transcript handling spec expectations updated.\n\n### Specs\n* Added `spec/raix/mcp_spec.rb` with comprehensive stubs for tools discovery & call flow.\n\n## [0.7.3] - 2025-04-23\n- commit function call and result to transcript in one operation for thread safety\n\n## [0.7.2] - 2025-04-19\n- adds support for `messages` parameter in `chat_completion` to override the transcript\n- fixes potential race conditions in parallel chat completion calls by duplicating transcript\n\n## [0.7.1] - 2025-04-10\n- adds support for JSON response format with automatic parsing\n- improves error handling for JSON parsing failures\n\n## [0.7] - 2025-04-02\n- adds support for `until` condition in `PromptDeclarations` to control prompt looping\n- adds support for `if` and `unless` conditions in `PromptDeclarations` to control prompt execution\n- adds support for `success` callback in `PromptDeclarations` to handle prompt responses\n- adds support for `stream` handler in `PromptDeclarations` to control response streaming\n- adds support for `params` in `PromptDeclarations` to customize API parameters per prompt\n- adds support for `system` directive in `PromptDeclarations` to set per-prompt system messages\n- adds support for `call` in `PromptDeclarations` to delegate to callable prompt objects\n- adds support for `text` in `PromptDeclarations` to specify prompt content via lambda, string, or symbol\n- adds support for `raw` parameter in 
`PromptDeclarations` to return raw API responses\n- adds support for `openai` parameter in `PromptDeclarations` to use OpenAI directly\n- adds support for `prompt` parameter in `PromptDeclarations` to specify initial prompt\n- adds support for `last_response` in `PromptDeclarations` to access previous prompt responses\n- adds support for `current_prompt` in `PromptDeclarations` to access current prompt context\n- adds support for `MAX_LOOP_COUNT` in `PromptDeclarations` to prevent infinite loops\n- adds support for `execute_ai_request` in `PromptDeclarations` to handle API calls\n- adds support for `chat_completion_from_superclass` in `PromptDeclarations` to handle superclass calls\n- adds support for `model`, `temperature`, and `max_tokens` in `PromptDeclarations` to access prompt parameters\n- Make automatic JSON parsing available to non-OpenAI providers that don't support the response_format parameter by scanning for json XML tags\n\n## [0.6.0] - 2024-11-12\n- adds `save_response` option to `chat_completion` to control transcript updates\n- fixes potential race conditions in transcript handling\n\n## [0.4.8] - 2024-11-12\n- adds documentation for `Predicate` maybe handler\n- logs to stdout when a response is unhandled by `Predicate`\n\n## [0.4.7] - 2024-11-12\n- adds missing requires `raix/predicate` so that it can be used in a Rails app automatically\n- adds missing openai support for `Predicate`\n\n## [0.4.5] - 2024-11-11\n- adds support for `ResponseFormat`\n- added some missing requires to support String#squish\n\n## [0.4.4] - 2024-11-11\n- adds support for multiple tool calls in a single response\n\n## [0.4.3] - 2024-11-11\n- adds support for `Predicate` module\n\n## [0.4.2] - 2024-11-05\n- adds support for [Predicted Outputs](https://platform.openai.com/docs/guides/latency-optimization#use-predicted-outputs) with the `prediction` option for OpenAI\n\n## [0.4.0] - 2024-10-18\n- adds support for Anthropic-style prompt caching\n- defaults to 
`max_completion_tokens` when using OpenAI directly\n\n## [0.3.2] - 2024-06-29\n- adds support for streaming\n\n## [0.2.0] - tbd\n- adds `ChatCompletion` module\n- adds `PromptDeclarations` module\n- adds `FunctionDispatch` module\n\n## [0.1.0] - 2024-04-03\n- Initial release, placeholder gem\n"
  },
  {
    "path": "CLAUDE.md",
    "content": "This is a Ruby gem called Raix. Its purpose is to facilitate chat completion style AI text generation using LLMs provided by OpenAI and OpenRouter.\n\n- When running all tests just do `bundle exec rake` since it automatically runs the linter with autocorrect\n- Documentation: Include method/class documentation with examples when appropriate\n- Add runtime dependencies to `raix.gemspec`.\n- Add development dependencies to `Gemfile`.\n- Don't ever test private methods directly. Specs should test behavior, not implementation.\n- Never add test-specific code embedded in production code\n- **Do not use require_relative**\n- Require statements should always be in alphabetical order\n- Always leave a blank line after module includes and before the rest of the class\n- Do not decide unilaterally to leave code for the sake of \"backwards compatibility\"... always run those decisions by me first.\n- Don't ever commit and push changes unless directly told to do so"
  },
  {
    "path": "CODE_OF_CONDUCT.md",
    "content": "# Contributor Covenant Code of Conduct\n\n## Our Pledge\n\nWe as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.\n\nWe pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.\n\n## Our Standards\n\nExamples of behavior that contributes to a positive environment for our community include:\n\n* Demonstrating empathy and kindness toward other people\n* Being respectful of differing opinions, viewpoints, and experiences\n* Giving and gracefully accepting constructive feedback\n* Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience\n* Focusing on what is best not just for us as individuals, but for the overall community\n\nExamples of unacceptable behavior include:\n\n* The use of sexualized language or imagery, and sexual attention or\n  advances of any kind\n* Trolling, insulting or derogatory comments, and personal or political attacks\n* Public or private harassment\n* Publishing others' private information, such as a physical or email\n  address, without their explicit permission\n* Other conduct which could reasonably be considered inappropriate in a\n  professional setting\n\n## Enforcement Responsibilities\n\nCommunity leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.\n\nCommunity leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other 
contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.\n\n## Scope\n\nThis Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.\n\n## Enforcement\n\nInstances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at obiefernandez@gmail.com. All complaints will be reviewed and investigated promptly and fairly.\n\nAll community leaders are obligated to respect the privacy and security of the reporter of any incident.\n\n## Enforcement Guidelines\n\nCommunity leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:\n\n### 1. Correction\n\n**Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.\n\n**Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.\n\n### 2. Warning\n\n**Community Impact**: A violation through a single incident or series of actions.\n\n**Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.\n\n### 3. 
Temporary Ban\n\n**Community Impact**: A serious violation of community standards, including sustained inappropriate behavior.\n\n**Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.\n\n### 4. Permanent Ban\n\n**Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior,  harassment of an individual, or aggression toward or disparagement of classes of individuals.\n\n**Consequence**: A permanent ban from any sort of public interaction within the community.\n\n## Attribution\n\nThis Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.0,\navailable at https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.\n\nCommunity Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity).\n\n[homepage]: https://www.contributor-covenant.org\n\nFor answers to common questions about this code of conduct, see the FAQ at\nhttps://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.\n"
  },
  {
    "path": "Gemfile",
    "content": "# frozen_string_literal: true\n\nsource \"https://rubygems.org\"\n\n# Specify your gem's dependencies in raix.gemspec\ngemspec\n\ngroup :development do\n  gem \"dotenv\", \">= 2\"\n  gem \"guard\"\n  gem \"guard-rspec\"\n  gem \"pry\", \">= 0.14\"\n  gem \"rake\", \"~> 13.0\"\n  gem \"rspec\", \"~> 3.0\"\n  gem \"rubocop\", \"~> 1.21\"\n  gem \"solargraph-rails\", \"~> 0.2.0.pre\"\n  gem \"sorbet\"\n  gem \"tapioca\", require: false\nend\n\ngroup :test do\n  gem \"vcr\"\n  gem \"webmock\"\nend\n"
  },
  {
    "path": "Guardfile",
    "content": "# frozen_string_literal: true\n\n# A sample Guardfile\n# More info at https://github.com/guard/guard#readme\n\n## Uncomment and set this to only include directories you want to watch\n# directories %w(app lib config test spec features) \\\n#  .select{|d| Dir.exist?(d) ? d : UI.warning(\"Directory #{d} does not exist\")}\n\n## Note: if you are using the `directories` clause above and you are not\n## watching the project directory ('.'), then you will want to move\n## the Guardfile to a watched dir and symlink it back, e.g.\n#\n#  $ mkdir config\n#  $ mv Guardfile config/\n#  $ ln -s config/Guardfile .\n#\n# and, you'll have to watch \"config/Guardfile\" instead of \"Guardfile\"\n\n# NOTE: The cmd option is now required due to the increasing number of ways\n#       rspec may be run, below are examples of the most common uses.\n#  * bundler: 'bundle exec rspec'\n#  * bundler binstubs: 'bin/rspec'\n#  * spring: 'bin/rspec' (This will use spring if running and you have\n#                          installed the spring binstubs per the docs)\n#  * zeus: 'zeus rspec' (requires the server to be started separately)\n#  * 'just' rspec: 'rspec'\n\nguard :rspec, cmd: \"bundle exec rspec\" do\n  require \"guard/rspec/dsl\"\n  dsl = Guard::RSpec::Dsl.new(self)\n\n  # Feel free to open issues for suggestions and improvements\n\n  # RSpec files\n  rspec = dsl.rspec\n  watch(rspec.spec_helper) { rspec.spec_dir }\n  watch(rspec.spec_support) { rspec.spec_dir }\n  watch(rspec.spec_files)\n\n  # Ruby files\n  ruby = dsl.ruby\n  dsl.watch_spec_files_for(ruby.lib_files)\n\n  # Rails files\n  rails = dsl.rails(view_extensions: %w[erb haml slim])\n  dsl.watch_spec_files_for(rails.app_files)\n  dsl.watch_spec_files_for(rails.views)\n\n  watch(rails.controllers) do |m|\n    [\n      rspec.spec.call(\"routing/#{m[1]}_routing\"),\n      rspec.spec.call(\"controllers/#{m[1]}_controller\"),\n      rspec.spec.call(\"acceptance/#{m[1]}\")\n    ]\n  end\n\n  # Rails config changes\n 
 watch(rails.spec_helper)     { rspec.spec_dir }\n  watch(rails.routes)          { \"#{rspec.spec_dir}/routing\" }\n  watch(rails.app_controller)  { \"#{rspec.spec_dir}/controllers\" }\n\n  # Capybara features specs\n  watch(rails.view_dirs)     { |m| rspec.spec.call(\"features/#{m[1]}\") }\n  watch(rails.layouts)       { |m| rspec.spec.call(\"features/#{m[1]}\") }\n\n  # Turnip features and steps\n  watch(%r{^spec/acceptance/(.+)\\.feature$})\n  watch(%r{^spec/acceptance/steps/(.+)_steps\\.rb$}) do |m|\n    Dir[File.join(\"**/#{m[1]}.feature\")][0] || \"spec/acceptance\"\n  end\nend\n"
  },
  {
    "path": "LICENSE.txt",
    "content": "The MIT License (MIT)\n\nCopyright (c) 2024 Obie Fernandez\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\nTHE SOFTWARE.\n"
  },
  {
    "path": "README.llm",
    "content": "# Raix (Ruby AI eXtensions)\nRaix adds LLM-based AI functionality to Ruby classes. It supports OpenAI or OpenRouter as providers and can work in non-Rails apps if you include ActiveSupport.\n\n## Chat Completion\nYou must include `Raix::ChatCompletion`. It gives you a `transcript` array for messages and a `chat_completion` method that sends them to the AI.\n\n```ruby\nclass MeaningOfLife\n  include Raix::ChatCompletion\nend\n\nai = MeaningOfLife.new\nai.transcript << { user: \"What is the meaning of life?\" }\nputs ai.chat_completion\n```\n\nYou can add messages using either `{ user: \"...\" }` or `{ role: \"user\", content: \"...\" }`.\n\n### Predicted Outputs\nPass `prediction` to support [Predicted Outputs](https://platform.openai.com/docs/guides/latency-optimization#use-predicted-outputs):\n```ruby\nai.chat_completion(openai: \"gpt-4o\", params: { prediction: \"...\" })\n```\n\n### Prompt Caching\nWhen using Anthropic models, you can specify `cache_at`. Messages above that size get sent as ephemeral multipart segments.\n```ruby\nai.chat_completion(params: { cache_at: 1000 })\n```\n\n## Function Dispatch\nInclude `Raix::FunctionDispatch` to declare functions AI can call in a chat loop. Use `chat_completion(loop: true)` so the AI can call functions and generate more messages until it outputs a final text response.\n\n```ruby\nclass WhatIsTheWeather\n  include Raix::ChatCompletion\n  include Raix::FunctionDispatch\n\n  function :check_weather, \"Check the weather for a location\", location: { type: \"string\" } do |args|\n    \"The weather in #{args[:location]} is hot and sunny\"\n  end\nend\n```\n\nIf the AI calls multiple functions at once, Raix handles them in sequence and returns an array of results. Call `stop_tool_calls_and_respond!` inside a function to end the loop.\n\n## Prompt Declarations\nInclude `Raix::PromptDeclarations` to define a chain of prompts in order. 
Each prompt can be inline text or a callable class that also includes `ChatCompletion`.\n\n```ruby\nclass PromptSubscriber\n  include Raix::ChatCompletion\n  include Raix::PromptDeclarations\n\n  prompt call: FetchUrlCheck\n  prompt call: MemoryScan\n  prompt text: -> { user_message.content }\n\n  def message_created(user_message)\n    chat_completion(openai: \"gpt-4o\")\n  end\nend\n```\n\n## Predicate Module\nInclude `Raix::Predicate` to handle yes/no/maybe questions. Define blocks with the `yes?`, `no?`, and `maybe?` methods.\n\n```ruby\nclass Question\n  include Raix::Predicate\n\n  yes? { |explanation| puts \"Affirmative: #{explanation}\" }\n  no?  { |explanation| puts \"Negative: #{explanation}\" }\nend\n```\n\n## ResponseFormat (Experimental)\nUse `Raix::ResponseFormat` to enforce JSON schemas for structured responses.\n\n```ruby\nclass StructuredResponse\n  include Raix::ChatCompletion\n\n  def analyze_person(name)\n    format = Raix::ResponseFormat.new(\"PersonInfo\", {\n      name: { type: \"string\" },\n      age: { type: \"integer\" }\n    })\n    chat_completion(response_format: format)\n  end\nend\n```\n\n## Installation\nAdd `gem \"raix\"` to your Gemfile or run `gem install raix`. Configure your provider API keys through RubyLLM in an initializer:\n\n```ruby\n# config/initializers/raix.rb\nRubyLLM.configure do |config|\n  config.openrouter_api_key = ENV[\"OPENROUTER_API_KEY\"]\n  config.openai_api_key = ENV[\"OPENAI_API_KEY\"]\nend\n```\n\nMake sure you have valid API tokens for your chosen provider.\n"
  },
  {
    "path": "README.md",
    "content": "# Ruby AI eXtensions\n\n## What's Raix\n\nRaix (pronounced \"ray\" because the x is silent) is a library that gives you everything you need to add discrete large-language model (LLM) AI components to your Ruby applications. Raix consists of proven code that has been extracted from [Olympia](https://olympia.chat), the world's leading virtual AI team platform, and probably one of the biggest and most successful AI chat projects written completely in Ruby.\n\nUnderstanding how to use discrete AI components in otherwise normal code is key to productively leveraging Raix, and the subject of a book written by Raix's author Obie Fernandez, titled [Patterns of Application Development Using AI](https://leanpub.com/patterns-of-application-development-using-ai). You can easily support the ongoing development of this project by buying the book at Leanpub.\n\nRaix 2.0 is powered by [RubyLLM](https://github.com/crmne/ruby_llm), giving you unified access to OpenAI, Anthropic, Google Gemini, and dozens of other providers through OpenRouter. Note that you can use Raix to add AI capabilities to non-Rails applications as long as you include ActiveSupport as a dependency.\n\n### Chat Completions\n\nRaix consists of three modules that can be mixed in to Ruby classes to give them AI powers. The first (and mandatory) module is `ChatCompletion`, which provides `transcript` and `chat_completion` methods.\n\n```ruby\nclass MeaningOfLife\n  include Raix::ChatCompletion\nend\n\n>> ai = MeaningOfLife.new\n>> ai.transcript << { user: \"What is the meaning of life?\" }\n>> ai.chat_completion\n\n=> \"The question of the meaning of life is one of the most profound and enduring inquiries in philosophy, religion, and science.\n    Different perspectives offer various answers...\"\n```\n\nBy default, Raix will automatically add the AI's response to the transcript. This behavior can be controlled with the `save_response` parameter, which defaults to `true`. 
You may want to set it to `false` when making multiple chat completion calls during the lifecycle of a single object (whether sequentially or in parallel) and want to manage the transcript updates yourself:\n\n```ruby\n>> ai.chat_completion(save_response: false)\n```\n\n#### Transcript Format\n\nThe transcript accepts both abbreviated and standard OpenAI message hash formats. The abbreviated format, suitable for system, assistant, and user messages, is simply a mapping of `role => content`, as shown in the example above.\n\n```ruby\ntranscript << { user: \"What is the meaning of life?\" }\n```\n\nAs mentioned, Raix also understands standard OpenAI message hashes. The previous example could be written as:\n\n```ruby\ntranscript << { role: \"user\", content: \"What is the meaning of life?\" }\n```\n\nOne of the advantages of OpenRouter, and the reason this library uses it by default, is that it handles mapping message formats from the OpenAI standard to whatever other model you want to use (Anthropic, Cohere, etc.).\n\nNote that it's possible to override the current object's transcript by passing a `messages` array to `chat_completion`. This allows multiple threads to share a single conversation context in parallel, deferring when they write their responses back to the transcript.\n\n```ruby\nchat_completion(openai: \"gpt-4.1-nano\", messages: [{ user: \"What is the meaning of life?\" }])\n```\n\n### Predicted Outputs\n\nRaix supports [Predicted Outputs](https://platform.openai.com/docs/guides/latency-optimization#use-predicted-outputs) with the `prediction` parameter for OpenAI.\n\n```ruby\n>> ai.chat_completion(openai: \"gpt-4o\", params: { prediction: })\n```\n\n### Prompt Caching\n\nRaix supports [Anthropic-style prompt caching](https://openrouter.ai/docs/prompt-caching#anthropic-claude) when using Anthropic's Claude family of models. You can specify a `cache_at` parameter when doing a chat completion. 
If the character count for the content of a particular message is longer than the cache_at parameter, it will be sent to Anthropic as a multipart message with a cache control \"breakpoint\" set to \"ephemeral\".\n\nNote that there is a limit of four breakpoints, and the cache will expire within five minutes. Therefore, it is recommended to reserve the cache breakpoints for large bodies of text, such as character cards, CSV data, RAG data, book chapters, etc. Raix does not enforce a limit on the number of breakpoints, which means that you might get an error if you try to cache too many messages.\n\n```ruby\n>> my_class.chat_completion(params: { cache_at: 1000 })\n=> {\n  \"messages\": [\n    {\n      \"role\": \"system\",\n      \"content\": [\n        {\n          \"type\": \"text\",\n          \"text\": \"HUGE TEXT BODY LONGER THAN 1000 CHARACTERS\",\n          \"cache_control\": {\n            \"type\": \"ephemeral\"\n          }\n        }\n      ]\n    },\n```\n\n### JSON Mode\n\nRaix supports JSON mode for chat completions, which ensures that the AI model's response is valid JSON. This is particularly useful when you need structured data from the model.\n\nWhen using JSON mode with OpenAI models, Raix will automatically set the `response_format` parameter on requests accordingly, and attempt to parse the entire response body as JSON.\nWhen using JSON mode with other models (e.g. Anthropic) that don't support `response_format`, Raix will look for JSON content inside of &lt;json&gt; XML tags in the response, before\nfalling back to parsing the entire response body. Make sure you tell the AI to reply with JSON inside of XML tags.\n\n```ruby\n>> my_class.chat_completion(json: true)\n=> { \"key\": \"value\" }\n```\n\nWhen using JSON mode with non-OpenAI providers, Raix automatically sets the `require_parameters` flag to ensure proper JSON formatting. 
You can also combine JSON mode with other parameters:\n\n```ruby\n>> my_class.chat_completion(json: true, openai: \"gpt-4o\")\n=> { \"key\": \"value\" }\n```\n\n### before_completion Hook\n\nThe `before_completion` hook lets you intercept and modify chat completion requests before they're sent to the AI provider. This is useful for dynamic parameter resolution, logging, content filtering, PII redaction, and more.\n\n#### Configuration Levels\n\nHooks can be configured at three levels, with later levels overriding earlier ones:\n\n```ruby\n# Global level - applies to all chat completions\nRaix.configure do |config|\n  config.before_completion = ->(context) {\n    # Return a hash of params to merge, or modify context.messages directly\n    { temperature: 0.7 }\n  }\nend\n\n# Class level - applies to all instances of a class\nclass MyAssistant\n  include Raix::ChatCompletion\n\n  configure do |config|\n    config.before_completion = ->(context) { { model: \"gpt-4o\" } }\n  end\nend\n\n# Instance level - applies to a single instance\nassistant = MyAssistant.new\nassistant.before_completion = ->(context) { { max_tokens: 500 } }\n```\n\nWhen hooks exist at multiple levels, they're called in order (global → class → instance), with returned params merged together. 
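The merge semantics can be sketched in a few lines of plain Ruby (illustrative only; the hooks and values below are made up):

```ruby
# Hypothetical hooks at each level, each returning params to merge.
global_hook   = ->(_ctx) { { temperature: 0.7, model: "gpt-4o-mini" } }
class_hook    = ->(_ctx) { { model: "gpt-4o" } }       # overrides the global model
instance_hook = ->(_ctx) { { max_tokens: 500 } }       # adds a new param

context = nil # stand-in for the CompletionContext object
merged = [global_hook, class_hook, instance_hook]
         .map { |hook| hook.call(context) }
         .reduce({}, :merge)

merged
# => {:temperature=>0.7, :model=>"gpt-4o", :max_tokens=>500}
```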
Later hooks override earlier ones for the same parameter.\n\n#### The CompletionContext Object\n\nHooks receive a `CompletionContext` object with access to:\n\n```ruby\ncontext.chat_completion       # The ChatCompletion instance\ncontext.messages              # Array of messages (mutable, in OpenAI format)\ncontext.params                # Hash of params (mutable)\ncontext.transcript            # The instance's transcript\ncontext.current_model         # Currently configured model\ncontext.chat_completion_class # The class including ChatCompletion\ncontext.configuration         # The instance's configuration\n```\n\n#### Use Cases\n\n**Dynamic model selection from database:**\n\n```ruby\nRaix.configure do |config|\n  config.before_completion = ->(context) {\n    settings = TenantSettings.find_by(tenant: Current.tenant)\n    {\n      model: settings.preferred_model,\n      temperature: settings.temperature,\n      max_tokens: settings.max_tokens\n    }\n  }\nend\n```\n\n**PII redaction:**\n\n```ruby\nclass SecureAssistant\n  include Raix::ChatCompletion\n\n  configure do |config|\n    config.before_completion = ->(context) {\n      context.messages.each do |msg|\n        next unless msg[:content].is_a?(String)\n\n        # Redact SSN patterns\n        msg[:content] = msg[:content].gsub(/\\d{3}-\\d{2}-\\d{4}/, "[SSN REDACTED]")\n        # Redact email addresses\n        msg[:content] = msg[:content].gsub(/[\\w.-]+@[\\w.-]+\\.\\w+/, "[EMAIL REDACTED]")\n      end\n      {} # Return empty hash if not modifying params\n    }\n  end\nend\n```\n\n**Request logging:**\n\n```ruby\nRaix.configure do |config|\n  config.before_completion = ->(context) {\n    Rails.logger.info({\n      event: "chat_completion_request",\n      model: context.current_model,\n      message_count: context.messages.length,\n      params: context.params.except(:messages)\n    }.to_json)\n    {} # Return empty hash, just logging\n  }\nend\n```\n\n**Adding system prompts:**\n\n```ruby\nassistant.before_completion = ->(context) {\n  
context.messages.unshift({\n    role: \"system\",\n    content: \"Always be helpful and respectful.\"\n  })\n  {}\n}\n```\n\n**A/B testing models:**\n\n```ruby\nRaix.configure do |config|\n  config.before_completion = ->(context) {\n    if Flipper.enabled?(:new_model, Current.user)\n      { model: \"gpt-4o\" }\n    else\n      { model: \"gpt-4o-mini\" }\n    end\n  }\nend\n```\n\nHooks can also be any object that responds to `#call`:\n\n```ruby\nclass CostTracker\n  def call(context)\n    # Track estimated cost based on message length\n    estimated_tokens = context.messages.sum { |m| m[:content].to_s.length / 4 }\n    StatsD.gauge(\"ai.estimated_input_tokens\", estimated_tokens)\n    {}\n  end\nend\n\nRaix.configure do |config|\n  config.before_completion = CostTracker.new\nend\n```\n\n### Use of Tools/Functions\n\nThe second (optional) module that you can add to your Ruby classes after `ChatCompletion` is `FunctionDispatch`. It lets you declare and implement functions to be called at the AI's discretion in a declarative, Rails-like \"DSL\" fashion.\n\nWhen the AI responds with tool function calls instead of a text message, Raix automatically:\n1. Executes the requested tool functions\n2. Adds the function results to the conversation transcript  \n3. Sends the updated transcript back to the AI for another completion\n4. Repeats this process until the AI responds with a regular text message\n\nThis automatic continuation ensures that tool calls are seamlessly integrated into the conversation flow. The AI can use tool results to formulate its final response to the user. 
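The continuation loop can be pictured roughly like this (a simplified plain-Ruby sketch with a fake model standing in for the API, not Raix's actual implementation):

```ruby
# Simplified sketch of the tool-call continuation loop.
def run_completion(transcript, max_tool_calls: 25)
  tool_calls = 0
  loop do
    response = fake_model(transcript)
    # Return text when the model answers in text, or when the limit is hit.
    return response[:content] unless response[:tool_call] && tool_calls < max_tool_calls

    tool_calls += 1
    result = dispatch(response[:tool_call])  # 1. execute the requested tool
    transcript << { function: result }       # 2. record the result in the transcript
    # 3. loop continues: the updated transcript is resent to the model
  end
end

# Fake model: requests a tool once, then answers after seeing the result.
def fake_model(transcript)
  if transcript.any? { |m| m.key?(:function) }
    { content: "The weather is hot and sunny" }
  else
    { tool_call: :check_weather }
  end
end

def dispatch(name)
  "result of #{name}"
end

run_completion([{ user: "What is the weather in Zipolite?" }])
# => "The weather is hot and sunny"
```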
You can limit the number of tool calls using the `max_tool_calls` parameter to prevent excessive function invocations.\n\n```ruby\nclass WhatIsTheWeather\n  include Raix::ChatCompletion\n  include Raix::FunctionDispatch\n\n  function :check_weather,\n           \"Check the weather for a location\",\n           location: { type: \"string\", required: true } do |arguments|\n    \"The weather in #{arguments[:location]} is hot and sunny\"\n  end\nend\n\nRSpec.describe WhatIsTheWeather do\n  subject { described_class.new }\n\n  it \"provides a text response after automatically calling weather function\" do\n    subject.transcript << { user: \"What is the weather in Zipolite, Oaxaca?\" }\n    response = subject.chat_completion(openai: \"gpt-4o\")\n    expect(response).to include(\"hot and sunny\")\n  end\nend\n```\n\nParameters are optional by default. Mark them as required with `required: true` or explicitly optional with `optional: true`.\n\nNote that for security reasons, dispatching functions only works with functions implemented using `Raix::FunctionDispatch#function` or directly on the class.\n\n#### Tool Filtering\n\nYou can control which tool functions are exposed to the AI per request using the `available_tools` parameter of the `chat_completion` method:\n\n```ruby\nclass WeatherAndTime\n  include Raix::ChatCompletion\n  include Raix::FunctionDispatch\n\n  function :check_weather, \"Check the weather for a location\", location: { type: \"string\" } do |arguments|\n    \"The weather in #{arguments[:location]} is sunny\"\n  end\n\n  function :get_time, \"Get the current time\" do |_arguments|\n    \"The time is 12:00 PM\"\n  end\nend\n\nweather = WeatherAndTime.new\n\n# Don't pass any tools to the LLM\nweather.chat_completion(available_tools: false)\n\n# Only pass specific tools to the LLM\nweather.chat_completion(available_tools: [:check_weather])\n\n# Pass all declared tools (default behavior)\nweather.chat_completion\n```\n\nThe `available_tools` parameter 
accepts three types of values:\n- `nil`: All declared tool functions are passed (default behavior)\n- `false`: No tools are passed to the LLM\n- An array of symbols: Only the specified tools are passed (raises `Raix::UndeclaredToolError` if a specified tool function is not declared)\n\n#### Multiple Tool Calls\n\nSome AI models (like GPT-4) can make multiple tool calls in a single response. When this happens, Raix will automatically handle all the function calls sequentially.\nIf you need to capture the arguments to the function calls, do so in the block passed to `function`. The response from `chat_completion` is always the final text\nresponse from the assistant, and is not affected by function calls.\n\n```ruby\nclass MultipleToolExample\n  include Raix::ChatCompletion\n  include Raix::FunctionDispatch\n\n  attr_reader :invocations\n\n  function :first_tool do |arguments|\n    @invocations << :first\n    \"Result from first tool\"\n  end\n\n  function :second_tool do |arguments|\n    @invocations << :second\n    \"Result from second tool\"\n  end\n\n  def initialize\n    @invocations = []\n  end\nend\n\nexample = MultipleToolExample.new\nexample.transcript << { user: \"Please use both tools\" }\nexample.chat_completion(openai: \"gpt-4o\")\n# => \"I used both tools, as requested\"\n\nexample.invocations\n# => [:first, :second]\n```\n\n#### Customizing Function Dispatch\n\nYou can customize how function calls are handled by overriding the `dispatch_tool_function` in your class. 
This is useful if you need to add logging, caching, error handling, or other custom behavior around function calls.\n\n```ruby\nclass CustomDispatchExample\n  include Raix::ChatCompletion\n  include Raix::FunctionDispatch\n\n  function :example_tool do |arguments|\n    \"Result from example tool\"\n  end\n\n  def dispatch_tool_function(function_name, arguments)\n    puts \"Calling #{function_name} with #{arguments}\"\n    result = super\n    puts \"Result: #{result}\"\n    result\n  end\nend\n```\n\n#### Function Call Caching\n\nYou can use ActiveSupport's Cache to cache function call results, which can be particularly useful for expensive operations or external API calls that don't need to be repeated frequently.\n\n```ruby\nclass CachedFunctionExample\n  include Raix::ChatCompletion\n  include Raix::FunctionDispatch\n\n  function :expensive_operation do |arguments|\n    \"Result of expensive operation with #{arguments}\"\n  end\n\n  # Override dispatch_tool_function to enable caching for all functions\n  def dispatch_tool_function(function_name, arguments)\n    # Pass the cache to the superclass implementation\n    super(function_name, arguments, cache: Rails.cache)\n  end\nend\n```\n\nThe caching mechanism works by:\n1. Passing the cache object through `dispatch_tool_function` to the function implementation\n2. Using the function name and arguments as cache keys\n3. 
Automatically fetching from cache when available or executing the function when not cached\n\nThis is particularly useful for:\n- Expensive database operations\n- External API calls\n- Resource-intensive computations\n- Functions with deterministic outputs for the same inputs\n\n#### Limiting Tool Calls\n\nYou can control the maximum number of tool calls before the AI must provide a text response:\n\n```ruby\n# Limit to 5 tool calls (default is 25)\nresponse = my_ai.chat_completion(max_tool_calls: 5)\n\n# Configure globally\nRaix.configure do |config|\n  config.max_tool_calls = 10\nend\n```\n\n#### Manually Stopping Tool Calls\n\nFor AI components that process tasks without end-user interaction, you can use `stop_tool_calls_and_respond!` within a function to force the AI to provide a text response without making additional tool calls.\n\n```ruby\nclass OrderProcessor\n  include Raix::ChatCompletion\n  include Raix::FunctionDispatch\n\n  SYSTEM_DIRECTIVE = \"You are an order processor, tasked with order validation, inventory check,\n                      payment processing, and shipping.\"\n\n  attr_accessor :order\n\n  def initialize(order)\n    self.order = order\n    transcript << { system: SYSTEM_DIRECTIVE }\n    transcript << { user: order.to_json }\n  end\n\n  def perform\n    # will automatically continue after tool calls until finished_processing is called\n    chat_completion\n  end\n\n\n  # implementation of functions that can be called by the AI\n  # entirely at its discretion, depending on the needs of the order.\n  # The return value of each `perform` method will be added to the\n  # transcript of the conversation as a function result.\n\n  function :validate_order do\n    OrderValidationWorker.perform(@order)\n  end\n\n  function :check_inventory do\n    InventoryCheckWorker.perform(@order)\n  end\n\n  function :process_payment do\n    PaymentProcessingWorker.perform(@order)\n  end\n\n  function :schedule_shipping do\n    
ShippingSchedulerWorker.perform(@order)\n  end\n\n  function :send_confirmation do\n    OrderConfirmationWorker.perform(@order)\n  end\n\n  function :finished_processing do\n    order.update!(transcript:, processed_at: Time.current)\n    stop_tool_calls_and_respond!\n    "Order processing completed successfully"\n  end\nend\n```\n\n### Prompt Declarations\n\nThe third (also optional) module that you can mix in along with `ChatCompletion` is `PromptDeclarations`. It provides the ability to declare a "Prompt Chain" (a series of prompts to be called in sequence), and also features a declarative, Rails-like "DSL" of its own. Prompts can be defined inline or delegate to callable prompt objects, which themselves implement `ChatCompletion`.\n\nThe following example is a rough excerpt of the main "Conversation Loop" in Olympia, which pre-processes user messages to check for\nthe presence of URLs and to scan memory before submitting a prompt to GPT-4. Note that prompt declarations are executed in the order\nthat they are declared. The `FetchUrlCheck` callable prompt class is included for instructional purposes. Note that it is passed\nan instance of the calling object in its initializer as its `context`. The passing of context means that you can assemble\ncomposite prompt structures of arbitrary depth.\n\n```ruby\nclass PromptSubscriber\n  include Raix::ChatCompletion\n  include Raix::PromptDeclarations\n\n  attr_accessor :conversation, :bot_message, :user_message\n\n  # many other declarations omitted...\n\n  prompt call: FetchUrlCheck\n\n  prompt call: MemoryScan\n\n  prompt text: -> { user_message.content }, stream: -> { ReplyStream.new(self) }, until: -> { bot_message.complete? 
}\n\n  def initialize(conversation)\n    self.conversation = conversation\n  end\n\n  def message_created(user_message)\n    self.user_message = user_message\n    self.bot_message = conversation.bot_message!(responding_to: user_message)\n\n    chat_completion(loop: true, openai: "gpt-4o")\n  end\n\n  ...\nend\n\nclass FetchUrlCheck\n  include ChatCompletion\n  include FunctionDispatch\n\n  REGEX = %r{\\b(?:http(s)?://)?(?:www\\.)?[a-zA-Z0-9-]+(\\.[a-zA-Z]{2,})+(/[^\\s]*)?\\b}\n\n  attr_accessor :context, :conversation\n\n  delegate :user_message, to: :context\n  delegate :content, to: :user_message\n\n  def initialize(context)\n    self.context = context\n    self.conversation = context.conversation\n    self.model = "anthropic/claude-3-haiku"\n  end\n\n  def call\n    return unless content&.match?(REGEX)\n\n    transcript << { system: "Call the `fetch` function if the user mentions a website, otherwise say nil" }\n    transcript << { user: content }\n\n    chat_completion # TODO: consider looping to fetch more than one URL per user message\n  end\n\n  function :fetch, "Gets the plain text contents of a web page", url: { type: "string" } do |arguments|\n    Tools::FetchUrl.fetch(arguments[:url]).tap do |result|\n      parent = conversation.function_call!("fetch_url", arguments, parent: user_message)\n      conversation.function_result!("fetch_url", result, parent:)\n    end\n  end\nend\n```\n\nNotably, Olympia does not use the `FunctionDispatch` module in its primary conversation loop because it does not have a fixed set of tools that are included in every single prompt. Functions are made available dynamically based on a number of factors, including the user's plan tier and the capabilities of the assistant with whom the user is conversing.\n\nStreaming of the AI's response to the end user is handled by the `ReplyStream` class, passed to the final prompt declaration as its `stream` parameter. 
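The essential shape of such a stream handler is a callable that receives chunks of the response as they arrive. A minimal sketch (the interface shown here is hypothetical and simplified, not Olympia's `ReplyStream`):

```ruby
# Hypothetical minimal stream handler: a callable that receives text
# chunks as they arrive and accumulates them.
class SimpleReplyStream
  attr_reader :buffer

  def initialize
    @buffer = +""
  end

  def call(chunk)
    @buffer << chunk
    # A real implementation would also broadcast the chunk to the UI here.
  end
end

stream = SimpleReplyStream.new
["The answer ", "is ", "42."].each { |chunk| stream.call(chunk) }
stream.buffer
# => "The answer is 42."
```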
[Patterns of Application Development Using AI](https://leanpub.com/patterns-of-application-development-using-ai) devotes a whole chapter to describing how to write your own `ReplyStream` class.\n\n#### Additional PromptDeclarations Options\n\nThe `PromptDeclarations` module supports several additional options that can be used to customize prompt behavior:\n\n```ruby\nclass CustomPromptExample\n  include Raix::ChatCompletion\n  include Raix::PromptDeclarations\n\n  # Basic prompt with text\n  prompt text: \"Process this input\"\n\n  # Prompt with system directive\n  prompt system: \"You are a helpful assistant\",\n        text: \"Analyze this text\"\n\n  # Prompt with conditions\n  prompt text: \"Process this input\",\n        if: -> { some_condition },\n        unless: -> { some_other_condition }\n\n  # Prompt with success callback\n  prompt text: \"Process this input\",\n        success: ->(response) { handle_response(response) }\n\n  # Prompt with custom parameters\n  prompt text: \"Process with custom settings\",\n        params: { temperature: 0.7, max_tokens: 1000 }\n\n  # Prompt with until condition for looping\n  prompt text: \"Keep processing until complete\",\n        until: -> { processing_complete? 
}\n\n  # Prompt with raw response\n  prompt text: \"Get raw response\",\n        raw: true\n\n  # Prompt using OpenAI directly\n  prompt text: \"Use OpenAI\",\n        openai: \"gpt-4o\"\nend\n```\n\nThe available options include:\n\n- `system`: Set a system directive for the prompt\n- `if`/`unless`: Control prompt execution with conditions\n- `success`: Handle prompt responses with callbacks\n- `params`: Customize API parameters per prompt\n- `until`: Control prompt looping\n- `raw`: Get raw API responses\n- `openai`: Use OpenAI directly\n- `stream`: Control response streaming\n- `call`: Delegate to callable prompt objects\n\nYou can also access the current prompt context and previous responses:\n\n```ruby\nclass ContextAwarePrompt\n  include Raix::ChatCompletion\n  include Raix::PromptDeclarations\n\n  def process_with_context\n    # Access current prompt\n    current_prompt.params[:temperature]\n\n    # Access previous response\n    last_response\n\n    chat_completion\n  end\nend\n```\n\n## Predicate Module\n\nThe `Raix::Predicate` module provides a simple way to handle yes/no/maybe questions using AI chat completion. It allows you to define blocks that handle different types of responses with their explanations. It is one of the concrete patterns described in the \"Discrete Components\" chapter of [Patterns of Application Development Using AI](https://leanpub.com/patterns-of-application-development-using-ai).\n\n### Usage\n\nInclude the `Raix::Predicate` module in your class and define handlers using block syntax:\n\n```ruby\nclass Question\n  include Raix::Predicate\n\n  yes? do |explanation|\n    puts \"Affirmative: #{explanation}\"\n  end\n\n  no? do |explanation|\n    puts \"Negative: #{explanation}\"\n  end\n\n  maybe? 
do |explanation|\n    puts \"Uncertain: #{explanation}\"\n  end\nend\n\nquestion = Question.new\nquestion.ask(\"Is Ruby a programming language?\")\n# => Affirmative: Yes, Ruby is a dynamic, object-oriented programming language...\n```\n\n### Features\n\n- Define handlers for yes, no, and/or maybe responses using the declarative class level block syntax.\n- At least one handler (yes, no, or maybe) must be defined.\n- Handlers receive the full AI response including explanation as an argument.\n- Responses always start with \"Yes, \", \"No, \", or \"Maybe, \" followed by an explanation.\n- Make sure to ask a question that can be answered with yes, no, or maybe (otherwise the results are indeterminate).\n\n### Example with Single Handler\n\nYou can define only the handlers you need:\n\n```ruby\nclass SimpleQuestion\n  include Raix::Predicate\n\n  # Only handle positive responses\n  yes? do |explanation|\n    puts \"✅ #{explanation}\"\n  end\nend\n\nquestion = SimpleQuestion.new\nquestion.ask(\"Is 2 + 2 = 4?\")\n# => ✅ Yes, 2 + 2 equals 4, this is a fundamental mathematical fact.\n```\n\n### Error Handling\n\nThe module will raise a RuntimeError if you attempt to ask a question without defining any response handlers:\n\n```ruby\nclass InvalidQuestion\n  include Raix::Predicate\nend\n\nquestion = InvalidQuestion.new\nquestion.ask(\"Any question\")\n# => RuntimeError: Please define a yes and/or no block\n```\n\n## Model Context Protocol (Experimental)\n\nThe `Raix::MCP` module provides integration with the Model Context Protocol, allowing you to connect your Raix-powered application to remote MCP servers. 
This feature is currently **experimental**.\n\n### Usage\n\nInclude the `Raix::MCP` module in your class and declare MCP servers using the `mcp` DSL:\n\n```ruby\nclass McpConsumer\n  include Raix::ChatCompletion\n  include Raix::FunctionDispatch\n  include Raix::MCP\n\n  mcp \"https://your-mcp-server.example.com/sse\"\nend\n```\n\n### Features\n\n- Automatically fetches available tools from the remote MCP server using `tools/list`\n- Registers remote tools as OpenAI-compatible function schemas\n- Defines proxy methods that forward requests to the remote server via `tools/call`\n- Seamlessly integrates with the existing `FunctionDispatch` workflow\n- Handles transcript recording to maintain consistent conversation history\n\n### Filtering Tools\n\nYou can filter which remote tools to include:\n\n```ruby\nclass FilteredMcpConsumer\n  include Raix::ChatCompletion\n  include Raix::FunctionDispatch\n  include Raix::MCP\n\n  # Only include specific tools\n  mcp \"https://server.example.com/sse\", only: [:tool_one, :tool_two]\n\n  # Or exclude specific tools\n  mcp \"https://server.example.com/sse\", except: [:tool_to_exclude]\nend\n```\n\n## Response Format (Experimental)\n\nThe `ResponseFormat` class provides a way to declare a JSON schema for the response format of an AI chat completion. 
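The essence of the transformation (shown in full under "Generated Schema" below) can be sketched in plain Ruby; `simple_response_format` is an illustrative stand-in for a flat property hash, not the actual implementation:

```ruby
# Illustrative sketch: build an OpenAI-style json_schema response format
# from a flat property hash, marking every top-level property as required
# and forbidding additional properties.
def simple_response_format(name, properties)
  {
    type: "json_schema",
    json_schema: {
      name: name,
      schema: {
        type: "object",
        properties: properties,
        required: properties.keys,
        additionalProperties: false
      },
      strict: true
    }
  }
end

format = simple_response_format("PersonInfo", {
  name: { type: "string" },
  age: { type: "integer" }
})
format[:json_schema][:schema][:required]
# => [:name, :age]
```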
It's particularly useful when you need structured responses from AI models, ensuring the output conforms to your application's requirements.\n\n### Features\n\n- Converts Ruby hashes and arrays into JSON schema format\n- Supports nested structures and arrays\n- Enforces strict validation with `additionalProperties: false`\n- Automatically marks all top-level properties as required\n- Handles both simple type definitions and complex nested schemas\n\n### Basic Usage\n\n```ruby\n# Simple schema with basic types\nformat = Raix::ResponseFormat.new(\"PersonInfo\", {\n  name: { type: \"string\" },\n  age: { type: \"integer\" }\n})\n\n# Use in chat completion\nmy_ai.chat_completion(response_format: format)\n```\n\n### Complex Structures\n\n```ruby\n# Nested structure with arrays\nformat = Raix::ResponseFormat.new(\"CompanyInfo\", {\n  company: {\n    name: { type: \"string\" },\n    employees: [\n      {\n        name: { type: \"string\" },\n        role: { type: \"string\" },\n        skills: [\"string\"]\n      }\n    ],\n    locations: [\"string\"]\n  }\n})\n```\n\n### Generated Schema\n\nThe ResponseFormat class generates a schema that follows this structure:\n\n```json\n{\n  \"type\": \"json_schema\",\n  \"json_schema\": {\n    \"name\": \"SchemaName\",\n    \"schema\": {\n      \"type\": \"object\",\n      \"properties\": {\n        \"property1\": { \"type\": \"string\" },\n        \"property2\": { \"type\": \"integer\" }\n      },\n      \"required\": [\"property1\", \"property2\"],\n      \"additionalProperties\": false\n    },\n    \"strict\": true\n  }\n}\n```\n\n### Using with Chat Completion\n\nWhen used with chat completion, the AI model will format its response according to your schema:\n\n```ruby\nclass StructuredResponse\n  include Raix::ChatCompletion\n\n  def analyze_person(name)\n    format = Raix::ResponseFormat.new(\"PersonAnalysis\", {\n      full_name: { type: \"string\" },\n      age_estimate: { type: \"integer\" },\n      personality_traits: 
[\"string\"]\n    })\n\n    transcript << { user: \"Analyze the person named #{name}\" }\n    chat_completion(params: { response_format: format })\n  end\nend\n\nresponse = StructuredResponse.new.analyze_person(\"Alice\")\n# Returns a hash matching the defined schema\n```\n\n## Installation\n\nInstall the gem and add to the application's Gemfile by executing:\n\n    $ bundle add raix\n\nIf bundler is not being used to manage dependencies, install the gem by executing:\n\n    $ gem install raix\n\n### Configuration\n\nRaix 2.0 uses [RubyLLM](https://github.com/crmne/ruby_llm) as its backend for LLM provider connections. Configure your API keys through RubyLLM:\n\n```ruby\n# config/initializers/raix.rb\nRubyLLM.configure do |config|\n  config.openrouter_api_key = ENV[\"OPENROUTER_API_KEY\"]\n  config.openai_api_key = ENV[\"OPENAI_API_KEY\"]\n  # Optional: configure other providers\n  # config.anthropic_api_key = ENV[\"ANTHROPIC_API_KEY\"]\n  # config.gemini_api_key = ENV[\"GEMINI_API_KEY\"]\nend\n```\n\nRaix will automatically use the appropriate provider based on the model name:\n- Models starting with `gpt-` or `o1` use OpenAI directly\n- All other models route through OpenRouter\n\n### Global vs Class-Level Configuration\n\nYou can configure Raix options globally or at the class level:\n\n```ruby\n# Global configuration\nRaix.configure do |config|\n  config.temperature = 0.7\n  config.max_tokens = 1000\n  config.model = \"gpt-4o\"\n  config.max_tool_calls = 25\nend\n\n# Class-level configuration (overrides global)\nclass MyAssistant\n  include Raix::ChatCompletion\n\n  configure do |config|\n    config.model = \"anthropic/claude-3-opus\"\n    config.temperature = 0.5\n  end\nend\n```\n\n### Upgrading from Raix 1.x\n\nIf upgrading from Raix 1.x, update your configuration from:\n\n```ruby\n# Old 1.x configuration\nRaix.configure do |config|\n  config.openrouter_client = OpenRouter::Client.new(access_token: \"...\")\n  config.openai_client = 
OpenAI::Client.new(access_token: \"...\")\nend\n```\n\nTo the new RubyLLM-based configuration shown above.\n\n## Development\n\nAfter checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.\n\nSpecs require `OR_ACCESS_TOKEN` and `OAI_ACCESS_TOKEN` environment variables, for access to OpenRouter and OpenAI, respectively. You can add those keys to a local unversioned `.env` file and they will be picked up by the `dotenv` gem.\n\nTo install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and the created tag, and push the `.gem` file to [rubygems.org](https://rubygems.org).\n\n## Contributing\n\nBug reports and pull requests are welcome on GitHub at https://github.com/OlympiaAI/raix. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [code of conduct](https://github.com/OlympiaAI/raix/blob/main/CODE_OF_CONDUCT.md).\n\n## License\n\nThe gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).\n\n## Code of Conduct\n\nEveryone interacting in the Raix project's codebases, issue trackers, chat rooms and mailing lists is expected to follow the [code of conduct](https://github.com/OlympiaAI/raix/blob/main/CODE_OF_CONDUCT.md).\n"
  },
  {
    "path": "Rakefile",
    "content": "# frozen_string_literal: true\n\nrequire \"bundler/gem_tasks\"\nrequire \"rspec/core/rake_task\"\n\nRSpec::Core::RakeTask.new(:spec)\n\nrequire \"rubocop/rake_task\"\n\nRuboCop::RakeTask.new(:rubocop_ci)\n\ntask ci: %i[spec rubocop_ci]\n\nRuboCop::RakeTask.new(:rubocop) do |task|\n  task.options = [\"--autocorrect\"]\nend\n\ntask default: %i[spec rubocop]\n"
  },
  {
    "path": "bin/console",
    "content": "#!/usr/bin/env ruby\n# frozen_string_literal: true\n\nrequire \"bundler/setup\"\nrequire \"raix\"\n\n# You can add fixtures and/or initialization code here to make experimenting\n# with your gem easier. You can also use a different console, if you like.\n\nrequire \"irb\"\nIRB.start(__FILE__)\n"
  },
  {
    "path": "bin/setup",
    "content": "#!/usr/bin/env bash\nset -euo pipefail\nIFS=$'\\n\\t'\nset -vx\n\nbundle install\n\n# Do any other automated setup that you need to do here\n"
  },
  {
    "path": "lib/raix/chat_completion.rb",
    "content": "# frozen_string_literal: true\n\nrequire \"active_support/concern\"\nrequire \"active_support/core_ext/object/blank\"\nrequire \"active_support/core_ext/string/filters\"\nrequire \"active_support/core_ext/hash/indifferent_access\"\nrequire \"ruby_llm\"\n\nmodule Raix\n  class UndeclaredToolError < StandardError; end\n\n  # The `ChatCompletion` module is a Rails concern that provides a way to interact\n  # with the OpenRouter Chat Completion API via its client. The module includes a few\n  # methods that allow you to build a transcript of messages and then send them to\n  # the API for completion. The API will return a response that you can use however\n  # you see fit.\n  #\n  # When the AI responds with tool function calls instead of a text message, this\n  # module automatically:\n  # 1. Executes the requested tool functions\n  # 2. Adds the function results to the conversation transcript\n  # 3. Sends the updated transcript back to the AI for another completion\n  # 4. Repeats this process until the AI responds with a regular text message\n  #\n  # This automatic continuation ensures that tool calls are seamlessly integrated\n  # into the conversation flow. The AI can use tool results to formulate its final\n  # response to the user. You can limit the number of tool calls using the\n  # `max_tool_calls` parameter to prevent excessive function invocations.\n  #\n  # Tool functions must be defined on the class that includes this module. The\n  # `FunctionDispatch` module provides a Rails-like DSL for declaring these\n  # functions at the class level, which is cleaner than implementing them as\n  # instance methods.\n  #\n  # Note that some AI models can make multiple tool function calls in a single\n  # response. 
When that happens, the module executes all requested functions\n  # before continuing the conversation.\n  module ChatCompletion\n    extend ActiveSupport::Concern\n\n    attr_accessor :before_completion, :cache_at, :frequency_penalty, :logit_bias, :logprobs, :loop, :min_p, :model,\n                  :presence_penalty, :prediction, :repetition_penalty, :response_format, :stream, :temperature,\n                  :max_completion_tokens, :max_tokens, :seed, :stop, :top_a, :top_k, :top_logprobs, :top_p, :tools,\n                  :available_tools, :tool_choice, :provider, :max_tool_calls, :stop_tool_calls_and_respond\n\n    class_methods do\n      # Returns the current configuration of this class. Falls back to global configuration for unset values.\n      def configuration\n        @configuration ||= Configuration.new(fallback: Raix.configuration)\n      end\n\n      # Lets you configure the class-level configuration using a block.\n      def configure\n        yield(configuration)\n      end\n    end\n\n    # Instance-level access to the class-level configuration.\n    def configuration\n      self.class.configuration\n    end\n\n    # This method performs chat completion based on the provided transcript and parameters.\n    #\n    # @param params [Hash] The parameters for chat completion.\n    # @option loop [Boolean] :loop (false) DEPRECATED - The system now automatically continues after tool calls.\n    # @option params [Boolean] :json (false) Whether to parse the response as a JSON object. 
Will search for <json> tags in the response first, then fall back to the default JSON parsing of the entire response.\n    # @option params [String] :openai (nil) If non-nil, use OpenAI with the model specified in this param.\n    # @option params [Boolean] :raw (false) Whether to return the raw response or dig the text content.\n    # @option params [Array] :messages (nil) An array of messages to use instead of the transcript.\n    # @option tools [Array|false] :available_tools (nil) Tools to pass to the LLM. Ignored if nil (default). If false, no tools are passed. If an array, only declared tools in the array are passed.\n    # @option max_tool_calls [Integer] :max_tool_calls Maximum number of tool calls before forcing a text response. Defaults to the configured value.\n    # @return [String|Hash] The completed chat response.\n    def chat_completion(params: {}, loop: false, json: false, raw: false, openai: nil, save_response: true, messages: nil, available_tools: nil, max_tool_calls: nil)\n      # set params to default values if not provided\n      params[:cache_at] ||= cache_at.presence\n      params[:frequency_penalty] ||= frequency_penalty.presence\n      params[:logit_bias] ||= logit_bias.presence\n      params[:logprobs] ||= logprobs.presence\n      params[:max_completion_tokens] ||= max_completion_tokens.presence || configuration.max_completion_tokens\n      params[:max_tokens] ||= max_tokens.presence || configuration.max_tokens\n      params[:min_p] ||= min_p.presence\n      params[:prediction] = { type: \"content\", content: params[:prediction] || prediction } if params[:prediction] || prediction.present?\n      params[:presence_penalty] ||= presence_penalty.presence\n      params[:provider] ||= provider.presence\n      params[:repetition_penalty] ||= repetition_penalty.presence\n      params[:response_format] ||= response_format.presence\n      params[:seed] ||= seed.presence\n      params[:stop] ||= stop.presence\n      params[:temperature] ||= 
temperature.presence || configuration.temperature\n      params[:tool_choice] ||= tool_choice.presence\n      params[:tools] = if available_tools == false\n                         nil\n                       elsif available_tools.is_a?(Array)\n                         filtered_tools(available_tools)\n                       else\n                         tools.presence\n                       end\n      params[:top_a] ||= top_a.presence\n      params[:top_k] ||= top_k.presence\n      params[:top_logprobs] ||= top_logprobs.presence\n      params[:top_p] ||= top_p.presence\n\n      json = true if params[:response_format].is_a?(Raix::ResponseFormat)\n\n      if json\n        unless openai\n          params[:provider] ||= {}\n          params[:provider][:require_parameters] = true\n        end\n        if params[:response_format].blank?\n          params[:response_format] ||= {}\n          params[:response_format][:type] = \"json_object\"\n        end\n      end\n\n      # Deprecation warning for loop parameter\n      if loop\n        warn \"\\n\\nWARNING: The 'loop' parameter is DEPRECATED and will be ignored.\\nChat completions now automatically continue after tool calls until the AI provides a text response.\\nUse 'max_tool_calls' to limit the number of tool calls (default: #{configuration.max_tool_calls}).\\n\\n\"\n      end\n\n      # Set max_tool_calls from parameter or configuration default\n      self.max_tool_calls = max_tool_calls || configuration.max_tool_calls\n\n      # Reset stop_tool_calls_and_respond flag\n      @stop_tool_calls_and_respond = false\n\n      # Track tool call count\n      tool_call_count = 0\n\n      # set the model to the default if not provided\n      self.model ||= configuration.model\n\n      adapter = MessageAdapters::Base.new(self)\n\n      # duplicate the transcript to avoid race conditions in situations where\n      # chat_completion is called multiple times in parallel\n      # TODO: Defensive programming, ensure messages is an 
array\n      messages ||= transcript.flatten.compact\n      messages = messages.map { |msg| adapter.transform(msg) }.dup\n      raise \"Can't complete an empty transcript\" if messages.blank?\n\n      # Run before_completion hooks (global -> class -> instance)\n      # Hooks can modify params and messages for logging, filtering, PII redaction, etc.\n      run_before_completion_hooks(params, messages)\n\n      # Initialize retry state before the begin block so that `retry` does not reset the counter\n      retry_count = 0\n      content = nil\n\n      begin\n        response = ruby_llm_request(params:, model: openai || model, messages:, openai_override: openai)\n\n        # no need for additional processing if streaming\n        return if stream && response.blank?\n\n        # tuck the full response into a thread local in case needed\n        Thread.current[:chat_completion_response] = response.is_a?(Hash) ? response.with_indifferent_access : response\n\n        # TODO: add a standardized callback hook for usage events\n        # broadcast(:usage_event, usage_subject, self.class.name.to_s, response, premium?)\n\n        tool_calls = response.dig(\"choices\", 0, \"message\", \"tool_calls\") || []\n        if tool_calls.any?\n          tool_call_count += tool_calls.size\n\n          # Check if we've exceeded max_tool_calls\n          if tool_call_count > self.max_tool_calls\n            # Add system message about hitting the limit\n            messages << { role: \"system\", content: \"Maximum tool calls (#{self.max_tool_calls}) exceeded. Please provide a final response to the user without calling any more tools.\" }\n\n            # Force a final response without tools\n            params[:tools] = nil\n            response = ruby_llm_request(params:, model: openai || model, messages:, openai_override: openai)\n\n            # Process the final response\n            content = response.dig(\"choices\", 0, \"message\", \"content\")\n            transcript << { assistant: content } if save_response\n            return raw ? 
response : content.to_s.strip\n          end\n\n          # Dispatch tool calls\n          tool_calls.each do |tool_call| # TODO: parallelize this?\n            # dispatch the called function\n            function_name = tool_call[\"function\"][\"name\"]\n            arguments = JSON.parse(tool_call[\"function\"][\"arguments\"].presence || \"{}\")\n            raise \"Unauthorized function call: #{function_name}\" unless self.class.functions.map { |f| f[:name].to_sym }.include?(function_name.to_sym)\n\n            dispatch_tool_function(function_name, arguments.with_indifferent_access)\n          end\n\n          # After executing tool calls, we need to continue the conversation\n          # to let the AI process the results and provide a text response.\n          # We continue until the AI responds with a regular assistant message\n          # (not another tool call request), unless stop_tool_calls_and_respond! was called.\n\n          # Use the updated transcript for the next call, not the original messages\n          updated_messages = transcript.flatten.compact\n          last_message = updated_messages.last\n\n          if !@stop_tool_calls_and_respond && (last_message[:role] != \"assistant\" || last_message[:tool_calls].present?)\n            # Send the updated transcript back to the AI\n            return chat_completion(\n              params:,\n              json:,\n              raw:,\n              openai:,\n              save_response:,\n              messages: nil, # Use transcript instead\n              available_tools:,\n              max_tool_calls: self.max_tool_calls - tool_call_count\n            )\n          elsif @stop_tool_calls_and_respond\n            # If stop_tool_calls_and_respond was set, force a final response without tools\n            params[:tools] = nil\n            response = ruby_llm_request(params:, model: openai || model, messages:, openai_override: openai)\n\n            content = response.dig(\"choices\", 0, \"message\", 
\"content\")\n            transcript << { assistant: content } if save_response\n            return raw ? response : content.to_s.strip\n          end\n        end\n\n        response.tap do |res|\n          content = res.dig(\"choices\", 0, \"message\", \"content\")\n\n          transcript << { assistant: content } if save_response\n          content = content.to_s.strip\n\n          if json\n            # Make automatic JSON parsing available to non-OpenAI providers that don't support the response_format parameter.\n            # Prefer content inside <json> tags when present; otherwise parse the entire response.\n            if (json_match = content.match(%r{<json>(.*?)</json>}m))\n              content = json_match[1]\n            end\n\n            return JSON.parse(content)\n          end\n\n          return content unless raw\n        end\n      rescue JSON::ParserError => e\n        if e.message.include?(\"not a valid\") # blank JSON\n          warn \"Retrying blank JSON response... (#{retry_count} attempts) #{e.message}\"\n          retry_count += 1\n          sleep 1 * retry_count # backoff\n          retry if retry_count < 3\n\n          raise e # just fail if we can't get content after 3 attempts\n        end\n\n        warn \"Bad JSON received!!!!!!: #{content}\"\n        raise e\n      rescue Faraday::BadRequestError => e\n        # make sure we see the actual error message on console or Honeybadger\n        warn \"Chat completion failed!!!!!!!!!!!!!!!!: #{e.response[:body]}\"\n        raise e\n      end\n    end\n\n    # This method returns the transcript array.\n    # Manually add your messages to it in the following abbreviated format\n    # before calling `chat_completion`.\n    #\n    # { system: \"You are a pumpkin\" },\n    # { user: \"Hey what time is it?\" },\n    # { assistant: \"Sorry, pumpkins do not wear watches\" }\n    #\n    # To add a function call, use the following format:\n    # { function: { name: 'fancy_pants_function', arguments: { param: 'value' } } }\n    #\n    # To add a function result, use the following format:\n    # { function: result, name: 
'fancy_pants_function' }\n    #\n    # @return [Array] The transcript array.\n    def transcript\n      @transcript ||= TranscriptAdapter.new(ruby_llm_chat)\n    end\n\n    # Returns the RubyLLM::Chat instance for this conversation\n    def ruby_llm_chat\n      @ruby_llm_chat ||= begin\n        model_id = model || configuration.model\n\n        # Determine provider based on model format or explicit openai flag\n        provider = if model_id.to_s.start_with?(\"openai/\") || model_id.to_s.match?(/^gpt-/)\n                     :openai\n                   else\n                     :openrouter\n                   end\n\n        RubyLLM.chat(model: model_id, provider:, assume_model_exists: true)\n      end\n    end\n\n    # Dispatches a tool function call with the given function name and arguments.\n    # This method can be overridden in subclasses to customize how function calls are handled.\n    #\n    # @param function_name [String] The name of the function to call\n    # @param arguments [Hash] The arguments to pass to the function\n    # @param cache [ActiveSupport::Cache] Optional cache object\n    # @return [Object] The result of the function call\n    def dispatch_tool_function(function_name, arguments, cache: nil)\n      public_send(function_name, arguments, cache)\n    end\n\n    private\n\n    def filtered_tools(tool_names)\n      return nil if tool_names.blank?\n\n      requested_tools = tool_names.map(&:to_sym)\n      available_tool_names = tools.map { |tool| tool.dig(:function, :name).to_sym }\n\n      undeclared_tools = requested_tools - available_tool_names\n      raise UndeclaredToolError, \"Undeclared tools: #{undeclared_tools.join(\", \")}\" if undeclared_tools.any?\n\n      tools.select { |tool| requested_tools.include?(tool.dig(:function, :name).to_sym) }\n    end\n\n    def run_before_completion_hooks(params, messages)\n      hooks = [\n        Raix.configuration.before_completion,\n        self.class.configuration.before_completion,\n        
before_completion\n      ].compact\n\n      return if hooks.empty?\n\n      context = CompletionContext.new(\n        chat_completion: self,\n        messages:,\n        params:\n      )\n\n      hooks.each do |hook|\n        result = hook.call(context) if hook.respond_to?(:call)\n        next unless result.is_a?(Hash)\n\n        # Handle model separately since it's passed as a keyword arg to ruby_llm_request\n        self.model = result[:model] if result.key?(:model)\n        params.merge!(result.compact)\n      end\n    end\n\n    def ruby_llm_request(params:, model:, messages:, openai_override: nil)\n      # Create a temporary chat instance for this request\n      provider = determine_provider(model, openai_override)\n      chat = RubyLLM.chat(model:, provider:, assume_model_exists: true)\n\n      # Apply messages to the chat\n      # Track if we have a user message to determine how to call ask\n      has_user_message = false\n\n      messages.each do |msg|\n        role = msg[:role] || msg[\"role\"]\n        content = msg[:content] || msg[\"content\"]\n\n        case role.to_s\n        when \"system\"\n          chat.with_instructions(content)\n        when \"user\"\n          has_user_message = true\n          chat.add_message(role: :user, content:)\n        when \"assistant\"\n          if msg[:tool_calls] || msg[\"tool_calls\"]\n            chat.add_message(role: :assistant, content:, tool_calls: msg[:tool_calls] || msg[\"tool_calls\"])\n          else\n            chat.add_message(role: :assistant, content:)\n          end\n        when \"tool\"\n          chat.add_message(\n            role: :tool,\n            content:,\n            tool_call_id: msg[:tool_call_id] || msg[\"tool_call_id\"]\n          )\n        end\n      end\n\n      # Apply configuration parameters\n      chat.with_temperature(params[:temperature]) if params[:temperature]\n\n      # Apply additional params (RubyLLM with_params expects keyword args)\n      additional_params = 
params.compact.except(:temperature, :tools, :max_tokens, :max_completion_tokens)\n      chat.with_params(**additional_params) if additional_params.any?\n\n      # Handle tools - convert Raix function declarations to RubyLLM tools\n      if params[:tools].present? && respond_to?(:class) && self.class.respond_to?(:functions)\n        ruby_llm_tools = FunctionToolAdapter.convert_tools_for_ruby_llm(self)\n        ruby_llm_tools.each { |tool| chat.with_tool(tool) }\n      end\n\n      # Execute the completion\n      if stream.present?\n        # Streaming mode\n        if has_user_message\n          chat.complete(&stream)\n        else\n          chat.ask(&stream)\n        end\n        nil # Return nil for streaming as per original behavior\n      else\n        # Non-streaming mode - return OpenAI-compatible response format\n        response_message = has_user_message ? chat.complete : chat.ask\n\n        # Convert RubyLLM response to OpenAI format for compatibility\n        {\n          \"choices\" => [\n            {\n              \"message\" => {\n                \"role\" => \"assistant\",\n                \"content\" => response_message.content,\n                \"tool_calls\" => response_message.tool_calls\n              },\n              \"finish_reason\" => response_message.tool_call? ? 
\"tool_calls\" : \"stop\"\n            }\n          ],\n          \"usage\" => {\n            \"prompt_tokens\" => response_message.input_tokens,\n            \"completion_tokens\" => response_message.output_tokens,\n            \"total_tokens\" => (response_message.input_tokens || 0) + (response_message.output_tokens || 0)\n          }\n        }\n      end\n    rescue StandardError => e\n      warn \"RubyLLM request failed: #{e.message}\"\n      raise e\n    end\n\n    def determine_provider(model, openai_override)\n      return :openai if openai_override\n      return :openai if model.to_s.match?(/^gpt-/) || model.to_s.match?(/^o\\d/)\n\n      # Default to openrouter for model IDs with provider prefix\n      :openrouter\n    end\n  end\nend\n"
  },
  {
    "path": "lib/raix/completion_context.rb",
    "content": "# frozen_string_literal: true\n\nmodule Raix\n  # Context object passed to before_completion hooks.\n  # Provides access to the chat completion instance, messages, and request parameters.\n  # Messages can be mutated for content filtering, PII redaction, etc.\n  class CompletionContext\n    attr_reader :chat_completion, :messages, :params\n\n    def initialize(chat_completion:, messages:, params:)\n      @chat_completion = chat_completion\n      @messages = messages # mutable - hooks can modify for filtering, redaction, etc.\n      @params = params # mutable - hooks can modify parameters\n    end\n\n    # Convenience accessor for the transcript\n    def transcript\n      chat_completion.transcript\n    end\n\n    # Get the currently configured model\n    def current_model\n      chat_completion.model || chat_completion.configuration.model\n    end\n\n    # Get the class that includes ChatCompletion\n    def chat_completion_class\n      chat_completion.class\n    end\n\n    # Get the current configuration\n    def configuration\n      chat_completion.configuration\n    end\n  end\nend\n"
  },
  {
    "path": "lib/raix/configuration.rb",
    "content": "# frozen_string_literal: true\n\nmodule Raix\n  # The Configuration class holds the configuration options for the Raix gem.\n  class Configuration\n    def self.attr_accessor_with_fallback(method_name)\n      define_method(method_name) do\n        value = instance_variable_get(\"@#{method_name}\")\n        return value if value\n        return unless fallback\n\n        fallback.public_send(method_name)\n      end\n      define_method(\"#{method_name}=\") do |value|\n        instance_variable_set(\"@#{method_name}\", value)\n      end\n    end\n\n    # The temperature option determines the randomness of the generated text.\n    # Higher values result in more random output.\n    attr_accessor_with_fallback :temperature\n\n    # The max_tokens option determines the maximum number of tokens to generate.\n    attr_accessor_with_fallback :max_tokens\n\n    # The max_completion_tokens option determines the maximum number of tokens to generate.\n    attr_accessor_with_fallback :max_completion_tokens\n\n    # The model option determines the model to use for text generation. 
This option\n    # is normally set in each class that includes the ChatCompletion module.\n    attr_accessor_with_fallback :model\n\n    # DEPRECATED: Use ruby_llm_config.openrouter_api_key instead\n    attr_accessor_with_fallback :openrouter_client\n\n    # DEPRECATED: Use ruby_llm_config.openai_api_key instead\n    attr_accessor_with_fallback :openai_client\n\n    # The max_tool_calls option determines the maximum number of tool calls\n    # before forcing a text response to prevent excessive function invocations.\n    attr_accessor_with_fallback :max_tool_calls\n\n    # Access to RubyLLM configuration\n    attr_accessor_with_fallback :ruby_llm_config\n\n    # A callable hook that runs before each chat completion request.\n    # Receives a CompletionContext and can modify params and messages.\n    # Use for: dynamic parameter resolution, logging, content filtering, PII redaction, etc.\n    attr_accessor_with_fallback :before_completion\n\n    DEFAULT_MAX_TOKENS = 1000\n    DEFAULT_MAX_COMPLETION_TOKENS = 16_384\n    DEFAULT_MODEL = \"meta-llama/llama-3.3-8b-instruct:free\"\n    DEFAULT_TEMPERATURE = 0.0\n    DEFAULT_MAX_TOOL_CALLS = 25\n\n    # Initializes a new instance of the Configuration class with default values.\n    def initialize(fallback: nil)\n      self.temperature = DEFAULT_TEMPERATURE\n      self.max_completion_tokens = DEFAULT_MAX_COMPLETION_TOKENS\n      self.max_tokens = DEFAULT_MAX_TOKENS\n      self.model = DEFAULT_MODEL\n      self.max_tool_calls = DEFAULT_MAX_TOOL_CALLS\n      self.ruby_llm_config = RubyLLM.config\n      self.fallback = fallback\n    end\n\n    def client?\n      # Support legacy openrouter_client/openai_client or new RubyLLM config\n      !!(openrouter_client || openai_client || ruby_llm_configured?)\n    end\n\n    def ruby_llm_configured?\n      ruby_llm_config&.openai_api_key || ruby_llm_config&.openrouter_api_key ||\n        ruby_llm_config&.anthropic_api_key || ruby_llm_config&.gemini_api_key\n    end\n\n    private\n\n  
  attr_accessor :fallback\n\n    def get_with_fallback(method)\n      value = instance_variable_get(\"@#{method}\")\n      return value if value\n      return unless fallback\n\n      fallback.public_send(method)\n    end\n  end\nend\n"
  },
  {
    "path": "lib/raix/function_dispatch.rb",
    "content": "# frozen_string_literal: true\n\nrequire \"securerandom\"\nmodule Raix\n  # Provides declarative function definition for ChatCompletion classes.\n  #\n  # Example:\n  #\n  #   class MeaningOfLife\n  #     include Raix::ChatCompletion\n  #     include Raix::FunctionDispatch\n  #\n  #     function :ask_deep_thought do\n  #       wait 236_682_000_000_000\n  #       \"The meaning of life is 42\"\n  #     end\n  #\n  #     def initialize\n  #       transcript << { user: \"What is the meaning of life?\" }\n  #       chat_completion\n  #     end\n  #   end\n  module FunctionDispatch\n    extend ActiveSupport::Concern\n\n    class_methods do\n      attr_reader :functions\n\n      # Defines a function that can be dispatched by the ChatCompletion module while\n      # processing the response from an AI model.\n      #\n      # Declaring a function here will automatically add it (in JSON Schema format) to\n      # the list of tools provided to the OpenRouter Chat Completion API. The function\n      # will be dispatched by name, so make sure the name is unique. 
The function's block\n      # argument will be executed in the instance context of the class that includes this module.\n      #\n      # Example:\n      #   function :google_search, \"Search Google for something\", query: { type: \"string\" } do |arguments|\n      #     GoogleSearch.new(arguments[:query]).search\n      #   end\n      #\n      # @param name [Symbol] The name of the function.\n      # @param description [String] An optional description of the function.\n      # @param parameters [Hash] The parameters that the function accepts.\n      # @param block [Proc] The block of code to execute when the function is called.\n      def function(name, description = nil, **parameters, &block)\n        @functions ||= []\n        @functions << begin\n          {\n            name:,\n            parameters: { type: \"object\", properties: {}, required: [] }\n          }.tap do |definition|\n            definition[:description] = description if description.present?\n            parameters.each do |key, value|\n              value = value.dup\n              required = value.delete(:required)\n              optional = value.delete(:optional)\n              definition[:parameters][:properties][key] = value\n              if required || optional == false\n                definition[:parameters][:required] << key\n              end\n            end\n            definition[:parameters].delete(:required) if definition[:parameters][:required].empty?\n          end\n        end\n\n        define_method(name) do |arguments, cache|\n          id = SecureRandom.uuid[0, 23]\n\n          content = if cache.present?\n                      cache.fetch([name, arguments]) do\n                        instance_exec(arguments, &block)\n                      end\n                    else\n                      instance_exec(arguments, &block)\n                    end\n\n          # add in one operation to prevent race condition and potential wrong\n          # interleaving of tool calls in 
multi-threaded environments\n          transcript << [\n            {\n              role: \"assistant\",\n              content: nil,\n              tool_calls: [\n                {\n                  id:,\n                  type: \"function\",\n                  function: {\n                    name:,\n                    arguments: arguments.to_json\n                  }\n                }\n              ]\n            },\n            {\n              role: \"tool\",\n              tool_call_id: id,\n              name:,\n              content: content.to_s\n            }\n          ]\n\n          # Return the content - ChatCompletion will automatically continue\n          # the conversation after tool execution to get a final response\n          content\n        end\n      end\n    end\n\n    included do\n      attr_accessor :chat_completion_args\n    end\n\n    def chat_completion(**chat_completion_args)\n      self.chat_completion_args = chat_completion_args\n      super\n    end\n\n    # Stops the automatic continuation of chat completions after this function call.\n    # Useful when you want to halt processing within a function and force the AI\n    # to provide a text response without making additional tool calls.\n    def stop_tool_calls_and_respond!\n      @stop_tool_calls_and_respond = true\n    end\n\n    def tools\n      return [] unless self.class.functions\n\n      self.class.functions.map { |function| { type: \"function\", function: } }\n    end\n  end\nend\n"
  },
  {
    "path": "lib/raix/function_tool_adapter.rb",
    "content": "# frozen_string_literal: true\n\nmodule Raix\n  # Adapter to convert Raix function declarations to RubyLLM::Tool instances\n  class FunctionToolAdapter\n    def self.create_tool_from_function(function_def, instance)\n      tool_class = Class.new(RubyLLM::Tool) do\n        description function_def[:description] if function_def[:description]\n\n        # Define parameters based on function definition\n        function_def[:parameters][:properties]&.each do |param_name, param_def|\n          required = function_def[:parameters][:required]&.include?(param_name)\n          param param_name.to_sym, type: param_def[:type], desc: param_def[:description], required:\n        end\n\n        # Store reference to the instance and function name\n        define_method(:raix_instance) { instance }\n        define_method(:raix_function_name) { function_def[:name] }\n\n        # Override execute to call the Raix function\n        define_method(:execute) do |**args|\n          raix_instance.public_send(raix_function_name, args.with_indifferent_access, nil)\n        end\n      end\n\n      # Set a meaningful name for the tool class\n      tool_class.define_singleton_method(:name) do\n        \"Raix::GeneratedTool::#{function_def[:name].to_s.camelize}\"\n      end\n\n      tool_instance = tool_class.new\n\n      # Override the name method to return the original function name\n      # This ensures RubyLLM can match the tool call from the AI\n      tool_instance.define_singleton_method(:name) do\n        function_def[:name].to_s\n      end\n\n      tool_instance\n    end\n\n    def self.convert_tools_for_ruby_llm(raix_instance)\n      return [] unless raix_instance.class.respond_to?(:functions)\n      return [] if raix_instance.class.functions.blank?\n\n      raix_instance.class.functions.map do |function_def|\n        create_tool_from_function(function_def, raix_instance)\n      end\n    end\n  end\nend\n"
  },
  {
    "path": "lib/raix/mcp/sse_client.rb",
    "content": "require \"json\"\nrequire \"securerandom\"\nrequire \"faraday\"\nrequire \"uri\"\nrequire \"digest\"\n\nmodule Raix\n  module MCP\n    # Client for communicating with MCP servers via Server-Sent Events (SSE).\n    class SseClient\n      PROTOCOL_VERSION = \"2024-11-05\".freeze\n      CONNECTION_TIMEOUT = 10\n      OPEN_TIMEOUT = 30\n\n      # Creates a new client and establishes SSE connection to discover the JSON-RPC endpoint.\n      #\n      # @param url [String] the SSE endpoint URL\n      def initialize(url, headers: {})\n        @url = url\n        @endpoint_url = nil\n        @sse_thread = nil\n        @event_queue = Thread::Queue.new\n        @buffer = \"\"\n        @closed = false\n        @headers = headers\n\n        # Start the SSE connection and discover endpoint\n        establish_sse_connection\n      end\n\n      # Returns available tools from the server.\n      def tools\n        @tools ||= begin\n          request_id = SecureRandom.uuid\n          send_json_rpc(request_id, \"tools/list\", {})\n\n          # Wait for response through SSE\n          response = wait_for_response(request_id)\n          response[:tools].map do |tool_json|\n            Tool.from_json(tool_json)\n          end\n        end\n      end\n\n      # Executes a tool with given arguments.\n      # Returns text content directly, or JSON-encoded data for other content types.\n      def call_tool(name, **arguments)\n        request_id = SecureRandom.uuid\n        send_json_rpc(request_id, \"tools/call\", name:, arguments:)\n\n        # Wait for response through SSE\n        response = wait_for_response(request_id)\n        content = response[:content]\n        return \"\" if content.nil? 
|| content.empty?\n\n        # Handle different content formats\n        first_item = content.first\n        case first_item\n        when Hash\n          case first_item[:type]\n          when \"text\"\n            first_item[:text]\n          when \"image\"\n            # Return a structured response for images\n            {\n              type: \"image\",\n              data: first_item[:data],\n              mime_type: first_item[:mimeType] || \"image/png\"\n            }.to_json\n          else\n            # For any other type, return the item as JSON\n            first_item.to_json\n          end\n        else\n          first_item.to_s\n        end\n      end\n\n      # Closes the connection to the server.\n      def close\n        @closed = true\n        @sse_thread&.kill\n        @connection&.close\n      end\n\n      def unique_key\n        parametrized_url = @url.parameterize.underscore.gsub(\"https_\", \"\")\n        Digest::SHA256.hexdigest(parametrized_url)[0..2]\n      end\n\n      private\n\n      # Establishes and maintains the SSE connection\n      def establish_sse_connection\n        @sse_thread = Thread.new do\n          headers = {\n            \"Accept\" => \"text/event-stream\",\n            \"Cache-Control\" => \"no-cache\",\n            \"Connection\" => \"keep-alive\",\n            \"MCP-Version\" => PROTOCOL_VERSION\n          }.merge(@headers)\n\n          @connection = Faraday.new(url: @url) do |faraday|\n            faraday.options.timeout = CONNECTION_TIMEOUT\n            faraday.options.open_timeout = OPEN_TIMEOUT\n          end\n\n          @connection.get do |req|\n            req.headers = headers\n            req.options.on_data = proc do |chunk, _size|\n              next if @closed\n\n              @buffer << chunk\n              process_sse_buffer\n            end\n          end\n        rescue StandardError => e\n          # puts \"[MCP DEBUG] SSE connection error: #{e.message}\"\n          @event_queue << { error: e }\n   
     end\n\n        # Wait for endpoint discovery\n        loop do\n          event = @event_queue.pop\n          if event[:error]\n            raise ProtocolError, \"SSE connection failed: #{event[:error].message}\"\n          elsif event[:endpoint_url]\n            @endpoint_url = event[:endpoint_url]\n            break\n          end\n        end\n\n        # Initialize the MCP session\n        initialize_mcp_session\n      end\n\n      # Process SSE buffer for complete events\n      def process_sse_buffer\n        while (idx = @buffer.index(\"\\n\\n\"))\n          event_text = @buffer.slice!(0..(idx + 1))\n          event_type, event_data = parse_sse_fields(event_text)\n\n          case event_type\n          when \"endpoint\"\n            endpoint_url = build_absolute_url(@url, event_data)\n            @event_queue << { endpoint_url: }\n          when \"message\"\n            handle_message_event(event_data)\n          end\n        end\n      end\n\n      # Handle SSE message events\n      def handle_message_event(event_data)\n        parsed = JSON.parse(event_data, symbolize_names: true)\n\n        # Handle different message types\n        case parsed\n        when ->(p) { p[:method] == \"initialize\" && p.dig(:params, :endpoint_url) }\n          # Legacy endpoint discovery\n          endpoint_url = parsed.dig(:params, :endpoint_url)\n          @event_queue << { endpoint_url: }\n        when ->(p) { p[:id] && p[:result] }\n          @event_queue << { id: parsed[:id], result: parsed[:result] }\n        when ->(p) { p[:result] }\n          @event_queue << { result: parsed[:result] }\n        end\n      rescue JSON::ParserError => e\n        puts \"[MCP DEBUG] Error parsing message: #{e.message}\"\n        puts \"[MCP DEBUG] Message data: #{event_data}\"\n      end\n\n      # Initialize the MCP session\n      def initialize_mcp_session\n        request_id = SecureRandom.uuid\n        send_json_rpc(request_id, \"initialize\", {\n                        
protocolVersion: PROTOCOL_VERSION,\n                        capabilities: {\n                          roots: { listChanged: true },\n                          sampling: {}\n                        },\n                        clientInfo: {\n                          name: \"Raix\",\n                          version: Raix::VERSION\n                        }\n                      })\n\n        # Wait for initialization response\n        response = wait_for_response(request_id)\n\n        # Send acknowledgment if needed\n        return unless response.dig(:capabilities, :tools, :listChanged)\n\n        send_notification(\"notifications/initialized\", {})\n      end\n\n      # Send a JSON-RPC request\n      def send_json_rpc(id, method, params)\n        body = {\n          jsonrpc: JSONRPC_VERSION,\n          id:,\n          method:,\n          params:\n        }\n\n        # Use a new connection for the POST request\n        conn = Faraday.new(url: @endpoint_url) do |faraday|\n          faraday.options.timeout = CONNECTION_TIMEOUT\n        end\n\n        conn.post do |req|\n          req.headers[\"Content-Type\"] = \"application/json\"\n          req.body = body.to_json\n        end\n      rescue StandardError => e\n        raise ProtocolError, \"Failed to send request: #{e.message}\"\n      end\n\n      # Send a notification (no response expected)\n      def send_notification(method, params)\n        body = {\n          jsonrpc: JSONRPC_VERSION,\n          method:,\n          params:\n        }\n\n        conn = Faraday.new(url: @endpoint_url) do |faraday|\n          faraday.options.timeout = CONNECTION_TIMEOUT\n        end\n\n        conn.post do |req|\n          req.headers[\"Content-Type\"] = \"application/json\"\n          req.body = body.to_json\n        end\n      rescue StandardError => e\n        puts \"[MCP DEBUG] Error sending notification: #{e.message}\"\n      end\n\n      # Wait for a response with a specific ID\n      def 
wait_for_response(request_id)\n        timeout = Time.now + CONNECTION_TIMEOUT\n\n        loop do\n          if Time.now > timeout\n            raise ProtocolError, \"Timeout waiting for response\"\n          end\n\n          # Use non-blocking pop with timeout\n          begin\n            event = @event_queue.pop(true) # non_block = true\n          rescue ThreadError\n            # Queue is empty, wait a bit\n            sleep 0.1\n            next\n          end\n\n          if event[:error]\n            raise ProtocolError, \"SSE error: #{event[:error].message}\"\n          elsif event[:result] && (event[:id] == request_id || !event[:id])\n            return event[:result]\n          else\n            @event_queue << event\n            sleep 0.01\n          end\n        end\n      end\n\n      # Parses SSE event fields from raw text.\n      def parse_sse_fields(event_text)\n        event_type = \"message\"\n        data_lines = []\n\n        event_text.each_line do |line|\n          case line\n          when /^event:\\s*(.+)$/\n            event_type = Regexp.last_match(1).strip\n          when /^data:\\s*(.*)$/\n            data_lines << Regexp.last_match(1)\n          end\n        end\n\n        [event_type, data_lines.join(\"\\n\").strip]\n      end\n\n      # Builds an absolute URL for candidate relative to base.\n      def build_absolute_url(base, candidate)\n        uri = URI.parse(candidate)\n        return candidate if uri.absolute?\n\n        URI.join(base, candidate).to_s\n      rescue URI::InvalidURIError\n        candidate\n      end\n    end\n  end\nend\n"
  },
  {
    "path": "lib/raix/mcp/stdio_client.rb",
    "content": "require \"json\"\nrequire \"securerandom\"\nrequire \"digest\"\n\nmodule Raix\n  module MCP\n    # Client for communicating with MCP servers via stdio using JSON-RPC.\n    class StdioClient\n      # Creates a new client with a bidirectional pipe to the MCP server.\n      def initialize(*args, env)\n        @args = args\n        @io = IO.popen(env, args, \"w+\")\n      end\n\n      # Returns available tools from the server.\n      def tools\n        result = call(\"tools/list\")\n\n        result[\"tools\"].map do |tool_json|\n          Tool.from_json(tool_json)\n        end\n      end\n\n      # Executes a tool with given arguments.\n      # Returns text content directly, or JSON-encoded data for other content types.\n      def call_tool(name, **arguments)\n        result = call(\"tools/call\", name:, arguments:)\n        content = result[\"content\"]\n        return \"\" if content.nil? || content.empty?\n\n        # Handle different content formats\n        first_item = content.first\n        case first_item\n        when Hash\n          case first_item[\"type\"]\n          when \"text\"\n            first_item[\"text\"]\n          when \"image\"\n            # Return a structured response for images\n            {\n              type: \"image\",\n              data: first_item[\"data\"],\n              mime_type: first_item[\"mimeType\"] || \"image/png\"\n            }.to_json\n          else\n            # For any other type, return the item as JSON\n            first_item.to_json\n          end\n        else\n          first_item.to_s\n        end\n      end\n\n      # Closes the connection to the server.\n      def close\n        @io.close\n      end\n\n      def unique_key\n        parametrized_args = @args.join(\" \").parameterize.underscore\n        Digest::SHA256.hexdigest(parametrized_args)[0..2]\n      end\n\n      private\n\n      # Sends JSON-RPC request and returns the result.\n      def call(method, **params)\n        @io.puts({ id: 
SecureRandom.uuid, method:, params:, jsonrpc: JSONRPC_VERSION }.to_json)\n        @io.flush # Ensure data is immediately sent\n        message = JSON.parse(@io.gets)\n        if (error = message[\"error\"])\n          raise ProtocolError, error[\"message\"]\n        end\n\n        message[\"result\"]\n      end\n    end\n  end\nend\n"
  },
  {
    "path": "lib/raix/mcp/tool.rb",
    "content": "module Raix\n  module MCP\n    # Represents an MCP (Model Context Protocol) tool with metadata and schema\n    #\n    # @example\n    #   tool = Tool.new(\n    #     name: \"weather\",\n    #     description: \"Get weather info\",\n    #     input_schema: { \"type\" => \"object\", \"properties\" => { \"city\" => { \"type\" => \"string\" } } }\n    #   )\n    class Tool\n      attr_reader :name, :description, :input_schema\n\n      # Initialize a new Tool\n      #\n      # @param name [String] the tool name\n      # @param description [String] human-readable description of what the tool does\n      # @param input_schema [Hash] JSON schema defining the tool's input parameters\n      def initialize(name:, description:, input_schema: {})\n        @name = name\n        @description = description\n        @input_schema = input_schema\n      end\n\n      # Initialize from raw MCP JSON response\n      #\n      # @param json [Hash] parsed JSON data from MCP response\n      # @return [Tool] new Tool instance\n      def self.from_json(json)\n        new(\n          name: json[:name] || json[\"name\"],\n          description: json[:description] || json[\"description\"],\n          input_schema: json[:inputSchema] || json[\"inputSchema\"] || {}\n        )\n      end\n\n      # Get the input schema type\n      #\n      # @return [String, nil] the schema type (e.g., \"object\")\n      def input_type\n        input_schema[\"type\"]\n      end\n\n      # Get the properties hash\n      #\n      # @return [Hash] schema properties definition\n      def properties\n        input_schema[\"properties\"] || {}\n      end\n\n      # Get required properties array\n      #\n      # @return [Array<String>] list of required property names\n      def required_properties\n        input_schema[\"required\"] || []\n      end\n\n      # Check if a property is required\n      #\n      # @param property_name [String] name of the property to check\n      # @return [Boolean] true if the 
property is required\n      def required?(property_name)\n        required_properties.include?(property_name)\n      end\n    end\n  end\nend\n"
  },
  {
    "path": "lib/raix/mcp.rb",
    "content": "# Simple integration layer that lets Raix classes declare an MCP server\n# with a single DSL call:\n#\n#   mcp \"https://my-server.example.com/sse\"\n#\n# The concern fetches the remote server's tool list (via JSON‑RPC 2.0\n# `tools/list`) and exposes each remote tool as if it were an inline\n# `function` declared with Raix::FunctionDispatch.  When the tool is\n# invoked by the model, the generated instance method forwards the\n# request to the remote server using `tools/call`, captures the result,\n# and appends the appropriate messages to the transcript so that the\n# conversation history stays consistent.\n\nrequire \"active_support/concern\"\nrequire \"active_support/inflector\"\nrequire \"securerandom\"\nrequire \"uri\"\n\nmodule Raix\n  # Model Context Protocol integration for Raix\n  #\n  # Allows declaring MCP servers with a simple DSL that automatically:\n  # - Queries tools from the remote server\n  # - Exposes each tool as a function callable by LLMs\n  # - Handles transcript recording and response processing\n  module MCP\n    extend ActiveSupport::Concern\n\n    # Error raised when there's a protocol-level error in MCP communication\n    class ProtocolError < StandardError; end\n\n    JSONRPC_VERSION = \"2.0\".freeze\n\n    class_methods do\n      # Declare an MCP server by URL, using the SSE transport.\n      #\n      #   sse_mcp \"https://server.example.com/sse\",\n      #           headers: { \"Authorization\" => \"Bearer <token>\" },\n      #           only: [:get_issue]\n      #\n      def sse_mcp(url, headers: {}, only: nil, except: nil)\n        mcp(only:, except:, client: MCP::SseClient.new(url, headers:))\n      end\n\n      # Declare an MCP server by command line arguments, and environment variables  ,\n      # using the stdio transport.\n      #\n      #   stdio_mcp \"docker\", \"run\", \"-i\", \"--rm\",\n      #             \"-e\", \"GITHUB_PERSONAL_ACCESS_TOKEN\",\n      #             \"ghcr.io/github/github-mcp-server\",\n 
     #             env: { GITHUB_PERSONAL_ACCESS_TOKEN: \"${input:github_token}\" },\n      #             only: [:github_search]\n      #\n      def stdio_mcp(*args, env: {}, only: nil, except: nil)\n        mcp(only:, except:, client: MCP::StdioClient.new(*args, env))\n      end\n\n      # Declare an MCP server, using the given client.\n      #\n      #   mcp client: MCP::SseClient.new(\"https://server.example.com/sse\")\n      #\n      # This will automatically:\n      #   • query `tools/list` on the server\n      #   • register each remote tool with FunctionDispatch so that the\n      #     OpenAI / OpenRouter request body includes its JSON‑Schema\n      #   • define an instance method for each tool that forwards the\n      #     call to the server and appends the proper messages to the\n      #     transcript.\n      # NOTE TO SELF: NEVER MOCK SERVER RESPONSES! THIS MUST WORK WITH REAL SERVERS!\n      def mcp(client:, only: nil, except: nil)\n        @mcp_servers ||= {}\n\n        return if @mcp_servers.key?(client.unique_key) # avoid duplicate definitions\n\n        # Fetch tools\n        tools = client.tools\n\n        if tools.empty?\n          # puts \"[MCP DEBUG] No tools found from MCP server at #{url}\"\n          client.close\n          return nil\n        end\n\n        # Apply filters\n        filtered_tools = if only.present?\n                           only_symbols = Array(only).map(&:to_sym)\n                           tools.select { |tool| only_symbols.include?(tool.name.to_sym) }\n                         elsif except.present?\n                           except_symbols = Array(except).map(&:to_sym)\n                           tools.reject { |tool| except_symbols.include?(tool.name.to_sym) }\n                         else\n                           tools\n                         end\n\n        # Ensure FunctionDispatch is included in the class\n        include FunctionDispatch unless included_modules.include?(FunctionDispatch)\n        # puts 
\"[MCP DEBUG] FunctionDispatch included in #{name}\"\n\n        filtered_tools.each do |tool|\n          remote_name = tool.name\n          # TODO: Revisit later whether this much context is needed in the function name\n          local_name = :\"#{remote_name}_#{client.unique_key}\"\n\n          description = tool.description\n          input_schema = tool.input_schema || {}\n\n          # --- register with FunctionDispatch (adds to .functions)\n          function(local_name, description, **{}) # placeholder parameters replaced next\n          latest_definition = functions.last\n          latest_definition[:parameters] = input_schema.deep_symbolize_keys || {}\n\n          # Required by OpenAI\n          latest_definition[:parameters][:properties] ||= {}\n\n          # Store the schema for type coercion\n          tool_schemas = @tool_schemas ||= {}\n          tool_schemas[local_name] = input_schema\n\n          # --- define an instance method that proxies to the server\n          define_method(local_name) do |arguments, _cache|\n            arguments ||= {}\n\n            # Coerce argument types based on the input schema\n            stored_schema = self.class.instance_variable_get(:@tool_schemas)&.dig(local_name)\n            coerced_arguments = coerce_arguments(arguments, stored_schema)\n\n            content_text = client.call_tool(remote_name, **coerced_arguments)\n            call_id = SecureRandom.uuid\n\n            # Mirror FunctionDispatch transcript behaviour\n            transcript << [\n              {\n                role: \"assistant\",\n                content: nil,\n                tool_calls: [\n                  {\n                    id: call_id,\n                    type: \"function\",\n                    function: {\n                      name: local_name.to_s,\n                      arguments: arguments.to_json\n                    }\n                  }\n                ]\n              },\n              {\n                role: \"tool\",\n 
               tool_call_id: call_id,\n                name: local_name.to_s,\n                content: content_text\n              }\n            ]\n\n            # Return the content - ChatCompletion will automatically continue\n            # the conversation after tool execution\n            content_text\n          end\n        end\n\n        # Store the URL, tools, and client for future use\n        @mcp_servers[client.unique_key] = { tools: filtered_tools, client: }\n      end\n    end\n\n    private\n\n    # Coerce argument types based on the JSON schema\n    def coerce_arguments(arguments, schema)\n      return arguments unless schema.is_a?(Hash) && schema[\"properties\"].is_a?(Hash)\n\n      coerced = {}\n      schema[\"properties\"].each do |key, prop_schema|\n        value = if arguments.key?(key)\n                  arguments[key]\n                elsif arguments.key?(key.to_sym)\n                  arguments[key.to_sym]\n                end\n        next if value.nil?\n\n        coerced[key] = coerce_value(value, prop_schema)\n      end\n\n      # Include any additional arguments not in the schema\n      arguments.each do |key, value|\n        key_str = key.to_s\n        coerced[key_str] = value unless coerced.key?(key_str)\n      end\n\n      coerced.with_indifferent_access\n    end\n\n    # Coerce a single value based on its schema\n    def coerce_value(value, schema)\n      return value unless schema.is_a?(Hash)\n\n      case schema[\"type\"]\n      when \"number\", \"integer\"\n        if value.is_a?(String) && value.match?(/\\A-?\\d+(\\.\\d+)?\\z/)\n          schema[\"type\"] == \"integer\" ? value.to_i : value.to_f\n        else\n          value\n        end\n      when \"boolean\"\n        case value\n        when \"true\", true then true\n        when \"false\", false then false\n        else value\n        end\n      when \"array\"\n        array_value = begin\n          value.is_a?(String) ? 
JSON.parse(value) : value\n        rescue JSON::ParserError\n          value\n        end\n\n        # If there's an items schema, coerce each element\n        if array_value.is_a?(Array) && schema[\"items\"]\n          array_value.map { |item| coerce_value(item, schema[\"items\"]) }\n        else\n          array_value\n        end\n      when \"object\"\n        object_value = begin\n          value.is_a?(String) ? JSON.parse(value) : value\n        rescue JSON::ParserError\n          value\n        end\n\n        # If there are properties defined, coerce them recursively\n        if object_value.is_a?(Hash) && schema[\"properties\"]\n          coerced_object = {}\n          schema[\"properties\"].each do |prop_key, prop_schema|\n            prop_value = object_value[prop_key] || object_value[prop_key.to_sym]\n            coerced_object[prop_key] = coerce_value(prop_value, prop_schema) unless prop_value.nil?\n          end\n\n          # Include any additional properties not in the schema\n          object_value.each do |obj_key, obj_value|\n            obj_key_str = obj_key.to_s\n            coerced_object[obj_key_str] = obj_value unless coerced_object.key?(obj_key_str)\n          end\n\n          coerced_object\n        else\n          object_value\n        end\n      else\n        value\n      end\n    end\n  end\nend\n"
  },
  {
    "path": "lib/raix/message_adapters/base.rb",
    "content": "# frozen_string_literal: true\n\nrequire \"active_support/core_ext/module/delegation\"\n\nmodule Raix\n  module MessageAdapters\n    # Transforms messages into the format expected by the OpenAI API\n    class Base\n      attr_accessor :context\n\n      delegate :cache_at, :model, to: :context\n\n      def initialize(context)\n        @context = context\n      end\n\n      def transform(message)\n        return message if message[:role].present?\n\n        if message[:function].present?\n          { role: \"assistant\", name: message.dig(:function, :name), content: message.dig(:function, :arguments).to_json }\n        elsif message[:result].present?\n          { role: \"function\", name: message[:name], content: message[:result] }\n        else\n          content(message)\n        end\n      end\n\n      protected\n\n      def content(message)\n        case message\n        in { system: content }\n          { role: \"system\", content: }\n        in { user: content }\n          { role: \"user\", content: }\n        in { assistant: content }\n          { role: \"assistant\", content: }\n        else\n          raise ArgumentError, \"Invalid message format: #{message.inspect}\"\n        end.tap do |msg|\n          # convert to anthropic multipart format if model is claude-3 and cache_at is set\n          if model.to_s.include?(\"anthropic/claude-3\") && cache_at && msg[:content].to_s.length > cache_at.to_i\n            msg[:content] = [{ type: \"text\", text: msg[:content], cache_control: { type: \"ephemeral\" } }]\n          end\n        end\n      end\n    end\n  end\nend\n"
  },
  {
    "path": "lib/raix/predicate.rb",
    "content": "# frozen_string_literal: true\n\nmodule Raix\n  # A module for handling yes/no questions using AI chat completion.\n  # When included in a class, it provides methods to define handlers for\n  # yes and no responses. All handlers are optional. Any response that\n  # does not begin with \"yes, \" or \"no, \" will be considered a maybe.\n  #\n  # @example\n  #   class Question\n  #     include Raix::Predicate\n  #\n  #     yes? do |explanation|\n  #       puts \"Yes: #{explanation}\"\n  #     end\n  #\n  #     no? do |explanation|\n  #       puts \"No: #{explanation}\"\n  #     end\n  #\n  #     maybe? do |explanation|\n  #       puts \"Maybe: #{explanation}\"\n  #     end\n  #   end\n  #\n  #   question = Question.new\n  #   question.ask(\"Is Ruby a programming language?\")\n  module Predicate\n    extend ActiveSupport::Concern\n    include ChatCompletion\n\n    def ask(question, openai: false)\n      raise \"Please define a yes and/or no block\" if self.class.yes_block.nil? 
&& self.class.no_block.nil?\n\n      transcript << { system: \"Always answer 'Yes, ', 'No, ', or 'Maybe, ' followed by a concise explanation!\" }\n      transcript << { user: question }\n\n      chat_completion(openai:).tap do |response|\n        if response.downcase.start_with?(\"yes,\")\n          instance_exec(response, &self.class.yes_block) if self.class.yes_block\n        elsif response.downcase.start_with?(\"no,\")\n          instance_exec(response, &self.class.no_block) if self.class.no_block\n        elsif self.class.maybe_block\n          instance_exec(response, &self.class.maybe_block)\n        else\n          puts \"[Raix::Predicate] Unhandled response: #{response}\"\n        end\n      end\n    end\n\n    # Class methods added to the including class\n    module ClassMethods\n      attr_reader :yes_block, :no_block, :maybe_block\n\n      def yes?(&block)\n        @yes_block = block\n      end\n\n      def no?(&block)\n        @no_block = block\n      end\n\n      def maybe?(&block)\n        @maybe_block = block\n      end\n    end\n  end\nend\n"
  },
  {
    "path": "lib/raix/prompt_declarations.rb",
    "content": "# frozen_string_literal: true\n\nrequire \"ostruct\"\n\n# This module provides a way to chain prompts and handle\n# user responses in a serialized manner, with support for\n# functions if the FunctionDispatch module is also included.\nmodule Raix\n  # The PromptDeclarations module provides a way to chain prompts and handle\n  # user responses in a serialized manner, with support for\n  # functions if the FunctionDispatch module is also included.\n  module PromptDeclarations\n    extend ActiveSupport::Concern\n\n    module ClassMethods # rubocop:disable Style/Documentation\n      # Adds a prompt to the list of prompts. At minimum, provide a `text` or `call` parameter.\n      #\n      # @param system [Proc] A lambda that generates the system message.\n      # @param call [ChatCompletion] A callable class that includes ChatCompletion. Will be passed a context object when initialized.\n      # @param text Accepts 1) a lambda that returns the prompt text, 2) a string, or 3) a symbol that references a method.\n      # @param stream [Proc] A lambda stream handler\n      # @param success [Proc] The block of code to execute when the prompt is answered.\n      # @param params [Hash] Additional parameters for the completion API call\n      # @param if [Proc] A lambda that determines if the prompt should be executed.\n      def prompt(system: nil, call: nil, text: nil, stream: nil, success: nil, params: {}, if: nil, unless: nil, until: nil)\n        name = Digest::SHA256.hexdigest(text.inspect)[0..7]\n        prompts << OpenStruct.new({ name:, system:, call:, text:, stream:, success:, if:, unless:, until:, params: })\n\n        define_method(name) do |response|\n          return response if success.nil?\n          return send(success, response) if success.is_a?(Symbol)\n\n          instance_exec(response, &success)\n        end\n      end\n\n      def prompts\n        @prompts ||= []\n      end\n    end\n\n    attr_reader :current_prompt, :last_response\n\n    
MAX_LOOP_COUNT = 5\n\n    # Executes the chat completion process based on the class-level declared prompts.\n    # The response to each prompt is added to the transcript automatically and returned.\n    #\n    # Raises an error if there are not enough prompts defined.\n    #\n    # Uses system prompt in following order of priority:\n    #   - system lambda specified in the prompt declaration\n    #   - system_prompt instance method if defined\n    #   - system_prompt class-level declaration if defined\n    #\n    #  Prompts require a text lambda to be defined at minimum.\n    #  TODO: shortcut syntax passes just a string prompt if no other options are needed.\n    #\n    # @raise [RuntimeError] If no prompts are defined.\n    #\n    # @param prompt [String] The prompt to use for the chat completion.\n    # @param params [Hash] Parameters for the chat completion.\n    # @param raw [Boolean] Whether to return the raw response.\n    #\n    # TODO: SHOULD NOT HAVE A DIFFERENT INTERFACE THAN PARENT\n    def chat_completion(prompt = nil, params: {}, raw: false, openai: false)\n      raise \"No prompts defined\" unless self.class.prompts.present?\n\n      loop_count = 0\n\n      current_prompts = self.class.prompts.clone\n\n      while (@current_prompt = current_prompts.shift)\n        next if @current_prompt.if.present? && !instance_exec(&@current_prompt.if)\n        next if @current_prompt.unless.present? 
&& instance_exec(&@current_prompt.unless)\n\n        input = case current_prompt.text\n                when Proc\n                  instance_exec(&current_prompt.text)\n                when String\n                  current_prompt.text\n                when Symbol\n                  send(current_prompt.text)\n                else\n                  last_response.presence || prompt\n                end\n\n        if current_prompt.call.present?\n          current_prompt.call.new(self).call(input).tap do |response|\n            if response.present?\n              transcript << { assistant: response }\n              @last_response = send(current_prompt.name, response)\n            end\n          end\n        else\n          __system_prompt = instance_exec(&current_prompt.system) if current_prompt.system.present? # rubocop:disable Lint/UnderscorePrefixedVariableName\n          __system_prompt ||= system_prompt if respond_to?(:system_prompt)\n          __system_prompt ||= self.class.system_prompt.presence\n          transcript << { system: __system_prompt } if __system_prompt\n          transcript << { user: instance_exec(&current_prompt.text) } # text is required\n\n          params = current_prompt.params.merge(params)\n\n          # set the stream if necessary\n          self.stream = instance_exec(&current_prompt.stream) if current_prompt.stream.present?\n\n          execute_ai_request(params:, raw:, openai:, transcript:, loop_count:)\n        end\n\n        next unless current_prompt.until.present? && !instance_exec(&current_prompt.until)\n\n        if loop_count >= MAX_LOOP_COUNT\n          warn \"Max loop count reached in chat_completion. 
Forcing return.\"\n\n          return last_response\n        else\n          current_prompts.unshift(@current_prompt) # put it back at the front\n          loop_count += 1\n        end\n      end\n\n      last_response\n    end\n\n    def execute_ai_request(params:, raw:, openai:, transcript:, loop_count:)\n      chat_completion_from_superclass(params:, raw:, openai:).then do |response|\n        transcript << { assistant: response }\n        @last_response = send(current_prompt.name, response)\n        self.stream = nil # clear it again so it's not used for the next prompt\n      end\n    rescue StandardError => e\n      # Bubbles the error up the stack if no loops remain\n      raise e if loop_count >= MAX_LOOP_COUNT\n\n      sleep 1 # Wait before continuing\n    end\n\n    # Returns the model parameter of the current prompt or the default model.\n    #\n    # @return [Object] The model parameter of the current prompt or the default model.\n    def model\n      @current_prompt.params[:model] || super\n    end\n\n    # Returns the temperature parameter of the current prompt or the default temperature.\n    #\n    # @return [Float] The temperature parameter of the current prompt or the default temperature.\n    def temperature\n      @current_prompt.params[:temperature] || super\n    end\n\n    # Returns the max_tokens parameter of the current prompt or the default max_tokens.\n    #\n    # @return [Integer] The max_tokens parameter of the current prompt or the default max_tokens.\n    def max_tokens\n      @current_prompt.params[:max_tokens] || super\n    end\n\n    protected\n\n    # workaround for super.chat_completion, which is not available in ruby\n    def chat_completion_from_superclass(*, **kargs)\n      method(:chat_completion).super_method.call(*, **kargs)\n    end\n  end\nend\n"
  },
  {
    "path": "lib/raix/response_format.rb",
    "content": "# frozen_string_literal: true\n\nrequire \"active_support/core_ext/object/deep_dup\"\nrequire \"active_support/core_ext/string/filters\"\n\nmodule Raix\n  # Handles the formatting of responses for AI interactions.\n  #\n  # This class is responsible for converting input data into a JSON schema\n  # that can be used to structure and validate AI responses. It supports\n  # nested structures and arrays, ensuring that the output conforms to\n  # the expected format for AI model interactions.\n  #\n  # @example\n  #   input = { name: { type: \"string\" }, age: { type: \"integer\" } }\n  #   format = ResponseFormat.new(\"PersonInfo\", input)\n  #   schema = format.to_schema\n  #\n  # @attr_reader [String] name The name of the response format\n  # @attr_reader [Hash] input The input data to be formatted\n  class ResponseFormat\n    def initialize(name, input)\n      @name = name\n      @input = input\n    end\n\n    def to_json(*)\n      JSON.pretty_generate(to_schema)\n    end\n\n    def to_schema\n      {\n        type: \"json_schema\",\n        json_schema: {\n          name: @name,\n          schema: {\n            type: \"object\",\n            properties: decode(@input.deep_dup),\n            required: @input.keys,\n            additionalProperties: false\n          },\n          strict: true\n        }\n      }\n    end\n\n    private\n\n    def decode(input)\n      {}.tap do |response|\n        case input\n        when Array\n          response[:type] = \"array\"\n\n          if input.size == 1 && input.first.is_a?(String)\n            response[:items] = { type: input.first }\n          else\n            properties = {}\n            input.each { |item| properties.merge!(decode(item)) }\n            response[:items] = {\n              type: \"object\",\n              properties:,\n              required: properties.keys.select { |key| properties[key].delete(:required) },\n              additionalProperties: false\n            }\n          end\n      
  when Hash\n          input.each do |key, value|\n            response[key] = if value.is_a?(Hash) && value.key?(:type)\n                              value\n                            else\n                              decode(value)\n                            end\n          end\n        else\n          raise \"Invalid input\"\n        end\n      end\n    end\n  end\nend\n"
  },
  {
    "path": "lib/raix/transcript_adapter.rb",
    "content": "# frozen_string_literal: true\n\nmodule Raix\n  # Adapter to convert between Raix's transcript array format and RubyLLM's Message objects\n  class TranscriptAdapter\n    attr_reader :ruby_llm_chat\n\n    def initialize(ruby_llm_chat)\n      @ruby_llm_chat = ruby_llm_chat\n      @pending_messages = []\n    end\n\n    # Add a message in Raix format (hash) to the transcript\n    def <<(message_hash)\n      case message_hash\n      when Array\n        # Handle nested arrays (from function dispatch)\n        message_hash.each { |msg| self << msg }\n      when Hash\n        add_message_from_hash(message_hash)\n      end\n      self\n    end\n\n    # Return all messages in Raix-compatible format\n    def flatten\n      ruby_llm_messages = @ruby_llm_chat.messages.map { |msg| message_to_raix_format(msg) }\n      pending = @pending_messages.map { |msg| normalize_message_format(msg) }\n      (ruby_llm_messages + pending).flatten\n    end\n\n    # Get all messages including pending ones\n    def to_a\n      flatten\n    end\n\n    # Allow iteration\n    def compact\n      flatten.compact\n    end\n\n    # Clear all messages\n    def clear\n      @ruby_llm_chat.reset_messages!\n      @pending_messages.clear\n      self\n    end\n\n    # Get last message\n    def last\n      flatten.last\n    end\n\n    # Get size of transcript\n    def size\n      flatten.size\n    end\n\n    alias length size\n\n    private\n\n    def add_message_from_hash(hash)\n      # Raix abbreviated format: { system: \"text\" }, { user: \"text\" }, { assistant: \"text\" }\n      if hash.key?(:system) || hash.key?(\"system\")\n        content = hash[:system] || hash[\"system\"]\n        @ruby_llm_chat.with_instructions(content)\n        @pending_messages << { role: \"system\", content: }\n      elsif hash.key?(:user) || hash.key?(\"user\")\n        content = hash[:user] || hash[\"user\"]\n        # Don't add to ruby_llm_chat yet - wait for chat_completion call\n        @pending_messages << 
{ role: \"user\", content: }\n      elsif hash.key?(:assistant) || hash.key?(\"assistant\")\n        content = hash[:assistant] || hash[\"assistant\"]\n        @pending_messages << { role: \"assistant\", content: }\n      elsif hash[:role] || hash[\"role\"]\n        # Standard OpenAI format (tool messages, assistant with tool_calls, etc.)\n        @pending_messages << hash.with_indifferent_access\n      end\n    end\n\n    def message_to_raix_format(message)\n      # Return in Raix abbreviated format { system: \"...\", user: \"...\", assistant: \"...\" }\n      # unless it's a tool message which needs full format\n      if message.tool_call? || message.tool_result?\n        result = {\n          role: message.role.to_s,\n          content: message.content\n        }\n        result[:tool_calls] = message.tool_calls if message.tool_call?\n        result[:tool_call_id] = message.tool_call_id if message.tool_result?\n        result\n      else\n        # Use abbreviated format\n        { message.role.to_sym => message.content }\n      end\n    end\n\n    def normalize_message_format(msg)\n      # If already in abbreviated format, return as-is\n      return msg if msg.key?(:system) || msg.key?(:user) || msg.key?(:assistant)\n      return msg if msg[\"system\"] || msg[\"user\"] || msg[\"assistant\"]\n\n      # If in standard format with role/content, convert to abbreviated\n      if msg[:role] || msg[\"role\"]\n        role = (msg[:role] || msg[\"role\"]).to_sym\n        content = msg[:content] || msg[\"content\"]\n\n        # Tool messages stay in full format\n        if msg[:tool_calls] || msg[\"tool_calls\"] || msg[:tool_call_id] || msg[\"tool_call_id\"]\n          return msg\n        end\n\n        # Convert to abbreviated format\n        { role => content }\n      else\n        msg\n      end\n    end\n  end\nend\n"
  },
  {
    "path": "lib/raix/version.rb",
    "content": "# frozen_string_literal: true\n\nmodule Raix\n  VERSION = \"2.0.3\"\nend\n"
  },
  {
    "path": "lib/raix.rb",
    "content": "# frozen_string_literal: true\n\nrequire \"ruby_llm\"\nrequire \"zeitwerk\"\n\n# Ruby AI eXtensions\nmodule Raix\n  class << self\n    attr_writer :configuration\n  end\n\n  # Returns the current configuration instance.\n  def self.configuration\n    @configuration ||= Configuration.new\n  end\n\n  # Configures the Raix gem using a block.\n  def self.configure\n    yield(configuration)\n  end\nend\n\nloader = Zeitwerk::Loader.for_gem\nloader.inflector.inflect(\"mcp\" => \"MCP\")\nloader.setup\n"
  },
  {
    "path": "raix.gemspec",
    "content": "# frozen_string_literal: true\n\nrequire_relative \"lib/raix/version\"\n\nGem::Specification.new do |spec|\n  spec.name = \"raix\"\n  spec.version = Raix::VERSION\n  spec.authors = [\"Obie Fernandez\"]\n  spec.email = [\"obiefernandez@gmail.com\"]\n\n  spec.summary = \"Ruby AI eXtensions\"\n  spec.homepage = \"https://github.com/OlympiaAI/raix\"\n  spec.license = \"MIT\"\n  spec.required_ruby_version = \">= 3.2.2\"\n\n  spec.metadata[\"homepage_uri\"] = spec.homepage\n  spec.metadata[\"source_code_uri\"] = \"https://github.com/OlympiaAI/raix\"\n  spec.metadata[\"changelog_uri\"] = \"https://github.com/OlympiaAI/raix/blob/main/CHANGELOG.md\"\n\n  # Specify which files should be added to the gem when it is released.\n  # The `git ls-files -z` loads the files in the RubyGem that have been added into git.\n  spec.files = Dir.chdir(__dir__) do\n    `git ls-files -z`.split(\"\\x0\").reject do |f|\n      (File.expand_path(f) == __FILE__) || f.start_with?(*%w[bin/ test/ spec/ features/ .git .circleci appveyor])\n    end\n  end\n\n  # Ensure all gem files are world-readable so they work in Docker containers\n  # where gems are installed as root but the app runs as a non-root user.\n  spec.files.each do |f|\n    path = File.join(__dir__, f)\n    File.chmod(0o644, path) if File.file?(path) && !File.executable?(path)\n  end\n  spec.bindir = \"exe\"\n  spec.executables = spec.files.grep(%r{\\Aexe/}) { |f| File.basename(f) }\n  spec.require_paths = [\"lib\"]\n\n  spec.add_dependency \"activesupport\", \">= 6.0\"\n  spec.add_dependency \"faraday-retry\", \"~> 2.0\"\n  spec.add_dependency \"ostruct\"\n  spec.add_dependency \"ruby_llm\", \"~> 1.9\"\n  spec.add_dependency \"zeitwerk\", \"~> 2.7\"\nend\n"
  },
  {
    "path": "sig/raix.rbs",
    "content": "module Raix\n  VERSION: String\n  # See the writing guide of rbs: https://github.com/ruby/rbs#guides\nend\n"
  },
  {
    "path": "spec/files/getting_real.md",
    "content": "Introduction\nWhat is Getting Real?\nAbout 37signals\nCaveats, disclaimers, and other preemptive strikes\n\n\nWhat is Getting Real?\nWant to build a successful web app? Then it’s time to Get Real. Getting Real is a smaller, faster, better way to build software.\nGetting Real is about skipping all the stuff that represents real (charts, graphs, boxes, arrows, schematics, wireframes, etc.) and actually building the real thing.\nGetting real is less. Less mass, less software, less features, less paperwork, less of everything that’s not essential (and most of what you think is essential actually isn’t).\nGetting Real is staying small and being agile.\nGetting Real starts with the interface, the real screens that people are going to use. It begins with what the customer actually experiences and builds backwards from there. This lets you get the interface right before you get the software wrong.\nGetting Real is about iterations and lowering the cost of change. Getting Real is all about launching, tweaking, and constantly improving which makes it a perfect approach for web-based software.\nGetting Real delivers just what customers need and eliminates anything they don’t.\nThe benefits of Getting Real\nGetting Real delivers better results because it forces you to deal with the actual problems you’re trying to solve instead of your ideas about those problems. It forces you to deal with reality.\n\nGetting Real foregoes functional specs and other transitory documentation in favor of building real screens. A functional spec is make-believe, an illusion of agreement, while an actual web page is reality. That’s what your customers are going to see and use. That’s what matters. Getting Real gets you there faster.\nAnd that means you’re making software decisions based on the real thing instead of abstract notions.\nFinally, Getting Real is an approach ideally suited to web-based software. 
The old school model of shipping software in a box and then waiting a year or two to deliver an update is fading away. Unlike installed software, web apps can constantly evolve on a day-to-day basis. Getting Real leverages this advantage for all its worth.\nHow To Write Vigorous Software\nVigorous writing is concise. A sentence should contain no unnecessary words, a paragraph no unnecessary sentences, for the same reason that a drawing should have no unnecessary lines and a machine no unnecessary parts. This requires not that the writer make all sentences short or avoid all detail and treat subjects only in outline, but that every word tell.\nFrom “The Elements of Style” by William Strunk Jr.\nNo more bloat\nThe old way: a lengthy, bureaucratic, we’re-doing-this-to-cover-our-asses process. The typical result: bloated, forgettable software dripping with mediocrity. Blech.\nGetting Real gets rid of...\nTimelines that take months or even years\nPie-in-the-sky functional specs\nScalability debates\n\nInterminable staff meetings\nThe “need” to hire dozens of employees\nMeaningless version numbers\nPristine roadmaps that predict the perfect future\nEndless preference options\nOutsourced support\nUnrealistic user testing\nUseless paperwork\nTop-down hierarchy\nYou don’t need tons of money or a huge team or a lengthy development cycle to build great software. Those things are the ingredients for slow, murky, changeless applications. Getting real takes the opposite approach.\nIn this book we’ll show you...\nThe importance of having a philosophy\nWhy staying small is a good thing\nHow to build less\nHow to get from idea to reality quickly\nHow to staff your team\nWhy you should design from the inside out\nWhy writing is so crucial\nWhy you should underdo your competition\n\nHow to promote your app and spread the word\nSecrets to successful support\nTips on keeping momentum going after launch\n...and lots more\nThe focus is on big-picture ideas. 
We won’t bog you down with detailed code snippets or css tricks. We’ll stick to the major ideas and philosophies that drive the Getting Real process.\nIs this book for you?\nYou’re an entrepreneur, designer, programmer, or marketer working on a big idea.\nYou realize the old rules don’t apply anymore. Distribute your software on cd-roms every year? How 2002. Version numbers? Out the window. You need to build, launch, and tweak. Then rinse and repeat.\nOr maybe you’re not yet on board with agile development and business structures, but you’re eager to learn more.\nIf this sounds like you, then this book is for you.\nNote: While this book’s emphasis is on building a web app, a lot of these ideas are applicable to non-software activities too. The suggestions about small teams, rapid prototyping, expecting iterations, and many others presented here can serve as a guide whether you’re starting a business, writing a book, designing a web site, recording an album, or doing a variety\nof other endeavors. Once you start Getting Real in one area of your life, you’ll see how these concepts can apply to a wide range of activities.\n\nAbout 37signals\nWhat we do\n37signals is a small team that creates simple, focused software. Our products help you collaborate and get organized. More than 350,000 people and small businesses use our web-apps to get things done. Jeremy Wagstaff, of the Wall Street Journal, wrote, “37signals products are beautifully simple, elegant and intuitive tools that make an Outlook screen look like the software equivalent of a torture chamber.” Our apps never put you on the rack.\nOur modus operandi\nWe believe software is too complex. Too many features, too many buttons, too much to learn. Our products do less than the competition – intentionally. 
We build products that work smarter, feel better, allow you to do things your way, and are easier to use.\nOur products\nAs of the publishing date of this book, we have five commercial products and one open source web application framework.\nBasecamp turns project management on its head. Instead of Gantt charts, fancy graphs, and stats-heavy spreadsheets, Basecamp offers message boards, to-do lists, simple scheduling, collaborative writing, and file sharing. So far, hundreds of thousands agree it’s a better way. Farhad Manjoo of Salon.com said\n“Basecamp represents the future of software on the Web.”\n\nCampfire brings simple group chat to the business setting. Businesses in the know understand how valuable real-time persistent group chat can be. Conventional instant messaging is great for quick 1-on-1 chats, but it’s miserable for 3 or more people at once. Campfire solves that problem and plenty more.\nBackpack is the alternative to those confusing, complex, “organize your life in 25 simple steps” personal information managers. Backpack’s simple take on pages, notes, to-dos, and cellphone/email-based reminders is a novel idea in a product category that suffers from status-quo-itis. Thomas Weber of the Wall Street Journal said it’s the best product in its class and David Pogue of the New York Times called it a “very cool” organization tool.\nWriteboard lets you write, share, revise, and compare text\nsolo or with others. It’s the refreshing alternative to bloated word processors that are overkill for 95% of what you write. John Gruber of Daring Fireball said, “Writeboard might be the clearest, simplest web application I’ve ever seen.” Web-guru Jeffrey Zeldman said, “The brilliant minds at 37signals have done it again.”\nTa-da List keeps all your to-do lists together and organized online. Keep the lists to yourself or share them with others for easy collaboration. There’s no easier way to get things done. 
Over 100,000 lists with nearly 1,000,000 items have been created so far.\nRuby on Rails, for developers, is a full-stack, open-source web framework in Ruby for writing real-world applications quickly and easily. Rails takes care of the busy work so you can focus on your idea. Nathan Torkington of the O’Reilly publishing empire said “Ruby on Rails is astounding. Using it is like watching a kung-fu movie, where a dozen bad-ass frameworks prepare to beat up the little newcomer only to be handed their asses in a variety of imaginative ways.” Gotta love that quote.\n\nCaveats, disclaimers, and other preemptive strikes\nJust to get it out of the way, here are our responses to some complaints we hear every now and again:\n“These techniques won’t work for me.”\nGetting real is a system that’s worked terrifically for us. That said, the ideas in this book won’t apply to every project under the sun. If you are building a weapons system, a nuclear control plant, a banking system for millions of customers, or some other life/finance-critical system, you’re going to balk at some of our laissez-faire attitude. Go ahead and take additional precautions.\nAnd it doesn’t have to be an all or nothing proposition. Even if you can’t embrace Getting Real fully, there are bound to be at least a few ideas in here you can sneak past the powers that be.\n“You didn’t invent that idea.”\nWe’re not claiming to have invented these techniques. Many of these concepts have been around in one form or another for a long time. Don’t get huffy if you read some\nof our advice and it reminds you of something you read about already on so and so’s weblog or in some book published 20 years ago. It’s definitely possible. These techniques are not at all exclusive to 37signals. We’re just telling you how we work and what’s been successful for us.\n\n“You take too much of a black and white view.”\nIf our tone seems too know-it-allish, bear with us. 
We think it’s better to present ideas in bold strokes than to be wishy-washy about it. If that comes off as cocky or arrogant, so be it. We’d rather be provocative than water everything down with “it depends...” Of course there will be times when these rules need to be stretched or broken. And some of these tactics may not apply to your situation. Use your judgement and imagination.\n“This won’t work inside my company.”\nThink you’re too big to Get Real? Even Microsoft is Getting Real (and we doubt you’re bigger than them).\nEven if your company typically runs on long-term schedules with big teams, there are still ways to get real. The first step is\nto break up into smaller units. When there’s too many people involved, nothing gets done. The leaner you are, the faster – and better – things get done.\nGranted, it may take some salesmanship. Pitch your company on the Getting Real process. Show them this book. Show them the real results you can achieve in less time and with a smaller team.\nExplain that Getting Real is a low-risk, low-investment way to test new concepts. See if you can split off from the mothership on a smaller project as a proof of concept. Demonstrate results.\nOr, if you really want to be ballsy, go stealth. Fly under the radar and demonstrate real results. That’s the approach the Start.com team has used while Getting Real at Microsoft. “I’ve watched the Start.com team work. They don’t ask permission,” says Robert Scoble, Technical Evangelist at Microsoft. “They have a boss that provides air cover. And they bite off a little bit at a time and do that and respond to feedback.”\n\nShipping Microsoft’s Start.com\nIn big companies, processes and meetings are the norm. Many months are spent on planning features and arguing details with the goal of everyone reaching an agreement on what is the “right” thing for the customer.\nThat may be the right approach for shrink-wrapped software, but with the web we have an incredible advantage. Just ship it! 
Let the user tell you if it’s the right thing and if it’s not, hey you can fix it and ship it to the web the same day if you want! There is no word stronger than the customer’s – resist the urge to engage in long-winded meetings and arguments. Just ship it and prove a point.\nMuch easier said than done – this implies:\nMonths of planning are not necessary.\nMonths of writing specs are not necessary – specs should have the foundations nailed and details figured out and refined during the development phase. Don’t try to close all open issues and nail every single detail before development starts.\nShip less features, but quality features.\nYou don’t need a big bang approach with a whole new release and bunch of features. Give the users byte-size pieces that they can digest.\nIf there are minor bugs, ship it as soon as you have the core scenarios nailed and ship the bug fixes to web gradually after that. The faster you get the user feedback the better. Ideas can sound great on paper but in practice turn out to be suboptimal. The sooner you find out about fundamental issues that are wrong with an idea, the better.\nOnce you iterate quickly and react on customer feedback, you will establish a customer connection. Remember the goal is to win the customer by building what they want.\n-Sanaz Ahari, Program Manager of Start.com, Microsoft\n\n\nThe Starting Line\nBuild Less\nWhat’s Your Problem?\nFund Yourself\nFix Time and Budget, Flex Scope\nHave an Enemy\nIt Shouldn’t be a Chore\n\n\nBuild Less\nUnderdo your competition\nConventional wisdom says that to beat your competitors you need to one-up them. If they have four features, you need five (or 15, or 25). If they’re spending x, you need to spend xx. If they have 20, you need 30.\nThis sort of one-upping Cold War mentality is a dead-end. It’s an expensive, defensive, and paranoid way of building products. Defensive, paranoid companies can’t think ahead, they can only think behind. 
They don’t lead, they follow.\nIf you want to build a company that follows, you might as well put down this book now.\nSo what to do then? The answer is less. Do less than your competitors to beat them. Solve the simple problems and leave the hairy, difficult, nasty problems to everyone else. Instead of one-upping, try one-downing. Instead of outdoing, try underdoing.\nWe’ll cover the concept of less throughout this book, but for starters, less means:\nLess features\nLess options/preferences\nLess people and corporate structure\nLess meetings and abstractions\nLess promises\n\n\nWhat’s Your Problem?\nBuild software for yourself\nA great way to build software is to start out by solving your own problems. You’ll be the target audience and you’ll know what’s important and what’s not. That gives you a great head start on delivering a breakout product.\nThe key here is understanding that you’re not alone. If you’re having this problem, it’s likely hundreds of thousands of others are in the same boat. There’s your market. Wasn’t that easy?\nBasecamp originated in a problem: As a design firm we needed a simple way to communicate with our clients about projects. We started out doing this via client extranets which we would update manually. But changing the html by hand every time a project needed to be updated just wasn’t working. These project sites always seemed to go stale and eventually were abandoned. It was frustrating because it left us disorganized and left clients in the dark.\nSo we started looking at other options. Yet every tool we found either 1) didn’t do what we needed or 2) was bloated with features we didn’t need – like billing, strict access controls, charts, graphs, etc. We knew there had to be a better way so we decided to build our own.\nWhen you solve your own problem, you create a tool that you’re passionate about. And passion is key. Passion means you’ll truly use it and care about it. 
And that’s the best way to get others to feel passionate about it too.\n\nScratching your own itch\nThe Open Source world embraced this mantra a long time ago – they call it “scratching your own itch.” For the open source developers, it means they get the tools they want, delivered the way they want them. But the benefit goes much deeper.\nAs the designer or developer of a new application, you’re faced with hundreds of micro-decisions each and every day: blue or green? One table or two? Static or dynamic? Abort or recover? How do we make these decisions? If it’s something we recognize as being important, we might ask. The rest, we guess. And all that guessing builds up a kind of debt in our applications – an interconnected web of assumptions.\nAs a developer, I hate this. The knowledge of all these small-scale timebombs in the applications I write adds to my stress. Open Source developers, scratching their own itches, don’t suffer this. Because they are their own users, they know the correct answers to 90% of the decisions they have to make. I think this is one of the reasons folks come home after a hard day of coding and then work on open source: It’s relaxing.\n–Dave Thomas, The Pragmatic Programmers\n\nBorn out of necessity\nCampaign Monitor really was born out of necessity. For years we’d been frustrated by the quality of the email marketing options out there. One tool would do x and y but never z, the next had y\nand z nailed but just couldn’t get x right. We couldn’t win.\nWe decided to clear our schedule and have a go at building our dream email marketing tool. We consciously decided not to look at what everyone else was doing and instead build something that would make ours and our customer’s lives a little easier.\nAs it turned out, we weren’t the only ones who were unhappy with the options out there. We made a few modifications to the software so any design firm could use it and started spreading the word. 
In less than six months, thousands of designers were using Campaign Monitor to send email newsletters for themselves and their clients.\n–David Greiner, founder, Campaign Monitor\n\n\nYou need to care about it\nWhen you write a book, you need to have more than an interesting story. You need to have a desire to tell the story. You need to be personally invested in some way. If you’re going to live with something for two years, three years, the rest of your life, you need to care about it.\n–Malcolm Gladwell, author (from A Few Thin Slices of Malcolm Gladwell)\n\n\nFund Yourself\nOutside money is plan B\nThe first priority of many startups is acquiring funding from investors. But remember, if you turn to outsiders for funding, you’ll have to answer to them too. Expectations are raised. Investors want their money back – and quickly. The sad fact is cashing in often begins to trump building a quality product.\nThese days it doesn’t take much to get rolling. Hardware\nis cheap and plenty of great infrastructure software is open source and free. And passion doesn’t come with a price tag.\nSo do what you can with the cash on hand. Think hard and determine what’s really essential and what you can do without. What can you do with three people instead of ten? What can you do with $20k instead of $100k? What can you do in three months instead of six? What can you do if you keep your day job and build your app on the side?\nConstraints force creativity\nRun on limited resources and you’ll be forced to reckon with constraints earlier and more intensely. And that’s a good thing. Constraints drive innovation.\n\n\nConstraints also force you to get your idea out in the wild sooner rather than later – another good thing. A month or two out of the gates you should have a pretty good idea of whether you’re onto something or not. If you are, you’ll be self-sustainable shortly and won’t need external cash. If your idea’s a lemon, it’s time to go back to the drawing board. 
At least you know now as opposed to months (or years) down the road. And at least you can back out easily. Exit plans get a lot trickier once investors are involved.\nIf you’re creating software just to make a quick buck, it will show. Truth is a quick payout is pretty unlikely. So focus on building a quality tool that you and your customers can live with for a long time.\n\nTwo paths\n[Jake Walker started one company with investor money (Disclive) and one without (The Show). Here he discusses the differences between the two paths.]\n\nThe root of all the problems wasn’t raising money itself, but everything that came along with it. The expectations are simply higher. People start taking salary, and the motivation is to build it up and sell it, or find some other way for the initial investors to make their money back. In the case of the first company,\nwe simply started acting much bigger than we were – out of necessity...\n[With The Show] we realized that we could deliver a much better product with less costs, only with more time. And we gambled with a bit of our own money that people would be willing to wait for quality over speed. But the company has stayed (and will likely continue to be) a small operation. And ever since that first project, we’ve been fully self funded. With just a bit of creative terms from our vendors, we’ve never really need to put much of our own money into the operation at all. And the expectation isn’t to grow and sell, but to grow for the sake of growth and to continue to benefit from it financially.\n–A comment from Signal vs. Noise\n\n"
  },
  {
    "path": "spec/raix/before_completion_spec.rb",
    "content": "# frozen_string_literal: true\n\nRSpec.describe \"before_completion hook\" do\n  # Helper to create a mock response hash that chat_completion expects\n  def mock_response(content = \"test response\")\n    {\n      \"choices\" => [\n        {\n          \"message\" => {\n            \"role\" => \"assistant\",\n            \"content\" => content,\n            \"tool_calls\" => nil\n          },\n          \"finish_reason\" => \"stop\"\n        }\n      ],\n      \"usage\" => {\n        \"prompt_tokens\" => 10,\n        \"completion_tokens\" => 5,\n        \"total_tokens\" => 15\n      }\n    }\n  end\n\n  # Clean up global configuration after each test\n  after do\n    Raix.configuration.instance_variable_set(:@before_completion, nil)\n  end\n\n  describe \"global-level before_completion hook\" do\n    let(:chat_class) do\n      Class.new do\n        include Raix::ChatCompletion\n\n        def initialize\n          self.model = \"base-model\"\n          transcript << { user: \"Hello\" }\n        end\n      end\n    end\n\n    it \"allows setting a before_completion hook at global level\" do\n      hook = ->(_context) { { model: \"global-model\" } }\n      Raix.configure { |c| c.before_completion = hook }\n\n      expect(Raix.configuration.before_completion).to eq(hook)\n    end\n\n    it \"calls the hook and merges returned params\" do\n      hook_called = false\n      Raix.configure do |c|\n        c.before_completion = lambda { |_context|\n          hook_called = true\n          { temperature: 0.42 }\n        }\n      end\n\n      instance = chat_class.new\n      allow(instance).to receive(:ruby_llm_request).and_return(mock_response)\n\n      instance.chat_completion\n\n      expect(hook_called).to be true\n    end\n  end\n\n  describe \"class-level before_completion hook\" do\n    let(:chat_class) do\n      Class.new do\n        include Raix::ChatCompletion\n\n        configure do |c|\n          c.before_completion = ->(_context) { { temperature: 
0.9 } }\n        end\n\n        def initialize\n          self.model = \"test-model\"\n          transcript << { user: \"Hello\" }\n        end\n      end\n    end\n\n    it \"allows setting a before_completion hook at class level\" do\n      expect(chat_class.configuration.before_completion).to be_a(Proc)\n    end\n\n    it \"calls the class-level hook\" do\n      instance = chat_class.new\n      allow(instance).to receive(:ruby_llm_request).and_return(mock_response)\n\n      expect(instance.chat_completion).to eq(\"test response\")\n    end\n  end\n\n  describe \"instance-level before_completion hook\" do\n    let(:chat_class) do\n      Class.new do\n        include Raix::ChatCompletion\n\n        def initialize\n          self.model = \"test-model\"\n          transcript << { user: \"Hello\" }\n        end\n      end\n    end\n\n    it \"allows setting a before_completion hook at instance level\" do\n      instance = chat_class.new\n      hook = ->(_context) { { temperature: 0.5 } }\n      instance.before_completion = hook\n\n      expect(instance.before_completion).to eq(hook)\n    end\n\n    it \"calls the instance-level hook\" do\n      instance = chat_class.new\n      instance.before_completion = ->(_context) { { temperature: 0.5 } }\n      allow(instance).to receive(:ruby_llm_request).and_return(mock_response)\n\n      expect(instance.chat_completion).to eq(\"test response\")\n    end\n  end\n\n  describe \"hook merge order\" do\n    let(:chat_class) do\n      Class.new do\n        include Raix::ChatCompletion\n\n        configure do |c|\n          c.before_completion = ->(_context) { { temperature: 0.5, max_tokens: 500 } }\n        end\n\n        def initialize\n          self.model = \"test-model\"\n          transcript << { user: \"Hello\" }\n        end\n      end\n    end\n\n    it \"merges hooks in order: global -> class -> instance (later overrides earlier)\" do\n      # Set up hooks at all three levels\n      Raix.configure do |c|\n        
c.before_completion = ->(_context) { { temperature: 0.1, seed: 100 } }\n      end\n\n      instance = chat_class.new\n      instance.before_completion = ->(_context) { { temperature: 0.9 } }\n\n      # Track what params are passed via a spy\n      params_received = nil\n      allow(instance).to receive(:ruby_llm_request) do |args|\n        params_received = args[:params]\n        mock_response\n      end\n\n      instance.chat_completion\n\n      # Instance hook (0.9) should override class hook (0.5) which overrides global (0.1)\n      expect(params_received[:temperature]).to eq(0.9)\n      # Class hook max_tokens should be present\n      expect(params_received[:max_tokens]).to eq(500)\n      # Global hook seed should be present\n      expect(params_received[:seed]).to eq(100)\n    end\n  end\n\n  describe \"hook context object\" do\n    let(:chat_class) do\n      Class.new do\n        include Raix::ChatCompletion\n\n        def initialize\n          self.model = \"test-model\"\n          transcript << { user: \"Hello\" }\n        end\n      end\n    end\n\n    it \"passes a CompletionContext with correct data\" do\n      context_received = nil\n\n      Raix.configure do |c|\n        c.before_completion = lambda { |context|\n          context_received = context\n          {}\n        }\n      end\n\n      instance = chat_class.new\n      allow(instance).to receive(:ruby_llm_request).and_return(mock_response)\n\n      instance.chat_completion\n\n      expect(context_received).to be_a(Raix::CompletionContext)\n      expect(context_received.chat_completion).to eq(instance)\n      expect(context_received.messages).to be_an(Array)\n      expect(context_received.params).to be_a(Hash)\n      expect(context_received.current_model).to eq(\"test-model\")\n    end\n\n    it \"receives transformed messages in OpenAI format\" do\n      context_received = nil\n\n      Raix.configure do |c|\n        c.before_completion = lambda { |context|\n          context_received = context\n  
        {}\n        }\n      end\n\n      instance = chat_class.new\n      allow(instance).to receive(:ruby_llm_request).and_return(mock_response)\n\n      instance.chat_completion\n\n      # Messages should be in OpenAI format (transformed), not abbreviated format\n      expect(context_received.messages.first).to have_key(:role)\n      expect(context_received.messages.first).to have_key(:content)\n      expect(context_received.messages.first[:role]).to eq(\"user\")\n    end\n  end\n\n  describe \"hook returning nil\" do\n    let(:chat_class) do\n      Class.new do\n        include Raix::ChatCompletion\n\n        def initialize\n          self.model = \"test-model\"\n          transcript << { user: \"Hello\" }\n        end\n      end\n    end\n\n    it \"skips hooks that return nil\" do\n      Raix.configure do |c|\n        c.before_completion = ->(_context) {}\n      end\n\n      instance = chat_class.new\n      allow(instance).to receive(:ruby_llm_request).and_return(mock_response)\n\n      # Should not raise an error\n      expect { instance.chat_completion }.not_to raise_error\n    end\n  end\n\n  describe \"hook returning non-hash\" do\n    let(:chat_class) do\n      Class.new do\n        include Raix::ChatCompletion\n\n        def initialize\n          self.model = \"test-model\"\n          transcript << { user: \"Hello\" }\n        end\n      end\n    end\n\n    it \"skips hooks that return non-hash values\" do\n      Raix.configure do |c|\n        c.before_completion = ->(_context) { \"not a hash\" }\n      end\n\n      instance = chat_class.new\n      allow(instance).to receive(:ruby_llm_request).and_return(mock_response)\n\n      # Should not raise an error\n      expect { instance.chat_completion }.not_to raise_error\n    end\n  end\n\n  describe \"hook with callable object\" do\n    let(:chat_class) do\n      Class.new do\n        include Raix::ChatCompletion\n\n        def initialize\n          self.model = \"test-model\"\n          transcript << { 
user: \"Hello\" }\n        end\n      end\n    end\n\n    it \"works with any object that responds to #call\" do\n      hook_class = Class.new do\n        def call(_context)\n          { temperature: 0.42 }\n        end\n      end\n\n      params_received = nil\n\n      instance = chat_class.new\n      instance.before_completion = hook_class.new\n\n      allow(instance).to receive(:ruby_llm_request) do |args|\n        params_received = args[:params]\n        mock_response\n      end\n\n      instance.chat_completion\n\n      expect(params_received[:temperature]).to eq(0.42)\n    end\n  end\n\n  describe \"hook can override any parameter\" do\n    let(:chat_class) do\n      Class.new do\n        include Raix::ChatCompletion\n\n        def initialize\n          self.model = \"test-model\"\n          transcript << { user: \"Hello\" }\n        end\n      end\n    end\n\n    it \"can override model\" do\n      params_received = nil\n\n      instance = chat_class.new\n      instance.before_completion = ->(_context) { { model: \"different-model\" } }\n\n      allow(instance).to receive(:ruby_llm_request) do |args|\n        params_received = args\n        mock_response\n      end\n\n      instance.chat_completion\n\n      # Model is passed separately in ruby_llm_request\n      expect(params_received[:model]).to eq(\"different-model\")\n    end\n\n    it \"can override multiple parameters at once\" do\n      params_received = nil\n\n      instance = chat_class.new\n      instance.before_completion = lambda { |_context|\n        {\n          temperature: 0.8,\n          max_tokens: 2000,\n          frequency_penalty: 0.5,\n          presence_penalty: 0.3,\n          top_p: 0.95\n        }\n      }\n\n      allow(instance).to receive(:ruby_llm_request) do |args|\n        params_received = args[:params]\n        mock_response\n      end\n\n      instance.chat_completion\n\n      expect(params_received[:temperature]).to eq(0.8)\n      expect(params_received[:max_tokens]).to 
eq(2000)\n      expect(params_received[:frequency_penalty]).to eq(0.5)\n      expect(params_received[:presence_penalty]).to eq(0.3)\n      expect(params_received[:top_p]).to eq(0.95)\n    end\n  end\n\n  describe \"message mutation\" do\n    let(:chat_class) do\n      Class.new do\n        include Raix::ChatCompletion\n\n        def initialize\n          self.model = \"test-model\"\n          transcript << { user: \"My SSN is 123-45-6789\" }\n        end\n      end\n    end\n\n    it \"allows hooks to redact PII from messages\" do\n      messages_sent = nil\n\n      instance = chat_class.new\n      instance.before_completion = lambda { |context|\n        # Redact SSN pattern from all messages\n        context.messages.each do |msg|\n          if msg[:content].is_a?(String)\n            msg[:content] = msg[:content].gsub(/\\d{3}-\\d{2}-\\d{4}/, \"[SSN REDACTED]\")\n          end\n        end\n        {}\n      }\n\n      allow(instance).to receive(:ruby_llm_request) do |args|\n        messages_sent = args[:messages]\n        mock_response\n      end\n\n      instance.chat_completion\n\n      expect(messages_sent.first[:content]).to eq(\"My SSN is [SSN REDACTED]\")\n    end\n\n    it \"allows hooks to add messages\" do\n      messages_sent = nil\n\n      instance = chat_class.new\n      instance.before_completion = lambda { |context|\n        context.messages.unshift({ role: \"system\", content: \"Be helpful\" })\n        {}\n      }\n\n      allow(instance).to receive(:ruby_llm_request) do |args|\n        messages_sent = args[:messages]\n        mock_response\n      end\n\n      instance.chat_completion\n\n      expect(messages_sent.length).to eq(2)\n      expect(messages_sent.first[:role]).to eq(\"system\")\n      expect(messages_sent.first[:content]).to eq(\"Be helpful\")\n    end\n\n    it \"allows hooks to filter/remove messages\" do\n      messages_sent = nil\n\n      instance = chat_class.new\n      instance.transcript << { assistant: \"I can help with that\" 
}\n      instance.transcript << { user: \"Thanks!\" }\n\n      instance.before_completion = lambda { |context|\n        # Keep only the last user message\n        context.messages.replace([context.messages.last])\n        {}\n      }\n\n      allow(instance).to receive(:ruby_llm_request) do |args|\n        messages_sent = args[:messages]\n        mock_response\n      end\n\n      instance.chat_completion\n\n      expect(messages_sent.length).to eq(1)\n      expect(messages_sent.first[:content]).to eq(\"Thanks!\")\n    end\n  end\n\n  describe \"logging use case\" do\n    let(:chat_class) do\n      Class.new do\n        include Raix::ChatCompletion\n\n        def initialize\n          self.model = \"test-model\"\n          transcript << { user: \"Hello\" }\n        end\n      end\n    end\n\n    it \"can be used for logging requests\" do\n      logged_data = nil\n\n      instance = chat_class.new\n      instance.before_completion = lambda { |context|\n        logged_data = {\n          model: context.current_model,\n          message_count: context.messages.length,\n          params: context.params.dup\n        }\n        {} # Return empty hash, just logging\n      }\n\n      allow(instance).to receive(:ruby_llm_request).and_return(mock_response)\n\n      instance.chat_completion\n\n      expect(logged_data[:model]).to eq(\"test-model\")\n      expect(logged_data[:message_count]).to eq(1)\n      expect(logged_data[:params]).to include(:temperature)\n    end\n  end\nend\n"
  },
  {
    "path": "spec/raix/chat_completion_spec.rb",
    "content": "# frozen_string_literal: true\n\nclass MeaningOfLife\n  include Raix::ChatCompletion\n\n  def initialize\n    self.model = \"meta-llama/llama-3.3-8b-instruct:free\"\n    self.seed = 9999 # try to get reproducible results\n    transcript << { user: \"What is the meaning of life?\" }\n  end\nend\n\nclass TestClassLevelConfiguration\n  include Raix::ChatCompletion\n\n  configure do |config|\n    config.model = \"drama-llama\"\n  end\n\n  def initialize\n    transcript << { user: \"What is the meaning of life?\" }\n  end\nend\n\nRSpec.describe MeaningOfLife, :vcr do\n  subject { described_class.new }\n\n  it \"does a completion with OpenAI\" do\n    expect(subject.chat_completion(openai: \"gpt-4o\")).to include(\"meaning of life is\")\n  end\n\n  it \"does a completion with OpenRouter\" do\n    expect(subject.chat_completion).to include(\"meaning of life is\")\n  end\n\n  it \"accepts a messages parameter to override the transcript\" do\n    expect(subject.chat_completion(openai: \"gpt-4.1-nano\", messages: [{ user: \"What is the meaning of life?\" }])).to include(\"meaning of life is\")\n  end\n\n  context \"with predicted outputs\" do\n    let(:completion) { subject.chat_completion(openai: \"gpt-4o\", params: { prediction: }) }\n    let(:prediction) do\n      \"THE MEANING OF LIFE CAN VARY GREATLY FROM PERSON TO PERSON, OFTEN INVOLVING THE PURSUIT OF HAPPINESS, CARE OF OTHERS, AND PERSONAL GROWTH!.\"\n    end\n    let(:response) { Thread.current[:chat_completion_response] }\n\n    before do\n      subject.transcript.clear\n      subject.transcript << { system: \"Answer the user question in ALL CAPS.\" }\n      subject.transcript << { user: \"WHAT IS THE MEANING OF LIFE?\" }\n    end\n\n    # TODO: RubyLLM doesn't support OpenAI's predicted outputs feature yet\n    # This feature needs to be added to RubyLLM or we need a workaround\n    xit \"does a completion with OpenAI\" do\n      expect(completion).to start_with(\"THE MEANING OF LIFE\")\n      
expect(subject.transcript.last).to eq({ assistant: completion })\n      expect(response.dig(\"usage\", \"completion_tokens_details\", \"accepted_prediction_tokens\")).to be > 0\n      expect(response.dig(\"usage\", \"completion_tokens_details\", \"rejected_prediction_tokens\")).to be > 0\n    end\n  end\nend\n\nRSpec.describe TestClassLevelConfiguration, :vcr do\n  subject { described_class.new }\n\n  it \"uses the class-level configured model\" do\n    # The class has model = \"drama-llama\" configured at the class level\n    # Verify the configuration is set\n    expect(described_class.configuration.model).to eq(\"drama-llama\")\n\n    # When chat_completion is called without a model, it should use the class-level config\n    # We can't actually run this with a fake model, but we verify the config is accessible\n    expect(subject.configuration.model).to eq(\"drama-llama\")\n  end\nend\n"
  },
  {
    "path": "spec/raix/completion_context_spec.rb",
    "content": "# frozen_string_literal: true\n\nRSpec.describe Raix::CompletionContext do\n  let(:chat_completion_class) do\n    Class.new do\n      include Raix::ChatCompletion\n\n      def initialize\n        self.model = \"test-model\"\n        transcript << { user: \"Hello\" }\n      end\n    end\n  end\n\n  let(:chat_completion) { chat_completion_class.new }\n  let(:messages) { [{ role: \"user\", content: \"Hello\" }] }\n  let(:params) { { temperature: 0.7, max_tokens: 100 } }\n\n  subject do\n    described_class.new(\n      chat_completion:,\n      messages:,\n      params:\n    )\n  end\n\n  describe \"#chat_completion\" do\n    it \"returns the chat completion instance\" do\n      expect(subject.chat_completion).to eq(chat_completion)\n    end\n  end\n\n  describe \"#messages\" do\n    it \"returns the messages array\" do\n      expect(subject.messages).to eq(messages)\n    end\n\n    it \"allows mutation of messages for content filtering\" do\n      subject.messages << { role: \"system\", content: \"Added by hook\" }\n      expect(subject.messages.length).to eq(2)\n    end\n\n    it \"allows modification of message content for PII redaction\" do\n      subject.messages.first[:content] = \"[REDACTED]\"\n      expect(subject.messages.first[:content]).to eq(\"[REDACTED]\")\n    end\n  end\n\n  describe \"#params\" do\n    it \"returns the params hash\" do\n      expect(subject.params).to eq(params)\n    end\n\n    it \"allows mutation of params\" do\n      subject.params[:temperature] = 0.9\n      expect(subject.params[:temperature]).to eq(0.9)\n    end\n  end\n\n  describe \"#transcript\" do\n    it \"returns the chat completion transcript\" do\n      expect(subject.transcript).to eq(chat_completion.transcript)\n    end\n  end\n\n  describe \"#current_model\" do\n    context \"when chat completion has a model set\" do\n      it \"returns the instance model\" do\n        expect(subject.current_model).to eq(\"test-model\")\n      end\n    end\n\n    context 
\"when chat completion model is nil\" do\n      before { chat_completion.model = nil }\n\n      it \"falls back to configuration model\" do\n        expect(subject.current_model).to eq(chat_completion.configuration.model)\n      end\n    end\n  end\n\n  describe \"#chat_completion_class\" do\n    it \"returns the class that includes ChatCompletion\" do\n      expect(subject.chat_completion_class).to eq(chat_completion_class)\n    end\n  end\n\n  describe \"#configuration\" do\n    it \"returns the chat completion configuration\" do\n      expect(subject.configuration).to eq(chat_completion.configuration)\n    end\n  end\nend\n"
  },
  {
    "path": "spec/raix/configuration_spec.rb",
    "content": "# frozen_string_literal: true\n\nRSpec.describe Raix::Configuration do\n  describe \"#client?\" do\n    context \"with RubyLLM configured via OpenRouter API key\" do\n      it \"returns true\" do\n        configuration = described_class.new(fallback: nil)\n        configuration.ruby_llm_config = RubyLLM::Configuration.new\n        configuration.ruby_llm_config.openrouter_api_key = \"test_key\"\n        expect(configuration.client?).to eq true\n      end\n    end\n\n    context \"with RubyLLM configured via OpenAI API key\" do\n      it \"returns true\" do\n        configuration = described_class.new(fallback: nil)\n        configuration.ruby_llm_config = RubyLLM::Configuration.new\n        configuration.ruby_llm_config.openai_api_key = \"test_key\"\n        expect(configuration.client?).to eq true\n      end\n    end\n\n    context \"without any API configuration\" do\n      it \"returns false\" do\n        configuration = described_class.new(fallback: nil)\n        configuration.ruby_llm_config = RubyLLM::Configuration.new\n        # Clear all API keys\n        configuration.ruby_llm_config.openai_api_key = nil\n        configuration.ruby_llm_config.openrouter_api_key = nil\n        configuration.ruby_llm_config.anthropic_api_key = nil\n        configuration.ruby_llm_config.gemini_api_key = nil\n        expect(configuration.client?).to eq false\n      end\n    end\n  end\nend\n"
  },
  {
    "path": "spec/raix/function_dispatch_spec.rb",
    "content": "# frozen_string_literal: true\n\nclass WhatIsTheWeather\n  include Raix::ChatCompletion\n  include Raix::FunctionDispatch\n\n  function :check_weather, \"Check the weather for a location\", location: { type: \"string\" } do |arguments|\n    \"The weather in #{arguments[:location]} is hot and sunny\"\n  end\n\n  # non_exposed_method is not exposed as a tool function and should not be callable through the chat completion API\n  def non_exposed_method(...)\n    raise \"This should NEVER be called by the chat completion API\"\n  end\n\n  def initialize\n    self.seed = 9999\n    transcript << { user: \"What is the weather in Zipolite, Oaxaca?\" }\n  end\nend\n\nclass MultipleToolCalls\n  include Raix::ChatCompletion\n  include Raix::FunctionDispatch\n\n  function :call_this_function_twice do |arguments|\n    @callback.call(arguments)\n  end\n\n  def initialize(callback)\n    @callback = callback\n  end\nend\n\nclass SearchForFile\n  include Raix::ChatCompletion\n  include Raix::FunctionDispatch\n\n  function :search_for_file,\n           \"Search for a file in the project\",\n           glob_pattern: { type: \"string\", required: true },\n           path: { type: \"string\", optional: true } do |_arguments|\n    \"found\"\n  end\nend\n\nRSpec.describe Raix::FunctionDispatch, :vcr do\n  let(:callback) { double(\"callback\") }\n\n  it \"can call a function and automatically loop to provide text response\" do\n    # The system now automatically continues after tool calls to get a final AI response\n    response = WhatIsTheWeather.new.chat_completion(openai: \"gpt-4o\")\n    # Response should be a string (the AI's final response) not an array\n    expect(response).to be_a(String)\n    # The AI should have processed the weather information in its response\n    expect(response.downcase).to match(/zipolite|oaxaca|weather|hot|sunny/)\n  end\n\n  it \"supports multiple tool calls in a single response\" do\n    subject = MultipleToolCalls.new(callback)\n    
subject.transcript << { user: \"For testing purposes, call the provided tool function twice in a single response.\" }\n    # The callback might be called more than twice due to automatic continuation\n    expect(callback).to receive(:call).at_least(:twice)\n    response = subject.chat_completion(openai: \"gpt-4o\")\n    # Should get a final text response\n    expect(response).to be_a(String)\n  end\n\n  it \"supports filtering tools with the tools parameter\", :vcr do\n    weather = WhatIsTheWeather.new\n    expect(weather).to respond_to(:check_weather)\n    expect { weather.chat_completion(available_tools: [:invalid_tool]) }.to raise_error(Raix::UndeclaredToolError)\n\n    # When available_tools: false, the AI should respond without making tool calls\n    weather2 = WhatIsTheWeather.new\n    weather2.transcript.clear\n    weather2.transcript << { user: \"Just tell me it's sunny, don't use any tools.\" }\n    response = weather2.chat_completion(available_tools: false)\n\n    # Should get a text response without tool calls\n    expect(response).to be_a(String)\n    expect(response.downcase).to include(\"sunny\")\n  end\n\n  it \"tracks required and optional parameters\" do\n    params = SearchForFile.new.tools.first[:function][:parameters]\n    expect(params[:required]).to eq([:glob_pattern])\n    expect(params[:properties].keys).to include(:path)\n    expect(params[:required]).not_to include(:path)\n  end\n\n  # This simulates a middleman on the network that rewrites the function name to anything else\n  def decorate_clients_with_fake_middleman!\n    result = { openai: Raix.configuration.openai_client, openrouter: Raix.configuration.openrouter_client }\n    mocked_middleman =\n      Class.new(SimpleDelegator) do\n        def chat(...)\n          __getobj__.chat(...).tap do |result|\n            result.dig(\"choices\", 0, \"message\", \"tool_calls\")&.each do |tool_call|\n              tool_call[\"function\"][\"name\"] = \"non_exposed_method\"\n            end\n     
     end\n        end\n\n        def complete(...)\n          __getobj__.complete(...).tap do |result|\n            result.dig(\"choices\", 0, \"message\", \"tool_calls\")&.each do |tool_call|\n              tool_call[\"function\"][\"name\"] = \"non_exposed_method\"\n            end\n          end\n        end\n      end\n    Raix.configuration.openai_client = mocked_middleman.new(Raix.configuration.openai_client)\n    Raix.configuration.openrouter_client = mocked_middleman.new(Raix.configuration.openrouter_client)\n    result\n  end\n\n  # Since we are using the send method to execute tool calls, we have to make sure\n  # that the method was explicitly defined as a tool function.\n  #\n  # Otherwise, a middleman on the network could rewrite the method name to anything else and execute\n  # arbitrary code from the class.\n  it \"does not allow non exposed methods to be called\" do\n    # With RubyLLM, the security is still enforced in ChatCompletion#chat_completion\n    # when it checks if the function name is in self.class.functions\n    # We test this by directly simulating what would happen if a middleman changed the response\n\n    weather = WhatIsTheWeather.new\n\n    # Simulate what chat_completion does when it receives a tool call\n    # This mimics the check at line 191 in chat_completion.rb\n    fake_tool_call = { \"function\" => { \"name\" => \"non_exposed_method\", \"arguments\" => \"{}\" } }\n    function_name = fake_tool_call[\"function\"][\"name\"]\n    allowed_functions = weather.class.functions.map { |f| f[:name].to_sym }\n\n    # Verify the security check would catch this\n    expect(allowed_functions).not_to include(function_name.to_sym)\n    expect { raise \"Unauthorized function call: #{function_name}\" unless allowed_functions.include?(function_name.to_sym) }.to raise_error(/Unauthorized function call: non_exposed_method/)\n  end\n\n  it \"respects max_tool_calls parameter\" do\n    # Create a mock that simulates multiple tool calls\n    
weather = WhatIsTheWeather.new\n    weather.transcript.clear\n    weather.transcript << { user: \"Check the weather for multiple cities repeatedly\" }\n\n    # Mock the client to always return tool calls\n    allow(Raix.configuration.openrouter_client).to receive(:complete).and_return({\n                                                                                   \"choices\" => [{\n                                                                                     \"message\" => {\n                                                                                       \"tool_calls\" => [\n                                                                                         {\n                                                                                           \"id\" => \"call_1\",\n                                                                                           \"type\" => \"function\",\n                                                                                           \"function\" => {\n                                                                                             \"name\" => \"check_weather\",\n                                                                                             \"arguments\" => '{\"location\": \"City\"}'\n                                                                                           }\n                                                                                         }\n                                                                                       ]\n                                                                                     }\n                                                                                   }]\n                                                                                 }).and_call_original\n\n    # With max_tool_calls set to 2, it should stop after 2 calls and provide a final response\n    response = 
weather.chat_completion(max_tool_calls: 2)\n    expect(response).to be_a(String)\n  end\nend\n"
  },
  {
    "path": "spec/raix/mcp/sse_spec.rb",
    "content": "# frozen_string_literal: true\n\nrequire \"spec_helper\"\nrequire \"securerandom\"\n\nRSpec.describe Raix::MCP do\n  context \"with live SSE MCP server\" do\n    # Use the official GitMCP endpoint for the MCP documentation server\n    # NOTE: This server needs to implement the SSE protocol correctly with an endpoint event\n    let(:real_mcp_url) { \"https://gitmcp.io/OlympiaAI/raix/docs\" }\n\n    before do\n      # Skip stubs - we want real HTTP requests in this context\n      allow(Faraday).to receive(:post).and_call_original\n\n      stub = self\n      Object.const_set(:LiveMcpConsumer, Class.new do\n        include Raix::ChatCompletion\n        include Raix::FunctionDispatch\n        include Raix::MCP\n\n        sse_mcp stub.real_mcp_url\n\n        def initialize\n          transcript << { role: \"user\", content: \"Testing live MCP integration\" }\n        end\n\n        def self.functions\n          @functions || []\n        end\n      end)\n    end\n\n    after do\n      Object.send(:remove_const, :LiveMcpConsumer) if defined?(LiveMcpConsumer)\n    end\n\n    it \"fetches tools from the GitMCP server\", :novcr do\n      # Ensure the class is defined properly\n      expect(defined?(LiveMcpConsumer)).to eq(\"constant\")\n      expect(LiveMcpConsumer).to be_a(Class)\n\n      # Verify it includes the necessary modules\n      expect(LiveMcpConsumer.included_modules).to include(Raix::ChatCompletion)\n      expect(LiveMcpConsumer.included_modules).to include(Raix::MCP)\n      expect(LiveMcpConsumer.included_modules).to include(Raix::FunctionDispatch)\n\n      # The GitMCP endpoint should return at least one tool\n      expect(LiveMcpConsumer.respond_to?(:functions)).to be true\n      expect(LiveMcpConsumer.functions).not_to be_empty\n\n      # Check instance properties\n      consumer = LiveMcpConsumer.new\n      expect(consumer.tools).not_to be_empty\n\n      # Print available tools for debugging\n      tools = LiveMcpConsumer.functions.map { |f| 
f[:name] }\n\n      unique_key_hash = \"715\"\n      expect(tools).to include(:\"fetch_raix_documentation_#{unique_key_hash}\")\n      expect(tools).to include(:\"search_raix_documentation_#{unique_key_hash}\")\n      expect(tools).to include(:\"search_raix_code_#{unique_key_hash}\")\n      expect(tools).to include(:\"fetch_generic_url_content_#{unique_key_hash}\")\n    end\n\n    it \"successfully calls a function on the GitMCP server\", :novcr do\n      consumer = LiveMcpConsumer.new\n\n      # Get the first available function name\n      function_name = LiveMcpConsumer.functions.first[:name]\n\n      # Most GitMCP documentation functions accept a 'query' parameter\n      # This should work with most documentation tools\n      expect(consumer).to respond_to(function_name)\n\n      transcript_size_before = consumer.transcript.size\n\n      # Call the function with a simple query\n      result = consumer.public_send(function_name, { query: \"What is Raix?\" }, nil)\n\n      # Verify we got a result and transcript was updated\n      expect(result).to be_a(String)\n      expect(result).not_to be_empty\n      # FunctionDispatch adds 2 messages: assistant message with tool_calls and tool result message\n      expect(consumer.transcript.size).to eq(transcript_size_before + 2)\n\n      # Verify the last two entries are the tool call and result\n      entries = consumer.transcript.flatten.last(2)\n      expect(entries.size).to eq(2)\n\n      assistant_msg, tool_msg = entries\n      expect(assistant_msg[:role]).to eq(\"assistant\")\n      expect(function_name.to_s).to include(assistant_msg[:tool_calls].first.dig(:function, :name))\n\n      expect(tool_msg[:role]).to eq(\"tool\")\n      expect(function_name.to_s).to include(tool_msg[:name])\n      expect(tool_msg[:content]).to be_a(String)\n      expect(tool_msg[:content]).to include(\"Raix consists\")\n    end\n  end\nend\n"
  },
  {
    "path": "spec/raix/mcp/stdio_client_spec.rb",
    "content": "# frozen_string_literal: true\n\nrequire \"spec_helper\"\n\nRSpec.describe Raix::MCP::StdioClient do\n  let(:test_server_path) { File.join(__dir__, \"../../support/mcp_server.rb\") }\n  let(:client) { described_class.new(\"ruby\", test_server_path, {}) }\n\n  before do\n    # Ensure the test server exists\n    expect(File.exist?(test_server_path)).to be true\n  end\n\n  after do\n    client&.close\n  end\n\n  describe \"#initialize\" do\n    it \"creates a new client with a bidirectional pipe\" do\n      expect(client.instance_variable_get(:@io)).to be_a(IO)\n      expect(client.instance_variable_get(:@io)).not_to be_closed\n    end\n\n    it \"accepts command arguments and environment variables\" do\n      env = { \"TEST_VAR\" => \"test_value\" }\n      test_client = described_class.new(\"ruby\", \"-e\", \"puts ENV['TEST_VAR']\", env)\n\n      expect(test_client.instance_variable_get(:@io)).to be_a(IO)\n      test_client.close\n    end\n  end\n\n  describe \"#tools\" do\n    it \"returns available tools from the server\" do\n      tools = client.tools\n\n      expect(tools).to be_an(Array)\n      expect(tools).not_to be_empty\n      expect(tools.first).to be_a(Raix::MCP::Tool)\n    end\n\n    it \"returns tools with correct attributes\" do\n      tools = client.tools\n      tool = tools.first\n\n      expect(tool.name).to be_a(String)\n      expect(tool.description).to be_a(String)\n      expect(tool.input_schema).to be_a(Hash)\n    end\n  end\n\n  describe \"#call_tool\" do\n    let(:tool_name) { \"echo\" }\n    let(:arguments) { { message: \"Hello, World!\" } }\n\n    it \"executes a tool with given arguments and returns text content\" do\n      result = client.call_tool(tool_name, **arguments)\n\n      expect(result).to be_a(String)\n      expect(result).to include(\"Hello, World!\")\n    end\n\n    it \"handles tools with no arguments\" do\n      result = client.call_tool(\"ping\")\n\n      expect(result).to be_a(String)\n      
expect(result).to eq(\"pong\")\n    end\n\n    it \"handles tools with complex arguments\" do\n      complex_args = {\n        data: {\n          items: %w[item1 item2],\n          metadata: { key: \"value\" }\n        }\n      }\n\n      result = client.call_tool(\"process_data\", **complex_args)\n      expect(result).to be_a(String)\n      expect(JSON.parse(result)).to include(\"processed\" => true)\n    end\n\n    it \"handles image content by returning structured JSON\" do\n      result = client.call_tool(\"binary_data\")\n      expect(result).to be_a(String)\n\n      parsed = JSON.parse(result)\n      expect(parsed[\"type\"]).to eq(\"image\")\n      expect(parsed[\"data\"]).to eq(\"base64encodeddata\")\n      expect(parsed[\"mime_type\"]).to eq(\"image/png\")\n    end\n\n    it \"raises ProtocolError for invalid tool names\" do\n      expect do\n        client.call_tool(\"nonexistent_tool\")\n      end.to raise_error(Raix::MCP::ProtocolError)\n    end\n\n    it \"raises ProtocolError for invalid arguments\" do\n      expect do\n        client.call_tool(\"echo\", invalid_param: \"value\")\n      end.to raise_error(Raix::MCP::ProtocolError)\n    end\n  end\n\n  describe \"#close\" do\n    it \"closes the connection to the server\" do\n      io = client.instance_variable_get(:@io)\n      expect(io).not_to be_closed\n\n      client.close\n      expect(io).to be_closed\n    end\n\n    it \"can be called multiple times safely\" do\n      client.close\n      expect { client.close }.not_to raise_error\n    end\n  end\n\n  describe \"JSON-RPC communication\" do\n    it \"sends properly formatted JSON-RPC requests\" do\n      # Mock the IO to capture the request\n      io_mock = double(\"IO\")\n      allow(IO).to receive(:popen).and_return(io_mock)\n      allow(io_mock).to receive(:puts)\n      allow(io_mock).to receive(:flush)\n      allow(io_mock).to receive(:gets).and_return('{\"jsonrpc\":\"2.0\",\"id\":\"test\",\"result\":{\"tools\":[]}}')\n      allow(io_mock).to 
receive(:close)\n\n      test_client = described_class.new(\"ruby\", test_server_path, {})\n\n      expect(io_mock).to receive(:puts) do |json_string|\n        request = JSON.parse(json_string)\n        expect(request[\"jsonrpc\"]).to eq(\"2.0\")\n        expect(request[\"method\"]).to eq(\"tools/list\")\n        expect(request[\"id\"]).to be_a(String)\n        expect(request[\"params\"]).to be_a(Hash)\n      end\n\n      test_client.tools\n      test_client.close\n    end\n\n    it \"handles JSON-RPC error responses\" do\n      io_mock = double(\"IO\")\n      allow(IO).to receive(:popen).and_return(io_mock)\n      allow(io_mock).to receive(:puts)\n      allow(io_mock).to receive(:flush)\n      allow(io_mock).to receive(:gets).and_return('{\"jsonrpc\":\"2.0\",\"id\":\"test\",\"error\":{\"code\":-32601,\"message\":\"Method not found\"}}')\n      allow(io_mock).to receive(:close)\n\n      test_client = described_class.new(\"ruby\", test_server_path, {})\n\n      expect do\n        test_client.tools\n      end.to raise_error(Raix::MCP::ProtocolError, \"Method not found\")\n\n      test_client.close\n    end\n  end\n\n  describe \"integration with real MCP server process\" do\n    it \"can communicate with a real subprocess\" do\n      # This test ensures the actual stdio communication works\n      tools = client.tools\n      expect(tools).not_to be_empty\n\n      # Test actual tool execution\n      result = client.call_tool(\"echo\", message: \"Integration test\")\n      expect(result).to include(\"Integration test\")\n    end\n\n    it \"handles server startup and shutdown gracefully\" do\n      # Test that we can create multiple clients\n      client1 = described_class.new(\"ruby\", test_server_path, {})\n      client2 = described_class.new(\"ruby\", test_server_path, {})\n\n      tools1 = client1.tools\n      tools2 = client2.tools\n\n      expect(tools1).not_to be_empty\n      expect(tools2).not_to be_empty\n\n      client1.close\n      client2.close\n    end\n  
end\nend\n"
  },
  {
    "path": "spec/raix/mcp_spec.rb",
    "content": "# frozen_string_literal: true\n\nrequire \"spec_helper\"\n\nRSpec.describe \"MCP type coercion\" do\n  let(:test_class) do\n    Class.new do\n      include Raix::ChatCompletion\n      include Raix::MCP\n\n      def self.name\n        \"TestMcpTypeCoercion\"\n      end\n    end\n  end\n\n  it \"coerces string numbers to numeric types based on schema\" do\n    instance = test_class.new\n\n    # Test integer coercion\n    schema = {\n      \"properties\" => {\n        \"x\" => { \"type\" => \"integer\" },\n        \"y\" => { \"type\" => \"number\" },\n        \"enabled\" => { \"type\" => \"boolean\" },\n        \"items\" => { \"type\" => \"array\" },\n        \"data\" => { \"type\" => \"object\" }\n      }\n    }\n\n    arguments = {\n      \"x\" => \"100\",\n      \"y\" => \"50.5\",\n      \"enabled\" => \"true\",\n      \"items\" => \"[1, 2, 3]\",\n      \"data\" => '{\"key\": \"value\"}'\n    }\n\n    result = instance.send(:coerce_arguments, arguments, schema)\n\n    expect(result[\"x\"]).to eq(100)\n    expect(result[\"x\"]).to be_a(Integer)\n\n    expect(result[\"y\"]).to eq(50.5)\n    expect(result[\"y\"]).to be_a(Float)\n\n    expect(result[\"enabled\"]).to eq(true)\n    expect(result[\"enabled\"]).to be_a(TrueClass)\n\n    expect(result[\"items\"]).to eq([1, 2, 3])\n    expect(result[\"items\"]).to be_a(Array)\n\n    expect(result[\"data\"]).to eq({ \"key\" => \"value\" })\n    expect(result[\"data\"]).to be_a(Hash)\n  end\n\n  it \"preserves non-string values\" do\n    instance = test_class.new\n\n    schema = {\n      \"properties\" => {\n        \"x\" => { \"type\" => \"integer\" },\n        \"y\" => { \"type\" => \"number\" }\n      }\n    }\n\n    arguments = { \"x\" => 100, \"y\" => 50.5 }\n    result = instance.send(:coerce_arguments, arguments, schema)\n\n    expect(result[\"x\"]).to eq(100)\n    expect(result[\"y\"]).to eq(50.5)\n  end\n\n  it \"coerces arrays of objects with item schemas\" do\n    instance = test_class.new\n\n    
schema = {\n      \"properties\" => {\n        \"users\" => {\n          \"type\" => \"array\",\n          \"items\" => {\n            \"type\" => \"object\",\n            \"properties\" => {\n              \"id\" => { \"type\" => \"integer\" },\n              \"age\" => { \"type\" => \"number\" },\n              \"active\" => { \"type\" => \"boolean\" }\n            }\n          }\n        }\n      }\n    }\n\n    arguments = {\n      \"users\" => [\n        { \"id\" => \"123\", \"age\" => \"25.5\", \"active\" => \"true\" },\n        { \"id\" => \"456\", \"age\" => \"30\", \"active\" => \"false\" }\n      ]\n    }\n\n    result = instance.send(:coerce_arguments, arguments, schema)\n\n    expect(result[\"users\"]).to be_a(Array)\n    expect(result[\"users\"].length).to eq(2)\n\n    first_user = result[\"users\"][0]\n    expect(first_user[\"id\"]).to eq(123)\n    expect(first_user[\"id\"]).to be_a(Integer)\n    expect(first_user[\"age\"]).to eq(25.5)\n    expect(first_user[\"age\"]).to be_a(Float)\n    expect(first_user[\"active\"]).to eq(true)\n    expect(first_user[\"active\"]).to be_a(TrueClass)\n\n    second_user = result[\"users\"][1]\n    expect(second_user[\"id\"]).to eq(456)\n    expect(second_user[\"active\"]).to eq(false)\n  end\n\n  it \"handles nested object coercion\" do\n    instance = test_class.new\n\n    schema = {\n      \"properties\" => {\n        \"config\" => {\n          \"type\" => \"object\",\n          \"properties\" => {\n            \"settings\" => {\n              \"type\" => \"object\",\n              \"properties\" => {\n                \"max_retries\" => { \"type\" => \"integer\" },\n                \"timeout\" => { \"type\" => \"number\" },\n                \"debug\" => { \"type\" => \"boolean\" }\n              }\n            },\n            \"metadata\" => {\n              \"type\" => \"object\",\n              \"properties\" => {\n                \"version\" => { \"type\" => \"number\" }\n              }\n            }\n          
}\n        }\n      }\n    }\n\n    arguments = {\n      \"config\" => {\n        \"settings\" => {\n          \"max_retries\" => \"3\",\n          \"timeout\" => \"30.5\",\n          \"debug\" => \"true\"\n        },\n        \"metadata\" => {\n          \"version\" => \"1.2\"\n        }\n      }\n    }\n\n    result = instance.send(:coerce_arguments, arguments, schema)\n\n    expect(result[\"config\"][\"settings\"][\"max_retries\"]).to eq(3)\n    expect(result[\"config\"][\"settings\"][\"max_retries\"]).to be_a(Integer)\n    expect(result[\"config\"][\"settings\"][\"timeout\"]).to eq(30.5)\n    expect(result[\"config\"][\"settings\"][\"timeout\"]).to be_a(Float)\n    expect(result[\"config\"][\"settings\"][\"debug\"]).to eq(true)\n    expect(result[\"config\"][\"metadata\"][\"version\"]).to eq(1.2)\n  end\n\n  it \"handles JSON string inputs for arrays and objects\" do\n    instance = test_class.new\n\n    schema = {\n      \"properties\" => {\n        \"tags\" => { \"type\" => \"array\" },\n        \"config\" => {\n          \"type\" => \"object\",\n          \"properties\" => {\n            \"enabled\" => { \"type\" => \"boolean\" }\n          }\n        }\n      }\n    }\n\n    arguments = {\n      \"tags\" => '[\"tag1\", \"tag2\", \"tag3\"]',\n      \"config\" => '{\"enabled\": \"true\", \"extra\": \"value\"}'\n    }\n\n    result = instance.send(:coerce_arguments, arguments, schema)\n\n    expect(result[\"tags\"]).to eq(%w[tag1 tag2 tag3])\n    expect(result[\"config\"][\"enabled\"]).to eq(true)\n    expect(result[\"config\"][\"extra\"]).to eq(\"value\") # preserves extra properties\n  end\n\n  it \"handles invalid JSON gracefully\" do\n    instance = test_class.new\n\n    schema = {\n      \"properties\" => {\n        \"data\" => { \"type\" => \"array\" }\n      }\n    }\n\n    arguments = {\n      \"data\" => \"not valid json [\"\n    }\n\n    result = instance.send(:coerce_arguments, arguments, schema)\n\n    # Should return the original value when JSON 
parsing fails\n    expect(result[\"data\"]).to eq(\"not valid json [\")\n  end\n\n  it \"handles type mismatches gracefully\" do\n    instance = test_class.new\n\n    schema = {\n      \"properties\" => {\n        \"count\" => { \"type\" => \"integer\" },\n        \"ratio\" => { \"type\" => \"number\" },\n        \"flag\" => { \"type\" => \"boolean\" }\n      }\n    }\n\n    arguments = {\n      \"count\" => \"not a number\",\n      \"ratio\" => \"also not a number\",\n      \"flag\" => \"maybe\"\n    }\n\n    result = instance.send(:coerce_arguments, arguments, schema)\n\n    # Should return original values when coercion is not possible\n    expect(result[\"count\"]).to eq(\"not a number\")\n    expect(result[\"ratio\"]).to eq(\"also not a number\")\n    expect(result[\"flag\"]).to eq(\"maybe\")\n  end\n\n  it \"preserves additional properties not in schema\" do\n    instance = test_class.new\n\n    schema = {\n      \"properties\" => {\n        \"known\" => { \"type\" => \"integer\" }\n      }\n    }\n\n    arguments = {\n      \"known\" => \"42\",\n      \"unknown\" => \"value\",\n      \"extra\" => { \"nested\" => true }\n    }\n\n    result = instance.send(:coerce_arguments, arguments, schema)\n\n    expect(result[\"known\"]).to eq(42)\n    expect(result[\"unknown\"]).to eq(\"value\")\n    expect(result[\"extra\"]).to eq({ \"nested\" => true })\n  end\n\n  it \"handles symbol and string keys interchangeably\" do\n    instance = test_class.new\n\n    schema = {\n      \"properties\" => {\n        \"value\" => { \"type\" => \"integer\" }\n      }\n    }\n\n    arguments = {\n      value: \"100\" # symbol key\n    }\n\n    result = instance.send(:coerce_arguments, arguments, schema)\n\n    expect(result[\"value\"]).to eq(100)\n    expect(result[:value]).to eq(100) # with_indifferent_access allows both\n  end\n\n  it \"handles nil values appropriately\" do\n    instance = test_class.new\n\n    schema = {\n      \"properties\" => {\n        \"optional_int\" => { 
\"type\" => \"integer\" },\n        \"optional_bool\" => { \"type\" => \"boolean\" }\n      }\n    }\n\n    arguments = {\n      \"optional_int\" => nil,\n      \"other_field\" => \"value\"\n    }\n\n    result = instance.send(:coerce_arguments, arguments, schema)\n\n    # nil values are preserved as-is (not coerced)\n    expect(result[\"optional_int\"]).to be_nil\n    expect(result[\"other_field\"]).to eq(\"value\")\n  end\n\n  it \"coerces boolean edge cases correctly\" do\n    instance = test_class.new\n\n    schema = {\n      \"properties\" => {\n        \"bool1\" => { \"type\" => \"boolean\" },\n        \"bool2\" => { \"type\" => \"boolean\" },\n        \"bool3\" => { \"type\" => \"boolean\" },\n        \"bool4\" => { \"type\" => \"boolean\" }\n      }\n    }\n\n    arguments = {\n      \"bool1\" => true,\n      \"bool2\" => false,\n      \"bool3\" => \"true\",\n      \"bool4\" => \"false\"\n    }\n\n    result = instance.send(:coerce_arguments, arguments, schema)\n\n    expect(result[\"bool1\"]).to eq(true)\n    expect(result[\"bool2\"]).to eq(false)\n    expect(result[\"bool3\"]).to eq(true)\n    expect(result[\"bool4\"]).to eq(false)\n  end\nend\n\nRSpec.describe \"MCP function name mapping\" do\n  let(:test_class) do\n    Class.new do\n      include Raix::ChatCompletion\n      include Raix::MCP\n\n      attr_accessor :transcript\n\n      def initialize\n        @transcript = []\n      end\n\n      def self.name\n        \"TestMcpFunctionNames\"\n      end\n\n      def chat_completion_args\n        {}\n      end\n\n      def loop\n        false\n      end\n    end\n  end\n\n  it \"uses local_name with prefix in transcript instead of remote_name\" do\n    client_key = \"client_key\"\n    mock_tool = OpenStruct.new(\n      name: \"get_data\",\n      description: \"Gets some data\",\n      input_schema: {\n        \"properties\" => {\n          \"id\" => { \"type\" => \"integer\" }\n        }\n      }\n    )\n    mock_client = double(\"MCP::StdioClient\",\n    
                     unique_key: client_key,\n                         close: nil,\n                         tools: [mock_tool])\n\n    data_result = \"Data for ID 123\"\n    allow(mock_client).to receive(:call_tool).with(\"get_data\", id: 123).and_return(data_result)\n    test_class.mcp(client: mock_client)\n    instance = test_class.new\n\n    local_method_name = :get_data_client_key\n    expect(instance).to respond_to(local_method_name)\n\n    result = instance.send(local_method_name, { id: \"123\" }, nil)\n    expect(result).to eq(data_result)\n\n    expect(instance.transcript.size).to eq(1)\n    messages = instance.transcript[0]\n    expect(messages).to be_an(Array)\n    expect(messages.size).to eq(2)\n\n    assistant_msg = messages[0]\n    expect(assistant_msg[:role]).to eq(\"assistant\")\n    expect(assistant_msg[:tool_calls][0][:function][:name]).to eq(\"get_data_#{client_key}\")\n\n    tool_msg = messages[1]\n    expect(tool_msg[:role]).to eq(\"tool\")\n    expect(tool_msg[:name]).to eq(\"get_data_#{client_key}\")\n  end\nend\n"
  },
  {
    "path": "spec/raix/message_adapters/base_spec.rb",
    "content": "# frozen_string_literal: true\n\nrequire \"spec_helper\"\n\nRSpec.describe Raix::MessageAdapters::Base do\n  let(:context) { double(\"Context\", model: \"anthropic/claude-3\", cache_at: 10) }\n  let(:adapter) { described_class.new(context) }\n\n  describe \"#transform\" do\n    it \"returns the message if it already has a role\" do\n      message = { role: \"user\", content: \"Hello\" }\n      expect(adapter.transform(message)).to eq(message)\n    end\n\n    it \"transforms a function call message\" do\n      message = { function: { name: \"my_function\", arguments: { param: \"value\" } } }\n      expected = { role: \"assistant\", name: \"my_function\", content: { param: \"value\" }.to_json }\n      expect(adapter.transform(message)).to eq(expected)\n    end\n\n    it \"transforms a result message\" do\n      message = { result: \"Hello\", name: \"my_function\" }\n      expected = { role: \"function\", name: \"my_function\", content: \"Hello\" }\n      expect(adapter.transform(message)).to eq(expected)\n    end\n\n    it \"transforms a message with a single key-value pair\" do\n      message = { user: \"Hello\" }\n      expected = { role: \"user\", content: \"Hello\" }\n      expect(adapter.transform(message)).to eq(expected)\n    end\n\n    it \"transforms a message with a large content\" do\n      message = { user: \"Hello\" * 5 }\n      expected = { role: \"user\", content: [{ type: \"text\", text: \"Hello\" * 5, cache_control: { type: \"ephemeral\" } }] }\n      expect(adapter.transform(message)).to eq(expected)\n    end\n  end\nend\n"
  },
  {
    "path": "spec/raix/nil_content_spec.rb",
    "content": "# frozen_string_literal: true\n\nRSpec.describe \"nil content in final assistant response\" do\n  # Some providers (notably Gemini under certain stop conditions) return a final\n  # assistant message with `content: nil`. The three call sites in chat_completion\n  # that turn the response into a string previously crashed with NoMethodError on\n  # `nil.strip`. They now use `content.to_s.strip` and should return \"\".\n\n  def nil_content_response(tool_calls: nil)\n    {\n      \"choices\" => [\n        {\n          \"message\" => {\n            \"role\" => \"assistant\",\n            \"content\" => nil,\n            \"tool_calls\" => tool_calls\n          },\n          \"finish_reason\" => tool_calls ? \"tool_calls\" : \"stop\"\n        }\n      ],\n      \"usage\" => {\n        \"prompt_tokens\" => 1,\n        \"completion_tokens\" => 0,\n        \"total_tokens\" => 1\n      }\n    }\n  end\n\n  def tool_call_response\n    {\n      \"choices\" => [\n        {\n          \"message\" => {\n            \"role\" => \"assistant\",\n            \"content\" => nil,\n            \"tool_calls\" => [\n              {\n                \"id\" => \"call_1\",\n                \"type\" => \"function\",\n                \"function\" => {\n                  \"name\" => \"do_thing\",\n                  \"arguments\" => \"{}\"\n                }\n              }\n            ]\n          },\n          \"finish_reason\" => \"tool_calls\"\n        }\n      ],\n      \"usage\" => {\n        \"prompt_tokens\" => 1,\n        \"completion_tokens\" => 0,\n        \"total_tokens\" => 1\n      }\n    }\n  end\n\n  describe \"plain final response with nil content\" do\n    let(:chat_class) do\n      Class.new do\n        include Raix::ChatCompletion\n\n        def initialize\n          self.model = \"test-model\"\n          transcript << { user: \"Hello\" }\n        end\n      end\n    end\n\n    it \"returns an empty string instead of raising NoMethodError\" do\n      instance 
= chat_class.new\n      allow(instance).to receive(:ruby_llm_request).and_return(nil_content_response)\n\n      expect { instance.chat_completion }.not_to raise_error\n    end\n\n    it \"returns an empty string when content is nil\" do\n      instance = chat_class.new\n      allow(instance).to receive(:ruby_llm_request).and_return(nil_content_response)\n\n      expect(instance.chat_completion).to eq(\"\")\n    end\n  end\n\n  describe \"max_tool_calls exceeded with nil content on forced final response\" do\n    let(:chat_class) do\n      Class.new do\n        include Raix::ChatCompletion\n        include Raix::FunctionDispatch\n\n        function :do_thing, \"Does a thing\" do |_arguments|\n          \"done\"\n        end\n\n        def initialize\n          self.model = \"test-model\"\n          transcript << { user: \"Call do_thing repeatedly\" }\n        end\n      end\n    end\n\n    it \"returns an empty string instead of raising NoMethodError\" do\n      instance = chat_class.new\n\n      # First call returns a tool call (which exceeds max_tool_calls=0),\n      # forcing chat_completion into the max-tool-calls-exceeded branch.\n      # The forced final response then returns nil content.\n      call_count = 0\n      allow(instance).to receive(:ruby_llm_request) do\n        call_count += 1\n        call_count == 1 ? tool_call_response : nil_content_response\n      end\n\n      expect { instance.chat_completion(max_tool_calls: 0) }.not_to raise_error\n    end\n\n    it \"returns an empty string when forced final content is nil\" do\n      instance = chat_class.new\n\n      call_count = 0\n      allow(instance).to receive(:ruby_llm_request) do\n        call_count += 1\n        call_count == 1 ? tool_call_response : nil_content_response\n      end\n\n      expect(instance.chat_completion(max_tool_calls: 0)).to eq(\"\")\n    end\n  end\n\n  describe \"stop_tool_calls_and_respond! 
with nil content on forced final response\" do\n    let(:chat_class) do\n      Class.new do\n        include Raix::ChatCompletion\n        include Raix::FunctionDispatch\n\n        function :stop_now, \"Halts and forces a final response\" do |_arguments|\n          stop_tool_calls_and_respond!\n          \"stopping\"\n        end\n\n        def initialize\n          self.model = \"test-model\"\n          transcript << { user: \"Call stop_now\" }\n        end\n      end\n    end\n\n    it \"returns an empty string instead of raising NoMethodError\" do\n      instance = chat_class.new\n\n      stop_tool_call = {\n        \"choices\" => [\n          {\n            \"message\" => {\n              \"role\" => \"assistant\",\n              \"content\" => nil,\n              \"tool_calls\" => [\n                {\n                  \"id\" => \"call_stop\",\n                  \"type\" => \"function\",\n                  \"function\" => {\n                    \"name\" => \"stop_now\",\n                    \"arguments\" => \"{}\"\n                  }\n                }\n              ]\n            },\n            \"finish_reason\" => \"tool_calls\"\n          }\n        ],\n        \"usage\" => { \"prompt_tokens\" => 1, \"completion_tokens\" => 0, \"total_tokens\" => 1 }\n      }\n\n      call_count = 0\n      allow(instance).to receive(:ruby_llm_request) do\n        call_count += 1\n        call_count == 1 ? stop_tool_call : nil_content_response\n      end\n\n      expect { instance.chat_completion }.not_to raise_error\n    end\n  end\nend\n"
  },
  {
    "path": "spec/raix/predicate_spec.rb",
    "content": "# frozen_string_literal: true\n\nrequire \"raix/predicate\"\n\nclass Question\n  include Raix::Predicate\n\n  yes? do |explanation|\n    @callback.call(:yes, explanation)\n  end\n\n  no? do |explanation|\n    @callback.call(:no, explanation)\n  end\n\n  maybe? do |explanation|\n    @callback.call(:maybe, explanation)\n  end\n\n  def initialize(callback)\n    @callback = callback\n  end\nend\n\nclass QuestionWithNoBlocks\n  include Raix::Predicate\nend\n\nRSpec.describe Raix::Predicate, :vcr do\n  let(:callback) { double(\"callback\") }\n  let(:question) { Question.new(callback) }\n\n  it \"yes\" do\n    expect(callback).to receive(:call).with(:yes, \"Yes, Ruby on Rails is a web application framework.\")\n    question.ask(\"Is Ruby on Rails a web application framework?\")\n  end\n\n  it \"no\" do\n    expect(callback).to receive(:call).with(:no, \"No, the Eiffel Tower is located in Paris, France, not Madrid, Spain.\")\n    question.ask(\"Is the Eiffel Tower in Madrid?\")\n  end\n\n  it \"maybe\" do\n    expect(callback).to receive(:call).with(:maybe, \"Maybe, it depends on the specific situation and context.\")\n    question.ask(\"Should I quit my job?\")\n  end\n\n  it \"raises an error if no blocks are defined\" do\n    expect { QuestionWithNoBlocks.new.ask(\"Is Ruby on Rails a web application framework?\") }.to raise_error(RuntimeError, \"Please define a yes and/or no block\")\n  end\nend\n"
  },
  {
    "path": "spec/raix/prompt_caching_spec.rb",
    "content": "# frozen_string_literal: true\n\nclass GettingRealAnthropic\n  include Raix::ChatCompletion\n\n  def initialize\n    self.model = \"anthropic/claude-3-haiku\"\n    transcript << {\n      role: \"system\",\n      content: [\n        {\n          type: \"text\",\n          text: \"You are a modern historian studying trends in modern business. You know the following book callsed 'Getting Real' very well:\"\n        },\n        {\n          type: \"text\",\n          text: File.read(\"spec/files/getting_real.md\"),\n          cache_control: {\n            type: \"ephemeral\"\n          }\n        }\n      ]\n    }\n    transcript << { user: \"What is the meaning of Getting Real according to the book? Begin your response with According to the book,\" }\n  end\nend\n\nRSpec.describe GettingRealAnthropic, :vcr do\n  subject { described_class.new }\n\n  it \"does a completion with prompt caching\" do\n    subject.chat_completion.tap do |response|\n      expect(response).to include(\"According to the book\")\n    end\n\n    # now do it again\n    subject.chat_completion\n\n    # pause to let OpenRouter's usage event system catch up\n    sleep 2\n\n    # TODO: RubyLLM doesn't currently expose OpenRouter's generation stats API\n    # For now, we just verify that the second completion also works (would use cached data)\n    # A more thorough test would require adding generation stats support to RubyLLM\n    expect(Thread.current[:chat_completion_response]).to be_present\n  end\nend\n"
  },
  {
    "path": "spec/raix/prompt_declarations_spec.rb",
    "content": "# frozen_string_literal: true\n\nrequire \"spec_helper\"\n\nclass TestCallablePrompt\n  include Raix::ChatCompletion\n\n  def initialize(context)\n    @context = context\n  end\n\n  def call(input = nil)\n    \"Called with: #{input}\"\n  end\nend\n\nclass TestPromptDeclarations\n  include Raix::ChatCompletion\n  include Raix::PromptDeclarations\n\n  prompt call: TestCallablePrompt\nend\n\nclass TestTextPromptDeclarations\n  include Raix::ChatCompletion\n  include Raix::PromptDeclarations\n\n  prompt text: \"Hello, world!\"\nend\n\nclass TestMixedPromptDeclarations\n  include Raix::ChatCompletion\n  include Raix::PromptDeclarations\n\n  prompt call: TestCallablePrompt\n  prompt text: -> { \"Dynamic text\" }\nend\n\nRSpec.describe \"PromptDeclarations\" do\n  describe \"prompt declarations\" do\n    it \"supports call syntax without text\" do\n      expect(TestPromptDeclarations.prompts.count).to eq(1)\n      expect(TestPromptDeclarations.prompts.first.call).to eq(TestCallablePrompt)\n      expect(TestPromptDeclarations.prompts.first.text).to be_nil\n    end\n\n    it \"supports text syntax without call\" do\n      expect(TestTextPromptDeclarations.prompts.count).to eq(1)\n      expect(TestTextPromptDeclarations.prompts.first.call).to be_nil\n      expect(TestTextPromptDeclarations.prompts.first.text).to eq(\"Hello, world!\")\n    end\n\n    it \"supports mixing call and text prompts\" do\n      expect(TestMixedPromptDeclarations.prompts.count).to eq(2)\n      expect(TestMixedPromptDeclarations.prompts.first.call).to eq(TestCallablePrompt)\n      expect(TestMixedPromptDeclarations.prompts.last.text).to be_a(Proc)\n    end\n  end\n\n  describe \"chat_completion execution\" do\n    it \"executes callable prompts without text\" do\n      instance = TestPromptDeclarations.new\n      allow(instance).to receive(:transcript).and_return([])\n\n      # The callable should be instantiated and called\n      result = instance.chat_completion\n      
expect(result).to eq(\"Called with: \")\n    end\n  end\nend\n"
  },
  {
    "path": "spec/raix/response_format_spec.rb",
    "content": "# frozen_string_literal: true\n\nRSpec.describe Raix::ResponseFormat do\n  RSpec::Matchers.define :serialize_to do |expected|\n    match do |actual|\n      @actual = JSON.pretty_generate(actual.to_schema)\n      @expected = JSON.pretty_generate(expected)\n      @actual_json == @expected_json\n    end\n\n    diffable\n  end\n\n  describe \"complex nested structure with arrays\" do\n    it \"matches the expected schema\" do\n      schema = {\n        observations: [\n          {\n            brief: {\n              type: \"string\",\n              description: \"brief description of the observation\",\n              required: true\n            },\n            content: {\n              type: \"string\",\n              description: \"content of the observation\",\n              required: true\n            },\n            importance: {\n              type: \"integer\",\n              description: \"importance of the observation\",\n              required: true\n            }\n          }\n        ]\n      }\n\n      expect(described_class.new(\"observations\", schema)).to serialize_to(\n        {\n          type: \"json_schema\",\n          json_schema: {\n            name: \"observations\",\n            schema: {\n              type: \"object\",\n              properties: {\n                observations: {\n                  type: \"array\",\n                  items: {\n                    type: \"object\",\n                    properties: {\n                      brief: {\n                        type: \"string\",\n                        description: \"brief description of the observation\"\n                      },\n                      content: {\n                        type: \"string\",\n                        description: \"content of the observation\"\n                      },\n                      importance: {\n                        type: \"integer\",\n                        description: \"importance of the observation\"\n                
      }\n                    },\n                    required: %w[brief content importance],\n                    additionalProperties: false\n                  }\n                }\n              },\n              required: [\"observations\"],\n              additionalProperties: false\n            },\n            strict: true\n          }\n        }\n      )\n    end\n  end\n\n  describe \"simple schema with basic types\" do\n    it \"matches the expected schema\" do\n      schema = {\n        name: { type: \"string\" },\n        age: { type: \"integer\" }\n      }\n\n      expect(described_class.new(\"PersonInfo\", schema)).to serialize_to(\n        {\n          type: \"json_schema\",\n          json_schema: {\n            name: \"PersonInfo\",\n            schema: {\n              type: \"object\",\n              properties: {\n                name: {\n                  type: \"string\"\n                },\n                age: {\n                  type: \"integer\"\n                }\n              },\n              required: %w[name age],\n              additionalProperties: false\n            },\n            strict: true\n          }\n        }\n      )\n    end\n  end\n\n  describe \"nested structure with arrays\" do\n    it \"matches the expected schema\" do\n      schema = {\n        company: {\n          name: { type: \"string\" },\n          employees: [\n            {\n              name: { type: \"string\" },\n              role: { type: \"string\" },\n              skills: [\"string\"]\n            }\n          ],\n          locations: [\"string\"]\n        }\n      }\n\n      expect(described_class.new(\"CompanyInfo\", schema)).to serialize_to(\n        {\n          type: \"json_schema\",\n          json_schema: {\n            name: \"CompanyInfo\",\n            schema: {\n              type: \"object\",\n              properties: {\n                company: {\n                  name: {\n                    type: \"string\"\n                  },\n   
               employees: {\n                    type: \"array\",\n                    items: {\n                      type: \"object\",\n                      properties: {\n                        name: {\n                          type: \"string\"\n                        },\n                        role: {\n                          type: \"string\"\n                        },\n                        skills: {\n                          type: \"array\",\n                          items: {\n                            type: \"string\"\n                          }\n                        }\n                      },\n                      required: [],\n                      additionalProperties: false\n                    }\n                  },\n                  locations: {\n                    type: \"array\",\n                    items: {\n                      type: \"string\"\n                    }\n                  }\n                }\n              },\n              required: [\"company\"],\n              additionalProperties: false\n            },\n            strict: true\n          }\n        }\n      )\n    end\n  end\n\n  describe \"person analysis example\" do\n    it \"matches the expected schema\" do\n      schema = {\n        full_name: { type: \"string\" },\n        age_estimate: { type: \"integer\" },\n        personality_traits: [\"string\"]\n      }\n\n      expect(described_class.new(\"PersonAnalysis\", schema)).to serialize_to(\n        {\n          type: \"json_schema\",\n          json_schema: {\n            name: \"PersonAnalysis\",\n            schema: {\n              type: \"object\",\n              properties: {\n                full_name: {\n                  type: \"string\"\n                },\n                age_estimate: {\n                  type: \"integer\"\n                },\n                personality_traits: {\n                  type: \"array\",\n                  items: {\n                    type: \"string\"\n     
             }\n                }\n              },\n              required: %w[full_name age_estimate personality_traits],\n              additionalProperties: false\n            },\n            strict: true\n          }\n        }\n      )\n    end\n  end\nend\n"
  },
  {
    "path": "spec/spec_helper.rb",
    "content": "# frozen_string_literal: true\n\nrequire \"dotenv\"\nrequire \"faraday\"\nrequire \"faraday/retry\"\nrequire \"ruby_llm\"\nrequire \"pry\"\nrequire \"raix\"\n\nrequire \"vcr\"\n\nVCR.configure do |config|\n  config.cassette_library_dir = \"spec/vcr\" # the directory where your cassettes will be saved\n  config.hook_into :webmock # or :fakeweb\n  config.configure_rspec_metadata!\n  config.ignore_localhost = true\n\n  config.default_cassette_options = {\n    match_requests_on: %i[method uri]\n  }\n\n  config.filter_sensitive_data(\"REDACTED\") { |interaction| interaction.request.headers[\"Authorization\"][0].sub(\"Bearer \", \"\") }\nend\n\nDotenv.load\n\nRubyLLM.configure do |config|\n  config.openrouter_api_key = ENV.fetch(\"OR_ACCESS_TOKEN\", nil)\n  config.openai_api_key = ENV.fetch(\"OAI_ACCESS_TOKEN\", nil)\n  config.log_level = Logger::DEBUG\nend\n\nRaix.configure do |config|\n  # Legacy support - can still set these if needed\n  # config.openrouter_client = OpenRouter::Client.new(access_token: ENV.fetch(\"OR_ACCESS_TOKEN\", nil))\n  # config.openai_client = OpenAI::Client.new(access_token: ENV.fetch(\"OAI_ACCESS_TOKEN\", nil))\nend\n\nRSpec.configure do |config|\n  # Enable flags like --only-failures and --next-failure\n  config.example_status_persistence_file_path = \".rspec_status\"\n\n  # Disable RSpec exposing methods globally on `Module` and `main`\n  config.disable_monkey_patching!\n\n  config.expect_with :rspec do |c|\n    c.syntax = :expect\n  end\n\n  config.before(:example, :novcr) do\n    VCR.turn_off!\n    WebMock.disable!\n  end\n\n  config.after(:example, :novcr) do\n    VCR.turn_on!\n    WebMock.enable!\n  end\nend\n"
  },
  {
    "path": "spec/support/mcp_server.rb",
    "content": "# frozen_string_literal: true\n\nrequire \"json\"\n\n# Test MCP Server implementing the Model Context Protocol over stdio transport\n# This server provides several test tools for validating the StdioClient functionality\nclass TestMCPServer\n  JSONRPC_VERSION = \"2.0\"\n\n  def initialize\n    $stdout.sync = true # Enable auto-flushing for immediate output\n    @tools = build_tools\n  end\n\n  def run\n    # Read JSON-RPC requests from stdin and respond on stdout\n    while (line = $stdin.gets)\n      begin\n        request = JSON.parse(line.strip)\n        response = handle_request(request)\n        puts response.to_json if response\n      rescue JSON::ParserError => e\n        error_response = create_error_response(nil, -32_700, \"Parse error: #{e.message}\")\n        puts error_response.to_json\n      rescue StandardError => e\n        error_response = create_error_response(request&.dig(\"id\"), -32_603, \"Internal error: #{e.message}\")\n        puts error_response.to_json\n      end\n    end\n  end\n\n  private\n\n  def create_response(id:, result: nil, error: nil)\n    {\n      jsonrpc: JSONRPC_VERSION,\n      id:,\n      result:,\n      error:\n    }.compact\n  end\n\n  def create_error_response(id, code, message)\n    create_response(id:, error: { code:, message: })\n  end\n\n  def handle_request(request)\n    method = request[\"method\"]\n    params = request[\"params\"] || {}\n    id = request[\"id\"]\n\n    case method\n    when \"tools/list\"\n      handle_tools_list(id)\n    when \"tools/call\"\n      handle_tools_call(id, params)\n    else\n      create_error_response(id, -32_601, \"Method not found: #{method}\")\n    end\n  end\n\n  def handle_tools_list(id)\n    tools_without_handlers = @tools.values.map do |tool|\n      tool.except(\"handler\")\n    end\n    create_response(id:, result: { tools: tools_without_handlers })\n  end\n\n  def handle_tools_call(id, params)\n    tool_name = params[\"name\"]\n    arguments = 
params[\"arguments\"] || {}\n\n    tool = @tools[tool_name]\n    unless tool\n      return create_error_response(id, -32_602, \"Unknown tool: #{tool_name}\")\n    end\n\n    begin\n      content = tool[\"handler\"].call(arguments)\n      create_response(id:, result: { content: })\n    rescue ArgumentError => e\n      create_error_response(id, -32_602, \"Invalid parameters: #{e.message}\")\n    end\n  end\n\n  def build_tools\n    {\n      \"ping\" => {\n        \"name\" => \"ping\",\n        \"description\" => \"Returns 'pong' - useful for testing connectivity\",\n        \"inputSchema\" => {\n          \"type\" => \"object\",\n          \"properties\" => {},\n          \"required\" => []\n        },\n        \"handler\" => ->(_args) { [{ type: \"text\", text: \"pong\" }] }\n      },\n      \"echo\" => {\n        \"name\" => \"echo\",\n        \"description\" => \"Echoes back the provided message\",\n        \"inputSchema\" => {\n          \"type\" => \"object\",\n          \"properties\" => {\n            \"message\" => {\n              \"type\" => \"string\",\n              \"description\" => \"The message to echo back\"\n            }\n          },\n          \"required\" => [\"message\"]\n        },\n        \"handler\" => lambda { |args|\n          raise ArgumentError, \"Missing required parameter: message\" unless args[\"message\"]\n\n          [{ type: \"text\", text: args[\"message\"] }]\n        }\n      },\n      \"process_data\" => {\n        \"name\" => \"process_data\",\n        \"description\" => \"Processes complex data structures\",\n        \"inputSchema\" => {\n          \"type\" => \"object\",\n          \"properties\" => {\n            \"data\" => {\n              \"type\" => \"object\",\n              \"description\" => \"Complex data to process\"\n            }\n          },\n          \"required\" => [\"data\"]\n        },\n        \"handler\" => lambda { |args|\n          raise ArgumentError, \"Missing required parameter: data\" unless 
args[\"data\"]\n\n          [{\n            type: \"text\",\n            text: JSON.generate({ processed: true, original: args[\"data\"] })\n          }]\n        }\n      },\n      \"binary_data\" => {\n        \"name\" => \"binary_data\",\n        \"description\" => \"Returns binary data (for testing non-text content)\",\n        \"inputSchema\" => {\n          \"type\" => \"object\",\n          \"properties\" => {},\n          \"required\" => []\n        },\n        \"handler\" => ->(_args) { [{ type: \"image\", data: \"base64encodeddata\" }] }\n      }\n    }\n  end\nend\n\n# Run the server if this file is executed directly\nif __FILE__ == $PROGRAM_NAME\n  server = TestMCPServer.new\n  server.run\nend\n"
  },
  {
    "path": "spec/vcr/GettingRealAnthropic/does_a_completion_with_prompt_caching.yml",
    "content": "---\nhttp_interactions:\n- request:\n    method: post\n    uri: https://openrouter.ai/api/v1/chat/completions\n    body:\n      encoding: UTF-8\n      string: '{\"messages\":[{\"role\":\"system\",\"content\":[{\"type\":\"text\",\"text\":\"You\n        are a modern historian studying trends in modern business. You know the following\n        book callsed ''Getting Real'' very well:\"},{\"type\":\"text\",\"text\":\"Introduction\\nWhat\n        is Getting Real?\\nAbout 37signals\\nCaveats, disclaimers, and other preemptive\n        strikes\\n\\n\\n What is Getting Real?\\nWant to build a successful web app? Then\n        it’s time to Get Real. Getting Real is a smaller, faster, better way to build\n        software.\\nGetting Real is about skipping all the stuff that represents real\n        (charts, graphs, boxes, arrows, schematics, wireframes, etc.) and actually\n        building the real thing.\\nGetting real is less. Less mass, less software,\n        less features, less paperwork, less of everything that’s not essential (and\n        most of what you think is essential actually isn’t).\\nGetting Real is staying\n        small and being agile.\\nGetting Real starts with the interface, the real screens\n        that people are going to use. It begins with what the customer actually experiences\n        and builds backwards from there.This lets you get the interface right before\n        you get the software wrong.\\nGetting Real is about iterations and lowering\n        the cost of change. Getting Real is all about launching, tweaking, and constantly\n        improving which makes it a perfect approach for web-based software.\\nGetting\n        Real delivers just what customers need and eliminates anything they don’t.\\nThe\n        benefits of Getting Real\\nGetting Real delivers better results because it\n        forces you to deal with the actual problems you’re trying to solve instead\n        of your ideas about those problems. 
It forces you to deal with reality.\\n\\n\n        Getting Real foregoes functional specs and other transitory documentation\n        in favor of building real screens. A functional spec is make-believe, an illusion\n        of agreement, while an actual web page is reality. That’s what your customers\n        are going to see and use. That’s what matters. Getting Real gets you there\n        faster.\\nAnd that means you’re making software decisions based on the real\n        thing instead of abstract notions.\\nFinally, Getting Real is an approach ideally\n        suited to web-based software. The old school model of shipping software in\n        a box and then waiting a year or two to deliver an update is fading away.\n        Unlike installed software, web apps can constantly evolve on a day-to-day\n        basis. Getting Real leverages this advantage for all its worth.\\nHow To Write\n        Vigorous Software\\nVigorous writing is concise.A sentence should contain no\n        unnecessary words, a paragraph no unnecessary sentences, for the same reason\n        that a drawing should have no unnecessary lines and a machine no unnecessary\n        parts.This requires not that the writer make all sentences short or avoid\n        all detail and treat subjects only in outline, but that every word tell.\\nFrom\n        “The Elements of Style” by William Strunk Jr.\\nNo more bloat\\nThe old way:\n        a lengthy, bureaucratic, we’re-doing-this-to-cover- our-asses process. 
The\n        typical result: bloated, forgettable soft- ware dripping with mediocrity.\n        Blech.\\nGetting Real gets rid of...\\nTimelines that take months or even years\n        Pie-in-the-sky functional specs Scalability debates\\n\\n Interminable staff\n        meetings\\nThe “need” to hire dozens of employees Meaningless version numbers\\nPristine\n        roadmaps that predict the perfect future Endless preference options\\nOutsourced\n        support\\nUnrealistic user testing\\nUseless paperwork\\nTop-down hierarchy\\nYou\n        don’t need tons of money or a huge team or a lengthy development cycle to\n        build great software. Those things are the ingredients for slow, murky, changeless\n        applications. Getting real takes the opposite approach.\\nIn this book we’ll\n        show you...\\nThe importance of having a philosophy Why staying small is a\n        good thing\\nHow to build less\\nHow to get from idea to reality quickly How\n        to staff your team\\nWhy you should design from the inside out Why writing\n        is so crucial\\nWhy you should underdo your competition\\n\\n How to promote\n        your app and spread the word Secrets to successful support\\nTips on keeping\n        momentum going after launch\\n...and lots more\\nThe focus is on big-picture\n        ideas. We won’t bog you down with detailed code snippets or css tricks. We’ll\n        stick to the major ideas and philosophies that drive the Getting Real process.\\nIs\n        this book for you?\\nYou’re an entrepreneur, designer, programmer, or marketer\n        working on a big idea.\\nYou realize the old rules don’t apply anymore. Distribute\n        your software on cd-roms every year? How 2002. Version numbers? Out the window.\n        You need to build, launch, and tweak. 
Then rinse and repeat.\\nOr maybe you’re\n        not yet on board with agile development and business structures, but you’re\n        eager to learn more.\\nIf this sounds like you, then this book is for you.\\nNote:\n        While this book’s emphasis is on building a web app, a lot of these ideas\n        are applicable to non-software activities too. The suggestions about small\n        teams, rapid prototyping, expect- ing iterations, and many others presented\n        here can serve as a guide whether you’re starting a business, writing a book,\n        designing a web site, recording an album, or doing a variety\\nof other endeavors.\n        Once you start Getting Real in one area of your life, you’ll see how these\n        concepts can apply to a wide range of activities.\\n\\n About 37signals\\nWhat\n        we do\\n37signals is a small team that creates simple, focused software. Our\n        products help you collaborate and get organized. More than 350,000 people\n        and small businesses use our web-apps to get things done. Jeremy Wagstaff,\n        of the Wall Street Journal, wrote, “37signals products are beautifully simple,\n        elegant and intuitive tools that make an Outlook screen look like the soft-\n        ware equivalent of a torture chamber.” Our apps never put you on the rack.\\nOur\n        modus operandi\\nWe believe software is too complex. Too many features, too\n        many buttons, too much to learn. Our products do less than the competition\n        – intentionally. We build products that work smarter, feel better, allow you\n        to do things your way, and are easier to use.\\nOur products\\nAs of the publishing\n        date of this book, we have five commercial products and one open source web\n        application framework.\\nBasecamp turns project management on its head. 
Instead\n        of Gantt charts, fancy graphs, and stats-heavy spreadsheets, Base- camp offers\n        message boards, to-do lists, simple scheduling, col- laborative writing, and\n        file sharing. So far, hundreds of thou- sands agree it’s a better way. Farhad\n        Manjoo of Salon.com said\\n“Basecamp represents the future of software on the\n        Web.”\\n\\n Campfire brings simple group chat to the business setting. Businesses\n        in the know understand how valuable real-time persistent group chat can be.\n        Conventional instant messaging is great for quick 1-on-1 chats, but it’s miserable\n        for 3 or more people at once. Campfire solves that problem and plenty more.\\nBackpack\n        is the alternative to those confusing, complex, “orga- nize your life in 25\n        simple steps” personal information managers. Backpack’s simple take on pages,\n        notes, to-dos, and cellphone/ email-based reminders is a novel idea in a product\n        category that suffers from status-quo-itis. Thomas Weber of the Wall Street\n        Journal said it’s the best product in its class and David Pogue of the New\n        York Times called it a “very cool” organization tool.\\nWriteboard lets you\n        write, share, revise, and compare text\\nsolo or with others. It’s the refreshing\n        alternative to bloated word processors that are overkill for 95% of what you\n        write. John Gruber of Daring Fireball said, “Writeboard might be the clearest,\n        simplest web application I’ve ever seen.” Web-guru Jeffrey Zeldman said, “The\n        brilliant minds at 37signals have done it again.”\\nTa-da List keeps all your\n        to-do lists together and organized online. Keep the lists to yourself or share\n        them with others for easy collaboration. There’s no easier way to get things\n        done. 
Over 100,000 lists with nearly 1,000,000 items have been created so\n        far.\\nRuby on Rails, for developers, is a full-stack, open-source web framework\n        in Ruby for writing real-world applications quickly and easily. Rails takes\n        care of the busy work so you can focus on your idea. Nathan Torkington of\n        the O’Reilly publish- ing empire said “Ruby on Rails is astounding. Using\n        it is like watching a kung-fu movie, where a dozen bad-ass frameworks prepare\n        to beat up the little newcomer only to be handed their asses in a variety\n        of imaginative ways.” Gotta love that quote.\\n\\n Caveats, disclaimers, and\n        other preemptive strikes\\nJust to get it out of the way, here are our responses\n        to some com- plaints we hear every now and again:\\n“These techniques won’t\n        work for me.”\\nGetting real is a system that’s worked terrifically for us.\n        That said, the ideas in this book won’t apply to every project under the sun.\n        If you are building a weapons system, a nuclear control plant, a banking system\n        for millions of customers, or some other life/finance-critical system, you’re\n        going to balk at some of our laissez-faire attitude. Go ahead and take additional\n        precautions.\\nAnd it doesn’t have to be an all or nothing proposition. Even\n        if you can’t embrace Getting Real fully, there are bound to be at least a\n        few ideas in here you can sneak past the powers that be.\\n“You didn’t invent\n        that idea.”\\nWe’re not claiming to have invented these techniques. Many of\n        these concepts have been around in one form or another for a long time. Don’t\n        get huffy if you read some\\nof our advice and it reminds you of something\n        you read about already on so and so’s weblog or in some book pub- lished 20\n        years ago. It’s definitely possible. These tech- niques are not at all exclusive\n        to 37signals. 
We’re just telling you how we work and what’s been successful\n        for us.\\n\\n “You take too much of a black and white view.”\\nIf our tone seems\n        too know-it-allish, bear with us. We think it’s better to present ideas in\n        bold strokes than to be wishy-washy about it. If that comes off as cocky or\n        arrogant, so be it. We’d rather be provocative than water everything down\n        with “it depends...” Of course there will be times when these rules need to\n        be stretched or broken. And some of these tactics may not apply to your situation.\n        Use your judgement and imagination.\\n“This won’t work inside my company.”\\nThink\n        you’re too big to Get Real? Even Microsoft is Getting Real (and we doubt you’re\n        bigger than them).\\nEven if your company typically runs on long-term schedules\n        with big teams, there are still ways to get real.The first step is\\nto break\n        up into smaller units. When there’s too many people involved, nothing gets\n        done. The leaner you are, the faster – and better – things get done.\\nGranted,\n        it may take some salesmanship. Pitch your company on the Getting Real process.\n        Show them this book. Show them the real results you can achieve in less time\n        and with a smaller team.\\nExplain that Getting Real is a low-risk, low-investment\n        way to test new concepts. See if you can split off from the mothership on\n        a smaller project as a proof of concept. Demonstrate results.\\nOr, if you\n        really want to be ballsy, go stealth. Fly under the radar and demonstrate\n        real results. That’s the approach the Start.com team has used while Getting\n        Real at Microsoft. “I’ve watched the Start.com team work. They don’t ask permission,”\n        says Robert Scoble, Technical Evangelist at Microsoft. “They have a boss that\n        provides air cover. 
And they bite off a little bit at a time and do that and\n        respond to feedback.”\\n\\n   Shipping Microsoft’s Start.com\\nIn big companies,\n        processes and meetings are the norm. Many months are spent on planning features\n        and arguing details with the goal of everyone reaching an agreement on what\n        is the “right” thing for the customer.\\nThat may be the right approach for\n        shrink-wrapped software, but with the web we have an incredible advantage.\n        Just ship it! Let the user tell you if it’s the right thing and if it’s not,\n        hey you can fix it and ship it to the web the same day if you want! There\n        is no word stronger than the customer’s – resist the urge to engage in long-winded\n        meetings and arguments. Just ship it and prove a point.\\nMuch easier said\n        than done – this implies:\\nMonths of planning are not necessary.\\nMonths of\n        writing specs are not necessary – specs should have the foundations nailed\n        and details figured out and refined during the development phase. Don’t try\n        to close all open issues and nail every single detail before development starts.\\nShip\n        less features, but quality features.\\nYou don’t need a big bang approach with\n        a whole new release and bunch of features. Give the users byte-size pieces\n        that they can digest.\\nIf there are minor bugs, ship it as soon you have the\n        core scenarios nailed and ship the bug fixes to web gradually after that.The\n        faster you get the user feedback the better. 
Ideas can sound great on paper\n        but in practice turn out to be suboptimal.The sooner you find out about fundamental\n        issues that are wrong with an idea, the better.\\nOnce you iterate quickly\n        and react on customer feedback, you will establish a customer connection.\n        Remember the goal is to win the customer by building what they want.\\n-Sanaz\n        Ahari, Program Manager of Start.com, Microsoft\\n\\n\\n  The Starting Line\\nBuild\n        Less\\nWhat’s Your Problem?\\nFund Yourself\\nFix Time and Budget, Flex Scope\n        Have an Enemy\\nIt Shouldn’t be a Chore\\n\\n\\n Build Less\\nUnderdo your competition\\nConventional\n        wisdom says that to beat your competitors you need to one-up them. If they\n        have four features, you need five (or 15, or 25). If they’re spending x, you\n        need to spend xx. If they have 20, you need 30.\\nThis sort of one-upping Cold\n        War mentality is a dead-end. It’s an expensive, defensive, and paranoid way\n        of building products. Defensive, paranoid companies can’t think ahead, they\n        can only think behind. They don’t lead, they follow.\\nIf you want to build\n        a company that follows, you might as well put down this book now.\\nSo what\n        to do then? The answer is less. Do less than your com- petitors to beat them.\n        Solve the simple problems and leave the hairy, difficult, nasty problems to\n        everyone else. Instead of one- upping, try one-downing. Instead of outdoing,\n        try underdoing.\\nWe’ll cover the concept of less throughout this book, but\n        for starters, less means:\\nLess features\\nLess options/preferences\\nLess people\n        and corporate structure Less meetings and abstractions\\nLess promises\\n\\n\\n\n        What’s Your Problem?\\nBuild software for yourself\\nA great way to build software\n        is to start out by solving your own problems. 
You’ll be the target audience\n        and you’ll know what’s important and what’s not. That gives you a great head\n        start on delivering a breakout product.\\nThe key here is understanding that\n        you’re not alone. If you’re having this problem, it’s likely hundreds of thousands\n        of others are in the same boat. There’s your market. Wasn’t that easy?\\nBasecamp\n        originated in a problem: As a design firm we needed a simple way to communicate\n        with our clients about projects. We started out doing this via client ex-\n        tranets which we would update manually. But changing the html by hand every\n        time a project needed to be updated just wasn’t working. These project sites\n        always seemed to go stale and eventually were abandoned. It was frustrating\n        because it left us disorganized and left clients in the dark.\\nSo we started\n        looking at other options. Yet every tool we found either 1) didn’t do what\n        we needed or 2) was bloated with fea- tures we didn’t need – like billing,\n        strict access controls, charts, graphs, etc. We knew there had to be a better\n        way so we decided to build our own.\\nWhen you solve your own problem, you\n        create a tool that you’re passionate about. And passion is key. Passion means\n        you’ll truly use it and care about it. And that’s the best way to get others\n        to feel passionate about it too.\\n\\n   Scratching your own itch\\nThe Open\n        Source world embraced this mantra a long time ago – they call it “scratching\n        your own itch.” For the open source developers, it means they get the tools\n        they want, delivered the way they want them. But the benefit goes much deeper.\\nAs\n        the designer or developer of a new application, you’re faced with hundreds\n        of micro-decisions each and every day: blue or green? One table or two? Static\n        or dynamic? Abort or recover? 
How do we make these decisions? If it’s something\n        we recognize as being important, we might ask.The rest, we guess.And all that\n        guessing builds up a kind of debt in our applications – an interconnected\n        web of assumptions.\\nAs a developer, I hate this.The knowledge of all these\n        small-scale timebombs in the applications I write adds to my stress. Open\n        Source developers, scratching their own itches, don’t suffer this. Because\n        they are their own users, they know the correct answers to 90% of the decisions\n        they have to make. I think this is one of the reasons folks come home after\n        a hard day of coding and then work on open source: It’s relaxing.\\n–Dave Thomas,\n        The Pragmatic Programmers\\n\\nBorn out of necessity\\nCampaign Monitor really\n        was born out of necessity. For years we’d been frustrated by the quality of\n        the email marketing options out there. One tool would do x and y but never\n        z, the next had y\\nand z nailed but just couldn’t get x right.We couldn’t\n        win.\\nWe decided to clear our schedule and have a go at building our dream\n        email marketing tool.We consciously decided not to look at what everyone else\n        was doing and instead build something that would make ours and our customer’s\n        lives a little easier.\\nAs it turned out, we weren’t the only ones who were\n        unhappy with the options out there.We made a few modifications to the software\n        so any design firm could use it and started spreading the word. In less than\n        six months, thousands of designers were using Campaign Monitor to send email\n        newsletters for themselves and their clients.\\n–David Greiner, founder, Campaign\n        Monitor\\n\\n\\n   You need to care about it\\nWhen you write a book, you need\n        to have more than an interesting story. 
You need to have a desire to tell\n        the story.You need to be personally invested in some way. If you’re going\n        to live with something for two years, three years, the rest of your life,\n        you need to care about it.\\n–Malcolm Gladwell, author (from A Few Thin Slices\n        of Malcolm Gladwell)\\n\\n\\n Fund Yourself\\nOutside money is plan B\\nThe first\n        priority of many startups is acquiring funding from investors. But remember,\n        if you turn to outsiders for funding, you’ll have to answer to them too. Expectations\n        are raised. Investors want their money back – and quickly. The sad fact is\n        cashing in often begins to trump building a quality product.\\nThese days it\n        doesn’t take much to get rolling. Hardware\\nis cheap and plenty of great infrastructure\n        software is open source and free. And passion doesn’t come with a price tag.\\nSo\n        do what you can with the cash on hand. Think hard and determine what’s really\n        essential and what you can do without. What can you do with three people instead\n        of ten? What can you do with $20k instead of $100k? What can you do in three\n        months instead of six? What can you do if you keep your day job and build\n        your app on the side?\\nConstraints force creativity\\nRun on limited resources\n        and you’ll be forced to reckon with constraints earlier and more intensely.\n        And that’s a good thing. Constraints drive innovation.\\n\\n\\n Constraints also\n        force you to get your idea out in the wild sooner rather than later – another\n        good thing. A month or two out of the gates you should have a pretty good\n        idea of whether you’re onto something or not. If you are, you’ll be self-sustain-\n        able shortly and won’t need external cash. If your idea’s a lemon, it’s time\n        to go back to the drawing board. At least you know now as opposed to months\n        (or years) down the road. 
And at least you can back out easily. Exit plans\n        get a lot trickier once inves- tors are involved.\\nIf you’re creating software\n        just to make a quick buck, it will show. Truth is a quick payout is pretty\n        unlikely. So focus on building a quality tool that you and your customers\n        can live with for a long time.\\n\\nTwo paths\\n[Jake Walker started one company\n        with investor money (Disclive) and one without (The Show). Here he discusses\n        the differences between the two paths.]\\n\\nThe root of all the problems wasn’t\n        raising money itself, but everything that came along with it.The expectations\n        are simply higher. People start taking salary, and the motivation is to build\n        it up and sell it, or find some other way for the initial investors to make\n        their money back. In the case of the first company,\\nwe simply started acting\n        much bigger than we were – out of necessity...\\n[With The Show] we realized\n        that we could deliver a much better product with less costs, only with more\n        time. And we gambled with a bit of our own money that people would be willing\n        to wait for quality over speed. But the company has stayed (and will likely\n        continue to be) a small operation.And ever since that first project, we’ve\n        been fully self funded.With just a bit of creative terms from our vendors,\n        we’ve never really need to put much of our own money into the operation at\n        all.And the expectation isn’t to grow and sell,but to grow for the sake of\n        growth and to continue to benefit from it financially.\\n–A comment from Signal\n        vs. Noise\\n\\n\",\"cache_control\":{\"type\":\"ephemeral\"}}]},{\"role\":\"user\",\"content\":\"What\n        is the meaning of Getting Real according to the book? 
Begin your response\n        with According to the book,\"}],\"model\":\"anthropic/claude-3-haiku\",\"max_tokens\":1000,\"temperature\":0.0}'\n    headers:\n      Authorization:\n      - Bearer REDACTED\n      Content-Type:\n      - application/json\n      X-Title:\n      - OpenRouter Ruby Client\n      Http-Referer:\n      - https://github.com/OlympiaAI/open_router\n      Accept-Encoding:\n      - gzip;q=1.0,deflate;q=0.6,identity;q=0.3\n      Accept:\n      - \"*/*\"\n      User-Agent:\n      - Ruby\n  response:\n    status:\n      code: 200\n      message: OK\n    headers:\n      Date:\n      - Mon, 11 Nov 2024 01:37:32 GMT\n      Content-Type:\n      - application/json\n      Transfer-Encoding:\n      - chunked\n      Connection:\n      - keep-alive\n      Access-Control-Allow-Origin:\n      - \"*\"\n      Cf-Placement:\n      - local-EWR\n      X-Clerk-Auth-Message:\n      - Invalid JWT form. A JWT consists of three parts separated by dots. (reason=token-invalid,\n        token-carrier=header)\n      X-Clerk-Auth-Reason:\n      - token-invalid\n      X-Clerk-Auth-Status:\n      - signed-out\n      Vary:\n      - Accept-Encoding\n      Server:\n      - cloudflare\n      Cf-Ray:\n      - 8e0a8ec0afb719c7-EWR\n    body:\n      encoding: ASCII-8BIT\n      string: \"\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n\n        \\        \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n\n        \\        \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n\n        \\        \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n\n        \\        \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n\n        \\        \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n\n        \\        \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         
\\n{\\\"id\\\":\\\"gen-1731289052-M1FmaSIlmErrMNX8ZSCb\\\",\\\"provider\\\":\\\"Anthropic\\\",\\\"model\\\":\\\"anthropic/claude-3-haiku\\\",\\\"object\\\":\\\"chat.completion\\\",\\\"created\\\":1731289052,\\\"choices\\\":[{\\\"logprobs\\\":null,\\\"finish_reason\\\":\\\"end_turn\\\",\\\"index\\\":0,\\\"message\\\":{\\\"role\\\":\\\"assistant\\\",\\\"content\\\":\\\"According\n        to the book, \\\\\\\"Getting Real\\\\\\\" is \\\\\\\"a smaller, faster, better way to\n        build software.\\\\\\\" The key principles of Getting Real include:\\\\n\\\\n1. Skipping\n        the abstract planning and documentation (like functional specs, wireframes,\n        etc.) and instead focusing on building the actual product.\\\\n\\\\n2. Doing less\n        - less features, less options, less people, less meetings, etc. The goal is\n        to be lean and agile rather than bloated and bureaucratic.\\\\n\\\\n3. Starting\n        with the user interface and customer experience, and building backwards from\n        there. This ensures the product is focused on solving real user problems.\\\\n\\\\n4.\n        Embracing an iterative, launch-and-improve approach rather than trying to\n        perfect everything upfront. Web-based software can be constantly updated,\n        so the focus is on getting something out there quickly and refining it based\n        on user feedback.\\\\n\\\\n5. 
Avoiding the traditional software development process\n        of lengthy timelines, big teams, and extensive planning in favor of a more\n        nimble, resource-constrained approach.\\\\n\\\\nThe core idea is to be pragmatic,\n        focus on the essentials, and get the actual product in front of users as quickly\n        as possible, rather than getting bogged down in abstract planning and documentation.\n        This \\\\\\\"getting real\\\\\\\" approach is presented as an effective way to build\n        successful web-based software.\\\",\\\"refusal\\\":\\\"\\\"}}],\\\"usage\\\":{\\\"prompt_tokens\\\":4884,\\\"completion_tokens\\\":285,\\\"total_tokens\\\":5169}}\"\n  recorded_at: Mon, 11 Nov 2024 01:37:35 GMT\n- request:\n    method: post\n    uri: https://openrouter.ai/api/v1/chat/completions\n    body:\n      encoding: UTF-8\n      string: '{\"messages\":[{\"role\":\"system\",\"content\":[{\"type\":\"text\",\"text\":\"You\n        are a modern historian studying trends in modern business. You know the following\n        book callsed ''Getting Real'' very well:\"},{\"type\":\"text\",\"text\":\"Introduction\\nWhat\n        is Getting Real?\\nAbout 37signals\\nCaveats, disclaimers, and other preemptive\n        strikes\\n\\n\\n What is Getting Real?\\nWant to build a successful web app? Then\n        it’s time to Get Real. Getting Real is a smaller, faster, better way to build\n        software.\\nGetting Real is about skipping all the stuff that represents real\n        (charts, graphs, boxes, arrows, schematics, wireframes, etc.) and actually\n        building the real thing.\\nGetting real is less. Less mass, less software,\n        less features, less paperwork, less of everything that’s not essential (and\n        most of what you think is essential actually isn’t).\\nGetting Real is staying\n        small and being agile.\\nGetting Real starts with the interface, the real screens\n        that people are going to use. 
It begins with what the customer actually experiences\n        and builds backwards from there.This lets you get the interface right before\n        you get the software wrong.\\nGetting Real is about iterations and lowering\n        the cost of change. Getting Real is all about launching, tweaking, and constantly\n        improving which makes it a perfect approach for web-based software.\\nGetting\n        Real delivers just what customers need and eliminates anything they don’t.\\nThe\n        benefits of Getting Real\\nGetting Real delivers better results because it\n        forces you to deal with the actual problems you’re trying to solve instead\n        of your ideas about those problems. It forces you to deal with reality.\\n\\n\n        Getting Real foregoes functional specs and other transitory documentation\n        in favor of building real screens. A functional spec is make-believe, an illusion\n        of agreement, while an actual web page is reality. That’s what your customers\n        are going to see and use. That’s what matters. Getting Real gets you there\n        faster.\\nAnd that means you’re making software decisions based on the real\n        thing instead of abstract notions.\\nFinally, Getting Real is an approach ideally\n        suited to web-based software. The old school model of shipping software in\n        a box and then waiting a year or two to deliver an update is fading away.\n        Unlike installed software, web apps can constantly evolve on a day-to-day\n        basis. 
Getting Real leverages this advantage for all its worth.\\nHow To Write\n        Vigorous Software\\nVigorous writing is concise.A sentence should contain no\n        unnecessary words, a paragraph no unnecessary sentences, for the same reason\n        that a drawing should have no unnecessary lines and a machine no unnecessary\n        parts.This requires not that the writer make all sentences short or avoid\n        all detail and treat subjects only in outline, but that every word tell.\\nFrom\n        “The Elements of Style” by William Strunk Jr.\\nNo more bloat\\nThe old way:\n        a lengthy, bureaucratic, we’re-doing-this-to-cover- our-asses process. The\n        typical result: bloated, forgettable soft- ware dripping with mediocrity.\n        Blech.\\nGetting Real gets rid of...\\nTimelines that take months or even years\n        Pie-in-the-sky functional specs Scalability debates\\n\\n Interminable staff\n        meetings\\nThe “need” to hire dozens of employees Meaningless version numbers\\nPristine\n        roadmaps that predict the perfect future Endless preference options\\nOutsourced\n        support\\nUnrealistic user testing\\nUseless paperwork\\nTop-down hierarchy\\nYou\n        don’t need tons of money or a huge team or a lengthy development cycle to\n        build great software. Those things are the ingredients for slow, murky, changeless\n        applications. Getting real takes the opposite approach.\\nIn this book we’ll\n        show you...\\nThe importance of having a philosophy Why staying small is a\n        good thing\\nHow to build less\\nHow to get from idea to reality quickly How\n        to staff your team\\nWhy you should design from the inside out Why writing\n        is so crucial\\nWhy you should underdo your competition\\n\\n How to promote\n        your app and spread the word Secrets to successful support\\nTips on keeping\n        momentum going after launch\\n...and lots more\\nThe focus is on big-picture\n        ideas. 
We won’t bog you down with detailed code snippets or css tricks. We’ll\n        stick to the major ideas and philosophies that drive the Getting Real process.\\nIs\n        this book for you?\\nYou’re an entrepreneur, designer, programmer, or marketer\n        working on a big idea.\\nYou realize the old rules don’t apply anymore. Distribute\n        your software on cd-roms every year? How 2002. Version numbers? Out the window.\n        You need to build, launch, and tweak. Then rinse and repeat.\\nOr maybe you’re\n        not yet on board with agile development and business structures, but you’re\n        eager to learn more.\\nIf this sounds like you, then this book is for you.\\nNote:\n        While this book’s emphasis is on building a web app, a lot of these ideas\n        are applicable to non-software activities too. The suggestions about small\n        teams, rapid prototyping, expect- ing iterations, and many others presented\n        here can serve as a guide whether you’re starting a business, writing a book,\n        designing a web site, recording an album, or doing a variety\\nof other endeavors.\n        Once you start Getting Real in one area of your life, you’ll see how these\n        concepts can apply to a wide range of activities.\\n\\n About 37signals\\nWhat\n        we do\\n37signals is a small team that creates simple, focused software. Our\n        products help you collaborate and get organized. More than 350,000 people\n        and small businesses use our web-apps to get things done. Jeremy Wagstaff,\n        of the Wall Street Journal, wrote, “37signals products are beautifully simple,\n        elegant and intuitive tools that make an Outlook screen look like the soft-\n        ware equivalent of a torture chamber.” Our apps never put you on the rack.\\nOur\n        modus operandi\\nWe believe software is too complex. Too many features, too\n        many buttons, too much to learn. 
Our products do less than the competition\n        – intentionally. We build products that work smarter, feel better, allow you\n        to do things your way, and are easier to use.\\nOur products\\nAs of the publishing\n        date of this book, we have five commercial products and one open source web\n        application framework.\\nBasecamp turns project management on its head. Instead\n        of Gantt charts, fancy graphs, and stats-heavy spreadsheets, Base- camp offers\n        message boards, to-do lists, simple scheduling, col- laborative writing, and\n        file sharing. So far, hundreds of thou- sands agree it’s a better way. Farhad\n        Manjoo of Salon.com said\\n“Basecamp represents the future of software on the\n        Web.”\\n\\n Campfire brings simple group chat to the business setting. Businesses\n        in the know understand how valuable real-time persistent group chat can be.\n        Conventional instant messaging is great for quick 1-on-1 chats, but it’s miserable\n        for 3 or more people at once. Campfire solves that problem and plenty more.\\nBackpack\n        is the alternative to those confusing, complex, “orga- nize your life in 25\n        simple steps” personal information managers. Backpack’s simple take on pages,\n        notes, to-dos, and cellphone/ email-based reminders is a novel idea in a product\n        category that suffers from status-quo-itis. Thomas Weber of the Wall Street\n        Journal said it’s the best product in its class and David Pogue of the New\n        York Times called it a “very cool” organization tool.\\nWriteboard lets you\n        write, share, revise, and compare text\\nsolo or with others. It’s the refreshing\n        alternative to bloated word processors that are overkill for 95% of what you\n        write. 
John Gruber of Daring Fireball said, “Writeboard might be the clearest,\n        simplest web application I’ve ever seen.” Web-guru Jeffrey Zeldman said, “The\n        brilliant minds at 37signals have done it again.”\\nTa-da List keeps all your\n        to-do lists together and organized online. Keep the lists to yourself or share\n        them with others for easy collaboration. There’s no easier way to get things\n        done. Over 100,000 lists with nearly 1,000,000 items have been created so\n        far.\\nRuby on Rails, for developers, is a full-stack, open-source web framework\n        in Ruby for writing real-world applications quickly and easily. Rails takes\n        care of the busy work so you can focus on your idea. Nathan Torkington of\n        the O’Reilly publish- ing empire said “Ruby on Rails is astounding. Using\n        it is like watching a kung-fu movie, where a dozen bad-ass frameworks prepare\n        to beat up the little newcomer only to be handed their asses in a variety\n        of imaginative ways.” Gotta love that quote.\\n\\n Caveats, disclaimers, and\n        other preemptive strikes\\nJust to get it out of the way, here are our responses\n        to some com- plaints we hear every now and again:\\n“These techniques won’t\n        work for me.”\\nGetting real is a system that’s worked terrifically for us.\n        That said, the ideas in this book won’t apply to every project under the sun.\n        If you are building a weapons system, a nuclear control plant, a banking system\n        for millions of customers, or some other life/finance-critical system, you’re\n        going to balk at some of our laissez-faire attitude. Go ahead and take additional\n        precautions.\\nAnd it doesn’t have to be an all or nothing proposition. 
Even\n        if you can’t embrace Getting Real fully, there are bound to be at least a\n        few ideas in here you can sneak past the powers that be.\\n“You didn’t invent\n        that idea.”\\nWe’re not claiming to have invented these techniques. Many of\n        these concepts have been around in one form or another for a long time. Don’t\n        get huffy if you read some\\nof our advice and it reminds you of something\n        you read about already on so and so’s weblog or in some book pub- lished 20\n        years ago. It’s definitely possible. These tech- niques are not at all exclusive\n        to 37signals. We’re just telling you how we work and what’s been successful\n        for us.\\n\\n “You take too much of a black and white view.”\\nIf our tone seems\n        too know-it-allish, bear with us. We think it’s better to present ideas in\n        bold strokes than to be wishy-washy about it. If that comes off as cocky or\n        arrogant, so be it. We’d rather be provocative than water everything down\n        with “it depends...” Of course there will be times when these rules need to\n        be stretched or broken. And some of these tactics may not apply to your situation.\n        Use your judgement and imagination.\\n“This won’t work inside my company.”\\nThink\n        you’re too big to Get Real? Even Microsoft is Getting Real (and we doubt you’re\n        bigger than them).\\nEven if your company typically runs on long-term schedules\n        with big teams, there are still ways to get real.The first step is\\nto break\n        up into smaller units. When there’s too many people involved, nothing gets\n        done. The leaner you are, the faster – and better – things get done.\\nGranted,\n        it may take some salesmanship. Pitch your company on the Getting Real process.\n        Show them this book. 
Show them the real results you can achieve in less time\n        and with a smaller team.\\nExplain that Getting Real is a low-risk, low-investment\n        way to test new concepts. See if you can split off from the mothership on\n        a smaller project as a proof of concept. Demonstrate results.\\nOr, if you\n        really want to be ballsy, go stealth. Fly under the radar and demonstrate\n        real results. That’s the approach the Start.com team has used while Getting\n        Real at Microsoft. “I’ve watched the Start.com team work. They don’t ask permission,”\n        says Robert Scoble, Technical Evangelist at Microsoft. “They have a boss that\n        provides air cover. And they bite off a little bit at a time and do that and\n        respond to feedback.”\\n\\n   Shipping Microsoft’s Start.com\\nIn big companies,\n        processes and meetings are the norm. Many months are spent on planning features\n        and arguing details with the goal of everyone reaching an agreement on what\n        is the “right” thing for the customer.\\nThat may be the right approach for\n        shrink-wrapped software, but with the web we have an incredible advantage.\n        Just ship it! Let the user tell you if it’s the right thing and if it’s not,\n        hey you can fix it and ship it to the web the same day if you want! There\n        is no word stronger than the customer’s – resist the urge to engage in long-winded\n        meetings and arguments. Just ship it and prove a point.\\nMuch easier said\n        than done – this implies:\\nMonths of planning are not necessary.\\nMonths of\n        writing specs are not necessary – specs should have the foundations nailed\n        and details figured out and refined during the development phase. 
Don’t try\n        to close all open issues and nail every single detail before development starts.\\nShip\n        less features, but quality features.\\nYou don’t need a big bang approach with\n        a whole new release and bunch of features. Give the users byte-size pieces\n        that they can digest.\\nIf there are minor bugs, ship it as soon you have the\n        core scenarios nailed and ship the bug fixes to web gradually after that.The\n        faster you get the user feedback the better. Ideas can sound great on paper\n        but in practice turn out to be suboptimal.The sooner you find out about fundamental\n        issues that are wrong with an idea, the better.\\nOnce you iterate quickly\n        and react on customer feedback, you will establish a customer connection.\n        Remember the goal is to win the customer by building what they want.\\n-Sanaz\n        Ahari, Program Manager of Start.com, Microsoft\\n\\n\\n  The Starting Line\\nBuild\n        Less\\nWhat’s Your Problem?\\nFund Yourself\\nFix Time and Budget, Flex Scope\n        Have an Enemy\\nIt Shouldn’t be a Chore\\n\\n\\n Build Less\\nUnderdo your competition\\nConventional\n        wisdom says that to beat your competitors you need to one-up them. If they\n        have four features, you need five (or 15, or 25). If they’re spending x, you\n        need to spend xx. If they have 20, you need 30.\\nThis sort of one-upping Cold\n        War mentality is a dead-end. It’s an expensive, defensive, and paranoid way\n        of building products. Defensive, paranoid companies can’t think ahead, they\n        can only think behind. They don’t lead, they follow.\\nIf you want to build\n        a company that follows, you might as well put down this book now.\\nSo what\n        to do then? The answer is less. Do less than your com- petitors to beat them.\n        Solve the simple problems and leave the hairy, difficult, nasty problems to\n        everyone else. 
Instead of one- upping, try one-downing. Instead of outdoing,\n        try underdoing.\\nWe’ll cover the concept of less throughout this book, but\n        for starters, less means:\\nLess features\\nLess options/preferences\\nLess people\n        and corporate structure Less meetings and abstractions\\nLess promises\\n\\n\\n\n        What’s Your Problem?\\nBuild software for yourself\\nA great way to build software\n        is to start out by solving your own problems. You’ll be the target audience\n        and you’ll know what’s important and what’s not. That gives you a great head\n        start on delivering a breakout product.\\nThe key here is understanding that\n        you’re not alone. If you’re having this problem, it’s likely hundreds of thousands\n        of others are in the same boat. There’s your market. Wasn’t that easy?\\nBasecamp\n        originated in a problem: As a design firm we needed a simple way to communicate\n        with our clients about projects. We started out doing this via client ex-\n        tranets which we would update manually. But changing the html by hand every\n        time a project needed to be updated just wasn’t working. These project sites\n        always seemed to go stale and eventually were abandoned. It was frustrating\n        because it left us disorganized and left clients in the dark.\\nSo we started\n        looking at other options. Yet every tool we found either 1) didn’t do what\n        we needed or 2) was bloated with fea- tures we didn’t need – like billing,\n        strict access controls, charts, graphs, etc. We knew there had to be a better\n        way so we decided to build our own.\\nWhen you solve your own problem, you\n        create a tool that you’re passionate about. And passion is key. Passion means\n        you’ll truly use it and care about it. 
And that’s the best way to get others\n        to feel passionate about it too.\\n\\n   Scratching your own itch\\nThe Open\n        Source world embraced this mantra a long time ago – they call it “scratching\n        your own itch.” For the open source developers, it means they get the tools\n        they want, delivered the way they want them. But the benefit goes much deeper.\\nAs\n        the designer or developer of a new application, you’re faced with hundreds\n        of micro-decisions each and every day: blue or green? One table or two? Static\n        or dynamic? Abort or recover? How do we make these decisions? If it’s something\n        we recognize as being important, we might ask.The rest, we guess.And all that\n        guessing builds up a kind of debt in our applications – an interconnected\n        web of assumptions.\\nAs a developer, I hate this.The knowledge of all these\n        small-scale timebombs in the applications I write adds to my stress. Open\n        Source developers, scratching their own itches, don’t suffer this. Because\n        they are their own users, they know the correct answers to 90% of the decisions\n        they have to make. I think this is one of the reasons folks come home after\n        a hard day of coding and then work on open source: It’s relaxing.\\n–Dave Thomas,\n        The Pragmatic Programmers\\n\\nBorn out of necessity\\nCampaign Monitor really\n        was born out of necessity. For years we’d been frustrated by the quality of\n        the email marketing options out there. 
One tool would do x and y but never\n        z, the next had y\\nand z nailed but just couldn’t get x right.We couldn’t\n        win.\\nWe decided to clear our schedule and have a go at building our dream\n        email marketing tool.We consciously decided not to look at what everyone else\n        was doing and instead build something that would make ours and our customer’s\n        lives a little easier.\\nAs it turned out, we weren’t the only ones who were\n        unhappy with the options out there.We made a few modifications to the software\n        so any design firm could use it and started spreading the word. In less than\n        six months, thousands of designers were using Campaign Monitor to send email\n        newsletters for themselves and their clients.\\n–David Greiner, founder, Campaign\n        Monitor\\n\\n\\n   You need to care about it\\nWhen you write a book, you need\n        to have more than an interesting story. You need to have a desire to tell\n        the story.You need to be personally invested in some way. If you’re going\n        to live with something for two years, three years, the rest of your life,\n        you need to care about it.\\n–Malcolm Gladwell, author (from A Few Thin Slices\n        of Malcolm Gladwell)\\n\\n\\n Fund Yourself\\nOutside money is plan B\\nThe first\n        priority of many startups is acquiring funding from investors. But remember,\n        if you turn to outsiders for funding, you’ll have to answer to them too. Expectations\n        are raised. Investors want their money back – and quickly. The sad fact is\n        cashing in often begins to trump building a quality product.\\nThese days it\n        doesn’t take much to get rolling. Hardware\\nis cheap and plenty of great infrastructure\n        software is open source and free. And passion doesn’t come with a price tag.\\nSo\n        do what you can with the cash on hand. 
Think hard and determine what’s really\n        essential and what you can do without. What can you do with three people instead\n        of ten? What can you do with $20k instead of $100k? What can you do in three\n        months instead of six? What can you do if you keep your day job and build\n        your app on the side?\\nConstraints force creativity\\nRun on limited resources\n        and you’ll be forced to reckon with constraints earlier and more intensely.\n        And that’s a good thing. Constraints drive innovation.\\n\\n\\n Constraints also\n        force you to get your idea out in the wild sooner rather than later – another\n        good thing. A month or two out of the gates you should have a pretty good\n        idea of whether you’re onto something or not. If you are, you’ll be self-sustain-\n        able shortly and won’t need external cash. If your idea’s a lemon, it’s time\n        to go back to the drawing board. At least you know now as opposed to months\n        (or years) down the road. And at least you can back out easily. Exit plans\n        get a lot trickier once inves- tors are involved.\\nIf you’re creating software\n        just to make a quick buck, it will show. Truth is a quick payout is pretty\n        unlikely. So focus on building a quality tool that you and your customers\n        can live with for a long time.\\n\\nTwo paths\\n[Jake Walker started one company\n        with investor money (Disclive) and one without (The Show). Here he discusses\n        the differences between the two paths.]\\n\\nThe root of all the problems wasn’t\n        raising money itself, but everything that came along with it.The expectations\n        are simply higher. People start taking salary, and the motivation is to build\n        it up and sell it, or find some other way for the initial investors to make\n        their money back. 
In the case of the first company,\\nwe simply started acting\n        much bigger than we were – out of necessity...\\n[With The Show] we realized\n        that we could deliver a much better product with less costs, only with more\n        time. And we gambled with a bit of our own money that people would be willing\n        to wait for quality over speed. But the company has stayed (and will likely\n        continue to be) a small operation.And ever since that first project, we’ve\n        been fully self funded.With just a bit of creative terms from our vendors,\n        we’ve never really need to put much of our own money into the operation at\n        all.And the expectation isn’t to grow and sell,but to grow for the sake of\n        growth and to continue to benefit from it financially.\\n–A comment from Signal\n        vs. Noise\\n\\n\",\"cache_control\":{\"type\":\"ephemeral\"}}]},{\"role\":\"user\",\"content\":\"What\n        is the meaning of Getting Real according to the book? Begin your response\n        with According to the book,\"}],\"model\":\"anthropic/claude-3-haiku\",\"max_tokens\":1000,\"temperature\":0.0}'\n    headers:\n      Authorization:\n      - Bearer REDACTED\n      Content-Type:\n      - application/json\n      X-Title:\n      - OpenRouter Ruby Client\n      Http-Referer:\n      - https://github.com/OlympiaAI/open_router\n      Accept-Encoding:\n      - gzip;q=1.0,deflate;q=0.6,identity;q=0.3\n      Accept:\n      - \"*/*\"\n      User-Agent:\n      - Ruby\n  response:\n    status:\n      code: 200\n      message: OK\n    headers:\n      Date:\n      - Mon, 11 Nov 2024 01:37:35 GMT\n      Content-Type:\n      - application/json\n      Transfer-Encoding:\n      - chunked\n      Connection:\n      - keep-alive\n      Access-Control-Allow-Origin:\n      - \"*\"\n      Cf-Placement:\n      - local-EWR\n      X-Clerk-Auth-Message:\n      - Invalid JWT form. A JWT consists of three parts separated by dots. 
(reason=token-invalid,\n        token-carrier=header)\n      X-Clerk-Auth-Reason:\n      - token-invalid\n      X-Clerk-Auth-Status:\n      - signed-out\n      Vary:\n      - Accept-Encoding\n      Server:\n      - cloudflare\n      Cf-Ray:\n      - 8e0a8ed5beeade93-EWR\n    body:\n      encoding: ASCII-8BIT\n      string: \"\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n\n        \\        \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n\n        \\        \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n\n        \\        \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n\n        \\        \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n\n        \\        \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n\n        \\        \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n\n        \\        \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n{\\\"id\\\":\\\"gen-1731289055-7lM7yhtp335znp1qegkj\\\",\\\"provider\\\":\\\"Anthropic\\\",\\\"model\\\":\\\"anthropic/claude-3-haiku\\\",\\\"object\\\":\\\"chat.completion\\\",\\\"created\\\":1731289055,\\\"choices\\\":[{\\\"logprobs\\\":null,\\\"finish_reason\\\":\\\"end_turn\\\",\\\"index\\\":0,\\\"message\\\":{\\\"role\\\":\\\"assistant\\\",\\\"content\\\":\\\"According\n        to the book, \\\\\\\"Getting Real\\\\\\\" is \\\\\\\"a smaller, faster, better way to\n        build software.\\\\\\\" The key principles of Getting Real include:\\\\n\\\\n1. Skipping\n        the abstract planning and documentation (like functional specs, wireframes,\n        etc.) and instead focusing on building the actual product.\\\\n\\\\n2. Doing less\n        - less features, less options, less people, less meetings, etc. 
The goal is\n        to be lean and agile rather than bloated and bureaucratic.\\\\n\\\\n3. Starting\n        with the user interface and customer experience, and building backwards from\n        there. This ensures the product is focused on solving real user problems.\\\\n\\\\n4.\n        Embracing an iterative, launch-and-improve approach rather than trying to\n        perfect everything upfront. Web-based software can be constantly updated,\n        so the focus is on getting something out there quickly and refining it based\n        on user feedback.\\\\n\\\\n5. Avoiding the traditional software development process\n        of lengthy timelines, big teams, and extensive planning in favor of a more\n        nimble, resource-constrained approach.\\\\n\\\\nThe core idea is to be pragmatic,\n        focus on the essentials, and get the actual product in front of users as quickly\n        as possible, rather than getting bogged down in abstract planning and documentation.\n        This \\\\\\\"getting real\\\\\\\" approach is presented as an effective way to build\n        successful web-based software.\\\",\\\"refusal\\\":\\\"\\\"}}],\\\"usage\\\":{\\\"prompt_tokens\\\":4884,\\\"completion_tokens\\\":285,\\\"total_tokens\\\":5169}}\"\n  recorded_at: Mon, 11 Nov 2024 01:37:38 GMT\n- request:\n    method: get\n    uri: https://openrouter.ai/api/v1/generation?id=gen-1731289055-7lM7yhtp335znp1qegkj\n    body:\n      encoding: US-ASCII\n      string: ''\n    headers:\n      Authorization:\n      - Bearer REDACTED\n      Content-Type:\n      - application/json\n      X-Title:\n      - OpenRouter Ruby Client\n      Http-Referer:\n      - https://github.com/OlympiaAI/open_router\n      Accept-Encoding:\n      - gzip;q=1.0,deflate;q=0.6,identity;q=0.3\n      Accept:\n      - \"*/*\"\n      User-Agent:\n      - Ruby\n  response:\n    status:\n      code: 200\n      message: OK\n    headers:\n      Date:\n      - Mon, 11 Nov 2024 01:37:40 GMT\n      Content-Type:\n      - 
application/json; charset=UTF-8\n      Transfer-Encoding:\n      - chunked\n      Connection:\n      - keep-alive\n      Access-Control-Allow-Origin:\n      - \"*\"\n      Cf-Placement:\n      - local-EWR\n      X-Clerk-Auth-Message:\n      - Invalid JWT form. A JWT consists of three parts separated by dots. (reason=token-invalid,\n        token-carrier=header)\n      X-Clerk-Auth-Reason:\n      - token-invalid\n      X-Clerk-Auth-Status:\n      - signed-out\n      Vary:\n      - Accept-Encoding\n      Server:\n      - cloudflare\n      Cf-Ray:\n      - 8e0a8ef5dbb79e16-EWR\n    body:\n      encoding: ASCII-8BIT\n      string: '{\"data\":{\"id\":\"gen-1731289055-7lM7yhtp335znp1qegkj\",\"upstream_id\":\"msg_01LqVPLhTPzzk2dDSGd826AH\",\"total_cost\":0.0004688775,\"cache_discount\":0.0010926,\"provider_name\":\"Anthropic\",\"created_at\":\"2024-11-11T01:37:39.137992+00:00\",\"model\":\"anthropic/claude-3-haiku\",\"app_id\":179379,\"streamed\":true,\"cancelled\":false,\"latency\":787,\"moderation_latency\":230,\"generation_time\":1905,\"finish_reason\":\"end_turn\",\"tokens_prompt\":4538,\"tokens_completion\":257,\"native_tokens_prompt\":4884,\"native_tokens_completion\":285,\"native_tokens_reasoning\":null,\"num_media_prompt\":null,\"num_media_completion\":null,\"origin\":\"https://github.com/OlympiaAI/open_router\",\"usage\":0.0004688775}}'\n  recorded_at: Mon, 11 Nov 2024 01:37:40 GMT\nrecorded_with: VCR 6.2.0\n"
  },
  {
    "path": "spec/vcr/MeaningOfLife/accepts_a_messages_parameter_to_override_the_transcript.yml",
    "content": "---\nhttp_interactions:\n- request:\n    method: post\n    uri: https://api.openai.com/v1/chat/completions\n    body:\n      encoding: UTF-8\n      string: '{\"max_completion_tokens\":16384,\"seed\":9999,\"temperature\":0.0,\"model\":\"gpt-4.1-nano\",\"messages\":[{\"role\":\"user\",\"content\":\"What\n        is the meaning of life?\"}]}'\n    headers:\n      Content-Type:\n      - application/json\n      Authorization:\n      - Bearer REDACTED\n      Accept-Encoding:\n      - gzip;q=1.0,deflate;q=0.6,identity;q=0.3\n      Accept:\n      - \"*/*\"\n      User-Agent:\n      - Ruby\n  response:\n    status:\n      code: 200\n      message: OK\n    headers:\n      Date:\n      - Fri, 18 Apr 2025 18:04:58 GMT\n      Content-Type:\n      - application/json\n      Transfer-Encoding:\n      - chunked\n      Connection:\n      - keep-alive\n      Access-Control-Expose-Headers:\n      - X-Request-ID\n      Openai-Organization:\n      - user-4zwavkzyrdiz309q8ya0cgco\n      Openai-Processing-Ms:\n      - '625'\n      Openai-Version:\n      - '2020-10-01'\n      X-Ratelimit-Limit-Requests:\n      - '30000'\n      X-Ratelimit-Limit-Tokens:\n      - '150000000'\n      X-Ratelimit-Remaining-Requests:\n      - '29999'\n      X-Ratelimit-Remaining-Tokens:\n      - '149999990'\n      X-Ratelimit-Reset-Requests:\n      - 2ms\n      X-Ratelimit-Reset-Tokens:\n      - 0s\n      X-Request-Id:\n      - req_4e5455cd2a18cf1010a70051c3747ebb\n      Strict-Transport-Security:\n      - max-age=31536000; includeSubDomains; preload\n      Cf-Cache-Status:\n      - DYNAMIC\n      Set-Cookie:\n      - __cf_bm=zULspwoLuMX1Ii2Mbkb_tEKR_LZxxt7IxMGBTw0sNcE-1744999498-1.0.1.1-TWBCWixETJJwoPcdWvh6_iqrAtp.7YwNeSzUHH_TlTqL91HcRv4_Qxk6B1AR9SuS6masgl_2Dzl0OthveqDLqJgV9ILeg2KqzBSeHruYQYU;\n        path=/; expires=Fri, 18-Apr-25 18:34:58 GMT; domain=.api.openai.com; HttpOnly;\n        Secure; SameSite=None\n      - 
_cfuvid=uztxS3D.kM9pgMSCTqc3pPqZFmO8WHBNgcm4V6YA0QY-1744999498338-0.0.1.1-604800000;\n        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None\n      X-Content-Type-Options:\n      - nosniff\n      Server:\n      - cloudflare\n      Cf-Ray:\n      - 9326166c1f34e7d3-DFW\n      Alt-Svc:\n      - h3=\":443\"; ma=86400\n    body:\n      encoding: ASCII-8BIT\n      string: !binary |-\n        ewogICJpZCI6ICJjaGF0Y21wbC1CTmtON2F1WTVGWXpXZ1BPVjVhTnl4andPUFRDYyIsCiAgIm9iamVjdCI6ICJjaGF0LmNvbXBsZXRpb24iLAogICJjcmVhdGVkIjogMTc0NDk5OTQ5NywKICAibW9kZWwiOiAiZ3B0LTQuMS1uYW5vLTIwMjUtMDQtMTQiLAogICJjaG9pY2VzIjogWwogICAgewogICAgICAiaW5kZXgiOiAwLAogICAgICAibWVzc2FnZSI6IHsKICAgICAgICAicm9sZSI6ICJhc3Npc3RhbnQiLAogICAgICAgICJjb250ZW50IjogIlRoZSBxdWVzdGlvbiBvZiB0aGUgbWVhbmluZyBvZiBsaWZlIGlzIGEgcHJvZm91bmQgYW5kIG9mdGVuIHBlcnNvbmFsIG9uZSwgYW5kIGRpZmZlcmVudCBwaGlsb3NvcGhpZXMsIGN1bHR1cmVzLCBhbmQgaW5kaXZpZHVhbHMgbWF5IGhhdmUgdGhlaXIgb3duIGludGVycHJldGF0aW9ucy4gU29tZSBmaW5kIG1lYW5pbmcgdGhyb3VnaCByZWxhdGlvbnNoaXBzLCBsb3ZlLCBhbmQgY29ubmVjdGlvbjsgb3RoZXJzIHNlZWsgcHVycG9zZSB0aHJvdWdoIHBlcnNvbmFsIGdyb3d0aCwga25vd2xlZGdlLCBvciBjb250cmlidXRpbmcgdG8gc29tZXRoaW5nIGdyZWF0ZXIgdGhhbiB0aGVtc2VsdmVzLiBQaGlsb3NvcGhpY2FsbHksIHNvbWUgc3VnZ2VzdCB0aGF0IGxpZmXigJlzIG1lYW5pbmcgaXMgc29tZXRoaW5nIHdlIGNyZWF0ZSBvdXJzZWx2ZXMsIHdoaWxlIG90aGVycyBsb29rIHRvIHNwaXJpdHVhbCBvciByZWxpZ2lvdXMgYmVsaWVmcyBmb3IgZ3VpZGFuY2UuIFVsdGltYXRlbHksIHRoZSBtZWFuaW5nIG9mIGxpZmUgY2FuIGJlIGEgZGVlcGx5IGluZGl2aWR1YWwgam91cm5leSwgYW5kIGV4cGxvcmluZyB3aGF0IGJyaW5ncyB5b3UgZnVsZmlsbG1lbnQgYW5kIHB1cnBvc2UgaXMgYSB2YWx1YWJsZSBwYXJ0IG9mIHRoYXQgcHJvY2Vzcy4iLAogICAgICAgICJyZWZ1c2FsIjogbnVsbCwKICAgICAgICAiYW5ub3RhdGlvbnMiOiBbXQogICAgICB9LAogICAgICAibG9ncHJvYnMiOiBudWxsLAogICAgICAiZmluaXNoX3JlYXNvbiI6ICJzdG9wIgogICAgfQogIF0sCiAgInVzYWdlIjogewogICAgInByb21wdF90b2tlbnMiOiAxNCwKICAgICJjb21wbGV0aW9uX3Rva2VucyI6IDExMywKICAgICJ0b3RhbF90b2tlbnMiOiAxMjcsCiAgICAicHJvbXB0X3Rva2Vuc19kZXRhaWxzIjogewogICAgICAiY2FjaGVkX3Rva2VucyI6IDAsCiAgICAgICJhdWRpb190b2tlb
nMiOiAwCiAgICB9LAogICAgImNvbXBsZXRpb25fdG9rZW5zX2RldGFpbHMiOiB7CiAgICAgICJyZWFzb25pbmdfdG9rZW5zIjogMCwKICAgICAgImF1ZGlvX3Rva2VucyI6IDAsCiAgICAgICJhY2NlcHRlZF9wcmVkaWN0aW9uX3Rva2VucyI6IDAsCiAgICAgICJyZWplY3RlZF9wcmVkaWN0aW9uX3Rva2VucyI6IDAKICAgIH0KICB9LAogICJzZXJ2aWNlX3RpZXIiOiAiZGVmYXVsdCIsCiAgInN5c3RlbV9maW5nZXJwcmludCI6ICJmcF9jMWZiODkwMjhkIgp9Cg==\n  recorded_at: Fri, 18 Apr 2025 18:05:00 GMT\nrecorded_with: VCR 6.2.0\n"
  },
  {
    "path": "spec/vcr/MeaningOfLife/does_a_completion_with_OpenAI.yml",
    "content": "---\nhttp_interactions:\n- request:\n    method: post\n    uri: https://api.openai.com/v1/chat/completions\n    body:\n      encoding: UTF-8\n      string: '{\"max_completion_tokens\":16384,\"seed\":9999,\"temperature\":0.0,\"model\":\"gpt-4o\",\"messages\":[{\"role\":\"user\",\"content\":\"What\n        is the meaning of life?\"}]}'\n    headers:\n      Content-Type:\n      - application/json\n      Authorization:\n      - Bearer REDACTED\n      Accept-Encoding:\n      - gzip;q=1.0,deflate;q=0.6,identity;q=0.3\n      Accept:\n      - \"*/*\"\n      User-Agent:\n      - Ruby\n  response:\n    status:\n      code: 200\n      message: OK\n    headers:\n      Date:\n      - Mon, 11 Nov 2024 01:39:49 GMT\n      Content-Type:\n      - application/json\n      Transfer-Encoding:\n      - chunked\n      Connection:\n      - keep-alive\n      Access-Control-Expose-Headers:\n      - X-Request-ID\n      Openai-Organization:\n      - user-4zwavkzyrdiz309q8ya0cgco\n      Openai-Processing-Ms:\n      - '4097'\n      Openai-Version:\n      - '2020-10-01'\n      X-Ratelimit-Limit-Requests:\n      - '10000'\n      X-Ratelimit-Limit-Tokens:\n      - '30000000'\n      X-Ratelimit-Remaining-Requests:\n      - '9999'\n      X-Ratelimit-Remaining-Tokens:\n      - '29999975'\n      X-Ratelimit-Reset-Requests:\n      - 6ms\n      X-Ratelimit-Reset-Tokens:\n      - 0s\n      X-Request-Id:\n      - req_d02a075405d7a5a759c649f8122f224f\n      Strict-Transport-Security:\n      - max-age=31536000; includeSubDomains; preload\n      Cf-Cache-Status:\n      - DYNAMIC\n      Set-Cookie:\n      - __cf_bm=ojMtLbp_7eBOMXVWQTsf8g0ttiS1mCCeNlUcir2Qps4-1731289189-1.0.1.1-9dOHj5V.nRPaGjJifY05F2CNTcp6CBj.gt4rPwtzylL9HOoeIYiSc_JZA8psvTBg4_gmXZ8a7dQfALOzet5bdA;\n        path=/; expires=Mon, 11-Nov-24 02:09:49 GMT; domain=.api.openai.com; HttpOnly;\n        Secure; SameSite=None\n      - _cfuvid=TYqMe2vRano0MqvEOy_Eno8gLwd25v9KrOvCjGeJIMI-1731289189951-0.0.1.1-604800000;\n        path=/; 
domain=.api.openai.com; HttpOnly; Secure; SameSite=None\n      X-Content-Type-Options:\n      - nosniff\n      Server:\n      - cloudflare\n      Cf-Ray:\n      - 8e0a9202cad2431b-EWR\n      Alt-Svc:\n      - h3=\":443\"; ma=86400\n    body:\n      encoding: ASCII-8BIT\n      string: |\n        {\n          \"id\": \"chatcmpl-ASDh3mkyUGKlINj5Pd3C60o0iIhsk\",\n          \"object\": \"chat.completion\",\n          \"created\": 1731289185,\n          \"model\": \"gpt-4o-2024-08-06\",\n          \"choices\": [\n            {\n              \"index\": 0,\n              \"message\": {\n                \"role\": \"assistant\",\n                \"content\": \"The meaning of life is a profound and philosophical question that has been contemplated by thinkers, theologians, and scientists for centuries. Different perspectives offer various interpretations:\\n\\n1. **Philosophical Perspective**: Philosophers like Aristotle and existentialists like Jean-Paul Sartre have explored life's meaning through concepts like purpose, happiness, and individual freedom. For some, the meaning of life is about finding personal fulfillment and creating one's own purpose.\\n\\n2. **Religious Perspective**: Many religions provide their own interpretations, often involving a relationship with a higher power, spiritual growth, and adherence to certain moral or ethical codes. For example, in Christianity, the meaning of life might involve serving God and preparing for an afterlife.\\n\\n3. **Scientific Perspective**: From a scientific standpoint, life can be seen as a series of biological processes. Some scientists and secular thinkers might argue that life has no inherent meaning beyond survival and reproduction, and any meaning is constructed by individuals.\\n\\n4. 
**Personal Perspective**: On a personal level, many people find meaning through relationships, achievements, creativity, and contributing to the well-being of others.\\n\\nUltimately, the meaning of life is subjective and can vary greatly from person to person. It often involves a combination of personal beliefs, cultural influences, and individual experiences.\",\n                \"refusal\": null\n              },\n              \"logprobs\": null,\n              \"finish_reason\": \"stop\"\n            }\n          ],\n          \"usage\": {\n            \"prompt_tokens\": 14,\n            \"completion_tokens\": 257,\n            \"total_tokens\": 271,\n            \"prompt_tokens_details\": {\n              \"cached_tokens\": 0,\n              \"audio_tokens\": 0\n            },\n            \"completion_tokens_details\": {\n              \"reasoning_tokens\": 0,\n              \"audio_tokens\": 0,\n              \"accepted_prediction_tokens\": 0,\n              \"rejected_prediction_tokens\": 0\n            }\n          },\n          \"system_fingerprint\": \"fp_159d8341cc\"\n        }\n  recorded_at: Mon, 11 Nov 2024 01:39:49 GMT\nrecorded_with: VCR 6.2.0\n"
  },
  {
    "path": "spec/vcr/MeaningOfLife/does_a_completion_with_OpenRouter.yml",
    "content": "---\nhttp_interactions:\n- request:\n    method: post\n    uri: https://openrouter.ai/api/v1/chat/completions\n    body:\n      encoding: UTF-8\n      string: '{\"messages\":[{\"role\":\"user\",\"content\":\"What is the meaning of life?\"}],\"model\":\"meta-llama/llama-3.3-8b-instruct:free\",\"max_tokens\":1000,\"seed\":9999,\"temperature\":0.0}'\n    headers:\n      Authorization:\n      - Bearer REDACTED\n      Content-Type:\n      - application/json\n      X-Title:\n      - OpenRouter Ruby Client\n      Http-Referer:\n      - https://github.com/OlympiaAI/open_router\n      Accept-Encoding:\n      - gzip;q=1.0,deflate;q=0.6,identity;q=0.3\n      Accept:\n      - \"*/*\"\n      User-Agent:\n      - Ruby\n  response:\n    status:\n      code: 200\n      message: OK\n    headers:\n      Date:\n      - Sat, 31 May 2025 16:13:31 GMT\n      Content-Type:\n      - application/json\n      Transfer-Encoding:\n      - chunked\n      Connection:\n      - keep-alive\n      Access-Control-Allow-Origin:\n      - \"*\"\n      X-Clerk-Auth-Message:\n      - Invalid JWT form. A JWT consists of three parts separated by dots. 
(reason=token-invalid,\n        token-carrier=header)\n      X-Clerk-Auth-Reason:\n      - token-invalid\n      X-Clerk-Auth-Status:\n      - signed-out\n      Vary:\n      - Accept-Encoding\n      Server:\n      - cloudflare\n      Cf-Ray:\n      - 9487c24dcb3636b3-YYZ\n    body:\n      encoding: ASCII-8BIT\n      string: \"\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n{\\\"id\\\":\\\"gen-1748708011-Q5TMAKDhTBwiwgBqYyxU\\\",\\\"provider\\\":\\\"Meta\\\",\\\"model\\\":\\\"meta-llama/llama-3.3-8b-instruct:free\\\",\\\"object\\\":\\\"chat.completion\\\",\\\"created\\\":1748708011,\\\"choices\\\":[{\\\"logprobs\\\":null,\\\"finish_reason\\\":\\\"stop\\\",\\\"native_finish_reason\\\":\\\"stop\\\",\\\"index\\\":0,\\\"message\\\":{\\\"role\\\":\\\"assistant\\\",\\\"content\\\":\\\"A\n        question that has puzzled philosophers, theologians, scientists, and everyday\n        people for centuries! The meaning of life is a complex and multifaceted concept\n        that can be interpreted in many ways. Here are some possible perspectives:\\\\n\\\\n1.\n        **Biological perspective**: From a biological standpoint, the meaning of life\n        is to survive and reproduce, ensuring the continuation of one's genetic lineage.\\\\n2.\n        **Philosophical perspective**: Philosophers have offered various answers,\n        such as:\\\\n\\\\t* **Hedonism**: The pursuit of pleasure and happiness.\\\\n\\\\t*\n        **Eudaimonia**: Living a virtuous and fulfilling life, as described by Aristotle.\\\\n\\\\t*\n        **Existentialism**: Creating one's own meaning in life, as there is no inherent\n        or objective meaning.\\\\n3. 
**Religious perspective**: Many religions offer\n        answers, such as:\\\\n\\\\t* **Spiritual growth**: Seeking a deeper connection\n        with a higher power or the divine.\\\\n\\\\t* **Moral purpose**: Living a life\n        of service, compassion, and righteousness.\\\\n\\\\t* **Salvation**: Achieving\n        spiritual salvation or enlightenment.\\\\n4. **Psychological perspective**:\n        From a psychological standpoint, the meaning of life can be related to:\\\\n\\\\t*\n        **Personal growth**: Developing one's skills, abilities, and character.\\\\n\\\\t*\n        **Relationships**: Building and maintaining meaningful connections with others.\\\\n\\\\t*\n        **Contributing to society**: Making a positive impact on the world.\\\\n5. **Humanistic\n        perspective**: This perspective emphasizes the importance of:\\\\n\\\\t* **Autonomy**:\n        Living an independent and self-directed life.\\\\n\\\\t* **Creativity**: Expressing\n        oneself through art, music, writing, or other forms of creative expression.\\\\n\\\\t*\n        **Self-actualization**: Realizing one's full potential and living a life of\n        purpose.\\\\n6. **Scientific perspective**: Some scientists argue that the meaning\n        of life is:\\\\n\\\\t* **Evolutionary**: Contributing to the survival and advancement\n        of the human species.\\\\n\\\\t* **Cosmological**: Understanding our place in\n        the universe and the laws that govern it.\\\\n\\\\nUltimately, the meaning of\n        life is a highly personal and subjective concept that can vary greatly from\n        person to person. What gives your life meaning?\\\",\\\"refusal\\\":null,\\\"reasoning\\\":null}}],\\\"usage\\\":{\\\"prompt_tokens\\\":17,\\\"completion_tokens\\\":442,\\\"total_tokens\\\":459}}\"\n  recorded_at: Sat, 31 May 2025 16:13:33 GMT\nrecorded_with: VCR 6.2.0\n"
  },
  {
    "path": "spec/vcr/MeaningOfLife/with_predicted_outputs/does_a_completion_with_OpenAI.yml",
    "content": "---\nhttp_interactions:\n- request:\n    method: post\n    uri: https://api.openai.com/v1/chat/completions\n    body:\n      encoding: UTF-8\n      string: '{\"prediction\":{\"type\":\"content\",\"content\":\"THE MEANING OF LIFE CAN\n        VARY GREATLY FROM PERSON TO PERSON, OFTEN INVOLVING THE PURSUIT OF HAPPINESS,\n        CARE OF OTHERS, AND PERSONAL GROWTH!.\"},\"max_tokens\":1000,\"seed\":9999,\"temperature\":0.0,\"model\":\"gpt-4o\",\"messages\":[{\"role\":\"system\",\"content\":\"Answer\n        the user question in ALL CAPS.\"},{\"role\":\"user\",\"content\":\"WHAT IS THE MEANING\n        OF LIFE?\"}]}'\n    headers:\n      Content-Type:\n      - application/json\n      Authorization:\n      - Bearer REDACTED\n      Accept-Encoding:\n      - gzip;q=1.0,deflate;q=0.6,identity;q=0.3\n      Accept:\n      - \"*/*\"\n      User-Agent:\n      - Ruby\n  response:\n    status:\n      code: 200\n      message: OK\n    headers:\n      Date:\n      - Mon, 11 Nov 2024 01:40:02 GMT\n      Content-Type:\n      - application/json\n      Transfer-Encoding:\n      - chunked\n      Connection:\n      - keep-alive\n      Access-Control-Expose-Headers:\n      - X-Request-ID\n      Openai-Organization:\n      - user-4zwavkzyrdiz309q8ya0cgco\n      Openai-Processing-Ms:\n      - '5627'\n      Openai-Version:\n      - '2020-10-01'\n      X-Ratelimit-Limit-Requests:\n      - '10000'\n      X-Ratelimit-Limit-Tokens:\n      - '30000000'\n      X-Ratelimit-Remaining-Requests:\n      - '9999'\n      X-Ratelimit-Remaining-Tokens:\n      - '29998980'\n      X-Ratelimit-Reset-Requests:\n      - 6ms\n      X-Ratelimit-Reset-Tokens:\n      - 2ms\n      X-Request-Id:\n      - req_de338f1746bdd8b60075e860ac443b38\n      Strict-Transport-Security:\n      - max-age=31536000; includeSubDomains; preload\n      Cf-Cache-Status:\n      - DYNAMIC\n      Set-Cookie:\n      - 
__cf_bm=KCT28kt.cWctFcq.9v1Hhc9cvmwSGQ5h1k88cTAGZkE-1731289202-1.0.1.1-PpFuXMgkWlGUsId9iWR_fb6V2iX43iLFuz6SuJsQk69VfP3RJbwr73npBPpk3nHFgLbulh5fmx5f1OEbMkpKkg;\n        path=/; expires=Mon, 11-Nov-24 02:10:02 GMT; domain=.api.openai.com; HttpOnly;\n        Secure; SameSite=None\n      - _cfuvid=hdT9ILRoU8UmgdZ5KPYkbVPSc9lB_kjGON1oI3PGpvA-1731289202566-0.0.1.1-604800000;\n        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None\n      X-Content-Type-Options:\n      - nosniff\n      Server:\n      - cloudflare\n      Cf-Ray:\n      - 8e0a92471b92422e-EWR\n      Alt-Svc:\n      - h3=\":443\"; ma=86400\n    body:\n      encoding: ASCII-8BIT\n      string: |\n        {\n          \"id\": \"chatcmpl-ASDhGbSXW2YQ6hm0XBUSXda1oWqXy\",\n          \"object\": \"chat.completion\",\n          \"created\": 1731289198,\n          \"model\": \"gpt-4o-2024-08-06\",\n          \"choices\": [\n            {\n              \"index\": 0,\n              \"message\": {\n                \"role\": \"assistant\",\n                \"content\": \"THE MEANING OF LIFE IS A SUBJECTIVE QUESTION THAT VARIES GREATLY DEPENDING ON PERSONAL BELIEFS, PHILOSOPHICAL VIEWS, AND CULTURAL BACKGROUNDS. SOME PEOPLE FIND MEANING THROUGH RELIGION, OTHERS THROUGH CONNECTIONS WITH FAMILY AND FRIENDS, PURSUIT OF KNOWLEDGE, OR CONTRIBUTING TO SOCIETY. 
ULTIMATELY, IT'S ABOUT WHAT GIVES YOU PURPOSE AND FULFILLMENT.\",\n                \"refusal\": null\n              },\n              \"logprobs\": null,\n              \"finish_reason\": \"stop\"\n            }\n          ],\n          \"usage\": {\n            \"prompt_tokens\": 40,\n            \"completion_tokens\": 139,\n            \"total_tokens\": 179,\n            \"prompt_tokens_details\": {\n              \"cached_tokens\": 0,\n              \"audio_tokens\": 0\n            },\n            \"completion_tokens_details\": {\n              \"reasoning_tokens\": 0,\n              \"audio_tokens\": 0,\n              \"accepted_prediction_tokens\": 6,\n              \"rejected_prediction_tokens\": 37\n            }\n          },\n          \"system_fingerprint\": \"fp_72bbfa6014\"\n        }\n  recorded_at: Mon, 11 Nov 2024 01:40:02 GMT\nrecorded_with: VCR 6.2.0\n"
  },
  {
    "path": "spec/vcr/Raix_FunctionDispatch/can_call_a_function_and_automatically_loop_to_provide_text_response.yml",
    "content": "---\nhttp_interactions:\n- request:\n    method: post\n    uri: https://api.openai.com/v1/chat/completions\n    body:\n      encoding: UTF-8\n      string: '{\"max_completion_tokens\":16384,\"seed\":9999,\"temperature\":0.0,\"tools\":[{\"type\":\"function\",\"function\":{\"name\":\"check_weather\",\"parameters\":{\"type\":\"object\",\"properties\":{\"location\":{\"type\":\"string\"}}},\"description\":\"Check\n        the weather for a location\"}}],\"model\":\"gpt-4o\",\"messages\":[{\"role\":\"user\",\"content\":\"What\n        is the weather in Zipolite, Oaxaca?\"}]}'\n    headers:\n      Content-Type:\n      - application/json\n      Authorization:\n      - Bearer REDACTED\n      Accept-Encoding:\n      - gzip;q=1.0,deflate;q=0.6,identity;q=0.3\n      Accept:\n      - \"*/*\"\n      User-Agent:\n      - Ruby\n  response:\n    status:\n      code: 200\n      message: OK\n    headers:\n      Date:\n      - Wed, 04 Jun 2025 21:05:03 GMT\n      Content-Type:\n      - application/json\n      Transfer-Encoding:\n      - chunked\n      Connection:\n      - keep-alive\n      Access-Control-Expose-Headers:\n      - X-Request-ID\n      Openai-Organization:\n      - user-4zwavkzyrdiz309q8ya0cgco\n      Openai-Processing-Ms:\n      - '724'\n      Openai-Version:\n      - '2020-10-01'\n      X-Envoy-Upstream-Service-Time:\n      - '730'\n      X-Ratelimit-Limit-Requests:\n      - '10000'\n      X-Ratelimit-Limit-Tokens:\n      - '30000000'\n      X-Ratelimit-Remaining-Requests:\n      - '9999'\n      X-Ratelimit-Remaining-Tokens:\n      - '29999988'\n      X-Ratelimit-Reset-Requests:\n      - 6ms\n      X-Ratelimit-Reset-Tokens:\n      - 0s\n      X-Request-Id:\n      - req_2f7a5cca4e2e148c43edec8bf0fa34d8\n      Strict-Transport-Security:\n      - max-age=31536000; includeSubDomains; preload\n      Cf-Cache-Status:\n      - DYNAMIC\n      Set-Cookie:\n      - 
__cf_bm=gleqt.xMwDPr5X2Xm7fEQSayWzKhK_JX3aQcLvrc2oA-1749071103-1.0.1.1-7uOotiL9cNVUqsJxzWjBC7qypt.6gMRcWgFI.p1k7HFTcFM5eQGk0PsuwlIhNeWmN17Jl8Z8VC87piSZOuRLH0gQBLZi59B61SnNcJGZb9Q;\n        path=/; expires=Wed, 04-Jun-25 21:35:03 GMT; domain=.api.openai.com; HttpOnly;\n        Secure; SameSite=None\n      - _cfuvid=mb_MvVCFBRcOJlnGbGNLFBcxZ9EdToGoL2018FvHJE4-1749071103844-0.0.1.1-604800000;\n        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None\n      X-Content-Type-Options:\n      - nosniff\n      Server:\n      - cloudflare\n      Cf-Ray:\n      - 94aa62d9da0c85db-QRO\n      Alt-Svc:\n      - h3=\":443\"; ma=86400\n    body:\n      encoding: ASCII-8BIT\n      string: |\n        {\n          \"id\": \"chatcmpl-BepaB3aebtsvCXvIXonyaT5ztg1hg\",\n          \"object\": \"chat.completion\",\n          \"created\": 1749071103,\n          \"model\": \"gpt-4o-2024-08-06\",\n          \"choices\": [\n            {\n              \"index\": 0,\n              \"message\": {\n                \"role\": \"assistant\",\n                \"content\": null,\n                \"tool_calls\": [\n                  {\n                    \"id\": \"call_cTCMxWd21ZHCFN5zpDgkOuPx\",\n                    \"type\": \"function\",\n                    \"function\": {\n                      \"name\": \"check_weather\",\n                      \"arguments\": \"{\\\"location\\\":\\\"Zipolite, Oaxaca\\\"}\"\n                    }\n                  }\n                ],\n                \"refusal\": null,\n                \"annotations\": []\n              },\n              \"logprobs\": null,\n              \"finish_reason\": \"tool_calls\"\n            }\n          ],\n          \"usage\": {\n            \"prompt_tokens\": 54,\n            \"completion_tokens\": 17,\n            \"total_tokens\": 71,\n            \"prompt_tokens_details\": {\n              \"cached_tokens\": 0,\n              \"audio_tokens\": 0\n            },\n            \"completion_tokens_details\": {\n      
        \"reasoning_tokens\": 0,\n              \"audio_tokens\": 0,\n              \"accepted_prediction_tokens\": 0,\n              \"rejected_prediction_tokens\": 0\n            }\n          },\n          \"service_tier\": \"default\",\n          \"system_fingerprint\": \"fp_9bddfca6e2\"\n        }\n  recorded_at: Wed, 04 Jun 2025 21:05:03 GMT\n- request:\n    method: post\n    uri: https://api.openai.com/v1/chat/completions\n    body:\n      encoding: UTF-8\n      string: '{\"max_completion_tokens\":16384,\"seed\":9999,\"temperature\":0.0,\"tools\":[{\"type\":\"function\",\"function\":{\"name\":\"check_weather\",\"parameters\":{\"type\":\"object\",\"properties\":{\"location\":{\"type\":\"string\"}}},\"description\":\"Check\n        the weather for a location\"}}],\"model\":\"gpt-4o\",\"messages\":[{\"role\":\"user\",\"content\":\"What\n        is the weather in Zipolite, Oaxaca?\"},{\"role\":\"assistant\",\"content\":null,\"tool_calls\":[{\"id\":\"5dbfeb20-15d7-46c8-99f7\",\"type\":\"function\",\"function\":{\"name\":\"check_weather\",\"arguments\":\"{\\\"location\\\":\\\"Zipolite,\n        Oaxaca\\\"}\"}}]},{\"role\":\"tool\",\"tool_call_id\":\"5dbfeb20-15d7-46c8-99f7\",\"name\":\"check_weather\",\"content\":\"The\n        weather in Zipolite, Oaxaca is hot and sunny\"}]}'\n    headers:\n      Content-Type:\n      - application/json\n      Authorization:\n      - Bearer REDACTED\n      Accept-Encoding:\n      - gzip;q=1.0,deflate;q=0.6,identity;q=0.3\n      Accept:\n      - \"*/*\"\n      User-Agent:\n      - Ruby\n  response:\n    status:\n      code: 200\n      message: OK\n    headers:\n      Date:\n      - Wed, 04 Jun 2025 21:05:04 GMT\n      Content-Type:\n      - application/json\n      Transfer-Encoding:\n      - chunked\n      Connection:\n      - keep-alive\n      Access-Control-Expose-Headers:\n      - X-Request-ID\n      Openai-Organization:\n      - user-4zwavkzyrdiz309q8ya0cgco\n      Openai-Processing-Ms:\n      - '554'\n      Openai-Version:\n   
   - '2020-10-01'\n      X-Envoy-Upstream-Service-Time:\n      - '557'\n      X-Ratelimit-Limit-Requests:\n      - '10000'\n      X-Ratelimit-Limit-Tokens:\n      - '30000000'\n      X-Ratelimit-Remaining-Requests:\n      - '9999'\n      X-Ratelimit-Remaining-Tokens:\n      - '29999973'\n      X-Ratelimit-Reset-Requests:\n      - 6ms\n      X-Ratelimit-Reset-Tokens:\n      - 0s\n      X-Request-Id:\n      - req_5b7b3850d6ba6fd212d768b1f3254b98\n      Strict-Transport-Security:\n      - max-age=31536000; includeSubDomains; preload\n      Cf-Cache-Status:\n      - DYNAMIC\n      Set-Cookie:\n      - __cf_bm=1OkHRImR_wTBsIOr6k_qmSCdmyJd8bPmDhHx8cZGb4U-1749071104-1.0.1.1-QwngYiOpnRN4XbyxJmFcVzxTknY268AR4xfRpiiXN2LT55b.CV4hjbGI7FUzVlq7dVnSsSoGHwBOvKR5KwZVLQlDPJgPN9vmZskXhmB7aUI;\n        path=/; expires=Wed, 04-Jun-25 21:35:04 GMT; domain=.api.openai.com; HttpOnly;\n        Secure; SameSite=None\n      - _cfuvid=OK5_EcAYiHqdFW.FQaBN28ZjTXeheiqmkwwgIKvtyRc-1749071104550-0.0.1.1-604800000;\n        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None\n      X-Content-Type-Options:\n      - nosniff\n      Server:\n      - cloudflare\n      Cf-Ray:\n      - 94aa62df6daa8d36-QRO\n      Alt-Svc:\n      - h3=\":443\"; ma=86400\n    body:\n      encoding: ASCII-8BIT\n      string: |\n        {\n          \"id\": \"chatcmpl-BepaBSN4Yzt3ldntURoE6t8UIzDMU\",\n          \"object\": \"chat.completion\",\n          \"created\": 1749071103,\n          \"model\": \"gpt-4o-2024-08-06\",\n          \"choices\": [\n            {\n              \"index\": 0,\n              \"message\": {\n                \"role\": \"assistant\",\n                \"content\": \"The weather in Zipolite, Oaxaca is currently hot and sunny.\",\n                \"refusal\": null,\n                \"annotations\": []\n              },\n              \"logprobs\": null,\n              \"finish_reason\": \"stop\"\n            }\n          ],\n          \"usage\": {\n            \"prompt_tokens\": 90,\n   
         \"completion_tokens\": 14,\n            \"total_tokens\": 104,\n            \"prompt_tokens_details\": {\n              \"cached_tokens\": 0,\n              \"audio_tokens\": 0\n            },\n            \"completion_tokens_details\": {\n              \"reasoning_tokens\": 0,\n              \"audio_tokens\": 0,\n              \"accepted_prediction_tokens\": 0,\n              \"rejected_prediction_tokens\": 0\n            }\n          },\n          \"service_tier\": \"default\",\n          \"system_fingerprint\": \"fp_9bddfca6e2\"\n        }\n  recorded_at: Wed, 04 Jun 2025 21:05:04 GMT\nrecorded_with: VCR 6.2.0\n"
  },
  {
    "path": "spec/vcr/Raix_FunctionDispatch/does_not_allow_non_exposed_methods_to_be_called.yml",
    "content": "---\nhttp_interactions:\n- request:\n    method: post\n    uri: https://api.openai.com/v1/chat/completions\n    body:\n      encoding: UTF-8\n      string: '{\"max_completion_tokens\":16384,\"seed\":9999,\"temperature\":0.0,\"tools\":[{\"type\":\"function\",\"function\":{\"name\":\"check_weather\",\"parameters\":{\"type\":\"object\",\"properties\":{\"location\":{\"type\":\"string\"}}},\"description\":\"Check\n        the weather for a location\"}}],\"model\":\"gpt-4o\",\"messages\":[{\"role\":\"user\",\"content\":\"What\n        is the weather in Zipolite, Oaxaca?\"}]}'\n    headers:\n      Content-Type:\n      - application/json\n      Authorization:\n      - Bearer REDACTED\n      Accept-Encoding:\n      - gzip;q=1.0,deflate;q=0.6,identity;q=0.3\n      Accept:\n      - \"*/*\"\n      User-Agent:\n      - Ruby\n  response:\n    status:\n      code: 200\n      message: OK\n    headers:\n      Date:\n      - Wed, 04 Jun 2025 19:29:59 GMT\n      Content-Type:\n      - application/json\n      Transfer-Encoding:\n      - chunked\n      Connection:\n      - keep-alive\n      Access-Control-Expose-Headers:\n      - X-Request-ID\n      Openai-Organization:\n      - user-4zwavkzyrdiz309q8ya0cgco\n      Openai-Processing-Ms:\n      - '588'\n      Openai-Version:\n      - '2020-10-01'\n      X-Envoy-Upstream-Service-Time:\n      - '597'\n      X-Ratelimit-Limit-Requests:\n      - '10000'\n      X-Ratelimit-Limit-Tokens:\n      - '30000000'\n      X-Ratelimit-Remaining-Requests:\n      - '9999'\n      X-Ratelimit-Remaining-Tokens:\n      - '29999988'\n      X-Ratelimit-Reset-Requests:\n      - 6ms\n      X-Ratelimit-Reset-Tokens:\n      - 0s\n      X-Request-Id:\n      - req_69e40129d3ed6f979bdaf7a355cafde5\n      Strict-Transport-Security:\n      - max-age=31536000; includeSubDomains; preload\n      Cf-Cache-Status:\n      - DYNAMIC\n      Set-Cookie:\n      - 
__cf_bm=GZItJOYpLZFwB7gcy5eAsUDtnri8Mxl5OowKUNe.otA-1749065399-1.0.1.1-AkBe3fja5karZUiC.9Yf40x3t.jjhpoC8doGVDNy7onKJIk4goYWOeT.gWqWfbzl4bZl1RGNHK.b884wXreLr0QBkpzSXv4xz9euQKDIIrs;\n        path=/; expires=Wed, 04-Jun-25 19:59:59 GMT; domain=.api.openai.com; HttpOnly;\n        Secure; SameSite=None\n      - _cfuvid=0LpWnYLDBxNnw1wFkMOaTnobh.qCkA3zdtXiRu.yx5M-1749065399189-0.0.1.1-604800000;\n        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None\n      X-Content-Type-Options:\n      - nosniff\n      Server:\n      - cloudflare\n      Cf-Ray:\n      - 94a9d7948be5c4bd-QRO\n      Alt-Svc:\n      - h3=\":443\"; ma=86400\n    body:\n      encoding: ASCII-8BIT\n      string: |\n        {\n          \"id\": \"chatcmpl-Beo6Az0Yyz3SPAInxFDO04mwxsstb\",\n          \"object\": \"chat.completion\",\n          \"created\": 1749065398,\n          \"model\": \"gpt-4o-2024-08-06\",\n          \"choices\": [\n            {\n              \"index\": 0,\n              \"message\": {\n                \"role\": \"assistant\",\n                \"content\": null,\n                \"tool_calls\": [\n                  {\n                    \"id\": \"call_pOk6AYAPUzPBJIfqNvntlhUR\",\n                    \"type\": \"function\",\n                    \"function\": {\n                      \"name\": \"check_weather\",\n                      \"arguments\": \"{\\\"location\\\":\\\"Zipolite, Oaxaca\\\"}\"\n                    }\n                  }\n                ],\n                \"refusal\": null,\n                \"annotations\": []\n              },\n              \"logprobs\": null,\n              \"finish_reason\": \"tool_calls\"\n            }\n          ],\n          \"usage\": {\n            \"prompt_tokens\": 54,\n            \"completion_tokens\": 17,\n            \"total_tokens\": 71,\n            \"prompt_tokens_details\": {\n              \"cached_tokens\": 0,\n              \"audio_tokens\": 0\n            },\n            \"completion_tokens_details\": {\n      
        \"reasoning_tokens\": 0,\n              \"audio_tokens\": 0,\n              \"accepted_prediction_tokens\": 0,\n              \"rejected_prediction_tokens\": 0\n            }\n          },\n          \"service_tier\": \"default\",\n          \"system_fingerprint\": \"fp_9bddfca6e2\"\n        }\n  recorded_at: Wed, 04 Jun 2025 19:29:59 GMT\n- request:\n    method: post\n    uri: https://openrouter.ai/api/v1/chat/completions\n    body:\n      encoding: UTF-8\n      string: '{\"messages\":[{\"role\":\"user\",\"content\":\"What is the weather in Zipolite,\n        Oaxaca?\"}],\"model\":\"gpt-4o\",\"max_tokens\":1000,\"seed\":9999,\"temperature\":0.0,\"tools\":[{\"type\":\"function\",\"function\":{\"name\":\"check_weather\",\"parameters\":{\"type\":\"object\",\"properties\":{\"location\":{\"type\":\"string\"}}},\"description\":\"Check\n        the weather for a location\"}}]}'\n    headers:\n      Authorization:\n      - Bearer REDACTED\n      Content-Type:\n      - application/json\n      X-Title:\n      - OpenRouter Ruby Client\n      Http-Referer:\n      - https://github.com/OlympiaAI/open_router\n      Accept-Encoding:\n      - gzip;q=1.0,deflate;q=0.6,identity;q=0.3\n      Accept:\n      - \"*/*\"\n      User-Agent:\n      - Ruby\n  response:\n    status:\n      code: 200\n      message: OK\n    headers:\n      Date:\n      - Wed, 04 Jun 2025 19:30:00 GMT\n      Content-Type:\n      - application/json\n      Transfer-Encoding:\n      - chunked\n      Connection:\n      - keep-alive\n      Cf-Ray:\n      - 94a9d79a2a44485d-DFW\n      Server:\n      - cloudflare\n      Access-Control-Allow-Origin:\n      - \"*\"\n      X-Clerk-Auth-Message:\n      - Invalid JWT form. A JWT consists of three parts separated by dots. 
(reason=token-invalid,\n        token-carrier=header)\n      X-Clerk-Auth-Reason:\n      - token-invalid\n      X-Clerk-Auth-Status:\n      - signed-out\n      Vary:\n      - Accept-Encoding\n    body:\n      encoding: ASCII-8BIT\n      string: \"\\n         \\n{\\\"id\\\":\\\"gen-1749065399-7gxrISsa5XGKBgC4kIZW\\\",\\\"provider\\\":\\\"OpenAI\\\",\\\"model\\\":\\\"openai/gpt-4o\\\",\\\"object\\\":\\\"chat.completion\\\",\\\"created\\\":1749065399,\\\"choices\\\":[{\\\"logprobs\\\":null,\\\"finish_reason\\\":\\\"tool_calls\\\",\\\"native_finish_reason\\\":\\\"tool_calls\\\",\\\"index\\\":0,\\\"message\\\":{\\\"role\\\":\\\"assistant\\\",\\\"content\\\":\\\"\\\",\\\"refusal\\\":null,\\\"reasoning\\\":null,\\\"tool_calls\\\":[{\\\"index\\\":0,\\\"id\\\":\\\"call_vZ5CXmymH91HwgiOnG1AtuHQ\\\",\\\"type\\\":\\\"function\\\",\\\"function\\\":{\\\"name\\\":\\\"check_weather\\\",\\\"arguments\\\":\\\"{\\\\\\\"location\\\\\\\":\\\\\\\"Zipolite,\n        Oaxaca\\\\\\\"}\\\"}}]}}],\\\"system_fingerprint\\\":\\\"fp_07871e2ad8\\\",\\\"usage\\\":{\\\"prompt_tokens\\\":54,\\\"completion_tokens\\\":17,\\\"total_tokens\\\":71,\\\"prompt_tokens_details\\\":{\\\"cached_tokens\\\":0},\\\"completion_tokens_details\\\":{\\\"reasoning_tokens\\\":0}}}\"\n  recorded_at: Wed, 04 Jun 2025 19:30:00 GMT\nrecorded_with: VCR 6.2.0\n"
  },
  {
    "path": "spec/vcr/Raix_FunctionDispatch/respects_max_tool_calls_parameter.yml",
    "content": "---\nhttp_interactions:\n- request:\n    method: post\n    uri: https://openrouter.ai/api/v1/chat/completions\n    body:\n      encoding: UTF-8\n      string: '{\"messages\":[{\"role\":\"user\",\"content\":\"Check the weather for multiple\n        cities repeatedly\"}],\"model\":\"meta-llama/llama-3.3-8b-instruct:free\",\"max_tokens\":1000,\"seed\":9999,\"temperature\":0.0,\"tools\":[{\"type\":\"function\",\"function\":{\"name\":\"check_weather\",\"parameters\":{\"type\":\"object\",\"properties\":{\"location\":{\"type\":\"string\"}}},\"description\":\"Check\n        the weather for a location\"}}]}'\n    headers:\n      Authorization:\n      - Bearer REDACTED\n      Content-Type:\n      - application/json\n      X-Title:\n      - OpenRouter Ruby Client\n      Http-Referer:\n      - https://github.com/OlympiaAI/open_router\n      Accept-Encoding:\n      - gzip;q=1.0,deflate;q=0.6,identity;q=0.3\n      Accept:\n      - \"*/*\"\n      User-Agent:\n      - Ruby\n  response:\n    status:\n      code: 200\n      message: OK\n    headers:\n      Date:\n      - Wed, 04 Jun 2025 19:30:00 GMT\n      Content-Type:\n      - application/json\n      Transfer-Encoding:\n      - chunked\n      Connection:\n      - keep-alive\n      Cf-Ray:\n      - 94a9d7a04b8669c0-DFW\n      Server:\n      - cloudflare\n      Access-Control-Allow-Origin:\n      - \"*\"\n      X-Clerk-Auth-Message:\n      - Invalid JWT form. A JWT consists of three parts separated by dots. 
(reason=token-invalid,\n        token-carrier=header)\n      X-Clerk-Auth-Reason:\n      - token-invalid\n      X-Clerk-Auth-Status:\n      - signed-out\n      Vary:\n      - Accept-Encoding\n    body:\n      encoding: ASCII-8BIT\n      string: \"\\n         \\n{\\\"id\\\":\\\"gen-1749065400-Z33udWegU7uCygjixl0o\\\",\\\"provider\\\":\\\"Meta\\\",\\\"model\\\":\\\"meta-llama/llama-3.3-8b-instruct:free\\\",\\\"object\\\":\\\"chat.completion\\\",\\\"created\\\":1749065400,\\\"choices\\\":[{\\\"logprobs\\\":null,\\\"finish_reason\\\":\\\"tool_calls\\\",\\\"native_finish_reason\\\":\\\"tool_calls\\\",\\\"index\\\":0,\\\"message\\\":{\\\"role\\\":\\\"assistant\\\",\\\"content\\\":\\\"\\\",\\\"refusal\\\":null,\\\"reasoning\\\":null,\\\"tool_calls\\\":[{\\\"index\\\":0,\\\"type\\\":\\\"function\\\",\\\"id\\\":\\\"4a146981-cfb9-45cf-8566-4e6c5b692bdd\\\",\\\"function\\\":{\\\"name\\\":\\\"check_weather\\\",\\\"arguments\\\":\\\"{\\\\\\\"location\\\\\\\":\\\\\\\"New\n        York\\\\\\\"}\\\"}},{\\\"index\\\":1,\\\"type\\\":\\\"function\\\",\\\"id\\\":\\\"7a6e229a-83f8-497e-93f3-d8cb6c6aa5da\\\",\\\"function\\\":{\\\"name\\\":\\\"check_weather\\\",\\\"arguments\\\":\\\"{\\\\\\\"location\\\\\\\":\\\\\\\"Los\n        Angeles\\\\\\\"}\\\"}},{\\\"index\\\":2,\\\"type\\\":\\\"function\\\",\\\"id\\\":\\\"e30fe3e1-f96e-431b-97ab-9a2659b3788c\\\",\\\"function\\\":{\\\"name\\\":\\\"check_weather\\\",\\\"arguments\\\":\\\"{\\\\\\\"location\\\\\\\":\\\\\\\"Chicago\\\\\\\"}\\\"}}]}}],\\\"usage\\\":{\\\"prompt_tokens\\\":212,\\\"completion_tokens\\\":22,\\\"total_tokens\\\":234}}\"\n  recorded_at: Wed, 04 Jun 2025 19:30:00 GMT\n- request:\n    method: post\n    uri: https://openrouter.ai/api/v1/chat/completions\n    body:\n      encoding: UTF-8\n      string: '{\"messages\":[{\"role\":\"user\",\"content\":\"Check the weather for multiple\n        cities repeatedly\"},{\"role\":\"system\",\"content\":\"Maximum tool calls (2) exceeded.\n        Please provide a final response to the user 
without calling any more tools.\"}],\"model\":\"meta-llama/llama-3.3-8b-instruct:free\",\"max_tokens\":1000,\"seed\":9999,\"temperature\":0.0}'\n    headers:\n      Authorization:\n      - Bearer REDACTED\n      Content-Type:\n      - application/json\n      X-Title:\n      - OpenRouter Ruby Client\n      Http-Referer:\n      - https://github.com/OlympiaAI/open_router\n      Accept-Encoding:\n      - gzip;q=1.0,deflate;q=0.6,identity;q=0.3\n      Accept:\n      - \"*/*\"\n      User-Agent:\n      - Ruby\n  response:\n    status:\n      code: 200\n      message: OK\n    headers:\n      Date:\n      - Wed, 04 Jun 2025 19:30:02 GMT\n      Content-Type:\n      - application/json\n      Transfer-Encoding:\n      - chunked\n      Connection:\n      - keep-alive\n      Cf-Ray:\n      - 94a9d7a58b77e956-DFW\n      Server:\n      - cloudflare\n      Access-Control-Allow-Origin:\n      - \"*\"\n      X-Clerk-Auth-Message:\n      - Invalid JWT form. A JWT consists of three parts separated by dots. (reason=token-invalid,\n        token-carrier=header)\n      X-Clerk-Auth-Reason:\n      - token-invalid\n      X-Clerk-Auth-Status:\n      - signed-out\n      Vary:\n      - Accept-Encoding\n    body:\n      encoding: ASCII-8BIT\n      string: \"\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n\n        \\        \\n{\\\"id\\\":\\\"gen-1749065401-X7R5XGC69xfPofILTfzf\\\",\\\"provider\\\":\\\"Meta\\\",\\\"model\\\":\\\"meta-llama/llama-3.3-8b-instruct:free\\\",\\\"object\\\":\\\"chat.completion\\\",\\\"created\\\":1749065401,\\\"choices\\\":[{\\\"logprobs\\\":null,\\\"finish_reason\\\":\\\"stop\\\",\\\"native_finish_reason\\\":\\\"stop\\\",\\\"index\\\":0,\\\"message\\\":{\\\"role\\\":\\\"assistant\\\",\\\"content\\\":\\\"I\n        can provide a general solution on how to check the weather for multiple cities\n        repeatedly. 
\\\\n\\\\nTo achieve this, you can use a combination of APIs (Application\n        Programming Interfaces) that provide weather data and a programming language\n        to repeatedly fetch and display the weather information. Here's a high-level\n        approach:\\\\n\\\\n1. **Choose a weather API**: Select a reliable weather API\n        that supports fetching weather data for multiple cities. Some popular options\n        include OpenWeatherMap, WeatherAPI, and AccuWeather.\\\\n\\\\n2. **Set up API\n        keys**: Register for an account with the chosen API and obtain an API key.\n        This key is usually required to access the API's services.\\\\n\\\\n3. **Select\n        programming language**: Choose a programming language you're comfortable with\n        to write a script that can repeatedly fetch weather data. Python is a popular\n        choice due to its simplicity and extensive libraries.\\\\n\\\\n4. **Write the\n        script**:\\\\n    - Import necessary libraries (e.g., `requests` for HTTP requests\n        in Python).\\\\n    - Define a function to fetch the weather data for a given\n        city using the API.\\\\n    - Use a loop to repeatedly call this function for\n        each city.\\\\n    - Display the fetched weather data.\\\\n\\\\n5. 
**Schedule the\n        script**: To make the script run repeatedly, you can use a scheduler like\n        `schedule` in Python or `cron` jobs if you're on a Unix-like system.\\\\n\\\\nHere's\n        a simple Python example using the OpenWeatherMap API:\\\\n\\\\n```python\\\\nimport\n        requests\\\\nimport schedule\\\\nimport time\\\\n\\\\ndef get_weather(city, api_key):\\\\n\n        \\   base_url = f\\\\\\\"http://api.openweathermap.org/data/2.5/weather?q={city}&appid={api_key}\\\\\\\"\\\\n\n        \\   response = requests.get(base_url)\\\\n    weather_data = response.json()\\\\n\n        \\   print(f\\\\\\\"Weather in {city}: {weather_data['weather'][0]['description']}\\\\\\\")\\\\n\\\\n#\n        Replace 'YOUR_API_KEY' and 'city1', 'city2' with your actual API key and city\n        names\\\\napi_key = 'YOUR_API_KEY'\\\\ncities = ['city1', 'city2']\\\\n\\\\nfor city\n        in cities:\\\\n    get_weather(city, api_key)\\\\n\\\\n# Schedule the job to run\n        every hour\\\\ndef job():\\\\n    for city in cities:\\\\n        get_weather(city,\n        api_key)\\\\n\\\\nschedule.every(1).hours.do(job)  # Run job every 1 hour\\\\n\\\\nwhile\n        True:\\\\n    schedule.run_pending()\\\\n    time.sleep(1)\\\\n```\\\\n\\\\nThis example\n        is basic and may need adjustments based on the API's requirements and your\n        specific needs.\\\",\\\"refusal\\\":null,\\\"reasoning\\\":null}}],\\\"usage\\\":{\\\"prompt_tokens\\\":40,\\\"completion_tokens\\\":508,\\\"total_tokens\\\":548}}\"\n  recorded_at: Wed, 04 Jun 2025 19:30:04 GMT\nrecorded_with: VCR 6.2.0\n"
  },
  {
    "path": "spec/vcr/Raix_FunctionDispatch/supports_filtering_tools_with_the_tools_parameter.yml",
    "content": "---\nhttp_interactions:\n- request:\n    method: post\n    uri: https://openrouter.ai/api/v1/chat/completions\n    body:\n      encoding: UTF-8\n      string: '{\"model\":\"meta-llama/llama-3.3-8b-instruct:free\",\"messages\":[{\"role\":\"user\",\"content\":\"What\n        is the weather in Zipolite, Oaxaca?\"},{\"role\":\"user\",\"content\":\"Call the check_weather\n        function.\"}],\"stream\":false,\"temperature\":0.0,\"seed\":9999}'\n    headers:\n      User-Agent:\n      - Faraday v2.9.2\n      Authorization:\n      - Bearer REDACTED\n      Content-Type:\n      - application/json\n      Accept-Encoding:\n      - gzip;q=1.0,deflate;q=0.6,identity;q=0.3\n      Accept:\n      - \"*/*\"\n  response:\n    status:\n      code: 200\n      message: OK\n    headers:\n      Date:\n      - Fri, 14 Nov 2025 23:50:18 GMT\n      Content-Type:\n      - application/json\n      Transfer-Encoding:\n      - chunked\n      Connection:\n      - keep-alive\n      Access-Control-Allow-Origin:\n      - \"*\"\n      Vary:\n      - Accept-Encoding\n      Permissions-Policy:\n      - payment=(self \"https://checkout.stripe.com\" \"https://connect-js.stripe.com\"\n        \"https://js.stripe.com\" \"https://*.js.stripe.com\" \"https://hooks.stripe.com\")\n      Referrer-Policy:\n      - no-referrer, strict-origin-when-cross-origin\n      X-Content-Type-Options:\n      - nosniff\n      Server:\n      - cloudflare\n      Cf-Ray:\n      - 99ea68fb4b934cc1-QRO\n    body:\n      encoding: ASCII-8BIT\n      string: \"\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n\n        \\        \\n\\n         
\\n{\\\"id\\\":\\\"gen-1763164215-WXSG3QtRhpXL7xIppmLA\\\",\\\"provider\\\":\\\"Meta\\\",\\\"model\\\":\\\"meta-llama/llama-3.3-8b-instruct:free\\\",\\\"object\\\":\\\"chat.completion\\\",\\\"created\\\":1763164217,\\\"choices\\\":[{\\\"logprobs\\\":null,\\\"finish_reason\\\":\\\"stop\\\",\\\"native_finish_reason\\\":\\\"stop\\\",\\\"index\\\":0,\\\"message\\\":{\\\"role\\\":\\\"assistant\\\",\\\"content\\\":\\\"I'm\n        a large language model, I don't have have access to real-time weather data.\n        However, I can provide general information about the weather in Zipolite,\n        Oaxaca.\\\\n\\\\nZipolite, Oaxaca is a coastal town in southern Mexico, known\n        for its laid-back atmosphere and stunning beaches. The weather in Zipolite\n        is typically warm and sunny year-round, with two main seasons: the dry season\n        (December to May) and the wet season (June to November).\\\\n\\\\nIf you'd like,\n        I can provide more information about the average temperature and precipitation\n        in Zipolite, or I can try to find a weather API that can provide real-time\n        weather data for the area. 
\\\\n\\\\nHere is a Python function that you can use\n        to get the weather in Zipolite, Oaxaca:\\\\n\\\\n```python\\\\nimport requests\\\\n\\\\ndef\n        check_weather(location):\\\\n    api_key = \\\\\\\"YOUR_OPENWEATHERMAP_API_KEY\\\\\\\"\\\\n\n        \\   base_url = \\\\\\\"http://api.openweathermap.org/data/2.5/weather\\\\\\\"\\\\n    params\n        = {\\\\n        \\\\\\\"q\\\\\\\": location,\\\\n        \\\\\\\"appid\\\\\\\": api_key,\\\\n        \\\\\\\"units\\\\\\\":\n        \\\\\\\"metric\\\\\\\"\\\\n    }\\\\n    response = requests.get(base_url, params=params)\\\\n\n        \\   weather_data = response.json()\\\\n    return weather_data\\\\n\\\\nlocation\n        = \\\\\\\"Zipolite, Oaxaca\\\\\\\"\\\\nweather = check_weather(location)\\\\nprint(weather)\\\\n```\\\\n\\\\nPlease\n        note that you need to replace \\\\\\\"YOUR_OPENWEATHERMAP_API_KEY\\\\\\\" with your\n        actual OpenWeatherMap API key.\\\",\\\"refusal\\\":null,\\\"reasoning\\\":null}}],\\\"usage\\\":{\\\"prompt_tokens\\\":34,\\\"completion_tokens\\\":302,\\\"total_tokens\\\":336}}\"\n  recorded_at: Fri, 14 Nov 2025 23:50:19 GMT\nrecorded_with: VCR 6.2.0\n"
  },
  {
    "path": "spec/vcr/Raix_FunctionDispatch/supports_multiple_tool_calls_in_a_single_response.yml",
    "content": "---\nhttp_interactions:\n- request:\n    method: post\n    uri: https://api.openai.com/v1/chat/completions\n    body:\n      encoding: UTF-8\n      string: '{\"max_completion_tokens\":16384,\"temperature\":0.0,\"tools\":[{\"type\":\"function\",\"function\":{\"name\":\"call_this_function_twice\",\"parameters\":{\"type\":\"object\",\"properties\":{}}}}],\"model\":\"gpt-4o\",\"messages\":[{\"role\":\"user\",\"content\":\"For\n        testing purposes, call the provided tool function twice in a single response.\"}]}'\n    headers:\n      Content-Type:\n      - application/json\n      Authorization:\n      - Bearer REDACTED\n      Accept-Encoding:\n      - gzip;q=1.0,deflate;q=0.6,identity;q=0.3\n      Accept:\n      - \"*/*\"\n      User-Agent:\n      - Ruby\n  response:\n    status:\n      code: 200\n      message: OK\n    headers:\n      Date:\n      - Wed, 04 Jun 2025 21:06:03 GMT\n      Content-Type:\n      - application/json\n      Transfer-Encoding:\n      - chunked\n      Connection:\n      - keep-alive\n      Access-Control-Expose-Headers:\n      - X-Request-ID\n      Openai-Organization:\n      - user-4zwavkzyrdiz309q8ya0cgco\n      Openai-Processing-Ms:\n      - '746'\n      Openai-Version:\n      - '2020-10-01'\n      X-Envoy-Upstream-Service-Time:\n      - '749'\n      X-Ratelimit-Limit-Requests:\n      - '10000'\n      X-Ratelimit-Limit-Tokens:\n      - '30000000'\n      X-Ratelimit-Remaining-Requests:\n      - '9999'\n      X-Ratelimit-Remaining-Tokens:\n      - '29999976'\n      X-Ratelimit-Reset-Requests:\n      - 6ms\n      X-Ratelimit-Reset-Tokens:\n      - 0s\n      X-Request-Id:\n      - req_575b61fc0ababc2d11cea7ca3ea6cd22\n      Strict-Transport-Security:\n      - max-age=31536000; includeSubDomains; preload\n      Cf-Cache-Status:\n      - DYNAMIC\n      Set-Cookie:\n      - 
__cf_bm=3Zr4ZDpepZCfVmAT5Tb8N6kllgfkvRUt_6_Z.m.H4VE-1749071163-1.0.1.1-OJHjTkEFFXDztsHnltsZmDV6BimrKBv.UGCXo_LkZnRqO4MM3cK_OGG95VcTvAX4cTFNeQcLibqSmNFbmaOnie4U6G46cewQ_Twyk7Cal4k;\n        path=/; expires=Wed, 04-Jun-25 21:36:03 GMT; domain=.api.openai.com; HttpOnly;\n        Secure; SameSite=None\n      - _cfuvid=XvUlOIegoSqlnMFF2qj_bWQh0H9PUzutEp3krAxGu7I-1749071163608-0.0.1.1-604800000;\n        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None\n      X-Content-Type-Options:\n      - nosniff\n      Server:\n      - cloudflare\n      Cf-Ray:\n      - 94aa644f4c668d36-QRO\n      Alt-Svc:\n      - h3=\":443\"; ma=86400\n    body:\n      encoding: ASCII-8BIT\n      string: |\n        {\n          \"id\": \"chatcmpl-Bepb8BKeCpUI8FX3kNlQ0mv5rW30p\",\n          \"object\": \"chat.completion\",\n          \"created\": 1749071162,\n          \"model\": \"gpt-4o-2024-08-06\",\n          \"choices\": [\n            {\n              \"index\": 0,\n              \"message\": {\n                \"role\": \"assistant\",\n                \"content\": null,\n                \"tool_calls\": [\n                  {\n                    \"id\": \"call_lQWevJ9PRcJlvRTO6cDYoGEw\",\n                    \"type\": \"function\",\n                    \"function\": {\n                      \"name\": \"call_this_function_twice\",\n                      \"arguments\": \"{}\"\n                    }\n                  },\n                  {\n                    \"id\": \"call_ZVZudyGKawkuy4x1uj5zi1SL\",\n                    \"type\": \"function\",\n                    \"function\": {\n                      \"name\": \"call_this_function_twice\",\n                      \"arguments\": \"{}\"\n                    }\n                  }\n                ],\n                \"refusal\": null,\n                \"annotations\": []\n              },\n              \"logprobs\": null,\n              \"finish_reason\": \"tool_calls\"\n            }\n          ],\n          \"usage\": {\n   
         \"prompt_tokens\": 49,\n            \"completion_tokens\": 45,\n            \"total_tokens\": 94,\n            \"prompt_tokens_details\": {\n              \"cached_tokens\": 0,\n              \"audio_tokens\": 0\n            },\n            \"completion_tokens_details\": {\n              \"reasoning_tokens\": 0,\n              \"audio_tokens\": 0,\n              \"accepted_prediction_tokens\": 0,\n              \"rejected_prediction_tokens\": 0\n            }\n          },\n          \"service_tier\": \"default\",\n          \"system_fingerprint\": \"fp_55d88aaf2f\"\n        }\n  recorded_at: Wed, 04 Jun 2025 21:06:03 GMT\n- request:\n    method: post\n    uri: https://api.openai.com/v1/chat/completions\n    body:\n      encoding: UTF-8\n      string: '{\"max_completion_tokens\":16384,\"temperature\":0.0,\"tools\":[{\"type\":\"function\",\"function\":{\"name\":\"call_this_function_twice\",\"parameters\":{\"type\":\"object\",\"properties\":{}}}}],\"model\":\"gpt-4o\",\"messages\":[{\"role\":\"user\",\"content\":\"For\n        testing purposes, call the provided tool function twice in a single response.\"},{\"role\":\"assistant\",\"content\":null,\"tool_calls\":[{\"id\":\"a3a52466-cec3-493a-8e22\",\"type\":\"function\",\"function\":{\"name\":\"call_this_function_twice\",\"arguments\":\"{}\"}}]},{\"role\":\"tool\",\"tool_call_id\":\"a3a52466-cec3-493a-8e22\",\"name\":\"call_this_function_twice\",\"content\":\"\"},{\"role\":\"assistant\",\"content\":null,\"tool_calls\":[{\"id\":\"4ad9c733-dde1-45d7-b2c4\",\"type\":\"function\",\"function\":{\"name\":\"call_this_function_twice\",\"arguments\":\"{}\"}}]},{\"role\":\"tool\",\"tool_call_id\":\"4ad9c733-dde1-45d7-b2c4\",\"name\":\"call_this_function_twice\",\"content\":\"\"}]}'\n    headers:\n      Content-Type:\n      - application/json\n      Authorization:\n      - Bearer REDACTED\n      Accept-Encoding:\n      - gzip;q=1.0,deflate;q=0.6,identity;q=0.3\n      Accept:\n      - \"*/*\"\n      User-Agent:\n      - 
Ruby\n  response:\n    status:\n      code: 200\n      message: OK\n    headers:\n      Date:\n      - Wed, 04 Jun 2025 21:06:04 GMT\n      Content-Type:\n      - application/json\n      Transfer-Encoding:\n      - chunked\n      Connection:\n      - keep-alive\n      Access-Control-Expose-Headers:\n      - X-Request-ID\n      Openai-Organization:\n      - user-4zwavkzyrdiz309q8ya0cgco\n      Openai-Processing-Ms:\n      - '600'\n      Openai-Version:\n      - '2020-10-01'\n      X-Envoy-Upstream-Service-Time:\n      - '608'\n      X-Ratelimit-Limit-Requests:\n      - '10000'\n      X-Ratelimit-Limit-Tokens:\n      - '30000000'\n      X-Ratelimit-Remaining-Requests:\n      - '9999'\n      X-Ratelimit-Remaining-Tokens:\n      - '29999973'\n      X-Ratelimit-Reset-Requests:\n      - 6ms\n      X-Ratelimit-Reset-Tokens:\n      - 0s\n      X-Request-Id:\n      - req_0db3563d771d0e47dafb8ecfb080e574\n      Strict-Transport-Security:\n      - max-age=31536000; includeSubDomains; preload\n      Cf-Cache-Status:\n      - DYNAMIC\n      Set-Cookie:\n      - __cf_bm=jxmS6Mw9BaQyr9N2g9F5NhBhYwMHJ_Cy9RGUi0alMx0-1749071164-1.0.1.1-qVac1._yqaUdO1FOmEQtf9Oj7QbeVPtgOXbvZSSP7kVVu7UeubstBn9jnIx2TWGeGQwFe.RSiALbPI49M0RB24s.BjVMqA3Y1i17.mDthqk;\n        path=/; expires=Wed, 04-Jun-25 21:36:04 GMT; domain=.api.openai.com; HttpOnly;\n        Secure; SameSite=None\n      - _cfuvid=0a6XMmRhWi9zibg7FeyGsFQlk3H3_IkrBref.Sm0v7Q-1749071164884-0.0.1.1-604800000;\n        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None\n      X-Content-Type-Options:\n      - nosniff\n      Server:\n      - cloudflare\n      Cf-Ray:\n      - 94aa6454fc0ad749-QRO\n      Alt-Svc:\n      - h3=\":443\"; ma=86400\n    body:\n      encoding: ASCII-8BIT\n      string: |\n        {\n          \"id\": \"chatcmpl-BepbA5OHfAVLsYs57eeS63CaWJYSY\",\n          \"object\": \"chat.completion\",\n          \"created\": 1749071164,\n          \"model\": \"gpt-4o-2024-08-06\",\n          \"choices\": [\n            
{\n              \"index\": 0,\n              \"message\": {\n                \"role\": \"assistant\",\n                \"content\": \"I have called the function twice as requested.\",\n                \"refusal\": null,\n                \"annotations\": []\n              },\n              \"logprobs\": null,\n              \"finish_reason\": \"stop\"\n            }\n          ],\n          \"usage\": {\n            \"prompt_tokens\": 97,\n            \"completion_tokens\": 10,\n            \"total_tokens\": 107,\n            \"prompt_tokens_details\": {\n              \"cached_tokens\": 0,\n              \"audio_tokens\": 0\n            },\n            \"completion_tokens_details\": {\n              \"reasoning_tokens\": 0,\n              \"audio_tokens\": 0,\n              \"accepted_prediction_tokens\": 0,\n              \"rejected_prediction_tokens\": 0\n            }\n          },\n          \"service_tier\": \"default\",\n          \"system_fingerprint\": \"fp_55d88aaf2f\"\n        }\n  recorded_at: Wed, 04 Jun 2025 21:06:04 GMT\nrecorded_with: VCR 6.2.0\n"
  },
  {
    "path": "spec/vcr/Raix_Predicate/maybe.yml",
    "content": "---\nhttp_interactions:\n- request:\n    method: post\n    uri: https://openrouter.ai/api/v1/chat/completions\n    body:\n      encoding: UTF-8\n      string: '{\"messages\":[{\"role\":\"system\",\"content\":\"Always answer ''Yes, '', ''No,\n        '', or ''Maybe, '' followed by a concise explanation!\"},{\"role\":\"user\",\"content\":\"Should\n        I quit my job?\"}],\"model\":\"meta-llama/llama-3-8b-instruct:free\",\"max_tokens\":1000,\"temperature\":0.0}'\n    headers:\n      Authorization:\n      - REDACTED\n      Content-Type:\n      - application/json\n      X-Title:\n      - OpenRouter Ruby Client\n      Http-Referer:\n      - https://github.com/OlympiaAI/open_router\n      Accept-Encoding:\n      - gzip;q=1.0,deflate;q=0.6,identity;q=0.3\n      Accept:\n      - \"*/*\"\n      User-Agent:\n      - Ruby\n  response:\n    status:\n      code: 200\n      message: OK\n    headers:\n      Date:\n      - Mon, 11 Nov 2024 01:30:51 GMT\n      Content-Type:\n      - application/json\n      Transfer-Encoding:\n      - chunked\n      Connection:\n      - keep-alive\n      Access-Control-Allow-Origin:\n      - \"*\"\n      Cf-Placement:\n      - local-EWR\n      X-Clerk-Auth-Message:\n      - Invalid JWT form. A JWT consists of three parts separated by dots. 
(reason=token-invalid,\n        token-carrier=header)\n      X-Clerk-Auth-Reason:\n      - token-invalid\n      X-Clerk-Auth-Status:\n      - signed-out\n      Vary:\n      - Accept-Encoding\n      Server:\n      - cloudflare\n      Cf-Ray:\n      - 8e0a84f979c8c42a-EWR\n    body:\n      encoding: ASCII-8BIT\n      string: \"\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n\n        \\        \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n\n        \\        \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n\n        \\        \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n\n        \\        \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n{\\\"id\\\":\\\"gen-1731288651-EY4eenIoR1d4niMiwMoK\\\",\\\"provider\\\":\\\"Lepton\\\",\\\"model\\\":\\\"meta-llama/llama-3-8b-instruct\\\",\\\"object\\\":\\\"chat.completion\\\",\\\"created\\\":1731288651,\\\"choices\\\":[{\\\"logprobs\\\":null,\\\"finish_reason\\\":\\\"stop\\\",\\\"index\\\":0,\\\"message\\\":{\\\"role\\\":\\\"assistant\\\",\\\"content\\\":\\\"Maybe, it depends on the specific situation and context.\\\",\\\"refusal\\\":\\\"\\\"}}],\\\"usage\\\":{\\\"prompt_tokens\\\":42,\\\"completion_tokens\\\":76,\\\"total_tokens\\\":118}}\"\n  recorded_at: Mon, 11 Nov 2024 01:30:52 GMT\nrecorded_with: VCR 6.2.0\n"
  },
  {
    "path": "spec/vcr/Raix_Predicate/no.yml",
    "content": "---\nhttp_interactions:\n- request:\n    method: post\n    uri: https://openrouter.ai/api/v1/chat/completions\n    body:\n      encoding: UTF-8\n      string: '{\"messages\":[{\"role\":\"system\",\"content\":\"Always answer ''Yes, '', ''No,\n        '', or ''Maybe, '' followed by a concise explanation!\"},{\"role\":\"user\",\"content\":\"Is\n        the Eiffel Tower in Madrid?\"}],\"model\":\"meta-llama/llama-3-8b-instruct:free\",\"max_tokens\":1000,\"temperature\":0.0}'\n    headers:\n      Authorization:\n      - Bearer REDACTED\n      Content-Type:\n      - application/json\n      X-Title:\n      - OpenRouter Ruby Client\n      Http-Referer:\n      - https://github.com/OlympiaAI/open_router\n      Accept-Encoding:\n      - gzip;q=1.0,deflate;q=0.6,identity;q=0.3\n      Accept:\n      - \"*/*\"\n      User-Agent:\n      - Ruby\n  response:\n    status:\n      code: 200\n      message: OK\n    headers:\n      Date:\n      - Mon, 11 Nov 2024 01:30:50 GMT\n      Content-Type:\n      - application/json\n      Transfer-Encoding:\n      - chunked\n      Connection:\n      - keep-alive\n      Access-Control-Allow-Origin:\n      - \"*\"\n      Cf-Placement:\n      - local-EWR\n      X-Clerk-Auth-Message:\n      - Invalid JWT form. A JWT consists of three parts separated by dots. 
(reason=token-invalid,\n        token-carrier=header)\n      X-Clerk-Auth-Reason:\n      - token-invalid\n      X-Clerk-Auth-Status:\n      - signed-out\n      Vary:\n      - Accept-Encoding\n      Server:\n      - cloudflare\n      Cf-Ray:\n      - 8e0a84f3c9964339-EWR\n    body:\n      encoding: ASCII-8BIT\n      string: \"\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n\n        \\        \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n{\\\"id\\\":\\\"gen-1731288651-3phJY0AHxIfda4tzhonr\\\",\\\"provider\\\":\\\"Lepton\\\",\\\"model\\\":\\\"meta-llama/llama-3-8b-instruct\\\",\\\"object\\\":\\\"chat.completion\\\",\\\"created\\\":1731288651,\\\"choices\\\":[{\\\"logprobs\\\":null,\\\"finish_reason\\\":\\\"stop\\\",\\\"index\\\":0,\\\"message\\\":{\\\"role\\\":\\\"assistant\\\",\\\"content\\\":\\\"No,\n        the Eiffel Tower is located in Paris, France, not Madrid, Spain.\\\",\\\"refusal\\\":\\\"\\\"}}],\\\"usage\\\":{\\\"prompt_tokens\\\":45,\\\"completion_tokens\\\":20,\\\"total_tokens\\\":65}}\"\n  recorded_at: Mon, 11 Nov 2024 01:30:51 GMT\nrecorded_with: VCR 6.2.0\n"
  },
  {
    "path": "spec/vcr/Raix_Predicate/yes.yml",
    "content": "---\nhttp_interactions:\n- request:\n    method: post\n    uri: https://openrouter.ai/api/v1/chat/completions\n    body:\n      encoding: UTF-8\n      string: '{\"messages\":[{\"role\":\"system\",\"content\":\"Always answer ''Yes, '', ''No,\n        '', or ''Maybe, '' followed by a concise explanation!\"},{\"role\":\"user\",\"content\":\"Is\n        Ruby on Rails a web application framework?\"}],\"model\":\"meta-llama/llama-3-8b-instruct:free\",\"max_tokens\":1000,\"temperature\":0.0}'\n    headers:\n      Authorization:\n      - Bearer REDACTED\n      Content-Type:\n      - application/json\n      X-Title:\n      - OpenRouter Ruby Client\n      Http-Referer:\n      - https://github.com/OlympiaAI/open_router\n      Accept-Encoding:\n      - gzip;q=1.0,deflate;q=0.6,identity;q=0.3\n      Accept:\n      - \"*/*\"\n      User-Agent:\n      - Ruby\n  response:\n    status:\n      code: 200\n      message: OK\n    headers:\n      Date:\n      - Mon, 11 Nov 2024 01:30:50 GMT\n      Content-Type:\n      - application/json\n      Transfer-Encoding:\n      - chunked\n      Connection:\n      - keep-alive\n      Access-Control-Allow-Origin:\n      - \"*\"\n      Cf-Placement:\n      - local-EWR\n      X-Clerk-Auth-Message:\n      - Invalid JWT form. A JWT consists of three parts separated by dots. 
(reason=token-invalid,\n        token-carrier=header)\n      X-Clerk-Auth-Reason:\n      - token-invalid\n      X-Clerk-Auth-Status:\n      - signed-out\n      Vary:\n      - Accept-Encoding\n      Server:\n      - cloudflare\n      Cf-Ray:\n      - 8e0a84ef1f931967-EWR\n    body:\n      encoding: ASCII-8BIT\n      string: \"\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n\n        \\        \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n         \\n\\n\n        \\        \\n{\\\"id\\\":\\\"gen-1731288650-XUq3WyEJOxhengNF9FHk\\\",\\\"provider\\\":\\\"Lepton\\\",\\\"model\\\":\\\"meta-llama/llama-3-8b-instruct\\\",\\\"object\\\":\\\"chat.completion\\\",\\\"created\\\":1731288650,\\\"choices\\\":[{\\\"logprobs\\\":null,\\\"finish_reason\\\":\\\"stop\\\",\\\"index\\\":0,\\\"message\\\":{\\\"role\\\":\\\"assistant\\\",\\\"content\\\":\\\"Yes, Ruby on Rails is a web application framework.\\\",\\\"refusal\\\":\\\"\\\"}}],\\\"usage\\\":{\\\"prompt_tokens\\\":45,\\\"completion_tokens\\\":25,\\\"total_tokens\\\":70}}\"\n  recorded_at: Mon, 11 Nov 2024 01:30:50 GMT\nrecorded_with: VCR 6.2.0\n"
  }
]