Showing preview only (347K chars total). Download the full file or copy to clipboard to get everything.
Repository: OlympiaAI/raix
Branch: main
Commit: 7e1b0da45fe8
Files: 64
Total size: 325.6 KB
Directory structure:
gitextract_86u4topi/
├── .github/
│ └── workflows/
│ └── main.yml
├── .gitignore
├── .rspec
├── .rubocop.yml
├── .ruby-version
├── CHANGELOG.md
├── CLAUDE.md
├── CODE_OF_CONDUCT.md
├── Gemfile
├── Guardfile
├── LICENSE.txt
├── README.llm
├── README.md
├── Rakefile
├── bin/
│ ├── console
│ └── setup
├── lib/
│ ├── raix/
│ │ ├── chat_completion.rb
│ │ ├── completion_context.rb
│ │ ├── configuration.rb
│ │ ├── function_dispatch.rb
│ │ ├── function_tool_adapter.rb
│ │ ├── mcp/
│ │ │ ├── sse_client.rb
│ │ │ ├── stdio_client.rb
│ │ │ └── tool.rb
│ │ ├── mcp.rb
│ │ ├── message_adapters/
│ │ │ └── base.rb
│ │ ├── predicate.rb
│ │ ├── prompt_declarations.rb
│ │ ├── response_format.rb
│ │ ├── transcript_adapter.rb
│ │ └── version.rb
│ └── raix.rb
├── raix.gemspec
├── sig/
│ └── raix.rbs
└── spec/
├── files/
│ └── getting_real.md
├── raix/
│ ├── before_completion_spec.rb
│ ├── chat_completion_spec.rb
│ ├── completion_context_spec.rb
│ ├── configuration_spec.rb
│ ├── function_dispatch_spec.rb
│ ├── mcp/
│ │ ├── sse_spec.rb
│ │ └── stdio_client_spec.rb
│ ├── mcp_spec.rb
│ ├── message_adapters/
│ │ └── base_spec.rb
│ ├── nil_content_spec.rb
│ ├── predicate_spec.rb
│ ├── prompt_caching_spec.rb
│ ├── prompt_declarations_spec.rb
│ └── response_format_spec.rb
├── spec_helper.rb
├── support/
│ └── mcp_server.rb
└── vcr/
├── GettingRealAnthropic/
│ └── does_a_completion_with_prompt_caching.yml
├── MeaningOfLife/
│ ├── accepts_a_messages_parameter_to_override_the_transcript.yml
│ ├── does_a_completion_with_OpenAI.yml
│ ├── does_a_completion_with_OpenRouter.yml
│ └── with_predicted_outputs/
│ └── does_a_completion_with_OpenAI.yml
├── Raix_FunctionDispatch/
│ ├── can_call_a_function_and_automatically_loop_to_provide_text_response.yml
│ ├── does_not_allow_non_exposed_methods_to_be_called.yml
│ ├── respects_max_tool_calls_parameter.yml
│ ├── supports_filtering_tools_with_the_tools_parameter.yml
│ └── supports_multiple_tool_calls_in_a_single_response.yml
└── Raix_Predicate/
├── maybe.yml
├── no.yml
└── yes.yml
================================================
FILE CONTENTS
================================================
================================================
FILE: .github/workflows/main.yml
================================================
name: Ruby
on:
push:
branches:
- main
pull_request:
jobs:
build:
runs-on: ubuntu-latest
name: Ruby ${{ matrix.ruby }}
strategy:
matrix:
ruby:
- '3.2.2'
steps:
- uses: actions/checkout@v3
- name: Set up Ruby
uses: ruby/setup-ruby@v1
with:
ruby-version: ${{ matrix.ruby }}
bundler-cache: true
- name: Run the default task
run: bundle exec rake ci
env:
OR_ACCESS_TOKEN: ${{ secrets.OR_ACCESS_TOKEN }}
OAI_ACCESS_TOKEN: ${{ secrets.OAI_ACCESS_TOKEN }}
================================================
FILE: .gitignore
================================================
/.bundle/
/.yardoc
/_yardoc/
/coverage/
/doc/
/pkg/
/spec/reports/
/tmp/
# rspec failure tracking
.rspec_status
*.gem
.env
.envrc
.claude/settings.local.json
================================================
FILE: .rspec
================================================
--format documentation
--color
--require spec_helper
================================================
FILE: .rubocop.yml
================================================
AllCops:
NewCops: enable
SuggestExtensions: false
TargetRubyVersion: 3.2.1
Gemspec/RequireMFA:
Enabled: false
Style/OpenStructUse:
Enabled: false
Style/StringLiterals:
Enabled: true
EnforcedStyle: double_quotes
Style/StringLiteralsInInterpolation:
Enabled: true
EnforcedStyle: double_quotes
Style/IfUnlessModifier:
Enabled: false
Layout/LineLength:
Enabled: false
Metrics/BlockLength:
Enabled: false
Metrics/MethodLength:
Enabled: false
Metrics/ModuleLength:
Enabled: false
Metrics/AbcSize:
Enabled: false
Metrics/CyclomaticComplexity:
Enabled: false
Metrics/PerceivedComplexity:
Enabled: false
Metrics/ParameterLists:
Enabled: false
Metrics/ClassLength:
Enabled: false
Style/FrozenStringLiteralComment:
Enabled: false
Style/MultilineBlockChain:
Enabled: false
================================================
FILE: .ruby-version
================================================
3.4.2
================================================
FILE: CHANGELOG.md
================================================
## [Unreleased]
## [2.0.3] - 2026-04-30
### Fixed
- `NoMethodError: undefined method 'strip' for nil` in `Raix::ChatCompletion` when an LLM (notably Gemini under certain stop conditions) returns a final assistant message with `"content": null`. Three call sites in `lib/raix/chat_completion.rb` now use `content.to_s.strip` so a nil response coerces to `""` instead of raising.
## [2.0.2] - 2026-03-27
### Fixed
- Ensure gem files are world-readable (644) for Docker deployments where gems are installed as root but the app runs as a non-root user
- Added gemspec-level safety net that normalizes file permissions at build time
## [2.0.1] - 2026-03-20
### Changed
- Replaced `require_relative` with Zeitwerk autoloading (thanks @seuros, PR #47)
## [2.0.0] - 2025-12-17
### Breaking Changes
- **Migrated from OpenRouter/OpenAI gems to RubyLLM** - Raix now uses [RubyLLM](https://github.com/crmne/ruby_llm) as its unified backend for all LLM providers. This provides better multi-provider support and a more consistent API.
- **Configuration changes** - API keys are now configured through RubyLLM's configuration system instead of separate client instances.
- **Removed direct client dependencies** - `openrouter` and `ruby-openai` gems are no longer direct dependencies; RubyLLM handles provider connections.
### Added
- **`before_completion` hook** - New hook system for intercepting and modifying chat completion requests before they're sent to the AI provider.
- Configure at global, class, or instance levels
- Hooks receive a `CompletionContext` with access to messages, params, and the chat completion instance
- Messages are mutable for content filtering, PII redaction, adding system prompts, etc.
- Params can be modified for dynamic model selection, A/B testing, and more
- Supports any callable object (Proc, Lambda, or object responding to `#call`)
- Use cases: database-backed configuration, logging, PII redaction, content filtering, cost tracking
- **`FunctionToolAdapter`** - New adapter for converting Raix function declarations to RubyLLM tool format
- **`TranscriptAdapter`** - New adapter for bridging Raix's abbreviated message format with standard OpenAI format
### Changed
- Chat completions now use RubyLLM's unified API for all providers (OpenAI, Anthropic, Google, etc.)
- Improved provider detection based on model name patterns
- Streamlined internal architecture with dedicated adapters
### Migration Guide
Update your configuration from:
```ruby
Raix.configure do |config|
config.openrouter_client = OpenRouter::Client.new(access_token: "...")
config.openai_client = OpenAI::Client.new(access_token: "...")
end
```
To:
```ruby
RubyLLM.configure do |config|
config.openrouter_api_key = ENV["OPENROUTER_API_KEY"]
config.openai_api_key = ENV["OPENAI_API_KEY"]
# Also supports: anthropic_api_key, gemini_api_key
end
```
## [1.0.2] - 2025-07-16
### Added
- Added method to check for API client availability in Configuration
### Changed
- Updated ruby-openai dependency to ~> 8.1
### Fixed
- Fixed gemspec file reference
## [1.0.1] - 2025-06-04
### Fixed
- Fixed PromptDeclarations module namespace - now properly namespaced under Raix
- Removed Rails.logger dependencies from PromptDeclarations for non-Rails environments
- Fixed documentation example showing incorrect `openai: true` usage (should be model string)
- Added comprehensive tests for PromptDeclarations module
### Changed
- Improved error handling in PromptDeclarations to catch StandardError instead of generic rescue
## [1.0.0] - 2025-06-04
### Breaking Changes
- **Deprecated `loop` parameter in ChatCompletion** - The system now automatically continues conversations after tool calls until the AI provides a text response. The `loop` parameter shows a deprecation warning but still works for backwards compatibility.
- **Tool-based completions now return strings instead of arrays** - When functions are called, the final response is a string containing the AI's text response, not an array of function results.
- **`stop_looping!` renamed to `stop_tool_calls_and_respond!`** - Better reflects the new automatic continuation behavior.
### Added
- **Automatic conversation continuation** - Chat completions automatically continue after tool execution without needing the `loop` parameter.
- **`max_tool_calls` parameter** - Controls the maximum number of tool invocations to prevent infinite loops (default: 25).
- **Configuration for `max_tool_calls`** - Added `max_tool_calls` to the Configuration class with sensible defaults.
### Changed
- ChatCompletion handles continuation after tool function calls automatically.
- Improved CI/CD workflow to use `bundle exec rake ci` for consistent testing.
### Fixed
- Resolved conflict between `loop` attribute and Ruby's `Kernel.loop` method (fixes #11).
- Fixed various RuboCop warnings using keyword argument forwarding.
- Improved error handling with proper warning messages instead of puts.
## [0.9.2] - 2025-06-03
### Fixed
- Fixed OpenAI chat completion compatibility
- Fixed SHA256 hexdigest generation for MCP tool names
- Added ostruct as explicit dependency to prevent warnings
- Fixed rubocop lint error for alphabetized gemspec dependencies
- Updated default OpenRouter model
## [0.9.1] - 2025-05-30
### Added
- **MCP Type Coercion** - Automatic type conversion for MCP tool arguments based on JSON schema
- Supports integer, number, boolean, array, and object types
- Handles nested objects and arrays of objects with proper coercion
- Gracefully handles invalid JSON and type mismatches
- **MCP Image Support** - MCP tools can now return image content as structured JSON
### Fixed
- Fixed handling of nil values in MCP argument coercion
## [0.9.0] - 2025-05-30
### Added
- **MCP (Model Context Protocol) Support**
- New `stdio_mcp` method for stdio-based MCP servers
- Refactored existing MCP code into `SseClient` and `StdioClient`
- Split top-level `mcp` method into `sse_mcp` and `stdio_mcp`
- Added authentication support for MCP servers
- **Class-Level Configuration**
- Moved configuration to separate `Configuration` class
- Added fallback mechanism for configuration options
- Cleaner metaprogramming implementation
### Fixed
- Fixed method signature of functions added via MCP
## [0.8.6] - 2025-05-19
- add `required` and `optional` flags for parameters in `function` declarations
## [0.8.5] - 2025-05-08
- renamed `tools` argument to `chat_completion` to `available_tools` to prevent shadowing the existing tool attribute (potentially breaking change to enhancement introduced in 0.8.1)
## [0.8.4] - 2025-05-07
- Calls strip instead of squish on response of chat_completion in order to not clobber linebreaks
## [0.8.3] - 2025-04-30
- Adds optional ActiveSupport Cache parameter to `dispatch_tool_function` for caching tool calls
## [0.8.2] - 2025-04-29
- Extracts function call dispatch into a public `dispatch_tool_function` that can be overridden in subclasses
- Uses `public_send` instead of `send` for better security and explicitness
## [0.8.1] - 2025-04-24
Added ability to filter tool functions (or disable completely) when calling `chat_completion`. Thanks to @parruda for the contribution.
## [0.8.0] - 2025-04-23
### Added
* **MCP integration (Experimental)** — new `Raix::MCP` concern and `mcp` DSL for declaring remote MCP servers.
* Automatically fetches `tools/list`, registers remote tools as OpenAI‑compatible function schemas, and defines proxy methods that forward `tools/call`.
* `ChatCompletion#tools` now returns remote MCP tools alongside local `function` declarations.
### Changed
* `lib/raix.rb` now requires `raix/mcp` so the concern is auto‑loaded.
### Fixed
* Internal transcript handling spec expectations updated.
### Specs
* Added `spec/raix/mcp_spec.rb` with comprehensive stubs for tools discovery & call flow.
## [0.7.3] - 2025-04-23
- commit function call and result to transcript in one operation for thread safety
## [0.7.2] - 2025-04-19
- adds support for `messages` parameter in `chat_completion` to override the transcript
- fixes potential race conditions in parallel chat completion calls by duplicating transcript
## [0.7.1] - 2025-04-10
- adds support for JSON response format with automatic parsing
- improves error handling for JSON parsing failures
## [0.7] - 2025-04-02
- adds support for `until` condition in `PromptDeclarations` to control prompt looping
- adds support for `if` and `unless` conditions in `PromptDeclarations` to control prompt execution
- adds support for `success` callback in `PromptDeclarations` to handle prompt responses
- adds support for `stream` handler in `PromptDeclarations` to control response streaming
- adds support for `params` in `PromptDeclarations` to customize API parameters per prompt
- adds support for `system` directive in `PromptDeclarations` to set per-prompt system messages
- adds support for `call` in `PromptDeclarations` to delegate to callable prompt objects
- adds support for `text` in `PromptDeclarations` to specify prompt content via lambda, string, or symbol
- adds support for `raw` parameter in `PromptDeclarations` to return raw API responses
- adds support for `openai` parameter in `PromptDeclarations` to use OpenAI directly
- adds support for `prompt` parameter in `PromptDeclarations` to specify initial prompt
- adds support for `last_response` in `PromptDeclarations` to access previous prompt responses
- adds support for `current_prompt` in `PromptDeclarations` to access current prompt context
- adds support for `MAX_LOOP_COUNT` in `PromptDeclarations` to prevent infinite loops
- adds support for `execute_ai_request` in `PromptDeclarations` to handle API calls
- adds support for `chat_completion_from_superclass` in `PromptDeclarations` to handle superclass calls
- adds support for `model`, `temperature`, and `max_tokens` in `PromptDeclarations` to access prompt parameters
- Make automatic JSON parsing available to non-OpenAI providers that don't support the response_format parameter by scanning for json XML tags
## [0.6.0] - 2024-11-12
- adds `save_response` option to `chat_completion` to control transcript updates
- fixes potential race conditions in transcript handling
## [0.4.8] - 2024-11-12
- adds documentation for `Predicate` maybe handler
- logs to stdout when a response is unhandled by `Predicate`
## [0.4.7] - 2024-11-12
- adds missing requires `raix/predicate` so that it can be used in a Rails app automatically
- adds missing openai support for `Predicate`
## [0.4.5] - 2024-11-11
- adds support for `ResponseFormat`
- added some missing requires to support String#squish
## [0.4.4] - 2024-11-11
- adds support for multiple tool calls in a single response
## [0.4.3] - 2024-11-11
- adds support for `Predicate` module
## [0.4.2] - 2024-11-05
- adds support for [Predicted Outputs](https://platform.openai.com/docs/guides/latency-optimization#use-predicted-outputs) with the `prediction` option for OpenAI
## [0.4.0] - 2024-10-18
- adds support for Anthropic-style prompt caching
- defaults to `max_completion_tokens` when using OpenAI directly
## [0.3.2] - 2024-06-29
- adds support for streaming
## [0.2.0] - tbd
- adds `ChatCompletion` module
- adds `PromptDeclarations` module
- adds `FunctionDispatch` module
## [0.1.0] - 2024-04-03
- Initial release, placeholder gem
================================================
FILE: CLAUDE.md
================================================
This is a Ruby gem called Raix. Its purpose is to facilitate chat completion style AI text generation using LLMs provided by OpenAI and OpenRouter.
- When running all tests just do `bundle exec rake` since it automatically runs the linter with autocorrect
- Documentation: Include method/class documentation with examples when appropriate
- Add runtime dependencies to `raix.gemspec`.
- Add development dependencies to `Gemfile`.
- Don't ever test private methods directly. Specs should test behavior, not implementation.
- Never add test-specific code embedded in production code
- **Do not use require_relative**
- Require statements should always be in alphabetical order
- Always leave a blank line after module includes and before the rest of the class
- Do not decide unilaterally to leave code for the sake of "backwards compatibility"... always run those decisions by me first.
- Don't ever commit and push changes unless directly told to do so
================================================
FILE: CODE_OF_CONDUCT.md
================================================
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.
Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at obiefernandez@gmail.com. All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series of actions.
**Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.0,
available at https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.
================================================
FILE: Gemfile
================================================
# frozen_string_literal: true
source "https://rubygems.org"
# Specify your gem's dependencies in raix.gemspec
gemspec
group :development do
gem "dotenv", ">= 2"
gem "guard"
gem "guard-rspec"
gem "pry", ">= 0.14"
gem "rake", "~> 13.0"
gem "rspec", "~> 3.0"
gem "rubocop", "~> 1.21"
gem "solargraph-rails", "~> 0.2.0.pre"
gem "sorbet"
gem "tapioca", require: false
end
group :test do
gem "vcr"
gem "webmock"
end
================================================
FILE: Guardfile
================================================
# frozen_string_literal: true
# A sample Guardfile
# More info at https://github.com/guard/guard#readme
## Uncomment and set this to only include directories you want to watch
# directories %w(app lib config test spec features) \
# .select{|d| Dir.exist?(d) ? d : UI.warning("Directory #{d} does not exist")}
## Note: if you are using the `directories` clause above and you are not
## watching the project directory ('.'), then you will want to move
## the Guardfile to a watched dir and symlink it back, e.g.
#
# $ mkdir config
# $ mv Guardfile config/
# $ ln -s config/Guardfile .
#
# and, you'll have to watch "config/Guardfile" instead of "Guardfile"
# NOTE: The cmd option is now required due to the increasing number of ways
# rspec may be run, below are examples of the most common uses.
# * bundler: 'bundle exec rspec'
# * bundler binstubs: 'bin/rspec'
# * spring: 'bin/rspec' (This will use spring if running and you have
# installed the spring binstubs per the docs)
# * zeus: 'zeus rspec' (requires the server to be started separately)
# * 'just' rspec: 'rspec'
guard :rspec, cmd: "bundle exec rspec" do
require "guard/rspec/dsl"
dsl = Guard::RSpec::Dsl.new(self)
# Feel free to open issues for suggestions and improvements
# RSpec files
rspec = dsl.rspec
watch(rspec.spec_helper) { rspec.spec_dir }
watch(rspec.spec_support) { rspec.spec_dir }
watch(rspec.spec_files)
# Ruby files
ruby = dsl.ruby
dsl.watch_spec_files_for(ruby.lib_files)
# Rails files
rails = dsl.rails(view_extensions: %w[erb haml slim])
dsl.watch_spec_files_for(rails.app_files)
dsl.watch_spec_files_for(rails.views)
watch(rails.controllers) do |m|
[
rspec.spec.call("routing/#{m[1]}_routing"),
rspec.spec.call("controllers/#{m[1]}_controller"),
rspec.spec.call("acceptance/#{m[1]}")
]
end
# Rails config changes
watch(rails.spec_helper) { rspec.spec_dir }
watch(rails.routes) { "#{rspec.spec_dir}/routing" }
watch(rails.app_controller) { "#{rspec.spec_dir}/controllers" }
# Capybara features specs
watch(rails.view_dirs) { |m| rspec.spec.call("features/#{m[1]}") }
watch(rails.layouts) { |m| rspec.spec.call("features/#{m[1]}") }
# Turnip features and steps
watch(%r{^spec/acceptance/(.+)\.feature$})
watch(%r{^spec/acceptance/steps/(.+)_steps\.rb$}) do |m|
Dir[File.join("**/#{m[1]}.feature")][0] || "spec/acceptance"
end
end
================================================
FILE: LICENSE.txt
================================================
The MIT License (MIT)
Copyright (c) 2024 Obie Fernandez
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
================================================
FILE: README.llm
================================================
# Raix (Ruby AI eXtensions)
Raix adds LLM-based AI functionality to Ruby classes. It supports OpenAI or OpenRouter as providers and can work in non-Rails apps if you include ActiveSupport.
## Chat Completion
You must include `Raix::ChatCompletion`. It gives you a `transcript` array for messages and a `chat_completion` method that sends them to the AI.
```ruby
class MeaningOfLife
include Raix::ChatCompletion
end
ai = MeaningOfLife.new
ai.transcript << { user: "What is the meaning of life?" }
puts ai.chat_completion
```
You can add messages using either `{ user: "..." }` or `{ role: "user", content: "..." }`.
### Predicted Outputs
Pass `prediction` to support [Predicted Outputs](https://platform.openai.com/docs/guides/latency-optimization#use-predicted-outputs):
```ruby
ai.chat_completion(openai: "gpt-4o", params: { prediction: "..." })
```
### Prompt Caching
When using Anthropic models, you can specify `cache_at`. Messages above that size get sent as ephemeral multipart segments.
```ruby
ai.chat_completion(params: { cache_at: 1000 })
```
## Function Dispatch
Include `Raix::FunctionDispatch` to declare functions AI can call in a chat loop. Use `chat_completion(loop: true)` so the AI can call functions and generate more messages until it outputs a final text response.
```ruby
class WhatIsTheWeather
include Raix::ChatCompletion
include Raix::FunctionDispatch
function :check_weather, "Check the weather for a location", location: { type: "string" } do |args|
"The weather in #{args[:location]} is hot and sunny"
end
end
```
If the AI calls multiple functions at once, Raix handles them in sequence and returns an array of results. Call `stop_tool_calls_and_respond!` inside a function to end the loop.
## Prompt Declarations
Include `Raix::PromptDeclarations` to define a chain of prompts in order. Each prompt can be inline text or a callable class that also includes `ChatCompletion`.
```ruby
class PromptSubscriber
include Raix::ChatCompletion
include Raix::PromptDeclarations
prompt call: FetchUrlCheck
prompt call: MemoryScan
prompt text: -> { user_message.content }
def message_created(user_message)
chat_completion(loop: true, openai: "gpt-4o")
end
end
```
## Predicate Module
Include `Raix::Predicate` to handle yes/no/maybe questions. Define blocks with the `yes?`, `no?`, and `maybe?` methods.
```ruby
class Question
include Raix::Predicate
yes? { |explanation| puts "Affirmative: #{explanation}" }
no? { |explanation| puts "Negative: #{explanation}" }
end
```
## ResponseFormat (Experimental)
Use `Raix::ResponseFormat` to enforce JSON schemas for structured responses.
```ruby
format = Raix::ResponseFormat.new("PersonInfo", {
name: { type: "string" },
age: { type: "integer" }
})
class StructuredResponse
include Raix::ChatCompletion
def analyze_person(name)
chat_completion(response_format: format)
end
end
```
## Installation
Add `gem "raix"` to your Gemfile or run `gem install raix`. Configure an OpenRouter or OpenAI client in an initializer:
```ruby
# config/initializers/raix.rb
Raix.configure do |config|
config.openrouter_client = OpenRouter::Client.new
end
```
Make sure you have valid API tokens for your chosen provider.
```
================================================
FILE: README.md
================================================
# Ruby AI eXtensions
## What's Raix
Raix (pronounced "ray" because the x is silent) is a library that gives you everything you need to add discrete large-language model (LLM) AI components to your Ruby applications. Raix consists of proven code that has been extracted from [Olympia](https://olympia.chat), the world's leading virtual AI team platform, and probably one of the biggest and most successful AI chat projects written completely in Ruby.
Understanding how to use discrete AI components in otherwise normal code is key to productively leveraging Raix, and the subject of a book written by Raix's author Obie Fernandez, titled [Patterns of Application Development Using AI](https://leanpub.com/patterns-of-application-development-using-ai). You can easily support the ongoing development of this project by buying the book at Leanpub.
Raix 2.0 is powered by [RubyLLM](https://github.com/crmne/ruby_llm), giving you unified access to OpenAI, Anthropic, Google Gemini, and dozens of other providers through OpenRouter. Note that you can use Raix to add AI capabilities to non-Rails applications as long as you include ActiveSupport as a dependency.
### Chat Completions
Raix consists of three modules that can be mixed in to Ruby classes to give them AI powers. The first (and mandatory) module is `ChatCompletion`, which provides `transcript` and `chat_completion` methods.
```ruby
class MeaningOfLife
include Raix::ChatCompletion
end
>> ai = MeaningOfLife.new
>> ai.transcript << { user: "What is the meaning of life?" }
>> ai.chat_completion
=> "The question of the meaning of life is one of the most profound and enduring inquiries in philosophy, religion, and science.
Different perspectives offer various answers..."
```
By default, Raix will automatically add the AI's response to the transcript. This behavior can be controlled with the `save_response` parameter, which defaults to `true`. You may want to set it to `false` when making multiple chat completion calls during the lifecycle of a single object (whether sequentially or in parallel) and want to manage the transcript updates yourself:
```ruby
>> ai.chat_completion(save_response: false)
```
#### Transcript Format
The transcript accepts both abbreviated and standard OpenAI message hash formats. The abbreviated format, suitable for system, assistant, and user messages is simply a mapping of `role => content`, as shown in the example above.
```ruby
transcript << { user: "What is the meaning of life?" }
```
As mentioned, Raix also understands standard OpenAI messages hashes. The previous example could be written as:
```ruby
transcript << { role: "user", content: "What is the meaning of life?" }
```
One of the advantages of OpenRouter and the reason that it is used by default by this library is that it handles mapping message formats from the OpenAI standard to whatever other model you're wanting to use (Anthropic, Cohere, etc.)
Note that it's possible to override the current object's transcript by passing a `messages` array to `chat_completion`. This allows for multiple threads to share a single conversation context in parallel, by deferring when they write their responses back to the transcript.
```
chat_completion(openai: "gpt-4.1-nano", messages: [{ user: "What is the meaning of life?" }])
```
### Predicted Outputs
Raix supports [Predicted Outputs](https://platform.openai.com/docs/guides/latency-optimization#use-predicted-outputs) with the `prediction` parameter for OpenAI.
```ruby
>> ai.chat_completion(openai: "gpt-4o", params: { prediction: })
```
### Prompt Caching
Raix supports [Anthropic-style prompt caching](https://openrouter.ai/docs/prompt-caching#anthropic-claude) when using Anthropic's Claude family of models. You can specify a `cache_at` parameter when doing a chat completion. If the character count for the content of a particular message is longer than the cache_at parameter, it will be sent to Anthropic as a multipart message with a cache control "breakpoint" set to "ephemeral".
Note that there is a limit of four breakpoints, and the cache will expire within five minutes. Therefore, it is recommended to reserve the cache breakpoints for large bodies of text, such as character cards, CSV data, RAG data, book chapters, etc. Raix does not enforce a limit on the number of breakpoints, which means that you might get an error if you try to cache too many messages.
```ruby
>> my_class.chat_completion(params: { cache_at: 1000 })
=> {
"messages": [
{
"role": "system",
"content": [
{
"type": "text",
"text": "HUGE TEXT BODY LONGER THAN 1000 CHARACTERS",
"cache_control": {
"type": "ephemeral"
}
}
]
},
```
### JSON Mode
Raix supports JSON mode for chat completions, which ensures that the AI model's response is valid JSON. This is particularly useful when you need structured data from the model.
When using JSON mode with OpenAI models, Raix will automatically set the `response_format` parameter on requests accordingly, and attempt to parse the entire response body as JSON.
When using JSON mode with other models (e.g. Anthropic) that don't support `response_format`, Raix will look for JSON content inside of <json> XML tags in the response, before
falling back to parsing the entire response body. Make sure you tell the AI to reply with JSON inside of XML tags.
```ruby
>> my_class.chat_completion(json: true)
=> { "key": "value" }
```
When using JSON mode with non-OpenAI providers, Raix automatically sets the `require_parameters` flag to ensure proper JSON formatting. You can also combine JSON mode with other parameters:
```ruby
>> my_class.chat_completion(json: true, openai: "gpt-4o")
=> { "key": "value" }
```
### before_completion Hook
The `before_completion` hook lets you intercept and modify chat completion requests before they're sent to the AI provider. This is useful for dynamic parameter resolution, logging, content filtering, PII redaction, and more.
#### Configuration Levels
Hooks can be configured at three levels, with later levels overriding earlier ones:
```ruby
# Global level - applies to all chat completions
Raix.configure do |config|
config.before_completion = ->(context) {
# Return a hash of params to merge, or modify context.messages directly
{ temperature: 0.7 }
}
end
# Class level - applies to all instances of a class
class MyAssistant
include Raix::ChatCompletion
configure do |config|
config.before_completion = ->(context) { { model: "gpt-4o" } }
end
end
# Instance level - applies to a single instance
assistant = MyAssistant.new
assistant.before_completion = ->(context) { { max_tokens: 500 } }
```
When hooks exist at multiple levels, they're called in order (global → class → instance), with returned params merged together. Later hooks override earlier ones for the same parameter.
#### The CompletionContext Object
Hooks receive a `CompletionContext` object with access to:
```ruby
context.chat_completion # The ChatCompletion instance
context.messages # Array of messages (mutable, in OpenAI format)
context.params # Hash of params (mutable)
context.transcript # The instance's transcript
context.current_model # Currently configured model
context.chat_completion_class # The class including ChatCompletion
context.configuration # The instance's configuration
```
#### Use Cases
**Dynamic model selection from database:**
```ruby
Raix.configure do |config|
config.before_completion = ->(context) {
settings = TenantSettings.find_by(tenant: Current.tenant)
{
model: settings.preferred_model,
temperature: settings.temperature,
max_tokens: settings.max_tokens
}
}
end
```
**PII redaction:**
```ruby
class SecureAssistant
include Raix::ChatCompletion
before_completion = ->(context) {
context.messages.each do |msg|
next unless msg[:content].is_a?(String)
# Redact SSN patterns
msg[:content] = msg[:content].gsub(/\d{3}-\d{2}-\d{4}/, "[SSN REDACTED]")
# Redact email addresses
msg[:content] = msg[:content].gsub(/[\w.-]+@[\w.-]+\.\w+/, "[EMAIL REDACTED]")
end
{} # Return empty hash if not modifying params
}
end
```
**Request logging:**
```ruby
Raix.configure do |config|
config.before_completion = ->(context) {
Rails.logger.info({
event: "chat_completion_request",
model: context.current_model,
message_count: context.messages.length,
params: context.params.except(:messages)
}.to_json)
{} # Return empty hash, just logging
}
end
```
**Adding system prompts:**
```ruby
assistant.before_completion = ->(context) {
context.messages.unshift({
role: "system",
content: "Always be helpful and respectful."
})
{}
}
```
**A/B testing models:**
```ruby
Raix.configure do |config|
config.before_completion = ->(context) {
if Flipper.enabled?(:new_model, Current.user)
{ model: "gpt-4o" }
else
{ model: "gpt-4o-mini" }
end
}
end
```
Hooks can also be any object that responds to `#call`:
```ruby
class CostTracker
def call(context)
# Track estimated cost based on message length
estimated_tokens = context.messages.sum { |m| m[:content].to_s.length / 4 }
StatsD.gauge("ai.estimated_input_tokens", estimated_tokens)
{}
end
end
Raix.configure do |config|
config.before_completion = CostTracker.new
end
```
### Use of Tools/Functions
The second (optional) module that you can add to your Ruby classes after `ChatCompletion` is `FunctionDispatch`. It lets you declare and implement functions to be called at the AI's discretion in a declarative, Rails-like "DSL" fashion.
When the AI responds with tool function calls instead of a text message, Raix automatically:
1. Executes the requested tool functions
2. Adds the function results to the conversation transcript
3. Sends the updated transcript back to the AI for another completion
4. Repeats this process until the AI responds with a regular text message
This automatic continuation ensures that tool calls are seamlessly integrated into the conversation flow. The AI can use tool results to formulate its final response to the user. You can limit the number of tool calls using the `max_tool_calls` parameter to prevent excessive function invocations.
```ruby
class WhatIsTheWeather
include Raix::ChatCompletion
include Raix::FunctionDispatch
function :check_weather,
"Check the weather for a location",
location: { type: "string", required: true } do |arguments|
"The weather in #{arguments[:location]} is hot and sunny"
end
end
RSpec.describe WhatIsTheWeather do
subject { described_class.new }
it "provides a text response after automatically calling weather function" do
subject.transcript << { user: "What is the weather in Zipolite, Oaxaca?" }
response = subject.chat_completion(openai: "gpt-4o")
expect(response).to include("hot and sunny")
end
end
```
Parameters are optional by default. Mark them as required with `required: true` or explicitly optional with `optional: true`.
Note that for security reasons, dispatching functions only works with functions implemented using `Raix::FunctionDispatch#function` or directly on the class.
#### Tool Filtering
You can control which tool functions are exposed to the AI per request using the `available_tools` parameter of the `chat_completion` method:
```ruby
class WeatherAndTime
include Raix::ChatCompletion
include Raix::FunctionDispatch
function :check_weather, "Check the weather for a location", location: { type: "string" } do |arguments|
"The weather in #{arguments[:location]} is sunny"
end
function :get_time, "Get the current time" do |_arguments|
"The time is 12:00 PM"
end
end
weather = WeatherAndTime.new
# Don't pass any tools to the LLM
weather.chat_completion(available_tools: false)
# Only pass specific tools to the LLM
weather.chat_completion(available_tools: [:check_weather])
# Pass all declared tools (default behavior)
weather.chat_completion
```
The `available_tools` parameter accepts three types of values:
- `nil`: All declared tool functions are passed (default behavior)
- `false`: No tools are passed to the LLM
- An array of symbols: Only the specified tools are passed (raises `Raix::UndeclaredToolError` if a specified tool function is not declared)
#### Multiple Tool Calls
Some AI models (like GPT-4) can make multiple tool calls in a single response. When this happens, Raix will automatically handle all the function calls sequentially.
If you need to capture the arguments to the function calls, do so in the block passed to `function`. The response from `chat_completion` is always the final text
response from the assistant, and is not affected by function calls.
```ruby
class MultipleToolExample
include Raix::ChatCompletion
include Raix::FunctionDispatch
attr_reader :invocations
function :first_tool do |arguments|
@invocations << :first
"Result from first tool"
end
function :second_tool do |arguments|
@invocations << :second
"Result from second tool"
end
def initialize
@invocations = []
end
end
example = MultipleToolExample.new
example.transcript << { user: "Please use both tools" }
example.chat_completion(openai: "gpt-4o")
# => "I used both tools, as requested"
example.invocations
# => [:first, :second]
```
#### Customizing Function Dispatch
You can customize how function calls are handled by overriding the `dispatch_tool_function` in your class. This is useful if you need to add logging, caching, error handling, or other custom behavior around function calls.
```ruby
class CustomDispatchExample
include Raix::ChatCompletion
include Raix::FunctionDispatch
function :example_tool do |arguments|
"Result from example tool"
end
def dispatch_tool_function(function_name, arguments)
puts "Calling #{function_name} with #{arguments}"
result = super
puts "Result: #{result}"
result
end
end
```
#### Function Call Caching
You can use ActiveSupport's Cache to cache function call results, which can be particularly useful for expensive operations or external API calls that don't need to be repeated frequently.
```ruby
class CachedFunctionExample
include Raix::ChatCompletion
include Raix::FunctionDispatch
function :expensive_operation do |arguments|
"Result of expensive operation with #{arguments}"
end
# Override dispatch_tool_function to enable caching for all functions
def dispatch_tool_function(function_name, arguments)
# Pass the cache to the superclass implementation
super(function_name, arguments, cache: Rails.cache)
end
end
```
The caching mechanism works by:
1. Passing the cache object through `dispatch_tool_function` to the function implementation
2. Using the function name and arguments as cache keys
3. Automatically fetching from cache when available or executing the function when not cached
This is particularly useful for:
- Expensive database operations
- External API calls
- Resource-intensive computations
- Functions with deterministic outputs for the same inputs
#### Limiting Tool Calls
You can control the maximum number of tool calls before the AI must provide a text response:
```ruby
# Limit to 5 tool calls (default is 25)
response = my_ai.chat_completion(max_tool_calls: 5)
# Configure globally
Raix.configure do |config|
config.max_tool_calls = 10
end
```
#### Manually Stopping Tool Calls
For AI components that process tasks without end-user interaction, you can use `stop_tool_calls_and_respond!` within a function to force the AI to provide a text response without making additional tool calls.
```ruby
class OrderProcessor
include Raix::ChatCompletion
include Raix::FunctionDispatch
SYSTEM_DIRECTIVE = "You are an order processor, tasked with order validation, inventory check,
payment processing, and shipping."
attr_accessor :order
def initialize(order)
self.order = order
transcript << { system: SYSTEM_DIRECTIVE }
transcript << { user: order.to_json }
end
def perform
# will automatically continue after tool calls until finished_processing is called
chat_completion
end
# implementation of functions that can be called by the AI
# entirely at its discretion, depending on the needs of the order.
# The return value of each `perform` method will be added to the
# transcript of the conversation as a function result.
function :validate_order do
OrderValidationWorker.perform(@order)
end
function :check_inventory do
InventoryCheckWorker.perform(@order)
end
function :process_payment do
PaymentProcessingWorker.perform(@order)
end
function :schedule_shipping do
ShippingSchedulerWorker.perform(@order)
end
function :send_confirmation do
OrderConfirmationWorker.perform(@order)
end
function :finished_processing do
order.update!(transcript:, processed_at: Time.current)
stop_tool_calls_and_respond!
"Order processing completed successfully"
end
end
```
### Prompt Declarations
The third (also optional) module that you can add mix in along with `ChatCompletion` is `PromptDeclarations`. It provides the ability to declare a "Prompt Chain" (series of prompts to be called in a sequence), and also features a declarative, Rails-like "DSL" of its own. Prompts can be defined inline or delegate to callable prompt objects, which themselves implement `ChatCompletion`.
The following example is a rough excerpt of the main "Conversation Loop" in Olympia, which pre-processes user messages to check for
the presence of URLs and scan memory before submitting as a prompt to GPT-4. Note that prompt declarations are executed in the order
that they are declared. The `FetchUrlCheck` callable prompt class is included for instructional purposes. Note that it is passed the
an instance of the object that is calling it in its initializer as its `context`. The passing of context means that you can assemble
composite prompt structures of arbitrary depth.
```ruby
class PromptSubscriber
include Raix::ChatCompletion
include Raix::PromptDeclarations
attr_accessor :conversation, :bot_message, :user_message
# many other declarations omitted...
prompt call: FetchUrlCheck
prompt call: MemoryScan
prompt text: -> { user_message.content }, stream: -> { ReplyStream.new(self) }, until: -> { bot_message.complete? }
def initialize(conversation)
self.conversation = conversation
end
def message_created(user_message)
self.user_message = user_message
self.bot_message = conversation.bot_message!(responding_to: user_message)
chat_completion(loop: true, openai: "gpt-4o")
end
...
end
class FetchUrlCheck
include ChatCompletion
include FunctionDispatch
REGEX = %r{\b(?:http(s)?://)?(?:www\.)?[a-zA-Z0-9-]+(\.[a-zA-Z]{2,})+(/[^\s]*)?\b}
attr_accessor :context, :conversation
delegate :user_message, to: :context
delegate :content, to: :user_message
def initialize(context)
self.context = context
self.conversation = context.conversation
self.model = "anthropic/claude-3-haiku"
end
def call
return unless content&.match?(REGEX)
transcript << { system: "Call the `fetch` function if the user mentions a website, otherwise say nil" }
transcript << { user: content }
chat_completion # TODO: consider looping to fetch more than one URL per user message
end
function :fetch, "Gets the plain text contents of a web page", url: { type: "string" } do |arguments|
Tools::FetchUrl.fetch(arguments[:url]).tap do |result|
parent = conversation.function_call!("fetch_url", arguments, parent: user_message)
conversation.function_result!("fetch_url", result, parent:)
end
end
```
Notably, Olympia does not use the `FunctionDispatch` module in its primary conversation loop because it does not have a fixed set of tools that are included in every single prompt. Functions are made available dynamically based on a number of factors including the user's plan tier and capabilities of the assistant with whom the user is conversing.
Streaming of the AI's response to the end user is handled by the `ReplyStream` class, passed to the final prompt declaration as its `stream` parameter. [Patterns of Application Development Using AI](https://leanpub.com/patterns-of-application-development-using-ai) devotes a whole chapter to describing how to write your own `ReplyStream` class.
#### Additional PromptDeclarations Options
The `PromptDeclarations` module supports several additional options that can be used to customize prompt behavior:
```ruby
class CustomPromptExample
include Raix::ChatCompletion
include Raix::PromptDeclarations
# Basic prompt with text
prompt text: "Process this input"
# Prompt with system directive
prompt system: "You are a helpful assistant",
text: "Analyze this text"
# Prompt with conditions
prompt text: "Process this input",
if: -> { some_condition },
unless: -> { some_other_condition }
# Prompt with success callback
prompt text: "Process this input",
success: ->(response) { handle_response(response) }
# Prompt with custom parameters
prompt text: "Process with custom settings",
params: { temperature: 0.7, max_tokens: 1000 }
# Prompt with until condition for looping
prompt text: "Keep processing until complete",
until: -> { processing_complete? }
# Prompt with raw response
prompt text: "Get raw response",
raw: true
# Prompt using OpenAI directly
prompt text: "Use OpenAI",
openai: "gpt-4o"
end
```
The available options include:
- `system`: Set a system directive for the prompt
- `if`/`unless`: Control prompt execution with conditions
- `success`: Handle prompt responses with callbacks
- `params`: Customize API parameters per prompt
- `until`: Control prompt looping
- `raw`: Get raw API responses
- `openai`: Use OpenAI directly
- `stream`: Control response streaming
- `call`: Delegate to callable prompt objects
You can also access the current prompt context and previous responses:
```ruby
class ContextAwarePrompt
include Raix::ChatCompletion
include Raix::PromptDeclarations
def process_with_context
# Access current prompt
current_prompt.params[:temperature]
# Access previous response
last_response
chat_completion
end
end
```
## Predicate Module
The `Raix::Predicate` module provides a simple way to handle yes/no/maybe questions using AI chat completion. It allows you to define blocks that handle different types of responses with their explanations. It is one of the concrete patterns described in the "Discrete Components" chapter of [Patterns of Application Development Using AI](https://leanpub.com/patterns-of-application-development-using-ai).
### Usage
Include the `Raix::Predicate` module in your class and define handlers using block syntax:
```ruby
class Question
include Raix::Predicate
yes? do |explanation|
puts "Affirmative: #{explanation}"
end
no? do |explanation|
puts "Negative: #{explanation}"
end
maybe? do |explanation|
puts "Uncertain: #{explanation}"
end
end
question = Question.new
question.ask("Is Ruby a programming language?")
# => Affirmative: Yes, Ruby is a dynamic, object-oriented programming language...
```
### Features
- Define handlers for yes, no, and/or maybe responses using the declarative class level block syntax.
- At least one handler (yes, no, or maybe) must be defined.
- Handlers receive the full AI response including explanation as an argument.
- Responses always start with "Yes, ", "No, ", or "Maybe, " followed by an explanation.
- Make sure to ask a question that can be answered with yes, no, or maybe (otherwise the results are indeterminate).
### Example with Single Handler
You can define only the handlers you need:
```ruby
class SimpleQuestion
include Raix::Predicate
# Only handle positive responses
yes? do |explanation|
puts "✅ #{explanation}"
end
end
question = SimpleQuestion.new
question.ask("Is 2 + 2 = 4?")
# => ✅ Yes, 2 + 2 equals 4, this is a fundamental mathematical fact.
```
### Error Handling
The module will raise a RuntimeError if you attempt to ask a question without defining any response handlers:
```ruby
class InvalidQuestion
include Raix::Predicate
end
question = InvalidQuestion.new
question.ask("Any question")
# => RuntimeError: Please define a yes and/or no block
```
## Model Context Protocol (Experimental)
The `Raix::MCP` module provides integration with the Model Context Protocol, allowing you to connect your Raix-powered application to remote MCP servers. This feature is currently **experimental**.
### Usage
Include the `Raix::MCP` module in your class and declare MCP servers using the `mcp` DSL:
```ruby
class McpConsumer
include Raix::ChatCompletion
include Raix::FunctionDispatch
include Raix::MCP
mcp "https://your-mcp-server.example.com/sse"
end
```
### Features
- Automatically fetches available tools from the remote MCP server using `tools/list`
- Registers remote tools as OpenAI-compatible function schemas
- Defines proxy methods that forward requests to the remote server via `tools/call`
- Seamlessly integrates with the existing `FunctionDispatch` workflow
- Handles transcript recording to maintain consistent conversation history
### Filtering Tools
You can filter which remote tools to include:
```ruby
class FilteredMcpConsumer
include Raix::ChatCompletion
include Raix::FunctionDispatch
include Raix::MCP
# Only include specific tools
mcp "https://server.example.com/sse", only: [:tool_one, :tool_two]
# Or exclude specific tools
mcp "https://server.example.com/sse", except: [:tool_to_exclude]
end
```
## Response Format (Experimental)
The `ResponseFormat` class provides a way to declare a JSON schema for the response format of an AI chat completion. It's particularly useful when you need structured responses from AI models, ensuring the output conforms to your application's requirements.
### Features
- Converts Ruby hashes and arrays into JSON schema format
- Supports nested structures and arrays
- Enforces strict validation with `additionalProperties: false`
- Automatically marks all top-level properties as required
- Handles both simple type definitions and complex nested schemas
### Basic Usage
```ruby
# Simple schema with basic types
format = Raix::ResponseFormat.new("PersonInfo", {
name: { type: "string" },
age: { type: "integer" }
})
# Use in chat completion
my_ai.chat_completion(response_format: format)
```
### Complex Structures
```ruby
# Nested structure with arrays
format = Raix::ResponseFormat.new("CompanyInfo", {
company: {
name: { type: "string" },
employees: [
{
name: { type: "string" },
role: { type: "string" },
skills: ["string"]
}
],
locations: ["string"]
}
})
```
### Generated Schema
The ResponseFormat class generates a schema that follows this structure:
```json
{
"type": "json_schema",
"json_schema": {
"name": "SchemaName",
"schema": {
"type": "object",
"properties": {
"property1": { "type": "string" },
"property2": { "type": "integer" }
},
"required": ["property1", "property2"],
"additionalProperties": false
},
"strict": true
}
}
```
### Using with Chat Completion
When used with chat completion, the AI model will format its response according to your schema:
```ruby
class StructuredResponse
include Raix::ChatCompletion
def analyze_person(name)
format = Raix::ResponseFormat.new("PersonAnalysis", {
full_name: { type: "string" },
age_estimate: { type: "integer" },
personality_traits: ["string"]
})
transcript << { user: "Analyze the person named #{name}" }
chat_completion(params: { response_format: format })
end
end
response = StructuredResponse.new.analyze_person("Alice")
# Returns a hash matching the defined schema
```
## Installation
Install the gem and add to the application's Gemfile by executing:
$ bundle add raix
If bundler is not being used to manage dependencies, install the gem by executing:
$ gem install raix
### Configuration
Raix 2.0 uses [RubyLLM](https://github.com/crmne/ruby_llm) as its backend for LLM provider connections. Configure your API keys through RubyLLM:
```ruby
# config/initializers/raix.rb
RubyLLM.configure do |config|
config.openrouter_api_key = ENV["OPENROUTER_API_KEY"]
config.openai_api_key = ENV["OPENAI_API_KEY"]
# Optional: configure other providers
# config.anthropic_api_key = ENV["ANTHROPIC_API_KEY"]
# config.gemini_api_key = ENV["GEMINI_API_KEY"]
end
```
Raix will automatically use the appropriate provider based on the model name:
- Models starting with `gpt-` or `o1` use OpenAI directly
- All other models route through OpenRouter
### Global vs Class-Level Configuration
You can configure Raix options globally or at the class level:
```ruby
# Global configuration
Raix.configure do |config|
config.temperature = 0.7
config.max_tokens = 1000
config.model = "gpt-4o"
config.max_tool_calls = 25
end
# Class-level configuration (overrides global)
class MyAssistant
include Raix::ChatCompletion
configure do |config|
config.model = "anthropic/claude-3-opus"
config.temperature = 0.5
end
end
```
### Upgrading from Raix 1.x
If upgrading from Raix 1.x, update your configuration from:
```ruby
# Old 1.x configuration
Raix.configure do |config|
config.openrouter_client = OpenRouter::Client.new(access_token: "...")
config.openai_client = OpenAI::Client.new(access_token: "...")
end
```
To the new RubyLLM-based configuration shown above.
## Development
After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
Specs require `OR_ACCESS_TOKEN` and `OAI_ACCESS_TOKEN` environment variables, for access to OpenRouter and OpenAI, respectively. You can add those keys to a local unversioned `.env` file and they will be picked up by the `dotenv` gem.
To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and the created tag, and push the `.gem` file to [rubygems.org](https://rubygems.org).
## Contributing
Bug reports and pull requests are welcome on GitHub at https://github.com/OlympiaAI/raix. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [code of conduct](https://github.com/OlympiaAI/raix/blob/main/CODE_OF_CONDUCT.md).
## License
The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
## Code of Conduct
Everyone interacting in the Raix project's codebases, issue trackers, chat rooms and mailing lists is expected to follow the [code of conduct](https://github.com/OlympiaAI/raix/blob/main/CODE_OF_CONDUCT.md).
================================================
FILE: Rakefile
================================================
# frozen_string_literal: true
require "bundler/gem_tasks"
require "rspec/core/rake_task"
RSpec::Core::RakeTask.new(:spec)
require "rubocop/rake_task"
RuboCop::RakeTask.new(:rubocop_ci)
task ci: %i[spec rubocop_ci]
RuboCop::RakeTask.new(:rubocop) do |task|
task.options = ["--autocorrect"]
end
task default: %i[spec rubocop]
================================================
FILE: bin/console
================================================
#!/usr/bin/env ruby
# frozen_string_literal: true
require "bundler/setup"
require "raix"
# You can add fixtures and/or initialization code here to make experimenting
# with your gem easier. You can also use a different console, if you like.
require "irb"
IRB.start(__FILE__)
================================================
FILE: bin/setup
================================================
#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'
set -vx
bundle install
# Do any other automated setup that you need to do here
================================================
FILE: lib/raix/chat_completion.rb
================================================
# frozen_string_literal: true
require "active_support/concern"
require "active_support/core_ext/object/blank"
require "active_support/core_ext/string/filters"
require "active_support/core_ext/hash/indifferent_access"
require "ruby_llm"
module Raix
class UndeclaredToolError < StandardError; end
# The `ChatCompletion` module is a Rails concern that provides a way to interact
# with the OpenRouter Chat Completion API via its client. The module includes a few
# methods that allow you to build a transcript of messages and then send them to
# the API for completion. The API will return a response that you can use however
# you see fit.
#
# When the AI responds with tool function calls instead of a text message, this
# module automatically:
# 1. Executes the requested tool functions
# 2. Adds the function results to the conversation transcript
# 3. Sends the updated transcript back to the AI for another completion
# 4. Repeats this process until the AI responds with a regular text message
#
# This automatic continuation ensures that tool calls are seamlessly integrated
# into the conversation flow. The AI can use tool results to formulate its final
# response to the user. You can limit the number of tool calls using the
# `max_tool_calls` parameter to prevent excessive function invocations.
#
# Tool functions must be defined on the class that includes this module. The
# `FunctionDispatch` module provides a Rails-like DSL for declaring these
# functions at the class level, which is cleaner than implementing them as
# instance methods.
#
# Note that some AI models can make multiple tool function calls in a single
# response. When that happens, the module executes all requested functions
# before continuing the conversation.
module ChatCompletion
extend ActiveSupport::Concern
attr_accessor :before_completion, :cache_at, :frequency_penalty, :logit_bias, :logprobs, :loop, :min_p, :model,
:presence_penalty, :prediction, :repetition_penalty, :response_format, :stream, :temperature,
:max_completion_tokens, :max_tokens, :seed, :stop, :top_a, :top_k, :top_logprobs, :top_p, :tools,
:available_tools, :tool_choice, :provider, :max_tool_calls, :stop_tool_calls_and_respond
class_methods do
# Returns the current configuration of this class. Falls back to global configuration for unset values.
def configuration
@configuration ||= Configuration.new(fallback: Raix.configuration)
end
# Let's you configure the class-level configuration using a block.
def configure
yield(configuration)
end
end
# Instance level access to the class-level configuration.
def configuration
self.class.configuration
end
# This method performs chat completion based on the provided transcript and parameters.
#
# @param params [Hash] The parameters for chat completion.
# @option loop [Boolean] :loop (false) DEPRECATED - The system now automatically continues after tool calls.
# @option params [Boolean] :json (false) Whether to return the parse the response as a JSON object. Will search for <json> tags in the response first, then fall back to the default JSON parsing of the entire response.
# @option params [String] :openai (nil) If non-nil, use OpenAI with the model specified in this param.
# @option params [Boolean] :raw (false) Whether to return the raw response or dig the text content.
# @option params [Array] :messages (nil) An array of messages to use instead of the transcript.
# @option tools [Array|false] :available_tools (nil) Tools to pass to the LLM. Ignored if nil (default). If false, no tools are passed. If an array, only declared tools in the array are passed.
# @option max_tool_calls [Integer] :max_tool_calls Maximum number of tool calls before forcing a text response. Defaults to the configured value.
# @return [String|Hash] The completed chat response.
def chat_completion(params: {}, loop: false, json: false, raw: false, openai: nil, save_response: true, messages: nil, available_tools: nil, max_tool_calls: nil)
# set params to default values if not provided
params[:cache_at] ||= cache_at.presence
params[:frequency_penalty] ||= frequency_penalty.presence
params[:logit_bias] ||= logit_bias.presence
params[:logprobs] ||= logprobs.presence
params[:max_completion_tokens] ||= max_completion_tokens.presence || configuration.max_completion_tokens
params[:max_tokens] ||= max_tokens.presence || configuration.max_tokens
params[:min_p] ||= min_p.presence
params[:prediction] = { type: "content", content: params[:prediction] || prediction } if params[:prediction] || prediction.present?
params[:presence_penalty] ||= presence_penalty.presence
params[:provider] ||= provider.presence
params[:repetition_penalty] ||= repetition_penalty.presence
params[:response_format] ||= response_format.presence
params[:seed] ||= seed.presence
params[:stop] ||= stop.presence
params[:temperature] ||= temperature.presence || configuration.temperature
params[:tool_choice] ||= tool_choice.presence
params[:tools] = if available_tools == false
nil
elsif available_tools.is_a?(Array)
filtered_tools(available_tools)
else
tools.presence
end
params[:top_a] ||= top_a.presence
params[:top_k] ||= top_k.presence
params[:top_logprobs] ||= top_logprobs.presence
params[:top_p] ||= top_p.presence
json = true if params[:response_format].is_a?(Raix::ResponseFormat)
if json
unless openai
params[:provider] ||= {}
params[:provider][:require_parameters] = true
end
if params[:response_format].blank?
params[:response_format] ||= {}
params[:response_format][:type] = "json_object"
end
end
# Deprecation warning for loop parameter
if loop
warn "\n\nWARNING: The 'loop' parameter is DEPRECATED and will be ignored.\nChat completions now automatically continue after tool calls until the AI provides a text response.\nUse 'max_tool_calls' to limit the number of tool calls (default: #{configuration.max_tool_calls}).\n\n"
end
# Set max_tool_calls from parameter or configuration default
self.max_tool_calls = max_tool_calls || configuration.max_tool_calls
# Reset stop_tool_calls_and_respond flag
@stop_tool_calls_and_respond = false
# Track tool call count
tool_call_count = 0
# set the model to the default if not provided
self.model ||= configuration.model
adapter = MessageAdapters::Base.new(self)
# duplicate the transcript to avoid race conditions in situations where
# chat_completion is called multiple times in parallel
# TODO: Defensive programming, ensure messages is an array
messages ||= transcript.flatten.compact
messages = messages.map { |msg| adapter.transform(msg) }.dup
raise "Can't complete an empty transcript" if messages.blank?
# Run before_completion hooks (global -> class -> instance)
# Hooks can modify params and messages for logging, filtering, PII redaction, etc.
run_before_completion_hooks(params, messages)
begin
response = ruby_llm_request(params:, model: openai || model, messages:, openai_override: openai)
retry_count = 0
content = nil
# no need for additional processing if streaming
return if stream && response.blank?
# tuck the full response into a thread local in case needed
Thread.current[:chat_completion_response] = response.is_a?(Hash) ? response.with_indifferent_access : response
# TODO: add a standardized callback hook for usage events
# broadcast(:usage_event, usage_subject, self.class.name.to_s, response, premium?)
tool_calls = response.dig("choices", 0, "message", "tool_calls") || []
if tool_calls.any?
tool_call_count += tool_calls.size
# Check if we've exceeded max_tool_calls
if tool_call_count > self.max_tool_calls
# Add system message about hitting the limit
messages << { role: "system", content: "Maximum tool calls (#{self.max_tool_calls}) exceeded. Please provide a final response to the user without calling any more tools." }
# Force a final response without tools
params[:tools] = nil
response = ruby_llm_request(params:, model: openai || model, messages:, openai_override: openai)
# Process the final response
content = response.dig("choices", 0, "message", "content")
transcript << { assistant: content } if save_response
return raw ? response : content.to_s.strip
end
# Dispatch tool calls
tool_calls.each do |tool_call| # TODO: parallelize this?
# dispatch the called function
function_name = tool_call["function"]["name"]
arguments = JSON.parse(tool_call["function"]["arguments"].presence || "{}")
raise "Unauthorized function call: #{function_name}" unless self.class.functions.map { |f| f[:name].to_sym }.include?(function_name.to_sym)
dispatch_tool_function(function_name, arguments.with_indifferent_access)
end
# After executing tool calls, we need to continue the conversation
# to let the AI process the results and provide a text response.
# We continue until the AI responds with a regular assistant message
# (not another tool call request), unless stop_tool_calls_and_respond! was called.
# Use the updated transcript for the next call, not the original messages
updated_messages = transcript.flatten.compact
last_message = updated_messages.last
if !@stop_tool_calls_and_respond && (last_message[:role] != "assistant" || last_message[:tool_calls].present?)
# Send the updated transcript back to the AI
return chat_completion(
params:,
json:,
raw:,
openai:,
save_response:,
messages: nil, # Use transcript instead
available_tools:,
max_tool_calls: self.max_tool_calls - tool_call_count
)
elsif @stop_tool_calls_and_respond
# If stop_tool_calls_and_respond was set, force a final response without tools
params[:tools] = nil
response = ruby_llm_request(params:, model: openai || model, messages:, openai_override: openai)
content = response.dig("choices", 0, "message", "content")
transcript << { assistant: content } if save_response
return raw ? response : content.to_s.strip
end
end
response.tap do |res|
content = res.dig("choices", 0, "message", "content")
transcript << { assistant: content } if save_response
content = content.to_s.strip
if json
# Make automatic JSON parsing available to non-OpenAI providers that don't support the response_format parameter
content = content.match(%r{<json>(.*?)</json>}m)[1] if content.include?("<json>")
return JSON.parse(content)
end
return content unless raw
end
rescue JSON::ParserError => e
if e.message.include?("not a valid") # blank JSON
warn "Retrying blank JSON response... (#{retry_count} attempts) #{e.message}"
retry_count += 1
sleep 1 * retry_count # backoff
retry if retry_count < 3
raise e # just fail if we can't get content after 3 attempts
end
warn "Bad JSON received!!!!!!: #{content}"
raise e
rescue Faraday::BadRequestError => e
# make sure we see the actual error message on console or Honeybadger
warn "Chat completion failed!!!!!!!!!!!!!!!!: #{e.response[:body]}"
raise e
end
end
# This method returns the transcript array.
# Manually add your messages to it in the following abbreviated format
# before calling `chat_completion`.
#
# { system: "You are a pumpkin" },
# { user: "Hey what time is it?" },
# { assistant: "Sorry, pumpkins do not wear watches" }
#
# to add a function call use the following format:
# { function: { name: 'fancy_pants_function', arguments: { param: 'value' } } }
#
# to add a function result use the following format:
# { function: result, name: 'fancy_pants_function' }
#
# @return [Array] The transcript array.
def transcript
@transcript ||= TranscriptAdapter.new(ruby_llm_chat)
end
# Returns the RubyLLM::Chat instance for this conversation
def ruby_llm_chat
@ruby_llm_chat ||= begin
model_id = model || configuration.model
# Determine provider based on model format or explicit openai flag
provider = if model_id.to_s.start_with?("openai/") || model_id.to_s.match?(/^gpt-/)
:openai
else
:openrouter
end
RubyLLM.chat(model: model_id, provider:, assume_model_exists: true)
end
end
# Dispatches a tool function call with the given function name and arguments.
# This method can be overridden in subclasses to customize how function calls are handled.
#
# @param function_name [String] The name of the function to call
# @param arguments [Hash] The arguments to pass to the function
# @param cache [ActiveSupport::Cache] Optional cache object
# @return [Object] The result of the function call
def dispatch_tool_function(function_name, arguments, cache: nil)
public_send(function_name, arguments, cache)
end
private
def filtered_tools(tool_names)
return nil if tool_names.blank?
requested_tools = tool_names.map(&:to_sym)
available_tool_names = tools.map { |tool| tool.dig(:function, :name).to_sym }
undeclared_tools = requested_tools - available_tool_names
raise UndeclaredToolError, "Undeclared tools: #{undeclared_tools.join(", ")}" if undeclared_tools.any?
tools.select { |tool| requested_tools.include?(tool.dig(:function, :name).to_sym) }
end
def run_before_completion_hooks(params, messages)
hooks = [
Raix.configuration.before_completion,
self.class.configuration.before_completion,
before_completion
].compact
return if hooks.empty?
context = CompletionContext.new(
chat_completion: self,
messages:,
params:
)
hooks.each do |hook|
result = hook.call(context) if hook.respond_to?(:call)
next unless result.is_a?(Hash)
# Handle model separately since it's passed as a keyword arg to ruby_llm_request
self.model = result[:model] if result.key?(:model)
params.merge!(result.compact)
end
end
def ruby_llm_request(params:, model:, messages:, openai_override: nil)
# Create a temporary chat instance for this request
provider = determine_provider(model, openai_override)
chat = RubyLLM.chat(model:, provider:, assume_model_exists: true)
# Apply messages to the chat
# Track if we have a user message to determine how to call ask
has_user_message = false
messages.each do |msg|
role = msg[:role] || msg["role"]
content = msg[:content] || msg["content"]
case role.to_s
when "system"
chat.with_instructions(content)
when "user"
has_user_message = true
chat.add_message(role: :user, content:)
when "assistant"
if msg[:tool_calls] || msg["tool_calls"]
chat.add_message(role: :assistant, content:, tool_calls: msg[:tool_calls] || msg["tool_calls"])
else
chat.add_message(role: :assistant, content:)
end
when "tool"
chat.add_message(
role: :tool,
content:,
tool_call_id: msg[:tool_call_id] || msg["tool_call_id"]
)
end
end
# Apply configuration parameters
chat.with_temperature(params[:temperature]) if params[:temperature]
# Apply additional params (RubyLLM with_params expects keyword args)
additional_params = params.compact.except(:temperature, :tools, :max_tokens, :max_completion_tokens)
chat.with_params(**additional_params) if additional_params.any?
# Handle tools - convert Raix function declarations to RubyLLM tools
if params[:tools].present? && respond_to?(:class) && self.class.respond_to?(:functions)
ruby_llm_tools = FunctionToolAdapter.convert_tools_for_ruby_llm(self)
ruby_llm_tools.each { |tool| chat.with_tool(tool) }
end
# Execute the completion
if stream.present?
# Streaming mode
if has_user_message
chat.complete(&stream)
else
chat.ask(&stream)
end
nil # Return nil for streaming as per original behavior
else
# Non-streaming mode - return OpenAI-compatible response format
response_message = has_user_message ? chat.complete : chat.ask
# Convert RubyLLM response to OpenAI format for compatibility
{
"choices" => [
{
"message" => {
"role" => "assistant",
"content" => response_message.content,
"tool_calls" => response_message.tool_calls
},
"finish_reason" => response_message.tool_call? ? "tool_calls" : "stop"
}
],
"usage" => {
"prompt_tokens" => response_message.input_tokens,
"completion_tokens" => response_message.output_tokens,
"total_tokens" => (response_message.input_tokens || 0) + (response_message.output_tokens || 0)
}
}
end
rescue StandardError => e
warn "RubyLLM request failed: #{e.message}"
raise e
end
def determine_provider(model, openai_override)
return :openai if openai_override
return :openai if model.to_s.match?(/^gpt-/) || model.to_s.match?(/^o\d/)
# Default to openrouter for model IDs with provider prefix
:openrouter
end
end
end
================================================
FILE: lib/raix/completion_context.rb
================================================
# frozen_string_literal: true
module Raix
# Context object passed to before_completion hooks.
# Provides access to the chat completion instance, messages, and request parameters.
# Messages can be mutated for content filtering, PII redaction, etc.
class CompletionContext
attr_reader :chat_completion, :messages, :params
def initialize(chat_completion:, messages:, params:)
@chat_completion = chat_completion
@messages = messages # mutable - hooks can modify for filtering, redaction, etc.
@params = params # mutable - hooks can modify parameters
end
# Convenience accessor for the transcript
def transcript
chat_completion.transcript
end
# Get the currently configured model
def current_model
chat_completion.model || chat_completion.configuration.model
end
# Get the class that includes ChatCompletion
def chat_completion_class
chat_completion.class
end
# Get the current configuration
def configuration
chat_completion.configuration
end
end
end
================================================
FILE: lib/raix/configuration.rb
================================================
# frozen_string_literal: true
module Raix
# The Configuration class holds the configuration options for the Raix gem.
class Configuration
def self.attr_accessor_with_fallback(method_name)
define_method(method_name) do
value = instance_variable_get("@#{method_name}")
return value if value
return unless fallback
fallback.public_send(method_name)
end
define_method("#{method_name}=") do |value|
instance_variable_set("@#{method_name}", value)
end
end
# The temperature option determines the randomness of the generated text.
# Higher values result in more random output.
attr_accessor_with_fallback :temperature
# The max_tokens option determines the maximum number of tokens to generate.
attr_accessor_with_fallback :max_tokens
# The max_completion_tokens option determines the maximum number of tokens to generate.
attr_accessor_with_fallback :max_completion_tokens
# The model option determines the model to use for text generation. This option
# is normally set in each class that includes the ChatCompletion module.
attr_accessor_with_fallback :model
# DEPRECATED: Use ruby_llm_config.openrouter_api_key instead
attr_accessor_with_fallback :openrouter_client
# DEPRECATED: Use ruby_llm_config.openai_api_key instead
attr_accessor_with_fallback :openai_client
# The max_tool_calls option determines the maximum number of tool calls
# before forcing a text response to prevent excessive function invocations.
attr_accessor_with_fallback :max_tool_calls
# Access to RubyLLM configuration
attr_accessor_with_fallback :ruby_llm_config
# A callable hook that runs before each chat completion request.
# Receives a CompletionContext and can modify params and messages.
# Use for: dynamic parameter resolution, logging, content filtering, PII redaction, etc.
attr_accessor_with_fallback :before_completion
DEFAULT_MAX_TOKENS = 1000
DEFAULT_MAX_COMPLETION_TOKENS = 16_384
DEFAULT_MODEL = "meta-llama/llama-3.3-8b-instruct:free"
DEFAULT_TEMPERATURE = 0.0
DEFAULT_MAX_TOOL_CALLS = 25
# Initializes a new instance of the Configuration class with default values.
def initialize(fallback: nil)
self.temperature = DEFAULT_TEMPERATURE
self.max_completion_tokens = DEFAULT_MAX_COMPLETION_TOKENS
self.max_tokens = DEFAULT_MAX_TOKENS
self.model = DEFAULT_MODEL
self.max_tool_calls = DEFAULT_MAX_TOOL_CALLS
self.ruby_llm_config = RubyLLM.config
self.fallback = fallback
end
def client?
# Support legacy openrouter_client/openai_client or new RubyLLM config
!!(openrouter_client || openai_client || ruby_llm_configured?)
end
def ruby_llm_configured?
ruby_llm_config&.openai_api_key || ruby_llm_config&.openrouter_api_key ||
ruby_llm_config&.anthropic_api_key || ruby_llm_config&.gemini_api_key
end
private
attr_accessor :fallback
def get_with_fallback(method)
value = instance_variable_get("@#{method}")
return value if value
return unless fallback
fallback.public_send(method)
end
end
end
================================================
FILE: lib/raix/function_dispatch.rb
================================================
# frozen_string_literal: true
require "securerandom"
module Raix
# Provides declarative function definition for ChatCompletion classes.
#
# Example:
#
# class MeaningOfLife
# include Raix::ChatCompletion
# include Raix::FunctionDispatch
#
# function :ask_deep_thought do
# wait 236_682_000_000_000
# "The meaning of life is 42"
# end
#
# def initialize
# transcript << { user: "What is the meaning of life?" }
# chat_completion
# end
# end
module FunctionDispatch
extend ActiveSupport::Concern
class_methods do
attr_reader :functions
# Defines a function that can be dispatched by the ChatCompletion module while
# processing the response from an AI model.
#
# Declaring a function here will automatically add it (in JSON Schema format) to
# the list of tools provided to the OpenRouter Chat Completion API. The function
# will be dispatched by name, so make sure the name is unique. The function's block
# argument will be executed in the instance context of the class that includes this module.
#
# Example:
# function :google_search, "Search Google for something", query: { type: "string" } do |arguments|
# GoogleSearch.new(arguments[:query]).search
# end
#
# @param name [Symbol] The name of the function.
# @param description [String] An optional description of the function.
# @param parameters [Hash] The parameters that the function accepts.
# @param block [Proc] The block of code to execute when the function is called.
def function(name, description = nil, **parameters, &block)
@functions ||= []
@functions << begin
{
name:,
parameters: { type: "object", properties: {}, required: [] }
}.tap do |definition|
definition[:description] = description if description.present?
parameters.each do |key, value|
value = value.dup
required = value.delete(:required)
optional = value.delete(:optional)
definition[:parameters][:properties][key] = value
if required || optional == false
definition[:parameters][:required] << key
end
end
definition[:parameters].delete(:required) if definition[:parameters][:required].empty?
end
end
define_method(name) do |arguments, cache|
id = SecureRandom.uuid[0, 23]
content = if cache.present?
cache.fetch([name, arguments]) do
instance_exec(arguments, &block)
end
else
instance_exec(arguments, &block)
end
# add in one operation to prevent race condition and potential wrong
# interleaving of tool calls in multi-threaded environments
transcript << [
{
role: "assistant",
content: nil,
tool_calls: [
{
id:,
type: "function",
function: {
name:,
arguments: arguments.to_json
}
}
]
},
{
role: "tool",
tool_call_id: id,
name:,
content: content.to_s
}
]
# Return the content - ChatCompletion will automatically continue
# the conversation after tool execution to get a final response
content
end
end
end
included do
attr_accessor :chat_completion_args
end
def chat_completion(**chat_completion_args)
self.chat_completion_args = chat_completion_args
super
end
# Stops the automatic continuation of chat completions after this function call.
# Useful when you want to halt processing within a function and force the AI
# to provide a text response without making additional tool calls.
def stop_tool_calls_and_respond!
@stop_tool_calls_and_respond = true
end
def tools
return [] unless self.class.functions
self.class.functions.map { |function| { type: "function", function: } }
end
end
end
================================================
FILE: lib/raix/function_tool_adapter.rb
================================================
# frozen_string_literal: true
module Raix
# Adapter to convert Raix function declarations to RubyLLM::Tool instances
class FunctionToolAdapter
def self.create_tool_from_function(function_def, instance)
tool_class = Class.new(RubyLLM::Tool) do
description function_def[:description] if function_def[:description]
# Define parameters based on function definition
function_def[:parameters][:properties]&.each do |param_name, param_def|
required = function_def[:parameters][:required]&.include?(param_name)
param param_name.to_sym, type: param_def[:type], desc: param_def[:description], required:
end
# Store reference to the instance and function name
define_method(:raix_instance) { instance }
define_method(:raix_function_name) { function_def[:name] }
# Override execute to call the Raix function
define_method(:execute) do |**args|
raix_instance.public_send(raix_function_name, args.with_indifferent_access, nil)
end
end
# Set a meaningful name for the tool class
tool_class.define_singleton_method(:name) do
"Raix::GeneratedTool::#{function_def[:name].to_s.camelize}"
end
tool_instance = tool_class.new
# Override the name method to return the original function name
# This ensures RubyLLM can match the tool call from the AI
tool_instance.define_singleton_method(:name) do
function_def[:name].to_s
end
tool_instance
end
def self.convert_tools_for_ruby_llm(raix_instance)
return [] unless raix_instance.class.respond_to?(:functions)
return [] if raix_instance.class.functions.blank?
raix_instance.class.functions.map do |function_def|
create_tool_from_function(function_def, raix_instance)
end
end
end
end
================================================
FILE: lib/raix/mcp/sse_client.rb
================================================
require "json"
require "securerandom"
require "faraday"
require "uri"
require "digest"
module Raix
module MCP
# Client for communicating with MCP servers via Server-Sent Events (SSE).
class SseClient
PROTOCOL_VERSION = "2024-11-05".freeze
CONNECTION_TIMEOUT = 10
OPEN_TIMEOUT = 30
# Creates a new client and establishes SSE connection to discover the JSON-RPC endpoint.
#
# @param url [String] the SSE endpoint URL
def initialize(url, headers: {})
@url = url
@endpoint_url = nil
@sse_thread = nil
@event_queue = Thread::Queue.new
@buffer = ""
@closed = false
@headers = headers
# Start the SSE connection and discover endpoint
establish_sse_connection
end
# Returns available tools from the server.
def tools
@tools ||= begin
request_id = SecureRandom.uuid
send_json_rpc(request_id, "tools/list", {})
# Wait for response through SSE
response = wait_for_response(request_id)
response[:tools].map do |tool_json|
Tool.from_json(tool_json)
end
end
end
# Executes a tool with given arguments.
# Returns text content directly, or JSON-encoded data for other content types.
def call_tool(name, **arguments)
request_id = SecureRandom.uuid
send_json_rpc(request_id, "tools/call", name:, arguments:)
# Wait for response through SSE
response = wait_for_response(request_id)
content = response[:content]
return "" if content.nil? || content.empty?
# Handle different content formats
first_item = content.first
case first_item
when Hash
case first_item[:type]
when "text"
first_item[:text]
when "image"
# Return a structured response for images
{
type: "image",
data: first_item[:data],
mime_type: first_item[:mimeType] || "image/png"
}.to_json
else
# For any other type, return the item as JSON
first_item.to_json
end
else
first_item.to_s
end
end
# Closes the connection to the server.
def close
@closed = true
@sse_thread&.kill
@connection&.close
end
def unique_key
parametrized_url = @url.parameterize.underscore.gsub("https_", "")
Digest::SHA256.hexdigest(parametrized_url)[0..2]
end
private
# Establishes and maintains the SSE connection
def establish_sse_connection
@sse_thread = Thread.new do
headers = {
"Accept" => "text/event-stream",
"Cache-Control" => "no-cache",
"Connection" => "keep-alive",
"MCP-Version" => PROTOCOL_VERSION
}.merge(@headers)
@connection = Faraday.new(url: @url) do |faraday|
faraday.options.timeout = CONNECTION_TIMEOUT
faraday.options.open_timeout = OPEN_TIMEOUT
end
@connection.get do |req|
req.headers = headers
req.options.on_data = proc do |chunk, _size|
next if @closed
@buffer << chunk
process_sse_buffer
end
end
rescue StandardError => e
# puts "[MCP DEBUG] SSE connection error: #{e.message}"
@event_queue << { error: e }
end
# Wait for endpoint discovery
loop do
event = @event_queue.pop
if event[:error]
raise ProtocolError, "SSE connection failed: #{event[:error].message}"
elsif event[:endpoint_url]
@endpoint_url = event[:endpoint_url]
break
end
end
# Initialize the MCP session
initialize_mcp_session
end
# Process SSE buffer for complete events
def process_sse_buffer
while (idx = @buffer.index("\n\n"))
event_text = @buffer.slice!(0..(idx + 1))
event_type, event_data = parse_sse_fields(event_text)
case event_type
when "endpoint"
endpoint_url = build_absolute_url(@url, event_data)
@event_queue << { endpoint_url: }
when "message"
handle_message_event(event_data)
end
end
end
# Handle SSE message events
def handle_message_event(event_data)
parsed = JSON.parse(event_data, symbolize_names: true)
# Handle different message types
case parsed
when ->(p) { p[:method] == "initialize" && p.dig(:params, :endpoint_url) }
# Legacy endpoint discovery
endpoint_url = parsed.dig(:params, :endpoint_url)
@event_queue << { endpoint_url: }
when ->(p) { p[:id] && p[:result] }
@event_queue << { id: parsed[:id], result: parsed[:result] }
when ->(p) { p[:result] }
@event_queue << { result: parsed[:result] }
end
rescue JSON::ParserError => e
puts "[MCP DEBUG] Error parsing message: #{e.message}"
puts "[MCP DEBUG] Message data: #{event_data}"
end
# Initialize the MCP session
def initialize_mcp_session
request_id = SecureRandom.uuid
send_json_rpc(request_id, "initialize", {
protocolVersion: PROTOCOL_VERSION,
capabilities: {
roots: { listChanged: true },
sampling: {}
},
clientInfo: {
name: "Raix",
version: Raix::VERSION
}
})
# Wait for initialization response
response = wait_for_response(request_id)
# Send acknowledgment if needed
return unless response.dig(:capabilities, :tools, :listChanged)
send_notification("notifications/initialized", {})
end
# Send a JSON-RPC request
def send_json_rpc(id, method, params)
body = {
jsonrpc: JSONRPC_VERSION,
id:,
method:,
params:
}
# Use a new connection for the POST request
conn = Faraday.new(url: @endpoint_url) do |faraday|
faraday.options.timeout = CONNECTION_TIMEOUT
end
conn.post do |req|
req.headers["Content-Type"] = "application/json"
req.body = body.to_json
end
rescue StandardError => e
raise ProtocolError, "Failed to send request: #{e.message}"
end
# Send a notification (no response expected)
def send_notification(method, params)
body = {
jsonrpc: JSONRPC_VERSION,
method:,
params:
}
conn = Faraday.new(url: @endpoint_url) do |faraday|
faraday.options.timeout = CONNECTION_TIMEOUT
end
conn.post do |req|
req.headers["Content-Type"] = "application/json"
req.body = body.to_json
end
rescue StandardError => e
puts "[MCP DEBUG] Error sending notification: #{e.message}"
end
# Wait for a response with a specific ID
def wait_for_response(request_id)
timeout = Time.now + CONNECTION_TIMEOUT
loop do
if Time.now > timeout
raise ProtocolError, "Timeout waiting for response"
end
# Use non-blocking pop with timeout
begin
event = @event_queue.pop(true) # non_block = true
rescue ThreadError
# Queue is empty, wait a bit
sleep 0.1
next
end
if event[:error]
raise ProtocolError, "SSE error: #{event[:error].message}"
elsif event[:result] && (event[:id] == request_id || !event[:id])
return event[:result]
else
@event_queue << event
sleep 0.01
end
end
end
# Parses SSE event fields from raw text.
def parse_sse_fields(event_text)
event_type = "message"
data_lines = []
event_text.each_line do |line|
case line
when /^event:\s*(.+)$/
event_type = Regexp.last_match(1).strip
when /^data:\s*(.*)$/
data_lines << Regexp.last_match(1)
end
end
[event_type, data_lines.join("\n").strip]
end
# Builds an absolute URL for candidate relative to base.
def build_absolute_url(base, candidate)
uri = URI.parse(candidate)
return candidate if uri.absolute?
URI.join(base, candidate).to_s
rescue URI::InvalidURIError
candidate
end
end
end
end
================================================
FILE: lib/raix/mcp/stdio_client.rb
================================================
require "json"
require "securerandom"
require "digest"
module Raix
module MCP
# Client for communicating with MCP servers via stdio using JSON-RPC.
class StdioClient
# Creates a new client with a bidirectional pipe to the MCP server.
def initialize(*args, env)
@args = args
@io = IO.popen(env, args, "w+")
end
# Returns available tools from the server.
def tools
result = call("tools/list")
result["tools"].map do |tool_json|
Tool.from_json(tool_json)
end
end
# Executes a tool with given arguments.
# Returns text content directly, or JSON-encoded data for other content types.
def call_tool(name, **arguments)
result = call("tools/call", name:, arguments:)
content = result["content"]
return "" if content.nil? || content.empty?
# Handle different content formats
first_item = content.first
case first_item
when Hash
case first_item["type"]
when "text"
first_item["text"]
when "image"
# Return a structured response for images
{
type: "image",
data: first_item["data"],
mime_type: first_item["mimeType"] || "image/png"
}.to_json
else
# For any other type, return the item as JSON
first_item.to_json
end
else
first_item.to_s
end
end
# Closes the connection to the server.
def close
@io.close
end
def unique_key
parametrized_args = @args.join(" ").parameterize.underscore
Digest::SHA256.hexdigest(parametrized_args)[0..2]
end
private
# Sends JSON-RPC request and returns the result.
def call(method, **params)
@io.puts({ id: SecureRandom.uuid, method:, params:, jsonrpc: JSONRPC_VERSION }.to_json)
@io.flush # Ensure data is immediately sent
message = JSON.parse(@io.gets)
if (error = message["error"])
raise ProtocolError, error["message"]
end
message["result"]
end
end
end
end
================================================
FILE: lib/raix/mcp/tool.rb
================================================
module Raix
module MCP
# Represents an MCP (Model Context Protocol) tool with metadata and schema
#
# @example
# tool = Tool.new(
# name: "weather",
# description: "Get weather info",
# input_schema: { "type" => "object", "properties" => { "city" => { "type" => "string" } } }
# )
class Tool
attr_reader :name, :description, :input_schema
# Initialize a new Tool
#
# @param name [String] the tool name
# @param description [String] human-readable description of what the tool does
# @param input_schema [Hash] JSON schema defining the tool's input parameters
def initialize(name:, description:, input_schema: {})
@name = name
@description = description
@input_schema = input_schema
end
# Initialize from raw MCP JSON response
#
# @param json [Hash] parsed JSON data from MCP response
# @return [Tool] new Tool instance
def self.from_json(json)
new(
name: json[:name] || json["name"],
description: json[:description] || json["description"],
input_schema: json[:inputSchema] || json["inputSchema"] || {}
)
end
# Get the input schema type
#
# @return [String, nil] the schema type (e.g., "object")
def input_type
input_schema["type"]
end
# Get the properties hash
#
# @return [Hash] schema properties definition
def properties
input_schema["properties"] || {}
end
# Get required properties array
#
# @return [Array<String>] list of required property names
def required_properties
input_schema["required"] || []
end
# Check if a property is required
#
# @param property_name [String] name of the property to check
# @return [Boolean] true if the property is required
def required?(property_name)
required_properties.include?(property_name)
end
end
end
end
================================================
FILE: lib/raix/mcp.rb
================================================
# Simple integration layer that lets Raix classes declare an MCP server
# with a single DSL call:
#
# mcp "https://my-server.example.com/sse"
#
# The concern fetches the remote server's tool list (via JSON‑RPC 2.0
# `tools/list`) and exposes each remote tool as if it were an inline
# `function` declared with Raix::FunctionDispatch. When the tool is
# invoked by the model, the generated instance method forwards the
# request to the remote server using `tools/call`, captures the result,
# and appends the appropriate messages to the transcript so that the
# conversation history stays consistent.
require "active_support/concern"
require "active_support/inflector"
require "securerandom"
require "uri"
module Raix
# Model Context Protocol integration for Raix
#
# Allows declaring MCP servers with a simple DSL that automatically:
# - Queries tools from the remote server
# - Exposes each tool as a function callable by LLMs
# - Handles transcript recording and response processing
module MCP
extend ActiveSupport::Concern
# Error raised when there's a protocol-level error in MCP communication
class ProtocolError < StandardError; end
JSONRPC_VERSION = "2.0".freeze
class_methods do
# Declare an MCP server by URL, using the SSE transport.
#
# sse_mcp "https://server.example.com/sse",
# headers: { "Authorization" => "Bearer <token>" },
# only: [:get_issue]
#
def sse_mcp(url, headers: {}, only: nil, except: nil)
mcp(only:, except:, client: MCP::SseClient.new(url, headers:))
end
# Declare an MCP server by command line arguments, and environment variables ,
# using the stdio transport.
#
# stdio_mcp "docker", "run", "-i", "--rm",
# "-e", "GITHUB_PERSONAL_ACCESS_TOKEN",
# "ghcr.io/github/github-mcp-server",
# env: { GITHUB_PERSONAL_ACCESS_TOKEN: "${input:github_token}" },
# only: [:github_search]
#
def stdio_mcp(*args, env: {}, only: nil, except: nil)
mcp(only:, except:, client: MCP::StdioClient.new(*args, env))
end
# Declare an MCP server, using the given client.
#
# mcp client: MCP::SseClient.new("https://server.example.com/sse")
#
# This will automatically:
# • query `tools/list` on the server
# • register each remote tool with FunctionDispatch so that the
# OpenAI / OpenRouter request body includes its JSON‑Schema
# • define an instance method for each tool that forwards the
# call to the server and appends the proper messages to the
# transcript.
# NOTE TO SELF: NEVER MOCK SERVER RESPONSES! THIS MUST WORK WITH REAL SERVERS!
def mcp(client:, only: nil, except: nil)
@mcp_servers ||= {}
return if @mcp_servers.key?(client.unique_key) # avoid duplicate definitions
# Fetch tools
tools = client.tools
if tools.empty?
# puts "[MCP DEBUG] No tools found from MCP server at #{url}"
client.close
return nil
end
# Apply filters
filtered_tools = if only.present?
only_symbols = Array(only).map(&:to_sym)
tools.select { |tool| only_symbols.include?(tool.name.to_sym) }
elsif except.present?
except_symbols = Array(except).map(&:to_sym)
tools.reject { |tool| except_symbols.include?(tool.name.to_sym) }
else
tools
end
# Ensure FunctionDispatch is included in the class
include FunctionDispatch unless included_modules.include?(FunctionDispatch)
# puts "[MCP DEBUG] FunctionDispatch included in #{name}"
filtered_tools.each do |tool|
remote_name = tool.name
# TODO: Revisit later whether this much context is needed in the function name
local_name = :"#{remote_name}_#{client.unique_key}"
description = tool.description
input_schema = tool.input_schema || {}
# --- register with FunctionDispatch (adds to .functions)
function(local_name, description, **{}) # placeholder parameters replaced next
latest_definition = functions.last
latest_definition[:parameters] = input_schema.deep_symbolize_keys || {}
# Required by OpenAI
latest_definition[:parameters][:properties] ||= {}
# Store the schema for type coercion
tool_schemas = @tool_schemas ||= {}
tool_schemas[local_name] = input_schema
# --- define an instance method that proxies to the server
define_method(local_name) do |arguments, _cache|
arguments ||= {}
# Coerce argument types based on the input schema
stored_schema = self.class.instance_variable_get(:@tool_schemas)&.dig(local_name)
coerced_arguments = coerce_arguments(arguments, stored_schema)
content_text = client.call_tool(remote_name, **coerced_arguments)
call_id = SecureRandom.uuid
# Mirror FunctionDispatch transcript behaviour
transcript << [
{
role: "assistant",
content: nil,
tool_calls: [
{
id: call_id,
type: "function",
function: {
name: local_name.to_s,
arguments: arguments.to_json
}
}
]
},
{
role: "tool",
tool_call_id: call_id,
name: local_name.to_s,
content: content_text
}
]
# Return the content - ChatCompletion will automatically continue
# the conversation after tool execution
content_text
end
end
# Store the URL, tools, and client for future use
@mcp_servers[client.unique_key] = { tools: filtered_tools, client: }
end
end
private
# Coerce argument types based on the JSON schema
def coerce_arguments(arguments, schema)
return arguments unless schema.is_a?(Hash) && schema["properties"].is_a?(Hash)
coerced = {}
schema["properties"].each do |key, prop_schema|
value = if arguments.key?(key)
arguments[key]
elsif arguments.key?(key.to_sym)
arguments[key.to_sym]
end
next if value.nil?
coerced[key] = coerce_value(value, prop_schema)
end
# Include any additional arguments not in the schema
arguments.each do |key, value|
key_str = key.to_s
coerced[key_str] = value unless coerced.key?(key_str)
end
coerced.with_indifferent_access
end
# Coerce a single value based on its schema
def coerce_value(value, schema)
return value unless schema.is_a?(Hash)
case schema["type"]
when "number", "integer"
if value.is_a?(String) && value.match?(/\A-?\d+(\.\d+)?\z/)
schema["type"] == "integer" ? value.to_i : value.to_f
else
value
end
when "boolean"
case value
when "true", true then true
when "false", false then false
else value
end
when "array"
array_value = begin
value.is_a?(String) ? JSON.parse(value) : value
rescue JSON::ParserError
value
end
# If there's an items schema, coerce each element
if array_value.is_a?(Array) && schema["items"]
array_value.map { |item| coerce_value(item, schema["items"]) }
else
array_value
end
when "object"
object_value = begin
value.is_a?(String) ? JSON.parse(value) : value
rescue JSON::ParserError
value
end
# If there are properties defined, coerce them recursively
if object_value.is_a?(Hash) && schema["properties"]
coerced_object = {}
schema["properties"].each do |prop_key, prop_schema|
prop_value = object_value[prop_key] || object_value[prop_key.to_sym]
coerced_object[prop_key] = coerce_value(prop_value, prop_schema) unless prop_value.nil?
end
# Include any additional properties not in the schema
object_value.each do |obj_key, obj_value|
obj_key_str = obj_key.to_s
coerced_object[obj_key_str] = obj_value unless coerced_object.key?(obj_key_str)
end
coerced_object
else
object_value
end
else
value
end
end
end
end
================================================
FILE: lib/raix/message_adapters/base.rb
================================================
# frozen_string_literal: true
require "active_support/core_ext/module/delegation"
module Raix
module MessageAdapters
# Transforms messages into the format expected by the OpenAI API
class Base
attr_accessor :context
delegate :cache_at, :model, to: :context
def initialize(context)
@context = context
end
def transform(message)
return message if message[:role].present?
if message[:function].present?
{ role: "assistant", name: message.dig(:function, :name), content: message.dig(:function, :arguments).to_json }
elsif message[:result].present?
{ role: "function", name: message[:name], content: message[:result] }
else
content(message)
end
end
protected
def content(message)
case message
in { system: content }
{ role: "system", content: }
in { user: content }
{ role: "user", content: }
in { assistant: content }
{ role: "assistant", content: }
else
raise ArgumentError, "Invalid message format: #{message.inspect}"
end.tap do |msg|
# convert to anthropic multipart format if model is claude-3 and cache_at is set
if model.to_s.include?("anthropic/claude-3") && cache_at && msg[:content].to_s.length > cache_at.to_i
msg[:content] = [{ type: "text", text: msg[:content], cache_control: { type: "ephemeral" } }]
end
end
end
end
end
end
================================================
FILE: lib/raix/predicate.rb
================================================
# frozen_string_literal: true
module Raix
# A module for handling yes/no questions using AI chat completion.
# When included in a class, it provides methods to define handlers for
# yes and no responses. All handlers are optional. Any response that
# does not begin with "yes, " or "no, " will be considered a maybe.
#
# @example
# class Question
# include Raix::Predicate
#
# yes? do |explanation|
# puts "Yes: #{explanation}"
# end
#
# no? do |explanation|
# puts "No: #{explanation}"
# end
#
# maybe? do |explanation|
# puts "Maybe: #{explanation}"
# end
# end
#
# question = Question.new
# question.ask("Is Ruby a programming language?")
module Predicate
extend ActiveSupport::Concern
include ChatCompletion
def ask(question, openai: false)
raise "Please define a yes and/or no block" if self.class.yes_block.nil? && self.class.no_block.nil?
transcript << { system: "Always answer 'Yes, ', 'No, ', or 'Maybe, ' followed by a concise explanation!" }
transcript << { user: question }
chat_completion(openai:).tap do |response|
if response.downcase.start_with?("yes,")
instance_exec(response, &self.class.yes_block) if self.class.yes_block
elsif response.downcase.start_with?("no,")
instance_exec(response, &self.class.no_block) if self.class.no_block
elsif self.class.maybe_block
instance_exec(response, &self.class.maybe_block)
else
puts "[Raix::Predicate] Unhandled response: #{response}"
end
end
end
# Class methods added to the including class
module ClassMethods
attr_reader :yes_block, :no_block, :maybe_block
def yes?(&block)
@yes_block = block
end
def no?(&block)
@no_block = block
end
def maybe?(&block)
@maybe_block = block
end
end
end
end
================================================
FILE: lib/raix/prompt_declarations.rb
================================================
# frozen_string_literal: true
require "ostruct"
# This module provides a way to chain prompts and handle
# user responses in a serialized manner, with support for
# functions if the FunctionDispatch module is also included.
module Raix
# The PromptDeclarations module provides a way to chain prompts and handle
# user responses in a serialized manner, with support for
# functions if the FunctionDispatch module is also included.
module PromptDeclarations
extend ActiveSupport::Concern
module ClassMethods # rubocop:disable Style/Documentation
# Adds a prompt to the list of prompts. At minimum, provide a `text` or `call` parameter.
#
# @param system [Proc] A lambda that generates the system message.
# @param call [ChatCompletion] A callable class that includes ChatCompletion. Will be passed a context object when initialized.
# @param text Accepts 1) a lambda that returns the prompt text, 2) a string, or 3) a symbol that references a method.
# @param stream [Proc] A lambda stream handler
# @param success [Proc] The block of code to execute when the prompt is answered.
# @param params [Hash] Additional parameters for the completion API call
# @param if [Proc] A lambda that determines if the prompt should be executed.
def prompt(system: nil, call: nil, text: nil, stream: nil, success: nil, params: {}, if: nil, unless: nil, until: nil)
name = Digest::SHA256.hexdigest(text.inspect)[0..7]
prompts << OpenStruct.new({ name:, system:, call:, text:, stream:, success:, if:, unless:, until:, params: })
define_method(name) do |response|
return response if success.nil?
return send(success, response) if success.is_a?(Symbol)
instance_exec(response, &success)
end
end
def prompts
@prompts ||= []
end
end
attr_reader :current_prompt, :last_response
MAX_LOOP_COUNT = 5
# Executes the chat completion process based on the class-level declared prompts.
# The response to each prompt is added to the transcript automatically and returned.
#
# Raises an error if there are not enough prompts defined.
#
# Uses system prompt in following order of priority:
# - system lambda specified in the prompt declaration
# - system_prompt instance method if defined
# - system_prompt class-level declaration if defined
#
# Prompts require a text lambda to be defined at minimum.
# TODO: shortcut syntax passes just a string prompt if no other options are needed.
#
# @raise [RuntimeError] If no prompts are defined.
#
# @param prompt [String] The prompt to use for the chat completion.
# @param params [Hash] Parameters for the chat completion.
# @param raw [Boolean] Whether to return the raw response.
#
# TODO: SHOULD NOT HAVE A DIFFERENT INTERFACE THAN PARENT
def chat_completion(prompt = nil, params: {}, raw: false, openai: false)
raise "No prompts defined" unless self.class.prompts.present?
loop_count = 0
current_prompts = self.class.prompts.clone
while (@current_prompt = current_prompts.shift)
next if @current_prompt.if.present? && !instance_exec(&@current_prompt.if)
next if @current_prompt.unless.present? && instance_exec(&@current_prompt.unless)
input = case current_prompt.text
when Proc
instance_exec(¤t_prompt.text)
when String
current_prompt.text
when Symbol
send(current_prompt.text)
else
last_response.presence || prompt
end
if current_prompt.call.present?
current_prompt.call.new(self).call(input).tap do |response|
if response.present?
transcript << { assistant: response }
@last_response = send(current_prompt.name, response)
end
end
else
__system_prompt = instance_exec(¤t_prompt.system) if current_prompt.system.present? # rubocop:disable Lint/UnderscorePrefixedVariableName
__system_prompt ||= system_prompt if respond_to?(:system_prompt)
__system_prompt ||= self.class.system_prompt.presence
transcript << { system: __system_prompt } if __system_prompt
transcript << { user: instance_exec(¤t_prompt.text) } # text is required
params = current_prompt.params.merge(params)
# set the stream if necessary
self.stream = instance_exec(¤t_prompt.stream) if current_prompt.stream.present?
execute_ai_request(params:, raw:, openai:, transcript:, loop_count:)
end
next unless current_prompt.until.present? && !instance_exec(¤t_prompt.until)
if loop_count >= MAX_LOOP_COUNT
warn "Max loop count reached in chat_completion. Forcing return."
return last_response
else
current_prompts.unshift(@current_prompt) # put it back at the front
loop_count += 1
end
end
last_response
end
def execute_ai_request(params:, raw:, openai:, transcript:, loop_count:)
chat_completion_from_superclass(params:, raw:, openai:).then do |response|
transcript << { assistant: response }
@last_response = send(current_prompt.name, response)
self.stream = nil # clear it again so it's not used for the next prompt
end
rescue StandardError => e
# Bubbles the error up the stack if no loops remain
raise e if loop_count >= MAX_LOOP_COUNT
sleep 1 # Wait before continuing
end
# Returns the model parameter of the current prompt or the default model.
#
# @return [Object] The model parameter of the current prompt or the default model.
def model
@current_prompt.params[:model] || super
end
# Returns the temperature parameter of the current prompt or the default temperature.
#
# @return [Float] The temperature parameter of the current prompt or the default temperature.
def temperature
@current_prompt.params[:temperature] || super
end
# Returns the max_tokens parameter of the current prompt or the default max_tokens.
#
# @return [Integer] The max_tokens parameter of the current prompt or the default max_tokens.
def max_tokens
@current_prompt.params[:max_tokens] || super
end
protected
# workaround for super.chat_completion, which is not available in ruby
def chat_completion_from_superclass(*, **kargs)
method(:chat_completion).super_method.call(*, **kargs)
end
end
end
================================================
FILE: lib/raix/response_format.rb
================================================
# frozen_string_literal: true
require "active_support/core_ext/object/deep_dup"
require "active_support/core_ext/string/filters"
module Raix
# Handles the formatting of responses for AI interactions.
#
# This class is responsible for converting input data into a JSON schema
# that can be used to structure and validate AI responses. It supports
# nested structures and arrays, ensuring that the output conforms to
# the expected format for AI model interactions.
#
# @example
# input = { name: { type: "string" }, age: { type: "integer" } }
# format = ResponseFormat.new("PersonInfo", input)
# schema = format.to_schema
#
# @attr_reader [String] name The name of the response format
# @attr_reader [Hash] input The input data to be formatted
class ResponseFormat
def initialize(name, input)
@name = name
@input = input
end
def to_json(*)
JSON.pretty_generate(to_schema)
end
def to_schema
{
type: "json_schema",
json_schema: {
name: @name,
schema: {
type: "object",
properties: decode(@input.deep_dup),
required: @input.keys,
additionalProperties: false
},
strict: true
}
}
end
private
def decode(input)
{}.tap do |response|
case input
when Array
response[:type] = "array"
if input.size == 1 && input.first.is_a?(String)
response[:items] = { type: input.first }
else
properties = {}
input.each { |item| properties.merge!(decode(item)) }
response[:items] = {
type: "object",
properties:,
required: properties.keys.select { |key| properties[key].delete(:required) },
additionalProperties: false
}
end
when Hash
input.each do |key, value|
response[key] = if value.is_a?(Hash) && value.key?(:type)
value
else
decode(value)
end
end
else
raise "Invalid input"
end
end
end
end
end
================================================
FILE: lib/raix/transcript_adapter.rb
================================================
# frozen_string_literal: true
module Raix
# Adapter to convert between Raix's transcript array format and RubyLLM's Message objects
class TranscriptAdapter
attr_reader :ruby_llm_chat
def initialize(ruby_llm_chat)
@ruby_llm_chat = ruby_llm_chat
@pending_messages = []
end
# Add a message in Raix format (hash) to the transcript
def <<(message_hash)
case message_hash
when Array
# Handle nested arrays (from function dispatch)
message_hash.each { |msg| self << msg }
when Hash
add_message_from_hash(message_hash)
end
self
end
# Return all messages in Raix-compatible format
def flatten
ruby_llm_messages = @ruby_llm_chat.messages.map { |msg| message_to_raix_format(msg) }
pending = @pending_messages.map { |msg| normalize_message_format(msg) }
(ruby_llm_messages + pending).flatten
end
# Get all messages including pending ones
def to_a
flatten
end
# Allow iteration
def compact
flatten.compact
end
# Clear all messages
def clear
@ruby_llm_chat.reset_messages!
@pending_messages.clear
self
end
# Get last message
def last
flatten.last
end
# Get size of transcript
def size
flatten.size
end
alias length size
private
def add_message_from_hash(hash)
# Raix abbreviated format: { system: "text" }, { user: "text" }, { assistant: "text" }
if hash.key?(:system) || hash.key?("system")
content = hash[:system] || hash["system"]
@ruby_llm_chat.with_instructions(content)
@pending_messages << { role: "system", content: }
elsif hash.key?(:user) || hash.key?("user")
content = hash[:user] || hash["user"]
# Don't add to ruby_llm_chat yet - wait for chat_completion call
@pending_messages << { role: "user", content: }
elsif hash.key?(:assistant) || hash.key?("assistant")
content = hash[:assistant] || hash["assistant"]
@pending_messages << { role: "assistant", content: }
elsif hash[:role] || hash["role"]
# Standard OpenAI format (tool messages, assistant with tool_calls, etc.)
@pending_messages << hash.with_indifferent_access
end
end
def message_to_raix_format(message)
# Return in Raix abbreviated format { system: "...", user: "...", assistant: "..." }
# unless it's a tool message which needs full format
if message.tool_call? || message.tool_result?
result = {
role: message.role.to_s,
content: message.content
}
result[:tool_calls] = message.tool_calls if message.tool_call?
result[:tool_call_id] = message.tool_call_id if message.tool_result?
result
else
# Use abbreviated format
{ message.role.to_sym => message.content }
end
end
def normalize_message_format(msg)
# If already in abbreviated format, return as-is
return msg if msg.key?(:system) || msg.key?(:user) || msg.key?(:assistant)
return msg if msg["system"] || msg["user"] || msg["assistant"]
# If in standard format with role/content, convert to abbreviated
if msg[:role] || msg["role"]
role = (msg[:role] || msg["role"]).to_sym
content = msg[:content] || msg["content"]
# Tool messages stay in full format
if msg[:tool_calls] || msg["tool_calls"] || msg[:tool_call_id] || msg["tool_call_id"]
return msg
end
# Convert to abbreviated format
{ role => content }
else
msg
end
end
end
end
================================================
FILE: lib/raix/version.rb
================================================
# frozen_string_literal: true
module Raix
VERSION = "2.0.3"
end
================================================
FILE: lib/raix.rb
================================================
# frozen_string_literal: true
require "ruby_llm"
require "zeitwerk"
# Ruby AI eXtensions
module Raix
class << self
attr_writer :configuration
end
# Returns the current configuration instance.
def self.configuration
@configuration ||= Configuration.new
end
# Configures the Raix gem using a block.
def self.configure
yield(configuration)
end
end
loader = Zeitwerk::Loader.for_gem
loader.inflector.inflect("mcp" => "MCP")
loader.setup
================================================
FILE: raix.gemspec
================================================
# frozen_string_literal: true
require_relative "lib/raix/version"
Gem::Specification.new do |spec|
spec.name = "raix"
spec.version = Raix::VERSION
spec.authors = ["Obie Fernandez"]
spec.email = ["obiefernandez@gmail.com"]
spec.summary = "Ruby AI eXtensions"
spec.homepage = "https://github.com/OlympiaAI/raix"
spec.license = "MIT"
spec.required_ruby_version = ">= 3.2.2"
spec.metadata["homepage_uri"] = spec.homepage
spec.metadata["source_code_uri"] = "https://github.com/OlympiaAI/raix"
spec.metadata["changelog_uri"] = "https://github.com/OlympiaAI/raix/blob/main/CHANGELOG.md"
# Specify which files should be added to the gem when it is released.
# The `git ls-files -z` loads the files in the RubyGem that have been added into git.
spec.files = Dir.chdir(__dir__) do
`git ls-files -z`.split("\x0").reject do |f|
(File.expand_path(f) == __FILE__) || f.start_with?(*%w[bin/ test/ spec/ features/ .git .circleci appveyor])
end
end
# Ensure all gem files are world-readable so they work in Docker containers
# where gems are installed as root but the app runs as a non-root user.
spec.files.each do |f|
path = File.join(__dir__, f)
File.chmod(0o644, path) if File.file?(path) && !File.executable?(path)
end
spec.bindir = "exe"
spec.executables = spec.files.grep(%r{\Aexe/}) { |f| File.basename(f) }
spec.require_paths = ["lib"]
spec.add_dependency "activesupport", ">= 6.0"
spec.add_dependency "faraday-retry", "~> 2.0"
spec.add_dependency "ostruct"
spec.add_dependency "ruby_llm", "~> 1.9"
spec.add_dependency "zeitwerk", "~> 2.7"
end
================================================
FILE: sig/raix.rbs
================================================
module Raix
VERSION: String
# See the writing guide of rbs: https://github.com/ruby/rbs#guides
end
================================================
FILE: spec/files/getting_real.md
================================================
Introduction
What is Getting Real?
About 37signals
Caveats, disclaimers, and other preemptive strikes
What is Getting Real?
Want to build a successful web app? Then it’s time to Get Real. Getting Real is a smaller, faster, better way to build software.
Getting Real is about skipping all the stuff that represents real (charts, graphs, boxes, arrows, schematics, wireframes, etc.) and actually building the real thing.
Getting real is less. Less mass, less software, less features, less paperwork, less of everything that’s not essential (and most of what you think is essential actually isn’t).
Getting Real is staying small and being agile.
Getting Real starts with the interface, the real screens that people are going to use. It begins with what the customer actually experiences and builds backwards from there.This lets you get the interface right before you get the software wrong.
Getting Real is about iterations and lowering the cost of change. Getting Real is all about launching, tweaking, and constantly improving which makes it a perfect approach for web-based software.
Getting Real delivers just what customers need and eliminates anything they don’t.
The benefits of Getting Real
Getting Real delivers better results because it forces you to deal with the actual problems you’re trying to solve instead of your ideas about those problems. It forces you to deal with reality.
Getting Real foregoes functional specs and other transitory documentation in favor of building real screens. A functional spec is make-believe, an illusion of agreement, while an actual web page is reality. That’s what your customers are going to see and use. That’s what matters. Getting Real gets you there faster.
And that means you’re making software decisions based on the real thing instead of abstract notions.
Finally, Getting Real is an approach ideally suited to web-based software. The old school model of shipping software in a box and then waiting a year or two to deliver an update is fading away. Unlike installed software, web apps can constantly evolve on a day-to-day basis. Getting Real leverages this advantage for all its worth.
How To Write Vigorous Software
Vigorous writing is concise.A sentence should contain no unnecessary words, a paragraph no unnecessary sentences, for the same reason that a drawing should have no unnecessary lines and a machine no unnecessary parts.This requires not that the writer make all sentences short or avoid all detail and treat subjects only in outline, but that every word tell.
From “The Elements of Style” by William Strunk Jr.
No more bloat
The old way: a lengthy, bureaucratic, we’re-doing-this-to-cover- our-asses process. The typical result: bloated, forgettable soft- ware dripping with mediocrity. Blech.
Getting Real gets rid of...
Timelines that take months or even years Pie-in-the-sky functional specs Scalability debates
Interminable staff meetings
The “need” to hire dozens of employees Meaningless version numbers
Pristine roadmaps that predict the perfect future Endless preference options
Outsourced support
Unrealistic user testing
Useless paperwork
Top-down hierarchy
You don’t need tons of money or a huge team or a lengthy development cycle to build great software. Those things are the ingredients for slow, murky, changeless applications. Getting real takes the opposite approach.
In this book we’ll show you...
The importance of having a philosophy Why staying small is a good thing
How to build less
How to get from idea to reality quickly How to staff your team
Why you should design from the inside out Why writing is so crucial
Why you should underdo your competition
How to promote your app and spread the word Secrets to successful support
Tips on keeping momentum going after launch
...and lots more
The focus is on big-picture ideas. We won’t bog you down with detailed code snippets or css tricks. We’ll stick to the major ideas and philosophies that drive the Getting Real process.
Is this book for you?
You’re an entrepreneur, designer, programmer, or marketer working on a big idea.
You realize the old rules don’t apply anymore. Distribute your software on cd-roms every year? How 2002. Version numbers? Out the window. You need to build, launch, and tweak. Then rinse and repeat.
Or maybe you’re not yet on board with agile development and business structures, but you’re eager to learn more.
If this sounds like you, then this book is for you.
Note: While this book’s emphasis is on building a web app, a lot of these ideas are applicable to non-software activities too. The suggestions about small teams, rapid prototyping, expect- ing iterations, and many others presented here can serve as a guide whether you’re starting a business, writing a book, designing a web site, recording an album, or doing a variety
of other endeavors. Once you start Getting Real in one area of your life, you’ll see how these concepts can apply to a wide range of activities.
About 37signals
What we do
37signals is a small team that creates simple, focused software. Our products help you collaborate and get organized. More than 350,000 people and small businesses use our web-apps to get things done. Jeremy Wagstaff, of the Wall Street Journal, wrote, “37signals products are beautifully simple, elegant and intuitive tools that make an Outlook screen look like the soft- ware equivalent of a torture chamber.” Our apps never put you on the rack.
Our modus operandi
We believe software is too complex. Too many features, too many buttons, too much to learn. Our products do less than the competition – intentionally. We build products that work smarter, feel better, allow you to do things your way, and are easier to use.
Our products
As of the publishing date of this book, we have five commercial products and one open source web application framework.
Basecamp turns project management on its head. Instead of Gantt charts, fancy graphs, and stats-heavy spreadsheets, Base- camp offers message boards, to-do lists, simple scheduling, col- laborative writing, and file sharing. So far, hundreds of thou- sands agree it’s a better way. Farhad Manjoo of Salon.com said
“Basecamp represents the future of software on the Web.”
Campfire brings simple group chat to the business setting. Businesses in the know understand how valuable real-time persistent group chat can be. Conventional instant messaging is great for quick 1-on-1 chats, but it’s miserable for 3 or more people at once. Campfire solves that problem and plenty more.
Backpack is the alternative to those confusing, complex, “orga- nize your life in 25 simple steps” personal information managers. Backpack’s simple take on pages, notes, to-dos, and cellphone/ email-based reminders is a novel idea in a product category that suffers from status-quo-itis. Thomas Weber of the Wall Street Journal said it’s the best product in its class and David Pogue of the New York Times called it a “very cool” organization tool.
Writeboard lets you write, share, revise, and compare text
solo or with others. It’s the refreshing alternative to bloated word processors that are overkill for 95% of what you write. John Gruber of Daring Fireball said, “Writeboard might be the clearest, simplest web application I’ve ever seen.” Web-guru Jeffrey Zeldman said, “The brilliant minds at 37signals have done it again.”
Ta-da List keeps all your to-do lists together and organized online. Keep the lists to yourself or share them with others for easy collaboration. There’s no easier way to get things done. Over 100,000 lists with nearly 1,000,000 items have been created so far.
Ruby on Rails, for developers, is a full-stack, open-source web framework in Ruby for writing real-world applications quickly and easily. Rails takes care of the busy work so you can focus on your idea. Nathan Torkington of the O’Reilly publish- ing empire said “Ruby on Rails is astounding. Using it is like watching a kung-fu movie, where a dozen bad-ass frameworks prepare to beat up the little newcomer only to be handed their asses in a variety of imaginative ways.” Gotta love that quote.
Caveats, disclaimers, and other preemptive strikes
Just to get it out of the way, here are our responses to some com- plaints we hear every now and again:
“These techniques won’t work for me.”
Getting real is a system that’s worked terrifically for us. That said, the ideas in this book won’t apply to every project under the sun. If you are building a weapons system, a nuclear control plant, a banking system for millions of customers, or some other life/finance-critical system, you’re going to balk at some of our laissez-faire attitude. Go ahead and take additional precautions.
And it doesn’t have to be an all or nothing proposition. Even if you can’t embrace Getting Real fully, there are bound to be at least a few ideas in here you can sneak past the powers that be.
“You didn’t invent that idea.”
We’re not claiming to have invented these techniques. Many of these concepts have been around in one form or another for a long time. Don’t get huffy if you read some
of our advice and it reminds you of something you read about already on so and so’s weblog or in some book pub- lished 20 years ago. It’s definitely possible. These tech- niques are not at all exclusive to 37signals. We’re just telling you how we work and what’s been successful for us.
“You take too much of a black and white view.”
If our tone seems too know-it-allish, bear with us. We think it’s better to present ideas in bold strokes than to be wishy-washy about it. If that comes off as cocky or arrogant, so be it. We’d rather be provocative than water everything down with “it depends...” Of course there will be times when these rules need to be stretched or broken. And some of these tactics may not apply to your situation. Use your judgement and imagination.
“This won’t work inside my company.”
Think you’re too big to Get Real? Even Microsoft is Getting Real (and we doubt you’re bigger than them).
Even if your company typically runs on long-term schedules with big teams, there are still ways to get real.The first step is
to break up into smaller units. When there’s too many people involved, nothing gets done. The leaner you are, the faster – and better – things get done.
Granted, it may take some salesmanship. Pitch your company on the Getting Real process. Show them this book. Show them the real results you can achieve in less time and with a smaller team.
Explain that Getting Real is a low-risk, low-investment way to test new concepts. See if you can split off from the mothership on a smaller project as a proof of concept. Demonstrate results.
Or, if you really want to be ballsy, go stealth. Fly under the radar and demonstrate real results. That’s the approach the Start.com team has used while Getting Real at Microsoft. “I’ve watched the Start.com team work. They don’t ask permission,” says Robert Scoble, Technical Evangelist at Microsoft. “They have a boss that provides air cover. And they bite off a little bit at a time and do that and respond to feedback.”
Shipping Microsoft’s Start.com
In big companies, processes and meetings are the norm. Many months are spent on planning features and arguing details with the goal of everyone reaching an agreement on what is the “right” thing for the customer.
That may be the right approach for shrink-wrapped software, but with the web we have an incredible advantage. Just ship it! Let the user tell you if it’s the right thing and if it’s not, hey you can fix it and ship it to the web the same day if you want! There is no word stronger than the customer’s – resist the urge to engage in long-winded meetings and arguments. Just ship it and prove a point.
Much easier said than done – this implies:
Months of planning are not necessary.
Months of writing specs are not necessary – specs should have the foundations nailed and details figured out and refined during the development phase. Don’t try to close all open issues and nail every single detail before development starts.
Ship less features, but quality features.
You don’t need a big bang approach with a whole new release and bunch of features. Give the users byte-size pieces that they can digest.
If there are minor bugs, ship it as soon you have the core scenarios nailed and ship the bug fixes to web gradually after that.The faster you get the user feedback the better. Ideas can sound great on paper but in practice turn out to be suboptimal.The sooner you find out about fundamental issues that are wrong with an idea, the better.
Once you iterate quickly and react on customer feedback, you will establish a customer connection. Remember the goal is to win the customer by building what they want.
-Sanaz Ahari, Program Manager of Start.com, Microsoft
The Starting Line
Build Less
What’s Your Problem?
Fund Yourself
Fix Time and Budget, Flex Scope Have an Enemy
It Shouldn’t be a Chore
Build Less
Underdo your competition
Conventional wisdom says that to beat your competitors you need to one-up them. If they have four features, you need five (or 15, or 25). If they’re spending x, you need to spend xx. If they have 20, you need 30.
This sort of one-upping Cold War mentality is a dead-end. It’s an expensive, defensive, and paranoid way of building products. Defensive, paranoid companies can’t think ahead, they can only think behind. They don’t lead, they follow.
If you want to build a company that follows, you might as well put down this book now.
So what to do then? The answer is less. Do less than your com- petitors to beat them. Solve the simple problems and leave the hairy, difficult, nasty problems to everyone else. Instead of one- upping, try one-downing. Instead of outdoing, try underdoing.
We’ll cover the concept of less throughout this book, but for starters, less means:
Less features
Less options/preferences
Less people and corporate structure Less meetings and abstractions
Less promises
What’s Your Problem?
Build software for yourself
A great way to build software is to start out by solving your own problems. You’ll be the target audience and you’ll know what’s important and what’s not. That gives you a great head start on delivering a breakout product.
The key here is understanding that you’re not alone. If you’re having this problem, it’s likely hundreds of thousands of others are in the same boat. There’s your market. Wasn’t that easy?
Basecamp originated in a problem: As a design firm we needed a simple way to communicate with our clients about projects. We started out doing this via client ex- tranets which we would update manually. But changing the html by hand every time a project needed to be updated just wasn’t working. These project sites always seemed to go stale and eventually were abandoned. It was frustrating because it left us disorganized and left clients in the dark.
So we started looking at other options. Yet every tool we found either 1) didn’t do what we needed or 2) was bloated with fea- tures we didn’t need – like billing, strict access controls, charts, graphs, etc. We knew there had to be a better way so we decided to build our own.
When you solve your own problem, you create a tool that you’re passionate about. And passion is key. Passion means you’ll truly use it and care about it. And that’s the best way to get others to feel passionate about it too.
Scratching your own itch
The Open Source world embraced this mantra a long time ago – they call it “scratching your own itch.” For the open source developers, it means they get the tools they want, delivered the way they want them. But the benefit goes much deeper.
As the designer or developer of a new application, you’re faced with hundreds of micro-decisions each and every day: blue or green? One table or two? Static or dynamic? Abort or recover? How do we make these decisions? If it’s something we recognize as being important, we might ask.The rest, we guess.And all that guessing builds up a kind of debt in our applications – an interconnected web of assumptions.
As a developer, I hate this.The knowledge of all these small-scale timebombs in the applications I write adds to my stress. Open Source developers, scratching their own itches, don’t suffer this. Because they are their own users, they know the correct answers to 90% of the decisions they have to make. I think this is one of the reasons folks come home after a hard day of coding and then work on open source: It’s relaxing.
–Dave Thomas, The Pragmatic Programmers
Born out of necessity
Campaign Monitor really was born out of necessity. For years we’d been frustrated by the quality of the email marketing options out there. One tool would do x and y but never z, the next had y
and z nailed but just couldn’t get x right.We couldn’t win.
We decided to clear our schedule and have a go at building our dream email marketing tool.We consciously decided not to look at what everyone else was doing and instead build something that would make ours and our customer’s lives a little easier.
As it turned out, we weren’t the only ones who were unhappy with the options out there.We made a few modifications to the software so any design firm could use it and started spreading the word. In less than six months, thousands of designers were using Campaign Monitor to send email newsletters for themselves and their clients.
–David Greiner, founder, Campaign Monitor
You need to care about it
When you write a book, you need to have more than an interesting story. You need to have a desire to tell the story.You need to be personally invested in some way. If you’re going to live with something for two years, three years, the rest of your life, you need to care about it.
–Malcolm Gladwell, author (from A Few Thin Slices of Malcolm Gladwell)
Fund Yourself
Outside money is plan B
The first priority of many startups is acquiring funding from investors. But remember, if you turn to outsiders for funding, you’ll have to answer to them too. Expectations are raised. Investors want their money back – and quickly. The sad fact is cashing in often begins to trump building a quality product.
These days it doesn’t take much to get rolling. Hardware
is cheap and plenty of great infrastructure software is open source and free. And passion doesn’t come with a price tag.
So do what you can with the cash on hand. Think hard and determine what’s really essential and what you can do without. What can you do with three people instead of ten? What can you do with $20k instead of $100k? What can you do in three months instead of six? What can you do if you keep your day job and build your app on the side?
Constraints force creativity
Run on limited resources and you’ll be forced to reckon with constraints earlier and more intensely. And that’s a good thing. Constraints drive innovation.
Constraints also force you to get your idea out in the wild sooner rather than later – another good thing. A month or two out of the gates you should have a pretty good idea of whether you’re onto something or not. If you are, you’ll be self-sustain- able shortly and won’t need external cash. If your idea’s a lemon, it’s time to go back to the drawing board. At least you know now as opposed to months (or years) down the road. And at least you can back out easily. Exit plans get a lot trickier once inves- tors are involved.
If you’re creating software just to make a quick buck, it will show. Truth is a quick payout is pretty unlikely. So focus on building a quality tool that you and your customers can live with for a long time.
Two paths
[Jake Walker started one company with investor money (Disclive) and one without (The Show). Here he discusses the differences between the two paths.]
The root of all the problems wasn’t raising money itself, but everything that came along with it.The expectations are simply higher. People start taking salary, and the motivation is to build it up and sell it, or find some other way for the initial investors to make their money back. In the case of the first company,
we simply started acting much bigger than we were – out of necessity...
[With The Show] we realized that we could deliver a much better product with less costs, only with more time. And we gambled with a bit of our own money that people would be willing to wait for quality over speed. But the company has stayed (and will likely continue to be) a small operation.And ever since that first project, we’ve been fully self funded.With just a bit of creative terms from our vendors, we’ve never really need to put much of our own money into the operation at all.And the expectation isn’t to grow and sell,but to grow for the sake of growth and to continue to benefit from it financially.
–A comment from Signal vs. Noise
================================================
FILE: spec/raix/before_completion_spec.rb
================================================
# frozen_string_literal: true
RSpec.describe "before_completion hook" do
# Helper to create a mock response hash that chat_completion expects
def mock_response(content = "test response")
{
"choices" => [
{
"message" => {
"role" => "assistant",
"content" => content,
"tool_calls" => nil
},
"finish_reason" => "stop"
}
],
"usage" => {
"prompt_tokens" => 10,
"completion_tokens" => 5,
"total_tokens" => 15
}
}
end
# Clean up global configuration after each test
after do
Raix.configuration.instance_variable_set(:@before_completion, nil)
end
describe "global-level before_completion hook" do
let(:chat_class) do
Class.new do
include Raix::ChatCompletion
def initialize
self.model = "base-model"
transcript << { user: "Hello" }
end
end
end
it "allows setting a before_completion hook at global level" do
hook = ->(_context) { { model: "global-model" } }
Raix.configure { |c| c.before_completion = hook }
expect(Raix.configuration.before_completion).to eq(hook)
end
it "calls the hook and merges returned params" do
hook_called = false
Raix.configure do |c|
c.before_completion = lambda { |_context|
hook_called = true
{ temperature: 0.42 }
}
end
instance = chat_class.new
allow(instance).to receive(:ruby_llm_request).and_return(mock_response)
instance.chat_completion
expect(hook_called).to be true
end
end
describe "class-level before_completion hook" do
let(:chat_class) do
Class.new do
include Raix::ChatCompletion
configure do |c|
c.before_completion = ->(_context) { { temperature: 0.9 } }
end
def initialize
self.model = "test-model"
transcript << { user: "Hello" }
end
end
end
it "allows setting a before_completion hook at class level" do
expect(chat_class.configuration.before_completion).to be_a(Proc)
end
it "calls the class-level hook" do
instance = chat_class.new
allow(instance).to receive(:ruby_llm_request).and_return(mock_response)
expect(instance.chat_completion).to eq("test response")
end
end
describe "instance-level before_completion hook" do
let(:chat_class) do
Class.new do
include Raix::ChatCompletion
def initialize
self.model = "test-model"
transcript << { user: "Hello" }
end
end
end
it "allows setting a before_completion hook at instance level" do
instance = chat_class.new
hook = ->(_context) { { temperature: 0.5 } }
instance.before_completion = hook
expect(instance.before_completion).to eq(hook)
end
it "calls the instance-level hook" do
instance = chat_class.new
instance.before_completion = ->(_context) { { temperature: 0.5 } }
allow(instance).to receive(:ruby_llm_request).and_return(mock_response)
expect(instance.chat_completion).to eq("test response")
end
end
describe "hook merge order" do
let(:chat_class) do
Class.new do
include Raix::ChatCompletion
configure do |c|
c.before_completion = ->(_context) { { temperature: 0.5, max_tokens: 500 } }
end
def initialize
self.model = "test-model"
transcript << { user: "Hello" }
end
end
end
it "merges hooks in order: global -> class -> instance (later overrides earlier)" do
# Set up hooks at all three levels
Raix.configure do |c|
c.before_completion = ->(_context) { { temperature: 0.1, seed: 100 } }
end
instance = chat_class.new
instance.before_completion = ->(_context) { { temperature: 0.9 } }
# Track what params are passed via a spy
params_received = nil
allow(instance).to receive(:ruby_llm_request) do |args|
params_received = args[:params]
mock_response
end
instance.chat_completion
# Instance hook (0.9) should override class hook (0.5) which overrides global (0.1)
expect(params_received[:temperature]).to eq(0.9)
# Class hook max_tokens should be present
expect(params_received[:max_tokens]).to eq(500)
# Global hook seed should be present
expect(params_received[:seed]).to eq(100)
end
end
describe "hook context object" do
let(:chat_class) do
Class.new do
include Raix::ChatCompletion
def initialize
self.model = "test-model"
transcript << { user: "Hello" }
end
end
end
it "passes a CompletionContext with correct data" do
context_received = nil
Raix.configure do |c|
c.before_completion = lambda { |context|
context_received = context
{}
}
end
instance = chat_class.new
allow(instance).to receive(:ruby_llm_request).and_return(mock_response)
instance.chat_completion
expect(context_received).to be_a(Raix::CompletionContext)
expect(context_received.chat_completion).to eq(instance)
expect(context_received.messages).to be_an(Array)
expect(context_received.params).to be_a(Hash)
expect(context_received.current_model).to eq("test-model")
end
it "receives transformed messages in OpenAI format" do
context_received = nil
Raix.configure do |c|
c.before_completion = lambda { |context|
context_received = context
{}
}
end
instance = chat_class.new
allow(instance).to receive(:ruby_llm_request).and_return(mock_response)
instance.chat_completion
# Messages should be in OpenAI format (transformed), not abbreviated format
expect(context_received.messages.first).to have_key(:role)
expect(context_received.messages.first).to have_key(:content)
expect(context_received.messages.first[:role]).to eq("user")
end
end
describe "hook returning nil" do
let(:chat_class) do
Class.new do
include Raix::ChatCompletion
def initialize
self.model = "test-model"
transcript << { user: "Hello" }
end
end
end
it "skips hooks that return nil" do
Raix.configure do |c|
c.before_completion = ->(_context) {}
end
instance = chat_class.new
allow(instance).to receive(:ruby_llm_request).and_return(mock_response)
# Should not raise an error
expect { instance.chat_completion }.not_to raise_error
end
end
describe "hook returning non-hash" do
let(:chat_class) do
Class.new do
include Raix::ChatCompletion
def initialize
self.model = "test-model"
transcript << { user: "Hello" }
end
end
end
it "skips hooks that return non-hash values" do
Raix.configure do |c|
c.before_completion = ->(_context) { "not a hash" }
end
instance = chat_class.new
allow(instance).to receive(:ruby_llm_request).and_return(mock_response)
# Should not raise an error
expect { instance.chat_completion }.not_to raise_error
end
end
describe "hook with callable object" do
let(:chat_class) do
Class.new do
include Raix::ChatCompletion
def initialize
self.model = "test-model"
transcript << { user: "Hello" }
end
end
end
it "works with any object that responds to #call" do
hook_class = Class.new do
def call(_context)
{ temperature: 0.42 }
end
end
params_received = nil
instance = chat_class.new
instance.before_completion = hook_class.new
allow(instance).to receive(:ruby_llm_request) do |args|
params_received = args[:params]
mock_response
end
instance.chat_completion
expect(params_received[:temperature]).to eq(0.42)
end
end
describe "hook can override any parameter" do
let(:chat_class) do
Class.new do
include Raix::ChatCompletion
def initialize
self.model = "test-model"
transcript << { user: "Hello" }
end
end
end
it "can override model" do
params_received = nil
instance = chat_class.new
instance.before_completion = ->(_context) { { model: "different-model" } }
allow(instance).to receive(:ruby_llm_request) do |args|
params_received = args
mock_response
end
instance.chat_completion
# Model is passed separately in ruby_llm_request
expect(params_received[:model]).to eq("different-model")
end
it "can override multiple parameters at once" do
params_received = nil
instance = chat_class.new
instance.before_completion = lambda { |_context|
{
temperature: 0.8,
max_tokens: 2000,
frequency_penalty: 0.5,
presence_penalty: 0.3,
top_p: 0.95
}
}
allow(instance).to receive(:ruby_llm_request) do |args|
params_received = args[:params]
mock_response
end
instance.chat_completion
expect(params_received[:temperature]).to eq(0.8)
expect(params_received[:max_tokens]).to eq(2000)
expect(params_received[:frequency_penalty]).to eq(0.5)
expect(params_received[:presence_penalty]).to eq(0.3)
expect(params_received[:top_p]).to eq(0.95)
end
end
describe "message mutation" do
let(:chat_class) do
Class.new do
include Raix::ChatCompletion
def initialize
self.model = "test-model"
transcript << { user: "My SSN is 123-45-6789" }
end
end
end
it "allows hooks to redact PII from messages" do
messages_sent = nil
instance = chat_class.new
instance.before_completion = lambda { |context|
# Redact SSN pattern from all messages
context.messages.each do |msg|
if msg[:content].is_a?(String)
msg[:content] = msg[:content].gsub(/\d{3}-\d{2}-\d{4}/, "[SSN REDACTED]")
end
end
{}
}
allow(instance).to receive(:ruby_llm_request) do |args|
messages_sent = args[:messages]
mock_response
end
instance.chat_completion
expect(messages_sent.first[:content]).to eq("My SSN is [SSN REDACTED]")
end
it "allows hooks to add messages" do
messages_sent = nil
instance = chat_class.new
instance.before_completion = lambda { |context|
context.messages.unshift({ role: "system", content: "Be helpful" })
{}
}
allow(instance).to receive(:ruby_llm_request) do |args|
messages_sent = args[:messages]
mock_response
end
instance.chat_completion
expect(messages_sent.length).to eq(2)
expect(messages_sent.first[:role]).to eq("system")
expect(messages_sent.first[:content]).to eq("Be helpful")
end
it "allows hooks to filter/remove messages" do
messages_sent = nil
instance = chat_class.new
instance.transcript << { assistant: "I can help with that" }
instance.transcript << { user: "Thanks!" }
instance.before_completion = lambda { |context|
# Keep only the last user message
context.messages.replace([context.messages.last])
{}
}
allow(instance).to receive(:ruby_llm_request) do |args|
messages_sent = args[:messages]
mock_response
end
instance.chat_completion
expect(messages_sent.length).to eq(1)
expect(messages_sent.first[:content]).to eq("Thanks!")
end
end
describe "logging use case" do
let(:chat_class) do
Class.new do
include Raix::ChatCompletion
def initialize
self.model = "test-model"
transcript << { user: "Hello" }
end
end
end
it "can be used for logging requests" do
logged_data = nil
instance = chat_class.new
instance.before_completion = lambda { |context|
logged_data = {
model: context.current_model,
message_count: context.messages.length,
params: context.params.dup
}
{} # Return empty hash, just logging
}
allow(instance).to receive(:ruby_llm_request).and_return(mock_response)
instance.chat_completion
expect(logged_data[:model]).to eq("test-model")
expect(logged_data[:message_count]).to eq(1)
expect(logged_data[:params]).to include(:temperature)
end
end
end
================================================
FILE: spec/raix/chat_completion_spec.rb
================================================
# frozen_string_literal: true
class MeaningOfLife
include Raix::ChatCompletion
def initialize
self.model = "meta-llama/llama-3.3-8b-instruct:free"
self.seed = 9999 # try to get reproduceable results
transcript << { user: "What is the meaning of life?" }
end
end
class TestClassLevelConfiguration
include Raix::ChatCompletion
configure do |config|
config.model = "drama-llama"
end
def initialize
transcript << { user: "What is the meaning of life?" }
end
end
RSpec.describe MeaningOfLife, :vcr do
subject { described_class.new }
it "does a completion with OpenAI" do
expect(subject.chat_completion(openai: "gpt-4o")).to include("meaning of life is")
end
it "does a completion with OpenRouter" do
expect(subject.chat_completion).to include("meaning of life is")
end
it "accepts a messages parameter to override the transcript" do
expect(subject.chat_completion(openai: "gpt-4.1-nano", messages: [{ user: "What is the meaning of life?" }])).to include("meaning of life is")
end
context "with predicted outputs" do
let(:completion) { subject.chat_completion(openai: "gpt-4o", params: { prediction: }) }
let(:prediction) do
"THE MEANING OF LIFE CAN VARY GREATLY FROM PERSON TO PERSON, OFTEN INVOLVING THE PURSUIT OF HAPPINESS, CARE OF OTHERS, AND PERSONAL GROWTH!."
end
let(:response) { Thread.current[:chat_completion_response] }
before do
subject.transcript.clear
subject.transcript << { system: "Answer the user question in ALL CAPS." }
subject.transcript << { user: "WHAT IS THE MEANING OF LIFE?" }
end
# TODO: RubyLLM doesn't support OpenAI's predicted outputs feature yet
# This feature needs to be added to RubyLLM or we need a workaround
xit "does a completion with OpenAI" do
expect(completion).to start_with("THE MEANING OF LIFE")
expect(subject.transcript.last).to eq({ assistant: completion })
expect(response.dig("usage", "completion_tokens_details", "accepted_prediction_tokens")).to be > 0
expect(response.dig("usage", "completion_tokens_details", "rejected_prediction_tokens")).to be > 0
end
end
end
RSpec.describe TestClassLevelConfiguration, :vcr do
subject { described_class.new }
it "uses the class-level configured model" do
# The class has model = "drama-llama" configured at the class level
# Verify the configuration is set
expect(described_class.configuration.model).to eq("drama-llama")
# When chat_completion is called without a model, it should use the class-level config
# We can't actually run this with a fake model, but we verify the config is accessible
expect(subject.configuration.model).to eq("drama-llama")
end
end
================================================
FILE: spec/raix/completion_context_spec.rb
================================================
# frozen_string_literal: true
RSpec.describe Raix::CompletionContext do
let(:chat_completion_class) do
Class.new do
include Raix::ChatCompletion
def initialize
self.model = "test-model"
transcript << { user: "Hello" }
end
end
end
let(:chat_completion) { chat_completion_class.new }
let(:messages) { [{ role: "user", content: "Hello" }] }
let(:params) { { temperature: 0.7, max_tokens: 100 } }
subject do
described_class.new(
chat_completion:,
messages:,
params:
)
end
describe "#chat_completion" do
it "returns the chat completion instance" do
expect(subject.chat_completion).to eq(chat_completion)
end
end
describe "#messages" do
it "returns the messages array" do
expect(subject.messages).to eq(messages)
end
it "allows mutation of messages for content filtering" do
subject.messages << { role: "system", content: "Added by hook" }
expect(subject.messages.length).to eq(2)
end
it "allows modification of message content for PII redaction" do
subject.messages.first[:content] = "[REDACTED]"
expect(subject.messages.first[:content]).to eq("[REDACTED]")
end
end
describe "#params" do
it "returns the params hash" do
expect(subject.params).to eq(params)
end
it "allows mutation of params" do
subject.params[:temperature] = 0.9
expect(subject.params[:temperature]).to eq(0.9)
end
end
describe "#transcript" do
it "returns the chat completion transcript" do
expect(subject.transcript).to eq(chat_completion.transcript)
end
end
describe "#current_model" do
context "when chat completion has a model set" do
it "returns the instance model" do
expect(subject.current_model).to eq("test-model")
end
end
context "when chat completion model is nil" do
before { chat_completion.model = nil }
it "falls back to configuration model" do
expect(subject.current_model).to eq(chat_completion.configuration.model)
end
end
end
describe "#chat_completion_class" do
it "returns the class that includes ChatCompletion" do
expect(subject.chat_completion_class).to eq(chat_completion_class)
end
end
describe "#configuration" do
it "returns the chat completion configuration" do
expect(subject.configuration).to eq(chat_completion.configuration)
end
end
end
================================================
FILE: spec/raix/configuration_spec.rb
================================================
# frozen_string_literal: true
RSpec.describe Raix::Configuration do
describe "#client?" do
context "with RubyLLM configured via OpenRouter API key" do
it "returns true" do
configuration = described_class.new(fallback: nil)
configuration.ruby_llm_config = RubyLLM::Configuration.new
configuration.ruby_llm_config.openrouter_api_key = "test_key"
expect(configuration.client?).to eq true
end
end
context "with RubyLLM configured via OpenAI API key" do
it "returns true" do
configuration = described_class.new(fallback: nil)
configuration.ruby_llm_config = RubyLLM::Configuration.new
configuration.ruby_llm_config.openai_api_key = "test_key"
expect(configuration.client?).to eq true
end
end
context "without any API configuration" do
it "returns false" do
configuration = described_class.new(fallback: nil)
configuration.ruby_llm_config = RubyLLM::Configuration.new
# Clear all API keys
configuration.ruby_llm_config.openai_api_key = nil
configuration.ruby_llm_config.openrouter_api_key = nil
configuration.ruby_llm_config.anthropic_api_key = nil
configuration.ruby_llm_config.gemini_api_key = nil
expect(configuration.client?).to eq false
end
end
end
end
================================================
FILE: spec/raix/function_dispatch_spec.rb
================================================
# frozen_string_literal: true
class WhatIsTheWeather
include Raix::ChatCompletion
include Raix::FunctionDispatch
function :check_weather, "Check the weather for a location", location: { type: "string" } do |arguments|
"The weather in #{arguments[:location]} is hot and sunny"
end
# non_exposed_method is not exporsed as a tool function and should not be callable through the chat completion API
def non_exposed_method(...)
raise "This should NEVER be called by the chat completion API"
end
def initialize
self.seed = 9999
transcript << { user: "What is the weather in Zipolite, Oaxaca?" }
end
end
class MultipleToolCalls
include Raix::ChatCompletion
include Raix::FunctionDispatch
function :call_this_function_twice do |arguments|
@callback.call(arguments)
end
def initialize(callback)
@callback = callback
end
end
class SearchForFile
include Raix::ChatCompletion
include Raix::FunctionDispatch
function :search_for_file,
"Search for a file in the project",
glob_pattern: { type: "string", required: true },
path: { type: "string", optional: true } do |_arguments|
"found"
end
end
RSpec.describe Raix::FunctionDispatch, :vcr do
let(:callback) { double("callback") }
it "can call a function and automatically loop to provide text response" do
# The system now automatically continues after tool calls to get a final AI response
response = WhatIsTheWeather.new.chat_completion(openai: "gpt-4o")
# Response should be a string (the AI's final response) not an array
expect(response).to be_a(String)
# The AI should have processed the weather information in its response
expect(response.downcase).to match(/zipolite|oaxaca|weather|hot|sunny/)
end
it "supports multiple tool calls in a single response" do
subject = MultipleToolCalls.new(callback)
subject.transcript << { user: "For testing purposes, call the provided tool function twice in a single response." }
# The callback might be called more than twice due to automatic continuation
expect(callback).to receive(:call).at_least(:twice)
response = subject.chat_completion(openai: "gpt-4o")
# Should get a final text response
expect(response).to be_a(String)
end
it "supports filtering tools with the tools parameter", :vcr do
weather = WhatIsTheWeather.new
expect(weather).to respond_to(:check_weather)
expect { weather.chat_completion(available_tools: [:invalid_tool]) }.to raise_error(Raix::UndeclaredToolError)
# When available_tools: false, the AI should respond without making tool calls
weather2 = WhatIsTheWeather.new
weather2.transcript.clear
weather2.transcript << { user: "Just tell me it's sunny, don't use any tools." }
response = weather2.chat_completion(available_tools: false)
# Should get a text response without tool calls
expect(response).to be_a(String)
expect(response.downcase).to include("sunny")
end
it "tracks required and optional parameters" do
params = SearchForFile.new.tools.first[:function][:parameters]
expect(params[:required]).to eq([:glob_pattern])
expect(params[:properties].keys).to include(:path)
expect(params[:required]).not_to include(:path)
end
# This simulates a middleman on the network that rewrites the function name to anything else
def decorate_clients_with_fake_middleman!
result = { openai: Raix.configuration.openai_client, openrouter: Raix.configuration.openrouter_client }
mocked_middleman =
Class.new(SimpleDelegator) do
def chat(...)
__getobj__.chat(...).tap do |result|
result.dig("choices", 0, "message", "tool_calls")&.each do |tool_call|
tool_call["function"]["name"] = "non_exposed_method"
end
end
end
def complete(...)
__getobj__.complete(...).tap do |result|
result.dig("choices", 0, "message", "tool_calls")&.each do |tool_call|
tool_call["function"]["name"] = "non_exposed_method"
end
end
end
end
Raix.configuration.openai_client = mocked_middleman.new(Raix.configuration.openai_client)
Raix.configuration.openrouter_client = mocked_middleman.new(Raix.configuration.openrouter_client)
result
end
# Since we are using the send method to execute tools calls, we have to make sure
# that the method was explicitly defined as a tool function.
#
# Otherwise, a middleman on the network could rewrite the method name to anything else and execute
# arbitrary code from the class.
it "does not allow non exposed methods to be called" do
# With RubyLLM, the security is still enforced in ChatCompletion#chat_completion
# when it checks if the function name is in self.class.functions
# We test this by directly simulating what would happen if a middleman changed the response
weather = WhatIsTheWeather.new
# Simulate what chat_completion does when it receives a tool call
# This mimics the check at line 191 in chat_completion.rb
fake_tool_call = { "function" => { "name" => "non_exposed_method", "arguments" => "{}" } }
function_name = fake_tool_call["function"]["name"]
allowed_functions = weather.class.functions.map { |f| f[:name].to_sym }
# Verify the security check would catch this
expect(allowed_functions).not_to include(function_name.to_sym)
expect { raise "Unauthorized function call: #{function_name}" unless allowed_functions.include?(function_name.to_sym) }.to raise_error(/Unauthorized function call: non_exposed_method/)
end
it "respects max_tool_calls parameter" do
# Create a mock that simulates multiple tool calls
weather = WhatIsTheWeather.new
weather.transcript.clear
weather.transcript << { user: "Check the weather for multiple cities repeatedly" }
# Mock the client to always return tool calls
allow(Raix.configuration.openrouter_client).to receive(:complete).and_return({
"choices" => [{
"message" => {
"tool_calls" => [
{
"id" => "call_1",
"type" => "function",
"function" => {
"name" => "check_weather",
"arguments" => '{"location": "City"}'
}
}
]
}
}]
}).and_call_original
# With max_tool_calls set to 2, it should stop after 2 calls and provide a final response
response = weather.chat_completion(max_tool_calls: 2)
expect(response).to be_a(String)
end
end
================================================
FILE: spec/raix/mcp/sse_spec.rb
================================================
# frozen_string_literal: true
require "spec_helper"
require "securerandom"
RSpec.describe Raix::MCP do
context "with live SSE MCP server" do
# Use the official GitMCP endpoint for the MCP documentation server
# NOTE: This server needs to implement the SSE protocol correctly with an endpoint event
let(:real_mcp_url) { "https://gitmcp.io/OlympiaAI/raix/docs" }
before do
# Skip stubs - we want real HTTP requests in this context
allow(Faraday).to receive(:post).and_call_original
stub = self
Object.const_set(:LiveMcpConsumer, Class.new do
include Raix::ChatCompletion
include Raix::FunctionDispatch
include Raix::MCP
sse_mcp stub.real_mcp_url
def initialize
transcript << { role: "user", content: "Testing live MCP integration" }
end
def self.functions
@functions || []
end
end)
end
after do
Object.send(:remove_const, :LiveMcpConsumer) if defined?(LiveMcpConsumer)
end
it "fetches tools from the GitMCP server", :novcr do
# Ensure the class is defined properly
expect(defined?(LiveMcpConsumer)).to eq("constant")
expect(LiveMcpConsumer).to be_a(Class)
# Verify it includes the necessary modules
expect(LiveMcpConsumer.included_modules).to include(Raix::ChatCompletion)
expect(LiveMcpConsumer.included_modules).to include(Raix::MCP)
expect(LiveMcpConsumer.included_modules).to include(Raix::FunctionDispatch)
# The GitMCP endpoint should return at least one tool
expect(LiveMcpConsumer.respond_to?(:functions)).to be true
expect(LiveMcpConsumer.functions).not_to be_empty
# Check instance properties
consumer = LiveMcpConsumer.new
expect(consumer.tools).not_to be_empty
# Print available tools for debugging
tools = LiveMcpConsumer.functions.map { |f| f[:name] }
unique_key_hash = "715"
expect(tools).to include(:"fetch_raix_documentation_#{unique_key_hash}")
expect(tools).to include(:"search_raix_documentation_#{unique_key_hash}")
expect(tools).to include(:"search_raix_code_#{unique_key_hash}")
expect(tools).to include(:"fetch_generic_url_content_#{unique_key_hash}")
end
it "successfully calls a function on the GitMCP server", :novcr do
consumer = LiveMcpConsumer.new
# Get the first available function name
function_name = LiveMcpConsumer.functions.first[:name]
# Most GitMCP documentation functions accept a 'query' parameter
# This should work with most documentation tools
expect(consumer).to respond_to(function_name)
transcript_size_before = consumer.transcript.size
# Call the function with a simple query
result = consumer.public_send(function_name, { query: "What is Raix?" }, nil)
# Verify we got a result and transcript was updated
expect(result).to be_a(String)
expect(result).not_to be_empty
# FunctionDispatch adds 2 messages: assistant message with tool_calls and tool result message
expect(consumer.transcript.size).to eq(transcript_size_before + 2)
# Verify the last two entries are the tool call and result
entries = consumer.transcript.flatten.last(2)
expect(entries.size).to eq(2)
assistant_msg, tool_msg = entries
expect(assistant_msg[:role]).to eq("assistant")
expect(function_name.to_s).to include(assistant_msg[:tool_calls].first.dig(:function, :name))
expect(tool_msg[:role]).to eq("tool")
expect(function_name.to_s).to include(tool_msg[:name])
expect(tool_msg[:content]).to be_a(String)
expect(tool_msg[:content]).to include("Raix consists")
end
end
end
================================================
FILE: spec/raix/mcp/stdio_client_spec.rb
================================================
# frozen_string_literal: true
require "spec_helper"
RSpec.describe Raix::MCP::StdioClient do
let(:test_server_path) { File.join(__dir__, "../../support/mcp_server.rb") }
let(:client) { described_class.new("ruby", test_server_path, {}) }
before do
# Ensure the test server exists
expect(File.exist?(test_server_path)).to be true
end
after do
client&.close
end
describe "#initialize" do
it "creates a new client with a bidirectional pipe" do
expect(client.instance_variable_get(:@io)).to be_a(IO)
expect(client.instance_variable_get(:@io)).not_to be_closed
end
it "accepts command arguments and environment variables" do
env = { "TEST_VAR" => "test_value" }
test_client = described_class.new("ruby", "-e", "puts ENV['TEST_VAR']", env)
expect(test_client.instance_variable_get(:@io)).to be_a(IO)
test_client.close
end
end
describe "#tools" do
it "returns available tools from the server" do
tools = client.tools
expect(tools).to be_an(Array)
expect(tools).not_to be_empty
expect(tools.first).to be_a(Raix::MCP::Tool)
end
it "returns tools with correct attributes" do
tools = client.tools
tool = tools.first
expect(tool.name).to be_a(String)
expect(tool.description).to be_a(String)
expect(tool.input_schema).to be_a(Hash)
end
end
describe "#call_tool" do
let(:tool_name) { "echo" }
let(:arguments) { { message: "Hello, World!" } }
it "executes a tool with given arguments and returns text content" do
result = client.call_tool(tool_name, **arguments)
expect(result).to be_a(String)
expect(result).to include("Hello, World!")
end
it "handles tools with no arguments" do
result = client.call_tool("ping")
expect(result).to be_a(String)
expect(result).to eq("pong")
end
it "handles tools with complex arguments" do
complex_args = {
data: {
items: %w[item1 item2],
metadata: { key: "value" }
}
}
result = client.call_tool("process_data", **complex_args)
expect(result).to be_a(String)
expect(JSON.parse(result)).to include("processed" => true)
end
it "handles image content by returning structured JSON" do
result = client.call_tool("binary_data")
expect(result).to be_a(String)
parsed = JSON.parse(result)
expect(parsed["type"]).to eq("image")
expect(parsed["data"]).to eq("base64encodeddata")
expect(parsed["mime_type"]).to eq("image/png")
end
it "raises ProtocolError for invalid tool names" do
expect do
client.call_tool("nonexistent_tool")
end.to raise_error(Raix::MCP::ProtocolError)
end
it "raises ProtocolError for invalid arguments" do
expect do
client.call_tool("echo", invalid_param: "value")
end.to raise_error(Raix::MCP::ProtocolError)
end
end
describe "#close" do
it "closes the connection to the server" do
io = client.instance_variable_get(:@io)
expect(io).not_to be_closed
client.close
expect(io).to be_closed
end
it "can be called multiple times safely" do
client.close
expect { client.close }.not_to raise_error
end
end
describe "JSON-RPC communication" do
it "sends properly formatted JSON-RPC requests" do
# Mock the IO to capture the request
io_mock = double("IO")
allow(IO).to receive(:popen).and_return(io_mock)
allow(io_mock).to receive(:puts)
allow(io_mock).to receive(:flush)
allow(io_mock).to receive(:gets).and_return('{"jsonrpc":"2.0","id":"test","result":{"tools":[]}}')
allow(io_mock).to receive(:close)
test_client = described_class.new("ruby", test_server_path, {})
expect(io_mock).to receive(:puts) do |json_string|
request = JSON.parse(json_string)
expect(request["jsonrpc"]).to eq("2.0")
expect(request["method"]).to eq("tools/list")
expect(request["id"]).to be_a(String)
expect(request["params"]).to be_a(Hash)
end
test_client.tools
test_client.close
end
it "handles JSON-RPC error responses" do
io_mock = double("IO")
allow(IO).to receive(:popen).and_return(io_mock)
allow(io_mock).to receive(:puts)
allow(io_mock).to receive(:flush)
allow(io_mock).to receive(:gets).and_return('{"jsonrpc":"2.0","id":"test","error":{"code":-32601,"message":"Method not found"}}')
allow(io_mock).to receive(:close)
test_client = described_class.new("ruby", test_server_path, {})
expect do
test_client.tools
end.to raise_error(Raix::MCP::ProtocolError, "Method not found")
test_client.close
end
end
describe "integration with real MCP server process" do
it "can communicate with a real subprocess" do
# This test ensures the actual stdio communication works
tools = client.tools
expect(tools).not_to be_empty
# Test actual tool execution
result = client.call_tool("echo", message: "Integration test")
expect(result).to include("Integration test")
end
it "handles server startup and shutdown gracefully" do
# Test that we can create multiple clients
client1 = described_class.new("ruby", test_server_path, {})
client2 = described_class.new("ruby", test_server_path, {})
tools1 = client1.tools
tools2 = client2.tools
expect(tools1).not_to be_empty
expect(tools2).not_to be_empty
client1.close
client2.close
end
end
end
================================================
FILE: spec/raix/mcp_spec.rb
================================================
# frozen_string_literal: true
require "spec_helper"
RSpec.describe "MCP type coercion" do
let(:test_class) do
Class.new do
include Raix::ChatCompletion
include Raix::MCP
def self.name
"TestMcpTypeCoercion"
end
end
end
it "coerces string numbers to numeric types based on schema" do
instance = test_class.new
# Test integer coercion
schema = {
"properties" => {
"x" => { "type" => "integer" },
"y" => { "type" => "number" },
"enabled" => { "type" => "boolean" },
"items" => { "type" => "array" },
"data" => { "type" => "object" }
}
}
arguments = {
"x" => "100",
"y" => "50.5",
"enabled" => "true",
"items" => "[1, 2, 3]",
"data" => '{"key": "value"}'
}
result = instance.send(:coerce_arguments, arguments, schema)
expect(result["x"]).to eq(100)
expect(result["x"]).to be_a(Integer)
expect(result["y"]).to eq(50.5)
expect(result["y"]).to be_a(Float)
expect(result["enabled"]).to eq(true)
expect(result["enabled"]).to be_a(TrueClass)
expect(result["items"]).to eq([1, 2, 3])
expect(result["items"]).to be_a(Array)
expect(result["data"]).to eq({ "key" => "value" })
expect(result["data"]).to be_a(Hash)
end
it "preserves non-string values" do
instance = test_class.new
schema = {
"properties" => {
"x" => { "type" => "integer" },
"y" => { "type" => "number" }
}
}
arguments = { "x" => 100, "y" => 50.5 }
result = instance.send(:coerce_arguments, arguments, schema)
expect(result["x"]).to eq(100)
expect(result["y"]).to eq(50.5)
end
it "coerces arrays of objects with item schemas" do
instance = test_class.new
schema = {
"properties" => {
"users" => {
"type" => "array",
"items" => {
"type" => "object",
"properties" => {
"id" => { "type" => "integer" },
"age" => { "type" => "number" },
"active" => { "type" => "boolean" }
}
}
}
}
}
arguments = {
"users" => [
{ "id" => "123", "age" => "25.5", "active" => "true" },
{ "id" => "456", "age" => "30", "active" => "false" }
]
}
result = instance.send(:coerce_arguments, arguments, schema)
expect(result["users"]).to be_a(Array)
expect(result["users"].length).to eq(2)
first_user = result["users"][0]
expect(first_user["id"]).to eq(123)
expect(first_user["id"]).to be_a(Integer)
expect(first_user["age"]).to eq(25.5)
expect(first_user["age"]).to be_a(Float)
expect(first_user["active"]).to eq(true)
expect(first_user["active"]).to be_a(TrueClass)
second_user = result["users"][1]
expect(second_user["id"]).to eq(456)
expect(second_user["active"]).to eq(false)
end
it "handles nested object coercion" do
instance = test_class.new
schema = {
"properties" => {
"config" => {
"type" => "object",
"properties" => {
"settings" => {
"type" => "object",
"properties" => {
"max_retries" => { "type" => "integer" },
"timeout" => { "type" => "number" },
"debug" => { "type" => "boolean" }
}
},
"metadata" => {
"type" => "object",
"properties" => {
"version" => { "type" => "number" }
}
}
}
}
}
}
arguments = {
"config" => {
"settings" => {
"max_retries" => "3",
"timeout" => "30.5",
"debug" => "true"
},
"metadata" => {
"version" => "1.2"
}
}
}
result = instance.send(:coerce_arguments, arguments, schema)
expect(result["config"]["settings"]["max_retries"]).to eq(3)
expect(result["config"]["settings"]["max_retries"]).to be_a(Integer)
expect(result["config"]["settings"]["timeout"]).to eq(30.5)
expect(result["config"]["settings"]["timeout"]).to be_a(Float)
expect(result["config"]["settings"]["debug"]).to eq(true)
expect(result["config"]["metadata"]["version"]).to eq(1.2)
end
it "handles JSON string inputs for arrays and objects" do
instance = test_class.new
schema = {
"properties" => {
"tags" => { "type" => "array" },
"config" => {
"type" => "object",
"properties" => {
"enabled" => { "type" => "boolean" }
}
}
}
}
arguments = {
"tags" => '["tag1", "tag2", "tag3"]',
"config" => '{"enabled": "true", "extra": "value"}'
}
result = instance.send(:coerce_arguments, arguments, schema)
expect(result["tags"]).to eq(%w[tag1 tag2 tag3])
expect(result["config"]["enabled"]).to eq(true)
expect(result["config"]["extra"]).to eq("value") # preserves extra properties
end
it "handles invalid JSON gracefully" do
instance = test_class.new
schema = {
"properties" => {
"data" => { "type" => "array" }
}
}
arguments = {
"data" => "not valid json ["
}
result = instance.send(:coerce_arguments, arguments, schema)
# Should return the original value when JSON parsing fails
expect(result["data"]).to eq("not valid json [")
end
it "handles type mismatches gracefully" do
instance = test_class.new
schema = {
"properties" => {
"count" => { "type" => "integer" },
"ratio" => { "type" => "number" },
"flag" => { "type" => "boolean" }
}
}
arguments = {
"count" => "not a number",
"ratio" => "also not a number",
"flag" => "maybe"
}
result = instance.send(:coerce_arguments, arguments, schema)
# Should return original values when coercion is not possible
expect(result["count"]).to eq("not a number")
expect(result["ratio"]).to eq("also not a number")
expect(result["flag"]).to eq("maybe")
end
it "preserves additional properties not in schema" do
instance = test_class.new
schema = {
"properties" => {
"known" => { "type" => "integer" }
}
}
arguments = {
"known" => "42",
"unknown" => "value",
"extra" => { "nested" => true }
}
result = instance.send(:coerce_arguments, arguments, schema)
expect(result["known"]).to eq(42)
expect(result["unknown"]).to eq("value")
expect(result["extra"]).to eq({ "nested" => true })
end
it "handles symbol and string keys interchangeably" do
instance = test_class.new
schema = {
"properties" => {
"value" => { "type" => "integer" }
}
}
arguments = {
value: "100" # symbol key
}
result = instance.send(:coerce_arguments, arguments, schema)
expect(result["value"]).to eq(100)
expect(result[:value]).to eq(100) # with_indifferent_access allows both
end
it "handles nil values appropriately" do
instance = test_class.new
schema = {
"properties" => {
"optional_int" => { "type" => "integer" },
"optional_bool" => { "type" => "boolean" }
}
}
arguments = {
"optional_int" => nil,
"other_field" => "value"
}
result = instance.send(:coerce_arguments, arguments, schema)
# nil values are preserved as-is (not coerced)
expect(result["optional_int"]).to be_nil
expect(result["other_field"]).to eq("value")
end
it "coerces boolean edge cases correctly" do
instance = test_class.new
schema = {
"properties" => {
"bool1" => { "type" => "boolean" },
"bool2" => { "type" => "boolean" },
"bool3" => { "type" => "boolean" },
"bool4" => { "type" => "boolean" }
}
}
arguments = {
"bool1" => true,
"bool2" => false,
"bool3" => "true",
"bool4" => "false"
}
result = instance.send(:coerce_arguments, arguments, schema)
expect(result["bool1"]).to eq(true)
expect(result["bool2"]).to eq(false)
expect(result["bool3"]).to eq(true)
expect(result["bool4"]).to eq(false)
end
end
RSpec.describe "MCP function name mapping" do
let(:test_class) do
Class.new do
include Raix::ChatCompletion
include Raix::MCP
attr_accessor :transcript
def initialize
@transcript = []
end
def self.name
"TestMcpFunctionNames"
end
def chat_completion_args
{}
end
def loop
false
end
end
end
it "uses local_name with prefix in transcript instead of remote_name" do
client_key = "client_key"
mock_tool = OpenStruct.new(
name: "get_data",
description: "Gets some data",
input_schema: {
"properties" => {
"id" => { "type" => "integer" }
}
}
gitextract_86u4topi/
├── .github/
│ └── workflows/
│ └── main.yml
├── .gitignore
├── .rspec
├── .rubocop.yml
├── .ruby-version
├── CHANGELOG.md
├── CLAUDE.md
├── CODE_OF_CONDUCT.md
├── Gemfile
├── Guardfile
├── LICENSE.txt
├── README.llm
├── README.md
├── Rakefile
├── bin/
│ ├── console
│ └── setup
├── lib/
│ ├── raix/
│ │ ├── chat_completion.rb
│ │ ├── completion_context.rb
│ │ ├── configuration.rb
│ │ ├── function_dispatch.rb
│ │ ├── function_tool_adapter.rb
│ │ ├── mcp/
│ │ │ ├── sse_client.rb
│ │ │ ├── stdio_client.rb
│ │ │ └── tool.rb
│ │ ├── mcp.rb
│ │ ├── message_adapters/
│ │ │ └── base.rb
│ │ ├── predicate.rb
│ │ ├── prompt_declarations.rb
│ │ ├── response_format.rb
│ │ ├── transcript_adapter.rb
│ │ └── version.rb
│ └── raix.rb
├── raix.gemspec
├── sig/
│ └── raix.rbs
└── spec/
├── files/
│ └── getting_real.md
├── raix/
│ ├── before_completion_spec.rb
│ ├── chat_completion_spec.rb
│ ├── completion_context_spec.rb
│ ├── configuration_spec.rb
│ ├── function_dispatch_spec.rb
│ ├── mcp/
│ │ ├── sse_spec.rb
│ │ └── stdio_client_spec.rb
│ ├── mcp_spec.rb
│ ├── message_adapters/
│ │ └── base_spec.rb
│ ├── nil_content_spec.rb
│ ├── predicate_spec.rb
│ ├── prompt_caching_spec.rb
│ ├── prompt_declarations_spec.rb
│ └── response_format_spec.rb
├── spec_helper.rb
├── support/
│ └── mcp_server.rb
└── vcr/
├── GettingRealAnthropic/
│ └── does_a_completion_with_prompt_caching.yml
├── MeaningOfLife/
│ ├── accepts_a_messages_parameter_to_override_the_transcript.yml
│ ├── does_a_completion_with_OpenAI.yml
│ ├── does_a_completion_with_OpenRouter.yml
│ └── with_predicted_outputs/
│ └── does_a_completion_with_OpenAI.yml
├── Raix_FunctionDispatch/
│ ├── can_call_a_function_and_automatically_loop_to_provide_text_response.yml
│ ├── does_not_allow_non_exposed_methods_to_be_called.yml
│ ├── respects_max_tool_calls_parameter.yml
│ ├── supports_filtering_tools_with_the_tools_parameter.yml
│ └── supports_multiple_tool_calls_in_a_single_response.yml
└── Raix_Predicate/
├── maybe.yml
├── no.yml
└── yes.yml
SYMBOL INDEX (185 symbols across 27 files)
FILE: lib/raix.rb
type Raix (line 7) | module Raix
function configuration (line 13) | def self.configuration
function configure (line 18) | def self.configure
FILE: lib/raix/chat_completion.rb
type Raix (line 9) | module Raix
class UndeclaredToolError (line 10) | class UndeclaredToolError < StandardError; end
type ChatCompletion (line 38) | module ChatCompletion
function configuration (line 48) | def configuration
function configure (line 53) | def configure
function configuration (line 59) | def configuration
function chat_completion (line 74) | def chat_completion(params: {}, loop: false, json: false, raw: false...
function transcript (line 271) | def transcript
function ruby_llm_chat (line 276) | def ruby_llm_chat
function dispatch_tool_function (line 298) | def dispatch_tool_function(function_name, arguments, cache: nil)
function filtered_tools (line 304) | def filtered_tools(tool_names)
function run_before_completion_hooks (line 316) | def run_before_completion_hooks(params, messages)
function ruby_llm_request (line 341) | def ruby_llm_request(params:, model:, messages:, openai_override: nil)
function determine_provider (line 425) | def determine_provider(model, openai_override)
FILE: lib/raix/completion_context.rb
type Raix (line 3) | module Raix
class CompletionContext (line 7) | class CompletionContext
method initialize (line 10) | def initialize(chat_completion:, messages:, params:)
method transcript (line 17) | def transcript
method current_model (line 22) | def current_model
method chat_completion_class (line 27) | def chat_completion_class
method configuration (line 32) | def configuration
FILE: lib/raix/configuration.rb
type Raix (line 3) | module Raix
class Configuration (line 5) | class Configuration
method attr_accessor_with_fallback (line 6) | def self.attr_accessor_with_fallback(method_name)
method initialize (line 58) | def initialize(fallback: nil)
method client? (line 68) | def client?
method ruby_llm_configured? (line 73) | def ruby_llm_configured?
method get_with_fallback (line 82) | def get_with_fallback(method)
FILE: lib/raix/function_dispatch.rb
type Raix (line 4) | module Raix
type FunctionDispatch (line 23) | module FunctionDispatch
function function (line 46) | def function(name, description = nil, **parameters, &block)
function chat_completion (line 114) | def chat_completion(**chat_completion_args)
function stop_tool_calls_and_respond! (line 122) | def stop_tool_calls_and_respond!
function tools (line 126) | def tools
FILE: lib/raix/function_tool_adapter.rb
type Raix (line 3) | module Raix
class FunctionToolAdapter (line 5) | class FunctionToolAdapter
method create_tool_from_function (line 6) | def self.create_tool_from_function(function_def, instance)
method convert_tools_for_ruby_llm (line 42) | def self.convert_tools_for_ruby_llm(raix_instance)
FILE: lib/raix/mcp.rb
type Raix (line 19) | module Raix
type MCP (line 26) | module MCP
class ProtocolError (line 30) | class ProtocolError < StandardError; end
function sse_mcp (line 41) | def sse_mcp(url, headers: {}, only: nil, except: nil)
function stdio_mcp (line 54) | def stdio_mcp(*args, env: {}, only: nil, except: nil)
function mcp (line 70) | def mcp(client:, only: nil, except: nil)
function coerce_arguments (line 168) | def coerce_arguments(arguments, schema)
function coerce_value (line 193) | def coerce_value(value, schema)
FILE: lib/raix/mcp/sse_client.rb
type Raix (line 7) | module Raix
type MCP (line 8) | module MCP
class SseClient (line 10) | class SseClient
method initialize (line 18) | def initialize(url, headers: {})
method tools (line 32) | def tools
method call_tool (line 47) | def call_tool(name, **arguments)
method close (line 80) | def close
method unique_key (line 86) | def unique_key
method establish_sse_connection (line 94) | def establish_sse_connection
method process_sse_buffer (line 138) | def process_sse_buffer
method handle_message_event (line 154) | def handle_message_event(event_data)
method initialize_mcp_session (line 174) | def initialize_mcp_session
method send_json_rpc (line 198) | def send_json_rpc(id, method, params)
method send_notification (line 220) | def send_notification(method, params)
method wait_for_response (line 240) | def wait_for_response(request_id)
method parse_sse_fields (line 269) | def parse_sse_fields(event_text)
method build_absolute_url (line 286) | def build_absolute_url(base, candidate)
FILE: lib/raix/mcp/stdio_client.rb
type Raix (line 5) | module Raix
type MCP (line 6) | module MCP
class StdioClient (line 8) | class StdioClient
method initialize (line 10) | def initialize(*args, env)
method tools (line 16) | def tools
method call_tool (line 26) | def call_tool(name, **arguments)
method close (line 55) | def close
method unique_key (line 59) | def unique_key
method call (line 67) | def call(method, **params)
FILE: lib/raix/mcp/tool.rb
type Raix (line 1) | module Raix
type MCP (line 2) | module MCP
class Tool (line 11) | class Tool
method initialize (line 19) | def initialize(name:, description:, input_schema: {})
method from_json (line 29) | def self.from_json(json)
method input_type (line 40) | def input_type
method properties (line 47) | def properties
method required_properties (line 54) | def required_properties
method required? (line 62) | def required?(property_name)
FILE: lib/raix/message_adapters/base.rb
type Raix (line 5) | module Raix
type MessageAdapters (line 6) | module MessageAdapters
class Base (line 8) | class Base
method initialize (line 13) | def initialize(context)
method transform (line 17) | def transform(message)
method content (line 31) | def content(message)
FILE: lib/raix/predicate.rb
type Raix (line 3) | module Raix
type Predicate (line 28) | module Predicate
function ask (line 32) | def ask(question, openai: false)
type ClassMethods (line 52) | module ClassMethods
function yes? (line 55) | def yes?(&block)
function no? (line 59) | def no?(&block)
function maybe? (line 63) | def maybe?(&block)
FILE: lib/raix/prompt_declarations.rb
type Raix (line 8) | module Raix
type PromptDeclarations (line 12) | module PromptDeclarations
type ClassMethods (line 15) | module ClassMethods # rubocop:disable Style/Documentation
function prompt (line 25) | def prompt(system: nil, call: nil, text: nil, stream: nil, success...
function prompts (line 37) | def prompts
function chat_completion (line 66) | def chat_completion(prompt = nil, params: {}, raw: false, openai: fa...
function execute_ai_request (line 125) | def execute_ai_request(params:, raw:, openai:, transcript:, loop_cou...
function model (line 141) | def model
function temperature (line 148) | def temperature
function max_tokens (line 155) | def max_tokens
function chat_completion_from_superclass (line 162) | def chat_completion_from_superclass(*, **kargs)
FILE: lib/raix/response_format.rb
type Raix (line 6) | module Raix
class ResponseFormat (line 21) | class ResponseFormat
method initialize (line 22) | def initialize(name, input)
method to_json (line 27) | def to_json(*)
method to_schema (line 31) | def to_schema
method decode (line 49) | def decode(input)
FILE: lib/raix/transcript_adapter.rb
type Raix (line 3) | module Raix
class TranscriptAdapter (line 5) | class TranscriptAdapter
method initialize (line 8) | def initialize(ruby_llm_chat)
method << (line 14) | def <<(message_hash)
method flatten (line 26) | def flatten
method to_a (line 33) | def to_a
method compact (line 38) | def compact
method clear (line 43) | def clear
method last (line 50) | def last
method size (line 55) | def size
method add_message_from_hash (line 63) | def add_message_from_hash(hash)
method message_to_raix_format (line 82) | def message_to_raix_format(message)
method normalize_message_format (line 99) | def normalize_message_format(msg)
FILE: lib/raix/version.rb
type Raix (line 3) | module Raix
FILE: spec/raix/before_completion_spec.rb
function mock_response (line 5) | def mock_response(content = "test response")
function initialize (line 35) | def initialize
function initialize (line 76) | def initialize
function initialize (line 100) | def initialize
function initialize (line 133) | def initialize
function initialize (line 172) | def initialize
function initialize (line 228) | def initialize
function initialize (line 253) | def initialize
function initialize (line 278) | def initialize
function call (line 287) | def call(_context)
function initialize (line 313) | def initialize
function initialize (line 371) | def initialize
function initialize (line 453) | def initialize
FILE: spec/raix/chat_completion_spec.rb
class MeaningOfLife (line 3) | class MeaningOfLife
method initialize (line 6) | def initialize
class TestClassLevelConfiguration (line 13) | class TestClassLevelConfiguration
method initialize (line 20) | def initialize
FILE: spec/raix/completion_context_spec.rb
function initialize (line 8) | def initialize
FILE: spec/raix/function_dispatch_spec.rb
class WhatIsTheWeather (line 3) | class WhatIsTheWeather
method non_exposed_method (line 12) | def non_exposed_method(...)
method initialize (line 16) | def initialize
class MultipleToolCalls (line 22) | class MultipleToolCalls
method initialize (line 30) | def initialize(callback)
class SearchForFile (line 35) | class SearchForFile
function decorate_clients_with_fake_middleman! (line 93) | def decorate_clients_with_fake_middleman!
FILE: spec/raix/mcp/sse_spec.rb
function initialize (line 24) | def initialize
function functions (line 28) | def self.functions
FILE: spec/raix/mcp_spec.rb
function name (line 11) | def self.name
function initialize (line 338) | def initialize
function name (line 342) | def self.name
function chat_completion_args (line 346) | def chat_completion_args
function loop (line 350) | def loop
FILE: spec/raix/nil_content_spec.rb
function nil_content_response (line 9) | def nil_content_response(tool_calls: nil)
function tool_call_response (line 29) | def tool_call_response
function initialize (line 63) | def initialize
function initialize (line 95) | def initialize
function initialize (line 141) | def initialize
FILE: spec/raix/predicate_spec.rb
class Question (line 5) | class Question
method initialize (line 20) | def initialize(callback)
class QuestionWithNoBlocks (line 25) | class QuestionWithNoBlocks
FILE: spec/raix/prompt_caching_spec.rb
class GettingRealAnthropic (line 3) | class GettingRealAnthropic
method initialize (line 6) | def initialize
FILE: spec/raix/prompt_declarations_spec.rb
class TestCallablePrompt (line 5) | class TestCallablePrompt
method initialize (line 8) | def initialize(context)
method call (line 12) | def call(input = nil)
class TestPromptDeclarations (line 17) | class TestPromptDeclarations
class TestTextPromptDeclarations (line 24) | class TestTextPromptDeclarations
class TestMixedPromptDeclarations (line 31) | class TestMixedPromptDeclarations
FILE: spec/support/mcp_server.rb
class TestMCPServer (line 7) | class TestMCPServer
method initialize (line 10) | def initialize
method run (line 15) | def run
method create_response (line 34) | def create_response(id:, result: nil, error: nil)
method create_error_response (line 43) | def create_error_response(id, code, message)
method handle_request (line 47) | def handle_request(request)
method handle_tools_list (line 62) | def handle_tools_list(id)
method handle_tools_call (line 69) | def handle_tools_call(id, params)
method build_tools (line 86) | def build_tools
Condensed preview — 64 files, each showing path, character count, and a content snippet. Download the .json file or copy for the full structured content (353K chars).
[
{
"path": ".github/workflows/main.yml",
"chars": 576,
"preview": "name: Ruby\n\non:\n push:\n branches:\n - main\n\n pull_request:\n\njobs:\n build:\n runs-on: ubuntu-latest\n name:"
},
{
"path": ".gitignore",
"chars": 159,
"preview": "/.bundle/\n/.yardoc\n/_yardoc/\n/coverage/\n/doc/\n/pkg/\n/spec/reports/\n/tmp/\n\n# rspec failure tracking\n.rspec_status\n*.gem\n."
},
{
"path": ".rspec",
"chars": 53,
"preview": "--format documentation\n--color\n--require spec_helper\n"
},
{
"path": ".rubocop.yml",
"chars": 818,
"preview": "AllCops:\n NewCops: enable\n SuggestExtensions: false\n TargetRubyVersion: 3.2.1\n\nGemspec/RequireMFA:\n Enabled: false\n\n"
},
{
"path": ".ruby-version",
"chars": 6,
"preview": "3.4.2\n"
},
{
"path": "CHANGELOG.md",
"chars": 11377,
"preview": "## [Unreleased]\n\n## [2.0.3] - 2026-04-30\n\n### Fixed\n- `NoMethodError: undefined method 'strip' for nil` in `Raix::ChatCo"
},
{
"path": "CLAUDE.md",
"chars": 953,
"preview": "This is a Ruby gem called Raix. Its purpose is to facilitate chat completion style AI text generation using LLMs provide"
},
{
"path": "CODE_OF_CONDUCT.md",
"chars": 5221,
"preview": "# Contributor Covenant Code of Conduct\n\n## Our Pledge\n\nWe as members, contributors, and leaders pledge to make participa"
},
{
"path": "Gemfile",
"chars": 438,
"preview": "# frozen_string_literal: true\n\nsource \"https://rubygems.org\"\n\n# Specify your gem's dependencies in raix.gemspec\ngemspec\n"
},
{
"path": "Guardfile",
"chars": 2473,
"preview": "# frozen_string_literal: true\n\n# A sample Guardfile\n# More info at https://github.com/guard/guard#readme\n\n## Uncomment a"
},
{
"path": "LICENSE.txt",
"chars": 1081,
"preview": "The MIT License (MIT)\n\nCopyright (c) 2024 Obie Fernandez\n\nPermission is hereby granted, free of charge, to any person ob"
},
{
"path": "README.llm",
"chars": 3247,
"preview": "# Raix (Ruby AI eXtensions)\nRaix adds LLM-based AI functionality to Ruby classes. It supports OpenAI or OpenRouter as pr"
},
{
"path": "README.md",
"chars": 31417,
"preview": "# Ruby AI eXtensions\n\n## What's Raix\n\nRaix (pronounced \"ray\" because the x is silent) is a library that gives you everyt"
},
{
"path": "Rakefile",
"chars": 333,
"preview": "# frozen_string_literal: true\n\nrequire \"bundler/gem_tasks\"\nrequire \"rspec/core/rake_task\"\n\nRSpec::Core::RakeTask.new(:sp"
},
{
"path": "bin/console",
"chars": 278,
"preview": "#!/usr/bin/env ruby\n# frozen_string_literal: true\n\nrequire \"bundler/setup\"\nrequire \"raix\"\n\n# You can add fixtures and/or"
},
{
"path": "bin/setup",
"chars": 131,
"preview": "#!/usr/bin/env bash\nset -euo pipefail\nIFS=$'\\n\\t'\nset -vx\n\nbundle install\n\n# Do any other automated setup that you need "
},
{
"path": "lib/raix/chat_completion.rb",
"chars": 18689,
"preview": "# frozen_string_literal: true\n\nrequire \"active_support/concern\"\nrequire \"active_support/core_ext/object/blank\"\nrequire \""
},
{
"path": "lib/raix/completion_context.rb",
"chars": 1066,
"preview": "# frozen_string_literal: true\n\nmodule Raix\n # Context object passed to before_completion hooks.\n # Provides access to "
},
{
"path": "lib/raix/configuration.rb",
"chars": 3218,
"preview": "# frozen_string_literal: true\n\nmodule Raix\n # The Configuration class holds the configuration options for the Raix gem."
},
{
"path": "lib/raix/function_dispatch.rb",
"chars": 4413,
"preview": "# frozen_string_literal: true\n\nrequire \"securerandom\"\nmodule Raix\n # Provides declarative function definition for ChatC"
},
{
"path": "lib/raix/function_tool_adapter.rb",
"chars": 1863,
"preview": "# frozen_string_literal: true\n\nmodule Raix\n # Adapter to convert Raix function declarations to RubyLLM::Tool instances\n"
},
{
"path": "lib/raix/mcp/sse_client.rb",
"chars": 8867,
"preview": "require \"json\"\nrequire \"securerandom\"\nrequire \"faraday\"\nrequire \"uri\"\nrequire \"digest\"\n\nmodule Raix\n module MCP\n # C"
},
{
"path": "lib/raix/mcp/stdio_client.rb",
"chars": 2197,
"preview": "require \"json\"\nrequire \"securerandom\"\nrequire \"digest\"\n\nmodule Raix\n module MCP\n # Client for communicating with MCP"
},
{
"path": "lib/raix/mcp/tool.rb",
"chars": 2030,
"preview": "module Raix\n module MCP\n # Represents an MCP (Model Context Protocol) tool with metadata and schema\n #\n # @exa"
},
{
"path": "lib/raix/mcp.rb",
"chars": 8935,
"preview": "# Simple integration layer that lets Raix classes declare an MCP server\n# with a single DSL call:\n#\n# mcp \"https://my-"
},
{
"path": "lib/raix/message_adapters/base.rb",
"chars": 1529,
"preview": "# frozen_string_literal: true\n\nrequire \"active_support/core_ext/module/delegation\"\n\nmodule Raix\n module MessageAdapters"
},
{
"path": "lib/raix/predicate.rb",
"chars": 1970,
"preview": "# frozen_string_literal: true\n\nmodule Raix\n # A module for handling yes/no questions using AI chat completion.\n # When"
},
{
"path": "lib/raix/prompt_declarations.rb",
"chars": 6722,
"preview": "# frozen_string_literal: true\n\nrequire \"ostruct\"\n\n# This module provides a way to chain prompts and handle\n# user respon"
},
{
"path": "lib/raix/response_format.rb",
"chars": 2263,
"preview": "# frozen_string_literal: true\n\nrequire \"active_support/core_ext/object/deep_dup\"\nrequire \"active_support/core_ext/string"
},
{
"path": "lib/raix/transcript_adapter.rb",
"chars": 3660,
"preview": "# frozen_string_literal: true\n\nmodule Raix\n # Adapter to convert between Raix's transcript array format and RubyLLM's M"
},
{
"path": "lib/raix/version.rb",
"chars": 67,
"preview": "# frozen_string_literal: true\n\nmodule Raix\n VERSION = \"2.0.3\"\nend\n"
},
{
"path": "lib/raix.rb",
"chars": 466,
"preview": "# frozen_string_literal: true\n\nrequire \"ruby_llm\"\nrequire \"zeitwerk\"\n\n# Ruby AI eXtensions\nmodule Raix\n class << self\n "
},
{
"path": "raix.gemspec",
"chars": 1619,
"preview": "# frozen_string_literal: true\n\nrequire_relative \"lib/raix/version\"\n\nGem::Specification.new do |spec|\n spec.name = \"raix"
},
{
"path": "sig/raix.rbs",
"chars": 103,
"preview": "module Raix\n VERSION: String\n # See the writing guide of rbs: https://github.com/ruby/rbs#guides\nend\n"
},
{
"path": "spec/files/getting_real.md",
"chars": 20806,
"preview": "Introduction\nWhat is Getting Real?\nAbout 37signals\nCaveats, disclaimers, and other preemptive strikes\n\n\n What is Getting"
},
{
"path": "spec/raix/before_completion_spec.rb",
"chars": 12806,
"preview": "# frozen_string_literal: true\n\nRSpec.describe \"before_completion hook\" do\n # Helper to create a mock response hash that"
},
{
"path": "spec/raix/chat_completion_spec.rb",
"chars": 2746,
"preview": "# frozen_string_literal: true\n\nclass MeaningOfLife\n include Raix::ChatCompletion\n\n def initialize\n self.model = \"me"
},
{
"path": "spec/raix/completion_context_spec.rb",
"chars": 2465,
"preview": "# frozen_string_literal: true\n\nRSpec.describe Raix::CompletionContext do\n let(:chat_completion_class) do\n Class.new "
},
{
"path": "spec/raix/configuration_spec.rb",
"chars": 1347,
"preview": "# frozen_string_literal: true\n\nRSpec.describe Raix::Configuration do\n describe \"#client?\" do\n context \"with RubyLLM "
},
{
"path": "spec/raix/function_dispatch_spec.rb",
"chars": 7753,
"preview": "# frozen_string_literal: true\n\nclass WhatIsTheWeather\n include Raix::ChatCompletion\n include Raix::FunctionDispatch\n\n "
},
{
"path": "spec/raix/mcp/sse_spec.rb",
"chars": 3743,
"preview": "# frozen_string_literal: true\n\nrequire \"spec_helper\"\nrequire \"securerandom\"\n\nRSpec.describe Raix::MCP do\n context \"with"
},
{
"path": "spec/raix/mcp/stdio_client_spec.rb",
"chars": 5628,
"preview": "# frozen_string_literal: true\n\nrequire \"spec_helper\"\n\nRSpec.describe Raix::MCP::StdioClient do\n let(:test_server_path) "
},
{
"path": "spec/raix/mcp_spec.rb",
"chars": 10048,
"preview": "# frozen_string_literal: true\n\nrequire \"spec_helper\"\n\nRSpec.describe \"MCP type coercion\" do\n let(:test_class) do\n Cl"
},
{
"path": "spec/raix/message_adapters/base_spec.rb",
"chars": 1467,
"preview": "# frozen_string_literal: true\n\nrequire \"spec_helper\"\n\nRSpec.describe Raix::MessageAdapters::Base do\n let(:context) { do"
},
{
"path": "spec/raix/nil_content_spec.rb",
"chars": 5120,
"preview": "# frozen_string_literal: true\n\nRSpec.describe \"nil content in final assistant response\" do\n # Some providers (notably G"
},
{
"path": "spec/raix/predicate_spec.rb",
"chars": 1336,
"preview": "# frozen_string_literal: true\n\nrequire \"raix/predicate\"\n\nclass Question\n include Raix::Predicate\n\n yes? do |explanatio"
},
{
"path": "spec/raix/prompt_caching_spec.rb",
"chars": 1429,
"preview": "# frozen_string_literal: true\n\nclass GettingRealAnthropic\n include Raix::ChatCompletion\n\n def initialize\n self.mode"
},
{
"path": "spec/raix/prompt_declarations_spec.rb",
"chars": 1935,
"preview": "# frozen_string_literal: true\n\nrequire \"spec_helper\"\n\nclass TestCallablePrompt\n include Raix::ChatCompletion\n\n def ini"
},
{
"path": "spec/raix/response_format_spec.rb",
"chars": 5901,
"preview": "# frozen_string_literal: true\n\nRSpec.describe Raix::ResponseFormat do\n RSpec::Matchers.define :serialize_to do |expecte"
},
{
"path": "spec/spec_helper.rb",
"chars": 1572,
"preview": "# frozen_string_literal: true\n\nrequire \"dotenv\"\nrequire \"faraday\"\nrequire \"faraday/retry\"\nrequire \"ruby_llm\"\nrequire \"pr"
},
{
"path": "spec/support/mcp_server.rb",
"chars": 4350,
"preview": "# frozen_string_literal: true\n\nrequire \"json\"\n\n# Test MCP Server implementing the Model Context Protocol over stdio tran"
},
{
"path": "spec/vcr/GettingRealAnthropic/does_a_completion_with_prompt_caching.yml",
"chars": 56932,
"preview": "---\nhttp_interactions:\n- request:\n method: post\n uri: https://openrouter.ai/api/v1/chat/completions\n body:\n "
},
{
"path": "spec/vcr/MeaningOfLife/accepts_a_messages_parameter_to_override_the_transcript.yml",
"chars": 4228,
"preview": "---\nhttp_interactions:\n- request:\n method: post\n uri: https://api.openai.com/v1/chat/completions\n body:\n e"
},
{
"path": "spec/vcr/MeaningOfLife/does_a_completion_with_OpenAI.yml",
"chars": 4716,
"preview": "---\nhttp_interactions:\n- request:\n method: post\n uri: https://api.openai.com/v1/chat/completions\n body:\n e"
},
{
"path": "spec/vcr/MeaningOfLife/does_a_completion_with_OpenRouter.yml",
"chars": 4377,
"preview": "---\nhttp_interactions:\n- request:\n method: post\n uri: https://openrouter.ai/api/v1/chat/completions\n body:\n "
},
{
"path": "spec/vcr/MeaningOfLife/with_predicted_outputs/does_a_completion_with_OpenAI.yml",
"chars": 3873,
"preview": "---\nhttp_interactions:\n- request:\n method: post\n uri: https://api.openai.com/v1/chat/completions\n body:\n e"
},
{
"path": "spec/vcr/Raix_FunctionDispatch/can_call_a_function_and_automatically_loop_to_provide_text_response.yml",
"chars": 7947,
"preview": "---\nhttp_interactions:\n- request:\n method: post\n uri: https://api.openai.com/v1/chat/completions\n body:\n e"
},
{
"path": "spec/vcr/Raix_FunctionDispatch/does_not_allow_non_exposed_methods_to_be_called.yml",
"chars": 6373,
"preview": "---\nhttp_interactions:\n- request:\n method: post\n uri: https://api.openai.com/v1/chat/completions\n body:\n e"
},
{
"path": "spec/vcr/Raix_FunctionDispatch/respects_max_tool_calls_parameter.yml",
"chars": 7462,
"preview": "---\nhttp_interactions:\n- request:\n method: post\n uri: https://openrouter.ai/api/v1/chat/completions\n body:\n "
},
{
"path": "spec/vcr/Raix_FunctionDispatch/supports_filtering_tools_with_the_tools_parameter.yml",
"chars": 3580,
"preview": "---\nhttp_interactions:\n- request:\n method: post\n uri: https://openrouter.ai/api/v1/chat/completions\n body:\n "
},
{
"path": "spec/vcr/Raix_FunctionDispatch/supports_multiple_tool_calls_in_a_single_response.yml",
"chars": 8311,
"preview": "---\nhttp_interactions:\n- request:\n method: post\n uri: https://api.openai.com/v1/chat/completions\n body:\n e"
},
{
"path": "spec/vcr/Raix_Predicate/maybe.yml",
"chars": 2452,
"preview": "---\nhttp_interactions:\n- request:\n method: post\n uri: https://openrouter.ai/api/v1/chat/completions\n body:\n "
},
{
"path": "spec/vcr/Raix_Predicate/no.yml",
"chars": 2226,
"preview": "---\nhttp_interactions:\n- request:\n method: post\n uri: https://openrouter.ai/api/v1/chat/completions\n body:\n "
},
{
"path": "spec/vcr/Raix_Predicate/yes.yml",
"chars": 2237,
"preview": "---\nhttp_interactions:\n- request:\n method: post\n uri: https://openrouter.ai/api/v1/chat/completions\n body:\n "
}
]
About this extraction
This page contains the full source code of the OlympiaAI/raix GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 64 files (325.6 KB), approximately 82.7k tokens, and a symbol index with 185 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.
Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.